--- a/src/lte/doc/source/lte-design.rst Sun Aug 05 21:33:37 2012 -0300
+++ b/src/lte/doc/source/lte-design.rst Tue Aug 21 23:18:40 2012 -0300
@@ -730,6 +730,113 @@
lowest past average throughput :math:`T_{j}(t)` until all RBGs are allocated to UEs. The principle behind this is
that, in every TTI, the scheduler tries the best to achieve the equal throughput among all UEs.
+Token Band Fair Queue Scheduler
+-------------------------------
+
+Token Band Fair Queue (TBFQ) is a QoS-aware scheduler derived from the leaky-bucket mechanism. In TBFQ,
+the traffic flow of user i is characterized by the following parameters:
+
+ * :math:`t_{i}`: packet arrival rate (byte/sec)
+ * :math:`r_{i}`: token generation rate (byte/sec)
+ * :math:`p_{i}`: token pool size (byte)
+ * :math:`E_{i}`: counter that records the number of tokens borrowed from or given to the token bank by flow i;
+   :math:`E_{i}` can be smaller than zero
+
+Transmitting K bytes of data consumes K tokens. TBFQ also maintains a shared token bank (:math:`B`) to balance the
+traffic between different flows. If the token generation rate :math:`r_{i}` is larger than the packet arrival rate :math:`t_{i}`, tokens
+overflowing from the token pool are added to the token bank, and :math:`E_{i}` is increased by the same amount. Otherwise,
+flow i needs to withdraw tokens from the token bank based on the priority metric :math:`\frac{E_{i}}{r_{i}}`, and :math:`E_{i}` is decreased.
+A user that has contributed more to the token bank thus has higher priority to borrow tokens; conversely, a
+user that has borrowed more tokens from the bank has lower priority to continue withdrawing tokens. Therefore, if several
+users have the same token generation rate, traffic rate and token pool size, a user suffering from higher interference
+has more opportunity to borrow tokens from the bank. In addition, TBFQ can police the traffic by setting the token
+generation rate to limit the throughput. TBFQ also maintains the following three parameters for each flow:
+
+ * Debt limit :math:`d_{i}`: if :math:`E_{i}` falls below this threshold, user i cannot borrow further tokens from the bank.
+   This prevents a malicious UE from borrowing too many tokens.
+ * Credit limit :math:`c_{i}`: the maximum number of tokens UE i can borrow from the bank at one time.
+ * Credit threshold :math:`C`: once :math:`E_{i}` reaches the debt limit, UE i must store :math:`C` tokens in the bank before
+   it can borrow tokens from the bank again.
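+
+The borrowing rules above can be sketched as one per-TTI token update. The following is a minimal,
+self-contained illustration rather than the actual ns-3 code: ``TbfqFlow`` and ``UpdateFlow`` are
+hypothetical names, tokens are counted in bytes per TTI, and the credit threshold is ignored
+(:math:`C` = 0) as in the implementation.

```cpp
#include <algorithm>

// Hypothetical per-flow state mirroring the TBFQ parameters above.
struct TbfqFlow
{
  double tokens;      // current token pool occupancy (bytes)
  double poolSize;    // p_i: token pool size (bytes)
  double rate;        // r_i: token generation rate (bytes per TTI here)
  double counter;     // E_i: bank balance of this flow, may go negative
  double debtLimit;   // d_i: borrowing stops once E_i falls below this
  double creditLimit; // c_i: maximum tokens borrowed at one time
};

// One TTI of token accounting for a flow that wants to send `demand` bytes.
// Tokens overflowing the pool are deposited in the shared bank; a deficit is
// borrowed from the bank, subject to the debt and credit limits.
double UpdateFlow (TbfqFlow &f, double &bank, double demand)
{
  f.tokens += f.rate;
  if (f.tokens > f.poolSize)
    {
      double surplus = f.tokens - f.poolSize; // overflow goes to the bank
      bank += surplus;
      f.counter += surplus;
      f.tokens = f.poolSize;
    }
  double sent = std::min (demand, f.tokens);  // spend own tokens first
  f.tokens -= sent;
  double deficit = demand - sent;
  if (deficit > 0 && f.counter > f.debtLimit)
    {
      double borrow = std::min (std::min (deficit, f.creditLimit), bank);
      bank -= borrow;                         // withdraw from the shared bank
      f.counter -= borrow;
      sent += borrow;
    }
  return sent; // bytes the flow may transmit in this TTI
}
```

+A flow with a large counter :math:`E_{i}` (a past contributor) can keep borrowing until the bank or its
+credit limit is exhausted, while a flow at its debt limit is restricted to its own token generation rate.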
+
+In the implementation, the token generation rate should be configured in the simulation script and equals the Maximum Bit Rate (MBR)
+in the bearer-level QoS parameters. For constant bit rate (CBR) traffic, it is suggested to set the MBR to the traffic generation
+rate. For variable bit rate (VBR) traffic, it is suggested to set the MBR three times larger than the traffic generation rate.
+The debt limit and credit limit are set to -5 Mb and 5 Mb respectively [FABokhari2009]_. The current implementation does not
+consider the credit threshold (:math:`C` = 0).
+
+LTE in NS-3 has two versions of the TBFQ scheduler: frequency domain TBFQ (FD-TBFQ) and time domain TBFQ (TD-TBFQ).
+In FD-TBFQ, the scheduler always selects the UE with the highest metric and allocates the RBG with the highest subband CQI until
+there are no packets in the UE's RLC buffer or all RBGs are allocated [FABokhari2009]_. In TD-TBFQ, after selecting the
+UE with the maximum metric, the scheduler allocates all RBGs to this UE based on its wideband CQI [WKWong2004]_.
+
+Priority Set Scheduler
+----------------------
+
+The Priority Set Scheduler (PSS) is a QoS-aware scheduler which combines time domain (TD) and frequency domain (FD)
+packet scheduling operations into one scheduler [GMonghal2008]_. It controls the fairness among UEs through a specified
+Target Bit Rate (TBR).
+
+In the TD scheduler part, PSS first selects the UEs with a non-empty RLC buffer and then divides them into two sets
+based on the TBR:
+
+* set 1: UEs whose past average throughput is smaller than the TBR; the TD scheduler calculates their priority metric in
+  Blind Equal Throughput (BET) style:
+
+.. math::
+
+ \widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
+ \left( \frac{ 1 }{ T_\mathrm{j}(t) } \right)
+
+* set 2: UEs whose past average throughput is larger than (or equal to) the TBR; the TD scheduler calculates their priority
+  metric in Proportional Fair (PF) style:
+
+.. math::
+
+ \widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
+ \left( \frac{ R_{j}(k,t) }{ T_\mathrm{j}(t) } \right)
+
+UEs in set 1 have higher priority than those in set 2. PSS then selects the :math:`N_{mux}` UEs with the
+highest metric across the two sets and forwards them to the FD scheduler.
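+
+The TD stage described above (set classification plus selection of the :math:`N_{mux}` highest-metric UEs)
+can be sketched as follows. This is a simplified, self-contained illustration, not the actual ns-3 code;
+``UeStats`` and ``TdSchedule`` are hypothetical names, and the achievable rate is a single wideband estimate.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical per-UE inputs for the TD stage.
struct UeStats
{
  uint16_t rnti;
  double pastThroughput; // T_j(t)
  double achievableRate; // R_j(k,t), wideband estimate
  double targetBitRate;  // TBR
};

// Returns the RNTIs of the nMux UEs forwarded to the FD scheduler:
// set 1 (below TBR, BET metric 1/T_j) is strictly prioritized over
// set 2 (at or above TBR, PF metric R_j/T_j).
std::vector<uint16_t> TdSchedule (const std::vector<UeStats> &ues, std::size_t nMux)
{
  std::vector<std::pair<double, uint16_t> > set1;
  std::vector<std::pair<double, uint16_t> > set2;
  for (std::size_t i = 0; i < ues.size (); ++i)
    {
      const UeStats &u = ues[i];
      if (u.pastThroughput < u.targetBitRate)
        {
          set1.push_back (std::make_pair (1.0 / u.pastThroughput, u.rnti));
        }
      else
        {
          set2.push_back (std::make_pair (u.achievableRate / u.pastThroughput, u.rnti));
        }
    }
  std::sort (set1.rbegin (), set1.rend ()); // descending metric order
  std::sort (set2.rbegin (), set2.rend ());
  std::vector<uint16_t> selected;
  for (std::size_t i = 0; i < set1.size () && selected.size () < nMux; ++i)
    {
      selected.push_back (set1[i].second);
    }
  for (std::size_t i = 0; i < set2.size () && selected.size () < nMux; ++i)
    {
      selected.push_back (set2[i].second);
    }
  return selected;
}
```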
+
+The FD scheduler of PSS allocates RBG k to the UE n that maximizes the chosen metric. Two metrics are
+available in the FD scheduler:
+
+ #. Proportional Fair scheduled (PFsch)
+
+.. math::
+
+ \widehat{Msch}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
+ \left( \frac{ R_{j}(k,t) }{ Tsch_\mathrm{j}(t) } \right)
+
+where :math:`Tsch_{j}(t)` is similar to the past throughput performance :math:`T_{j}(t)` perceived by user :math:`j`, with
+the difference that it is updated only when user :math:`j` is actually served.
+
+ #. Carrier over Interference to Average (CoItA)
+
+.. math::
+
+   \widehat{Mcoi}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
+   \left( \frac{ CoI[j,k] }{ \frac{1}{N_{RBG}} \sum_{k'=1}^{N_{RBG}} CoI[j,k'] } \right)
+
+where :math:`CoI[j,k]` is an estimate of the SINR of UE :math:`j` on RBG :math:`k`. Both PFsch
+and CoItA serve to decouple the FD metric from the TD scheduler.
+
+In addition, the PSS FD scheduler provides a weight metric :math:`W[n]` that helps control fairness when
+the number of UEs is low.
+
+.. math::
+
+   W[n] = \max \left( 1, \frac{TBR}{ T_{j}(t) } \right)
+
+where :math:`T_{j}(t)` is the past throughput performance perceived by user :math:`j`. Therefore, on
+RBG k, the FD scheduler selects the UE :math:`j` that maximizes the product of the frequency domain
+metric (:math:`Msch` or :math:`Mcoi`) and the weight :math:`W[n]`. This strategy drives the throughput of
+lower quality UEs towards the TBR.
+
+In the implementation, the TBR equals the Guaranteed Bit Rate (GBR) in the bearer-level
+QoS parameters. :math:`N_{mux}` is fixed to half of the total number of UEs in set 1 and set 2. In addition,
+the CQI report is used as the SINR estimate :math:`CoI[j,k]`.
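+
+The weighting step can be sketched in a few lines. This is an illustrative sketch under the definitions
+above, not the actual ns-3 code; ``PssWeight`` and ``PssFdMetric`` are hypothetical names.

```cpp
#include <algorithm>

// W[n] = max(1, TBR / T_j(t)): UEs below their target bit rate get boosted.
double PssWeight (double tbr, double pastThroughput)
{
  return std::max (1.0, tbr / pastThroughput);
}

// Final FD metric: the chosen frequency domain metric (PFsch or CoItA)
// scaled by the weight W[n].
double PssFdMetric (double fdMetric, double tbr, double pastThroughput)
{
  return fdMetric * PssWeight (tbr, pastThroughput);
}
```

+A UE at half its TBR doubles its FD metric, while a UE already at or above its TBR keeps a weight of 1.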
+
Transport Blocks
----------------
--- a/src/lte/doc/source/lte-references.rst Sun Aug 05 21:33:37 2012 -0300
+++ b/src/lte/doc/source/lte-references.rst Tue Aug 21 23:18:40 2012 -0300
@@ -76,3 +76,8 @@
.. [FCapo2012] F.Capozzi, G.Piro L.A.Grieco, G.Boggia, P.Camarda, "Downlink Packet Scheduling in LTE Cellular Networks: Key Design Issues and a Survey", IEEE Comm. Surveys and Tutorials, to appear
+.. [FABokhari2009] F.A. Bokhari, H. Yanikomeroglu, W.K. Wong, M. Rahman, "Cross-Layer Resource Scheduling for Video Traffic in the Downlink of OFDMA-Based Wireless 4G Networks", EURASIP J. Wirel. Commun. Netw., vol. 2009, no. 3, pp. 1-10, Jan. 2009.
+
+.. [WKWong2004] W.K. Wong, H.Y. Tang, V.C.M. Leung, "Token bank fair queuing: a new scheduling algorithm for wireless multimedia services", Int. J. Commun. Syst., vol. 17, no. 6, pp. 591-614, Aug. 2004.
+
+.. [GMonghal2008] G. Monghal, K.I. Pedersen, I.Z. Kovacs, P.E. Mogensen, "QoS Oriented Time and Frequency Domain Packet Schedulers for The UTRAN Long Term Evolution", in Proc. IEEE VTC, 2008.
--- a/src/lte/doc/source/lte-testing.rst Sun Aug 05 21:33:37 2012 -0300
+++ b/src/lte/doc/source/lte-testing.rst Tue Aug 21 23:18:40 2012 -0300
@@ -433,7 +433,112 @@
 T = \frac{1}{ \sum_{i=1}^{N} \frac{1}{R^{fb}_i} }
+Token Band Fair Queue scheduler performance
+-------------------------------------------
+
+The test suites ``lte-fdtbfq-ff-mac-scheduler`` and ``lte-tdtbfq-ff-mac-scheduler`` create different
+test cases for testing three key features of the TBFQ scheduler: traffic policing, fairness and traffic
+balance. Constant bit rate UDP traffic is used in both downlink and uplink in all test cases.
+The packet interval is set to 1 ms to keep the RLC buffer non-empty. Different traffic rates are
+achieved by setting different packet sizes. Specifically, two classes of flows are created in the
+test suites:
+
+ * Homogeneous flows: flows with the same token generation rate and packet arrival rate
+ * Heterogeneous flows: flows with different packet arrival rates, but with the same token generation rate
+
+Test case 1 verifies the traffic policing and fairness features for the scenario in which all UEs are
+placed at the same distance from the eNB. In this case, all UEs have the same SINR value. Different
+test cases are implemented by using a different SINR value and a different number of UEs. Because each
+flow has the same traffic rate and token generation rate, the TBFQ scheduler guarantees the same
+throughput among UEs without the constraint of the token generation rate. In addition, the exact value
+of the UE throughput depends on the total traffic rate:
+
+ * If total traffic rate <= maximum throughput, UE throughput = traffic rate
+
+ * If total traffic rate > maximum throughput, UE throughput = maximum throughput / N
+
+Here, N is the number of UEs connected to the eNodeB. The maximum throughput in this case equals the rate
+achieved when all RBGs are assigned to one UE (e.g., when the distance equals 0, the maximum throughput is 2196000 byte/sec).
+When the traffic rate is smaller than the maximum bandwidth, TBFQ can police the traffic through the token generation rate
+so that the UE throughput equals its actual traffic rate (the token generation rate is set to the traffic
+generation rate). On the other hand, when the total traffic rate is bigger than the maximum throughput, the eNodeB
+cannot forward all traffic to the UEs. Therefore, in each TTI, TBFQ allocates all RBGs to one UE due to
+the large amount of packets buffered in the RLC buffer. When a UE is scheduled in the current TTI, its token counter is
+decreased, so it will not be scheduled in the next TTI. Because each UE has the same traffic generation rate,
+TBFQ serves each UE in turn and serves only one UE in each TTI (in both TD-TBFQ and FD-TBFQ).
+Therefore, the UE throughput in the second condition equals an even share of the maximum throughput.
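+
+The two conditions above reduce to a one-line expected value. This is an illustrative sketch of the
+test's expectation, not test-suite code; ``ExpectedUeThroughput`` is a hypothetical name and the traffic
+rate is the per-UE rate of the homogeneous flows.

```cpp
// Expected long-term per-UE throughput (byte/sec) for N homogeneous flows:
// each UE gets its own traffic rate while the cell is unsaturated, and an
// even share of the maximum cell throughput once it saturates.
double ExpectedUeThroughput (double perUeTrafficRate, double maxThroughput, int nUes)
{
  if (perUeTrafficRate * nUes <= maxThroughput)
    {
      return perUeTrafficRate; // policing: throughput equals the traffic rate
    }
  return maxThroughput / nUes; // saturation: even share of the cell capacity
}
```

+For example, with the 2196000 byte/sec maximum above, three UEs at 100000 byte/sec each keep their own
+rate, while three UEs at 1000000 byte/sec each receive 732000 byte/sec.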
+
+Test case 2 verifies the traffic policing and fairness features for the scenario in which each UE is placed at
+a different distance from the eNB. In this case, each UE has a different SINR value. Similar to test
+case 1, the UE throughput in test case 2 also depends on the total traffic rate, but with a different
+maximum throughput. Suppose all UEs have a high traffic load. Then the traffic will saturate the RLC buffer
+in the eNodeB. In each TTI, after selecting the UE with the highest metric, TBFQ allocates all RBGs to this
+UE due to the large RLC buffer size. On the other hand, once the RLC buffer is saturated, the total throughput
+of all UEs cannot increase any more. In addition, as discussed in test case 1, for homogeneous flows
+which have the same :math:`t_{i}` and :math:`r_{i}`, each UE achieves the same throughput in the long term. Therefore, we
+can use the same method as in TD-BET to calculate the maximum throughput:
+
+.. math::
+
+   T = \frac{N}{ \sum_{i=1}^{N} \frac{1}{R^{fb}_i} }
+
+Here, :math:`T` is the maximum throughput, :math:`R^{fb}_i` is the full-bandwidth achievable rate
+for user i, and :math:`N` is the number of UEs.
+
+When the total traffic rate is bigger than :math:`T`, the UE throughput equals :math:`\frac{T}{N}`. Otherwise, the UE
+throughput equals its traffic generation rate.
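+
+The formula above is the harmonic mean of the per-UE full-bandwidth rates scaled by :math:`N`. A
+minimal sketch of this computation, with ``MaxThroughput`` as a hypothetical name:

```cpp
#include <cstddef>
#include <vector>

// T = N / (sum_i 1/R_i): maximum cell throughput under TD-BET-like operation,
// where R_i is UE i's full-bandwidth achievable rate.
double MaxThroughput (const std::vector<double> &fullBandwidthRates)
{
  double invSum = 0.0;
  for (std::size_t i = 0; i < fullBandwidthRates.size (); ++i)
    {
      invSum += 1.0 / fullBandwidthRates[i]; // accumulate 1/R_i
    }
  return fullBandwidthRates.size () / invSum;
}
```

+Note that :math:`T` is dominated by the slowest UEs: one UE with a poor rate pulls the whole cell
+throughput down, which is the price TD-BET-like operation pays for equal throughput.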
+
+In test case 3, three flows with different traffic rates are created. The token generation rate for each
+flow is the same and equals the average traffic rate of the three flows. Because TBFQ uses a shared token
+bank, tokens contributed by a UE with a lower traffic load can be utilized by a UE with a higher traffic load.
+In this way, TBFQ can guarantee the traffic rate for each flow. Although heterogeneous flows are used here,
+the calculation of the maximum throughput is the same as in test case 2. In calculating the maximum throughput
+of test case 2, we assumed that all UEs carry a high traffic load, so that the scheduler always assigns all RBGs
+to one UE in each TTI. This assumption also holds in the heterogeneous-flow case. In other words, whether or not
+those flows have the same traffic rate and token generation rate, if their traffic rates are high enough,
+TBFQ performs the same as in test case 2. Therefore, the maximum bandwidth in test case 3 is the
+same as in test case 2.
+
+In test case 3, for some flows, the token generation rate does not equal the MBR, although all flows are CBR
+traffic. This does not follow our parameter setting rules. In fact, the traffic balance feature
+is intended for VBR traffic: because different UEs' peak rates may occur at different times, TBFQ uses the shared
+token bank to balance the traffic among those VBR flows. Test case 3 uses CBR traffic to verify this
+feature, but in real simulations it is recommended to set the token generation rate to the MBR.
+
+Priority Set scheduler performance
+----------------------------------
+
+The test suite ``lte-pss-ff-mac-scheduler`` creates different test cases with a single eNB and several UEs.
+
+In the first class of test cases of ``lte-pss-ff-mac-scheduler``, the UEs are all placed at the same distance from
+the eNB, and hence all have the same SINR. Different test cases are implemented
+by using a different TBR value; in each test case, all UEs have the same
+Target Bit Rate, configured through the GBR in the EPS bearer settings. The expected behavior of PSS is to guarantee that
+each UE's throughput at least equals its TBR if the total flow rate is below the maximum throughput. Similar
+to TBFQ, the maximum throughput in this case equals the rate achieved when all RBGs are assigned to one UE.
+When the traffic rate is smaller than the maximum bandwidth, the UE throughput equals its actual traffic rate;
+otherwise, the UE throughput equals an even share of the maximum throughput.
+
+In the first class of test cases, each UE has the same SINR. Therefore, the priority metric in the PF scheduler is
+determined by the past average throughput :math:`T_{j}(t)`, because each UE has the same achievable throughput
+:math:`R_{j}(k,t)` in PFsch and the same :math:`CoI[j,k]` in CoItA. This means that PSS performs like
+TD-BET, which allocates all RBGs to one UE in each TTI. The maximum value of the UE throughput then equals
+the achievable rate when all RBGs are allocated to this UE.
+
+In the second class of test cases of ``lte-pss-ff-mac-scheduler``, the UEs are all placed at the same distance from
+the eNB, and hence all have the same SINR. Different TBR values are assigned to the UEs.
+There also exists a maximum throughput in this case. Once the total traffic rate is bigger than this threshold,
+some UEs cannot achieve their TBR. Because there is no fading, the subband CQIs for all
+RBGs are the same. Therefore, in the FD scheduler, in each TTI, the priority metrics of a UE for all RBGs
+are the same. This means that the FD scheduler always allocates all RBGs to one user. Therefore, in the
+maximum throughput case, PSS performs like TD-BET. Then we have:
+
+.. math::
+
+   T = \frac{N}{ \sum_{i=1}^{N} \frac{1}{R^{fb}_i} }
+
+Here, :math:`T` is the maximum throughput, :math:`R^{fb}_i` is the full-bandwidth achievable rate
+for user i, and :math:`N` is the number of UEs.
+
Building Propagation Loss Model
-------------------------------
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/src/lte/model/pss-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
@@ -0,0 +1,1532 @@
+/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
+/*
+ * Copyright (c) 2011 Centre Tecnologic de Telecomunicacions de Catalunya (CTTC)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation;
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: Marco Miozzo <marco.miozzo@cttc.es>
+ * Dizhi Zhou <dizhi.zhou@gmail.com>
+ */
+
+#ifdef __FreeBSD__
+#define log2(x) (log(x) / M_LN2)
+#endif /* __FreeBSD__ */
+
+#include <ns3/log.h>
+#include <ns3/pointer.h>
+
+#include <ns3/simulator.h>
+#include <ns3/lte-amc.h>
+#include <ns3/pss-ff-mac-scheduler.h>
+
+NS_LOG_COMPONENT_DEFINE ("PssFfMacScheduler");
+
+namespace ns3 {
+
+int PssType0AllocationRbg[4] = {
+  10,       // RBG size 1
+  26,       // RBG size 2
+  63,       // RBG size 3
+  110       // RBG size 4
+}; // see table 7.1.6.1-1 of 36.213
+
+
+NS_OBJECT_ENSURE_REGISTERED (PssFfMacScheduler);
+
+
+
+class PssSchedulerMemberCschedSapProvider : public FfMacCschedSapProvider
+{
+public:
+ PssSchedulerMemberCschedSapProvider (PssFfMacScheduler* scheduler);
+
+ // inherited from FfMacCschedSapProvider
+ virtual void CschedCellConfigReq (const struct CschedCellConfigReqParameters& params);
+ virtual void CschedUeConfigReq (const struct CschedUeConfigReqParameters& params);
+ virtual void CschedLcConfigReq (const struct CschedLcConfigReqParameters& params);
+ virtual void CschedLcReleaseReq (const struct CschedLcReleaseReqParameters& params);
+ virtual void CschedUeReleaseReq (const struct CschedUeReleaseReqParameters& params);
+
+private:
+ PssSchedulerMemberCschedSapProvider ();
+ PssFfMacScheduler* m_scheduler;
+};
+
+PssSchedulerMemberCschedSapProvider::PssSchedulerMemberCschedSapProvider ()
+{
+}
+
+PssSchedulerMemberCschedSapProvider::PssSchedulerMemberCschedSapProvider (PssFfMacScheduler* scheduler) : m_scheduler (scheduler)
+{
+}
+
+
+void
+PssSchedulerMemberCschedSapProvider::CschedCellConfigReq (const struct CschedCellConfigReqParameters& params)
+{
+ m_scheduler->DoCschedCellConfigReq (params);
+}
+
+void
+PssSchedulerMemberCschedSapProvider::CschedUeConfigReq (const struct CschedUeConfigReqParameters& params)
+{
+ m_scheduler->DoCschedUeConfigReq (params);
+}
+
+
+void
+PssSchedulerMemberCschedSapProvider::CschedLcConfigReq (const struct CschedLcConfigReqParameters& params)
+{
+ m_scheduler->DoCschedLcConfigReq (params);
+}
+
+void
+PssSchedulerMemberCschedSapProvider::CschedLcReleaseReq (const struct CschedLcReleaseReqParameters& params)
+{
+ m_scheduler->DoCschedLcReleaseReq (params);
+}
+
+void
+PssSchedulerMemberCschedSapProvider::CschedUeReleaseReq (const struct CschedUeReleaseReqParameters& params)
+{
+ m_scheduler->DoCschedUeReleaseReq (params);
+}
+
+
+
+
+class PssSchedulerMemberSchedSapProvider : public FfMacSchedSapProvider
+{
+public:
+ PssSchedulerMemberSchedSapProvider (PssFfMacScheduler* scheduler);
+
+ // inherited from FfMacSchedSapProvider
+ virtual void SchedDlRlcBufferReq (const struct SchedDlRlcBufferReqParameters& params);
+ virtual void SchedDlPagingBufferReq (const struct SchedDlPagingBufferReqParameters& params);
+ virtual void SchedDlMacBufferReq (const struct SchedDlMacBufferReqParameters& params);
+ virtual void SchedDlTriggerReq (const struct SchedDlTriggerReqParameters& params);
+ virtual void SchedDlRachInfoReq (const struct SchedDlRachInfoReqParameters& params);
+ virtual void SchedDlCqiInfoReq (const struct SchedDlCqiInfoReqParameters& params);
+ virtual void SchedUlTriggerReq (const struct SchedUlTriggerReqParameters& params);
+ virtual void SchedUlNoiseInterferenceReq (const struct SchedUlNoiseInterferenceReqParameters& params);
+ virtual void SchedUlSrInfoReq (const struct SchedUlSrInfoReqParameters& params);
+ virtual void SchedUlMacCtrlInfoReq (const struct SchedUlMacCtrlInfoReqParameters& params);
+ virtual void SchedUlCqiInfoReq (const struct SchedUlCqiInfoReqParameters& params);
+
+
+private:
+ PssSchedulerMemberSchedSapProvider ();
+ PssFfMacScheduler* m_scheduler;
+};
+
+
+
+PssSchedulerMemberSchedSapProvider::PssSchedulerMemberSchedSapProvider ()
+{
+}
+
+
+PssSchedulerMemberSchedSapProvider::PssSchedulerMemberSchedSapProvider (PssFfMacScheduler* scheduler)
+ : m_scheduler (scheduler)
+{
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedDlRlcBufferReq (const struct SchedDlRlcBufferReqParameters& params)
+{
+ m_scheduler->DoSchedDlRlcBufferReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedDlPagingBufferReq (const struct SchedDlPagingBufferReqParameters& params)
+{
+ m_scheduler->DoSchedDlPagingBufferReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedDlMacBufferReq (const struct SchedDlMacBufferReqParameters& params)
+{
+ m_scheduler->DoSchedDlMacBufferReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedDlTriggerReq (const struct SchedDlTriggerReqParameters& params)
+{
+ m_scheduler->DoSchedDlTriggerReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedDlRachInfoReq (const struct SchedDlRachInfoReqParameters& params)
+{
+ m_scheduler->DoSchedDlRachInfoReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedDlCqiInfoReq (const struct SchedDlCqiInfoReqParameters& params)
+{
+ m_scheduler->DoSchedDlCqiInfoReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedUlTriggerReq (const struct SchedUlTriggerReqParameters& params)
+{
+ m_scheduler->DoSchedUlTriggerReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedUlNoiseInterferenceReq (const struct SchedUlNoiseInterferenceReqParameters& params)
+{
+ m_scheduler->DoSchedUlNoiseInterferenceReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedUlSrInfoReq (const struct SchedUlSrInfoReqParameters& params)
+{
+ m_scheduler->DoSchedUlSrInfoReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedUlMacCtrlInfoReq (const struct SchedUlMacCtrlInfoReqParameters& params)
+{
+ m_scheduler->DoSchedUlMacCtrlInfoReq (params);
+}
+
+void
+PssSchedulerMemberSchedSapProvider::SchedUlCqiInfoReq (const struct SchedUlCqiInfoReqParameters& params)
+{
+ m_scheduler->DoSchedUlCqiInfoReq (params);
+}
+
+
+
+
+
+PssFfMacScheduler::PssFfMacScheduler ()
+ : m_cschedSapUser (0),
+ m_schedSapUser (0),
+ m_timeWindow (99.0),
+ m_nextRntiUl (0)
+{
+ m_amc = CreateObject <LteAmc> ();
+ m_cschedSapProvider = new PssSchedulerMemberCschedSapProvider (this);
+ m_schedSapProvider = new PssSchedulerMemberSchedSapProvider (this);
+}
+
+PssFfMacScheduler::~PssFfMacScheduler ()
+{
+ NS_LOG_FUNCTION (this);
+}
+
+void
+PssFfMacScheduler::DoDispose ()
+{
+ NS_LOG_FUNCTION (this);
+ delete m_cschedSapProvider;
+ delete m_schedSapProvider;
+}
+
+TypeId
+PssFfMacScheduler::GetTypeId (void)
+{
+ static TypeId tid = TypeId ("ns3::PssFfMacScheduler")
+ .SetParent<FfMacScheduler> ()
+ .AddConstructor<PssFfMacScheduler> ()
+ .AddAttribute ("CqiTimerThreshold",
+ "The number of TTIs a CQI is valid (default 1000 - 1 sec.)",
+ UintegerValue (1000),
+ MakeUintegerAccessor (&PssFfMacScheduler::m_cqiTimersThreshold),
+ MakeUintegerChecker<uint32_t> ())
+ ;
+ return tid;
+}
+
+
+
+void
+PssFfMacScheduler::SetFfMacCschedSapUser (FfMacCschedSapUser* s)
+{
+ m_cschedSapUser = s;
+}
+
+void
+PssFfMacScheduler::SetFfMacSchedSapUser (FfMacSchedSapUser* s)
+{
+ m_schedSapUser = s;
+}
+
+FfMacCschedSapProvider*
+PssFfMacScheduler::GetFfMacCschedSapProvider ()
+{
+ return m_cschedSapProvider;
+}
+
+FfMacSchedSapProvider*
+PssFfMacScheduler::GetFfMacSchedSapProvider ()
+{
+ return m_schedSapProvider;
+}
+
+void
+PssFfMacScheduler::DoCschedCellConfigReq (const struct FfMacCschedSapProvider::CschedCellConfigReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // Read the subset of parameters used
+ m_cschedCellConfig = params;
+ FfMacCschedSapUser::CschedUeConfigCnfParameters cnf;
+ cnf.m_result = SUCCESS;
+ m_cschedSapUser->CschedUeConfigCnf (cnf);
+ return;
+}
+
+void
+PssFfMacScheduler::DoCschedUeConfigReq (const struct FfMacCschedSapProvider::CschedUeConfigReqParameters& params)
+{
+ NS_LOG_FUNCTION (this << " RNTI " << params.m_rnti << " txMode " << (uint16_t)params.m_transmissionMode);
+ std::map <uint16_t,uint8_t>::iterator it = m_uesTxMode.find (params.m_rnti);
+ if (it==m_uesTxMode.end ())
+ {
+      m_uesTxMode.insert (std::pair <uint16_t, uint8_t> (params.m_rnti, params.m_transmissionMode));
+ }
+ else
+ {
+ (*it).second = params.m_transmissionMode;
+ }
+  return;
+}
+
+void
+PssFfMacScheduler::DoCschedLcConfigReq (const struct FfMacCschedSapProvider::CschedLcConfigReqParameters& params)
+{
+ NS_LOG_FUNCTION (this << " New LC, rnti: " << params.m_rnti);
+
+ std::map <uint16_t, pssFlowPerf_t>::iterator it;
+ for (uint16_t i = 0; i < params.m_logicalChannelConfigList.size (); i++)
+ {
+ it = m_flowStatsDl.find (params.m_rnti);
+
+ if (it == m_flowStatsDl.end ())
+ {
+ double tbrDlInBytes = params.m_logicalChannelConfigList.at(i).m_eRabGuaranteedBitrateDl / 8; // byte/s
+ double tbrUlInBytes = params.m_logicalChannelConfigList.at(i).m_eRabGuaranteedBitrateUl / 8; // byte/s
+
+          NS_LOG_DEBUG (this << " UE " << params.m_rnti << " GBR " << tbrDlInBytes);
+
+ pssFlowPerf_t flowStatsDl;
+ flowStatsDl.flowStart = Simulator::Now ();
+ flowStatsDl.totalBytesTransmitted = 0;
+ flowStatsDl.lastTtiBytesTrasmitted = 0;
+ flowStatsDl.lastAveragedThroughput = 1;
+ flowStatsDl.secondLastAveragedThroughput = 1;
+ flowStatsDl.targetThroughput = tbrDlInBytes;
+ m_flowStatsDl.insert (std::pair<uint16_t, pssFlowPerf_t> (params.m_rnti, flowStatsDl));
+ pssFlowPerf_t flowStatsUl;
+ flowStatsUl.flowStart = Simulator::Now ();
+ flowStatsUl.totalBytesTransmitted = 0;
+ flowStatsUl.lastTtiBytesTrasmitted = 0;
+ flowStatsUl.lastAveragedThroughput = 1;
+ flowStatsUl.secondLastAveragedThroughput = 1;
+ flowStatsUl.targetThroughput = tbrUlInBytes;
+ m_flowStatsUl.insert (std::pair<uint16_t, pssFlowPerf_t> (params.m_rnti, flowStatsUl));
+ }
+ else
+ {
+ NS_LOG_ERROR ("RNTI already exists");
+ }
+ }
+
+ return;
+}
+
+void
+PssFfMacScheduler::DoCschedLcReleaseReq (const struct FfMacCschedSapProvider::CschedLcReleaseReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // TODO: Implementation of the API
+ return;
+}
+
+void
+PssFfMacScheduler::DoCschedUeReleaseReq (const struct FfMacCschedSapProvider::CschedUeReleaseReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // TODO: Implementation of the API
+ return;
+}
+
+
+void
+PssFfMacScheduler::DoSchedDlRlcBufferReq (const struct FfMacSchedSapProvider::SchedDlRlcBufferReqParameters& params)
+{
+ NS_LOG_FUNCTION (this << params.m_rnti << (uint32_t) params.m_logicalChannelIdentity);
+ // API generated by RLC for updating RLC parameters on a LC (tx and retx queues)
+
+ std::map <LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters>::iterator it;
+
+ LteFlowId_t flow (params.m_rnti, params.m_logicalChannelIdentity);
+
+ it = m_rlcBufferReq.find (flow);
+
+ if (it == m_rlcBufferReq.end ())
+ {
+ m_rlcBufferReq.insert (std::pair <LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters> (flow, params));
+ }
+ else
+ {
+ (*it).second = params;
+ }
+
+ return;
+}
+
+void
+PssFfMacScheduler::DoSchedDlPagingBufferReq (const struct FfMacSchedSapProvider::SchedDlPagingBufferReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // TODO: Implementation of the API
+ return;
+}
+
+void
+PssFfMacScheduler::DoSchedDlMacBufferReq (const struct FfMacSchedSapProvider::SchedDlMacBufferReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // TODO: Implementation of the API
+ return;
+}
+
+int
+PssFfMacScheduler::GetRbgSize (int dlbandwidth)
+{
+ for (int i = 0; i < 4; i++)
+ {
+ if (dlbandwidth < PssType0AllocationRbg[i])
+ {
+ return (i + 1);
+ }
+ }
+
+ return (-1);
+}
+
+
+int
+PssFfMacScheduler::LcActivePerFlow (uint16_t rnti)
+{
+ std::map <LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters>::iterator it;
+ int lcActive = 0;
+ for (it = m_rlcBufferReq.begin (); it != m_rlcBufferReq.end (); it++)
+ {
+ if (((*it).first.m_rnti == rnti) && (((*it).second.m_rlcTransmissionQueueSize > 0)
+ || ((*it).second.m_rlcRetransmissionQueueSize > 0)
+ || ((*it).second.m_rlcStatusPduSize > 0) ))
+ {
+ lcActive++;
+ }
+ if ((*it).first.m_rnti > rnti)
+ {
+ break;
+ }
+ }
+ return (lcActive);
+
+}
+
+
+void
+PssFfMacScheduler::DoSchedDlTriggerReq (const struct FfMacSchedSapProvider::SchedDlTriggerReqParameters& params)
+{
+ NS_LOG_FUNCTION (this << " Frame no. " << (params.m_sfnSf >> 4) << " subframe no. " << (0xF & params.m_sfnSf));
+ // API generated by RLC for triggering the scheduling of a DL subframe
+
+ // evaluate the relative channel quality indicator for each UE per each RBG
+ // (since we are using allocation type 0 the small unit of allocation is RBG)
+ // Resource allocation type 0 (see sec 7.1.6.1 of 36.213)
+
+ RefreshDlCqiMaps ();
+
+ int rbgSize = GetRbgSize (m_cschedCellConfig.m_dlBandwidth);
+ int rbgNum = m_cschedCellConfig.m_dlBandwidth / rbgSize;
+ std::map <uint16_t, std::vector <uint16_t> > allocationMap;
+ std::map <uint16_t, pssFlowPerf_t>::iterator it;
+
+ // schedulability check
+ std::map <uint16_t, pssFlowPerf_t> ueSet;
+ for (it = m_flowStatsDl.begin (); it != m_flowStatsDl.end (); it++)
+ {
+ if( LcActivePerFlow ((*it).first) > 0 )
+ {
+ ueSet.insert(std::pair <uint16_t, pssFlowPerf_t> ((*it).first, (*it).second));
+ }
+ }
+
+ if (ueSet.size() == 0)
+ {
+ // no data in RLC buffer
+ return;
+ }
+
+ // Time Domain scheduler
+ std::vector <std::pair<double, uint16_t> > ueSet1;
+ std::vector <std::pair<double,uint16_t> > ueSet2;
+ for (it = ueSet.begin (); it != ueSet.end (); it++)
+ {
+      double metric = 0.0; // initialized in case no valid CQI is available
+ if ((*it).second.lastAveragedThroughput < (*it).second.targetThroughput )
+ {
+ // calculate TD BET metric
+ metric = 1 / (*it).second.lastAveragedThroughput;
+ ueSet1.push_back(std::pair<double, uint16_t> (metric, (*it).first));
+ }
+ else
+ {
+ // calculate TD PF metric
+ std::map <uint16_t,uint8_t>::iterator itCqi;
+ itCqi = m_p10CqiRxed.find ((*it).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*it).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ uint8_t wbCqi = 0;
+ if (itCqi == m_p10CqiRxed.end())
+ {
+ wbCqi = 1; // start with lowest value
+ }
+ else
+ {
+ wbCqi = (*itCqi).second;
+ }
+
+ if (wbCqi > 0)
+ {
+ if (LcActivePerFlow ((*it).first) > 0)
+ {
+ // this UE has data to transmit
+ double achievableRate = 0.0;
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ uint8_t mcs = 0;
+ mcs = m_amc->GetMcsFromCqi (wbCqi);
+ achievableRate += ((m_amc->GetTbSizeFromMcs (mcs, rbgSize) / 8) / 0.001); // = TB size / TTI
+ }
+
+ metric = achievableRate / (*it).second.lastAveragedThroughput;
+ }
+ } // end of wbCqi
+
+ ueSet2.push_back(std::pair<double, uint16_t> (metric, (*it).first));
+ }
+ }// end of ueSet
+
+  // sort UEs in ueSet1 and ueSet2 in descending order of their metric value
+ std::sort (ueSet1.rbegin (), ueSet1.rend ());
+ std::sort (ueSet2.rbegin (), ueSet2.rend ());
+
+ std::map <uint16_t, pssFlowPerf_t> tdUeSet;
+ int nMux;
+  if (ueSet1.size () + ueSet2.size () <= 2)
+    {
+      nMux = 1;
+    }
+  else
+    {
+      // TD scheduler forwards only half of the selected UEs per TTI to the FD scheduler
+      nMux = (int)((ueSet1.size () + ueSet2.size ()) / 2);
+    }
+  for (it = m_flowStatsDl.begin (); it != m_flowStatsDl.end (); it++)
+ {
+ std::vector <std::pair<double, uint16_t> >::iterator itSet;
+ for (itSet = ueSet1.begin (); itSet != ueSet1.end () && nMux != 0; itSet++)
+ {
+ std::map <uint16_t, pssFlowPerf_t>::iterator itUe;
+ itUe = m_flowStatsDl.find((*itSet).second);
+ tdUeSet.insert(std::pair<uint16_t, pssFlowPerf_t> ( (*itUe).first, (*itUe).second ) );
+ nMux--;
+ }
+
+ if (nMux == 0)
+ break;
+
+ for (itSet = ueSet2.begin (); itSet != ueSet2.end () && nMux != 0; itSet++)
+ {
+ std::map <uint16_t, pssFlowPerf_t>::iterator itUe;
+ itUe = m_flowStatsDl.find((*itSet).second);
+ tdUeSet.insert(std::pair<uint16_t, pssFlowPerf_t> ( (*itUe).first, (*itUe).second ) );
+ nMux--;
+ }
+
+ if (nMux == 0)
+ break;
+
+ } // end of m_flowStatsDl
+
+#if 1
+ // FD scheduler: Carrier over Interference to Average (CoItA)
+ std::map < uint16_t, uint8_t > sbCqiSum;
+ for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
+ {
+ uint8_t sum = 0;
+ for (int i = 0; i < rbgNum; i++)
+ {
+ std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
+ itCqi = m_a30CqiRxed.find ((*it).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*it).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ std::vector <uint8_t> sbCqis;
+ if (itCqi == m_a30CqiRxed.end ())
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ sbCqis.push_back (1); // start with lowest value
+ }
+ }
+ else
+ {
+ sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
+ }
+
+ uint8_t cqi1 = sbCqis.at(0);
+ uint8_t cqi2 = 1;
+ if (sbCqis.size () > 1)
+ {
+ cqi2 = sbCqis.at(1);
+ }
+
+ uint8_t sbCqi;
+ if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ if (sbCqis.size () > k)
+ {
+ sbCqi = sbCqis.at(k);
+ }
+ else
+ {
+ // no info on this subband
+ sbCqi = 0;
+ }
+ sum += sbCqi;
+ }
+ } // end if cqi
+ }// end of rbgNum
+
+ sbCqiSum.insert(std::pair<uint16_t, uint8_t> ((*it).first, sum));
+ }// end tdUeSet
+
+ for (int i = 0; i < rbgNum; i++)
+ {
+ std::map <uint16_t, pssFlowPerf_t>::iterator itMax = tdUeSet.end ();
+ double metricMax = 0.0;
+ for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
+ {
+ // calculate the PF weight
+ double weight = (*it).second.targetThroughput / (*it).second.lastAveragedThroughput;
+ if (weight < 1.0)
+ weight = 1.0;
+
+ std::map < uint16_t, uint8_t>::iterator itSbCqiSum;
+ itSbCqiSum = sbCqiSum.find((*it).first);
+
+ std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
+ itCqi = m_a30CqiRxed.find ((*it).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*it).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ std::vector <uint8_t> sbCqis;
+ if (itCqi == m_a30CqiRxed.end ())
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ sbCqis.push_back (1); // start with lowest value
+ }
+ }
+ else
+ {
+ sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
+ }
+
+ uint8_t cqi1 = sbCqis.at(0);
+ uint8_t cqi2 = 1;
+ if (sbCqis.size () > 1)
+ {
+ cqi2 = sbCqis.at(1);
+ }
+
+ uint8_t sbCqi;
+ double colMetric = 0.0;
+ if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ if (sbCqis.size () > k)
+ {
+ sbCqi = sbCqis.at(k);
+ }
+ else
+ {
+ // no info on this subband
+ sbCqi = 0;
+ }
+ colMetric += (double)sbCqi / (double)(*itSbCqiSum).second;
+ }
+ } // end if cqi
+
+ double metric;
+ if (colMetric != 0)
+ metric = weight * colMetric;
+ else
+ metric = 1;
+
+ if (metric > metricMax)
+ {
+ metricMax = metric;
+ itMax = it;
+ }
+ } // end of tdUeSet
+
+ if (itMax == tdUeSet.end ())
+ {
+ // no UE available for downlink
+ return;
+ }
+ else
+ {
+ // assign all RBGs to this UE
+ std::vector <uint16_t> tempMap;
+ for (int i = 0; i < rbgNum; i++)
+ {
+ tempMap.push_back (i);
+ }
+ allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax).first, tempMap));
+ }
+ }// end of rbgNum
+
+#endif // end of CoItA
+
+#if 0
+ // FD scheduler: Proportional Fair scheduler (PFsch)
+ for (int i = 0; i < rbgNum; i++)
+ {
+ std::map <uint16_t, pssFlowPerf_t>::iterator itMax = tdUeSet.end ();
+ double metricMax = 0.0;
+ for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
+ {
+ // calculate the PF weight
+ double weight = (*it).second.targetThroughput / (*it).second.lastAveragedThroughput;
+ if (weight < 1.0)
+ weight = 1.0;
+
+ std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
+ itCqi = m_a30CqiRxed.find ((*it).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*it).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ std::vector <uint8_t> sbCqis;
+ if (itCqi == m_a30CqiRxed.end ())
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ sbCqis.push_back (1); // start with lowest value
+ }
+ }
+ else
+ {
+ sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
+ }
+
+ uint8_t cqi1 = sbCqis.at(0);
+ uint8_t cqi2 = 1;
+ if (sbCqis.size () > 1)
+ {
+ cqi2 = sbCqis.at(1);
+ }
+
+ double schMetric = 0.0;
+ if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
+ {
+ double achievableRate = 0.0;
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ uint8_t mcs = 0;
+ if (sbCqis.size () > k)
+ {
+ mcs = m_amc->GetMcsFromCqi (sbCqis.at (k));
+ }
+ else
+ {
+ // no info on this subband -> worst MCS
+ mcs = 0;
+ }
+ achievableRate += ((m_amc->GetTbSizeFromMcs (mcs, rbgSize) / 8) / 0.001); // = TB size / TTI
+ }
+ schMetric = achievableRate / (*it).second.secondLastAveragedThroughput;
+ } // end if cqi
+
+ double metric;
+ metric = weight * schMetric;
+
+ if (metric > metricMax)
+ {
+ metricMax = metric;
+ itMax = it;
+ }
+ } // end of tdUeSet
+
+ if (itMax == tdUeSet.end ())
+ {
+ // no UE available for downlink
+ return;
+ }
+ else
+ {
+ // assign all RBGs to this UE
+ std::vector <uint16_t> tempMap;
+ for (int i = 0; i < rbgNum; i++)
+ {
+ tempMap.push_back (i);
+ }
+ allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax).first, tempMap));
+ }
+
+ }// end of rbgNum
+
+#endif // end of PFsch
+
+ // reset TTI stats of users
+ std::map <uint16_t, pssFlowPerf_t>::iterator itStats;
+ for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
+ {
+ (*itStats).second.lastTtiBytesTrasmitted = 0;
+ }
+
+ // generate the transmission opportunities by grouping the RBGs of the same RNTI and
+ // creating the corresponding DCIs
+ FfMacSchedSapUser::SchedDlConfigIndParameters ret;
+ std::map <uint16_t, std::vector <uint16_t> >::iterator itMap = allocationMap.begin ();
+ while (itMap != allocationMap.end ())
+ {
+ // create new BuildDataListElement_s for this LC
+ BuildDataListElement_s newEl;
+ newEl.m_rnti = (*itMap).first;
+ // create the DlDciListElement_s
+ DlDciListElement_s newDci;
+ std::vector <struct RlcPduListElement_s> newRlcPduLe;
+ newDci.m_rnti = (*itMap).first;
+
+ uint16_t lcActives = LcActivePerFlow ((*itMap).first);
+// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
+ uint16_t rbgPerRnti = (*itMap).second.size ();
+ std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
+ itCqi = m_a30CqiRxed.find ((*itMap).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*itMap).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*itMap).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ std::vector <uint8_t> worstCqi (2, 15);
+ if (itCqi != m_a30CqiRxed.end ())
+ {
+ for (uint16_t k = 0; k < (*itMap).second.size (); k++)
+ {
+ if ((*itCqi).second.m_higherLayerSelected.size () > (*itMap).second.at (k))
+ {
+// NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " << (uint16_t)((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (0)) );
+ for (uint8_t j = 0; j < nLayer; j++)
+ {
+ if ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.size ()> j)
+ {
+ if (((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (j)) < worstCqi.at (j))
+ {
+ worstCqi.at (j) = ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (j));
+ }
+ }
+ else
+ {
+ // no CQI for this layer of this suband -> worst one
+ worstCqi.at (j) = 1;
+ }
+ }
+ }
+ else
+ {
+ for (uint8_t j = 0; j < nLayer; j++)
+ {
+ worstCqi.at (j) = 1; // try with lowest MCS in RBG with no info on channel
+ }
+ }
+ }
+ }
+ else
+ {
+ for (uint8_t j = 0; j < nLayer; j++)
+ {
+ worstCqi.at (j) = 1; // try with lowest MCS in RBG with no info on channel
+ }
+ }
+// NS_LOG_DEBUG (this << " CQI " << (uint16_t)worstCqi);
+ uint32_t bytesTxed = 0;
+ for (uint8_t j = 0; j < nLayer; j++)
+ {
+ newDci.m_mcs.push_back (m_amc->GetMcsFromCqi (worstCqi.at (j)));
+ int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (j), rbgPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
+ newDci.m_tbsSize.push_back (tbSize);
+ bytesTxed += tbSize;
+ //NS_LOG_DEBUG ( Simulator::Now() << " Allocate user " << newEl.m_rnti << " tbSize "<< tbSize << " RBGs " << rbgPerRnti << " mcs " << (uint16_t) newDci.m_mcs.at (j) << " layers " << nLayer);
+
+ //NS_LOG_DEBUG (this << " MCS " << m_amc->GetMcsFromCqi (worstCqi.at (j)));
+ }
+
+ newDci.m_resAlloc = 0; // only allocation type 0 at this stage
+ newDci.m_rbBitmap = 0; // TBD (32 bit bitmap see 7.1.6 of 36.213)
+ uint32_t rbgMask = 0;
+ for (uint16_t k = 0; k < (*itMap).second.size (); k++)
+ {
+ rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
+// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
+ }
+ newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
+
+ // create the RLC PDUs -> equally divide resources among the active LCs
+ std::map <LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters>::iterator itBufReq;
+ for (itBufReq = m_rlcBufferReq.begin (); itBufReq != m_rlcBufferReq.end (); itBufReq++)
+ {
+ if (((*itBufReq).first.m_rnti == (*itMap).first) &&
+ (((*itBufReq).second.m_rlcTransmissionQueueSize > 0)
+ || ((*itBufReq).second.m_rlcRetransmissionQueueSize > 0)
+ || ((*itBufReq).second.m_rlcStatusPduSize > 0) ))
+ {
+ for (uint8_t j = 0; j < nLayer; j++)
+ {
+ RlcPduListElement_s newRlcEl;
+ newRlcEl.m_logicalChannelIdentity = (*itBufReq).first.m_lcId;
+ newRlcEl.m_size = newDci.m_tbsSize.at (j) / lcActives;
+ //NS_LOG_DEBUG (this << " LCID " << (uint32_t) newRlcEl.m_logicalChannelIdentity << " size " << newRlcEl.m_size << " layer " << (uint16_t)j);
+ newRlcPduLe.push_back (newRlcEl);
+ UpdateDlRlcBufferInfo (newDci.m_rnti, newRlcEl.m_logicalChannelIdentity, newRlcEl.m_size);
+ }
+ }
+ if ((*itBufReq).first.m_rnti > (*itMap).first)
+ {
+ break;
+ }
+ }
+ newDci.m_ndi.push_back (1); // TBD (new data indicator)
+ newDci.m_rv.push_back (0); // TBD (redundancy version)
+
+ newEl.m_dci = newDci;
+ // ...more parameters -> ignored in this version
+
+ newEl.m_rlcPduList.push_back (newRlcPduLe);
+ ret.m_buildDataList.push_back (newEl);
+
+ // update UE stats
+ std::map <uint16_t, pssFlowPerf_t>::iterator it;
+ it = m_flowStatsDl.find ((*itMap).first);
+ if (it != m_flowStatsDl.end ())
+ {
+ (*it).second.lastTtiBytesTrasmitted = bytesTxed;
+// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
+
+
+ }
+ else
+ {
+ NS_LOG_DEBUG (this << " No Stats for this allocated UE");
+ }
+
+ itMap++;
+ } // end while allocation
+ ret.m_nrOfPdcchOfdmSymbols = 1; // TODO: check correct value according the DCIs txed
+
+
+ // update UEs stats
+ for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
+ {
+ std::map <uint16_t, pssFlowPerf_t>::iterator itUeScheduled = tdUeSet.end();
+ itUeScheduled = tdUeSet.find((*itStats).first);
+ if (itUeScheduled != tdUeSet.end())
+ {
+ (*itStats).second.secondLastAveragedThroughput = ((1.0 - (1 / m_timeWindow)) * (*itStats).second.secondLastAveragedThroughput) + ((1 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ }
+
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ // update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE - The UMTS Long Term Evolution, Wiley)
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ // NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
+// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
+ (*itStats).second.lastTtiBytesTrasmitted = 0;
+ }
+
+
+ m_schedSapUser->SchedDlConfigInd (ret);
+
+
+ return;
+}
+
+void
+PssFfMacScheduler::DoSchedDlRachInfoReq (const struct FfMacSchedSapProvider::SchedDlRachInfoReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // TODO: Implementation of the API
+ return;
+}
+
+void
+PssFfMacScheduler::DoSchedDlCqiInfoReq (const struct FfMacSchedSapProvider::SchedDlCqiInfoReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+
+ for (unsigned int i = 0; i < params.m_cqiList.size (); i++)
+ {
+ if ( params.m_cqiList.at (i).m_cqiType == CqiListElement_s::P10 )
+ {
+ // wideband CQI reporting
+ std::map <uint16_t,uint8_t>::iterator it;
+ uint16_t rnti = params.m_cqiList.at (i).m_rnti;
+ it = m_p10CqiRxed.find (rnti);
+ if (it == m_p10CqiRxed.end ())
+ {
+ // create the new entry
+ m_p10CqiRxed.insert ( std::pair<uint16_t, uint8_t > (rnti, params.m_cqiList.at (i).m_wbCqi.at (0)) ); // only codeword 0 at this stage (SISO)
+ // generate the corresponding timer
+ m_p10CqiTimers.insert ( std::pair<uint16_t, uint32_t > (rnti, m_cqiTimersThreshold));
+ }
+ else
+ {
+ // update the CQI value and refresh the corresponding timer
+ (*it).second = params.m_cqiList.at (i).m_wbCqi.at (0);
+ // update the corresponding timer
+ std::map <uint16_t,uint32_t>::iterator itTimers;
+ itTimers = m_p10CqiTimers.find (rnti);
+ (*itTimers).second = m_cqiTimersThreshold;
+ }
+ }
+ else if ( params.m_cqiList.at (i).m_cqiType == CqiListElement_s::A30 )
+ {
+ // subband CQI reporting high layer configured
+ std::map <uint16_t,SbMeasResult_s>::iterator it;
+ uint16_t rnti = params.m_cqiList.at (i).m_rnti;
+ it = m_a30CqiRxed.find (rnti);
+ if (it == m_a30CqiRxed.end ())
+ {
+ // create the new entry
+ m_a30CqiRxed.insert ( std::pair<uint16_t, SbMeasResult_s > (rnti, params.m_cqiList.at (i).m_sbMeasResult) );
+ m_a30CqiTimers.insert ( std::pair<uint16_t, uint32_t > (rnti, m_cqiTimersThreshold));
+ }
+ else
+ {
+ // update the CQI value and refresh the corresponding timer
+ (*it).second = params.m_cqiList.at (i).m_sbMeasResult;
+ std::map <uint16_t,uint32_t>::iterator itTimers;
+ itTimers = m_a30CqiTimers.find (rnti);
+ (*itTimers).second = m_cqiTimersThreshold;
+ }
+ }
+ else
+ {
+ NS_LOG_ERROR (this << " CQI type unknown");
+ }
+ }
+
+ return;
+}
+
+
+double
+PssFfMacScheduler::EstimateUlSinr (uint16_t rnti, uint16_t rb)
+{
+ std::map <uint16_t, std::vector <double> >::iterator itCqi = m_ueCqi.find (rnti);
+ if (itCqi == m_ueCqi.end ())
+ {
+ // no cqi info about this UE
+ return (NO_SINR);
+
+ }
+ else
+ {
+ // take the average SINR value among the available
+ double sinrSum = 0;
+ int sinrNum = 0;
+ for (uint32_t i = 0; i < m_cschedCellConfig.m_ulBandwidth; i++)
+ {
+ double sinr = (*itCqi).second.at (i);
+ if (sinr != NO_SINR)
+ {
+ sinrSum += sinr;
+ sinrNum++;
+ }
+ }
+ double estimatedSinr = (sinrNum > 0) ? (sinrSum / (double)sinrNum) : NO_SINR;
+ // store the value
+ (*itCqi).second.at (rb) = estimatedSinr;
+ return (estimatedSinr);
+ }
+}
+
+void
+PssFfMacScheduler::DoSchedUlTriggerReq (const struct FfMacSchedSapProvider::SchedUlTriggerReqParameters& params)
+{
+ NS_LOG_FUNCTION (this << " UL - Frame no. " << (params.m_sfnSf >> 4) << " subframe no. " << (0xF & params.m_sfnSf));
+
+ RefreshUlCqiMaps ();
+
+ std::map <uint16_t,uint32_t>::iterator it;
+ int nflows = 0;
+
+ for (it = m_ceBsrRxed.begin (); it != m_ceBsrRxed.end (); it++)
+ {
+ // count the active flows (UEs with a non-empty BSR)
+ if ((*it).second > 0)
+ {
+ nflows++;
+ }
+ }
+
+ if (nflows == 0)
+ {
+ return ; // no flows to be scheduled
+ }
+
+
+ // Divide the resource equally among the active users
+ int rbPerFlow = m_cschedCellConfig.m_ulBandwidth / nflows;
+ if (rbPerFlow == 0)
+ {
+ rbPerFlow = 1; // at least 1 RB per flow (until resources run out)
+ }
+ int rbAllocated = 0;
+
+ FfMacSchedSapUser::SchedUlConfigIndParameters ret;
+ std::vector <uint16_t> rbgAllocationMap;
+ std::map <uint16_t, pssFlowPerf_t>::iterator itStats;
+ if (m_nextRntiUl != 0)
+ {
+ for (it = m_ceBsrRxed.begin (); it != m_ceBsrRxed.end (); it++)
+ {
+ if ((*it).first == m_nextRntiUl)
+ {
+ break;
+ }
+ }
+ if (it == m_ceBsrRxed.end ())
+ {
+ NS_LOG_ERROR (this << " no user found");
+ }
+ }
+ else
+ {
+ it = m_ceBsrRxed.begin ();
+ m_nextRntiUl = (*it).first;
+ }
+ do
+ {
+ if (rbAllocated + rbPerFlow > m_cschedCellConfig.m_ulBandwidth)
+ {
+ // limit the last resource assignment to the available physical resources
+ rbPerFlow = m_cschedCellConfig.m_ulBandwidth - rbAllocated;
+ }
+
+ UlDciListElement_s uldci;
+ uldci.m_rnti = (*it).first;
+ uldci.m_rbStart = rbAllocated;
+ uldci.m_rbLen = rbPerFlow;
+ std::map <uint16_t, std::vector <double> >::iterator itCqi = m_ueCqi.find ((*it).first);
+ int cqi = 0;
+ if (itCqi == m_ueCqi.end ())
+ {
+ // no cqi info about this UE
+ uldci.m_mcs = 0; // MCS 0 -> UL-AMC TBD
+// NS_LOG_DEBUG (this << " UE does not have ULCQI " << (*it).first );
+ }
+ else
+ {
+ // take the lowest CQI value (worst RB)
+ double minSinr = (*itCqi).second.at (uldci.m_rbStart);
+ if (minSinr == NO_SINR)
+ {
+ minSinr = EstimateUlSinr ((*it).first, uldci.m_rbStart);
+ }
+ for (uint16_t i = uldci.m_rbStart; i < uldci.m_rbStart + uldci.m_rbLen; i++)
+ {
+// NS_LOG_DEBUG (this << " UE " << (*it).first << " has SINR " << (*itCqi).second.at(i));
+ double sinr = (*itCqi).second.at (i);
+ if (sinr == NO_SINR)
+ {
+ sinr = EstimateUlSinr ((*it).first, i);
+ }
+ if ((*itCqi).second.at (i) < minSinr)
+ {
+ minSinr = (*itCqi).second.at (i);
+ }
+ }
+
+ // translate SINR -> CQI: WILD HACK: same mapping as DL
+ double s = log2 ( 1 + (
+ pow (10, minSinr / 10 ) /
+ ( (-log (5.0 * 0.00005 )) / 1.5) ));
+ cqi = m_amc->GetCqiFromSpectralEfficiency (s);
+ if (cqi == 0)
+ {
+ it++;
+ if (it == m_ceBsrRxed.end ())
+ {
+ // restart from the first
+ it = m_ceBsrRxed.begin ();
+ }
+ continue; // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
+ }
+ uldci.m_mcs = m_amc->GetMcsFromCqi (cqi);
+// NS_LOG_DEBUG (this << " UE " << (*it).first << " minsinr " << minSinr << " -> mcs " << (uint16_t)uldci.m_mcs);
+
+ }
+
+ rbAllocated += rbPerFlow;
+ // store info on allocation for managing ul-cqi interpretation
+ for (int i = 0; i < rbPerFlow; i++)
+ {
+ rbgAllocationMap.push_back ((*it).first);
+ }
+ uldci.m_tbSize = (m_amc->GetTbSizeFromMcs (uldci.m_mcs, rbPerFlow) / 8);
+// NS_LOG_DEBUG (this << " UE " << (*it).first << " startPRB " << (uint32_t)uldci.m_rbStart << " nPRB " << (uint32_t)uldci.m_rbLen << " CQI " << cqi << " MCS " << (uint32_t)uldci.m_mcs << " TBsize " << uldci.m_tbSize << " RbAlloc " << rbAllocated);
+ UpdateUlRlcBufferInfo (uldci.m_rnti, uldci.m_tbSize);
+ uldci.m_ndi = 1;
+ uldci.m_cceIndex = 0;
+ uldci.m_aggrLevel = 1;
+ uldci.m_ueTxAntennaSelection = 3; // antenna selection OFF
+ uldci.m_hopping = false;
+ uldci.m_n2Dmrs = 0;
+ uldci.m_tpc = 0; // no power control
+ uldci.m_cqiRequest = false; // only periodic CQI at this stage
+ uldci.m_ulIndex = 0; // TDD parameter
+ uldci.m_dai = 1; // TDD parameter
+ uldci.m_freqHopping = 0;
+ uldci.m_pdcchPowerOffset = 0; // not used
+ ret.m_dciList.push_back (uldci);
+
+ // update TTI UE stats
+ itStats = m_flowStatsUl.find ((*it).first);
+ if (itStats != m_flowStatsUl.end ())
+ {
+ (*itStats).second.lastTtiBytesTrasmitted = uldci.m_tbSize;
+// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
+
+
+ }
+ else
+ {
+ NS_LOG_DEBUG (this << " No Stats for this allocated UE");
+ }
+
+
+ it++;
+ if (it == m_ceBsrRxed.end ())
+ {
+ // restart from the first
+ it = m_ceBsrRxed.begin ();
+ }
+ if (rbAllocated == m_cschedCellConfig.m_ulBandwidth)
+ {
+ // Stop allocation: no more PRBs
+ m_nextRntiUl = (*it).first;
+ break;
+ }
+ }
+ while ((*it).first != m_nextRntiUl);
+
+
+ // Update global UE stats
+ for (itStats = m_flowStatsUl.begin (); itStats != m_flowStatsUl.end (); itStats++)
+ {
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ // update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE - The UMTS Long Term Evolution, Wiley)
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ // NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
+ // NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
+ (*itStats).second.lastTtiBytesTrasmitted = 0;
+ }
+ m_allocationMaps.insert (std::pair <uint16_t, std::vector <uint16_t> > (params.m_sfnSf, rbgAllocationMap));
+ m_schedSapUser->SchedUlConfigInd (ret);
+ return;
+}
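The uplink MCS selection above converts the worst-RB SINR (in dB) into a spectral efficiency with a Shannon formula corrected by a BER-targeted SNR gap, s = log2(1 + SINR_lin / Γ) with Γ = (-ln(5 · 0.00005)) / 1.5, before asking the AMC for a CQI. The mapping in isolation, as a sketch (`GetCqiFromSpectralEfficiency` belongs to `LteAmc` and is not reproduced here):

```cpp
#include <cmath>

// Spectral efficiency from an SINR in dB, using the BER-targeted
// SNR gap as in DoSchedUlTriggerReq (target BER = 0.00005).
double
SinrDbToSpectralEfficiency (double sinrDb)
{
  double sinrLin = std::pow (10.0, sinrDb / 10.0);
  double gamma = (-std::log (5.0 * 0.00005)) / 1.5; // SNR gap
  return std::log2 (1.0 + sinrLin / gamma);
}
```

Higher SINR monotonically maps to higher spectral efficiency, so the worst RB in the grant bounds the MCS for the whole allocation.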
+
+void
+PssFfMacScheduler::DoSchedUlNoiseInterferenceReq (const struct FfMacSchedSapProvider::SchedUlNoiseInterferenceReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // TODO: Implementation of the API
+ return;
+}
+
+void
+PssFfMacScheduler::DoSchedUlSrInfoReq (const struct FfMacSchedSapProvider::SchedUlSrInfoReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+ // TODO: Implementation of the API
+ return;
+}
+
+void
+PssFfMacScheduler::DoSchedUlMacCtrlInfoReq (const struct FfMacSchedSapProvider::SchedUlMacCtrlInfoReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+
+ std::map <uint16_t,uint32_t>::iterator it;
+
+ for (unsigned int i = 0; i < params.m_macCeList.size (); i++)
+ {
+ if ( params.m_macCeList.at (i).m_macCeType == MacCeListElement_s::BSR )
+ {
+ // buffer status report
+ uint16_t rnti = params.m_macCeList.at (i).m_rnti;
+ it = m_ceBsrRxed.find (rnti);
+ if (it == m_ceBsrRxed.end ())
+ {
+ // create the new entry
+ uint8_t bsrId = params.m_macCeList.at (i).m_macCeValue.m_bufferStatus.at (0);
+ int buffer = BufferSizeLevelBsr::BsrId2BufferSize (bsrId);
+ m_ceBsrRxed.insert ( std::pair<uint16_t, uint32_t > (rnti, buffer)); // only 1 buffer status is working now
+ }
+ else
+ {
+ // update the BSR value
+ (*it).second = BufferSizeLevelBsr::BsrId2BufferSize (params.m_macCeList.at (i).m_macCeValue.m_bufferStatus.at (0));
+ }
+ }
+ }
+
+ return;
+}
+
+void
+PssFfMacScheduler::DoSchedUlCqiInfoReq (const struct FfMacSchedSapProvider::SchedUlCqiInfoReqParameters& params)
+{
+ NS_LOG_FUNCTION (this);
+// NS_LOG_DEBUG (this << " RX SFNID " << params.m_sfnSf);
+ // retrieve the allocation for this subframe
+ std::map <uint16_t, std::vector <uint16_t> >::iterator itMap;
+ std::map <uint16_t, std::vector <double> >::iterator itCqi;
+ itMap = m_allocationMaps.find (params.m_sfnSf);
+ if (itMap == m_allocationMaps.end ())
+ {
+ //NS_LOG_DEBUG (this << " Does not find info on allocation, size : " << m_allocationMaps.size ());
+ return;
+ }
+ for (uint32_t i = 0; i < (*itMap).second.size (); i++)
+ {
+ // convert from fixed point notation Sxxxxxxxxxxx.xxx to double
+// NS_LOG_INFO (this << " i " << i << " size " << params.m_ulCqi.m_sinr.size () << " mapSIze " << (*itMap).second.size ());
+ double sinr = LteFfConverter::fpS11dot3toDouble (params.m_ulCqi.m_sinr.at (i));
+ //NS_LOG_DEBUG (this << " UE " << (*itMap).second.at (i) << " SINRfp " << params.m_ulCqi.m_sinr.at (i) << " sinrdb " << sinr);
+ itCqi = m_ueCqi.find ((*itMap).second.at (i));
+ if (itCqi == m_ueCqi.end ())
+ {
+ // create a new entry
+ std::vector <double> newCqi;
+ for (uint32_t j = 0; j < m_cschedCellConfig.m_ulBandwidth; j++)
+ {
+ if (i == j)
+ {
+ newCqi.push_back (sinr);
+ }
+ else
+ {
+ // initialize with NO_SINR value.
+ newCqi.push_back (NO_SINR);
+ }
+
+ }
+ m_ueCqi.insert (std::pair <uint16_t, std::vector <double> > ((*itMap).second.at (i), newCqi));
+ // generate the corresponding timer
+ m_ueCqiTimers.insert (std::pair <uint16_t, uint32_t > ((*itMap).second.at (i), m_cqiTimersThreshold));
+ }
+ else
+ {
+ // update the value
+ (*itCqi).second.at (i) = sinr;
+ // update the corresponding timer
+ std::map <uint16_t, uint32_t>::iterator itTimers;
+ itTimers = m_ueCqiTimers.find ((*itMap).second.at (i));
+ (*itTimers).second = m_cqiTimersThreshold;
+
+ }
+
+ }
+ // remove obsolete info on allocation
+ m_allocationMaps.erase (itMap);
+
+ return;
+}
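`DoSchedUlCqiInfoReq` converts each SINR sample from the FF-API fixed-point format Sxxxxxxxxxxx.xxx (16-bit signed, 3 fractional bits) to double via `LteFfConverter::fpS11dot3toDouble`. A sketch of what that conversion amounts to (the real helper lives in `lte-common` and may handle saturation values differently):

```cpp
#include <cstdint>

// Interpret a 16-bit raw value as S11.3 signed fixed point: the three
// least significant bits are fractional, so dividing by 2^3 = 8
// recovers the real value.
double
FpS11dot3ToDouble (uint16_t raw)
{
  int16_t s = static_cast<int16_t> (raw); // reinterpret as signed
  return s / 8.0;
}
```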
+
+void
+PssFfMacScheduler::RefreshDlCqiMaps(void)
+{
+ // refresh DL CQI P10 Map
+ std::map <uint16_t,uint32_t>::iterator itP10 = m_p10CqiTimers.begin ();
+ while (itP10!=m_p10CqiTimers.end ())
+ {
+// NS_LOG_INFO (this << " P10-CQI for user " << (*itP10).first << " is " << (uint32_t)(*itP10).second << " thr " << (uint32_t)m_cqiTimersThreshold);
+ if ((*itP10).second == 0)
+ {
+ // delete the corresponding entries
+ std::map <uint16_t,uint8_t>::iterator itMap = m_p10CqiRxed.find ((*itP10).first);
+ NS_ASSERT_MSG (itMap != m_p10CqiRxed.end (), " Does not find CQI report for user " << (*itP10).first);
+ NS_LOG_INFO (this << " P10-CQI expired for user " << (*itP10).first);
+ m_p10CqiRxed.erase (itMap);
+ std::map <uint16_t,uint32_t>::iterator temp = itP10;
+ itP10++;
+ m_p10CqiTimers.erase (temp);
+ }
+ else
+ {
+ (*itP10).second--;
+ itP10++;
+ }
+ }
+
+ return;
+}
+
+
+void
+PssFfMacScheduler::RefreshUlCqiMaps(void)
+{
+ // refresh UL CQI Map
+ std::map <uint16_t,uint32_t>::iterator itUl = m_ueCqiTimers.begin ();
+ while (itUl!=m_ueCqiTimers.end ())
+ {
+// NS_LOG_INFO (this << " UL-CQI for user " << (*itUl).first << " is " << (uint32_t)(*itUl).second << " thr " << (uint32_t)m_cqiTimersThreshold);
+ if ((*itUl).second == 0)
+ {
+ // delete the corresponding entries
+ std::map <uint16_t, std::vector <double> >::iterator itMap = m_ueCqi.find ((*itUl).first);
+ NS_ASSERT_MSG (itMap != m_ueCqi.end (), " Does not find CQI report for user " << (*itUl).first);
+ NS_LOG_INFO (this << " UL-CQI expired for user " << (*itUl).first);
+ (*itMap).second.clear ();
+ m_ueCqi.erase (itMap);
+ std::map <uint16_t,uint32_t>::iterator temp = itUl;
+ itUl++;
+ m_ueCqiTimers.erase (temp);
+ }
+ else
+ {
+ (*itUl).second--;
+ itUl++;
+ }
+ }
+
+ return;
+}
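Both refresh routines above use the pre-C++11 idiom for erasing map entries while walking the map: copy the iterator, advance the original, then erase through the copy, so the loop iterator never dangles. The idiom in isolation, as a sketch on a plain timer map:

```cpp
#include <cstdint>
#include <map>

// Decrement every timer; erase entries whose timer has reached zero.
// Copy-advance-erase keeps the loop iterator valid across the erase.
void
DecrementAndExpire (std::map<uint16_t, uint32_t>& timers)
{
  std::map<uint16_t, uint32_t>::iterator it = timers.begin ();
  while (it != timers.end ())
    {
      if ((*it).second == 0)
        {
          std::map<uint16_t, uint32_t>::iterator temp = it;
          it++;
          timers.erase (temp); // temp stays valid; it already moved on
        }
      else
        {
          (*it).second--;
          it++;
        }
    }
}
```

Erasing through `it` directly and then incrementing it would be undefined behavior, which is why the copy is taken first.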
+
+void
+PssFfMacScheduler::UpdateDlRlcBufferInfo (uint16_t rnti, uint8_t lcid, uint16_t size)
+{
+ std::map<LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters>::iterator it;
+ LteFlowId_t flow (rnti, lcid);
+ it = m_rlcBufferReq.find (flow);
+ if (it!=m_rlcBufferReq.end ())
+ {
+// NS_LOG_DEBUG (this << " UE " << rnti << " LC " << (uint16_t)lcid << " txqueue " << (*it).second.m_rlcTransmissionQueueSize << " retxqueue " << (*it).second.m_rlcRetransmissionQueueSize << " status " << (*it).second.m_rlcStatusPduSize << " decrease " << size);
+ // Update queues: RLC tx order Status, ReTx, Tx
+ // Update status queue
+ if ((*it).second.m_rlcStatusPduSize <= size)
+ {
+ size -= (*it).second.m_rlcStatusPduSize;
+ (*it).second.m_rlcStatusPduSize = 0;
+ }
+ else
+ {
+ (*it).second.m_rlcStatusPduSize -= size;
+ return;
+ }
+ // update retransmission queue
+ if ((*it).second.m_rlcRetransmissionQueueSize <= size)
+ {
+ size -= (*it).second.m_rlcRetransmissionQueueSize;
+ (*it).second.m_rlcRetransmissionQueueSize = 0;
+ }
+ else
+ {
+ (*it).second.m_rlcRetransmissionQueueSize -= size;
+ return;
+ }
+ // update transmission queue
+ if ((*it).second.m_rlcTransmissionQueueSize <= size)
+ {
+ size -= (*it).second.m_rlcTransmissionQueueSize;
+ (*it).second.m_rlcTransmissionQueueSize = 0;
+ }
+ else
+ {
+ (*it).second.m_rlcTransmissionQueueSize -= size;
+ return;
+ }
+ }
+ else
+ {
+ NS_LOG_ERROR (this << " Does not find DL RLC Buffer Report of UE " << rnti);
+ }
+}
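`UpdateDlRlcBufferInfo` drains the three RLC queues in the transmission order Status → ReTx → Tx, stopping as soon as the granted bytes are exhausted. The same logic on a plain struct, as a sketch (the struct and function names are illustrative):

```cpp
#include <cstdint>

struct RlcQueues
{
  uint32_t statusPdu;  // RLC status PDU bytes
  uint32_t retxQueue;  // retransmission queue bytes
  uint32_t txQueue;    // transmission queue bytes
};

// Consume 'size' granted bytes in the order Status -> ReTx -> Tx; a queue
// that cannot be fully drained absorbs the remainder and the drain stops.
void
DrainRlcQueues (RlcQueues& q, uint32_t size)
{
  uint32_t* queues[3] = { &q.statusPdu, &q.retxQueue, &q.txQueue };
  for (int i = 0; i < 3; i++)
    {
      if (*queues[i] <= size)
        {
          size -= *queues[i];
          *queues[i] = 0;
        }
      else
        {
          *queues[i] -= size;
          return;
        }
    }
}
```

For example, a 5-byte grant against queues of 2/10/100 bytes empties the status queue, leaves 7 bytes of retransmissions, and never touches the transmission queue.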
+
+void
+PssFfMacScheduler::UpdateUlRlcBufferInfo (uint16_t rnti, uint16_t size)
+{
+
+
+ std::map <uint16_t,uint32_t>::iterator it = m_ceBsrRxed.find (rnti);
+ if (it!=m_ceBsrRxed.end ())
+ {
+// NS_LOG_DEBUG (this << " UE " << rnti << " size " << size << " BSR " << (*it).second);
+ if ((*it).second >= size)
+ {
+ (*it).second -= size;
+ }
+ else
+ {
+ (*it).second = 0;
+ }
+ }
+ else
+ {
+ NS_LOG_ERROR (this << " Does not find BSR report info of UE " << rnti);
+ }
+
+}
+
+void
+PssFfMacScheduler::TransmissionModeConfigurationUpdate (uint16_t rnti, uint8_t txMode)
+{
+ NS_LOG_FUNCTION (this << " RNTI " << rnti << " txMode " << (uint16_t)txMode);
+ FfMacCschedSapUser::CschedUeConfigUpdateIndParameters params;
+ params.m_rnti = rnti;
+ params.m_transmissionMode = txMode;
+ m_cschedSapUser->CschedUeConfigUpdateInd (params);
+}
+
+}
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/src/lte/model/pss-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
@@ -0,0 +1,228 @@
+/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
+/*
+ * Copyright (c) 2011 Centre Tecnologic de Telecomunicacions de Catalunya (CTTC)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation;
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: Marco Miozzo <marco.miozzo@cttc.es>
+ * Dizhi Zhou <dizhi.zhou@gmail.com>
+ */
+
+#ifndef PSS_FF_MAC_SCHEDULER_H
+#define PSS_FF_MAC_SCHEDULER_H
+
+#include <ns3/lte-common.h>
+#include <ns3/ff-mac-csched-sap.h>
+#include <ns3/ff-mac-sched-sap.h>
+#include <ns3/ff-mac-scheduler.h>
+#include <vector>
+#include <map>
+#include <algorithm>
+#include <ns3/nstime.h>
+#include <ns3/lte-amc.h>
+
+
+// value for SINR outside the range defined by FF-API, used to indicate that there
+// is no CQI for this element
+#define NO_SINR -5000
+
+namespace ns3 {
+
+
+struct pssFlowPerf_t
+{
+ Time flowStart;
+ unsigned long totalBytesTransmitted;
+ unsigned int lastTtiBytesTrasmitted;
+ double lastAveragedThroughput;
+ double secondLastAveragedThroughput;
+ double targetThroughput;
+};
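The `lastAveragedThroughput` and `secondLastAveragedThroughput` fields above are exponentially weighted moving averages of the per-TTI throughput, updated every TTI as T(t+1) = (1 - 1/W) · T(t) + (1/W) · r(t), where W is the averaging window in TTIs and r(t) is the byte rate achieved in the last 1 ms TTI (see the stats-update loops in `pss-ff-mac-scheduler.cc`). One update step as a standalone sketch:

```cpp
// One EWMA step over a window of 'timeWindow' TTIs: bytes sent in the
// last 1 ms TTI are converted to byte/s and blended into the average.
double
UpdateAveragedThroughput (double lastAvg, unsigned int lastTtiBytes,
                          double timeWindow)
{
  double lastTtiRate = lastTtiBytes / 0.001; // a TTI lasts 1 ms -> byte/s
  return ((1.0 - (1.0 / timeWindow)) * lastAvg)
         + ((1.0 / timeWindow) * lastTtiRate);
}
```

With a window of 100 TTIs, 100 bytes in one TTI (100000 byte/s instantaneous) moves a zero average up to about 1000 byte/s.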
+
+/**
+ * \ingroup ff-api
+ * \defgroup FF-API PssFfMacScheduler
+ */
+/**
+ * \ingroup PssFfMacScheduler
+ * \brief Implements the SCHED SAP and CSCHED SAP for the Priority Set Scheduler (PSS)
+ *
+ * This class implements the interface defined by the FfMacScheduler abstract class
+ */
+
+class PssFfMacScheduler : public FfMacScheduler
+{
+public:
+ /**
+ * \brief Constructor
+ *
+ * Creates the MAC Scheduler interface implementation
+ */
+ PssFfMacScheduler ();
+
+ /**
+ * Destructor
+ */
+ virtual ~PssFfMacScheduler ();
+
+ // inherited from Object
+ virtual void DoDispose (void);
+ static TypeId GetTypeId (void);
+
+ // inherited from FfMacScheduler
+ virtual void SetFfMacCschedSapUser (FfMacCschedSapUser* s);
+ virtual void SetFfMacSchedSapUser (FfMacSchedSapUser* s);
+ virtual FfMacCschedSapProvider* GetFfMacCschedSapProvider ();
+ virtual FfMacSchedSapProvider* GetFfMacSchedSapProvider ();
+
+ friend class PssSchedulerMemberCschedSapProvider;
+ friend class PssSchedulerMemberSchedSapProvider;
+
+ void TransmissionModeConfigurationUpdate (uint16_t rnti, uint8_t txMode);
+
+private:
+ //
+ // Implementation of the CSCHED API primitives
+ // (See 4.1 for description of the primitives)
+ //
+
+ void DoCschedCellConfigReq (const struct FfMacCschedSapProvider::CschedCellConfigReqParameters& params);
+
+ void DoCschedUeConfigReq (const struct FfMacCschedSapProvider::CschedUeConfigReqParameters& params);
+
+ void DoCschedLcConfigReq (const struct FfMacCschedSapProvider::CschedLcConfigReqParameters& params);
+
+ void DoCschedLcReleaseReq (const struct FfMacCschedSapProvider::CschedLcReleaseReqParameters& params);
+
+ void DoCschedUeReleaseReq (const struct FfMacCschedSapProvider::CschedUeReleaseReqParameters& params);
+
+ //
+ // Implementation of the SCHED API primitives
+ // (See 4.2 for description of the primitives)
+ //
+
+ void DoSchedDlRlcBufferReq (const struct FfMacSchedSapProvider::SchedDlRlcBufferReqParameters& params);
+
+ void DoSchedDlPagingBufferReq (const struct FfMacSchedSapProvider::SchedDlPagingBufferReqParameters& params);
+
+ void DoSchedDlMacBufferReq (const struct FfMacSchedSapProvider::SchedDlMacBufferReqParameters& params);
+
+ void DoSchedDlTriggerReq (const struct FfMacSchedSapProvider::SchedDlTriggerReqParameters& params);
+
+ void DoSchedDlRachInfoReq (const struct FfMacSchedSapProvider::SchedDlRachInfoReqParameters& params);
+
+ void DoSchedDlCqiInfoReq (const struct FfMacSchedSapProvider::SchedDlCqiInfoReqParameters& params);
+
+ void DoSchedUlTriggerReq (const struct FfMacSchedSapProvider::SchedUlTriggerReqParameters& params);
+
+ void DoSchedUlNoiseInterferenceReq (const struct FfMacSchedSapProvider::SchedUlNoiseInterferenceReqParameters& params);
+
+ void DoSchedUlSrInfoReq (const struct FfMacSchedSapProvider::SchedUlSrInfoReqParameters& params);
+
+ void DoSchedUlMacCtrlInfoReq (const struct FfMacSchedSapProvider::SchedUlMacCtrlInfoReqParameters& params);
+
+ void DoSchedUlCqiInfoReq (const struct FfMacSchedSapProvider::SchedUlCqiInfoReqParameters& params);
+
+
+ int GetRbgSize (int dlbandwidth);
+
+ int LcActivePerFlow (uint16_t rnti);
+
+ double EstimateUlSinr (uint16_t rnti, uint16_t rb);
+
+ void RefreshDlCqiMaps(void);
+ void RefreshUlCqiMaps(void);
+
+ void UpdateDlRlcBufferInfo (uint16_t rnti, uint8_t lcid, uint16_t size);
+ void UpdateUlRlcBufferInfo (uint16_t rnti, uint16_t size);
+ Ptr<LteAmc> m_amc;
+
+ /*
+ * Map of UE's LC info
+ */
+ std::map <LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters> m_rlcBufferReq;
+
+
+ /*
+ * Map of UE statistics (per RNTI basis) in downlink
+ */
+ std::map <uint16_t, pssFlowPerf_t> m_flowStatsDl;
+
+ /*
+ * Map of UE statistics (per RNTI basis)
+ */
+ std::map <uint16_t, pssFlowPerf_t> m_flowStatsUl;
+
+
+ /*
+ * Map of UE's DL CQI P01 received
+ */
+ std::map <uint16_t,uint8_t> m_p10CqiRxed;
+ /*
+ * Map of UE's timers on DL CQI P01 received
+ */
+ std::map <uint16_t,uint32_t> m_p10CqiTimers;
+
+ /*
+ * Map of UE's DL CQI A30 received
+ */
+ std::map <uint16_t,SbMeasResult_s> m_a30CqiRxed;
+ /*
+ * Map of UE's timers on DL CQI A30 received
+ */
+ std::map <uint16_t,uint32_t> m_a30CqiTimers;
+
+ /*
+ * Map of previous allocated UE per RBG
+ * (used to retrieve info from UL-CQI)
+ */
+ std::map <uint16_t, std::vector <uint16_t> > m_allocationMaps;
+
+ /*
+ * Map of UEs' UL-CQI per RBG
+ */
+ std::map <uint16_t, std::vector <double> > m_ueCqi;
+ /*
+ * Map of UEs' timers on UL-CQI per RBG
+ */
+ std::map <uint16_t, uint32_t> m_ueCqiTimers;
+
+ /*
+ * Map of UE's buffer status reports received
+ */
+ std::map <uint16_t,uint32_t> m_ceBsrRxed;
+
+ // MAC SAPs
+ FfMacCschedSapUser* m_cschedSapUser;
+ FfMacSchedSapUser* m_schedSapUser;
+ FfMacCschedSapProvider* m_cschedSapProvider;
+ FfMacSchedSapProvider* m_schedSapProvider;
+
+
+ // Internal parameters
+ FfMacCschedSapProvider::CschedCellConfigReqParameters m_cschedCellConfig;
+
+ double m_timeWindow;
+
+ uint16_t m_nextRntiUl; // RNTI of the next user to be served in the next UL scheduling round
+
+ uint32_t m_cqiTimersThreshold; // # of TTIs for which a CQI can be considered valid
+
+ std::map <uint16_t,uint8_t> m_uesTxMode; // txMode of the UEs
+
+};
+
+} // namespace ns3
+
+#endif /* PSS_FF_MAC_SCHEDULER_H */
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/src/lte/test/lte-test-pss-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
@@ -0,0 +1,783 @@
+/* -*- Mode: C++; c-file-style: "gnu"; indent-tabs-mode:nil; -*- */
+/*
+ * Copyright (c) 2011 Centre Tecnologic de Telecomunicacions de Catalunya (CTTC)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation;
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: Marco Miozzo <marco.miozzo@cttc.es>,
+ * Nicola Baldo <nbaldo@cttc.es>
+ * Dizhi Zhou <dizhi.zhou@gmail.com>
+ */
+
+#include <iostream>
+#include <sstream>
+#include <string>
+
+#include <ns3/object.h>
+#include <ns3/spectrum-interference.h>
+#include <ns3/spectrum-error-model.h>
+#include <ns3/log.h>
+#include <ns3/test.h>
+#include <ns3/simulator.h>
+#include <ns3/packet.h>
+#include <ns3/ptr.h>
+#include "ns3/radio-bearer-stats-calculator.h"
+#include <ns3/constant-position-mobility-model.h>
+#include <ns3/eps-bearer.h>
+#include <ns3/node-container.h>
+#include <ns3/mobility-helper.h>
+#include <ns3/net-device-container.h>
+#include <ns3/lte-ue-net-device.h>
+#include <ns3/lte-enb-net-device.h>
+#include <ns3/lte-ue-rrc.h>
+#include <ns3/lte-helper.h>
+#include "ns3/string.h"
+#include "ns3/double.h"
+#include <ns3/lte-enb-phy.h>
+#include <ns3/lte-ue-phy.h>
+#include <ns3/boolean.h>
+#include <ns3/enum.h>
+
+#include "ns3/epc-helper.h"
+#include "ns3/network-module.h"
+#include "ns3/ipv4-global-routing-helper.h"
+#include "ns3/internet-module.h"
+#include "ns3/applications-module.h"
+#include "ns3/point-to-point-helper.h"
+
+#include "lte-test-pss-ff-mac-scheduler.h"
+
+NS_LOG_COMPONENT_DEFINE ("LenaTestPssFfMacCheduler");
+
+namespace ns3 {
+
+LenaTestPssFfMacSchedulerSuite::LenaTestPssFfMacSchedulerSuite ()
+ : TestSuite ("lte-pss-ff-mac-scheduler", SYSTEM)
+{
+ NS_LOG_INFO ("creating LenaTestPssFfMacSchedulerSuite");
+
+ // General config
+ // Traffic: UDP traffic with fixed rate
+ // Token generation rate = traffic rate
+ // RLC header length = 2 bytes, PDCP header = 2 bytes
+ // Simulation time = 1.0 sec
+ // Throughput in this file is calculated in RLC layer
+
+ //Test Case 1: homogeneous flow test in PSS (same distance)
+
+ // DOWNLINK -> DISTANCE 0 -> MCS 28 -> Itbs 26 (from table 7.1.7.2.1-1 of 36.213)
+ // Traffic info
+ // UDP traffic: payload size = 200 bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
+ // Total bandwidth: 24 PRB at Itbs 26 -> 2196 -> 2196000 byte/sec
+ // 1 user -> 232000 * 1 = 232000 < 2196000 -> throughput = 232000 byte/sec
+ // 3 users -> 232000 * 3 = 696000 < 2196000 -> throughput = 232000 byte/sec
+ // 6 users -> 232000 * 6 = 1392000 < 2196000 -> throughput = 232000 byte/sec
+ // 12 users -> 232000 * 12 = 2784000 > 2196000 -> throughput = 2196000 / 12 = 183000 byte/sec
+ // 15 users -> 232000 * 15 = 3480000 > 2196000 -> throughput = 2196000 / 15 = 146400 byte/sec
+ // UPLINK -> DISTANCE 0 -> MCS 28 -> Itbs 26 (from table 7.1.7.2.1-1 of 36.213)
+ // 1 user -> 25 PRB at Itbs 26 -> 2292 -> 2292000 > 232000 -> throughput = 232000 bytes/sec
+ // 3 users -> 8 PRB at Itbs 26 -> 749 -> 749000 > 232000 -> throughput = 232000 bytes/sec
+ // 6 users -> 4 PRB at Itbs 26 -> 373 -> 373000 > 232000 -> throughput = 232000 bytes/sec
+ // 12 users -> 2 PRB at Itbs 26 -> 185 -> 185000 < 232000 -> throughput = 185000 bytes/sec
+ // 15 users -> 1 PRB at Itbs 26 -> 89 -> 89000 bytes/sec
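+ // Note: the reference values above (and in the blocks below) follow this
+ // rule of thumb -- a sketch of our reading of the numbers, not code taken
+ // from the scheduler itself:
+ //   udpRate  = (payload + 32 header bytes) * 1000   [byte/sec]
+ //   cellRate = TBS (Itbs, nPrb) * 1000              [byte/sec]
+ //   thrRef   = min (udpRate, cellRate / nUser)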
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (1,0,0,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (3,0,0,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (6,0,0,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (12,0,0,183000,185000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (15,0,0,146400,0,200,1));
+
+ // DOWNLINK - DISTANCE 3000 -> MCS 24 -> Itbs 20 (from table 7.1.7.2.1-1 of 36.213)
+ // Traffic info
+ // UDP traffic: payload size = 200 bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
+ // Total bandwidth: 24 PRB at Itbs 20 -> 1383 -> 1383000 byte/sec
+ // 1 user -> 232000 * 1 = 232000 < 1383000 -> throughput = 232000 byte/sec
+ // 3 users -> 232000 * 3 = 696000 < 1383000 -> throughput = 232000 byte/sec
+ // 6 users -> 232000 * 6 = 1392000 > 1383000 -> throughput = 1383000 / 6 = 230500 byte/sec
+ // 12 users -> 232000 * 12 = 2784000 > 1383000 -> throughput = 1383000 / 12 = 115250 byte/sec
+ // 15 users -> 232000 * 15 = 3480000 > 1383000 -> throughput = 1383000 / 15 = 92200 byte/sec
+ // UPLINK - DISTANCE 3000 -> MCS 20 -> Itbs 18 (from table 7.1.7.2.1-1 of 36.213)
+ // 1 user -> 25 PRB at Itbs 18 -> 1239 -> 1239000 > 232000 -> throughput = 232000 bytes/sec
+ // 3 users -> 8 PRB at Itbs 18 -> 389 -> 389000 > 232000 -> throughput = 232000 bytes/sec
+ // 6 users -> 4 PRB at Itbs 18 -> 193 -> 193000 < 232000 -> throughput = 193000 bytes/sec
+ // 12 users -> 2 PRB at Itbs 18 -> 97 -> 97000 < 232000 -> throughput = 97000 bytes/sec
+ // 15 users -> 1 PRB at Itbs 18 -> 47 -> 47000 bytes/sec
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (1,0,3000,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (3,0,3000,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (6,0,3000,230500,193000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (12,0,3000,115250,97000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (15,0,3000,92200,0,200,1));
+
+ // DOWNLINK - DISTANCE 6000 -> MCS 16 -> Itbs 15 (from table 7.1.7.2.1-1 of 36.213)
+ // Traffic info
+ // UDP traffic: payload size = 200 bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
+ // Total bandwidth: 24 PRB at Itbs 15 -> 903 -> 903000 byte/sec
+ // 1 user -> 232000 * 1 = 232000 < 903000 -> throughput = 232000 byte/sec
+ // 3 users -> 232000 * 3 = 696000 < 903000 -> throughput = 232000 byte/sec
+ // 6 users -> 232000 * 6 = 1392000 > 903000 -> throughput = 903000 / 6 = 150500 byte/sec
+ // 12 users -> 232000 * 12 = 2784000 > 903000 -> throughput = 903000 / 12 = 75250 byte/sec
+ // 15 users -> 232000 * 15 = 3480000 > 903000 -> throughput = 903000 / 15 = 60200 byte/sec
+ // UPLINK - DISTANCE 6000 -> MCS 12 -> Itbs 11 (from table 7.1.7.2.1-1 of 36.213)
+ // 1 user -> 25 PRB at Itbs 11 -> 621 -> 621000 > 232000 -> throughput = 232000 bytes/sec
+ // 3 users -> 8 PRB at Itbs 11 -> 201 -> 201000 < 232000 -> throughput = 201000 bytes/sec
+ // 6 users -> 4 PRB at Itbs 11 -> 97 -> 97000 < 232000 -> throughput = 97000 bytes/sec
+ // 12 users -> 2 PRB at Itbs 11 -> 47 -> 47000 < 232000 -> throughput = 47000 bytes/sec
+ // 15 users -> 1 PRB at Itbs 11 -> 22 -> 22000 bytes/sec
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (1,0,6000,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (3,0,6000,232000,201000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (6,0,6000,150500,97000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (12,0,6000,75250,47000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (15,0,6000,60200,0,200,1));
+
+ // DOWNLINK - DISTANCE 9000 -> MCS 12 -> Itbs 11 (from table 7.1.7.2.1-1 of 36.213)
+ // Traffic info
+ // UDP traffic: payload size = 200 bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
+ // Total bandwidth: 24 PRB at Itbs 11 -> 597 -> 597000 byte/sec
+ // 1 user -> 232000 * 1 = 232000 < 597000 -> throughput = 232000 byte/sec
+ // 3 users -> 232000 * 3 = 696000 > 597000 -> throughput = 597000 / 3 = 199000 byte/sec
+ // 6 users -> 232000 * 6 = 1392000 > 597000 -> throughput = 597000 / 6 = 99500 byte/sec
+ // 12 users -> 232000 * 12 = 2784000 > 597000 -> throughput = 597000 / 12 = 49750 byte/sec
+ // 15 users -> 232000 * 15 = 3480000 > 597000 -> throughput = 597000 / 15 = 39800 byte/sec
+ // UPLINK - DISTANCE 9000 -> MCS 8 -> Itbs 8 (from table 7.1.7.2.1-1 of 36.213)
+ // 1 user -> 24 PRB at Itbs 8 -> 437 -> 437000 > 232000 -> throughput = 232000 bytes/sec
+ // 3 users -> 8 PRB at Itbs 8 -> 137 -> 137000 < 232000 -> throughput = 137000 bytes/sec
+ // 6 users -> 4 PRB at Itbs 8 -> 67 -> 67000 < 232000 -> throughput = 67000 bytes/sec
+ // 12 users -> 2 PRB at Itbs 8 -> 32 -> 32000 < 232000 -> throughput = 32000 bytes/sec
+ // 15 users -> 1 PRB at Itbs 8 -> 15 -> 15000 bytes/sec
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (1,0,9000,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (3,0,9000,199000,137000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (6,0,9000,99500,67000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (12,0,9000,49750,32000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (15,0,9000,39800,0,200,1));
+
+ // DOWNLINK - DISTANCE 15000 -> MCS 6 -> Itbs 6 (from table 7.1.7.2.1-1 of 36.213)
+ // Traffic info
+ // UDP traffic: payload size = 200 bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
+ // Total bandwidth: 24 PRB at Itbs 6 -> 309 -> 309000 byte/sec
+ // 1 user -> 232000 * 1 = 232000 < 309000 -> throughput = 232000 byte/sec
+ // 3 users -> 232000 * 3 = 696000 > 309000 -> throughput = 309000 / 3 = 103000 byte/sec
+ // 6 users -> 232000 * 6 = 1392000 > 309000 -> throughput = 309000 / 6 = 51500 byte/sec
+ // 12 users -> 232000 * 12 = 2784000 > 309000 -> throughput = 309000 / 12 = 25750 byte/sec
+ // 15 users -> 232000 * 15 = 3480000 > 309000 -> throughput = 309000 / 15 = 20600 byte/sec
+ // UPLINK - DISTANCE 15000 -> MCS 6 -> Itbs 6 (from table 7.1.7.2.1-1 of 36.213)
+ // 1 user -> 25 PRB at Itbs 6 -> 233 -> 233000 > 232000 -> throughput = 232000 bytes/sec
+ // 3 users -> 8 PRB at Itbs 6 -> 69 -> 69000 < 232000 -> throughput = 69000 bytes/sec
+ // 6 users -> 4 PRB at Itbs 6 -> 32 -> 32000 < 232000 -> throughput = 32000 bytes/sec
+ // 12 users -> 2 PRB at Itbs 6 -> 15 -> 15000 < 232000 -> throughput = 15000 bytes/sec
+ // 15 users -> 1 PRB at Itbs 6 -> 7 -> 7000 bytes/sec
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (1,0,15000,232000,232000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (3,0,15000,103000,69000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (6,0,15000,51500,32000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (12,0,15000,25750,15000,200,1));
+ AddTestCase (new LenaPssFfMacSchedulerTestCase1 (15,0,15000,20600,0,200,1));
+
+ // Test Case 2: homogeneous flow test in PSS (different distance)
+ // Traffic1 info
+ // UDP traffic: payload size = 100 bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 132000 byte/sec
+ // Maximum throughput = 5 / ( 1/2196000 + 1/1383000 + 1/903000 + 1/597000 + 1/309000) = 694720 byte/s
+ // 132000 * 5 = 660000 < 694720 -> estimated throughput in downlink = 132000 bytes/sec
+ std::vector<uint16_t> dist1;
+ dist1.push_back (0); // User 0 distance --> MCS 28
+ dist1.push_back (3000); // User 1 distance --> MCS 24
+ dist1.push_back (6000); // User 2 distance --> MCS 16
+ dist1.push_back (9000); // User 3 distance --> MCS 12
+ dist1.push_back (15000); // User 4 distance --> MCS 6
+ std::vector<uint16_t> packetSize1;
+ packetSize1.push_back (100);
+ packetSize1.push_back (100);
+ packetSize1.push_back (100);
+ packetSize1.push_back (100);
+ packetSize1.push_back (100);
+ std::vector<uint32_t> estThrPssDl1;
+ estThrPssDl1.push_back (132000);
+ estThrPssDl1.push_back (132000);
+ estThrPssDl1.push_back (132000);
+ estThrPssDl1.push_back (132000);
+ estThrPssDl1.push_back (132000);
+ AddTestCase (new LenaPssFfMacSchedulerTestCase2 (dist1,estThrPssDl1,packetSize1,1));
+
+ // Traffic2 info
+ // UDP traffic: payload size = 200 bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> 232000 byte/sec
+ // Maximum throughput = 5 / ( 1/2196000 + 1/1383000 + 1/903000 + 1/597000 + 1/309000) = 694720 byte/s
+ // 232000 * 5 = 1160000 > 694720 -> estimated throughput in downlink = 694720/5 = 138944 bytes/sec
+ std::vector<uint16_t> dist2;
+ dist2.push_back (0); // User 0 distance --> MCS 28
+ dist2.push_back (3000); // User 1 distance --> MCS 24
+ dist2.push_back (6000); // User 2 distance --> MCS 16
+ dist2.push_back (9000); // User 3 distance --> MCS 12
+ dist2.push_back (15000); // User 4 distance --> MCS 6
+ std::vector<uint16_t> packetSize2;
+ packetSize2.push_back (200);
+ packetSize2.push_back (200);
+ packetSize2.push_back (200);
+ packetSize2.push_back (200);
+ packetSize2.push_back (200);
+ std::vector<uint32_t> estThrPssDl2;
+ estThrPssDl2.push_back (138944);
+ estThrPssDl2.push_back (138944);
+ estThrPssDl2.push_back (138944);
+ estThrPssDl2.push_back (138944);
+ estThrPssDl2.push_back (138944);
+ AddTestCase (new LenaPssFfMacSchedulerTestCase2 (dist2,estThrPssDl2,packetSize2,1));
+
+ // Test Case 3: heterogeneous flow test in PSS (same distance)
+ // Traffic3 info:
+ // UDP traffic: payload size = [100, 200, 300, 400, 500] bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> [132000, 232000, 332000, 432000, 532000] byte/sec
+ // Maximum throughput = 2196000 byte/sec
+ // 132000 + 232000 + 332000 + 432000 + 532000 = 1660000 < 2196000 -> estimated throughput in downlink = [132000, 232000, 332000, 432000, 532000] bytes/sec
+ std::vector<uint16_t> dist3;
+ dist3.push_back (0); // User 0 distance --> MCS 28
+ dist3.push_back (0); // User 1 distance --> MCS 24
+ dist3.push_back (0); // User 2 distance --> MCS 16
+ dist3.push_back (0); // User 3 distance --> MCS 12
+ dist3.push_back (0); // User 4 distance --> MCS 6
+ std::vector<uint16_t> packetSize3;
+ packetSize3.push_back (100);
+ packetSize3.push_back (200);
+ packetSize3.push_back (300);
+ packetSize3.push_back (400);
+ packetSize3.push_back (500);
+ std::vector<uint32_t> estThrPssDl3;
+ estThrPssDl3.push_back (132000);
+ estThrPssDl3.push_back (232000);
+ estThrPssDl3.push_back (332000);
+ estThrPssDl3.push_back (432000);
+ estThrPssDl3.push_back (532000);
+ AddTestCase (new LenaPssFfMacSchedulerTestCase2 (dist3,estThrPssDl3,packetSize3,1));
+
+
+ // Test Case 4: heterogeneous flow test in PSS (different distance)
+ // Traffic4 info
+ // UDP traffic: payload size = [100,200,300] bytes, interval = 1 ms
+ // UDP rate in scheduler: (payload + RLC header + PDCP header + IP header + UDP header) * 1000 byte/sec -> [132000, 232000, 332000] byte/sec
+ // Maximum throughput = 3 / ( 1/2196000 + 1/1383000 + 1/903000 ) = 1312417 byte/s
+ // 132000 + 232000 + 332000 = 696000 < 1312417 -> estimated throughput in downlink = [132000, 232000, 332000] byte/sec
+ std::vector<uint16_t> dist4;
+ dist4.push_back (0); // User 0 distance --> MCS 28
+ dist4.push_back (3000); // User 1 distance --> MCS 24
+ dist4.push_back (6000); // User 2 distance --> MCS 16
+ std::vector<uint16_t> packetSize4;
+ packetSize4.push_back (100);
+ packetSize4.push_back (200);
+ packetSize4.push_back (300);
+ std::vector<uint32_t> estThrPssDl4;
+ estThrPssDl4.push_back (132000); // User 0 estimated TTI throughput from PSS
+ estThrPssDl4.push_back (232000); // User 1 estimated TTI throughput from PSS
+ estThrPssDl4.push_back (332000); // User 2 estimated TTI throughput from PSS
+ AddTestCase (new LenaPssFfMacSchedulerTestCase2 (dist4,estThrPssDl4,packetSize4,1));
+}
+
+static LenaTestPssFfMacSchedulerSuite lenaTestPssFfMacSchedulerSuite;
+
+// --------------- T E S T - C A S E # 1 ------------------------------
+
+
+std::string
+LenaPssFfMacSchedulerTestCase1::BuildNameString (uint16_t nUser, uint16_t dist)
+{
+ std::ostringstream oss;
+ oss << nUser << " UEs, distance " << dist << " m";
+ return oss.str ();
+}
+
+
+LenaPssFfMacSchedulerTestCase1::LenaPssFfMacSchedulerTestCase1 (uint16_t nUser, uint16_t nLc, uint16_t dist, double thrRefDl, double thrRefUl, uint16_t packetSize, uint16_t interval)
+ : TestCase (BuildNameString (nUser, dist)),
+ m_nUser (nUser),
+ m_nLc (nLc),
+ m_dist (dist),
+ m_packetSize (packetSize),
+ m_interval (interval),
+ m_thrRefDl (thrRefDl),
+ m_thrRefUl (thrRefUl)
+{
+}
+
+LenaPssFfMacSchedulerTestCase1::~LenaPssFfMacSchedulerTestCase1 ()
+{
+}
+
+void
+LenaPssFfMacSchedulerTestCase1::DoRun (void)
+{
+ Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
+ Ptr<EpcHelper> epcHelper = CreateObject<EpcHelper> ();
+ lteHelper->SetEpcHelper (epcHelper);
+
+ Ptr<Node> pgw = epcHelper->GetPgwNode ();
+
+ // Create a single RemoteHost
+ NodeContainer remoteHostContainer;
+ remoteHostContainer.Create (1);
+ Ptr<Node> remoteHost = remoteHostContainer.Get (0);
+ InternetStackHelper internet;
+ internet.Install (remoteHostContainer);
+
+ // Create the Internet
+ PointToPointHelper p2ph;
+ p2ph.SetDeviceAttribute ("DataRate", DataRateValue (DataRate ("100Gb/s")));
+ p2ph.SetDeviceAttribute ("Mtu", UintegerValue (1500));
+ p2ph.SetChannelAttribute ("Delay", TimeValue (Seconds (0.001)));
+ NetDeviceContainer internetDevices = p2ph.Install (pgw, remoteHost);
+ Ipv4AddressHelper ipv4h;
+ ipv4h.SetBase ("1.0.0.0", "255.0.0.0");
+ Ipv4InterfaceContainer internetIpIfaces = ipv4h.Assign (internetDevices);
+ // interface 0 is localhost, 1 is the p2p device
+ Ipv4Address remoteHostAddr = internetIpIfaces.GetAddress (1);
+
+ Ipv4StaticRoutingHelper ipv4RoutingHelper;
+ Ptr<Ipv4StaticRouting> remoteHostStaticRouting = ipv4RoutingHelper.GetStaticRouting (remoteHost->GetObject<Ipv4> ());
+ remoteHostStaticRouting->AddNetworkRouteTo (Ipv4Address ("7.0.0.0"), Ipv4Mask ("255.0.0.0"), 1);
+
+ Config::SetDefault ("ns3::LteAmc::AmcModel", EnumValue (LteAmc::PiroEW2010));
+ Config::SetDefault ("ns3::LteAmc::Ber", DoubleValue (0.00005));
+ Config::SetDefault ("ns3::LteSpectrumPhy::PemEnabled", BooleanValue (false));
+
+ LogComponentDisableAll (LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteEnbRrc", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteUeRrc", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteEnbMac", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteUeMac", LOG_LEVEL_ALL);
+// LogComponentEnable ("LteRlc", LOG_LEVEL_ALL);
+//
+// LogComponentEnable ("LtePhy", LOG_LEVEL_ALL);
+// LogComponentEnable ("LteEnbPhy", LOG_LEVEL_ALL);
+// LogComponentEnable ("LteUePhy", LOG_LEVEL_ALL);
+
+ // LogComponentEnable ("LteSpectrumPhy", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteInterference", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteSinrChunkProcessor", LOG_LEVEL_ALL);
+ //
+ // LogComponentEnable ("LtePropagationLossModel", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LossModel", LOG_LEVEL_ALL);
+ // LogComponentEnable ("ShadowingLossModel", LOG_LEVEL_ALL);
+ // LogComponentEnable ("PenetrationLossModel", LOG_LEVEL_ALL);
+ // LogComponentEnable ("MultipathLossModel", LOG_LEVEL_ALL);
+ // LogComponentEnable ("PathLossModel", LOG_LEVEL_ALL);
+ //
+ // LogComponentEnable ("LteNetDevice", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteUeNetDevice", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteEnbNetDevice", LOG_LEVEL_ALL);
+
+// LogComponentEnable ("PssFfMacScheduler", LOG_LEVEL_ALL);
+ LogComponentEnable ("LenaTestPssFfMacCheduler", LOG_LEVEL_ALL);
+ // LogComponentEnable ("LteAmc", LOG_LEVEL_ALL);
+// LogComponentEnable ("RadioBearerStatsCalculator", LOG_LEVEL_ALL);
+
+
+ lteHelper->SetAttribute ("PathlossModel", StringValue ("ns3::FriisSpectrumPropagationLossModel"));
+
+ // Create Nodes: eNodeB and UE
+ NodeContainer enbNodes;
+ NodeContainer ueNodes;
+ enbNodes.Create (1);
+ ueNodes.Create (m_nUser);
+
+ // Install Mobility Model
+ MobilityHelper mobility;
+ mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
+ mobility.Install (enbNodes);
+ mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
+ mobility.Install (ueNodes);
+
+ // Create Devices and install them in the Nodes (eNB and UE)
+ NetDeviceContainer enbDevs;
+ NetDeviceContainer ueDevs;
+ lteHelper->SetSchedulerType ("ns3::PssFfMacScheduler");
+ enbDevs = lteHelper->InstallEnbDevice (enbNodes);
+ ueDevs = lteHelper->InstallUeDevice (ueNodes);
+
+ // Attach a UE to a eNB
+ lteHelper->Attach (ueDevs, enbDevs.Get (0));
+
+ Ptr<LteEnbNetDevice> lteEnbDev = enbDevs.Get (0)->GetObject<LteEnbNetDevice> ();
+ Ptr<LteEnbPhy> enbPhy = lteEnbDev->GetPhy ();
+ enbPhy->SetAttribute ("TxPower", DoubleValue (30.0));
+ enbPhy->SetAttribute ("NoiseFigure", DoubleValue (5.0));
+
+ // Set UEs' position and power
+ for (int i = 0; i < m_nUser; i++)
+ {
+ Ptr<ConstantPositionMobilityModel> mm = ueNodes.Get (i)->GetObject<ConstantPositionMobilityModel> ();
+ mm->SetPosition (Vector (m_dist, 0.0, 0.0));
+ Ptr<LteUeNetDevice> lteUeDev = ueDevs.Get (i)->GetObject<LteUeNetDevice> ();
+ Ptr<LteUePhy> uePhy = lteUeDev->GetPhy ();
+ uePhy->SetAttribute ("TxPower", DoubleValue (23.0));
+ uePhy->SetAttribute ("NoiseFigure", DoubleValue (9.0));
+ }
+
+ // Install the IP stack on the UEs
+ internet.Install (ueNodes);
+ Ipv4InterfaceContainer ueIpIface;
+ ueIpIface = epcHelper->AssignUeIpv4Address (NetDeviceContainer (ueDevs));
+ // Assign IP address to UEs, and install applications
+ for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
+ {
+ Ptr<Node> ueNode = ueNodes.Get (u);
+ // Set the default gateway for the UE
+ Ptr<Ipv4StaticRouting> ueStaticRouting = ipv4RoutingHelper.GetStaticRouting (ueNode->GetObject<Ipv4> ());
+ ueStaticRouting->SetDefaultRoute (epcHelper->GetUeDefaultGatewayAddress (), 1);
+
+ // Activate an EPS bearer
+ Ptr<NetDevice> ueDevice = ueDevs.Get (u);
+ GbrQosInformation qos;
+ qos.gbrDl = (m_packetSize+32) * (1000/m_interval) * 8 * 1; // bit/s
+ qos.gbrUl = (m_packetSize+32) * (1000/m_interval) * 8 * 1;
+ qos.mbrDl = 0;
+ qos.mbrUl = 0;
+
+ enum EpsBearer::Qci q = EpsBearer::GBR_CONV_VOICE;
+ EpsBearer bearer (q, qos);
+ lteHelper->ActivateEpsBearer (ueDevice, bearer, EpcTft::Default ());
+ }
+
+ lteHelper->EnableMacTraces ();
+ lteHelper->EnableRlcTraces ();
+ lteHelper->EnablePdcpTraces ();
+
+ // Install downlink and uplink applications
+ uint16_t dlPort = 1234;
+ uint16_t ulPort = 2000;
+ PacketSinkHelper dlPacketSinkHelper ("ns3::UdpSocketFactory", InetSocketAddress (Ipv4Address::GetAny (), dlPort));
+ PacketSinkHelper ulPacketSinkHelper ("ns3::UdpSocketFactory", InetSocketAddress (Ipv4Address::GetAny (), ulPort));
+ ApplicationContainer clientApps;
+ ApplicationContainer serverApps;
+ for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
+ {
+ ++ulPort;
+ serverApps.Add (dlPacketSinkHelper.Install (ueNodes.Get(u))); // receive packets from remotehost
+ serverApps.Add (ulPacketSinkHelper.Install (remoteHost)); // receive packets from UEs
+
+ UdpClientHelper dlClient (ueIpIface.GetAddress (u), dlPort); // downlink packets generator
+ dlClient.SetAttribute ("Interval", TimeValue (MilliSeconds(m_interval)));
+ dlClient.SetAttribute ("MaxPackets", UintegerValue(1000000));
+ dlClient.SetAttribute ("PacketSize", UintegerValue(m_packetSize));
+
+ UdpClientHelper ulClient (remoteHostAddr, ulPort); // uplink packets generator
+ ulClient.SetAttribute ("Interval", TimeValue (MilliSeconds(m_interval)));
+ ulClient.SetAttribute ("MaxPackets", UintegerValue(1000000));
+ ulClient.SetAttribute ("PacketSize", UintegerValue(m_packetSize));
+
+ clientApps.Add (dlClient.Install (remoteHost));
+ clientApps.Add (ulClient.Install (ueNodes.Get(u)));
+ }
+
+ serverApps.Start (Seconds (0.001));
+ clientApps.Start (Seconds (0.001));
+
+ double simulationTime = 1.0;
+ double tolerance = 0.1;
+ Simulator::Stop (Seconds (simulationTime));
+
+ Ptr<RadioBearerStatsCalculator> rlcStats = lteHelper->GetRlcStats ();
+ rlcStats->SetAttribute ("EpochDuration", TimeValue (Seconds (simulationTime)));
+
+ Simulator::Run ();
+
+ /**
+ * Check that the downlink allocation is done in a "priority set scheduler" manner
+ */
+
+ NS_LOG_INFO ("DL - Test with " << m_nUser << " user(s) at distance " << m_dist);
+ std::vector <uint64_t> dlDataRxed;
+ for (int i = 0; i < m_nUser; i++)
+ {
+ // get the imsi
+ uint64_t imsi = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetImsi ();
+ // get the lcId
+ uint8_t lcId = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetRrc ()->GetLcIdVector ().at (0);
+ uint64_t data = rlcStats->GetDlRxData (imsi, lcId);
+ dlDataRxed.push_back (data);
+ NS_LOG_INFO ("\tUser " << i << " imsi " << imsi << " bytes rxed " << (double)dlDataRxed.at (i) << " thr " << (double)dlDataRxed.at (i) / simulationTime << " ref " << m_thrRefDl);
+ }
+
+ for (int i = 0; i < m_nUser; i++)
+ {
+ NS_TEST_ASSERT_MSG_EQ_TOL ((double)dlDataRxed.at (i) / simulationTime, m_thrRefDl, m_thrRefDl * tolerance, " Unfair Throughput!");
+ }
+
+ /**
+ * Check that the uplink allocation is done in a "round robin" manner
+ */
+
+ NS_LOG_INFO ("UL - Test with " << m_nUser << " user(s) at distance " << m_dist);
+ std::vector <uint64_t> ulDataRxed;
+ for (int i = 0; i < m_nUser; i++)
+ {
+ // get the imsi
+ uint64_t imsi = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetImsi ();
+ // get the lcId
+ uint8_t lcId = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetRrc ()->GetLcIdVector ().at (0);
+ ulDataRxed.push_back (rlcStats->GetUlRxData (imsi, lcId));
+ NS_LOG_INFO ("\tUser " << i << " imsi " << imsi << " bytes rxed " << (double)ulDataRxed.at (i) << " thr " << (double)ulDataRxed.at (i) / simulationTime << " ref " << m_thrRefUl);
+ }
+
+ for (int i = 0; i < m_nUser; i++)
+ {
+ NS_TEST_ASSERT_MSG_EQ_TOL ((double)ulDataRxed.at (i) / simulationTime, m_thrRefUl, m_thrRefUl * tolerance, " Unfair Throughput!");
+ }
+ Simulator::Destroy ();
+
+}
+
+
+
+// --------------- T E S T - C A S E # 2 ------------------------------
+
+
+std::string
+LenaPssFfMacSchedulerTestCase2::BuildNameString (uint16_t nUser, std::vector<uint16_t> dist)
+{
+ std::ostringstream oss;
+ oss << "distances (m) = [ " ;
+ for (std::vector<uint16_t>::iterator it = dist.begin (); it != dist.end (); ++it)
+ {
+ oss << *it << " ";
+ }
+ oss << "]";
+ return oss.str ();
+}
+
+
+LenaPssFfMacSchedulerTestCase2::LenaPssFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrPssDl, std::vector<uint16_t> packetSize, uint16_t interval)
+ : TestCase (BuildNameString (dist.size (), dist)),
+ m_nUser (dist.size ()),
+ m_dist (dist),
+ m_packetSize (packetSize),
+ m_interval (interval),
+ m_estThrPssDl (estThrPssDl)
+{
+}
+
+LenaPssFfMacSchedulerTestCase2::~LenaPssFfMacSchedulerTestCase2 ()
+{
+}
+
+void
+LenaPssFfMacSchedulerTestCase2::DoRun (void)
+{
+ Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
+ Ptr<EpcHelper> epcHelper = CreateObject<EpcHelper> ();
+ lteHelper->SetEpcHelper (epcHelper);
+
+ Ptr<Node> pgw = epcHelper->GetPgwNode ();
+
+ // Create a single RemoteHost
+ NodeContainer remoteHostContainer;
+ remoteHostContainer.Create (1);
+ Ptr<Node> remoteHost = remoteHostContainer.Get (0);
+ InternetStackHelper internet;
+ internet.Install (remoteHostContainer);
+
+ // Create the Internet
+ PointToPointHelper p2ph;
+ p2ph.SetDeviceAttribute ("DataRate", DataRateValue (DataRate ("100Gb/s")));
+ p2ph.SetDeviceAttribute ("Mtu", UintegerValue (1500));
+ p2ph.SetChannelAttribute ("Delay", TimeValue (Seconds (0.001)));
+ NetDeviceContainer internetDevices = p2ph.Install (pgw, remoteHost);
+ Ipv4AddressHelper ipv4h;
+ ipv4h.SetBase ("1.0.0.0", "255.0.0.0");
+ Ipv4InterfaceContainer internetIpIfaces = ipv4h.Assign (internetDevices);
+ // interface 0 is localhost, 1 is the p2p device
+ Ipv4Address remoteHostAddr = internetIpIfaces.GetAddress (1);
+
+ Ipv4StaticRoutingHelper ipv4RoutingHelper;
+ Ptr<Ipv4StaticRouting> remoteHostStaticRouting = ipv4RoutingHelper.GetStaticRouting (remoteHost->GetObject<Ipv4> ());
+ remoteHostStaticRouting->AddNetworkRouteTo (Ipv4Address ("7.0.0.0"), Ipv4Mask ("255.0.0.0"), 1);
+
+ Config::SetDefault ("ns3::LteAmc::AmcModel", EnumValue (LteAmc::PiroEW2010));
+ Config::SetDefault ("ns3::LteAmc::Ber", DoubleValue (0.00005));
+ Config::SetDefault ("ns3::LteSpectrumPhy::PemEnabled", BooleanValue (false));
+
+ LogComponentDisableAll (LOG_LEVEL_ALL);
+  LogComponentEnable ("LenaTestPssFfMacCheduler", LOG_LEVEL_ALL);
+
+
+ lteHelper->SetAttribute ("PathlossModel", StringValue ("ns3::FriisSpectrumPropagationLossModel"));
+
+ // Create Nodes: eNodeB and UE
+ NodeContainer enbNodes;
+ NodeContainer ueNodes;
+ enbNodes.Create (1);
+ ueNodes.Create (m_nUser);
+
+ // Install Mobility Model
+ MobilityHelper mobility;
+ mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
+ mobility.Install (enbNodes);
+ mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
+ mobility.Install (ueNodes);
+
+ // Create Devices and install them in the Nodes (eNB and UE)
+ NetDeviceContainer enbDevs;
+ NetDeviceContainer ueDevs;
+ lteHelper->SetSchedulerType ("ns3::PssFfMacScheduler");
+ enbDevs = lteHelper->InstallEnbDevice (enbNodes);
+ ueDevs = lteHelper->InstallUeDevice (ueNodes);
+
+  // Attach the UEs to the eNB
+ lteHelper->Attach (ueDevs, enbDevs.Get (0));
+
+ Ptr<LteEnbNetDevice> lteEnbDev = enbDevs.Get (0)->GetObject<LteEnbNetDevice> ();
+ Ptr<LteEnbPhy> enbPhy = lteEnbDev->GetPhy ();
+ enbPhy->SetAttribute ("TxPower", DoubleValue (30.0));
+ enbPhy->SetAttribute ("NoiseFigure", DoubleValue (5.0));
+
+ // Set UEs' position and power
+ for (int i = 0; i < m_nUser; i++)
+ {
+ Ptr<ConstantPositionMobilityModel> mm = ueNodes.Get (i)->GetObject<ConstantPositionMobilityModel> ();
+ mm->SetPosition (Vector (m_dist.at(i), 0.0, 0.0));
+ Ptr<LteUeNetDevice> lteUeDev = ueDevs.Get (i)->GetObject<LteUeNetDevice> ();
+ Ptr<LteUePhy> uePhy = lteUeDev->GetPhy ();
+ uePhy->SetAttribute ("TxPower", DoubleValue (23.0));
+ uePhy->SetAttribute ("NoiseFigure", DoubleValue (9.0));
+ }
+
+ // Install the IP stack on the UEs
+ internet.Install (ueNodes);
+ Ipv4InterfaceContainer ueIpIface;
+ ueIpIface = epcHelper->AssignUeIpv4Address (NetDeviceContainer (ueDevs));
+ // Assign IP address to UEs, and install applications
+ for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
+ {
+ Ptr<Node> ueNode = ueNodes.Get (u);
+ // Set the default gateway for the UE
+ Ptr<Ipv4StaticRouting> ueStaticRouting = ipv4RoutingHelper.GetStaticRouting (ueNode->GetObject<Ipv4> ());
+ ueStaticRouting->SetDefaultRoute (epcHelper->GetUeDefaultGatewayAddress (), 1);
+
+ // Activate an EPS bearer
+ Ptr<NetDevice> ueDevice = ueDevs.Get (u);
+ GbrQosInformation qos;
+      qos.gbrDl = (m_packetSize.at (u) + 32) * (1000 / m_interval) * 8; // bit/s = Target Bit Rate (TBR); +32 bytes per-packet protocol overhead
+      qos.gbrUl = (m_packetSize.at (u) + 32) * (1000 / m_interval) * 8;
+ qos.mbrDl = qos.gbrDl;
+ qos.mbrUl = qos.gbrUl;
+
+ enum EpsBearer::Qci q = EpsBearer::GBR_CONV_VOICE;
+ EpsBearer bearer (q, qos);
+ lteHelper->ActivateEpsBearer (ueDevice, bearer, EpcTft::Default ());
+
+ }
+
+ lteHelper->EnableMacTraces ();
+ lteHelper->EnableRlcTraces ();
+ lteHelper->EnablePdcpTraces ();
+
+  // Install downlink and uplink applications
+ uint16_t dlPort = 1234;
+ uint16_t ulPort = 2000;
+ PacketSinkHelper dlPacketSinkHelper ("ns3::UdpSocketFactory", InetSocketAddress (Ipv4Address::GetAny (), dlPort));
+ PacketSinkHelper ulPacketSinkHelper ("ns3::UdpSocketFactory", InetSocketAddress (Ipv4Address::GetAny (), ulPort));
+ ApplicationContainer clientApps;
+ ApplicationContainer serverApps;
+ for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
+ {
+ ++ulPort;
+      serverApps.Add (dlPacketSinkHelper.Install (ueNodes.Get (u))); // receives packets from the remote host
+      serverApps.Add (ulPacketSinkHelper.Install (remoteHost));      // receives packets from the UEs
+
+      UdpClientHelper dlClient (ueIpIface.GetAddress (u), dlPort); // downlink packets generator
+ dlClient.SetAttribute ("Interval", TimeValue (MilliSeconds(m_interval)));
+ dlClient.SetAttribute ("MaxPackets", UintegerValue(1000000));
+ dlClient.SetAttribute ("PacketSize", UintegerValue(m_packetSize.at(u)));
+
+      UdpClientHelper ulClient (remoteHostAddr, ulPort);           // uplink packets generator
+ ulClient.SetAttribute ("Interval", TimeValue (MilliSeconds(m_interval)));
+ ulClient.SetAttribute ("MaxPackets", UintegerValue(1000000));
+ ulClient.SetAttribute ("PacketSize", UintegerValue(m_packetSize.at(u)));
+
+ clientApps.Add (dlClient.Install (remoteHost));
+ clientApps.Add (ulClient.Install (ueNodes.Get(u)));
+ }
+
+ serverApps.Start (Seconds (0.001));
+ clientApps.Start (Seconds (0.001));
+
+ double simulationTime = 1.0;
+ double tolerance = 0.1;
+ Simulator::Stop (Seconds (simulationTime));
+
+ Ptr<RadioBearerStatsCalculator> rlcStats = lteHelper->GetRlcStats ();
+ rlcStats->SetAttribute ("EpochDuration", TimeValue (Seconds (simulationTime)));
+
+ Simulator::Run ();
+
+  /**
+   * Check that the downlink assignment is done in a "priority set scheduler" manner
+   */
+
+ NS_LOG_INFO ("DL - Test with " << m_nUser << " user(s)");
+ std::vector <uint64_t> dlDataRxed;
+ for (int i = 0; i < m_nUser; i++)
+ {
+ // get the imsi
+ uint64_t imsi = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetImsi ();
+ // get the lcId
+ uint8_t lcId = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetRrc ()->GetLcIdVector ().at (0);
+ dlDataRxed.push_back (rlcStats->GetDlRxData (imsi, lcId));
+      NS_LOG_INFO ("\tUser " << i << " dist " << m_dist.at (i) << " imsi " << imsi << " bytes rxed " << (double)dlDataRxed.at (i) << " thr " << (double)dlDataRxed.at (i) / simulationTime << " ref " << m_estThrPssDl.at (i));
+ }
+
+ for (int i = 0; i < m_nUser; i++)
+ {
+ NS_TEST_ASSERT_MSG_EQ_TOL ((double)dlDataRxed.at (i) / simulationTime, m_estThrPssDl.at(i), m_estThrPssDl.at(i) * tolerance, " Unfair Throughput!");
+ }
+
+ Simulator::Destroy ();
+
+}
+
+
+} // namespace ns3
+
+
+
+
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/src/lte/test/lte-test-pss-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
@@ -0,0 +1,89 @@
+/* -*- Mode: C++; c-file-style: "gnu"; indent-tabs-mode:nil; -*- */
+/*
+ * Copyright (c) 2011 Centre Tecnologic de Telecomunicacions de Catalunya (CTTC)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation;
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: Marco Miozzo <marco.miozzo@cttc.es>,
+ * Nicola Baldo <nbaldo@cttc.es>
+ * Dizhi Zhou <dizhi.zhou@gmail.com>
+ */
+
+#ifndef LENA_TEST_PSS_FF_MAC_SCHEDULER_H
+#define LENA_TEST_PSS_FF_MAC_SCHEDULER_H
+
+#include "ns3/simulator.h"
+#include "ns3/test.h"
+
+
+namespace ns3 {
+
+
+/**
+* This system test program creates different test cases with a single eNB and
+* several UEs, all having the same Radio Bearer specification. In each test
+* case, the UEs see the same SINR from the eNB; different test cases are
+* implemented by using different SINR values and different numbers of UEs.
+* The test consists of checking that the obtained throughput performance is
+* consistent with the definition of priority set scheduling.
+*/
+class LenaPssFfMacSchedulerTestCase1 : public TestCase
+{
+public:
+ LenaPssFfMacSchedulerTestCase1 (uint16_t nUser, uint16_t nLc, uint16_t dist, double thrRefDl, double thrRefUl, uint16_t packetSize, uint16_t interval);
+ virtual ~LenaPssFfMacSchedulerTestCase1 ();
+
+private:
+ static std::string BuildNameString (uint16_t nUser, uint16_t dist);
+ virtual void DoRun (void);
+ uint16_t m_nUser;
+ uint16_t m_nLc;
+ uint16_t m_dist;
+ uint16_t m_packetSize; // byte
+ uint16_t m_interval; // ms
+ double m_thrRefDl;
+ double m_thrRefUl;
+};
+
+
+class LenaPssFfMacSchedulerTestCase2 : public TestCase
+{
+public:
+ LenaPssFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrPssDl, std::vector<uint16_t> packetSize, uint16_t interval);
+ virtual ~LenaPssFfMacSchedulerTestCase2 ();
+
+private:
+ static std::string BuildNameString (uint16_t nUser, std::vector<uint16_t> dist);
+ virtual void DoRun (void);
+ uint16_t m_nUser;
+ std::vector<uint16_t> m_dist;
+ std::vector<uint16_t> m_packetSize; // byte
+ uint16_t m_interval; // ms
+ std::vector<uint32_t> m_estThrPssDl;
+};
+
+
+class LenaTestPssFfMacSchedulerSuite : public TestSuite
+{
+public:
+ LenaTestPssFfMacSchedulerSuite ();
+};
+
+
+
+
+} // namespace ns3
+
+
+#endif /* LENA_TEST_PSS_FF_MAC_SCHEDULER_H */
--- a/src/lte/wscript Sun Aug 05 21:33:37 2012 -0300
+++ b/src/lte/wscript Tue Aug 21 23:18:40 2012 -0300
@@ -64,6 +64,7 @@
'model/fdbet-ff-mac-scheduler.cc',
'model/fdtbfq-ff-mac-scheduler.cc',
'model/tdtbfq-ff-mac-scheduler.cc',
+ 'model/pss-ff-mac-scheduler.cc',
'model/epc-gtpu-header.cc',
'model/trace-fading-loss-model.cc',
'model/epc-enb-application.cc',
@@ -90,6 +91,7 @@
'test/lte-test-fdbet-ff-mac-scheduler.cc',
'test/lte-test-fdtbfq-ff-mac-scheduler.cc',
'test/lte-test-tdtbfq-ff-mac-scheduler.cc',
+ 'test/lte-test-pss-ff-mac-scheduler.cc',
'test/lte-test-earfcn.cc',
'test/lte-test-spectrum-value-helper.cc',
'test/lte-test-pathloss-model.cc',
@@ -174,6 +176,7 @@
'model/fdbet-ff-mac-scheduler.h',
'model/fdtbfq-ff-mac-scheduler.h',
'model/tdtbfq-ff-mac-scheduler.h',
+ 'model/pss-ff-mac-scheduler.h',
'model/trace-fading-loss-model.h',
'model/epc-gtpu-header.h',
'model/epc-enb-application.h',
@@ -193,6 +196,7 @@
'test/lte-test-fdbet-ff-mac-scheduler.h',
'test/lte-test-fdtbfq-ff-mac-scheduler.h',
'test/lte-test-tdtbfq-ff-mac-scheduler.h',
+ 'test/lte-test-pss-ff-mac-scheduler.h',
'test/lte-test-phy-error-model.h',
'test/lte-test-pathloss-model.h',
'test/epc-test-gtpu.h',