--- a/src/lte/doc/source/lte-design.rst Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/doc/source/lte-design.rst Wed Aug 22 22:55:22 2012 -0300
@@ -652,7 +652,7 @@
\right)}{\tau}
-Maximum Throughphut (MT) Scheduler
+Maximum Throughput (MT) Scheduler
----------------------------------
The Maximum Throughput (MT) scheduler [FCapo2012]_ aims to maximize the overall throughput of eNB.
@@ -661,7 +661,7 @@
In FDMT, every TTI, MAC scheduler allocates RBGs to the UE who has highest achievable rate calculated
by subband cqi. In TDMT, every TTI, MAC scheduler selects one UE which has highest achievable rate
calculated by wideband cqi. Then MAC scheduler allocates all RBGs to this UE in current TTI.
-The caculation of achievable rate in FDMT and TDMT is as same as the one in PF.
+The calculation of the achievable rate in FDMT and TDMT is the same as in PF.
Let :math:`i,j` denote generic users; let :math:`t` be the
subframe index, and :math:`k` be the resource block index; let :math:`M_{i,k}(t)` be MCS
usable by user :math:`i` on resource block :math:`k` according to what reported by the AMC
@@ -686,14 +686,14 @@
When there are several UEs having the same achievable rate, current implementation always select
the first UE created in script. Although MT can maximize cell throughput, it cannot provide
-faireness to UEs in poor chennel condition.
+fairness to UEs in poor channel condition.
-Throughphut to Average (TTA) Scheduler
+Throughput to Average (TTA) Scheduler
--------------------------------------
-The Throughput to Average (TTA) scheduler [FCapo2012]_ can be considered as an intermidiate between MT and PF.
-The metric used in TTA is caculated as follows:
+The Throughput to Average (TTA) scheduler [FCapo2012]_ can be considered as an intermediate between MT and PF.
+The metric used in TTA is calculated as follows:
.. math::
@@ -702,35 +702,35 @@
Here,:math:`R_{i}(k,t)` in bit/s represents the achievable rate for user :math:`i`
on resource block :math:`k` at subframe :math:`t`. The
-caculation method already is shown in MT and PF. Meanwhile, :math:`R_{i}(t)` in bit/s stands
+calculation method is already shown in MT and PF. Meanwhile, :math:`R_{i}(t)` in bit/s stands
for the achievable rate for :math:`i` at subframe :math:`t`. The difference between those two
-achievable rates is how to get MCS. For :math:`R_{i}(k,t)`, MCS is caculatd by subband cqi while
-:math:`R_{i}(t)` is caculated by wideband cqi. TTA scheduler can only be implemented in frequency domain (FD) because
-the achievable rate of paritular RBG is only related to FD scheduling.
+achievable rates is how the MCS is obtained. For :math:`R_{i}(k,t)`, MCS is calculated from the subband cqi, while
+:math:`R_{i}(t)` is calculated from the wideband cqi. The TTA scheduler can only be implemented in the frequency domain (FD) because
+the achievable rate of a particular RBG is only relevant to FD scheduling.
Blind Average Throughput Scheduler
----------------------------------
The Blind Average Throughput scheduler [FCapo2012]_ aims to provide equal throughput to all UEs under eNB. The metric
-used in TTA is caculated as follows:
+used in BET is calculated as follows:
.. math::
\widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
\left( \frac{ 1 }{ T_\mathrm{j}(t) } \right)
-where :math:`T_{j}(t)` is the past througput performance perceived by the user :math:`j` and can be caculated by the
+where :math:`T_{j}(t)` is the past throughput performance perceived by the user :math:`j` and can be calculated by the
same method in PF scheduler. In the time domain blind average throughput (TD-BET), the scheduler selects the UE
-with largest priority metirc and allocates all RBGs to this UE. On the other hand, in the frequency domain blind
+with the largest priority metric and allocates all RBGs to this UE. On the other hand, in the frequency domain blind
average throughput (FD-BET), every TTI, the scheduler first selects one UE with lowest pastAverageThroughput (largest
-prioriy metric). Then scheduler assigns one RBG to this UE, it calculates expected throughput of this UE and uses it
+priority metric). Then the scheduler assigns one RBG to this UE, calculates the expected throughput of this UE and uses it
to compare with past average throughput :math:`T_{j}(t)` of other UEs. The scheduler continues
to allocate RBG to this UE until its expected throughput is not the smallest one among past average throughput
math:`T_{j}(t)` of all UE. Then the scheduler will use the same way to allocate RBG for a new UE which has the
lowest past average throughput :math:`T_{j}(t)` until all RBGs are allocated to UEs. The principle behind this is
that, in every TTI, the scheduler tries the best to achieve the equal throughput among all UEs.
-Token Band Fair Queue Scheduler
+Token Bank Fair Queue Scheduler
-------------------------------
Token Band Fair Queue (TBFQ) is a QoS aware scheduler which derives from the leaky-bucket mechanism. In TBFQ,
@@ -739,8 +739,8 @@
* :math:`t_{i}`: packet arrival rate (byte/sec )
* :math:`r_{i}`: token generation rate (byte/sec)
* :math:`p_{i}`: token pool size (byte)
- * :math:`E_{i}`: counter that records the number of token borrowed from or given to the token bank by flow i;
- :math:`E_{i}` can be smaller than zero
+ * :math:`E_{i}`: counter that records the number of token borrowed from or given to the token bank by flow i;
+ :math:`E_{i}` can be smaller than zero
Each K bytes data consumes k tokens. Also, TBFQ maintains a shared token bank (:math:`B`) so as to balance the traffic
between different flows. If token generation rate :math:`r_{i}` is bigger than packet arrival rate :math:`t_{i}`, then tokens
@@ -752,17 +752,26 @@
has more opportunity to borrow tokens from bank. In addition, TBFQ can police the traffic by setting the token
generation rate to limit the throughput. Additionally, TBFQ also maintains following three parameters for each flow:
- * Debt limit :math:`d_{i}`: if :math:`E_{i}` belows this threshold, user i cannot further borrow tokens from bank. This is for
- preventing malicious UE to borrow too much tokens.
+ * Debt limit :math:`d_{i}`: if :math:`E_{i}` falls below this threshold, user i cannot further borrow tokens from the bank.
+   This prevents a malicious UE from borrowing too many tokens.
* Credit limit :math:`c_{i}`: the maximum number of tokens UE i can borrow from the bank in one time.
* Credit threshold :math:`C`: once :math:`E_{i}` reaches debt limit, UE i must store :math:`C` tokens to bank in order to further
- borrow token from bank.
+   borrow tokens from the bank.
+
+In the implementation, :math:`r_{i}`, :math:`p_{i}`, :math:`d_{i}`, :math:`c_{i}` and :math:`C` need to be configured in the script
+as follows::
+
-In the implementation, token generation rate should be configured in script and equals to the Maximum Bit Rate (MBR)
+ Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
+ lteHelper->SetSchedulerAttribute("DebtLimit", IntegerValue(yourvalue)); // default value -625000 bytes (-5Mb)
+ lteHelper->SetSchedulerAttribute("CreditLimit", UintegerValue(yourvalue)); // default value 625000 bytes (5Mb)
+ lteHelper->SetSchedulerAttribute("TokenPoolSize", UintegerValue(yourvalue)); // default value 1 byte
+ lteHelper->SetSchedulerAttribute("CreditableThreshold", UintegerValue(yourvalue)); // default value 0
+
+:math:`r_{i}` should be configured in the script and set equal to the Maximum Bit Rate (MBR)
in bearer level QoS parameters. For constant bit rate (CBR) traffic, it is suggested to set MBR to the traffic generation
rate. For variance bit rate (VBR) traffic, it is suggested to set MBR three times larger than traffic generation rate.
-Debt limit and credit limit are set to -5Mb and 5Mb respectively [FABokhari2009]_. Current implementation does not
-consider credit threshold (:math:`C` = 0).
+The default values of the debt limit and the credit limit are -5Mb and 5Mb respectively, following [FABokhari2009]_.
+The current implementation does not consider the credit threshold (:math:`C` = 0).
LTE in NS-3 has two versions of TBFQ scheduler: frequency domain TBFQ (FD-TBFQ) and time domain TBFQ (TD-TBFQ).
In FD-TBFQ, the scheduler always select UE with highest metric and allocates RBG with highest subband cqi until
@@ -788,7 +797,7 @@
\left( \frac{ 1 }{ T_\mathrm{j}(t) } \right)
* set 2: UE whose past average throughput is larger (or equal) than TBR; TD scheduler calculates their priority
- metric in Propotional Fair (PF) style:
+ metric in Proportional Fair (PF) style:
.. math::
@@ -799,44 +808,49 @@
highest metric in two sets and forward those UE to FD scheduler.
In PSS, FD scheduler allocates RBG k to UE n that maximums the chosen metric. Two PF schedulers are used in
-PF scheduelr:
+PF scheduler:
- #. Proportional Fair scheduled (PFsch)
+* Proportional Fair scheduled (PFsch)
.. math::
\widehat{Msch}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
\left( \frac{ R_{j}(k,t) }{ Tsch_\mathrm{j}(t) } \right)
-where :math:`Tsch_{j}(t)` is similar past througput performance perceived by the user :math:`j`, with the
-difference that it is updated only when the i-th user is actually served.
+
- #. Carrier over Interference to Average (CoIta)
+* Carrier over Interference to Average (CoIta)
.. math::
\widehat{Mcoi}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
\left( \frac{ CoI[j,k] }{ \frac{ \sum_{k=1}^{K} CoI[j,k] }{ K } } \right)
-where :math: `CoI[j,k]` is an estimation of the SINR on the RBG :math: `k` of UE :math: `j`. Both PFsch
-and CoIta is for decoupling FD metric from TD scheduler.
-
-In addition, PSS FD scheduler also provide a weigth metric W[n] for helping controlling fairness in case
-of low number of UEs.
+where :math:`Tsch_{j}(t)` is the past throughput performance perceived by the user :math:`j`, similar to :math:`T_{j}(t)`, with the
+difference that it is updated only when the user is actually served; :math:`CoI[j,k]` is an
+estimation of the SINR on the RBG :math:`k` of UE :math:`j`. Both PFsch and CoIta are used to decouple the
+FD metric from the TD scheduler. In addition, the PSS FD scheduler also provides a weight metric :math:`W[n]` to help
+control fairness when the number of UEs is low.
.. math::
W[n] = max (1, \frac{TBR}{ T_{j}(t) })
where :math:`T_{j}(t)` is the past throughput performance perceived by the user :math:`j` . Therefore, on
-RBG k, the FD scheduler selects the UE :math:`j` that maximises the product of the frequency domain
-metric (:math:`Msch`, :math:`MCoI`) by weigth :math: `W[n]`. This strategy will gurantee the throughput of lower
+RBG k, the FD scheduler selects the UE :math:`j` that maximizes the product of the frequency domain
+metric (:math:`Msch`, :math:`MCoI`) and the weight :math:`W[n]`. This strategy will make the throughput of lower
quality UE tend towards the TBR.
In the implementation, TBR equals to Guarantee Bit Rate (GBR) in bearer
-level QoS parameters. :math: `N_mux` is fixed to half value of total number of UE in set 1 and set 2. In addition,
-CQI report is used as an SINR estimate :math:`CoI[j,k]`.
+level QoS parameters. :math:`N_{mux}` can be set in the script as follows::
+
+ Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
+ lteHelper->SetSchedulerAttribute("nMux", UintegerValue(yourownvalue));
+
+If the user does not define nMux, PSS will set this value to half of the total number of UEs. Besides, the
+user can select the FD scheduler type in the same way::
+
+ lteHelper->SetSchedulerAttribute("PssFdSchedulerType", StringValue("CoItA"));
+
+The default FD scheduler is PFsch.
Transport Blocks
----------------
--- a/src/lte/doc/source/lte-testing.rst Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/doc/source/lte-testing.rst Wed Aug 22 22:55:22 2012 -0300
@@ -37,12 +37,12 @@
Unit Tests
~~~~~~~~~~
-SINR calculation in the Downlink
+SNR calculation in the Downlink
--------------------------------
The test suite ``lte-downlink-sinr``
-checks that the SINR calculation in
-downlink is performed correctly. The SINR in the downlink is calculated for each
+checks that the SNR calculation in
+downlink is performed correctly. The SNR in the downlink is calculated for each
RB assigned to data transmissions by dividing the power of the
intended signal from the considered eNB by the sum of the noise power plus all
the transmissions on the same RB coming from other eNBs (the interference
@@ -57,9 +57,9 @@
events of type either start or end of a waveform. In other words, a
chunk identifies a time interval during which the set of active
waveforms does not change. Let :math:`i` be the generic chunk,
-:math:`T_i` its duration and :math:`\mathrm{SINR_i}` its SINR,
+:math:`T_i` its duration and :math:`\mathrm{SNR_i}` its SNR,
calculated with the above equation. The calculation of the average
-SINR :math:`\overline{\gamma}` to be used for CQI feedback reporting
+SNR :math:`\overline{\gamma}` to be used for CQI feedback reporting
uses the following formula:
.. math::
@@ -78,10 +78,10 @@
-SINR calculation in the Uplink
+SNR calculation in the Uplink
------------------------------
-The test suite ``lte-uplink-sinr`` checks that the SINR calculation in
+The test suite ``lte-uplink-sinr`` checks that the SNR calculation in
uplink is performed correctly. This test suite is identical to
``lte-downlink-sinr`` described in the previous section, with the
difference than both the signal and the interference now refer to
@@ -94,7 +94,7 @@
The test vectors are obtained by a dedicated Octave script. The test
passes if the calculated values are equal to the test vector within a
-tolerance of :math:`10^{-7}` which, as for the downlink SINR test,
+tolerance of :math:`10^{-7}` which, as for the downlink SNR test,
deals with floating point arithmetic approximation issues.
@@ -139,12 +139,12 @@
The MCS which is used by the simulator is measured by
obtaining the tracing output produced by the scheduler after 4ms (this
-is needed to account for the initial delay in CQI reporting). The SINR
+is needed to account for the initial delay in CQI reporting). The SNR
which is calcualted by the simulator is also obtained using the
``LteSinrChunkProcessor`` interface. The test
passes if both the following conditions are satisfied:
- #. the SINR calculated by the simulator correspond to the SNR
+ #. the SNR calculated by the simulator corresponds to the SNR
of the test vector within an absolute tolerance of :math:`10^{-7}`;
#. the MCS index used by the simulator exactly corresponds to
the one in the test vector.
@@ -187,7 +187,7 @@
(available in
`src/lte/test/reference/lte_link_budget_interference.m`), which does
the link budget calculations (including interference) corresponding to the topology of each
-test case, and outputs the resulting SINR and spectral efficiency. The
+test case, and outputs the resulting SNR and spectral efficiency. The
latter is then used to determine (using the same procedure adopted for
:ref:`sec-lte-amc-tests`. We note that the test vector
contains separate values for uplink and downlink.
@@ -199,11 +199,11 @@
The test suite ``lte-rr-ff-mac-scheduler`` creates different test cases with
a single eNB and several UEs, all having the same Radio Bearer specification. In
-each test case, the UEs see the same SINR from the eNB; different test cases are
+each test case, the UEs see the same SNR from the eNB; different test cases are
implemented by using different distance among UEs and the eNB (i.e., therefore
-having different SINR values) and different numbers of UEs. The test consists on
+having different SNR values) and different numbers of UEs. The test consists of
checking that the obtained throughput performance is equal among users and
-matches a reference throughput value obtained according to the SINR perceived
+matches a reference throughput value obtained according to the SNR perceived
within a given tolerance.
The test vector is obtained according to the values of transport block
@@ -213,7 +213,7 @@
[TS36213]_. Let :math:`\tau` be the TTI duration, :math:`N` be the
number of UEs, :math:`B` the transmission bandwidth configuration in
number of RBs, :math:`G` the RBG size, :math:`M` the modulation and
-coding scheme in use at the given SINR and :math:`S(M, B)` be the
+coding scheme in use at the given SNR and :math:`S(M, B)` be the
transport block size in bits as defined by 3GPP TS 36.213. We first
calculate the number :math:`L` of RBGs allocated to each user as
@@ -261,8 +261,8 @@
In the first category of test cases, the UEs are all placed at the
same distance from the eNB, and hence all placed in order to have the
-same SINR. Different test cases are implemented by using a different
-SINR value and a different number of UEs. The test consists on
+same SNR. Different test cases are implemented by using a different
+SNR value and a different number of UEs. The test consists of
checking that the obtained throughput performance matches with the
known reference throughput up to a given tolerance. The expected
behavior of the PF scheduler when all UEs have the same SNR is that
@@ -272,7 +272,7 @@
at the given SNR by the total number of UEs.
Let :math:`\tau` be the TTI duration, :math:`B` the transmission
bandwidth configuration in number of RBs, :math:`M` the modulation and
-coding scheme in use at the given SINR and :math:`S(M, B)` be the
+coding scheme in use at the given SNR and :math:`S(M, B)` be the
transport block size as defined in [TS36213]_. The reference
throughput :math:`T` in bit/s achieved by each UE is calculated as
@@ -285,11 +285,11 @@
The second category of tests aims at verifying the fairness of the PF
scheduler in a more realistic simulation scenario where the UEs have a
-different SINR (constant for the whole simulation). In these conditions, the PF
+different SNR (constant for the whole simulation). In these conditions, the PF
scheduler will give to each user a share of the system bandwidth that is
proportional to the capacity achievable by a single user alone considered its
-SINR. In detail, let :math:`M_i` be the modulation and coding scheme being used by
-each UE (which is a deterministic function of the SINR of the UE, and is hence
+SNR. In detail, let :math:`M_i` be the modulation and coding scheme being used by
+each UE (which is a deterministic function of the SNR of the UE, and is hence
known in this scenario). Based on the MCS, we determine the achievable
rate :math:`R_i` for each user :math:`i` using the
procedure described in Section~\ref{sec:pfs}. We then define the
@@ -353,22 +353,22 @@
Test suites ``lte-fdmt-ff-mac-scheduler`` and ``lte-tdmt-ff-mac-scheduler``
create different test cases with a single eNB and several UEs, all having the same
Radio Bearer specification, using the Frequency Domain Maximum Throughput (FDMT)
-scheduler and Time Domain Maximum Throughput (TDMT) scheduler repectively,
+scheduler and Time Domain Maximum Throughput (TDMT) scheduler respectively.
In other words, UEs are all placed at the
same distance from the eNB, and hence all placed in order to have the
-same SINR. Different test cases are implemented by using a different
-SINR value and a different number of UEs. The test consists on
+same SNR. Different test cases are implemented by using different
+SNR values and different numbers of UEs. The test consists of
checking that the obtained throughput performance matches with the
known reference throughput up to a given tolerance.The expected
behavior of both FDMT and TDMT scheduler when all UEs have the same SNR is that
scheduler allocates all RBGs to the first UE defined in script. This is because
-current FDMT and TDMT implementation always select the first UE to serve when there are
+the current FDMT and TDMT implementations always select the first UE to serve when there are
multiple UEs having the same SNR value. We calculate the reference
-throughput value for first UE by the throughput achievable of a sinlge UE
+throughput value for the first UE as the throughput achievable by a single UE
at the given SNR, while reference throughput value for other UEs by zero.
Let :math:`\tau` be the TTI duration, :math:`B` the transmission
bandwidth configuration in number of RBs, :math:`M` the modulation and
-coding scheme in use at the given SINR and :math:`S(M, B)` be the
+coding scheme in use at the given SNR and :math:`S(M, B)` be the
transport block size as defined in [TS36.213]_. The reference
throughput :math:`T` in bit/s achieved by each UE is calculated as
@@ -376,17 +376,13 @@
T = \frac{S(M,B)}{\tau}
-The curves labeled "MT" in Figure XX represent the test values
-calculated for the FDMT and TDMT scheduler test case that we just described.
-
-
Throughput to Average scheduler performance
-------------------------------------------
Test suites ``lte-tta-ff-mac-scheduler``
create different test cases with a single eNB and several UEs, all having the same
Radio Bearer specification using TTA scheduler. Network topology and configurations in
-TTA test case are as same as the test for MT scheduler. More complex test case need to be
+TTA test case are the same as in the test for the MT scheduler. More complex test cases need to be
developed to show the fairness feature of TTA scheduler.
@@ -398,7 +394,7 @@
In the first test case of ``lte-tdbet-ff-mac-scheduler`` and ``lte-fdbet-ff-mac-scheduler``,
the UEs are all placed at the same distance from the eNB, and hence all placed in order to
-have the same SINR. Different test cases are implemented by using a different SINR value and
+have the same SNR. Different test cases are implemented by using a different SNR value and
a different number of UEs. The test consists on checking that the obtained throughput performance
matches with the known reference throughput up to a given tolerance. In long term, the expected
behavior of both TD-BET and FD-BET when all UEs have the same SNR is that each UE should get an
@@ -406,17 +402,17 @@
the same.
When all UEs have the same SNR, TD-BET can be seen as a specific case of PF where achievable rate equals
-to 1. Therefore, the throughput get by TD-BET is equal to that of PF. On the other hand, FD-BET performs
-very similar to the round robin (RR) scheduler in case of that all UEs have the same SNR and the number of UE(RBG)
-is an integer multiple of the number of RBG(UE). In this case, FD-BET always allocate the same number of RBGs
+to 1. Therefore, the throughput obtained by TD-BET is equal to that of PF. On the other hand, FD-BET performs
+very similarly to the round robin (RR) scheduler in the case that all UEs have the same SNR and the number of UEs (or RBGs)
+is an integer multiple of the number of RBGs (or UEs). In this case, FD-BET always allocates the same number of RBGs
to each UE. For example, if eNB has 12 RBGs and there are 6 UEs, then each UE will get 2 RBGs in each TTI.
Or if eNB has 12 RBGs and there are 24 UEs, then each UE will get 2 RBGs per two TTIs. When the number of
UE (RBG) is not an integer multiple of the number of RBG (UE), FD-BET will not follow the RR behavior because
it will assigned different number of RBGs to some UEs, while the throughput of each UE is still the same.
The second category of tests aims at verifying the fairness of the both TD-BET and FD-BET schedulers in a more realistic
-simulation scenario where the UEs have a different SINR (constant for the whole simulation). In this case,
-both scheduler should give the same amount of throughput to each user.
+simulation scenario where the UEs have a different SNR (constant for the whole simulation). In this case,
+both schedulers should give the same average throughput to each user.
Specifically, for TD-BET, let :math:`F_i` be the fraction of time allocated to user i in total simulation time,
:math:`R^{fb}_i` be the the full bandwidth achievable rate for user i and :math: `T_i` be the achieved throughput of
@@ -447,9 +443,9 @@
* Heterogeneous flow: flows with different packet arrival rate, but with the same token generation rate
In test case 1 verifies traffic policing and fairness features for the scenario that all UEs are
-placed at the same distance from the eNB. In this case, all Ues have the same SINR value. Different
-test cases are implemented by using a different SINR value and a different number of UEs. Because each
-flow have the same traffic rate and token generation rate, TBFQ scheduler will gurantee the same
+placed at the same distance from the eNB. In this case, all UEs have the same SNR value. Different
+test cases are implemented by using a different SNR value and a different number of UEs. Because each
+flow has the same traffic rate and token generation rate, the TBFQ scheduler will guarantee the same
throughput among UEs without the constraint of token generation rate. In addition, the exact value
of UE throughput is depended on the total traffic rate:
@@ -469,9 +465,9 @@
Therefore, the UE throughput in the second condition equals to the evenly share of maximum throughput.
Test case 2 verifies traffic policing and fairness features for the scenario that each UE is placed at
-the different distance from the eNB. In this case, each UE has the different SINR value. Similar to test
+a different distance from the eNB. In this case, each UE has a different SNR value. Similar to test
case 1, UE throughput in test case 2 is also depended on the total traffic rate but with a different
-maximum thorughput. Suppose all UEs have a high traffic load. Then the traffic will saturate the RLC buffer
+maximum throughput. Suppose all UEs have a high traffic load. Then the traffic will saturate the RLC buffer
in eNodeB. In each TTI, after selecting one UE with highest metric, TBFQ will allocate all RBGs to this
UE due to the large RLC buffer size. On the other hand, once RLC buffer is saturated, the total throughput
of all UEs cannot increase any more. In addition, as we discussed in test case 1, for homogeneous flows
@@ -509,24 +505,27 @@
----------------------------------
Test suites ``lte-pss-ff-mac-scheduler`` create different test cases with a single eNB and several UEs.
+In all test cases, we select PFsch as the FD scheduler. The same test results can also be obtained with the CoItA
+scheduler. In addition, the test cases do not define nMux, so the TD scheduler in PSS will always select half
+of the total number of UEs.
In the first class test case of ``lte-pss-ff-mac-scheduler``, the UEs are all placed at the same distance from
-the eNB, and hence all placed in order to have the same SINR. Different test cases are implemented
+the eNB, and hence all placed in order to have the same SNR. Different test cases are implemented
by using a different TBR for each UEs. In each test cases, all UEs have the same
-Target Bit Rate configured by GBR in EPS bear setting. The exptected behavior of PSS is to gurantee that
+Target Bit Rate configured by the GBR in the EPS bearer setting. The expected behavior of PSS is to guarantee that
each UE's throughput at least equals its TBR if the total flow rate is blow maximum throughput. Similar
to TBFQ, the maximum throughput in this case equals to the rate that all RBGs are assigned to one UE.
When the traffic rate is smaller than max bandwidth, the UE throughput equals its actual traffic rate;
On the other hand, UE throughput equals to the evenly share of the maximum throughput.
-In the first class of test cases, each UE has the same SINR. Therefore, the priority metric in PF scheduler will be
+In the first class of test cases, each UE has the same SNR. Therefore, the priority metric in PF scheduler will be
determined by past average throughput :math:`T_{j}(t)` because each UE has the same achievable throughput
:math:`R_{j}(k,t)` in PFsch or same :math:`CoI[k,n]` in CoItA. This means that PSS will performs like a
TD-BET which allocates all RBGs to one UE in each TTI. Then the maximum value of UE throughput equals to
the achievable rate that all RBGs are allocated to this UE.
In the second class of test case of ``lte-pss-ff-mac-scheduler``, the UEs are all placed at the same distance from
-the eNB, and hence all placed in order to have the same SINR. Different TBR values are assigned to each UE.
+the eNB, and hence all placed in order to have the same SNR. Different TBR values are assigned to each UE.
There also exist an maximum throughput in this case. Once total traffic rate is bigger than this threshold,
there will be some UEs that cannot achieve their TBR. Because there is no fading, subband CQIs for each
RBGs frequency are the same. Therefore, in FD scheduler,in each TTI, priority metrics of UE for all RBGs
@@ -545,8 +544,8 @@
The aim of the system test is to verify the integration of the
BuildingPathlossModel with the lte module. The test exploits a set of
-three pre calculated losses for generating the expected SINR at the
-receiver counting the transmission and the noise powers. These SINR
+three pre-calculated losses for generating the expected SNR at the
+receiver, accounting for the transmission and the noise powers. These SNR
values are compared with the results obtained from a LTE
simulation that uses the BuildingPathlossModel. The reference loss values are
calculated off-line with an Octave script
@@ -564,12 +563,12 @@
The parameters of the nine test cases are reported in the following:
- #. 4 UEs placed 1800 meters far from the eNB, which implies the use of MCS 2 (SINR of -5.51 dB) and a TB of 256 bits, that in turns produce a BER of 0.33 (see point A in figure :ref:`fig-mcs-2-test`).
- #. 2 UEs placed 1800 meters far from the eNB, which implies the use of MCS 2 (SINR of -5.51 dB) and a TB of 528 bits, that in turns produce a BER of 0.11 (see point B in figure :ref:`fig-mcs-2-test`).
- #. 1 UE placed 1800 meters far from the eNB, which implies the use of MCS 2 (SINR of -5.51 dB) and a TB of 1088 bits, that in turns produce a BER of 0.02 (see point C in figure :ref:`fig-mcs-2-test`).
- #. 1 UE placed 600 meters far from the eNB, which implies the use of MCS 12 (SINR of 4.43 dB) and a TB of 4800 bits, that in turns produce a BER of 0.3 (see point D in figure :ref:`fig-mcs-12-test`).
- #. 3 UEs placed 600 meters far from the eNB, which implies the use of MCS 12 (SINR of 4.43 dB) and a TB of 1632 bits, that in turns produce a BER of 0.55 (see point E in figure :ref:`fig-mcs-12-test`).
- #. 1 UE placed 470 meters far from the eNB, which implies the use of MCS 16 (SINR of 8.48 dB) and a TB of 7272 bits (segmented in 2 CBs of 3648 and 3584 bits), that in turns produce a BER of 0.14, since each CB has CBLER equal to 0.075 (see point F in figure :ref:`fig-mcs-14-test`).
+ #. 4 UEs placed 1800 meters from the eNB, which implies the use of MCS 2 (SNR of -5.51 dB) and a TB of 256 bits, which in turn produces a BER of 0.33 (see point A in figure :ref:`fig-mcs-2-test`).
+ #. 2 UEs placed 1800 meters from the eNB, which implies the use of MCS 2 (SNR of -5.51 dB) and a TB of 528 bits, which in turn produces a BER of 0.11 (see point B in figure :ref:`fig-mcs-2-test`).
+ #. 1 UE placed 1800 meters from the eNB, which implies the use of MCS 2 (SNR of -5.51 dB) and a TB of 1088 bits, which in turn produces a BER of 0.02 (see point C in figure :ref:`fig-mcs-2-test`).
+ #. 1 UE placed 600 meters from the eNB, which implies the use of MCS 12 (SNR of 4.43 dB) and a TB of 4800 bits, which in turn produces a BER of 0.3 (see point D in figure :ref:`fig-mcs-12-test`).
+ #. 3 UEs placed 600 meters from the eNB, which implies the use of MCS 12 (SNR of 4.43 dB) and a TB of 1632 bits, which in turn produces a BER of 0.55 (see point E in figure :ref:`fig-mcs-12-test`).
+ #. 1 UE placed 470 meters from the eNB, which implies the use of MCS 16 (SNR of 8.48 dB) and a TB of 7272 bits (segmented in 2 CBs of 3648 and 3584 bits), which in turn produces a BER of 0.14, since each CB has CBLER equal to 0.075 (see point F in figure :ref:`fig-mcs-14-test`).
.. _fig-mcs-2-test:
@@ -619,7 +618,7 @@
instead uses the default IsotropicAntennaModel. The test
checks that the received power both in uplink and downlink account for
the correct value of the antenna gain, which is determined
-offline; this is implemented by comparing the uplink and downlink SINR
+offline; this is implemented by comparing the uplink and downlink SNR
and checking that both match with the reference value up to a
tolerance of :math:`10^{-6}` which accounts for numerical errors.
Different test cases are provided by varying the x and y coordinates
--- a/src/lte/model/fdbet-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/fdbet-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,9 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
+ *
*/
#ifdef __FreeBSD__
@@ -295,14 +296,13 @@
NS_LOG_FUNCTION (this << " RNTI " << params.m_rnti << " txMode " << (uint16_t)params.m_transmissionMode);
std::map <uint16_t,uint8_t>::iterator it = m_uesTxMode.find (params.m_rnti);
if (it==m_uesTxMode.end ())
- {
- m_uesTxMode.insert (std::pair <uint16_t, double> (params.m_rnti, params.m_transmissionMode));
- }
+ {
+ m_uesTxMode.insert (std::pair <uint16_t, double> (params.m_rnti, params.m_transmissionMode));
+ }
else
- {
- (*it).second = params.m_transmissionMode;
- }
- return;
+ {
+ (*it).second = params.m_transmissionMode;
+ }
return;
}
@@ -321,13 +321,13 @@
fdbetsFlowPerf_t flowStatsDl;
flowStatsDl.flowStart = Simulator::Now ();
flowStatsDl.totalBytesTransmitted = 0;
- flowStatsDl.lastTtiBytesTrasmitted = 0;
+ flowStatsDl.lastTtiBytesTransmitted = 0;
flowStatsDl.lastAveragedThroughput = 1;
m_flowStatsDl.insert (std::pair<uint16_t, fdbetsFlowPerf_t> (params.m_rnti, flowStatsDl));
fdbetsFlowPerf_t flowStatsUl;
flowStatsUl.flowStart = Simulator::Now ();
flowStatsUl.totalBytesTransmitted = 0;
- flowStatsUl.lastTtiBytesTrasmitted = 0;
+ flowStatsUl.lastTtiBytesTransmitted = 0;
flowStatsUl.lastAveragedThroughput = 1;
m_flowStatsUl.insert (std::pair<uint16_t, fdbetsFlowPerf_t> (params.m_rnti, flowStatsUl));
}
@@ -452,7 +452,7 @@
int rbgNum = m_cschedCellConfig.m_dlBandwidth / rbgSize;
std::map <uint16_t, std::vector <uint16_t> > allocationMap;
std::map <uint16_t, fdbetsFlowPerf_t>::iterator itFlow;
- std::map <uint16_t, double> estAveThr; // store exptected average throughput for UE
+ std::map <uint16_t, double> estAveThr; // store expected average throughput for UE
std::map <uint16_t, double>::iterator itMax = estAveThr.end();
std::map <uint16_t, double>::iterator it;
std::map <uint16_t, int> rbgPerRntiLog; // record the number of RBG assigned to UE
@@ -526,7 +526,6 @@
bytesTxed += tbSize;
}
double expectedAveThr = ((1.0 - (1.0 / m_timeWindow)) * (*itPastAveThr).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)(bytesTxed / 0.001));
- //NS_LOG_DEBUG (Simulator::Now()<< " UE " << (*itMax).first << " espectedAveThr "<< expectedAveThr << " RBG " << (*itRbgPerRntiLog).second << " mcs " << (uint16_t)mcs.at(0) << " tbsize "<< bytesTxed << " layer "<<nLayer);
int rbgPerRnti = (*itRbgPerRntiLog).second;
rbgPerRnti++;
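The `expectedAveThr` expression in this hunk is the exponentially weighted moving average of eq. 12.3 of Sec 12.3.1.2 of *LTE – The UMTS Long Term Evolution* (Wiley), with window `m_timeWindow`; dividing the per-TTI bytes by the 1 ms TTI converts them to byte/s. Isolated as a sketch (hypothetical helper, not an actual scheduler member):

```cpp
#include <cassert>
#include <cmath>

// Exponentially weighted average throughput: one TTI lasts 1 ms,
// so bytesTxed / 0.001 is the instantaneous rate in byte/s.
double
UpdateAverageThroughput (double lastAvgThr, double bytesTxed,
                         double timeWindow)
{
  return (1.0 - 1.0 / timeWindow) * lastAvgThr
         + (1.0 / timeWindow) * (bytesTxed / 0.001);
}
```

With `timeWindow = 100`, a flow that keeps sending 1 byte per TTI converges to an average of 1000 byte/s, the fixed point of the recursion.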
@@ -553,7 +552,7 @@
std::map <uint16_t, fdbetsFlowPerf_t>::iterator itStats;
for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
{
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
// generate the transmission opportunities by grouping the RBGs of the same RNTI and
@@ -571,7 +570,6 @@
newDci.m_rnti = (*itMap).first;
uint16_t lcActives = LcActivePerFlow ((*itMap).first);
-// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
uint16_t rbgPerRnti = (*itMap).second.size ();
std::map <uint16_t,uint8_t>::iterator itCqi;
itCqi = m_p10CqiRxed.find ((*itMap).first);
@@ -598,9 +596,6 @@
for (uint8_t i = 0; i < nLayer; i++)
{
int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (0), rbgPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
- //NS_LOG_DEBUG ( Simulator::Now() << this << "Allocate user " << newEl.m_rnti << " tbSize "<< tbSize << " PRBs " << rbgPerRnti*rbgSize << " mcs " << (uint16_t) newDci.m_mcs.at (0) << " layers " << nLayer);
- NS_LOG_DEBUG ( "UE " << newEl.m_rnti << " tbSize "<< tbSize << " PRBs " << rbgPerRnti*rbgSize << " mcs " << (uint16_t) newDci.m_mcs.at (0) << " layers " << nLayer);
-
newDci.m_tbsSize.push_back (tbSize);
bytesTxed += tbSize;
}
@@ -611,7 +606,6 @@
for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -629,7 +623,6 @@
RlcPduListElement_s newRlcEl;
newRlcEl.m_logicalChannelIdentity = (*itBufReq).first.m_lcId;
newRlcEl.m_size = newDci.m_tbsSize.at (j) / lcActives;
- //NS_LOG_DEBUG (this << " LCID " << (uint32_t) newRlcEl.m_logicalChannelIdentity << " size " << newRlcEl.m_size << " layer " << (uint16_t)j);
newRlcPduLe.push_back (newRlcEl);
UpdateDlRlcBufferInfo (newDci.m_rnti, newRlcEl.m_logicalChannelIdentity, newRlcEl.m_size);
}
@@ -653,10 +646,7 @@
it = m_flowStatsDl.find ((*itMap).first);
if (it != m_flowStatsDl.end ())
{
- (*it).second.lastTtiBytesTrasmitted = bytesTxed;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
-
-
+ (*it).second.lastTtiBytesTransmitted = bytesTxed;
}
else
{
@@ -671,12 +661,10 @@
// update UEs stats
for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
{
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
-// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
-// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
m_schedSapUser->SchedDlConfigInd (ret);
@@ -908,8 +896,8 @@
itStats = m_flowStatsUl.find ((*it).first);
if (itStats != m_flowStatsUl.end ())
{
- (*itStats).second.lastTtiBytesTrasmitted = uldci.m_tbSize;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
+ (*itStats).second.lastTtiBytesTransmitted = uldci.m_tbSize;
+// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTransmitted);
}
@@ -939,12 +927,12 @@
// update UEs stats
for (itStats = m_flowStatsUl.begin (); itStats != m_flowStatsUl.end (); itStats++)
{
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
m_allocationMaps.insert (std::pair <uint16_t, std::vector <uint16_t> > (params.m_sfnSf, rbgAllocationMap));
m_schedSapUser->SchedUlConfigInd (ret);
--- a/src/lte/model/fdbet-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/fdbet-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,8 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmai.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
*/
#ifndef FDBET_FF_MAC_SCHEDULER_H
@@ -43,7 +43,7 @@
{
Time flowStart;
unsigned long totalBytesTransmitted;
- unsigned int lastTtiBytesTrasmitted;
+ unsigned int lastTtiBytesTransmitted;
double lastAveragedThroughput;
};
@@ -54,7 +54,7 @@
*/
/**
* \ingroup FdBetFfMacScheduler
- * \brief Implements the SCHED SAP and CSCHED SAP for a Proportional Fair scheduler
+ * \brief Implements the SCHED SAP and CSCHED SAP for a Frequency Domain Blind Equal Throughput scheduler
*
* This class implements the interface defined by the FfMacScheduler abstract class
*/
--- a/src/lte/model/fdmt-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/fdmt-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -17,6 +17,7 @@
*
* Author: Marco Miozzo <marco.miozzo@cttc.es>
* Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#include <ns3/log.h>
@@ -242,7 +243,7 @@
UintegerValue (1000),
MakeUintegerAccessor (&FdMtFfMacScheduler::m_cqiTimersThreshold),
MakeUintegerChecker<uint32_t> ())
- ;
+ ;
return tid;
}
@@ -289,15 +290,14 @@
{
NS_LOG_FUNCTION (this << " RNTI " << params.m_rnti << " txMode " << (uint16_t)params.m_transmissionMode);
std::map <uint16_t,uint8_t>::iterator it = m_uesTxMode.find (params.m_rnti);
- if (it==m_uesTxMode.end ())
- {
- m_uesTxMode.insert (std::pair <uint16_t, double> (params.m_rnti, params.m_transmissionMode));
- }
+ if (it == m_uesTxMode.end ())
+ {
+ m_uesTxMode.insert (std::pair <uint16_t, double> (params.m_rnti, params.m_transmissionMode));
+ }
else
- {
- (*it).second = params.m_transmissionMode;
- }
- return;
+ {
+ (*it).second = params.m_transmissionMode;
+ }
return;
}
@@ -399,7 +399,7 @@
}
-int
+int
FdMtFfMacScheduler::LcActivePerFlow (uint16_t rnti)
{
std::map <LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters>::iterator it;
@@ -429,18 +429,17 @@
// API generated by RLC for triggering the scheduling of a DL subframe
- // evaluate the relative channel quality indicator for each UE per each RBG
+ // evaluate the relative channel quality indicator for each UE per each RBG
// (since we are using allocation type 0 the small unit of allocation is RBG)
// Resource allocation type 0 (see sec 7.1.6.1 of 36.213)
-
+
RefreshDlCqiMaps ();
-
+
int rbgSize = GetRbgSize (m_cschedCellConfig.m_dlBandwidth);
int rbgNum = m_cschedCellConfig.m_dlBandwidth / rbgSize;
std::map <uint16_t, std::vector <uint16_t> > allocationMap;
for (int i = 0; i < rbgNum; i++)
{
-// NS_LOG_DEBUG (this << " ALLOCATION for RBG " << i << " of " << rbgNum);
std::set <uint16_t>::iterator it;
std::set <uint16_t>::iterator itMax = m_flowStatsDl.end ();
double metricMax = 0.0;
@@ -450,7 +449,7 @@
itCqi = m_a30CqiRxed.find ((*it));
std::map <uint16_t,uint8_t>::iterator itTxMode;
itTxMode = m_uesTxMode.find ((*it));
- if (itTxMode == m_uesTxMode.end())
+ if (itTxMode == m_uesTxMode.end ())
{
NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it));
}
@@ -458,7 +457,6 @@
std::vector <uint8_t> sbCqi;
if (itCqi == m_a30CqiRxed.end ())
{
-// NS_LOG_DEBUG (this << " No DL-CQI for this UE " << (*it));
for (uint8_t k = 0; k < nLayer; k++)
{
sbCqi.push_back (1); // start with lowest value
@@ -467,27 +465,25 @@
else
{
sbCqi = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
-// NS_LOG_INFO (this << " CQI " << (uint32_t)cqi);
}
- uint8_t cqi1 = sbCqi.at(0);
+ uint8_t cqi1 = sbCqi.at (0);
uint8_t cqi2 = 1;
if (sbCqi.size () > 1)
{
- cqi2 = sbCqi.at(1);
+ cqi2 = sbCqi.at (1);
}
-
+
if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
{
-// NS_LOG_DEBUG (this << " LC active " << LcActivePerFlow (*it));
if (LcActivePerFlow (*it) > 0)
{
// this UE has data to transmit
double achievableRate = 0.0;
- for (uint8_t k = 0; k < nLayer; k++)
+ for (uint8_t k = 0; k < nLayer; k++)
{
- uint8_t mcs = 0;
+ uint8_t mcs = 0;
if (sbCqi.size () > k)
- {
+ {
mcs = m_amc->GetMcsFromCqi (sbCqi.at (k));
}
else
@@ -497,9 +493,8 @@
}
achievableRate += ((m_amc->GetTbSizeFromMcs (mcs, rbgSize) / 8) / 0.001); // = TB size / TTI
}
- // always select first UE when there are multiple UEs have same SINR
- double metric = achievableRate;
-// NS_LOG_DEBUG (this << " RNTI " << (*it) << " MCS " << (uint32_t)mcs << " achievableRate " << achievableRate << " RCQI " << rcqi);
+
+ double metric = achievableRate;
if (metric > metricMax)
{
@@ -530,7 +525,6 @@
{
(*itMap).second.push_back (i);
}
-// NS_LOG_DEBUG (this << " UE assigned " << (*itMax).first);
}
} // end for RBGs
@@ -549,13 +543,12 @@
newDci.m_rnti = (*itMap).first;
uint16_t lcActives = LcActivePerFlow ((*itMap).first);
-// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
uint16_t RgbPerRnti = (*itMap).second.size ();
std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
itCqi = m_a30CqiRxed.find ((*itMap).first);
std::map <uint16_t,uint8_t>::iterator itTxMode;
itTxMode = m_uesTxMode.find ((*itMap).first);
- if (itTxMode == m_uesTxMode.end())
+ if (itTxMode == m_uesTxMode.end ())
{
NS_FATAL_ERROR ("No Transmission Mode info on user " << (*itMap).first);
}
@@ -567,10 +560,9 @@
{
if ((*itCqi).second.m_higherLayerSelected.size () > (*itMap).second.at (k))
{
-// NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " << (uint16_t)((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (0)) );
- for (uint8_t j = 0; j < nLayer; j++)
+ for (uint8_t j = 0; j < nLayer; j++)
{
- if ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.size ()> j)
+ if ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.size () > j)
{
if (((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (j)) < worstCqi.at (j))
{
@@ -600,13 +592,12 @@
worstCqi.at (j) = 1; // try with lowest MCS in RBG with no info on channel
}
}
-// NS_LOG_DEBUG (this << " CQI " << (uint16_t)worstCqi);
+
for (uint8_t j = 0; j < nLayer; j++)
{
newDci.m_mcs.push_back (m_amc->GetMcsFromCqi (worstCqi.at (j)));
int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (j), RgbPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
newDci.m_tbsSize.push_back (tbSize);
- NS_LOG_DEBUG (this << " MCS " << m_amc->GetMcsFromCqi (worstCqi.at (j)));
}
newDci.m_resAlloc = 0; // only allocation type 0 at this stage
@@ -615,7 +606,6 @@
for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -623,10 +613,10 @@
std::map <LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters>::iterator itBufReq;
for (itBufReq = m_rlcBufferReq.begin (); itBufReq != m_rlcBufferReq.end (); itBufReq++)
{
- if (((*itBufReq).first.m_rnti == (*itMap).first) &&
- (((*itBufReq).second.m_rlcTransmissionQueueSize > 0)
- || ((*itBufReq).second.m_rlcRetransmissionQueueSize > 0)
- || ((*itBufReq).second.m_rlcStatusPduSize > 0) ))
+ if (((*itBufReq).first.m_rnti == (*itMap).first)
+ && (((*itBufReq).second.m_rlcTransmissionQueueSize > 0)
+ || ((*itBufReq).second.m_rlcRetransmissionQueueSize > 0)
+ || ((*itBufReq).second.m_rlcStatusPduSize > 0) ))
{
for (uint8_t j = 0; j < nLayer; j++)
{
@@ -658,7 +648,7 @@
m_schedSapUser->SchedDlConfigInd (ret);
-
+
return;
}
@@ -677,30 +667,7 @@
for (unsigned int i = 0; i < params.m_cqiList.size (); i++)
{
- if ( params.m_cqiList.at (i).m_cqiType == CqiListElement_s::P10 )
- {
- // wideband CQI reporting
- std::map <uint16_t,uint8_t>::iterator it;
- uint16_t rnti = params.m_cqiList.at (i).m_rnti;
- it = m_p10CqiRxed.find (rnti);
- if (it == m_p10CqiRxed.end ())
- {
- // create the new entry
- m_p10CqiRxed.insert ( std::pair<uint16_t, uint8_t > (rnti, params.m_cqiList.at (i).m_wbCqi.at (0)) ); // only codeword 0 at this stage (SISO)
- // generate correspondent timer
- m_p10CqiTimers.insert ( std::pair<uint16_t, uint32_t > (rnti, m_cqiTimersThreshold));
- }
- else
- {
- // update the CQI value and refresh correspondent timer
- (*it).second = params.m_cqiList.at (i).m_wbCqi.at (0);
- // update correspondent timer
- std::map <uint16_t,uint32_t>::iterator itTimers;
- itTimers = m_p10CqiTimers.find (rnti);
- (*itTimers).second = m_cqiTimersThreshold;
- }
- }
- else if ( params.m_cqiList.at (i).m_cqiType == CqiListElement_s::A30 )
+ if ( params.m_cqiList.at (i).m_cqiType == CqiListElement_s::A30 )
{
// subband CQI reporting high layer configured
std::map <uint16_t,SbMeasResult_s>::iterator it;
@@ -766,10 +733,10 @@
FdMtFfMacScheduler::DoSchedUlTriggerReq (const struct FfMacSchedSapProvider::SchedUlTriggerReqParameters& params)
{
NS_LOG_FUNCTION (this << " UL - Frame no. " << (params.m_sfnSf >> 4) << " subframe no. " << (0xF & params.m_sfnSf));
-
+
RefreshUlCqiMaps ();
- std::map <uint16_t,uint32_t>::iterator it;
+ std::map <uint16_t,uint32_t>::iterator it;
int nflows = 0;
for (it = m_ceBsrRxed.begin (); it != m_ceBsrRxed.end (); it++)
@@ -824,7 +791,7 @@
// limit to physical resources last resource assignment
rbPerFlow = m_cschedCellConfig.m_ulBandwidth - rbAllocated;
}
-
+
UlDciListElement_s uldci;
uldci.m_rnti = (*it).first;
uldci.m_rbStart = rbAllocated;
@@ -878,7 +845,7 @@
// NS_LOG_DEBUG (this << " UE " << (*it).first << " minsinr " << minSinr << " -> mcs " << (uint16_t)uldci.m_mcs);
}
-
+
rbAllocated += rbPerFlow;
// store info on allocation for managing ul-cqi interpretation
for (int i = 0; i < rbPerFlow; i++)
@@ -944,29 +911,29 @@
NS_LOG_FUNCTION (this);
std::map <uint16_t,uint32_t>::iterator it;
-
+
for (unsigned int i = 0; i < params.m_macCeList.size (); i++)
- {
- if ( params.m_macCeList.at (i).m_macCeType == MacCeListElement_s::BSR )
{
- // buffer status report
- uint16_t rnti = params.m_macCeList.at (i).m_rnti;
- it = m_ceBsrRxed.find (rnti);
- if (it == m_ceBsrRxed.end ())
- {
- // create the new entry
- uint8_t bsrId = params.m_macCeList.at (i).m_macCeValue.m_bufferStatus.at (0);
- int buffer = BufferSizeLevelBsr::BsrId2BufferSize (bsrId);
- m_ceBsrRxed.insert ( std::pair<uint16_t, uint32_t > (rnti, buffer)); // only 1 buffer status is working now
- }
- else
- {
- // update the CQI value
- (*it).second = BufferSizeLevelBsr::BsrId2BufferSize (params.m_macCeList.at (i).m_macCeValue.m_bufferStatus.at (0));
- }
+ if ( params.m_macCeList.at (i).m_macCeType == MacCeListElement_s::BSR )
+ {
+ // buffer status report
+ uint16_t rnti = params.m_macCeList.at (i).m_rnti;
+ it = m_ceBsrRxed.find (rnti);
+ if (it == m_ceBsrRxed.end ())
+ {
+ // create the new entry
+ uint8_t bsrId = params.m_macCeList.at (i).m_macCeValue.m_bufferStatus.at (0);
+ int buffer = BufferSizeLevelBsr::BsrId2BufferSize (bsrId);
+ m_ceBsrRxed.insert ( std::pair<uint16_t, uint32_t > (rnti, buffer)); // only 1 buffer status is working now
+ }
+ else
+ {
+ // update the CQI value
+ (*it).second = BufferSizeLevelBsr::BsrId2BufferSize (params.m_macCeList.at (i).m_macCeValue.m_bufferStatus.at (0));
+ }
+ }
}
- }
-
+
return;
}
@@ -975,7 +942,7 @@
{
NS_LOG_FUNCTION (this);
// NS_LOG_DEBUG (this << " RX SFNID " << params.m_sfnSf);
- // retrieve the allocation for this subframe
+// retrieve the allocation for this subframe
std::map <uint16_t, std::vector <uint16_t> >::iterator itMap;
std::map <uint16_t, std::vector <double> >::iterator itCqi;
itMap = m_allocationMaps.find (params.m_sfnSf);
@@ -1020,7 +987,7 @@
std::map <uint16_t, uint32_t>::iterator itTimers;
itTimers = m_ueCqiTimers.find ((*itMap).second.at (i));
(*itTimers).second = m_cqiTimersThreshold;
-
+
}
}
@@ -1031,34 +998,11 @@
}
void
-FdMtFfMacScheduler::RefreshDlCqiMaps(void)
+FdMtFfMacScheduler::RefreshDlCqiMaps (void)
{
- // refresh DL CQI P01 Map
- std::map <uint16_t,uint32_t>::iterator itP10 = m_p10CqiTimers.begin ();
- while (itP10!=m_p10CqiTimers.end ())
- {
-// NS_LOG_INFO (this << " P10-CQI for user " << (*itP10).first << " is " << (uint32_t)(*itP10).second << " thr " << (uint32_t)m_cqiTimersThreshold);
- if ((*itP10).second == 0)
- {
- // delete correspondent entries
- std::map <uint16_t,uint8_t>::iterator itMap = m_p10CqiRxed.find ((*itP10).first);
- NS_ASSERT_MSG (itMap != m_p10CqiRxed.end (), " Does not find CQI report for user " << (*itP10).first);
- NS_LOG_INFO (this << " P10-CQI exired for user " << (*itP10).first);
- m_p10CqiRxed.erase (itMap);
- std::map <uint16_t,uint32_t>::iterator temp = itP10;
- itP10++;
- m_p10CqiTimers.erase (temp);
- }
- else
- {
- (*itP10).second--;
- itP10++;
- }
- }
-
// refresh DL CQI A30 Map
std::map <uint16_t,uint32_t>::iterator itA30 = m_a30CqiTimers.begin ();
- while (itA30!=m_a30CqiTimers.end ())
+ while (itA30 != m_a30CqiTimers.end ())
{
// NS_LOG_INFO (this << " A30-CQI for user " << (*itA30).first << " is " << (uint32_t)(*itA30).second << " thr " << (uint32_t)m_cqiTimersThreshold);
if ((*itA30).second == 0)
@@ -1078,17 +1022,17 @@
itA30++;
}
}
-
- return;
+
+ return;
}
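The timer-refresh loops in this scheduler (here and in `RefreshUlCqiMaps`) all rely on the same idiom for erasing from a `std::map` while iterating: copy the iterator, advance the original, then erase through the copy, so the live iterator is never invalidated. Condensed into a standalone sketch:

```cpp
#include <cassert>
#include <map>
#include <stdint.h>

// Decrement every timer; erase entries whose timer reached zero.
// The temporary iterator keeps `it` valid across std::map::erase.
void
DecrementTimers (std::map<uint16_t, uint32_t>& timers)
{
  std::map<uint16_t, uint32_t>::iterator it = timers.begin ();
  while (it != timers.end ())
    {
      if ((*it).second == 0)
        {
          std::map<uint16_t, uint32_t>::iterator temp = it;
          it++;
          timers.erase (temp);  // erase through the copy
        }
      else
        {
          (*it).second--;
          it++;
        }
    }
}
```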
void
-FdMtFfMacScheduler::RefreshUlCqiMaps(void)
+FdMtFfMacScheduler::RefreshUlCqiMaps (void)
{
// refresh UL CQI Map
std::map <uint16_t,uint32_t>::iterator itUl = m_ueCqiTimers.begin ();
- while (itUl!=m_ueCqiTimers.end ())
+ while (itUl != m_ueCqiTimers.end ())
{
// NS_LOG_INFO (this << " UL-CQI for user " << (*itUl).first << " is " << (uint32_t)(*itUl).second << " thr " << (uint32_t)m_cqiTimersThreshold);
if ((*itUl).second == 0)
@@ -1109,8 +1053,8 @@
itUl++;
}
}
-
- return;
+
+ return;
}
void
@@ -1119,7 +1063,7 @@
std::map<LteFlowId_t, FfMacSchedSapProvider::SchedDlRlcBufferReqParameters>::iterator it;
LteFlowId_t flow (rnti, lcid);
it = m_rlcBufferReq.find (flow);
- if (it!=m_rlcBufferReq.end ())
+ if (it != m_rlcBufferReq.end ())
{
// NS_LOG_DEBUG (this << " UE " << rnti << " LC " << (uint16_t)lcid << " txqueue " << (*it).second.m_rlcTransmissionQueueSize << " retxqueue " << (*it).second.m_rlcRetransmissionQueueSize << " status " << (*it).second.m_rlcStatusPduSize << " decrease " << size);
// Update queues: RLC tx order Status, ReTx, Tx
@@ -1134,7 +1078,7 @@
(*it).second.m_rlcStatusPduSize -= size;
return;
}
- // update retransmission queue
+ // update retransmission queue
if ((*it).second.m_rlcRetransmissionQueueSize <= size)
{
size -= (*it).second.m_rlcRetransmissionQueueSize;
@@ -1166,11 +1110,11 @@
void
FdMtFfMacScheduler::UpdateUlRlcBufferInfo (uint16_t rnti, uint16_t size)
{
-
+
std::map <uint16_t,uint32_t>::iterator it = m_ceBsrRxed.find (rnti);
- if (it!=m_ceBsrRxed.end ())
+ if (it != m_ceBsrRxed.end ())
{
-// NS_LOG_DEBUG (this << " UE " << rnti << " size " << size << " BSR " << (*it).second);
+// NS_LOG_DEBUG (this << " UE " << rnti << " size " << size << " BSR " << (*it).second);
if ((*it).second >= size)
{
(*it).second -= size;
@@ -1184,7 +1128,7 @@
{
NS_LOG_ERROR (this << " Does not find BSR report info of UE " << rnti);
}
-
+
}
void
--- a/src/lte/model/fdmt-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/fdmt-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -17,6 +17,7 @@
*
* Author: Marco Miozzo <marco.miozzo@cttc.es>
* Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifndef FDMT_FF_MAC_SCHEDULER_H
@@ -78,7 +79,7 @@
friend class FdMtSchedulerMemberCschedSapProvider;
friend class FdMtSchedulerMemberSchedSapProvider;
-
+
void TransmissionModeConfigurationUpdate (uint16_t rnti, uint8_t txMode);
private:
@@ -130,10 +131,10 @@
int LcActivePerFlow (uint16_t rnti);
double EstimateUlSinr (uint16_t rnti, uint16_t rb);
-
- void RefreshDlCqiMaps(void);
- void RefreshUlCqiMaps(void);
-
+
+ void RefreshDlCqiMaps (void);
+ void RefreshUlCqiMaps (void);
+
void UpdateDlRlcBufferInfo (uint16_t rnti, uint8_t lcid, uint16_t size);
void UpdateUlRlcBufferInfo (uint16_t rnti, uint16_t size);
Ptr<LteAmc> m_amc;
@@ -156,15 +157,6 @@
/*
- * Map of UE's DL CQI P01 received
- */
- std::map <uint16_t,uint8_t> m_p10CqiRxed;
- /*
- * Map of UE's timers on DL CQI P01 received
- */
- std::map <uint16_t,uint32_t> m_p10CqiTimers;
-
- /*
* Map of UE's DL CQI A30 received
*/
std::map <uint16_t,SbMeasResult_s> m_a30CqiRxed;
@@ -204,7 +196,7 @@
FfMacCschedSapProvider::CschedCellConfigReqParameters m_cschedCellConfig;
uint16_t m_nextRntiUl; // RNTI of the next user to be served next scheduling in UL
-
+
uint32_t m_cqiTimersThreshold; // # of TTIs for which a CQI can be considered valid
std::map <uint16_t,uint8_t> m_uesTxMode; // txMode of the UEs
--- a/src/lte/model/fdtbfq-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/fdtbfq-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,9 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifdef __FreeBSD__
@@ -25,6 +26,7 @@
#include <ns3/log.h>
#include <ns3/pointer.h>
+#include <ns3/integer.h>
#include <ns3/simulator.h>
#include <ns3/lte-amc.h>
@@ -247,6 +249,26 @@
UintegerValue (1000),
MakeUintegerAccessor (&FdTbfqFfMacScheduler::m_cqiTimersThreshold),
MakeUintegerChecker<uint32_t> ())
+ .AddAttribute ("DebtLimit",
+ "Flow debt limit (default -625000 bytes)",
+ IntegerValue (-625000),
+ MakeIntegerAccessor (&FdTbfqFfMacScheduler::m_debtLimit),
+ MakeIntegerChecker<int> ())
+ .AddAttribute ("CreditLimit",
+ "Flow credit limit (default 625000 bytes)",
+ UintegerValue (625000),
+ MakeUintegerAccessor (&FdTbfqFfMacScheduler::m_creditLimit),
+ MakeUintegerChecker<uint32_t> ())
+ .AddAttribute ("TokenPoolSize",
+ "The maximum value of flow token pool (default 1 bytes)",
+ UintegerValue (1),
+ MakeUintegerAccessor (&FdTbfqFfMacScheduler::m_tokenPoolSize),
+ MakeUintegerChecker<uint32_t> ())
+ .AddAttribute ("CreditableThreshold",
+ "Threshold of flow credit (default 0 bytes)",
+ UintegerValue (0),
+ MakeUintegerAccessor (&FdTbfqFfMacScheduler::m_creditableThreshold),
+ MakeUintegerChecker<uint32_t> ())
;
return tid;
}
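The four new attributes parameterize the per-flow token bucket that TBFQ maintains: each TTI a flow earns `tokenGenerationRate` bytes of tokens, its pool is capped at `TokenPoolSize`, and overflow is credited to the shared token bank (bounded by the credit and debt limits). A sketch of the refill step under those assumptions (hypothetical helper; the scheduler inlines this logic):

```cpp
#include <cassert>
#include <stdint.h>

// Per-TTI token refill for one TBFQ flow: tokens beyond the pool
// cap spill into the shared token bank.
void
RefillTokens (uint32_t rate, uint32_t poolCap,
              uint32_t& pool, uint64_t& bank)
{
  pool += rate;
  if (pool > poolCap)
    {
      bank += pool - poolCap;  // excess credited to the bank
      pool = poolCap;
    }
}
```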
@@ -303,7 +325,6 @@
(*it).second = params.m_transmissionMode;
}
return;
- return;
}
void
@@ -326,25 +347,23 @@
flowStatsDl.packetArrivalRate = 0;
flowStatsDl.tokenGenerationRate = mbrDlInBytes;
flowStatsDl.tokenPoolSize = 0;
- flowStatsDl.maxTokenPoolSize = 1;
+ flowStatsDl.maxTokenPoolSize = m_tokenPoolSize;
flowStatsDl.counter = 0;
- flowStatsDl.burstCredit = 625000; // bytes
- flowStatsDl.debtLimit = -625000; // bytes
- flowStatsDl.creditableThreshold = 0;
+ flowStatsDl.burstCredit = m_creditLimit; // bytes
+ flowStatsDl.debtLimit = m_debtLimit; // bytes
+ flowStatsDl.creditableThreshold = m_creditableThreshold;
m_flowStatsDl.insert (std::pair<uint16_t, fdtbfqsFlowPerf_t> (params.m_rnti, flowStatsDl));
fdtbfqsFlowPerf_t flowStatsUl;
flowStatsUl.flowStart = Simulator::Now ();
flowStatsUl.packetArrivalRate = 0;
flowStatsUl.tokenGenerationRate = mbrUlInBytes;
flowStatsUl.tokenPoolSize = 0;
- flowStatsUl.maxTokenPoolSize = 1;
+ flowStatsUl.maxTokenPoolSize = m_tokenPoolSize;
flowStatsUl.counter = 0;
- flowStatsUl.burstCredit = 625000; // bytes
- flowStatsUl.debtLimit = -625000; // bytes
- flowStatsUl.creditableThreshold = 0;
+ flowStatsUl.burstCredit = m_creditLimit; // bytes
+ flowStatsUl.debtLimit = m_debtLimit; // bytes
+ flowStatsUl.creditableThreshold = m_creditableThreshold;
m_flowStatsUl.insert (std::pair<uint16_t, fdtbfqsFlowPerf_t> (params.m_rnti, flowStatsUl));
-
- NS_LOG_DEBUG(" UE "<<params.m_rnti<<" MaximulBitrateDl "<<params.m_logicalChannelConfigList.at(i).m_eRabMaximulBitrateDl << " gbrDlInbytes "<<mbrDlInBytes);
}
else
{
@@ -759,7 +778,6 @@
// creating the correspondent DCIs
FfMacSchedSapUser::SchedDlConfigIndParameters ret;
std::map <uint16_t, std::vector <uint16_t> >::iterator itMap = allocationMap.begin ();
-
while (itMap != allocationMap.end ())
{
// create new BuildDataListElement_s for this LC
@@ -771,7 +789,6 @@
newDci.m_rnti = (*itMap).first;
uint16_t lcActives = LcActivePerFlow ((*itMap).first);
-// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
uint16_t RgbPerRnti = (*itMap).second.size ();
std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
itCqi = m_a30CqiRxed.find ((*itMap).first);
@@ -789,7 +806,6 @@
{
if ((*itCqi).second.m_higherLayerSelected.size () > (*itMap).second.at (k))
{
- //NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " << (uint16_t)((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (0)) );
for (uint8_t j = 0; j < nLayer; j++)
{
if ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.size ()> j)
@@ -804,7 +820,6 @@
// no CQI for this layer of this suband -> worst one
worstCqi.at (j) = 1;
}
- //NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " <<(uint16_t)j <<" "<< (uint16_t)worstCqi.at(j));
}
}
else
@@ -823,26 +838,21 @@
worstCqi.at (j) = 1; // try with lowest MCS in RBG with no info on channel
}
}
-// NS_LOG_DEBUG (this << " CQI " << (uint16_t)worstCqi);
uint32_t bytesTxed = 0;
for (uint8_t j = 0; j < nLayer; j++)
{
newDci.m_mcs.push_back (m_amc->GetMcsFromCqi (worstCqi.at (j)));
int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (j), RgbPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
- // NS_LOG_DEBUG ( Simulator::Now() << " Allocateuser " << newEl.m_rnti << " tbSize "<< tbSize << " RBG " << RgbPerRnti << " mcs " << (uint16_t) newDci.m_mcs.at (j) << " cqi " << (uint16_t)worstCqi.at(j));
newDci.m_tbsSize.push_back (tbSize);
bytesTxed += tbSize;
- //NS_LOG_DEBUG (this << " MCS " << m_amc->GetMcsFromCqi (worstCqi.at (j)));
}
newDci.m_resAlloc = 0; // only allocation type 0 at this stage
newDci.m_rbBitmap = 0; // TBD (32 bit bitmap see 7.1.6 of 36.213)
uint32_t rbgMask = 0;
- //for (uint16_t k = 0; k < (*itMap).second.size (); k++)
- for (uint16_t k = 0; k < RgbPerRnti; k++)
+ for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -860,8 +870,6 @@
RlcPduListElement_s newRlcEl;
newRlcEl.m_logicalChannelIdentity = (*itBufReq).first.m_lcId;
newRlcEl.m_size = newDci.m_tbsSize.at (j) / lcActives;
- //NS_LOG_DEBUG ( Simulator::Now() << " LCID " << (uint32_t) newRlcEl.m_logicalChannelIdentity << " size " << newRlcEl.m_size << " layer " << (uint16_t)j);
-
newRlcPduLe.push_back (newRlcEl);
UpdateDlRlcBufferInfo (newDci.m_rnti, newRlcEl.m_logicalChannelIdentity, newRlcEl.m_size);
}
--- a/src/lte/model/fdtbfq-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/fdtbfq-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,9 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifndef FDTBFQ_FF_MAC_SCHEDULER_H
@@ -60,7 +61,7 @@
*/
/**
* \ingroup FdTbfqFfMacScheduler
- * \brief Implements the SCHED SAP and CSCHED SAP for a Token Bank Fair Queue scheduler
+ * \brief Implements the SCHED SAP and CSCHED SAP for a Frequency Domain Token Bank Fair Queue scheduler
*
* This class implements the interface defined by the FfMacScheduler abstract class
*/
@@ -223,6 +224,15 @@
std::map <uint16_t,uint8_t> m_uesTxMode; // txMode of the UEs
uint64_t bankSize; // the number of bytes in token bank
+
+ int m_debtLimit; // flow debt limit (byte)
+
+ uint32_t m_creditLimit; // flow credit limit (byte)
+
+ uint32_t m_tokenPoolSize; // maximum size of token pool (byte)
+
+ uint32_t m_creditableThreshold; // threshold of flow credit
+
};
} // namespace ns3
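The new members added to `FdTbfqFfMacScheduler` above (`m_debtLimit`, `m_creditLimit`, `m_tokenPoolSize`, `m_creditableThreshold`) parameterize the token bank mechanism alongside the existing `bankSize`. The following is an illustrative sketch, not the ns-3 implementation: it shows one plausible per-TTI interaction between a flow's token pool, the shared bank, and the credit/debt limits; all names and the exact accounting are assumptions for exposition:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Hypothetical per-flow TBFQ state (illustrative only).
struct TbfqFlow
{
  int64_t tokens;       // current token pool occupancy (bytes)
  int64_t debt;         // bytes borrowed from the shared bank
  uint32_t tokenRate;   // token generation rate per TTI (bytes)
};

// Refill the flow's pool each TTI; tokens beyond the pool size spill
// into the shared bank (the bankSize member of the scheduler).
void RefillTokens (TbfqFlow &flow, uint64_t &bankSize, uint32_t tokenPoolSize)
{
  flow.tokens += flow.tokenRate;
  if (flow.tokens > (int64_t) tokenPoolSize)
    {
      bankSize += flow.tokens - tokenPoolSize; // excess goes to the bank
      flow.tokens = tokenPoolSize;
    }
}

// Borrow up to creditLimit bytes from the bank, refusing flows that
// have already reached their debt limit.
uint32_t BorrowFromBank (TbfqFlow &flow, uint64_t &bankSize,
                         uint32_t creditLimit, int debtLimit)
{
  if (flow.debt >= debtLimit)
    {
      return 0; // flow has exhausted its debt limit
    }
  uint32_t loan = std::min<uint64_t> (creditLimit, bankSize);
  bankSize -= loan;
  flow.debt += loan;
  return loan;
}
```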
--- a/src/lte/model/pf-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/pf-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -320,13 +320,13 @@
pfsFlowPerf_t flowStatsDl;
flowStatsDl.flowStart = Simulator::Now ();
flowStatsDl.totalBytesTransmitted = 0;
- flowStatsDl.lastTtiBytesTrasmitted = 0;
+ flowStatsDl.lastTtiBytesTransmitted = 0;
flowStatsDl.lastAveragedThroughput = 1;
m_flowStatsDl.insert (std::pair<uint16_t, pfsFlowPerf_t> (params.m_rnti, flowStatsDl));
pfsFlowPerf_t flowStatsUl;
flowStatsUl.flowStart = Simulator::Now ();
flowStatsUl.totalBytesTransmitted = 0;
- flowStatsUl.lastTtiBytesTrasmitted = 0;
+ flowStatsUl.lastTtiBytesTransmitted = 0;
flowStatsUl.lastAveragedThroughput = 1;
m_flowStatsUl.insert (std::pair<uint16_t, pfsFlowPerf_t> (params.m_rnti, flowStatsUl));
}
@@ -550,7 +550,7 @@
std::map <uint16_t, pfsFlowPerf_t>::iterator itStats;
for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
{
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
// generate the transmission opportunities by grouping the RBGs of the same RNTI and
@@ -678,8 +678,8 @@
it = m_flowStatsDl.find ((*itMap).first);
if (it != m_flowStatsDl.end ())
{
- (*it).second.lastTtiBytesTrasmitted = bytesTxed;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
+ (*it).second.lastTtiBytesTransmitted = bytesTxed;
+// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTransmitted);
}
@@ -696,12 +696,12 @@
// update UEs stats
for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
{
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
m_schedSapUser->SchedDlConfigInd (ret);
@@ -954,8 +954,8 @@
itStats = m_flowStatsUl.find ((*it).first);
if (itStats != m_flowStatsUl.end ())
{
- (*itStats).second.lastTtiBytesTrasmitted = uldci.m_tbSize;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
+ (*itStats).second.lastTtiBytesTransmitted = uldci.m_tbSize;
+// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTransmitted);
}
@@ -985,12 +985,12 @@
// update UEs stats
for (itStats = m_flowStatsUl.begin (); itStats != m_flowStatsUl.end (); itStats++)
{
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
m_allocationMaps.insert (std::pair <uint16_t, std::vector <uint16_t> > (params.m_sfnSf, rbgAllocationMap));
m_schedSapUser->SchedUlConfigInd (ret);
--- a/src/lte/model/pf-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/pf-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -42,7 +42,7 @@
{
Time flowStart;
unsigned long totalBytesTransmitted;
- unsigned int lastTtiBytesTrasmitted;
+ unsigned int lastTtiBytesTransmitted;
double lastAveragedThroughput;
};
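The renamed `lastTtiBytesTransmitted` field feeds the exponentially weighted average in the .cc hunks above (eq. 12.3 of Sec 12.3.1.2 of "LTE - The UMTS Long Term Evolution", Wiley). A self-contained sketch of that update, with the bytes-per-TTI count converted to bytes/s via the 1 ms TTI duration:

```cpp
#include <cassert>
#include <cmath>

// EWMA throughput update as in the patch above: the per-TTI byte count
// is divided by the 0.001 s TTI to get an instantaneous rate, then
// blended into the running average with weight 1/timeWindow.
double UpdateAveragedThroughput (double lastAveraged,
                                 unsigned int lastTtiBytes,
                                 double timeWindow)
{
  double instantRate = lastTtiBytes / 0.001; // bytes this TTI -> bytes/s
  return (1.0 - 1.0 / timeWindow) * lastAveraged
         + (1.0 / timeWindow) * instantRate;
}
```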
--- a/src/lte/model/pss-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/pss-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,9 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifdef __FreeBSD__
@@ -28,6 +29,7 @@
#include <ns3/simulator.h>
#include <ns3/lte-amc.h>
+#include <ns3/string.h>
#include <ns3/pss-ff-mac-scheduler.h>
NS_LOG_COMPONENT_DEFINE ("PssFfMacScheduler");
@@ -247,6 +249,16 @@
UintegerValue (1000),
MakeUintegerAccessor (&PssFfMacScheduler::m_cqiTimersThreshold),
MakeUintegerChecker<uint32_t> ())
+ .AddAttribute ("PssFdSchedulerType",
+ "FD scheduler in PSS (default value is PFsch)",
+ StringValue ("PFsch"),
+ MakeStringAccessor (&PssFfMacScheduler::m_fdSchedulerType),
+ MakeStringChecker ())
+ .AddAttribute ("nMux",
+ "The number of UEs selected by the TD scheduler (default value is 0)",
+ UintegerValue (0),
+ MakeUintegerAccessor (&PssFfMacScheduler::m_nMux),
+ MakeUintegerChecker<uint32_t> ())
;
return tid;
}
@@ -303,7 +315,6 @@
(*it).second = params.m_transmissionMode;
}
return;
- return;
}
void
@@ -321,12 +332,10 @@
double tbrDlInBytes = params.m_logicalChannelConfigList.at(i).m_eRabGuaranteedBitrateDl / 8; // byte/s
double tbrUlInBytes = params.m_logicalChannelConfigList.at(i).m_eRabGuaranteedBitrateUl / 8; // byte/s
- NS_LOG_DEBUG(this<<" UE "<<(*it).first<<" GBR "<<tbrDlInBytes);
-
pssFlowPerf_t flowStatsDl;
flowStatsDl.flowStart = Simulator::Now ();
flowStatsDl.totalBytesTransmitted = 0;
- flowStatsDl.lastTtiBytesTrasmitted = 0;
+ flowStatsDl.lastTtiBytesTransmitted = 0;
flowStatsDl.lastAveragedThroughput = 1;
flowStatsDl.secondLastAveragedThroughput = 1;
flowStatsDl.targetThroughput = tbrDlInBytes;
@@ -334,7 +343,7 @@
pssFlowPerf_t flowStatsUl;
flowStatsUl.flowStart = Simulator::Now ();
flowStatsUl.totalBytesTransmitted = 0;
- flowStatsUl.lastTtiBytesTrasmitted = 0;
+ flowStatsUl.lastTtiBytesTransmitted = 0;
flowStatsUl.lastAveragedThroughput = 1;
flowStatsUl.secondLastAveragedThroughput = 1;
flowStatsUl.targetThroughput = tbrUlInBytes;
@@ -535,13 +544,19 @@
// sorting UE in ueSet1 and ueSet1 in descending order based on their metric value
std::sort (ueSet1.rbegin (), ueSet1.rend ());
std::sort (ueSet2.rbegin (), ueSet2.rend ());
-
+
std::map <uint16_t, pssFlowPerf_t> tdUeSet;
- int nMux;
- if (ueSet1.size() + ueSet2.size() <=2 )
- nMux = 1;
+ uint32_t nMux;
+ if ( m_nMux > 0)
+ nMux = m_nMux;
else
- nMux = (int)((ueSet1.size() + ueSet2.size()) / 2) ; // TD scheduler only transfers half selected UE per RTT to TD scheduler
+ {
+ // select half of the UEs
+ if (ueSet1.size() + ueSet2.size() <=2 )
+ nMux = 1;
+ else
+ nMux = (int)((ueSet1.size() + ueSet2.size()) / 2) ; // TD scheduler only transfers half of the selected UEs per TTI to the FD scheduler
+ }
for (it = m_flowStatsDl.begin (); it != m_flowStatsDl.end (); it--)
{
std::vector <std::pair<double, uint16_t> >::iterator itSet;
@@ -569,257 +584,260 @@
} // end of m_flowStatsDl
-#if 1
- // FD scheduler: Carrier over Interference to Average (CoItA)
- std::map < uint16_t, uint8_t > sbCqiSum;
- for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
+ if ( m_fdSchedulerType.compare("CoItA") == 0)
{
- uint8_t sum = 0;
- for (int i = 0; i < rbgNum; i++)
- {
- std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
- itCqi = m_a30CqiRxed.find ((*it).first);
- std::map <uint16_t,uint8_t>::iterator itTxMode;
- itTxMode = m_uesTxMode.find ((*it).first);
- if (itTxMode == m_uesTxMode.end())
- {
- NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
- }
- int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
- std::vector <uint8_t> sbCqis;
- if (itCqi == m_a30CqiRxed.end ())
- {
- for (uint8_t k = 0; k < nLayer; k++)
+ // FD scheduler: Carrier over Interference to Average (CoItA)
+ std::map < uint16_t, uint8_t > sbCqiSum;
+ for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
+ {
+ uint8_t sum = 0;
+ for (int i = 0; i < rbgNum; i++)
+ {
+ std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
+ itCqi = m_a30CqiRxed.find ((*it).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*it).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ std::vector <uint8_t> sbCqis;
+ if (itCqi == m_a30CqiRxed.end ())
{
- sbCqis.push_back (1); // start with lowest value
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ sbCqis.push_back (1); // start with lowest value
+ }
}
- }
- else
- {
- sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
- }
+ else
+ {
+ sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
+ }
- uint8_t cqi1 = sbCqis.at(0);
- uint8_t cqi2 = 1;
- if (sbCqis.size () > 1)
- {
- cqi2 = sbCqis.at(1);
- }
+ uint8_t cqi1 = sbCqis.at(0);
+ uint8_t cqi2 = 1;
+ if (sbCqis.size () > 1)
+ {
+ cqi2 = sbCqis.at(1);
+ }
- uint8_t sbCqi;
- if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
- {
- for (uint8_t k = 0; k < nLayer; k++)
+ uint8_t sbCqi;
+ if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
{
- if (sbCqis.size () > k)
- {
- sbCqi = sbCqis.at(k);
- }
- else
+ for (uint8_t k = 0; k < nLayer; k++)
{
- // no info on this subband
- sbCqi = 0;
- }
- sum += sbCqi;
- }
- } // end if cqi
- }// end of rbgNum
+ if (sbCqis.size () > k)
+ {
+ sbCqi = sbCqis.at(k);
+ }
+ else
+ {
+ // no info on this subband
+ sbCqi = 0;
+ }
+ sum += sbCqi;
+ }
+ } // end if cqi
+ }// end of rbgNum
- sbCqiSum.insert(std::pair<uint16_t, uint8_t> ((*it).first, sum));
- }// end tdUeSet
+ sbCqiSum.insert(std::pair<uint16_t, uint8_t> ((*it).first, sum));
+ }// end tdUeSet
+
+ for (int i = 0; i < rbgNum; i++)
+ {
+ std::map <uint16_t, pssFlowPerf_t>::iterator itMax = tdUeSet.end ();
+ double metricMax = 0.0;
+ for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
+ {
+ // calculate PF weight
+ double weight = (*it).second.targetThroughput / (*it).second.lastAveragedThroughput;
+ if (weight < 1.0)
+ weight = 1.0;
+
+ std::map < uint16_t, uint8_t>::iterator itSbCqiSum;
+ itSbCqiSum = sbCqiSum.find((*it).first);
- for (int i = 0; i < rbgNum; i++)
- {
- std::map <uint16_t, pssFlowPerf_t>::iterator itMax = tdUeSet.end ();
- double metricMax = 0.0;
- for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
- {
- // calculate PF weigth
- double weight = (*it).second.targetThroughput / (*it).second.lastAveragedThroughput;
- if (weight < 1.0)
- weight = 1.0;
-
- std::map < uint16_t, uint8_t>::iterator itSbCqiSum;
- itSbCqiSum = sbCqiSum.find((*it).first);
+ std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
+ itCqi = m_a30CqiRxed.find ((*it).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*it).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ std::vector <uint8_t> sbCqis;
+ if (itCqi == m_a30CqiRxed.end ())
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ sbCqis.push_back (1); // start with lowest value
+ }
+ }
+ else
+ {
+ sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
+ }
- std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
- itCqi = m_a30CqiRxed.find ((*it).first);
- std::map <uint16_t,uint8_t>::iterator itTxMode;
- itTxMode = m_uesTxMode.find ((*it).first);
- if (itTxMode == m_uesTxMode.end())
+ uint8_t cqi1 = sbCqis.at(0);
+ uint8_t cqi2 = 1;
+ if (sbCqis.size () > 1)
+ {
+ cqi2 = sbCqis.at(1);
+ }
+
+ uint8_t sbCqi;
+ double colMetric = 0.0;
+ if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ if (sbCqis.size () > k)
+ {
+ sbCqi = sbCqis.at(k);
+ }
+ else
+ {
+ // no info on this subband
+ sbCqi = 0;
+ }
+ colMetric += (double)sbCqi / (double)(*itSbCqiSum).second;
+ }
+ } // end if cqi
+
+ double metric;
+ if (colMetric != 0)
+ metric= weight * colMetric;
+ else
+ metric = 1;
+
+ if (metric > metricMax )
+ {
+ metricMax = metric;
+ itMax = it;
+ }
+ } // end of tdUeSet
+
+ if (itMax == m_flowStatsDl.end ())
{
- NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
- }
- int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
- std::vector <uint8_t> sbCqis;
- if (itCqi == m_a30CqiRxed.end ())
- {
- for (uint8_t k = 0; k < nLayer; k++)
- {
- sbCqis.push_back (1); // start with lowest value
- }
+ // no UE available for downlink
+ return;
}
else
{
- sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
- }
-
- uint8_t cqi1 = sbCqis.at(0);
- uint8_t cqi2 = 1;
- if (sbCqis.size () > 1)
- {
- cqi2 = sbCqis.at(1);
+ // assign all RBGs to this UE
+ std::vector <uint16_t> tempMap;
+ for (int i = 0; i < rbgNum; i++)
+ {
+ tempMap.push_back (i);
+ }
+ allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax).first, tempMap));
}
-
- uint8_t sbCqi;
- double colMetric = 0.0;
- if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
- {
- for (uint8_t k = 0; k < nLayer; k++)
- {
- if (sbCqis.size () > k)
- {
- sbCqi = sbCqis.at(k);
- }
- else
- {
- // no info on this subband
- sbCqi = 0;
- }
- colMetric += (double)sbCqi / (double)(*itSbCqiSum).second;
- }
- } // end if cqi
+ }// end of rbgNum
- double metric;
- if (colMetric != 0)
- metric= weight * colMetric;
- else
- metric = 1;
-
- if (metric > metricMax )
- {
- metricMax = metric;
- itMax = it;
- }
- } // end of tdUeSet
-
- if (itMax == m_flowStatsDl.end ())
+ }// end of CoIta
+
+ if ( m_fdSchedulerType.compare("PFsch") == 0)
+ {
+ // FD scheduler: Proportional Fair scheduled (PFsch)
+ for (int i = 0; i < rbgNum; i++)
{
- // no UE available for downlink
- return;
- }
- else
- {
- // assign all RBGs to this UE
- std::vector <uint16_t> tempMap;
- for (int i = 0; i < rbgNum; i++)
- {
- tempMap.push_back (i);
- }
- allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax).first, tempMap));
- }
- }// end of rbgNum
+ std::map <uint16_t, pssFlowPerf_t>::iterator itMax = tdUeSet.end ();
+ double metricMax = 0.0;
+ for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
+ {
+ // calculate PF weight
+ double weight = (*it).second.targetThroughput / (*it).second.lastAveragedThroughput;
+ if (weight < 1.0)
+ weight = 1.0;
-#endif // end of CoItA
+ std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
+ itCqi = m_a30CqiRxed.find ((*it).first);
+ std::map <uint16_t,uint8_t>::iterator itTxMode;
+ itTxMode = m_uesTxMode.find ((*it).first);
+ if (itTxMode == m_uesTxMode.end())
+ {
+ NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
+ }
+ int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
+ std::vector <uint8_t> sbCqis;
+ if (itCqi == m_a30CqiRxed.end ())
+ {
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ sbCqis.push_back (1); // start with lowest value
+ }
+ }
+ else
+ {
+ sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
+ }
-#if 0
- // FD scheduler: Proportional Fair scheduled (PFsch)
- for (int i = 0; i < rbgNum; i++)
- {
- std::map <uint16_t, pssFlowPerf_t>::iterator itMax = tdUeSet.end ();
- double metricMax = 0.0;
- for (it = tdUeSet.begin (); it != tdUeSet.end (); it++)
- {
- // calculate PF weigth
- double weight = (*it).second.targetThroughput / (*it).second.lastAveragedThroughput;
- if (weight < 1.0)
- weight = 1.0;
-
- std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
- itCqi = m_a30CqiRxed.find ((*it).first);
- std::map <uint16_t,uint8_t>::iterator itTxMode;
- itTxMode = m_uesTxMode.find ((*it).first);
- if (itTxMode == m_uesTxMode.end())
+ uint8_t cqi1 = sbCqis.at(0);
+ uint8_t cqi2 = 1;
+ if (sbCqis.size () > 1)
+ {
+ cqi2 = sbCqis.at(1);
+ }
+
+ double schMetric = 0.0;
+ if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
+ {
+ double achievableRate = 0.0;
+ for (uint8_t k = 0; k < nLayer; k++)
+ {
+ uint8_t mcs = 0;
+ if (sbCqis.size () > k)
+ {
+ mcs = m_amc->GetMcsFromCqi (sbCqis.at (k));
+ }
+ else
+ {
+ // no info on this subband -> worst MCS
+ mcs = 0;
+ }
+ achievableRate += ((m_amc->GetTbSizeFromMcs (mcs, rbgSize) / 8) / 0.001); // = TB size / TTI
+ }
+ schMetric = achievableRate / (*it).second.secondLastAveragedThroughput;
+ } // end if cqi
+
+ double metric;
+ metric= weight * schMetric;
+
+ if (metric > metricMax )
+ {
+ metricMax = metric;
+ itMax = it;
+ }
+ } // end of tdUeSet
+
+ if (itMax == m_flowStatsDl.end ())
{
- NS_FATAL_ERROR ("No Transmission Mode info on user " << (*it).first);
- }
- int nLayer = TransmissionModesLayers::TxMode2LayerNum ((*itTxMode).second);
- std::vector <uint8_t> sbCqis;
- if (itCqi == m_a30CqiRxed.end ())
- {
- for (uint8_t k = 0; k < nLayer; k++)
- {
- sbCqis.push_back (1); // start with lowest value
- }
+ // no UE available for downlink
+ return;
}
else
{
- sbCqis = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
- }
-
- uint8_t cqi1 = sbCqis.at(0);
- uint8_t cqi2 = 1;
- if (sbCqis.size () > 1)
- {
- cqi2 = sbCqis.at(1);
- }
-
- double schMetric = 0.0;
- if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
- {
- double achievableRate = 0.0;
- for (uint8_t k = 0; k < nLayer; k++)
+ // assign all RBGs to this UE
+ std::vector <uint16_t> tempMap;
+ for (int i = 0; i < rbgNum; i++)
{
- uint8_t mcs = 0;
- if (sbCqis.size () > k)
- {
- mcs = m_amc->GetMcsFromCqi (sbCqis.at (k));
- }
- else
- {
- // no info on this subband -> worst MCS
- mcs = 0;
- }
- achievableRate += ((m_amc->GetTbSizeFromMcs (mcs, rbgSize) / 8) / 0.001); // = TB size / TTI
- }
- schMetric = achievableRate / (*it).second.secondLastAveragedThroughput;
- } // end if cqi
-
- double metric;
- metric= weight * schMetric;
+ tempMap.push_back (i);
+ }
+ allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax).first, tempMap));
+ }
+
+ }// end of rbgNum
- if (metric > metricMax )
- {
- metricMax = metric;
- itMax = it;
- }
- } // end of tdUeSet
+ } // end of PFsch
- if (itMax == m_flowStatsDl.end ())
- {
- // no UE available for downlink
- return;
- }
- else
- {
- // assign all RBGs to this UE
- std::vector <uint16_t> tempMap;
- for (int i = 0; i < rbgNum; i++)
- {
- tempMap.push_back (i);
- }
- allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax).first, tempMap));
- }
-
- }// end of rbgNum
-
-#endif // end of PFsch
// reset TTI stats of users
std::map <uint16_t, pssFlowPerf_t>::iterator itStats;
for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
{
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
// generate the transmission opportunities by grouping the RBGs of the same RNTI and
@@ -837,7 +855,6 @@
newDci.m_rnti = (*itMap).first;
uint16_t lcActives = LcActivePerFlow ((*itMap).first);
-// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
uint16_t rbgPerRnti = (*itMap).second.size ();
std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
itCqi = m_a30CqiRxed.find ((*itMap).first);
@@ -855,7 +872,6 @@
{
if ((*itCqi).second.m_higherLayerSelected.size () > (*itMap).second.at (k))
{
-// NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " << (uint16_t)((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (0)) );
for (uint8_t j = 0; j < nLayer; j++)
{
if ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.size ()> j)
@@ -888,7 +904,6 @@
worstCqi.at (j) = 1; // try with lowest MCS in RBG with no info on channel
}
}
-// NS_LOG_DEBUG (this << " CQI " << (uint16_t)worstCqi);
uint32_t bytesTxed = 0;
for (uint8_t j = 0; j < nLayer; j++)
{
@@ -896,9 +911,6 @@
int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (j), rbgPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
newDci.m_tbsSize.push_back (tbSize);
bytesTxed += tbSize;
- //NS_LOG_DEBUG ( Simulator::Now() << " Allocate user " << newEl.m_rnti << " tbSize "<< tbSize << " RBGs " << rbgPerRnti << " mcs " << (uint16_t) newDci.m_mcs.at (j) << " layers " << nLayer);
-
- //NS_LOG_DEBUG (this << " MCS " << m_amc->GetMcsFromCqi (worstCqi.at (j)));
}
newDci.m_resAlloc = 0; // only allocation type 0 at this stage
@@ -907,7 +919,6 @@
for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -925,7 +936,6 @@
RlcPduListElement_s newRlcEl;
newRlcEl.m_logicalChannelIdentity = (*itBufReq).first.m_lcId;
newRlcEl.m_size = newDci.m_tbsSize.at (j) / lcActives;
- //NS_LOG_DEBUG (this << " LCID " << (uint32_t) newRlcEl.m_logicalChannelIdentity << " size " << newRlcEl.m_size << " layer " << (uint16_t)j);
newRlcPduLe.push_back (newRlcEl);
UpdateDlRlcBufferInfo (newDci.m_rnti, newRlcEl.m_logicalChannelIdentity, newRlcEl.m_size);
}
@@ -949,10 +959,7 @@
it = m_flowStatsDl.find ((*itMap).first);
if (it != m_flowStatsDl.end ())
{
- (*it).second.lastTtiBytesTrasmitted = bytesTxed;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
-
-
+ (*it).second.lastTtiBytesTransmitted = bytesTxed;
}
else
{
@@ -971,15 +978,13 @@
itUeScheduleted = tdUeSet.find((*itStats).first);
if (itUeScheduleted != tdUeSet.end())
{
- (*itStats).second.secondLastAveragedThroughput = ((1.0 - (1 / m_timeWindow)) * (*itStats).second.secondLastAveragedThroughput) + ((1 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ (*itStats).second.secondLastAveragedThroughput = ((1.0 - (1 / m_timeWindow)) * (*itStats).second.secondLastAveragedThroughput) + ((1 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
}
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
- // NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
-// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
@@ -1233,8 +1238,8 @@
itStats = m_flowStatsUl.find ((*it).first);
if (itStats != m_flowStatsUl.end ())
{
- (*itStats).second.lastTtiBytesTrasmitted = uldci.m_tbSize;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
+ (*itStats).second.lastTtiBytesTransmitted = uldci.m_tbSize;
+// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTransmitted);
}
@@ -1264,12 +1269,12 @@
// update UEs stats
for (itStats = m_flowStatsUl.begin (); itStats != m_flowStatsUl.end (); itStats++)
{
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
m_allocationMaps.insert (std::pair <uint16_t, std::vector <uint16_t> > (params.m_sfnSf, rbgAllocationMap));
m_schedSapUser->SchedUlConfigInd (ret);
--- a/src/lte/model/pss-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/pss-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,9 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifndef PSS_FF_MAC_SCHEDULER_H
@@ -44,7 +45,7 @@
{
Time flowStart;
unsigned long totalBytesTransmitted;
- unsigned int lastTtiBytesTrasmitted;
+ unsigned int lastTtiBytesTransmitted;
double lastAveragedThroughput;
double secondLastAveragedThroughput;
double targetThroughput;
@@ -56,7 +57,7 @@
*/
/**
* \ingroup PssFfMacScheduler
- * \brief Implements the SCHED SAP and CSCHED SAP for a Proportional Fair scheduler
+ * \brief Implements the SCHED SAP and CSCHED SAP for a Priority Set scheduler
*
* This class implements the interface defined by the FfMacScheduler abstract class
*/
@@ -221,6 +222,9 @@
std::map <uint16_t,uint8_t> m_uesTxMode; // txMode of the UEs
+ std::string m_fdSchedulerType;
+
+ uint32_t m_nMux; // TD scheduler selects nMux UEs and transfers them to the FD scheduler
};
} // namespace ns3
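The PSS hunks above compute a per-UE metric as a target-bit-rate weight (floored at 1.0) times an FD term; for the PFsch branch that term is the achievable rate over `secondLastAveragedThroughput`. A sketch of that computation, factored into a standalone function (the name `PssPfschMetric` is illustrative, not an ns-3 symbol):

```cpp
#include <cassert>
#include <cmath>

// PSS PFsch metric as built in the patch above: weight = max(1, TBR/avgThr),
// multiplied by the PF-scheduled term (achievable rate over the
// second-last averaged throughput).
double PssPfschMetric (double targetThroughput,
                       double lastAveragedThroughput,
                       double achievableRate,
                       double secondLastAveragedThroughput)
{
  double weight = targetThroughput / lastAveragedThroughput;
  if (weight < 1.0)
    {
      weight = 1.0; // GBR already met: fall back to plain PF behaviour
    }
  return weight * (achievableRate / secondLastAveragedThroughput);
}
```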
--- a/src/lte/model/tdbet-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tdbet-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,8 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
*/
#ifdef __FreeBSD__
@@ -303,7 +303,6 @@
(*it).second = params.m_transmissionMode;
}
return;
- return;
}
void
@@ -321,13 +320,13 @@
tdbetsFlowPerf_t flowStatsDl;
flowStatsDl.flowStart = Simulator::Now ();
flowStatsDl.totalBytesTransmitted = 0;
- flowStatsDl.lastTtiBytesTrasmitted = 0;
+ flowStatsDl.lastTtiBytesTransmitted = 0;
flowStatsDl.lastAveragedThroughput = 1;
m_flowStatsDl.insert (std::pair<uint16_t, tdbetsFlowPerf_t> (params.m_rnti, flowStatsDl));
tdbetsFlowPerf_t flowStatsUl;
flowStatsUl.flowStart = Simulator::Now ();
flowStatsUl.totalBytesTransmitted = 0;
- flowStatsUl.lastTtiBytesTrasmitted = 0;
+ flowStatsUl.lastTtiBytesTransmitted = 0;
flowStatsUl.lastAveragedThroughput = 1;
m_flowStatsUl.insert (std::pair<uint16_t, tdbetsFlowPerf_t> (params.m_rnti, flowStatsUl));
}
@@ -450,9 +449,7 @@
int rbgSize = GetRbgSize (m_cschedCellConfig.m_dlBandwidth);
int rbgNum = m_cschedCellConfig.m_dlBandwidth / rbgSize;
- NS_LOG_DEBUG (this << " DlBandwidth " << rbgNum * rbgSize << " rbgSize " <<rbgSize);
std::map <uint16_t, std::vector <uint16_t> > allocationMap;
- //NS_LOG_DEBUG (this << " ALLOCATION for RBG " << i << " of " << rbgNum);
std::map <uint16_t, tdbetsFlowPerf_t>::iterator it;
std::map <uint16_t, tdbetsFlowPerf_t>::iterator itMax = m_flowStatsDl.end ();
double metricMax = 0.0;
@@ -467,7 +464,6 @@
}
} // end for m_flowStatsDl
-
if (itMax == m_flowStatsDl.end ())
{
// no UE available for downlink
@@ -482,14 +478,13 @@
tempMap.push_back (i);
}
allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax).first, tempMap));
- //NS_LOG_DEBUG (this << " UE assigned " << (*itMax).first);
}
// reset TTI stats of users
std::map <uint16_t, tdbetsFlowPerf_t>::iterator itStats;
for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
{
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
// generate the transmission opportunities by grouping the RBGs of the same RNTI and
@@ -507,7 +502,6 @@
newDci.m_rnti = (*itMap).first;
uint16_t lcActives = LcActivePerFlow ((*itMap).first);
-// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
std::map <uint16_t,uint8_t>::iterator itCqi;
itCqi = m_p10CqiRxed.find ((*itMap).first);
std::map <uint16_t,uint8_t>::iterator itTxMode;
@@ -533,7 +527,6 @@
for (uint8_t i = 0; i < nLayer; i++)
{
int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (0), rbgNum * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
- NS_LOG_DEBUG ( Simulator::Now() << this << "Allocate user " << newEl.m_rnti << " tbSize "<< tbSize << " PRBs " << 0 << " ... " << rbgNum * rbgSize - 1 << " mcs " << (uint16_t) newDci.m_mcs.at (0) << " layers " << nLayer);
newDci.m_tbsSize.push_back (tbSize);
bytesTxed += tbSize;
}
@@ -544,7 +537,6 @@
for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -562,7 +554,6 @@
RlcPduListElement_s newRlcEl;
newRlcEl.m_logicalChannelIdentity = (*itBufReq).first.m_lcId;
newRlcEl.m_size = newDci.m_tbsSize.at (j) / lcActives;
- //NS_LOG_DEBUG (this << " LCID " << (uint32_t) newRlcEl.m_logicalChannelIdentity << " size " << newRlcEl.m_size << " layer " << (uint16_t)j);
newRlcPduLe.push_back (newRlcEl);
UpdateDlRlcBufferInfo (newDci.m_rnti, newRlcEl.m_logicalChannelIdentity, newRlcEl.m_size);
}
@@ -586,10 +577,7 @@
it = m_flowStatsDl.find ((*itMap).first);
if (it != m_flowStatsDl.end ())
{
- (*it).second.lastTtiBytesTrasmitted = bytesTxed;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
-
-
+ (*it).second.lastTtiBytesTransmitted = bytesTxed;
}
else
{
@@ -604,12 +592,10 @@
// update UEs stats
for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
{
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
-// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
-// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
m_schedSapUser->SchedDlConfigInd (ret);
@@ -841,8 +827,8 @@
itStats = m_flowStatsUl.find ((*it).first);
if (itStats != m_flowStatsUl.end ())
{
- (*itStats).second.lastTtiBytesTrasmitted = uldci.m_tbSize;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
+ (*itStats).second.lastTtiBytesTransmitted = uldci.m_tbSize;
+// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTransmitted);
}
@@ -872,12 +858,12 @@
// update UEs stats
for (itStats = m_flowStatsUl.begin (); itStats != m_flowStatsUl.end (); itStats++)
{
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
+ (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTransmitted;
// update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
+ (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTransmitted / 0.001));
// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
+ (*itStats).second.lastTtiBytesTransmitted = 0;
}
m_allocationMaps.insert (std::pair <uint16_t, std::vector <uint16_t> > (params.m_sfnSf, rbgAllocationMap));
m_schedSapUser->SchedUlConfigInd (ret);
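The per-TTI statistics update touched by these hunks (renaming `lastTtiBytesTrasmitted` to `lastTtiBytesTransmitted`) implements eq. 12.3 of Sec 12.3.1.2 of *LTE - The UMTS Long Term Evolution* (Wiley): an exponentially weighted moving average of each UE's throughput. A stand-alone sketch of that update, with a hypothetical `FlowStats` struct mirroring the patched fields and an illustrative window of 100 TTIs (the actual window comes from the scheduler's `m_timeWindow` attribute):

```cpp
#include <cassert>

// Hypothetical stand-alone model of the scheduler's per-TTI stats update;
// field names mirror the patched struct, but this is not the ns-3 code itself.
struct FlowStats
{
  unsigned long totalBytesTransmitted = 0;
  unsigned int lastTtiBytesTransmitted = 0;
  double lastAveragedThroughput = 0.0; // bytes/s
};

void
UpdateStats (FlowStats &s, double timeWindow)
{
  s.totalBytesTransmitted += s.lastTtiBytesTransmitted;
  // one TTI lasts 1 ms, so bytes / 0.001 is the instantaneous rate in bytes/s;
  // the moving average forgets old samples with weight (1 - 1/timeWindow)
  s.lastAveragedThroughput = (1.0 - 1.0 / timeWindow) * s.lastAveragedThroughput
    + (1.0 / timeWindow) * (s.lastTtiBytesTransmitted / 0.001);
  s.lastTtiBytesTransmitted = 0; // reset for the next TTI
}
```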
--- a/src/lte/model/tdbet-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tdbet-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -15,7 +15,9 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
+ *
*/
#ifndef TDBET_FF_MAC_SCHEDULER_H
@@ -42,7 +44,7 @@
{
Time flowStart;
unsigned long totalBytesTransmitted;
- unsigned int lastTtiBytesTrasmitted;
+ unsigned int lastTtiBytesTransmitted;
double lastAveragedThroughput;
};
@@ -53,7 +55,7 @@
*/
/**
* \ingroup TdBetFfMacScheduler
- * \brief Implements the SCHED SAP and CSCHED SAP for a Proportional Fair scheduler
+ * \brief Implements the SCHED SAP and CSCHED SAP for a Time Domain Blind Equal Throughput scheduler
*
* This class implements the interface defined by the FfMacScheduler abstract class
*/
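Several hunks in this changeset build the DCI resource allocation bitmap by setting one bit per allocated RBG (allocation type 0, Sec. 7.1.6 of 36.213). A minimal self-contained model of that loop; the `BuildRbgMask` helper is illustrative and not part of the ns-3 sources:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative model of the rbgMask construction in the schedulers:
// each allocated RBG index sets one bit of a 32-bit bitmap
// (resource allocation type 0, Sec. 7.1.6 of 36.213).
uint32_t
BuildRbgMask (const std::vector<uint16_t> &allocatedRbgs)
{
  uint32_t rbgMask = 0;
  for (uint16_t rbg : allocatedRbgs)
    {
      rbgMask |= (0x1 << rbg); // same effect as the additive form in the code
    }
  return rbgMask;
}
```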
--- a/src/lte/model/tdmt-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tdmt-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -17,6 +17,7 @@
*
* Author: Marco Miozzo <marco.miozzo@cttc.es>
* Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#include <ns3/log.h>
@@ -298,7 +299,6 @@
(*it).second = params.m_transmissionMode;
}
return;
- return;
}
void
@@ -428,7 +428,6 @@
NS_LOG_FUNCTION (this << " Frame no. " << (params.m_sfnSf >> 4) << " subframe no. " << (0xF & params.m_sfnSf));
// API generated by RLC for triggering the scheduling of a DL subframe
-
// evaluate the wideband channel quality indicator for each UE and allocate
// all RBGs to the UE with highest achievable rate
// (since we are using allocation type 0 the small unit of allocation is RBG)
@@ -456,13 +455,11 @@
uint8_t wbCqi = 0;
if (itCqi == m_p10CqiRxed.end())
{
- //NS_LOG_DEBUG (this << " No DL-CQI for this UE " << (*it));
wbCqi = 1; // start with lowest value
}
else
{
wbCqi = (*itCqi).second;
- //NS_LOG_INFO (this << " CQI " << (uint32_t)wbCqi);
}
if (wbCqi > 0)
@@ -479,7 +476,6 @@
}
double metric = achievableRate;
- //NS_LOG_DEBUG (this << " RNTI " << (*it) << " MCS " << (uint32_t)mcs << " achievableRate " << achievableRate << " RCQI " << rcqi);
if (metric > metricMax)
{
@@ -504,7 +500,6 @@
tempMap.push_back (i);
}
allocationMap.insert (std::pair <uint16_t, std::vector <uint16_t> > ((*itMax), tempMap));
- //NS_LOG_DEBUG (this << " UE assigned " << (*itMax));
}
// generate the transmission opportunities by allocating all RBGs to selected UE and
@@ -522,7 +517,6 @@
newDci.m_rnti = (*itMap).first;
uint16_t lcActives = LcActivePerFlow ((*itMap).first);
-// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
std::map <uint16_t,uint8_t>::iterator itCqi;
itCqi = m_p10CqiRxed.find((*itMap).first);
std::map <uint16_t,uint8_t>::iterator itTxMode;
@@ -557,7 +551,6 @@
for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -642,27 +635,6 @@
(*itTimers).second = m_cqiTimersThreshold;
}
}
- else if ( params.m_cqiList.at (i).m_cqiType == CqiListElement_s::A30 )
- {
- // subband CQI reporting high layer configured
- std::map <uint16_t,SbMeasResult_s>::iterator it;
- uint16_t rnti = params.m_cqiList.at (i).m_rnti;
- it = m_a30CqiRxed.find (rnti);
- if (it == m_a30CqiRxed.end ())
- {
- // create the new entry
- m_a30CqiRxed.insert ( std::pair<uint16_t, SbMeasResult_s > (rnti, params.m_cqiList.at (i).m_sbMeasResult) );
- m_a30CqiTimers.insert ( std::pair<uint16_t, uint32_t > (rnti, m_cqiTimersThreshold));
- }
- else
- {
- // update the CQI value and refresh correspondent timer
- (*it).second = params.m_cqiList.at (i).m_sbMeasResult;
- std::map <uint16_t,uint32_t>::iterator itTimers;
- itTimers = m_a30CqiTimers.find (rnti);
- (*itTimers).second = m_cqiTimersThreshold;
- }
- }
else
{
NS_LOG_ERROR (this << " CQI type unknown");
@@ -997,29 +969,6 @@
itP10++;
}
}
-
- // refresh DL CQI A30 Map
- std::map <uint16_t,uint32_t>::iterator itA30 = m_a30CqiTimers.begin ();
- while (itA30!=m_a30CqiTimers.end ())
- {
-// NS_LOG_INFO (this << " A30-CQI for user " << (*itA30).first << " is " << (uint32_t)(*itA30).second << " thr " << (uint32_t)m_cqiTimersThreshold);
- if ((*itA30).second == 0)
- {
- // delete correspondent entries
- std::map <uint16_t,SbMeasResult_s>::iterator itMap = m_a30CqiRxed.find ((*itA30).first);
- NS_ASSERT_MSG (itMap != m_a30CqiRxed.end (), " Does not find CQI report for user " << (*itA30).first);
- NS_LOG_INFO (this << " A30-CQI exired for user " << (*itA30).first);
- m_a30CqiRxed.erase (itMap);
- std::map <uint16_t,uint32_t>::iterator temp = itA30;
- itA30++;
- m_a30CqiTimers.erase (temp);
- }
- else
- {
- (*itA30).second--;
- itA30++;
- }
- }
return;
}
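The hunks above strip the A30 subband CQI handling from the TDMT scheduler, which selects its UE from the wideband (P10) CQI only: every TTI, the UE with the highest wideband achievable rate receives all RBGs, and ties keep the first candidate examined because the comparison is strict. A toy model of that selection, with rates supplied directly instead of being derived from CQI via the AMC module:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Toy model of the TDMT selection loop: pick the RNTI with the highest
// wideband achievable rate. The strict '>' means the first UE examined
// keeps the allocation when several UEs report the same rate.
uint16_t
SelectMaxRateUe (const std::map<uint16_t, double> &achievableRate)
{
  uint16_t selected = 0;
  double metricMax = 0.0;
  for (const auto &entry : achievableRate)
    {
      if (entry.second > metricMax) // strict comparison: ties do not replace
        {
          metricMax = entry.second;
          selected = entry.first;
        }
    }
  return selected;
}
```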
--- a/src/lte/model/tdmt-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tdmt-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -17,6 +17,7 @@
*
* Author: Marco Miozzo <marco.miozzo@cttc.es>
* Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifndef TDMT_FF_MAC_SCHEDULER_H
@@ -165,15 +166,6 @@
std::map <uint16_t,uint32_t> m_p10CqiTimers;
/*
- * Map of UE's DL CQI A30 received
- */
- std::map <uint16_t,SbMeasResult_s> m_a30CqiRxed;
- /*
- * Map of UE's timers on DL CQI A30 received
- */
- std::map <uint16_t,uint32_t> m_a30CqiTimers;
-
- /*
* Map of previous allocated UE per RBG
* (used to retrieve info from UL-CQI)
*/
--- a/src/lte/model/tdtbfq-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tdtbfq-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -17,6 +17,7 @@
*
* Author: Marco Miozzo <marco.miozzo@cttc.es>
* Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifdef __FreeBSD__
@@ -25,6 +26,7 @@
#include <ns3/log.h>
#include <ns3/pointer.h>
+#include <ns3/integer.h>
#include <ns3/simulator.h>
#include <ns3/lte-amc.h>
@@ -247,6 +249,26 @@
UintegerValue (1000),
MakeUintegerAccessor (&TdTbfqFfMacScheduler::m_cqiTimersThreshold),
MakeUintegerChecker<uint32_t> ())
+ .AddAttribute ("DebtLimit",
+ "Flow debt limit (default -625000 bytes)",
+ IntegerValue (-625000),
+ MakeIntegerAccessor (&TdTbfqFfMacScheduler::m_debtLimit),
+ MakeIntegerChecker<int> ())
+ .AddAttribute ("CreditLimit",
+ "Flow credit limit (default 625000 bytes)",
+ UintegerValue (625000),
+ MakeUintegerAccessor (&TdTbfqFfMacScheduler::m_creditLimit),
+ MakeUintegerChecker<uint32_t> ())
+ .AddAttribute ("TokenPoolSize",
+                   "The maximum value of flow token pool (default 1 byte)",
+ UintegerValue (1),
+ MakeUintegerAccessor (&TdTbfqFfMacScheduler::m_tokenPoolSize),
+ MakeUintegerChecker<uint32_t> ())
+   .AddAttribute ("CreditableThreshold",
+ "Threshold of flow credit (default 0 bytes)",
+ UintegerValue (0),
+ MakeUintegerAccessor (&TdTbfqFfMacScheduler::m_creditableThreshold),
+ MakeUintegerChecker<uint32_t> ())
;
return tid;
}
@@ -303,7 +325,6 @@
(*it).second = params.m_transmissionMode;
}
return;
- return;
}
void
@@ -327,25 +348,24 @@
flowStatsDl.packetArrivalRate = 0;
flowStatsDl.tokenGenerationRate = mbrDlInBytes;
flowStatsDl.tokenPoolSize = 0;
- flowStatsDl.maxTokenPoolSize = 1;
+ flowStatsDl.maxTokenPoolSize = m_tokenPoolSize;
flowStatsDl.counter = 0;
- flowStatsDl.burstCredit = 625000; // bytes
- flowStatsDl.debtLimit = -625000; // bytes
- flowStatsDl.creditableThreshold = 0;
+ flowStatsDl.burstCredit = m_creditLimit; // bytes
+ flowStatsDl.debtLimit = m_debtLimit; // bytes
+ flowStatsDl.creditableThreshold = m_creditableThreshold;
m_flowStatsDl.insert (std::pair<uint16_t, tdtbfqsFlowPerf_t> (params.m_rnti, flowStatsDl));
tdtbfqsFlowPerf_t flowStatsUl;
flowStatsUl.flowStart = Simulator::Now ();
flowStatsUl.packetArrivalRate = 0;
flowStatsUl.tokenGenerationRate = mbrUlInBytes;
flowStatsUl.tokenPoolSize = 0;
- flowStatsUl.maxTokenPoolSize = 1;
+ flowStatsUl.maxTokenPoolSize = m_tokenPoolSize;
flowStatsUl.counter = 0;
- flowStatsUl.burstCredit = 625000; // bytes
- flowStatsUl.debtLimit = -625000; // bytes
- flowStatsUl.creditableThreshold = 0;
+ flowStatsUl.burstCredit = m_creditLimit; // bytes
+ flowStatsUl.debtLimit = m_debtLimit; // bytes
+ flowStatsUl.creditableThreshold = m_creditableThreshold;
m_flowStatsUl.insert (std::pair<uint16_t, tdtbfqsFlowPerf_t> (params.m_rnti, flowStatsUl));
- NS_LOG_DEBUG(" UE "<<params.m_rnti<<" MaximulBitrateDl "<<params.m_logicalChannelConfigList.at(i).m_eRabMaximulBitrateDl << " gbrDlInbytes "<<mbrDlInBytes);
}
else
{
@@ -561,7 +581,6 @@
{
if ((*itCqi).second.m_higherLayerSelected.size () > (*itMap).second.at (k))
{
- //NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " << (uint16_t)((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (0)) );
for (uint8_t j = 0; j < nLayer; j++)
{
if ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.size ()> j)
@@ -576,7 +595,6 @@
// no CQI for this layer of this suband -> worst one
worstCqi.at (j) = 1;
}
- //NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " <<(uint16_t)j <<" "<< (uint16_t)worstCqi.at(j));
}
}
else
@@ -595,26 +613,21 @@
worstCqi.at (j) = 1; // try with lowest MCS in RBG with no info on channel
}
}
-// NS_LOG_DEBUG (this << " CQI " << (uint16_t)worstCqi);
uint32_t bytesTxed = 0;
for (uint8_t j = 0; j < nLayer; j++)
{
newDci.m_mcs.push_back (m_amc->GetMcsFromCqi (worstCqi.at (j)));
int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (j), RgbPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
- // NS_LOG_DEBUG ( Simulator::Now() << " Allocateuser " << newEl.m_rnti << " tbSize "<< tbSize << " RBG " << RgbPerRnti << " mcs " << (uint16_t) newDci.m_mcs.at (j) << " cqi " << (uint16_t)worstCqi.at(j));
newDci.m_tbsSize.push_back (tbSize);
bytesTxed += tbSize;
- //NS_LOG_DEBUG (this << " MCS " << m_amc->GetMcsFromCqi (worstCqi.at (j)));
}
newDci.m_resAlloc = 0; // only allocation type 0 at this stage
newDci.m_rbBitmap = 0; // TBD (32 bit bitmap see 7.1.6 of 36.213)
uint32_t rbgMask = 0;
- //for (uint16_t k = 0; k < (*itMap).second.size (); k++)
- for (uint16_t k = 0; k < RgbPerRnti; k++)
+ for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -632,8 +645,6 @@
RlcPduListElement_s newRlcEl;
newRlcEl.m_logicalChannelIdentity = (*itBufReq).first.m_lcId;
newRlcEl.m_size = newDci.m_tbsSize.at (j) / lcActives;
- //NS_LOG_DEBUG ( Simulator::Now() << " LCID " << (uint32_t) newRlcEl.m_logicalChannelIdentity << " size " << newRlcEl.m_size << " layer " << (uint16_t)j);
-
newRlcPduLe.push_back (newRlcEl);
UpdateDlRlcBufferInfo (newDci.m_rnti, newRlcEl.m_logicalChannelIdentity, newRlcEl.m_size);
}
@@ -676,24 +687,10 @@
NS_LOG_DEBUG (this << " No Stats for this allocated UE");
}
- //NS_LOG_DEBUG (Simulator::Now() << " After UE " << (*it).first << " counter " << (*it).second.counter << " bank " << bankSize);
-
itMap++;
} // end while allocation
ret.m_nrOfPdcchOfdmSymbols = 1; // TODO: check correct value according the DCIs txed
-
- // update UEs stats
- /*for (itStats = m_flowStatsDl.begin (); itStats != m_flowStatsDl.end (); itStats++)
- {
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
- // update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
-// NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
-// NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
- }*/
-
m_schedSapUser->SchedDlConfigInd (ret);
@@ -940,21 +937,6 @@
uldci.m_pdcchPowerOffset = 0; // not used
ret.m_dciList.push_back (uldci);
- // update TTI UE stats
- /*itStats = m_flowStatsUl.find ((*it).first);
- if (itStats != m_flowStatsUl.end ())
- {
- (*itStats).second.lastTtiBytesTrasmitted = uldci.m_tbSize;
-// NS_LOG_DEBUG (this << " UE bytes txed " << (*it).second.lastTtiBytesTrasmitted);
-
-
- }
- else
- {
- NS_LOG_DEBUG (this << " No Stats for this allocated UE");
- }*/
-
-
it++;
if (it == m_ceBsrRxed.end ())
{
@@ -970,18 +952,6 @@
}
while ((*it).first != m_nextRntiUl);
-
- // Update global UE stats
- // update UEs stats
- /*for (itStats = m_flowStatsUl.begin (); itStats != m_flowStatsUl.end (); itStats++)
- {
- (*itStats).second.totalBytesTransmitted += (*itStats).second.lastTtiBytesTrasmitted;
- // update average throughput (see eq. 12.3 of Sec 12.3.1.2 of LTE – The UMTS Long Term Evolution, Ed Wiley)
- (*itStats).second.lastAveragedThroughput = ((1.0 - (1.0 / m_timeWindow)) * (*itStats).second.lastAveragedThroughput) + ((1.0 / m_timeWindow) * (double)((*itStats).second.lastTtiBytesTrasmitted / 0.001));
- // NS_LOG_DEBUG (this << " UE tot bytes " << (*itStats).second.totalBytesTransmitted);
- // NS_LOG_DEBUG (this << " UE avg thr " << (*itStats).second.lastAveragedThroughput);
- (*itStats).second.lastTtiBytesTrasmitted = 0;
- }*/
m_allocationMaps.insert (std::pair <uint16_t, std::vector <uint16_t> > (params.m_sfnSf, rbgAllocationMap));
m_schedSapUser->SchedUlConfigInd (ret);
return;
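With the four attributes registered in this changeset, the previously hard-coded TD-TBFQ token bank parameters become tunable from a simulation script. A configuration fragment (not a complete program) sketching the intended usage through `LteHelper`; the values shown are just the documented defaults:

```cpp
// Configuration fragment: tune the TD-TBFQ token bank parameters via the
// standard ns-3 attribute system (values shown are the documented defaults).
Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
lteHelper->SetSchedulerType ("ns3::TdTbfqFfMacScheduler");
lteHelper->SetSchedulerAttribute ("DebtLimit", IntegerValue (-625000));   // bytes
lteHelper->SetSchedulerAttribute ("CreditLimit", UintegerValue (625000)); // bytes
lteHelper->SetSchedulerAttribute ("TokenPoolSize", UintegerValue (1));    // bytes
```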
--- a/src/lte/model/tdtbfq-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tdtbfq-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -15,8 +15,9 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
- * Author: Marco Miozzo <marco.miozzo@cttc.es>
- * Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Author: Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Marco Miozzo <marco.miozzo@cttc.es>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifndef TDTBFQ_FF_MAC_SCHEDULER_H
@@ -60,7 +61,7 @@
*/
/**
* \ingroup TdTbfqFfMacScheduler
- * \brief Implements the SCHED SAP and CSCHED SAP for a Token Bank Fair Queue scheduler
+ * \brief Implements the SCHED SAP and CSCHED SAP for a Time Domain Token Bank Fair Queue scheduler
*
* This class implements the interface defined by the FfMacScheduler abstract class
*/
@@ -224,6 +225,14 @@
std::map <uint16_t,uint8_t> m_uesTxMode; // txMode of the UEs
uint64_t bankSize; // the number of bytes in token bank
+
+ int m_debtLimit; // flow debt limit (byte)
+
+ uint32_t m_creditLimit; // flow credit limit (byte)
+
+ uint32_t m_tokenPoolSize; // maximum size of token pool (byte)
+
+ uint32_t m_creditableThreshold; // threshold of flow credit
};
} // namespace ns3
--- a/src/lte/model/tta-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tta-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -17,6 +17,7 @@
*
* Author: Marco Miozzo <marco.miozzo@cttc.es>
* Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#include <ns3/log.h>
@@ -298,7 +299,6 @@
(*it).second = params.m_transmissionMode;
}
return;
- return;
}
void
@@ -440,7 +440,6 @@
std::map <uint16_t, std::vector <uint16_t> > allocationMap;
for (int i = 0; i < rbgNum; i++)
{
-// NS_LOG_DEBUG (this << " ALLOCATION for RBG " << i << " of " << rbgNum);
std::set <uint16_t>::iterator it;
std::set <uint16_t>::iterator itMax = m_flowStatsDl.end ();
double metricMax = 0.0;
@@ -460,7 +459,6 @@
std::vector <uint8_t> sbCqi;
if (itCqi == m_a30CqiRxed.end ())
{
-// NS_LOG_DEBUG (this << " No DL-CQI for this UE " << (*it));
for (uint8_t k = 0; k < nLayer; k++)
{
sbCqi.push_back (1); // start with lowest value
@@ -469,21 +467,19 @@
else
{
sbCqi = (*itCqi).second.m_higherLayerSelected.at (i).m_sbCqi;
-// NS_LOG_INFO (this << " CQI " << (uint32_t)cqi);
}
uint8_t wbCqi = 0;
if (itWbCqi == m_p10CqiRxed.end())
{
- //NS_LOG_DEBUG (this << " No DL-CQI for this UE " << (*it));
wbCqi = 1; // start with lowest value
}
else
{
wbCqi = (*itWbCqi).second;
- //NS_LOG_INFO (this << " CQI " << (uint32_t)wbCqi);
}
+
uint8_t cqi1 = sbCqi.at(0);
uint8_t cqi2 = 1;
if (sbCqi.size () > 1)
@@ -493,14 +489,13 @@
if ((cqi1 > 0)||(cqi2 > 0)) // CQI == 0 means "out of range" (see table 7.2.3-1 of 36.213)
{
-// NS_LOG_DEBUG (this << " LC active " << LcActivePerFlow (*it));
if (LcActivePerFlow (*it) > 0)
{
// this UE has data to transmit
uint8_t sbMcs = 0;
uint8_t wbMcs = 0;
- double achievableWbRate = 0.0;
double achievableSbRate = 1.0;
+ double achievableWbRate = 1.0;
for (uint8_t k = 0; k < nLayer; k++)
{
if (sbCqi.size () > k)
@@ -513,16 +508,13 @@
sbMcs = 0;
}
achievableSbRate += ((m_amc->GetTbSizeFromMcs (sbMcs, rbgSize) / 8) / 0.001); // = TB size / TTI
-
wbMcs = m_amc->GetMcsFromCqi (wbCqi);
- achievableWbRate += ((m_amc->GetTbSizeFromMcs (wbMcs, rbgSize) / 8) / 0.001); // = TB size / TTI
+ achievableWbRate = ((m_amc->GetTbSizeFromMcs (wbMcs, rbgSize) / 8) / 0.001); // = TB size / TTI
}
-
- // always select first UE when there are multiple UEs have same SINR
+
double metric = achievableSbRate / achievableWbRate;
- //NS_LOG_DEBUG (Simulator::Now()<< " " << this << " RNTI " << (*it).first << " SBMCS " << (uint32_t)sbMcs <<" WBMCS " << (uint32_t)wbMcs <<" achievableSbRate " << achievableSbRate << " achievableWbRate " << achievableWbRate << " RCQI " << metric);
- if (metric - metricMax > 1e-3)
+ if (metric > metricMax)
{
metricMax = metric;
itMax = it;
@@ -551,7 +543,6 @@
{
(*itMap).second.push_back (i);
}
-// NS_LOG_DEBUG (this << " UE assigned " << (*itMax).first);
}
} // end for RBGs
@@ -570,7 +561,6 @@
newDci.m_rnti = (*itMap).first;
uint16_t lcActives = LcActivePerFlow ((*itMap).first);
-// NS_LOG_DEBUG (this << "Allocate user " << newEl.m_rnti << " rbg " << lcActives);
uint16_t RgbPerRnti = (*itMap).second.size ();
std::map <uint16_t,SbMeasResult_s>::iterator itCqi;
itCqi = m_a30CqiRxed.find ((*itMap).first);
@@ -588,7 +578,6 @@
{
if ((*itCqi).second.m_higherLayerSelected.size () > (*itMap).second.at (k))
{
-// NS_LOG_DEBUG (this << " RBG " << (*itMap).second.at (k) << " CQI " << (uint16_t)((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.at (0)) );
for (uint8_t j = 0; j < nLayer; j++)
{
if ((*itCqi).second.m_higherLayerSelected.at ((*itMap).second.at (k)).m_sbCqi.size ()> j)
@@ -621,13 +610,11 @@
worstCqi.at (j) = 1; // try with lowest MCS in RBG with no info on channel
}
}
-// NS_LOG_DEBUG (this << " CQI " << (uint16_t)worstCqi);
for (uint8_t j = 0; j < nLayer; j++)
{
newDci.m_mcs.push_back (m_amc->GetMcsFromCqi (worstCqi.at (j)));
int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (j), RgbPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)
newDci.m_tbsSize.push_back (tbSize);
- NS_LOG_DEBUG (this << " MCS " << m_amc->GetMcsFromCqi (worstCqi.at (j)));
}
newDci.m_resAlloc = 0; // only allocation type 0 at this stage
@@ -636,7 +623,6 @@
for (uint16_t k = 0; k < (*itMap).second.size (); k++)
{
rbgMask = rbgMask + (0x1 << (*itMap).second.at (k));
-// NS_LOG_DEBUG (this << " Allocated PRB " << (*itMap).second.at (k));
}
newDci.m_rbBitmap = rbgMask; // (32 bit bitmap see 7.1.6 of 36.213)
@@ -654,7 +640,6 @@
RlcPduListElement_s newRlcEl;
newRlcEl.m_logicalChannelIdentity = (*itBufReq).first.m_lcId;
newRlcEl.m_size = newDci.m_tbsSize.at (j) / lcActives;
- NS_LOG_DEBUG (this << " LCID " << (uint32_t) newRlcEl.m_logicalChannelIdentity << " size " << newRlcEl.m_size << " layer " << (uint16_t)j);
newRlcPduLe.push_back (newRlcEl);
UpdateDlRlcBufferInfo (newDci.m_rnti, newRlcEl.m_logicalChannelIdentity, newRlcEl.m_size);
}
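The hunks above revise the TTA metric computation: both rates are now seeded with 1.0 (previously `achievableWbRate` started at 0.0, which risked a division by zero when no usable MCS was available), and the wideband rate is assigned rather than accumulated across layers. A self-contained sketch of the resulting metric, where TB sizes in bits stand in for the output of `m_amc->GetTbSizeFromMcs`:

```cpp
#include <cassert>
#include <vector>

// Sketch of the revised TTA metric: achievable subband rate over achievable
// wideband rate, both seeded with 1.0 so the ratio stays defined. TB sizes
// (bits) stand in for the AMC module's GetTbSizeFromMcs output.
double
TtaMetric (const std::vector<int> &sbTbBits, int wbTbBits)
{
  double achievableSbRate = 1.0; // non-zero seed keeps the ratio defined
  double achievableWbRate = 1.0;
  for (int bits : sbTbBits) // one entry per spatial layer
    {
      achievableSbRate += (bits / 8) / 0.001;    // subband rate accumulates
      achievableWbRate = (wbTbBits / 8) / 0.001; // assigned, not accumulated
    }
  return achievableSbRate / achievableWbRate;
}
```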
--- a/src/lte/model/tta-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/model/tta-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -17,6 +17,7 @@
*
* Author: Marco Miozzo <marco.miozzo@cttc.es>
* Dizhi Zhou <dizhi.zhou@gmail.com>
+ * Nicola Baldo <nbaldo@cttc.es>
*/
#ifndef TTA_FF_MAC_SCHEDULER_H
@@ -46,7 +47,7 @@
*/
/**
* \ingroup TtaFfMacScheduler
- * \brief Implements the SCHED SAP and CSCHED SAP for a Throughput to Average scheduler
+ * \brief Implements the SCHED SAP and CSCHED SAP for a Throughput to Average scheduler
*
* This class implements the interface defined by the FfMacScheduler abstract class
*/
--- a/src/lte/test/lte-test-fdbet-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/test/lte-test-fdbet-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -36,8 +36,8 @@
* case, the UEs see the same SINR from the eNB; different test cases are
* implemented obtained by using different SINR values and different numbers of
* UEs. The test consists on checking that the obtained throughput performance
-* is equal among users is consistent with the definition of proportional
-* fair scheduling
+* is equal among users, consistent with the definition of blind equal
+* throughput scheduling
*/
class LenaFdBetFfMacSchedulerTestCase1 : public TestCase
{
--- a/src/lte/test/lte-test-fdtbfq-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/test/lte-test-fdtbfq-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -211,13 +211,7 @@
estThrFdTbfqDl1.push_back (132000); // User 2 estimated TTI throughput from FDTBFQ
estThrFdTbfqDl1.push_back (132000); // User 3 estimated TTI throughput from FDTBFQ
estThrFdTbfqDl1.push_back (132000); // User 4 estimated TTI throughput from FDTBFQ
- std::vector<uint32_t> estThrFdTbfqUl1;
- estThrFdTbfqUl1.push_back (469000); // User 0 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl1.push_back (249000); // User 1 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl1.push_back (125000); // User 2 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl1.push_back (85000); // User 3 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl1.push_back (41000); // User 4 estimated TTI throughput from FDTBFQ
- AddTestCase (new LenaFdTbfqFfMacSchedulerTestCase2 (dist1,estThrFdTbfqDl1, estThrFdTbfqUl1,packetSize1,1));
+ AddTestCase (new LenaFdTbfqFfMacSchedulerTestCase2 (dist1,estThrFdTbfqDl1,packetSize1,1));
// Traffic2 info
// UDP traffic: payload size = 200 bytes, interval = 1 ms
@@ -242,13 +236,7 @@
estThrFdTbfqDl2.push_back (138944); // User 2 estimated TTI throughput from FDTBFQ
estThrFdTbfqDl2.push_back (138944); // User 3 estimated TTI throughput from FDTBFQ
estThrFdTbfqDl2.push_back (138944); // User 4 estimated TTI throughput from FDTBFQ
- std::vector<uint32_t> estThrFdTbfqUl2;
- estThrFdTbfqUl2.push_back (469000); // User 0 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl2.push_back (249000); // User 1 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl2.push_back (125000); // User 2 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl2.push_back (85000); // User 3 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl2.push_back (41000); // User 4 estimated TTI throughput from FDTBFQ
- AddTestCase (new LenaFdTbfqFfMacSchedulerTestCase2 (dist2,estThrFdTbfqDl2,estThrFdTbfqUl2,packetSize2,1));
+ AddTestCase (new LenaFdTbfqFfMacSchedulerTestCase2 (dist2,estThrFdTbfqDl2,packetSize2,1));
// Test Case 3: heterogeneous flow test in FDTBFQ
// UDP traffic: payload size = [100,200,300] bytes, interval = 1 ms
@@ -267,11 +255,7 @@
estThrFdTbfqDl3.push_back (132000); // User 0 estimated TTI throughput from FDTBFQ
estThrFdTbfqDl3.push_back (232000); // User 1 estimated TTI throughput from FDTBFQ
estThrFdTbfqDl3.push_back (332000); // User 2 estimated TTI throughput from FDTBFQ
- std::vector<uint32_t> estThrFdTbfqUl3;
- estThrFdTbfqUl3.push_back (469000); // User 0 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl3.push_back (249000); // User 1 estimated TTI throughput from FDTBFQ
- estThrFdTbfqUl3.push_back (125000); // User 2 estimated TTI throughput from FDTBFQ
- AddTestCase (new LenaFdTbfqFfMacSchedulerTestCase2 (dist3,estThrFdTbfqDl3, estThrFdTbfqUl3,packetSize3,1));
+ AddTestCase (new LenaFdTbfqFfMacSchedulerTestCase2 (dist3,estThrFdTbfqDl3,packetSize3,1));
}
@@ -546,14 +530,13 @@
}
-LenaFdTbfqFfMacSchedulerTestCase2::LenaFdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrFdTbfqDl, std::vector<uint32_t> estThrFdTbfqUl, std::vector<uint16_t> packetSize, uint16_t interval)
+LenaFdTbfqFfMacSchedulerTestCase2::LenaFdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrFdTbfqDl, std::vector<uint16_t> packetSize, uint16_t interval)
: TestCase (BuildNameString (dist.size (), dist)),
m_nUser (dist.size ()),
m_dist (dist),
m_packetSize (packetSize),
m_interval (interval),
- m_estThrFdTbfqDl (estThrFdTbfqDl),
- m_estThrFdTbfqUl (estThrFdTbfqUl)
+ m_estThrFdTbfqDl (estThrFdTbfqDl)
{
}
@@ -763,22 +746,6 @@
NS_TEST_ASSERT_MSG_EQ_TOL ((double)dlDataRxed.at (i) / simulationTime, m_estThrFdTbfqDl.at(i), m_estThrFdTbfqDl.at(i) * tolerance, " Unfair Throughput!");
}
- /**
- * Check that the assignation in uplink is done in a round robin manner.
- */
-
- NS_LOG_INFO ("UL - Test with " << m_nUser);
- std::vector <uint64_t> ulDataRxed;
- for (int i = 0; i < m_nUser; i++)
- {
- // get the imsi
- uint64_t imsi = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetImsi ();
- // get the lcId
- uint8_t lcId = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetRrc ()->GetLcIdVector ().at (0);
- ulDataRxed.push_back (rlcStats->GetUlRxData (imsi, lcId));
- NS_LOG_INFO ("\tUser " << i << " dist " << m_dist.at (i) << " bytes rxed " << (double)ulDataRxed.at (i) << " thr " << (double)ulDataRxed.at (i) / simulationTime << " ref " << (double)m_estThrFdTbfqUl.at (i));
- //NS_TEST_ASSERT_MSG_EQ_TOL ((double)ulDataRxed.at (i) / simulationTime, (double)m_estThrFdTbfqUl.at (i), (double)m_estThrFdTbfqUl.at (i) * tolerance, " Unfair Throughput!");
- }
Simulator::Destroy ();
}
--- a/src/lte/test/lte-test-fdtbfq-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/test/lte-test-fdtbfq-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -61,7 +61,7 @@
class LenaFdTbfqFfMacSchedulerTestCase2 : public TestCase
{
public:
- LenaFdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrFdTbfqDl, std::vector<uint32_t> estThrFdTbfqUl, std::vector<uint16_t> packetSize, uint16_t interval);
+ LenaFdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrFdTbfqDl, std::vector<uint16_t> packetSize, uint16_t interval);
virtual ~LenaFdTbfqFfMacSchedulerTestCase2 ();
private:
@@ -72,7 +72,6 @@
std::vector<uint16_t> m_packetSize; // byte
uint16_t m_interval; // ms
std::vector<uint32_t> m_estThrFdTbfqDl;
- std::vector<uint32_t> m_estThrFdTbfqUl;
};
--- a/src/lte/test/lte-test-pss-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/test/lte-test-pss-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -403,6 +403,7 @@
NetDeviceContainer enbDevs;
NetDeviceContainer ueDevs;
lteHelper->SetSchedulerType ("ns3::PssFfMacScheduler");
+ lteHelper->SetSchedulerAttribute("PssFdSchedulerType", StringValue("PFsch"));
enbDevs = lteHelper->InstallEnbDevice (enbNodes);
ueDevs = lteHelper->InstallUeDevice (ueNodes);
@@ -659,6 +660,7 @@
NetDeviceContainer enbDevs;
NetDeviceContainer ueDevs;
lteHelper->SetSchedulerType ("ns3::PssFfMacScheduler");
+ lteHelper->SetSchedulerAttribute("PssFdSchedulerType", StringValue("PFsch"));
enbDevs = lteHelper->InstallEnbDevice (enbNodes);
ueDevs = lteHelper->InstallUeDevice (ueNodes);
--- a/src/lte/test/lte-test-tdbet-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/test/lte-test-tdbet-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -36,8 +36,8 @@
* case, the UEs see the same SINR from the eNB; different test cases are
* implemented obtained by using different SINR values and different numbers of
* UEs. The test consists on checking that the obtained throughput performance
-* is equal among users is consistent with the definition of proportional
-* fair scheduling
+* is equal among users, consistent with the definition of blind equal
+* throughput scheduling
*/
class LenaTdBetFfMacSchedulerTestCase1 : public TestCase
{
--- a/src/lte/test/lte-test-tdtbfq-ff-mac-scheduler.cc Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/test/lte-test-tdtbfq-ff-mac-scheduler.cc Wed Aug 22 22:55:22 2012 -0300
@@ -211,13 +211,7 @@
estThrTdTbfqDl1.push_back (132000); // User 2 estimated TTI throughput from TDTBFQ
estThrTdTbfqDl1.push_back (132000); // User 3 estimated TTI throughput from TDTBFQ
estThrTdTbfqDl1.push_back (132000); // User 4 estimated TTI throughput from TDTBFQ
- std::vector<uint32_t> estThrTdTbfqUl1;
- estThrTdTbfqUl1.push_back (469000); // User 0 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl1.push_back (249000); // User 1 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl1.push_back (125000); // User 2 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl1.push_back (85000); // User 3 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl1.push_back (41000); // User 4 estimated TTI throughput from TDTBFQ
- AddTestCase (new LenaTdTbfqFfMacSchedulerTestCase2 (dist1,estThrTdTbfqDl1, estThrTdTbfqUl1,packetSize1,1));
+ AddTestCase (new LenaTdTbfqFfMacSchedulerTestCase2 (dist1,estThrTdTbfqDl1,packetSize1,1));
// Traffic2 info
// UDP traffic: payload size = 200 bytes, interval = 1 ms
@@ -242,13 +236,7 @@
estThrTdTbfqDl2.push_back (138944); // User 2 estimated TTI throughput from TDTBFQ
estThrTdTbfqDl2.push_back (138944); // User 3 estimated TTI throughput from TDTBFQ
estThrTdTbfqDl2.push_back (138944); // User 4 estimated TTI throughput from TDTBFQ
- std::vector<uint32_t> estThrTdTbfqUl2;
- estThrTdTbfqUl2.push_back (469000); // User 0 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl2.push_back (249000); // User 1 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl2.push_back (125000); // User 2 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl2.push_back (85000); // User 3 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl2.push_back (41000); // User 4 estimated TTI throughput from TDTBFQ
- AddTestCase (new LenaTdTbfqFfMacSchedulerTestCase2 (dist2,estThrTdTbfqDl2,estThrTdTbfqUl2,packetSize2,1));
+ AddTestCase (new LenaTdTbfqFfMacSchedulerTestCase2 (dist2,estThrTdTbfqDl2,packetSize2,1));
// Test Case 3: heterogeneous flow test in TDTBFQ
// UDP traffic: payload size = [100,200,300] bytes, interval = 1 ms
@@ -267,11 +255,7 @@
estThrTdTbfqDl3.push_back (132000); // User 0 estimated TTI throughput from TDTBFQ
estThrTdTbfqDl3.push_back (232000); // User 1 estimated TTI throughput from TDTBFQ
estThrTdTbfqDl3.push_back (332000); // User 2 estimated TTI throughput from TDTBFQ
- std::vector<uint32_t> estThrTdTbfqUl3;
- estThrTdTbfqUl3.push_back (469000); // User 0 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl3.push_back (249000); // User 1 estimated TTI throughput from TDTBFQ
- estThrTdTbfqUl3.push_back (125000); // User 2 estimated TTI throughput from TDTBFQ
- AddTestCase (new LenaTdTbfqFfMacSchedulerTestCase2 (dist3,estThrTdTbfqDl3, estThrTdTbfqUl3,packetSize3,1));
+ AddTestCase (new LenaTdTbfqFfMacSchedulerTestCase2 (dist3,estThrTdTbfqDl3,packetSize3,1));
}
@@ -543,14 +527,13 @@
}
-LenaTdTbfqFfMacSchedulerTestCase2::LenaTdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrTdTbfqDl, std::vector<uint32_t> estThrTdTbfqUl, std::vector<uint16_t> packetSize, uint16_t interval)
+LenaTdTbfqFfMacSchedulerTestCase2::LenaTdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrTdTbfqDl, std::vector<uint16_t> packetSize, uint16_t interval)
: TestCase (BuildNameString (dist.size (), dist)),
m_nUser (dist.size ()),
m_dist (dist),
m_packetSize (packetSize),
m_interval (interval),
- m_estThrTdTbfqDl (estThrTdTbfqDl),
- m_estThrTdTbfqUl (estThrTdTbfqUl)
+ m_estThrTdTbfqDl (estThrTdTbfqDl)
{
}
@@ -759,22 +742,6 @@
NS_TEST_ASSERT_MSG_EQ_TOL ((double)dlDataRxed.at (i) / simulationTime, m_estThrTdTbfqDl.at(i), m_estThrTdTbfqDl.at(i) * tolerance, " Unfair Throughput!");
}
- /**
- * Check that the assignation in uplink is done in a round robin manner.
- */
-
- NS_LOG_INFO ("UL - Test with " << m_nUser);
- std::vector <uint64_t> ulDataRxed;
- for (int i = 0; i < m_nUser; i++)
- {
- // get the imsi
- uint64_t imsi = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetImsi ();
- // get the lcId
- uint8_t lcId = ueDevs.Get (i)->GetObject<LteUeNetDevice> ()->GetRrc ()->GetLcIdVector ().at (0);
- ulDataRxed.push_back (rlcStats->GetUlRxData (imsi, lcId));
- NS_LOG_INFO ("\tUser " << i << " dist " << m_dist.at (i) << " bytes rxed " << (double)ulDataRxed.at (i) << " thr " << (double)ulDataRxed.at (i) / simulationTime << " ref " << (double)m_estThrTdTbfqUl.at (i));
- //NS_TEST_ASSERT_MSG_EQ_TOL ((double)ulDataRxed.at (i) / simulationTime, (double)m_estThrTdTbfqUl.at (i), (double)m_estThrTdTbfqUl.at (i) * tolerance, " Unfair Throughput!");
- }
Simulator::Destroy ();
}
--- a/src/lte/test/lte-test-tdtbfq-ff-mac-scheduler.h Tue Aug 21 23:18:40 2012 -0300
+++ b/src/lte/test/lte-test-tdtbfq-ff-mac-scheduler.h Wed Aug 22 22:55:22 2012 -0300
@@ -61,7 +61,7 @@
class LenaTdTbfqFfMacSchedulerTestCase2 : public TestCase
{
public:
- LenaTdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrTdTbfqDl, std::vector<uint32_t> estThrTdTbfqUl, std::vector<uint16_t> packetSize, uint16_t interval);
+ LenaTdTbfqFfMacSchedulerTestCase2 (std::vector<uint16_t> dist, std::vector<uint32_t> estThrTdTbfqDl, std::vector<uint16_t> packetSize, uint16_t interval);
virtual ~LenaTdTbfqFfMacSchedulerTestCase2 ();
private:
@@ -72,7 +72,6 @@
std::vector<uint16_t> m_packetSize; // byte
uint16_t m_interval; // ms
std::vector<uint32_t> m_estThrTdTbfqDl;
- std::vector<uint32_t> m_estThrTdTbfqUl;
};