
LTE OPTIMIZATION AND TOOLS
Method

There are four methods that we propose to improve throughput:

• Physical optimization to improve SINR, which will impact call quality and throughput

• Parameter changes

• Feature trials

• Transport optimization

Physical Optimization
DL Throughput Improvement – SINR
Average SINR: 13.5 dB

DL Denpasar – LTE_UE_SINR (dB) distribution:
Below -5.00: 0.3% (55 samples)
-5.00 to 0.00: 1.8% (399)
0.00 to 10.00: 28.2% (6086)
10.00 to 20.00: 52.2% (11266)
Above 20.00: 17.5% (3766)

DL Throughput Improvement – Cross Feeder Analysis

Suspected cross-feeder site: N-MGW036ML-LEGIANKAJA

Note:
We will do a field check, but we are waiting for a permit to access the site for the audit.
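The slide does not describe how the suspect was identified; one common screening approach is to compare the bearing of drive-test samples against each sector's planned azimuth, since a crossed feeder makes a sector serve traffic in another sector's direction. The Python sketch below is only an illustration of that idea: the sample format, the 60-degree offset threshold, and the minimum sample count are assumptions, not values from this project.

import math
from collections import defaultdict

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing (degrees) from the site to a measurement sample."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def angular_offset(a, b):
    """Smallest absolute difference between two bearings, 0..180 degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def suspect_cross_feeder(samples, cells, offset_threshold=60.0, min_samples=50):
    """samples: list of (cell_id, ue_lat, ue_lon) best-server drive-test points.
    cells: {cell_id: (site_lat, site_lon, planned_azimuth_deg)}.
    Flags cells whose samples cluster far away from the planned azimuth."""
    per_cell = defaultdict(list)
    for cell_id, lat, lon in samples:
        site_lat, site_lon, _az = cells[cell_id]
        per_cell[cell_id].append(bearing(site_lat, site_lon, lat, lon))
    suspects = []
    for cell_id, bearings in per_cell.items():
        if len(bearings) < min_samples:
            continue  # not enough evidence to judge this sector
        az = cells[cell_id][2]
        # circular mean of the sample bearings
        sx = sum(math.sin(math.radians(b)) for b in bearings)
        sy = sum(math.cos(math.radians(b)) for b in bearings)
        mean_bearing = math.degrees(math.atan2(sx, sy)) % 360
        if angular_offset(mean_bearing, az) > offset_threshold:
            suspects.append((cell_id, round(mean_bearing, 1), az))
    return suspects

A flagged pair of co-sited sectors whose mean sample bearings look swapped is then a candidate for the physical feeder audit mentioned above.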

DL Throughput Improvement – Transmission Mode
TM3 & TM4: 82% — DL Denpasar – LTE_UE_Transmission_Mode distribution:
tm1 – Single Antenna Port: 1.2% (12)
tm2 – Transmit Diversity: 16.7% (165)
tm3 – Open-Loop SM: 80.6% (798)
tm4 – Closed-Loop SM: 1.5% (15)

Some samples still report TM1 and TM2. These need to be checked with the vendor to determine whether they are caused by an incorrect setting or by a special case.
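As a hedged illustration (not part of the original slide), cells with an unusually high TM1/TM2 share could be short-listed for the vendor check with a few lines of Python; the counter layout and the 20% threshold are assumptions.

# Hypothetical per-cell transmission-mode sample counters:
# {cell_name: {"tm1": n, "tm2": n, "tm3": n, "tm4": n}}
def flag_tm_fallback(cells, max_tm12_share=0.20):
    """Return cells whose TM1+TM2 sample share exceeds the (assumed) threshold."""
    flagged = []
    for cell, counters in cells.items():
        total = sum(counters.values())
        if total == 0:
            continue
        tm12_share = (counters.get("tm1", 0) + counters.get("tm2", 0)) / total
        if tm12_share > max_tm12_share:
            flagged.append((cell, round(tm12_share * 100, 1)))
    return sorted(flagged, key=lambda x: -x[1])

# With the aggregated distribution from this slide (TM1+TM2 ~ 18%), nothing is flagged:
print(flag_tm_fallback({"DENPASAR_AGG": {"tm1": 12, "tm2": 165, "tm3": 798, "tm4": 15}}))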

DL Throughput Improvement – Interference Reduction

High-interference spots are defined as locations with more than 3 neighbours within a 5 dB window, RSRP > -105 dBm, and SINR < 0 dB.
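A minimal Python sketch of that criterion, assuming per-location scanner results with a serving RSRP, a list of neighbour RSRPs, and a SINR value, and interpreting the 5 dB window as relative to the serving cell (the function and field names are illustrative):

def is_high_interference_spot(serving_rsrp_dbm, neighbour_rsrps_dbm, sinr_db,
                              window_db=5.0, rsrp_min_dbm=-105.0, sinr_max_db=0.0,
                              min_neighbours=3):
    """Apply the slide's definition: more than 3 neighbours within a 5 dB window of the
    serving cell, serving RSRP above -105 dBm, and SINR below 0 dB."""
    close_neighbours = [n for n in neighbour_rsrps_dbm if serving_rsrp_dbm - n <= window_db]
    return (len(close_neighbours) > min_neighbours
            and serving_rsrp_dbm > rsrp_min_dbm
            and sinr_db < sinr_max_db)

# Example: strong serving level but four near-equal neighbours and negative SINR -> flagged
print(is_high_interference_spot(-90.0, [-92.0, -93.5, -94.0, -94.5], -1.5))  # True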

Parameter Optimization
DL Throughput Improvement – PCI Conflict (Collision & Confusion)

PCI Confusion

• PCI confusion found on 20 cells, based on adjacency relations from the parameter dump
• No PCI collision found, based on adjacency relations from the parameter dump
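As a hedged sketch of how such a check can be run against an adjacency dump (the data structures below are assumptions, not the actual dump format): a collision is a cell with a neighbour using the same PCI as itself, while confusion is a cell with two different neighbours sharing the same PCI.

from collections import defaultdict

def check_pci_conflicts(pci_of, neighbours_of):
    """pci_of: {cell_id: pci}; neighbours_of: {cell_id: [neighbour cell_ids]} from the dump."""
    collisions, confusions = [], []
    for cell, nbrs in neighbours_of.items():
        seen = defaultdict(list)  # pci -> neighbour cells using it
        for nbr in nbrs:
            pci = pci_of[nbr]
            seen[pci].append(nbr)
            if pci == pci_of[cell]:
                collisions.append((cell, nbr))  # a neighbour reuses the serving cell's PCI
        for pci, cells in seen.items():
            if len(set(cells)) > 1:
                confusions.append((cell, pci, sorted(set(cells))))  # two neighbours share one PCI
    return collisions, confusions

# Tiny example: cell A has two different neighbours (B, C) that both use PCI 213 -> confusion
pci_of = {"A": 77, "B": 213, "C": 213}
neighbours_of = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
print(check_pci_conflicts(pci_of, neighbours_of))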

PCI Confusion Deep Dive
1. The UE is camped on PCI 77 (C_JKT758ML_PERPUSTAKAANUNJML3) and sends an A3 measurement report for neighbour PCI 213.
2. The eNB identifies PCI 213 as a missing neighbour and asks the UE to first report the CGI for PCI 213.
3. The UE reports the eCGI for PCI 213, and only then can the eNB initiate the handover of the UE from PCI 77 to PCI 213.

Log excerpt 1 – A3 measurement report (2016 Mar 24 07:59:56.295, 0xB0C0 LTE RRC OTA Packet, UL_DCCH / MeasurementReport, Physical Cell ID = 77, Freq = 1875):
value UL-DCCH-Message ::= {
  message c1 : measurementReport : {
    criticalExtensions c1 : measurementReport-r8 : {
      measResults {
        measId 3,
        measResultPCell { rsrpResult 45, rsrqResult 18 },   -- RSRP -95 dBm
        measResultNeighCells measResultListEUTRA : {
          { physCellId 213, measResult { rsrpResult 48 } }  -- RSRP -92 dBm
        }
      }
    }
  }
}

Log excerpt 2 – eNB requests CGI reporting for PCI 213 (2016 Mar 24 07:59:56.318, 0xB0C0 LTE RRC OTA Packet, DL_DCCH / RRCConnectionReconfiguration, Physical Cell ID = 77, Freq = 1875):
value DL-DCCH-Message ::= {
  message c1 : rrcConnectionReconfiguration : {
    rrc-TransactionIdentifier 0,
    criticalExtensions c1 : rrcConnectionReconfiguration-r8 : {
      measConfig {
        measObjectToAddModList {
          {
            measObjectId 2,
            measObject measObjectEUTRA : {
              carrierFreq 1875,
              allowedMeasBandwidth mbw50,
              presenceAntennaPort1 FALSE,
              neighCellConfig '01'B,
              offsetFreq dB0,
              cellsToAddModList { { cellIndex 1, physCellId 77, cellIndividualOffset dB0 } },
              cellForWhichToReportCGI 213
            }
          }
        },
        reportConfigToAddModList {
          {
            reportConfigId 1,
            reportConfig reportConfigEUTRA : {
              triggerType periodical : { purpose reportCGI },
              triggerQuantity rsrp,
              reportQuantity sameAsTriggerQuantity,
              maxReportCells 1,
              reportInterval ms640,
              reportAmount r1
            }
          }
        }
      }
    }
  }
}

Log excerpt 3 – UE reports the eCGI of PCI 213 (2016 Mar 24 07:59:57.157, 0xB0C0 LTE RRC OTA Packet, UL_DCCH / MeasurementReport, Physical Cell ID = 77, Freq = 1875):
value UL-DCCH-Message ::= {
  message c1 : measurementReport : {
    criticalExtensions c1 : measurementReport-r8 : {
      measResults {
        measId 1,
        measResultPCell { rsrpResult 49, rsrqResult 21 },
        measResultNeighCells measResultListEUTRA : {
          {
            physCellId 213,
            cgi-Info {
              cellGlobalId {
                plmn-Identity { mcc { 5, 1, 0 }, mnc { 1, 0 } },
                cellIdentity '00100000 11010110 01010000 1011'B   -- eNodeB 134501, Cell ID 11
              },
              trackingAreaCode '00000110 11011000'B               -- TAC 1752
            },
            measResult { ... }
          }
        }
      }
    }
  }
}
UL Throughput Improvement – PUCCH Optimization

The UE maximum allocation is only 40 RB; PUCCH capacity parameters such as nCQIRB, pucchan, CQI & SRI periodicity, etc. need to be optimized. The existing PUCCH uses 8 RB.

- Review all cells where the maximum number of RRC-connected users is < 100 (see the sketch below)
- Reduce PUCCH usage for those cells to 4 RB
- Increase the CQI periodicity from 20 ms to 40 ms
- Monitor the result
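A minimal sketch of the first step, assuming a per-cell KPI export with a peak RRC-connected-user counter (the file name and column names are assumptions):

import csv

def cells_for_pucch_reduction(kpi_csv_path, max_rrc_threshold=100):
    """Return cells whose peak RRC-connected users stayed below the threshold,
    i.e. candidates for reducing PUCCH from 8 RB to 4 RB."""
    candidates = []
    with open(kpi_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed columns: "cell_name", "max_rrc_connected_users"
            if int(row["max_rrc_connected_users"]) < max_rrc_threshold:
                candidates.append(row["cell_name"])
    return candidates

# Usage (hypothetical file name):
# print(cells_for_pucch_reduction("lte_cell_kpi_march.csv"))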

UL Throughput Improvement – Reduce PUCCH Power

The average PUCCH RSSI, based on statistics from March 9th, 2016, is mostly within the power control window of -98 to -103 dBm.

The current PUCCH power control window is set between -98 and -103 dBm. According to the related vendor golden parameters, the PUCCH power control window should be between -109 and -114 dBm. Setting a lower power control window will reduce UL interference in the network.

Parameter       Current (dBm)   Proposed (dBm)
p0NomPucch      -100            -112
ulpcLowlevCch   -103            -114
ulpcUplevCch    -98             -109
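To illustrate why a lower p0NomPucch reduces PUCCH RSSI: LTE PUCCH power control uses full path-loss compensation, so (ignoring the format-dependent and closed-loop terms of 3GPP 36.213) the UE transmit power is roughly min(P_CMAX, P0_PUCCH + PL), and the power arriving at the eNB stays near P0_PUCCH until the UE becomes power-limited. The Python sketch below is a simplified model for intuition only, not the vendor algorithm.

def pucch_tx_power_dbm(p0_pucch_dbm, path_loss_db, p_cmax_dbm=23.0):
    """Simplified open-loop PUCCH power: min(P_CMAX, P0_PUCCH + PL)."""
    return min(p_cmax_dbm, p0_pucch_dbm + path_loss_db)

def received_pucch_power_dbm(p0_pucch_dbm, path_loss_db, p_cmax_dbm=23.0):
    """Power arriving at the eNB = UE transmit power minus path loss."""
    return pucch_tx_power_dbm(p0_pucch_dbm, path_loss_db, p_cmax_dbm) - path_loss_db

for pl in (110, 125, 140):
    print(pl,
          received_pucch_power_dbm(-100, pl),   # current setting: target ~ -100 dBm
          received_pucch_power_dbm(-112, pl))   # proposed setting: target ~ -112 dBm

Except for power-limited UEs at very high path loss, the received PUCCH level drops by roughly the 12 dB change in p0NomPucch, which is what lowers the PUCCH RSSI and UL interference.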

PUCCH Optimization and LTE Periodic CQI Trial

UL Throughput Improvement – PUCCH Optimization

The UE maximum allocation increased from 40 RB to 45 RB after optimizing PUCCH capacity, and peak throughput increased by around 15%, from 20 Mbps to 23 Mbps.

PUCCH optimization was trialled on site EVCOKTRESNA Sector 3.

PUCCH Optimization Result

• After increasing the PUCCH region (a single UE can now use 45 RB instead of 40 RB), the number of active UEs and RRC-connected users showed no rejections by Radio Admission Control, except on 7 April around 00:00 due to a transport problem.
LTE Periodic CQI Result

• PUCCH RSSI decreased as expected; therefore UL interference has been reduced and SINR has increased
• The E-RAB drop ratio improved slightly
PUCCH Power Control
Current PUCCH Power Control Setting
Parameter       Current (dBm)   Proposed (dBm)
p0NomPucch      -100            -112
ulpcLowlevCch   -103            -114
ulpcUplevCch    -98             -109

Test case on another network


Analysis
PUCCH power control plays a significant role in CQI-based RL failure detection.

Action Planned
Try out a lower PUCCH power control setting to reduce interference and improve E-RAB retainability.

Expected Results
Lower PUCCH RSSI and better E-RAB retainability.
Optimizing 2x2 MIMO mode switching.

Upgrade switch (transmit diversity -> spatial multiplexing):
If mimoCQI > mimoSmCqiThUpOL and mimoRANK > mimoSmRiThUpOL

Downgrade switch (spatial multiplexing -> transmit diversity):
If mimoCQI <= mimoDivCqiThDownOL or mimoRANK <= mimoDivRiThDownOL

(The original figure plots CQI and RI over time against the threshold lines mimoOlCqiThU / mimoOlCqiThD and mimoOlRiThU / mimoOlRiThD, with the SM regions marked.)
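A minimal Python sketch of this switching logic, assuming periodically averaged wideband CQI and rank indication as inputs. The default thresholds use the "New" values from the optimization example below purely for illustration; the hysteresis behaviour between the up and down thresholds is implied by the slide rather than spelled out.

def mimo_mode_switch(current_mode, avg_cqi, avg_rank,
                     cqi_up=8, ri_up=1.4, cqi_down=7, ri_down=1.2):
    """Decide the 2x2 MIMO transmission mode for the next period.
    current_mode: "TxDiv" (transmit diversity) or "SM" (spatial multiplexing).
    Upgrade requires BOTH CQI and rank above the up-thresholds;
    downgrade triggers if EITHER falls to or below the down-thresholds."""
    if current_mode == "TxDiv" and avg_cqi > cqi_up and avg_rank > ri_up:
        return "SM"
    if current_mode == "SM" and (avg_cqi <= cqi_down or avg_rank <= ri_down):
        return "TxDiv"
    return current_mode  # stay put between thresholds (hysteresis)

# Example: good CQI and rank -> switch up; rank collapses -> switch back down
print(mimo_mode_switch("TxDiv", avg_cqi=10, avg_rank=1.6))  # SM
print(mimo_mode_switch("SM", avg_cqi=10, avg_rank=1.1))     # TxDiv

Lowering the thresholds (as in the example below) keeps UEs in spatial multiplexing over a wider range of radio conditions.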

Optimization case example:

Parameter      Old    New
mimoOlCqiThD   9      7
mimoOlCqiThU   11     8
mimoOlRiThD    1.4    1.2
mimoOlRiThU    1.6    1.4

The MIMO parameter proposal resulted in an average DL throughput improvement in the range of 2-3 Mbps (38 Mbps -> 41 Mbps).
Feature Activation
Interference Aware UL Power Control

(Before: feature not activated)

Analysis
Increasing UL power brings additional useful signal for own-cell UEs, but adds interference for neighbouring cells.
Reducing UL power decreases the useful signal but also lowers the interference.

Action Planned
Try out the IAwPC feature in high-load areas.

Expected Results
Improvement in uplink SINR and throughput.

(After: feature activated)
Transport
Traffic Shaping not Activated in Transport Network
• Traffic shaping should be activated whenever there is a transition from a high-speed link to a lower-speed link (e.g. Flexi Packet Radio, Huawei NG-SDH boxes, etc.)

Flexi Packet Hub (A2200) @ PRGOPA site: config-set
interface ethernet 1/1 shaper-rate 280.0 -> 1000.0

(Throughput chart: shaping on* vs. shaping off. *Throughput fluctuations seem to be driven mainly by RF conditions.)
Traffic Shaping not Activated in Transport Network
• Whenever feeding a low-bandwidth box with high-bandwidth input, check shaping in the transport chain
• Especially if the TX buffer in the low-bandwidth box is small, packet loss can occur -> this reduces TCP throughput!

Example: A-2200 feeding FlexiPacket

(Diagram: A2200 -> 1 Gbps -> A2200/A1200 -> FPR -> ODU ~ radio hop limited to 315 Mbps ~ ODU -> FPR; data transfer direction is from the A2200 towards the FlexiPacket Radio. Packet loss occurs at the low-bandwidth box because its TX buffer is too small relative to the data burstiness; activate shaping at the A2200 to reduce burstiness.)
Generalization:

Box X -> thick pipe -> Box Y -> thin pipe -> Box Z

Be alert for packet loss at Box Y, where the thick pipe feeds the thin pipe!
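As a hedged illustration of why shaping at Box X helps (a generic token-bucket model, not the A2200 implementation): a shaper releases traffic at a configured rate with a limited burst allowance, so the downstream thin pipe never sees bursts larger than its buffer can absorb.

class TokenBucketShaper:
    """Generic token-bucket traffic shaper: rate_bps limits the average rate,
    burst_bytes limits how much can be sent back-to-back."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate_bytes_per_s = rate_bps / 8.0
        self.burst_bytes = burst_bytes
        self.tokens = burst_bytes
        self.last_time = 0.0

    def delay_for(self, nbytes, now_s):
        """Return how long (seconds) this packet/burst must be held before transmission."""
        # Refill tokens for the time elapsed since the last send, capped at the burst size
        self.tokens = min(self.burst_bytes,
                          self.tokens + (now_s - self.last_time) * self.rate_bytes_per_s)
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            self.last_time = now_s
            return 0.0                                   # conforms: send immediately
        wait = (nbytes - self.tokens) / self.rate_bytes_per_s
        self.tokens = 0.0                                # tokens earned while waiting are consumed at send time
        self.last_time = now_s + wait                    # the data actually leaves at now + wait
        return wait

# Example: shape a 1 Gbps ingress down to ~280 Mbps with a 64 kB burst allowance (illustrative numbers)
shaper = TokenBucketShaper(rate_bps=280e6, burst_bytes=64_000)
print(shaper.delay_for(1500, now_s=0.000))      # 0.0 -> within the burst allowance
print(shaper.delay_for(100_000, now_s=0.001))   # > 0 -> the burst is paced out at the shaped rate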


Troubleshooting Checklist – Executive Summary!

1. Check with UDP download

• If UDP throughput is not OK -> the problem is in radio parameters or possibly interference.
  – Use a scanner to check that there are no neighbours within 30 dB of the serving cell (the serving cell should dominate by more than 30 dB). Do not trust the RS CINR measurement! Serving-cell RSRP should be > -80 dBm for peak throughput.
  – Check the HSS profile to verify there is no APN-AMBR or UE-AMBR limitation.

• If UDP throughput is OK -> the problem is either i) packet loss in the transport network, or ii) the TCP settings of the FTP server and/or the UE.
  – Change to a Linux FTP server and optimize the UE TCP settings.
  – Check that there is no packet loss in the transport network.
UDP Testing – Motivation
• In the >50 Mbps regime, FTP throughput suffers greatly from:
  • packet loss, even very little packet loss (see the sketch after this list)
  • too small TCP RX and TX windows
  – BTS parameters can also be misconfigured
  – If FTP throughput is bad, it is often difficult to say whether this is caused by a radio, transport, or TCP settings problem
  – Solution: test with UDP!
  – If UDP throughput is also bad -> it is most probably a radio problem
    • either interference or wrong parameters
    • could also be a problem with the HSS profile
  – If UDP throughput is good -> it is a transport or TCP problem
    • troubleshooting can be narrowed down
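To make "even very little packet loss" concrete, a rough upper bound on TCP throughput is the Mathis formula, throughput ≈ (MSS / RTT) · (C / sqrt(p)) with C ≈ 1.22, while a too-small receive window caps throughput at window / RTT. The sketch below applies these two textbook bounds with purely illustrative numbers (the RTT, loss rates, and window size are assumptions, not measurements from this network):

from math import sqrt

def mathis_limit_mbps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Approximate TCP throughput ceiling under random loss (Mathis et al.)."""
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate)) / 1e6

def window_limit_mbps(window_bytes, rtt_s):
    """Throughput ceiling from the TCP receive/congestion window alone."""
    return window_bytes * 8 / rtt_s / 1e6

# Illustrative values: 1460-byte MSS, 30 ms RTT
print(mathis_limit_mbps(1460, 0.030, loss_rate=1e-4))   # ~47 Mbps at 0.01% loss
print(mathis_limit_mbps(1460, 0.030, loss_rate=1e-3))   # ~15 Mbps at 0.1% loss
print(window_limit_mbps(64 * 1024, 0.030))              # ~17 Mbps with a 64 kB window

Even a 0.1% loss rate or a default 64 kB window is enough to hold TCP well below the radio capacity, which is why UDP is used to isolate the radio from the transport and TCP layers.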
UDP versus TCP, Same Drive Route For Both

• UDP: 74 Mbps average
• TCP (FTP): 50 Mbps average

If you must use FTP, having multiple parallel FTP sessions may help.
iPerf/jPerf
• iPerf is a freeware UDP/TCP command-line tool for network performance testing
• jPerf is a Java-based graphical user interface for iPerf
• iPerf needs to be run at both ends of the connection
• In iPerf terminology:
  • the "client" is always the sender
  • the "server" is always the receiver
– See http://openmaniak.com/iperf.php, or Google for more
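As a hedged convenience sketch (assuming iperf 2.x is installed and on the PATH at both hosts), the same client/server roles can also be driven from Python; the port, offered rate, and duration below are placeholders:

import subprocess

PORT = 5001          # default iperf port; must match on both ends

def start_udp_server():
    """Receiver side ("server" in iPerf terms): iperf -s -u -p 5001 -i 1."""
    return subprocess.Popen(["iperf", "-s", "-u", "-p", str(PORT), "-i", "1"])

def run_udp_client(server_ip, rate="50M", seconds=60):
    """Sender side ("client" in iPerf terms): push UDP at a fixed offered rate."""
    result = subprocess.run(
        ["iperf", "-c", server_ip, "-u", "-p", str(PORT), "-b", rate, "-t", str(seconds)],
        capture_output=True, text=True)
    return result.stdout

# On the laptop behind the UE (receiver):  start_udp_server()
# On the test server (sender):             print(run_udp_client("10.0.0.5", rate="50M"))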


iPerf/jPerf
• UDP server operation with jPerf 2.0.0, receiver side

– Set the mode to server
– Set the listening port; the default port is fine but must match the sender's port
iPerf/jPerf
• UDP client operation with jPerf 2.0.0, sender side
  – Set the mode to client
  – Set the time to transmit
  – Set the destination (server) IP address and port
  – Set the UDP sending parameters
iPerf/jPerf
• UDP sending parameters with jPerf 2.0.0, sender/client side
  – In jPerf 2.0.0 the bandwidth field actually means Mbits/sec, not Mbytes/sec (a bug)
  – Use a small enough packet size to avoid IP fragmentation
  – @client: the UDP TX buffer size should be large enough not to limit the TX speed; this may depend on the operating system (experiment!); a good starting point is 128 kB – 1 MB
  – @server: the UDP RX buffer size should be large enough not to cause packet loss at the receiving host; try 128 kB – 1 MB

To double-check the UDP buffer size settings, do a test over a high-speed wireline connection to verify that there is no packet loss at your target test bit rates.
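The jPerf settings above map onto ordinary UDP socket options. The sketch below is a bare-bones Python equivalent (not a replacement for iPerf) showing where the datagram size and the TX/RX buffer sizes come in; the IP address, port, and rate are placeholders:

import socket, time

PAYLOAD = 1400          # bytes per datagram: below the ~1500-byte Ethernet MTU to avoid IP fragmentation
BUF = 512 * 1024        # TX/RX socket buffer, within the 128 kB - 1 MB starting range suggested above

def udp_sender(dst_ip, dst_port, rate_mbps=50, seconds=10):
    """Client/sender: pace datagrams to an approximate target rate."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
    interval = PAYLOAD * 8 / (rate_mbps * 1e6)     # time budget per datagram
    end, payload = time.time() + seconds, b"\0" * PAYLOAD
    while time.time() < end:
        s.sendto(payload, (dst_ip, dst_port))
        time.sleep(interval)

def udp_receiver(port, seconds=12):
    """Server/receiver: count received bytes to estimate throughput (lost datagrams never arrive)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
    s.bind(("", port))
    s.settimeout(seconds)
    received = 0
    try:
        while True:
            received += len(s.recv(65535))
    except socket.timeout:
        pass
    print("received %.1f Mbit" % (received * 8 / 1e6))

# Run udp_receiver(5001) on the laptop behind the UE first, then udp_sender("10.0.0.5", 5001) on the server.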
