
2G RADIO PARAMETERS

RxLev: Average received signal strength of the serving cell, measured over all time slots or over a subset of time slots.

RxLev (Full): RxLev of a full-rate traffic channel, measured over all time slots.

RxLev (Sub): RxLev measured over the subset of time slots; used when DTX-DL is enabled in the network.

-10 to -65 dBm: Very Good

-65 to -75 dBm: Good

-75 to -95 dBm: Fair

-95 to -105 dBm: Poor
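As an illustration, the RxLev bands above can be encoded in a small helper. This is only a sketch; the thresholds are the ones listed, while the function name is ours.

```python
# Illustrative helper (not part of any standard): classify a downlink
# RxLev sample (in dBm) into the quality bands listed above.
def classify_rxlev(rxlev_dbm):
    if rxlev_dbm >= -65:
        return "Very Good"
    elif rxlev_dbm >= -75:
        return "Good"
    elif rxlev_dbm >= -95:
        return "Fair"
    else:
        return "Poor"

print(classify_rxlev(-60))   # Very Good
print(classify_rxlev(-100))  # Poor
```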

RxQual: Average received signal quality of the serving cell, measured over all time slots and over a subset of time slots. It is derived from the BER. For a call without hopping, RxQual should be less than 4; for a call with hopping, it should be less than 5.

FER: The percentage of frames discarded because of a high number of uncorrectable bit errors, i.e. the ratio of discarded speech frames to received speech frames. Ideally it should be 0.

0 to 4%: Good

4 to 15%: Degraded

Above 15%: Unusable

BER (Actual): The ratio of the number of bit errors to the total number of bits transmitted in a given time interval. BER is a measure of voice quality in the network; RxQual is derived from it.

SQI: A more sophisticated measure dedicated to reflecting the quality of the speech (as opposed to radio environment conditions). When optimizing speech quality in your network, SQI is therefore the best criterion to use. It is updated at 0.5 s intervals and computed from BER and FER.

18 to 30: Good

0 to 18: Bad

C/I: The carrier-to-interference ratio, i.e. the ratio of the signal strength of the current serving cell to the strength of undesired (interfering) signal components on the same frequency. It is updated twice per second and is measured when frequency hopping is enabled.

15 to 25 dB: Good

9 to 15 dB: Marginal
C/A: The carrier-to-adjacent ratio, i.e. the ratio of the signal strength of the current serving cell to the signal strength on an adjacent frequency.

MS Power Control Level: Displays the power control level, in a range from 0 to 8 depending on network design; e.g. 0 means no power control, and 1 is a level defined by the operator.

DTX: Discontinuous transmission is based on detecting voice activity and switching the transmitter on only during periods of active speech, switching it off during silent periods to reduce interference.

TA: A value that the base station calculates from access bursts and sends to the mobile station (MS), enabling the MS to advance the timing of its transmissions to the BTS to compensate for propagation delay. A value of 0 means the MS is within a 550 m radius of the BTS.

RL Timeout Counter (Cur): Shows the current value of the radio link counter, expressed in SACCH blocks. The counter decreases by 1 on each SACCH decoding failure and increases by 2 on each success. When it reaches zero, the call is dropped.

RL Timeout Counter (Max): The maximum value of the radio link counter, expressed in SACCH blocks, with a range of 4 to 64 in steps of 4. Typical values are 16, 20 and 24.

MS Behavior Modified: This window shows the current settings for the mobile station, for instance whether handover is disabled or multiband reporting is enabled.

CURRENT CHANNEL

When hopping is activated

Time: The system time of the computer.

Cell Name: The name of the serving sector, according to the cell file loaded in TEMS.

CGI: The Cell Global Identity, which is unique for every sector of a site. It consists of MCC, MNC, LAC and CI:

Mobile Country Code + Mobile Network Code + Location Area Code + Cell Identity
Cell GPRS Support: Tells whether the sector supports GPRS (Yes or No).

Band: The frequency band in which the mobile is operating, e.g. GSM 900/1800.

BCCH ARFCN: The BCCH carrier serving the mobile station.

TCH ARFCN: The traffic frequency on which the call is in progress.

BSIC: The combination of the Network Colour Code (NCC, 0-7) and the Base Station Colour Code (BCC, 0-7), e.g. 62. It is decoded by the mobile from every Synchronization Channel message.

Mode: The state in which the mobile is operating: Idle, Dedicated or Packet.

Time Slot: The time slot of the TRX on which the current TCH call is in progress.

Channel Type: The type of channel the mobile currently holds, e.g. BCCH, SDCCH/8 + SACCH/C8, CBCH, or TCH/F + FACCH/F + SACCH/F.

Channel Mode: The speech coding mode, e.g. Speech Full Rate or Half Rate (speech full rate or half rate version 3).


Speech Codec: FR for Full Rate, HR for Half Rate, EFR for Enhanced Full Rate (or AMR Full Rate).

Ciphering Algorithm: The ciphering algorithm used by the system to protect data privacy, e.g. A5/1 or A5/2.

Sub Channel Number: Displayed while the mobile is in dedicated mode during call setup; it shows which of the 8 available SDCCH sub-channels the mobile has been assigned, e.g. 2.

Hopping Channel: Whether the current sector has the hopping feature (Yes or No).

Hopping Frequencies: The frequencies on which the mobile is allowed to hop, i.e. the MA list for hopping of that sector.

Mobile Allocation Index Offset (MAIO): The number telling at which frequency in the sector's MA list hopping starts; e.g. 0 means the sector starts hopping from the first frequency.

Hopping Sequence Number (HSN): Indicates the sequence (0-63) in which the frequencies of the MA list are hopped:

0 for cyclic hopping,

1-63 for pseudo-random hopping.

Cell Selection and Reselection



Cell selection refers to the initial registration that an MS makes with a network. This normally occurs only when the phone powers up or when the MS roams from one network to another.
Cell reselection refers to the process of an MS choosing a new cell to monitor once it has already registered and is camped on a cell. It is important to note that selection and reselection are performed by the MS itself and are not governed by the network; the network is only responsible for this function when the MS is on a Traffic Channel (TCH). When the MS reselects a new cell, it will not inform the network unless the new cell is in a new Location Area (LA).
There are many parameters involved in selection and reselection of a new cell. The MS must ensure it is getting the best signal, and the network must ensure that the MS does not put unneeded strain on the network by switching cells when unnecessary or undesired.

Cell Selection and Re-selection in GSM



Cell Selection Procedure


1. The MS powers on.

2. The MS starts measuring the received power level from all cells in range.

3. The MS calculates the average power level received from each cell, stored in the RXLEV(n) parameter.

4. The MS calculates the C1 parameter for each cell based on RXLEV(n).

5. The MS compares the cells that give a positive C1 value and camps on the cell with the highest C1.

On switch-on, the MS periodically measures the received power level on each of the BCCH frequencies of all cells within range. From these periodic measurements it calculates the mean received level of each cell, stored in the parameter RXLEV(n), where n is the neighbouring cell number.
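The text does not spell out the C1 criterion itself; the standard path-loss form from GSM 05.08 is sketched below, with illustrative parameter values and function names of our own choosing.

```python
# Sketch of the C1 path-loss criterion (GSM 05.08); the parameter values
# below are illustrative, not taken from the text above.
def c1(rxlev, rxlev_access_min, ms_txpwr_max_cch, ms_max_power):
    # A: received level minus the minimum level required to access the cell
    a = rxlev - rxlev_access_min
    # B: extra power the cell demands beyond what this MS can transmit
    b = max(ms_txpwr_max_cch - ms_max_power, 0)
    return a - b

cells = {"A": -80, "B": -90}  # averaged RXLEV(n) per cell, in dBm
# Camp on the cell with the highest (positive) C1.
best = max(cells, key=lambda n: c1(cells[n], -104, 33, 33))
print(best)  # A
```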

Cell Re-selection GSM Phase 1 Mobiles

For GSM Phase 1 mobiles, cell reselection is achieved by comparing the current cell's C1 with the C1 measurements of neighbouring cells:

Between cells within a Location Area:

C1(new) > C1(old) (for more than 5 seconds)

Between cells on a Location Area boundary:

C1(new) > C1(old) + OFFSET (for more than 5 seconds)


Cell Re-selection GSM Phase 2 Mobiles

GSM Phase 2 introduced a separate cell re-selection parameter, C2

Intended to:

Prevent multiple handovers for fast-moving mobiles

Ensure MS camps on to cell with greatest chance of successful communications

The C2 value is calculated as:

C2 = C1 + CELL_RESELECT_OFFSET - TEMPORARY_OFFSET x H(PENALTY_TIME - T)

where T is a timer started when the cell enters the MS's list of strongest carriers, and H(x) is 1 while x > 0 (i.e. during the penalty time) and 0 afterwards. (When PENALTY_TIME is set to 31, C2 = C1 - CELL_RESELECT_OFFSET instead.)

INTERFERENCE in GSM

In GSM systems, there can be interference to cells from their neighbor cells that use the same

frequency or adjacent frequencies or both. There can even be interference from cells in other systems.
Interference can be classified as Co-Channel Interference (CCI) and Adjacent Channel Interference

(ACI) from the same system (intrasystem interference) or between different systems (intersystem

interference).

Co-channel Interference

Because of frequency reuse in GSM systems, the reused frequencies in other cells can interfere with the serving cell. CCI depends on the reuse plan: for example, the 7/21 reuse plan has a CCI greater than the 9/27 plan and less than the 4/12 plan. Therefore, to reduce CCI it is recommended to increase the reuse pattern or, equivalently, increase the reuse distance between cells of the same frequency group. The impact of increased CCI is a degradation in voice quality. CCI is measured in terms of the Carrier-to-Interference Ratio (C/I). For analog systems, a C/I of 17 dB is considered appropriate for good voice quality. The BER of a digital system, with and without diversity, varies with the C/I level.

[Figure: Relationship between C/I and BER]


Adjacent Channel Interference

Avoiding the use of adjacent channels in neighbour cells is good practice and is possible in a theoretical 7/21 reuse pattern. In practical systems, adjacent channel assignments in neighbour cells occur quite often. When the carrier power is increased in a cell, it can cause interference to an adjacent channel in a neighbour cell. The IS-136 standard specifies acceptable levels for the Carrier-to-Adjacent Interference Ratio (C/A) that produce acceptable voice quality in terms of BER for uplink interference, as shown in the figure.


Speech Quality Index (SQI) Calculation in GSM

The Speech Quality Index is a performance metric for voice quality in GSM. Note, however, that SQI is not a standard GSM performance metric; it is specific to the TEMS family of drive-test/field-test tools.

SQI aims to provide a reasonable estimate of the voice quality as perceived by a human ear. It aims to achieve what the Mean Opinion Score from PESQ does, but without comparing a received sample against a reference sample. Instead, SQI simply takes the downlink RF measurements into account and produces a value between 0 and 30, with 30 being the best quality and 0 the worst.

The inputs to the SQI algorithm are:

FER (Frame Error Rate): the number of frames lost out of the total frames. This also accounts for frames lost during GSM handovers.

BER (Bit Error Rate): the number of bits received in error out of the total received bits.

Speech codec used: the exact speech codec in use for the call at the moment, since the bit rate of a speech codec directly influences voice quality.

IMSI, MSISDN

MCC (3 digits) + MNC (2 digits) + MSIN (10 digits)

IMSI: International Mobile Subscriber Identity.

IMSI = MCC + MNC + MSIN
MCC = Mobile Country Code (3 digits)
MNC = Mobile Network Code (2 digits)
MSIN = Mobile Subscriber Identification Number (10 digits)

Example: IMSI = MCC-MNC-MSIN = 404-22-1234567890, where 404 is the India country code, 22 is the Bharti Airtel network code, and 1234567890 is the Mobile Subscriber Identification Number.

MSISDN: Mobile Station Integrated Services Digital Network number.

MSISDN = CC + NDC + SN
CC = Country Code (2-3 digits)
NDC = Network Destination Code (2-3 digits)
SN = Subscriber Number (max 10 digits)
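The IMSI layout above can be split mechanically. A minimal sketch following the digit counts given for India (the function name is ours; note that some countries use 3-digit MNCs):

```python
# Split an IMSI into its components, per the layout above:
# MCC = 3 digits, MNC = 2 digits (India), MSIN = the remainder.
def parse_imsi(imsi):
    return {"MCC": imsi[:3], "MNC": imsi[3:5], "MSIN": imsi[5:]}

print(parse_imsi("404221234567890"))
# {'MCC': '404', 'MNC': '22', 'MSIN': '1234567890'}
```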

Absolute Radio Frequency Channel


Number (ARFCN)

The ARFCN is a number that identifies a pair of frequencies, one uplink and one downlink, each with a bandwidth of 200 kHz. The uplink and downlink have a specific offset that varies per band; the offset is the frequency separation between uplink and downlink. Each time the ARFCN increases by one, both the uplink and the downlink frequencies increase by 200 kHz.

The table below summarizes the frequency ranges, offsets, and ARFCNs for several popular bands.
Calculating Uplink/Downlink Frequencies

The following is a way to calculate the uplink and downlink frequencies for some of the bands, given
the band, the ARFCN, and the offset.

GSM 900

Uplink = 890.0 + (ARFCN x 0.2); Downlink = Uplink + 45.0

Example:

Given ARFCN 72, and knowing that the offset for the GSM 900 band is 45 MHz:

Uplink = 890.0 + (72 x 0.2) = 890.0 + 14.4 = 904.40 MHz

Downlink = Uplink + Offset = 904.40 + 45.0 = 949.40 MHz
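The calculation above is easy to mechanize. A sketch for the primary GSM 900 band only (function name ours; the 0-124 ARFCN range is the standard P-GSM allocation):

```python
# GSM 900 (P-GSM) frequencies from the formula above:
# uplink = 890.0 + 0.2 * ARFCN, downlink = uplink + 45 MHz duplex offset.
def gsm900_freqs(arfcn):
    if not 0 <= arfcn <= 124:
        raise ValueError("P-GSM 900 ARFCNs run from 0 to 124")
    uplink = 890.0 + 0.2 * arfcn
    return uplink, uplink + 45.0

up, down = gsm900_freqs(72)
print(up, down)  # ~904.4 MHz uplink, ~949.4 MHz downlink
```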


Power Control Overview

1. During handover, the MS accesses the target cell with the maximum transmit power allowed by the target cell (carried in the handover command). If MS power prediction after handover is enabled, the MS instead uses an optimized power level to access the target cell.

2. During intra-cell handover, the current power level is retained.

3. Power control can be applied to TCH carriers only; it is not allowed on the BCCH carrier, because the MS needs to measure the receive level of the BCCH from adjacent cells, and those measurements would be inaccurate if power control were applied to the BCCH.

4. Power control is performed independently for each channel.

When the MS accesses the network via the RACH, its transmit power is the maximum MS TX power level obtained from the system information sent on the BCCH. The first message the MS sends on a dedicated channel also uses the maximum MS TX power level; the power is not under system control until a power control command arrives on the SACCH of the SDCCH or TCH. The implementation procedure is as follows:

1. Based on the uplink receive level and receive quality reported by the BTS, and taking the maximum MS transmit power into account, the BSC calculates the proper transmit power for the MS.

2. The power control command and the TA value are transmitted to the MS in the layer 1 header of each downlink SACCH block.

3. The MS receives the power control command carried in the SACCH header at the end of each SACCH reporting period, then executes the command at the beginning of the next reporting period. The MS can change its power by at most 2 dB per 13 frames (60 ms).

4. After executing the power control command, the MS reports its current power level in the layer 1 header of the next uplink SACCH, transmitting it to the BTS in the measurement report. It therefore takes 3 measurement reporting periods for the new power level (from each power control command) to take effect and be reported.

Note: Each complete SACCH message block (measurement report) is composed of 4 bursts. A complete power control cycle takes the time of 3 measurement reports.

Power control algorithms: HW I and HW II.

The power control judgment and the choice of the HW I or HW II algorithm are configured as follows:

The power control algorithm is selected in the power control data table.

The power control judgment is controlled by the "BTS measurement report pre-processing" item, which can be set in the handover control data table.

MR pre-processing (measurement report pre-processing): this switch decides where power control is processed. If measurement report pre-processing is set to Yes, power control is processed in the BTS; if it is set to No, power control is processed in the BSC.

Setting pre-processing to Yes reduces the signaling load on the Abis interface. In particular:

1. If the Abis E1 is in 15:1 mode, it should be set to Yes; otherwise the Abis capacity for speech is not enough.

2. For BTS22C version 0110 it should be set to No.

3. For satellite-transmission BTS it should be set to Yes.

4. Note that some BTS types constrain the choice between the HW I and HW II algorithms; please check the product manual.
Cell Reselect Penalty Time (PT)

Value range: 0~31; the corresponding time is 20~620 s. The value 31 reverses the direction in which CRO acts on C2.

Unit: none

Content: Cell reselect penalty time.

Recommendation: 0

If communication in a cell is affected by very heavy traffic or other problems, that cell should be the last one the MS camps on (the cell should repel the MS). In this case PT can be set to 31, which makes TEMPORARY_OFFSET invalid: C2 then becomes the difference of C1 and CRO, so the cell's C2 value is decreased manually and the MS will reselect this cell with little probability. The operator can set CRO according to how strongly the cell should be avoided: the stronger the repulsion, the larger the CRO.

For a cell with very low traffic, the MS should prefer to camp on that cell. In this case CRO is recommended to be between 0 and 20 dB, set according to the degree of preference for the cell: the higher the preference, the larger the CRO. Generally TEMPORARY_OFFSET is recommended to be the same as, or slightly larger than, CRO. PT is mainly used to prevent overly frequent cell reselections by the MS; it is generally recommended to be 0 (20 s) or 1 (40 s).

For a cell with medium traffic, CRO is generally recommended to be 0 and PT to be 31, so that C2 = C1, i.e. no manual modification of the cell.

Setting PT appropriately can effectively prevent fast-moving MSs from accessing a micro-cell. The parameter can be set according to the size of the micro-cell; 20 s is recommended for ordinary micro-cells. When PT is set to 31, it changes the direction in which CRO acts.
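The interaction of CRO, TEMPORARY_OFFSET and PT described above can be sketched in a few lines (a sketch following GSM 05.08; the function name is ours):

```python
# Sketch of the C2 computation (GSM 05.08):
#   PT != 31: C2 = C1 + CRO - TEMPORARY_OFFSET * H(PENALTY_TIME - T)
#   PT == 31: C2 = C1 - CRO   (CRO's direction is reversed, repelling the MS)
# H() is 1 while timer T is still inside the penalty time, else 0.
def c2(c1, cro, temporary_offset, penalty_time_code, t_seconds):
    if penalty_time_code == 31:
        return c1 - cro
    penalty_time_s = 20 * (penalty_time_code + 1)  # codes 0..30 -> 20..620 s
    h = 1 if t_seconds < penalty_time_s else 0
    return c1 + cro - temporary_offset * h

print(c2(20, 10, 30, 0, 5))   # inside the 20 s penalty: 20 + 10 - 30 = 0
print(c2(20, 10, 30, 0, 25))  # penalty expired: 20 + 10 = 30
print(c2(20, 10, 30, 31, 5))  # PT = 31: 20 - 10 = 10
```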
Timing Advance (TA)

Transmission delay is unavoidable on the radio interface: the further the mobile station moves from the base station during a call, the greater the delay. The same applies to the uplink.

If the delay is too high, the time slots of the signal from one mobile station and those of the next signal from another mobile station, as received by the base station, will overlap, causing inter-symbol interference. To avoid this, during a call the measurement report sent from the mobile station to the base station carries a delay value. The base station monitors the arrival time of the call and sends an instruction to the mobile station via the downlink channel every 480 ms, informing the mobile station how much to advance its transmission. This advance is the TA (timing advance), which ranges from 0 to 63 (0~233 µs). The TA value is limited by the timing advance code of the GSM system (0~63 bit periods); therefore the maximum coverage distance of GSM is 35 km. The calculation is as follows:

1/2 x 3.7 µs/bit x 63 bit x c ≈ 35 km

(In the formula, 3.7 µs/bit is the duration of one bit (577/156 µs); 63 bits is the maximum timing adjustment; c is the speed of light, i.e. the propagation speed of the signal; and the factor 1/2 accounts for the round trip of the signal.)

According to the above, the distance corresponding to one bit period is about 554 m. Influenced by multi-path propagation and MS synchronization precision, the TA error may reach about 3 bits (1.6 km).
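The arithmetic above can be checked with a few lines (the bit period and speed of light are as stated in the text; the function name is ours):

```python
# Distance estimate from the timing advance, per the arithmetic above:
# one bit period is ~3.69 us (577 us / 156.25 bits); the advance covers
# the round trip, so divide by 2.
def ta_to_distance_m(ta, c=3.0e8):
    bit_period_s = 577e-6 / 156.25
    return ta * bit_period_s * c / 2

print(round(ta_to_distance_m(1)))             # ~554 m per TA step
print(round(ta_to_distance_m(63) / 1000, 1))  # ~34.9 km maximum range
```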
Radio Link Timeout

Value range: 4~64, in steps of 4

Unit: SACCH period (480 ms)

Content: This parameter is used by the MS to declare downlink disconnection after repeated SACCH decoding failures.

Recommendation: 20~56

Once assigned a dedicated channel, the MS starts a counter S. From then on, S decreases by 1 each time a SACCH message fails to decode, and increases by 2 each time one decodes correctly. When S decreases to 0, a radio link failure occurs, leading to either re-establishment or release of the connection. If the value of this parameter is too small, the radio link fails too easily, resulting in call drops; if it is too large, the MS holds the channel for a long time without releasing it, lowering the availability of resources. (This parameter applies to the downlink.)

For areas with very little traffic (remote areas), a value between 52 and 62 is recommended.

For areas with light traffic and large coverage (suburbs or countryside), a value between 36 and 48 is recommended.

For areas with heavy traffic (urban), a value between 20 and 32 is recommended.

For areas with very heavy traffic (covered by microcells), a value between 4 and 16 is recommended.

For cells with obvious coverage holes, or areas where call drops during movement are serious, this parameter can be increased appropriately to improve the chance of resuming the conversation.
Note: Radio link timeout is the parameter used to judge downlink failure. Likewise, the uplink is monitored at the BTS, based either on uplink SACCH errors or on the receive level and quality of the uplink.
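The behaviour of the counter S can be simulated directly. A toy sketch, not vendor code:

```python
# Toy simulation of the radio link counter S described above: start at the
# configured radio link timeout, -1 per undecodable SACCH block, +2 per
# good one (never exceeding the starting value); S == 0 means a call drop.
def simulate_rlt(rlt_max, sacch_results):
    s = rlt_max
    for decoded_ok in sacch_results:
        s = min(s + 2, rlt_max) if decoded_ok else s - 1
        if s <= 0:
            return True  # radio link failure -> call drop
    return False

# 20 consecutive bad SACCH periods (~9.6 s at 480 ms each) drop the call.
print(simulate_rlt(20, [False] * 20))        # True
print(simulate_rlt(20, [False, True] * 20))  # False (link keeps recovering)
```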

Application of Radio Link Timeout

Suppose cells A and B are adjacent, and an MS moves from point P to point Q during a conversation; usually an outgoing cell handover will occur. If the radio link timeout is too small and the signal quality at the edge of cells A and B is poor, the radio link is likely to time out before the handover starts, resulting in a call drop.

If it is too large, when the MS stays near point P during a conversation with unacceptable voice quality, the network still has to wait a long time before the related resources can be released, lowering the resource utilization rate.

Discontinuous Transmission / DTX

During a call, a mobile subscriber actually talks only about 40% of the time, and little useful information is transmitted during the rest. Transmitting everything anyway would waste system resources and add interference to the system. To overcome this, the GSM system uses the DTX technique: transmission of radio signals is suppressed when there is no voice signal to transmit. This reduces the interference level and increases system efficiency. In addition, the mechanism saves the mobile station's battery and prolongs its standby time. Note that the DTX function is not used for data transmission.

The GSM system has two transmission modes. One is the normal mode, in which noise is transmitted with the same quality as voice. The other is the discontinuous transmission mode, in which the mobile station transmits only the voice signals and the noise at the receiving end is artificial. This artificial noise informs the listener that the connection is still up when neither subscriber is speaking, and it is designed as a comfort noise that does not make the listener uncomfortable.

The comfort noise transmission also meets the requirements of system measurement. In DTX mode, only 260 bits are transmitted per 480 ms, while in normal mode 260 bits are transmitted per 20 ms. In DTX mode these 260 bits form SID (Silence Descriptor) frames. These frames, like voice frames, are processed through channel coding, interleaving, encryption and modulation, and then transmitted in 8 consecutive bursts; at other times nothing is transmitted.

The DTX mode is optional, and transmission quality is slightly reduced when it is used, especially when both ends of the call are mobile subscribers, because DTX is then applied twice on the same path. In addition, to implement DTX the system must be able to indicate when to start and stop discontinuous transmission, and while DTX is active the coder must be able to detect whether the signal is voice or noise. For this, the VAD (Voice Activity Detection) technique is used. The VAD algorithm decides whether each output frame contains voice or background noise by comparing the measured signal energy against a defined threshold, based on the principle that noise energy should always be lower than voice energy.
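The energy-threshold idea behind VAD can be sketched in a few lines. Note that the real GSM VAD (GSM 06.32) uses an adaptive threshold and spectral analysis; the fixed threshold below is purely illustrative.

```python
# Minimal energy-threshold VAD in the spirit described above. The real
# GSM VAD adapts its threshold to the noise floor; this fixed threshold
# is only illustrative.
def is_speech(frame, threshold=0.01):
    energy = sum(x * x for x in frame) / len(frame)
    return energy > threshold

speech_like = [0.5, -0.4, 0.6, -0.5]   # high-energy samples
noise_like = [0.01, -0.02, 0.01, 0.0]  # low-energy background
print(is_speech(speech_like))  # True
print(is_speech(noise_like))   # False
```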

Power Control
During radio transmission, the transmit power can be adjusted to reduce interference, increase the utilization efficiency of the frequency spectrum, and prolong battery life; this is called power control. More specifically, power control means adjusting the transmit power of the mobile station or the base station within a certain range. Its objective is the same as that of DTX: when the receive level and quality are strong enough, the transmit power can be reduced appropriately while keeping the communication at an acceptable level, thereby reducing interference to other nearby calls. The specific process is described later, together with the Huawei power control algorithm.

Frequency Hopping

Frequency Hopping:

Frequency hopping is a mechanism in which the system changes the frequency (uplink and downlink) during transmission at regular intervals. It allows the RF channel used for a signaling channel (SDCCH) time slot or traffic channel (TCH) time slot to change frequency every TDMA frame (4.615 ms). The frequency is changed on a per-burst basis, which means that all the bits in a burst are transmitted on the same frequency. This amounts to about 217 hops per second.

Advantages of frequency hopping: frequency diversity, interference averaging, increased capacity.

There are two types of hopping: 1. Baseband FH (BBH); 2. Synthesizer FH (SFH).

1. Baseband Frequency Hopping (BBH): In baseband hopping, each transmitter is assigned a fixed frequency. At transmission, all bursts, irrespective of which connection they belong to, are routed to the transmitter of the proper frequency. The advantage of this mode is that narrow-band tunable filter combiners can be used.

2. Synthesizer Frequency Hopping (SFH): Synthesizer hopping means that one transmitter handles all bursts that belong to a specific connection. The bursts are sent straight out and not routed over the bus. In contrast to baseband hopping, the transmitter tunes to the correct frequency for the transmission of each burst. The advantage of this mode is that the number of frequencies available for hopping does not depend on the number of transmitters: it is possible to hop over many frequencies even if only a few transceivers are installed. A disadvantage of synthesizer hopping is that wide-band hybrid combiners have to be used; this type of combiner has approximately 3 dB loss, making more than two combiners in cascade impractical.

Frequency Hopping Parameters

Mobile Allocation (MA): The set of frequencies the mobile is allowed to hop over. A maximum of 63 frequencies can be defined in the MA list.

Hopping Sequence Number (HSN): Determines the hopping order used in the cell. 64 different HSNs can be assigned: HSN = 0 gives a cyclic hopping sequence, and HSN = 1 to 63 give various pseudo-random hopping sequences.

Mobile Allocation Index Offset (MAIO): Determines at which frequency within the hopping sequence the mobile starts to transmit. The MAIO ranges from 0 to (N-1), where N is the number of frequencies defined in the MA list. MAIO is presently set on a per-carrier basis.

What Is BSIC and Its Use in GSM

BSIC = NCC + BCC

In the GSM system, each BTS is allocated a colour code, called the BSIC. With the help of the BSIC, the MS can distinguish two cells that use the same BCCH frequency. In network planning, effort should be made to ensure that the BCCHs of neighbour cells differ from the serving cell's BCCH, to reduce interference.

In practice it is still possible that the same BCCH frequency is reused in surrounding cells. Cells using the same BCCH within a relevant distance of each other must have different BSICs, so that the MS can tell two such neighbour cells apart.

The BSIC is transmitted on the Synchronization Channel (SCH) of each cell. Its functions are as follows:

Once the MS has read the SCH, it is considered synchronized with that cell. However, to correctly read the information on the downlink common signaling channel, the MS must know the TSC (Training Sequence Code) adopted by that channel. According to the GSM specification, the training sequence has eight fixed formats, represented by TSC values 0~7.

The TSC number adopted by the common signaling channel of each cell is simply the BCC of the cell. So one function of the BSIC is to inform the MS of the TSC used by the common signaling channel of the cell.

Since the BSIC takes part in the coding of the information bits of the random access burst, it can be used to prevent the BTS from accepting a RACH transmitted by an MS in a neighbour cell as an access attempt from an MS of the serving cell.

When the MS is in dedicated mode, it must measure the BCCH level of the neighbour cells listed in the BA2 sent on the SACCH and report it to the BTS, including their respective BSICs. In the special circumstance that two or more neighbour cells use the same BCCH, the BSC can use the BSIC to distinguish these cells and avoid wrong handovers or even handover failures.

The MS must measure the BCCH signals of neighbour cells in dedicated mode and report the results to the network. Since each measurement report from the MS contains at most 6 neighbour cells, it is necessary to restrict the MS to reporting only the cells that have a neighbour relationship with the serving cell.

The NCC is used for this purpose: network operators can use the parameter NCC Permitted to make the MS report only neighbour cells whose NCC is permitted in the serving cell.


What is Antenna Electrical and Mechanical Tilt (and How to use it)?

Posted by leopedrini Thursday, October 13, 2011 11:09:00 AM Categories: Course


The efficiency of a cellular network depends on the correct configuration and adjustment of its radiating systems: its transmit and receive antennas.

One of the most important system optimization tasks is correctly adjusting tilts, i.e. the inclination of the antenna relative to an axis. With tilt, we direct the radiation further down (or up), concentrating the energy in the new desired direction.

When the antenna is tilted down, we call it 'downtilt', which is the most common use. If the inclination is up (very rare and extreme cases), we call it 'uptilt'.

Note: for this reason, when we refer to tilt in this tutorial we are talking about 'downtilt'. When we need to talk about 'uptilt' we will use that term explicitly.
Tilt is used when we want to reduce interference and/or coverage in specific areas, so that each cell serves only its designed area.

Although this is a complex issue, let's try to understand in a simple way how all of this works.

Note: All telecomHall articles are originally written in Portuguese. We then translate them to
English and Spanish. As our time is short, you may find some typos (sometimes we just
use the automatic translator, with only a final, quick review). We apologize, and ask for your
understanding of our effort. If you want to contribute by translating / correcting
these languages, or even by creating and publishing your own tutorials, please contact us: contact.

But Before: Antenna Radiation Diagram

Before we talk about tilt, it is necessary to introduce another very important concept: the
antenna radiation diagram.

The antenna radiation diagram is a graphical representation of how the signal spreads
from the antenna, in all directions.

It is easier to understand by looking at an example 3D diagram of an antenna (in this case,
a directional antenna with a horizontal beamwidth of 65 degrees).

The representation shows, in a simplified form, the gain of the signal in each of these
directions. From the center point of the X, Y and Z axes, we have the gain in all directions.

If you look at the diagram of the antenna 'from above', and also 'from the side', we would see
something like the figures shown below.

These are the Horizontal (viewed from above) and Vertical (viewed from the side) diagrams
of the antenna.

While this visualization is good for understanding the subject, in practice we do not work with
the 3D diagrams, but with the 2D representation.

So, the same antenna we have above may be represented as follows.

Usually the diagrams have lines and numbers to help us verify the exact 'behavior' in each
of the directions.

The 'straight lines' tell us the direction (azimuth), like the numbers 0, 90, 180 and 270 in the
figures above.

And the 'curves' or 'circles' tell us the gain in that direction (for example, the larger circle tells you
where the antenna achieves a gain of 15 dB).

According to the applied tilt, we'll have a different, modified diagram; i.e., we affect the
coverage area. For example, if we apply an electrical tilt of 10 degrees to the antenna shown
above, its diagrams change as shown below.

The most important thing here is to understand this 'concept', and to be able to imagine how
the 3D model would look: a combination of its Horizontal and Vertical diagrams.

Now, what is Tilt?

Right, now we can talk specifically about tilt. Let's start by recalling what the tilt of an
antenna is, and what its purpose is.

The tilt represents the inclination, or angle, of the antenna relative to its axis.

As we have seen, when we apply a tilt, we change the antenna radiation diagram.

For a standard antenna, without tilt, the diagram is formed as we see in the following
figure.

There are two possible types of tilt (which can be applied together): electrical tilt and
mechanical tilt.

The mechanical tilt is easy to understand: by tilting the antenna through specific
accessories on its bracket, without changing the phase of the input signal, the diagram (and
consequently the signal propagation directions) is modified.

For the electrical tilt, the modification of the diagram is obtained by changing the
signal phase characteristics of each element of the antenna, as seen below.

Note: the electrical tilt can have a fixed value, or it can be variable, usually adjusted through
an accessory such as a rod or bolt with markings. This adjustment can be either manual or
remote; in the latter case it is known as 'RET' (Remote Electrical Tilt), usually a small
motor connected to the screw stem/regulator that does the job of adjusting the tilt.

Without a doubt, the best option is to use antennas with variable electrical tilt AND remote
adjustment capability, because this gives much more flexibility and ease to the optimizer.

However, these solutions are usually more expensive, and therefore antennas with the
manual variable electrical tilt option are more common.

So, if you don't have the budget for antennas with RET, choose at least antennas with
manual but 'variable' electrical tilt; only when you have no other choice, choose
antennas with fixed electrical tilt.
Changes in Radiation Diagrams: it depends on the Tilt Type

We have already seen that when we apply a tilt (electrical or mechanical) to an antenna, the
signal propagation changes, because we change the 3D diagram as discussed earlier.

But this variation is also different depending on whether the tilt is electrical or mechanical.
Therefore, it is very important to understand how the radiated signal is affected in each
case.

It is possible to explain these effects through calculations and definitions of dB, nulls and
gains on the diagram. But the following figures show, in a much simpler way, how the
horizontal beamwidth behaves when we apply electrical and mechanical tilt to an antenna.

See how the horizontal radiation diagram looks for an antenna with a horizontal beamwidth of
90 degrees.

Of course, depending on the horizontal beamwidth, we'll have other figures. But the idea, or
the 'behavior', is the same. Below, we have the same result for an antenna with a horizontal
beamwidth of 65 degrees.

Our goal is that, with the pictures above, you can understand how each type of tilt affects the
end result in coverage: one of the most important goals of this tutorial.

But the best way to verify this concept in practice is by checking the final coverage that
each one produces.

To do this, let's take as a reference a simple 'coverage prediction' of a sample cell.
(These results could also be obtained from detailed drive test measurements in the cell
region.)

Then we generate 2 more predictions: the first with an electrical tilt of 8 degrees (and no
mechanical tilt), and the second with only a mechanical tilt of 8 degrees.

Analyzing the diagrams for both types of tilt, as well as the results of the predictions (these
results can also be confirmed by drive test measurements), we find that:

With the mechanical tilt, the coverage area is reduced in the central direction, but the
coverage area in the side directions is increased.

With the electrical tilt, the coverage area suffers a uniform reduction in the
direction of the antenna azimuth; that is, the gain is reduced uniformly.

Conclusion: the advantage of one tilt type over the other depends on which of the two
results above is desired/required.

In general, the basic concept of tilt is that when we apply tilt to an antenna, we
improve the signal in areas close to the site, and reduce the coverage in more remote
locations. In other words, when we're adjusting the tilt we seek a signal as strong as
possible in the areas of interest (where the traffic must be), and similarly, a signal as weak
as possible beyond the borders of the cell.

Of course, everything depends on the 'variables' involved, such as tilt angle, antenna height
and type, and also the topography and existing obstacles.

Roughly, but in a way that can be used in practice, the tilt angle can be estimated through a
simple calculation of the vertical angle between the antenna and the area of interest.

In other words, we choose a tilt angle such that the desired coverage areas are in
the direction of the vertical diagram.

It is important to compare:

the antenna angle toward the area of interest;

the antenna vertical diagram.

We must also take into account the antenna nulls. These null points in the antenna diagram
should not be aimed at important areas.

As a basic formula, we have:

Angle = ArcTAN (Height / Distance)

Note: the height and distance must be in the same measurement units.
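As an illustration, the formula above can be written as a small Python helper (the function name and example numbers are ours, not from the original text):

```python
import math

def downtilt_angle(antenna_height_m: float, target_distance_m: float) -> float:
    """Estimate the downtilt angle (degrees) so the antenna's main beam
    points at a target area, using Angle = ArcTAN(Height / Distance).
    Height and distance must be in the same unit (here: meters)."""
    return math.degrees(math.atan(antenna_height_m / target_distance_m))

# Example: a 30 m antenna aiming at an area 1 km away needs roughly 1.7 degrees.
print(round(downtilt_angle(30, 1000), 2))
```

In practice this estimate is only a starting point; the vertical beamwidth and nulls of the actual antenna diagram still have to be checked against the result.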
Recommendations

The main recommendation to be followed when applying tilts is to use them with caution.
Although tilt can reduce interference, it can also reduce coverage, especially in indoor
locations.

So, calculations (and measurements) must be made to predict (and check) the results, and
if they indicate coverage loss, we should re-evaluate the tilt.

It is a good practice to define some typical (default) tilt values to be applied across the
network cells, varying only based on region, cell size, and antenna heights and types.

It is recommended not to use overly aggressive values: it is better to start with a small tilt in
all cells, and then make adjustments as needed to improve coverage/interference.

When using mechanical tilt, remember that the horizontal beamwidth becomes wider toward
the antenna sides, which can create a C/I problem in the coverage of neighboring
cells.

Always make a local verification after changing any tilt, however small the change may have
been. This means assessing the coverage and quality in the area of the changed cell, and
also in the affected region. Always remember that a problem may have been solved ... but
another may have arisen!

Documentation

Documentation is a very important task in all activities of the telecommunications area.
But its importance is even greater when we talk about radiant system documentation
(including tilts).

It is very important to know exactly 'what' we currently have configured at each network
cell. And it is equally important to know 'why' that given value was changed, or optimized.

Professionals who do not follow this rule often have to perform rework, simply because the
changes were not properly documented.

For example, if a particular tilt was applied to remove an interfering signal at a VIP
customer, it should go back to the original value when the frequency plan is fixed.

Another case, for example, is if the tilt was applied due to congestion problems. After the
sector expansion (TRX, carriers, etc.), the tilt must return to the previous value, reaching a
greater coverage area and, consequently, generating greater overall revenue.

Yet another case is the activation of a new site: both tilts and azimuths should be
re-evaluated for all neighboring sites.

Of course, each case should be evaluated according to its characteristics, and only then
should final tilt values be applied. For example, if there is a large building in front of an
antenna, increasing the tilt could end up completely eliminating the signal.

In all cases, common sense should prevail, evaluating the result through all the possible
tools and calculations (such as predictions), data collection (such as drive tests) and KPIs.

Practical Values

As we can see, there's no 'rule', or default value, for all the tilts of a network.

But considering the values most commonly found in the field, reasonable defaults are:

15 dBi gain: default tilt between 7 and 8 degrees (8 degrees for smaller cells).

18 dBi gain: default tilt between 3.5 and 4 degrees (again, 4 degrees for smaller cells).

These values typically give 3 to 5 dB of loss on the horizon.

Note: the default tilt is slightly larger in smaller cells because these cells are in dense
areas, where a slightly smaller coverage area won't have as much effect as in larger cells.
And in the case of very small cells, the tilt is practically mandatory; otherwise we run the risk
of creating very poor coverage areas at their edges due to antenna nulls.

It is easier to control a network when almost all antennas have approximately the same
value: with a small value, or even without tilt applied, in all cells, we have an almost
negligible coverage loss, and a good C/I level.

Thus, we can worry about, and focus on, only the more problematic cells.

When you apply tilts to antennas, do it in a structured manner, for example in steps of 2
or 3 degrees; document it, and also let your team know these steps.

As already mentioned, the mechanical tilt is often changed through the adjustment of mechanical
devices (1) and (2) that fix the antennas to their brackets.
The electrical tilt can be modified, for example, through rods or screws, usually located at
the bottom of the antenna, which, when moved, apply a corresponding tilt to the
antenna.
For example, in the above figure, we have a dual antenna (two frequency bands), and of
course 2 rods (1) and (2) that are moved, with a small display (3) indicating
the corresponding electrical tilt, one for each band.

And what are the applications?

In the definitions so far, we've already seen that tilt has several applications, such as
minimizing unwanted overlap with neighboring cells, e.g. improving handover
conditions. We can also apply tilt to remove local interference and increase traffic
capacity, and there are cases where we simply want to change the size of certain cells, for
example when we insert a new cell.
In A Nutshell: the most important thing is to understand the concept, or effect, of each
type of tilt, so that you can apply it as best as possible in each situation.

Final Tips

The tilt subject is far more comprehensive than what we (tried to) demonstrate here today, but
we believe it is enough for you to understand the basic concepts.

A final tip concerns applying tilts to antennas with more than one band.

This is because in different frequency bands we have different propagation losses. For this
reason, antennas that support more than one band have different radiation diagrams and,
above all, different gains and electrical tilt ranges.

And what's the problem?

Well, suppose as an example an antenna that has a band X, the lower one, and a band Y,
the higher one.

Analyzing the characteristics of this specific antenna, you'll see that the ranges of electrical
tilt are different for each band.

For example, for this same dual antenna we can have:


X band: electrical tilt range from 0 to 10 degrees.

Y band: electrical tilt range from 0 to 6 degrees.

The gain of the lower band is always smaller, as if to 'compensate for' the smaller propagation
loss that this band has in relation to the other. In this way, we can achieve roughly equal
coverage areas on both bands, of course, if we use 'equivalent' tilts.

Okay, but in the example above, the maximums are 10 and 6. What would be an equivalent tilt?

So the tip is this: always pay attention to the correlation of tilts between antennas with
more than one band being transmitted!

The suggestion is to maintain an auxiliary table, with the correlation of these pre-defined
values.

Thus, for the electrical tilt of a given cell:

X Band ET = 0 (no tilt), then Y Band ET = 0 (no tilt). Ok.

X Band ET = 10 (maximum possible tilt), then Y Band ET = 6 (maximum possible tilt). Ok.

X Band ET = 5. And there? By correlation, Y Band ET = 3!

Obviously, this relationship is not always a 'rule', because it depends on each band's specific
diagrams and how each one reaches the areas of interest.

But it is worth paying attention not to end up applying the maximum tilt in one band (Y ET = 6)
and the 'same' value (X ET = 6) in the other band, because even though they have the same
'value', they're actually not 'equivalent'.

After you set this correlation table for your antennas, distribute it to your team so that,
in the field, when they have to change the tilt of one band they will automatically know the
approximate tilt that should be adjusted in the other(s).
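Under the assumption of simple linear scaling between the two electrical tilt ranges (the 10-to-6 example above), the correlation table can be sketched in Python (function name and ranges are ours, for illustration):

```python
def equivalent_tilt(x_tilt: float, x_max: float = 10.0, y_max: float = 6.0) -> float:
    """Map an electrical tilt set on band X (range 0..x_max) to the
    'equivalent' tilt on band Y (range 0..y_max) by linear scaling.
    These ranges mirror the example above; real antennas need their own
    ranges (and ideally their measured diagrams) instead of this assumption."""
    if not 0 <= x_tilt <= x_max:
        raise ValueError("tilt outside the band X electrical range")
    return x_tilt * y_max / x_max

# X ET = 5 maps to Y ET = 3, matching the correlation example above.
print(equivalent_tilt(5))
```

As the text notes, this proportional mapping is a starting point, not a rule: the two bands' diagrams should still be checked against the areas of interest.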

And how to verify changes?

We have also said previously that the verifications, or the effects of tilt adjustments, can be
checked in various ways, such as through drive tests, coverage predictions, on-site
measurements in the areas of interest, or through counters or Key Performance Indicators (KPIs).

Specifically regarding verification through performance counters, in addition to the KPIs directly
affected, an interesting and efficient form of verification is through distance counters.
On GSM, for example, we have TA counters (number of MRs per TA, number of Radio Link
Failures per TA).

Note: we have talked about TA here at telecomHall, and if you have more interest in the
subject, click here to read the tutorial.

This type of check is very simple to do, and the results can be clearly evaluated.

For example, we can check the effect of a tilt applied to a particular cell through counters in
a simple Excel worksheet.

Through the TA information for each cell, we know how far the coverage of each one
reaches. So, after we change a particular tilt, we simply export the new KPI data (TA) and
compare the new coverage area (and also the new distributions/concentrations of traffic).
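Each GSM TA step represents one bit period of round-trip delay, which corresponds to roughly 550 m of distance. A small helper like the one below (our own illustration, not from the original text) turns exported TA values into approximate distances:

```python
# One GSM bit period is 48/13 microseconds; one TA step covers half the
# round-trip light travel in that time, i.e. about 553.5 m.
C = 299_792_458.0          # speed of light, m/s
BIT_PERIOD_S = 48e-6 / 13  # GSM bit duration, seconds

def ta_to_distance_km(ta: int) -> float:
    """Approximate distance (km) of a mobile reporting timing advance `ta`."""
    return C * BIT_PERIOD_S / 2 * ta / 1000

# TA ranges 0..63, so the maximum GSM cell radius is about 35 km.
print(round(ta_to_distance_km(63), 1))
```

Binning the exported TA counters with this conversion gives the coverage-distance profile of a cell before and after the tilt change.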

Another way, perhaps even more interesting, is to plot this data in a GIS program, for
example Google Earth. From the data counters table, and an auxiliary table with the
physical information of the cells (cellname, coordinates, azimuth), we can have a far more
detailed result, allowing precise checking as well.

Several other interesting pieces of information can be obtained from the report (map) above.

When we click some point, we see its traffic information. The color legend also assists in
this task. For example, in regions around the red dots, we have traffic between 40 and 45
Erlangs. By the same logic, light yellow points carry between 10 and 15 Erlangs according
to the legend; see what happens when we click at that particular location: we have
12.5 Erlangs.
Another piece of information that adds value to the analysis, also obtained by clicking any
point, is the percentage of traffic at that specific location. For example, at the yellow dot we
clicked, our 12.5 Erlangs represent 14% of the total 88.99 Erlangs that the cell carries (the sum of
all points).
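The percentage shown can be reproduced directly from the per-point traffic values (a trivial helper, using the example's own numbers):

```python
def traffic_share(point_erlangs: float, cell_total_erlangs: float) -> float:
    """Percentage of a cell's total traffic carried at one map point."""
    return 100.0 * point_erlangs / cell_total_erlangs

# The clicked yellow point: 12.5 Erl out of 88.99 Erl total is about 14%.
print(round(traffic_share(12.5, 88.99)))
```

Computed per point, these shares make it easy to see where the tilt change moved the traffic concentration.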

Also interesting is checking coverage far from the site, where we still have some traffic. In
the analysis, the designer must take into account whether the coverage is rural or not. If it
is rural coverage, it may be maintained (depending on company strategy). In sites located in
cities, such cases are most likely 'spurious' signal, and should probably be removed, for
example with the use of tilt!
The creation and manipulation of the tables and maps shown above is the subject of our next
tutorial, 'Hunter GE TA', but they aren't complicated to obtain manually, mainly the data
in Excel, which already allows you to extract plenty of information and help.

Conclusion

Today we've seen the main characteristics of tilts applied to antennas.

A good choice of tilts keeps network interference levels under control and, consequently,
provides the best overall results.

The application of tilt always results in a loss of coverage, but what one should always bear
in mind is whether the reduced coverage should be there or not!

Knowing well the concept of tilt, and especially understanding the different effects of
mechanical and electrical tilt, you will be able to achieve the best results in your network.
As always, we make our usual final request: if you liked this tutorial, please share it with
your friends; that gives us reason to continue publishing new articles like this one. Thank you!

Mechanical tilt means physically or manually downtilting the antenna. This type has
drawbacks, as mentioned later. Due to these drawbacks, electrical tilt was invented
by RF and system engineers. Electrical tilt does not involve any physical movement,
but changes the phases of the radiation pattern of the individual antennas used in the sector
array antenna. Electrical tilt can also provide gain to support the concept known as
beamforming to extend the coverage.

Antenna Mechanical tilt and their drawbacks

Figure-2

Until today, RF engineers have been using the mechanical tilt method to alter the position of
the RF antenna. But as depicted in figure-2, in this method the antenna tilts in only one
plane. Moreover, when the front part is tilted down to decrease the gain on the horizon, the
back side tilts upward. This results in a change in front-to-back ratio as well as an increase
in inter-sector interference. Mechanical tilt results in pattern blooming, as shown in
figure-4. The outermost part of the pattern on the right side of fig-4 represents a mechanically
tilted antenna with 0 degrees of downtilt. The change in radiation patterns with respect to
different degrees of mechanical tilt is also shown.
Antenna Electrical Tilt and their benefits

Figure-3

The electrical tilt concept provides a great amount of control to shape the radiation pattern
of the antenna and steer it as desired. This has made the life of cellular operators
much easier. Electrical downtilt changes the phase of the antenna's different
radiating elements separately and simultaneously. This allows RF engineers to
change the gain of the pattern around the tower through the full 360 degrees. Figure-3 depicts the
coverage achieved using the electrical tilt type.

Major difference between mechanical tilt and electrical tilt

Figure-4

Figure-4 depicts the difference between mechanical tilt and electrical tilt with respect to the
radiation pattern. As shown in the figure, mechanical tilt results in pattern blooming,
while electrical tilt suppresses the pattern bloom. Electrical tilt achieves this result
because it is able to tune individual radiating elements of the antenna array. Mechanical tilt
cannot, as it tilts the entire antenna as a single fixed unit.

What Are the LTE Requirements?

Peak data rate of 100 Mbps (DL) and 50 Mbps (UL) in 20 MHz

Throughput increased by 3-4 times for the downlink and 2-3 times for the uplink
compared to HSDPA Rel-6 (DL = 14.4 Mbps, UL = 5.7 Mbps)

Spectrum efficiency increased by 3-4 times for the downlink and 2-3 times for the uplink
compared to HSDPA Rel-6

Ability to reuse transmitter sites that have been used in UTRA/GERAN

Flexible use of spectrum (1.4, 3, 5, 10, 15, 20 MHz)

Lower latency:
radio access network latency (user plane) below 10 ms

Support for mobility up to 350 km/h

Coverage up to a radius of approximately 5 km

Enhanced MBMS (Multimedia Broadcast / Multicast Service) efficiency (1 bit/s/Hz)

Retaining existing 3GPP RATs (Radio Access Technologies) and supporting
interworking with them

Architecture simplification, with minimized, packet-based interfaces; full IP

GSM Frame Structure


GSM frame structure uses slots, frames, multiframes, superframes and hyperframes to give
the required structure and timing to the transmitted data.

The GSM system has a defined GSM frame structure to enable the orderly passage of information.
The GSM frame structure establishes schedules for the predetermined use of timeslots.

By establishing these schedules through the use of a frame structure, the mobile and the base
station are able to communicate not only the voice data, but also signalling information, without the
various types of data becoming intermixed, and with both ends of the transmission knowing exactly
what types of information are being transmitted.

The GSM frame structure provides the basis for the various physical channels used within GSM, and
accordingly it is at the heart of the overall system.
Basic GSM frame structure

The basic element in the GSM frame structure is the frame itself. This comprises eight slots,
each used by a different user within the TDMA system. As mentioned in another page of the tutorial,
the slots for transmission and reception for a given mobile are offset in time so that the mobile does
not transmit and receive at the same time.

Eight slot GSM frame structure

The basic GSM frame defines the structure upon which all the timing and structure of the GSM
messaging and signalling is based. The fundamental unit of time is called a burst period and it lasts
for approximately 0.577 ms (15/26 ms). Eight of these burst periods are grouped into what is known
as a TDMA frame. This lasts for approximately 4.615 ms (i.e. 120/26 ms) and it forms the basic unit
for the definition of logical channels. One physical channel is one burst period allocated in each
TDMA frame.
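These timing relations all derive from the 15/26 ms burst period; as an illustration (our own check, not part of the original text), they can be verified exactly with Python's fractions module:

```python
from fractions import Fraction

BURST_MS = Fraction(15, 26)         # one burst period: ~0.577 ms
FRAME_MS = 8 * BURST_MS             # 8 slots per TDMA frame: 120/26 ~ 4.615 ms
BITS_PER_BURST = Fraction(625, 4)   # 156.25 bits transmitted per burst

# Gross bit rate on one carrier: 156.25 bits every 15/26 ms = 270.833... kbps
bit_rate_kbps = BITS_PER_BURST / BURST_MS

print(round(float(BURST_MS), 3))       # 0.577
print(round(float(FRAME_MS), 3))       # 4.615
print(round(float(bit_rate_kbps), 3))  # 270.833
```

Using exact fractions avoids the rounding noise that creeps in when these values are chained as decimals.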

In simplified terms the base station transmits two types of channel, namely traffic and control.
Accordingly the channel structure is organised into two different types of frame, one for the traffic on
the main traffic carrier frequency, and the other for the control on the beacon frequency.

GSM multiframe

The GSM frames are grouped together to form multiframes and in this way it is possible to establish
a time schedule for their operation and the network can be synchronised.

There are several GSM multiframe structures:

Traffic multiframe: The traffic channel frames are organised into multiframes consisting
of 26 bursts and taking 120 ms. In a traffic multiframe, 24 bursts are used for traffic. These
are numbered 0 to 11 and 13 to 24. One of the remaining bursts is then used to
accommodate the SACCH, and the other is left free (idle). The position used for the SACCH
alternates between positions 12 and 25.
Control multiframe: The control channel multiframe comprises 51 bursts and
occupies 235.4 ms. This always occurs on the beacon frequency in time slot zero, and it may
also occur within slots 2, 4 and 6 of the beacon frequency as well. This multiframe is
subdivided into logical channels which are time-scheduled. These logical channels and
functions include the following:

o Frequency correction burst

o Synchronisation burst

o Broadcast channel (BCH)

o Paging and Access Grant Channel (PAGCH)

o Stand Alone Dedicated Control Channel (SDCCH)

GSM Superframe

Multiframes are then constructed into superframes taking 6.12 seconds. These consist of 51 traffic
multiframes or 26 control multiframes. As the traffic multiframes are 26 bursts long and the control
multiframes are 51 bursts long, the different numbers of traffic and control multiframes within the
superframe bring them back into line, both taking exactly the same interval.

GSM Hyperframe

Above this 2048 superframes (i.e. 2 to the power 11) are grouped to form one hyperframe which
repeats every 3 hours 28 minutes 53.76 seconds. It is the largest time interval within the GSM frame
structure.
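The whole frame hierarchy can be checked end to end; this short Python snippet (ours, for illustration) reproduces the durations quoted in this section:

```python
from fractions import Fraction

FRAME_MS = Fraction(120, 26)               # one TDMA frame
TRAFFIC_MF_MS = 26 * FRAME_MS              # traffic multiframe: exactly 120 ms
CONTROL_MF_MS = 51 * FRAME_MS              # control multiframe: ~235.4 ms
SUPERFRAME_S = 51 * TRAFFIC_MF_MS / 1000   # superframe: 6.12 s
HYPERFRAME_S = 2048 * SUPERFRAME_S         # hyperframe: largest GSM interval

# 51 traffic multiframes and 26 control multiframes cover the same span.
assert 51 * TRAFFIC_MF_MS == 26 * CONTROL_MF_MS

hours, rem = divmod(float(HYPERFRAME_S), 3600)
minutes, seconds = divmod(rem, 60)
print(int(hours), "h", int(minutes), "min", round(seconds, 2), "s")  # 3 h 28 min 53.76 s
```

The same arithmetic also gives the 2715648 TDMA frames per hyperframe (2048 x 51 x 26) mentioned later in the text.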

Within the GSM hyperframe there is a counter and every time slot has a unique sequential number
comprising the frame number and time slot number. This is used to maintain synchronisation of the
different scheduled operations with the GSM frame structure. These include functions such as:

Frequency hopping: Frequency hopping is a feature that is optional within the GSM
system. It can help reduce interference and fading issues, but for it to work, the transmitter
and receiver must be synchronised so they hop to the same frequencies at the same time.

Encryption: The encryption process is synchronised over the GSM hyperframe period
where a counter is used and the encryption process will repeat with each hyperframe.
However, it is unlikely that the cellphone conversation will be over 3 hours and accordingly it
is unlikely that security will be compromised as a result.
GSM Frame Structure

In GSM, a frequency band of 25 MHz is divided into smaller bands of 200 kHz, each carrying
one RF carrier; this gives 125 carriers. As one carrier is used as a guard channel between
GSM and other frequency bands, 124 carriers are useful RF channels. This division of the
frequency pool is called FDMA. Each RF carrier then has eight time slots. This
division in time is called TDMA. Each RF carrier frequency is shared between 8
users; hence in the GSM system the basic radio resource is a time slot with a duration of
about 577 microseconds. As mentioned, each time slot has 15/26 ms, or 0.577 ms, of time
duration. This time slot carries 156.25 bits, which leads to a bit rate of 270.833 kbps. This
is explained below in the TDMA GSM frame structure. For E-GSM the number of ARFCNs is
174; for DCS1800, the number of ARFCNs is 374.

The GSM frame structure is designated as hyperframe, superframe, multiframe and
frame. The minimum unit, the frame (or TDMA frame), is made of 8 time slots.
One GSM hyperframe is composed of 2048 superframes.
Each GSM superframe is composed of multiframes (either 26 or 51, as described below).
Each GSM multiframe is composed of frames (either 51 or 26, based on the multiframe type).
Each frame is composed of 8 time slots.
Hence there is a total of 2715648 TDMA frames available in GSM, and the same
cycle continues.
As shown in figure 2 below, there are two variants of the multiframe structure.
1. 26-frame multiframe: called the traffic multiframe, composed of 26 bursts in a duration of
120 ms; out of these, 24 are used for traffic, one for SACCH and one is not used.
2. 51-frame multiframe: called the control multiframe, composed of 51 bursts in a duration of
235.4 ms.
This type of multiframe is divided into logical channels. These logical channels are
time-scheduled by the BTS. It always occurs at the beacon frequency in time slot 0; it may also
take up other time slots if required by the system, for example 2, 4 and 6.

Fig.2 GSM Frame Structure

As shown in fig 3, each ARFCN, or each channel in GSM, has 8 time slots, TS0 to
TS7. During network entry, each GSM mobile phone is allocated one slot in the downlink
and one slot in the uplink. Here in the figure the GSM mobile is allocated 890.2 MHz in the
uplink and 935.2 MHz in the downlink. As mentioned, TS0 is allocated, which follows
either the 51- or 26-frame multiframe structure. Hence if at the start 'F' (FCCH) is depicted,
then after 4.615 ms (one TDMA frame later) S (SCH) will appear, then after
another frame B (BCCH) will appear, and so on until the 51-frame multiframe structure
is completed; the cycle continues as long as the connection between the mobile and the base
station is active. Similarly, the uplink follows the 26-frame multiframe structure, where T is
TCH/FS (traffic channel for full-rate speech) and S is SACCH. The GSM frame structure
can best be understood as depicted in the figure below, with respect to the downlink (BTS to
MS) and uplink (MS to BTS) directions.
Fig.3 GSM Physical and logical channel concept

Frequencies in the uplink = 890.2 + 0.2 (N-1) MHz


Frequencies in the downlink = 935.2 + 0.2 (N-1) MHz
where, N is from 1 to 124 called ARFCN
As the same antenna is used for transmit as well as receive, a 3-time-slot delay is
introduced between TS0 of the uplink and TS0 of the downlink frequency. This helps avoid the
need for simultaneous transmission and reception by the GSM mobile phone. The 3-slot
time period is used by the mobile subscriber to perform various functions, e.g.
processing data, measuring the signal quality of neighbour cells, etc.
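The two ARFCN formulas above can be wrapped in a small helper (a sketch for the primary GSM band only; the function name is ours):

```python
def gsm_frequencies_mhz(arfcn: int) -> tuple[float, float]:
    """Uplink/downlink carrier frequencies (MHz) for a primary-GSM ARFCN,
    using the formulas above: UL = 890.2 + 0.2*(N-1), DL = UL + 45 MHz."""
    if not 1 <= arfcn <= 124:
        raise ValueError("P-GSM ARFCN must be in 1..124")
    uplink = 890.2 + 0.2 * (arfcn - 1)
    return round(uplink, 1), round(uplink + 45.0, 1)

# ARFCN 1 is the pair used in the example: 890.2 MHz up, 935.2 MHz down.
print(gsm_frequencies_mhz(1))
```

The constant 45 MHz gap between the two results is the GSM 900 duplex spacing implied by the two formulas.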

Engineers working in GSM should know the GSM frame structure for both the downlink and
the uplink. They should also understand the mapping of the different channels to time slots
in these frame structures.

GSM uses a variety of channels in which the data is carried. In GSM, these channels are separated
into physical channels and logical channels. The Physical channels are determined by the timeslot,
whereas the logical channels are determined by the information carried within the physical channel.
It can be further summarised by saying that several recurring timeslots on a carrier constitute a
physical channel. These are then used by different logical channels to transfer information. These
channels may either be used for user data (payload) or signalling to enable the system to operate
correctly.

Common and dedicated channels

The channels may also be divided into common and dedicated channels. The forward common
channels are used for paging to inform a mobile of an incoming call, responding to channel requests,
and broadcasting bulletin board information. The return common channel is a random access
channel used by the mobile to request channel resources before timing information is conveyed by
the BSS.

The dedicated channels are of two main types: those used for signalling, and those used for traffic.
The signalling channels are used for maintenance of the call and for enabling call set up, providing
facilities such as handover when the call is in progress, and finally terminating the call. The traffic
channels handle the actual payload.

The following logical channels are defined in GSM:

TCHf - Full rate traffic channel.

TCH h - Half rate traffic channel.

BCCH - Broadcast Network information, e.g. for describing the current control channel structure. The
BCCH is a point-to-multipoint channel (BSS-to-MS).

SCH - Synchronisation of the MSs.

FCHMS - frequency correction.

AGCH - Acknowledge channel requests from MS and allocate a SDCCH.

PCH - MS terminating call announcement.

RACH - MS access requests, response to call announcement, location update, etc.

FACCH/T - For time-critical signalling over the TCH (e.g. for handover signalling). A traffic burst is
stolen for a full signalling burst.

SACCH/T - TCH in-band signalling, e.g. for link monitoring.

SDCCH - For signalling exchanges, e.g. during call setup, registration / location updates.
FACCH/S - FACCH for the SDCCH. The SDCCH burst is stolen for a full signalling burst. Its function is
not fully clear in the present version of GSM (it could be used, e.g., for handover of an eighth-rate
channel, i.e. using an "SDCCH-like" channel for purposes other than signalling).

SACCH/S - SDCCH in-band signalling, e.g. for link monitoring.
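The channel list above can be condensed into a small lookup table. The grouping (traffic / broadcast / common / dedicated) follows the classification used later in this document; the sketch below is purely a study aid, not an API from any real GSM stack:

```python
# Illustrative lookup of the GSM logical channels described above.
# Names and groupings follow this document's own list.
LOGICAL_CHANNELS = {
    "TCH/F": ("traffic",   "full rate traffic channel"),
    "TCH/H": ("traffic",   "half rate traffic channel"),
    "BCCH":  ("broadcast", "broadcast network information"),
    "SCH":   ("broadcast", "synchronisation of the MSs"),
    "FCH":   ("broadcast", "MS frequency correction"),
    "AGCH":  ("common",    "acknowledge channel requests, allocate SDCCH"),
    "PCH":   ("common",    "MS terminating call announcement"),
    "RACH":  ("common",    "MS access requests"),
    "SDCCH": ("dedicated", "signalling exchanges, e.g. call setup"),
    "FACCH": ("dedicated", "time-critical signalling over the TCH"),
    "SACCH": ("dedicated", "in-band signalling, e.g. link monitoring"),
}

def channels_in_group(group):
    """Return the channel names belonging to one group, alphabetically."""
    return sorted(name for name, (g, _) in LOGICAL_CHANNELS.items() if g == group)

print(channels_in_group("common"))  # -> ['AGCH', 'PCH', 'RACH']
```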

Control channel

In radio communication, a control channel is a central channel that controls other
constituent radios by handling data streams. It is most often used in the context of
a trunked radio system, where the control channel sends data which coordinates users
in talkgroups.
In GSM networks, Control Channels can be broadly divided into 3 categories;
Broadcast Control Channel (BCCH), Common Control Channel (CCCH), and
Dedicated Control Channels (DCCH).


Broadcast Control Channel

The Broadcast Control Channel is transmitted by the base transceiver station (BTS) at
all times. The RF carrier used to transmit the BCCH is referred to as the BCCH
carrier. The MS monitors the information carried on the BCCH periodically (at least
every 30 secs), when it is switched on and not in a call.

The BCCH Consists of:

a. Broadcast Control Channel (BCCH): Carries the following information:

Location Area Identity (LAI).

List of neighboring cells that should be monitored by the MS.

List of frequencies used in the cell.

Cell identity.

Power control indicator.

DTX permitted.
Access control (i.e., emergency calls, call barring ... etc.).

CBCH description.

The BCCH is transmitted at constant power at all times, so that any MS that may seek to
use it can measure its signal strength. Dummy bursts are transmitted to ensure
continuity when there is no BCCH carrier traffic.

b. Frequency Correction Channel (FCCH): This is transmitted frequently on the
BCCH timeslot and allows the mobile to synchronize its own frequency to that of the
transmitting base site. The FCCH may only be sent during timeslot 0 on the BCCH
carrier frequency and therefore it acts as a flag to the mobile to identify Timeslot 0.

c. Synchronization Channel (SCH) The SCH carries the information to enable the MS
to synchronize to the TDMA frame structure and know the timing of the individual
timeslots. The following parameters are sent:

Frame number.

Base Site Identity Code (BSIC).

The MS will monitor BCCH information from surrounding cells and store the
information from the best six cells. The SCH information on these cells is also stored
so that the MS may quickly resynchronize when it enters a new cell.
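The "best six" bookkeeping described above amounts to ranking neighbour measurements and keeping the strongest entries. A minimal sketch (the cell names and RxLev values are invented for illustration):

```python
# Sketch of the "best six" neighbour-cell selection described above:
# the MS ranks neighbours by measured signal level and keeps the top six.
def best_six(measurements):
    """measurements: dict of cell id -> signal level in dBm; top 6, strongest first."""
    ranked = sorted(measurements, key=measurements.get, reverse=True)
    return ranked[:6]

neighbours = {"A": -71, "B": -88, "C": -64, "D": -95, "E": -80,
              "F": -77, "G": -102, "H": -69}
print(best_six(neighbours))  # -> ['C', 'H', 'A', 'F', 'E', 'B']
```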

Common Control Channels

The Common Control Channel (CCCH) is responsible for transferring control
information between all mobiles and the BTS. This is necessary for the
implementation of call origination and call paging functions. It consists of the
following:

a. Random Access Channel (RACH) Used by the mobile when it requires access to the
system. This occurs when the mobile initiates a call or responds to a page.

b. Paging Channel (PCH) Used by the BTS to page MS, (paging can be performed by
an IMSI, TMSI or IMEI).

c. Access Grant Control Channel (AGCH) Used by the BTS to assign a dedicated
control channel to a MS in response to an access message received on the Random
Access Channel. The MS will move to the dedicated channel in order to proceed with
either a call setup, response to a paging message, Location Area Update or Short
Message Service.

d. Cell Broadcast Channel (CBCH) This channel is used to transmit messages to be
broadcast to all MSs within a cell. The CBCH uses a dedicated control channel to
send its messages; however, it is considered a common channel because all mobiles in
the cell can receive the messages.

Active MSs must frequently monitor both BCCH and CCCH. The CCCH will be
transmitted on the RF carrier with the BCCH.

Dedicated Control Channels

a. Stand-alone Dedicated Control Channel (SDCCH) The DCCH is a single timeslot on an
RF carrier that is used to convey eight Stand-alone Dedicated Control Channels
(SDCCH). An SDCCH is used by a single MS for call setup, authentication, location
updating and point-to-point SMS. As we will see later, an SDCCH can also be found on a
BCCH/CCCH timeslot; this configuration only allows four SDCCHs.

b. Slow Associated Control Channel (SACCH) Conveys power control and timing
information in the downlink direction (towards the MS) and Receive Signal Strength
Indicator (RSSI), and link quality reports in the uplink direction.

c. Fast Associated Control Channel (FACCH) The FACCH is transmitted instead of a
TCH. The FACCH steals the TCH burst and inserts its own information. The
FACCH is used to carry out user authentication, handovers and immediate
assignment.

All of the control channels are required for system operation. However, in the same
way that we allow different users to share the radio channel by using different
timeslots to carry the conversation data, the control channels share timeslots on the
radio channel at different times. This allows efficient passing of control information
without wasting capacity that could be used for call traffic. To do this we must
organize the timeslots between those which will be used for traffic and those which
will carry control signaling.

Channel Combination

The different logical channel types mentioned are grouped into what are called
channel combinations. The most common channel combinations are listed below:

1. Full Rate Traffic Channel Combination TCH8/FACCH + SACCH


2. Broadcast Channel Combination BCCH + CCCH

3. Dedicated Channel Combination SDCCH8 + SACCH8

4. Combined Channel Combination BCCH+CCCH+SDCCH4+SACCH4

5. Half Rate Traffic Channel Combination TCH16/FACCH + SACCH

The Half Rate Channel Combination (when introduced) will be very similar to the
Full Rate Traffic Combination.
GSM Logical Channels
Posted on January 14, 2011

Logical channels in GSM are defined by the TYPE of information that is carried over
them. They are used to transport speech, user data, high layer signally, cell specific
information to name a few.

The following picture depicts the logical channel hierarchy:


Based on, as mentioned earlier, whether they carry GSM traffic (ie, speech or data) or GSM
signalling (for ex, Layer 3 signalling), logical channels in GSM are broadly categorized in to:

Traffic Channels (TCH)

Control Channels

Below is a brief description of them; full details and the exact content of these channels
will be described in the appropriate context in later posts.

Traffic channels: They are further categorized as Half Rate (5.6 kbps), Full
Rate (13 kbps) and Enhanced Full Rate (12.2 kbps) bidirectional channels.

Control channels: They are further classified
as Broadcast, Common and Dedicated Control channels.

Broadcast channels (BCH) are one to many down link channels used by the network
to transmit various types of information in the down link to all users in the cell. The following
sub class of channels exist:

Frequency Correction Channel (FCCH): This channel is used by the
mobile to tune on to the frequency. It is an all-zero channel transmitted by
the network, and the mobile uses this information to tune in and correct
frequency errors, if any.

Synchronization Channel (SCH): This channel is used by the mobile to
synchronize with the network. The information transmitted on this channel
contains the Base Station Identity Code (BSIC) and TDMA frame number.

Broadcast Control Channel (BCCH): Used by the network to transmit
various pieces of information necessary for the MS to obtain service (for ex,
frequencies used in the NW, hopping sequence, neighbor cell information,
Location Area Code etc). This is the information needed by the mobile for its
operation (like camping on to the cell, and MO and MT calls).

Common Control Channels (CCCH) are the channels used by, or for, all mobiles:

Random Access Channel (RACH): This is an uplink-only channel used
by the mobile station when it has to access the network and there is no
dedicated channel allocated. It is used, for example, during the paging
procedure or during an MO call setup procedure for requesting dedicated
resources.

Access Grant Channel (AGCH): This is a downlink-only channel used by
the network to grant the access request made by the mobile station. The
network may assign an SDCCH on which further transactions will be performed.

Paging Channel (PCH): The paging channel is a downlink-only channel used
by the network for performing paging.

Dedicated Control Channels (DCCH), as the name suggests are point to point bi
directional channels used to carry dedicated signalling between mobile station (MS) and the
network.

Standalone Dedicated Control Channel (SDCCH): This channel is
used to carry all the call setup signalling information, including messages
from Layer 3 procedures (for ex, authentication and ciphering). A TCH will be
allocated depending on the scenario.

Fast Associated Control Channel (FACCH): Used when fast signalling
needs to be performed, such as in handover scenarios. When this channel
needs to be used, stealing mode is applied (ie, an FACCH is transmitted
instead of a 20 ms speech frame).

Slow Associated Control Channel (SACCH): Used to transmit call-related
and radio-link-related control data during a speech connection. Also used
to transmit measurement-related parameters and reports.

Note that the traffic channels described above are point to point dedicated channels, even
though in the classification picture above they are not mentioned with a dedicated tag.
Considering this, there exists another classification which is based on:

Common channels
Dedicated channels

This classification is depicted in the picture below:


In this classification, the Dedicated Control Channels (DCCH), which were mentioned
under the Control Channels category in the earlier classification, have been moved
to the Dedicated Channels category. Also, the Traffic Channels (TCH), which were
mentioned separately in the earlier classification, have been moved to the Dedicated
Channels category.

However, please note that the description made above of the basic channels at the bottom
of the tree (ie, fundamental logical channels like TCH/F, FACCH, RACH etc), remains the
same.

This should serve as a basic explanation towards the GSM logical channels. As mentioned
at the beginning of the post, a more detailed treatment of the channel content will be done
as part of another post where it is more appropriate.

GSM Audio Codec / Vocoder


- Audio / voice codecs and vocoders convert the voice signals required to be transmitted over a
GSM link into a compact digital format. Voice codec technologies used with GSM include RPE-LPC,
EFR, Full Rate, Half Rate, the AMR codec and the AMR-WB codec, along with the CELP, ACELP
and VSELP speech codec technologies.

Audio codecs or vocoders are universally used within the GSM system. They reduce the bit rate of
speech that has been converted from its analogue form into a digital format, to enable it to be carried
within the available bandwidth for the channel. Without the use of a speech codec, the digitised
speech would occupy a much wider bandwidth than would be available. Accordingly, GSM codecs
are a particularly important element in the overall system.

A variety of different forms of audio codec or vocoder are available for general use, and the GSM
system supports a number of specific audio codecs. These include the RPE-LPC, half rate, and AMR
codecs. The performance of each voice codec is different and they may be used under different
conditions, although the AMR codec is now the most widely used. Also the newer AMR wideband
(AMR-WB) codec is being introduced into many areas, including GSM

Voice codec technology has advanced by considerable degrees in recent years as a result of the
increasing processing power available. This has meant that the voice codecs used in the GSM
system have seen large improvements since the first GSM phones were introduced.

Vocoder / codec basics

Vocoders or speech codecs are used within many areas of voice communications. Obviously the
focus here is on GSM audio codecs or vocoders, but the same principles apply to any form of codec.

If speech were digitised in a linear fashion it would require a high data rate that would occupy a very
wide bandwidth. As bandwidth is normally limited in any communications system, it is necessary to
compress the data to send it through the available channel. Once through the channel it can then be
expanded to regenerate the audio in a fashion that is as close to the original as possible.

To meet the requirements of the codec system, the speech must be captured at a high enough
sample rate and resolution to allow clear reproduction of the original sound. It must then be
compressed in such a way as to maintain the fidelity of the audio over a limited bit rate, error-prone
wireless transmission channel.

Audio codecs or vocoders can use a variety of techniques, but many modern audio codecs use a
technique known as linear prediction. In many ways this can be likened to a mathematical modelling
of the human vocal tract. To achieve this, the spectral envelope of the signal is estimated using a
filter technique. Even for signals with many non-harmonically related components, it is possible for
voice codecs to give very large levels of compression.
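The linear prediction idea above can be demonstrated with a toy one-tap predictor: if each sample is estimated from the previous one, only the small residual needs to be coded. Real codecs use much higher-order predictors fitted per frame, but the energy reduction shown here is the same effect:

```python
# Toy illustration of linear prediction: estimate each sample from the
# previous one and keep only the (much smaller) residual. The test signal
# is a smooth tone standing in for voiced speech.
import math

signal = [math.sin(2 * math.pi * 0.03 * n) for n in range(200)]

# One-tap predictor coefficient via least squares: x[n] ~= a * x[n-1]
num = sum(signal[n] * signal[n - 1] for n in range(1, len(signal)))
den = sum(signal[n - 1] ** 2 for n in range(1, len(signal)))
a = num / den

residual = [signal[n] - a * signal[n - 1] for n in range(1, len(signal))]

signal_energy = sum(x * x for x in signal[1:])
residual_energy = sum(e * e for e in residual)
# The residual carries only a few percent of the original energy.
print(f"predictor a={a:.3f}, energy ratio={residual_energy / signal_energy:.4f}")
```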

A variety of different codec methodologies are used for GSM codecs:

CELP: The CELP or Code Excited Linear Prediction codec is a vocoder algorithm that was
originally proposed in 1985 and gave a significant improvement over other voice codecs of
the day. The basic principle of the CELP codec has been developed and used as the basis of
other voice codecs including ACELP, RCELP, VSELP, etc. As such the CELP codec
methodology is now the most widely used speech coding algorithm. Accordingly CELP is
now used as a generic term for a particular class of vocoders or speech codecs and not a
particular codec.


The main principle behind the CELP codec is that it uses a technique known as "Analysis by
Synthesis". In this process, the encoding is performed by perceptually optimising the
decoded signal in a closed loop system. One way in which this could be achieved is to
compare a variety of generated bit streams and choose the one that produces the best
sounding signal.

ACELP codec: The ACELP, or Algebraic Code Excited Linear Prediction, codec or vocoder
algorithm is a development of the CELP model. However, the ACELP codec codebooks have a
specific algebraic structure, as indicated by the name.

VSELP codec: The VSELP, or Vector Sum Excited Linear Prediction, codec. One of the
major drawbacks of the VSELP codec is its limited ability to code non-speech sounds. This
means that it performs poorly in the presence of noise. As a result, this voice codec is no
longer as widely used, with newer speech codecs that offer far superior performance
being preferred.

GSM audio codecs / vocoders

A variety of GSM audio codecs / vocoders are supported. These have been introduced at different
times, and have different levels of performance. Although some of the early audio codecs are not as
widely used these days, they are still described here as they form part of the GSM system.

CODEC NAME    BIT RATE (KBPS)    COMPRESSION TECHNOLOGY

Full rate     13                 RPE-LPC

EFR           12.2               ACELP

Half rate     5.6                VSELP

AMR           12.2 - 4.75        ACELP

AMR-WB        23.85 - 6.60       ACELP
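Since every GSM speech codec produces one frame every 20 ms, the bits per frame follow directly from the bit rates in the table above. A quick check (the 20 ms frame length is standard GSM; the rates are taken from the table):

```python
# GSM speech codecs emit one frame every 20 ms, so bits per frame is
# simply the bit rate (kbps) times the frame length (ms).
FRAME_MS = 20

def bits_per_frame(kbps):
    """Bits in one 20 ms speech frame for a given bit rate in kbps."""
    return round(kbps * FRAME_MS)  # kbps * ms cancels to bits

for name, kbps in [("Full rate", 13.0), ("EFR", 12.2),
                   ("Half rate", 5.6), ("AMR 4.75", 4.75)]:
    print(f"{name:>9}: {bits_per_frame(kbps)} bits per frame")
# Full rate -> 260, EFR -> 244, Half rate -> 112, AMR 4.75 -> 95
```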

GSM Full Rate / RPE-LPC codec

The RPE-LPC or Regular Pulse Excited - Linear Predictive Coder. This form of voice codec was the
first speech codec used with GSM and it was chosen after tests were undertaken.

GSM Power Control and Power Class


- tutorial, overview of the GSM power control, GSM power levels, power class and power
amplifier design.

The power levels and power control of GSM mobiles are of great importance because of the effect of
power on battery life. In addition, GSM power class designations have been allocated to group
mobiles and to indicate the power capability of various mobiles.

In addition to this, the power of a GSM mobile is closely controlled so that the battery of the mobile
is conserved, the levels of interference are reduced, and the performance of the base station is not
compromised by high-power local mobiles.

GSM power levels

The base station controls the power output of the mobile, keeping the GSM power level sufficient to
maintain a good signal to noise ratio, while not so high as to cause undue interference or
overloading, and also to preserve the battery life.

A table of GSM power levels is defined, and the base station controls the power of the mobile by
sending a GSM "power level" number. The mobile then adjusts its power accordingly. In virtually all
cases the increment between the different power level numbers is 2dB.
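For GSM 900 the 2 dB increments reduce the power level table to a single line of arithmetic: level 2 corresponds to 39 dBm, and each step up in level number subtracts 2 dB. A small sketch derived from the table in this section:

```python
# GSM 900 power control: level 2 = 39 dBm, each further level -2 dB,
# down to level 19 = 5 dBm. The formula reproduces the table below;
# it is a reading aid derived from this document, not quoted from the standard.
def gsm900_level_to_dbm(level):
    """Map a GSM 900 power control level number to output power in dBm."""
    if not 2 <= level <= 19:
        raise ValueError("GSM 900 power control levels run from 2 to 19")
    return 39 - 2 * (level - 2)

print(gsm900_level_to_dbm(5))   # -> 33 (dBm, i.e. 2 W)
print(gsm900_level_to_dbm(19))  # -> 5 (dBm, the lowest level)
```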

The accuracies required for GSM power control are relatively stringent. At the maximum power
levels they are typically required to be controlled to within +/- 2 dB, whereas this relaxes to +/- 5 dB
at the lower levels.

The power level numbers vary according to the GSM band in use. Figures for the three main bands
in use are given below:
POWER LEVEL NUMBER    POWER OUTPUT LEVEL (DBM)

2     39
3     37
4     35
5     33
6     31
7     29
8     27
9     25
10    23
11    21
12    19
13    17
14    15
15    13
16    11
17    9
18    7
19    5

GSM power level table for GSM 900


POWER LEVEL NUMBER    POWER OUTPUT LEVEL (DBM)

29    36
30    34
31    32
0     30
1     28
2     26
3     24
4     22
5     20
6     18
7     16
8     14
9     12
10    10
11    8
12    6
13    4

GSM power level table for GSM 1800
POWER LEVEL NUMBER    POWER OUTPUT LEVEL (DBM)

30    33
31    32
0     30
1     28
2     26
3     24
4     22
5     20
6     18
7     16
8     14
9     12
10    10
11    8
12    6
13    4
14    2

GSM power level table for GSM 1900

GSM Power class

Not all mobiles have the same maximum power output level. In order that the base station knows the
maximum power level number that it can send to a mobile, it is necessary for the base station to
know the maximum power the mobile can transmit. This is achieved by allocating a GSM power class
number to a mobile. This GSM power class number indicates to the base station the maximum power
the mobile can transmit, and hence the maximum power level number the base station can instruct it
to use.

Again the GSM power classes vary according to the band in use.

GSM POWER      GSM 900                  GSM 1800                 GSM 1900
CLASS NUMBER   Power level / max power  Power level / max power  Power level / max power

1              -                        PL0 / 30 dBm (1 W)       PL0 / 30 dBm (1 W)

2              PL2 / 39 dBm (8 W)       PL3 / 24 dBm (250 mW)    PL3 / 24 dBm (250 mW)

3              PL3 / 37 dBm (5 W)       PL29 / 36 dBm (4 W)      PL30 / 33 dBm (2 W)

4              PL4 / 33 dBm (2 W)       -                        -

5              PL5 / 29 dBm (800 mW)    -                        -

GSM power amplifier design considerations

One of the main considerations for the RF power amplifier design in any mobile phone is its
efficiency. The RF power amplifier is one of the major current consumption areas. Accordingly, to
ensure long battery life it should be as efficient as possible.

It is also worth remembering that as mobiles may only transmit for one eighth of the time, i.e. for
their allocated slot which is one of eight, the average power is an eighth of the maximum.
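Checking that figure: with one active slot in eight, the average power sits 10·log10(8), about 9 dB, below the peak. The peak value used below is just an example:

```python
# Average transmit power for a 1-in-8 TDMA duty cycle: peak/8 in linear
# terms, i.e. about 9 dB below the peak in dB terms.
import math

peak_dbm = 33                           # e.g. a 2 W GSM 900 class 4 mobile
avg_dbm = peak_dbm - 10 * math.log10(8)
print(f"average ~= {avg_dbm:.1f} dBm")  # roughly 24 dBm, i.e. ~0.25 W
```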

GSM Network Interfaces


- a summary or tutorial of the different interfaces used to provide communication between
various elements in a GSM cell phone network

The network structure is defined within the GSM standards. Additionally, each interface between the
different elements of the GSM network is also defined. This ensures that the necessary information
interchanges can take place, and it enables, to a large degree, network elements from different
manufacturers to be used together. However, as many of these interfaces were not fully defined until
after many networks had been deployed, the level of standardisation may not be quite as high as
many people might like.

1. Um interface The "air" or radio interface standard that is used for exchanges between a
mobile (ME) and a base station (BTS / BSC). For signalling, a modified version of the ISDN
LAPD, known as LAPDm is used.

2. Abis interface This is a BSS internal interface linking the BSC and a BTS, and it has not
been totally standardised. The Abis interface allows control of the radio equipment and radio
frequency allocation in the BTS.

3. A interface The A interface is used to provide communication between the BSS and the
MSC. The interface carries information to enable the channels, timeslots and the like to be
allocated to the mobile equipment being serviced by the BSSs. The messaging required
within the network to enable handovers etc. to be undertaken is also carried over this interface.

4. B interface The B interface exists between the MSC and the VLR. It uses a protocol
known as the MAP/B protocol. As most VLRs are collocated with an MSC, this makes the
interface purely an "internal" interface. The interface is used whenever the MSC needs
access to data regarding a MS located in its area.

5. C interface The C interface is located between the HLR and a GMSC or a SMS-G. When a
call originates from outside the network, i.e. from the PSTN or another mobile network, it has
to pass through the gateway so that the routing information required to complete the call may
be obtained. The protocol used for communication is MAP/C, the letter "C" indicating that the
protocol is used for the "C" interface. In addition to this, the MSC may optionally forward
billing information to the HLR after the call is completed and cleared down.

6. D interface The D interface is situated between the VLR and HLR. It uses the MAP/D
protocol to exchange the data related to the location of the ME and to the management of
the subscriber.

7. E interface The E interface provides communication between two MSCs. The E interface
exchanges data related to handover between the anchor and relay MSCs using the MAP/E
protocol.
8. F interface The F interface is used between an MSC and EIR. It uses the MAP/F protocol.
The communications along this interface are used to confirm the status of the IMEI of the ME
gaining access to the network.

9. G interface The G interface interconnects two VLRs of different MSCs and uses the
MAP/G protocol to transfer subscriber information, during e.g. a location update procedure.

10. H interface The H interface exists between the MSC and the SMS-G. It transfers short
messages and uses the MAP/H protocol.

11. I interface The I interface can be found between the MSC and the ME. Messages
exchanged over the I interface are relayed transparently through the BSS.
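The interfaces above condense into a small lookup table. The endpoints and MAP protocol variants are taken from the list itself; the dictionary layout is just one convenient way to organise them:

```python
# The GSM network interfaces described above, as (endpoint A, endpoint B,
# signalling protocol) tuples. None marks interfaces whose protocol is
# not named in the list above.
INTERFACES = {
    "Um":   ("MS",  "BTS/BSC",    "LAPDm"),
    "Abis": ("BSC", "BTS",        None),
    "A":    ("BSS", "MSC",        None),
    "B":    ("MSC", "VLR",        "MAP/B"),
    "C":    ("HLR", "GMSC/SMS-G", "MAP/C"),
    "D":    ("VLR", "HLR",        "MAP/D"),
    "E":    ("MSC", "MSC",        "MAP/E"),
    "F":    ("MSC", "EIR",        "MAP/F"),
    "G":    ("VLR", "VLR",        "MAP/G"),
    "H":    ("MSC", "SMS-G",      "MAP/H"),
    "I":    ("MSC", "ME",         None),
}

map_interfaces = [name for name, (_, _, proto) in INTERFACES.items()
                  if proto and proto.startswith("MAP")]
print(map_interfaces)  # -> ['B', 'C', 'D', 'E', 'F', 'G', 'H']
```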

Although the interfaces for the GSM cellular system may not be as rigorously defined as many might
like, they do at least provide a large element of the definition required, enabling the functionality of
GSM network entities to be defined sufficiently.

Cellular Concepts and Basics


- a summary or tutorial about the essentials or basic concepts of a mobile phone and cellular
telecommunications systems and cellular technology.

Cellular systems are widely used today and cellular technology needs to offer very efficient use of
the available frequency spectrum. With billions of mobile phones in use around the globe today, it is
necessary to re-use the available frequencies many times over without mutual interference of one
cell phone to another. It is this concept of frequency re-use that is at the very heart of cellular
technology. However the infrastructure technology needed to support it is not simple, and it required
a significant investment to bring the first cellular networks on line.

Early radio telephone schemes used a single central transmitter to cover a wide area.
These radio telephone systems suffered from the limited number of channels that were available.
Often the waiting lists for connection were many times greater than the number of people that were
actually connected. In view of these limitations this form of radio communications technology did not
take off in a big way. Equipment was large and these radio communications systems were not
convenient to use or carry around.
The need for a spectrum efficient system

To illustrate the need for efficient spectrum usage for a radio communications system, take the
example where each user is allocated a channel. While more effective systems are now in use, the
example will take the case of an analogue system. Each channel needs to have a bandwidth of
around 25 kHz to enable sufficient audio quality to be carried as well as enabling there to be a guard
band between adjacent signals to ensure there are no undue levels of interference. Using this
concept it is only possible to accommodate 40 users in a frequency band 1 MHz wide. Even if 100
MHz were allocated to the system, this would only enable 4000 users to have access to the system.
Today cellular systems have millions of subscribers and therefore a far more efficient method of
using the available spectrum is needed.
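The arithmetic in the example above is worth making explicit: with 25 kHz per analogue channel, capacity scales linearly with allocated spectrum:

```python
# The spectrum arithmetic from the example above: each analogue channel
# occupies 25 kHz (audio plus guard band), so capacity without re-use is
# simply the allocated bandwidth divided by the channel width.
def max_users(total_hz, channel_hz=25_000):
    """Number of 25 kHz channels that fit in a given bandwidth."""
    return total_hz // channel_hz

print(max_users(1_000_000))    # 1 MHz   -> 40 users
print(max_users(100_000_000))  # 100 MHz -> 4000 users
```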

Cell system for frequency re-use

The method that is employed is to enable the frequencies to be re-used. Any radio transmitter will
only have a certain coverage area. Beyond this the signal level will fall to a limit below which it
cannot be used and will not cause significant interference to users associated with a different radio
transmitter. This means that it is possible to re-use a channel once outside the range of the radio
transmitter. The same is also true in the reverse direction for the receiver, where it will only be able
to receive signals over a given range. In this way it is possible to split up an area into several
smaller regions, each covered by a different transmitter / receiver station.

These regions are conveniently known as cells, and give rise to the name of a "cellular" technology
used today. Diagrammatically these cells are often shown as hexagonal shapes that conveniently fit
together. In reality this is not the case. They have irregular boundaries because of the terrain over
which the signals travel. Hills, buildings and other objects all cause the signal to be attenuated and
to diminish differently in each direction.

It is also very difficult to define the exact edge of a cell. The signal strength gradually reduces and
towards the edge of the cell performance will fall. As the mobiles themselves will have different levels
of sensitivity, this adds a further greying of the edge of the cell. Therefore it is never possible to have
a sharp cut-off between cells. In some areas they may overlap, whereas in others there will be a
"hole" in coverage.

Cell clusters

When devising the infrastructure technology of a cellular system, the interference between adjacent
channels is reduced by allocating different frequency bands or channels to adjacent cells so that
their coverage can overlap slightly without causing interference. In this way cells can be grouped
together in what is termed a cluster.

Often these clusters contain seven cells, but other configurations are also possible. Seven is a
convenient number, but there are a number of conflicting requirements that need to be balanced
when choosing the number of cells in a cluster for a cellular system:

Limiting interference levels

Number of channels that can be allocated to each cell site

It is necessary to limit the interference between cells having the same frequency. The topology of the
cell configuration has a large impact on this. The larger the number of cells in the cluster, the greater
the distance between cells sharing the same frequencies.

In the ideal world it might be good to choose a large number of cells to be in each cluster.
Unfortunately there are only a limited number of channels available. This means that the larger the
number of cells in a cluster, the smaller the number available to each cell, and this reduces the
capacity.
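This trade-off can be made concrete. For hexagonal layouts the co-channel reuse distance grows as D = R·√(3N), a standard cellular-planning result not stated in the text, while the channels available per cell shrink as the cluster size N grows. The total channel count below is invented for illustration:

```python
# Cluster-size trade-off: a larger cluster N pushes co-channel cells
# further apart (reuse distance D = R * sqrt(3N) for hexagonal cells)
# but leaves fewer channels per cell.
import math

TOTAL_CHANNELS = 49      # illustrative figure, not from any real plan
CELL_RADIUS_KM = 2.0     # illustrative cell radius R

for n in (3, 4, 7, 12):  # valid hexagonal cluster sizes
    per_cell = TOTAL_CHANNELS // n
    reuse_km = CELL_RADIUS_KM * math.sqrt(3 * n)
    print(f"N={n:2d}: {per_cell:2d} channels/cell, reuse distance {reuse_km:.1f} km")
```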

This means that a balance needs to be struck between the number of cells in a cluster,
the interference levels, and the capacity that is required.

Cell size

Even though the number of cells in a cluster in a cellular system can help govern the number of
users that can be accommodated, by making all the cells smaller it is possible to increase the overall
capacity of the cellular system. However a greater number of transmitter receiver or base stations
are required if cells are made smaller and this increases the cost to the operator. Accordingly in
areas where there are more users, small low power base stations are installed.

The different types of cells are given different names according to their size and function:

Macro cells: Macro cells are large cells that are usually used for remote or sparsely
populated areas. These may be 10 km or possibly more in diameter.

Micro cells: Micro cells are those that are normally found in densely populated areas which
may have a diameter of around 1 km.

Pico cells: Picocells are generally used for covering very small areas such as particular
areas of buildings, or possibly tunnels where coverage from a larger cell in the cellular
system is not possible. Obviously for the small cells, the power levels used by the base
stations are much lower and the antennas are not positioned to cover wide areas. In this way
the coverage is minimised and the interference to adjacent cells is reduced.

Selective cells: Sometimes cells termed selective cells may be used where full 360 degree
coverage is not required. They may be used to fill in a hole in the coverage in the cellular
system, or to address a problem such as the entrance to a tunnel etc.

Umbrella cells: Another type of cells known as an umbrella cell is sometimes used in
instances such as those where a heavily used road crosses an area where there are
microcells. Under normal circumstances this would result in a large number of handovers as
people driving along the road would quickly cross the microcells. An umbrella cell would take
in the coverage of the microcells (but use different channels to those allocated to the
microcells). However it would enable those people moving along the road to be handled by
the umbrella cell and experience fewer handovers than if they had to pass from one
microcell to the next.

Infrastructure technology

Although the illustrations used here to describe the basic infrastructure technology used for cellular
systems refer to the original first generation systems, they serve to provide an overview of the basic
cellular concepts that form the cornerstones of today's cellular technology. New techniques are being
used, but the basic concepts employed are still in use.

GSM EDGE Tutorial


- GSM EDGE, Enhanced Data rates for GSM Evolution, was the evolution of GSM, & GPRS
which used 8PSK modulation to achieve data transfer rates up to 384 kbps.

EDGE is an evolution to the GSM mobile cellular phone system. The name EDGE stands for
Enhanced Data rates for GSM Evolution and it enables data to be sent over a GSM TDMA system at
speeds up to 384 kbps. In some instances GSM EDGE evolution systems may also be known as
EGPRS, or Enhanced General Packet Radio Service systems. Although strictly speaking a "2.5G"
system, the GSM EDGE cellular technology is capable of providing data rates that are a distinct
increase on those that could be supported by GPRS.

EDGE evolution is intended to build on the enhancements provided by the addition of GPRS
(General Packet Radio Service) where packet switching is applied to a network. It then enables a
three-fold increase in the speed at which data can be transferred by adopting a new form of
modulation. GSM uses a form of modulation known as Gaussian Minimum Shift Keying (GMSK), but
EDGE evolution changes the modulation to 8PSK and thereby enabling a significant increase in data
rate to be achieved.
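The threefold increase follows directly from the bits-per-symbol figures; the arithmetic can be sketched as follows (the GSM symbol rate of about 270.833 k symbols/s is a standard air-interface figure):

```python
# Rough arithmetic sketch of why 8PSK triples the raw bit rate
# relative to GMSK, at the same GSM symbol rate.
GSM_SYMBOL_RATE = 270.833e3   # symbols per second per carrier

gmsk_bits_per_symbol = 1      # GMSK conveys 1 bit per symbol
psk8_bits_per_symbol = 3      # 8PSK conveys 3 bits per symbol (2^3 = 8 states)

gmsk_raw_rate = GSM_SYMBOL_RATE * gmsk_bits_per_symbol
psk8_raw_rate = GSM_SYMBOL_RATE * psk8_bits_per_symbol

print(f"GMSK raw rate: {gmsk_raw_rate / 1e3:.1f} kbps")
print(f"8PSK raw rate: {psk8_raw_rate / 1e3:.1f} kbps")
print(f"Increase factor: {psk8_raw_rate / gmsk_raw_rate:.0f}x")
```

The raw rates here are modulation bit rates before channel coding; the usable data rate per time slot is lower, as the specification table later in this tutorial shows.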

What is EDGE? - the basics

GSM EDGE cellular technology is an upgrade to existing GSM / GPRS networks, and can often
be implemented as a software upgrade. This makes it a particularly attractive option, providing
virtually 3G data rates for a small upgrade to an existing GPRS network.

GSM EDGE evolution can provide data rates of up to 384 kbps, and this means that it offers a
significantly higher data rate than GPRS.

There are a number of key elements in the upgrade from GSM or GPRS to EDGE. The GSM EDGE
technology requires a number of new elements to be added to the system:

Use of 8PSK modulation: In order to achieve the higher data rates within GSM EDGE, the
modulation format can be changed from GMSK to 8PSK. This provides a significant
advantage in being able to convey 3 bits per symbol, thereby increasing the maximum data
rate. This upgrade requires a change to the base station. Sometimes hardware upgrades
may be required, although it is often simply a software change.

Base station: Apart from the upgrade to incorporate the 8PSK modulation capability, other
small changes are required to the base station. These are normally relatively small and can
often be accomplished by software upgrades.

Upgrade to network architecture: GSM EDGE provides the capability for IP based data
transfer. As a result, additional network elements are required. These are the same as those
needed for GPRS and later for UMTS. In this way the introduction of EDGE technology is
part of the overall migration path from GSM to UMTS.

The two main additional nodes required for the network are the Gateway GPRS Support
Node (GGSN) and the Serving GPRS Support Node (SGSN). The GGSN connects to
packet-switched networks such as the Internet and other GPRS networks. The SGSN
provides the packet-switched link to mobile stations.

Mobile stations: It is necessary to have a GSM EDGE handset that is EDGE compatible.
As it is not possible to upgrade handsets, this means that the user needs to buy a new GSM
EDGE handset.

Despite the number of changes that need to be made, the cost of the upgrade to move to GSM
EDGE cellular technology is normally relatively small. The elements in the core network are required
for GPRS which may already be available on the network, and hence these elements will already be
present. The new network entities are also needed for UMTS and therefore they are on the overall
upgrade and migration path. Other changes to the base stations are comparatively small and can
often be achieved very easily.

GSM EDGE evolution specification overview

It is worth summarizing the key parameters of GSM EDGE cellular technology.

PARAMETER DETAILS

Multiple Access Technology FDMA / TDMA

Duplex Technique FDD

Channel Spacing 200 kHz

Modulation GMSK, 8PSK

Slots per channel 8

Frame duration 4.615 ms



Latency Below 100 ms

Overall symbol rate 270 k symbols / s

Overall modulation bit rate 810 kbps

Radio data rate per time slot 69.2 kbps

Max user data rate per time slot 59.2 kbps (MCS-9)

Max user data rate when using 8 time slots 473.6 kbps **

GSM EDGE specification highlights

Note:
** A maximum user data rate of 384 kbps is often quoted as the data rate for GSM EDGE.
This data rate corresponds to the International Telecommunications Union (ITU) definition of the data
rate limit required for a service to fulfil the International Mobile Telecommunications-2000 (IMT-2000)
standard (i.e. 3G) in a pedestrian environment.
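The maximum user data rate in the table is simply the per-slot rate aggregated across all eight time slots:

```python
# Aggregating the per-slot user data rate across time slots,
# using the MCS-9 figure from the specification table above.
MCS9_USER_RATE_PER_SLOT_KBPS = 59.2
SLOTS_PER_CHANNEL = 8

max_user_rate_kbps = MCS9_USER_RATE_PER_SLOT_KBPS * SLOTS_PER_CHANNEL
print(f"Max user data rate: {max_user_rate_kbps:.1f} kbps")  # 473.6 kbps
```

In practice a handset's multislot class limits how many of the eight slots it can actually use, so real devices achieve a fraction of this ceiling.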

GPRS General Packet Radio Service Tutorial


- GPRS technology, General Packet Radio Service, provides the basic GSM upgrade
technology used to provide packet data at up to 172 kbps.

GSM was the most successful second generation cellular technology, but the need for higher data
rates spawned new developments to enable data to be transferred at much higher rates.

The first system to make an impact on the market was GPRS. The letters GPRS stand for General
Packet Radio Service. GPRS technology enabled much higher data rates to be conveyed over a
cellular network when compared to GSM, which was voice centric.

GPRS technology became the first stepping-stone on the path between the second-generation GSM
cellular technology and the 3G W-CDMA / UMTS system. With GPRS technology offering data
services with data rates up to a maximum of 172 kbps, facilities such as web browsing and other
services requiring data transfer became possible. Although some data could be transferred using
GSM, the rate was too slow for real data applications.

What is GPRS? - benefits

GPRS technology brings a number of benefits for users and network operators alike over the basic
GSM system. It was widely deployed to provide a realistic data capability via cellular
telecommunications technology.

GPRS technology offered some significant benefits when it was launched:

Speed: One of the headline benefits of GPRS technology is that it offers a much higher
data rate than was possible with GSM. Rates up to 172 kbps are possible, although the
maximum data rates realistically achievable under most conditions will be in the range 15 -
40 kbps.

Packet switched operation: Unlike GSM, which used circuit switched techniques,
GPRS technology uses packet switching in line with the Internet. This makes far more
efficient use of the available capacity, and it allows greater commonality with Internet
techniques.
Always on connectivity: A further advantage of GPRS is that it offers an "Always On"
capability. When using circuit switched techniques, charges are based on the time a circuit is
used, i.e. how long the call is. For packet switched technology charges are for the amount of
data carried as this is what uses the services provider's capacity. Accordingly, always on
connectivity is possible.

More applications: The packet switched technology including the always on connectivity
combined with the higher data rates opens up many more possibilities for new applications.
One of the chief growth areas that arose from GPRS was the Blackberry form of mobile or
PDA. This provided for remote email applications along with web browsing, etc.

CAPEX and OPEX: The Capital expenditure (CAPEX) and operational expenditure (OPEX)
are two major concerns for operators. As GPRS was an upgrade to existing GSM networks
(often implemented as a software upgrade achieved remotely), the capital expenditure for
introducing GPRS technology was not as high as deploying a complete new network.
Additionally OPEX was not greatly affected as the basic base-station infrastructure remained
basically the same. It was mainly new core network elements that were required.

The GSM and GPRS elements of the system operate separately. The GSM technology still carries
the voice calls, while GPRS technology is used for the data. As a result voice and data can be sent
and received simultaneously. Some people refer to the system as GSM GPRS.

Note on GSM:

GSM or Global System for Mobile Communications was initially a European mobile telecommunications system. It

was the most successful 2G system, and was adopted globally. Allowing roaming, text messages and many other

features above the basic voice, it was highly successful in meeting the needs of the users.


In order to further develop the capability of GPRS, further advances were made and another system
known as EDGE or Enhanced GPRS, EGPRS was developed.
Note on EDGE:

GSM EDGE, Enhanced Data rates for GSM Evolution, was the evolution of GSM and GPRS which used 8PSK

modulation to achieve data transfer rates up to 384 kbps.


What is GPRS? - packet switching

The key element of GPRS technology is that it uses packet switched data rather than circuit
switched data, and this technique makes much more efficient use of the available capacity. This is
because most data transfer occurs in what is often termed a "bursty" fashion. The transfer occurs in
short peaks, followed by breaks when there is little or no activity.

Using a traditional approach a circuit is switched permanently to a particular user. This is known as a
circuit switched mode. In view of the "bursty" nature of data transfer it means that there are periods
when it will not be carrying data.

To improve the situation the overall capacity can be shared between several users. To achieve this,
the data is split into packets and tags inserted into the packet to provide the destination address.
Packets from several sources can then be transmitted over the link. As it is unlikely that the data
burst for different users will occur all at the same time, by sharing the overall resource in this fashion,
the channel, or combined channels can be used far more efficiently. This approach is known as
packet switching, and it is at the core of many cellular data systems, and in this case GPRS.
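The splitting and tagging idea described above can be sketched in a few lines. This is purely illustrative: the packet structure, sizes and user names here are assumptions for the sketch, not GPRS frame formats.

```python
# Illustrative sketch of packet switching: data from several users is split
# into destination-tagged packets that share one channel, instead of each
# user holding a dedicated circuit.
from dataclasses import dataclass

@dataclass
class Packet:
    destination: str   # the "tag" identifying where the payload should go
    payload: bytes

def split_into_packets(destination: str, data: bytes, size: int = 4):
    """Split one user's burst of data into fixed-size tagged packets."""
    return [Packet(destination, data[i:i + size])
            for i in range(0, len(data), size)]

# Two users share the same channel; their packets are interleaved on it.
channel = []
channel += split_into_packets("user-A", b"HELLOWORLD")
channel += split_into_packets("user-B", b"DATA")

# The receiving end demultiplexes purely by tag.
for pkt in channel:
    print(pkt.destination, pkt.payload)
```

Because each user only occupies the channel while it actually has packets to send, the idle gaps in one user's "bursty" traffic are filled by the other users' packets.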

PACKET SWITCHING VS CIRCUIT SWITCHING

CIRCUIT SWITCHED MODE PACKET SWITCHED MODE

IMSI attach GPRS attach

Call setup TBF establishment, PDP context activation

Call state (bi-directional) Block transfer (uni-directional)

Exclusive use of channel Channel shared between users

Calls cleared on completion Always on

GPRS network

GPRS and GSM are able to operate alongside one another on the same network, and using the
same base stations. However upgrades are needed. The network upgrades reflect many of those
needed for 3G, and in this way the investment in converting a network for GPRS prepares the core
infrastructure for later evolution to a 3G W-CDMA / UMTS.

The upgraded network, as described in later pages of this tutorial, has both the elements used for
GSM as well as new entities that are used for the GPRS packet data service.

The upgrades that were required for GPRS also formed the basis of the network required for the 3G
deployments (UMTS Rel 99). In this way the investment required for GPRS would not be a one off
investment used only on GPRS, it also formed the basis of the network for further developments. In
this way GPRS became a stepping stone used for the migration from 2G to 3G.


GPRS mobiles

Not only does the network need to be upgraded for GPRS, but new GPRS mobiles were also
required. It is not possible to upgrade an existing GSM mobile for use as a GPRS mobile, although
GSM mobiles can be used for GSM speech on a network that also carries GPRS. To utilise GPRS,
a mobile needs new capabilities that enable it to transmit data in the required packet format.

With the incorporation of packet data into the network, this allowed far greater levels of functionality
to be accessed by mobiles. As a result a new breed of mobiles started to appear. These PDAs were
able to provide email and Internet browsing, and they were widely used especially by businesses as
they allowed their key people to remain in touch with the office at all times.
What is GPRS? key parameters

The key parameters for GPRS, the General Packet Radio Service, are tabulated below:

WHAT IS GPRS? - THE KEY PARAMETERS

PARAMETER SPECIFICATION

Channel Bandwidth 200 kHz

Modulation type GMSK

Data handling Packet data

Max data rate 172 kbps

GPRS technology offered a significant improvement in the data transfer capacity over existing
cellular systems. It enabled many of the first email and web browsing phones such as PDAs,
Blackberrys, etc to be launched. Accordingly GPRS technology heralded the beginning of a new era
in cellular communications where the mobile phone capabilities allowed significantly more than voice
calls and simple texts. GPRS enabled real data applications to be used and the new phones to
become mobile computers on the move allowing businessmen to be always in touch with the office
and domestic users to be able to use many more data applications.

GSM EDGE Modulation, Slot, Burst and Air Interface


- summary, overview or tutorial about the basics of the GSM EDGE air interface including the
modulation scheme, slot and burst configurations.

The air interface for GSM EDGE, including the modulation, as well as the slot and burst structures,
have been developed to be compatible with the overall GSM concept. In this way EDGE cellular
technology is able to operate alongside the existing GSM systems by adding an EDGE upgrade.

In addition to this EDGE technology re-uses many of the features of the existing systems allowing
both technologies to utilise the same base stations, etc. This provides a lower cost option to upgrade
the network rather than having to deploy a completely new system.

With EDGE operating alongside GSM and GPRS, it has been necessary for the air interface to
accommodate all signals, often catering for all three simultaneously. This approach, while posing
some technical challenges, has been very successful, as demonstrated by the number of operators
whose networks are able to accommodate all three signals.

GSM EDGE modulation characteristics

One of the ways in which EDGE is able to provide higher data rates is to use a different modulation
scheme for higher data rates. However the GMSK modulation scheme used for the basic GSM
system is still used for the lower data rates.

GMSK was chosen for the original GSM system for a variety of reasons:

It is resilient to noise when compared to many other forms of modulation.

Radiation outside the accepted bandwidth is lower than other forms of phase shift keying.

It has a constant power level which allows higher efficiency RF power amplifiers to be used
in the handset, thereby reducing current consumption and conserving battery life.

The GMSK modulation format is used for the lower data rate transfers. These advantages mean that
it is well suited for situations where lower data rates can be tolerated.

Note on GMSK:

GMSK, Gaussian Minimum Shift Keying is a form of phase modulation that is used in a number of portable radio and

wireless applications. It has advantages in terms of spectral efficiency as well as having an almost constant amplitude
which allows for the use of more efficient transmitter power amplifiers, thereby saving on current consumption, a

critical issue for battery power equipment.


In order to enable data to be transmitted a form of phase modulation known as Octonary Phase Shift
Keying, 8PSK was used. This form of modulation has a number of advantages that meant it was
chosen for carrying high speed EDGE data:

Able to operate within the existing GSM / GPRS channel structure.

Able to operate within the existing GSM / GPRS channel bandwidth.

Able to operate within the existing GSM / GPRS channel coding structure.

Provides a higher data capability than the existing GSM GMSK modulation scheme.

The 8-PSK modulation scheme fulfils these requirements. It has the equivalent bandwidth and
adjacent channel interference levels to GMSK. This makes it possible to integrate EDGE channels
into the existing GSM / GPRS network and frequency plan as well as keeping the same channel
coding structure.

Note on PSK:

Phase Shift Keying, PSK is a form of modulation used particularly for data transmissions. It offers an effective way of

transmitting data. By altering the number of different phase states which can be adopted, the data speeds that can be

achieved within a given channel can be increased, but at the cost of lower resilience to noise and interference.


The 8PSK modulation method is a linear method in which three consecutive bits are mapped onto
one symbol in the I/Q plane, as shown below.

[Figure: 8PSK modulation constellation]

Using 8-PSK, the rate at which symbols are sent remains the same. However each symbol now
represents three bits instead of one. This means that the actual data rate is increased by a factor of
three.
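The mapping of three-bit groups onto constellation points can be sketched as below. Note the bit-to-symbol assignment here is a plain binary ordering for illustration; the actual GSM EDGE specification uses a Gray-coded mapping (plus a continuous phase rotation) that this sketch does not reproduce.

```python
# Illustrative 8PSK mapper: each group of three bits selects one of eight
# equally spaced points on the unit circle in the I/Q plane.
import cmath
import math

def bits_to_8psk_symbol(b2: int, b1: int, b0: int) -> complex:
    index = (b2 << 2) | (b1 << 1) | b0   # 0..7
    phase = 2 * math.pi * index / 8      # 45-degree steps around the circle
    return cmath.exp(1j * phase)         # constant-magnitude I/Q point

# Map one 3-bit group to its I/Q point.
symbol = bits_to_8psk_symbol(1, 0, 1)    # index 5, i.e. a 225-degree phase
print(symbol.real, symbol.imag)          # the I and Q components
```

All eight points sit at the same amplitude, so only the phase carries the information; that is also why, as noted below, the points are closer together than GMSK's and more vulnerable to noise.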

The "distance" between the different positions on the constellation diagram is shorter using 8PSK
modulation than when using GMSK. This means that there is an increased risk of any of the symbols
being misinterpreted, especially in the presence of interference or noise. This occurs because it is
more difficult for the radio receiver to detect which symbol it has received. To overcome this,
additional error coding may be required to protect against the possibility of errors. However
increased levels of error protection require additional data to be sent and this reduces the data
throughput of the required data.

In view of this, it is found that when the signal is poor GMSK can be more effective than 8PSK, and
as a result, the overall EDGE modulation scheme is a mixture of GMSK and 8PSK.
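That selection between the two schemes is the essence of link adaptation, and it can be sketched as a simple threshold decision. The 15 dB switching point below is an illustrative assumption only; real networks adapt per coding scheme using measured link quality, not a single fixed threshold.

```python
# Sketch of the link-adaptation idea: under poor radio conditions GMSK's
# robustness wins; under good conditions 8PSK's capacity wins.
def choose_modulation(carrier_to_interference_db: float) -> str:
    SWITCH_THRESHOLD_DB = 15.0   # hypothetical threshold for illustration
    if carrier_to_interference_db >= SWITCH_THRESHOLD_DB:
        return "8PSK"    # 3 bits/symbol, needs a cleaner signal
    return "GMSK"        # 1 bit/symbol, more resilient to noise

print(choose_modulation(20.0))   # good link -> 8PSK
print(choose_modulation(8.0))    # poor link -> GMSK
```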

GSM EDGE time slots

EDGE, GPRS and GSM all have to operate alongside each other in a network. It is a primary
requirement that the evolutionary technologies are able to operate on the same network. This
maintains the service offered to existing customers using older phones, alongside those paying
additional rates for the premium EDGE services. This means that the network has to support both
services operating simultaneously.

Accordingly different slots within the traffic frames will need to be able to support different structures
and different types of modulation dependent upon the phones being used, the calls being made and
the prevailing conditions. It is quite possible that one slot may be supporting a GSM call, the next a
GPRS data connection, and the third an EDGE connection using GMSK or 8PSK.
GPRS Channels
- a summary or tutorial describing GPRS channels including the GPRS physical and logical
channels.

Like other cellular systems, GPRS uses a variety of physical and logical channels to carry the data
payload as well as the signalling required to control the calls.

GPRS physical channel

GPRS builds on the basic GSM structure. GPRS uses the same modulation and frame structure that
is employed by GSM, and in this way it is an evolution of the GSM standard. Slots can be assigned
dynamically by the BSC to GPRS calls dependent upon the demand, the remaining ones being used
for GSM traffic.

There is a new data channel that is used for GPRS and it is called the Packet Data Channel
(PDCH). The overall slot structure for this channel is the same as that used within GSM, having the
same power profile, and timing advance attributes to overcome the different signal travel times to the
base station dependent upon the distance the mobile is from the base station. This enables the burst
to fit in seamlessly with the existing GSM structure.

Each burst of information for GPRS is 0.577 ms in length and is the same as that used in GSM. It
also carries two blocks of 57 bits of information, giving a total of 114 bits per burst. It therefore
requires four bursts to carry each 20 ms block of data, i.e. 456 bits of encoded data.
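The burst arithmetic above is worth checking explicitly:

```python
# Checking the burst arithmetic: each GSM/GPRS normal burst carries two
# 57-bit data fields, and four bursts carry one 20 ms coded data block.
BITS_PER_DATA_FIELD = 57
DATA_FIELDS_PER_BURST = 2
BURSTS_PER_BLOCK = 4

bits_per_burst = BITS_PER_DATA_FIELD * DATA_FIELDS_PER_BURST
bits_per_block = bits_per_burst * BURSTS_PER_BLOCK

print(bits_per_burst)   # 114 bits of data per burst
print(bits_per_block)   # 456 encoded bits per 20 ms block
```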

The BSC assigns PDCHs to particular time slots, and there will be times when the PDCH is inactive,
allowing the mobile to check for other base stations and monitor their signal strengths to enable the
network to judge when handover is required. The GPRS slot may also be used by the base station to
judge the time delay using a logical channel known as the Packet Timing Advance Control Channel
(PTCCH).

GPRS channel allocation

Although GPRS uses only one physical channel (PDCH) for the sending of data, it employs several
logical channels that are mapped into this to enable the GPRS data and facilities to be managed. As
the data in GPRS is handled as packet data, rather than circuit switched data the way in which this is
organised is very different to that on a standard GSM link. Packets of data are assigned a space
within the system according to the current needs, and routed accordingly.
The MAC layer is central to this and there are three MAC modes that are used to control the
transmissions. These are named fixed allocation, dynamic allocation, and extended dynamic
allocation.

The fixed allocation mode is required when a mobile requires data to be sent at a consistent data
rate. To achieve this, a set of PDCHs is allocated for a given amount of time. When this mode is
used there is no requirement to monitor for availability, and the mobile can send and receive data
freely. This mode is used for applications such as video conferencing.

When using the dynamic allocation mode, the network allocates time slots as they are required. A
mobile is allowed to transmit in the uplink when it sees an identifier flag known as the Uplink Status
Flag (USF) that matches its own. The mobile then transmits its data in the allocated slot. This is
required because up to eight mobiles can have potential access to a slot, but obviously only one can
transmit at any given time.

A further form of allocation known as extended dynamic allocation is also available. Use of this mode
allows much higher data rates to be achieved because it enables mobiles to transmit in more than
one slot. When the USF indicates that a mobile can use this mode, it can transmit in the number
allowed, thereby increasing the rate at which it can send data.
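The USF mechanism in dynamic allocation can be sketched as a simple match against the broadcast flag. The mobile identifiers and USF values below are hypothetical; in a real PDCH the network assigns a distinct USF value per granted mobile, up to the limit of the flag's range.

```python
# Illustrative sketch of dynamic allocation with the Uplink Status Flag:
# the network signals a USF value per uplink block, and only the mobile
# whose granted USF matches may transmit in that block.
def mobiles_allowed_to_send(broadcast_usf: int, grants: dict) -> list:
    """grants maps mobile-id -> the USF value assigned to that mobile."""
    return [mobile for mobile, usf in grants.items() if usf == broadcast_usf]

grants = {"MS-1": 3, "MS-2": 5, "MS-3": 7}    # hypothetical USF grants

print(mobiles_allowed_to_send(5, grants))     # only MS-2 may transmit now
print(mobiles_allowed_to_send(3, grants))     # only MS-1 may transmit now
```

Because each broadcast USF value matches at most one grant, the contention the text describes (up to eight mobiles sharing one slot) is resolved without collisions.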

Logical channels

There is a variety of channels used within GPRS, and they can be set into groups dependent upon
whether they are for common or dedicated use. Naturally the system does use the GSM control and
broadcast channels for initial set up, but all the GPRS actions are carried out within the GPRS
logical channels carried within the PDCH.

Broadcast channels:

Packet Broadcast Central Channel (PBCCH): This is a downlink only channel that is
used to broadcast information to mobiles and informs them of incoming calls etc. It is very
similar in operation to the BCCH used for GSM. In fact the BCCH is still required initially
to provide a time slot number for the PBCCH. In operation the PBCCH broadcasts general
information such as power control parameters, access methods and operational modes,
network parameters, etc, required to set up calls.

Common control channels:


Packet Paging Channel (PPCH): This is a downlink only channel and is used to alert the
mobile to an incoming call and to alert it to be ready to receive data. It is used for control
signalling prior to the call set up. Once the call is in progress a dedicated channel referred to
as the PACCH takes over.

Packet Access Grant Channel (PAGCH): This is also a downlink channel and it sends
information telling the mobile which traffic channel has been assigned to it. It occurs after the
PPCH has informed the mobile that there is an incoming call.

Packet Notification Channel (PNCH): This is another downlink only channel that is used
to alert mobiles that there is broadcast traffic intended for a large number of mobiles. It is
typically used in what is termed point-to-multipoint multicasting.

Packet Random Access Channel (PRACH): This is an uplink channel that enables the
mobile to initiate a burst of data in the uplink. There are two types of PRACH burst, one is an
8 bit standard burst, and a second one using an 11 bit burst has added data to allow for
priority setting. Both types of burst allow for timing advance setting.

Dedicated control channels:

Packet Associated Control Channel (PACCH): This channel is present in both uplink
and downlink directions and it is used for control signalling while a call is in progress. It takes
over from the PPCH once the call is set up and it carries information such as channel
assignments, power control messages and acknowledgements of received data.

Packet Timing Advance Common Control Channel (PTCCH): This channel, which is
present in both the uplink and downlink directions is used to adjust the timing advance. This
is required to ensure that messages arrive at the correct time at the base station regardless
of the distance of the mobile from the base station. As timing is critical in a TDMA system
and signals take a small but finite time to travel this aspect is very important if long guard
bands are not to be left.

Dedicated traffic channel:

Packet Data Traffic Channel (PDTCH): This channel is used to send the traffic and it is
present in both the uplink and downlink directions. Up to eight PDTCHs can be allocated to a
mobile to provide high speed data.
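For quick reference, the logical channels described above can be collected into a small lookup table. This sketch simply restates the directions and purposes from the text (DL = downlink only, UL = uplink only, UL/DL = both):

```python
# Summary table of the GPRS logical channels described in this section,
# mapped as channel name -> (direction, purpose).
GPRS_LOGICAL_CHANNELS = {
    "PBCCH": ("DL",    "broadcasts network and system information"),
    "PPCH":  ("DL",    "pages the mobile for an incoming call / data"),
    "PAGCH": ("DL",    "assigns a traffic channel after paging"),
    "PNCH":  ("DL",    "notifies mobiles of multicast traffic"),
    "PRACH": ("UL",    "mobile-initiated random access"),
    "PACCH": ("UL/DL", "control signalling while a transfer is in progress"),
    "PTCCH": ("UL/DL", "timing advance adjustment"),
    "PDTCH": ("UL/DL", "user packet data traffic"),
}

for name, (direction, purpose) in GPRS_LOGICAL_CHANNELS.items():
    print(f"{name:6} {direction:6} {purpose}")
```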
GSM EDGE network architecture
- a summary, overview or tutorial about the basics of the enhancements required for the GSM
EDGE network architecture including the GGSN and SGSN.

In order that the GSM EDGE upgrade can be implemented, additions are required within the EDGE
network architecture to be able to cater for the packet data that is carried by the system. The
additional network entities required are the same as those used for GPRS and also for UMTS.

With the introduction of the new entities within the network, it was still necessary for the new EDGE
network elements and those from the existing GSM system to work alongside one another.
Accordingly the introduction of GPRS and EDGE technology saw the addition of some new entities
within the overall network architecture.

The two main elements that are required by the GSM EDGE network architecture are the GGSN and
SGSN. These enable the network to be able to cater for the packet data that is passed over the
network.

GSM EDGE network architecture upgrades

Although in practice a variety of elements are required within the network architecture, the main new
network architecture entities that are needed for the EDGE upgrade are:

SGSN: Serving GPRS Support Node - this forms a gateway to the services within the network.

GGSN: Gateway GPRS Support Node which forms the gateway to the outside world.
PCU: Packet Control Unit which differentiates whether data is to be routed to the packet
switched or circuit switched networks.

A simplified view of the GSM EDGE network architecture can be seen in the diagram below. From
this it can be seen that it is very similar to the more basic GSM network architecture, but with
additional elements.

[Figure: GSM EDGE network architecture]

SGSN

The SGSN or Serving GPRS Support Node element of the GPRS network performs a number of
tasks focused on the IP elements of the overall system. It provides a variety of services to the
mobiles:

Packet routing and transfer


Mobility management

Authentication

Attach/detach

Logical link management

Charging data

There is a location register within the SGSN and this stores location information (e.g., current cell,
current VLR). It also stores the user profiles (e.g., IMSI, packet addresses used) for all the GPRS
users registered with the particular SGSN.

GGSN

The GGSN, Gateway GPRS Support Node is one of the most important entities within the GSM
EDGE network architecture.

The GGSN organises the inter-working between the GPRS / EDGE network and external packet
switched networks to which the mobiles may be connected. These may include both Internet and
X.25 networks.

The GGSN can be considered to be a combination of a gateway, router and firewall as it hides the
internal network to the outside. In operation, when the GGSN receives data addressed to a specific
user, it checks whether the user is active and then forwards the data. In the opposite direction, packet data
from the mobile is routed to the right destination network by the GGSN.
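The GGSN's forwarding decision described above can be sketched as follows. The context table, user names and return strings are assumptions for illustration only; a real GGSN tunnels data to the serving SGSN through GTP tunnels rather than returning strings.

```python
# Illustrative sketch of the GGSN forwarding logic: on receiving data for
# a user, check whether that user has an active context before passing
# the data on to the SGSN currently serving the user.
active_contexts = {"user-42": "sgsn-1"}   # hypothetical: user -> serving SGSN

def ggsn_forward(user: str, data: bytes) -> str:
    if user not in active_contexts:
        # Unknown or inactive user: the GGSN shields the internal network.
        return "discard (no active context)"
    return f"tunnel to {active_contexts[user]}: {len(data)} bytes"

print(ggsn_forward("user-42", b"payload"))   # forwarded to the serving SGSN
print(ggsn_forward("user-99", b"payload"))   # dropped at the gateway
```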

PCU

The PCU or Packet Control Unit is a hardware router that is added to the BSC. It differentiates data
destined for the standard GSM network (circuit switched data) and data destined for the EDGE
network (Packet Switched Data). The PCU itself may be a separate physical entity, or more often
these days it is incorporated into the base station controller, BSC, thereby saving additional
hardware costs.
GSM EDGE network upgrading

One of the key elements for any network operator is the cost of capital expenditure (capex) to buy
and establish a network. Capex costs are normally very high for a new network, and operators
endeavour to avoid this and use any existing networks they may have to make the optimum use of
any capital. In addition to the capex, there are the operational costs, (opex). These costs are for
general maintenance and other operational costs that may be incurred. Increasing efficiency and
reliability will reduce the opex costs.

Any upgrade such as that from GSM to EDGE will require new investment and operators are keen to
keep this to the minimum. The upgrades for the EDGE network are not as large as starting from
scratch and rolling out a new network.

The EDGE network adds to the existing GSM network. The main new entities required within the
network are the SGSN and GGSN, and these are required as the starting point.

The base station subsystems require some updates. The main one is the addition of the PCU
described above. Some modifications may be required to the BTS, but often only a software upgrade
is required, and this may often be achieved remotely. In this way costs are kept to a minimum.

By Ian Poole

GPRS Operation & States


- an introduction, overview or tutorial of the basics of the operation of the GPRS cellular system.

When looking at the way in which GPRS operates, it can be seen that there are three basic
modes: initialisation / idle, standby, and ready.

Initialisation / idle

When the mobile is turned on it must register with the network and update the location register. This
is very similar to that performed with a GSM mobile, but it is referred to as a location update. It first
locates a suitable cell and transmits a radio burst on the RACH using a shortened burst because it
does not know what timing advance is required. The data contained within this burst temporarily
identifies the mobile, and indicates that the reason for the update is to perform a location update.

When the mobile performs its location update the network also performs an authentication to ensure
that it is allowed to access the network. As for GSM it accesses the HLR and VLR as necessary for
the location update and the AuC for authentication. It is at registration that the network detects that
the mobile has a GPRS capability. The SGSN also maintains a record of the location of the mobile
so that data can be sent there if required.

Standby

The mobile then enters a standby mode, periodically updating its position as required. It monitors the
MNC of the base station to ensure that it has not changed base stations and also looks for stronger
base station control channels.

The mobile will also monitor the PPCH in case of an incoming alert indicating that data is ready to be
sent. As for GSM, most base stations set up a schedule for paging alerts based on the last figures of
the mobile number. In this way it does not have to monitor all the available alert slots and can
instead only monitor a reduced number where it knows alerts can be sent for it. In this way the
receiver can be turned off for longer and battery life can be extended.
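The paging-group idea can be sketched as a simple hash of the mobile's number onto a reduced set of paging slots. The hash and group count below are illustrative assumptions; GSM actually derives the paging group from the IMSI together with cell-broadcast parameters in a specified way.

```python
# Sketch of paging groups: the mobile listens only to the paging slots
# derived from its own number, so its receiver can sleep the rest of
# the time and extend battery life.
NUM_PAGING_GROUPS = 9   # hypothetical number of paging slots per cycle

def paging_group(mobile_number: str) -> int:
    # Illustrative hash on the last figures of the number.
    return int(mobile_number[-3:]) % NUM_PAGING_GROUPS

print(paging_group("447700900123"))   # the only slot this mobile monitors
```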

Ready

In the ready mode the mobile is attached to the system and a virtual connection is made with the
SGSN and GGSN. By making this connection the network knows where to route the packets when
they are sent and received. In addition to this the mobile is likely to use the PTCCH to ensure that its
timing is correctly set so that it is ready for a data transfer should one be needed.

With the mobile attached to the network, it is prepared for a call or data transfer. To transmit data the
mobile attempts a Packet Channel Request using the PRACH uplink channel. As this may be busy
the mobile monitors the PCCCH which contains a status bit indicating the status of the base station
receiver, whether it is busy or idle and capable of receiving data. When the mobile sees that this
status bit indicates the receiver is idle, it sends its packet channel request message. If accepted, the base
station will respond by sending an assignment message on the PAGCH on the downlink. This will
indicate which channel the mobile is to use for its packet data transfer as well as other details
required for the data transfer.

This only sets up the packet data transfers for the uplink. If data needs to be transferred in the
downlink direction then a separate assignment is performed for the downlink channel.

When data is transferred this is controlled by the action of the MAC layer. In most instances it will
operate in an acknowledge mode whereby the base station acknowledges each block of data. The
acknowledgement may be contained within the data packets being sent in the downlink, or the base
station may send data packets down purely to acknowledge the data.

When disconnecting the mobile will send a packet temporary block flow message, and this is
acknowledged. Once this has taken place the USF assigned to the mobile becomes redundant and
can be assigned to another mobile wanting access. With this the mobile effectively becomes
disconnected and although still attached to the network no more data transfer takes place unless it is
re-initiated. Separate messages are needed to detach the mobile from the network.
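The idle/standby/ready behaviour described above can be sketched as a small state machine. This is an illustrative model only; the actual GPRS mobility-management procedures (3GPP TS 23.060) involve many more states and transitions:

```python
from enum import Enum

class MMState(Enum):
    IDLE = "idle"        # not attached; network holds no routing information
    STANDBY = "standby"  # attached; location known to routing-area level
    READY = "ready"      # attached; location known to cell level, data can flow

class GprsMobile:
    """Toy model of the GPRS mobility states described in the text."""
    def __init__(self):
        self.state = MMState.IDLE

    def attach(self):
        # GPRS attach (location update plus authentication) moves the MS to READY
        self.state = MMState.READY

    def ready_timer_expires(self):
        # with no data transfer for a while, the MS drops back to STANDBY
        if self.state is MMState.READY:
            self.state = MMState.STANDBY

    def transfer_data(self):
        # any transfer (after paging or a packet channel request) restores READY
        if self.state is MMState.STANDBY:
            self.state = MMState.READY

    def detach(self):
        # separate detach messages return the MS to IDLE
        self.state = MMState.IDLE
```

Walking a mobile through attach, inactivity, renewed transfer and detach reproduces the sequence of states described above.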

By Ian Poole

General Packet Radio Service


From Wikipedia, the free encyclopedia

General packet radio service (GPRS) is a packet oriented mobile data service on
the 2G and 3G cellular communication system's global system for mobile
communications (GSM). GPRS was originally standardized by European Telecommunications
Standards Institute (ETSI) in response to the earlier CDPD and i-mode packet-switched cellular
technologies. It is now maintained by the 3rd Generation Partnership Project (3GPP).[1][2]

GPRS usage is typically charged based on volume of data transferred, contrasting with circuit
switched data, which is usually billed per minute of connection time. Usage above the bundle
cap is charged per megabyte, speed limited, or disallowed.

GPRS is a best-effort service, implying variable throughput and latency that depend on the
number of other users sharing the service concurrently, as opposed to circuit switching, where a
certain quality of service (QoS) is guaranteed during the connection. In 2G systems, GPRS
provides data rates of 56–114 kbit/s.[3] 2G cellular technology combined with GPRS is
sometimes described as 2.5G, that is, a technology between the second (2G) and third (3G)
generations of mobile telephony.[4] It provides moderate-speed data transfer, by using
unused time division multiple access (TDMA) channels in, for example, the GSM system. GPRS
is integrated into GSM Release 97 and newer releases.


Technical overview

The GPRS core network allows 2G, 3G and WCDMA mobile networks to transmit IP packets to
external networks such as the Internet. The GPRS system is an integrated part of
the GSM network switching subsystem.
Services offered

GPRS extends the GSM circuit-switched data capabilities and makes the following
services possible:

SMS messaging and broadcasting

"Always on" internet access

Multimedia messaging service (MMS)

Push to talk over cellular (PoC)

Instant messaging and presence (Wireless Village)

Internet applications for smart devices through wireless application protocol (WAP)

Point-to-point (P2P) service: inter-networking with the Internet (IP)

Point-to-Multipoint (P2M) service: point-to-multipoint multicast and point-to-multipoint group calls

If SMS over GPRS is used, an SMS transmission speed of about 30 SMS messages per minute
may be achieved. This is much faster than using the ordinary SMS over GSM, whose SMS
transmission speed is about 6 to 10 SMS messages per minute.

Protocols supported

GPRS supports the following protocols:

Internet protocol (IP). In practice, built-in mobile browsers use IPv4 since IPv6 was not
yet popular.

Point-to-point protocol (PPP). In this mode PPP is often not supported by the mobile
phone operator but if the mobile is used as a modem to the connected computer, PPP is
used to tunnel IP to the phone. This allows an IP address to be assigned dynamically
(IPCP not DHCP) to the mobile equipment.

X.25 connections. This is typically used for applications like wireless payment terminals,
although it has been removed from the standard. X.25 can still be supported over PPP,
or even over IP, but doing this requires either a network-based router to perform
encapsulation or intelligence built into the end-device/terminal; e.g., user equipment
(UE).

When TCP/IP is used, each phone can have one or more IP addresses allocated. GPRS will
store and forward the IP packets to the phone even during handover. The TCP handles any
packet loss (e.g. due to a radio noise induced pause).
Hardware

Devices supporting GPRS are divided into three classes:

Class A

Can be connected to GPRS service and GSM service (voice, SMS), using both at the
same time. Such devices are known to be available today.

Class B

Can be connected to GPRS service and GSM service (voice, SMS), but using only one
or the other at a given time. During GSM service (voice call or SMS), GPRS service is
suspended, and then resumed automatically after the GSM service (voice call or SMS)
has concluded. Most GPRS mobile devices are Class B.

Class C

Are connected to either GPRS service or GSM service (voice, SMS). Must be switched
manually between one or the other service.

A true Class A device may be required to transmit on two different frequencies at the same time,
and thus will need two radios. To get around this expensive requirement, a GPRS mobile may
implement the dual transfer mode (DTM) feature. A DTM-capable mobile may use simultaneous
voice and packet data, with the network coordinating to ensure that it is not required to transmit
on two different frequencies at the same time. Such mobiles are considered pseudo-Class A,
sometimes referred to as "simple class A". Some networks have supported DTM since 2007.

Huawei E220 3G/GPRS modem

USB 3G/GPRS modems use a terminal-like interface over USB 1.1, 2.0 and later, support data
formats such as V.42bis and RFC 1144, and some models have a connector for an external antenna.
Modems can be added as cards (for laptops) or external USB devices which are similar in
shape and size to a computer mouse, or nowadays more like a pendrive.

Addressing

A GPRS connection is established by reference to its access point name (APN). The APN
defines the services such as wireless application protocol (WAP) access, short message
service (SMS), multimedia messaging service (MMS), and Internet communication services
such as email and World Wide Web access.
In order to set up a GPRS connection for a wireless modem, a user must specify an APN,
optionally a user name and password, and very rarely an IP address, all provided by the
network operator.
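As a sketch of how this configuration looks in practice, the following builds the standard 3GPP TS 27.007 AT command sequence a host sends to a wireless modem to define and dial a GPRS PDP context. The APN "internet.example" is a placeholder; the real value comes from the network operator:

```python
def gprs_setup_commands(apn, cid=1, pdp_type="IP"):
    """Build the AT command sequence (3GPP TS 27.007) that configures a
    wireless modem for a GPRS connection with the given APN."""
    return [
        "AT+CGATT=1",                             # attach to the GPRS service
        f'AT+CGDCONT={cid},"{pdp_type}","{apn}"', # define the PDP context (APN)
        f"ATD*99***{cid}#",                       # dial the context (starts PPP)
    ]

# "internet.example" is a placeholder APN provided here for illustration
print(gprs_setup_commands("internet.example"))
```

A username and password, when required, are supplied at the PPP layer rather than in these commands.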

Coding schemes and speeds

The upload and download speeds that can be achieved in GPRS depend on a number of
factors such as:

the number of BTS TDMA time slots assigned by the operator

the channel encoding used

the maximum capability of the mobile device expressed as a GPRS multislot class

Multiple access schemes

The multiple access methods used in GSM with GPRS are based on frequency division
duplex (FDD) and TDMA. During a session, a user is assigned to one pair of up-link and down-
link frequency channels. This is combined with time domain statistical multiplexing which makes
it possible for several users to share the same frequency channel. The packets have constant
length, corresponding to a GSM time slot. The down-link uses first-come first-served packet
scheduling, while the up-link uses a scheme very similar to reservation ALOHA (R-ALOHA).
This means that slotted ALOHA (S-ALOHA) is used for reservation inquiries during a contention
phase, and then the actual data is transferred using dynamic TDMA with first-come first-served.

Channel encoding

The channel encoding process in GPRS consists of two steps: first, a cyclic code is used to add
parity bits, which are also referred to as the Block Check Sequence, followed by coding with a
possibly punctured convolutional code.[5] The Coding Schemes CS-1 to CS-4 specify the number
of parity bits generated by the cyclic code and the puncturing rate of the convolutional code.[5] In
Coding Schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is
converted into two coded bits.[5] In Coding Schemes CS-2 and CS-3, the output of the
convolutional code is punctured to achieve the desired code rate.[5] In Coding Scheme CS-4, no
convolutional coding is applied.[5] The following table summarises the options.

GPRS coding scheme | Bitrate including RLC/MAC overhead[a][b] (kbit/s/slot) | Bitrate excluding RLC/MAC overhead[c] (kbit/s/slot) | Modulation | Code rate
CS-1 | 9.20 | 8.00 | GMSK | 1/2
CS-2 | 13.55 | 12.00 | GMSK | 2/3
CS-3 | 15.75 | 14.40 | GMSK | 3/4
CS-4 | 21.55 | 20.00 | GMSK | 1

1. This is the rate at which the RLC/MAC layer protocol data unit (PDU) (called a radio
block) is transmitted. As shown in TS 44.060 section 10.0a.1,[6] a radio block consists of
MAC header, RLC header, RLC data unit and spare bits. The RLC data unit represents
the payload, the rest is overhead. The radio block is coded by the convolutional code
specified for a particular Coding Scheme, which yields the same PHY layer data rate for
all Coding Schemes.

2. The bitrate cited in various sources, e.g. in TS 45.001 table 1,[5] includes the
RLC/MAC headers but excludes the uplink state flag (USF), which is part of the
MAC header,[7] yielding a bitrate that is 0.15 kbit/s lower.

3. The net bitrate here is the rate at which the RLC/MAC layer payload (the RLC
data unit) is transmitted. As such, this bit rate excludes the header overhead from the
RLC/MAC layers.
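The two-step encoding described above (cyclic parity bits, then a possibly punctured convolutional code) can be illustrated with the convolutional stage alone. The sketch below uses the rate-1/2 generator polynomials used in GSM (G0 = 1 + D^3 + D^4, G1 = 1 + D + D^3 + D^4) and omits tail bits; the puncturing pattern shown is only illustrative of how a 2/3 rate arises, as the actual CS-2 pattern is defined in 3GPP TS 45.003:

```python
def conv_encode_r12(bits):
    """Rate-1/2 convolutional encoder with the GSM generator polynomials
    G0 = 1 + D^3 + D^4 and G1 = 1 + D + D^3 + D^4 (tail bits omitted)."""
    state = [0, 0, 0, 0]  # shift register holding D^1..D^4
    out = []
    for b in bits:
        g0 = b ^ state[2] ^ state[3]             # 1 + D^3 + D^4
        g1 = b ^ state[0] ^ state[2] ^ state[3]  # 1 + D + D^3 + D^4
        out += [g0, g1]
        state = [b] + state[:3]
    return out

def puncture(coded, keep_pattern):
    """Drop coded bits whose pattern entry is 0; keeping 3 of every
    4 output bits turns a rate-1/2 code into a rate-2/3 code."""
    return [b for i, b in enumerate(coded) if keep_pattern[i % len(keep_pattern)]]

data = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode_r12(data)         # 16 bits out for 8 in: rate 1/2
sent = puncture(coded, [1, 1, 1, 0])  # 12 bits kept: rate 8/12 = 2/3
print(len(coded), len(sent))          # 16 12
```

CS-4 simply skips the convolutional stage, which is why it offers the highest bitrate but no error protection beyond the parity bits.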

The least robust, but fastest, coding scheme (CS-4) is available near a base transceiver
station (BTS), while the most robust coding scheme (CS-1) is used when the mobile station
(MS) is further away from a BTS.

Using the CS-4 it is possible to achieve a user speed of 20.0 kbit/s per time slot. However, using
this scheme the cell coverage is 25% of normal. CS-1 can achieve a user speed of only 8.0
kbit/s per time slot, but has 98% of normal coverage. Newer network equipment can adapt the
transfer speed automatically depending on the mobile location.

In addition to GPRS, there are two other GSM technologies which deliver data services: circuit-
switched data (CSD) and high-speed circuit-switched data (HSCSD). In contrast to the shared
nature of GPRS, these instead establish a dedicated circuit (usually billed per minute). Some
applications such as video calling may prefer HSCSD, especially when there is a continuous
flow of data between the endpoints.

The following table summarises some possible configurations of GPRS and circuit switched
data services.

Technology | Download (kbit/s) | Upload (kbit/s) | TDMA timeslots allocated (DL+UL)
CSD | 9.6 | 9.6 | 1+1
HSCSD | 28.8 | 14.4 | 2+1
HSCSD | 43.2 | 14.4 | 3+1
GPRS | 85.6 | 21.4 (Class 8 & 10 and CS-4) | 4+1
GPRS | 64.2 | 42.8 (Class 10 and CS-4) | 3+2
EGPRS (EDGE) | 236.8 | 59.2 (Class 8, 10 and MCS-9) | 4+1
EGPRS (EDGE) | 177.6 | 118.4 (Class 10 and MCS-9) | 3+2
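The download figures in the table above follow directly from multiplying the allocated downlink timeslots by the per-slot bitrate of the best coding scheme, as this small sketch checks:

```python
# per-slot net bitrates (kbit/s) as used in the comparison table above
CS4 = 21.4   # GPRS CS-4 figure commonly quoted for class 8/10 devices
MCS9 = 59.2  # EGPRS MCS-9 figure used in the same table

def download_rate(slots, per_slot):
    """Aggregate downlink rate = allocated timeslots x per-slot bitrate."""
    return round(slots * per_slot, 1)

print(download_rate(4, CS4))   # 85.6  -> GPRS 4+1
print(download_rate(3, CS4))   # 64.2  -> GPRS 3+2
print(download_rate(4, MCS9))  # 236.8 -> EDGE 4+1
print(download_rate(3, MCS9))  # 177.6 -> EDGE 3+2
```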

Multislot Class

The multislot class determines the speed of data transfer available in the uplink and downlink
directions. It is a value between 1 and 45 which the network uses to allocate radio channels in
the uplink and downlink directions. Multislot classes with values greater than 31 are referred to
as high multislot classes.

A multislot allocation is represented as, for example, 5+2. The first number is the number of
downlink timeslots and the second is the number of uplink timeslots allocated for use by the
mobile station. A commonly used value for many GPRS/EGPRS mobiles is class 10, which allows
a maximum of 4 timeslots in the downlink direction and 2 timeslots in the uplink direction, with
at most 5 timeslots in use simultaneously across uplink and downlink. The network will
automatically configure the connection for either 3+2 or 4+1 operation depending on the nature
of the data transfer.

Some high-end mobiles, usually also supporting UMTS, also support GPRS/EDGE multislot
class 32. According to 3GPP TS 45.002 (Release 12), Table B.1, mobile stations of this class
support 5 timeslots in downlink and 3 timeslots in uplink with a maximum number of 6 simultaneously
used timeslots. If data traffic is concentrated in downlink direction the network will configure the
connection for 5+1 operation. When more data is transferred in the uplink the network can at
any time change the constellation to 4+2 or 3+3. Under the best reception conditions, i.e. when
the best EDGE modulation and coding scheme can be used, 5 timeslots can carry a bandwidth
of 5*59.2 kbit/s = 296 kbit/s. In uplink direction, 3 timeslots can carry a bandwidth of 3*59.2
kbit/s = 177.6 kbit/s.[8]

Multislot Classes for GPRS/EGPRS

Multislot Class | Downlink TS | Uplink TS | Active TS
1 | 1 | 1 | 2
2 | 2 | 1 | 3
3 | 2 | 2 | 3
4 | 3 | 1 | 4
5 | 2 | 2 | 4
6 | 3 | 2 | 4
7 | 3 | 3 | 4
8 | 4 | 1 | 5
9 | 3 | 2 | 5
10 | 4 | 2 | 5
11 | 4 | 3 | 5
12 | 4 | 4 | 5
30 | 5 | 1 | 6
31 | 5 | 2 | 6
32 | 5 | 3 | 6
33 | 5 | 4 | 6
34 | 5 | 5 | 6

Attributes of a multislot class

Each multislot class identifies the following:

the maximum number of Timeslots that can be allocated on uplink

the maximum number of Timeslots that can be allocated on downlink

the total number of timeslots which can be allocated by the network to the mobile

the time needed for the MS to perform adjacent cell signal level measurement and get
ready to transmit

the time needed for the MS to get ready to transmit

the time needed for the MS to perform adjacent cell signal level measurement and get
ready to receive

the time needed for the MS to get ready to receive.


The different multislot classes are detailed in Annex B of the 3GPP Technical
Specification 45.002 (Multiplexing and multiple access on the radio path).
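A minimal sketch of how such an allocation could be checked against the per-class limits listed above, using two example classes from the table (a simplified view that ignores the measurement and switching-time attributes):

```python
# (max downlink TS, max uplink TS, max active TS) for two example classes
MULTISLOT = {
    10: (4, 2, 5),
    32: (5, 3, 6),
}

def allocation_ok(ms_class, downlink, uplink):
    """Check a DL+UL timeslot allocation against the class limits:
    each direction within its maximum, and the total within Active TS."""
    max_dl, max_ul, max_active = MULTISLOT[ms_class]
    return (downlink <= max_dl and uplink <= max_ul
            and downlink + uplink <= max_active)

print(allocation_ok(10, 4, 1))  # True  (4+1)
print(allocation_ok(10, 3, 2))  # True  (3+2)
print(allocation_ok(10, 4, 2))  # False (4+2 would exceed 5 active slots)
print(allocation_ok(32, 5, 1))  # True  (5+1)
```

This is why a class 10 mobile alternates between 4+1 and 3+2 rather than ever using 4+2.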

Usability

The maximum speed of a GPRS connection offered in 2003 was similar to a modem connection
in an analog wire telephone network, about 32–40 kbit/s, depending on the phone
used. Latency is very high; round-trip time (RTT) is typically about 600–700 ms and often
reaches 1 s. GPRS is typically prioritized lower than speech, and thus the quality of connection
varies greatly.

Devices with latency/RTT improvements (via, for example, the extended UL TBF mode feature)
are generally available. Also, network upgrades of features are available with certain operators.
With these enhancements the active round-trip time can be reduced, resulting in significant
increase in application-level throughput speeds.

History of GPRS

GPRS opened in 2000 as a packet-switched data service embedded in the circuit-switched
cellular radio network GSM. GPRS extends the reach of the fixed Internet by connecting mobile
terminals worldwide.

The CELLPAC[9] protocol, developed in 1991–1993, was the trigger for starting the
specification of standard GPRS by ETSI SMG in 1993. In particular, the CELLPAC Voice & Data
functions introduced in a 1993 ETSI workshop contribution[10] anticipated what later became
the roots of GPRS. This workshop contribution is referenced in 22 GPRS-related US patents.[11]

Successor systems to GSM/GPRS like W-CDMA (UMTS) and LTE rely on key GPRS
functions for mobile Internet access as introduced by CELLPAC.

According to a study on the history of GPRS development,[12] Bernhard Walke and his student
Peter Decker are the inventors of GPRS, the first system providing worldwide mobile Internet
access.

A Google search for "Inventors of mobile Internet" returns the company Unwired Planet, Inc.,
which calls itself "The Founder of the Mobile Internet". The company is exploiting a large patent
portfolio in which GPRS-relevant (Ericsson) patents reference CELLPAC.

Enhanced Data Rates for GSM Evolution


EDGE sign shown in notification bar on an Android-based smartphone.

Enhanced Data rates for GSM Evolution (EDGE) (also known as Enhanced GPRS (EGPRS),
or IMT Single Carrier (IMT-SC), or Enhanced Data rates for Global Evolution) is a
digital mobile phone technology that allows improved data transmission rates as a backward-
compatible extension of GSM. EDGE is considered a pre-3G radio technology and is part
of ITU's 3G definition.[1] EDGE was deployed on GSM networks beginning in 2003 initially
by Cingular (now AT&T) in the United States.[2]

EDGE is also standardized by 3GPP as part of the GSM family. A variant, so-called Compact-
EDGE, was developed for use in a portion of the Digital AMPS network spectrum.[3]

Through the introduction of sophisticated methods of coding and transmitting data, EDGE
delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and
performance compared with an ordinary GSM/GPRS connection.

EDGE can be used for any packet switched application, such as an Internet connection.

Evolved EDGE continues in Release 7 of the 3GPP standard providing reduced latency and
more than doubled performance e.g. to complement High-Speed Packet Access (HSPA). Peak
bit-rates of up to 1 Mbit/s and typical bit-rates of 400 kbit/s can be expected.

Technology

EDGE/EGPRS is implemented as a bolt-on enhancement for 2.5G GSM/GPRS networks,
making it easier for existing GSM carriers to upgrade to it. EDGE is a superset to GPRS and
can function on any network with GPRS deployed on it, provided the carrier implements the
necessary upgrade. EDGE requires no hardware or software changes to be made in GSM core
networks. EDGE-compatible transceiver units must be installed and the base station subsystem
needs to be upgraded to support EDGE. If the operator already has this in place, which is often
the case today, the network can be upgraded to EDGE by activating an optional software
feature. Today EDGE is supported by all major chip vendors for both GSM and WCDMA/HSPA.

Transmission techniques

In addition to Gaussian minimum-shift keying (GMSK), EDGE uses higher-order PSK/8 phase
shift keying (8PSK) for the upper five of its nine modulation and coding schemes. EDGE
produces a 3-bit word for every change in carrier phase. This effectively triples the gross data
rate offered by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the
modulation and coding scheme (MCS) according to the quality of the radio channel, and thus
the bit rate and robustness of data transmission. It introduces a new technology not found in
GPRS, Incremental Redundancy, which, instead of retransmitting disturbed packets, sends
more redundancy information to be combined in the receiver. This increases the probability of
correct decoding.
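The tripling of the gross rate follows from the modulation alone: GMSK and 8PSK both run at the GSM carrier symbol rate of 270.833 ksymbols/s, but 8PSK carries a 3-bit word per symbol where GMSK carries one bit:

```python
SYMBOL_RATE = 270.833  # ksymbols/s on a GSM carrier

gmsk_gross = SYMBOL_RATE * 1  # GMSK: 1 bit per symbol
psk8_gross = SYMBOL_RATE * 3  # 8PSK: a 3-bit word per symbol

# 8PSK triples the gross rate relative to GMSK
print(f"GMSK gross: {gmsk_gross:.1f} kbit/s, 8PSK gross: {psk8_gross:.1f} kbit/s")
```

The net user rates of the MCS table below are lower than these gross figures because of channel coding and RLC/MAC overhead.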

EDGE can carry a bandwidth up to 500 kbit/s (with end-to-end latency of less than 150 ms) for
4 timeslots (theoretical maximum is 473.6 kbit/s for 8 timeslots) in packet mode. This means it
can handle four times as much traffic as standard GPRS. EDGE meets the International
Telecommunications Union's requirement for a 3G network, and has been accepted by the ITU
as part of the IMT-2000 family of 3G standards.[1] It also enhances the circuit data mode
called HSCSD, increasing the data rate of this service.

EDGE modulation and coding scheme (MCS)


The channel encoding process in GPRS as well as EGPRS/EDGE consists of two steps: first, a
cyclic code is used to add parity bits, which are also referred to as the Block Check Sequence,
followed by coding with a possibly punctured convolutional code.[4] In GPRS, the Coding
Schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the
puncturing rate of the convolutional code.[4] In GPRS Coding Schemes CS-1 through CS-3, the
convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits.[4] In Coding
Schemes CS-2 and CS-3, the output of the convolutional code is punctured to achieve the
desired code rate.[4] In GPRS Coding Scheme CS-4, no convolutional coding is applied.[4]

In EGPRS/EDGE, the Modulation and Coding Schemes MCS-1 to MCS-9 take the place of the
Coding Schemes of GPRS, and additionally specify which modulation scheme is used, GMSK
or 8PSK.[4] MCS-1 through MCS-4 use GMSK and have performance similar (but not equal) to
GPRS, while MCS-5 through MCS-9 use 8PSK.[4] In all EGPRS Modulation and Coding
Schemes, a convolutional code of rate 1/3 is used, and puncturing is used to achieve the
desired code rate.[4] In contrast to GPRS, the Radio Link Control (RLC) and Media Access
Control (MAC) headers and the payload data are coded separately in EGPRS.[4] The headers
are coded more robustly than the data.[4]

GPRS coding scheme | Bitrate including RLC/MAC overhead[a][b] (kbit/s/slot) | Bitrate excluding RLC/MAC overhead[c] (kbit/s/slot) | Modulation | Code rate
CS-1 | 9.20 | 8.00 | GMSK | 1/2
CS-2 | 13.55 | 12.00 | GMSK | 2/3
CS-3 | 15.75 | 14.40 | GMSK | 3/4
CS-4 | 21.55 | 20.00 | GMSK | 1

EDGE Modulation and Coding Scheme (MCS) | Bitrate including RLC/MAC overhead[a] (kbit/s/slot) | Bitrate excluding RLC/MAC overhead[c] (kbit/s/slot) | Modulation | Data code rate | Header code rate
MCS-1 | 9.20 | 8.00 | GMSK | 0.53 | 0.53
MCS-2 | 11.60 | 10.40 | GMSK | 0.66 | 0.53
MCS-3 | 15.20 | 14.80 | GMSK | 0.85 | 0.53
MCS-4 | 18.00 | 16.80 | GMSK | 1 | 0.53
MCS-5 | 22.80 | 21.60 | 8PSK | 0.37 | 1/3
MCS-6 | 30.00 | 28.80 | 8PSK | 0.49 | 1/3
MCS-7 | 45.20 | 44.00 | 8PSK | 0.76 | 0.39
MCS-8 | 54.80 | 53.60 | 8PSK | 0.92 | 0.39
MCS-9 | 59.60 | 58.40 | 8PSK | 1 | 0.39

1. This is the rate at which the RLC/MAC layer protocol data unit (PDU) (called a radio
block) is transmitted. As shown in TS 44.060 section 10.0a.1,[5] a radio block consists of
MAC header, RLC header, RLC data unit and spare bits. The RLC data unit represents
the payload, the rest is overhead. The radio block is coded by the convolutional code
specified for a particular Coding Scheme, which yields the same PHY layer data rate for
all Coding Schemes.

2. The bitrate cited in various sources, e.g. in TS 45.001 table 1,[4] includes the
RLC/MAC headers but excludes the uplink state flag (USF), which is part of the
MAC header,[6] yielding a bitrate that is 0.15 kbit/s lower.

3. The net bitrate here is the rate at which the RLC/MAC layer payload (the RLC
data unit) is transmitted. As such, this bit rate excludes the header overhead from the
RLC/MAC layers.

Evolved EDGE

Evolved EDGE improves on EDGE in a number of ways. Latencies are reduced by lowering
the Transmission Time Interval by half (from 20 ms to 10 ms). Bit rates are increased up to 1
Mbit/s peak bandwidth and latencies down to 80 ms using dual carrier, higher symbol rate
and higher-order modulation (32QAM and 16QAM instead of 8PSK), and turbo codes to
improve error correction. Finally, signal quality is improved using dual antennas, improving
average bit-rates and spectrum efficiency. EDGE Evolution can be gradually introduced as
software upgrades, taking advantage of the installed base. With EDGE Evolution, end-users will
be able to experience mobile internet connections corresponding to a 500 kbit/s ADSL service.[7]

Networks

The Global mobile Suppliers Association (GSA) states that,[8] as of May 2013, there were 604
GSM/EDGE networks in 213 countries, from a total of 6

Universal Mobile Telecommunications System



The Universal Mobile Telecommunications System (UMTS) is a third generation mobile
cellular system for networks based on the GSM standard. Developed and maintained by
the 3GPP (3rd Generation Partnership Project), UMTS is a component of the International
Telecommunications Union IMT-2000 standard set and compares with the CDMA2000 standard
set for networks based on the competing cdmaOne technology. UMTS uses wideband code
division multiple access (W-CDMA) radio access technology to offer greater spectral efficiency
and bandwidth to mobile network operators.

UMTS specifies a complete network system, which includes the radio access network (UMTS
Terrestrial Radio Access Network, or UTRAN), the core network (Mobile Application Part, or
MAP) and the authentication of users via SIM (subscriber identity module) cards.

The technology described in UMTS is sometimes also referred to as Freedom of Mobile
Multimedia Access (FOMA)[1] or 3GSM.

Unlike EDGE (IMT Single-Carrier, based on GSM) and CDMA2000 (IMT Multi-Carrier), UMTS
requires new base stations and new frequency allocations.

Features

UMTS supports maximum theoretical data transfer rates of 42 Mbit/s when Evolved
HSPA (HSPA+) is implemented in the network.[2] Users in deployed networks can expect a
transfer rate of up to 384 kbit/s for Release '99 (R99) handsets (the original UMTS release), and
7.2 Mbit/s for High-Speed Downlink Packet Access (HSDPA) handsets in the downlink
connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-
corrected circuit switched data channel, multiple 9.6 kbit/s channels in High-Speed Circuit-
Switched Data (HSCSD) and 14.4 kbit/s for CDMAOne channels.

Since 2006, UMTS networks in many countries have been or are in the process of being
upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G.
Currently, HSDPA enables downlink transfer speeds of up to 21 Mbit/s. Work is also progressing
on improving the uplink transfer speed with the High-Speed Uplink Packet Access (HSUPA).
Longer term, the 3GPP Long Term Evolution (LTE) project plans to move UMTS to 4G speeds of
100 Mbit/s down and 50 Mbit/s up, using a next generation air interface technology based
upon orthogonal frequency-division multiplexing.

The first national consumer UMTS networks launched in 2002 with a heavy emphasis on telco-
provided mobile applications such as mobile TV and video calling. The high data speeds of
UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has
shown that user demand for video calls is not high, and telco-provided audio/video content has
declined in popularity in favour of high-speed access to the World Wide Web, either directly on
a handset or connected to a computer via Wi-Fi, Bluetooth or USB.
Technology

UMTS network architecture

UMTS combines three different air interfaces, GSM's Mobile Application Part (MAP) core, and
the GSM family of speech codecs.

Air interfaces
UMTS provides several different terrestrial air interfaces, called UMTS Terrestrial Radio
Access (UTRA).[3] All air interface options are part of ITU's IMT-2000. In the currently most
popular variant for cellular mobile telephones, W-CDMA (IMT Direct Spread) is used.

Please note that the terms W-CDMA, TD-CDMA and TD-SCDMA are misleading. While they
suggest covering just a channel access method (namely a variant of CDMA), they are actually
the common names for the whole air interface standards.[4]

W-CDMA (UTRA-FDD)

3G sign shown in notification bar on an Android powered smartphone.


UMTS base station on the roof of a building

W-CDMA uses the DS-CDMA channel access method with a pair of 5 MHz wide channels. In
contrast, the competing CDMA2000 system uses one or more available 1.25 MHz channels for
each direction of communication. W-CDMA systems are widely criticized for their large spectrum
usage, which has delayed deployment in countries that acted relatively slowly in allocating new
frequencies specifically for 3G services (such as the United States).

The specific frequency bands originally defined by the UMTS standard are 1885–2025 MHz for
the mobile-to-base (uplink) and 2110–2200 MHz for the base-to-mobile (downlink). In the US,
1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already
used.[5] While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS
operators use the 850 MHz and/or 1900 MHz bands (independently, meaning uplink and
downlink are within the same band), notably in the US by AT&T Mobility, New Zealand
by Telecom New Zealand on the XT Mobile Network and in Australia by Telstra on the Next
G network. Some carriers such as T-Mobile use band numbers to identify the UMTS
frequencies. For example, Band I (2100 MHz), Band IV (1700/2100 MHz), and Band V
(850 MHz).

W-CDMA is a part of IMT-2000 as IMT Direct Spread.

UMTS-FDD is an acronym for Universal Mobile Telecommunications System (UMTS)
frequency-division duplexing (FDD), a 3GPP standardized version of UMTS networks that
makes use of frequency-division duplexing over a UMTS Terrestrial Radio
Access (UTRA) air interface.[6]

W-CDMA or WCDMA (Wideband Code Division Multiple Access), along with UMTS-
FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread, is an air interface standard found
in 3G mobile telecommunications networks. It supports conventional cellular voice, text
and MMS services, but can also carry data at high speeds, allowing mobile operators to deliver
higher bandwidth applications including streaming and broadband Internet access.[7]

W-CDMA is the basis of Japan's NTT DoCoMo's FOMA service and the most-commonly used
member of the Universal Mobile Telecommunications System (UMTS) family and sometimes
used as a synonym for UMTS.[8] It uses the DS-CDMA channel access method and
the FDD duplexing method to achieve higher speeds and support more users compared to most
previously used time division multiple access (TDMA) and time division duplex (TDD) schemes.

While not an evolutionary upgrade on the airside, it uses the same core network as
the 2G GSM networks deployed worldwide, allowing dual mode mobile operation along with
GSM/EDGE; a feature it shares with other members of the UMTS family.
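The CDMA principle behind W-CDMA can be shown with a toy example: users spread their data bits with orthogonal chip sequences, transmit on the same channel at the same time, and are separated at the receiver by correlation. The length-4 codes here are purely illustrative; W-CDMA uses OVSF codes with spreading factors from 4 up to 512:

```python
# two orthogonal spreading codes (their dot product is zero)
CODE_A = [1, 1, -1, -1]
CODE_B = [1, -1, 1, -1]

def spread(bits, code):
    """Multiply each data bit (+/-1) by the chip sequence."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate the received chips with the code over each symbol
    period; the sign of the correlation recovers the user's bit."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

a = spread([1, -1], CODE_A)
b = spread([-1, -1], CODE_B)
channel = [x + y for x, y in zip(a, b)]  # both signals add on the air

print(despread(channel, CODE_A))  # [1, -1]  -> user A's bits
print(despread(channel, CODE_B))  # [-1, -1] -> user B's bits
```

Because the codes are orthogonal, each receiver's correlation cancels the other user's contribution, which is what lets several users share one 5 MHz channel.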

Development

In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G
network FOMA. Later NTT DoCoMo submitted the specification to the International
Telecommunication Union (ITU) as a candidate for the international 3G standard known as IMT-
2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards, as
an alternative to CDMA2000, EDGE, and the short range DECT system. Later, W-CDMA was
selected as an air interface for UMTS.

As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their
network was initially incompatible with UMTS.[9] However, this has been resolved by NTT
DoCoMo updating their network.

Code Division Multiple Access communication networks have been developed by a number of
companies over the years, but development of cell-phone networks based on CDMA (prior to W-
CDMA) was dominated by Qualcomm. Qualcomm was the first company to succeed in
developing a practical and cost-effective CDMA implementation for consumer cell phones and
its early IS-95 air interface standard has evolved into the current CDMA2000 (IS-856/IS-2000)
standard. Qualcomm created an experimental wideband CDMA system called
CDMA2000 3x which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network
technologies into a single design for a worldwide standard air interface. Compatibility with
CDMA2000 would have beneficially enabled roaming on existing networks beyond Japan, since
Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with coverage
in 58 countries as of 2006. However, divergent requirements resulted in the W-CDMA standard
being retained and deployed globally. W-CDMA has then become the dominant technology with
457 commercial networks in 178 countries as of April 2012.[10] Several cdma2000 operators
have even converted their networks to W-CDMA for international roaming compatibility and
smooth upgrade path to LTE.
Despite incompatibility with existing air-interface standards, late introduction and the high
upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the
dominant standard.

Rationale for W-CDMA

W-CDMA transmits on a pair of 5 MHz-wide radio channels, while CDMA2000 transmits on one
or several pairs of 1.25 MHz radio channels. Though W-CDMA does use a direct
sequence CDMA transmission technique like CDMA2000, W-CDMA is not simply a wideband
version of CDMA2000. The W-CDMA system is a new design by NTT DoCoMo, and it differs in
many aspects from CDMA2000. From an engineering point of view, W-CDMA provides a
different balance of trade-offs between cost, capacity, performance, and density; it also
promises to achieve a benefit of reduced cost for video phone handsets. W-CDMA may also be
better suited for deployment in the very dense cities of Europe and Asia. However, hurdles
remain, and cross-licensing of patents between Qualcomm and W-CDMA vendors has not
eliminated possible patent issues due to the features of W-CDMA which remain covered by
Qualcomm patents.[11]

W-CDMA has been developed into a complete set of specifications, a detailed protocol that
defines how a mobile phone communicates with the tower, how signals are modulated, how
datagrams are structured, and how system interfaces are specified, allowing free competition on
technology elements.

Deployment

The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo
in Japan in 2001.

Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand. See the
main UMTS article for more information.

W-CDMA has also been adapted for use in satellite communications on the U.S. Mobile User
Objective System using geosynchronous satellites in place of cell towers.

J-Phone Japan (once Vodafone and now SoftBank Mobile) soon followed by launching their
own W-CDMA based service, originally branded "Vodafone Global Standard" and claiming
UMTS compatibility. The name of the service was changed to "Vodafone 3G" (now "SoftBank
3G") in December 2004.

Beginning in 2003, Hutchison Whampoa gradually launched their upstart UMTS networks.

Most countries have, since the ITU approved of the 3G mobile service, either "auctioned" the
radio frequencies to the company willing to pay the most, or conducted a "beauty contest"
asking the various companies to present what they intend to commit to if awarded the licences.
This strategy has been criticised for aiming to drain the cash of operators to the brink of
bankruptcy in order to honour their bids or proposals. Most of them have a time constraint for
the rollout of the service, where a certain "coverage" must be achieved within a given date or
the licence will be revoked.

Vodafone launched several UMTS networks in Europe in February 2004. MobileOne of Singapore
commercially launched its 3G (W-CDMA) services in February 2005, with services following in
New Zealand in August 2005 and Australia in October 2005.

AT&T Wireless (now a part of Cingular Wireless) has deployed UMTS in several cities. Though
advancements in its network deployment have been delayed due to the merger with Cingular,
Cingular began offering HSDPA service in December 2005.

In Canada, Rogers launched HSDPA in March 2007 in the Toronto-Golden Horseshoe district
on W-CDMA at 850/1900 MHz, and planned to launch the service commercially in the top 25 cities
by October 2007.

TeliaSonera opened W-CDMA service in Finland on October 13, 2004, with speeds up to 384 kbit/s,
available only in the main cities. Pricing was approx. €2/MB.

SK Telecom and KTF, the two largest mobile phone service providers in South Korea, each
started offering W-CDMA service in December 2003. Due to poor coverage and a lack of choice in
handsets, the W-CDMA service barely made a dent in the Korean market, which was
dominated by CDMA2000. By October 2006 both companies were covering more than 90 cities,
and SK Telecom announced that it would provide nationwide coverage for its W-CDMA
network in order to offer SBSM (Single Band Single Mode) handsets by the first half of
2007. KT Freecel thus cut funding for its CDMA2000 network development to a minimum.

In Norway, Telenor introduced W-CDMA in major cities by the end of 2004, while their
competitor, NetCom, followed suit a few months later. Both operators have 98% national
coverage on EDGE, but Telenor also operates parallel WLAN roaming networks on GSM, with
which the UMTS service competes; for this reason Telenor dropped support for its WLAN
service in Austria in 2006.

Maxis Communications and Celcom, two mobile phone service providers in Malaysia, started
offering W-CDMA services in 2005.

In Sweden, Telia introduced W-CDMA March 2004.

TD-CDMA (UTRA-TDD 3.84 Mcps High Chip Rate (HCR))

TD-CDMA, an acronym for Time Division-Code Division Multiple Access, is a channel access
method based on using spread spectrum multiple access (CDMA) across multiple time slots
(TDMA). TD-CDMA is the channel access method for UTRA-TDD HCR, which is an acronym
for UMTS Terrestrial Radio Access-Time Division Duplex High Chip Rate.[12]
UMTS-TDD's air interfaces that use the TD-CDMA channel access technique are standardized
as UTRA-TDD HCR, which uses increments of 5 MHz of spectrum, each slice divided into 10ms
frames containing fifteen time slots (1500 per second).[13] The time slots (TS) are allocated in
fixed percentage for downlink and uplink. TD-CDMA is used to multiplex streams from or to
multiple transceivers. Unlike W-CDMA, it does not need separate frequency bands for up- and
downstream, allowing deployment in tight frequency bands.
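The frame structure above implies the slot timing directly. A small sketch of the arithmetic (variable names are illustrative; the 3.84 Mcps chip rate is the HCR figure from the section heading):

```python
# UTRA-TDD HCR frame structure, per the figures in the text:
# 10 ms radio frames, each divided into 15 time slots.
FRAME_MS = 10
SLOTS_PER_FRAME = 15
CHIP_RATE = 3_840_000  # 3.84 Mcps (High Chip Rate)

frames_per_second = 1000 // FRAME_MS                      # 100
slots_per_second = frames_per_second * SLOTS_PER_FRAME    # 1500, as stated above
slot_duration_ms = FRAME_MS / SLOTS_PER_FRAME             # ~0.667 ms
chips_per_slot = CHIP_RATE * FRAME_MS // 1000 // SLOTS_PER_FRAME  # 2560

print(frames_per_second, slots_per_second, chips_per_slot)
```

The 1500 slots per second quoted in the text fall out of the 10 ms / 15-slot structure; the chips-per-slot figure follows from the chip rate.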

TD-CDMA is a part of IMT-2000, defined as IMT-TD Time-Division (IMT CDMA TDD), and is one
of the three UMTS air interfaces (UTRAs), as standardized by the 3GPP in UTRA-TDD HCR.
UTRA-TDD HCR is closely related to W-CDMA (UMTS), and provides the same types of
channels where possible. UMTS's HSDPA/HSUPA enhancements are also implemented under
TD-CDMA.[14]

TD-SCDMA (UTRA-TDD 1.28 Mcps Low Chip Rate (LCR))

TD-SCDMA uses the TDMA channel access method combined with an adaptive synchronous
CDMA component[15] on 1.6 MHz slices of spectrum, allowing deployment in even tighter
frequency bands than TD-CDMA. However, the main incentive for development of this Chinese-
developed standard was avoiding or reducing the license fees that have to be paid to non-
Chinese patent owners. Unlike the other air interfaces, TD-SCDMA was not part of UMTS from
the beginning but has been added in Release 4 of the specification.

Like TD-CDMA, TD-SCDMA is known as IMT CDMA TDD within IMT-2000.

Radio access network

UMTS also specifies the Universal Terrestrial Radio Access Network (UTRAN), which is
composed of multiple base stations, possibly using different terrestrial air interface standards
and frequency bands.

UMTS and GSM/EDGE can share a Core Network (CN), making UTRAN an alternative radio
access network to GERAN (GSM/EDGE RAN), and allowing (mostly) transparent switching
between the RANs according to available coverage and service needs. Because of that,
UMTS's and GSM/EDGE's radio access networks are sometimes collectively referred to
as UTRAN/GERAN.

UMTS networks are often combined with GSM/EDGE, the latter of which is also a part of IMT-
2000.

The UE (User Equipment) interface of the RAN (Radio Access Network) primarily consists of
RRC (Radio Resource Control), RLC (Radio Link Control) and MAC (Media Access Control)
protocols. The RRC protocol handles connection establishment, measurements, radio bearer
services, security and handover decisions. The RLC protocol is primarily divided into three
modes: Transparent Mode (TM), Unacknowledged Mode (UM) and Acknowledged Mode (AM). The
functionality of the AM entity resembles TCP operation, whereas UM operation resembles UDP
operation. In TM mode, data will be sent to lower layers without adding any header to SDU of
higher layers. MAC handles the scheduling of data on air interface depending on higher layer
(RRC) configured parameters.

The set of properties related to data transmission is called Radio Bearer (RB). This set of
properties decides the maximum allowed data in a TTI (Transmission Time Interval). RB
includes RLC information and RB mapping. RB mapping decides the mapping between RB <->
logical channel <-> transport channel. Signaling messages are sent on Signaling Radio Bearers
(SRBs) and data packets (either CS or PS) are sent on data RBs. RRC and NAS messages go
on SRBs.
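As a rough illustration of how these properties group together, a radio bearer as described above can be modelled as a plain record. The field names and example values here are purely illustrative, not 3GPP-defined structures:

```python
# Toy model of a Radio Bearer's configuration (illustrative only).
from dataclasses import dataclass

@dataclass
class RadioBearer:
    rb_id: int
    rlc_mode: str           # "TM", "UM" or "AM"
    logical_channel: str    # e.g. "DCCH" for signalling, "DTCH" for user data
    transport_channel: str  # e.g. "DCH"
    signalling: bool        # True for SRBs, False for data RBs

# An SRB carrying RRC signalling in acknowledged (TCP-like) mode:
srb2 = RadioBearer(rb_id=2, rlc_mode="AM",
                   logical_channel="DCCH", transport_channel="DCH",
                   signalling=True)

# A data RB carrying a PS stream in unacknowledged (UDP-like) mode:
rb5 = RadioBearer(rb_id=5, rlc_mode="UM",
                  logical_channel="DTCH", transport_channel="DCH",
                  signalling=False)

print(srb2.signalling, rb5.rlc_mode)
```

The RB-to-logical-to-transport mapping in the text corresponds to the last three fields of the record.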

Security includes two procedures: integrity and ciphering. Integrity validates the source of
messages and also makes sure that no one (a third/unknown party) on the radio interface has
modified them. Ciphering ensures that no one listens to your data on the air interface.
Both integrity and ciphering are applied for SRBs whereas only ciphering is applied for data
RBs.
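The two procedures can be sketched with generic primitives. Note this is a minimal sketch using HMAC-SHA256 and a hash-based keystream purely for illustration; the actual UMTS f9 (integrity) and f8 (ciphering) algorithms are built on KASUMI and are not modelled here:

```python
# Illustrative integrity + ciphering, NOT the real UMTS f8/f9 algorithms.
import hmac, hashlib, os

def integrity_tag(key: bytes, message: bytes) -> bytes:
    # Integrity: the receiver recomputes the tag and rejects modified messages.
    return hmac.new(key, message, hashlib.sha256).digest()[:4]

def cipher(key: bytes, counter: int, data: bytes) -> bytes:
    # Ciphering: XOR with a keystream derived from key + counter.
    # Applying it twice with the same inputs restores the plaintext.
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")
                                 + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, stream))

ik, ck = os.urandom(16), os.urandom(16)   # integrity and ciphering keys
msg = b"RRC CONNECTION SETUP COMPLETE"

tag = integrity_tag(ik, msg)
ct = cipher(ck, counter=7, data=msg)
print(cipher(ck, counter=7, data=ct) == msg)        # ciphering inverts itself
print(integrity_tag(ik, msg + b"!") == tag)         # tampering changes the tag
```

As the text notes, an SRB would apply both functions, while a data RB would apply only the ciphering step.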

Core network
With Mobile Application Part, UMTS uses the same core network standard as GSM/EDGE.
This allows a simple migration for existing GSM operators. However, the migration path to
UMTS is still costly: while much of the core infrastructure is shared with GSM, the cost of
obtaining new spectrum licenses and overlaying UMTS at existing towers is high.

The CN can be connected to various backbone networks, such as the Internet or an Integrated
Services Digital Network (ISDN) telephone network. UMTS (and GERAN) include the three
lowest layers of OSI model. The network layer (OSI 3) includes the Radio Resource
Management protocol (RRM) that manages the bearer channels between the mobile terminals
and the fixed network, including the handovers.

Frequency bands and channel bandwidths

UARFCN

A UARFCN (abbreviation for UTRA Absolute Radio Frequency Channel Number, where UTRA
stands for UMTS Terrestrial Radio Access) is used to identify a frequency in the UMTS
frequency bands.

Typically, the channel number is derived from the frequency in MHz through the formula Channel
Number = Frequency * 5. However, this is only able to represent channels that are centered on
a multiple of 200 kHz, which do not align with licensing in North America. 3GPP added several
special values for the common North American channels.
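The general rule can be expressed directly in code. The special North American channel numbers mentioned above are not modelled, and the example frequency is just an illustration:

```python
# UARFCN arithmetic as given in the text: channel number = frequency (MHz) * 5.
# The additional special North American values are not handled here.

def uarfcn_from_mhz(freq_mhz: float) -> int:
    """Channel number for a carrier centred on the 200 kHz raster."""
    return round(freq_mhz * 5)

def mhz_from_uarfcn(uarfcn: int) -> float:
    """Centre frequency in MHz for a given channel number."""
    return uarfcn / 5

# Example: a downlink carrier at 2112.8 MHz in the 2100 MHz band.
print(uarfcn_from_mhz(2112.8))   # 10564
print(mhz_from_uarfcn(10564))    # 2112.8
```

Frequencies not on a multiple of 200 kHz cannot be represented this way, which is exactly why the special North American values were added.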
Spectrum allocation

Over 130 licenses have already been awarded to operators worldwide (as of December 2004),
specifying W-CDMA radio access technology that builds on GSM. In Europe, the license
process occurred at the tail end of the technology bubble, and the auction mechanisms for
allocation set up in some countries resulted in some extremely high prices being paid for the
original 2100 MHz licenses, notably in the UK and Germany. In Germany, bidders paid a total of
€50.8 billion for six licenses, two of which were subsequently abandoned and written off by their
purchasers (Mobilcom and the Sonera/Telefonica consortium). It has been suggested that these
huge license fees have the character of a very large tax paid on future income expected many
years down the road. In any event, the high prices paid put some European telecom operators
close to bankruptcy (most notably KPN). Over the last few years some operators have written
off some or all of the license costs. Between 2007 and 2009, all three Finnish carriers began to
use 900 MHz UMTS in a shared arrangement with surrounding 2G GSM base stations for
rural area coverage, a trend that is expected to expand across Europe in the next 1-3 years.

The 2100 MHz band (downlink around 2100 MHz and uplink around 1900 MHz) allocated for
UMTS in Europe and most of Asia is already used in North America. The 1900 MHz range is
used for 2G (PCS) services, and 2100 MHz range is used for satellite communications.
Regulators have, however, freed up some of the 2100 MHz range for 3G services, together with
a different range around 1700 MHz for the uplink.

AT&T Wireless launched UMTS services in the United States by the end of 2004 strictly using
the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired AT&T
Wireless in 2004 and has since then launched UMTS in select US cities. Cingular renamed
itself AT&T Mobility and is rolling out some cities with a UMTS network at 850 MHz to enhance
its existing UMTS network at 1900 MHz and now offers subscribers a number of dual-band
UMTS 850/1900 phones.

T-Mobile's rollout of UMTS in the US focused on the 1700 MHz band.

In Canada, UMTS coverage is being provided on the 850 MHz and 1900 MHz bands on the
Rogers and Bell-Telus networks. Bell and Telus share the network. Recently, new
providers Wind Mobile, Mobilicity and Videotron have begun operations in the 1700 MHz band.

In 2008, Australian telco Telstra replaced its existing CDMA network with a national UMTS-
based 3G network, branded as NextG, operating in the 850 MHz band. Telstra currently
provides UMTS service on this network, and also on the 2100 MHz UMTS network, through a
co-ownership of the owning and administrating company 3GIS. This company is also co-owned
by Hutchison 3G Australia, and this is the primary network used by their customers. Optus is
currently rolling out a 3G network operating on the 2100 MHz band in cities and most large
towns, and the 900 MHz band in regional areas. Vodafone is also building a 3G network using
the 900 MHz band.
In India, BSNL started its 3G services in October 2009, beginning with the larger cities
and then expanding over to smaller cities. The 850 MHz and 900 MHz bands provide greater
coverage compared to equivalent 1700/1900/2100 MHz networks, and are best suited to
regional areas where greater distances separate base station and subscriber.

Carriers in South America are now also rolling out 850 MHz networks.

Interoperability and global roaming

UMTS phones (and data cards) are highly portable: they have been designed to roam easily
onto other UMTS networks (if the providers have roaming agreements in place). In addition,
almost all UMTS phones are UMTS/GSM dual-mode devices, so if a UMTS phone travels
outside of UMTS coverage during a call the call may be transparently handed off to available
GSM coverage. Roaming charges are usually significantly higher than regular usage charges.

Most UMTS licensees consider ubiquitous, transparent global roaming an important issue. To
enable a high degree of interoperability, UMTS phones usually support several different
frequencies in addition to their GSM fallback. Different countries support different UMTS
frequency bands: Europe initially used 2100 MHz, while most carriers in the USA use
850 MHz and 1900 MHz. T-Mobile has launched a network in the US operating at 1700 MHz
(uplink) /2100 MHz (downlink), and these bands also have been adopted elsewhere in the US
and in Canada and Latin America. A UMTS phone and network must support a common
frequency to work together. Because of the frequencies used, early models of UMTS phones
designated for the United States will likely not be operable elsewhere and vice versa. There are
now 11 different frequency combinations used around the world, including frequencies formerly
used solely for 2G services.

UMTS phones can use a Universal Subscriber Identity Module, USIM (based on GSM's SIM)
and also work (including UMTS services) with GSM SIM cards. This is a global standard of
identification, and enables a network to identify and authenticate the (U)SIM in the phone.
Roaming agreements between networks allow for calls to a customer to be redirected to them
while roaming and determine the services (and prices) available to the user. In addition to user
subscriber information and authentication information, the (U)SIM provides storage space for
phone book contacts. Handsets can store their data in their own memory or on the (U)SIM card
(which is usually more limited in its phone book contact information). A (U)SIM can be moved to
another UMTS or GSM phone, and the phone will take on the user details of the (U)SIM,
meaning it is the (U)SIM (not the phone) which determines the phone number of the phone and
the billing for calls made from the phone.

Japan was the first country to adopt 3G technologies, and since they had not used GSM
previously they had no need to build GSM compatibility into their handsets and their 3G
handsets were smaller than those available elsewhere. In 2002, NTT DoCoMo's FOMA 3G
network was the first commercial UMTS network; using a pre-release specification,[16] it was
initially incompatible with the UMTS standard at the radio level but used standard USIM cards,
meaning USIM card based roaming was possible (transferring the USIM card into a UMTS or
GSM phone when travelling). Both NTT DoCoMo and SoftBank Mobile(which launched 3G in
December 2002) now use standard UMTS.

Handsets and modems

The Nokia 6650, an early (2003) UMTS handset

All of the major 2G phone manufacturers (that are still in business) are now manufacturers of
3G phones. The early 3G handsets and modems were specific to the frequencies required in
their country, which meant they could only roam to other countries on the same 3G frequency
(though they can fall back to the older GSM standard). Canada and the USA share a common set
of frequencies, as do most European countries. The article UMTS frequency bands is an
overview of UMTS network frequencies around the world.

Using a cellular router, PCMCIA or USB card, customers are able to access 3G broadband
services, regardless of their choice of computer (such as a tablet PC or a PDA). Some
software installs itself from the modem, so that in some cases absolutely no knowledge of
technology is required to get online in moments. Using a phone that supports 3G and Bluetooth
2.0, multiple Bluetooth-capable laptops can be connected to the Internet. Some smartphones
can also act as a mobile WLAN access point.

There are very few 3G phones or modems available supporting all 3G frequencies
(UMTS850/900/1700/1900/2100 MHz). Nokia has recently released a range of phones that
have Pentaband 3G coverage, including the N8 and E7. Many other phones are offering more
than one band which still enables extensive roaming. For example, Apple's iPhone 4 contains a
quadband chipset operating on 850/900/1900/2100 MHz, allowing usage in the majority of
countries where UMTS-FDD is deployed.

Other competing standards

The main competitor to UMTS is CDMA2000 (IMT-MC), which is developed by the 3GPP2.
Unlike UMTS, CDMA2000 is an evolutionary upgrade to an existing 2G standard, cdmaOne,
and is able to operate within the same frequency allocations. This and CDMA2000's narrower
bandwidth requirements make it easier to deploy in existing spectra. In some, but not all, cases,
existing GSM operators only have enough spectrum to implement either UMTS or GSM, not
both. For example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum
available is 5 MHz in each direction. A standard UMTS system would saturate that spectrum.
Where CDMA2000 is deployed, it usually co-exists with UMTS. In many markets however, the
co-existence issue is of little relevance, as legislative hurdles exist to co-deploying two
standards in the same licensed slice of spectrum.

Another competitor to UMTS is EDGE (IMT-SC), which is an evolutionary upgrade to the 2G
GSM system, leveraging existing GSM spectrum. It is also much easier, quicker, and
considerably cheaper for wireless carriers to "bolt-on" EDGE functionality by upgrading their
existing GSM transmission hardware to support EDGE rather than having to install almost all
brand-new equipment to deliver UMTS. However, being developed by 3GPP just as UMTS,
EDGE is not a true competitor. Instead, it is used as a temporary solution preceding UMTS roll-
out or as a complement for rural areas. This is facilitated by the fact that GSM/EDGE and UMTS
specification are jointly developed and rely on the same core network, allowing dual-mode
operation including vertical handovers.

China's TD-SCDMA standard is often seen as a competitor, too. TD-SCDMA has been added to
UMTS' Release 4 as UTRA-TDD 1.28 Mcps Low Chip Rate (UTRA-TDD LCR). Unlike TD-
CDMA (UTRA-TDD 3.84 Mcps High Chip Rate, UTRA-TDD HCR) which complements W-
CDMA (UTRA-FDD), it is suitable for both micro and macro cells. However, the lack of vendors'
support is preventing it from being a real competitor.

While DECT is technically capable of competing with UMTS and other cellular networks in
densely populated, urban areas, it has only been deployed for domestic cordless phones and
private in-house networks.

All of these competitors have been accepted by ITU as part of the IMT-2000 family of 3G
standards, along with UMTS-FDD.

On the Internet access side, competing systems include WiMAX and Flash-OFDM.

Migrating from GSM/GPRS to UMTS

From a GSM/GPRS network, the following network elements can be reused:

Home Location Register (HLR)

Visitor Location Register (VLR)

Equipment Identity Register (EIR)

Mobile Switching Center (MSC) (vendor dependent)


Authentication Center (AUC)

Serving GPRS Support Node (SGSN) (vendor dependent)

Gateway GPRS Support Node (GGSN)

From a GSM/GPRS communication radio network, the following elements cannot be reused:

Base station controller (BSC)

Base transceiver station (BTS)

They can remain in the network and be used in dual network operation where 2G and 3G
networks co-exist while network migration and new 3G terminals become available for use in
the network.

The UMTS network introduces new network elements that function as specified by 3GPP:

Node B (base transceiver station)

Radio Network Controller (RNC)

Media Gateway (MGW)

The functionality of MSC and SGSN changes when going to UMTS. In a GSM system the MSC
handles all the circuit switched operations like connecting A- and B-subscriber through the
network. SGSN handles all the packet switched operations and transfers all the data in the
network. In UMTS the Media Gateway (MGW) takes care of all data transfer in both circuit and
packet switched networks. The MSC and SGSN control MGW operations, and the nodes are
renamed MSC-server and GSN-server.
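The control/user-plane split described above can be summarised in a small sketch; the table of roles is illustrative shorthand for the text, not a 3GPP data structure:

```python
# Illustrative summary of which node carries what, per the text:
# in GSM the MSC and SGSN carry both control and user data; in UMTS
# the MGW carries all user data while the servers retain control.
NODE_ROLES = {
    "GSM":  {"MSC":  ["CS control", "CS user data"],
             "SGSN": ["PS control", "PS user data"]},
    "UMTS": {"MSC-server":  ["CS control"],
             "SGSN-server": ["PS control"],
             "MGW":         ["CS user data", "PS user data"]},
}

def carries_user_data(system: str, node: str) -> bool:
    return any("user data" in role for role in NODE_ROLES[system][node])

print(carries_user_data("UMTS", "MGW"))         # True
print(carries_user_data("UMTS", "MSC-server"))  # False
```

The point of the split is visible in the table: only the MGW touches user data in UMTS.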

Problems and issues

Some countries, including the United States, have allocated spectrum differently from
the ITU recommendations, so that the standard bands most commonly used for UMTS (UMTS-
2100) have not been available. In those countries, alternative bands are used, preventing the
interoperability of existing UMTS-2100 equipment, and requiring the design and manufacture of
different equipment for the use in these markets. As is the case with GSM900 today, standard
UMTS 2100 MHz equipment will not work in those markets. However, it appears as though
UMTS is not suffering as much from handset band compatibility issues as GSM did, as many
UMTS handsets are multi-band in both UMTS and GSM modes. Penta-band (850, 900, 1700 /
2100, and 1900 MHz bands), quad-band GSM (850, 900, 1800, and 1900 MHz bands) and tri-
band UMTS (850, 1900, and 2100 MHz bands) handsets are becoming more commonplace.

In its early days, UMTS had problems in many countries: Overweight handsets with poor battery
life were first to arrive on a market highly sensitive to weight and form factor. The Motorola
A830, a debut handset on Hutchison's 3 network, weighed more than 200 grams and even
featured a detachable camera to reduce handset weight. Another significant issue involved call
reliability, related to problems with handover from UMTS to GSM. Customers found their
connections being dropped as handovers were possible only in one direction (UMTS → GSM),
with the handset only changing back to UMTS after hanging up. In most networks around the
world this is no longer an issue.

Compared to GSM, UMTS networks initially required a higher base station density. For fully-
fledged UMTS incorporating video on demand features, one base station needed to be set up
every 1-1.5 km (0.62-0.93 mi). This was the case when only the 2100 MHz band was being
used, however with the growing use of lower-frequency bands (such as 850 and 900 MHz) this
is no longer so. This has led to increasing rollout of the lower-band networks by operators since
2006.

Even with current technologies and low-band UMTS, telephony and data over UMTS
requires more power than on comparable GSM networks. Apple Inc. cited[17] UMTS
power consumption as the reason that the first generation iPhone only
supported EDGE. Their release of the iPhone 3G quotes talk time on UMTS as half
that available when the handset is set to use GSM. Other manufacturers indicate
different battery lifetime for UMTS mode compared to GSM mode as well. As battery
and network technology improve, this issue is diminishing.

Security issues

As early as 2008 it was known that carrier networks can be used to surreptitiously
gather user location information.[18] In August 2014, the Washington Post reported on
widespread marketing of surveillance systems using Signalling System No. 7 (SS7)
protocols to locate callers anywhere in the world. [18]

In December 2014, news broke that SS7's very own functions can be repurposed for
surveillance, because of its lax security, in order to listen to calls in real time or to
record encrypted calls and texts for later decryption, or to defraud users and cellular
carriers.[19]

The German operators Telekom and Vodafone declared the same day that they had fixed gaps in
their networks, but that the problem is global and can only be fixed with a
telecommunication system-wide solution.[20]

Releases

The evolution of UMTS progresses according to planned releases. Each release is designed to
introduce new features and improve upon existing ones.

Release '99

Bearer services

64 kbit/s circuit switched

384 kbit/s packet switched

Location services

Call services: compatible with Global System for Mobile Communications (GSM), based on the
Universal Subscriber Identity Module (USIM)

Voice quality features: Tandem Free Operation (TFO)

Release 4

EDGE radio

Multimedia messaging

MExE (Mobile Execution Environment)

Improved location services

IP Multimedia Services (IMS)

TD-SCDMA (UTRA-TDD 1.28 Mcps low chip rate)


Release 5

IP Multimedia Subsystem (IMS)

IPv6, IP transport in UTRAN

Improvements in GERAN, MExE, etc.

HSDPA

Release 6

WLAN integration

Multimedia broadcast and multicast

Improvements in IMS

HSUPA

Fractional DPCH

Release 7

Enhanced L2

64 QAM, MIMO

Voice over HSPA

CPC (Continuous Packet Connectivity)

FRLC (Flexible RLC)

Release 8

Dual-Cell HSDPA

Release 9

Dual-Cell HSUPA
Diversity scheme

In telecommunications, a diversity scheme refers to a method for improving the reliability of a
message signal by using two or more communication channels with different characteristics.
Diversity is mainly used in radio communication and is a common technique for combating
fading and co-channel interference and
avoiding error bursts. It is based on the fact that individual channels experience
different levels of fading and interference. Multiple versions of the same signal may
be transmitted and/or received and combined in the receiver. Alternatively, a
redundant forward error correction code may be added and different parts of the
message transmitted over different channels. Diversity techniques may exploit
the multipath propagation, resulting in a diversity gain, often measured in decibels.

The following classes of diversity schemes can be identified:

Time diversity: Multiple versions of the same signal are transmitted at different time instants.
Alternatively, a
redundant forward error correction code is added and the
message is spread in time by means of bit-interleaving before
it is transmitted. Thus, error bursts are avoided, which
simplifies the error correction.

Frequency diversity: The signal is transmitted using several frequency channels or spread over
a wide spectrum that is affected by frequency-selective fading. Mid-to-late 20th-century
microwave radio relay lines often used several regular wideband radio channels, and one
protection channel for automatic use by any faded channel. Later examples include:

OFDM modulation in combination with subcarrier interleaving and forward error correction

Spread spectrum, for example frequency hopping or DS-CDMA.

Space diversity: The signal is transmitted over several different propagation paths. In the case
of wired transmission,
this can be achieved by transmitting via multiple wires. In the
case of wireless transmission, it can be achieved by antenna
diversity using multiple transmitter antennas (transmit
diversity) and/or multiple receiving antennas (reception
diversity). In the latter case, a diversity combining technique is
applied before further signal processing takes place. If the
antennas are far apart, for example at different cellular base
station sites or WLAN access points, this is
called macrodiversity or site diversity. If the antennas are at a
distance in the order of one wavelength, this is
called microdiversity. A special case is phased antenna arrays,
which also can be used for beamforming, MIMO channels
and space-time coding (STC).

Polarization diversity: Multiple versions of a signal are transmitted and received via antennas
with different
polarization. A diversity combining technique is applied on the
receiver side.

Multiuser diversity: Obtained by opportunistic user scheduling at either the transmitter or the
receiver. Opportunistic user scheduling is as follows: at any
given time, the transmitter selects the best user among
candidate receivers according to the qualities of each channel
between the transmitter and each receiver. A receiver must
feed back the channel quality information to the transmitter
using limited levels of resolution, in order for the transmitter to
implement multiuser diversity.

Cooperative diversity: Achieves antenna diversity gain by using the cooperation of distributed
antennas belonging to
each node.
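The benefit of combining independently faded branches can be seen in a small Monte-Carlo simulation. This is a generic selection-combining sketch with arbitrary parameters (two receive antennas, Rayleigh fading), not tied to any particular system above:

```python
# Monte-Carlo sketch of diversity gain: two receive antennas with
# independent Rayleigh fading, selection combining (pick the stronger
# branch). Parameters are arbitrary and purely illustrative.
import random

random.seed(1)

def rayleigh_power() -> float:
    # Instantaneous power of a unit-mean-power Rayleigh-faded branch
    # is exponentially distributed.
    return random.expovariate(1.0)

trials = 100_000
threshold = 0.1  # declare an outage if received power falls below this

outage_single = sum(rayleigh_power() < threshold for _ in range(trials))
outage_diversity = sum(
    max(rayleigh_power(), rayleigh_power()) < threshold for _ in range(trials)
)

# With two independent branches the outage probability is roughly squared:
print(outage_single / trials)      # ~0.095, i.e. 1 - exp(-0.1)
print(outage_diversity / trials)   # ~0.009
```

The order-of-magnitude drop in outage probability is the diversity gain the section describes; the same principle underlies the time, frequency and space schemes listed above.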

UMTS channels

The UMTS channels are communication channels used by third generation (3G) wireless
Universal Mobile Telecommunications System (UMTS) networks.[1][2][3]

UMTS channels can be divided into three levels:

Physical

Transport

Logical

UMTS Channels

Level | Abrev. | Full Name | Description | UL/DL | Ref
3_LOGICAL | BCCH | broadcast control channel | For broadcasting system control information. | DL | 5.3.1.1.1[4]
3_LOGICAL | CCCH | common control channel | Supports common procedures required to establish a dedicated link with the network. | UL/DL | 5.3.1.1.1[4]
3_LOGICAL | CTCH | common traffic channel | A point-to-multipoint unidirectional channel for transfer of dedicated user information for all or a group of specified UEs. | DL | 5.3.1.1.1[4]
3_LOGICAL | DCCH | dedicated control channel | A point-to-point dedicated channel for transmitting dedicated control information between a UE and the network. | UL/DL | 5.3.1.1.1[4]
3_LOGICAL | DTCH | dedicated traffic channel | A point-to-point channel dedicated to one UE for the transfer of user information. | UL/DL | 5.3.1.1.1[4]
3_LOGICAL | PCCH | paging control channel | Transfers paging information. Used when the network does not know the location cell of the UE or the UE is in sleep mode. | DL | 5.3.1.1.1[4]
2_TRANSPORT | BCH | broadcast channel | Used to broadcast cell and system information. | DL | 4.1.2.1[5]
2_TRANSPORT | CPCH | common packet channel | Used for transmission of bursty data traffic. Shared by the UEs in a cell. Employs fast power control. NOTE - removed after R5. | UL | 4.1.2.5[6]
2_TRANSPORT | DCH | dedicated channel | Allocated to an individual user and typically used to support a speech channel. | UL/DL | 4.1.1.1[5]
2_TRANSPORT | DSCH | downlink shared channel | Carries dedicated user data or control information. May be shared by several UEs. Associated with a downlink DCH. NOTE - removed after R5. | DL | 4.1.2.6[6]
2_TRANSPORT | E-DCH | enhanced dedicated channel | HSUPA Enhanced (high-speed) dedicated uplink transport channel. | UL | 4.1.1.2[5]
2_TRANSPORT | FACH | forward access channel | Carries control information to UEs in a cell (may also be used to transmit packet data). Makes up the RACH/FACH pair. | DL | 4.1.2.2[5]
2_TRANSPORT | HS-DSCH | high speed downlink shared channel | The HSDPA transport channel. Shared by several UEs. | DL | 4.1.2.7[5]
2_TRANSPORT | PCH | paging channel | Carries paging procedure data (for sleep mode) used by the network to establish a connection to the UE. Always transmitted over the entire cell. | DL | 4.1.2.3[5]
2_TRANSPORT | RACH | random access channel | Used to gain access to the system when first attaching to it (can also carry packet data). Makes up the RACH/FACH pair. Always received from the entire cell. Entails a collision risk. Transmitted using open loop power control. | UL | 4.1.2.4[5]
1_PHYSICAL | AICH | acquisition indicator channel | Carries acquisition indicators. AI corresponds to a signature on the PRACH. Fixed rate (sf=256). | DL | 5.3.3.7[5]
1_PHYSICAL | CD/CA-ICH | collision detection/channel assignment indicator channel | Carries CD Indicator (CDI) or CD Indicator/CA Indicator (CDI/CAI). Fixed rate (sf=256). NOTE - removed after R5. | DL | 5.3.3.9[6]
1_PHYSICAL | CPICH | common pilot channel | Provides the default phase reference for demodulation of the other downlink channels and enables channel estimation. Uses a pre-defined bit sequence. Fixed rate (sf=256). | DL | 5.3.3.11[5]
1_PHYSICAL | CSICH | CPCH status indication channel | Carries CPCH status information. Fixed rate (sf=256). NOTE - removed after R5. | DL | 5.3.3.11[6]
1_PHYSICAL | DPCCH | dedicated physical control channel | Carries physical layer control information including known pilot bits to support channel estimation, transmit power-control (TPC) commands, feedback information (FBI) and, optionally, a transport-format combination indicator (TFCI). | UL/DL | 5.3.3.11[5]
1_PHYSICAL | DPDCH | dedicated physical data channel | Carries DCH data. Used with DPCCH. There may be multiple DPDCHs on each radio link. | UL/DL | 5.3.3.11[5]
1_PHYSICAL | E-AGCH | enhanced absolute grant channel | [HSUPA] Establishes the absolute power level for UE transmission on E-DCH. Fixed rate (sf=256). | DL | 5.3.3.14[5]
1_PHYSICAL | E-DPCCH | enhanced dedicated physical control channel | [HSUPA] Control information associated with E-DCH. Transmitted along with E-DPDCH. | UL/DL | 5.2.1.3[5]
1_PHYSICAL | E-DPDCH | enhanced dedicated physical data channel | [HSUPA] Carries the E-DCH transport channel. Transmitted along with E-DPCCH. | UL | 5.2.1.3[5]
1_PHYSICAL | E-HICH | enhanced hybrid indicator channel | [HSUPA] Dedicated channel which carries the E-DCH hybrid ARQ acknowledgement indicator. Fixed rate (sf=128). | DL | 5.3.2.5[5]
1_PHYSICAL | E-RGCH | enhanced relative grant channel | [HSUPA] Carries the uplink E-DCH relative grants. Fixed rate (sf=128). | DL | 5.3.2.4[5]
1_PHYSICAL | HS-DPCCH | high speed dedicated physical control channel | [HSDPA] Carries feedback signalling related to HS-DSCH, including HARQ-ACK and CQI. | UL | 5.2.1.2[5]
1_PHYSICAL | HS-PDSCH | high speed physical downlink shared channel | [HSDPA] Carries the actual user data for HS-DSCH. | DL | 5.3.3.13[5]
1_PHYSICAL | HS-SCCH | high speed shared control channel | [HSDPA] Contains downlink signalling information related to HS-DSCH. Fixed rate (sf=128). | DL | 5.3.3.12[5]
1_PHYSICAL | P-CCPCH | primary common control physical channel | Carries the BCH (Broadcast Channel). There is one P-CCPCH within a cell, used to carry synchronization and broadcast information for all users. | DL | 5.3.3.3[5]
1_PHYSICAL | PCPCH | physical common packet channel | Carries the CPCH (Common Packet Channel). NOTE - removed after R5. | UL | 5.2.2.2[6]
1_PHYSICAL | PDSCH | physical downlink shared channel | Carries the DSCH. NOTE - removed after R5. | DL | 5.3.3.6[6]
1_PHYSICAL | PICH | page indicator channel | Carries the PCH. Fixed rate (sf=256). | DL | 5.3.3.10[5]
1_PHYSICAL | PRACH | physical random access channel | Carries the RACH (Random Access Channel). | UL | 5.2.2.1[5]
1_PHYSICAL | S-CCPCH | secondary common control physical channel | Carries the FACH (Forward Access Channel) and PCH (Paging Channel). | DL | 5.3.3.4[5]
1_PHYSICAL | SCH | synchronization channel | Used for cell search and conveying synchronization information. Consists of two sub-channels: the Primary and Secondary SCH. | DL | 5.3.3.5[5]
3G UMTS Data Channels: physical; logical;
transport
- in order to carry the required data across the radio access network, data is carried in various
channels. These are split into three groups: physical channels; logical channels; transport
channels.

There are many 3G UMTS channels that are used within the UMTS system. The data carried by the
UMTS / WCDMA transmissions is organised into frames, slots and channels.

In this way all the payload data as well as the control and status data can be carried in an efficient
manner.

3G UMTS channel structures

3G UMTS uses CDMA techniques (as WCDMA) as its multiple access technology, but it additionally
uses time division techniques with a slot and frame structure to provide the full channel structure.

A channel is divided into 10 ms frames, each of which has fifteen time slots of 666.7
microseconds each. On the downlink the time is further subdivided so that the time slots contain
fields that contain either user data or control messages.

On the uplink dual channel modulation is used so that both data and control are transmitted
simultaneously. Here the control elements contain a pilot signal, Transport Format Combination
Identifier (TFCI), FeedBack Information (FBI) and Transmission Power Control (TPC).
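The frame and slot arithmetic above can be checked with a short sketch (the 3.84 Mcps chip rate is the standard UMTS FDD value; the variable names are illustrative):

```python
# WCDMA frame/slot timing: 10 ms radio frames, 15 slots per frame.
CHIP_RATE = 3.84e6        # chips per second (UMTS FDD)
FRAME_MS = 10.0           # radio frame duration in milliseconds
SLOTS_PER_FRAME = 15

chips_per_frame = CHIP_RATE * FRAME_MS / 1000.0      # 38400 chips
chips_per_slot = chips_per_frame / SLOTS_PER_FRAME   # 2560 chips
slot_us = FRAME_MS * 1000.0 / SLOTS_PER_FRAME        # ~666.7 microseconds

print(chips_per_frame, chips_per_slot, round(slot_us, 1))
```

The ~666.7 microsecond result is where the slot length quoted above comes from.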

The channels carried are categorised into three:

Logical Channels: The logical channels define the way in which the data will be
transferred

Transport Channels: The 3G transport channels, together with the logical channels,
define the way in which the data is transferred

Physical channels: The physical channels carry the payload data and govern the physical
characteristics of the signal.

The channels are organised such that the logical channels are related to what is transported,
whereas the physical layer transport channels deal with how, and with what characteristics. The
MAC layer provides data transfer services on logical channels. A set of logical channel types is
defined for different kinds of data transfer services.
3G UMTS Logical Channels:

The 3G logical channels include:

Broadcast Control Channel (BCCH) (downlink). This channel broadcasts information to


UEs relevant to the cell, such as radio channels of neighbouring cells, etc.

Paging Control Channel (PCCH) (downlink). This channel is associated with the PICH
and is used for paging messages and notification information.

Dedicated Control Channel (DCCH) (up and downlinks) This channel is used to carry
dedicated control information in both directions.

Common Control Channel (CCCH) (up and downlinks). This bi-directional channel is used
to transfer control information.

Shared Channel Control Channel (SHCCH) (bi-directional). This channel is bi-directional


and only found in the TDD form of WCDMA / UMTS, where it is used to transport shared
channel control information.

Dedicated Traffic Channel (DTCH) (up and downlinks). This is a bidirectional channel
used to carry user data or traffic.

Common Traffic Channel (CTCH) (downlink) A unidirectional channel used to transfer


dedicated user information to a group of UEs.

3G UMTS Transport Channels:

The 3G UMTS transport channels include:

Dedicated Transport Channel (DCH) (up and downlink). This is used to transfer data to a
particular UE. Each UE has its own DCH in each direction.

Broadcast Channel (BCH) (downlink). This channel broadcasts information to the UEs in
the cell to enable them to identify the network and the cell.

Forward Access Channel (FACH) (downlink). This channel carries data or information
to the UEs that are registered on the system. There may be more than one FACH per cell as
they may carry packet data.
Paging Channel (PCH) (downlink). This channel carries messages that alert the UE to
incoming calls, SMS messages, data sessions or required maintenance such as re-
registration.

Random Access Channel (RACH) (uplink). This channel carries requests for service from
UEs trying to access the system

Uplink Common Packet Channel (CPCH) (uplink). This channel provides additional
capability beyond that of the RACH and for fast power control.

Downlink Shared Channel (DSCH) (downlink).This channel can be shared by several


users and is used for data that is "bursty" in nature such as that obtained from web browsing
etc.

3G UMTS Physical Channels:


The 3G UMTS physical channels include:

Primary Common Control Physical Channel (PCCPCH) (downlink). This channel


continuously broadcasts system identification and access control information.

Secondary Common Control Physical Channel (SCCPCH) (downlink) This channel


carries the Forward Access Channel (FACH) providing control information, and the Paging
Channel (PCH) with messages for UEs that are registered on the network.

Physical Random Access Channel (PRACH) (uplink). This channel enables the UE to
transmit random access bursts in an attempt to access a network.

Dedicated Physical Data Channel (DPDCH) (up and downlink). This channel is used to
transfer user data.

Dedicated Physical Control Channel (DPCCH) (up and downlink). This channel carries
control information to and from the UE. In both directions the channel carries pilot bits and
the Transport Format Combination Identifier (TFCI). The downlink channel also includes the
Transmit Power Control and FeedBack Information (FBI) bits.

Physical Downlink Shared Channel (PDSCH) (downlink). This channel shares control
information to UEs within the coverage area of the node B.

Physical Common Packet Channel (PCPCH) This channel is specifically intended to


carry packet data. In operation the UE monitors the system to check if it is busy, and if not it
then transmits a brief access burst. This is retransmitted if no acknowledgement is gained
with a slight increase in power each time. Once the node B acknowledges the request, the
data is transmitted on the channel.

Synchronisation Channel (SCH) The synchronisation channel is used in allowing UEs to


synchronise with the network.

Common Pilot Channel (CPICH) This channel is transmitted by every node B so that the
UEs are able to estimate the timing for signal demodulation. Additionally it can be used as a
beacon for the UE to determine the best cell with which to communicate.

Acquisition Indicator Channel (AICH) The AICH is used to inform a UE about the Data
Channel (DCH) it can use to communicate with the node B. This channel assignment occurs
as a result of a successful random access service request from the UE.

Paging Indication Channel (PICH) This channel provides the information to the UE to be
able to operate its sleep mode to conserve its battery when listening on the Paging Channel
(PCH). As the UE needs to know when to monitor the PCH, data is provided on the PICH to
assign a UE a paging repetition ratio to enable it to determine how often it needs to 'wake up'
and listen to the PCH.

CPCH Status Indication Channel (CSICH) This channel, which only appears in the
downlink carries the status of the CPCH and may also be used to carry some intermittent, or
"bursty" data. It works in a similar fashion to PICH.

Collision Detection/Channel Assignment Indication Channel (CD/CA-ICH) This


channel, present in the downlink is used to indicate whether the channel assignment is
active or inactive to the UE.

By using the logical, physical and transport channels it is possible to carry the data for the control
and payload in a structured manner and provide efficient, effective communications. The 3G UMTS
channels are thus an essential element of the overall system.

SIR, Ec/Io, RTWP, RSCP, and Eb/No in WCDMA
What is SIR?

SIR is the Signal-to-Interference Ratio: the ratio of the energy in dedicated physical control channel

bits to the power density of interference and noise after despreading.

What is RSCP?

RSCP stands for Received Signal Code Power: the energy per chip in CPICH averaged over 512 chips.

What is Eb/No?

By definition Eb/No is energy per bit over noise density, i.e. the ratio of the energy per information bit

to the power spectral density (of interference and noise) after despreading.

Eb/No = Processing Gain + SIR

For example, if Eb/No is 5dB and processing gain is 25dB then the SIR should be -20dB or better.
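That relationship is easy to verify numerically. A minimal sketch, assuming the 3.84 Mcps chip rate and treating processing gain as the chip-rate-to-bit-rate ratio in dB (the function names are illustrative):

```python
import math

CHIP_RATE = 3.84e6  # UMTS chip rate, chips per second

def processing_gain_db(bit_rate_bps):
    """Processing gain = chip rate / information bit rate, in dB."""
    return 10.0 * math.log10(CHIP_RATE / bit_rate_bps)

def required_sir_db(ebno_db, bit_rate_bps):
    """Rearranging Eb/No = PG + SIR gives SIR = Eb/No - PG."""
    return ebno_db - processing_gain_db(bit_rate_bps)

pg = processing_gain_db(12200)      # 12.2 kbps voice: ~25 dB
sir = required_sir_db(5.0, 12200)   # ~ -20 dB, matching the example
print(round(pg, 1), round(sir, 1))
```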

What are the Eb/No targets in your design?

The Eb/No targets are dependent on the service:

on the uplink, typically CS is 5 to 6 dB and PS is 3 to 4 dB (PS is about 2 dB lower).

on the downlink, typically CS is 6 to 7 dB and PS is 5 to 6 dB (PS is about 1 dB lower).
Why is Eb/No requirement lower for PS than for CS?

PS has a better error correction capability and can utilize retransmission, therefore it can afford a

lower Eb/No. CS is real-time and cannot tolerate delay so it needs a higher Eb/No to maintain a

stronger RF link.

What is Ec/Io?

Ec/Io is the ratio of the energy per chip in CPICH to the total received power density (including CPICH

itself).

Sometimes we say Ec/Io and sometimes we say Ec/No, are they different?

Io = own cell interference + surrounding cell interference + noise density

No = surrounding cell interference + noise density

That is, Io is the total received power density including CPICH of its own cell, No is the total received

power density excluding CPICH of its own cell. Technically Ec/Io should be the correct measurement

but, due to equipment capability, Ec/No is actually measured. In UMTS, Ec/No and Ec/Io are often

used interchangeably.
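In the log domain the ratio becomes a subtraction, which is how Ec/No is usually derived from drive-test measurements: CPICH RSCP (dBm) minus the total received wideband power, RSSI (dBm). A minimal sketch (the values shown are illustrative):

```python
def ec_no_db(rscp_dbm, rssi_dbm):
    """Ec/No in dB: energy per chip (CPICH RSCP) over total received
    power density (RSSI); division becomes subtraction in dB."""
    return rscp_dbm - rssi_dbm

# CPICH RSCP of -85 dBm with a total received power of -75 dBm:
print(ec_no_db(-85, -75))  # -> -10 (dB)
```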

What is RTWP? What is the significance of it?

Received Total Wide-band Power

It gives the Total Uplink Power (Interference) level received at NodeB

WCDMA - Idle Mode Parameter



qQualMin:

Minimum required quality level in the cell measured in the UE.

qRxLevMin:

Parameter that indicates the min. required signal strength in the cell

qualMeasQuantity:

Used for decision as to whether the 3G ranking for cell selection and reselection is based on Ec/No or

RSCP. Default is Ec/No.

qHyst1:

Hysteresis values used for serving cell, when ranking is based on CPICH RSCP

qHyst2:

Hysteresis values used for serving cell, when ranking is based on CPICH Ec/No

qOffset1sn:

Signal strength offset b/w source and target cell for cell ranking based on CPICH RSCP.

qOffset2sn:

Signal offset between serving cell and neighbor cell, based on CPICH Ec/No.

sIntraSearch:

Decision on when intra-freq. measurements should be performed. The following criterion is used:


sIntraSearch ≥ qQualmeas − qQualMin (where qQualmeas is the value measured by the UE)

sInterSearch:

Parameter is used to make decision to start inter-freq. measurements.

sInterSearch ≥ qQualmeas − qQualMin (where qQualmeas is the value measured by the UE)

sRatSearch:

Decision on when GSM measurement should be performed in relation to qQualMin.

sRatSearch ≥ qQualMeas − qQualMin (where qQualMeas is the value measured by the UE)

sHcsRatSearch:

Decision on when GSM measurement should be performed in relation to qRxLevMin.

sHcsRatSearch ≥ qRxLevMeas − qRxLevMin (where qRxLevMeas is the value measured by the UE)
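These search criteria all follow the same pattern: a measurement type is started when the measured margin over the minimum requirement falls to or below the corresponding threshold. A sketch of that decision logic (the parameter names follow the list above; the function itself is an illustrative simplification):

```python
def measurements_to_start(q_qualmeas, q_qualmin, q_rxlevmeas, q_rxlevmin,
                          s_intra, s_inter, s_rat, s_hcs_rat):
    """Return the measurement types the UE should start. A search begins
    when the threshold >= measured value minus the minimum requirement."""
    squal = q_qualmeas - q_qualmin      # quality margin (Ec/No based)
    srxlev = q_rxlevmeas - q_rxlevmin   # signal level margin (RSCP based)
    started = []
    if s_intra >= squal:
        started.append("intra-frequency")
    if s_inter >= squal:
        started.append("inter-frequency")
    if s_rat >= squal or s_hcs_rat >= srxlev:
        started.append("GSM (inter-RAT)")
    return started

# Quality margin of 8 dB: only the inter-frequency threshold here is
# generous enough to trigger its measurements.
print(measurements_to_start(-10, -18, -90, -115, 4, 10, 2, 0))
```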

Cell Search procedure in WCDMA

Cell Search procedure:


During the cell search, the mobile station searches for a cell and determines the downlink scrambling

code and common channel frame synchronization of that cell. The cell search is typically carried out in

three steps: slot synchronization; frame synchronization and code-group identification; and scrambling-

code identification.

Step 1: Slot synchronization.

During the first step of the cell search procedure, the mobile station uses the SCH's primary

synchronization code to acquire slot synchronization to a cell. This can be done with a single matched

filter matched to the primary synchronization code that is common to all cells.

Step 2: Frame synchronization and code-group identification.

During the second step of the cell search procedure, the mobile station uses the SCH's secondary

synchronization code to find frame synchronization and identify the code group of the cell found in the

first step. This is done by correlating the received signal with all possible secondary synchronization

code sequences and identifying the maximum correlation value. Because the cyclic shifts of the

sequences are unique, the code group and the frame synchronization are determined.

Step 3: Scrambling-code identification.

During the third and last step of the cell search procedure, the mobile station determines the exact

primary scrambling code used by the found cell. The primary scrambling code is typically identified

through symbol-by-symbol correlation over the CPICH with all codes within the code group identified in

the second step. After the primary scrambling code has been identified, the primary CCPCH can be

detected, and the system- and cell-specific BCH information can be read.
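All three steps reduce to repeated correlation against known code sets. A toy sketch with plain lists in place of real complex baseband samples (the sequences and names are illustrative, not the actual 3GPP codes, and the time search of step 1 is omitted):

```python
def correlate(a, b):
    """Dot-product correlation of two equal-length sequences."""
    return sum(x * y for x, y in zip(a, b))

def best_match(signal, codes):
    """Index of the code with the largest |correlation| against signal."""
    return max(range(len(codes)), key=lambda i: abs(correlate(codes[i], signal)))

def cell_search(rx_slot, psc, sscs, rx_cpich, scrambling_codes):
    # Step 1: slot synchronization - matched filter on the primary
    # synchronization code, which is common to all cells.
    slot_found = abs(correlate(psc, rx_slot)) > 0.5 * correlate(psc, psc)
    # Step 2: frame sync / code-group identification - strongest SSC.
    code_group = best_match(rx_slot, sscs)
    # Step 3: scrambling-code identification - correlate the CPICH
    # against the scrambling codes of the identified group.
    primary_code = best_match(rx_cpich, scrambling_codes)
    return slot_found, code_group, primary_code

psc = [1.0] * 8
alt = [1.0, -1.0] * 4
print(cell_search(psc, psc, [psc, alt], alt, [psc, alt]))  # (True, 0, 1)
```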
Pilot Pollution

Simply speaking, when the number of strong cells exceeds the active set size, there is pilot
pollution in the area. Typically the active set size is 3, so if there are more than 3 strong cells then
there is pilot pollution. Definition of strong cell: pilots within the handover window size from the
strongest cell. Typical handover window size is between 4 to 6dB. For example, if there are more
than 2 cells (besides the strongest cell) within 4dB of the strongest cell then there is pilot pollution.
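That rule of thumb translates directly into code. A minimal detector, assuming a list of pilot Ec/Io values in dB (the defaults mirror the typical values above):

```python
def pilot_pollution(pilots_db, active_set_size=3, window_db=4.0):
    """True if the number of 'strong' pilots (those within window_db of
    the strongest pilot) exceeds the active set size."""
    strongest = max(pilots_db)
    strong = [p for p in pilots_db if strongest - p <= window_db]
    return len(strong) > active_set_size

print(pilot_pollution([-8, -9, -10, -11, -20]))  # four strong pilots -> True
print(pilot_pollution([-8, -9, -20]))            # two strong pilots -> False
```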

What are the possible causes for a Drop Call on a UMTS network?

Poor Coverage (DL / UL)

Pilot Pollution / Pilot Spillover

Missing Neighbor

SC Collisions

Delayed Handovers

No resource availability (Congestion) for Hand in


Loss of Synchronization

Fast Fading

Delayed IRAT Triggers

Hardware Issues

External Interference

What are the possible causes for an Access Failure in UMTS?

Missing Neighbors

Poor Coverage

Pilot Pollution / Spillover

Poor Cell Reselection

Core Network Issues

Non availability of resources. Admission Control denies

Hardware Issues
Improper RACH Parameters

External Interference

WCDMA Handover Parameter



maxActiveSet: Maximum number of cells allowed in the Active Set.

IndividualOffset:

Offset value which can be assigned to each cell. It is added to the measurement quantity before the UE
evaluates whether or not an event has occurred. It can be either a positive or a negative value.

measQuantity1:

Defines the measurement quantity for intra-frequency reporting evaluation. Default is Ec/No.

hsQualityEstimate:

Indicates whether Ec/No or RSCP should be used for indicating "best cell" for HS-DSCH Cell Change.
Default is RSCP.

reportingRange1a:

Relative threshold referred to the CPICH of the best cell in the Active Set used as evaluation criteria
for event 1a (a primary CPICH enters the reporting range).
reportingRange1b:

Relative threshold referred to CPICH of the best cell in the Active Set used as evaluation criteria for
event 1b (a primary CPICH leaves the reporting range).

reportingInterval1a:

Time between periodic reports at event-triggered periodic reporting for event 1a

timeToTrigger1a:

If event 1a condition is fulfilled during at least a time greater than or equal to timeToTrigger1a
milliseconds, then event 1a occurs.

timeToTrigger2dEcno:

If event 2d condition is fulfilled during at least a time greater than or equal to timeToTrigger2dEcno
milliseconds, then event 2d occurs
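The reporting range, individual offset and time-to-trigger mechanics of event 1a can be sketched as follows; the sampling period and the evaluation loop are simplified assumptions, not the full 3GPP state machine:

```python
def event_1a_fires(best_db, cand_db, reporting_range_db,
                   individual_offset_db, time_to_trigger_ms,
                   sample_period_ms=200):
    """best_db / cand_db: per-sample CPICH measurements of the best
    Active Set cell and a candidate cell. Event 1a occurs once the
    (offset-adjusted) candidate stays inside the reporting range of the
    best cell for at least time_to_trigger_ms."""
    held_ms = 0
    for best, cand in zip(best_db, cand_db):
        inside = (cand + individual_offset_db) >= (best - reporting_range_db)
        held_ms = held_ms + sample_period_ms if inside else 0
        if held_ms >= time_to_trigger_ms:
            return True
    return False

# Candidate 3 dB below the best cell, 4 dB reporting range, 320 ms TTT:
print(event_1a_fires([-6] * 5, [-9] * 5, 4.0, 0.0, 320))  # -> True
```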

WCDMA - Power Control



Since all users share the same frequency at the same time, power control is used to control the level

of the transmitting power in order to minimize interference, improve the quality of the connection,

reduce near-far effects and increase the capacity of the system.

There are three types of power control:

Open loop power control


Inner loop Closed loop power control

Outer loop closed loop power control

Open loop power control:

The UE estimates the downlink path loss between the base

station and the UE by measuring the received signal strength of the UTRA carrier at the mobile, using the

system information message on the P-CCPCH.

Outer loop closed loop power control

It compares the received BLER (block error rate) with the target BLER and adjusts the SIR target accordingly.

Inner loop Closed loop power control:

It compares the received SIR (signal-to-interference ratio) with the target SIR.
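The two closed loops can be sketched as a pair of small update rules. The 0.5 dB step and the command convention are illustrative assumptions; in UMTS the inner loop actually runs once per slot, i.e. at 1500 Hz:

```python
def outer_loop_update(sir_target_db, measured_bler, target_bler,
                      step_db=0.5):
    """Outer loop: raise the SIR target when the received BLER is worse
    than the target BLER, lower it when better."""
    if measured_bler > target_bler:
        return sir_target_db + step_db
    return sir_target_db - step_db

def inner_loop_tpc(measured_sir_db, sir_target_db):
    """Inner (fast) loop: compare the measured SIR with the SIR target
    and issue a TPC command (+1 = power up, -1 = power down)."""
    return +1 if measured_sir_db < sir_target_db else -1
```

In steady state the outer loop slowly moves the SIR target while the inner loop chases it every slot.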

Power control Parameter:

Primary CPICH Power: Power to be used for transmitting the PCPICH.

BCH Power: BCH power is the power to be used for transmitting on the BCH, relative to the primary

CPICH Power value.

Primary SCH Power:


Secondary SCH Power:

AICH Power: AICH power to be used for transmitting on AICH, relative to the primary CPICH Power

value.

The value range is set to cover both the RRC and NBAP specifications.

Rake receiver
From Wikipedia, the free encyclopedia

A rake receiver is a radio receiver designed to counter the effects of multipath fading.
It does this by using several "sub-receivers" called fingers, that is, several correlators
each assigned to a different multipath component. Each finger independently decodes a
single multipath component; at a later stage the contributions of all fingers are
combined in order to make the most use of the different transmission characteristics of
each transmission path. This could very well result in a higher signal-to-noise
ratio (or Eb/N0) in a multipath environment than in a "clean" environment.

The multipath channel through which a radio wave transmits can be viewed as
transmitting the original (line of sight) wave pulse through a number of multipath
components. Multipath components are delayed copies of the original transmitted
wave traveling through a different echo path, each with a different magnitude and
time-of-arrival at the receiver. Since each component contains the original
information, if the magnitude and time-of-arrival (phase) of each component is
computed at the receiver (through a process called channel estimation), then all the
components can be added coherently to improve the information reliability.

The rake receiver is so named because its function resembles that of a garden rake, each
finger collecting symbol energy similarly to how tines on a rake collect leaves.

Mathematical definition

A rake receiver utilizes multiple correlators to separately detect the M strongest multipath


components. Each correlator may be quantized using 1, 2, 3 or 4 bits.

The outputs of each correlator are weighted to provide a better estimate of the
transmitted signal than is provided by a single component. Demodulation and bit
decisions are then based on the weighted outputs of the M correlators.
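A toy illustration of the idea, assuming the path delays and gains are already known from channel estimation (real receivers estimate them and operate on complex samples):

```python
def rake_combine(rx, finger_delays, finger_gains, code):
    """Toy rake: each finger despreads the received sequence at its own
    delay, the result is weighted by its path gain, and the weighted
    finger outputs are summed (maximal-ratio combining)."""
    total = 0.0
    for delay, gain in zip(finger_delays, finger_gains):
        # Correlate the spreading code with the delayed copy.
        finger_out = sum(c * rx[delay + i] for i, c in enumerate(code))
        total += gain * finger_out
    return total

# Two-path toy channel: gains 1.0 and 0.5, delays 0 and 2 chips.
code = [1, -1, 1, 1]
rx = [0.0] * 8
for d, g in [(0, 1.0), (2, 0.5)]:
    for i, c in enumerate(code):
        rx[d + i] += g * c  # transmitted symbol +1 over each path

print(rake_combine(rx, [0, 2], [1.0, 0.5], code))  # -> 5.0
```

The combined output (5.0) exceeds what the strongest single finger recovers alone (4.0), which is the whole point of combining.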
History

Rake receivers must have either a general-purpose CPU or some other form of digital
signal processing hardware in them to process and correlate the intended signal. Rake
receivers only became common after 16-bit CPUs capable of signal processing
became widely available. The rake receiver was patented in the US in 1956, [1] but it
took until the 1970s to design practical implementations of the receiver.

Radio astronomers were the first substantial users of rake receivers in the late 1960s to
mid-1980s as this kind of receiver could scan large sky regions yet not create large
volumes of data beyond what most data recorders could handle at the
time. Astropulse, part of the SETI@home project, uses a variant of a rake receiver
as part of its sky searches, so this kind of receiver is still current for the needs of
radio astronomy.

Use

Rake receivers are common in a wide variety of CDMA and W-CDMA radio devices
such as mobile phones and wireless LAN equipment.

Rake receivers are also used in radio astronomy. The CSIRO Parkes radio
telescope and Jodrell Bank telescope have 1-bit filterbank recording formats that can
be processed in real time or prognostically by software based rake receivers.

RAKE RECEIVER
Have you ever heard of a "Rake Receiver"? Surely you have heard of a receiver, and you
probably have heard of a rake.

Can you imagine what a Rake Receiver might be?
Ok, if the analogy does not help much, let's go.

In a wireless communication system, the signal can reach the receiver via multiple distinct
pathways.

In each path, the signal can be blocked, reflected, diffracted and refracted. The signals from
these many routes reach the receiver faded. The Rake receiver is used to correct this effect,
selecting the correct / stronger signals, which is a great help in CDMA and WCDMA systems.

Okay, but what is the Rake Receiver, and how does it do this?
Definition

The Rake Receiver is nothing more than a radio whose goal is to try to minimize the effects
of the fading the signal suffers due to multipath as it travels. In fact, we can understand the
Rake Receiver as a set of sub-radios, each lagged slightly, allowing the individual components
of the multipath to be tuned properly.

Each of these components is decoded completely independently, and they are combined at the
end. It is as if we took the original signal and added other copies of the original
signal reaching the receiver with different amplitudes and arrival times. If the receiver
knows the amplitude and arrival time of each of these components, it is possible to estimate
the channel, allowing the addition of the components.

Each of these sub-radios of the Rake Receiver is called a finger. Each finger is responsible for
collecting the energy of a bit or symbol, hence the analogy with the rake that we use in
the garden, where each tine collects twigs and leaves!

To ease understanding, imagine two signal components arriving at the mobile
unit with a lag Δt between them.

Notice how each finger works:

The first with component g1 and time reference t;

The second with component g2, but with the time reference t − Δt.
The fingers are thus receivers that work independently, with the function of demodulating
the signal, i.e., receiving the signal and removing the RF components from the information.

The big idea behind the methodology of combining multiple copies of the transmitted signal
to obtain a better signal is that if we have multiple copies, probably at least one must be in
good condition, and we have a better chance of correct decoding!

Key Benefits

The main advantage of the Rake Receiver is that it improves the SNR (or Eb/No). Naturally,
this improvement is larger in environments with many multipaths than in
environments without obstruction.

In simplified form: we have a better signal than we would have without using Rake
Receiver! This is already a sufficient argument, isn't it?

Disadvantages and Limitations

The main disadvantage of the Rake Receiver is not necessarily technical, and is not as
problematic: it is primarily the cost of the receivers. When we
insert one more radio receiver, we need more space and also increase complexity.
Consequently, we increase costs.

The greater the number of multipath components supported by the receiver, the more
complex the algorithm is. As usual here, we will not derive the formulas involved,
but the complexity increases almost exponentially.
And in the real world, the amount of multipath components that arrive at the receiver is
quite large, there is not a 'limit'. Everything will depend on the environment.

The threshold number of fingers in a mobile unit is determined by each technology


standard, which for example in CDMA is 6, corresponding to the maximum number of
direct traffic channels that can be processed by the mobile unit at once (Active Set).

However, in cellular environments, most CDMA mobile units actually need only 3
demodulators (WCDMA uses 4). More than that would be a waste of resources, and an
additional cost to manufacture the phone.

Searcher

An important detail in the CDMA and WCDMA systems is the use of one finger of the Rake
Receiver as a 'Searcher'. It is so called because of its function of seeking pilot signals being
transmitted by any base station (BS) in the system. These pilot signals can be understood as
beacons used to alert the mobile to the presence of a BS.

Thus, in the UMTS UE (User Equipment), we have a simplified configuration of the Rake
Receiver.
Fingers on BS and UE

To conclude, the number of Rake fingers used in the BS and the UE is generally different.
That's because, as we saw, having more fingers increases the physical size of the receiver,
as well as its power requirements. This can be a problem for the UE but not for the BS,
since it is able to offer more space and power for new fingers. Cost is the only criterion
to be taken into account in the BS.

Anyway, the only critical issue is with the UE. But the current three/four fingers ensure excellent
gain, proven in practice (CDMA/WCDMA).

Conclusion

We saw today that the Rake receiver is used in CDMA and WCDMA as an efficient way of
receiving multipath signals, where several sub-receivers are able to reconstruct the signal
from components with different timing, amplitude and phase.
Equalization (communications)
From Wikipedia, the free encyclopedia

In telecommunication, equalization is the reversal of distortion incurred by a signal


transmitted through a channel. Equalizers are used to render the frequency response,
for instance of a telephone line, flat from end to end. When a channel has been
equalized the frequency domain attributes of the signal at the input are faithfully
reproduced at the output. Telephones, DSL lines and television cables use equalizers
to prepare data signals for transmission.

Equalizers are critical to the successful operation of electronic systems such as analog
broadcast television. In this application the actual waveform of the transmitted signal
must be preserved, not just its frequency content. Equalizing filters must cancel out
any group delay and phase delay between different frequency components.

Analog telecommunications

Audio lines

Early telephone systems used equalization to correct for the reduced level of high
frequencies in long cables, typically using Zobel networks. These kinds of equalizers
can also be used to produce a circuit with a wider bandwidth than the standard
telephone band of 300 Hz to 3.4 kHz. This was particularly useful for broadcasters
who needed "music" quality, not "telephone" quality on landlines carrying program
material. It is necessary to remove or cancel any loading coils in the line before
equalization can be successful. Equalization was also applied to correct the response
of the transducers; for example, a particular microphone might be more sensitive to
low frequency sounds than to high frequency sounds, so an equalizer would be used to
increase the volume of the higher frequencies (boost), and reduce the volume of the
low frequency sounds (cut).

Television lines

A similar approach to audio was taken with television landlines with two important
additional complications. The first of these is that the television signal is a wide
bandwidth covering many more octaves than an audio signal. A television equalizer
consequently typically requires more filter sections than an audio equalizer. To keep
this manageable, television equalizer sections were often combined into a single
network using ladder topology to form a Cauer equalizer.

The second issue is that phase equalization is essential for an analog television signal.
Without it dispersion causes the loss of integrity of the original waveshape and is seen
as smearing of what were originally sharp edges in the picture.

Analog equalizer types

Zobel network

Lattice phase equaliser

Bridged T delay equaliser

Digital telecommunications

Modern digital telephone systems have less trouble in the voice frequency range as
only the local line to the subscriber now remains in analog format, but DSL circuits
operating in the MHz range on those same wires may suffer severe attenuation
distortion, which is dealt with by automatic equalization or by abandoning the worst
frequencies. Picturephone circuits also had equalizers.

In digital communications, the equalizer's purpose is to reduce intersymbol
interference to allow recovery of the transmitted symbols. It may be a simple linear
filter or a complex algorithm.

Digital equalizer types

Linear Equalizer: processes the incoming signal with a linear filter

MMSE equalizer: designs the filter to minimize E[|e|²],
where e is the error signal, which is the filter output
minus the transmitted signal.[1]

Zero Forcing Equalizer: approximates the inverse of the
channel with a linear filter.
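As a sketch of the zero-forcing idea, the snippet below inverts a hypothetical 2-tap channel exactly (the channel taps and BPSK setup are assumptions for illustration, not from the text):

```python
import numpy as np

# Hypothetical minimum-phase channel h(z) = 1 + 0.5 z^-1 (an assumption for illustration)
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=200)            # BPSK symbols

# Channel introduces ISI: r[n] = s[n] + 0.5 * s[n-1]
received = symbols + 0.5 * np.concatenate(([0.0], symbols[:-1]))

# Zero-forcing equalizer is the exact inverse filter: x[n] = r[n] - 0.5 * x[n-1]
equalized = np.empty_like(received)
prev = 0.0
for n, r in enumerate(received):
    prev = r - 0.5 * prev
    equalized[n] = prev

decisions = np.sign(equalized)                          # symbol decisions after equalization
```

Because this channel is minimum-phase, its inverse is a stable IIR filter and the ISI is removed completely; for channels with zeros near the unit circle, zero forcing amplifies noise, which is why MMSE designs are often preferred.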

Decision Feedback Equalizer: augments a linear equalizer by
adding a filtered version of previous symbol estimates to the
original filter output.[2]

Blind Equalizer: estimates the transmitted signal without
knowledge of the channel statistics, using only knowledge of
the transmitted signal's statistics.

Adaptive Equalizer: is typically a linear equalizer or a DFE. It
updates the equalizer parameters (such as the filter
coefficients) as it processes the data. Typically, it uses the MSE
cost function; it assumes that it makes the correct symbol
decisions, and uses its estimate of the symbols to compute e,
which is defined above.
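A minimal sketch of an adaptive equalizer using the LMS update, trained on known symbols; the channel taps, filter length and step size below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.4])                  # hypothetical channel (assumption)
s = rng.choice([-1.0, 1.0], size=5000)    # known training symbols
r = np.convolve(s, h)[: len(s)]           # received signal with ISI

ntaps, mu = 5, 0.01                       # equalizer length and LMS step size
w = np.zeros(ntaps)                       # filter coefficients, updated as data arrives
x = np.zeros(ntaps)                       # delay line of recent received samples
errs = []
for n in range(len(s)):
    x = np.roll(x, 1)
    x[0] = r[n]
    y = w @ x                             # equalizer output
    e = s[n] - y                          # error against the known symbol
    w += mu * e * x                       # LMS update, driving down E[|e|^2]
    errs.append(e)
```

After a few hundred iterations the error settles near zero; in decision-directed mode the known symbols s[n] would be replaced by the equalizer's own decisions, as described above.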

Viterbi Equalizer: finds the maximum likelihood (ML) optimal
solution to the equalization problem. Its goal is to minimize the
probability of making an error over the entire sequence.

BCJR Equalizer: uses the BCJR algorithm (also called
the forward-backward algorithm) to find the maximum a
posteriori (MAP) solution. Its goal is to minimize the probability
that a given bit was incorrectly estimated.

Turbo Equalizer: applies turbo decoding while treating the
channel as a convolutional code.

All about rx level

Rx Level means Received Level: it is the signal level that the mobile receives. It can be
calculated from the formula below:

RxLev (dBm) = EIRP (dBm) - Path Loss (dB)

Free-Space Path Loss (dB) = 32.44 + 20 log10(d) + 20 log10(f), with d in km and f in
MHz (here a free-space, i.e. vacuum, environment is assumed)

EIRP (dBm) = Antenna Output Power (dBm) + Antenna Gain (dBi)

For example:

EIRP = 52 dBm, FSPL = 94.95 dB (d = 1.5 km, f = 890.2 MHz)

RxLev (dBm) = 52 - 94.95 = -42.95 dBm
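The calculation above can be reproduced with a short Python helper (a sketch; the 32.44 constant in the FSPL formula assumes d in km and f in MHz):

```python
import math

def fspl_db(d_km, f_mhz):
    """Free-space path loss in dB, with distance in km and frequency in MHz."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def rxlev_dbm(eirp_dbm, d_km, f_mhz):
    """Received level in dBm: EIRP minus free-space path loss."""
    return eirp_dbm - fspl_db(d_km, f_mhz)

print(round(fspl_db(1.5, 890.2), 2))        # 94.95 dB
print(round(rxlev_dbm(52, 1.5, 890.2), 2))  # -42.95 dBm
```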

Difference between Rx_Level_Full & Rx_Level_Sub: Rx_Level_Full is measured
when DTX is off; Rx_Level_Sub is used when DTX is on.
RX Lev Full: the mobile transmits a measurement report (one SACCH
multiframe) every 480 ms. This multiframe contains 104 TDMA frames: 4 TDMA
frames are used to decode the BSIC, and the remaining 100 TDMA frames are
averaged for the measurement of the serving cell and neighbouring cells. This
average over 100 TDMA frames is RX Lev Full.

RX Lev Sub: DTX means discontinuous transmission. During a mobile
conversation, either the transmitter or the receiver is idle roughly 40% of the
time. When DTX is on, the transmitter is switched off when there are no speech
pulses, so only a few TDMA frames are transmitted; the average over these
frames is called RX Lev Sub and gives a proper measurement of the Rx level.
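A quick arithmetic check of the figures above (the TDMA frame duration of 4.615 ms is the standard GSM value, not stated in the text):

```python
tdma_frame_ms = 4.615        # standard GSM TDMA frame duration (assumption from the GSM spec)
frames_per_report = 104      # TDMA frames per SACCH reporting period (from the text)
bsic_frames = 4              # frames used to decode neighbour BSICs (from the text)

period_ms = frames_per_report * tdma_frame_ms
measured_frames = frames_per_report - bsic_frames

print(round(period_ms))      # ~480 ms per measurement report
print(measured_frames)       # 100 frames averaged into RX Lev Full
```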
Why is Rx_Level negative?
1. Power measurements are expressed in decibels (dB) rather than watts because
of the wide range of power levels in RF signals.

2. Generally, the letters dB with one or more characters tacked on the end
mean dB relative to the value that the character represents.

3. For analysing a radio system, the dBm convention was found to be more
convenient than the watt convention (for calculations, graphics, comparisons,
etc.).

4. dB refers to a ratio, while dBm refers to a specific power level.

5. In our case the "m" in dBm means the value is referenced to one milliwatt of power.

6. dBm can have either a positive or a negative value: positive values indicate
more power than the 1 mW reference, and negative values indicate less.

7. In other words, a negative number means less than 1 milliwatt and a positive
number means more than 1 milliwatt.

8.
0 dBm = 1 mW
3 dBm ≈ 2 mW,
-3 dBm ≈ 1/2 mW, and so on.
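The dBm/milliwatt relationship in point 8 follows directly from the definition; a minimal sketch:

```python
import math

def dbm_to_mw(dbm):
    """Convert a dBm value to milliwatts (reference: 1 mW)."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    """Convert milliwatts to dBm."""
    return 10 * math.log10(mw)

print(dbm_to_mw(0))              # 1.0 -> exactly 1 mW
print(round(dbm_to_mw(3), 2))    # ~2 mW
print(round(dbm_to_mw(-3), 2))   # ~0.5 mW
```

The "3 dB doubles, -3 dB halves" rule is an approximation: 3 dBm is exactly 10^0.3 ≈ 1.995 mW.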

1. How to improve HOSR for a 2G system?

Please tell me methods other than neighbour tuning for improving HOSR for a
cell.

I want to know which parameter, when altered, produces a quick and efficient
improvement in HOSR.

Hi dear,

Look, there is a direct relation between DL quality and HOSR, so if you can
improve your DL quality, your HOSR will improve automatically.
To improve DL quality you can refer to:

How to improve DL RX Quality?

One more way to improve HOSR is to increase handover timer T3103 in the BSC.
Make sure it stays less than timer T10 by at least 2 seconds.
You can also change the threshold values of PbgtMargin, LevelMargin and QualMargin
to get some improvement.
Also check whether there are a lot of intra-cell HO failures; if so, check for
hardware issues or the intra-cell HO thresholds.

Hi dear,

You can do the following things to improve HOSR:

1. HOSR is directly related to DL quality, so improve the network's DL quality.
Please refer to: How to improve DL RX Quality?
2. You can also increase timer T3103 in the BSC. Make sure it stays less than
T10 by at least 2 seconds.
3. Also check whether there are a lot of intra-cell HO failures; if so, check for
hardware issues or intra-cell HO thresholds.
4. Try changing the PBGT, Quality and Level margins and see the effect.
5. Check whether MSC-controlled HOs are failing.
6. Remove co-BCCH/BSIC combinations among neighbours.
7. Clear hardware alarms.