
CHAPTER 1 : INTRODUCTION

1.1 INTRODUCTION TO WCDMA

W-CDMA (Wideband Code Division Multiple Access), also known as UMTS-FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread, is an air interface standard used in 3G mobile telecommunications networks. It is the basis of Japan's NTT DoCoMo FOMA service, is the most commonly used member of the Universal Mobile Telecommunications System (UMTS) family, and is sometimes used as a synonym for UMTS. It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users than most of the time division multiple access (TDMA) and time division duplex (TDD) schemes used before. WCDMA is an approved 3G technology which increases data transmission rates via a code division multiplexing air interface, rather than the time division multiplexing air interface of GSM systems. It supports very high-speed multimedia services such as full-motion video, Internet access and video conferencing, and can easily handle bandwidth-intensive applications such as data and image transmission via the Internet.

1.2 EVOLUTION OF WCDMA


In the mid-1980s a second generation (2G) digital system known as the Global System for Mobile Communications (GSM) was introduced for mobile telephony. It significantly improved speech quality over the older analog-based systems and, as it was an international standard, enabled a single telephone number and mobile phone to be used by consumers around the world. It led to significantly improved connectivity and voice quality, as well as the introduction of a whole range of new digital services such as low-speed data. Proving very successful, GSM was officially adopted by the European Telecommunications Standards Institute (ETSI) in 1991 and is now used in over 160 countries worldwide. The success of GSM spurred demand for further development in mobile telephony and put it on an evolutionary path to third generation (3G) technology. Along the way, that development path has included 2G technologies such as Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA). TDMA is similar in nature to GSM and provides roughly a tripling of network capacity over the earlier AMPS analog system. In contrast, CDMA is based on the principles of spread spectrum communication, with access provided via a system of digital coding.

In 1997 a 2.5G system called the General Packet Radio Service (GPRS) was introduced to accommodate the growing demand for Internet applications. Compared with the existing 2G systems, it offered higher data rates and Quality of Service (QoS) features for mobile users by dynamically allocating multiple channels. GPRS installs a packet-switched network on top of the existing circuit-switched network of GSM, without altering the radio interface. In 1999, the International Telecommunication Union (ITU) began evaluating and accepting proposals for 3G protocols in an effort to coordinate worldwide migration to 3G mobile networks. These proposals were known as International Mobile Telecommunications 2000 (IMT-2000). One of the most important IMT-2000 proposals to emerge was the Universal Mobile Telecommunications System (UMTS).
While GPRS is considered the first step in enhancing the GSM core network in preparation for EDGE and 3G, WCDMA is a 3G technology according to the 3GPP standard (Figure 1). It is the digital access system for the UMTS network and is today considered one of the world's leading 3G wireless standards.

FIG 1: EVOLUTION OF CELLULAR TECHNOLOGIES

Second generation (2G) mobile communication standards were developed to provide higher bandwidth efficiency, security and digital modulation schemes. Third generation (3G) wireless capability was developed in response to growing demand for data services. The International Telecommunication Union (ITU) defined the specification known as International Mobile Telecommunications 2000 (IMT-2000) with the goal of developing a global standard that enables worldwide roaming, multimedia application services, improved spectral efficiency, and flexible evolution from existing standards. The evolution goal simplified the transition of carriers from 2/2.5G CDMA-based networks to WCDMA. The targeted data rates were 2 Mbps for fixed, 384 kbps for pedestrian and 144 kbps for vehicular access. Different regional solutions were proposed to meet the requirements of IMT-2000. These included Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) utilizing Frequency Division Duplex (FDD) and Time Division Duplex (TDD). The fragmentation of the proposals led to the creation of two working groups. One, known as the Third Generation Partnership Project (3GPP), works on the Universal Mobile Telecommunications System (UMTS) based on WCDMA. The other, 3GPP2, works on CDMA2000.

The transition from 2G can be summarized in the following table:

Generation   Technology         Data Rate          Bandwidth   Data Network
2G           GSM                9.6 or 14.4 kbps   200 kHz     Circuit/Packet
2G           CDMA               9.6 or 14.4 kbps   1.25 MHz    Circuit/Packet
2.5G         GPRS               128 kbps           200 kHz     Circuit/Packet
2.5G         EDGE               384 kbps           200 kHz     Circuit/Packet
2.5G         CDMA2000 1xRTT     153 kbps           1.25 MHz    Circuit/Packet
3G           WCDMA (UMTS R99)   384 kbps           5 MHz       Packet

Table 1 - Standards Differentiation


WCDMA differs from other multiple access schemes in the manner in which it separates concurrent individual users. Early analog cellular systems utilized Frequency Division Multiple Access (FDMA), which splits the available frequency spectrum among users. Time Division Multiple Access (TDMA), which is used in the GSM standard, allocates the full bandwidth to an individual user but switches users' access over time. As in Code Division Multiple Access (CDMA), WCDMA is based on the principle of spread spectrum: the bandwidth is shared among multiple users concurrently, and users' signals are differentiated through Direct Sequence Spread Spectrum (DS-SS), which assigns a distinct code to each user to differentiate that user's channel.

1.3 WCDMA BASICS


WCDMA is a direct spreading technology: it spreads its transmissions over a wide 5 MHz carrier and can carry both voice and data simultaneously. It features a peak data rate of 384 kbps, a peak network downlink speed of 2 Mbps and average user throughputs (for file downloads) of 220-320 kbps. In addition, WCDMA boasts increased capacity over EDGE for high-bandwidth applications, along with features that include enhanced security, QoS, multimedia support, and reduced latency (Table 1).

Parameter                      WCDMA
Bandwidth                      5 MHz
Chip Rate                      3.84 Mcps
Power Control Frequency        1500 Hz, uplink and downlink
Base Station Synchronization   Not needed
Cell Search                    3-step approach via primary/secondary synchronization codes and CPICH
Downlink Pilot                 CDM common (CPICH); TDM dedicated (bits in DPCH)
User Separation                CDM/TDM (shared channel)
2G Interoperability            GSM-UMTS handover (multi-mode terminals)
SYSTEM PERFORMANCE OF WCDMA

Unlike GSM and GPRS, which rely on the TDMA protocol, WCDMA, like CDMA, allows all users to transmit at the same time and to share the same RF carrier. Each mobile user's call is uniquely differentiated from other calls by a set of specialized codes added to the transmission.
WCDMA base stations differ from some other CDMA systems in that they do not have to be in system-wide time synchronization, nor do they depend on a Global Positioning System (GPS) signal. Instead, they work by transmitting a sync signal along with the downlink signal.
A downlink or forward link is defined as the RF signal transmitted from the base station to the subscriber's mobile phone. It consists of the RF channel, a scrambling code (one per sector), an orthogonal variable spreading factor (OVSF) channel for signaling (one per call), and one or more OVSF channels for data (Figure 2). It also contains the sync signals (P-SCH and S-SCH), which are independent of the OVSF and scrambling codes. The RF signal transmitted from the mobile phone is referred to as the uplink or reverse channel.
FIG 2: WCDMA CHANNEL STRUCTURE
The WCDMA downlink and uplink data streams run at a constant 3.84 Mcps and are divided into time slots grouped as frames. The frame is the basic unit of data that the system works with in the coding, interleaving and transmitting processes.
Data transmitted via a WCDMA network, whether digitized voice or actual data, is spread using a code running at a 3.84 Mcps chip rate. Once the transmitted data is received by the subscriber's mobile receiver, its demodulator/correlator reapplies the code and recovers the original data (Figure 3). The signal received by the mobile is a spread signal together with noise, interference and messages on other code channels in the same RF frequency slot. The interference may emanate from multiple sources, including other users in the same cell or in neighboring cells.

FIG 3: SIGNAL SPREADING IN WCDMA SYSTEMS
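The spreading and despreading process described above can be sketched in a few lines of NumPy. This is a simplified illustration only: two users, spreading factor 8, an ideal noiseless channel, and Walsh codes standing in for WCDMA's actual channelization/scrambling codes.

```python
import numpy as np

sf = 8  # spreading factor: chips per data symbol
# Two orthogonal Walsh codes (illustrative stand-ins for WCDMA's OVSF codes)
code_a = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
code_b = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])

data_a = np.array([ 1, -1,  1])   # user A's symbols
data_b = np.array([-1, -1,  1])   # user B's symbols

# Spread each symbol over sf chips, then sum both users on the shared carrier
tx = np.repeat(data_a, sf) * np.tile(code_a, len(data_a)) \
   + np.repeat(data_b, sf) * np.tile(code_b, len(data_b))

# Receiver for user A: multiply by code A and integrate over each symbol
rx_a = np.sign((tx.reshape(-1, sf) * code_a).sum(axis=1))
print(rx_a)   # [ 1. -1.  1.] -- user B's contribution integrates to zero
```

Because the two codes are orthogonal, user B's chips sum to zero inside each symbol period at user A's correlator, which is exactly the user-separation mechanism the text describes.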


WCDMA has two basic modes of operation:
- Frequency Division Duplex (FDD) mode: separate frequencies are used for the uplink and downlink. FDD is currently being deployed and is usually referred to as WCDMA.
- Time Division Duplex (TDD) mode: the uplink and downlink are carried in alternating bursts on a single frequency.

One of the important features of a WCDMA system is its highly adaptive radio interface. WCDMA is designed to allow many users to efficiently share the same RF carrier by dynamically reassigning data rates. The spreading factor (SF) may be updated as often as every 10 ms, which, in turn, permits the overall data capacity of the system to be used more efficiently.
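The dynamically reassigned spreading factors mentioned above rely on OVSF codes, which remain orthogonal at equal spreading factors. A minimal sketch of how the OVSF code tree can be generated (an illustrative helper of my own, not 3GPP reference code):

```python
import numpy as np

def ovsf_codes(sf):
    """Generate all OVSF codes of spreading factor sf (sf a power of two).
    Each code c at one level spawns [c, c] and [c, -c] at the next level."""
    codes = [np.array([1])]
    while len(codes[0]) < sf:
        nxt = []
        for c in codes:
            nxt.append(np.concatenate([c, c]))
            nxt.append(np.concatenate([c, -c]))
        codes = nxt
    return codes

codes = ovsf_codes(4)
# Every pair of distinct codes has zero cross-correlation
for i in range(4):
    for j in range(4):
        assert int(codes[i] @ codes[j]) == (4 if i == j else 0)
```

At spreading factor sf there are sf mutually orthogonal codes, so assigning a shorter code (higher data rate) to one user consumes a whole subtree of codes that can no longer be given to others, which is the capacity trade-off behind the dynamic SF reassignment.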

1.4 DEVELOPMENT & DEPLOYMENT


In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G
network FOMA. Later NTT DoCoMo submitted the specification to the International
Telecommunication Union (ITU) as a candidate for the international 3G standard known as IMT-
2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards,
as an alternative to CDMA2000, EDGE, and the short range DECT system. Later, W-CDMA
was selected as an air interface for UMTS.

As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their
network was initially incompatible with UMTS.[2] However, this has been resolved by NTT
DoCoMo updating their network.

Code Division Multiple Access communication networks have been developed by a number of
companies over the years, but development of cell-phone networks based on CDMA (prior to W-
CDMA) was dominated by Qualcomm. Qualcomm was the first company to succeed in
developing a practical and cost-effective CDMA implementation for consumer cell phones: its
early IS-95 air interface standard, which has since evolved into the current CDMA2000 (IS-
856/IS-2000) standard. Qualcomm created an experimental wideband CDMA system called
CDMA2000 3x which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network
technologies into a single design for a worldwide standard air interface. Compatibility with
CDMA2000 would have beneficially enabled roaming on existing networks beyond Japan, since
Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with
coverage in 58 countries as of 2006. However, divergent requirements resulted in the W-CDMA
standard being retained and deployed globally. WCDMA has then become the dominant
technology with 457 commercial networks in 178 countries as of April 2012.[3] Several
cdma2000 operators have even converted their networks to WCDMA for international roaming
compatibility and smooth upgrade path to LTE.

Despite incompatibilities with existing air-interface standards, the late introduction of this 3G system, and the high upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the dominant standard.
The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo in Japan in 2001. Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand.

1.5 ADVANTAGES
WCDMA networks offer a number of significant benefits:
- High bandwidth and low latency, which contribute significantly to a higher-quality user experience and in turn increase data revenue and improve customer satisfaction.
- Support for a wide array of new and emerging multimedia services.
- Considered the most cost-effective means of adding significant capacity for both voice and data services.
- Far better integration of RF components in the base station compared to other radio or mobile technologies: a WCDMA base station cabinet has several times the RF capacity of GSM cabinets.
- Extreme flexibility in allocating capacity to offer the optimal QoS for different traffic types.

To date, WCDMA has been adopted for 3G use as specified in the 3GPP standard by ETSI in Europe, and as an ITU standard under the name IMT-2000 Direct Spread. NTT DoCoMo launched the first WCDMA service in 2001 and now has millions of subscribers. WCDMA is also the 3G technology of choice for many GSM/GPRS operators, with dozens currently in trials. More than 100 GSM/GPRS operators have licensed new spectrum with the intent to launch WCDMA services in the coming years.

CHAPTER 2 : CONVERSION

In communication systems, the signal needs to ride on a carrier frequency to be sent efficiently from one location to another. To do this, the signal's frequency spectrum must be movable up and down the frequency axis at will: a process of (frequency) upconversion and downconversion. All early methods used analog circuits to accomplish this task. In the past couple of decades, digital circuits, particularly FPGAs, have developed the computational capacity to perform these functions in many cases, and offer important advantages over analog methods. These methods are known as digital upconversion and downconversion (DUC and DDC, respectively).

The digital down converter (DDC) and digital up converter (DUC) are extensively used in radio systems. They are more popular than their analogue counterparts because of their small size, low power consumption and accurate performance. The DDC converts the signal at the output of the analog-to-digital converter (ADC), centered at an intermediate frequency (IF), to a complex baseband signal. In addition, the DDC decimates the baseband signal without affecting its spectral characteristics. The decimated signal, with a lower data rate, is easier to process on a low-speed DSP processor. Similarly, the DUC converts a baseband signal to a passband IF signal. The two circuits therefore perform complementary, opposite functions.
2.2 Digital Down Converter
Digital downconversion is the opposite of upconversion, and the circuit diagram for this process looks similar, as shown in Figure 4. Digital downconversion involves sampling, so naturally aliasing needs to be considered. The Nyquist sampling rule states that the sampling frequency must be at least twice the highest frequency of the signal being sampled. In practice, the sampling frequency is usually somewhat more than twice the signal frequency, to allow an extra margin for the transition band of the digital low-pass filters that follow. But this is not the whole story: it is possible to sample a signal at a frequency lower than its carrier frequency, because the Nyquist rule applies to the actual bandwidth of the signal, not to the frequency of the carrier.

FIG 4 DOWN CONVERTER

Digital radio receivers often have fast ADCs to digitise the band-limited RF or IF signal, generating high data rates; but in many cases the signal of interest represents only a small proportion of that bandwidth. Extracting the band of interest at this high sample rate would require a prohibitively large filter. A DDC allows the frequency band of interest to be moved down the spectrum so that the sample rate can be reduced; the filter requirements and further processing on the signal of interest then become more easily realisable. Consider a radio signal lying in the range 39-40 MHz, with a signal bandwidth of 1 MHz. It is often digitized with a sampling rate of over 100 Msamples per second, representing in the region of 200 Mbytes/second. The DDC allows us to select the 39-40 MHz band and shift its frequency down to baseband, and in doing so reduce the sample rate: for a 1 MHz bandwidth, a sample rate of 2.5 MHz would be fine, giving a data rate of only 5 Mbytes/second. This is shown in Figure 5.
FIG 5 DDC FUNCTION
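The DDC operation just described (select the band, shift it to baseband, reduce the sample rate) can be sketched in NumPy. The tone frequency, the crude moving-average filter, and the 40x decimation ratio are illustrative choices of mine, not a production design:

```python
import numpy as np

fs = 100e6                       # ADC sample rate
f_c = 39.5e6                     # centre of the 39-40 MHz band of interest
n = np.arange(20000)

# Simulated ADC capture: a tone at 39.7 MHz, inside the band of interest
x = np.cos(2 * np.pi * 39.7e6 * n / fs)

# 1) Mix to baseband with a complex local oscillator (the NCO)
lo = np.exp(-2j * np.pi * f_c * n / fs)
bb = x * lo                      # band of interest now centred at 0 Hz

# 2) Low-pass filter (a simple moving average as a stand-in for a real FIR)
taps = 40
h = np.ones(taps) / taps
bb = np.convolve(bb, h, mode='same')

# 3) Decimate by 40: 100 MHz -> 2.5 MHz output rate
ddc_out = bb[::40]
fs_out = fs / 40                 # 2.5 MHz
```

After the mix and decimation, the 39.7 MHz input appears at 39.7 - 39.5 = 0.2 MHz in the 2.5 MHz-rate output, exactly the band-shifting and rate reduction the text describes.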
2.3 IF Subsampling

A Digital Down Converter is basically a complex mixer, shifting the frequency band of interest down to baseband. Consider the spectrum of the original continuous analogue signal prior to digitisation, as shown in Figure 2: because it is a real signal, it has both positive and negative frequency components. If this signal is sampled by a single A/D converter at a rate greater than twice the highest frequency, the resulting spectrum is as shown in Figure 3: the continuous analogue spectrum is repeated around each multiple of the sampling frequency.

Using a technique called IF subsampling, we are able to sample at a much lower frequency than the carrier frequency. The term IF subsampling is used because the frequencies typically used in this technique lie somewhere between baseband and RF; IF refers to intermediate frequency. In this case, we deliberately take advantage of an alias of the signal of interest. This is best illustrated with an example. Let's use a 4G (fourth generation) wireless example, where the signal of interest lies at 2500 MHz. Analog circuits are used to downconvert the signal to an IF, or intermediate frequency. Let's assume that we have an ADC sampling at 200 MHz, and a 20 MHz BW IF signal centered at 60 MHz, as in Figure 6, which we are trying to sample and downconvert for baseband processing. By the Nyquist rule, we can sample signals up to half the sample rate, or 100 MHz. To allow easier post-sampling filtering, we may want to limit this to 80 MHz instead. Our signal here lies between 50 and 70 MHz, so these conditions are met.
FIG 6 SAMPLING

Now suppose the IF signal is centered at 460 MHz, as in Figure 7, rather than 60 MHz. Sampling at 200 MHz still captures it, because the 460 MHz band aliases down to 60 MHz (460 - 2 x 200 = 60), producing the same sampled spectrum as before.

FIG 7
Two principal characteristics are of concern here. The first, of course, is the maximum sampling rate at which the ADC operates; in our example, we assumed an ADC that could sample at 200 MHz. When we use IF subsampling, we must also consider the analog bandwidth of the sampling circuit in the ADC. Ideally, the ADC samples the signal for an infinitely small instant of time and converts that measurement into a digital number. In practice, this circuit has a sampling window, or period of time over which it samples. The narrower this window, the higher the signal frequencies it can sample. This specification is given in the datasheets provided by ADC manufacturers. In our example, we sampled a signal at 460 MHz, so we should check that our ADC has an analog front-end bandwidth of 500 MHz or higher.
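This aliasing behaviour is easy to verify numerically. A short NumPy sketch (an ideal noiseless tone, 2000 samples, values chosen to match the example) samples the 460 MHz carrier at 200 MHz and finds the spectral peak:

```python
import numpy as np

fs = 200e6            # ADC sample rate
f_if = 460e6          # IF carrier, well above fs/2
n = np.arange(2000)

# Sampling the 460 MHz tone at 200 MHz...
x = np.cos(2 * np.pi * f_if * n / fs)

# ...makes it appear at its alias: 460 - 2*200 = 60 MHz
spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / len(n)
print(peak_hz / 1e6)   # 60.0
```

The sampled data is indistinguishable from a tone at 60 MHz, which is exactly why the ADC's analog front-end bandwidth and clock quality, rather than its sample rate, become the limiting factors.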
Another factor that must be considered is clock jitter. Clock jitter is the amount of timing variability of the edge of the ADC sampling clock from cycle to cycle. It can be seen on an oscilloscope of sufficient quality, and is more easily seen as clock phase noise on a spectrum analyzer. Jitter shows up as spectral noise tapering off on either side of the clock frequency component; the less jitter, the more closely the clock appears as a simple vertical line in the frequency response. The effect of clock jitter is proportional to the frequency of the signal being sampled. It limits the achievable SNR according to the standard relationship

SNR (dB) = -20 log10(2 * pi * f_signal * t_jitter)

where f_signal is the frequency of the sampled signal and t_jitter is the rms clock jitter.
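This jitter limit is easy to put into numbers. The formula below is the standard rule of thumb SNR = -20 log10(2 pi f t_jitter) relating rms aperture jitter to the best-case SNR for a full-scale sine input; the 460 MHz / 1 ps figures are illustrative:

```python
import math

def jitter_snr_db(f_signal, t_jitter_rms):
    """Best-case SNR when sampling a full-scale tone at f_signal
    with an rms clock jitter of t_jitter_rms seconds."""
    return -20 * math.log10(2 * math.pi * f_signal * t_jitter_rms)

snr = jitter_snr_db(460e6, 1e-12)   # about 50.8 dB
```

Even 1 ps of rms jitter limits a 460 MHz input to roughly 8 effective bits, which is why IF subsampling designs pay so much attention to clock quality.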

A major difference between an ADC and a DAC is that while an ADC converts the signal to a digital representation, the DAC converts the digital samples to analog form and performs a sample-and-hold function on the analog signal between clocks. It is this sample-and-hold function that is the critical difference. The sample-and-hold function is a rectangular filter in the time domain: each digital sample input is an impulse of a given magnitude, and the output is a rectangular pulse of that magnitude. The DAC impulse response, just as in any filter, defines the frequency response. In this case, the rectangular impulse response yields a sinc function [sin(x)/x] response in the frequency domain.
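The sinc-shaped response means the DAC output droops towards the band edge. A quick check with NumPy (the 100 MHz update rate and 40 MHz test frequency are illustrative values; np.sinc is the normalised sinc, sin(pi x)/(pi x)):

```python
import numpy as np

fs = 100e6   # DAC update rate (illustrative)

def sah_droop_db(f, fs):
    """Amplitude of the sample-and-hold sinc response at frequency f, in dB."""
    return 20 * np.log10(np.abs(np.sinc(f / fs)))

droop = sah_droop_db(40e6, fs)   # about -2.4 dB at 0.4 * fs
```

Designs that need a flat passband often compensate this droop with a small inverse-sinc digital filter ahead of the DAC.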

The above example demonstrates the basic operation of a Digital Down Converter, but the details of a particular converter will depend on the application. The maximum A/D sample frequency may not be high enough, so that bandpass sampling techniques, as discussed in more detail later, may be needed. The signal in the example has been mixed down to baseband to demonstrate the maximum reduction in output sample frequency, but this may not be suitable for every application. The frequency shift is determined by the quadrature oscillator frequency. The filter design is a function of the application requirements: broadband, narrowband, or linear phase? We mentioned that the filters used to reduce the bandwidth of the signal usually have linear phase characteristics. Linear phase filters are usually more complex than those with arbitrary phase characteristics, so there must be a good reason for using them. Communications systems often depend on the relationships between multiple carriers. These carriers may be the same frequency but with different phase, or they may be completely different frequencies. In either case, disturbing the phase relationships would be harmful. For this reason, most DDC designers will try to use linear phase filters exclusively. These appear as a simple delay to the signal, and as all elements of the signal are delayed by the same amount, the signal's integrity is preserved.

2.4 Decimation by non-integer ratios


In our description, we described decimation as simply throwing away samples. This is a valid thing to do for integer changes in the sampling rate; for example, when decimating a bandwidth-limited signal from 3 MHz to 1 MHz it is perfectly acceptable to discard two out of every three samples. However, this is not always the case. In some systems we will want to decimate by non-integer ratios, for example from 2.5 MHz down to 1 MHz. This represents a decimation ratio of 2.5, or 5/2. In this case, we first INCREASE the sampling rate. This is known as interpolation: we add samples, then filter the signal back to its original bandwidth. Here, we double the sample rate up to 5 MHz. Once the sample rate has been increased, we can once again decimate by discarding samples. In the example, we discard four samples out of every five, giving us the 1 MHz output rate. All these concepts are drawn together in the block diagram of Figure 8.

FIG 8 THEORETICAL DDC BLOCK
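The interpolate-then-decimate procedure can be sketched in NumPy. All the specifics here (the 100 kHz test tone and the 101-tap Hamming-windowed sinc filter) are illustrative choices, not a recommended filter design:

```python
import numpy as np

fs_in = 2.5e6          # input sample rate
up, down = 2, 5        # 2.5 MHz * 2 / 5 = 1 MHz

# Test signal: a 100 kHz tone
n = np.arange(1000)
x = np.cos(2 * np.pi * 100e3 * n / fs_in)

# 1) Interpolate: insert (up - 1) zeros between samples -> rate 5 MHz
x_up = np.zeros(len(x) * up)
x_up[::up] = x
fs_up = fs_in * up

# 2) Low-pass filter at the output Nyquist limit (500 kHz):
#    a windowed-sinc FIR, scaled by 'up' to restore the amplitude
cutoff = (fs_up / down) / 2
t = np.arange(101) - 50
h = (2 * cutoff / fs_up) * np.sinc(2 * cutoff * t / fs_up) * np.hamming(101)
y = np.convolve(x_up, h * up)

# 3) Decimate: keep every 'down'-th sample -> 1 MHz output rate
y = y[::down]
fs_out = fs_up / down
print(fs_out)   # 1000000.0
```

The single low-pass filter serves both roles: it removes the spectral images created by zero-stuffing and band-limits the signal before the final discard, so the 100 kHz tone passes through unharmed.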

The concept of the DDC is to sample the whole input signal and to use digital techniques to reduce the data. However, that may require an unrealistically fast ADC. For example, Nyquist's theorem states that we should sample at a rate at least double the highest frequency of interest. If we did this on a 1 GHz carrier, we would need an ADC sampling at well over 2 GHz, protected by anti-aliasing filters removing any signal above 1 GHz. This is beyond what can be achieved with today's technology, so frequency shifting is required before the ADC. This can be done in analog, using an IF stage, or it may be possible to use a composite approach. Typically, if the signal is at 1 GHz, the bandwidth of interest may be only a MHz wide. The filters required to select such a narrow band would be large and complex, but selecting a band perhaps 40 MHz wide could be practical. If we can select this band using analog filters, it is possible to sample at a reduced rate using the bandpass sampling mentioned earlier. In this case, with a sample rate (Fs) of 100 MHz, the 1 GHz signal is effectively frequency shifted down by 10 x Fs through aliasing, bringing it within a much more reasonable 100 MHz sample rate. The more aggressively this technique is applied, the greater the strain on the anti-aliasing filters. It also places great emphasis on high analog bandwidth for the ADC and extremely low jitter for the ADC clocks; any errors here will be magnified greatly. However, it can be used very effectively. This technique is complementary to frequency-shifting the signal to an intermediate frequency, or IF. Typically the approach used (direct conversion, IF, or bandpass sampling) will be chosen depending on the frequencies involved and other system issues, even down to the type of antenna deployed. Most systems use a combination approach, using an analog IF stage to bring the signal down to something that can then be processed digitally.
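The bandpass-sampling arithmetic above is simple enough to capture in a small helper. This is my own illustrative function, not from any particular library:

```python
def alias_freq(f_signal, fs):
    """Apparent frequency of f_signal after sampling at rate fs."""
    f = f_signal % fs
    return f if f <= fs / 2 else fs - f

print(alias_freq(460e6, 200e6))   # 60 MHz: the IF subsampling case earlier
print(alias_freq(1.0e9, 100e6))   # 0 Hz: 1 GHz is exactly 10 * Fs, folds to DC
```

In practice the band of interest must sit wholly inside one Nyquist zone (not straddle a multiple of fs/2), and a carrier landing exactly at 0 Hz would be offset slightly so the full bandwidth fits, but the helper shows where a given carrier will appear.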

2.5 DDC OVER ANALOGUE TECHNIQUES

Plainly a DDC implements something which could be done in analogue, so it is sometimes worth stopping to check why we would want to do it digitally. The DDC is typically used to convert an RF signal down to baseband. It does this by digitising at a high sample rate, and then using purely digital techniques to perform the data reduction.
Being digital gives many advantages, including:
- Stability: not affected by temperature or manufacturing processes. With a DDC, if the system operates at all, it works perfectly; there is never any tuning or component tolerance to worry about.
- Controllability: all aspects of the DDC are controlled from software. The local oscillator can change frequency very rapidly indeed; in many cases a frequency change can take place on the next sample. Additionally, that frequency hop can be large, as there is no settling time for the oscillator.
- Size: a single ADC can feed many DDCs, a boon for multi-carrier applications. A single DDC can be implemented in part of an FPGA device, so multiple channels can be implemented, or additional circuitry can be added.
However, there are some disadvantages:
- ADC speeds are limited. It is not possible today to digitise high-frequency carriers directly. There are techniques to extend the range of ADCs, but often it is simpler to use analogue circuits to bring the carrier down to an IF that digital circuits can then manage.
- ADC dynamic range is limited. In many communications systems, the signal's amplitude can vary greatly. Fast ADCs often have only 12 bits of resolution, giving an absolute maximum dynamic range of about 72 dB. It is often better to use analogue circuits in conjunction with the ADC to implement AGC functions to ensure that this range is best used.
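The 72 dB figure follows from the usual quantisation-noise rule of roughly 6 dB per bit; the exact ideal-sine formula adds a small constant:

```python
def adc_snr_db(bits):
    """Ideal quantisation SNR for an N-bit ADC with a full-scale sine input."""
    return 6.02 * bits + 1.76

print(adc_snr_db(12))   # 74.0 dB ideal; the simpler 6 dB/bit rule gives ~72 dB
```

Real converters fall short of this ideal because of jitter, thermal noise and nonlinearity, which is why analogue AGC ahead of the ADC is so valuable for keeping signals near full scale.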
In time, more and more systems will use predominantly digital technology. However, the high speeds of many communication systems will ensure that a hybrid approach, using both analogue and digital techniques, will be the best route for many systems for a long time to come. The quest for more spectral space means that new systems will use ever higher frequencies, keeping analog approaches relevant well into the future.

CHAPTER 3 : LITERATURE REVIEW