
Name Jatender Kumar Roll no. RH6803B50 Regd. No. 10812127 Class B.tech E.C.E 5th Sem.

Term Paper of Digital Signal Processing on Applications of Signal Processing in CDMA, GSM, Optical Fibre, Radio Transmitter & DTH Communication
Contents
CDMA communication
GSM communication
Optical fibre communication
Radio transmitter
DTH communication

CDMA is a military technology first used during World War II by the English allies to foil German attempts at jamming transmissions. The allies decided to transmit over several frequencies, instead of one, making it difficult for the Germans to pick up the complete signal. Because Qualcomm created communications chips for CDMA technology, it was privy to the classified information. Once the information became public, Qualcomm claimed patents on the technology and became the first to commercialize it.

CDMA communication
Introduction


Short for Code-Division Multiple Access, a digital cellular technology that uses spread-spectrum techniques. Unlike competing systems, such as GSM, that use TDMA, CDMA does not assign a specific frequency to each user. Instead, every channel uses the full available spectrum. Individual conversations are encoded with a pseudo-random digital sequence. CDMA consistently provides better capacity for voice and data communications than other commercial mobile technologies, allowing more subscribers to connect at any given time, and it is the common platform on which 3G technologies are built.

CDMA and "space-division multiple access" to create a MAC protocol which adapts transmission to channel conditions. Although the topics covered are somewhat dated (in a relative sense when the fast pace of development in the field is considered), this book is still useful reading for those who want a better understanding of the subject matter, especially that of linear optimisation methods crafted for CDMA. Oh, and don't let the term "CDMA" mislead you; most of what's in here can be applied to 3G systems as well.


Coherent signal processing for CDMA communication systems
Abstract


A novel and improved method and apparatus for coherently processing a CDMA signal without the use of pilot or other synchronization information is described. By utilizing a coherent receive processing system, a reverse link signal can be properly processed when received at a lower power level than that associated with noncoherent-only processing. This reduces the transmit power necessary for successful communication, and this reduction in necessary transmit power reduces the degree to which a set of subscriber units communicating with the same base station interfere with one another. In turn, this increases the overall system capacity of a CDMA wireless telecommunication system incorporating the invention.

Code division multiple access (CDMA) is a channel access method utilized by various radio communication technologies. It should not be confused with the mobile phone standards called cdmaOne and CDMA2000 (which are often referred to as simply "CDMA"), which use CDMA as an underlying channel access method. One of the basic concepts in data communication is the idea of allowing several transmitters to send information simultaneously over a single communication channel; this allows several users to share a band of frequencies. This concept is called multiplexing. CDMA employs spread-spectrum technology and a special coding scheme (where each transmitter is assigned a code) to allow multiple users to be multiplexed over the same physical channel. By contrast, time division multiple access (TDMA) divides access by time, while frequency-division multiple access (FDMA) divides it by frequency. CDMA is a form of "spread-spectrum" signaling, since the modulated coded signal has a much higher bandwidth than the data being communicated. An analogy to the problem of multiple access is a room (channel) in which people wish to communicate with each other. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example, where people speaking the same language can understand each other, but not other people. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can understand each other.
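To make the shared-code idea concrete, the sketch below (in Python, with an arbitrary illustrative code length and bit patterns, not taken from any particular standard) spreads two users' bits with orthogonal Walsh codes, adds the two transmissions on the same channel, and recovers each user's bits by correlating with that user's own code.

```python
import numpy as np

# Two orthogonal Walsh spreading codes of length 4 (chips are +/-1)
codes = {
    "user_a": np.array([+1, +1, +1, +1]),
    "user_b": np.array([+1, -1, +1, -1]),
}

def spread(bits, code):
    """Map each data bit (0/1 -> -1/+1) onto a run of chips given by the code."""
    symbols = 2 * np.array(bits) - 1
    return np.concatenate([s * code for s in symbols])

def despread(chips, code):
    """Correlate the received chips with the code to recover one user's bits."""
    chips = chips.reshape(-1, len(code))
    return (chips @ code > 0).astype(int)

bits_a, bits_b = [1, 0, 1], [0, 0, 1]
# Both users transmit on the same channel at the same time
channel = spread(bits_a, codes["user_a"]) + spread(bits_b, codes["user_b"])
print(despread(channel, codes["user_a"]))   # [1 0 1]
print(despread(channel, codes["user_b"]))   # [0 0 1]
```

Because the two codes are orthogonal, correlating with one code cancels the other user's contribution exactly; real systems add noise, asynchronism and much longer pseudo-random codes on top of this basic mechanism.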

Uses

CDMA mobile phone.

One of the early applications of code division multiplexing is in GPS; this predates and is distinct from cdmaOne.
The Qualcomm standard IS-95, marketed as cdmaOne.
The Qualcomm standard IS-2000, known as CDMA2000. This standard is used by several mobile phone companies, including the Globalstar satellite phone network.
CDMA has been used in the OmniTRACS satellite system for transportation logistics.

GSM communication
Global System for Mobile communications (GSM: originally from Groupe Spécial Mobile) is the most popular standard for mobile phones in the world. Its promoter, the GSM Association, estimates that 82% of the global mobile market uses the standard. GSM is used by over 3 billion people across more than 212 countries and territories. Its ubiquity makes international roaming very common between mobile phone operators, enabling subscribers to use their phones in many parts of the world. GSM differs from its predecessors in that both signalling and speech channels are digital, and it is thus considered a second generation (2G) mobile phone system. This has also meant that data communication was easy to build into the system. The ubiquity of the GSM standard has been an advantage to both consumers (who benefit from the ability to roam and switch carriers without switching phones) and to network operators (who can choose equipment from any of the many vendors implementing GSM). GSM also pioneered a low-cost (to the network carrier) alternative to voice calls, the Short Message Service (SMS, also called 'text messaging'), which is now supported on other mobile standards as well. Another advantage is that the standard includes one worldwide emergency telephone number, which makes it easier for international travellers to connect to emergency services without knowing the local emergency number.


Mobile Station
The mobile station (MS) consists of the mobile equipment (the terminal) and a smart card called the Subscriber Identity Module (SIM). The SIM provides personal mobility, so that the user can have access to subscribed services irrespective of a specific terminal. By inserting the SIM card into another GSM terminal, the user is able to receive calls at that terminal, make calls from that terminal, and receive other subscribed services.

The mobile equipment is uniquely identified by the International Mobile Equipment Identity (IMEI). The SIM card contains the International Mobile Subscriber Identity (IMSI) used to identify the subscriber to the system, a secret key for authentication, and other information. The IMEI and the IMSI are independent, thereby allowing personal mobility. The SIM card may be protected against unauthorized use by a password or personal identity number.

Services provided by GSM


From the beginning, the planners of GSM wanted ISDN compatibility in terms of the services offered and the control signalling used. However, radio transmission limitations, in terms of bandwidth and cost, do not allow the standard ISDN B-channel bit rate of 64 kbps to be practically achieved. Using the ITU-T definitions, telecommunication services can be divided into bearer services, teleservices, and supplementary services.

The most basic teleservice supported by GSM is telephony. As with all other communications, speech is digitally encoded and transmitted through the GSM network as a digital stream. There is also an emergency service, where the nearest emergency-service provider is notified by dialling three digits (similar to 911).

A variety of data services is offered. GSM users can send and receive data, at rates up to 9600 bps, to users on POTS (Plain Old Telephone Service), ISDN, packet-switched public data networks, and circuit-switched public data networks, using a variety of access methods and protocols such as X.25 or X.32. Since GSM is a digital network, a modem is not required between the user and the GSM network, although an audio modem is required inside the GSM network to interwork with POTS. Other data services include Group 3 facsimile, as described in ITU-T recommendation T.30, which is supported by use of an appropriate fax adaptor.

A unique feature of GSM, not found in older analog systems, is the Short Message Service (SMS). SMS is a bidirectional service for short alphanumeric messages (up to 160 characters). Messages are transported in a store-and-forward fashion. For point-to-point SMS, a message can be sent to another subscriber to the service, and an acknowledgement of receipt is provided to the sender. SMS can also be used in a cell-broadcast mode, for sending messages such as traffic or news updates. Messages can also be stored in the SIM card for later retrieval.

Supplementary services are provided on top of teleservices or bearer services. In the current (Phase 1) specifications, they include several forms of call forwarding (such as call forwarding when the mobile subscriber is unreachable by the network) and call barring of outgoing or incoming calls, for example when roaming in another country. Many additional supplementary services will be provided in the Phase 2 specifications, such as caller identification, call waiting, and multi-party conversations.


it is better "because it is the 3G generation chosen technology and GSM will migrate to CDMA since CDMA is more advanced..." But which one of these statements are correct? Accordingly to Nokia , "this discussion is not about technology anymore, but about market". We think this is the best way to describe the war between these two cell phone technologies. In the beginning, GSM was in fact superior. It had more services and allowed more data transfer. But CDMA, facing the advantages of the competitor standard, soon delivered the same features found on GSM. Nowadays, it is not possible to say that GSM services are better than CDMA. Multimedia messages, video, high-speed Internet access, digital camera and even PDA function are some of the features we can found on both technologies. The new CDMA 1XRTT technology, which previews what G3 cell phones will bring, is more advanced than EDGE, technology from the beginning of 3G generation, allowing higher transfer rates. Even the GSM SIM card advantage, that allows you to change your cell phone and keep your phone list, is being surplaced by some CDMA operators with a service that allows you to store your phone book on the operator's database, allowing you to recover your phone book even if your cell phone is stolen (which is not possible with GSM, since if your cell phone is stolen, your SIM card will be stolen together). Notice that recently a new accessory called SIM backup was released, which allows you to backup the data stored in your SIM card. Also some GSM operators are offering a similar backup service. So, nowadays both technologies are equip rated in technology, but this picture won't be like that in the future. After all, CDMA evolution ground is wider and in a few years it will be superior than GSM. This means that GSM operators will disappear? Not at all. They will migrate over CDMA and the war will continue, because the existing CDMA operators chose to use 1xEV-DO and1XEV-DV technologies for their 3G network and the existing GSM operators have opted for a different technology, called WCDMA. Also, even though the current GSM operators will migrate to WCDMA, they still can use their existing GSM network. So users won't feel anything special when the operators shift to the new cell generation (3G), independently from the technology they choose. CDMA and GSM both are used in mobile communication.

Which Technology is Better: GSM or CDMA?


Before deciding which technology is superior, let's talk a little more about these two technologies.

CDMA stands for Code Division Multiple Access. Both data and voice are separated from the signal using codes and then transmitted over a wide frequency range. Because of this, more room is left for data transfer (this is one of the reasons why CDMA is the preferred technology for the 3G generation, which is about broadband access and the use of large multimedia messages). 14% of the worldwide market uses CDMA. For the 3G generation, CDMA uses 1xEV-DO and EV-DV. It has many users in Asia, especially in South Korea.

GSM stands for Global System for Mobile communications. Even though it is sold as "the latest technology" in several countries, this technology is older than CDMA (and also than TDMA). But keep in mind that this doesn't mean that GSM is inferior or worse than CDMA. Roaming readiness and fraud prevention are two major advantages of this technology. GSM is the most widely used cell phone technology in the world, with 73% of the worldwide market. It has a very strong presence in Europe.

TDMA technology is the least used of the three main digital technologies (GSM, CDMA and TDMA), and we think it will gradually be replaced by CDMA or GSM. Hence the GSM vs. CDMA war. In one corner, GSM operators say it is better "because it uses a SIM chip, it is the most used technology worldwide, it is more secure and it is more advanced". In the other corner, CDMA followers say it is better "because it is the technology chosen for the 3G generation and GSM will migrate to CDMA since CDMA is more advanced...". But which of these statements is correct? According to Nokia, "this discussion is not about technology anymore, but about market". We think this is the best way to describe the war between these two cell phone technologies.

In the beginning, GSM was in fact superior. It had more services and allowed more data transfer. But CDMA, facing the advantages of the competing standard, soon delivered the same features found on GSM. Nowadays, it is not possible to say that GSM services are better than CDMA's. Multimedia messages, video, high-speed Internet access, digital cameras and even PDA functions are features that can be found on both technologies. The new CDMA 1xRTT technology, which previews what 3G cell phones will bring, is more advanced than EDGE, a technology from the beginning of the 3G generation, allowing higher transfer rates. Even the GSM SIM card advantage, which lets you change your cell phone and keep your phone list, is being supplanted by some CDMA operators with a service that stores your phone book in the operator's database, allowing you to recover it even if your cell phone is stolen (which is not possible with GSM, since if your cell phone is stolen, your SIM card is stolen with it). Recently a new accessory called SIM backup was released, which allows you to back up the data stored in your SIM card, and some GSM operators also offer a similar backup service.

So, nowadays the two technologies are roughly on par, but this picture won't stay that way in the future. After all, CDMA's room for evolution is wider, and in a few years it will be superior to GSM. Does this mean that GSM operators will disappear? Not at all. They will migrate toward CDMA-based technology and the war will continue, because the existing CDMA operators chose to use 1xEV-DO and 1xEV-DV technologies for their 3G networks, while the existing GSM operators have opted for a different technology, called WCDMA. Also, even though the current GSM operators will migrate to WCDMA, they can still use their existing GSM networks. So users won't notice anything special when the operators shift to the new cell generation (3G), whichever technology they choose. CDMA and GSM are both used in mobile communication.

Optical fibre communication
ABSTRACT


We review main signal processing techniques for mitigation of physical impairments in optical systems and discuss some of the important issues in their design and application in the optical domain.

Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Physical impairments in the optical fiber, in particular chromatic dispersion, fiber nonlinearities, polarization effects, and amplified spontaneous emission noise from the amplifiers, all interact to limit the data rates and/or transmission distances. Electronic domain (post-detection) techniques offer effective means for mitigating the effects of these impairments, an observation made more than a decade ago. Successful demonstrations of the effectiveness of these techniques for high-speed optical communications have appeared more recently, and promising results in this area have attracted attention to the field. Solutions based on signal processing offer flexible and effective means for mitigating a wide array of optical-domain impairments, and now, with the increasing availability of integrated circuits for high-speed operation, they are practical and cost-effective solutions allowing for seamless integration within the receiver electronics.

Signal processing can be used for mitigating the effects of a number of distortion mechanisms, such as polarization mode dispersion (PMD), chromatic dispersion, cross-phase modulation among channels in time- or wavelength-division-multiplexed (WDM) systems, and noise. Filtering, in particular adaptive filtering (linear or nonlinear), maximum likelihood sequence estimation (MLSE), maximum a posteriori (MAP) detection, and adaptive threshold selection can all be effective against physical distortions that introduce intersymbol and intercarrier interference (ICI). Adaptive filtering has been used effectively since the 1970s for mitigating distortion in high-speed wireline and wireless communications and in other media such as magnetic storage, and provides a desirable trade-off between effectiveness and implementation cost among signal processing techniques.

The MLSE and MAP algorithms are optimal in the sense that they minimize the probability of error: in the MAP approach, the symbol error rate, and in the MLSE, the probability of sequence error is minimized by computing the appropriate metrics (probabilities). Both methods require knowledge of the channel characteristics as well as the statistics of the noise, and hence are typically computationally complex. However, they are useful as benchmarks for comparing the performance of suboptimal approaches, and when some prior information on the channel is available or can be computed efficiently, they can also offer practical solutions.

INTRODUCTION
An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permits transmission over longer distances and at higher bandwidths (data rates) than other forms of communications. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can be used to carry images, thus allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers. Fibre-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world.

APPROACHES BASED ON (ADAPTIVE) FILTERING

Filters can be designed based on prior information and kept fixed in the receiver (the receiver filter used after detection to suppress out-of-band noise is a typical example), or they can be adaptive, designed each time a connection is established (as in equalization for PMD mitigation); such filters are called adaptive regardless of whether their coefficients are computed adaptively or not. When the distortion is time-varying, such as PMD, the coefficients need to be recomputed at given time intervals or continuously adapted to track the variations in the channel. The most common linear filter structure is the tapped-delay-line (feedforward) structure, which can be implemented in either continuous or discrete time. It produces an output equal to a weighted sum of differently delayed versions of the signal. The coefficients of the filter are computed by optimizing a suitably chosen metric; the minimum mean squared error (MSE) and least squares (LS) criteria are the two most commonly used. Minimum-MSE filter coefficients are given by the Wiener-Hopf equations and can be estimated adaptively using gradient optimization, and the LS weights using recursive estimates of the input correlation matrix. The least mean squares (LMS) algorithm uses an instantaneous estimate of the statistics required for the gradient updates of the Wiener-Hopf solution, and is a simple and effective solution that has been used successfully in most adaptive filtering applications to date. Adaptive filtering applications in optical systems mostly rely on either system identification (primarily identification of the interference in order to cancel out its effect) or equalization.
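As a rough illustration of the LMS update described above (not drawn from any particular optical receiver; the channel, tap count and step size are arbitrary assumptions), the following Python sketch adapts a tapped-delay-line equalizer toward the minimum-MSE solution using a training sequence.

```python
import numpy as np

def lms_equalizer(received, training, num_taps=7, mu=0.01):
    """Tapped-delay-line (FIR) equalizer adapted with the LMS algorithm."""
    w = np.zeros(num_taps)                           # tap weights
    out = np.zeros(len(received))
    for n in range(num_taps - 1, len(received)):
        x = received[n - num_taps + 1:n + 1][::-1]   # current and past samples
        out[n] = w @ x                               # filter output
        e = training[n] - out[n]                     # error against the training symbol
        w += mu * e * x                              # stochastic-gradient (LMS) update
    return w, out

# Toy usage: undo a simple two-tap ISI channel acting on a 0/1 (non-zero-mean) sequence
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 5000).astype(float)
rx = np.convolve(bits, [1.0, 0.4])[:len(bits)]       # channel with one postcursor
rx += 0.05 * rng.standard_normal(len(rx))            # additive noise
w, y = lms_equalizer(rx, bits)
print(np.mean((y[-1000:] > 0.5) != bits[-1000:]))    # bit-error rate after convergence
```

Note that the training sequence here is non-zero mean, which, as discussed later in this section, worsens the conditioning of the input and slows LMS convergence.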


Applications in Equalization:
In the equalization, or inverse system identification, problem, the filter is cascaded with the distorting system such that the originally transmitted signal is restored, possibly with a delay. Adaptive equalizers are typically implemented as linear (feedforward or lattice) or nonlinear (decision-feedback) structures, the latter usually in conjunction with a feedforward filter. The decision-feedback equalizer (DFE) subtracts the intersymbol interference (ISI) due to the previously detected bits (postcursors) and can achieve performance close to that of an MLSE or MAP detection scheme with modest implementation complexity. Feedforward and decision-feedback equalizers have been implemented and tested at 10 Gbit/s for mitigation of PMD distortion, the primary source of ISI limiting the transmission rates and distances in installed terrestrial fibre systems, and for reducing the penalty due to chromatic dispersion. These equalizers are based on SiGe and GaAs technology, and the filter coefficients are either user-tuned to minimize the MSE, computed adaptively by LMS, or computed by a gradient descent procedure using a control signal such as eye opening or an error monitor. Bülow presents a comprehensive overview of different approaches for the task with relevant references.
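A minimal decision-feedback structure along these lines is sketched below; the on-off-keyed decision rule, tap counts and step size are illustrative assumptions rather than the parameters of any reported 10 Gbit/s implementation.

```python
import numpy as np

def dfe(received, num_ff=5, num_fb=3, mu=0.01, threshold=0.5):
    """Decision-feedback equalizer: a feedforward filter on the received samples
    plus a feedback filter that subtracts postcursor ISI caused by past decisions."""
    wf = np.zeros(num_ff); wf[0] = 1.0     # start from a pass-through feedforward filter
    wb = np.zeros(num_fb)                  # feedback taps act on previous hard decisions
    past = np.zeros(num_fb)
    decisions = np.zeros(len(received))
    for n in range(num_ff - 1, len(received)):
        x = received[n - num_ff + 1:n + 1][::-1]
        y = wf @ x - wb @ past             # cancel ISI from already-decided bits
        d = 1.0 if y > threshold else 0.0  # hard decision, assuming on-off keying
        e = d - y                          # decision-directed error
        wf += mu * e * x                   # LMS updates of both filters
        wb -= mu * e * past
        past = np.roll(past, 1); past[0] = d
        decisions[n] = d
    return decisions
```

In a practical receiver the adaptation would normally begin with a known training sequence before switching to the decision-directed mode shown here.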

Applications in System Identification:


The deterministic nature of the acoustic effect is used to introduce a fixed feedforward filter that predicts the amount of timing jitter as a function of the previously transmitted bits. This information is then used to adjust the sampling period of the received soliton pulses. The same structure is used to correct for timing jitter due to cross-phase modulation in dispersion-managed WDM-RZ systems, this time predicting the interference (time shifts in the pulses) due to collisions with pulses in other channels. In another interference cancellation example, the feedforward filter is used to identify the multipath distortion in multi-mode fibers, which is then subtracted from the received signal to cancel out the multipath effect due to differential mode dispersion. The approach can be used in a number of interference cancellation problems, such as correcting for power leakage from time- or wavelength-demultiplexed neighboring channels. It is important to note that, if some prior information about the physical impairment (e.g. measurements of the channel characteristics) is available, it can be incorporated into the design to simplify the determination of the filter parameters, and when it is of a deterministic nature, the filter can be a simple fixed filter.

PERFORMANCE OF ADAPTIVE FILTERS

To study the performance of different equalizers, consider the following first-order PMD channel impulse response:

h(t) = γ δ(t) + (1 − γ) δ(t − τ)

where γ represents the ratio of signal strengths in the two principal states of polarization and τ the differential group delay (DGD). The frequency response of the PMD channel based on this model is given by

|H(f)|² = 1 − 4γ(1 − γ) sin²(πfτ)

and has zeros at the frequencies f_k = (2k + 1)/(2τ), k ∈ Z, when γ = 0.5. The spectral dip exists for other values of γ, approaching a flat response (no distortion) as γ tends to 1 (or 0), and its location depends on the value of the DGD, moving into the signal spectrum and introducing severe distortion when the DGD is large (for τ > 50 ps in a 10 Gbit/s system). This property exhibits itself as a penalty pole for the PMD equalizers at γ = 0.5 [3, 6]. The feedforward equalizer places a large gain in the vicinity of the spectral null to compensate for the distortion, hence amplifying the additive noise in the system, which is observed for the optical-noise-dominated system. The DFE, on the other hand, can compensate for the distortion without significant noise amplification when the PMD is severe.

Another impact of the spectral null of the channel is on the statistics of the detected signal, the input to the equalizer. High PMD distortion (when γ is close to 0.5 and the DGD is large) introduces strong correlation into the signal, which in turn slows down the convergence of gradient-descent-type minimization schemes, such as the LMS. Also, in optical systems, bipolar signal transmission formats are seldom used, hence the transmitted signal is non-zero mean. The optical noise is non-zero mean and includes a signal-dependent term because of the use of direct detection at the receiver. All these facts contribute to the increased eigenvalue spread of the signal and introduce a bias into the estimates of the optimal minimum-MSE coefficients. Processing the signal prior to the equalizer by subtracting its mean is one way to improve the conditioning of the input signal, and hence of the gradient-descent-based estimation of the filter coefficients. It is also important to note that the signal-dependent term does not have a significant effect on the overall performance of the adaptive filter, as it is zero mean.

The selection of modulation formats and the receiver filter characteristics also plays an important role in the performance of adaptive equalizers. PMD distortion introduces significant ISI into non-return-to-zero (NRZ) pulses, while for return-to-zero (RZ) pulses the amount of ISI introduced is usually limited, except for RZ pulses with larger duty cycles. The adaptive filter optimizes its coefficients to minimize the ISI at its output and hence works more effectively for NRZ signals, and typically needs fractionally-spaced samples for RZ signals, especially when the pulses are narrow. It should also be noted that the typical receiver filter has a bandwidth smaller than the signal spectrum, introducing additional ISI into the signal, especially for NRZ pulses. Thus, as the interplay between the noise and the additional ISI introduced into the system is taken into account for receiver design, the implications for the following processing steps, such as adaptive filters, should be carefully accounted for as well. It is important to remember that an equalizer primarily relies on the ISI to optimize its coefficients and that its performance will be affected by the nature of the noise present in the system; in some instances noise actually helps the filter performance, e.g. by improving the conditioning of the input signal.
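The spectral-null behaviour described above is easy to reproduce numerically; the sketch below evaluates |H(f)|² for the first-order PMD model with a few illustrative values of γ and a 50 ps DGD.

```python
import numpy as np

# First-order PMD model from the text: h(t) = gamma*delta(t) + (1-gamma)*delta(t - tau),
# giving |H(f)|^2 = 1 - 4*gamma*(1-gamma)*sin^2(pi*f*tau).
def pmd_mag_squared(f, gamma, tau):
    return 1.0 - 4.0 * gamma * (1.0 - gamma) * np.sin(np.pi * f * tau) ** 2

tau = 50e-12                         # 50 ps DGD: the severe-distortion regime mentioned above
f = np.linspace(0.0, 20e9, 1001)     # frequencies up to 20 GHz
for gamma in (0.5, 0.3, 0.1):
    print(f"gamma={gamma}: min |H(f)|^2 = {pmd_mag_squared(f, gamma, tau).min():.2f}")

# For gamma = 0.5 the response has exact nulls at f = (2k+1)/(2*tau), e.g. 10 GHz,
# which falls inside the spectrum of a 10 Gbit/s signal.
```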

CONCLUSION
Signal processing methods such as adaptive filters have been used successfully in wireline and wireless communications for over three decades to push transmission rates and distances, even in cases of very severe channel distortion. They offer important gains in optical communication systems as well, despite the challenges posed by very high processing speeds. However, it is important to recognize and study the unique characteristics of the optical domain in order to optimize the solutions and, ultimately, to incorporate those characteristics into the designs of the solutions offered. One example is hybrid approaches, i.e., those that divide the processing between the optical and the electrical domains. In [7], a polarization diversity receiver is introduced for mitigation of PMD distortion, and it is shown that such a receiver can provide better power-penalty reduction than an equalizer, without the need to optimize any coefficients as in adaptive filters.

Optical fibre is a medium through which data can be sent and received as light signals. It is used for communication between a transmitter and a receiver.

Radio transmitter communication.


Radio transmitter design is a complex topic which can be broken down into a series of smaller topics. A radio communication system requires two tuned circuits each at the transmitter and receiver, all four tuned to the same frequency.

Abstract
The interface between analog and digital signal processing paths in radio receivers and transmitters is steadily migrating toward the antenna as engineers learn to combine the unique attributes and capabilities of DSP with those of traditional communication system designs, achieving systems with superior and broadened capabilities while reducing system cost. Digital signal processing (DSP) techniques are rapidly being applied to many signal conditioning and signal processing tasks traditionally performed by analog components and subsystems in RF communication receivers and transmitters. The incentive to replace analog implementations of signal processing functions with DSP-based processing includes reduced cost, enhanced performance, improved reliability, ease of manufacturing and maintenance, and operating flexibility and configurability. Technologies that facilitate cost-effective DSP-based implementation include a very large market base supporting high-performance programmable signal processing chips, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and high-performance analog-to-digital and digital-to-analog converters (ADCs and DACs, respectively). The optimum point for inserting DSP in a signal processing chain is determined by matching the system performance requirements to the bandwidth and signal-to-noise ratio (i.e., speed and precision) limitations of the signal processors and the signal converters. In this paper we review how clever algorithmic structures interact with DSP hardware to extend the range and performance of DSP-based processing in RF transmitters and receivers.

One example is a signal transmitter for generating a wideband RF signal for use in, for example, a 60 GHz wireless area network, in which a wideband (e.g., 4 GHz) baseband signal is divided into a number of sub-signals that can be synthesized in parallel, thereby relaxing the requirements of the mixed-signal and RF blocks. This division can be performed either in time or in frequency, and one DAC is used for each sub-band. Where frequency-division multiplexing is used to divide the baseband signal into sub-bands, an additional advantage is afforded: analogue adjustment of the gain in each sub-band is possible, so as to compensate for wideband frequency-selective fading in the channel.

A radio communication system sends signals by radio. The types of radio communication systems deployed depend on technology, standards, regulations, radio spectrum allocation, user requirements, service positioning, and investment. [2]
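A rough sketch of the frequency-division idea, splitting a wideband baseband signal into sub-bands that could each be handed to its own slower DAC and RF chain, is shown below; the number of sub-bands and the FFT-based split are illustrative assumptions, not the architecture of any specific 60 GHz design.

```python
import numpy as np

def split_into_subbands(x, num_bands):
    """Split a complex baseband signal into equal-width frequency sub-bands.
    Each sub-band could then be synthesized by its own DAC and recombined at RF."""
    X = np.fft.fft(x)
    edges = np.linspace(0, len(x), num_bands + 1, dtype=int)
    subbands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xi = np.zeros_like(X)
        Xi[lo:hi] = X[lo:hi]              # keep only this band's spectral bins
        subbands.append(np.fft.ifft(Xi))
    return subbands

# Sanity check: the sub-band signals sum back to the original wideband signal
rng = np.random.default_rng(1)
x = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
bands = split_into_subbands(x, num_bands=4)
print(np.allclose(sum(bands), x))         # True: the division is lossless
```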

The radio equipment involved in communication systems includes a transmitter and a receiver, each having an antenna and appropriate terminal equipment such as a microphone at the transmitter and a loudspeaker at the receiver in the case of a voice-communication system.

Transmitter
A transmitter is an electronic device which, usually with the aid of an antenna, propagates an electromagnetic signal such as radio, television, or other telecommunications.

Modulation

Figure: an audio signal may be carried by an AM or FM radio wave.

Amplitude modulation
Amplitude modulation is a technique used in electronic communication, most commonly for transmitting information via a radio carrier wave. AM works by varying the strength of the transmitted signal in relation to the information being sent. For example, changes in the signal strength can be used to reflect the sounds to be reproduced by a speaker, or to specify the light intensity of television pixels. (Contrast this with frequency modulation, also commonly used for sound transmissions, in which the frequency is varied, and phase modulation, often used in remote controls, in which the phase is varied.) In the mid-1870s, a form of amplitude modulation, initially called "undulatory currents", was the first method to successfully produce quality audio over telephone lines. Beginning with Reginald Fessenden's audio demonstrations in 1906, it was also the original method used for audio radio transmissions, and it remains in use today in many forms of communication. "AM" is often used to refer to the mediumwave broadcast band (see AM radio).

DTH Communication.
An overview of satellite direct-to-home (DTH) digital television in the Americas is presented, including history, service applications, and a reference architecture identifying key system building blocks. Satellite DTH's relationship to and differences from terrestrial ATSC are highlighted. The paper concludes with notes on the technology evolutions that allowed the introduction of digital DTH satellite service and contribute to its continued growth today.

Angle modulation
Angle modulation is a class of analog modulation. These techniques are based on altering the angle (or phase) of a sinusoidal carrier wave to transmit data, as opposed to varying the amplitude, such as in AM transmission.

Frequency modulation
Frequency modulation (FM) conveys information over a carrier wave by varying its frequency (contrast this with amplitude modulation, in which the amplitude of the carrier is varied while its frequency remains constant). In analog applications, the instantaneous frequency of the carrier is directly proportional to the instantaneous value of the input signal. Digital data can be sent by shifting the carrier's frequency among a set of discrete values, a technique known as frequency-shift keying. FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech (see FM broadcasting). Normal (analog) TV sound is also broadcast using FM. A narrowband form is used for voice communications in commercial and amateur radio settings. The type of FM used in broadcasting is generally called wide FM (W-FM). In two-way radio, narrowband FM (N-FM) is used to conserve bandwidth. In addition, it is used to send signals into space.
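The relationship between the message and the instantaneous carrier frequency can be sketched in the same way; in the example below (illustrative frequencies and deviation constant), the phase is obtained by numerically integrating the instantaneous frequency.

```python
import numpy as np

fs = 100_000                      # sample rate (Hz), illustrative
t = np.arange(0, 0.01, 1 / fs)
fc, fm = 10_000, 500              # carrier and message frequencies (Hz)
kf = 2_000                        # frequency deviation per unit message amplitude (Hz)

message = np.cos(2 * np.pi * fm * t)
# Instantaneous frequency is fc + kf*message; integrate it to obtain the carrier phase.
phase = 2 * np.pi * np.cumsum(fc + kf * message) / fs
fm_signal = np.cos(phase)
```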

Satellite contribution
A Satellite contribution link or service is a means to transport video programming by a satellite link from a remote source (such as an outside broadcast unit) to a broadcaster's studio or from the studio to a satellite TV uplink centre (for onward distribution by DTH, cable etc).

Figure: signal path for a satellite contribution link from a broadcaster's studio to DTH viewers.

Such contribution links are often made by terrestrial connections (landline, fibre, etc.), but the use of a satellite "hop" provides advantages in some situations. Satellite operators and third-party agencies provide satellite contribution links for the occasional or regular use of client broadcasters.

Phase modulation
Phase modulation is a form of modulation that represents information as variations in the instantaneous phase of a carrier wave. Unlike its more popular counterpart, frequency modulation (FM), PM is not very widely used. This is because it tends to require more complex receiving hardware, and there can be ambiguity problems in determining whether, for example, the signal has 0° or 180° phase. Radio transmitters are used to transmit radio waves to receivers in communication systems.

Advantages
In remote locations, using terrestrial links such as fibre is prohibitively expensive whereas satellite can cheaply and easily overcome the "first-mile" connectivity gaps in rural and other remote areas. A comparable fibre service would have to use extremely diverse routing to achieve the same availability.

A single satellite link can span a huge distance that would take a terrestrial link through many countries and commercial operators. The satellite operator provides a single point of accountability, whereas establishing a link and resolving service interruptions with fibre can prove difficult, especially across national borders and with multiple carriers. Duplication of active components at the transmission and reception sites, and in-orbit backup satellite capacity, provide a fully redundant contribution connection. Monitoring of the signal allows for fast and effective changes before problems affect the service; for example, uplink power may be automatically increased during adverse weather conditions. SES Astra, for example, provides permanent delivery of live and recorded TV and radio signals to the company's Luxembourg uplink facility (used for 15 satellites, serving over 120 million viewers [1]) from almost any location across Europe [2].



Technology
Whereas satellite contribution links may be provided using transmission in Ku-band (or even C-band) frequencies, it is increasingly common to use the higher frequency Ka-band uplink and downlink for the contribution feed, as that band is relatively unused. Ka-band provides for a smaller contribution uplink dish size (typically 1.8m compared to a minimum of 2.4m using Ku-band) and it can also be used as a backup for the DTH uplink itself or when a full DTH uplink antenna (typically 9m) cannot be accommodated at the client's studio. Signals at the studio playout centre or outside broadcast unit are typically MPEG-4 compressed and transmitted in DVB-S2[2] for turnaround uplink to the DTH satellites, without additional processing, although IP-over-satellite transmission may also be used.[3]

Advanced applications
Whereas a contribution link from a studio to a DTH uplink centre is usually provided within a single satellite footprint, the Astra 4A/Sirius 4 satellite provides for an innovative intercontinental contribution link service using a single Ka-band transponder with a European footprint and a Ku-band transponder serving southern Africa. Contribution feeds can be transferred from one region to the other, and from one frequency band to the other, in a single satellite hop.

DTH is used in communication for receiving a particular band of signals or channels at any location, such as a home or an industrial site.

References

ASTRA Info booklet, SES ASTRA, March 2009; Company facts compendium.
http://broadcastengineering.com/newsrooms/optibase-essa-satellite-canal33-news/
http://www.freewebs.com/telecomm/cdma.html (for CDMA)
"CDMA Spectrum", http://www.activexperts.com/asmssrvr/cellular/cdmaspectrum/
W. Wang, T. Adali, W. Xi, and C. R. Menyuk, "On the properties of adaptive equalizers for optical communications", in preparation.
