
ELECTRONIC SHOP

Sec: D

Course Teacher: ARIFUZZMAN, AKM

ASSIGNMENT (Midterm)

Student's Name: Ashraf, Sanowar ID: 08-09864-1

Introduction
Television was not invented or created by any one person. There was no single Eureka moment in the invention of television; instead, there were many such moments for many different people. The ideas and innovations of several people led to the invention of television. At the dawn of television history there were two distinct paths of technology experimented with by researchers. Early inventors attempted either to build a mechanical television system based on the technology of Paul Nipkow's rotating disks, or to build an electronic television system using a cathode ray tube developed independently in 1907 by English inventor A.A. Campbell-Swinton and Russian scientist Boris Rosing. The first generation of television sets was not entirely electronic. The display (TV screen) had a small motor with a spinning disc and a neon lamp, which worked together to give a blurry reddish-orange picture about half the size of a business card! The period before 1935 is called the "Mechanical Television Era". This type of television is not compatible with today's fully electronic television system. Electronic television systems worked better and eventually replaced mechanical systems.

Pre 1935

Mechanical Television Systems

Paul Gottlieb Nipkow


German engineering student Paul Nipkow proposed and patented the world's first mechanical television system in 1884. Nipkow devised the notion of dissecting the image and transmitting it sequentially, and to do this he designed the first television scanning device. He was the first person to discover television's scanning principle, in which the light intensities of small portions of an image are successively analyzed and transmitted. In 1873, the photoconductive properties of the element selenium had been discovered: selenium's electrical conduction varies with the amount of illumination it receives. Nipkow created a rotating scanning-disk camera called the Nipkow disk, a device for picture analysis that consisted of a rapidly rotating disk placed between a scene and a light-sensitive selenium element. The image had only 18 lines of resolution.

Fig: Nipkow Disk

According to R. J. Reiman, author of Who Invented Television, the Nipkow disk was a rotating disk with holes arranged in a spiral around its edge. Light passing through the holes as the disk rotated produced a rectangular scanning pattern, or raster, which could be used either to generate an electrical signal from the scene for transmission or to produce an image from the signal at the receiver. As the disk rotated, the image was scanned by the perforations in the disk, and light from different portions of it passed to a selenium photocell. The number of scanned lines was equal to the number of perforations, and each rotation of the disk produced a television frame. In the receiver, the brightness of the light source was varied by the signal voltage. Again, the light passed through a synchronously rotating perforated disk and formed a raster on the projection screen. Mechanical viewers had serious limitations in resolution and brightness. No one is sure whether Paul Nipkow actually built a working prototype of his television system. It would take the development of the amplification tube in 1907 before the Nipkow disk could become practical. All mechanical television systems were outmoded in 1934 by electronic television systems.
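Because the disk's geometry fixes the picture structure, the scanning arithmetic is simple: one hole traces one line, and one revolution produces one frame. The short Python sketch below only illustrates that relationship; the 18-hole count comes from the text above, while the rotation speed and the function name are illustrative assumptions, not historical specifications.

# Toy model of Nipkow-disk scanning: one spiral hole per scan line,
# one complete frame per revolution of the disk.
# The 18-hole count comes from the text above; the rotation speed is
# an illustrative assumption, not a historical specification.

def nipkow_scan_parameters(holes: int, revolutions_per_second: float):
    """Return (lines per frame, frames per second, lines scanned per second)."""
    lines_per_frame = holes                      # each hole traces one line
    frames_per_second = revolutions_per_second   # one frame per revolution
    lines_per_second = holes * revolutions_per_second
    return lines_per_frame, frames_per_second, lines_per_second

if __name__ == "__main__":
    lines, fps, line_rate = nipkow_scan_parameters(holes=18, revolutions_per_second=12.5)
    print(f"{lines} lines per frame, {fps} frames per second, "
          f"{line_rate:.0f} lines scanned per second")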

John Baird - Mechanical Television System


John Logie Baird was born on August 13th, 1888, in Helensburgh, Dunbarton, Scotland and died on June 14th, 1946, in Bexhill-on-Sea, Sussex, England. Baird completed a diploma course in electrical engineering at the Glasgow and West of Scotland Technical College (now Strathclyde University) and studied towards a Bachelor of Science degree in electrical engineering at the University of Glasgow, interrupted by the outbreak of World War I. Baird is best remembered for inventing a mechanical television system. During the 1920s, John Baird and the American Clarence W. Hansell patented the idea of using arrays of transparent rods to transmit images for television and facsimile, respectively. Baird's 30-line images were the first demonstrations of television by reflected light rather than back-lit silhouettes. Baird based his technology on Paul Nipkow's scanning-disk idea and later developments in electronics.

Fig: John Baird's televised human face

John Baird Milestones


The television pioneer created the first televised pictures of objects in motion (1924) and the first televised human face (1925), and a year later he televised the first moving object image at the Royal Institution in London. His 1928 transatlantic transmission of the image of a human face was a broadcasting milestone. Color television (1928), stereoscopic television and television by infra-red light were all demonstrated by Baird before 1930. He successfully lobbied for broadcast time with the British Broadcasting Company, and the BBC started broadcasting television on the Baird 30-line system in 1929. The first simultaneous sound and vision telecast was broadcast in 1930. In July 1930, the first British television play, "The Man with the Flower in his Mouth", was transmitted. In 1936, the British Broadcasting Corporation adopted a television service using the electronic television technology of Marconi-EMI (the world's first regular high-resolution service, at 405 lines per picture), and it was that technology that won out over Baird's system.

Charles Jenkins
What John Logie Baird did for the development and promotion of mechanical television in Great Britain, Charles Jenkins did for the advancement of mechanical television in North America. He is known as the father of mechanical television in America. Jenkins's association with television began in 1894, when he first described a method to electrically transmit images and pictures. By 1920, Jenkins had developed a device known as the prismatic rings, which became the key component of his radiovisor television kits. This invention was basically a radio with visual capabilities that projected pictures with a resolution of 40 to 48 lines. Charles Jenkins also started North America's first television station, located in Maryland.

Charles Jenkins - Who Was He?


Charles Jenkins, an inventor from Dayton, Ohio, invented a mechanical television system called radiovision and claimed to have transmitted the earliest moving silhouette images on June 14, 1923. Jenkins publicly performed his first television broadcast transmission, from Anacostia, Virginia to Washington, in June 1925. He had been promoting and researching mechanical television since 1894, when he published an article in the "Electrical Engineer" describing a method of electrically transmitting pictures. In 1920, at a meeting of the Society of Motion Picture Engineers, Jenkins introduced his prismatic rings, a device that replaced the shutter on a film projector and an important invention that he would later use in his radiovision system.

Charles Jenkins - Radiovision


Radiovisors were mechanical scanning-drum devices manufactured by the Jenkins Television Corporation as part of its radiovision system. Founded in 1928, the Jenkins Television Corporation sold several thousand sets to the public, costing between $85 and $135. The radiovisor was a multitube radio set with a special attachment for receiving pictures: a cloudy 40-to-48-line image projected onto a six-inch square mirror. Jenkins preferred the names radiovisor and radiovision over television. He also opened and operated North America's first television station, W3XK in Wheaton, Maryland. The short-wave radio station began transmitting across the Eastern U.S. in 1928, with regularly scheduled telecasts of radiomovies produced by Jenkins Laboratories Incorporated. It was said that watching a radiomovie required the viewer to constantly re-tune the broadcast, but at the time watching the blurry moving image was considered an exciting miracle.

Electronic Television Systems


The development of electronic television systems was based on the development of the cathode ray tube (CRT). A cathode ray tube, also known as a picture tube, was found in all electronic television sets up until the invention of the less bulky LCD screens.

Definitions

A cathode is a terminal or electrode at which electrons enter a system, such as an electrolytic cell or an electron tube. A cathode ray is a stream of electrons leaving the negative electrode, or cathode, in a discharge tube (an electron tube that contains gas or vapor at low pressure), or emitted by a heated filament in certain electron tubes. A vacuum tube is an electron tube consisting of a sealed glass or metal enclosure from which the air has been withdrawn. A cathode ray tube or CRT is a specialized vacuum tube in which images are produced when an electron beam strikes a phosphorescent surface. Besides television sets, cathode ray tubes are used in computer monitors, automated teller machines, video game machines, video cameras, oscilloscopes and radar displays.

The first cathode ray tube scanning device was invented by the German scientist Karl Ferdinand Braun in 1897. Braun introduced a CRT with a fluorescent screen, known as the cathode ray oscilloscope. The screen would emit a visible light when struck by a beam of electrons. In 1907, the Russian scientist Boris Rosing (who worked with Vladimir Zworykin) used a CRT in the receiver of a television system that at the camera end made use of mirror-drum scanning. Rosing transmitted crude geometrical patterns onto the television screen and was the first inventor to do so using a CRT. Modern phosphor screens using multiple beams of electrons have allowed CRTs to display millions of colors.

Vladimir Zworykin
"I hate what they've done to my child...I would never let my own children watch it." Vladimir Zworykin on his feelings about watching television. Importance of Kinescope and Iconoscope Russian inventor, Vladimir Zworykin invented the cathode-ray tube called the kinescope in 1929. The kinescope tube was sorely needed for television. Zworykin was one of the first to demonstrate a television system with all the features of modern picture tubes. Zworykin also invented the iconoscope in 1923 - a tube for television transmission used in the first cameras. The iconoscope was later replaced but it laid the foundations for early television cameras. Vladimir Zworykin - Background Vladimir Zworykin was born in Murom, 200 miles east of Moscow, and studied electrical engineering at the Imperial Institute of Technology. Boris Rosing, a professor in charge of laboratory projects, tutored Zworykin and introduced his student to his experiments of transmitting pictures by wire. Together they experimented with a very early cathode-ray tube, developed in Germany by Karl Ferdinand Braun. Rosing and Zworykin exhibited a television system in 1910, using a mechanical scanner in the transmitter and the electronic Braun tube in the receiver. Rosing disappeared during the Bolshevik Revolution of 1917. Zworykin escaped and briefly studied X-rays under Paul Langevin in Paris, before moving to the United States in 1919, to work at the Westinghouse laboratory in Pittsburgh. On November 18, 1929, at a convention of radio engineers, Zworykin demonstrated a television receiver containing his kinescope. Radio Corporation of America Vladimir Zworykin was transferred by Westinghouse to work for the Radio Corporation of America (RCA) in Camden, New Jersey, as the new director of the Electronic Research Laboratory. RCA owned most of Westinghouse at that time and had just bought the Jenkin's Television Company, makers of mechanical television systems, in order to receive their patents (see C. F. Jenkins). Zworykin made improvements to his iconoscope, RCA funded his research to the tune of $150,000. The further improvements allegedly used an imaging section which was similar to Philo Farnsworth's patented dissector. Patent litigation forced RCA to start paying Farnsworth royalties.

Philo Farnsworth
"There's nothing on it worthwhile, and we're not going to watch it in this household, and I don't want it in your intellectual diet." - Philo Farnsworth's feelings about watching television. American engineer, Philo Farnsworth was born on August 19, 1906, on Indian Creek in Beaver County, Utah. His parents expected him to become a concert violinist, but his interests drew him to experiments with electricity. At the age of 12, he built an electric motor and produced the first electric washing machine his family had ever owned. Philo Farnsworth attended Brigham Young University in Utah, where he researched television picture transmission. While in high school, Philo Farnsworth had already conceived of his ideas for television. In 1926, he cofounded Crocker Research Laboratories, which he later renamed Farnsworth Television, Inc. in 1929 (and as Farnsworth Radio and Television Corporation in 1938.) In 1927, Philo Farnsworth was the first inventor to transmit a television image comprised of 60 horizontal lines. The image transmitted was a dollar sign. Farnsworth developed the dissector tube, the basis of all current electronic televisions. He filed for his first television patent in 1927 (pat#1,773,980.) Although he won an early patent for his image dissection tube, he lost later patent battles to RCA. Philo Farnsworth went on to invent over 165 different devices including equipment for converting an optical image into an electrical signal, amplifier, cathode-ray, vacuum tubes, electrical scanners, electron multipliers and photoelectric materials. Philo Farnsworth died on March 11, 1971, in Salt Lake City, Utah

1935-1945

1935
Patent interference between Zworykin and Farnsworth is ruled in favor of Farnsworth, preventing RCA from gaining 100% control of all television patents.

Sarnoff believes that FM will take over AM radio, so he evicts Armstrong from the Empire State Building and announces a million-dollar research and testing program for television. The German Reichspost (Post Office) begins the "first television broadcasting service in the world"; however, the quality is poor and receivers are almost nonexistent.

French television was officially established in 1935, though regular broadcasts did not take place until after the end of World War II. The medium was slow to make an impact on French society and it was not until the late 1950s that a television set began to be perceived as an essential household item. By the end of the 1960s, television viewing had become a routinized part of leisure activity.

Feb 13, 1935 - The first official channel of French television appeared on February 13, 1935, the date of the official inauguration of television in France, which was broadcast in 60 lines from 8:15 to 8:30 pm. The program showed the actress Béatrice Bretty in the studio of Radio-PTT Vision at 103 rue de Grenelle in Paris. The broadcast had a range of 100 km (62 miles).

Mar 22, 1935 - Berlin starts the world's first television broadcasting service.

Aug 1-16, 1936 - The XI Summer Olympics take place in Berlin. During the Games, all of the anti-Semitic placards and slogans in the city are taken down.

Jun 7, 1935 - On June 7, 1935, the new technical standards for the London television service were revealed. Baird Television was to continue with 240-line sequential scan at twenty-five frames per second, the minimum requirement of the Selsdon committee. However, Marconi-EMI was going to use an advanced 405-line interlaced system at twenty-five frames (fifty fields) per second. This came as a complete surprise to everyone else experimenting in television, including RCA.

1936

RCA gives its first television demonstration in four years. The system is all-electronic: 343 lines, 30 frames per second. Farnsworth is also transmitting experimental television programs, with the same specifications, at Wyndmoor, Pennsylvania. Both RCA and Farnsworth can monitor each other's signals and progress. SUMMER: The Berlin Olympics are televised by Telefunken (using RCA equipment) and Fernseh (using Farnsworth equipment).

FALL: Television demonstrations given at the 1936 Radiolympia in London. There was attempted sabotage at this event.

NOVEMBER 2nd: The BBC begins a 2-year Baird-EMI competition, broadcasting from Alexandra Palace. This is heralded as the "world's first, public, regular, high-definition television station". NOVEMBER 30th: Fire destroys the Baird labs at Crystal Palace.

1937
FEBRUARY: The BBC announces EMI as the winner of the Baird-EMI competition. Both the coronation of King George VI and the Wimbledon tennis tournament are televised in England; 9,000 sets are sold. France begins construction of the world's most powerful TV transmitter in the Eiffel Tower. At this time there are 18 experimental stations in the USA. CBS begins its TV development. The BBC begins high-definition broadcasts in London. Brothers and Stanford researchers Russell and Sigurd Varian introduce the klystron. (A klystron is a high-frequency amplifier for generating microwaves. It is considered the technology that makes UHF TV possible because it can generate the high power required in this spectrum.)

1938
The Farnsworths buy a farm in Maine. JUNE: RCA announces the Image Iconoscope, which is almost 10 times more sensitive to light than the previous Iconoscope. The RMA (Radio Manufacturers Association) recommends a 441-line, 30 fps standard (RCA's standard at that time) to the FCC. Philco and Zenith are dead-set against this; they feel it is too early to launch commercial television and have differing standards. OCTOBER: Sarnoff announces that he will begin television broadcasting at the New York World's Fair in April of 1939.

1939
Vladimir Zworykin and RCA conduct experimental broadcasts from the Empire State Building. Television is demonstrated at the New York World's Fair and the San Francisco Golden Gate International Exposition. RCA's David Sarnoff uses his company's exhibit at the 1939 World's Fair as a showcase for the first Presidential speech (Roosevelt) on television and to introduce RCA's new line of television receivers, some of which had to be coupled with a radio if you wanted to hear sound. The DuMont company starts making TV sets. Approximately 19,000 television sets are operating in England -- fewer than a few hundred in the USA.

1940

The year 1940 looked promising at first to the television industry. But television sets were expensive, there was little programming, and with the prospect of world war and uncertainty over jobs, few sets were sold. RCA had launched its TRK-12 in April 1939 at $600 (about $7,000 in today's money) and quickly reduced the selling price to $395 (about $4,500) early in 1940. Still, sales lagged. RCA also released a modified TRK-12, called the TRK-120: the bottom edge of the cabinet had a continuous strip of black instead of a gap in the middle, and the 'magic eye' tuning tube was removed from the radio. In June, RCA and Philco televise the Republican Convention from Philadelphia. A 33-year-old Peter Goldmark announces to the NTSC that CBS has marketable color technology, consisting of a part-electronic, part-mechanical spinning color wheel system.

1941

The year 1941 was even more dismal than 1940 for makers of television sets. Although some of the trade articles were positive and upbeat, the reality of the situation was that no one was buying the sets.

Broadcasting continued, with a few hours in the late afternoon and evening. No new sets were designed or built. In March, the NTSC recommended that 525 lines and 30 frames per second be adopted as the standard in the USA, in place of the existing 441-line standard launched in 1939.

July 1st -- Commercial broadcasting finally authorized by the FCC to start on this date. NBC begins with a 10 second "Bulova" (watch) commercial. This first commercial, which simply showed the face of a watch, gave the network a profit of $7.00. CBS, DuMont and others start commercials in the Fall.

December 7th -- Pearl Harbor bombed. CBS televises news of the attack. World War II begins for the US.

1942

The United States is engaged in World War II.

All commercial production of television equipment is banned for the remainder of the war.

NBC's commercial TV schedule is canceled. However, television is allowed to continue broadcasting on a very limited basis at some stations. In England, ALL broadcasting comes to a complete halt until June 7, 1946.

1943

Vladimir Zworykin developed a better camera tube called the Orthicon. The Orthicon had enough light sensitivity to record outdoor events at night.


Fig: Orthicon

1945-1955

RCA begins production of the 630-TS, the first television designed and manufactured after the war. Approximately 10,000 units are sold by the end of the year, and about 43,000 of this model are sold before production ends in 1949. Other manufacturers used the RCA chassis and placed it in a cabinet of their own design. The initial RCA selling price was $352.

Dimensions: 26" wide x 15" high x 19" deep. 1946 Selling Price: $435 initially, $385 Later

Fig: 1947 Baird Large-Screen Direct-View Television Set

1948 GE Model 805 -- 10 inch screen -- Bakelite Case -- Called the "Locomotive" for its sweeping back style

Colour Television System


Peter Goldmark, working for CBS, demonstrated his color television system to the FCC. His system produced color pictures by having a red-blue-green wheel spin in front of a cathode ray tube. This mechanical means of producing a color picture was used in 1949 to broadcast medical procedures from Pennsylvania and Atlantic City hospitals. In Atlantic City, viewers could come to the convention center to see broadcasts of operations. Reports from the time noted that the realism of seeing surgery in color caused more than a few viewers to faint. Although Goldmark's mechanical system was eventually replaced by an electronic system, he is recognized as the first to introduce a broadcast color television system.

INVENTION OF COLOR TELEVISION


By 1949, monochrome television had become a commercial success: 10 million sets had been sold, and programs were available to the general public. A change to color television would only be licensed if the color broadcast signal could also be received as a monochrome signal on these sets, which greatly complicated the technology. RCA was the leading company in the television field, with CBS a distant second, but CBS advocated the field sequential color system, which utilized a rotating disk on which red, green and blue color filters were mounted. The system was not "compatible" with the monochrome requirement cited above, but it was practical, especially under laboratory conditions, and it sidestepped the fact that a color television tube able to integrate three channels of signals (dot sequential) had not yet been invented. CBS was led by William S. Paley, who was opposed by RCA's David Sarnoff; CBS executives Frank Stanton and Peter Goldmark were opposed by RCA's Elmer Engstrom and George H. Brown. RCA had a technical staff edge, more development funds, and the virtually unlimited determination of Sarnoff to make RCA's dot sequential color system the winner. The field sequential system (see Figure 1) displayed red, green, and blue television images in sequence and depended upon the retentivity of the eye to merge these into a single color picture. If, however, flicker and picture sharpness were to be maintained at the level of monochrome television, a field sequential broadcast signal would require three times the bandwidth of monochrome. A compromise, or trade-off, was reached: the bandwidth was increased from 4 to 5 MHz, the number of frames was reduced from 30 to 20 per second, and the scanning lines were reduced from 525 to 343. RCA labeled this system "mechanical", which was true of the color tube system only. RCA's dot sequential approach to solving the bandwidth limitation (see Figure 2) was one proposed by Alda Bedford of RCA: the use of "mixed highs." This relied on the eye's relative insensitivity to fine detail in color, the portion of the picture that requires the transmission of higher-frequency components. Bedford proposed that these components be separated from the three color signals, mixed, and then added to the green signal. The bandwidth of the red and blue signals could then be reduced substantially.

Another addition, the use of a burst (a train of 8 cycles of a sine wave) added to the color signal, provided solid synchronism between the camera source and the receiver and overcame noise that would otherwise cause instability. Field tests brought about the change of colors to orange-red and blue-green, to take advantage of the eye's insensitivity to fine detail in the blue-green region, thereby narrowing the blue-green band. CBS won approval of its field sequential system and started expensive color broadcasts. However, the sets did not sell and had no audience. CBS was able to back out gracefully thanks to the intervention of the Korean War and the ban on strategic materials.
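The bandwidth trade-off described above can be illustrated with a rough back-of-the-envelope calculation. The Python sketch below assumes that video bandwidth scales roughly with the square of the line count times the frame rate, and that a field sequential signal needs that figure for each of the three colors. This is only an illustrative proportionality, not the actual CBS or RCA engineering analysis; the 525-line, 30-frame, 4 MHz reference values and the compromise parameters are taken from the text, and the function name is ours.

# Back-of-the-envelope comparison of the bandwidth trade-off described above.
# Assumes video bandwidth scales roughly with (scan lines)^2 x (frames per second),
# and that a field-sequential signal needs that figure for each of the three colors.
# This is an illustrative proportionality, not the actual CBS/RCA engineering analysis.

MONO_LINES, MONO_FPS, MONO_BW_MHZ = 525, 30, 4.0   # monochrome reference from the text

def relative_bandwidth(lines: int, fps: float, colors: int = 1) -> float:
    """Bandwidth relative to the monochrome reference, under the scaling assumption."""
    return colors * (lines / MONO_LINES) ** 2 * (fps / MONO_FPS)

# Field-sequential color at full monochrome sharpness and flicker performance:
full_quality = relative_bandwidth(MONO_LINES, MONO_FPS, colors=3)
print(f"Full-quality field sequential: ~{full_quality:.1f}x monochrome "
      f"(~{full_quality * MONO_BW_MHZ:.0f} MHz)")          # the 'three times' figure

# The compromise cited above: 343 lines, 20 frames per second, still three colors.
compromise = relative_bandwidth(343, 20, colors=3)
print(f"Compromise parameters: ~{compromise:.2f}x monochrome "
      f"(~{compromise * MONO_BW_MHZ:.1f} MHz under this crude scaling, "
      f"within the widened 5 MHz channel)")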
Figure 1 - The Field Sequential Color System. (Top) Operation of the field sequential color system; (Bottom) color transmission sequence in field sequential systems.

The basic operation of the field sequential color system is shown above. Light from the scene passes through a rotating disk on which red, green and blue color filters are mounted. Thus a camera tube is exposed in sequence to the red, green and blue color components of the scene. A disk at the receiver, similarly equipped with color filters, rotates in synchronism so that the light from the kinescope passes through the red filter, for example, while the camera tube is being exposed to red light from the scene.

The bottom portion of Figure 1 shows the color sequence in successive fields of the color signal as proposed by CBS in the 1949 hearing. Note that only two colors are included in each frame; for example, frame 1 has red odd lines and green even lines. Six fields, or 1/8 second, were required to scan all lines in all three colors. This caused fast-moving white objects to exhibit color break-up -- that is, to appear as a series of colored objects.

Figure 2 - Principle of the dot sequential system.

Red, green and blue color signals are produced continuously and simultaneously. These signals are then sampled in sequence at a rapid rate, nominally 3.6 MHz. The output of the sampling process is a series of pulses, each having an amplitude proportional to the amplitude of the corresponding color signal at that point in the picture. This signal produces a series of tiny (approximately 0.03" wide) colored dots on a tricolor kinescope. These are perceived by the eye as a single color with a hue determined by the relative amplitudes of the red, green and blue pulses at that point.
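The sampling idea behind the dot sequential system can be sketched in a few lines of Python. The 3.6 MHz sampling rate comes from the description above; the three sine-wave "color signals" and all of the names in the sketch are arbitrary stand-ins used only to show the round-robin sampling, not anything from an actual RCA design.

# Minimal sketch of the dot-sequential sampling idea described above: three
# simultaneous color signals are sampled in rotation, producing one pulse per
# sample whose amplitude follows the color being sampled at that instant.
# The signal shapes are arbitrary stand-ins; only the 3.6 MHz sampling figure
# comes from the text.
import math

SAMPLE_RATE_HZ = 3.6e6  # nominal sampling rate cited above

def red(t):   return 0.5 + 0.5 * math.sin(2 * math.pi * 15e3 * t)   # toy scene content
def green(t): return 0.5 + 0.5 * math.sin(2 * math.pi * 10e3 * t)
def blue(t):  return 0.5 + 0.5 * math.sin(2 * math.pi * 5e3 * t)

def dot_sequential(num_samples: int):
    """Yield (time, color name, pulse amplitude) for a round-robin R, G, B sampler."""
    sources = [("R", red), ("G", green), ("B", blue)]
    for n in range(num_samples):
        t = n / SAMPLE_RATE_HZ
        name, signal = sources[n % 3]        # rotate through the three color channels
        yield t, name, signal(t)

if __name__ == "__main__":
    for t, name, amplitude in dot_sequential(9):
        print(f"t = {t * 1e6:5.2f} us  {name}  amplitude = {amplitude:.3f}")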

The introduction of color television using the CBS Field Sequential Color System had been a commercial failure in 1951, and the intervention of the Korean War, and prohibition on production of color television sets, let CBS gracefully withdraw.

The FCC withdrew its approval of the field sequential system and, after further development and hearings, approved the National Television System Committee's recommendation, based on RCA's dot sequential color system, with commercial broadcasting authorized on January 22, 1954.

The development of the tricolor tube, or kinescope, and a matching receiver was necessary to implement the dot sequential system. According to George Brown of RCA, the development of the successful color kinescope in a six-month period was a triumph seldom equaled. The tube was based upon the shadow mask principle. Red, green, and blue phosphor dots are deposited in triads on the inside surface of the kinescope faceplate. The aperture or shadow mask, a perforated metal plate with one perforation for each triad, is mounted just behind the faceplate. A cluster of three electron guns located in the neck of the kinescope is positioned so that the beams converge at the mask. The beam from each gun impinges on the dot of a single color because of the slight off-axis angle at which it passes through each perforation.

The picture quality of the color tube prototypes was surprisingly good, but the method of depositing the color dots and their low brightness were early problems. The solution was to license the patent used by CBS, which in turn had developed it from an earlier RCA concept that RCA had rejected. Brightness and contrast ratio (the ratio of highlights to shadows) were improved by advances in external circuitry. Further improvements were made by Zenith's development of a flat tension mask tube in 1986, and by Sony's Trinitron, which uses vertical strips rather than dot triads.

The first color television receivers had small screens, performed poorly, and carried high prices. RCA's twelve-inch model CT-100, in 1954, utilized phosphor dots deposited on an internal glass plate and had marginal brightness, even though it cost $1,000. The revised model was twenty-one inches, with dots deposited on the inside of the faceplate, and was superior, though far short of today's models. The introduction of transistors reduced size and power consumption and improved reliability. Integrated circuits incorporating digital signal processing further improved picture quality.

From 1954 to 1964, color television sales stagnated, and David Sarnoff, whose RCA was the dominant company, had high costs but small sales and revenue. Between 1964 and 1985, annual sales rose from one million to nineteen million sets, and Sarnoff gloried in the success as a marvelous climax to his career.

Cable Television

Cable television, formerly known as Community Antenna Television or CATV, was born in the mountains of Pennsylvania in 1948. Community antenna television (now called cable television) was started by John Walson and Margaret Walson in the spring of 1948. The Service Electric Company was formed by the Walsons in the mid-1940s to sell, install, and repair General Electric appliances in the Mahanoy City, Pennsylvania area. In 1947, the Walsons also began selling television sets. However, Mahanoy City residents had problems receiving the three nearby Philadelphia network stations with local antennas because of the region's surrounding mountains. John Walson erected an antenna on a utility pole on a local mountain top, which enabled him to demonstrate the televisions with good broadcasts coming from the three Philadelphia stations. Walson connected the mountain antenna to his appliance store via a cable and modified signal boosters. In June of 1948, he connected the mountain antenna to both his store and several of his customers' homes located along the cable path, starting the nation's first CATV system. John Walson has been recognized by the U.S. Congress and the National Cable Television Association as the founder of the cable television industry. He was also the first cable operator to use microwave to import distant television stations, the first to use coaxial cable for improved picture quality, and the first to distribute pay television programming (HBO). Source: Service Electric Cablevision, Inc., with special thanks to Rob Ansbach.

Louis W. Parker - Television Receiver

On September 7, 1948, a patent was granted to Louis W. Parker for a television receiver. Parker's "intercarrier sound system" is now used in all television receivers in the world; without it, television receivers would not work as well and would be more costly. Parker was inducted into the Inventors' Hall of Fame in 1988 for his invention of the intercarrier sound system for television sets, the modern basis for coordinating sound and picture in the television receiver. Louis W. Parker: born January 1, 1906; died June 21, 1993. Television Receiver Patent No. 2,448,908.

1955-1965

1956
Ampex introduces the first practical videotape system of broadcast quality. Robert Adler invents the first practical remote control, the Zenith Space Command. It was preceded by wired remotes and units that failed in sunlight.

1960
The first split-screen broadcast occurs during the Kennedy-Nixon debates.

1962
The All Channel Receiver Act requires that UHF tuners (channels 14 to 83) be included in all sets. AT&T launches Telstar, the first satellite to carry TV broadcasts - broadcasts are now internationally relayed.

Fig: 1960 Philco "Continental"

Model 4370 - Danish-style mahogany wood cabinet. This 21" television was the last model offered in the innovative "Predicta" series. It was released to the marketplace with little advertising (due to budget constraints); consequently, few were sold. The overall series had reliability problems, foremost being the specially designed 'short-neck' picture tube.

1965-1975

1967
Most TV broadcasts are in color.

1969
July 20: the first TV transmission from the Moon; 600 million people watch.

1972
Half the TVs in homes are color sets.

1973
Giant screen projection TV is first marketed.

Fig: 1974 Zenith Console

Fig: 1973 Philco-Ford - Model B450ETG - one of the last 'vacuum tube' sets. It was in this period that the American television set industry migrated to a transistorized TV chassis.

1975-1985

1976
Sony introduces Betamax, the first home video cassette recorder.

1978
PBS becomes the first network to switch to all-satellite delivery of programs.

1981 - 1,125 Lines of Resolution


NHK demonstrates HDTV with 1,125 lines of resolution.

1982
Dolby surround sound for home sets is introduced.

1983
Direct Broadcast Satellite service begins in Indianapolis, Indiana.

1984
Stereo TV broadcasts approved.

Interactive Television and the NHK HDTV System

A traditional television broadcast is a one-way, passive medium. People gathered around the TV set as they had gathered around the radio, much like their ancestors had gathered around campfire storytellers. Once a person or a family tuned to a specific channel, they were likely to stay tuned to it for a while, sometimes for an entire evening. Advertisers therefore relied on passive advertisements promoting brands rather than specific products. The concept of viewers interacting with the television was not foreign to broadcast television pioneers. From 1953 to 1957, the CBS television network broadcast the regular children's series, Winky Dink and You, which may have been the first truly interactive TV program [WINKEY-DINK]. The interaction was created through the use of a plastic sheet that was placed on the TV screen, and held in place by static electricity, created by rubbing the screen with a special cloth.

The goal of the game was for the audience children to help the Winky Dink cartoon character overcome challenges it encountered. For example, when Winky Dink was chased by a tiger to the edge of a cliff, the announcer would ask the audience to draw a bridge on the screen allowing Winky Dink to escape the tiger. Children did experience some limited interaction with the content, as their actions were a direct response to the events in the program, and on many occasions subsequent events on the screen were seemingly produced in response to their actions. Although children were excited by this show, some used crayons to draw directly on the glass of the TV screen, and parental complaints finally convinced CBS to cancel the series. Nevertheless, the concept would not die.

One key form of interactivity between people is conversation. The potential of such interactivity was demonstrated by Sylvester (Pat) Weaver, who had made broadcast TV history in the 1950s as the head of NBC. He aired The Tonight Show with Steve Allen, and the Today show with Dave Garroway and his sidekick chimp, J. Fred Muggs. These two live talk shows helped make the TV set popular; since then, talk shows have become the "killer app" for television. After leaving NBC, Pat Weaver went on to launch Subscription Television (STV) in July 1964. The three-channel coaxial cable network in Los Angeles and San Francisco offered an interactive movie channel, a cultural events channel, and a sports channel, long before HBO, A&E, or ESPN, and long before niche programming was envisioned. A connection fee of $5 was accompanied by a weekly $1 charge, and special programming could be viewed at 50 cents to $2.50 per selection. By November 1964, STV had wired 6,000 homes -- not bad for four months of work.

STV's success threatened to change the industry. Theaters and broadcasters had been fierce rivals ever since television started keeping people home, but with the advent of STV they faced a common threat. They joined forces to organize a November 1964 ballot initiative to "save free TV" by outlawing pay TV in California. The referendum passed; the courts eventually ruled the measure unconstitutional, but only after STV had exhausted its cash reserves, causing it to fail as well. STV's success did, however, trigger a slow and irreversible transformation toward cable-based TV service, which today is dominated by large multiple (cable) system operators (MSOs). At age 85, Weaver told Cablevision, "In the market economy, those already in one business and doing it a certain way will fight against anybody who wants to come into their league and be competitive with them. And if they can put them out of business before they start, they will."

The first major iTV service utilizing a cable infrastructure in the United States was Warner Communications' Qube, which began in Columbus, Ohio, in December 1977 and ran until 1984 (before the Mac days!). The Qube system consisted of a set-top box with a computer chip and some memory that kept tabs on customers' preferences. Qube offered 30 channels of television (as opposed to the 20 that were standard at the time), divided equally between broadcast, pay-per-view, and original interactive channels. The viewer accessed Qube programming with a proprietary remote control that was connected by wire to the set-top box and used to select channels, order pay-per-view movies, and respond to interactive programming.
The buttons to respond to interactive programming could be assigned different meanings for different shows. For example, the buttons could be used to poll the audience, respond to questions on live talk shows, answer questions on a quiz show, play interactive games, or purchase goods and services. Although Qube's innovative programming was quite popular, it was not a sustainable business model, because it was overshadowed by VCRs, video movies, and video stores, which were rapidly becoming popular.

On a different continent, in 1979 the British government offered Teletext, which allowed BBC viewers to trade text messages via the telephone. Teletext used the Vertical Blanking Interval (VBI) between lines of video to transmit data, which was displayed on the screen as a page of text. The UK's Teletext model was adopted by more than a dozen different countries and still survives today. Subsequently, in the 1980s, TCI and Time Warner experimented with their versions of interactive television in some test markets. Subscribers to their services could shop online, play games with people across town, and do a lot of the things we dreamed interactive TV should offer, and most of the testers found the service very useful. Costs were high and functionality was limited, however, mainly due to the dependence on a slow external network connection and the flooding of servers by simultaneous traffic for each interactive program. As a result, neither TCI nor Time Warner could cover the costs of operating the service while keeping prices reasonable for the consumer.

In 1994, Time Warner tested a sophisticated and expensive interactive television service in Orlando called the Full Service Network (FSN). FSN offered interactive shopping, games, sports, news, and an electronic program guide, as well as movies on demand. FSN was incredibly complex. File servers stored movies and other content in digital form, and Asynchronous Transfer Mode (ATM) switches were used to transfer the data to a set-top box at a speed of 30 pictures per second. The box itself had five times the computing power of a top-of-the-line PC. Although FSN had tremendous potential, several issues led to its demise. Time Warner attempted to do too much too fast and learned that the complexity of integrating all the services was overwhelming. The reliance on an expensive network infrastructure external to the receiver introduced overhead and infrastructure expenses that surpassed the market opportunity.

In 2000, several Web TV (now MicrosoftTV) tests were conducted in various locations, including Baltimore and San Diego. Viewers were able to select the Education button on the screen and see Web-like text and graphics related to Nova, or select Lifestyle or MotorWeek pages. NewsHour, Zoom, Mister Rogers' Neighborhood, and local productions of Maryland Public Television (PTV) and KPBS were also options in various content categories. These efforts were coordinated with Liberate Technologies of San Carlos, California, a leading designer of software for cable set-top boxes.

Two examples of old interactive TV services that have survived for decades are captioning [CAPTIONS] and teletext [TELETEXT]. Both of these services rely on local interactivity and do not rely on any external network infrastructure. The survival of these two services may indicate that a solid business model for iTV is broadcasting interactivity over the air, utilizing local interactivity, rather than remote interactivity that generates network traffic and relies on cable infrastructure.

History of HDTV Format

Independent of offering interactivity, digital TV offers new formats. Today, the following message is common:

"This film has been modified from its original version. It has been formatted to fit your screen." When the studio released the home version of movies they had to cut the sides off the images so that it would fit your TV screen. Our televisions use a different aspect ratio than widescreen movies. The aspect ratio of most old TVs is 4:3, which means it's a little wider than it is tall. For every 4 units of width, our television screens stretches out 3 units of height. For example, if the width of the screen is 20 inches, its height is about 15 inches (20:15 or 4:3). This format was originally developed by W. K. L. Dickson in 1889 while he was working at Thomas Edison's laboratories. Dickson was experimenting with a motion-picture camera called a Kinescope, and he made his film 1 inch wide with frames 3/4 inch high. This film size, and its aspect ratio, became the standard for the film and motion-picture industry because there was no apparent reason to change. In 1941, when the NTSC proposed standards for television broadcasting, they adopted the same ratio as the film industry. It made sense until 50 years ago. In the 1950s Hollywood wanted to give the public a reason to buy a ticket instead of stay home and watch their sets. They tried a lot of ideas, some good and some bad, but one idea that still works today is the widescreen format. Wider screens, such as Cinerama, Cinemascope, and VistaVision, give the theater audience a more visually engulfing experience. Because our two eyes give us a wider view, a wider movie makes more sense. This concept was very simple and powerful. Today, the High Definition TV (HDTV) industry is adopting it for the same reason that Hollywood adopted it in the 1950's: encourage the public to upgrade their TV sets. Some believe that the wide format HDTV enhancement is more powerful than the introduction of interactivity into a TV program. However, regardless of the order of importance, there is no doubt that wide screen format combined with the high resolution of HDTV and interactivity, is a winning proposition . Figure: The format improvement introduced by HDTV.

In 1981, the first American demonstration of HDTV took place at the Society of Motion Picture and Television Engineers (SMPTE) [SMPTE] annual conference in San Francisco. The Japanese broadcasting corporation's (NHK) 1,125-line system drew raves from engineers and filmmakers. CBS assembled components from several companies: NHK provided a camera and monitors manufactured by Ikegami and Matsushita, respectively, and Sony provided digital tape recorders with HDTV capabilities. That year, CBS applied to the Federal Communications Commission (FCC) for allocation of 12 MHz of DBS spectrum for HDTV systems and suggested the development of terrestrial HDTV delivery as well. About the same time, in a Tokyo demonstration, Sony introduced analog tape recorders capable of recording HDTV's wide bandwidth, completing the components needed for an analog HDTV system. The National Association of Broadcasters (NAB) agreed to establish a group to focus on HDTV at its annual executive committee meeting.

The next year, CBS and NHK combined forces to host an HDTV demonstration in Los Angeles. The 30-minute demonstration tape included two 6-minute movies produced at Francis Ford Coppola's Zoetrope Studios. That year, CBS and NHK brought their HDTV demonstration to Washington and impressed FCC commissioners. In 1984, Sony's work on HDTV brought the development of high-resolution slow motion, called Super Slo-Mo, used by ABC in its broadcast of the Kentucky Derby, and Sony HDTV cameras were made available for purchase at the NAB convention. Further, during that year RCA proposed to the Advanced Television Systems Committee (ATSC) an HDTV system using the same bandwidth and field rate but with 750 lines and a progressive scanning method delivering 60 complete pictures per second. The next year, NHK introduced in Tokyo an HDTV down-converter, which converted the 1,125-line, 60-field HDTV signal to the 625-line, 50-field PAL system. At the SMPTE meeting that year, the NHK and RCA HDTV systems were demonstrated side by side. A month later the ATSC voted in favor of the NHK HDTV standard. This standard was proposed by the U.S. to the International Radio Consultative Committee (CCIR), and in October 1986 a CCIR study group unanimously adopted an HDTV standard based on the U.S. proposal: 1,125 lines, 2:1 interlace, 60 fields, 16:9 aspect ratio, 1,920 samples per active line for luminance, and 960 for color difference.

In 1986, the Canadian Broadcasting Corporation (CBC) shot a 13-hour miniseries in HDTV, Chasing Rainbows, which was the world's first major HDTV production, at a cost of $10 million (Canadian dollars). The National Cable Television Association (NCTA) formed a technical group to examine cable transmission of HDTV, and the NAB president announced the formation of the Broadcast Technology Center, a research company devoted to HDTV and funded in part by NAB reserves. However, in 1987, roadblocks were mounting. The FCC launched an inquiry into HDTV and other Advanced Television (ATV) systems. As a result, it froze all applications for new UHF stations and for reallocation of spectrum for new UHF stations in 30 of the top 34 markets, and ordered the formation of a joint FCC-industry advisory committee on ATV. About a year later, a blue-ribbon panel of the FCC's ATV service advisory committee unanimously approved a draft interim report which recommended the adoption of a terrestrial HDTV system as well as the reservation of UHF spectrum for that purpose. In 1989, the U.S. House and Senate convened hearings on HDTV, and bills were introduced in both chambers that would stimulate the growth of a U.S. HDTV industry. The American Electronics Association released a five-year business plan calling for the federal government to spend up to $1.35 billion in grants, loans, and guarantees to ensure the development of HDTV through an industry-government consortium.

That year, Southern Bell Telephone Corporation (SBTC) announced participation in the first use of satellite-delivered HDTV signals for commercial purposes and the first HDTV transmission over fiber-optic cable. The next year, General Instrument (GI) presented DigiCipher, an all-digital HDTV broadcast system with conditional access. Subsequently, the 1990s witnessed a rapid development of HDTV technologies. By 1995, GI had become the major provider of cable set-top boxes, with only one major competitor, Scientific Atlanta (SA).

1985-1995

1986
Super VHS introduced.

1993
Closed captioning required on all sets.

Closed Captioning
Closed captions are captions that are hidden in the video signal, invisible without a special decoder. The place they are hidden is called line 21 of the vertical blanking interval (VBI). A law in the United States called the Television Decoder Circuitry Act of 1990 mandates that, since July 1993, all televisions manufactured for sale in the U.S. must contain a built-in caption decoder if the picture tube is 13" or larger.

In 1970 the National Bureau of Standards (NBS) began to research the possibility of using a portion of the network television signal to send precise time information on a nationwide basis. The ABC-TV network agreed to be involved in the research and development. The project didn't pan out, but ABC suggested that it might be possible to send captions instead. Captioning was first previewed to the public in 1971, at the First National Conference on Television for the Hearing Impaired in Nashville, Tennessee. A second preview of closed captioning was held at Gallaudet College on February 15, 1972, where ABC and the NBS presented closed captions embedded within the normal broadcast of the television show The Mod Squad. The federal government funded the final development and testing of this system. The engineering department of the Public Broadcasting System started to work on the project in 1973, under contract to the Bureau of Education for the Handicapped of the Department of Health, Education and Welfare (HEW). The Federal Communications Commission set aside line 21 in 1976 for the transmission of closed captions in the United States. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs, the encoding equipment that broadcasters and others would use to add captions to their programs, and prototype decoders.

On March 16, 1980, the first closed-captioned television series were broadcast. The captions were seen in households that had the first generation of the closed caption decoder. The ABC Sunday Night Movie (ABC), The Wonderful World of Disney (NBC), and Masterpiece Theatre (PBS) were all broadcast with captions on March 16, 1980. In 1982, the National Captioning Institute (NCI) developed real-time captioning, a process for captioning newscasts, sports events or other live broadcasts as the events are being televised.

Modern Television Technology

LCD Television

LCD History

Today, LCDs are everywhere we look, but they didn't sprout up overnight. It took a long time to get from the discovery of liquid crystals to the multitude of LCD applications we now enjoy. Liquid crystals were first discovered in 1888 by Austrian botanist Friedrich Reinitzer. Reinitzer observed that when he melted a curious cholesterol-like substance (cholesteryl benzoate), it first became a cloudy liquid and then cleared up as its temperature rose. Upon cooling, the liquid turned blue before finally crystallizing. Eighty years passed before RCA made the first experimental LCD in 1968. Since then, LCD manufacturers have steadily developed ingenious variations and improvements on the technology, taking the LCD to amazing levels of technical complexity. And there is every indication that we will continue to enjoy new LCD developments in the future!

Nematic Phase Liquid Crystals

Just as there are many varieties of solids and liquids, there is also a variety of liquid crystal substances. Depending on the temperature and particular nature of a substance, liquid crystals can be in one of several distinct phases (see below). In this article, we will discuss liquid crystals in the nematic phase, the liquid crystals that make LCDs possible.

One feature of liquid crystals is that they're affected by electric current. A particular sort of nematic liquid crystal, called twisted nematics (TN), is naturally twisted. Applying an electric current to these liquid crystals will untwist them to varying degrees, depending on the current's voltage. LCDs use these liquid crystals because they react predictably to electric current in such a way as to control light passage. Most liquid crystal molecules are rod-shaped and are broadly categorized as either thermotropic or lyotropic.

Thermotropic liquid crystals will react to changes in temperature or, in some cases, pressure. The reaction of lyotropic liquid crystals, which are used in the manufacture of soaps and detergents, depends on the type of solvent they are mixed with. Thermotropic liquid crystals are either isotropic or nematic. The key difference is that the molecules in isotropic liquid crystal substances are random in their arrangement, while nematics have a definite order or pattern.

The orientation of the molecules in the nematic phase is based on the director. The director can be anything from a magnetic field to a surface that has microscopic grooves in it. In the nematic phase, liquid crystals can be further classified by the way molecules orient themselves with respect to one another. Smectic, the most common arrangement, creates layers of molecules. There are many variations of the smectic phase, such as smectic C, in which the molecules in each layer tilt at an angle from the previous layer. Another common phase is cholesteric, also known as chiral nematic. In this phase, the molecules twist slightly from one layer to the next, resulting in a spiral formation. Ferroelectric liquid crystals (FLCs) use liquid crystal substances that have chiral molecules in a smectic C type of arrangement, because the spiral nature of these molecules allows the microsecond switching response time that makes FLCs particularly suited to advanced displays. Surface-stabilized ferroelectric liquid crystals (SSFLCs) apply controlled pressure through the use of a glass plate, suppressing the spiral of the molecules to make the switching even more rapid.

Creating an LCD

There's more to building an LCD than simply creating a sheet of liquid crystals. The combination of four facts makes LCDs possible: light can be polarized; liquid crystals can transmit and change polarized light; the structure of liquid crystals can be changed by electric current; and there are transparent substances that can conduct electricity. An LCD is a device that uses these four facts in a surprising way.

To create an LCD, you take two pieces of polarized glass. A special polymer that creates microscopic grooves in the surface is rubbed on the side of the glass that does not have the polarizing film on it. The grooves must be in the same direction as the polarizing film. You then add a coating of nematic liquid crystals to one of the filters. The grooves will cause the first layer of molecules to align with the filter's orientation. Then add the second piece of glass with the polarizing film at a right angle to the first piece. Each successive layer of TN molecules will gradually twist until the uppermost layer is at a 90-degree angle to the bottom, matching the polarized glass filters. As light strikes the first filter, it is polarized. The molecules in each layer then guide the light they receive to the next layer. As the light passes through the liquid crystal layers, the molecules also change the light's plane of vibration to match their own angle. When the light reaches the far side of the liquid crystal substance, it vibrates at the same angle as the final layer of molecules. If the final layer is matched up with the second polarized glass filter, then the light will pass through.

If we apply an electric charge to liquid crystal molecules, they untwist. When they straighten out, they change the angle of the light passing through them so that it no longer matches the angle of the top polarizing filter. Consequently, no light can pass through that area of the LCD, which makes that area darker than the surrounding areas.
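A highly simplified model helps show why more voltage means a darker pixel in this arrangement. The Python sketch below assumes the liquid crystal layer simply rotates the light's polarization by whatever twist remains and that the exit polarizer is crossed with the entrance polarizer, so transmission follows Malus's law. It ignores the real, more complicated electro-optic response of a twisted nematic cell and is only an illustration of the idea in the preceding paragraphs; the function name and the sample twist angles are ours.

# Idealized model of the effect described above: the twisted-nematic layer rotates
# the light's polarization by the remaining twist angle, and the exit polarizer is
# crossed (90 degrees) with respect to the entrance polarizer. Transmission then
# follows Malus's law. This ignores the real, more complicated electro-optic
# response of a TN cell; it is only meant to show why "more voltage -> less twist
# -> darker pixel" in the arrangement the text describes.
import math

def transmission(twist_degrees: float) -> float:
    """Fraction of polarized light passing the crossed exit polarizer when the
    liquid-crystal layer rotates the polarization by twist_degrees."""
    return math.sin(math.radians(twist_degrees)) ** 2

if __name__ == "__main__":
    for twist in (90, 60, 30, 0):   # 90 = no voltage (fully twisted), 0 = fully untwisted
        print(f"twist {twist:3d} deg -> transmission {transmission(twist):.2f}")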

Building a simple LCD is easier than you think. You start with the sandwich of glass and liquid crystals described above and add two transparent electrodes to it. For example, imagine that you want to create the simplest possible LCD with just a single rectangular electrode on it. The layers would look like this:

The LCD needed to do this job is very basic. It has a mirror (A) in back, which makes it reflective. Then, we add a piece of glass (B) with a polarizing film on the bottom side, and a common electrode plane (C) made of indium-tin oxide on top. A common electrode plane covers the entire area of the LCD. Above that is the layer of liquid crystal substance (D).

Next comes another piece of glass (E) with an electrode in the shape of the rectangle on the bottom and, on top, another polarizing film (F), at a right angle to the first one. The electrode is hooked up to a power source like a battery. When there is no current, light entering through the front of the LCD will simply hit the mirror and bounce right back out. But when the battery supplies current to the electrodes, the liquid crystals between the common-plane electrode and the electrode shaped like a rectangle untwist and block the light in that region from passing through. That makes the LCD show the rectangle as a black area.

Passive and Active Matrix

Passive-matrix LCDs use a simple grid to supply the charge to a particular pixel on the display. Creating the grid is quite a process! It starts with two glass layers called substrates. One substrate is given columns and the other is given rows made from a transparent conductive material, usually indium-tin oxide. The rows or columns are connected to integrated circuits that control when a charge is sent down a particular column or row. The liquid crystal material is sandwiched between the two glass substrates, and a polarizing film is added to the outer side of each substrate. To turn on a pixel, the integrated circuit sends a charge down the correct column of one substrate while a ground is activated on the correct row of the other. The row and column intersect at the designated pixel, and that delivers the voltage to untwist the liquid crystals at that pixel. The simplicity of the passive-matrix system is beautiful, but it has significant drawbacks, notably slow response time and imprecise voltage control. Response time refers to the LCD's ability to refresh the image displayed. The easiest way to observe slow response time in a passive-matrix LCD is to move the mouse pointer quickly from one side of the screen to the other: you will notice a series of "ghosts" following the pointer. Imprecise voltage control hinders the passive matrix's ability to influence only one pixel at a time. When voltage is applied to untwist one pixel, the pixels around it also partially untwist, which makes images appear fuzzy and lacking in contrast.

Active-matrix LCDs depend on thin-film transistors (TFTs). Basically, TFTs are tiny switching transistors and capacitors, arranged in a matrix on a glass substrate. To address a particular pixel, the proper row is switched on, and then a charge is sent down the correct column. Since all of the other rows that the column intersects are turned off, only the capacitor at the designated pixel receives a charge. The capacitor is able to hold the charge until the next refresh cycle. And if the amount of voltage supplied to a crystal is carefully controlled, it can be made to untwist only enough to allow some light through. By doing this in very exact, very small increments, LCDs can create a grayscale. Most displays today offer 256 levels of brightness per pixel.
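
The row-and-column addressing described above can be sketched in a few lines of Python. This is a deliberately simplified model of an active-matrix panel: it assumes each pixel's capacitor simply stores whatever gray level (0-255) the column driver supplies while its row of TFTs is switched on. The panel size and the names used are made up for illustration.

# Toy model of active-matrix addressing. Real driver electronics are far
# more involved; this only captures the row-at-a-time scanning idea.
ROWS, COLS = 4, 6
charge = [[0] * COLS for _ in range(ROWS)]   # value held by each pixel's capacitor

def refresh(frame):
    """Write one frame, selecting one row of TFTs at a time."""
    for r in range(ROWS):                    # switch on this row's transistors...
        for c in range(COLS):                # ...and drive every column at once
            level = max(0, min(255, frame[r][c]))   # 256 gray levels per pixel
            charge[r][c] = level             # capacitor holds this until the next refresh

frame = [[0] * COLS for _ in range(ROWS)]
frame[2][3] = 255                            # fully driven pixel
frame[2][4] = 128                            # partial voltage -> mid gray
refresh(frame)
print(charge[2])                             # [0, 0, 0, 255, 128, 0]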

Color LCD

An LCD that can show colors must have three subpixels with red, green and blue color filters to create each color pixel. Through careful control and variation of the voltage applied, the intensity of each subpixel can range over 256 shades. Combining the subpixels produces a possible palette of 16.8 million colors (256 shades of red x 256 shades of green x 256 shades of blue). These color displays require an enormous number of transistors. For example, a typical laptop computer supports resolutions up to 1,024x768. If we multiply 1,024 columns by 768 rows by 3 subpixels, we get 2,359,296 transistors etched onto the glass! If there is a problem with any of these transistors, it creates a "bad pixel" on the display. Most active-matrix displays have a few bad pixels scattered across the screen.
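
The two figures quoted above follow directly from the panel geometry, and a quick check in Python reproduces them:

shades_per_subpixel = 256                    # intensity levels per red, green or blue subpixel
colors = shades_per_subpixel ** 3
print(f"{colors:,}")                         # 16,777,216 possible colors (~16.8 million)

columns, rows, subpixels = 1024, 768, 3      # a typical 1,024x768 laptop panel
transistors = columns * rows * subpixels
print(f"{transistors:,}")                    # 2,359,296 transistors on the glass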

LCD technology is constantly evolving. LCDs today employ several variations of liquid crystal technology, including super-twisted nematic (STN), dual-scan twisted nematic (DSTN), ferroelectric liquid crystal (FLC) and surface-stabilized ferroelectric liquid crystal (SSFLC). Display size is limited by the quality-control problems faced by manufacturers. Simply put, to increase display size, manufacturers must add more pixels and transistors. As they increase the number of pixels and transistors, they also increase the chance of including a bad transistor in a display. Manufacturers of existing large LCDs often reject about 40 percent of the panels that come off the assembly line. The level of rejection directly affects LCD price, since the sales of the good LCDs must cover the cost of manufacturing both the good and the bad ones. Only advances in manufacturing can lead to affordable displays in bigger sizes.
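
The pricing point can be made concrete with a small, hedged calculation: if the sales of good panels must also cover the rejected ones, the effective manufacturing cost per sellable panel is the cost per panel divided by the yield. The dollar figure below is purely hypothetical; only the 40 percent reject rate comes from the text.

def cost_per_good_panel(cost_per_panel: float, reject_rate: float) -> float:
    """Good panels must absorb the cost of the rejected ones too."""
    yield_rate = 1.0 - reject_rate
    return cost_per_panel / yield_rate

# Hypothetical numbers: if a panel costs $400 to make and 40% are rejected,
# each sellable panel has to carry about $666.67 of manufacturing cost.
print(round(cost_per_good_panel(400.0, 0.40), 2))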

PLASMA TV
For the past 75 years, the vast majority of televisions have been built around the same technology: the cathode ray tube (CRT). In a CRT television, a gun fires a beam of electrons (negatively charged particles) inside a large glass tube. The electrons excite phosphor atoms along the wide end of the tube (the screen), which causes the phosphor atoms to light up. The television image is produced by lighting up different areas of the phosphor coating with different colors at different intensities. Cathode ray tubes produce crisp, vibrant images, but they have a serious drawback: they are bulky. In order to increase the screen width in a CRT set, you also have to increase the length of the tube (to give the scanning electron gun room to reach all parts of the screen). Consequently, any big-screen CRT television is going to weigh a ton and take up a sizable chunk of a room. A new alternative has popped up on store shelves: the plasma flat-panel display. These televisions have wide screens, comparable to the largest CRT sets, but they are only about 6 inches (15 cm) thick. Below, we'll see how these sets do so much in such a small space.

Photo courtesy Sony

Fig: A plasma display from Sony

Based on the information in a video signal, a conventional television lights up thousands of tiny dots (called pixels) with a high-energy beam of electrons. In most systems, there are three pixel colors -- red, green and blue -- which are evenly distributed on the screen. By combining these colors in different proportions, the television can produce the entire color spectrum. The basic idea of a plasma display is instead to illuminate tiny, colored fluorescent lights to form an image. Each pixel is made up of three fluorescent lights -- a red light, a green light and a blue light. Just like a CRT television, the plasma display varies the intensities of the different lights to produce a full range of colors.

What is plasma?

The central element in a fluorescent light is a plasma, a gas made up of free-flowing ions (electrically charged atoms) and electrons (negatively charged particles). Under normal conditions, a gas is mainly made up of uncharged particles. That is, the individual gas atoms include equal numbers of protons (positively charged particles in the atom's nucleus) and electrons. The negatively charged electrons perfectly balance the positively charged protons, so the atom has a net charge of zero. If you introduce many free electrons into the gas by establishing an electrical voltage across it, the situation changes very quickly. The free electrons collide with the atoms, knocking loose other electrons. With a missing electron, an atom loses its balance. It has a net positive charge, making it an ion. In a plasma with an electrical current running through it, negatively charged particles are rushing toward the positively charged area of the plasma, and positively charged particles are rushing toward the negatively charged area. In this mad rush, particles are constantly bumping into each other. These collisions excite the gas atoms in the plasma, causing them to release photons of energy. Xenon and neon atoms, the atoms used in plasma screens, release light photons when they are excited. Mostly, these atoms release ultraviolet light photons, which are invisible to the human eye.

Inside a Plasma Display

The xenon and neon gas in a plasma television is contained in hundreds of thousands of tiny cells positioned between two plates of glass. Long electrodes are also sandwiched between the glass plates, on both sides of the cells. The address electrodes sit behind the cells, along the rear glass plate. The transparent display electrodes, which are surrounded by an insulating dielectric material and covered by a magnesium oxide protective layer, are mounted above the cells, along the front glass plate. Both sets of electrodes extend across the entire screen. The display electrodes are arranged in horizontal rows along the screen and the address electrodes are arranged in vertical columns. Together, the vertical and horizontal electrodes form a basic grid.

To ionize the gas in a particular cell, the plasma display's computer charges the electrodes that intersect at that cell. It does this thousands of times in a small fraction of a second, charging each cell in turn.

When the intersecting electrodes are charged (with a voltage difference between them), an electric current flows through the gas in the cell. As we saw in the last section, the current creates a rapid flow of charged particles, which stimulates the gas atoms to release ultraviolet photons.
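
A rough sketch of this scanning idea, written in Python, is shown below. It is a toy model of the control logic only -- real driver electronics, sustain waveforms and timing are far more involved -- and the electrode names and panel size are assumptions made for illustration.

# Schematic of how a plasma panel's control electronics might step through
# its cells: for each cell that should light, put a voltage difference
# across the display electrode (row) and address electrode (column) that
# cross at that cell.
ROWS, COLS = 3, 4

def scan_frame(lit_cells):
    """lit_cells: set of (row, col) pairs whose gas should be ionized this frame."""
    pulses = []
    for r in range(ROWS):
        for c in range(COLS):
            if (r, c) in lit_cells:
                # Charging the two intersecting electrodes drives a current
                # through that cell's gas, which then emits UV photons.
                pulses.append((f"display_row_{r}", f"address_col_{c}"))
    return pulses

for row_electrode, col_electrode in scan_frame({(0, 1), (2, 3)}):
    print("charge", row_electrode, "and", col_electrode)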

The released ultraviolet photons interact with phosphor material coated on the inside wall of the cell. Phosphors are substances that give off light when they are exposed to other light. When an ultraviolet photon hits a phosphor atom in the cell, one of the phosphor's electrons jumps to a higher energy level and the atom heats up. When the electron falls back to its normal level, it releases energy in the form of a visible light photon.

The phosphors in a plasma display give off colored light when they are excited. Every pixel is made up of three separate subpixel cells, each with different colored phosphors. One subpixel has a red light phosphor, one subpixel has a green light phosphor and one subpixel has a blue light phosphor. These colors blend together to create the overall color of the pixel. By varying the pulses of current flowing through the different cells, the control system can increase or decrease the intensity of each subpixel color to create hundreds of different combinations of red, green and blue. In this way, the control system can produce colors across the entire spectrum. The main advantage of plasma display technology is that you can produce a very wide screen using extremely thin materials. And because each pixel is lit individually, the image is very bright and looks good from almost every angle. The image quality isn't quite up to the standards of the best cathode ray tube sets, but it certainly meets most people's expectations. The biggest drawback of this technology has been the price. However, falling prices and advances in technology mean that the plasma display may soon edge out the old CRT sets.
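
One way to picture the "varying pulses" idea is as a pulse budget per frame: the more pulses a subpixel's cell receives, the brighter it appears, and the pixel's color is the blend of its three subpixels. The sketch below assumes a 255-pulse budget purely for illustration; actual plasma drive schemes are more elaborate.

# Rough sketch: brightness comes from how often a subpixel is pulsed during
# a frame, and the pixel's color is the blend of its three subpixels.
PULSES_PER_FRAME = 255        # assumed budget, for illustration only

def subpixel_pulses(intensity: float) -> int:
    """Map a 0.0-1.0 intensity to a number of current pulses in one frame."""
    return round(max(0.0, min(1.0, intensity)) * PULSES_PER_FRAME)

def pixel(red: float, green: float, blue: float):
    return {
        "red_cell": subpixel_pulses(red),
        "green_cell": subpixel_pulses(green),
        "blue_cell": subpixel_pulses(blue),
    }

print(pixel(1.0, 0.5, 0.0))   # a bright orange: red fully driven, green at half, blue off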

3D Television
Television, like most technology, has evolved since its debut. First, there was the switch from black-and-white to color TV. Then manufacturers began to offer televisions in larger formats using various projection methods. Over the last two decades, we've seen LCD and plasma technologies advance to the point where you can go out and buy a 61-inch (about 155 centimeters) television that's only a few centimeters thick. And high-definition television (HDTV) provides us with a picture that's so vibrant and sharp it's almost as if we weren't looking at a collection of pixels. So what's next in television technology? Now that you can practically replace a wall with a screen and watch movies in high resolution, where do we go from here? The answer may end up right in front of your face -- or at least appear to be there, anyway. We're talking about 3-D television. Audiences first got a glimpse of 3-D technology way back in 1922 with the release of "The Power of Love." Whether they thought it was a curious thing or not is lost to history. But that began the somewhat cyclical fascination with three-dimensional film.

Seeing in Three Dimensions

Ethan Miller/Getty Images
Fig: 3-D televisions seemed to be everywhere at the 2009 Consumer Electronics Show -- these attendees are looking at a Samsung 3-D TV.

Why can you look at an object in the real world and see it as a three-dimensional object, but if you see that same object on a television screen it looks flat? What's going on, and how does 3-D technology get around the problem? It all has to do with the way we focus on objects. We see things because our eyes absorb the light reflected off of them. Our brains interpret the light and create a picture in our minds. When an object is far away, the light traveling to one eye is parallel with the light traveling to the other eye. But as an object gets closer, the lines are no longer parallel -- they converge and our eyes shift to compensate. You can see this effect in action if you try to look at something right in front of your nose -- you'll attain a lovely cross-eyed expression. When you focus on an object, your brain takes into account the effort required to adjust your eyes to focus on it as well as how much your eyes had to converge. Together, this information allows you to estimate how far away the object is. If your eyes had to converge quite a bit, then it stands to reason that the object is close to you. The secret to 3-D television and movies is that by showing each eye the same scene from two slightly different positions, we can trick the brain into thinking the flat image it is viewing has depth. But this also means that the convergence and focal points don't match up the way they do for real objects. While your eyes may converge upon two images that seem to be one object right in front of you, they're actually focusing on a screen that's farther away. This is why you get eye strain if you try to watch too many 3-D movies in one sitting.

Passive Glasses

In the 3-D business, there are two major categories of 3-D glasses: passive and active. Passive lenses rely on simple technology and are probably what you think of when you hear the term 3-D glasses. The classic 3-D glasses have anaglyph lenses. Anaglyph glasses use two different color lenses to filter the images you look at on the television screen. The two most common colors used are red and blue. If you were to look at the screen without your glasses, you would see two sets of images slightly offset from one another. One will have a blue tint to it and the other will have a reddish hue. If you put on your glasses, you should see a single image that appears to have depth to it. What's happening here? The red lens absorbs all the red light coming from your television, canceling out the red-hued images. The blue lens does the same for the blue images. The eye behind the red lens will only see the blue images, while the eye behind the blue lens sees the red ones. Because each eye can only see one set of images, your brain interprets this to mean that both eyes are looking at the same object. But your eyes are converging on a point that's different from the focal point -- the focus will always be your television screen. That's what creates the illusion of depth. Today, a more popular type of passive lens, common in movie theaters, is found in polarized glasses. Again, if we look at a screen that uses this technology, we'll see more than one set of images. The glasses use lenses that filter out light waves projected at certain angles; each lens only allows through light that is polarized in a compatible way. Because of this, each eye will see only one set of images on the screen. Polarized lenses are becoming more popular than anaglyph glasses because they don't distort the color of the image as much and provide a better audience experience. But it's very difficult to use the polarization technique for home theater systems -- most methods would require you to coat your television screen with a special polarizing film first.
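
The anaglyph idea lends itself to a very short code sketch. The version below builds a red/blue composite from a stereo pair: one view is carried only in the red channel and the other only in the blue channel, so each colored lens removes one of them. The nested-list image format and the tiny sample images are assumptions made to keep the example self-contained; real code would use an imaging library.

# Bare-bones red/blue anaglyph compositor. Images are nested lists of
# (r, g, b) tuples, one tuple per pixel.
def make_anaglyph(left_img, right_img):
    out = []
    for left_row, right_row in zip(left_img, right_img):
        row = []
        for (lr, _lg, _lb), (_rr, _rg, rb) in zip(left_row, right_row):
            # red channel carries one view, blue channel the other
            row.append((lr, 0, rb))
        out.append(row)
    return out

left = [[(200, 10, 10), (180, 20, 20)]]     # tiny 1x2 "left eye" image
right = [[(10, 10, 200), (20, 20, 180)]]    # same scene, shifted for the right eye
print(make_anaglyph(left, right))           # [[(200, 0, 200), (180, 0, 180)]]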

Active Glasses and 3-D-Ready Televisions

3-D in High Definition

It's easier to present 3-D in high definition using active glasses than with passive glasses. That's because with a passive glasses system, the television has to display two sets of images at the same time. An active glasses system alternates between the two sets of images at very high speeds -- it's less information for the television to handle at any particular moment.

In the last few years, engineers have come up with a new way to create three-dimensional images in movies and on television sets. We still wear 3-D glasses with this method, but they don't use colored lenses. The method doesn't compromise the color quality of the image as much as anaglyph glasses do. It also doesn't require you to put a polarizing film on your television screen. What it does do is control when each of your eyes can view the screen. The glasses use liquid crystal display (LCD) technology to become an active part of the viewing experience. They have infrared (IR) sensors that allow them to connect wirelessly to your television or display. As the 3-D content appears on the screen, the picture alternates between two sets of the same image. The two sets are offset from one another similar to the way they are in passive glasses systems. But the two sets aren't shown at the same time -- they turn on and off at an incredible rate of speed. In fact, if you were to look at the screen without wearing the glasses, it would appear as if there were two sets of images on the screen at the same time. The LCD lenses in the glasses alternate between being transparent and opaque as the images alternate on the screen. The left eye blacks out when the right eye's image appears on the television and vice versa. This happens so fast that your mind cannot detect the flickering lenses. But because it's timed exactly with what's on the screen, each eye sees only one set of the dual images you'd see if you weren't wearing the glasses. For several years, LCD and plasma screens weren't good candidates for this kind of technique. The refresh rates -- the speed at which a television replaces the image on the screen -- were too low for the technology to work without the viewer detecting a flicker from the glasses. But now you can find plasma and LCD displays with incredibly fast refresh rates.
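
The alternation described above can be sketched as a simple schedule: the display shows left- and right-eye frames in turn, and the glasses black out the opposite lens in lockstep. The 120 Hz refresh rate and the structure below are illustrative assumptions, not a description of any particular product.

# Frame-sequential presentation with active shutter glasses (toy model).
REFRESH_HZ = 120                      # assumed panel refresh rate; each eye sees 60 frames/s
FRAME_TIME = 1.0 / REFRESH_HZ

def schedule(num_frames):
    events = []
    for n in range(num_frames):
        eye = "left" if n % 2 == 0 else "right"
        # The lens in front of the eye being served stays clear; the other goes opaque.
        shutter = {"left_lens": eye == "left", "right_lens": eye == "right"}
        events.append((round(n * FRAME_TIME, 4), eye + " image", shutter))
    return events

for t, image, shutter in schedule(4):
    print(f"t={t:.4f}s  show {image:11s}  lenses open: {shutter}")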

3-D-Ready Televisions

Courtesy Mitsubishi TV
Fig: The Mitsubishi Laservue HDTV has the standard 3-D port -- you can use active 3-D glasses with this TV.

You can't use a standard television and expect active glasses to work. You must have some way to synchronize the alternating images on the screen with the LCD lenses in the glasses. That's where the stereoscopic sync signal connector comes in. It's a standardized connector with three pins that plugs in to a special port on a 3-D-ready television or monitor. The other end of the cable plugs into an IR emitter. The emitter sends signals to your active 3-D glasses. This is what synchronizes the LCD lenses with the action on the screen.

The connector operates using transistor-transistor logic (TTL). One pin on the connector carries low-voltage electricity. A second pin acts as a ground wire. The third pin carries the stereo sync signal. There are two different types of 3-D active glasses, and they aren't compatible with one another: the E-D and ELSA styles of 3-D glasses. While emitters for both styles work with the stereoscopic sync signal standard, E-D glasses will only work with an E-D emitter. A pair of ELSA glasses can synchronize with an E-D emitter, but the glasses won't perform properly. For example, when the E-D emitter sends a signal for the left lens to be transparent, the ELSA glasses will make the left lens opaque and the right lens clear. Even if you have a 3-D-ready television, an emitter and a pair of active glasses, not everything on your television will appear to be three-dimensional. Content providers must optimize the signal for 3-D first. While it's possible to modify existing footage into 3-D content, some providers prefer to create video with 3-D in mind from the start. Currently, the easiest way to view 3-D content is to connect a computer to your 3-D-ready television using an HDMI cable and then stream the 3-D content from your computer to your television. In the future, we'll probably see more DVD players capable of sending 3-D signals to televisions, and perhaps even 3-D transmissions incorporated into cable and satellite services.
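
The incompatibility between the two styles can be illustrated with a toy model: the same sync level opens opposite lenses on the two kinds of glasses. The inverted response of the ELSA-style glasses comes from the text; the HIGH/LOW encoding below is purely an assumption made for the example.

# Toy illustration only -- not the actual signaling used by any real emitter.
def lens_open(sync_level_high: bool, style: str) -> str:
    if style == "E-D":
        return "left" if sync_level_high else "right"
    if style == "ELSA":
        return "right" if sync_level_high else "left"   # inverted response
    raise ValueError("unknown glasses style")

for level in (True, False):
    print("sync HIGH" if level else "sync LOW ",
          "-> E-D opens", lens_open(level, "E-D"),
          "| ELSA opens", lens_open(level, "ELSA"))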

Lenticular Displays

AP Photo/Shinzuo Kambayashi
Fig: This Toshiba display uses a lenticular film to direct light to the viewer's eyes, which creates a 3-D effect.

While 3-D glasses technology is impressive, some people still want a solution that doesn't require them to wear glasses. There have been several attempts at creating a display capable of projecting images into a three-dimensional space. Some involve lasers, some project images onto a fine mist or onto artificial smoke, but these methods aren't common or practical. There's one way to create three-dimensional images that you may see in places like sports arenas or in a hotel during a big conference. This method relies on a display coated with a lenticular film. Lenticules are tiny lenses on the base side of a special film. The screen displays two sets of the same image. The lenses direct the light from the images to your eyes -- each eye sees only one image. Your brain puts the images together, and you interpret the result as a three-dimensional image. This technology requires content providers to create special images for the effect to work: they must interlace the two sets of images together. If you were to try to view the video feed on a normal screen, you would see a blurry double image. Another problem with lenticular displays is that the effect depends on the audience being in a sweet spot. If you were to move to the left or right of one of these sweet spots, the image on the screen would begin to blur. Once you moved from one sweet spot to another, the image would return to a cohesive picture. Future televisions may include a camera that tracks your position, so the television can adjust the image to keep you in a sweet spot. Whether this will work for multiple viewers of the same screen remains to be seen. Some people experience a feeling similar to motion sickness after watching a lenticular display for more than a few minutes. That's probably because your eyes have to do extra work as they deal with the discrepancy between focus and convergence. On the other hand, you don't have to worry about losing an expensive pair of active glasses.
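
The interlacing step the text mentions can be sketched as simple column interleaving of a stereo pair: alternating columns carry the left- and right-eye views, and the lens sheet steers each set to the matching eye. The list-of-lists image format below is an assumption made to keep the sketch self-contained; real content preparation also has to match the interleaving to the pitch of the lenticular film.

# Column-interleave two views of the same scene for a lenticular screen.
def interleave_columns(left_img, right_img):
    out = []
    for left_row, right_row in zip(left_img, right_img):
        row = []
        for col, (l_px, r_px) in enumerate(zip(left_row, right_row)):
            row.append(l_px if col % 2 == 0 else r_px)  # even columns: left view
        out.append(row)
    return out

left = [["L0", "L1", "L2", "L3"]]
right = [["R0", "R1", "R2", "R3"]]
print(interleave_columns(left, right))   # [['L0', 'R1', 'L2', 'R3']]

Viewed without the lens sheet, such a feed looks like the blurry double image described above; the lenticules are what separate the two sets of columns again for the viewer's eyes.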
