
UNIT-I COMPUTER GRAPHICS

INTRODUCTION

Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing. Computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities. Computer graphics (CG) is the field of visual computing, where one utilizes computers both to generate visual images synthetically and to integrate or alter visual and spatial information sampled from the real world.

WHY USE COMPUTER GRAPHICS?

A difficulty commonly encountered in the preparation of printed educational materials is the preparation of the artwork, especially when the material is intended for an ethnic or language group other than the one creating the material. Over the years, attempts have been made to supply visual models which might make the job of drawing visuals easier for project workers with limited training. Another problem often encountered is the difficulty of finding experienced personnel to prepare the materials in camera-ready form quickly and easily. It has also proven difficult to adapt materials which have been successful in one region or country to another ethnically or culturally different one, because the models do not lend themselves to change: instead, project workers make use of not always the most appropriate materials, or have to start from scratch. With the increasing number of computers now being used in the field, there has grown a demand for simpler and more direct systems.
Traditional graphics and desktop publishing programs available to most projects tend to be a bit too complicated for the average user to make much use of their capabilities, hence more and more organizations are looking at, and beginning to use, the Macintosh system. The Ministry of Agriculture in Swaziland is now equipped with a Mac system and there is a Macintosh User Club in Niger. Many administrators have a system at home. Where a Mac is installed it usually becomes the "most used" system, because it is so user friendly that it allows people with a minimum of computer skills to operate the system with a maximum of professional results. With the Macintosh and modest graphics software, visuals can be developed and collected into an "image bank" available for many different materials. You can make single or limited, specific materials to be used by the workers or teachers, or the project can produce the artwork for materials to be used nationally. These computer-generated images can also be easily changed or adapted by people with little artistic talent or experience. Camera-ready materials can be prepared in a fraction of the time and at a much-reduced cost.

1.1 Common Uses of Computer Graphics

Computer graphics is used in many fields, such as:

- Drawing maps, usually called "cartography".
- Business presentation graphics.
- Symbolic representations and real-time mapping in weather maps, and photo enhancement.
- Engineering drawings, where it has a vast range of applications.
- Typography and many other arts, including architecture: construction plans, sketching exterior plots, etc.
- CAD systems, one of the most interesting applications of computer graphics. These systems are used to visualize 3D objects through computer software and are used in major design work such as car design and machine design.
- Geographical information systems, for visualizing human systems and physical geography.
- Image processing, such as translating, scaling and rotating images.
- Education, by means of video devices.
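Transformations like the translating, scaling and rotating mentioned above are usually expressed as matrices. The following is a minimal sketch (the function names are illustrative, not from any particular library) using 3x3 homogeneous-coordinate matrices, so that translation composes by matrix multiplication just like the other two:

```python
import numpy as np

def translate(tx, ty):
    # 3x3 homogeneous translation matrix
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def scale(sx, sy):
    # Scale about the origin
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0, 0, 1]], dtype=float)

def rotate(theta):
    # Counter-clockwise rotation about the origin, theta in radians
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]], dtype=float)

def apply(m, point):
    # Apply a 3x3 transform to a 2D point given as (x, y)
    x, y = point
    v = m @ np.array([x, y, 1.0])
    return (v[0], v[1])
```

Composing transforms is then a single matrix product, e.g. `rotate(a) @ translate(tx, ty)` applied to each point of a shape.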

1.2 APPLICATIONS OF COMPUTER GRAPHICS

1.2.1 COMPUTER AIDED DESIGN

Computer-aided design (CAD), also known as computer-aided design and drafting (CADD), is the use of computer technology for the process of design and design documentation. Computer-aided drafting describes the process of drafting with a computer. CADD software, or environments, provides the user with input tools for the purpose of streamlining design, drafting, documentation, and manufacturing processes. CADD output is often in the form of electronic files for print or machining operations. The development of CADD-based software is in direct correlation with the processes it seeks to economize; industry-based software (construction, manufacturing, etc.) typically uses vector-based (linear) environments, whereas graphic-based software utilizes raster-based (pixelated) environments.

CADD environments often involve more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions. CAD may be used to design curves and figures in two-dimensional (2D) space, or curves, surfaces, and solids in three-dimensional (3D) space.

CAD is an important industrial art extensively used in many applications, including the automotive, shipbuilding, and aerospace industries, industrial and architectural design, prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals. The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.

The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD); see Fig 1-1.

Figure 1-1

Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question. CAD is one part of the whole Digital Product Development (DPD) activity within the Product Lifecycle Management (PLM) process, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:

- Computer-aided engineering (CAE) and finite element analysis (FEA)
- Computer-aided manufacturing (CAM), including instructions to Computer Numerical Control (CNC) machines
- Photo-realistic rendering
- Document management and revision control using Product Data Management (PDM)

CAD is also used for the accurate creation of photo simulations that are often required in the preparation of Environmental Impact Reports, in which computer-aided designs of intended buildings are superimposed onto photographs of existing environments to represent what that locale will look like were the proposed facilities allowed to be built. Potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.

Computer-aided industrial design (CAID) is a subset of computer-aided design (CAD) that includes software that directly helps in product development. Within CAID programs designers have the freedom of creativity, but typically follow a simple design methodology:

- Creating sketches, using a stylus
- Generating curves directly from the sketch
- Generating surfaces directly from the curves

Figure 1-2 (a), (b)

The end result is a 3D model that projects the main design intent the designer had in mind. The model can then be saved in STL format to send it to a rapid prototyping machine to create the real-life model. CAID helps the designer to focus on the technical part of the design methodology rather than taking care of sketching and modeling, thus contributing to the selection of a better product proposal in less time. Later, when the requisites and parameters of the product have been defined by means of the CAID software, the designer can import the result of his work into a CAD program (typically a solid modeler) for adjustments prior to production and the generation of blueprints and manufacturing processes.
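As a concrete illustration of the STL hand-off mentioned above, the sketch below serializes triangles into the ASCII STL format (the function name is hypothetical; real pipelines normally emit binary STL and computed facet normals):

```python
def write_ascii_stl(name, triangles):
    """Serialize triangles (each a list of three (x, y, z) tuples) to ASCII STL.

    Facet normals are written as zeros here; most readers recompute them.
    """
    lines = [f"solid {name}"]
    for tri in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

A rapid prototyping machine (or its slicer software) reads the resulting text and reconstructs the triangle mesh from the `vertex` lines.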

Figure 1-3

What differentiates CAID from CAD is that the former is far more conceptual and less technical than the latter. Within a CAID program, the designer can express himself or herself without constraints, whilst in CAD software there is always the manufacturing factor to consider.

Figure 1-4 (a), (b)

1.2.2 PRESENTATION GRAPHICS

A presentation program (also called a presentation graphics program) is a computer software package used to display information, normally in the form of a slide show. It typically includes three major functions: an editor that allows text to be inserted and formatted, a method for inserting and manipulating graphic images, and a slide-show system to display the content. A presentation program is supposed to help both the speaker, with easier access to his ideas, and the participants, with visual information which complements the talk. There are many different types of presentations, including professional (work-related), educational, entertainment, and general communication. Presentation programs can either supplement or replace the use of older visual-aid technology, such as pamphlets, handouts, chalkboards, flip charts, posters, slides and overhead transparencies.

Text, graphics, movies, and other objects are positioned on individual pages or "slides" or "foils". The "slide" analogy is a reference to the slide projector, a device that has become somewhat obsolete due to the use of presentation software. Slides can be printed, or (more usually) displayed on-screen and navigated through at the command of the presenter. Transitions between slides can be animated in a variety of ways, as can the emergence of elements on a slide itself. Typically a presentation has many constraints, the most important being the limited time in which to present consistent information.
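The three major functions named above (an editor, image handling, and a slide-show system) can be modeled with a small data structure. This is an illustrative sketch, not the design of any real presentation package:

```python
from dataclasses import dataclass, field

@dataclass
class Slide:
    title: str
    bullets: list = field(default_factory=list)
    transition: str = "none"   # e.g. "fade", "wipe"

@dataclass
class Deck:
    slides: list = field(default_factory=list)

    def add_slide(self, title, bullets=None, transition="none"):
        # The "editor" function: insert and format content
        self.slides.append(Slide(title, bullets or [], transition))

    def render_text(self):
        # The "slide show" function, reduced to plain text: one screen per slide
        out = []
        for i, s in enumerate(self.slides, 1):
            out.append(f"--- Slide {i}/{len(self.slides)}: {s.title} ---")
            out.extend(f"  * {b}" for b in s.bullets)
        return "\n".join(out)
```

A real program would add per-element animation and image placement, but the separation of editing, content, and playback is the same.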

Figure 1-5

Figure 1-6

1.2.3 COMPUTER ART


Computer art is any art in which computers play a role in the production or display of the artwork. Such art can be an image, sound, animation, video, CD-ROM, DVD-ROM, videogame, web site, algorithm, performance or gallery installation. Many traditional disciplines are now integrating digital technologies and, as a result, the lines between traditional works of art and new media works created using computers have been blurred. For instance, an artist may combine traditional painting with algorithmic art and other digital techniques. As a result, defining computer art by its end product can be difficult. Computer art is by its nature evolutionary, since changes in technology and software directly affect what is possible.

Formerly, technology restricted output and print results: early machines used pen-and-ink plotters to produce basic hard copy. In the 1970s, the dot matrix printer (which was much like a typewriter) was used to reproduce varied fonts and arbitrary graphics. The first animations were created by plotting all still frames sequentially on a stack of paper, with motion transferred to 16-mm film for projection. During the 1970s and 1980s, dot matrix printers were used to produce most visual output, while microfilm plotters were used for most early animation. The inkjet printer, invented in 1976, spread with the increasing use of personal computers and is now the cheapest and most versatile option for everyday digital color output. Raster Image Processing (RIP) is typically built into the printer or supplied as a software package for the computer; it is required to achieve the highest quality output. Basic inkjet devices do not feature RIP; instead, they rely on graphics software to rasterize images. The laser printer, though more expensive than the inkjet, is another affordable output device available today.

Digital art is a general term for a range of artistic works and practices that use digital technology as an essential part of the creative and/or presentation process. Since the 1970s, various names have been used to describe the process including computer art and multimedia art, and digital art is itself placed under the larger umbrella term new media art. The impact of digital technology has transformed activities such as painting, drawing and sculpture, while new forms, such as net art, digital installation art, and virtual reality, have become recognized artistic practices. More generally the term digital artist is used to describe an artist who makes use of digital technologies in the production of art. In an expanded sense, "digital art" is a term applied to contemporary art that uses the methods of mass production or digital media.

Figure 1-7

1.2.4 ENTERTAINMENT

Two men stand on a rooftop. One man, dressed in a black suit and black tie, shoots a penetrating look at the other through his dark sunglasses. With a quick flick of his wrists, the man in the suit fires a handful of lethal bullets. Time slows down as the projectiles float towards their victim. The camera angle changes as the man acrobatically bends back to dodge the rippling bullets. Whoosh! The bullets fly by at normal speed as the man quickly gets back up. Neo, the man who almost tasted lead, straightens himself out before continuing to battle the agents of the virtual world.

Nowadays, any top science fiction or action/adventure movie uses at least some computerized special effects. I still remember being amazed at how real the tyrannosaurus rex looked in the blockbuster hit Jurassic Park. On the extreme side of computer graphics are movies that are entirely digitally animated. Final Fantasy: The Spirits Within attempted to revolutionize movie-making by using digitized fictional actors.

Artificial Intelligence takes the same concept and puts a different spin on it. In the film, a robot was created to replace a lost son. The harder the computerized child tries to fit into the lives of actual humans, the more he is rejected because of his lack of understanding; his understanding is limited to the ideal things that are supposed to happen when certain conditions are met, like a computer. He cannot accept the fuzziness, the uncertainties of being a real human, and real humans can never accept his ideality without sacrificing their humanity, which is built on uncertainties. These two films represent people exploring the possibility of a real world integrated with the virtual. Today, with the advancement of technology, these dreams and ideas are no longer just speculation, for we face the virtual world all the time through the media.

Computer graphics and animation have left an undeniable mark on the entertainment industry. Pioneers in the field of CGI have struggled to bring highly detailed realism and beauty to their work. Events, scenes and characters are being brought to life without the use of hokey rubber suits or stilted animatronics. As computer graphics and animation continue to evolve, the limits on what is possible in entertainment continue to dissipate.

Figure 1-8

Fantasy and Science Fiction:

Sci-fi and fantasy films have been greatly altered by the evolution of CGI. The creatures and locations of Tolkien's "The Lord of the Rings" were made possible through the use of computer-generated effects. The "Star Wars" prequels also made heavy use of CGI to create worlds that would be impossible to build as models or in a studio.

Figure 1-10

Video Games

Computer graphics have also assisted video games in growing to match and then eclipse almost all forms of popular entertainment in terms of sales. While computer graphics-based games started as primitive dots and lines across the screen, they have evolved into compelling virtual worlds. Games such as "Grand Theft Auto," "Oblivion" and "Fallout 3" have earned record-breaking sales for ensnaring players in impossibly detailed virtual lives.

Figure 1-11

Weaknesses

While these advances in technology have allowed film, television and video game makers to create worlds undreamed of, there are those who worry about their overuse. Film critics such as Roger Ebert have pointed out the flatness of dialogue given by actors trapped in a world of blue screens. Effects for effects' sake have also become a problem for many critics, who worry that style is being preferred to substance.

1.2.5 EDUCATION AND TRAINING


Everyone familiar with computer graphics is aware of the great degree of elaboration and precision that is now available in desktop publishing and graphic imaging. Every office in America and Europe has its desktop publishing department that makes use of the Mac's versatility in producing clear, crisp and valuable graphics. But outside of the industrial West, with its availability of hardware, software and monthly magazines to keep users up to date on the daily advances in the technology, computer graphics is just in its beginning forms. In areas of the world where it is more important to have medicines to prevent childhood diseases, and iceboxes to keep them in, scanners and laser printers for the office take on a lower priority.

Computers are nevertheless becoming more important, and there is daily a greater need for people to be able to use them. Computers in the world of third-world development, once relegated to accounting, data systems and report writing, are now beginning to find their way into the production of various forms of educational and publicity materials. The use of computer graphics to produce extension education materials is now beginning to prove itself. And because of the user-friendly nature of the Mac and its accompanying software, the Mac is becoming more and more evident in remote parts of the world as a first choice, with programs ranging from the ministerial level to small voluntary organizations with little to spend and limited personnel to make use of the equipment.

1.2.6 VISUALIZATION

Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Visualization today has ever-expanding applications in science, education, engineering (e.g. product visualization), interactive multimedia, medicine, etc. Typical of a visualization application is the field of computer graphics. The invention of computer graphics may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization. As a subject in computer science, data visualization or scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building and reasoning.

Scientific visualization

Scientific visualization is the transformation, selection or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis and understanding of the data. It is a very important part of visualization, and maybe the first one, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the most common.
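Direct volume rendering, mentioned above, typically composites samples front to back along each viewing ray. A minimal sketch of the standard alpha-compositing loop (scalar colors for brevity; real renderers use RGB and a transfer function):

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (color, opacity) samples along a ray.

    Each sample is (c, a) with c and a in [0, 1].
    Returns (composited color, accumulated alpha).
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        # Each sample contributes in proportion to the light not yet absorbed
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.999:   # early ray termination: the ray is opaque
            break
    return color, alpha
```

A fully opaque first sample hides everything behind it, which is exactly the early-termination case above.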

Fig 1-12

Figure 1-13

Educational visualization

Educational visualization uses a simulation, normally created on a computer, to create an image of something so it can be taught about. This is very useful when teaching about a topic which is difficult to see otherwise, for example atomic structure, because atoms are far too small to be studied easily without expensive and difficult-to-use scientific equipment. It can also be used to view past events, such as looking at dinosaurs, or to look at things that are difficult or too fragile to examine in reality, like the human skeleton, without causing physical or mental harm to a volunteer or cadaver.

Information visualization

Information visualization concentrates on the use of computer-supported tools to explore large amounts of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC, which included Dr. Jock Mackinlay. Practical application of information visualization in computer programs involves selecting, transforming and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are the dynamics of the visual representation and its interactivity. Strong techniques enable the user to modify the visualization in real time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question.

Fig 1-14

Knowledge visualization

Knowledge visualization, the use of visual representations to transfer knowledge between at least two persons, aims to improve the transfer of knowledge by using computer- and non-computer-based visualization methods complementarily. Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups. Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and predictions by using various complementary visualizations.

Product visualization

Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawings and other related documentation of manufactured components and large assemblies of products. It is a key part of Product Lifecycle Management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD drawings and models have several advantages over hand-made drawings, such as the possibility of 3D modeling, rapid prototyping and simulation.

1.2.7 IMAGE PROCESSING

Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Image processing usually refers to digital image processing, but optical and analog image processing are also possible.
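Treating the image as a two-dimensional signal, as described above, most commonly means convolving it with a filter kernel. A minimal (and deliberately naive, zero-padded) sketch:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution treating the image as a 2D signal.

    Produces a same-size output with zero padding at the borders.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# A 3x3 box-blur kernel averages each pixel with its eight neighbours
box = np.ones((3, 3)) / 9.0
```

Production code would use an FFT-based or library routine (e.g. `scipy.signal.convolve2d`) instead of the explicit double loop, but the arithmetic is the same.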

Figure 1-15

Figure 1-16

Applications of image processing


- Computer vision
- Optical sorting
- Augmented reality
- Face detection
- Feature detection
- Lane departure warning systems
- Non-photorealistic rendering
- Medical image processing
- Microscope image processing
- Morphological image processing
- Remote sensing

1.3 GRAPHICAL USER INTERFACES

A graphical user interface (GUI) is a type of user interface that allows users to interact with electronic devices using images rather than text commands. GUIs can be used in computers, hand-held devices such as MP3 players, portable media players or gaming devices, household appliances and office equipment. A GUI represents the information and actions available to a user through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. The actions are usually performed through direct manipulation of the graphical elements.

The term GUI is historically restricted to the scope of two-dimensional display screens with display resolutions capable of describing generic information, in the tradition of the computer science research at the Palo Alto Research Center (PARC). The term GUI might earlier have been applied to other high-resolution types of interfaces that are non-generic, such as videogames, or not restricted to flat screens, like volumetric displays.

Figure 1-18

OVERVIEW OF GRAPHICS SYSTEMS

1.4 VIDEO DISPLAY DEVICES


The primary output device in a graphics system is a video monitor. The operation of most video monitors is based on the standard cathode ray tube (CRT).

CRT: The cathode ray tube, or CRT, invented by Karl Ferdinand Braun, is the display device used in most computer displays, video monitors, televisions and oscilloscopes. The CRT, developed from Philo Farnsworth's work, was used in all television sets until the late 20th century and the advent of plasma screens, LCDs, DLP, OLED displays, and other technologies (Figure 1-19).

There are different types of CRTs, such as:

1. Magnetic deflection CRT
2. Electron gun with accelerating anode
3. Deflecting electron beam CRT

Fig 1-20

In the case of colour pictures, coloured signals are sent as shown in Figure 1-21.

Figure 1-21

In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with televisions and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. Oscilloscopes use electrostatic rather than magnetic deflection because the inductive reactance of the magnetic coils would limit the frequency response of the instrument.
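For the small-angle case, the on-screen deflection produced by a pair of electrostatic plates follows a standard textbook formula: it grows with the deflecting voltage and plate length, and shrinks with the plate separation and accelerating voltage. A quick numeric sketch (parameter names are illustrative):

```python
def electrostatic_deflection(Vd, Va, plate_len, plate_sep, screen_dist):
    """Approximate on-screen deflection of an electron beam (small-angle model).

    Vd: deflecting voltage, Va: accelerating voltage; lengths all in metres.
    Standard result: D = (L * l * Vd) / (2 * d * Va), where l is the plate
    length, d the plate separation, and L the plate-to-screen distance.
    """
    return (screen_dist * plate_len * Vd) / (2.0 * plate_sep * Va)
```

Note that deflection is linear in Vd, which is what makes the oscilloscope trace a faithful plot of the input voltage.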

1.4.1 Phosphor persistence

Various phosphors are available depending upon the needs of the measurement or display application. The brightness, colour, and persistence of the illumination depend upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long-persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable.
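Phosphor persistence is often idealized as an exponential decay with a time constant. A small sketch of that model (an idealization, not a property of any specific phosphor):

```python
import math

def phosphor_brightness(t, tau, b0=1.0):
    """Idealized exponential phosphor decay.

    Returns the brightness a time t after excitation, given the persistence
    time constant tau (t and tau in the same units), starting from b0.
    """
    return b0 * math.exp(-t / tau)
```

The trade-off described above falls out directly: a long tau keeps a transient visible longer, while a short tau lets repetitive traces refresh without smearing.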

1.4.2 Micro-channel plate

When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness), and improved sensitivity and spot size as well.

1.4.3 Graticules

Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate. External graticules are typically made of glass or acrylic plastic. An internal graticule provides an advantage in that it eliminates parallax error. Unlike an external graticule, an internal graticule cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be side-illuminated, which improves its visibility when used in a darkened room or when shaded by a camera hood.

1.4.4 Color CRTs

Figure 1-22 Spectra of constituent blue, green and red phosphors in a common CRT

Color tubes use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or in clusters called "triads" (as in shadow mask CRTs). Color CRTs have three electron guns, one for each primary color, arranged either in a straight line or in a triangular configuration (the guns are usually constructed as a single unit). A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor. A shadow mask tube uses a metal plate with tiny holes, placed so that the electron beam only illuminates the correct phosphors on the face of the tube. Another type of color CRT uses an aperture grille to achieve the same result.

Convergence in color CRTs

The three beams in color CRTs would not strike the screen at the same point without convergence calibration. Instead, the set would need to be manually adjusted to converge the three color beams together to maintain color accuracy.
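The eye sums the light from a triad's three phosphors additively, which is why a pixel's color can be described by three channel intensities. A tiny sketch packing the three channels into one 24-bit value (a common software convention, not something specific to CRT hardware):

```python
def mix_rgb(r, g, b):
    """Pack three additive channel intensities (0-255 each) into one
    24-bit pixel value, mirroring how a triad's red, green and blue
    phosphor outputs combine into a single perceived color."""
    for v in (r, g, b):
        if not 0 <= v <= 255:
            raise ValueError("channel out of range")
    return (r << 16) | (g << 8) | b
```

Full drive on all three guns gives white (0xFFFFFF); driving only the red gun gives pure red (0xFF0000).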

1.4.5 CRT versus LCD

Figure 1-23

In a CRT the electron beam is produced by heating a metal filament, which "boils" electrons off its surface. The electrons are then accelerated and focused in an electron gun, and aimed at the proper location on the screen using electromagnets. The majority of the power budget of a CRT goes into heating the filament, which is why the back of a CRT-based television is hot. Since the electrons are easily deflected by gas molecules, the entire tube has to be held in vacuum. The atmospheric force on the front face of the tube grows with the area, which requires ever-thicker glass. This limits practical CRTs to sizes around 30 inches; displays up to 40 inches were produced but weighed several hundred pounds, and televisions larger than this had to turn to other technologies like rear projection.

The lack of a vacuum in an LCD television is one of its advantages; there is a small amount of vacuum in sets using CCFL backlights, but this is contained in cylinders, which are naturally stronger than large flat plates. Removing the need for heavy glass faces allows LCDs to be much lighter than other technologies. For instance, the Sharp LC-42D65, a fairly typical 42-inch LCD television, weighs 55 lbs including a stand, while the late-model Sony KV-40XBR800, a 40-inch 4:3 CRT, weighs a massive 304 lbs without a stand, almost six times the weight. LCD panels, like other flat panel displays, are also much thinner than CRTs. Since the CRT can only bend the electron beam through a limited angle while still maintaining focus, the electron gun has to be located some distance from the front face of the television. In early sets from the 1950s the angle was often as small as 35 degrees off-axis, but improvements, especially computer-assisted convergence, allowed that to be dramatically improved late in their evolution. Nevertheless, even the best CRTs are much deeper than an LCD; the KV-40XBR800 is 26 inches deep, while the LC-42D65U is less than 4 inches thick, and its stand is much deeper than the screen in order to provide stability.

LCDs can, in theory, be built at any size, with production yields being the primary constraint. As yields increased, common LCD screen sizes grew, from 14 to 30 inches, then to 42 and 52 inches, and 65-inch sets are now widely available. This allowed LCDs to compete directly with most in-home projection television sets, and in comparison to those technologies direct-view LCDs have better image quality. Experimental and limited-run sets are available with sizes over 100 inches.

Efficiency

LCDs are relatively inefficient in terms of power use per display size, because the vast majority of the light produced at the back of the screen is blocked before it reaches the viewer. To start with, the rear polarizer filters out over half of the original unpolarized light. Examining the image above, you can see that a good portion of the screen area is covered by the cell structure around the shutters, which removes another portion. After that, each sub-pixel's color filter removes the majority of what is left, to leave only the desired color. Finally, to control the color and luminance of a pixel as a whole, the light has to be further absorbed in the shutters. 3M suggests that, on average, only 8 to 10% of the light being generated at the back of the set reaches the viewer. For these reasons the backlighting system has to be extremely powerful. In spite of using highly efficient CCFLs, most sets use several hundred watts of power, more than would be required to light an entire house with the same technology. As a result, LCD televisions end up with overall power usage similar to a CRT of the same size. Using the same examples, the KV-40XBR800 dissipates 245 W, while the LC-42D65 dissipates 235 W. Plasma displays are worse; the best are on par with LCDs, but typical sets draw much more. Modern LCD sets have attempted to address power use through a process known as "dynamic lighting" (originally introduced for other reasons, see below).
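The cascade of losses just described is simply a product of per-stage transmission fractions. The sketch below uses illustrative stage values (assumptions, not measured data) chosen to land in the 8-10% range that 3M cites:

```python
def overall_transmission(stages):
    """Fraction of backlight that survives a chain of absorbing stages,
    each stage given as the fraction of light it passes."""
    total = 1.0
    for frac in stages:
        total *= frac
    return total

# Illustrative stage values (assumptions, not measured data):
# rear polarizer ~45%, aperture around the shutters ~65%,
# per-sub-pixel colour filter ~33%, shutter/front polarizer losses ~90%
example = [0.45, 0.65, 0.33, 0.90]
```

Because the stages multiply, improving any single stage raises the whole product, which is why research targets the polarizers and colour filters individually.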
This system examines the image to find areas that are darker, and reduces the backlighting in those areas. CCFLs are long cylinders that run the length of the screen, so this change can only be used to control the brightness of the screen as a whole, or at best wide horizontal bands of it. This makes the technique suitable only for particular types of images, like the credits at the end of a movie. In 2009 some manufacturers introduced sets using HCFL backlights, which are more power-efficient than CCFL. Sets using LED backlighting are more finely distributed, with each LED lighting only a small number of pixels, typically a 16 by 16 patch. This allows them to dynamically adjust the brightness of much smaller areas, which suits a much wider range of images. Another ongoing area of research is to use materials that optically route light in order to reuse as much of it as possible. One potential improvement is to use microprisms or dichroic mirrors to split the light into R, G and B, instead of absorbing the unwanted colors in a filter. A successful system would improve efficiency by three times. Another would be to direct the light that would normally fall on opaque elements back into the transparent portion of the shutters. A number of companies are actively researching a variety of approaches, and 3M currently sells several products that route leaked light back toward the front of the screen. Several newer technologies, OLED, FED and SED, have lower power use as one of their primary advantages. All of these technologies directly produce light on a sub-pixel basis, and use only as much power as that light level requires. Sony has demonstrated 36" FED units displaying very bright images while drawing only 14 W, less than 1/10 as much as a similarly sized LCD. OLEDs and SEDs are similar to FEDs in power terms. The dramatically lower power requirements make these technologies particularly interesting for low-power uses like laptop computers and mobile phones. These sorts of devices were the market that originally bootstrapped LCD technology, due to its light weight and thinness.

Image quality

Figure 1-24 A traveller's pocket-sized LCD TV

Early LCD sets were widely derided for their poor overall image quality, most notably the ghosting on fast-moving images, poor contrast ratio, and muddy colors. In spite of many predictions that other technologies would always beat LCDs, massive investment in LCD production, manufacturing, and electronic image processing has addressed many of these concerns.

Response time

For 60 frames per second video, common in North America, each pixel is lit for 17 ms before it has to be re-drawn (20 ms in Europe). Early LCD displays had response times on the order of hundreds of milliseconds, which made them useless for television. A combination of improvements in materials technology since the 1970s greatly improved this, as did the active matrix techniques. By 2000, LCD panels with response times around 20 ms were relatively common in computer roles. This was still not fast enough for television use. A major improvement, pioneered by NEC, led to the first practical LCD televisions. NEC noticed that liquid crystals take some time to start moving into their new orientation, but stop rapidly. If the initial movement could be accelerated, the overall performance would be increased. NEC's solution was to boost the voltage during the "spin-up" period when the capacitor is initially being charged, and then drop back to normal levels to fill it to the required voltage. A common method is to double the voltage but halve the pulse width, delivering the same total amount of power. Named "Overdrive" by NEC, the technique is now widely used on almost all LCDs. Another major improvement in response time was achieved by adding memory to hold the contents of the display, something that a television needs to do anyway, but which was not originally required in the computer monitor role that bootstrapped the LCD industry. In older displays the active matrix capacitors were first drained, and then recharged to the new value with every refresh. But in most cases, the vast majority of the screen's image does not change from frame to frame. By holding the before and after values in memory, comparing them, and only resetting those sub-pixels that actually changed, the amount of time spent charging and discharging the capacitors was reduced. Moreover, the capacitors are not drained completely; instead, their existing charge level is either increased or decreased to match the new value, which typically requires fewer charging pulses. This change, which was isolated to the driver electronics and inexpensive to implement, improved response times by about a factor of two.
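The frame-comparison scheme just described can be sketched as follows. The flat-list frame layout and 0-255 intensity values are assumptions for illustration only; a real driver works on the panel's own sub-pixel format:

```python
# Sketch of frame differencing: only sub-pixels whose value changed
# between frames need their capacitors adjusted; unchanged ones are
# left holding their existing charge.

def changed_subpixels(previous, current):
    """Return (index, new_value) pairs for sub-pixels that differ."""
    return [(i, new) for i, (old, new) in enumerate(zip(previous, current))
            if old != new]

prev_frame = [0, 128, 255, 64, 64]
next_frame = [0, 130, 255, 64, 32]

updates = changed_subpixels(prev_frame, next_frame)
# Only two of the five sub-pixels need to be rewritten this refresh.
```

In a typical video frame most values repeat from the previous frame, so the list of updates is far shorter than the frame itself, which is where the speedup comes from.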
Together with continued improvements in the liquid crystals themselves, and by increasing refresh rates from 60 Hz to 120 and 240 Hz, response times fell from 20 ms in 2000 to about 2 ms in the best modern displays. But even this is not really fast enough, because the pixel will still be switching while the frame is being displayed. Conventional CRTs are well under 1 ms, and plasma and OLED displays boast times on the order of 0.001 ms. One way to further improve the effective refresh rate is to use "super-sampling", which is becoming increasingly common on high-end sets. Since the blurring of the motion occurs during the transition from one state to another, it can be reduced by doubling the refresh rate of the LCD panel and building intermediate frames using various motion compensation techniques. This smoothes out the transitions, and means the backlighting is turned on only when the transitions have settled. A number of high-end sets offer 120 Hz (in North America) or 100 Hz (in Europe) refresh rates using this technique. Another solution is to turn the backlighting on only once the shutter has fully switched. In order to ensure that the display does not flicker, these systems fire the backlighting several times per refresh, in a fashion similar to movie projection, where the shutter opens and closes several times per frame.

Contrast ratio

Even in a fully switched-off state, liquid crystals allow some light to leak through the shutters. This limits their contrast ratios to about 1600:1 on the best modern sets, when measured using the ANSI measurement (ANSI IT7.215-1992). Manufacturers often quote the "full on/off" contrast ratio instead, which is about 25% greater for any given set. This lack of contrast is most noticeable in darker scenes: in order to display a color close to black, the LCD shutters have to be turned to almost full opacity, limiting the number of discrete colors they can display. This leads to "posterizing" effects, with bands of discrete colors becoming visible in shadows, which is why many reviews of LCD TVs mention shadow detail. By comparison, the highest-end LCD TVs advertise contrast ratios of 2,000,000:1. Since the total amount of light reaching the viewer is a combination of the backlighting and shuttering, modern sets can use "dynamic backlighting" to improve the contrast ratio and shadow detail. If a particular area of the screen is dark, a conventional set will have to set its shutters close to opaque to cut down the light. However, if the backlighting is reduced by half in that area, the shuttering can also be reduced by half, and the number of available shuttering levels in the sub-pixels doubles. This is the main reason high-end sets offer dynamic lighting (as opposed to power savings, mentioned earlier), allowing the contrast ratio across the screen to be dramatically improved.
While the LCD shutters are capable of producing about 1000:1 contrast ratio, by adding 30 levels of dynamic backlighting this is improved to 30,000:1. However, the area of the screen that can be dynamically adjusted is a function of the backlighting source. CCFLs are thin tubes that light up many rows (or columns) across the entire screen at once, and that light is spread out with diffusers. The CCFL must be driven with enough power to light the brightest area of the portion of the image in front of it, so if the image is light on one side and dark on the other, this technique cannot be used successfully. Displays backlit by full arrays of LEDs have an advantage, because each LED lights only a small patch of the screen. This allows the dynamic backlighting to be used on a

much wider variety of images. Edge-lit displays do not enjoy this advantage. These displays have LEDs only along the edges and use a light guide plate covered with thousands of convex bumps that reflect light from the side-firing LEDs out through the LCD matrix and filters. LEDs on edge-lit displays can be dimmed only globally, not individually. The massive on-paper boost this method provides is the reason many sets now place a "dynamic contrast ratio" in their specification sheets. There is widespread debate in the audio-visual world as to whether dynamic contrast ratios are meaningful or simply marketing speak. Reviewers commonly note that even the best LCD displays cannot match the contrast ratios or deep blacks of plasma displays, in spite of being rated, on paper, as having much higher ratios.
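A minimal sketch of the per-zone arithmetic, under the simplifying assumption that brightness values are fractions of full output and that displayed luminance is just the product of backlight level and shutter transmission:

```python
# Per-zone dynamic backlighting sketch: dim the backlight to the
# brightest pixel in the zone, then rescale the shutter values so the
# product (backlight * shutter), the displayed luminance, is unchanged.

def dim_zone(pixels):
    """Return (backlight_level, rescaled_shutter_values) for one zone."""
    backlight = max(pixels)          # just bright enough for this zone
    if backlight == 0:               # an all-black zone: backlight off
        return 0.0, [0.0 for _ in pixels]
    return backlight, [p / backlight for p in pixels]

backlight, shutters = dim_zone([0.10, 0.05, 0.02])
# The backlight drops to 0.10 and the shutters stretch toward the top
# of their range, so more of the shutter's discrete levels are usable.

# The text's headline arithmetic: roughly 1000:1 shutters combined
# with 30 backlight levels give about a 30,000:1 dynamic ratio.
dynamic_ratio = 1000 * 30
```

The same image is reproduced either way; the gain is that the dark zone's shutter values now span a larger portion of the shutter's usable range.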

1.4.6 LCD

A liquid crystal display (LCD) is a thin, flat display device made up of any number of color or monochrome pixels arrayed in front of a light source or reflector. It is prized by engineers because it uses very small amounts of electric power, and is therefore suitable for use in battery-powered electronic devices. Liquid crystal displays are commonly used in small systems, such as calculators and portable laptop computers. These nonemissive devices produce a picture by passing polarized light from the surroundings, or from an internal light source, through a liquid-crystal material that can be aligned to either block or transmit the light.

Figure 1.25

The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid. Flat panel displays commonly use nematic (thread-like) liquid crystal compounds that tend to keep the long axes of the rod-shaped molecules aligned. A flat panel display can then be constructed with a nematic liquid crystal.

A simple black-and-white LCD display works by either allowing daylight to be reflected back out at the viewer or preventing it from doing so, in which case the viewer sees a black area. The liquid crystal is the part of the system that either prevents light from passing through it or not. The crystal is placed between two polarizing filters that are at right angles to each other and together block light. When there is no electric current applied to the crystal, it twists light by 90°, which allows the light to pass through the second polarizer and be reflected back. But when a voltage is applied, the crystal molecules align themselves with the field, and light cannot pass through the polarizer: the segment turns black. LCD televisions produce a color image by selectively filtering a white light. The light is typically provided by a series of cold cathode fluorescent lamps (CCFLs) at the back of the screen, although some displays use white or colored LEDs instead. Millions of individual LCD shutters, arranged in a grid, open and close to allow a metered amount of the white light through. Each shutter is paired with a colored filter to remove all but the red, green or blue (RGB) portion of the light from the original white source. Each shutter-filter pair forms a single sub-pixel. The sub-pixels are so small that when the display is viewed from even a short distance, the individual colors blend together to produce a single spot of color, a pixel. The shade of color is controlled by changing the relative intensity of the light passing through the sub-pixels. Liquid crystals encompass a wide range of (typically) rod-shaped polymers that naturally form into thin layers, as opposed to the more random alignment of a normal liquid. Some of these, the nematic liquid crystals, also show an alignment effect between the layers.
The particular direction of the alignment of a nematic liquid crystal can be set by placing it in contact with an alignment layer or director, which is essentially a material with microscopic grooves in it. When placed on a director, the layer in contact will align itself with the grooves, and the layers above will subsequently align themselves with the layers below, the bulk material taking on the director's alignment. In the case of an LCD, this effect is utilized by using two directors arranged at right angles and placed close together with the liquid crystal between them. This forces the layers to align themselves in two directions, creating

a twisted structure with each layer aligned at a slightly different angle to the ones on either side. LCD shutters consist of a stack of three primary elements. On the bottom and top of the shutter are polarizer plates set at right angles. Normally light cannot travel through a pair of polarizers arranged in this fashion, and the display would be black. However, the polarizers also carry the directors, so the liquid crystal between them takes on a twisted structure aligned with the polarizers on either side. As light flows out of the rear polarizer, it naturally follows the liquid crystal's twist, exiting the front of the liquid crystal having been rotated through the correct angle to pass through the front polarizer. LCDs are therefore normally transparent. To turn a shutter off, a voltage is applied across it from front to back. The rod-shaped molecules align themselves with the electric field instead of the directors, destroying the twisted structure. The light no longer changes polarization as it flows through the liquid crystal, and can no longer pass through the front polarizer. By controlling the voltage applied across the crystal, the amount of remaining twist can be selected, which allows the transparency of the shutter to be controlled. To improve switching time, the cells are placed under pressure, which increases the force with which the molecules re-align themselves with the directors when the field is turned off. Several other variations and modifications have been used in order to improve performance in certain applications. In-Plane Switching displays (IPS and S-IPS) offer wider viewing angles and better color reproduction, but are more difficult to construct and have slightly slower response times. IPS displays are used primarily for computer monitors. Vertical Alignment displays (VA, S-PVA and MVA) offer higher contrast ratios and good response times, but suffer from color shifting when viewed from the side.
In general, all of these displays work in a similar fashion by controlling the polarization of the light source.

Addressing sub-pixels

A close-up (300×) view of a typical LCD display, clearly showing the sub-pixel structure. The "notch" at the lower left of each sub-pixel is the thin-film transistor. The associated capacitors and addressing lines are located around the shutter, in the dark areas. In order to address a single shutter on the display, a series of electrodes is deposited on the plates on either side of the liquid crystal. One side has horizontal stripes that form rows; the other has vertical stripes that form columns. By supplying voltage to one row and one column, a field is generated at the point where they cross. Since a metal electrode would be opaque, LCDs use electrodes made of a transparent conductor, typically indium tin oxide. Since addressing a single shutter requires power to be supplied to an entire row and column, some of the field always leaks into the surrounding shutters. Liquid crystals are quite sensitive, and even small amounts of leaked field will cause some level of switching to occur. This partial switching of the surrounding shutters blurs the resulting image. Another problem in early LCD systems was that the voltage needed to set the shutters to a particular twist was very low, and at such low voltages the crystals realigned too slowly. This resulted in slow response times and led to easily visible "ghosting" on fast-moving images, like a mouse cursor on a computer screen. Even scrolling text often rendered as an unreadable blur, and the switching speed was far too slow for a useful television display. In order to attack these problems, modern LCDs use an active matrix design. Instead of powering both electrodes, one set, typically the front, is attached to a common ground. On the rear, each shutter is paired with a thin-film transistor that switches on in response to widely separated voltage levels, say 0 and +5 volts. A new addressing line, the gate line, is added as a separate switch for the transistors.
The rows and columns are addressed as before, but the transistors ensure that only the single shutter at the crossing point is addressed; any leaked field is too small to switch the surrounding transistors. When switched on, a constant and relatively high charge flows from the source line through the transistor into an associated capacitor. The capacitor is charged until it holds the correct control voltage, slowly leaking this through the crystal to the common ground. The charging current is too fast to allow fine control of the resulting stored charge, so pulse code modulation is used to accurately control the overall flow. Not only does this allow for very accurate control over the shutters, since the capacitor can be filled or drained quickly, but the response time of the shutter is dramatically improved as well.
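A toy model of this pulse-based charge control can make the idea concrete. The fixed 50 mV pulse size and the voltages below are hypothetical; real drivers modulate pulse counts and widths far more finely than this sketch suggests:

```python
# Sketch of pulse-modulated charge control: the capacitor is nudged
# toward the target voltage in fixed charge pulses (here 50 mV each)
# rather than set in one uncontrollable burst. Because the existing
# charge is adjusted up or down, a small change needs few pulses.

def charge_to(target_mv, current_mv=0, step_mv=50):
    """Apply fixed pulses until within one step of the target voltage."""
    pulses = 0
    while abs(target_mv - current_mv) > step_mv:
        current_mv += step_mv if target_mv > current_mv else -step_mv
        pulses += 1
    return current_mv, pulses

# Charging from empty takes many pulses...
level_full, n_full = charge_to(500)
# ...but nudging an already-charged capacitor takes only one.
level_nudge, n_nudge = charge_to(400, current_mv=500)
```

This is the same observation the text makes about not draining the capacitors completely: adjusting an existing charge level typically needs far fewer pulses than recharging from zero.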

Building a display

A typical shutter assembly consists of a sandwich of several layers deposited on two thin glass sheets forming the front and back of the display. For smaller display sizes (under 30 inches), the glass sheets can be replaced with plastic. The rear sheet starts with a polarizing film, then the glass sheet, the active matrix components and addressing electrodes, and finally the director. The front sheet is similar, but lacks the active matrix components, replacing them with the patterned color filters. Using a multistep construction process, both sheets can be produced on the same assembly line. The liquid crystal is placed between the two sheets in a patterned plastic sheet that divides the liquid into individual shutters and keeps the sheets at a precise distance from each other. The critical step in the manufacturing process is the deposition of the active matrix components. These have a relatively high failure rate, which renders the affected pixels on the screen "always on". If there are enough broken pixels, the screen has to be discarded. The number of discarded panels has a strong effect on the price of the resulting television sets, and the sharp fall in prices between 2006 and 2008 was due mostly to improved processes. To produce a complete television, the shutter assembly is combined with control electronics and a backlight. The backlight for small sets can be provided by a single lamp using a diffuser or frosted mirror to spread out the light, but for larger displays a single lamp is not bright enough and the rear surface is instead covered with a number of separate lamps. Achieving even lighting over the front of an entire display remains a challenge, and bright and dark spots are not uncommon. Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, and two polarizing filters, the axes of transmission of which are (in most cases) perpendicular to each other.
With no actual liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer. In most cases the liquid crystal has double refraction. The surfaces of the electrodes that are in contact with the liquid crystal material are treated so as to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is rubbed unidirectionally using, for example, a cloth.

The direction of the liquid crystal alignment is then defined by the direction of rubbing. Electrodes are made of a transparent conductor called indium tin oxide (ITO). Before an electric field is applied, the orientation of the liquid crystal molecules is determined by the alignment at the surfaces of the electrodes. In a twisted nematic device (still the most common liquid crystal device), the surface alignment directions at the two electrodes are perpendicular to each other, and so the molecules arrange themselves in a helical structure, or twist. This induces the rotation of the polarization of the incident light, and the device appears grey. If the applied voltage is large enough, the liquid crystal molecules in the centre of the layer are almost completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid crystal layer. This light will then be mainly polarized perpendicular to the second filter, and thus be blocked, and the pixel will appear black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, thus constituting different levels of grey. This electric field also controls (reduces) the double refraction properties of the liquid crystal.

LCD with top polarizer removed from the device and placed on top, such that the top and bottom polarizers are parallel. The optical effect of a twisted nematic device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, these devices are usually operated between crossed polarizers such that they appear bright with no voltage (the eye is much more sensitive to variations in the dark state than the bright state). These devices can also be operated between parallel polarizers, in which case the bright and dark states are reversed. The voltage-off dark state in this configuration appears blotchy, however, because of small variations of thickness across the device.

Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided either by applying an alternating current or by reversing the polarity of the electric field as the device is addressed (the response of the liquid crystal layer is identical regardless of the polarity of the applied field). When a large number of pixels is needed in a display, it is not technically feasible to drive each one directly, since each pixel would then require independent electrodes. Instead, the display is multiplexed. In a multiplexed display, electrodes on one side of the display are grouped and wired together (typically in columns), and each group gets its own voltage source. On the other side, the electrodes are also grouped (typically in rows), with each group getting a voltage sink. The groups are designed so that each pixel has a unique, unshared combination of source and sink. The electronics, or the software driving the electronics, then turns on sinks in sequence and drives the sources for the pixels of each sink.
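The row/column multiplexing just described can be sketched as a scan loop. Representing "driving" as recorded (row, column, value) events is purely for illustration; real hardware drives all columns of the active row in parallel:

```python
# Sketch of multiplexed addressing: rows (sinks) are activated one at a
# time, and while a row is active every column (source) is set to that
# row's pixel values. An R x C panel therefore needs R + C drivers
# instead of R * C independent electrode pairs.

def scan_frame(frame):
    """frame: list of rows, each a list of pixel values.
    Returns the (row, col, value) drive events in scan order."""
    events = []
    for r, row in enumerate(frame):        # turn on one sink at a time
        for c, value in enumerate(row):    # drive all sources for it
            events.append((r, c, value))
    return events

frame = [[1, 0],
         [0, 1]]
events = scan_frame(frame)   # four events, one per pixel, in row order
```

Each pixel still gets its own unique source/sink combination, which is the property the text says the groups are designed to guarantee.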

Let's take a look at how the LCD display works.

Figure 1-26 Architecture and components of Raster-Scan Systems & Random-Scan Systems
Contents:

1. Video Controller
2. Display Processor
3. Random-scan Systems

1.5 Raster-Scan Systems

Interactive raster-graphics systems typically employ several processing units. In addition to the CPU, a special-purpose processor called the video controller or display controller is used to control the operation of the display device. The figure above shows the organization of a raster system. The frame buffer can be anywhere in the system memory, and the video controller accesses the frame buffer to refresh the screen.

Video Controller

A fixed area of the system memory is reserved for the frame buffer, and the video controller is given direct access to the frame buffer memory. The coordinate origin of the graphics monitor is at the lower left screen corner, with positive x values increasing to the right and y values increasing from bottom to top.
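The video controller's refresh scan can be sketched as a nested loop over the frame-buffer coordinates. The 3×2 resolution below is hypothetical; a real controller implements this with two hardware registers rather than software loops:

```python
# Sketch of the refresh cycle: two "registers" (x, y) walk the frame
# buffer from the top scan line (y = ymax) down to y = 0, moving left
# to right within each line.

def refresh_order(xmax, ymax):
    """Yield (x, y) addresses in the order the controller reads them."""
    for y in range(ymax, -1, -1):      # top scan line first
        for x in range(xmax + 1):      # left to right along the line
            yield (x, y)

order = list(refresh_order(xmax=2, ymax=1))
# The first address read is (0, ymax); after the last pixel (xmax, 0)
# the registers reset to their initial values and the cycle repeats.
```

At each (x, y) address the stored intensity is fetched from the frame buffer and used to drive the beam for that screen position.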

Figure 1.27 Architecture of Raster graphics system with display processor

The above diagram shows the refresh operation of the video controller. Two registers are used to store the coordinates of the screen pixels. Initially x = 0 and y = ymax. The value stored in the frame buffer for this pixel position is retrieved and used to set the intensity of the beam. The x register is then incremented by 1 and the value for the next pixel is retrieved; in this way the pixel values are retrieved line by line, with y decremented at the end of each scan line. Once the last pixel is reached, the registers are reset to their initial values to repeat the process.

Display Processor

The purpose of the display processor or graphics controller is to free the CPU from the graphics chores. In addition to the system memory, a separate display-processor memory area can also be provided. A major task of the display processor is digitizing a picture definition given in an application program into a set of pixel-intensity values for storage in the frame buffer. This digitization process is called scan conversion. Lines and other geometric objects are converted into sets of discrete intensity points. Characters can be defined with rectangular grids, or they can be defined with curved outlines. To reduce the memory space required to store the image information, each scan line is stored as a set of number pairs. One number of each pair indicates an intensity value, and the second number specifies the number of adjacent pixels on the scan line that have the same intensity. This technique is called run-length encoding.

1.5.1 Raster scan

A raster scan, or raster scanning, is the rectangular pattern of image capture and reconstruction in television. By analogy, the term is used for raster graphics, the pattern of image storage and transmission used in most computer bitmap image systems. The word raster comes from the Latin rastrum (a rake), which is derived from radere (to scrape); see also rastrum, an instrument for drawing musical staff lines. The pattern left by the tines of a rake, when drawn straight, resembles the parallel lines of a raster: this line-by-line scanning is what creates a raster. It is a systematic process of covering the area progressively, one line at a time. Although often a great deal faster, it is similar in the most general sense to how one's gaze travels when one reads English-language text.
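The run-length encoding described for the display processor can be sketched as follows; the intensity values are hypothetical 8-bit samples:

```python
# Run-length encoding of a scan line: each run of identical intensities
# is stored as a single (intensity, run_length) pair.

def rle_encode(scanline):
    """Compress a scan line into (intensity, run_length) pairs."""
    runs = []
    for value in scanline:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([value, 1])     # start a new run
    return [tuple(run) for run in runs]

def rle_decode(runs):
    """Expand (intensity, run_length) pairs back into a scan line."""
    return [value for value, count in runs for _ in range(count)]

line = [0, 0, 0, 255, 255, 0]
encoded = rle_encode(line)
assert rle_decode(encoded) == line      # lossless round trip
```

The saving depends on the image: long uniform runs (backgrounds, solid fills) compress well, while a line of rapidly varying intensities can actually grow, since each one-pixel run costs a full pair.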

Figure 1-28

Figure 2-25

Figure 1-29

1.6 Random-scan Systems

An application program is input and stored in the system memory along with a graphics package. Graphics commands in the program are translated by the graphics package into a display file stored in the system memory. This display file is then accessed by the display processor to refresh the screen. The display processor cycles through each command in the display file program once during every refresh cycle. The display processor in a random-scan system is sometimes referred to as a display processing unit or a graphics controller.

Figure 1.30 Architecture of simple Random scan system

Graphic patterns are drawn on a random scan system by directing the electron beam along the component lines of the picture. Lines are defined by the values for their coordinate endpoints, and these input co-ordinate values are converted to x and y deflection voltages. A scene is then drawn one line at a time by positioning the beam to fill in the line between specified endpoints.

1.7 INPUT DEVICES 1.7.1 DIGITIZERS

Figure 1-31

What Is a Digitizer?

Digitizers convert analog or physical input into digital images. This makes them related to both scanners and mice, although current digitizers serve completely different roles.

History

Much like scanners and fax machines, digitizers trace their ancestry to the late 19th century, when they emerged from telegraph-related technology. Modern digitizers came about in the 1950s, but only gained popularity with the advent of 16-bit computers in the 1980s. Digitizers of that era were often confused with scanners, since both convert images into digital form.

Identification

Modern digitizers appear as flat scanning surfaces or tablets that connect to a computer workstation. The surface is touch-sensitive, sending signals to the software, which translates them into images on the screen.

Significance

Digitizers carry out important work in computer-aided design, graphics design and engineering. They also help convert hand-drawn images into textures and animation in video games and movie CGI.

Types

In addition to the tablet itself, digitizers have an input stylus that acts as a pen. The mode of input varies: earlier models relied on simple pressure and electrical impulses, while more advanced designs offer better accuracy with lasers and even camera pens.

Features

Important factors to consider when looking at digitizers are resolution, sensitivity and image recognition. While users can input any image, the tablet and software may not be able to convert it fully. Handwriting recognition and text auto-detect are also popular features.

Expert Insight

The largest maker of digitizers in the world is Wacom, a name synonymous with digitizing among graphics designers and game artists. Their product range includes smart pens and large tablet PCs, covering almost every conceivable use.

1.7.2 Computer keyboard

Figure 1-32 A laptop's keyboard.

In computing, a keyboard is a typewriter-style device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. With the decline of punch cards and paper tape, interaction via teletype-style keyboards became the main input method for computers. Despite the development of alternative input devices, such as the mouse, touchscreen, pen devices, character recognition and voice recognition, the keyboard remains the most commonly used and most versatile device for direct (human) input into computers. A keyboard typically has characters engraved or printed on the keys, and each press of a key typically corresponds to a single written symbol. However, producing some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys or simultaneous key presses can produce actions or computer commands. In normal usage, the keyboard is used to type text and numbers into a word processor, text editor or other program. In a modern computer, the interpretation of key presses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all key presses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or with keyboards that have special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination, which brings up a task window or shuts down the machine. It is the only way to enter commands on a command-line interface.

Keyboard types

One factor determining the size of a keyboard is the presence of duplicate keys, such as a separate numeric keypad, for convenience. The keyboard size also depends on the extent to which the system relies on producing a single action from a combination of subsequent or simultaneous keystrokes (with modifier keys, see below), or from multiple presses of a single key. A keyboard with few keys is called a keypad. See also text entry interface. Another factor determining the size of a keyboard is the size and spacing of the keys. Reduction is limited by the practical consideration that the keys must be large enough to be easily pressed by fingers. Alternatively, a tool is used for pressing small keys.

1.7.3 Image scanner

In computing, an image scanner, often abbreviated to just scanner, is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner, where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, have evolved from text-scanning "wands" to 3D scanners used for industrial design, reverse engineering, test and measurement, orthotics, gaming and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical. Modern scanners typically use a charge-coupled device (CCD) or a contact image sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. A rotary scanner, used for high-speed document scanning, is another type of drum scanner, using a CCD array instead of a photomultiplier. Other types of scanners are planetary scanners, which take photographs of books and documents, and 3D scanners, which produce three-dimensional models of objects.
Another category of scanner is digital camera scanners, which are based on the concept of reprographic cameras. Due to increasing resolution and new features such as anti-shake, digital cameras have become an attractive alternative to regular scanners. While still

having disadvantages compared to traditional scanners (such as distortion, reflections, shadows, low contrast), digital cameras offer advantages such as speed, portability and gentle digitizing of thick documents without damaging the book spine. New scanning technologies are combining 3D scanners with digital cameras to create full-color, photorealistic 3D models of objects.
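The resolution figures quoted for scanners translate directly into pixel dimensions and uncompressed file sizes. As a rough illustration (the function names and the 24-bit colour depth here are assumptions for the sketch, not from the text), scanning an 8.5 by 11 inch page at 300 dpi gives:

```python
def scan_dimensions(width_in, height_in, dpi):
    """Pixel dimensions of a scan at a given resolution (dots per inch)."""
    return int(width_in * dpi), int(height_in * dpi)

def scan_bytes(width_px, height_px, bits_per_pixel=24):
    """Uncompressed size in bytes of the scanned image."""
    return width_px * height_px * bits_per_pixel // 8

w, h = scan_dimensions(8.5, 11, 300)   # a US Letter page at 300 dpi
print(w, h)                            # 2550 3300
print(scan_bytes(w, h))                # 25245000 bytes, about 24 MB
```

This is why high-resolution flatbed scans of even a single page are large, and why compression is normally applied before storage.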

1.7.4 Joystick

Figure 1-34 Joystick elements: #1 Stick; #2 Base; #3 Trigger; #4 Extra buttons; #5 Autofire switch; #6 Throttle; #7 Hat switch (POV hat); #8 Suction cup

A joystick is an input device consisting of a stick that pivots on a base and reports its angle or direction to the device it is controlling. Joysticks are often used to control video games, and usually have one or more push-buttons whose state can also be read by the computer. A popular variation of the joystick used on modern video game consoles is the analog stick. The joystick has been the principal flight control in the cockpit of many aircraft, particularly military fast jets, either as a center stick or side-stick. Joysticks are also used for controlling machines such as cranes, trucks, underwater unmanned vehicles, wheelchairs, surveillance cameras and zero-turning-radius lawn mowers. Miniature finger-operated joysticks have been adopted as input devices for smaller electronic equipment such as mobile phones.

1.7.5 LIGHT PEN

A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with a computer's CRT display or monitor. It allows the user to point to displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy. It was long thought that a light pen could work with any CRT-based display but not with LCD screens, projectors and other display devices (though Toshiba and Hitachi demonstrated a similar idea at the "Display 2006" show in Japan). However, in 2011 Fairlight Instruments released its Fairlight CMI-30A, which uses a 17" LCD monitor with light pen control.

A light pen is fairly simple to implement. Just like a light gun, a light pen works by sensing the sudden small change in brightness of a point on the screen when the electron gun refreshes that spot. By noting exactly where the scanning has reached at that moment, the X, Y position of the pen can be resolved. This is usually achieved by the light pen causing an interrupt, at which point the scan position can be read from a special register, or computed from a counter or timer. The pen position is updated on every refresh of the screen.

The light pen became moderately popular during the early 1980s. It was notable for its use in the Fairlight CMI and the BBC Micro. IBM PC compatible CGA, HGC and some EGA graphics cards featured a connector for a light pen as well. Even some consumer products were given light pens, in particular the Thomson MO5 computer family. Because the user was required to hold his or her arm in front of the screen for long periods of time, or to use a desk that tilts the monitor, the light pen fell out of use as a general-purpose input device. The first light pen was created around 1952 as part of the Whirlwind project at MIT.[2][3] Since the current version of the game show Jeopardy! began in 1984, contestants have used a light pen to write down their wagers and responses for the Final Jeopardy! round.
Since light pens operate by detecting light emitted by the screen phosphors, some nonzero intensity level must be present at the coordinate position to be selected; otherwise the pen is never triggered.
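The scan-position arithmetic described above can be sketched in a few lines of Python. This is an illustrative model only: it assumes the latched register simply counts pixels from the top-left corner of the frame, which real hardware may approximate differently.

```python
def pen_position(pixel_counter, screen_width):
    """Recover (x, y) from the raster scan counter latched at the pen interrupt.

    The counter counts pixels from the top-left of the frame, so the column
    is the remainder and the row the quotient of a division by the
    scan-line width (a simplified model of the special register)."""
    x = pixel_counter % screen_width
    y = pixel_counter // screen_width
    return x, y

# The pen fires while the beam is drawing pixel 645 of a 640-pixel-wide frame:
print(pen_position(645, 640))   # (5, 1) -> column 5 of scan line 1
```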

1.7.6 MOUSE

Figure 1-35 Mouse. A computer mouse with the most common standard features: two buttons and a scroll wheel, which can also act as a third button.

In computing, a mouse is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of an object held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, or extra buttons or features that can add more control or dimensional input. The mouse's motion typically translates into the motion of a cursor on a display, which allows for fine control of a graphical user interface.

1.7.7 Space Ball

The Space Ball is as essential a tool as the mouse or keyboard, giving the ability to manipulate 3D objects on the screen while simultaneously controlling 3D camera angles and positions for viewing those objects.

Fig 1-36

Components: With the Space Ball you can intuitively zoom, pan and rotate models, exploring and navigating your designs as naturally as if they were objects in the real world. The Space Ball delivers the comfort and efficiency of a proven two-handed work style with even greater performance than before. Use the motion controller to position, view and navigate 3D models and data sets, while directing the standard mouse to simultaneously select, create, edit and annotate. It's a much more natural, free-flowing way to work.

With a common interface, executing even simple moves requires a decision, then keystrokes and/or mouse clicks. This interrupts your natural motion, slowing you down and restricting you from attempting more complete or continuous motion. The greater flexibility and interactivity of a 3D motion controller, by contrast, makes even difficult moves easy; you're free to go farther and be even more creative.

Place your fingers gently on the controller's ball. The ball senses the pressure you apply to it (pushes, pulls and twists) and uses that information to correspondingly move your model, camera or eye point on the screen. Pull up or push down to move your model, camera or eye point up or down. Push left or right to move your model left or right. Pull towards you or push away to move your model nearer or farther away. Orient your model on the screen by simply twisting in any direction to rotate it around the X, Y or Z axis (pitch, roll, yaw). You will quickly be able to combine all movements and control your 3D models with six degrees of freedom.

The amount of pressure you apply controls the speed of movement. A light touch moves your models slowly and accurately; just increase pressure to increase speed. It will be like holding your model in your hand, interacting in 3D as you do in the real world.
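The pressure-to-speed behaviour described above can be modelled roughly as a proportional mapping from sensor readings to motion increments. The following Python sketch is a simplification (the function name, `gain` value and sensor ranges are all assumptions, not real device parameters), but it captures the six-degree-of-freedom idea:

```python
def ball_to_motion(forces, torques, gain=0.01):
    """Map sensed pressure on the ball to translation and rotation increments.

    `forces` and `torques` are 3-tuples of sensor readings along/about the
    X, Y and Z axes. Harder pressure yields a proportionally larger step,
    so a light touch moves the model slowly and precisely while firm
    pressure moves it quickly."""
    translation = tuple(gain * f for f in forces)
    rotation = tuple(gain * t for t in torques)   # pitch, roll, yaw increments
    return translation, rotation

# A firm push along X combined with a twist about Z:
translation, rotation = ball_to_motion((100, 0, 0), (0, 0, 50))
# translation is roughly (1.0, 0, 0); rotation roughly (0, 0, 0.5) per update
```

Applied once per display refresh, increments like these produce the continuous, speed-sensitive motion the text describes.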

1.7.8 TRACK BALL

Fig 1-37 The Kensington Expert Mouse trackball can use a standard American pool ball.

A trackball is a pointing device consisting of a ball held by a socket containing sensors to detect rotation of the ball about two axes, like an upside-down mouse with an exposed protruding ball. The user rolls the ball with the thumb, fingers or the palm of the hand to move a cursor. Large tracker balls are common on CAD workstations for easy precision. Before the advent of the touchpad, small trackballs were common on portable computers, where there may be no desk space on which to run a mouse. Some small thumb balls clip onto the side of the keyboard and have integral buttons with the same function as mouse buttons. The trackball was invented by Tom Cranston and Fred Longstaff as part of the Royal Canadian Navy's DATAR system in 1952,[1] eleven years before the mouse was invented. This first trackball used a Canadian five-pin bowling ball.

Fig 1-38 The world's first trackball, invented by Tom Cranston, Fred Longstaff and Kenyon Taylor working on the Royal Canadian Navy's DATAR project in 1952. It used a standard Canadian five-pin bowling ball.

When mice still used a mechanical design (with slotted 'chopper' wheels interrupting a beam of light to measure rotation), trackballs had the advantage of being in contact with the user's hand, which is generally cleaner than the desk or mouse pad and does not drag lint into the chopper wheels. The late-1990s replacement of mouse balls by direct optical tracking put trackballs at a disadvantage and forced them to retreat into niches where their distinctive merits remained more important. Most trackballs now have direct optical tracking which follows dots on the ball.

As with modern mice, most trackballs now have an auxiliary device primarily intended for scrolling. Some have a scroll wheel like most mice, but the most common type is a scroll ring which is spun around the ball. Kensington's SlimBlade Trackball similarly tracks the ball itself in three dimensions for scrolling. Three major companies (Logitech, A4Tech and Kensington) currently produce trackballs, although A4Tech has not released a new model in several years. Microsoft was a major producer but has since discontinued all of its products. The Microsoft Trackball Explorer continues to be extremely popular (it has no analogous design in production by another company), with used models selling for around $200 on eBay.
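Conceptually, the trackball's sensors turn rotation of the ball about its two axes into cursor movement proportional to the arc length travelled by the ball's surface under the hand. A hypothetical Python sketch (the radius, the counts-per-millimetre scaling and the function name are illustrative assumptions, not specifications of any real device):

```python
import math

def ball_delta(degrees_x, degrees_y, ball_radius_mm, counts_per_mm=4):
    """Convert ball rotation about its two axes into cursor motion counts.

    The surface of the ball travels an arc length r * theta; the sensors
    report that travel as discrete counts, just as a mouse reports the
    distance it is dragged across the desk."""
    arc_x = math.radians(degrees_x) * ball_radius_mm
    arc_y = math.radians(degrees_y) * ball_radius_mm
    return round(arc_x * counts_per_mm), round(arc_y * counts_per_mm)

# A quarter-turn of a 20 mm ball about one axis:
print(ball_delta(90, 0, 20))   # (126, 0)
```

A larger ball thus gives more counts per degree of rotation, which is one reason large tracker balls suit precision CAD work.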

1.7.9 VOICE SYSTEMS

A voice system is a device in which speech is used to input data or system commands directly into a system. Such equipment involves the use of speech recognition processes and can replace or supplement other input devices. Some voice input devices can recognize spoken words from a predefined vocabulary; some have to be trained for a particular speaker. Speech recognition (also known as automatic speech recognition or computer speech recognition) converts spoken words to machine-readable input (for example, to key presses, using the binary code for a string of character codes). The term "voice recognition" is sometimes incorrectly used to refer to speech recognition, when it actually refers to speaker recognition, which attempts to identify the person speaking as opposed to what is being said. Confusingly, journalists and manufacturers of devices that use speech recognition for control commonly use the term "voice recognition" when they mean speech recognition.

QUIZ QUESTIONS

1. What is a refresh buffer? Identify the contents and organization of the refresh buffer for the case of a raster display and a vector display.

2. Find the number of colours possible on a 512*512 raster screen with a 3-plane frame buffer for each of red, green and blue.

3. a) Why won't a light pen work with a liquid crystal display (LCD)?
b) How does a magnetic tablet system locate the coordinates of the stylus?
c) Why do computer graphics systems often use more than one processor?
d) Consider two raster systems with resolutions of 640 by 480 and 1280 by 1024. How many pixels could be accessed per second in each of these systems by a display controller that refreshes the screen at a rate of 60 frames per second? What is the access time per pixel in each system?

4. Digitize a line from (10, 2) to (20, 18) on a raster screen using Bresenham's straight-line algorithm. The result may be shown on a Cartesian graph.

5. Find the pixel locations approximating the first octant of a circle having centre (2, 3) and a radius of 2 units, using the Bresenham circle algorithm. Use this to plot the complete circle on a Cartesian graph representing the pixel grid.

6. Plot a circle centred at (5, 5) having a radius of 5 units using the midpoint circle algorithm and a Cartesian graph.

7. Compute the following:
(a) Size of an 800*600 image at 240 pixels per inch.
(b) Resolution of a 2*2 inch image that has 512*512 pixels.
(c) Height of the image obtained by resizing a 1024*768 image to one that is 640 pixels wide with the same aspect ratio.
(d) Width of an image having a height of 5 inches and an aspect ratio of 1.5.

8. Find the number of colours a frame buffer of 8 bit planes for each of red, green and blue, with a 10-bit-wide lookup table, can produce.

9. Find the amount of memory required by an 8-plane frame buffer for each of red, green and blue, having 1024*768 resolution.

10. Find the refresh rate of a 512*512 frame buffer if the access time for each pixel is 200 nanoseconds (ns).
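For questions of the kind asked in 9 and 10, the underlying arithmetic can be checked with a short Python sketch (the helper names are illustrative, not standard terminology):

```python
def max_refresh_rate(width, height, access_time_ns):
    """Highest frame rate a controller can sustain if every pixel in the
    frame buffer must be accessed once per refresh."""
    frame_time_s = width * height * access_time_ns * 1e-9
    return 1.0 / frame_time_s

def frame_buffer_bytes(width, height, bits_per_pixel):
    """Memory required by a frame buffer of the given resolution and depth."""
    return width * height * bits_per_pixel // 8

# 512*512 pixels at 200 ns per pixel access:
print(max_refresh_rate(512, 512, 200))    # about 19.07 frames per second
# 1024*768 pixels with 8 planes each of red, green and blue (24 bits):
print(frame_buffer_bytes(1024, 768, 24))  # 2359296 bytes (2.25 MB)
```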
