
DHANALAKSHMI COLLEGE OF ENGINEERING,

CHENNAI

Department of Computer Science and Engineering

CS6504-COMPUTER GRAPHICS

Anna University 2 & 16 Mark Questions & Answers

Year / Semester: III / V


Regulation: 2013

Academic year: 2017 - 2018


UNIT – IV ILLUMINATION AND COLOUR MODELS
PART - A
1. State the difference between CMY and HSV color models.(nov/dec 2012)
The HSV (Hue, Saturation, Value) model is a color model which uses color descriptions that have a
more intuitive appeal to a user. To give a color specification, a user selects a spectral color and the
amounts of white and black that are to be added to obtain different shades, tints, and tones.
A color model defined with the primary colors cyan, magenta, and yellow is useful for describing
color output to hard-copy devices.

2. What are the subtractive colors?(may/june 2012)


The RGB model is an additive system, whereas the Cyan-Magenta-Yellow (CMY) model is a subtractive
color model. In a subtractive model, the more of an element that is added, the more it subtracts from
white. So, if none of these are present the result is white, and when all are fully present the result is
black.

3. Define - YIQ Color Model.


In the YIQ color model, luminance (brightness) information is contained in the Y parameter, and
chromaticity information (hue and purity) is contained in the I and Q parameters.
A combination of red, green and blue intensities is chosen for the Y parameter to yield the
standard luminosity curve. Since Y contains the luminance information, black-and-white TV
monitors use only the Y signal.

4. What is meant by shading of objects? (nov/dec 2011)


A shading model dictates how light is scattered or reflected from a surface. The shading models
described here focus on achromatic light. Achromatic light has brightness and no color; it is a
shade of gray, so it is described by a single value, its intensity.
A shading model uses two types of light source to illuminate the objects in a scene: point light
sources and ambient light.

5. What is texture?( nov/dec 2011)


The realism of an image is greatly enhanced by adding surface texture to the various faces of a mesh
object. The basic technique begins with some texture function, texture(s,t), in texture space,
which has two parameters s and t. The function texture(s,t) produces a color or intensity value for
each value of s and t between 0 (dark) and 1 (light).

6. What are the types of reflection of incident light?(nov/dec 2013)


There are two different types of reflection of incident light:
 Diffuse scattering.
 Specular reflections.

7. Define – Rendering. (may/june 2013)


Rendering is the process of generating an image from a model (or models in what
collectively could be called a scene file) by means of computer programs. The result of such
a process can also be called a rendering.

8. Differentiate flat and smooth shading (may/june 2013)


The main distinction is between a shading method that accentuates the individual polygons (flat
shading) and a method that blends the faces to de-emphasize the edges between them (smooth
shading).
9. Define – Shading. (may/june 2012)
Shading is a process used in drawing for depicting levels of darkness on paper by applying media
more densely or with a darker shade for darker areas, and less densely or with a lighter shade for
lighter areas.

10. What is meant by shadow? (nov/dec 2012)


Shadows make an image more realistic. The way one object casts a shadow on another object gives
important visual clues as to how the two objects are positioned with respect to each other. Shadows
convey a lot of information; in effect, you are getting a second look at the object from the viewpoint
of the light source.

11. What are the two methods for computing shadows?


 Shadows as Texture.
 Creating shadows with the use of a shadow buffer.

12. Write any two Drawbacks of Phong Shading


 Relatively slow in speed.
 More computation is required per pixel.

13. What are the two common sources of textures?


 Bitmap Textures.
 Procedural Textures.

14. Write two types of smooth shading.


 Gouraud shading.
 Phong shading.

15. What is a color model?


A color model is a method for explaining the properties or behavior of color within some particular
context. Example: XYZ model, RGB model.

16. What is meant by intensity of light?


Intensity is the radiant energy emitted per unit time, per unit solid angle, and per unit projected area
of source.

17. What is hue?


The perceived light has a dominant frequency (or dominant wavelength). The dominant frequency
is also called the hue or, simply, the color.

18. What is purity of light?


Purity describes how washed out or how “pure” the color of the light appears. Pastels and pale
colors are described as less pure.

19. Define - Chromaticity.


The term chromaticity is used to refer collectively to the two properties describing color
characteristics: purity and dominant frequency.

20. How is the color of an object determined?


When white light is incident upon an object, some frequencies are reflected and some are absorbed
by the object. The combination of frequencies present in the reflected light determines what we
perceive as the color of the object.

21. What is meant by purity or saturation?


Purity describes how washed out or how "pure" the color of the light appears.

22. What is meant by complementary colors?


If two color sources combine to produce white light, they are referred to as complementary
colors. Examples of complementary color pairs are red and cyan, green and magenta, and blue and
yellow.

23. Define -Primary Colors.


The two or three colors used to produce other colors in a color model are referred to as primary
colors.

24. State the use of chromaticity diagram.


 Comparing color gamuts for different sets of primaries.
 Identifying complementary colors.
 Determining the dominant wavelength and purity of a given color.

25. What is Color Look up table?


In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each
color. It is necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming.
To avoid this, the video controller uses a look-up table to store entries of pixel values in RGB
format. This look-up table is commonly known as a colour table.

26. What is the use of hidden line removing algorithm?


The hidden line removal algorithm determines the lines, edges, surfaces or volumes that are visible
or invisible to an observer located at a specific point in space.

PART – B
1. Explain in detail the basic illumination models.

Light Sources

 Point

 Infinitely Distant

 Radial Intensity Attenuation – (1/d²)

Problems: A point source with 1/d² attenuation does not always produce realistic
results.
It produces too much intensity variation for near objects and not enough for
distant ones.

Directional Light Sources and Spotlight effects


Angular Intensity Attenuation

Surface Lighting Effects

Diffuse Reflection
Specular Reflection
Ambient

Simple Illumination Model

Surfaces in real world environments receive light in 3 ways:

1. Directly from existing light sources such as the sun or a lit candle

2. Light that passes and refracts through transparent objects such as water or a glass vase

3. Light reflected, bounced, or diffused from other existing surfaces in the environment

Local Illumination

Material Models

 Diffuse illumination

Lambert's cosine law of reflection involves the following quantities:

1. n, a normal vector to the surface to be illuminated.

2. L, a vector from the surface position that points towards the light source.
3. Il, an intensity for a point light source.

4. kd, a diffuse reflection constant.

The resulting equation, I = kd Il (n · L) = kd Il cos θ, gives the brightness of a surface point in terms
of the brightness of the light source and its orientation relative to the surface normal vector, n.

 I is the reflected intensity


 Measures how bright the surface is at that point.
 Surface brightness varies as a function of the angle between n and L
 When n and L coincide, the light source is directly overhead.
 I is at a maximum and cos θ = 1.
 As the angle increases to 90°, the cosine decreases the intensity to 0.
 All the quantities in the equation are normalized between 0 and 1.

I is converted into frame buffer intensity values by multiplying by the number of shades available.

With 2^8 = 256 possible shades, we have 1 × 255 as the brightest frame buffer
intensity.

For n and L at an angle of 45°, I = cos 45° × 256 ≈ 181.
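
A minimal Python sketch of this diffuse calculation (the vectors and the values of kd and Il below are hypothetical, chosen only to illustrate the formula):

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_intensity(n, L, k_d, I_l):
    # I = kd * Il * cos(theta), where cos(theta) = n . L for unit vectors;
    # max() clamps surfaces that face away from the light to zero.
    n, L = normalize(n), normalize(L)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, L)))
    return k_d * I_l * cos_theta

# n and L at 45 degrees: I = cos 45 ~ 0.707, frame buffer shade ~ 181 of 256
print(round(lambert_intensity((0, 0, 1), (1, 0, 1), k_d=1.0, I_l=1.0) * 256))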

 An image rendered with a Lambertian shader exhibits a dull, matte finish.


 It appears as if it has been viewed by a coal miner with a lantern attached to his helmet.
 In reality, an object is not only subject to direct illumination from the primary light
source Il, but also to secondary scattered light from all remaining surfaces.

Ambient illumination

The simple illumination model is unable to directly accommodate all of this scattered light,


so it is grouped together as an independent intensity, Ia.
The formula becomes

I = Ia ka + kd Il (n · L)

where Ia ka is the ambient illumination term, taking into account the additional environmental
illumination, Ia, and the ability of the object to absorb it, ka.

Figure: Only ambient illumination

Figure: Diffuse + ambient illumination

 Illumination decreases from the light source by 1/d².


 Objects at a greater distance from the light source receive less illumination and appear
darker.
 The above equation is distance independent.
 Dividing the Lambertian term by d² would seem to get the physics right, but it makes the
intensity vary sharply over short distances.
 A modified distance dependence is therefore employed, giving, for example,

I = Ia ka + kd Il (n · L) / (d + d0)

where d is the distance from the light source to the object and d0 is a constant that prevents the
intensity from varying too sharply at small distances.
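
A small Python sketch of the ambient plus distance-attenuated diffuse term; the constants and the (d + d0) falloff are illustrative assumptions rather than fixed values:

def illuminate(n_dot_L, d, k_a=0.2, I_a=1.0, k_d=0.7, I_l=1.0, d0=1.0):
    # Ambient term plus a diffuse term with the softened falloff 1/(d + d0);
    # d0 keeps the intensity from changing too sharply at small distances.
    ambient = k_a * I_a
    diffuse = k_d * I_l * max(0.0, n_dot_L) / (d + d0)
    return min(1.0, ambient + diffuse)

# The same surface orientation rendered nearer and farther from the light:
print(illuminate(0.7, d=1.0), illuminate(0.7, d=10.0))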

 Specular Highlights – (Phong Reflection Model)

Regions of significant brightness, exhibited as spots or bands, characterize objects


that specularly reflect light.
Specular highlights originate from smooth, sometimes mirrorlike, surfaces.
The Fresnel equation is used to simulate this effect.
The Fresnel equation states that for a perfectly reflecting surface the angle of
incidence equals the angle of reflection.

Most objects are not perfect mirrors; there is some angular scattering of light.

If the viewer increases the angle between the line of sight vector (S) and the
reflectance vector (R), the bright spot gradually disappears. Smooth surfaces scatter light less than
rough surfaces.

This produces more localized highlights.


Building this effect into the lighting model gives

I = Ia ka + Il [ kd (n · L) + ks (R · S)^n ]

 Specular reflectance term possesses a specular reflectance constant, ks.


 The cosine term is raised to the nth power.
 Small values of n (e.g. 5) distribute the specular highlights, characteristic of glossy paper.
 High values of n (e.g. 50) are characteristic of metals.
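
Putting the ambient, diffuse and specular terms together, a Python sketch of the model could look as follows; the constants and vectors are hypothetical, and R is computed as 2(n · L)n − L:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def phong(n, L, S, k_a=0.2, k_d=0.6, k_s=0.4, shininess=20, I_a=1.0, I_l=1.0):
    # n: surface normal, L: direction to the light, S: direction to the viewer.
    n, L, S = normalize(n), normalize(L), normalize(S)
    # R = 2(n . L)n - L, the mirror reflection of L about the normal.
    R = tuple(2 * dot(n, L) * nc - lc for nc, lc in zip(n, L))
    diffuse = k_d * max(0.0, dot(n, L))
    specular = k_s * max(0.0, dot(R, S)) ** shininess
    return min(1.0, k_a * I_a + I_l * (diffuse + specular))

print(phong(n=(0, 0, 1), L=(1, 0, 1), S=(0, 0, 1)))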

2. Explain in detail the halftone patterns and dithering techniques.

Dithering - A technique used in quantization processes such as graphics and audio to reduce or
remove the correlation between noise and signal.
Dithering is used in computer graphics to create additional colors and shades from an existing
palette by interspersing pixels of different colors. On a monochrome display, areas of grey are
created by varying the proportion of black and white pixels. In color displays and printers, colors
and textures are created by varying the proportions of existing colors. The different colors can
either be distributed randomly or regularly.
The higher the resolution of the display, the smoother the dithered color will appear to the eye.
Dithering doesn't reduce resolution. There are three types: regular dithering which uses a very
regular predefined pattern; random dither where the pattern is a random noise; and pseudo random
dither which uses a very large, very regular, predefined pattern.

Dithering is used to create patterns for use as backgrounds, fills and shading, as well as for
creating halftones for printing. When used for printing, it is very sensitive to paper properties.
Dithering can be combined with rasterising. It is not related to anti-aliasing.
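
As an illustration of the regular (ordered) dithering described above, the following Python sketch thresholds a greyscale image against a small repeating pattern; the 2x2 Bayer matrix is an assumed choice, and real implementations often use larger matrices:

# 2x2 Bayer matrix; each entry becomes a threshold in (0, 1).
BAYER_2X2 = [[0, 2],
             [3, 1]]

def ordered_dither(gray, width, height):
    # gray: row-major list of intensities in [0, 1]; returns 0/1 pixels
    # for a bi-level (monochrome) display.
    out = []
    for y in range(height):
        for x in range(width):
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4.0
            out.append(1 if gray[y * width + x] > threshold else 0)
    return out

# A uniform 50% grey dithers to an alternating checkerboard-like pattern.
print(ordered_dither([0.5] * 16, 4, 4))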

Dithering technique:
The dither algorithm performs an optimal combination of a sequence of images as far as resolution is
concerned. The principle is that, at sub-pixel level, shifts between individual input images are
nearly randomly distributed. For example, a star in the first image may be centered perfectly in the
middle of a pixel, whereas it will be spread across two pixels in the second one, and so on. Since it is easy
to know the exact shift between the images, it is possible to create an output image with a finer
sampling, in which resolution may be increased with respect to each input image. In fact, energy
from each input pixel is dropped into the output image.

Fig. 1: Random Number Generated

Fig. 2: Algorithm Generated


Figure 1 shows a dither pattern generated by a random number generator, even with a randomized
seed. Note that the distribution of points is not very uniform. Indeed, there are some points that
could result in sensor noise not being eliminated but in fact being reinforced due to insufficient
difference in positions.
Figure 2 shows the dither pattern that results from the use of this algorithm. It achieves maximal
separation from one frame to the next with a minimum overall movement of the guide star.
Usage :

Dither should be added to any low-amplitude or highly periodic signal before any quantization or
re-quantization process, in order to de-correlate the quantization noise from the input signal and to
prevent non-linear behavior (distortion); the smaller the bit depth, the greater the dither must be. The
results of the process still yield distortion, but the distortion is of a random nature so its result is
effectively noise. Any bit-reduction process should add dither to the waveform before the reduction
is performed.

Half toning

1. Many displays and hardcopy devices are bi-level


2. They can only produce two intensity levels.
3. In such displays or hardcopy devices we can create an apparent increase in the number of
available intensity values.
4. When we view a very small area from a sufficiently large viewing distance, our eyes average
fine details within the small area and record only the overall intensity of the area.
5. The phenomenon of an apparent increase in the number of available intensities, obtained by
considering the combined intensity of multiple pixels, is known as halftoning.
6. Halftoning is commonly used in printing black-and-white photographs in newspapers,
magazines and books.
7. The pictures produced by the halftoning process are called halftones.
8. In computer graphics, halftone reproductions are approximated using rectangular pixel
regions, say 2×2 pixels or 3×3 pixels.
9. These regions are called halftone patterns or pixel patterns.
10. The figure shows halftone patterns used to create a number of intensity levels.
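
For example, a 2×2 cell can approximate five intensity levels by switching on 0 to 4 of its pixels. The particular pixel layouts below are one possible choice, sketched in Python for illustration:

# Five intensity levels from a 2x2 halftone cell: level k turns on k pixels.
HALFTONE_2X2 = [
    [[0, 0], [0, 0]],   # level 0 (white area on the printed page)
    [[0, 1], [0, 0]],   # level 1
    [[0, 1], [1, 0]],   # level 2
    [[1, 1], [1, 0]],   # level 3
    [[1, 1], [1, 1]],   # level 4 (solid black)
]

def halftone_cell(intensity):
    # Map an intensity in [0, 1] to one of the five 2x2 patterns above.
    level = min(4, max(0, int(round(intensity * 4))))
    return HALFTONE_2X2[level]

print(halftone_cell(0.5))   # -> [[0, 1], [1, 0]]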

3. Explain in detail the various standard primaries and the chromaticity diagram of the XYZ color
space used in color models.
A color model is an abstract mathematical model describing the way colors can be represented
as tuples of numbers, typically as three or four values or color components. When this model is
associated with a precise description of how the components are to be interpreted (viewing
conditions, etc.), the resulting set of colors is called a color space. This section describes ways in
which human color vision can be modeled.

CIE XYZ color space - CIE primaries

The tristimulus theory of colour perception seems to imply that any colour can be obtained
from a mix of the three primaries, red, green and blue, but although nearly all visible colours
can be matched in this way, some cannot. However, if one of the primaries is added to one of
these unmatchable colours, it can be matched by a mixture of the other two, and so the colour
may be considered to have a negative weighting of that particular primary.
In 1931, the Commission Internationale de l'Éclairage (CIE) defined three standard primaries,
called X, Y and Z, that can be added to form all visible colours. The primary Y was chosen so
that its colour matching function exactly matches the luminous-efficiency function for the
human eye, given by the sum of the three curves in figure 2.

Figure 3: The CIE Chromaticity Diagram showing all visible colours. x and y are the normalised
amounts of the X and Y primaries present, and hence z = 1 - x - y gives the amount of the Z
primary required.

The CIE Chromaticity Diagram (see figure 3) shows all visible colours. The x and y axes give the
normalised amounts of the X and Y primaries for a particular colour, and hence z = 1 - x - y gives
the amount of the Z primary required. Chromaticity depends on dominant wavelength and
saturation, and is independent of luminous energy. Colours with the same chromaticity, but
different luminance all map to the same point within this region.
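
The normalisation behind the diagram can be written as a small Python helper; the equal-energy white point used in the example is the one mentioned in the next paragraph:

def xyz_to_chromaticity(X, Y, Z):
    # Normalise the tristimulus values; chromaticity discards luminance,
    # so z is redundant and equals 1 - x - y.
    total = X + Y + Z
    x = X / total
    y = Y / total
    return x, y, 1.0 - x - y

# Equal-energy white maps to x = y = z = 1/3.
print(xyz_to_chromaticity(1.0, 1.0, 1.0))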

The pure colours of the spectrum lie on the curved part of the boundary, and a standard white light
has colour defined to be near (but not at) the point of equal energy x = y = z = 1/3. Complementary
colours, i.e. colours that add to give white, lie on the endpoints of a line through this point. As
illustrated in figure 4, all the colours along any line in the chromaticity diagram may be obtained
by mixing the colours on the end points of the line. Furthermore, all colours within a triangle may
be formed by mixing the colours at the vertices. This property illustrates graphically the fact that
not all visible colours can be obtained by a mix of R, G and B (or any other three visible) primaries
alone, since the diagram is not triangular!

Figure 4: Mixing colours on the chromaticity diagram. All colours on the line IJ can be
obtained by mixing colours I and J, and all colours in the triangle IJK can be obtained by
mixing colours I, J and K.
4. Explain in detail on RGB color model.

The RGB color model is an additive color model in which red, green and blue light are added
together in various ways to reproduce a broad array of colors. The name of the model comes from
the initials of the three additive primary colors, red, green and blue.

The main purpose of the RGB color model is for the sensing, representation and display of images
in electronic systems, such as televisions and computers, though it has also been used in
conventional photography. Before the electronic age, the RGB color model already had a solid
theory behind it, based in human perception of colors.

RGB is a device-dependent color model: different devices detect or reproduce a given RGB value
differently, since the color elements (such as phosphors or dyes) and their response to the
individual R, G and B levels vary from manufacturer to manufacturer, or even in the same device
over time. Thus an RGB value does not define the same color across devices without some kind of
color management.

Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras.
Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, OLED,
Quantum-Dots etc.), computer and mobile phone displays, video projectors, multicolor LED
displays and large screens such as the JumboTron. Color printers, on the other hand, are not RGB
devices, but subtractive color devices (typically using the CMYK color model).

5. Explain in detail on YIQ color model.


YIQ is the color space used by the NTSC color TV system, employed mainly in North and Central
America, and Japan. I stands for in-phase, while Q stands for quadrature, referring to the
components used in quadrature amplitude modulation. Some forms of NTSC now use the YUV
color space, which is also used by other systems such as PAL.

The Y component represents the luma information, and is the only component used by black-and-
white television receivers. I and Q represent the chrominance information. In YUV, the U and V
components can be thought of as X and Y coordinates within the color space. I and Q can be
thought of as a second pair of axes on the same graph, rotated 33°; therefore IQ and UV represent
different coordinate systems on the same plane.
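
Python's standard colorsys module includes an RGB–YIQ conversion and can be used to illustrate the model; all component values are in the range [0, 1]:

import colorsys

# Convert an orange-ish RGB colour to YIQ.
y, i, q = colorsys.rgb_to_yiq(1.0, 0.5, 0.0)
print(y, i, q)

# A black-and-white receiver would use only y, the luma component.
r, g, b = colorsys.yiq_to_rgb(y, i, q)
print(r, g, b)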

The YIQ system is intended to take advantage of human color-response characteristics. The eye is
more sensitive to changes in the orange-blue (I) range than in the purple-green range (Q) —
therefore less bandwidth is required for Q than for I. Broadcast NTSC limits I to 1.3 MHz and Q to
0.4 MHz. I and Q are frequency interleaved into the 4 MHz Y signal, which keeps the bandwidth
of the overall signal down to 4.2 MHz. In YUV systems, since U and V both contain information in
the orange-blue range, both components must be given the same amount of bandwidth as I to
achieve similar color fidelity.

Very few television sets perform true I and Q decoding, due to the high costs of such an
implementation. Compared to the cheaper R-Y and B-Y decoding, which requires only one filter, I
and Q each require a different filter to satisfy the bandwidth differences between I and Q. These
bandwidth differences also require that the 'I' filter include a time delay to match the longer delay
of the 'Q' filter. The Rockwell Modular Digital Radio (MDR) was one I and Q decoding set, which
in 1997 could operate in frame-at-a-time mode with a PC or in realtime with the Fast IQ Processor
(FIQP). Some RCA "Colortrak" home TV receivers made circa 1985 not only used I/Q decoding,
but also advertised its benefits along with its comb filtering benefits as full "100 percent
processing" to deliver more of the original color picture content. Earlier, more than one brand of
color TV (RCA, Arvin) used I/Q decoding in the 1954 or 1955 model year on models utilizing
screens about 13 inches (measured diagonally). The original Advent projection television used I/Q
decoding. Around 1990, at least one manufacturer (Ikegami) of professional studio picture
monitors advertised I/Q decoding.

Conversions between Models

Conversion between RGB and HSI is somewhat more complicated:


Colors in HSI are defined with respect to normalized RGB values as

I = (R + G + B) / 3
S = 1 - min(R, G, B) / I
H = arccos( ((R - G) + (R - B)) / (2 sqrt((R - G)² + (R - B)(G - B))) )

Even these equations need some correction:


 H = (360° - H) if (B/I) > (G/I), and H is normalized by H = H/360°
 H is not defined if S = 0
 S is undefined if I = 0
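
These equations translate directly into a Python sketch (r, g, b are assumed to be normalized to [0, 1]; H is returned in degrees):

import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    if i == 0.0:
        return 0.0, 0.0, 0.0           # black: S (and H) undefined, report zeros
    s = 1.0 - min(r, g, b) / i
    if s == 0.0:
        return 0.0, 0.0, i             # achromatic: H undefined
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                          # same test as (B/I) > (G/I) for positive I
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.5, 0.0))       # an orange: H ~ 30 degrees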

6. Explain in detail the CMY color model.

This stands for cyan-magenta-yellow and is used for hardcopy devices. In contrast to color on the
monitor, the color in printing acts subtractively and not additively. A printed surface looks red if its
ink absorbs the other two components, G and B, and reflects only R. Conversely, an ink that absorbs
only R has the (internal) color G+B = CYAN; similarly, an ink absorbing only G is R+B = MAGENTA and
one absorbing only B is R+G = YELLOW. Thus the C-M-Y coordinates are just the complements of the
R-G-B coordinates:

(C, M, Y) = (1, 1, 1) - (R, G, B)

If we want to print a red looking color (i.e. with R-G-B coordinates (1,0,0)) we have to use C-M-Y

values of (0,1,1). Note that magenta absorbs G, similarly yellow absorbs B, and hence magenta plus
yellow together absorb all but R.

Black (R = G = B = 0) corresponds to (C, M, Y) = (1, 1, 1), which should in principle


absorb R, G and B. But in practice this will appear as some dark gray. So, in order to be able to
produce better contrast, printers often use black as a fourth color. This is the CMYK model. Its

coordinates are obtained from those of the CMY model by K = min(C, M, Y), C' = C - K, M' = M - K,

and Y' = Y - K.
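
A small Python sketch of these complement relations; the simple K = min(C, M, Y) black extraction shown above is assumed (some systems additionally rescale the remaining components):

def rgb_to_cmy(r, g, b):
    # CMY is simply the complement of RGB (all values in [0, 1]).
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_cmyk(c, m, y):
    # Pull the common grey component out into K so it can be printed
    # with black ink instead of overlapping C, M and Y.
    k = min(c, m, y)
    return c - k, m - k, y - k, k

print(rgb_to_cmy(1.0, 0.0, 0.0))                  # red -> (0, 1, 1)
print(cmy_to_cmyk(*rgb_to_cmy(0.2, 0.2, 0.2)))    # grey -> (0, 0, 0, 0.8)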

7. Explain in detail the HSV color model.

HSL and HSV are the two most common cylindrical-coordinate representations of points in an
RGB color model. The two representations rearrange the geometry of RGB in an attempt to be
more intuitive and perceptually relevant than the cartesian (cube) representation. Developed in the
1970s for computer graphics applications, HSL and HSV are used today in color pickers, in image
editing software, and less commonly in image analysis and computer vision.

HSL stands for hue, saturation, and lightness (or luminosity), and is also often called HLS. HSV
stands for hue, saturation, and value, and is also often called HSB (B for brightness). A third
model, common in computer vision applications, is HSI (I for intensity). However, while typically
consistent, these definitions are not standardized, and any of these abbreviations might be used for
any of these three or several other related cylindrical models. (For technical definitions of these
terms, see below.)

In each cylinder, the angle around the central vertical axis corresponds to "hue", the distance from
the axis corresponds to "saturation", and the distance along the axis corresponds to "lightness",
"value" or "brightness". Note that while "hue" in HSL and HSV refers to the same attribute, their
definitions of "saturation" differ dramatically.

Because HSL and HSV are simple transformations of device-dependent RGB models, the physical
colors they define depend on the colors of the red, green, and blue primaries of the device or of the
particular RGB space, and on the gamma correction used to represent the amounts of those
primaries. As a result, each unique RGB device has unique HSL and HSV absolute color spaces to
accompany it (just as it has unique RGB absolute color space to accompany it), and the same
numerical HSL or HSV values (just as numerical RGB values) may be displayed differently by
different devices.

Both of these representations are used widely in computer graphics, and one or the other of them is
often more convenient than RGB, but both are also criticized for not adequately separating color-
making attributes, or for their lack of perceptual uniformity. Other more computationally intensive
models, such as CIELAB or CIECAM02, are said to better achieve these goals.
8. Compare and contrast the RGB and CMY color models.

 RGB is based on projecting. Red light plus Green light plus Blue light all projected together
create white. Black is encoded as the absence of any color.
 CMYK is based on ink. Superimpose Cyan ink plus Magenta ink plus Yellow ink, and you
get black, although this format also encodes Black (K) directly. White is encoded by the
absence of any color.
 Prism uses RGB internally. Exporting in RGB will give you results very close to what you
see on screen.
 Even though it uses one more number to encode a color, the CMYK scheme encodes a
smaller "color space" than does RGB.
 When a color is converted from RGB to CMYK, the appearance may change. Most
noticeably, bright colors in RGB will look duller and darker in CMYK.

9. Explain in detail the conversion between HSV and RGB color models.

The RGB model's approach to colors is important because:

 It directly reflects the physical properties of "Truecolor" displays


 As of 2011, most graphic cards define pixel values in terms of the colors red, green, and
blue. The typical range of intensity values for each color, 0–255, is based on taking a binary
number with 32 bits and breaking it up into four bytes of 8 bits each. 8 bits can hold a value
from 0 to 255. The fourth byte is used to specify the "alpha", or the opacity, of the color.
Opacity comes into play when layers with different colors are stacked. If the color in the
top layer is less than fully opaque (alpha < 255), the color from underlying layers "shows
through".

In the RGB model, hues are represented by specifying one color as full intensity (255), a
second color with a variable intensity, and the third color with no intensity (0).

The following provides some examples using red as the full-intensity and green as the partial-
intensity colors; blue is always zero:

Red Green Result


255 0 red (255, 0, 0)
255 128 orange (255, 128, 0)
255 255 yellow (255, 255, 0)

Shades are created by multiplying the intensity of each primary color by 1 minus the shade factor,
in the range 0 to 1. A shade factor of 0 does nothing to the hue, a shade factor of 1 produces black:

new intensity = current intensity * (1 – shade factor)

The following provides examples using orange:

Shade factor:  0              .25           .5            .75          1.0

Result:        (255, 128, 0)  (192, 96, 0)  (128, 64, 0)  (64, 32, 0)  (0, 0, 0)
Tints are created by modifying each primary color as follows: the intensity is increased so that the
difference between the intensity and full intensity (255) is decreased by the tint factor, in the range
0 to 1. A tint factor of 0 does nothing, a tint factor of 1 produces white:

new intensity = current intensity + (255 – current intensity) * tint factor
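
Both formulas can be sketched in Python; the example colour is the orange from the table above:

def shade(rgb, factor):
    # new intensity = current intensity * (1 - shade factor)
    return tuple(round(c * (1.0 - factor)) for c in rgb)

def tint(rgb, factor):
    # new intensity = current intensity + (255 - current intensity) * tint factor
    return tuple(round(c + (255 - c) * factor) for c in rgb)

orange = (255, 128, 0)
print(shade(orange, 0.5))   # (128, 64, 0), as in the table above
print(tint(orange, 0.5))    # (255, 192, 128), a pale orange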

The HSV, or HSB, model describes colors in terms of hue, saturation, and value (brightness). Note
that the range of values for each attribute is arbitrarily defined by various tools or standards. Be
sure to determine the value ranges before attempting to interpret a value.

Hue corresponds directly to the concept of hue in the Color Basics section. The advantages of
using hue are

 The angular relationship between tones around the color circle is easily identified
 Shades, tints, and tones can be generated easily without affecting the hue

Saturation corresponds directly to the concept of tint in the Color Basics section, except that full
saturation produces no tint, while zero saturation produces white, a shade of gray, or black.

Value corresponds directly to the concept of intensity in the Color Basics section.

 Pure colors are produced by specifying a hue with full saturation and value
 Shades are produced by specifying a hue with full saturation and less than full value
 Tints are produced by specifying a hue with less than full saturation and full value
 Tones are produced by specifying a hue and both less than full saturation and value
 White is produced by specifying zero saturation and full value, regardless of hue
 Black is produced by specifying zero value, regardless of hue or saturation
 Shades of gray are produced by specifying zero saturation and between zero and full value
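
These rules can be checked with Python's standard colorsys module, which implements HSV with all three attributes in the range [0, 1]:

import colorsys

print(colorsys.hsv_to_rgb(0.0, 1.0, 1.0))   # pure red        -> (1.0, 0.0, 0.0)
print(colorsys.hsv_to_rgb(0.0, 1.0, 0.5))   # a shade of red  -> (0.5, 0.0, 0.0)
print(colorsys.hsv_to_rgb(0.0, 0.5, 1.0))   # a tint of red   -> (1.0, 0.5, 0.5)
print(colorsys.hsv_to_rgb(0.7, 0.0, 1.0))   # white, regardless of hue
print(colorsys.hsv_to_rgb(0.7, 1.0, 0.0))   # black, regardless of hue or saturation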

10. Explain in detail HLS color model.

The HSL model describes colors in terms of hue, saturation, and lightness (also called luminance).
The model has two prominent properties:

 The transition from black to a hue to white is symmetric and is controlled solely by
increasing lightness
 Decreasing saturation transitions to a shade of gray dependent on the lightness, thus
keeping the overall intensity relatively constant

The properties mentioned above have led to the wide use of HSL, in particular, in the CSS3 color
model.

As in HSV, hue corresponds directly to the concept of hue in the Color Basics section. The
advantages of using hue are

 The angular relationship between tones around the color circle is easily identified
 Shades, tints, and tones can be generated easily without affecting the hue
Lightness combines the concepts of shading and tinting from the Color Basics section. Assuming
full saturation, lightness is neutral at the midpoint value, for example 50%, and the hue displays
unaltered. As lightness decreases below the midpoint, it has the effect of shading. Zero lightness
produces black. As the value increases above 50%, it has the effect of tinting, and full lightness
produces white.

At zero saturation, lightness controls the resulting shade of gray. A value of zero still produces
black, and full lightness still produces white. The midpoint value results in the "middle" shade of
gray, with an RGB value of (128,128,128).

Saturation, or the lack of it, produces tones of the reference hue that converge on the zero-
saturation shade of gray, which is determined by the lightness. The following examples use the
hues red, orange, and yellow at midpoint lightness with decreasing saturation. The resulting RGB
value and the total intensity are shown.

Saturation:  1.0                 .75                  .5                   .25                  0

Red:         (255, 0, 0), 256    (224, 32, 32), 288   (192, 64, 64), 320   (160, 96, 96), 352   (128, 128, 128), 384
Orange:      (255, 128, 0), 384  (224, 128, 32), 384  (192, 128, 64), 384  (160, 128, 96), 384  (128, 128, 128), 384
Yellow:      (255, 255, 0), 512  (224, 224, 32), 480  (192, 192, 64), 448  (160, 160, 96), 416  (128, 128, 128), 384
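
Python's colorsys module also covers this model; it calls it HLS and takes the arguments in the order hue, lightness, saturation, all in [0, 1]:

import colorsys

def hls_to_rgb255(h, l, s):
    # Convert to 8-bit RGB for easy comparison with the table above.
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))

print(hls_to_rgb255(0.0, 0.5, 1.0))   # red at midpoint lightness -> (255, 0, 0)
print(hls_to_rgb255(0.0, 0.5, 0.0))   # zero saturation -> (128, 128, 128), middle grey
print(hls_to_rgb255(0.0, 1.0, 1.0))   # full lightness  -> (255, 255, 255), white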

11. Explain in detail shading and the graphics pipeline.


Shading refers to depicting depth perception in 3D models or illustrations by varying levels of
darkness.

Shading is used in drawing for depicting levels of darkness on paper by applying media more
densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter
areas. There are various techniques of shading including cross hatching where perpendicular lines
of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together,
the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears.

Light patterns, such as objects having light and shaded areas, help when creating the illusion of
depth on paper.

Powder shading is a sketching shading method. In this style, the stumping powder and paper
stumps are used to draw a picture. This can be in color. The stumping powder is smooth and
doesn't have any shiny particles. The paper to be used should have small grains on it so that the
powder remains on the paper.

A computer graphics pipeline, rendering pipeline or simply graphics pipeline, is a conceptual


model in computer graphics that describes what steps a graphics system needs to perform to render
a 3D scene to a 2D screen. Plainly speaking, once a 3D model has been created, for instance in a
video game or any other 3D computer animation, the graphics pipeline is the process of turning
that 3D model into what the computer displays.
A graphics pipeline can be divided into three main parts: Application, Geometry and Rasterization.

The geometry step, which is responsible for the majority of the operations on polygons and their
vertices, can be divided into five tasks. How these tasks are organized as actual parallel pipeline
steps depends on the particular implementation.

12. Explain Gouraud shading and Phong shading in terms of flat shading and smooth shading.

Flat shading

Here, a color is calculated for one point on each polygon (usually for the first vertex in the
polygon, but sometimes for the centroid for triangle meshes), based on the polygon's surface
normal and on the assumption that all polygons are flat. The color everywhere else is then
interpolated by coloring all points on a polygon the same as the point for which the color was
calculated, giving each polygon a uniform color (similar to in nearest-neighbor interpolation). It is
usually used for high speed rendering where more advanced shading techniques are too
computationally expensive. As a result of flat shading all of the polygon's vertices are colored with
one color, allowing differentiation between adjacent polygons. Specular highlights are rendered
poorly with flat shading: If there happens to be a large specular component at the representative
vertex, that brightness is drawn uniformly over the entire face. If a specular highlight doesn’t fall
on the representative point, it is missed entirely. Consequently, the specular reflection component
is usually not included in flat shading computation.

Smooth shading

In contrast to flat shading where the colors change discontinuously at polygon borders, with
smooth shading the color changes from pixel to pixel, resulting in a smooth color transition
between two adjacent polygons. Usually, values are first calculated in the vertices and bilinear
interpolation is then used to calculate the values of pixels between the vertices of the polygons.

Types of smooth shading include:

 Gouraud shading
 Phong shading
Gouraud shading

1. Determine the normal at each polygon vertex.


2. Apply an illumination model to each vertex to calculate the light intensity from the vertex
normal.
3. Interpolate the vertex intensities using bilinear interpolation over the surface polygon.
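
Step 3 amounts to two linear interpolations: first down the polygon edges, then along each scan line. A minimal Python sketch of the scan-line part (the names and values are illustrative only):

def lerp(a, b, t):
    return a + (b - a) * t

def gouraud_scanline(i_left, i_right, x_left, x_right):
    # i_left / i_right are intensities already interpolated down the polygon
    # edges from the vertex intensities (step 2); interpolate them across
    # one scan line to get an intensity per pixel.
    span = max(1, x_right - x_left)
    return [lerp(i_left, i_right, (x - x_left) / span)
            for x in range(x_left, x_right + 1)]

print(gouraud_scanline(0.2, 0.8, 10, 14))   # a smooth ramp of intensities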

Data structures

 Sometimes vertex normals can be computed directly (e.g. height field with uniform mesh)
 More generally, need data structure for mesh
 Key: which polygons meet at each vertex.

Advantages

 Polygons more complex than triangles can also have different colors specified for each
vertex. In these instances, the underlying logic for shading can become more intricate.

Problems

 Even the smoothness introduced by Gouraud shading may not prevent the appearance of the
shading differences between adjacent polygons.
 Gouraud shading is more CPU intensive and can become a problem when rendering real
time environments with many polygons.
 T-Junctions with adjoining polygons can sometimes result in visual anomalies. In general,
T-Junctions should be avoided.

Phong shading

Phong shading is similar to Gouraud shading, except that instead of interpolating the light
intensities, the normals are interpolated between the vertices. Thus, the specular highlights are
computed much more precisely than in the Gouraud shading model:

1. Compute a normal N for each vertex of the polygon.


2. Using bilinear interpolation, compute a normal, Ni, for each pixel. (This must be
renormalized each time.)
3. Apply an illumination model to each pixel to calculate the light intensity from Ni.
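
The per-pixel step can be sketched in Python as follows; only the diffuse term is shown, and the interpolated normal and light direction are hypothetical values:

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def phong_pixel(n_interp, light_dir, k_d=0.7, I_l=1.0):
    # n_interp is the bilinearly interpolated normal for this pixel; it is
    # generally no longer unit length, so it must be renormalized before use.
    n = normalize(n_interp)
    L = normalize(light_dir)
    return k_d * I_l * max(0.0, sum(a * b for a, b in zip(n, L)))

print(phong_pixel((0.3, 0.1, 0.9), (0.0, 0.0, 1.0)))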

Other Approaches

Both Gouraud shading and Phong shading can be implemented using bilinear interpolation. Bishop
and Weimer proposed to use a Taylor series expansion of the resulting expression from applying
an illumination model and bilinear interpolation of the normals. Hence, second degree polynomial
interpolation was used. This type of biquadratic interpolation was further elaborated by Barrera et
al., where one second order polynomial was used to interpolate the diffuse light of the Phong
reflection model and another second order polynomial was used for the specular light.

Spherical Linear Interpolation (Slerp) was used by Kuij and Blake for computing both the normal
over the polygon as well as the vector in the direction to the light source. A similar approach was
proposed by Hast, which uses Quaternion interpolation of the normals with the advantage that the
normal will always have unit length and the computationally heavy normalization is avoided.

Flat vs. smooth shading

Flat: Uses the same color for every pixel in a face – usually the color of the first vertex.
Smooth: Uses linear interpolation of either colors or normals between vertices.

Flat: Edges appear more pronounced than they would on a real object, because in reality almost all edges are somewhat round.
Smooth: The edges disappear with this technique.

Flat: Same color for any point of the face.
Smooth: Each point of the face has its own color.

Flat: Individual faces are visualized.
Smooth: The underlying surface is visualized.

Flat: Not suitable for smooth objects.
Smooth: Suitable for any objects.

Flat: Less computationally expensive.
Smooth: More computationally expensive.
