
UNIT-II

Computer Graphics

» How pictures are represented in CG

» How pictures are prepared for presentation

» How previously prepared pictures are presented

» How interaction with the pictures is accomplished


Presenting Pictures

• The unit square can be represented by its four corners

[Figure: a unit square S1 defined by its corner points P1–P4 and edges E1–E4]
Preparing Pictures for Presentation

• Complex databases contain data organized in various ways

» Ring structures, B-tree structures, quad-tree structures, etc.

» Points can be stored as either floating-point numbers or integers


Presenting Previously Prepared Pictures
» The data used to present the picture is frequently called a
display file

» Includes transformation and orientation of the data

» Hidden-line or hidden-surface removal, shading, transparency,
texture or color effects may be added

» Two- or three-dimensional clipping may be done

» Viewport and windowing transformations can be applied


Interacting with the Picture
» Interaction with and modification of the picture is supported
» Input devices include tablets, light pens and function switches
Pixels and Frame Buffers
• Pixel
– A pixel is the smallest addressable screen element

– It is the smallest piece of the display screen that can be controlled

– The names which identify pixels correspond to the coordinates
which identify the points

– CG images are made by setting the intensity and color of the pixels
which compose the screen

– Line segments are drawn by setting the intensities, i.e. the brightness,
of a string of pixels between a starting pixel and an ending pixel
• Frame Buffer
– The pixel values are stored in a memory area called the frame buffer
or bitmap

– The array of pixels which contains an internal representation of
the image is called the frame buffer

– It collects and stores the pixel values for use by the display device

– The graphics display device can then access this array to determine
the intensity at which each pixel should be displayed

[Figure: a line stored in the frame buffer as a small grid of 0s and 1s, where each 1 marks a lit pixel]
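To make the frame-buffer idea concrete, here is a minimal Python sketch (an illustration, not from the slides): the frame buffer is just a 2D array of color values, and drawing means writing into it. The resolution, background color and helper names are assumptions.

    # Minimal frame-buffer sketch (illustrative; names and resolution are assumptions).
    WIDTH, HEIGHT = 350, 250
    BACKGROUND = (255, 255, 255)          # white background color

    # The frame buffer: one color value per pixel.
    frame_buffer = [[BACKGROUND for _ in range(WIDTH)] for _ in range(HEIGHT)]

    def set_pixel(x, y, color):
        """Set the color of the pixel at integer coordinates (x, y)."""
        if 0 <= x < WIDTH and 0 <= y < HEIGHT:
            frame_buffer[y][x] = color    # row = y, column = x

    # Example: light a single pixel; the display device would later read this array.
    set_pixel(241, 200, (0, 0, 255))      # blue
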
The Problem Of Scan Conversion

• A line segment in a scene is defined by the coordinate


positions of the line end-points
[Figure: a line segment with end-points (2, 2) and (7, 5) plotted on x–y axes]
The Problem (cont…)
• But what happens when we try to draw this on a pixel-based
display?

• How do we choose which pixels to turn on?


Considerations
• Considerations to keep in mind:
– The line has to look good
• Avoid jaggies
– It has to be lightning fast!
• How many lines need to be drawn in a typical
scene?
• This is going to come back to bite us again and
again
Line Equations

• Let’s quickly review the equations involved in drawing lines

• Slope-intercept line equation:

    y = m·x + b

• where:

    m = (y_end − y_0) / (x_end − x_0)

    b = y_0 − m·x_0
Lines & Slopes
• The slope of a line (m) is defined by its start and end
coordinates
• The diagram below shows some examples of lines
and their slopes
[Figure: lines through a common point with slopes m = 0, ±1/3, ±1/2, ±1, ±2, ±4 and m = ∞]
A Very Simple Solution
• We could simply work out the corresponding y
coordinate for each unit x coordinate
• Let’s consider the following example:
[Figure: the example line from (2, 2) to (7, 5)]
• First work out m and b:

    m = (5 − 2) / (7 − 2) = 3/5

    b = 2 − (3/5)·2 = 4/5

• Now for each x value work out the y value:

    y(3) = (3/5)·3 + 4/5 = 2 3/5      y(4) = (3/5)·4 + 4/5 = 3 1/5

    y(5) = (3/5)·5 + 4/5 = 3 4/5      y(6) = (3/5)·6 + 4/5 = 4 2/5
• Now just round off the results and turn on these pixels to
draw our line:

    round(y(3)) = round(2 3/5) = 3

    round(y(4)) = round(3 1/5) = 3

    round(y(5)) = round(3 4/5) = 4

    round(y(6)) = round(4 2/5) = 4

[Figure: the rounded pixels plotted on an 8×8 grid along the line from (2, 2) to (7, 5)]
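The approach above translates directly into code. The following Python sketch (a minimal illustration, not from the slides; it reuses the set_pixel helper assumed earlier) evaluates y = m·x + b for each unit step in x and rounds the result:

    def naive_line(x0, y0, x1, y1, set_pixel):
        """Draw a line by evaluating y = m*x + b at each integer x (assumes x0 < x1, |m| <= 1)."""
        m = (y1 - y0) / (x1 - x0)      # slope
        b = y0 - m * x0                # y-intercept
        for x in range(x0, x1 + 1):
            y = m * x + b              # one multiplication per pixel -- this is what makes it slow
            set_pixel(x, round(y), (0, 0, 0))

    # Example: the line from (2, 2) to (7, 5) used in the slides.
    naive_line(2, 2, 7, 5, set_pixel)
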
• However, this approach is just way too slow
• In particular look out for:
– The equation y = mx + b requires the multiplication of
m by x
– Rounding off the resulting y coordinates
• We need a faster solution
The DDA Algorithm

• The digital differential analyzer


(DDA) algorithm takes an
incremental approach in order to
speed up scan conversion
• Simply calculate yk+1 based on yk

The original differential


analyzer was a physical
machine developed by
Vannevar Bush at MIT in the
1930’s in order to solve
ordinary differential equations.
– Consider the list of points that we determined for the line
in our previous example:
• (2, 2), (3, 2 3/5), (4, 3 1/5), (5, 3 4/5), (6, 4 2/5), (7, 5)

– Notice that as the x coordinates go up by one, the y


coordinates simply go up by the slope of the line

– This is the key insight in the DDA algorithm


– When the slope of the line is between −1 and 1, begin at the
first point of the line and, incrementing the x coordinate by 1,
calculate the corresponding y coordinates as follows:

    yk+1 = yk + m

– When the slope is outside these limits, increment the y coordinate
by 1 and calculate the corresponding x coordinates as follows:

    xk+1 = xk + 1/m

  (the first case applies when |m| ≤ 1, the second when |m| > 1)
Digital Differential Analyzer (DDA) Algorithm

• input line endpoints, (x0, y0) and (xn, yn)

• set pixel at position (x0, y0)
• calculate the slope m
• Case |m| ≤ 1: repeat the following steps until (xn, yn) is reached:
  • xi+1 = xi + 1
  • yi+1 = yi + Δy/Δx
  • set pixel at position (xi+1, Round(yi+1))
• Case |m| > 1: repeat the following steps until (xn, yn) is reached:
  • yi+1 = yi + 1
  • xi+1 = xi + Δx/Δy
  • set pixel at position (Round(xi+1), yi+1)
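A minimal Python sketch of the DDA steps above (illustrative, reusing the assumed set_pixel helper; a common formulation steps along the axis of greatest change, which handles both slope cases uniformly):

    def dda_line(x0, y0, x1, y1, set_pixel):
        """Draw a line with DDA: step along the axis of greatest change, add the increments."""
        dx, dy = x1 - x0, y1 - y0
        steps = max(abs(dx), abs(dy))
        if steps == 0:                          # degenerate case: both endpoints coincide
            set_pixel(x0, y0, (0, 0, 0))
            return
        x_inc, y_inc = dx / steps, dy / steps   # one increment is +/-1, the other is the slope term
        x, y = float(x0), float(y0)
        for _ in range(steps + 1):
            set_pixel(round(x), round(y), (0, 0, 0))
            x += x_inc
            y += y_inc

    # Example: the same line from (2, 2) to (7, 5).
    dda_line(2, 2, 7, 5, set_pixel)
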
The DDA Algorithm Summary

• The DDA algorithm is much faster than our previous


attempt
– In particular, there are no longer any multiplications involved
• However, there are still two big issues:
– Accumulation of round-off errors can make the pixelated line
drift away from what was intended
– The rounding operations and floating point arithmetic
involved are time consuming
Bresenham's line algorithm
• This method eliminates floating-point arithmetic entirely, except
possibly for the initial computations

• The pixel to set is chosen from the amount by which each candidate
pixel position deviates from the true position given by the line
equation, expressed in terms of the distances d1 and d2
Bresenham's line algorithm

[Figure: at x = xi + 1 the true line value y = m(xi + 1) + b lies between the two candidate pixel rows; d1 is its distance to the lower candidate yi and d2 its distance to the upper candidate yi + 1]
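A minimal integer-only sketch of Bresenham's algorithm (illustrative; written for the first octant, 0 ≤ slope ≤ 1, and reusing the assumed set_pixel helper). The decision parameter p encodes the sign of d1 − d2 using only integer additions:

    def bresenham_line(x0, y0, x1, y1, set_pixel):
        """Bresenham's algorithm for the first octant (x0 < x1, 0 <= slope <= 1)."""
        dx, dy = x1 - x0, y1 - y0
        p = 2 * dy - dx                 # initial decision parameter (sign of d1 - d2)
        y = y0
        for x in range(x0, x1 + 1):
            set_pixel(x, y, (0, 0, 0))
            if p > 0:                   # the upper candidate pixel is closer
                y += 1
                p += 2 * (dy - dx)
            else:                       # the lower candidate pixel is closer
                p += 2 * dy

    # Example: drawing the slide's line from (2, 2) to (7, 5) lights the same pixels as before.
    bresenham_line(2, 2, 7, 5, set_pixel)
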
DDA versus Bresenham’s Algorithm

• DDA works with floating point arithmetic


• Rounding to integers necessary

• Bresenham’s algorithm uses integer arithmetic


• Constants need to be computed only once

• Bresenham’s algorithm generally faster than DDA


Transformation of Geometry

• Animations are produced by moving the camera or the objects in a
scene along animation paths. Changes in orientation, size and shape
are accomplished with geometric transformations that alter the
coordinate descriptions of the objects.
• The basic geometric transformations are translation, rotation and
scaling. Other transformations that are often applied to objects
include reflection.
Simple Transformations
2D Transformations

• Translations

• Rotations

• Scaling

• Mirroring or Reflection
2D Translation

It involves moving the geometric element from one


location to another

2D Rotations
• The points of an object are rotated about the origin by an angle θ
• For a positive angle the rotation is in the CCW direction
• For a negative angle the rotation is in the CW direction
2D Scaling

- Scaling of a geometric element is used to enlarge it or to
reduce its size.

- A circle can be transformed into an ellipse by using
unequal x and y scaling factors
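The three basic 2D transformations can be written directly as coordinate formulas. The sketch below is an illustration (the function names are my own, not from the slides) using the standard equations for translation, rotation about the origin and scaling about the origin:

    import math

    def translate(x, y, tx, ty):
        """Move a point by the translation vector (tx, ty)."""
        return x + tx, y + ty

    def rotate(x, y, theta):
        """Rotate a point about the origin by angle theta (radians, CCW positive)."""
        return (x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta))

    def scale(x, y, sx, sy):
        """Scale a point about the origin; unequal sx, sy turn a circle into an ellipse."""
        return x * sx, y * sy
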
Concatenation (or) Combination of
Transformations
• In engineering applications it often becomes necessary to combine
the aforementioned individual transformations in order to achieve
the required result.
• In such cases the combined transformation matrix can be obtained
by multiplying the respective transformation matrices.
• The matrix multiplications must be performed in the same order in
which the transformations are applied, as in the following example.
Rotation about an Arbitrary Point
1) Translate so that the pivot point P moves to the origin
2) Rotate the object about the origin by the given angle
3) Translate back so that P returns to its original position (see the sketch below)
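A minimal sketch of this three-step composition, reusing the translate and rotate helpers assumed above (illustrative, not from the slides):

    def rotate_about_point(x, y, theta, px, py):
        """Rotate (x, y) by theta around the pivot (px, py): translate, rotate, translate back."""
        x, y = translate(x, y, -px, -py)   # 1) move the pivot to the origin
        x, y = rotate(x, y, theta)         # 2) rotate about the origin
        return translate(x, y, px, py)     # 3) move the pivot back

    # Example: rotate the point (3, 2) by 90 degrees about the pivot (1, 1).
    print(rotate_about_point(3, 2, math.pi / 2, 1, 1))   # -> approximately (0.0, 3.0)
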
Reflection about arbitrary line

• Translate the mirror line so that it passes through the origin O.

• Rotate the mirror line so that it coincides with a coordinate axis,
reflect about that axis, then apply the inverse rotation and translation.
Homogeneous Coordinates

• It is based on mapping an N-dimensional space into an (N+1)-dimensional
space.

• That means one more coordinate is added to represent the position
of the object.

– Example: a point with coordinates (x, y, z) is represented by the vector [x y z w] in
homogeneous form.

– Here w is a scale factor; normalization gives [x/w y/w z/w 1].
Translation in Homogeneous Coordinates

x' = ax + by + c
y' = dx + ey + f

Homogeneous formulation:

    [x']   [a  b  c] [x]
    [y'] = [d  e  f] [y]
    [1 ]   [0  0  1] [1]

p' = Mp
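As an illustrative sketch of p' = Mp (using NumPy, which the slides do not mention), a pure translation by (tx, ty) is the special case a = e = 1 and b = d = 0:

    import numpy as np

    def translation_matrix(tx, ty):
        """3x3 homogeneous matrix that translates 2D points by (tx, ty)."""
        return np.array([[1.0, 0.0, tx],
                         [0.0, 1.0, ty],
                         [0.0, 0.0, 1.0]])

    p = np.array([3.0, 2.0, 1.0])          # point (3, 2) in homogeneous form [x y 1]
    M = translation_matrix(5.0, -1.0)
    print(M @ p)                           # -> [8. 1. 1.], i.e. the point (8, 1)
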
Homogeneous Coordinates
• Most of the time w = 1, and we can ignore it

    [x']   [a  b  c  d] [x]
    [y'] = [e  f  g  h] [y]
    [z']   [i  j  k  l] [z]
    [1 ]   [0  0  0  1] [1]
3D Translation

[Figure: an object translated from the origin (0, 0, 0) along the vector (x0, y0, z0)]

• A translation moves the object along a vector.
3D Translation

• Let (x, y, z) be the coordinates of a point before the transformation
  and let (x', y', z') be the coordinates after the transformation.

• The equation of the translation by the vector (x0, y0, z0) is:

    [x']   [x]   [x0]
    [y'] = [y] + [y0]
    [z']   [z]   [z0]
3D Scaling
• Scaling makes an object larger or smaller by multiplying the coordinates
by constants.
• Scaling does not preserve distances. However, uniform scaling preserves
the shape of an object by preserving the ratios of the distances.

[Figure: an object before and after uniform scaling by 2; the numbers shown on the
edges are their lengths]
3D Scaling

• The equation of a uniform scaling by s is

    [x']   [s  0  0] [x]
    [y'] = [0  s  0] [y]
    [z']   [0  0  s] [z]

Uniform scaling
3D Scaling
• The general equation of the scaling is

    [x']   [s1  0   0 ] [x]
    [y'] = [0   s2  0 ] [y]
    [z']   [0   0   s3] [z]

General scaling:  P' = S · P

where P' = [x' y' z']ᵀ, P = [x y z]ᵀ, and S is the scaling matrix above.

[Figure: points p and q scaled to p' and q'; Scale(s, s, s) versus Scale(sx, sy, sz)]

• Isotropic (uniform) scaling: sx = sy = sz

In homogeneous coordinates:

    [x']   [sx  0   0   0] [x]
    [y'] = [0   sy  0   0] [y]
    [z']   [0   0   sz  0] [z]
    [1 ]   [0   0   0   1] [1]
3D Rotation
A rotation rotates an object about an axis.

Rotation About Z-axis

[Figure: a point p rotated by angle θ about the z-axis to p']
Rotation About X-axis
Rotation About Y-axis
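The slides only show the geometric pictures for these rotations. As an illustrative sketch (assumptions: right-handed axes, angles in radians, NumPy for the matrices), the standard rotation matrices about the three coordinate axes are:

    import math
    import numpy as np

    def rotation_z(theta):
        """Rotate about the z-axis: x and y change, z is unchanged."""
        c, s = math.cos(theta), math.sin(theta)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    def rotation_x(theta):
        """Rotate about the x-axis: y and z change, x is unchanged."""
        c, s = math.cos(theta), math.sin(theta)
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, c, -s],
                         [0.0, s,  c]])

    def rotation_y(theta):
        """Rotate about the y-axis: z and x change, y is unchanged."""
        c, s = math.cos(theta), math.sin(theta)
        return np.array([[ c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])
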
Line Clipping Algorithm

• Any procedure that identifies those portions of a picture that
are either inside or outside of a specified region of space
is referred to as a clipping algorithm, or simply clipping.

• The region against which an object is to be clipped is called the
clip window.

• Applications of clipping include
– extracting part of a defined scene for viewing, copying, moving,
erasing or duplicating
– identifying visible surfaces in three-dimensional views
– antialiasing line segments or object boundaries
Cohen-Sutherland Line clipping
Algorithm
• Divide the plane into nine regions and assign a region code to
each.
• Every line end point in a picture is assigned a four-digit binary
code, called a region code, that identifies the location of the point
relative to the boundaries of the clipping rectangle.
• Having assigned the 4-digit codes, the system first examines
whether the line is inside or outside of the selected window using
the following conditions (see the sketch below)

– The line is completely inside the window if both end-point codes are
"0000".

– The line is completely outside the window if the logical AND of the two
end-point codes is not "0000" (both end points then lie outside the same window boundary).

– Lines which are neither trivially accepted nor rejected are split at
a window edge and the segment outside the window is discarded.
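A minimal sketch of the region-code tests (illustrative; the bit layout and helper names are my own assumptions, using one common convention):

    INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8   # bit assignments for the 4-bit region code

    def region_code(x, y, xmin, ymin, xmax, ymax):
        """Compute the Cohen-Sutherland region code of a point against the clip window."""
        code = INSIDE
        if x < xmin:   code |= LEFT
        elif x > xmax: code |= RIGHT
        if y < ymin:   code |= BOTTOM
        elif y > ymax: code |= TOP
        return code

    def trivial_tests(p0, p1, window):
        """Return 'accept', 'reject', or 'clip' for the segment p0-p1 against window = (xmin, ymin, xmax, ymax)."""
        c0 = region_code(*p0, *window)
        c1 = region_code(*p1, *window)
        if c0 == 0 and c1 == 0:
            return "accept"            # both end points inside: keep the whole line
        if c0 & c1 != 0:
            return "reject"            # both outside the same boundary: discard the line
        return "clip"                  # otherwise split the line at a window edge
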
Hidden Surface Removal
• Specific needs of realism

– Most real objects are opaque and we only see the lines and
surfaces at the front; the portions at the back therefore have to
be eliminated from the mathematical model before displaying

– Object-space methods

– Image-space methods
[Figures: the same scene shown with no lines removed, hidden lines removed, and hidden surfaces removed]

Full, Partial, None

[Figure: a rectangle fully, partially, or not at all occluding a triangle]

• The rectangle is closer than the triangle

• It should therefore appear in front of the triangle
Hidden Surface Removal

• The procedure that distinguishes visible surfaces from invisible (hidden) surfaces is called visible-surface determination, which is often called hidden-surface removal.
Z-buffer
• It is called the z-buffer algorithm because we need a z-buffer in addition to
the frame buffer.

[Figure: a 350×250 screen viewed along the DOP; the pixel (241, 200) should end up blue. The frame buffer and the z-buffer each hold one entry per pixel of the 350×250 screen.]
Z-buffer Algorithm
• The frame buffer will have a color value for each pixel.
• The z-buffer will have a z-value for each pixel.

• The z-buffer is initialized to the z-value of the back plane (here −1).

• Similarly, the frame buffer is initialized to the background color (in our case,
white).

① Initial state

[Figure: the frame buffer filled with w (white) and the z-buffer filled with −1]
1) Project each triangle onto the z=0 plane, and draw it using the scan-line
algorithm.
2) When drawing each polygon, if the polygon point at (x, y) has a bigger
z-value than the current value of the z-buffer, the point's color and z-value
replace the old ones in both buffers (a minimal per-pixel sketch follows below).
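A minimal sketch of the per-pixel update in step 2 (illustrative; it assumes the frame buffer and a parallel z-buffer are plain 2D arrays, with the z-buffer initialized to the back-plane value −1 and larger z meaning closer to the viewer, as in the slides):

    WIDTH, HEIGHT = 350, 250
    frame_buffer = [["w"] * WIDTH for _ in range(HEIGHT)]     # "w" = white background
    z_buffer = [[-1.0] * WIDTH for _ in range(HEIGHT)]        # back plane at z = -1

    def plot(x, y, z, color):
        """Write the pixel only if it is closer (larger z) than what is already stored."""
        if z >= z_buffer[y][x]:
            z_buffer[y][x] = z
            frame_buffer[y][x] = color

    # Example from the slides: a red-triangle point at (241, 200) with z = 0.6,
    # later a blue-triangle point at the same pixel with z = 0.7 overwrites it.
    plot(241, 200, 0.6, "R")
    plot(241, 200, 0.7, "B")
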

• Let's take the red triangle.

• For example, when processing the y = 200 scan line, x = [241, 242] will be filled.

• For the pixel (241, 200), the z-value a of the red triangle is greater than
(or equal to) the current z-buffer value −1.

• So, set frame-buffer[241, 200] to red, and set z-buffer[241, 200] to a.

[Figure: the y = 200 scan line crossing the red triangle between x ≈ 240.4 and x ≈ 242.6; pixel columns 240–244 are shown]
② When scan line 200 is completed

[Figure: frame buffer and z-buffer after scan line 200 — near pixel (241, 200) the frame buffer holds w R R w and the z-buffer holds −1 0.6 0.5 −1; the point (242, 200, 0.5) lies on the red triangle]

③ When the red triangle is completely processed

[Figure: all scan lines covered by the red triangle are red in the frame buffer, and the z-buffer holds their interpolated z-values, e.g. 0.7 0.6 0.5 0.4 along one scan line]

• When a triangle is completed, arbitrarily select the next triangle and repeat the
scan-line algorithm.
• Let's now take the blue triangle.

• For example, when processing the y = 200 scan line, x = [*, 241] will be filled.

• For the pixel (241, 200), the z-value b of the blue triangle is greater than
the current z-buffer value a.

• Then, set frame-buffer[241, 200] to blue, and set z-buffer[241, 200] to b.

[Figure: the y = 200 scan line crossing the blue triangle up to x ≈ 241.3]
④ When the blue triangle is completely processed

[Figure: the pixels covered by the nearer blue triangle are updated — at y = 200 the frame buffer now holds B B R w and the z-buffer holds 0.8 0.7 0.5 −1]

• Note that we will get the same result if we process the blue triangle first and
then the red triangle (unless they are partially transparent).
