
A second way of constructing an image is to scan many parallel lines across the screen. During the scribing of each line the intensity can be varied to represent different shades. So a horizontal line might be drawn by keeping the gun on for the whole line, while a vertical line might be drawn by pulsing the gun on at the same point in each of the lines. This is the "raster scan" referred to in the question. Modern displays increasingly work by neither of these methods: they are made up of individual pixels that can be turned on and off independently to emit, reflect or absorb light, depending on the technology.
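The raster idea above can be sketched with a toy frame buffer: a grid of pixels in which a horizontal line is drawn by keeping the "gun" on for a whole scan line, and a vertical line by turning it on at the same x position in every scan line. The grid size and characters are illustrative only.

```python
# Toy frame buffer: HEIGHT scan lines of WIDTH pixels each (assumed sizes).
WIDTH, HEIGHT = 8, 4

def blank():
    return [[0] * WIDTH for _ in range(HEIGHT)]

def horizontal(fb, y):
    fb[y] = [1] * WIDTH          # gun on for the whole scan line
    return fb

def vertical(fb, x):
    for row in fb:               # gun pulsed on at the same x in each line
        row[x] = 1
    return fb

fb = vertical(horizontal(blank(), 1), 3)
for row in fb:
    print("".join("#" if p else "." for p in row))
```

Running it shows a full row of `#` for the horizontal line and a single `#` column for the vertical one.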
(ii) Visualisation

Hint: Visualization:-
Visualization is any technique for creating images, diagrams, or animations to communicate a message.
Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas
since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and
Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.
Scientific visualization involves interdisciplinary research into robust and effective computer science and visualization tools for solving problems in biology, aeronautics, medical imaging, and other disciplines. The profound impact of scientific computing upon virtually every area of science and engineering has been well established. The increasing complexity of the underlying mathematical models has also highlighted the critical role to be played by scientific visualization. It therefore comes as no surprise that scientific visualization is one of the most active and exciting areas of mathematics and computing science, and indeed one which is only beginning to mature. Scientific visualization is a technology which helps to explore and understand scientific phenomena visually, objectively and quantitatively. It allows scientists to think about the unthinkable and visualize the unviewable. Through it we seek to understand data. We can generate beautiful pictures and graphs; we can add scientific information (temperature, exhaust emission or velocity) to an existing object, thus producing a scientific visualization product.
Thus, we can say scientific visualization is a scientific toolkit which helps to stimulate insight into and understanding of any scientific issue, thus helping not only in solving or analysing it but also in producing appropriate presentations of it. This concept of scientific visualization fits well with modeling and simulation.
(iii) Bit planes

Hint: A bit plane is a segment of memory used to control an object, such as a color, cursor or sprite. Bit planes may be reserved parts of a common memory or independent memory banks, each designed for one purpose. In a 16-bit image, for example, the first bit-plane contains the most significant bit of each pixel and the 16th contains the least significant bit. The first bit-plane therefore gives the roughest but most critical approximation of the values of a medium, and the higher the number of a bit-plane, the smaller its contribution to the final value. Thus, adding bit-planes gives a progressively better approximation: each successive bit-plane contributes half the value of the previous one. If a bit is set to 1, half of the previous bit-plane's contribution is added; otherwise it is not, and together these bits define the final value. Bit-plane is sometimes used as a synonym for bitmap; technically, however, the former refers to the location of the data in memory and the latter to the data itself. One aspect of using bit-planes is determining whether a bit-plane is random noise or contains significant information. One method of testing this is to compare each pixel (X, Y) with the three adjacent pixels (X-1, Y), (X, Y-1) and (X-1, Y-1). If the pixel is the same as at least two of the three adjacent pixels, it is not noise. A noisy bit-plane will have 49% to 51% of pixels that are noise.
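The bit-plane extraction and the adjacent-pixel noise test described above can be sketched as follows; the 8-bit depth, the 4x4 sample image and the function names are illustrative assumptions, not from the source.

```python
def bit_plane(image, plane, bits=8):
    """Extract one bit-plane (plane 0 = most significant of `bits`)."""
    shift = bits - 1 - plane
    return [[(pixel >> shift) & 1 for pixel in row] for row in image]

def noise_fraction(plane):
    """Fraction of pixels that disagree with at least two of the three
    neighbours (X-1, Y), (X, Y-1), (X-1, Y-1), per the heuristic above."""
    noisy = total = 0
    for y in range(1, len(plane)):
        for x in range(1, len(plane[0])):
            total += 1
            neighbours = [plane[y][x-1], plane[y-1][x], plane[y-1][x-1]]
            if neighbours.count(plane[y][x]) < 2:
                noisy += 1
    return noisy / total if total else 0.0

image = [[200, 200, 50, 50],
         [200, 200, 50, 50],
         [200, 200, 50, 50],
         [200, 200, 50, 50]]

msb = bit_plane(image, 0)   # most significant bit-plane
print(msb[0])               # [1, 1, 0, 0]: 200 >= 128, 50 < 128
print(noise_fraction(msb))
```

On this clean two-region image the disagreeing pixels all lie on the boundary between the regions, so the fraction stays well below the 49-51% expected of pure noise.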
(iv) Computer Simulation

Hint: A computer simulation or a computer model is a computer program that attempts to simulate an abstract model of a particular system. The use of simulation is an activity that is as natural as a child who role-plays. Children understand the world around them by simulating (with toys and figures) most of their interactions with other people, animals and objects. As adults, we lose some of this childlike behavior but recapture it later on through computer simulation. To understand reality and all of its complexity, we must build artificial objects and dynamically act out our roles with them. Computer simulation is the electronic equivalent of this type of role playing, and it serves to drive synthetic environments and virtual worlds. Within the overall task of simulation, there are three primary sub-fields: model design, model execution and model analysis. Simulation is often essential in the following cases:
o The model is very complex, with many variables and interacting components;
o The underlying variable relationships are nonlinear;
o The model contains random variates;
o The model output is to be visual, as in a 3D computer animation.

(v) OpenGL

Hint: OpenGL (Open Graphics Library) is a standard specification defining a cross-language, cross-platform API for writing applications that produce 2D and 3D computer graphics. The interface consists of over 250 different function calls which can be used to draw complex three-dimensional scenes from simple primitives. OpenGL was developed by Silicon Graphics Inc. (SGI) in 1992 and is widely used in CAD, virtual reality, scientific visualization, information visualization, and flight simulation. It is also used in video games, where it competes with Direct3D on Microsoft Windows platforms. OpenGL serves two main purposes:
1. To hide the complexities of interfacing with different 3D accelerators by presenting the programmer with a single, uniform interface.
2. To hide the differing capabilities of hardware platforms by requiring that all implementations support the full OpenGL feature set (using software emulation if necessary).

OpenGL's basic operation is to accept primitives such as points, lines and polygons, and convert them into pixels. This is done by a graphics pipeline known as the OpenGL state machine. Most OpenGL commands either issue primitives to the graphics pipeline or configure how the pipeline processes them. Prior to the introduction of OpenGL 2.0, each stage of the pipeline performed a fixed function and was configurable only within tight limits. OpenGL 2.0 offers several stages that are fully programmable using GLSL.

OpenGL is a low-level, procedural API, requiring the programmer to dictate the exact steps required to render a scene. This contrasts with descriptive (also called scene graph or retained mode) APIs, where a programmer only needs to describe a scene and can let the library manage the details of rendering it. OpenGL's low-level design requires programmers to have a good knowledge of the graphics pipeline, but also gives a certain amount of freedom to implement novel rendering algorithms. OpenGL has historically been influential on the development of 3D accelerators, promoting a base level of functionality that is now common in consumer-level hardware:
o Rasterised points, lines and polygons as basic primitives
o A transform and lighting pipeline
o Z-buffering
o Texture mapping
o Alpha blending

b) Write the Bresenham line generation algorithm for positive slope. Explain the use of this algorithm. Use this algorithm to draw a line with endpoints (2, 1) and (6, 4). Comment on the quality of the line that you have drawn using the algorithm.
Hint: The Bresenham algorithm is an accurate and efficient raster line generation algorithm. It scan converts lines using only incremental integer calculations, and these calculations can also be adapted to display circles and other curves. Algorithm (m < 1):
(i) Input the two line endpoints and store the left endpoint in (x0, y0).
(ii) Load (x0, y0) into the frame buffer, i.e., plot the first point.
(iii) Calculate Δx, Δy, 2Δy and 2Δy - 2Δx, and obtain the starting value of the decision parameter as p0 = 2Δy - Δx.
(iv) At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk + 1, yk) and pk+1 = pk + 2Δy; else the next point is (xk + 1, yk + 1) and pk+1 = pk + 2Δy - 2Δx.
(v) Repeat step (iv) Δx times.


(x1, y1) = (2, 1), (x2, y2) = (6, 4)
m = (y2 - y1)/(x2 - x1) = Δy/Δx = 3/4 < 1
Δy = 3, Δx = 4. The initial decision parameter is p0 = 2Δy - Δx = 6 - 4 = 2. The increments for computing successive decision parameters are 2Δy = 6 and 2Δy - 2Δx = -2. Plot the initial point (2, 1) in the frame buffer, then determine successive pixel positions along the line path using the rule:
(i) If pk < 0, the next point is (xk + 1, yk) and pk+1 = pk + 2Δy.
(ii) If pk >= 0, the next point is (xk + 1, yk + 1) and pk+1 = pk + 2Δy - 2Δx.
Thus, we have:

k    pk    next pixel
0     2    (3, 2)
1     0    (4, 3)
2    -2    (5, 3)
3     4    (6, 4)

The drawn line (2, 1), (3, 2), (4, 3), (5, 3), (6, 4) is a stair-step approximation of the true line: since only integer pixel positions can be lit, slight jaggies remain, but each chosen pixel lies within half a pixel of the mathematical line, which is the best a raster display can do.
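The algorithm above can be sketched directly in code; the function name and the restriction to slopes between 0 and 1 are assumptions matching the m < 1 case worked out here.

```python
# Bresenham's line algorithm for 0 <= slope <= 1, following the steps above
# (initial decision parameter p0 = 2*dy - dx).

def bresenham(x0, y0, x1, y1):
    """Return the pixels of a line with 0 <= slope <= 1, left to right."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                    # initial decision parameter
    two_dy, two_dy_dx = 2 * dy, 2 * (dy - dx)
    x, y = x0, y0
    pixels = [(x, y)]                  # plot the first point
    for _ in range(dx):                # repeat the test dx times
        x += 1
        if p < 0:
            p += two_dy                # stay on the same scan line
        else:
            y += 1
            p += two_dy_dx             # step up to the next scan line
        pixels.append((x, y))
    return pixels

print(bresenham(2, 1, 6, 4))  # [(2, 1), (3, 2), (4, 3), (5, 3), (6, 4)]
```

The output reproduces the pixel table computed by hand above.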

(C) Consider that a clipping window ABCD has the following coordinates:
A(0, 0), B(20, 0), C(20, 20) and D(0, 20).

Consider the following three line segments:


XY with endpoints (-5, 0) and (25, 30);
MN with endpoints (30, 35) and (20, 30); and
OP with endpoints (10, 10) and (25, 20).
Use the Cohen Sutherland line clipping Algorithm to find the visible portions of the three line segments in the clipping
window. Explain each of the steps used to determine the visible portion of the line in the algorithm.

Hint: The given window and the line segments are shown in the figure. The window boundaries are XL = 0, XR = 20, YB = 0, YT = 20, and region-code bits are written in the order (left, right, bottom, top).
We first consider the segment XY, with (x1, y1) = (-5, 0) and (x2, y2) = (25, 30).
Region code of X:
x1 < XL : 1
x1 < XR : 0
y1 = YB : 0
y1 < YT : 0
Code X = 1000
Region code of Y:
x2 > XL : 0
x2 > XR : 1
y2 > YB : 0
y2 > YT : 1
Code Y = 0101
Code X AND code Y = 0000, and neither code is 0000, so XY is partly visible and must be clipped. The slope of XY is (30 - 0)/(25 - (-5)) = 1. Clipping against the left edge x = 0 gives y = 0 + 1(0 - (-5)) = 5, the point (0, 5); clipping against the top edge y = 20 gives x = -5 + (20 - 0)/1 = 15, the point (15, 20). Both points lie on the window boundary, so the visible portion of XY runs from (0, 5) to (15, 20).
Next consider MN, with M = (30, 35) and N = (20, 30).
Region code of M:
x1 > XL : 0
x1 > XR : 1
y1 > YB : 0
y1 > YT : 1
Code M = 0101
Region code of N:
x2 = XR : 0
y2 > YT : 1
Code N = 0001
Code M AND code N = 0001, which is not 0000, so MN lies completely above the window and is totally invisible; it is rejected without further computation.
Finally consider OP, with O = (10, 10) and P = (25, 20).
Region code of O: (10, 10) lies inside the window, so code O = 0000.
Region code of P:
x2 > XL : 0
x2 > XR : 1
y2 > YB : 0
y2 = YT : 0
Code P = 0100
Code O AND code P = 0000, so OP is partly visible.
To find the visible portion of OP:
Since code O = 0000, O is inside and nothing is done for it.
In the case of P, the left, bottom and top bits of code P are 0, so nothing is done for those edges. The right bit of code P is 1, so we clip against
x = xWmax = 20.
The point of intersection is (20, y), where
y - y2 = m(xWmax - x2)
y - 20 = {(20 - 10)/(25 - 10)}(20 - 25)
y - 20 = (10/15)(-5)
y - 20 = -10/3
y = 20 - 10/3 = 50/3 ≈ 16.67
Thus, the point is (20, 16.67).
The visible portion is OP', where P' = (20, 16.67).
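The three cases above can be checked with a sketch of the Cohen-Sutherland algorithm; the bit layout matches the (left, right, bottom, top) codes used in the text, while the constant names are illustrative.

```python
# Cohen-Sutherland clipping against the window XL=0, XR=20, YB=0, YT=20.
XL, XR, YB, YT = 0, 20, 0, 20
LEFT, RIGHT, BOTTOM, TOP = 8, 4, 2, 1

def code(x, y):
    c = 0
    if x < XL: c |= LEFT
    if x > XR: c |= RIGHT
    if y < YB: c |= BOTTOM
    if y > YT: c |= TOP
    return c

def clip(x1, y1, x2, y2):
    """Return the visible segment, or None if totally invisible."""
    while True:
        c1, c2 = code(x1, y1), code(x2, y2)
        if c1 == 0 and c2 == 0:
            return (x1, y1), (x2, y2)      # trivially accepted
        if c1 & c2:
            return None                    # trivially rejected
        c = c1 or c2                       # pick an endpoint that is outside
        if c & LEFT:
            x, y = XL, y1 + (y2 - y1) * (XL - x1) / (x2 - x1)
        elif c & RIGHT:
            x, y = XR, y1 + (y2 - y1) * (XR - x1) / (x2 - x1)
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (YB - y1) / (y2 - y1), YB
        else:
            x, y = x1 + (x2 - x1) * (YT - y1) / (y2 - y1), YT
        if c == c1:
            x1, y1 = x, y
        else:
            x2, y2 = x, y

print(clip(-5, 0, 25, 30))    # XY: clips to (0, 5)-(15, 20)
print(clip(30, 35, 20, 30))   # MN: None, totally invisible
print(clip(10, 10, 25, 20))   # OP: clips to (10, 10)-(20, 16.67)
```

The three results agree with the hand-computed codes and intersections above.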

d) Consider a regular hexagon ABCDEF that is only partly visible in the window ABDE. Show or explain how the Sutherland-Hodgman polygon clipping algorithm can be used in this case to find the visible portion of the hexagon in the window.

Hint:

Vertex Vi    Vertex Vi+1   Output vertex
A (inside)   B (inside)    B
B (inside)   C (outside)   B' (edge BC leaves the window at B)
C (outside)  D (inside)    D' and D (edge CD re-enters the window at D)
D (inside)   E (inside)    E
E (inside)   F (outside)   E' (edge EF leaves the window at E)
F (outside)  A (inside)    A' and A (edge FA re-enters the window at A)
Here each intersection point coincides with a window vertex: B' = B, D' = D, E' = E and A' = A. The output vertex list is therefore B, D, E, A, and the visible portion of hexagon ABCDEF is the quadrilateral ABDE.
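A sketch of the Sutherland-Hodgman algorithm applied to this case follows. It assumes a convex, counter-clockwise window, and uses the unit regular hexagon as a concrete instance (these coordinates are an assumption; the question does not fix a size).

```python
import math

def clip_polygon(subject, window):
    """Sutherland-Hodgman: clip `subject` against a convex CCW `window`."""
    def inside(p, a, b):
        # left of (or on) the directed window edge a -> b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= -1e-9

    def intersect(p, q, a, b):
        # intersection of line pq with the window edge line ab
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        d = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / d
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))

    output = list(subject)
    for i in range(len(window)):
        a, b = window[i], window[(i+1) % len(window)]
        inp, output = output, []
        for j in range(len(inp)):
            cur, prev = inp[j], inp[j-1]
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(intersect(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(intersect(prev, cur, a, b))
    return output

# Regular hexagon ABCDEF (CCW) and the window ABDE made of four of its vertices.
hexagon = [(math.cos(math.radians(60*k)), math.sin(math.radians(60*k)))
           for k in range(6)]
A, B, C, D, E, F = hexagon
clipped = clip_polygon(hexagon, [A, B, D, E])

unique = []                     # drop consecutive duplicate vertices
for p in clipped:
    if not unique or math.dist(p, unique[-1]) > 1e-6:
        unique.append(p)
print(len(unique))              # 4: the clipped polygon is ABDE
```

The clip edges BD and EA cut off C and F exactly at the hexagon's own vertices, so the surviving polygon is ABDE, matching the table above.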

e) Write and explain the mid point circle generation algorithm.

Hint: MID POINT CIRCLE ALGORITHM.


1. We will first calculate pixel positions for a circle centered on the origin (0, 0). Each calculated position (x, y) is then moved to its proper screen position by adding xc to x and yc to y.
2. Note that along the circle section from x = 0 to x = y in the first octant, the slope of the curve varies from 0 to -1.
3. The circle function about the origin is given by
fcircle(x, y) = x² + y² - r²
4. Any point (x, y) on the boundary of the circle satisfies the equation, and the circle function is zero there. For a point in the interior of the circle the circle function is negative, and for a point outside the circle it is positive. Thus,
fcircle(x, y) < 0 if (x, y) is inside the circle boundary
fcircle(x, y) = 0 if (x, y) is on the circle boundary
fcircle(x, y) > 0 if (x, y) is outside the circle boundary
5. Assuming we have just plotted the pixel at (xk, yk), we next need to determine whether the pixel at position (xk + 1, yk) or the one at (xk + 1, yk - 1) is closer to the circle.
6. Our decision parameter is the circle function evaluated at the midpoint between these two pixels:
pk = fcircle(xk + 1, yk - 1/2) = (xk + 1)² + (yk - 1/2)² - r²
If pk < 0, this midpoint is inside the circle and the pixel on scan line yk is closer to the circle boundary; then pk+1 = pk + 2xk+1 + 1. Otherwise, the midpoint is outside or on the circle boundary, we select the pixel on scan line yk - 1, and pk+1 = pk + 2xk+1 + 1 - 2yk+1.
7. Note that the following can also be computed incrementally:
2xk+1 = 2xk + 2
2yk+1 = 2yk - 2
At the start position (0, r), these two terms have the values 2 and 2r - 2 respectively.
8. The initial decision parameter is obtained by evaluating the circle function at the start position (x0, y0) = (0, r):

p0 = fcircle(1, r - 1/2) = 1 + (r - 1/2)² - r², i.e. p0 = 5/4 - r

9. If the radius r is specified as an integer, we can round p0 to

p0 = 1 - r
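The steps above can be sketched for one octant; the other seven octants follow by symmetry. The function name is an assumption, and the pk+1 increments follow the incremental terms noted in step 7.

```python
# Midpoint circle algorithm for the first octant, centred on the origin,
# using the rounded initial decision parameter p0 = 1 - r.

def circle_octant(r):
    """Pixels from (0, r) down to the x = y line in the first octant."""
    x, y = 0, r
    p = 1 - r
    pixels = [(x, y)]
    while x < y:
        x += 1
        if p < 0:
            p += 2 * x + 1            # midpoint inside: keep scan line y
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y    # midpoint outside: drop to y - 1
        pixels.append((x, y))
    return pixels

print(circle_octant(10))
# [(0, 10), (1, 10), (2, 10), (3, 10), (4, 9), (5, 9), (6, 8), (7, 7)]
```

For r = 10 this reproduces the standard textbook octant, stopping once x reaches y.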

Question 2: a) Explain the Homogeneous Coordinate System with the help of an example. Assume that a triangle ABC has the coordinates A(0, 0), B(4, 4), C(2, 2). Find the transformed coordinates when the triangle ABC is subjected to a clockwise rotation of 45° about the origin and then a translation in the direction of the vector (1, 0). You should represent the transformation using the Homogeneous Coordinate System.

Hint: A homogeneous coordinate system is a coordinate system in which there is an extra dimension, used most commonly in computer science to specify whether the given coordinates represent a vector (if the last coordinate is zero) or a point (if the last coordinate is non-zero). A homogeneous coordinate system is used by OpenGL for representing position. Example: given a point (x, y), its representation in homogeneous coordinates is (x, y, h), where h is non-zero. A homogeneous coordinate is converted back to its conventional form as (x/h, y/h).
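The requested transformation can be sketched with 3x3 homogeneous matrices: a clockwise rotation of 45° about the origin (a negative angle in the usual counter-clockwise convention) followed by a translation by (1, 0). Plain nested lists stand in for a matrix library; the helper names are illustrative.

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    x, y = p
    return (round(m[0][0]*x + m[0][1]*y + m[0][2], 4),
            round(m[1][0]*x + m[1][1]*y + m[1][2], 4))

t = math.radians(-45)                     # clockwise => negative angle
R = [[math.cos(t), -math.sin(t), 0],      # rotation about the origin
     [math.sin(t),  math.cos(t), 0],
     [0, 0, 1]]
T = [[1, 0, 1],                           # translation by (1, 0)
     [0, 1, 0],
     [0, 0, 1]]
M = matmul(T, R)                          # rotate first, then translate

for name, p in [("A", (0, 0)), ("B", (4, 4)), ("C", (2, 2))]:
    print(name, apply(M, p))
```

Because B and C lie on the line y = x, the clockwise 45° rotation drops them onto the x-axis: A maps to (1, 0), B to about (6.657, 0) and C to about (3.828, 0).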

(b) A polygon has 4 vertices located at A(0, 0), B(5, 0), C(5, 5), D(0, 5). Apply the following transformations on the polygon: (i) scaling and (ii) xy-shear about the origin. You must make and state suitable assumptions about the scaling and shear factors used by you.

Hint: (i) We choose scaling factors sx = sy = 1/2. The matrix representation of the given polygon is

[ 0 5 5 0 ]
[ 0 0 5 5 ]
[ 1 1 1 1 ]

and the homogeneous scaling matrix is

[ 1/2  0   0 ]
[ 0   1/2  0 ]
[ 0    0   1 ]

Multiplying the scaling matrix by the polygon matrix gives (0, 0), (5/2, 0), (5/2, 5/2), (0, 5/2), which are the required co-ordinates.
(ii) We choose shearing factors a = 2 and b = 3 in the x and y directions respectively. Thus, the matrix of xy-shear is

[ 1 2 0 ]
[ 3 1 0 ]
[ 0 0 1 ]

Applying it to the vertices gives (0, 0), (5, 15), (15, 20) and (10, 5) as the transformed co-ordinates.
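The two results can be verified with a small sketch, using the assumed factors stated in the hint (sx = sy = 1/2; shear a = 2, b = 3).

```python
# Apply a 2x2 transform matrix to each polygon vertex.
def transform(m, points):
    return [(m[0][0]*x + m[0][1]*y, m[1][0]*x + m[1][1]*y)
            for x, y in points]

polygon = [(0, 0), (5, 0), (5, 5), (0, 5)]
scale = [[0.5, 0], [0, 0.5]]     # sx = sy = 1/2
shear = [[1, 2], [3, 1]]         # x' = x + 2y, y' = 3x + y

print(transform(scale, polygon))  # [(0.0, 0.0), (2.5, 0.0), (2.5, 2.5), (0.0, 2.5)]
print(transform(shear, polygon))  # [(0, 0), (5, 15), (15, 20), (10, 5)]
```

Both outputs match the co-ordinates derived above.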

c) Explain the following projections with the help of an example.


(i) Orthographic Projections (ii) Oblique Projections (iii) Isometric Projections
(i) Hint: Orthographic Projections:-

Orthographic projection (or orthogonal projection) is a means of representing a three-dimensional object in two dimensions. It is a form of parallel projection, where the view direction is orthogonal to the projection plane, resulting in every plane of the scene appearing in affine transformation on the viewing surface. It is further divided into multiview orthographic projections and axonometric projections. Graphic communication has many forms; orthographic projection is one such form. It was developed as a way of communicating information about physical objects and is part of a universal system of drawings. House plans, one well-known drawing format, are a form of orthographic projection. In simple terms, orthographic drawings are views (front, side, top, and so on) of an object. An orthographic view shows only one side, so it takes several views to show all of the object. Before getting to views, it is useful to look at another type of drawing. Pictorial drawings show several sides at the same time, and many people find them easier to understand, but they do not provide as much information as orthographic views. The most commonly used pictorial drawing for technical information is the isometric drawing. Isometric drawings were developed to approximate perspective, but are much easier to draw. For a square box, all the sides are drawn as vertical lines, or at 30 degrees to the horizontal. A typical isometric of a box is shown in the example. Note the way the sides are labeled: this is very important, because each side is normally used to create an orthographic view.
Pictorial Drawing Example

A simple box has 6 sides: top, bottom, 2 ends and 2 sides. An isometric drawing of a box, with labels added to the sides, is shown in the figure. These labels are OK, but in the world of technical drawings special labels are used; each label refers to a position on the drawing. The proper labels for the sides of this box are:
Top View Front View
Right Side View
Left Side View
Rear View
Bottom View
One important thing to note is that these labels are for the position. Front view is always in this location, regardless
of the object that is drawn.
(ii) Oblique Projections:-

If the direction of projection d = (d1, d2, d3) of the rays is not perpendicular to the view plane (or d does not have the same direction as the view plane normal N), then the parallel projection is called an oblique projection. In oblique projection, only the faces of the object parallel to the view plane are shown at their true size and shape; angles and lengths are preserved for these faces only. Faces not parallel to the view plane are distorted. In oblique projection, lines perpendicular to the projection plane are foreshortened (projected shorter than their actual length) by the direction of projection of the rays, which determines the amount of foreshortening. The change in length of the projected line (due to the direction of projection of the rays) is measured in terms of the foreshortening factor, f.
(iii) Isometric projections:-

In isometric projection, the direction of projection makes equal angles with all three principal axes. Thus, the direction of projection is d = (1, 1, 1) and the normal is n = (1, 1, 1). The equation of the plane of projection is x + y + z = 0.
Transformation: Let P(x, y, z) be any point in space. Suppose it is projected to P'(x', y', z') on the projection plane x + y + z = 0. We find the point P'.
The parametric equation of the line passing through the point P(x, y, z) in the direction d = (1, 1, 1) is
P + td = (x + t, y + t, z + t), where -∞ < t < ∞.
P' is obtained when t = t*. Thus, P'(x', y', z') = (x + t*, y + t*, z + t*). Since P' lies on x + y + z = 0,
(x + t*) + (y + t*) + (z + t*) = 0, which gives t* = -(x + y + z)/3.
Hence P' = (x - (x + y + z)/3, y - (x + y + z)/3, z - (x + y + z)/3).
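The projection just derived can be sketched in a few lines: project P along d = (1, 1, 1) onto the plane x + y + z = 0 using t* = -(x + y + z)/3. The function name is an assumption.

```python
def isometric_project(p):
    """Project p onto the plane x + y + z = 0 along d = (1, 1, 1)."""
    x, y, z = p
    t = -(x + y + z) / 3
    return (x + t, y + t, z + t)

q = isometric_project((1, 2, 3))
print(q)          # (-1.0, 0.0, 1.0)
print(sum(q))     # 0.0: the image lies on the plane x + y + z = 0
```

Any projected point sums to zero, confirming it lies on the projection plane.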
When a three-dimensional object is projected onto a view plane using perspective transformation equations, any set
of parallel lines in the object that are not parallel to the plane are projected into converging lines. Parallel lines that
are parallel to the view plane will be projected as parallel lines. The point at which a set of projected parallel lines
appears to converge is called a vanishing point. Each such set of projected parallel lines will have a separate
vanishing point, and in general, a scene can have any number of vanishing points, depending on how many sets of
parallel lines are there in the scene.

Question 3:

a) What are the uses of Bezier curves in Computer Graphics? Draw a Bezier curve having the control points P1(0, 0), P2(2, 5), P3(5, 9), P4(10, 20). Calculate the coordinates of the points on the curve corresponding to the parameters u = 0.2, 0.4, 0.6. Draw a rough sketch of the curve and show the coordinates of these points on it.

Hint: Bezier curves are commonly found in painting and drawing packages, as well as in CAD systems, since they are easy to implement and reasonably powerful in curve design. Efficient methods for determining coordinate positions along a Bezier curve can be set up using recursive calculations. For example, successive binomial coefficients can be calculated as
C(n, k) = ((n - k + 1)/k) C(n, k - 1), for n >= k.

Bezier curve with control points P1(0, 0), P2(2, 5), P3(5, 9), P4(10, 20):


We know the cubic Bezier curve is

P(u) = (0,0)(1-u)³ + 3(2,5)u(1-u)² + 3(5,9)u²(1-u) + (10,20)u³
With u = 0.2:
P(0.2) = (0,0)(0.8)³ + (6,15)(0.2)(0.8)² + (15,27)(0.2)²(0.8) + (10,20)(0.2)³
= (0,0)(0.512) + (6,15)(0.128) + (15,27)(0.032) + (10,20)(0.008)
= (0,0) + (0.768, 1.92) + (0.48, 0.864) + (0.08, 0.16)
= (1.328, 2.944)
With u = 0.4:

P(0.4) = (0,0)(0.6)³ + (6,15)(0.4)(0.6)² + (15,27)(0.4)²(0.6) + (10,20)(0.4)³
= (0,0)(0.216) + (6,15)(0.144) + (15,27)(0.096) + (10,20)(0.064)
= (0.864, 2.16) + (1.44, 2.592) + (0.64, 1.28)
= (2.944, 6.032)
With u = 0.6:

P(0.6) = (0,0)(0.4)³ + (6,15)(0.6)(0.4)² + (15,27)(0.6)²(0.4) + (10,20)(0.6)³
= (6,15)(0.096) + (15,27)(0.144) + (10,20)(0.216)
= (0.576, 1.44) + (2.16, 3.888) + (2.16, 4.32)
= (4.896, 9.648)
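The evaluation above can be sketched using the Bernstein form of the cubic Bezier curve, P(u) = Σ Bi(u) Pi; the function name and the rounding to three decimals are assumptions.

```python
def bezier(control, u):
    """Evaluate a cubic Bezier curve at parameter u."""
    b = [(1-u)**3, 3*u*(1-u)**2, 3*u**2*(1-u), u**3]   # Bernstein weights
    x = sum(w * p[0] for w, p in zip(b, control))
    y = sum(w * p[1] for w, p in zip(b, control))
    return (round(x, 3), round(y, 3))

control = [(0, 0), (2, 5), (5, 9), (10, 20)]
for u in (0.2, 0.4, 0.6):
    print(u, bezier(control, u))
```

The printed points match the hand computation: (1.328, 2.944), (2.944, 6.032) and (4.896, 9.648).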
b) Why do you need to use visible-surface detection in Computer Graphics? Explain the scan-line method along with the algorithm for visible-surface detection with the help of an example. How is the scan-line method different from the z-buffer method?

Hint: For the generation of a realistic graphics display, hidden surfaces and hidden lines must be identified for elimination. For this purpose we need to conduct visibility tests, which try to identify the surfaces or edges that are visible from a given viewpoint. Visibility tests are performed by making use of either (i) object-space methods, (ii) image-space methods, or (iii) both object-space and image-space methods. The scan-line method deals with multiple surfaces, since it processes one scan-line at a time: all polygons intersected by that scan-line are examined to determine which surfaces are visible. The visibility test involves comparing the depths of the overlapping surfaces to determine which one is closer to the view plane; that surface is declared visible, and its intensity values at the positions along the scan-line are entered into the refresh buffer.

Assumptions:
(I) The plane of projection is the Z = 0 plane.
(II) Orthographic parallel projection.
(III) The direction of projection is d = (0, 0, -1).
(IV) Objects are made up of polygon faces.

This algorithm solves the hidden surface problem one scan-line at a time, usually processing scan lines from the bottom to the top of the display. It is a one-dimensional version of the depth buffer. We require two arrays, intensity[x] and depth[x], to hold the values for a single scan line. (In the accompanying figure, at points Q1 and Q2 both polygons are active, i.e., share the scan line.) For each scan line, perform steps (1) to (3):
(1) For all pixels on the scan line, set depth[x] = 1.0 (the maximum value) and intensity[x] = background colour.
(2) For each polygon in the scene, find all pixels on the current scan line (say S1) that lie within the polygon. For each of these x values:
a) Calculate the depth z of the polygon at (x, y).
b) If z < depth[x], set depth[x] = z and intensity[x] to the polygon's shading.
(3) After all polygons have been considered, the values in the intensity array represent the solution and can be copied into the frame buffer.

Advantages of the scan-line algorithm:

Here, at every step, we work with a one-dimensional array, i.e., intensity[0..x_max] for colour, not a 2D array as in the depth-buffer algorithm.
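Steps (1)-(3) above can be sketched for a single scan line. The constant-depth polygon spans, the colours and the line width are made-up values for illustration; real polygons would supply per-pixel depths.

```python
BACKGROUND = 0
WIDTH = 10

def scan_line(spans):
    """spans: list of (x_start, x_end, depth, color) active on this line."""
    depth = [1.0] * WIDTH               # step (1): maximum depth
    intensity = [BACKGROUND] * WIDTH    # step (1): background colour
    for x0, x1, z, color in spans:      # step (2): each polygon's pixels
        for x in range(x0, x1 + 1):
            if z < depth[x]:            # step (2b): the closer surface wins
                depth[x] = z
                intensity[x] = color
    return intensity                    # step (3): copy to the frame buffer

# Two overlapping spans: polygon 1 at depth 0.5 and polygon 2 at depth 0.2.
print(scan_line([(0, 5, 0.5, 1), (3, 8, 0.2, 2)]))
# [1, 1, 1, 2, 2, 2, 2, 2, 2, 0]
```

Where the spans overlap (x = 3..5) the nearer polygon 2 wins; pixel 9 keeps the background colour.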

Distinguish between the z-buffer method and the scan-line method. What are the visibility tests made in these methods?

In the z-buffer algorithm, every pixel position on the projection plane is considered for determining the visibility of surfaces with respect to that pixel. In the scan-line method, on the other hand, all surfaces intersected by a scan line are examined for visibility. The visibility test in the z-buffer method involves comparing the depths of surfaces with respect to a pixel on the projection plane; the surface closest to the pixel position is considered visible. The visibility test in the scan-line method compares depth calculations for each overlapping surface to determine which surface is nearest to the view plane, and that surface is declared visible.

c) Explain the following terms in the context of Computer Graphics using suitable diagrams and/or mathematical equations or an example: (i) Phong Shading

(i) Hint: Phong Shading:-

The Phong model describes the interaction of light with a surface in terms of the properties of the surface and the nature of the incident light. The reflection model is the basic factor in the look of a three-dimensional shaded object: it enables a two-dimensional screen projection of an object to look real. The Phong model expresses reflected light in terms of a diffuse and a specular component together with an ambient term. Phong shading overcomes some of the disadvantages of Gouraud shading, and specular reflection can be successfully incorporated in the scheme. The first stage in the process is the same as for Gouraud shading: for any polygon we evaluate the vertex normals. For each scan line in the polygon we evaluate by linear interpolation the normal vectors at the ends of the line. These two vectors, Na and Nb, are then used to interpolate Ns. We thus derive a normal vector for each point or pixel on the polygon that is an approximation to the real normal on the curved surface approximated by the polygon. Ns, the interpolated normal vector, is then used in the intensity calculation. The vector interpolation tends to restore the curvature of the original surface that has been approximated by the polygon mesh. We have:
Na = [N1(ys - y2) + N2(y1 - ys)] / (y1 - y2)
Nb = [N1(ys - y4) + N4(y1 - ys)] / (y1 - y4)
Ns = [Na(xb - xs) + Nb(xs - xa)] / (xb - xa)
These are vector equations that would each be implemented as a set of three equations, one for each of the components of the vectors in world space. This makes the Phong shading interpolation phase three times as expensive as Gouraud shading. In addition, there is an application of the Phong model intensity equation at every pixel. Incremental computation is also used for the normal interpolation:
Ns(x, r) = Ns(x, r-1) + ΔNs(x)
So, in Phong shading the attributes interpolated are the vertex normals, rather than the vertex intensities. Interpolation of normals allows highlights smaller than a polygon.
(ii) Hint: Diffuse Reflection:-

Diffuse reflection is characteristic of light reflected from a dull, non-shiny surface. Objects illuminated solely by diffusely reflected light exhibit an equal light intensity from all viewing directions. That is, in diffuse reflection, light incident on the surface is reflected equally in all directions and is attenuated by an amount dependent upon the physical properties of the surface. Since the light is reflected equally in all directions, the perceived illumination of the surface does not depend on the position of the viewer. Diffuse reflection is characteristic of matt surfaces, i.e., surfaces that are rough or grainy, which tend to scatter the incident light in all directions; this scattered light is called diffuse reflection. The general mathematical expression for the combined effect of ambient and diffuse reflection is given by
I = Ia Ka + Id Kd cos θ = Ia Ka + Id Kd (N · L)
where I is the sum of the intensities for ambient and diffuse reflection, Ia and Id are the intensities of the incident ambient and diffuse light respectively, N is the unit normal to the surface and L is the unit vector in the light direction. With Ka = Kd,
I = Ia Ka + Id Ka cos θ = Ka (Ia + Id cos θ)
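The ambient-plus-diffuse formula above, I = Ia·Ka + Id·Kd·(N · L), can be sketched as follows; the numeric constants are illustrative values, not from the source.

```python
import math

def diffuse_intensity(Ia, Ka, Id, Kd, normal, light_dir):
    """I = Ia*Ka + Id*Kd*cos(theta), with cos(theta) = N . L (unit vectors)."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    N, L = unit(normal), unit(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(N, L)))  # clamp back-faces
    return Ia * Ka + Id * Kd * cos_theta

# Light hitting the surface head-on (N parallel to L): cos(theta) = 1,
# so I = 0.2*0.3 + 1.0*0.7 ≈ 0.76.
print(diffuse_intensity(0.2, 0.3, 1.0, 0.7, (0, 0, 1), (0, 0, 1)))
```

When the light comes from behind the surface, the clamped cosine drops the diffuse term and only the ambient contribution Ia·Ka remains.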
(iii) Hint: Basic Ray Tracing Algorithm:-

The basic ray tracing algorithm is a recursive algorithm. Infinite recursion is recursion that never ends; the ray tracing algorithm, however, is finitely recursive. This is important, because otherwise you would start rendering an image and it would never finish! The algorithm begins, as in ray casting, by shooting a ray from the eye through the screen, determining all the objects that intersect the ray, and finding the nearest of those intersections. It then recurses by shooting more rays from the point of intersection to see what objects are reflected at that point, what objects may be seen through the object at that point, which light sources are directly visible from that point, and so on. These additional rays are often called secondary rays to differentiate them from the original, primary ray. As an analysis of the above discussion, we can say that we pay for the increased features of ray tracing by a dramatic increase in the time spent calculating points of intersection with both the primary rays and each secondary and shadow ray. Thus achieving good picture quality is not an easy task, and it only gets more expensive as you try to achieve more realism in your image.
(iv) Hint: Surface of revolution:-

A surface of revolution is generated by revolving a given curve about an axis. The given curve is the profile curve, while the axis is the axis of revolution.
To design a surface of revolution, select Advanced Features followed by Cross Sectional Design. This will bring up the curve system. In the curve system, just design a profile curve based on the conditions discussed below, and then select Techniques followed by Generate Surface of Revolution. The surface system will display a surface of revolution defined by the given profile curve.
Some special restrictions must be followed in order to design a surface of revolution under the curve system. First, the axis of revolution must be the z-axis. Second, the profile curve must be in the xz-plane. However, when bringing up the curve system, only the xy-plane is shown. To overcome this problem, one can design a profile curve on the xy-plane and rotate the curve (not the scene) about the x-axis by 90 degrees (or -90 degrees, depending on your need). In this way the profile curve will be placed on the xz-plane. Note that to modify control points after this rotation, you must use the sliders.
(v) Equation of a plane that passes through the point P(0, 0, 0) and whose normal is given by

Question 4: a) Explain the following with the help of an example or diagram and/or a mathematical equation used in the context of Computer Graphics and Multimedia. (10 Marks)
(i) Zero Acceleration
(ii) Sprite animation
(iii) Computer Assisted animation
(iv) Steps for creating animation
(v) Negative Acceleration
(vi) Features of animation hardware
(vii) Features of an animation software
(viii) Any two uses of animation

(i) Hint: Zero Acceleration:-

Here, the time spacing for the in-betweens is at equal intervals, i.e. ΔT = (Tb - Ta)/(N + 1), where ΔT is the time spacing, Ta and Tb are the times of the end key frames, and N in-betweens are required between Ta and Tb. The time of the jth in-between is given by Tj = Ta + jΔT, j = 1, 2, ..., N.
(ii) Sprite animation:-

Sprites often have transparent areas and are not restricted to rectangular shapes. Sprite animation lends itself well to interactivity: the position of each sprite is controlled by the user or by an application program (or by both), and this is called external animation. We refer to animated objects (sprites or movies) as animobs. In games and in many multimedia applications, the animations should adapt themselves to the environment, the program status or the user activity; that is, animation should be interactive. To make the animations more event-driven, one can embed a script, a small executable program, in every animob. Every time an animob touches another animob, or when an animob gets clicked, the script is activated. The script then decides how to react to the event. The script file itself is written by the animator or by a programmer. Sprite animation is well suited to creating interactive animation because it is an external animation method, and it has wide applications in games and many multimedia applications.
(iii) Computer Assisted animation:-

Computer-assisted animation usually refers to two-dimensional systems that computerize the traditional animation process. Interpolation between key shapes is typically the only algorithmic use of the computer in the production of this type of animation.
1
2 (iv) Steps for creating animaton:-

Following are the steps for creating animation:
1. Define the physical size of the animation graphic
2. Draw the background
3. Create the simple animation
a) Set up animation
b) Add motion
c) End animation
d) Add Sound
e) Test and save

4. Publish the animation

(v) Negative Acceleration:-


To incorporate decreasing speed in an animation, the time spacing between the frames should decrease, so that there is less change in position as the object moves slower. The function used to obtain this decreasing spacing is sin θ, 0 < θ ≤ π/2. For N in-betweens, the time for the Jth in-between is calculated as: Tj = Ta + ΔT·sin(Jπ/(2(N + 1))), J = 1, 2, ..., N, where ΔT = Tb - Ta.
The spacing between frames decreases, so the motion changes from fast to slow, i.e. decreasing acceleration, or deceleration.
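The sine-based spacing can be sketched in Python (a minimal illustration; function and variable names are ours):

```python
import math

def decel_times(ta, tb, n):
    """Negative acceleration: N in-between times between key frames
    Ta and Tb, spaced by Tj = Ta + dT*sin(J*pi/(2*(N + 1)))."""
    dt = tb - ta
    return [ta + dt * math.sin(j * math.pi / (2 * (n + 1)))
            for j in range(1, n + 1)]

print([round(t, 3) for t in decel_times(0.0, 1.0, 4)])
# → [0.309, 0.588, 0.809, 0.951] -- the gaps shrink toward Tb
```

Because sin θ rises steeply near 0 and flattens near π/2, successive in-between times bunch together toward Tb, producing the slow-down.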
(vi) Features of animation hardware:-
To create different types of animation, special software and hardware are needed. These collectively serve as computer animation tools. Hardware: available in many shapes, sizes and capabilities. Some hardware is specialized to do only certain tasks, whereas other hardware does a variety of things. Some of the hardware options are listed as follows:
(1) SGI (Silicon Graphics Incorporated): Of the different types of computers available in the market, SGI computers are extremely fast, produce excellent results and operate using the widespread UNIX operating system. Software packages for them include Wavefront, Alias and Softimage.
(2) PCs (Personal Computers): Because of both flexibility and power, PCs are very useful for small companies and other businesses as platforms to do computer animation. 3D animation software like 3D Studio and Animator Studio is used on PCs to make animations.
(3) Amiga: Widely used in producing effects and animation for movies, TV shows, etc. It is known for two software packages, Video Toaster and LightWave 3D.
(4) Macintosh: Originally designed to be graphics and desktop publishing machines. Useful for small-scale companies wishing to produce nice-looking applications.

d) Explain the following terms in the context of Multimedia giving one example wherever applicable/ needed.
(i) Hypertext

Hint: Hypertext allows documents to be linked in a non-linear fashion. Hypertext is a non-sequential form of
composing and writing. It therefore provides choices for the reader as to how to navigate such text. While the
organization of regular, sequential text may be based on a multitude of criteria and dimensions, the final product
freezes the presentation into a single, sequential chain of paragraphs and chapters. The author is in charge, the reader
either accepts this "first-to-last-page" organization, or has to resort to an ad-hoc "hyper-reading" style by jumping from page to page along a string tailored with the help of tables of contents, indices, previous notes and luck.
(i) Virtual Reality

Hint: Virtual reality is an artificial environment created with computer hardware and software and presented to the
user in such a way that it appears and feels like a real environment. This technology has been applied in all walks of
life especially in education where it is used to simulate learning environments. So many universities and military
establishments had adopted this technology and this had improved the learning capability of users. VR is defined as
a highly interactive, computer-based multimedia environment in which the user becomes the participant in a
computer-generated world. It is the simulation of a real or imagined environment that can be experienced visually in
the three dimensions of width, height, and depth and that may additionally provide an interactive experience visually
in full real-time motion with sound and possibly with tactile and other forms of feedback. VR is a way for humans to visualize, manipulate and interact with computers and extremely complex data. VR is a computer-synthesized, three-dimensional environment in which a plurality of human participants, appropriately interfaced, may engage and manipulate simulated physical elements in the environment and, in some forms, may engage and interact with representations of other humans, past, present or fictional, or with invented creatures. It is a computer-based technology for simulating the visual, auditory and other sensory aspects of complex environments. VR incorporates 3D technologies that give a real-life illusion, creating a simulation of a real-life situation.
(ii) JPEG Graphics and its application

Hint: JPEG is one of the two most popular bitmap image file formats used on the Internet (the other being GIF). It's an acronym for Joint Photographic Experts Group, the committee that wrote the JPEG standard, which was designed for compressing full color or grayscale images. JPEG is lossy: when the JPEG algorithm compresses the image, it reduces the size by throwing bits of the image away. Strictly speaking, JPEG is not actually a file format, though it is often referred to as one; JPEG is the name of the compression algorithm used to compress the image.
(iii) Digital Sound versus Analogue Sound

Hint: An analog recording is one where the original sound signal is modulated onto another physical signal carried
on some media or substrate such as the groove of a gramophone disc or the magnetic field of a magnetic tape. A
physical quantity in the medium (e.g., the intensity of the magnetic field) is directly related to the physical properties
of the sound (e.g., the amplitude, phase and possibly direction of the sound wave). The reproduction of the sound
will in part reflect the nature of the substrate and any imperfections on its surface. A digital recording is produced by
first encoding the physical properties of the original sound as digital information which can then be decoded for
reproduction. While it is subject to noise and imperfections in capturing the original sound, as long as the individual
bits can be recovered, the nature of the physical medium is immaterial in recovery of the encoded information. A
damaged digital medium, such as a scratched compact disc, may also yield degraded reproduction of the original
sound, due to the loss of some digital information in the damaged area (but not due directly to the physical damage
of the disc).
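The "encoding the physical properties as digital information" step can be illustrated with PCM sampling. This is a minimal sketch; the sample rate, tone frequency and function name are chosen for the example, not taken from the text:

```python
import math

# Sample a 440 Hz sine tone at 8 kHz and quantize each sample to a
# signed 16-bit integer, as PCM encoding does.
SAMPLE_RATE = 8000        # samples per second
FREQ = 440.0              # tone frequency in Hz
AMPLITUDE = 32767         # maximum value of a signed 16-bit sample

def pcm_samples(duration_s):
    n = int(SAMPLE_RATE * duration_s)
    return [int(AMPLITUDE * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE))
            for i in range(n)]

samples = pcm_samples(0.001)      # 1 ms of audio = 8 samples
print(samples)
```

As long as these integers survive storage and transmission, the waveform can be reconstructed exactly, which is why the physical medium is immaterial to digital recovery.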
(iv) Any two Audio file formats

Hint: wav - standard audio file format used mainly in Windows PCs. Commonly used for storing uncompressed
(PCM), CD-quality sound files, which means that they can be large in size - around 10MB per minute of music. It is
less well known that wave files can also be encoded with a variety of codecs to reduce the file size (for example the
GSM or mp3 codecs). mp3 - the MPEG Layer-3 format is the most popular format for downloading and storing
music. By eliminating portions of the audio file that are essentially inaudible, mp3 files are compressed to roughly
one-tenth the size of an equivalent PCM file while maintaining good audio quality. We recommend the mp3 format
for music storage. It is not that good for voice storage.
raw - a raw file can contain audio in any codec but is usually used with PCM audio data. It is rarely used except for technical tests.
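The "around 10MB per minute" figure for uncompressed CD-quality audio follows directly from the PCM parameters (44.1 kHz sample rate, 16 bits per sample, two channels); a quick check:

```python
# CD-quality PCM: 44,100 samples/s, 2 bytes per sample, 2 channels.
SAMPLE_RATE = 44100
BYTES_PER_SAMPLE = 2      # 16-bit samples
CHANNELS = 2              # stereo

bytes_per_minute = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * 60
print(bytes_per_minute / 1024 / 1024)   # ≈ 10.09 MB per minute
```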
(v) Interlaced Scan for image capture

Hint: Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display
screen by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One
field contains all the odd lines in the image, the other contains all the even lines of the image. A PAL based
television display, for example, scans 50 fields every second (25 odd and 25 even). The two sets of 25 fields work
together to create a full frame every 1/25th of a second, resulting in a display of 25 frames per second.
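The odd/even field split can be sketched in Python (a minimal illustration using a list of rows to stand in for a frame):

```python
# Interlacing: split a frame's rows into two fields, one holding the
# odd-numbered lines and one the even-numbered lines.
def split_fields(frame):
    odd_field = frame[0::2]   # lines 1, 3, 5, ...
    even_field = frame[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field

frame = ["line%d" % n for n in range(1, 7)]   # a 6-line "frame"
odd, even = split_fields(frame)
print(odd)   # → ['line1', 'line3', 'line5']
print(even)  # → ['line2', 'line4', 'line6']
```

Displaying the two fields in alternation at 50 fields per second yields the 25 full frames per second of a PAL display.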
(vi) Any two video file formats

Hint: Flash Video Format (.flv) Because of the cross-platform availability of Flash video players, the Flash video
format has become increasingly popular. Flash video is playable within Flash movies files, which are supported by
practically every browser on every platform. Flash video is compact, using compression from On2, and supports
both progressive and streaming downloads.
AVI Format (.avi) The AVI format, which stands for audio video interleave, was developed by Microsoft. It stores
data that can be encoded in a number of different codecs and can contain both audio and video data. The AVI format
usually uses less compression than some similar formats and is a very popular format amongst internet users. AVI
files most commonly contain M-JPEG, or DivX codecs, but can also contain almost any format. The AVI format is
supported by almost all computers using Windows, and can be played on various players. Some of the most common
players that support the avi format are:
Apple QuickTime Player (Windows & Mac)
Microsoft Windows Media Player (Windows & Mac)
VideoLAN VLC media player (Windows & Mac)
Nullsoft Winamp

Quicktime Format (.mov) The QuickTime format was developed by Apple and is a very common one. It is often
used on the internet, and for saving movie and video files. The format contains one or more tracks storing video,
audio, text or effects. It is compatible with both Mac and Windows platforms, and can be played on an Apple
Quicktime player.

MP4 Format (.mp4) This format is mostly used to store audio and visual streams online, most commonly those
defined by MPEG. It expands MPEG-1 to support video/audio "objects", 3D content, low bit rate encoding and
support for Digital Rights Management. The MPEG-4 video format uses separate compression for audio and video
tracks; video is compressed with MPEG-4 video encoding; audio is compressed using AAC compression, the same
type of audio compression used in .AAC files. The mp4 can most commonly be played on the Apple QuickTime
Player or other movie players. Devices that play mp4 files are also known as mp4 players.
Mpg Format (.mpg) Common video format standardized by the Moving Picture Experts Group (MPEG); typically
incorporates MPEG-1 or MPEG-2 audio and video compression; often used for creating downloadable movies. It
can be played using Apple QuickTime Player or Microsoft Windows Media Player.
Windows Media Video Format (.wmv) WMV format, short for Windows Media Video was developed by
Microsoft. It was originally designed for internet streaming applications, and can now cater to more specialized
content. Windows Media is a common format on the Internet, but Windows Media movies cannot be played on non-
Windows computer without an extra (free) component installed. Some later Windows Media movies cannot play at
all on non-Windows computers because no player is available. Videos stored in the Windows Media format have the
extension .wmv.

3GP File Extension (.3gp) The 3gp format is both an audio and video format that was designed as a multimedia
format for transmitting audio and video files between 3G cell phones and the internet. It is most commonly used to
capture video from your cell phone and place it online. This format supports both Mac and Windows applications
and can be commonly played in the following:
Apple QuickTime Player
RealNetworks RealPlayer
VideoLAN VLC media player
MPlayer
MIKSOFT Mobile 3GP Converter (Windows)
(vii) List of basic tools for creating and editing multimedia.

Hint: Categories of software tools:

Music sequencing and notation:
- Cakewalk (previously named Pro Audio): stores sequences of notes in MIDI; possible to insert wav files and Windows MCI commands.
- Cubase: sequencing/editing program; includes digital audio editing tools.

Digital audio:
- Cool Edit: powerful, popular digital audio toolkit; emulates a professional audio studio; includes multitrack productions.
- Sound Forge: sophisticated PC-based program for editing WAV files; permits adding complex special effects.

Graphics and image editing:
- Adobe Illustrator: powerful publishing tool for creating and editing vector graphics.
- Adobe Photoshop: the standard tool for graphics, image processing and image manipulation.
- Macromedia Fireworks: for making graphics specifically for the web; includes a bitmap editor, a vector graphics editor and a JavaScript generator.

Video editing:
- Adobe Premiere: simple, intuitive video editing tool for nonlinear video.
- Adobe After Effects: powerful video editing tool that enables the user to add and change existing movies and effects.
- Final Cut Pro: video editing tool offered by Apple for the Mac; allows the capture of audio and video from numerous sources.

Animation:
- Multimedia APIs: Java3D, DirectX, OpenGL.
- Rendering tools: 3D Studio Max, Maya, RenderMan.

Multimedia authoring:
- Macromedia Flash: create interactive movies using the score metaphor, with a timeline arranged in parallel event sequences; used to show movies or games on the web.
