
e) Compute the size of a 640 × 480 image at 240 pixels per inch.

640/240 ≈ 2.67 inches
480/240 = 2 inches
Size ≈ 2.67" × 2"
f) Find the CMY coordinates of a color at (0.2,1,0.5) in the RGB space.

C     1     R     1 - 0.2     0.8
M  =  1  -  G  =  1 - 1.0  =  0.0
Y     1     B     1 - 0.5     0.5
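
This conversion is simply CMY = (1, 1, 1) - RGB. A minimal Python sketch (illustrative; the function name is my own):

def rgb_to_cmy(r, g, b):
    # Subtractive complement: each CMY component is 1 minus the RGB component
    return (1 - r, 1 - g, 1 - b)

print(rgb_to_cmy(0.2, 1.0, 0.5))  # (0.8, 0.0, 0.5)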

Q2 (a)

BRESENHAM’S CIRCLE DRAWING ALGORITHM

Step 1: Input radius r and center coordinates (Xc, Yc) and obtain the first point on the
circumference of the circle centered on the origin (0, 0) as:
(Xo, Yo)= (0, r).
Step 2: Calculate the initial value of the decision parameter as:
Po=3-2r.
Step 3: At each Xk position, starting at k=0, perform the following test:
If Pk < 0, the next point along the circle centered on (0, 0) is (Xk + 1, Yk) and
Pk+1=Pk+4Xk+6
Otherwise, the next point along the circle centered on (0, 0) is (Xk+1, Yk-1) and
Pk+1=Pk+4(Xk-Yk) +10.
Step 4: Determine the symmetry points in the other seven octants using 8-way symmetry
property of circle.
Step 5: Move each calculated pixel position (X, Y) onto the circular path centered on
(Xc, Yc) and calculate the coordinates:
X=X+Xc and Y=Y+Yc.
Step 6: Repeat steps 3 through 5 until X>Y.
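
The steps above translate directly into code. A minimal Python sketch (illustrative; the function and variable names are my own, not from the original answer):

def bresenham_circle(xc, yc, r):
    # Generate pixel positions for a circle of radius r centered on (xc, yc),
    # following steps 1-6 above with p0 = 3 - 2r.
    points = set()
    x, y = 0, r
    p = 3 - 2 * r
    while x <= y:
        # Steps 4 and 5: 8-way symmetry, shifted to the center (xc, yc)
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((xc + px, yc + py))
        if p < 0:                    # step 3, first case
            p += 4 * x + 6
        else:                        # step 3, second case
            p += 4 * (x - y) + 10
            y -= 1
        x += 1
    return points

print(sorted(bresenham_circle(0, 0, 3)))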
2.b) Find the matrix representation of a point. Write the matrix
representation of Translation (Geometric).

Ans) We represent the co-ordinate pair (x, y) of a point by the triple (x, y, 1) in
homogeneous co-ordinates. Then translation in the direction v = txI + tyJ can be expressed by
the matrix:

       1   0   0
Tv =   0   1   0
       tx  ty  1
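
In this row-vector convention a point is translated by computing [x y 1] . Tv. A minimal numpy sketch (illustrative; the function name is my own):

import numpy as np

def translate(point, tx, ty):
    # Homogeneous translation: [x y 1] . Tv = [x+tx  y+ty  1]
    Tv = np.array([[1,  0, 0],
                   [0,  1, 0],
                   [tx, ty, 1]])
    x, y = point
    return (np.array([x, y, 1]) @ Tv)[:2]

print(translate((2, 3), 5, -1))  # [7 2]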

3.(a) Prove that uniform scaling (SX = SY) and a rotation form a commutative
pair of operations, but that, in general, scaling and rotation are not
commutative operations.
Solution : -

The transformation matrix for a rotation of θ about the origin is

       cosθ   sinθ   0
R =   -sinθ   cosθ   0
         0      0    1

The transformation matrix for scaling, with SX and SY the scaling factors in the X and Y
directions respectively, is

      SX    0   0
S =    0   SY   0
       0    0   1

Now,

        cosθ   sinθ   0      SX    0   0
R . S = -sinθ  cosθ   0       0   SY   0
          0      0    1       0    0   1

        SX . cosθ   SY . sinθ   0
      = -SX . sinθ  SY . cosθ   0
            0           0       1

             SX    0   0      cosθ   sinθ   0
and S . R =   0   SY   0     -sinθ   cosθ   0
              0    0   1        0      0    1

        SX . cosθ   SX . sinθ   0
      = -SY . sinθ  SY . cosθ   0
            0           0       1

Hence, in general, scaling and rotation are not commutative operations.


But in the case of uniform scaling (SX = SY), the above two matrices become

        SX . cosθ   SX . sinθ   0
R . S = -SX . sinθ  SX . cosθ   0
            0           0       1

            SX . cosθ   SX . sinθ   0
and S . R = -SX . sinθ  SX . cosθ   0
                0           0       1

As R . S = S . R, it is proved that uniform scaling (SX = SY) and a rotation form a
commutative pair of operations.
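
The result can also be checked numerically. A small numpy sketch (illustrative; it uses the same row-vector matrices as above):

import numpy as np

def rotation(theta):
    # Rotation about the origin, in the convention used above
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, s, 0],
                     [-s, c, 0],
                     [ 0, 0, 1]])

def scaling(sx, sy):
    return np.diag([sx, sy, 1.0])

t = np.pi / 6
print(np.allclose(rotation(t) @ scaling(2, 2), scaling(2, 2) @ rotation(t)))  # True: uniform scaling commutes
print(np.allclose(rotation(t) @ scaling(2, 3), scaling(2, 3) @ rotation(t)))  # False: non-uniform does not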

3.b) Show that the transformation matrix for a reflection about the line y = -x is
equivalent to a reflection relative to the y-axis followed by a counter-clockwise
rotation of 90°.

Ans) The transformation matrix for reflection about the line y = -x is:

        0  -1   0
R’e =  -1   0   0          (1)
        0   0   1

The transformation matrix R for rotating a point p(x, y) through an angle of 90°, in the
counter-clockwise direction, about the origin in the homogeneous co-ordinate system is as
follows:-

        0   1   0
R =    -1   0   0          (2)
        0   0   1

The transformation matrix for reflection relative to the y-axis is as follows:-

       -1   0   0
Re =    0   1   0          (3)
        0   0   1
Now the transformation matrix of a point p(x, y) after reflection relative to the
y-axis followed by a counter-clockwise rotation of 90° can be obtained by
multiplying (3) and (2):

   -1   0   0       0   1   0        0  -1   0
    0   1   0  .   -1   0   0   =   -1   0   0
    0   0   1       0   0   1        0   0   1

which is the same matrix as (1); hence the two transformations are equivalent.
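
The equivalence can also be verified numerically; a short numpy sketch (illustrative), using the matrices (2) and (3) above:

import numpy as np

Re  = np.array([[-1, 0, 0],
                [ 0, 1, 0],
                [ 0, 0, 1]])   # reflection relative to the y-axis, eq. (3)
R90 = np.array([[ 0, 1, 0],
                [-1, 0, 0],
                [ 0, 0, 1]])   # 90 deg counter-clockwise rotation, eq. (2)

print(Re @ R90)                        # equals eq. (1), the reflection about y = -x
print(np.array([3, 5, 1]) @ Re @ R90)  # [-5 -3  1]: the point (3, 5) maps to (-5, -3)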

4. A) What do you mean by view port?

The area on the display device to which the window is mapped is called the view port.
b) Explain the Cohen-Sutherland line clipping algorithm.

Cohen-Sutherland line clipping algorithm:
In this algorithm we divide the line clipping process into two phases:
• Identify those lines which intersect the clipping window and so need to be clipped.
• Perform the clipping.
Every line falls into one of three categories:

• Visible – both endpoints of the line lie within the window.
• Not visible – the line definitely lies outside the window. This will occur if the line
from (x1, y1) to (x2, y2) satisfies any one of the following four inequalities:
x1, x2 > xmax
x1, x2 < xmin
y1, y2 > ymax
y1, y2 < ymin
• Clipping candidate – the line is in neither category 1 nor 2.
In the usual illustration, line AB is in category 1 (visible); lines CD and EF are in
category 2 (not visible); lines GH, IJ and KL are in category 3 (clipping candidates).
The algorithm employs an efficient process for finding the category of a line. It
proceeds in two steps:
• Assign a 4-bit region code to each endpoint of the line. The code is determined
according to which of the following nine regions of the plane the endpoint lies in.
Starting from the leftmost bit, each bit of the code is set to true (1) or false (0)
according to the scheme:
Bit 1 = sign(y - ymax)
Bit 2 = sign(ymin - y)
Bit 3 = sign(x - xmax)
Bit 4 = sign(xmin - x)
We use the convention that sign(a) = 1 if a is positive, 0 otherwise. A point with
code 0000 is inside the window.
• The line is visible if both region codes are 0000, not visible if the bitwise logical
AND of the codes is not 0000, and a candidate for clipping if the bitwise logical
AND of the region codes is 0000.
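
A minimal Python sketch of the region-code test (illustrative; the function names and the window layout are my own):

def region_code(x, y, xmin, xmax, ymin, ymax):
    # 4-bit code, leftmost bit first: above, below, right of, left of the window
    code = 0
    if y > ymax: code |= 0b1000   # bit 1 = sign(y - ymax)
    if y < ymin: code |= 0b0100   # bit 2 = sign(ymin - y)
    if x > xmax: code |= 0b0010   # bit 3 = sign(x - xmax)
    if x < xmin: code |= 0b0001   # bit 4 = sign(xmin - x)
    return code

def classify(p1, p2, window):
    # The three categories: visible, not visible, clipping candidate
    c1 = region_code(*p1, *window)
    c2 = region_code(*p2, *window)
    if c1 == 0 and c2 == 0:
        return "visible"
    if c1 & c2 != 0:
        return "not visible"
    return "clipping candidate"

window = (0, 4, 0, 4)                     # xmin, xmax, ymin, ymax
print(classify((1, 1), (3, 2), window))   # visible
print(classify((5, 5), (6, 8), window))   # not visible
print(classify((1, 1), (6, 2), window))   # clipping candidate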
Q:-5.a) Distinguish object space and image space methods for hidden line removal.

Ans:- Distinguish object space and image space methods

OBJECT SPACE

1. In computer graphics technology, what we have envisioned is called the object
definition, which defines the triangle in an abstract space of our choosing. This
space is continuous and is called object space.
2. Surface visibility is determined using continuous models in the object space
without involving pixel-based operations.
3. An object-space method compares objects and parts of objects to each other
within the scene definition to determine which surfaces, as a whole, we should
label as visible.
4. Most visible-surface algorithms use image-space methods, although object-space
methods can be used effectively to locate visible surfaces in some cases.

IMAGE SPACE

1. Our action to draw maps the imaginary object into a triangle on paper, which
constitutes a continuous display surface in another space called the image space.
2. The pixel grid is used to guide the computational activities that determine
visibility at the pixel level.
3. In an image-space algorithm, visibility is decided point by point at each pixel
position on the projection plane.
4. Line-display algorithms, on the other hand, generally use object-space methods to
identify visible lines in wire-frame displays, but many image-space visible-surface
algorithms can be adapted easily to visible-line detection.

Q:- 5.b) Explain any one of the hidden line removal algorithm.

SCAN-LINE ALGORITHM:-

This algorithm makes use of the scan-line coherence and edge coherence properties. We
require two buffers: a depth buffer and an intensity buffer.

For each scan-line perform the following steps 1 through 3:-


Step 1: For all the pixels on a scan-line, initialize the depth buffer to one. The intensity
buffer is initialized to the background color. Set Y to the smallest Y-min in the edge list.

Step 2: For all the polygon surfaces, find all the pixels on the current scan-line that lie
within the polygon (this can be done using the YX scan-conversion algorithm).
For each such pixel:
a) Find the depth (the current value in the depth buffer) of the polygon at this point.
b) If Z-value < depth, then the Z-value is stored in the depth buffer and the intensity
buffer is set to the shading color of the polygon under consideration.

Step 3: After all the polygon surfaces forming the object have been considered, the value
contained in the intensity buffer represents the solution and is copied into the frame
buffer.

A scan-line algorithm consists essentially of two nested loops, an X-scan loop nested within
a Y-scan loop. The Y-scan loop considers active edges whose Y-min is equal to Y, where
active edges are stored in order of increasing X. The X-scan loop processes, from left to
right, each active edge according to steps 2 and 3 of the algorithm.

DATA STRUCTURE USED

1) The edge list contains all non-horizontal edges. The edges are sorted by the edge’s
smaller Y-coordinate (Y-min). Each edge entry in the edge list also contains:

i) The X-coordinate of the end of the edge with the smaller Y-coordinate.
ii) The Y-coordinate of the edge’s other end(Y-max).
iii) The increment ΔX=1/m, which is used in the next scan-line Y+1, for
each remaining active edge, in order to replace X by(X+ ΔX).
iv) A pointer indicating the polygon to which the edge belongs.

2) The polygon list for each polygon contains:

i) The equation of the plane within which the polygon lies-used for depth
determination, i.e., to find the Z-value at pixel(X,Y).
ii) An IN/OUT flag, initialized to OUT (this flag is set depending on
whether a given scan-line is in or out of the polygon).
iii) Color information for the polygon.
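
As a sketch, the two lists might be represented as follows in Python (illustrative; the field names are my own, following the description above):

from dataclasses import dataclass

@dataclass
class Edge:
    x: float         # i) X at the end of the edge with the smaller Y-coordinate
    y_max: float     # ii) Y-coordinate of the edge's other end
    dx: float        # iii) increment 1/m added to x for each new scan-line
    polygon: int     # iv) index of the owning polygon in the polygon list

@dataclass
class Polygon:
    plane: tuple     # i) coefficients (a, b, c, d) of the plane, for depth at (X, Y)
    in_flag: bool    # ii) IN/OUT flag, initialized to OUT (False)
    color: tuple     # iii) color information for the polygon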
Q:-1.d) What do you mean by resolution of an image?

Resolution: - The maximum number of points that can be displayed without overlap on a
CRT is referred to as resolution. A more precise definition of resolution is the number of
points per centimeter that can be plotted horizontally and vertically, although it is often
simply stated as the total number of points in each direction.

Q:-1.g) What is an image’s aspect ratio?

Image’s Aspect Ratio: - One of the important properties of video monitors is the aspect
ratio. This number gives the ratio of vertical points to horizontal points necessary to
produce equal-length lines in both directions on the screen. (Sometimes aspect ratio is
stated in terms of the ratio of horizontal to vertical points.) An aspect ratio of 3/4 means
that a vertical line plotted with three points has the same length as a horizontal line
plotted with four points.
Q:-1.a) Explain briefly the RGB and CMY color models.

RGB COLOR MODEL: - Based on the tristimulus theory of vision, our eyes perceive color
through the stimulation of three visual pigments in the cones of the retina. These visual
pigments have peak sensitivities at wavelengths of about 630 nm (red), 530 nm (green),
and 450 nm (blue). By comparing intensities in a light source, we perceive the color of the
light. This theory of vision is the basis for displaying color output on a video monitor
using the three color primaries red, green, and blue, referred to as the RGB color model.

We can represent this model with the unit cube defined on the R, G, and B axes. The origin
represents black, and the vertex with coordinates (1, 1, 1) is white. Vertices of the cube on
the axes represent the primary colors, and the remaining vertices represent the
complementary color for each of the primary colors.

As with the XYZ color system, the RGB color scheme is an additive model. Intensities of
the primary colors are added to produce other colors. Each color point within the bounds
of the cube can be represented as the triple (R, G, B), where the values of R, G and B are
assigned in the range from 0 to 1. Thus, a color Cλ is expressed in RGB components as

Cλ = RR + GG + BB

The magenta vertex is obtained by adding red and blue to produce the triple (1, 0, 1), and
white at (1, 1, 1) is the sum of the red, green, and blue vertices. Shades of gray are
represented along the main diagonal of the cube from the origin (black) to the white
vertex. Each point along this diagonal has an equal contribution from each primary color,
so that a gray shade halfway between black and white is represented as (0.5, 0.5, 0.5).

Chromaticity coordinates for an NTSC standard RGB phosphor are listed in the table:

       NTSC standard      CIE Model          Approx. Color Monitor Values
R      (0.670, 0.330)     (0.735, 0.265)     (0.628, 0.346)
G      (0.210, 0.710)     (0.274, 0.717)     (0.268, 0.588)
B      (0.140, 0.080)     (0.167, 0.009)     (0.150, 0.070)

CMY COLOR MODEL: - A color model defined with the primary colors cyan, magenta, and
yellow (CMY) is useful for describing color output to hard-copy devices. Unlike video
monitors, which produce a color pattern by combining light from the screen phosphors,
hard-copy devices such as plotters produce a color picture by coating a paper with color
pigments. We see the colors by reflected light, a subtractive process.

As we have noted, cyan can be formed by adding green and blue light. Therefore, when
white light is reflected from cyan-colored ink, the reflected light must have no red
component; that is, red light is absorbed, or subtracted, by the ink. Similarly, magenta ink
subtracts the green component from incident light, and yellow subtracts the blue
component.

In the CMY model, the point (1, 1, 1) represents black, because all components of the
incident light are subtracted. The origin represents white light. Equal amounts of each of
the primary colors produce grays along the main diagonal of the cube. A combination of
cyan and magenta ink produces blue light, because the red and green components of the
incident light are absorbed. Other color combinations are obtained by a similar
subtractive process.

We can express the conversion from an RGB representation to a CMY representation
with the matrix transformation

C 1 R
M = 1 - G
Y 1 B

where white is represented in the RGB system as the unit column vector. Similarly, we
convert from a CMY color representation to an RGB representation with the matrix
transformation

R 1 C
G = 1 - M
B 1 Y
where black is represented in the CMY system as the
unit column vector.
6.a) Find the equation of the Bézier curve which passes through the points (0, 0) and
(-2, 1) and is controlled through the points (7, 5) and (2, 0).

Let p0=(0,0), p1=(7,5), p2=(2,0), p3=(-2,1).

                   │ -1   3  -3   1 │ │  0  0 │
P(t) = [t³ t² t 1] │  3  -6   3   0 │ │  7  5 │
                   │ -3   3   0   0 │ │  2  0 │
                   │  1   0   0   0 │ │ -2  1 │

                   │  13   16 │
     = [t³ t² t 1] │ -36  -30 │
                   │  21   15 │
                   │   0    0 │

so the x-component is x(t) = 13t³ - 36t² + 21t and the y-component is
y(t) = 16t³ - 30t² + 15t.
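
Sanity check in Python (illustrative): evaluating the derived components at t = 0 and t = 1 recovers the endpoints P0 and P3.

def bezier_point(t):
    # The x- and y-components derived above
    x = 13 * t**3 - 36 * t**2 + 21 * t
    y = 16 * t**3 - 30 * t**2 + 15 * t
    return x, y

print(bezier_point(0.0))  # (0.0, 0.0)  -> passes through (0, 0)
print(bezier_point(1.0))  # (-2.0, 1.0) -> passes through (-2, 1)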

7)
a) Describe the basic MPEG specification for video.
b) What is the difference between HTML and DHTML?

Solution: The Moving Picture Experts Group (MPEG) method is used to compress video.
In principle, a motion picture is a rapid flow of a set of frames, where each frame is an
image. In other words, a frame is a spatial combination of pixels, and a video is a
temporal combination of frames that are sent one after another. Compressing video, then,
means spatially compressing each frame and temporally compressing a set of frames.
Spatial compression: The spatial compression of each frame is done with JPEG.
Each frame is a picture that can be independently compressed.
Temporal compression: In temporal compression, redundant frames are removed. When
we watch television, we receive 50 frames per second; however, most of the frames are
almost the same. For example, when someone is talking, most of the frame is the same as
the previous one except for the segment of the frame around the lips, which changes from
one frame to another.
To temporally compress data, the MPEG method first divides frames
into three categories: I-frames, P-frames, and B-frames.
 I-frames: An intra-coded frame (I-frame) is an independent frame that is not related
to any other frame (not to the frame sent before or to the frame sent after). I-frames
are present at regular intervals. An I-frame must appear periodically to handle some
sudden change in the frame that the previous and following frames cannot show.
Also, when a video is broadcast, a viewer may tune in at any time; if there were only
one I-frame, at the beginning of the broadcast, a viewer who tuned in late would not
receive a complete picture. An I-frame is independent of other frames and cannot be
constructed from other frames.
 P-frames: A predicted frame (P-frame) is related to the preceding I-frame or
P-frame. In other words, each P-frame contains only the changes from the preceding
frame. The changes, however, cannot cover a big segment. For example, for a fast-
moving object, the new changes may not be recorded in a P-frame. P-frames can be
constructed only from previous I- or P-frames. P-frames carry fewer bits after
compression.
 B-frames: A bidirectional frame (B-frame) is related to the preceding and following
I-frame or P-frame. In other words, each B-frame is related to the past and the
future. Note that a B-frame is never related to another B-frame.

[Figure: MPEG frame construction – a typical sequence of frames 1 through 7 is
I B B P B B I.]

b) Difference between HTML and DHTML:

1) HTML contains static content. Once a web server processes a web page and sends it
to the computer requesting it (called the ‘client’), the page cannot get any more data
from the server unless a new request is made. To overcome this drawback we use
Dynamic HTML (DHTML), which combines HTML with a scripting language that
runs on the client’s browser to produce special effects. The scripting language
usually used is JavaScript, as most browsers support it. The scripting language can
be used to alter the HTML data shown (or present but hidden) on the current page
by manipulating the HTML tags. Basically, some script function is called to execute
the requested effect when events like “Mouse over”, “Mouse out”, “Click”, etc.
occur.
2) DHTML is not a single language like HTML. It is a combination of existing
technologies: HTML, CSS and JavaScript.
3) DHTML allows us to make changes to our webpage on the fly (e.g. change the
background color, or the font size and font color).
