Q.No=1

Computer-aided design (CAD)

Computer-aided design (CAD) is the use of computer systems (or workstations) to aid in the
creation, modification, analysis, or optimization of a design. CAD software is used to increase the
productivity of the designer, improve the quality of design, improve communications through
documentation, and to create a database for manufacturing. CAD output is often in the form of
electronic files for print, machining, or other manufacturing operations. The
term CADD (for Computer Aided Design and Drafting) is also used.

Its use in designing electronic systems is known as electronic design automation, or EDA.
In mechanical design it is known as mechanical design automation (MDA) or computer-aided
drafting (CAD), which includes the process of creating a technical drawing with the use
of computer software.

CAD software for mechanical design uses either vector-based graphics to depict the objects of
traditional drafting, or may also produce raster graphics showing the overall appearance of
designed objects. However, it involves more than just shapes. As in the
manual drafting of technical and engineering drawings, the output of CAD must convey
information, such as materials, processes, dimensions, and tolerances, according to
application-specific conventions.

CAD may be used to design curves and figures in two-dimensional (2D) space; or curves,
surfaces, and solids in three-dimensional (3D) space.

CAD is an important industrial art extensively used in many applications, including the automotive, shipbuilding, and aerospace industries, industrial and architectural design, prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising, and technical manuals, often called DCC (digital content creation). The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.

The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD).

Benefits of the CAD Software

CAD software is used on a large scale by a number of engineering professionals and firms for various applications. The most common application of CAD software is designing and drafting. Here are some of the benefits of implementing CAD systems in a company:

1) Increase in the productivity of the designer: CAD software helps the designer visualize the final product that is to be made, along with its subassemblies and constituent parts. The product can also be animated to see how it will actually work, helping the designer to immediately make modifications if required. CAD software helps the designer synthesize, analyze, and document the design. All these factors drastically improve the productivity of the designer, which translates into faster design, lower design cost, and shorter project completion times. Modeling with CAD systems offers a number of advantages over traditional drafting methods that use rulers, squares, and compasses. For example, designs can be altered without erasing and redrawing. CAD systems also offer "zoom" features analogous to a camera lens, whereby a designer can magnify certain elements of a model to facilitate inspection. Computer models are typically three-dimensional and can be rotated on any axis, much as one could rotate an actual three-dimensional model in one's hand, enabling the designer to gain a fuller sense of the object. CAD systems also lend themselves to modeling cutaway drawings, in which the internal shape of a part is revealed, and to illustrating the spatial relationships among a system of parts.

2) Improve the quality of the design: CAD software offers design professionals a large number of tools for carrying out a thorough engineering analysis of the proposed design. The tools also help designers to consider a larger number of design investigations. Since CAD systems offer greater accuracy, errors are reduced drastically in the designed product, leading to a better design. Eventually, a better design allows manufacturing to be carried out faster and reduces the waste that could have occurred because of a faulty design.

3) Better communications: The next important part after designing is making the drawings. With CAD software, better and standardized drawings can be made easily. CAD software helps in better documentation of the design, with fewer drawing errors and greater legibility.

4) Creating documentation of the design: Creating design documentation is one of the most important parts of designing, and this can be done very conveniently with CAD software. The design documentation includes the geometries and dimensions of the product, its subassemblies and components, material specifications for the components, the bill of materials for the components, etc.

5) Creating the database for manufacturing: While creating the data for the design documentation, most of the data needed for manufacturing is also created, such as product and component drawings, the materials required for the components, their dimensions, shapes, etc.

6) Saving of design data and drawings: All the data used for designing can easily be saved and used for future reference, so certain components don't have to be designed again and again. Similarly, the drawings can also be saved, and any number of copies can be printed whenever required. Some component drawings can be standardized and used whenever required in future drawings.

7) Project Management Benefits

CAD's excellent capacity for comprehensive documentation and communication allows for an easier product-management environment. Team communication is simpler and less stressful because designs are easy to share. Engineers working in teams on complex projects can establish a CAD library, allowing for the storing and referencing of past projects.

=====================================================================

Q.No=2

Memory is a major part of a computer and is categorized into several types. Memory is the storage area where computer users save information and programs. Computer memory offers several kinds of storage media; some can store data temporarily and some permanently. Memory holds the instructions and the data passed to the computer through the Central Processing Unit (CPU).

Types of Computer Memory:

Memory is an essential element of a computer; without it, a computer cannot perform even simple tasks. The performance of a computer is mainly based on its memory and CPU. Memory is the internal storage medium of the computer and is majorly categorized into two types, main memory and secondary memory:

1. Primary Memory / Volatile Memory.

2. Secondary Memory / Non Volatile Memory.

Primary Storage

Primary storage, presently known simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Primary storage holds data for short periods of time while the computer is running; computer RAM and cache are both examples of primary storage devices. This storage is the fastest memory in your computer and is used to store data while it is being used; for example, when you open a program, data is moved from secondary storage into primary storage. Main memory is directly or indirectly connected to the central processing unit via a memory bus. This is actually two buses (not on the diagram): an address bus and a data bus. The CPU first sends a number through the address bus, a number called the memory address, that indicates the desired location of the data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or for other tasks.
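As a toy illustration of the address-bus/data-bus split, the sketch below models memory as an array indexed by address; the class and method names are invented for illustration only.

```python
class ToyMemory:
    """Toy model of main memory addressed over an address bus."""
    def __init__(self, size):
        self.cells = [0] * size          # one value per memory address

    def read(self, address):             # address in over the address bus, data back
        return self.cells[address]

    def write(self, address, value):     # address bus plus data bus in
        self.cells[address] = value

ram = ToyMemory(1024)
ram.write(0x10, 42)                      # CPU places address 0x10 and data 42
print(ram.read(0x10))                    # -> 42
```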

As the RAM types used for primary storage are volatile (uninitialized at start up), a computer
containing only such storage would not have a source to read instructions from, in order to start the
computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used
to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to
RAM and start to execute it. A non-volatile technology used for this purpose is called ROM,
for read-only memory (the terminology may be somewhat confusing, as most ROM types are also capable of random access).

Many types of "ROM" are not literally read only, as updates to them are possible; however, writing is slow, and memory must be erased in large portions before it can be re-written. Some embedded
systems run programs directly from ROM (or similar), because such programs are rarely changed.
Standard computers do not store non-rudimentary programs in ROM, and rather, use large
capacities of secondary storage, which is non-volatile as well, and not as costly.

Primary storage is referred to as random access memory (RAM) because memory locations can be selected at random. RAM performs both read and write operations. If a power failure happens while the system is running, the data held in RAM is lost; hence, RAM is volatile memory. RAM is categorized into the following types:

DRAM (Dynamic RAM)

SRAM (Static RAM)

DRDRAM (Direct Rambus DRAM)

Secondary Storage

Secondary storage (also known as external memory or auxiliary storage), or simply storage in popular usage, differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down; it is non-volatile. Per unit, it is typically two orders of magnitude less expensive than primary storage. Modern computer systems typically have two orders of magnitude more secondary storage than primary storage, and data are kept there for a longer time. Some examples of secondary storage technologies are flash memory, floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, and Zip drives.

In modern computers, hard disk drives are usually used as secondary storage. The time taken to
access a given byte of information stored on a hard disk is typically a few thousandths of a second,
or milliseconds. By contrast, the time taken to access a given byte of information stored in
random-access memory is measured in billionths of a second, or nanoseconds. This illustrates the
significant access-time difference which distinguishes solid-state memory from rotating magnetic
storage devices: hard disks are typically about a million times slower than memory.
Rotating optical storage devices, such as CD and DVD drives, have even longer access times.
With disk drives, once the disk read/write head reaches the proper placement and the data of
interest rotates under it, subsequent data on the track are very fast to access. To reduce the seek
time and rotational latency, data are transferred to and from disks in large contiguous blocks.

When data reside on disk, blocking access to hide latency offers an opportunity to design
efficient external memory algorithms. Sequential or block access on disks is orders of magnitude
faster than random access, and many sophisticated paradigms have been developed to design
efficient algorithms based upon sequential and block access. Another way to reduce the I/O
bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary
and secondary memory. Some other examples of secondary storage technologies are flash
memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards,
standalone RAM disks, and Iomega Zip drives.

Read Only Memory (ROM):

ROM is permanent memory that offers several standard forms of data storage, but it works with read-only operations. No data loss happens when a power failure occurs while ROM is in use, because ROM is non-volatile.

ROM memory has several variants, with the following names.

1. PROM: Programmable Read Only Memory (PROM) provides large storage but does not offer an erase feature. PROM chips are written once and read many times; the programs or instructions written into PROM cannot be erased by other programs.

2. EPROM: Erasable Programmable Read Only Memory was designed to overcome the limitations of PROM and ROM. Users can erase the data of an EPROM by exposing the chip to ultraviolet light; the erased chip can then be reprogrammed.

3. EEPROM: Electrically Erasable Programmable Read Only Memory is similar to EPROM, but it uses electrical signals to erase the data.

Cache Memory: Main memory is slow compared with the access speed of the CPU, so performance would otherwise suffer while the CPU waits on memory. This speed mismatch is reduced by maintaining a cache memory. Main memory can store a huge amount of data, but cache memory is normally kept small, since it is fast and comparatively expensive. Frequently used data, including data brought in from external media such as magnetic disks and drives, is kept in the cache to give the CPU quick access to it.
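A minimal sketch of the idea behind a cache: keep recently used values close at hand and fall back to the slower store only on a miss. The function names and the naive eviction policy are illustrative assumptions, not a model of any real cache.

```python
def make_cached_reader(slow_read, capacity=4):
    """Wrap a slow read function with a tiny look-aside cache (toy model)."""
    cache = {}
    def read(address):
        if address in cache:                 # cache hit: no slow access needed
            return cache[address]
        value = slow_read(address)           # cache miss: fetch from the slow store
        if len(cache) >= capacity:
            cache.pop(next(iter(cache)))     # naive eviction of an arbitrary entry
        cache[address] = value
        return value
    return read

read = make_cached_reader(lambda addr: addr * 2)   # stand-in for slow main memory
read(7); read(7)                                   # the second call is served from cache
```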

=====================================================================

Q.No=3

A programmable logic controller (PLC)

A programmable logic controller (PLC) is a digital computer used for the automation of electromechanical processes, such as the control of machinery on factory assembly lines, amusement rides, or lighting fixtures. The first PLC was developed in 1969. PLCs are now widely used and range from small self-contained units with perhaps 20 digital inputs/outputs to modular systems that can be used for large numbers of inputs/outputs, handle digital or analogue inputs/outputs, and also carry out proportional-integral-derivative control modes.

PLCs are used in many industries and machines. Unlike general-purpose computers, the PLC is
designed for multiple inputs and output arrangements, extended temperature ranges, immunity to
electrical noise, and resistance to vibration and impact. Programs to control machine operation are
typically stored in battery-backed or non-volatile memory. A PLC is an example of a real-time system, since output results must be produced in response to input conditions within a bounded time; otherwise unintended operation will result. Figure 1 shows a graphical depiction of typical PLCs.

Typically a PLC system has the basic functional components of processor unit, memory, power
supply unit, input/output interface section, communications interface and the programming
device. Figure shows the basic arrangement.

Components of a PLC

A typical PLC can be divided into five components:

1. Processor unit (CPU)
2. Power supply unit
3. Programming device
4. Memory unit
5. Input/output sections

All PLCs have the same basic components. These components work together to bring information into the PLC from the field, evaluate that information, and send information back out to various field devices. Without any of these major components, the PLC will fail to function properly.
The basic components include a power supply, central processing unit (CPU or processor),
co-processor modules, input and output modules (I/O), and a peripheral device.

1) The processor unit or central processing unit (CPU) is the unit containing the microprocessor; it interprets the input signals and carries out the control actions according to the program stored in its memory, communicating the decisions as action signals to the outputs. The function of the CPU is to store and run the PLC software programs. It also interfaces with the co-processor modules, the I/O modules, and the peripheral device, and runs diagnostics. It is essentially the "brains" of the PLC. The CPU contains a microprocessor, memory, and interface adapters.

2) The power supply unit is needed to convert the mains a.c. voltage to the low d.c. voltage (5 V)
necessary for the processor and the circuits in the input and output interface modules. The function
of the power supply is to provide the DC power to operate the PLC. It is supplied by single-phase
120 or 240 VAC line power that powers the PLC system. The Power Supply is a module located in
the PLC system module rack. The DC power (voltage and current) it provides powers the other
modules in the rack, such as the CPU, Co-processor Modules, and I/O Modules.
The line power provided to the PLC system also powers the I/O Field Devices. The PLC system is
protected against PLC module or field device malfunctions.

3) The programming device is used to enter the required program into the memory of the
processor. The program is developed in the device and then transferred to the memory unit of the
PLC.

4) The memory unit is where the program used for the control actions exercised by the microprocessor is stored, along with data from the inputs awaiting processing and data for the outputs awaiting output.

5) The input and output sections are where the processor receives information from external
devices and communicates information to external devices. The type of input modules used by a
PLC depends on the type of input device. For example, some respond to digital inputs, which are either on or off, while others respond to analog signals. In this case, analog signals represent machine or process conditions as a range of voltage or current values. The PLC input circuitry converts
signals into logic signals that the CPU can use. The CPU evaluates the status of inputs, outputs,
and other variables as it executes a stored program. The CPU then sends signals to update the
status of outputs.

Output modules convert control signals from the CPU into digital or analog values that can be used
to control various output devices. The programming device is used to enter or change the PLC's program or to monitor or change stored values. Once entered, the program and associated variables are stored in the CPU. In addition to these basic elements, a PLC system may also incorporate an operator interface device to simplify monitoring of the machine or process. The inputs might thus be from switches, as illustrated in the figure with the automatic drill, or from other sensors such as photoelectric cells (as in the counter mechanism), temperature sensors, flow sensors, etc. The outputs might be to motor starter coils, solenoid valves, etc.
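A PLC's firmware repeatedly executes a scan cycle: read all inputs, evaluate the stored program, then write all outputs. The sketch below is a minimal illustration of that loop; the I/O functions and the sample interlock logic are invented for illustration, not taken from any real PLC.

```python
import time

def read_inputs():
    # placeholder for sampling the input modules
    return {"start_button": True, "guard_closed": True}

def write_outputs(outputs):
    # placeholder for driving the output modules
    print(outputs)

def user_program(inputs):
    # sample interlock: run the motor only while the guard is closed
    return {"motor": inputs["start_button"] and inputs["guard_closed"]}

for _ in range(3):                      # each pass is one PLC scan cycle
    inputs = read_inputs()              # 1. input scan
    outputs = user_program(inputs)      # 2. program execution
    write_outputs(outputs)              # 3. output update
    time.sleep(0.01)                    # keep the scan time bounded
```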

=====================================================================

Q.No=4

There are various types of output devices used in conjunction with a computer-aided design system. These output devices include:

Pen plotters

Electrostatic plotters

Hard copy units

COMPUTER OUTPUT ON MICROFILM

Pen plotters

The pen plotter is a computer printer for printing vector graphics. In the past, plotters were used
in applications such as computer-aided design, though they have generally been replaced with
wide-format conventional printers. A plotter gives a hard copy of the output; it draws pictures on paper using a pen. Plotters are used to print designs of ships and machines, plans for buildings, and
so on. Digitally controlled plotters evolved from earlier fully analog XY-writers used as output
devices for measurement instruments and analog computers.
Pen plotters print by moving a pen or other instrument across the surface of a piece of paper. This
means that plotters are vector graphics devices, rather than raster graphics as with other printers.
Pen plotters can draw complex line art, including text, but do so slowly because of the mechanical
movement of the pens. They are often incapable of efficiently creating a solid region of color, but
can hatch an area by drawing a number of close, regular lines.
Plotters offered the fastest way to efficiently produce very large drawings or color high-resolution
vector-based artwork when computer memory was very expensive and processor power was very
limited, and other types of printers had limited graphic output capabilities.
Pen plotters have essentially become obsolete, and have been replaced by large-format inkjet
printers and LED toner based printers. Such devices may still understand vector languages
originally designed for plotter use, because in many uses, they offer a more efficient alternative to
raster data.
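Many of these vector languages survive in large-format printers; HP-GL, originally developed for Hewlett-Packard pen plotters, is a typical example. The sketch below generates HP-GL-style commands (IN to initialize, SP to select a pen, PU/PD for pen-up and pen-down moves); the helper function is invented for illustration, and exact syntax varies by device.

```python
def hpgl_line(x1, y1, x2, y2, pen=1):
    """Emit HP-GL commands for one pen stroke (illustrative helper)."""
    return f"SP{pen};PU{x1},{y1};PD{x2},{y2};PU;"

# IN initializes the plotter; the rest draws one diagonal stroke
print("IN;" + hpgl_line(0, 0, 1000, 1000))   # -> IN;SP1;PU0,0;PD1000,1000;PU;
```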
Electrostatic plotters

An electrostatic plotter is a type of plotter that draws images on paper with an electrostatic printing process. Electrostatic plotters are most frequently used for computer-aided engineering (CAE), producing raster images via either a liquid-toner or a dry-toner process.

Liquid-toner models use a positively charged toner that is attracted to paper which is negatively charged by passing by a line of electrodes (tiny wires, or nibs). The spacing of the wires controls the resolution of the plotter; for example, 100 or 400 wires to the inch. Dry-toner models use a process similar to xerography in photocopiers. Unlike a laser printer or photocopier, there is no transfer drum used in most electrostatic plotters; the imaging paper is directly exposed to the charging electrode array.

Electrostatic plotters can print in black and white or in color. Some models handle paper sizes up to
six feet wide. Newer versions are large-format laser printers and focus light onto a charged drum
using lasers or LEDs. The image quality produced by some electrostatic plotters was lower than
that of contemporary pen plotters, but the increased speed and economy made them useful. Unlike
a pen plotter, the plot time of a rasterized electrostatic plotter was independent of the level of detail
of the image. Modern electrostatic color plotters are found in the short run graphics industry,
printing on a variety of paper or plastic film surfaces.

Electrostatic plotters were known in the early days of computer graphics; by 1967, several
manufacturers commercially supplied electrostatic plotters.
Hard copy units

A hard copy (or "hardcopy") is a printed copy of information from a computer. Sometimes referred
to as a printout, a hard copy is so-called because it exists as a physical object. The same
information, viewed on a computer display or sent as an e-mail attachment, is sometimes referred
to as a soft copy.

COMPUTER OUTPUT ON MICROFILM


Microfilm in fiche or roll format is used to record computer output directly from computer tape or cartridge. COM is a high-speed, low-cost process. The standard roll film is 16 mm wide, with a film image that is 1/24 the size of the original document. COM is used for storing output in banking and insurance applications, medical X-rays, etc.

=====================================================================

Q.No=5

Various standards in graphics programming

It is necessary to standardize certain elements at each stage so that a company's investment in software and hardware can be carried over to newer and different systems without much modification. This means that there should be compatibility between the various software elements, as well as between the hardware and software. The following international organizations have been involved in developing graphics standards:

ACM (Association for Computing Machinery)

ANSI (American National Standards Institute)

ISO (International Organization for Standardization)

DIN (German Standards Institute)

As a result of these organizations' efforts, various standard functions have been developed for the various levels of a graphics system. These are:

1. IGES (Initial Graphics Exchange Specification) enables the exchange of model data among CAD systems.

2. DXF (Drawing/Data Exchange Format) is a file format that was meant to provide an exact representation of the data in the standard CAD file format.

3. STEP (Standard for the Exchange of Product model data) can be used to exchange data between CAD, Computer Aided Manufacturing (CAM), Computer Aided Engineering (CAE), product data management/enterprise data modeling (PDES), and other CAx systems.

4. CALS (Computer Aided Acquisition and Logistic Support) is a US Department of Defense initiative with the aim of applying computer technology to logistic support.

5. GKS (Graphics Kernel System) provides a set of drawing features for two-dimensional vector
graphics suitable for charting and similar duties.

6. PHIGS (Programmer's Hierarchical Interactive Graphics System) defines a set of functions and data structures to be used by a programmer to manipulate and display 3-D graphical objects.

7. VDI (Virtual Device Interface) lies between GKS or PHIGS and the device driver code. VDI
is now called CGI (Computer Graphics Interface).

8. VDM (Virtual Device Metafile) can be stored or transmitted from one graphics device to another.
VDM is now called CGM (Computer Graphics Metafile).

9. NAPLPS (North American Presentation- Level Protocol Syntax) describes text and graphics
in the form of sequences of bytes in ASCII code.

Initial Graphics Exchange Specifications (IGES)

Initial Graphics Exchange Specification (IGES) was developed as a neutral data format for the transmission of CAD data between dissimilar CAD/CAM systems. Although the IGES format does not provide a suitable data format for downstream manufacturing applications, it can be considered the major driving force behind the international standardization of product data and the data exchange format, and it is therefore described in detail in this section. In order to transfer information, translation is done from one native format to the neutral file and then to another native format, as shown in the figure. The number of processors needed to transfer data among N different CAD systems using a neutral file is 2 × N (whereas direct pairwise translation would require N × (N − 1) processors).

The IGES standard itself is just a document that specifies what should go into a data file; programmers write software to translate from their system's format to IGES or vice versa. The program that translates from a native CAD format to IGES is called a preprocessor. The program that translates from IGES to another target format is called a postprocessor, as shown in the figure below.

History of IGES

IGES was an initiative of the United States Air Force (USAF) Integrated Computer Aided
Manufacturing (ICAM) project (1976-1984).
ICAM sought to develop procedures (IDEF), processes (Group Technology), and software (CAD/CAM) that would integrate all operations in aerospace manufacturing and thus greatly reduce costs. Earlier, the USAF Manufacturing Technology Program had funded the Automatically
Programmed Tools (APT) language for programming Numerically Controlled (NC) machine
tools. To close the data gap between parts design and manufacturing, one of the ICAM goals was
to develop CAD software that would automatically generate numerical control programs for the
very complex Computer Numerically Controlled (CNC) machine tools used throughout
the aerospace industry. A serious issue was the incompatibility of data produced by the
many CAD systems in use at the time. USAF/ICAM called a meeting at the National Bureau of
Standards (now known as National Institute of Standards and Technology or NIST) in 1978 to
address this issue. Boeing offered to sell its CAD translation software to USAF for one United
States dollar. USAF accepted this offer and contracted NIST to bring together a group of users and
vendors, including Boeing, General Electric, Xerox, Computervision, Applicon and others to
further develop and test this software. Though it was the practice to begin the names of ICAM developments with the word integrated (for example, the IDEFs), believing that there would be rapid development of graphical exchange software, USAF decided that IGES would be the Initial Graphics Exchange Specification, not the Integrated Graphics Exchange Specification.
Since 1988, the DoD has required that all digital product and manufacturing information (PMI)
for weapons systems contracts (the engineering drawings, circuit diagrams, etc.) be delivered
in electronic form such as IGES format. As a consequence, CAx software vendors who want to
market their products to DoD subcontractors and their partners needed to support the import
(reading) and export (writing) of IGES format files.
An ANSI standard since 1980, IGES has been used in the automotive, aerospace,
and shipbuilding industries. It has been used for weapons systems from Trident missile guidance
systems to entire aircraft carriers. These part models may have to be used years after the vendor of
the original design system has gone out of business. IGES files provide a way to access this data
decades from now. Today, plugin viewers for Web browsers allow IGES files created 20 years ago
to be viewed from anywhere in the world.

Structure of IGES file

Like most CAD systems, IGES is based on the concept of entities. Entities can range from simple geometric objects, such as points, lines, planes, and arcs, to more sophisticated entities, such as subfigures and dimensions. Entities in IGES are divided into three categories:

Geometric entities, such as arcs, lines, and points, that define the object.

Annotation entities, such as dimensions and notes, that aid in the documentation and visualization of the object.

Structure entities, which define the associations between the other entities in an IGES file.

An IGES file is a sequential file consisting of a sequence of records. The file format treats the product definition to be exchanged as a file of entities, each entity being represented in a standard format, to and from which the native representation of a specific CAD/CAM system can be mapped. An IGES file is written in ASCII characters as a sequence of 80-character records. It consists of five sections which must appear in the following order: Start section, Global section, Directory Entry (DE) section, Parameter Data (PD) section, and Terminate section, as shown in the figure below.

The role of these sections is summarized in the following subsections.



Start Section

The Start section is a human-readable introduction to the file. It is commonly described as a "prologue" to the IGES file. This section contains information such as the names of the sending (source) and receiving (target) CAD/CAM systems, and a brief description of the product being converted.

Global Section

The Global section includes information that describes the preprocessor, as well as information needed by the postprocessor to interpret the file. Some of the parameters specified in this section are:

1. The characters used as delimiters between individual entries and between records (usually commas and semicolons, respectively),

2. The name of the IGES file itself,

3. The vendor and software version of the sending (source) system,

4. The number of significant digits in the representation of integers and single- and double-precision floating-point numbers on the sending system,

5. The date and time of file generation and the model space scale,

6. The model units,

7. The minimum resolution and maximum coordinate values,

8. The name of the author of the IGES file.

Directory Entry Section (DE)

The DE section is a list of all the entities defined in the IGES file together with certain attributes associated with them. The entry for each entity occupies two 80-character records, which are divided into a total of twenty 8-character fields. The first and eleventh fields (the eleventh being the beginning of the second record of any given entity) contain the entity type number, such as 100 for circular arcs and 110 for lines. The second field contains a pointer to the parameter data entry for the entity in the PD section. The pointer of an entity is simply its sequence number in the DE section. Some of the entity attributes specified in this section are line font, layer number, transformation matrix, line weight, and color.

Parameter Data Section (PD)

The PD section contains the actual data defining each entity listed in the DE section as shown in
Figure 2-6. For example, a straight line entity is defined by the six coordinates of its two endpoints.
While each entity always has two records in the DE section, the number of records required for
each entity in the PD section varies from one entity to another (the minimum is one record) and
depends on the amount of data. Parameter data are placed in free format in columns 1 through 64.
The parameter delimiter (usually a comma) is used to separate parameters and the record delimiter
(usually a semicolon) is used to terminate the list of parameters. Both delimiters are specified in
the Global section of the IGES file. Column 65 is left blank. Columns 66 through 72 on all PD
records contain the entity pointer specified in the first record of the entity in the DE section.

Terminate Section

The Terminate section contains a single record which specifies the number of records in each of
the four preceding sections for checking purposes.
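Because every IGES record is a fixed 80-character line whose section letter sits in column 73 (S, G, D, P, or T) and whose sequence number occupies columns 74-80, splitting a file into its five sections is straightforward. The sketch below is a minimal reader under those assumptions; it does no entity-level parsing.

```python
def split_iges_sections(path):
    """Group the fixed 80-column records of an IGES file by section letter."""
    sections = {"S": [], "G": [], "D": [], "P": [], "T": []}
    with open(path) as f:
        for line in f:
            record = line.rstrip("\n")
            if len(record) < 73:
                continue                 # skip records too short to carry a code
            code = record[72]            # column 73 holds the section letter
            if code in sections:
                sections[code].append(record[:72])
    return sections

# sections["D"] then holds the Directory Entry records, two per entity.
```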

Basic IGES Entities

Line (entity 110)

A line in an IGES file is defined by its end points. The coordinates of the start point and end point are included in the parameter data section of this entity.
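For example, a line from (0, 0, 0) to (10, 5, 0) might appear in the PD section roughly as follows: the six coordinates in the free-format data field, the back-pointer to the line's DE entry in columns 66-72, the section letter P in column 73, and the sequence number at the end. The fragment is illustrative only; exact spacing depends on the writing system.

```
110,0.0,0.0,0.0,10.0,5.0,0.0;                                        1P      1
```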

Circular Arc (entity 100)

To represent a circular arc in modeling space, IGES provides information including a new plane (XT, YT) in which the arc lies and the coordinates of the center point, start point, and terminate point. A new coordinate system (XT, YT, ZT) is defined by transferring the original coordinate system (Xo, Yo, Zo) via a transformation matrix, and all coordinates of the points (center point, start point, and terminate point) are related to this new coordinate system. The order of the end points is counterclockwise about the ZT axis.

Transformation Matrix (entity 124)

This entity gives the relative location information between two coordinate systems: the Xo, Yo, Zo coordinate system and the XT, YT, ZT coordinate system.

Surface of Revolution (entity 120)

A surface is created by rotating the generatrix about the axis of rotation from the start position to
the terminal position. The axis of rotation is a line entity. The generatrix may be a conic arc, line,
circular arc, or composite curve. The angles of rotation are counterclockwise about the positive
direction of rotation axis.

Point (entity 116)

A point is defined by its coordinates (X, Y, Z).

Direction (entity 123)

A direction entity is a non-zero vector in 3D that is defined by its three components with respect to
the coordinate axes. The normal vector of a surface can be determined by this entity.

Plane surface (entity 190)

The plane surface is defined by a point on the plane and the normal direction to the surface.

Vertex List (entity 502)

This entity is used to determine the vertex list, which contains all the vertices of the object.

Edge List (entity 504)

This entity is used to determine the edge list, which contains all the edges of the object.

Loop (entity 508)

This entity is used to determine the loops involved in all the faces of the object.

=====================================================================

Q.No=6

Rendering

Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model (or models in what collectively could be called a scene file) by means of computer programs. Also, the results of displaying such a model can be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.

Though the technical details of rendering methods vary, the general challenges to overcome in
producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics
pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist
a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and
predictable under virtual lighting, the rendering software should solve the rendering equation. The
rendering equation doesn't account for all lighting phenomena, but is a general lighting model for
computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects
in a video editing program to produce final video output.

Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always
connected to the others. In the graphics pipeline, it is the last major step, giving the final
appearance to the models and animation. With the increasing sophistication of computer graphics
since the 1970s, it has become a more distinct subject.

Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and
design visualization, each employing a different balance of features and techniques. As a product,
a wide variety of renderers are available. Some are integrated into larger modeling and animation
packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a
carefully engineered program, based on a selective mixture of disciplines related to: light
physics, visual perception, mathematics, and software development.

In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games, which rely on the use of graphics cards with 3D hardware accelerators.

Features

A rendered image can be understood in terms of a number of visible features. Rendering research
and development has been largely motivated by finding ways to simulate these efficiently. Some
relate directly to particular algorithms and techniques, while others are produced together.

Ambient

The first, most general component of a lighting model is ambient light. Ambient light is diffuse,
non-directional light that is the result of multiple reflections from surrounding surfaces. Put simply
it is light that has no obvious source; it is 'everywhere'. When a scene has a low ambient light level,
it is going to be rendered as a 'dark' scene (although this may be offset by more specific point
sources).

Reflection

All surfaces (unless they are true black bodies) reflect light to some extent; what we call 'shiny'
surfaces reflect most of the light falling on them, and 'dull' surfaces reflect much less. However,
unlike the simple reflection diagrams we see in school physics texts, real surfaces are not perfectly
smooth; light is reflected in slightly different directions by the variations in small-scale roughness
of a surface. This results in a 'cone' of reflection, rather than a single coherent
beam. Diffuse reflection means that the light is reflected equally in all directions; there is an even
spread within the cone.

Shiny surfaces also generate other effects that need to be modelled; reflection from such surfaces is
called specular reflection. Generally, when light is reflected from any surface the colour of the
reflected light is based on the colour of the light source. However, shiny surfaces also
generate highlights (small areas on a curved surface that appear as bright spots, with the colour of
the light source, rather than the object), and these have to be modelled properly. The colour of the
highlight, for example, is a combination of the original light source and the characteristics of the
surface. The distribution of the reflected light will also be irregular, requiring a more complex
model.

A sophisticated lighting model needs to be able to model these effects very well; the most commonly used reflection formulation - and its associated shading model - is probably that described by Rob Cook and Ken Torrance in 1982, which is (not unreasonably) called the Cook-Torrance model. Modelling the highlights adds another level of complexity to the system; the most successful attempt to model highlights was developed in 1975 by Bui Tuong Phong, and is called the Phong highlighting model.

Attenuation

Another significant illumination effect - rather than a form of light - is attenuation. This is the
effect whereby objects at a greater distance from the observer appear to be 'dimmer' (that is, have
lower light intensity levels). In the physical world this is caused by absorption of light by material
(dust, smoke, water vapour) in the atmosphere; the greater the distance between the object and the
observer, the greater the amount of absorption. This effect is particularly important in depth
cueing, giving us the illusion of three dimensions in a two-dimensional image.

Shading models

Local illumination models generally incorporate ambient light, and simplified models of
reflection; global models attempt to model all components of the illumination process. Because
different objects - and different parts of the same object - in a scene will almost always be at
varying distances from the various point light sources, objects exhibit gradations in colour and
lightness that we call shading. The next stage of image synthesis is to develop an
applicable shading model.

Constant Shading

The simplest shading model - indeed, almost a 'non-shading' model - is 'flat', or constant, shading.
In this model all points on the surface of any polygon in the scene have the same colour value (and
light intensity); the result is that the scene has a matte look that is hardly realistic. The main
advantage of this model is the speed with which images can be rendered: once hidden-surface
calculations are done, all that is needed is to identify a pixel on each visible polygon, and assign a
colour value to it; simple flood fill techniques will complete the rendering process. Images are
much more solid-looking than when using wireframes, but with only a marginal increase in render
times. Nonetheless, the technique has the major drawback that the boundaries between polygons
are clearly visible.

Polygon Shading

The next level of 'realism' is to introduce actual shading by calculating variations in colour
value within a polygon. Whilst this produces images that are visually more effective, there is
obviously a computational cost. Intra-polygon shading takes basically two forms:

a component that produces the smooth gradations in colour values that result from parts of a
polygon being at different distances from light sources; the major technique employed
is Gouraud shading

a component that produces 'localised' effects within a polygon, such as highlights; the major
technique employed is Phong shading

Gouraud Shading

The Gouraud shading model was developed by Henri Gouraud in the early 1970s. Basically, the process involves calculating the colour values for each vertex of the polygon; the colour value at each point within the polygon is then derived by linear interpolation from these calculated values. Whilst this approach requires significantly more calculations than constant shading, they are all (relatively) simple, and the result is markedly more realistic images of scenes that primarily involve diffuse reflection. However, the technique still tends to produce visible polygon boundaries, which show up as bright bands, called Mach bands.
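As a minimal sketch of the interpolation step, the helper below blends two already-computed vertex colours linearly; in a full implementation this interpolation runs along polygon edges and then along each scanline. Names and values are illustrative.

```python
def gouraud_interpolate(c0, c1, t):
    """Linearly interpolate between two vertex colours (t in [0, 1])."""
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

# colour halfway along an edge whose endpoints were lit red and blue
print(gouraud_interpolate((255, 0, 0), (0, 0, 255), 0.5))  # -> (127.5, 0.0, 127.5)
```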

Phong Shading

As discussed earlier, curved reflective surfaces tend to generate highlights when illuminated with
point light sources (and thus include some degree of specular reflection). The most successful
shading technique that simulates highlights was defined by Bui-Tuong Phong in 1975. His
technique starts from the same point as the Gouraud process, but interpolates from the surface
normals at the vertices. Also, separate intensity values are calculated for each pixel. The resulting
process is much more complex computationally (and hence more time-consuming), but the
resulting images are yet more 'realistic'.
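Phong shading evaluates an illumination model per pixel using the interpolated normal. The sketch below shows a single-light version of the classic Phong illumination model (ambient term, diffuse term proportional to N·L, specular term proportional to (R·V)^n); the coefficient values are illustrative assumptions.

```python
import numpy as np

def phong_intensity(n, l, v, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    """Phong illumination for unit vectors n (normal), l (to light), v (to viewer)."""
    diffuse = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l        # reflection of the light vector about n
    specular = max(np.dot(r, v), 0.0) ** shininess
    return ka + kd * diffuse + ks * specular
```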

Rendering techniques
Many rendering algorithms have been researched, and software used for rendering may employ a
number of different techniques to obtain a final image.
Tracing every particle of light in a scene is nearly always completely impractical and would take a
stupendous amount of time. Even tracing a portion large enough to produce an image takes an
inordinate amount of time if the sampling is not intelligently restricted.
Therefore, a few loose families of more-efficient light transport modelling techniques have
emerged:

rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects;

ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts;

ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results, at a speed that is often orders of magnitude slower.
The fourth type of light transport technique, radiosity, is not usually implemented as a rendering
technique, but instead calculates the passage of light as it leaves the light source and illuminates
surfaces. These surfaces are usually rendered to the display using one of the other three techniques.
Most advanced software combines two or more of the techniques to obtain good-enough results at
reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image
plane, and object order algorithms, which iterate over objects in the scene. Generally object order
is more efficient, as there are usually fewer objects in a scene than pixels.

Visibility-based methods

Image synthesis techniques that predominantly employ local illumination are built on
a visibility approach. That is, they render scenes by first defining the visible surfaces in the scene,
then applying a flat (or at the most Gouraud) shading model to 'paint' them. Such an approach can
be very rapid, as the core operation of defining visible surfaces is 'required', and the rendering
process (+ Illumination Model + Rendering Technique) is relatively straightforward. Indeed, if a
flat shading model is employed (even if overlain by texture mapping) this form of rendering can be
carried out in the graphic processor component of the display sub-system; this is the basis of
'real-time' animation and rendering systems.

There are a number of algorithms that have been (and are) used in visible surface determination.
These include back face culling, ray casting (from which is derived ray tracing), and the z-buffer.
The latter is the basis of the scan-line rendering process. The central idea in using the z-buffer is to
test the "z-depth" (distance from the observer) of each surface to work out the closest (visible)
surface of each object. If two objects (or surfaces of the same object) have different z-depth values
along the same projected line, the higher value is further away - and thus behind - the nearer (lower
z-depth) surface or object. Applying this approach allows us to render scenes
using scan-line rendering.
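A minimal sketch of the z-buffer test: each candidate fragment is kept only if it is nearer than whatever has already been drawn at that pixel. The array shapes and the fragment format are illustrative.

```python
import numpy as np

W, H = 640, 480
zbuffer = np.full((H, W), np.inf)        # depth of the nearest surface so far
framebuffer = np.zeros((H, W, 3))        # rendered colour per pixel

def plot_fragment(x, y, z, colour):
    """Keep the fragment only if it is closer than the stored depth."""
    if z < zbuffer[y, x]:
        zbuffer[y, x] = z
        framebuffer[y, x] = colour
```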

Ray Tracing

The development of global illumination models made possible the generation - albeit very slowly!
- of images with a much higher level of realism. The first (and most widely-used) of these
techniques, ray tracing, was devised in the early 1980s by Turner Whitted. It is based on ray casting techniques which, as has been suggested, were developed as an alternative to the z-buffer for deriving visible surfaces. The attraction of the ray tracing algorithm is that it incorporates (indeed, they are inherent within the technique) such crucial realism elements as visible surface detection, shadowing, reflection, transparency, mapping, and multiple light sources.


The basic ray tracing algorithm is iterative:

1. we 'shoot' one ray per pixel 'through' the screen to produce primary rays, looking
for ray-object intersections (this also gives us visible surfaces); if no intersection is found,
the pixel will have the 'background' colour
2. at each intersection we follow any secondary rays - generated by reflection and
transmission, and from shadows - to generate a ray tree, with a user-defined maximum
depth (usually about ten levels)
3. when the complete tree has been defined, we determine the intensity and colour of each pixel by 'adding up' the components of the ray tree, working from the bottom level upwards
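The geometric core of step 1 is the ray-object intersection test. As an illustration, the sketch below intersects a ray with a sphere by solving the quadratic |o + t d − c|² = r²; the function name and argument layout are illustrative.

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Smallest t > 0 where the ray origin + t*direction hits the sphere, else None."""
    oc = origin - center
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - np.sqrt(disc)) / (2.0 * a) # nearer of the two roots
    return t if t > 0 else None
```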

Radiosity

Despite its ability to create highly impressive images, ray tracing has been criticised for its slowness and for its emphasis on direct reflection and transmission. Looking for a technique that would more accurately render environments characterised more by diffuse reflection, Don Greenberg and his collaborators at Cornell devised the radiosity method of image synthesis in the mid 1980s.

The basic principles of radiosity are as follows:

the system looks solely at the light/energy balance in a closed environment

a closed environment is one in which all energy emitted or reflected from a given surface is accounted for by reflection and/or absorption by other surfaces

it is possible to define a surface radiosity value: the rate at which energy leaves the surface, as given below
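In its standard discrete form, the radiosity of a surface patch i is

$$B_i = E_i + \rho_i \sum_j F_{ij}\, B_j$$

where B_i is the rate at which energy leaves patch i per unit area, E_i is the energy the patch emits itself, ρ_i is its reflectivity, and F_ij is the form factor, the fraction of energy leaving patch i that arrives at patch j.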

Scanline rendering and rasterisation

A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives.
If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task,
then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one
loops through each of the primitives, determines which pixels in the image it affects, and modifies
those pixels accordingly. This is called rasterization, and is the rendering method used by all
current graphics cards.
Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may
be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must
pass through them. Second, rasterization can improve cache coherency and reduce redundant work
by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous
in the image. For these reasons, rasterization is usually the approach of choice
when interactive rendering is required; however, the pixel-by-pixel approach can often produce
higher-quality images and is more versatile because it does not depend on as many assumptions
about the image as rasterization.
The older form of rasterization is characterized by rendering an entire face (primitive) as a single
color. Alternatively, rasterization can be done in a more complicated manner by first rendering the
vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This
version of rasterization has overtaken the old method as it allows the graphics to flow without
complicated textures (a rasterized image when used face by face tends to have a very block-like
effect if not covered in complex textures; the faces are not smooth because there is no gradual
color change from one primitive to the next). This newer method of rasterization utilizes the
graphics card's more taxing shading functions and still achieves better performance because the
simpler textures stored in memory use less space. Sometimes designers will use one rasterization
method on some faces and the other method on others based on the angle at which that face meets
other joined faces, thus increasing speed and not hurting the overall effect.
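A minimal sketch of the vertex-colour variant described above: each pixel inside a triangle receives a barycentric blend of the three vertex colours (essentially Gouraud interpolation performed during rasterization). The edge-function formulation, integer vertex coordinates, and counterclockwise winding are illustrative assumptions.

```python
def edge(a, b, p):
    """Twice the signed area of triangle (a, b, p); positive if p is left of a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def raster_triangle(v0, v1, v2, c0, c1, c2, put_pixel):
    """Fill a counterclockwise triangle, blending the three vertex colours."""
    area = edge(v0, v1, v2)
    xmin = min(v0[0], v1[0], v2[0]); xmax = max(v0[0], v1[0], v2[0])
    ymin = min(v0[1], v1[1], v2[1]); ymax = max(v0[1], v1[1], v2[1])
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            w0 = edge(v1, v2, (x, y))        # barycentric weight of v0
            w1 = edge(v2, v0, (x, y))        # barycentric weight of v1
            w2 = edge(v0, v1, (x, y))        # barycentric weight of v2
            if w0 >= 0 and w1 >= 0 and w2 >= 0:          # pixel inside triangle
                colour = tuple((w0 * a + w1 * b + w2 * c) / area
                               for a, b, c in zip(c0, c1, c2))
                put_pixel(x, y, colour)
```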
====================================================================

Q.No=7

Three-Dimensional Geometric Transformations

Methods for geometric transformations and object modeling in three dimensions are extended
from two-dimensional methods by including considerations for the z coordinate.

We now translate an object by specifying a three-dimensional translation vector, which determines how much the object is to be moved in each of the three coordinate directions. Similarly, we scale an object with three coordinate scaling factors.

TRANSLATION

In a three-dimensional homogeneous coordinate representation, a point is translated from position P = (x, y, z) to position P' = (x', y', z') with the following matrix operation:
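In standard homogeneous form, with translation distances tx, ty, tz, this operation is:

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

so that x' = x + tx, y' = y + ty, z' = z + tz.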

An object is translated in three dimensions by transforming each of the defining points of the
object.

ROTATION

To generate a rotation transformation for an object, we must designate an axis of rotation (about
which the object is to be rotated) and the amount of angular rotation.

Unlike two-dimensional applications, where all transformations are carried out in the xy plane, a
three-dimensional rotation can be specified around any line in space.

The easiest rotation axes to handle are those that are parallel to the coordinate axes. Also, we can
use combinations of coordinate axis rotations (along with appropriate translations) to specify any
general rotation.

By convention, positive rotation angles produce counterclockwise rotations about a coordinate axis, if we are looking along the positive half of the axis toward the coordinate origin.

Coordinate-Axes Rotations

The two-dimensional z-axis rotation equations are easily extended to three dimensions:
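In standard form, a rotation by angle θ about the z axis is:

$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta, \qquad z' = z$$

or, as a homogeneous matrix,

$$R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The x-axis and y-axis rotation equations follow by a cyclic permutation of x, y, and z.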

General Three-Dimensional Rotations

A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a composite transformation involving combinations of translations and the coordinate-axes rotations.

Any coordinate position P on the object in this figure is transformed with the sequence shown.

Given the specifications for the rotation axis and the rotation angle, we can accomplish the required rotation in five steps:

1. Translate the object so that the rotation axis passes through the coordinate origin.

2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.

3. Perform the specified rotation about that coordinate axis.

4. Apply inverse rotations to bring the rotation axis back to its original orientation.

5. Apply the inverse translation to bring the rotation axis back to its original position.

A rotation axis can be defined with two coordinate positions, as in the figure, or with one coordinate point and the direction angles between the rotation axis and two of the coordinate axes. We will assume that the rotation axis is defined by two points and that the direction of rotation is to be counterclockwise when looking along the axis from P1 to P2.
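These five steps can be collapsed into a single rotation matrix. The sketch below uses Rodrigues' rotation formula, a standard alternative to explicitly composing the translation and axis-alignment matrices; the function name and the sign convention (counterclockwise about the unit axis u = (p2 − p1)/|p2 − p1|) are illustrative assumptions.

```python
import numpy as np

def rotate_about_axis(points, p1, p2, theta):
    """Rotate points by angle theta about the axis through p1 and p2."""
    u = (p2 - p1) / np.linalg.norm(p2 - p1)      # unit vector along the axis
    K = np.array([[0, -u[2], u[1]],
                  [u[2], 0, -u[0]],
                  [-u[1], u[0], 0]])             # cross-product (skew) matrix
    # Rodrigues' formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    # step 1: translate axis to origin; steps 2-4: rotate; step 5: translate back
    return (np.asarray(points) - p1) @ R.T + p1
```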

SCALING

The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate origin can be written as:
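In standard homogeneous form:

$$S(s_x, s_y, s_z) = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$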

where the scaling parameters sx, sy, and sz are assigned any positive values. Explicit expressions for the coordinate transformations for scaling relative to the origin are:
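$$x' = s_x \cdot x, \qquad y' = s_y \cdot y, \qquad z' = s_z \cdot z$$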

Scaling an object with this transformation changes the size of the object and repositions the
object relative to the coordinate origin. Also, if the transformation parameters are not all equal,
relative dimensions in the object are changed.

Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following
transformation sequence:

1. Translate the fixed point to the origin.

2. Scale the object relative to the coordinate origin.

3. Translate the fixed point back to its original position.

In matrix form, this composite is T(xf, yf, zf) · S(sx, sy, sz) · T(−xf, −yf, −zf).
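
A minimal Python sketch of fixed-point scaling, assuming the translation_matrix helper defined
earlier (scaling_matrix and scale_about_point are our own names):

    import numpy as np

    def scaling_matrix(sx, sy, sz):
        # 4x4 homogeneous scaling matrix relative to the origin.
        return np.diag([sx, sy, sz, 1.0])

    def scale_about_point(fixed, sx, sy, sz):
        # Translate the fixed point to the origin, scale, translate back.
        xf, yf, zf = fixed
        return (translation_matrix(xf, yf, zf)
                @ scaling_matrix(sx, sy, sz)
                @ translation_matrix(-xf, -yf, -zf))

    # Doubling an object about the fixed point (1, 1, 1):
    M = scale_about_point((1, 1, 1), 2, 2, 2)
    print(M @ np.array([2.0, 2.0, 2.0, 1.0]))   # -> [3. 3. 3. 1.]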

====================================================================

Q.No=8

Parametric Representation

A parametric representation of a function expresses the functional relationship between several
variables by means of auxiliary variables, called parameters.

In the case of two variables x and y, the expression F(x, y) = 0 can be geometrically interpreted
as the equation of a plane curve. Any variable t that determines the position of a point (x, y) on
the curve can serve as a parameter: for example, the arc length measured positively or
negatively from some point on the curve taken as the origin, or the time for a specified motion
of a point that describes the curve. The variables x and y are expressed as functions of this
parameter:

    x = φ(t),   y = ψ(t)    (*)

These functions yield a parametric representation of the functional relationship between x and y,
and equations (*) are said to be parametric equations of the corresponding curve. Thus, the
equation x² + y² = 1 has the parametric representation x = cos t, y = sin t, where 0 ≤ t < 2π;
these equations are called the parametric equations of the circle. The equation x² − y² = 1 can be
represented parametrically by x = (1 + t²)/2t, y = (1 − t²)/2t, where t ≠ 0, or by x = cosec t,
y = cot t, where −π < t < π and t ≠ 0; these equations are parametric equations of the hyperbola.
If the parameter t can be chosen so that the functions (*) are rational, the curve is said to be
unicursal; the hyperbola, for example, is a unicursal curve.
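
As an informal check (ours, not the source's), the parametric circle can be sampled in Python:

    import numpy as np

    # Sample x = cos t, y = sin t for 0 <= t < 2*pi.
    t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    x, y = np.cos(t), np.sin(t)
    assert np.allclose(x**2 + y**2, 1.0)   # every sample lies on x^2 + y^2 = 1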

Parametric representations of space curves, that is, representations by equations of the form
x = φ(t), y = ψ(t), z = χ(t), are of particular importance. A line in space has the parametric
representation x = a + mt, y = b + nt, z = c + pt. The parametric equations for a circular helix
are x = a cos t, y = a sin t, z = ct.

In the case of three variables x, y, and z whose relationship is expressed by F(x, y, z) = 0 (one of
the variables, for example z, may be considered an implicit function of the other two), the
geometric figure is a surface. To determine the position of a point on the surface, we require two
parameters u and v, for example, longitude and latitude on the surface of the globe. Thus, the
parametric representation has the form x = φ(u, v), y = ψ(u, v), z = χ(u, v). For example, the
surface x² + y² = (z² + 1)² has the parametric equations x = (u² + 1) cos v, y = (u² + 1) sin v,
and z = u.
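
A short sketch (our own) evaluating this surface parametrization on a (u, v) grid and verifying
the implicit equation:

    import numpy as np

    # x = (u^2 + 1) cos v, y = (u^2 + 1) sin v, z = u.
    u, v = np.meshgrid(np.linspace(-1.0, 1.0, 50),
                       np.linspace(0.0, 2.0 * np.pi, 50))
    x = (u**2 + 1) * np.cos(v)
    y = (u**2 + 1) * np.sin(v)
    z = u
    assert np.allclose(x**2 + y**2, (z**2 + 1)**2)   # the surface equation holds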

The chief advantages of parametric representations are: (1) such representations permit us to
study implicit functions in cases where it is difficult to write the functions in explicit form
without using parameters, and (2) through parametric representations, multiple-valued functions
can be expressed as single-valued functions. Parametric representations have been particularly
intensively studied for analytic functions. The parametric representation of analytic functions by
single-valued analytic functions is the subject of the theory of uniformization.

====================================================================

Q.No=9

Composite Surfaces

A composite surface is a collection of connected surfaces. The components can be any surface
type, including nesting composite surfaces. While it is often expected that composite surfaces are
contiguous, no such topological restrictions are enforced by FME in storing composite surfaces.

The orientation of the composite surface is determined by checking the orientation of the first
surface. It is intended that all members of the composite are oriented in a way consistent with their
neighbors. That is, it is not expected that connected adjacent surfaces have opposing orientations,
like in a checkerboard. However, no such adjacent orientation restrictions are enforced by FME in
storing composite surfaces.

If required, transformers may be used to alter or repair composite surfaces.

Composite surfaces may possess optional front or back appearances, and may be single or
double sided. The angle argument prevents curves from being removed from the model
or composited over. Composites will not be generated where the angle between surface normals
adjacent to the curve is greater than the specified angle.

When a composite surface is created, the default behavior is also to composite the curves on the
boundary of the new composite surface.

Curves are automatically composited if the angle between tangents at the common vertex is less
than 15 degrees. The nocurves option can be used to prevent any composite curves from being
created.

The keep keyword can be used to change the default choice of which curves to composite. The
arguments following the keep keyword behave the same as for explicit composite curve creation.
The nocurves and keep arguments are mutually exclusive.

Bezier Surface

Bézier surfaces are a species of mathematical spline used in computer graphics, computer-aided
design, and finite element modeling. As with the Bézier curve, a Bézier surface is defined by a set
of control points. Similar to interpolation in many respects, a key difference is that the surface does
not, in general, pass through the central control points; rather, it is "stretched" toward them as
though each were an attractive force. They are visually intuitive and, for many applications,
mathematically convenient. Bézier surfaces were first described in 1962 by the French engineer
Pierre Bézier, who used them to design automobile bodies. Bézier surfaces can be of any degree,
but bicubic Bézier surfaces generally provide enough degrees of freedom for most applications.

Bézier Equation

A given Bézier surface of order (n, m) is defined by a set of (n + 1)(m + 1) control points k_i,j. It
maps the unit square into a smooth, continuous surface embedded within a space of the same
dimensionality as { k_i,j }. For example, if k are all points in a four-dimensional space, then the
surface will be within a four-dimensional space. A two-dimensional Bézier surface can be defined
as a parametric surface where the position of a point p as a function of the parametric coordinates
u, v is given by

    p(u, v) = Σ(i = 0..n) Σ(j = 0..m) B_i^n(u) B_j^m(v) k_i,j

where B_i^n(u) = C(n, i) u^i (1 − u)^(n − i) is a Bernstein polynomial and C(n, i) is the binomial
coefficient.
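
For illustration, a direct (not optimized) Python evaluation of this sum; the function names are
our own, and math.comb requires Python 3.8 or later:

    import numpy as np
    from math import comb

    def bernstein(n, i, t):
        # Bernstein polynomial B_i^n(t) = C(n, i) t^i (1 - t)^(n - i).
        return comb(n, i) * t**i * (1 - t)**(n - i)

    def bezier_surface_point(k, u, v):
        # k is an (n+1) x (m+1) x dim array of control points.
        n, m = k.shape[0] - 1, k.shape[1] - 1
        p = np.zeros(k.shape[2])
        for i in range(n + 1):
            for j in range(m + 1):
                p += bernstein(n, i, u) * bernstein(m, j, v) * k[i, j]
        return p

    # A bilinear (n = m = 1) patch spanning the unit square, one corner lifted:
    k = np.array([[[0, 0, 0], [0, 1, 0]],
                  [[1, 0, 0], [1, 1, 1]]], dtype=float)
    print(bezier_surface_point(k, 0.5, 0.5))   # -> [0.5  0.5  0.25]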

A Bézier surface has the following properties:

1. A Bézier surface will transform in the same way as its control points under all linear
transformations and translations.

2. All u = constant and v = constant lines in the (u, v) space, and, in particular, all four edges of
the deformed (u, v) unit square, are Bézier curves.

3. A Bézier surface will lie completely within the convex hull of its control points, and
therefore also completely within the bounding box of its control points in any given Cartesian
coordinate system.

4. The points in the patch corresponding to the corners of the deformed unit square coincide
with four of the control points. However, a Bézier surface does not generally pass through its
other control points.

Generally, the most common use of Bézier surfaces is as nets of bicubic patches (where m = n = 3).
The geometry of a single bicubic patch is thus completely defined by a set of 16 control points.
These are typically linked up to form a B-spline surface in a similar way to the way Bézier curves
are linked up to form a B-spline curve. Simpler Bézier surfaces are formed from biquadratic
patches (m = n = 2), or Bézier triangles.

Bézier Surfaces in Computer Graphics

Bézier patch meshes are superior to meshes of triangles as a representation of smooth surfaces,
since they are much more compact, easier to manipulate, and have much better continuity
properties. In addition, other common parametric surfaces such as spheres and cylinders can be
well approximated by relatively small numbers of cubic Bézier patches. However, Bézier patch
meshes are difficult to render directly. One problem with Bézier patches is that calculating their
intersections with lines is difficult, making them awkward for pure ray tracing or other direct
geometric techniques which do not use subdivision or successive approximation techniques. They
are also difficult to combine directly with perspective projection algorithms. For this reason,
Bézier patch meshes are in general eventually decomposed into meshes of flat triangles by 3D
rendering pipelines. In high-quality rendering, the subdivision is adjusted to be so fine that the
individual triangle boundaries cannot be seen. To avoid a "blobby" look, fine detail is usually
applied to Bézier surfaces at this stage using texture maps, bump maps and other pixel shader
techniques.

A Bézier patch of degree (m, n) may be constructed out of two Bézier triangles of degree m + n,
or out of a single Bézier triangle of degree m + n, with the input domain as a square instead of as
a triangle. A Bézier triangle of degree m may also be constructed out of a Bézier surface of degree
(m, m), with the control points chosen so that one edge is squashed to a point, or with the input
domain as a triangle instead of as a square.

====================================================================

Q.No=10

Solid Modeling

Solid modeling is the most advanced method of geometric modeling in three dimensions. Solid
modeling is the representation of the solid parts of an object on the computer. The typical
geometric model is made up of wire frames that show the object in the form of wires. This
wire-frame structure can be two-dimensional, two-and-a-half-dimensional, or three-dimensional.
Providing a surface representation to the wire-frame three-dimensional views of geometric models
makes the object appear solid on the computer screen, and this is what is called solid modeling.

A solid model is a digital representation of the geometry of an existing or envisioned physical
object. Solid models are used in many industries, from entertainment to health care. They play a
major role in the discrete-part manufacturing industries, where precise models of parts and
assemblies are created using solid modeling software or more general computer-aided design
(CAD) systems. The design process is usually incremental. Designers may specify points, curves,
and surfaces, and stitch them together to define electronic representations of the boundary of the
object. Alternatively, they may select models of simple shapes, such as blocks or cylinders, specify
their dimensions, position, and orientation, and combine them using union, intersection, or
difference operators (illustrated in the short sketch at the end of this passage). The resulting
representation is an unambiguous, complete, and detailed digital approximation of the geometry
of an object or of an assembly of objects (such as a car engine or an entire airplane).

Interactive three-dimensional (3D) graphics supports the design activities by providing designers
with: (1) easy-to-understand images of their design, (2) efficient facilities for graphically selecting
or editing features of the part being designed, and (3) immediate feedback, which helps them
perform and check each design step. Early applications of solid modeling focused on producing
correct engineering drawings automatically and on cutter-path generation for numerically
controlled machining [Requicha82, Requicha96]. Today, engineering drawings still provide a link
to traditional, non-electronic manufacturing or archival processes, but they are rapidly being
replaced with electronic file transfers. Solid modeling has evolved to provide the set of
fundamental tools for representing a large class of products and processes, and for performing on
them the geometric calculations required by applications. The ultimate goal is to relieve engineers
of all the low-level or non-creative tasks in designing products; in assessing their
manufacturability, assemblability, and other life-cycle characteristics; and in generating all the
information necessary to produce them. Engineers should be able to focus on conceptual,
high-level design decisions, while domain-expert application programs provide advice on the
consequences of design decisions and generate plans for the manufacture and other activities
associated with the product's life cycle. The total automation of analysis and manufacturing
activities (see for example [Spyridi93]), although in principle made possible by informationally
complete solid models, remains a research challenge in spite of much progress on several fronts.

Solid modeling has rapidly evolved into a large body of knowledge, created by an explosion of
research and publications [Requicha88, Requicha92]. The solid modeling technology is
implemented in dozens of commercial solid modeling software systems, which serve a
multi-billion-dollar market and have significantly increased design productivity, improved product
quality, and reduced manufacturing and maintenance costs. Today, solid modeling is an
interdisciplinary field that involves a growing number of areas. Its objectives evolved from a deep
understanding of the practices and requirements of the targeted application domains. Its
formulation and rigor are based on mathematical foundations derived from general and algebraic
topology, and from Euclidean, differential, and algebraic geometry. The computational aspects of
solid modeling deal with efficient data structures and algorithms, and benefit from recent
developments in the field of computational geometry. Efficient processing is essential, because the
complexity of industrial models is growing faster than the performance of commercial
workstations. Techniques for modeling and analyzing surfaces and for computing their
intersections are important in solid modeling. This area of research, sometimes called
computer-aided geometric design, has strong ties with numerical analysis and differential
geometry. Graphic user-interface (GUI) techniques also play a crucial role in solid modeling, since
they determine the overall usability of the modeler and impact the user's productivity.

There have always been strong symbiotic links and overlaps between the solid modeling
community and the computer graphics community. Solid modeling interfaces are based on
efficient three-dimensional (3D) graphics techniques, whereas research in 3D graphics focuses on
fast or photo-realistic rendering of complex scenes, often composed of solid models, and on
realistic or artistic animations of non-rigid objects. A similar symbiotic relation with computer
vision is regaining popularity, as many research efforts in vision are model-based and attempt to
extract 3D models from images or video sequences of existing parts or scenes. These efforts are
particularly important for solid modeling, because the cost of manually designing solid models of
existing objects or scenes far exceeds the other costs (hardware, software, maintenance, and
training) associated with solid modeling. Finally, the growing complexity of solid models and the
growing need for collaboration, reusability of design, and interoperability of software require
expertise in distributed databases, constraint management systems, optimization techniques,
object linking standards, and internet protocols.
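
As a hedged illustration of the union, intersection, and difference operators mentioned above,
here is a small Python sketch using signed distance functions rather than any particular CAD
system's API (all names are our own):

    import numpy as np

    # Signed distance functions: negative inside the solid, positive outside.
    def sphere(center, radius):
        c = np.asarray(center, float)
        return lambda p: np.linalg.norm(np.asarray(p, float) - c) - radius

    def union(a, b):        return lambda p: min(a(p), b(p))
    def intersection(a, b): return lambda p: max(a(p), b(p))
    def difference(a, b):   return lambda p: max(a(p), -b(p))

    s1 = sphere((0, 0, 0), 1.0)
    s2 = sphere((1, 0, 0), 1.0)
    solid = difference(s1, s2)          # s1 with s2 carved away
    print(solid((-0.5, 0, 0)) < 0)      # True: inside the remaining material
    print(solid((0.5, 0, 0)) < 0)       # False: removed by the difference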

Boundary representations

A physical object, modeled mathematically by an r-set, is unambiguously defined by its boundary.
Therefore, a solid may be represented by a set of non-overlapping faces whose union
approximates the solid's boundary. Such a scheme is called a boundary representation, or BRep
for short. Unfortunately, an arbitrary set of faces does not necessarily correspond to the boundary
of a solid. In fact, invalid BReps created by designers, or by incorrect algorithms that implemented
higher-level modeling operations, plagued the early versions of many solid modelers.
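
To make the idea concrete, here is a minimal, hypothetical Python sketch of a boundary
representation as vertices and faces (production modelers use richer structures, such as
winged-edge or half-edge):

    from dataclasses import dataclass, field

    @dataclass
    class Vertex:
        x: float
        y: float
        z: float

    @dataclass
    class Face:
        loop: list          # ordered vertex indices bounding the face

    @dataclass
    class BRepSolid:
        vertices: list = field(default_factory=list)
        faces: list = field(default_factory=list)

    # A tetrahedron: 4 vertices and 4 triangular faces.
    solid = BRepSolid(
        vertices=[Vertex(0, 0, 0), Vertex(1, 0, 0),
                  Vertex(0, 1, 0), Vertex(0, 0, 1)],
        faces=[Face([0, 2, 1]), Face([0, 1, 3]),
               Face([1, 2, 3]), Face([0, 3, 2])])

    # Sanity check via Euler's formula for a simple closed polyhedron:
    edges = {frozenset((f.loop[i], f.loop[(i + 1) % len(f.loop)]))
             for f in solid.faces for i in range(len(f.loop))}
    assert len(solid.vertices) - len(edges) + len(solid.faces) == 2

A valid BRep must satisfy consistency conditions of this kind; an arbitrary face set, as noted
above, need not.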

Comparison of Various Solid Modeling Schemes

There are similarities as well as dissimilarities among these schemes. These model ling schemes
are compared in Table below on the basis of such attributes as accuracy, domain, uniqueness,
validity, closure and compactness.
Page 38 of 38

====================================================================

References:

1. M. M. M. Sarcar, K. Mallikarjuna Rao, and K. Lalit Narayan, Computer Aided Design and Manufacturing.

2. Dean L. Taylor, Computer-aided Design.