
 

1. ELECTRICAL ENGINEERING
1.1 SOME MAJOR ELECTRICAL LAWS
1.2 TRANSFORMERS
1.3 DC MOTOR
1.4 INDUCTION MOTOR
2. ECE QUESTIONS
2.1 COMPUTER NETWORKING
2.2 ANALOG ELECTRONICS
2.3 DIGITAL ELECTRONICS
3. COMPUTER ENGINEERING
3.1 OPERATING SYSTEM
3.2 COMPUTER ORGANIZATION
3.3 DATA STRUCTURE
3.4 SOFTWARE ENGINEERING
3.5 AUTOMATA
3.6 DATABASE MANAGEMENT SYSTEM
3.7 NETWORKING
3.8 CLOUD COMPUTING
3.9 MOBILE COMPUTING
3.10 ARTIFICIAL INTELLIGENCE
4. CHEMICAL ENGINEERING
1. Electrical Engineering
1.1 Some Major Electrical Laws

Faraday's law of induction

Faraday's law of induction is a basic law of electromagnetism predicting how a magnetic field will
interact with an electric circuit to produce an electromotive force (EMF)—a phenomenon
called electromagnetic induction. It is the fundamental operating principle of transformers,
inductors, and many types of electrical motors, generators and solenoids.

Thévenin's Theorem

As originally stated in terms of DC resistive circuits only, Thévenin's theorem holds that:

 Any linear electrical network with voltage and current sources and only resistances can be
replaced at terminals A-B by an equivalent voltage source Vth in series connection with an
equivalent resistance Rth.
 This equivalent voltage Vth is the voltage obtained at terminals A-B of the network with
terminals A-B open circuited.
 This equivalent resistance Rth is the resistance obtained at terminals A-B of the network
with all its independent current sources open circuited and all its independent voltage
sources short circuited.

In circuit theory terms, the theorem allows any one-port network to be reduced to a single voltage
source and a single impedance.
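The two statements above can be checked on an assumed example network: a source Vs feeding a series resistor R1 and a shunt resistor R2, with terminals A-B taken across R2. The function name and all values below are illustrative only, a minimal sketch rather than a general solver:

```python
def thevenin_divider(Vs, R1, R2):
    """Thevenin equivalent at terminals A-B taken across R2, for an assumed
    example network: source Vs in series with R1, feeding shunt resistor R2."""
    Vth = Vs * R2 / (R1 + R2)      # open-circuit voltage at A-B
    Rth = R1 * R2 / (R1 + R2)      # Vs short-circuited: R1 in parallel with R2
    return Vth, Rth

Vth, Rth = thevenin_divider(Vs=12.0, R1=4.0, R2=8.0)
print(Vth, round(Rth, 4))   # → 8.0 2.6667
```

Shorting the independent voltage source (the third bullet above) is what turns the series R1 into a parallel combination with R2 as seen from A-B.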

Kirchhoff's Current Law

This law is also called Kirchhoff's first law, Kirchhoff's point rule, or Kirchhoff's junction rule (or
nodal rule). The principle of conservation of electric charge implies that:
 At any node (junction) in an electrical circuit, the sum of currents flowing into that node
is equal to the sum of currents flowing out of that node

or equivalently

 The algebraic sum of currents in a network of conductors meeting at a point is zero.
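A minimal numeric check of the junction rule, with assumed example currents (positive taken as flowing into the node):

```python
# Sign convention: currents into the node are positive, currents out negative.
# Assumed example: 2 A and 3 A flow into the node, 5 A flows out.
currents_in = [2.0, 3.0]
currents_out = [5.0]

net = sum(currents_in) - sum(currents_out)
assert abs(net) < 1e-12     # algebraic sum at the node is zero
print("KCL satisfied, net current =", net)
```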
1.2 Transformers
A transformer is an electromagnetic energy conversion device which transfers electrical energy
from one electrical circuit to another through the medium of a magnetic field and without a change
of frequency. The primary and secondary windings of a transformer are not electrically connected.

Some of the important tasks performed by the transformer are:


• Increasing and decreasing current and voltage levels from one circuit to another for high and
low current circuits.
• For matching of impedance of load and source for maximum power transfer in electronic and
control circuits.
• For isolating one circuit from another, or for isolating DC and allowing only AC to flow from
one circuit to another.

In a transformer the electrical energy transfer takes place without the use of moving parts; it
therefore has the highest possible efficiency of all electrical machines and requires the least
maintenance. Transformer efficiencies are typically in the range of 97-99%.

Basic Principles

As mentioned earlier, the transformer is a static device working on the principle of Faraday's law of
induction. Faraday's law states that a voltage appears across the terminals of an electric coil when
the flux linkage associated with it changes. This emf is proportional to the rate of change of flux
linkage. Mathematically,

e = dψ/dt (1)

where e is the induced emf in volts and ψ is the flux linkage in weber-turns. In a coil of N turns, all
N turns link flux lines of φ weber, resulting in a flux linkage of Nφ. In such a case,

ψ = Nφ (2)
e = N dφ/dt volt (3)
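The relation e = N dφ/dt can be sketched numerically; the turn count, peak flux and frequency below are assumed example values, and the derivative is approximated by a central difference:

```python
import math

N = 100                     # number of turns (assumed)
phi_m = 0.01                # peak flux per turn, Wb (assumed)
omega = 2 * math.pi * 50    # angular frequency for a 50 Hz supply, rad/s

def phi(t):
    """Sinusoidally varying flux linking the coil."""
    return phi_m * math.sin(omega * t)

def emf(t, dt=1e-7):
    # e = N * dphi/dt, approximated by a central difference
    return N * (phi(t + dt) - phi(t - dt)) / (2 * dt)

# The numeric derivative matches the analytic one, N*phi_m*omega*cos(omega*t)
for t in (0.0, 0.002, 0.005):
    assert abs(emf(t) - N * phi_m * omega * math.cos(omega * t)) < 1e-2
```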

The change in the flux linkage can be brought about in a variety of ways:

1. coil may be static and unmoving but the flux linking the same may change with time.
2. flux lines may be constant and not changing in time but the coil may move in space linking
different value of flux with time.
3. both 1 and 2 above may take place. The flux lines may change in time with coil moving in
space.

These three cases are now elaborated in sequence below, with the help of a coil with a simple
geometry.

Fig. 2 shows a region of length L m, of uniform flux density B Tesla, the flux lines being normal to
the plane of the paper. A loop of one turn links part of this flux. The flux φ linked by the turn is L ∗
B ∗ X Weber. Here X is the length of overlap in meters as shown in the figure. If now B does not
change with time and the loop is unmoving then no emf is induced in the coil as the flux linkages
do not change. Such a condition does not yield any useful machine. On the other hand, if the value
of B varies with time a voltage is induced in the coil linking the same coil even if the coil does not
move. The magnitude of B is assumed to be varying sinusoidally, and can be expressed as,
B = Bm sinωt (4)
where Bm is the peak amplitude of the flux density. ω is the angular rate of change with time. Then,
the instantaneous value of the flux linkage is given by,
ψ = Nφ = N L X Bm sin ωt (5)

The instantaneous value of the induced emf is given by

e = dψ/dt = N L X Bm ω cos ωt = N φm ω cos ωt (6)

Here φm = Bm·L·X. The peak value of the induced emf is em = N φm ω (7) and the rms value is given
by

E = em/√2 = (2π f N φm)/√2 = 4.44 f N φm volt
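The peak and rms values can be checked with assumed example numbers (N, f and φm below are illustrative):

```python
import math

N = 200          # turns (assumed)
f = 50.0         # supply frequency, Hz
phi_m = 0.005    # peak flux, Wb (assumed)

omega = 2 * math.pi * f
e_peak = N * phi_m * omega          # em = N * phi_m * omega
e_rms = e_peak / math.sqrt(2)       # E = em / sqrt(2) = 4.44 * f * N * phi_m
print(round(e_peak, 2), round(e_rms, 2))   # → 314.16 222.14
```

The familiar factor 4.44 is just 2π/√2 ≈ 4.4429 rounded to three significant figures.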
Further, this induced emf has a phase difference of π/2 radian with respect to the flux linked by the
turn. This emf is termed as ‘transformer’ emf and this principle is used in a transformer. Polarity of
the emf is obtained by the application of Lenz’s law. Lenz’s law states that the reaction to the
change in the flux linkages would be such as to oppose the cause. The emf if permitted to drive a
current would produce a counter mmf to oppose this changing flux linkage. In the present case,
presented in Fig. 2 the flux linkages are assumed to be increasing. The polarity of the emf is as
indicated. The loop also experiences a compressive force.

The second case is shown in Fig. 2(b). Here the flux density B is constant in time and the flux linked
by the turn is φ = B·L·X weber. The conductor is moved with a velocity v = dx/dt normal to the flux,
cutting the flux lines and changing the flux linkages. The induced emf as per the application of
Faraday's law of induction is e = N·B·L·dx/dt = B·L·v volt (here N = 1).
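A quick numeric sketch of the speed emf e = B·L·v; all values below are assumed:

```python
# Assumed values: flux density B in tesla, overlap length L in metres,
# conductor velocity v in metres per second.
B, L, v = 1.2, 0.5, 10.0
e = B * L * v    # speed emf for a single conductor (N = 1)
print(e)         # → 6.0
```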

Please note, the actual flux linked by the coil is immaterial. Only the change in the flux linkages is
needed to be known for the calculation of the voltage. The induced emf is in step with the change in
ψ and there is no phase shift. If the flux density B is distributed sinusoidally over the region in the
horizontal direction, the emf induced also becomes sinusoidal. This type of induced emf is termed
as speed emf or rotational emf, as it arises out of the motion of the conductor. The polarity of the
induced emf is obtained by the application of the Lenz’s law as before. Here the changes in flux
linkages is produced by motion of the conductor. The current in the conductor, when the coil ends
are closed, makes the conductor experience a force urging the same to the left. This is how the
polarity of the emf shown in fig.2b is arrived at. Also, the mmf of the loop aids the field mmf to
oppose change in flux linkages. This principle is used in d.c machines and alternators.
The third case under the application of the Faraday’s law arises when the flux changes and also the
conductor moves. This is shown in Fig. 2(c).
The uniform flux density in space is assumed to vary in magnitude in time as B = Bm sin ωt. The
conductor is moved with a uniform velocity dx/dt = v m/sec. The change in the flux linkages and
hence the induced emf is given by

e = N d(Bm sin ωt · L · X)/dt = N L X Bm ω cos ωt + N Bm sin ωt · L · dx/dt volt (8)
The first term is due to the changing flux and hence is a transformer emf. The second term is due to
moving conductor or is a speed emf. When the terminals are closed such as to permit a current the
conductor experiences a force and also the mmf of the coil opposes the change in flux linkages. This
principle is used in a.c. machines where the field is time varying and conductors are moving under
the same.
The first case, where there is a time-varying field and a stationary coil resulting in a transformer
emf, is the subject matter of the present section. Case two will be revisited in the study of
d.c. machines and synchronous machines. Case three will be used extensively in the study of a.c.
machines such as induction machines and also a.c. commutator machines.
Next in the study of transformers comes the question of creating a time-varying field. This is
easily achieved by passing a time-varying current through a coil. The winding which establishes the
field is called the primary. The other winding, which is kept in that field and has a voltage induced
in it, is called the secondary. It should not be forgotten that the primary also sees the same time-
varying field set up by it linking its own turns, and has an emf induced in it as well. These aspects
will be examined in later sections. First, the common constructional features of a transformer used
in an electric power supply system operating at 50 Hz are examined.

Transformer Construction

The chief elements of transformer construction are:


- Magnetic circuit, consisting of limbs and yokes.
- Electrical circuit, consisting of primary and secondary windings.
- Terminals, tapping switches, terminal insulators.
- Tank, oil, cooling devices, conservator.

Types of Transformer
Core type transformers are popular in High voltage applications like Distribution transformers,
Power transformers, and obviously auto transformers. Reasons are:

 High voltage corresponds to high flux. So, to keep the iron loss down you have to use a larger
core cross-section, which makes core type the better choice.
 At high voltage you require heavy insulation. In a core type transformer, putting insulation on
the winding is easier. In fact, the LV (low voltage) winding itself acts as insulation between the
HV (high voltage) winding and the core.

Whereas, Shell type transformers are popular in Low voltage applications like transformers used in
electronic circuits and power electronic converters etc. Reasons are:

 At low voltage, you require comparatively more volume for the copper windings than for the
iron core. So, the windows cut in the laminated sheets have to be of bigger proportion with
respect to the whole size of the transformer, making shell type a better choice.
 Here you don't care much about the insulation, which is thin and light, so you can put the
winding any way you want in the shell.

Why is a core used in transformers?

An iron core increases magnetic flux density, thus making the transformer smaller and more
efficient. It also provides an armature to wind around, providing mechanical support to the
transformer windings, resulting in less physical movement of the wire during normal operation,
and especially during fault conditions.

Why is iron chosen as the material for the core of the transformer? Why don't we use
aluminium?

Aluminium is paramagnetic. It offers high reluctance to flux, almost as high as air. Hence, we
don't use aluminium as the core of transformers. Iron is a ferromagnetic material which offers
much lower reluctance; hence we use iron for the transformer core.

Why is a circular core not preferred in a transformer?

Although a circular cross-section would be compatible with circular windings, it is highly
inconvenient because you would require stacking laminations of different sizes. In a rectangular
cross-sectioned core, laminations are all of the same size. Moreover, the mechanical strength of the
core falls when riveting the laminations, which is why a closer approximation to the circular
cross-section, the cruciform cross-section, is adopted universally.

In the case of transformers, does the core flux change with a change in load (considering
resistive and inductive loads separately)?

In the ideal case covered by introductory simplified transformer theory the answer is 'No'. When
secondary current flows, this creates a MMF which is in opposition to the primary MMF and which
therefore tends to demagnetize the core, but the primary current (I) increases to compensate for
this, and there is no net change in flux. The reason for this is that the flux in the core must always
be sufficient to induce a back emf in the primary which is equal to the applied primary voltage (Vs).

In reality, and in more complex models for the transformer there is a small reduction in flux. This
is because the primary current flows in an effective primary resistance (R) and sets up a voltage
drop IR. There is also the primary leakage reactance (X) to consider. This is also in series with the
primary circuit, and a further voltage drop IX occurs. The voltage applied to the ideal transformer
is then Vs - IZ where Z is the phasor sum of R and X. This reduced voltage requires a smaller flux.
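The reduced voltage Vs - IZ can be evaluated with Python complex numbers standing in for phasors; all values below are assumed example figures, not data from the text:

```python
# Effect of the primary series impedance on the voltage seen by the ideal
# transformer; complex numbers stand in for phasors. All values assumed.
Vs = 230 + 0j            # applied primary voltage, V (reference phasor)
I = 5.0 - 2.0j           # primary current phasor, A (lagging)
R, X = 0.5, 1.2          # effective primary resistance and leakage reactance, ohm

Z = complex(R, X)        # phasor sum of R and X
V_ideal = Vs - I * Z     # reduced voltage across the ideal transformer
print(round(abs(V_ideal), 2))   # magnitude of the reduced voltage, slightly below 230 V
```

The smaller magnitude of V_ideal compared with Vs is exactly the "smaller flux" the paragraph describes.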

Resistive load on secondary has voltage and current in-phase.

At first, one might expect this secondary coil current to cause additional magnetic flux in the core.
In fact, it does not. If more flux were induced in the core, it would cause more voltage to be induced
in the primary coil (remember that e = dΦ/dt). This cannot happen, because the primary
coil's induced voltage must remain at the same magnitude and phase in order to balance with the
applied voltage, in accordance with Kirchhoff's voltage law. Consequently, the magnetic flux in the
core cannot be affected by secondary coil current. However, what does change is the amount of
mmf in the magnetic circuit.
Magnetomotive force is produced any time electrons move through a wire. Usually, this mmf is
accompanied by magnetic flux, in accordance with the mmf=ΦR “magnetic Ohm's Law” equation.
In this case, though, additional flux is not permitted, so the only way the secondary coil's mmf may
exist is if a counteracting mmf is generated by the primary coil, of equal magnitude and opposite
phase. Indeed, this is what happens: an alternating current forms in the primary coil -- 180° out
of phase with the secondary coil's current -- to generate this counteracting mmf and prevent
additional core flux. Polarity marks and current direction arrows have been added to the
illustration to clarify phase relations:

Flux remains constant with application of a load. However, a counteracting mmf is produced by the
loaded secondary.

If you find this process a bit confusing, do not worry. Transformer dynamics is a complex subject.
What is important to understand is this: when an AC voltage is applied to the primary coil, it
creates a magnetic flux in the core, which induces AC voltage in the secondary coil in-phase with
the source voltage. Any current drawn through the secondary coil to power a load induces a
corresponding current in the primary coil, drawing current from the source.
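The reflection of secondary load current into the primary follows the ideal-transformer turns-ratio relations; a minimal sketch, with the function name and all values assumed for illustration:

```python
def reflect_to_primary(Vp, Is, Np, Ns):
    """Ideal-transformer relations (assumed lossless, magnetizing current
    neglected): Vs = Vp / a and Ip = Is / a, where a = Np / Ns."""
    a = Np / Ns     # turns ratio
    Vs = Vp / a     # secondary voltage
    Ip = Is / a     # secondary load current reflected into the primary
    return Vs, Ip

Vs, Ip = reflect_to_primary(Vp=230.0, Is=10.0, Np=500, Ns=100)
print(Vs, Ip)   # → 46.0 2.0
```

Note that a step-down in voltage (a = 5) is a step-up in current capability: the 10 A secondary load draws only 2 A from the source.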

In a transformer, why is the no-load primary current very small compared to the full-load primary current?

Consider a small river dividing two cities, over which you need to build a bridge to connect
them. Only a few skilled workers will be required to build it, not all the people of both cities.
But once the bridge is built, everyone living near the river will be able to cross it whenever
needed, depending on their requirements.

In the case of the transformer also, flux is the bridge between the primary and secondary windings,
as they are not electrically connected to each other. Without flux (the bridge) you won't be able to
cross from primary to secondary.
On NO LOAD, a very small amount of current (a few workers) is required to set up the required flux
(the bridge). Also, since the transformer core is made of high permeability magnetic material, very
little current is required to set up the required flux. But once the flux is set up and the secondary is
loaded (the second city calls the people of the first city for help) and requires more current, a high
current (more people) will flow, depending on the secondary demand.
Why does the transformer current have two components at no load?

A transformer has a primary winding, an iron core and a secondary winding. A winding is mostly
copper wire, which has resistance and inductance. The iron core carries flux; that is inductance.
The iron core also has losses due to eddy currents and hysteresis. These losses are modelled as a
resistance (a resistance only dissipates power).

Now, to answer the question: on no load, the current drawn by the secondary is zero, so the so-
called image current in the primary due to the secondary is also zero. This leaves us with only the
current required to set up flux in the transformer. As mentioned, the iron core has inductance and
resistance. To simplify calculations, we model the equivalent circuit as an inductance and a
resistance in parallel. So the no-load current has two components: one to set up the flux in the core
and the other to account for the losses.
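The two no-load components add as phasors at right angles (the magnetizing component lags the voltage by 90°, the core-loss component is in phase with it); the component values below are assumed for illustration:

```python
import math

# No-load current components (assumed example values):
Iw = 0.2    # core-loss component, in phase with the applied voltage, A
Im = 0.8    # magnetizing component, lagging the voltage by 90 degrees, A

I0 = math.hypot(Iw, Im)    # magnitude of the total no-load current
pf0 = Iw / I0              # no-load power factor (typically very low)
print(round(I0, 4), round(pf0, 4))   # → 0.8246 0.2425
```

The low no-load power factor reflects the fact that the magnetizing (inductive) component dominates.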

1.3 DC Motor
In any electric motor, operation is based on simple electromagnetism. A current-carrying
conductor generates a magnetic field; when this is then placed in an external magnetic field, it will
experience a force proportional to the current in the conductor, and to the strength of the external
magnetic field. As you are well aware from playing with magnets as a kid, opposite (North and
South) polarities attract, while like polarities (North and North, South and South) repel. The
internal configuration of a DC motor is designed to harness the magnetic interaction between
a current-carrying conductor and an external magnetic field to generate rotational motion.

Let's start by looking at a simple 2-pole DC electric motor (here red represents a magnet or winding
with a "North" polarization, while green represents a magnet or winding with a "South"
polarization).

Every DC motor has six basic parts -- axle, rotor (a.k.a., armature), stator, commutator, field
magnet(s), and brushes. In most common DC motors (and all that BEAMers will see), the external
magnetic field is produced by high-strength permanent magnets. The stator is the stationary part
of the motor -- this includes the motor casing, as well as two or more permanent magnet pole
pieces. The rotor (together with the axle and attached commutator) rotate with respect to the
stator. The rotor consists of windings (generally on a core), the windings being electrically
connected to the commutator. The above diagram shows a common motor layout -- with the rotor
inside the stator (field) magnets.
The geometry of the brushes, commutator contacts, and rotor
windings are such that when power is applied, the polarities of
the energized winding and the stator magnet(s) are misaligned,
and the rotor will rotate until it is almost aligned with the stator's
field magnets. As the rotor reaches alignment, the brushes move
to the next commutator contacts, and energize the next winding.
Given our example two-pole motor, the rotation reverses the
direction of current through the rotor winding, leading to a "flip"
of the rotor's magnetic field, driving it to continue rotating.

In real life, though, DC motors will always have more than two
poles (three is a very common number). In particular, this avoids
"dead spots" in the commutator. You can imagine how with our
example two-pole motor, if the rotor is exactly at the middle of its
rotation (perfectly aligned with the field magnets), it will get
"stuck" there. Meanwhile, with a two-pole motor, there is a
moment where the commutator shorts out the power supply (i.e.,
both brushes touch both commutator contacts simultaneously).
This would be bad for the power supply, waste energy, and
damage motor components as well. Yet another disadvantage of
such a simple motor is that it would exhibit a high amount
of torque "ripple" (the amount of torque it could produce is cyclic
with the position of the rotor).

You'll notice a few things from this -- namely, one pole is fully energized at a time (but two others
are "partially" energized). As each brush transitions from one commutator contact to the next, one
coil's field will rapidly collapse, while the next coil's field rapidly charges up (this occurs within a
few microseconds). We'll see more about the effects of this later, but in the meantime, you can see
that this is a direct result of the coil windings' series wiring:
The use of an iron core armature (as in the Mabuchi, above) is quite common, and has a number of
advantages. First off, the iron core provides a strong, rigid support for the windings -- a
particularly important consideration for high-torque motors. The core also conducts heat away
from the rotor windings, allowing the motor to be driven harder than might otherwise be the case.
Iron core construction is also relatively inexpensive compared with other construction types.

But iron core construction also has several disadvantages. The iron armature has a relatively high
inertia which limits motor acceleration. This construction also results in high winding inductances
which limit brush and commutator life.

In small motors, an alternative design is often used which features a 'coreless' armature winding.
This design depends upon the coil wire itself for structural integrity. As a result, the armature is
hollow, and the permanent magnet can be mounted inside the rotor coil. Coreless DC motors have
much lower armature inductance than iron-core motors of comparable size, extending brush and
commutator life.

The coreless design also allows manufacturers to build smaller motors. However, due to the lack
of iron in their rotors, coreless motors are somewhat prone to overheating. As a result, this design
is generally used just in small, low-power motors. BEAMers will most often see coreless DC motors
in the form of pager motors.

Types of DC Motor
1.4 Induction Motor
An electric motor is an electromechanical device which converts electrical energy into mechanical
energy. For three phase AC operation, the most widely used motor is the three phase induction
motor, as this type of motor does not require any starting device; in other words, it is a
self-starting induction motor.

For a better understanding of the principle of the three phase induction motor, the basic
constructional features of this motor must be known. This motor consists of two major parts:

 Stator: The stator of a three phase induction motor is made up of a number of slots which
form a 3-phase winding circuit connected to a 3-phase AC source. The three phase windings
are arranged in the slots in such a manner that they produce a rotating magnetic field when
AC is supplied to them.
 Rotor: The rotor of a three phase induction motor consists of a cylindrical laminated core
with parallel slots that carry conductors. The conductors are heavy copper or aluminium
bars which fit in each slot and are short circuited by end rings. The slots are not made
exactly parallel to the axis of the shaft but are skewed a little, because this arrangement
reduces magnetic humming noise and helps avoid stalling of the motor.

Working of Three Phase Induction Motor

Production of Rotating Magnetic Field

The stator of the motor consists of overlapping winding offset by an electrical angle of 120°. When
the primary winding or the stator is connected to a 3 phase AC source, it establishes a
rotating magnetic field which rotates at the synchronous speed.

Secrets behind the rotation

According to Faraday's law, an emf is induced in any circuit due to the rate of change of magnetic
flux linkage through the circuit. As the rotor windings in an induction motor are either closed
through an external resistance or directly shorted by end rings, and cut the stator's rotating
magnetic field, an emf is induced in the rotor copper bars, and due to this emf a current flows
through the rotor conductors.

Here the relative velocity between the rotating flux and the static rotor conductors is the cause
of current generation; hence, as per Lenz's law, the rotor rotates in the same direction as the
rotating flux in order to reduce the cause, i.e. the relative velocity.

Thus, from the working principle of the three phase induction motor it may be observed that the
rotor speed can never reach the synchronous speed produced by the stator. If the speeds were
equal, there would be no relative velocity, so no emf would be induced in the rotor, no current
would flow, and therefore no torque would be generated. Consequently, the rotor cannot reach the
synchronous speed. The difference between the synchronous speed and the rotor speed is
called the slip. The rotation of the magnetic field in an induction motor has the advantage that no
electrical connections need to be made to the rotor.
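The slip defined above can be computed directly; the speeds below are assumed example values for a 50 Hz, 4-pole machine:

```python
def slip(Ns, N):
    """Per-unit slip between synchronous speed Ns and rotor speed N:
    s = (Ns - N) / Ns."""
    return (Ns - N) / Ns

# Assumed example: Ns = 1500 rpm (50 Hz, 4 poles), rotor running at 1440 rpm.
s = slip(Ns=1500.0, N=1440.0)
print(s)   # → 0.04, i.e. 4 % slip
```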

Thus, the three-phase induction motor is:

 Self-starting.
 Free from armature reaction and brush sparking, because of the absence of commutators
and brushes that may cause sparks.
 Robust in construction.
 Economical.
 Easier to maintain.

Single Phase Induction Motor

For lighting and general purposes in homes, offices, shops and small factories, the single phase
system is widely used as compared to the three phase system, as the single phase system is more
economical and the power requirement of most houses, shops and offices is small and can easily be
met by the single phase system. Single-phase motors are simple in construction, cheap in cost,
reliable and easy to repair and maintain. Due to all these advantages the single-phase motor finds
application in vacuum cleaners, fans, washing machines, centrifugal pumps, blowers, small toys,
etc.

The single-phase AC motors are further classified as:


1. Single-phase induction motors or asynchronous motors.
2. Single phase synchronous motors.
3. Commutator motors.

Construction of Single-Phase Induction Motor

Like any other electrical motor, the asynchronous motor also has two main parts, namely the rotor
and the stator.

 Stator: As its name indicates, the stator is the stationary part of the induction motor. A
single-phase AC supply is given to the stator of the single-phase induction motor.
 Rotor: The rotor is the rotating part of the induction motor. The rotor connects to the
mechanical load through the shaft. The rotor in the single-phase induction motor is of the
squirrel cage type.

The construction of the single-phase induction motor is almost the same as that of the squirrel
cage three-phase induction motor. But in the case of a single-phase induction motor, the stator has
two windings instead of the one three-phase winding of the three-phase induction motor.

Stator of Single-Phase Induction Motor

The stator of the single-phase induction motor has laminated stampings on its periphery to reduce
eddy current losses. Slots are provided in the stampings to carry the stator or main winding. The
stampings are made of silicon steel to reduce hysteresis losses. When we apply a single-phase AC
supply to the stator winding, a magnetic field is produced, and the motor rotates at

a speed slightly less than the synchronous speed Ns. The synchronous speed Ns (in rpm) is given by

Ns = 120f / P

where
f = supply frequency in Hz,
P = number of poles of the motor.
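The synchronous-speed relation Ns = 120f/P can be sketched as a one-liner; the example frequency and pole counts are assumed:

```python
def synchronous_speed(f, P):
    """Synchronous speed in rpm for supply frequency f (Hz) and P poles:
    Ns = 120 * f / P."""
    return 120.0 * f / P

print(synchronous_speed(f=50.0, P=4))   # → 1500.0 rpm
print(synchronous_speed(f=50.0, P=6))   # → 1000.0 rpm
```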
The construction of the stator of the single-phase induction motor is similar to that of three phase
induction motor except there are two dissimilarities in the winding part of the single-phase
induction motor.
1. Firstly, the single-phase induction motors are mostly provided with concentric coils. We
can easily adjust the number of turns per coil with the help of concentric coils. The mmf
distribution is almost sinusoidal.
2. Except for shaded pole motor, the asynchronous motor has two stator windings namely the
main winding and the auxiliary winding. These two windings are placed in space
quadrature to each other.

Rotor of Single-Phase Induction Motor

The construction of the rotor of the single-phase induction motor is similar to the squirrel cage
three-phase induction motor. The rotor is cylindrical and has slots all over its periphery. The slots
are not made parallel to each other but are a little bit skewed as the skewing prevents magnetic
locking of stator and rotor teeth and makes the working of induction motor more smooth and
quieter, i.e., less noise. The squirrel cage rotor consists of aluminum, brass or copper bars. These
aluminum or copper bars are called rotor conductors and placed in the slots on the periphery of the
rotor. The copper or aluminum rings permanently short the rotor conductors called the end rings.
To provide mechanical strength, these rotor conductors are braced to the end ring and hence form
a complete closed circuit resembling a cage and hence got its name as squirrel cage induction
motor. As end rings permanently short the bars, the rotor electrical resistance is very small and it is
not possible to add external resistance as the bars get permanently shorted. The absence of slip ring
and brushes make the construction of single-phase induction motor very simple and robust.

Working Principle of Single-Phase Induction Motor


NOTE: We know that for the working of any electrical motor whether its AC or DC motor, we
require two fluxes as the interaction of these two fluxes produced the required torque.
When we apply a single phase AC supply to the stator winding of single phase induction motor, the
alternating current starts flowing through the stator or main winding. This alternating current
produces an alternating flux called main flux. This main flux also links with the rotor conductors
and hence cut the rotor conductors. According to the Faraday’s law of electromagnetic induction,
an emf gets induced in the rotor. As the rotor circuit is a closed one, current starts flowing in the
rotor. This current is called the rotor current, and it produces its own flux, called the rotor flux.
Since this flux is produced due to the induction principle so, the motor working on this principle
got its name as an induction motor. Now there are two fluxes one is main flux, and another is called
rotor flux. These two fluxes produce the desired torque which is required by the motor to rotate.
Why is the Single Phase Induction Motor not Self-Starting?
According to double field revolving theory, we can resolve any alternating quantity into two
components. Each component has a magnitude equal to the half of the maximum magnitude of the
alternating quantity, and both these components rotate in the opposite direction to each other. For
example, a flux φ = φm sin ωt can be resolved into two components, each of magnitude φm/2.

Each of these components rotates in the opposite direction to the other, i.e. if one φm/2 component
rotates in the clockwise direction, then the other φm/2 component rotates in the anticlockwise
direction.
When we apply a single-phase AC supply to the stator winding of single-phase induction motor, it
produces its flux of magnitude, φm. According to the double field revolving theory, this alternating
flux, φm is divided into two components of magnitude φm/2. Each of these components will rotate
in the opposite direction, with the synchronous speed, Ns. Let us call these two components of flux
as the forward component of flux, φf, and the backward component of flux, φb. The resultant of these
two components of flux at any instant of time gives the value of instantaneous stator flux at that
particular instant.

Now at starting condition, both the forward and backward components of flux are exactly opposite
to each other. Also, both of these components of flux are equal in magnitude. So, they cancel each
other and hence the net torque experienced by the rotor at the starting condition is zero. So, the
single-phase induction motors are not self-starting motors.
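The double-field decomposition can be checked numerically: two counter-rotating complex phasors of magnitude φm/2 always sum to a purely real, pulsating (non-rotating) quantity. The values below are assumed, and the cosine form of the pulsating flux is used for simplicity:

```python
import cmath
import math

phi_m = 1.0                 # peak flux (arbitrary units, assumed)
omega = 2 * math.pi * 50    # supply angular frequency, rad/s

def components(t):
    """Forward and backward rotating components, each of magnitude phi_m/2."""
    fwd = 0.5 * phi_m * cmath.exp(1j * omega * t)    # anticlockwise rotation
    bwd = 0.5 * phi_m * cmath.exp(-1j * omega * t)   # clockwise rotation
    return fwd, bwd

# At every instant the two components sum to phi_m*cos(omega*t): a flux that
# pulsates along one axis but never rotates, so it gives no net starting torque.
for t in (0.0, 0.001, 0.004, 0.007):
    fwd, bwd = components(t)
    total = fwd + bwd
    assert abs(total.imag) < 1e-9
    assert abs(total.real - phi_m * math.cos(omega * t)) < 1e-9
```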

Methods for Making Single Phase Induction as Self-Starting Motor


From the above topic, we can easily conclude that the single-phase induction motors are not self-
starting because the produced stator flux is alternating in nature and at the starting, the two
components of this flux cancel each other and hence there is no net torque. The solution to this
problem is to make the stator flux of the rotating type, rather than the alternating type, rotating
in one particular direction only. Then the induction motor will become self-starting. Now, for
producing this rotating magnetic field, we require two alternating flux, having some phase
difference angle between them. When these two fluxes interact with each other, they will produce a
resultant flux. This resultant flux is rotating in nature and rotates in space in one particular
direction only. Once the motor starts running, we can remove the additional flux. The motor will
continue to run under the influence of the main flux only. Depending upon the methods for making
asynchronous motor as Self-Starting Motor, there are mainly four types of single phase
induction motor namely,
1. Split phase induction motor,
2. Capacitor start inductor motor,
3. Capacitor start capacitor run induction motor,
4. Shaded pole induction motor.
5. Permanent split capacitor motor or single value capacitor motor.
Comparison between Single Phase and Three Phase Induction Motors
1. Single phase induction motors are simple in construction, reliable and economical for small
power rating as compared to three phase induction motors.
2. The electrical power factor of single-phase induction motors is low as compared to three
phase induction motors.
3. For the same size, single-phase induction motors develop about 50% of the output of three phase induction motors.
4. The starting torque of single-phase induction motors is also low.
5. The efficiency of single-phase induction motors is lower than that of three phase induction motors.

Single phase induction motors are simple, robust, reliable and cheaper for small ratings. They are available up to about 1 kW rating.

1. Direct On Line Starter


1. It is a simple and cheap starter for a 3-phase induction motor.
2. The contacts close against spring action.
3. This method is normally limited to smaller cage induction motors, because the starting current can
be as high as eight times the full load current of the motor. Use of a double-cage rotor requires a
lower starting current (approximately four times), and use of a quick-acting A.V.R. enables motors
of 75 kW and above to be started direct on line.
4. An isolator is required to isolate the starter from the supply for maintenance.
5. Protection must be provided for the motor. Some of the safety protections are over-current
protection, under-voltage protection, short circuit protection, etc. Control circuit voltage is
sometimes stepped down through an autotransformer.

 2. Star-Delta Starter
A three-phase motor will give three times the power output when the stator windings are connected in delta rather than in star, but will draw only 1/3 of the supply current when connected in star compared with delta. The starting torque developed in star is likewise 1/3 of that developed when starting in delta.
1. A two-position switch (manual or automatic) is provided through a timing relay.
2. Starting in star reduces the starting current.
3. When the motor has accelerated up to speed and the current is reduced to its normal value, the
starter is moved to run position with the windings now connected in delta.
4. This starter is more complicated than the DOL starter, and since the output torque is reduced in the start position, a motor with a star-delta starter may not produce sufficient torque to start against full load. Such motors are therefore normally started under light load conditions.
5. Switching causes a transient current which may have peak values in excess of those with DOL.
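The star-delta current and torque ratios above can be verified with a short sketch, assuming an ideal motor; the specific line voltage V_L and per-phase impedance Z below are illustrative assumptions, not values from the text:

```python
import math

# Sketch of the star vs. delta starting ratios for an ideal motor.
V_L, Z = 400.0, 10.0          # assumed line voltage (V) and per-phase impedance (ohm)

# Delta: each phase winding sees the full line voltage.
I_phase_delta = V_L / Z
I_line_delta = math.sqrt(3) * I_phase_delta

# Star: each phase sees V_L / sqrt(3), and line current equals phase current.
I_line_star = (V_L / math.sqrt(3)) / Z

ratio = I_line_star / I_line_delta
print(round(ratio, 6))        # 0.333333 -- star draws 1/3 of the delta line current
# Torque varies as the square of the phase voltage, so starting torque in
# star is (1/sqrt(3))**2 = 1/3 of the delta value as well.
```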

3. Auto Transformer Motor Starting


1. Operated by a two-position switch i.e. manually / automatically using a timer to change
over from start to run position.
2. In the starting position, the supply is connected to the stator windings through an auto-transformer, which reduces the applied voltage to 50, 60 or 70% of the normal value depending on the tapping used.
3. The reduced voltage reduces the current in the motor windings. With the 50% tapping, the motor current is halved, and the supply current is half of the motor current; thus the starting current taken from the supply is only 25% of that taken by a DOL starter.
4. For an induction motor, torque T is proportional to V²; thus on the 50% tapping, the starting torque is only (0.5)² = 25% of that obtained by DOL starting.
5. These starters are used in larger industries; they are larger in size and more expensive.
6. Switching from the start to the run position causes transient currents, which can be greater in value than those obtained with DOL starting.
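The figures for the 50% tapping follow directly from the tap fraction. A minimal sketch, assuming an ideal autotransformer (supply current = tap × motor current) and torque proportional to V²; the function name is an illustrative assumption:

```python
# Sketch of reduced-voltage autotransformer starting, all quantities
# expressed relative to their direct-on-line (DOL) starting values.
def autotransformer_start(tap):           # tap as a fraction, e.g. 0.5 for 50%
    motor_current = tap                   # motor current scales with voltage
    supply_current = tap * motor_current  # ideal transformer current ratio
    torque = tap ** 2                     # torque scales with voltage squared
    return motor_current, supply_current, torque

print(autotransformer_start(0.5))         # (0.5, 0.25, 0.25)
# i.e. half the motor current, 25% of the DOL supply current, 25% of the torque.
```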

4. Rotor Resistance Starter


1. This starter is used with a wound rotor induction motor. It uses an external
resistance/phase in the rotor circuit so that rotor will develop a high value of torque.
2. High torque is produced at low speeds, when the external resistance is at its higher
value.
3. At start, supply power is connected to stator through a three-pole contactor and, at a
same time, an external rotor resistance is added.
4. The high resistance limits the starting current and allows the motor to start safely against a high load.
5. Resistors are normally of the wire-wound type, connected through brushes and slip
rings to each rotor phase. They are tapped with points brought out to fixed contactors.

Why star delta starter is preferred with induction motor?

A star-delta starter is preferred with an induction motor for the following reasons:
• The starting current is reduced to about one third of the direct-on-line value, which reduces the voltage drop in the supply and the associated losses.
• The star connection is in circuit first during starting; it reduces the phase voltage by a factor of √3 and hence the starting current to one third, so heating of the motor windings during starting is reduced.
• This reduced starting stress helps prevent damage to the motor winding.

State the difference between generator and alternator

A generator and an alternator are two devices that convert mechanical energy into electrical energy. Both work on the same principle of electromagnetic induction; the difference lies in their construction. A DC generator has a stationary magnetic field and a rotating armature, and a commutator with brushes converts the induced emf into direct current for the external load. An alternator has a stationary armature and a rotating magnetic field for high voltages; for low-voltage output, a rotating armature and stationary magnetic field may be used.

Why AC systems are preferred over DC systems?

Due to following reasons, AC systems are preferred over DC systems:


a. It is easy to maintain and change the voltage of AC electricity for transmission and distribution.
b. Plant cost for AC transmission (circuit breakers, transformers etc) is much lower than the
equivalent DC transmission
c. Power stations generate AC, so it is better to use AC rather than converting it to DC.
d. When a large fault occurs in a network, it is easier to interrupt in an AC system, as the sine wave
current will naturally tend to zero at some point making the current easier to interrupt.
How can you relate power engineering with electrical engineering?

Power engineering is a subdivision of electrical engineering. It deals with the generation, transmission and distribution of energy in electrical form. The design of all power equipment also comes under power engineering. Power engineers may work on the design and maintenance of the power grid (on-grid systems), or on off-grid systems that are not connected to the grid.

What are the various kind of cables used for transmission?

Cables used for transmitting power can be categorized in three forms:
• Low-tension cables, which can transmit voltages up to 1000 volts.
• High-tension cables, which can transmit voltages up to 23000 volts.
• Super-tension cables, which can transmit voltages from 66 kV to 132 kV.

Why back emf used for a dc motor? highlight its significance.

When the armature of a DC motor rotates, its conductors cut the magnetic flux between the poles and an emf is induced in them. This induced emf opposes the applied voltage driving current through the conductors and is therefore called back emf. Its value depends on the speed of rotation of the armature conductors; at starting, the value of back emf is zero, which is why the starting current is high.
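For the armature circuit this relationship is commonly written Eb = V − Ia·Ra, so the armature current is Ia = (V − Eb)/Ra. A small sketch with assumed illustrative values (230 V supply, 0.5 Ω armature resistance) showing why the current is large at standstill, when Eb = 0:

```python
# Back emf sketch: Eb = V - Ia*Ra, so Ia = (V - Eb) / Ra.
# Eb is proportional to speed, so the current falls as the motor runs up.
V, Ra = 230.0, 0.5                      # assumed supply voltage and armature resistance

def armature_current(back_emf):
    return (V - back_emf) / Ra

print(armature_current(0.0))            # 460.0 A at standstill (Eb = 0)
print(armature_current(220.0))          # 20.0 A once the motor is up to speed
```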

What is slip in an induction motor?

Slip is defined as the difference between the synchronous speed of the rotating flux (Ns) and the rotor speed (N). The speed of the rotor of an induction motor is always less than its synchronous speed. Slip is usually expressed as a percentage of the synchronous speed (Ns) and is represented by the symbol ‘S’.
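As a formula, S = (Ns − N)/Ns × 100. A minimal sketch using an assumed example (a 4-pole, 50 Hz motor, for which Ns = 120f/P = 1500 rpm):

```python
def slip_percent(ns, n):
    """Slip S = (Ns - N) / Ns * 100, Ns = synchronous speed, N = rotor speed."""
    return (ns - n) / ns * 100

# Assumed example: 4-pole, 50 Hz motor -> Ns = 120 * 50 / 4 = 1500 rpm.
print(slip_percent(1500, 1440))   # 4.0 -- a typical full-load slip of a few percent
```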

Explain the application of storage batteries.

Storage batteries are used for various purposes, some of the applications are mentioned below:

• For the operation of protective devices and for emergency lighting at generating stations and
substations.
• For starting, ignition and lighting of automobiles, aircrafts etc.
• For lighting on steam and diesel railways trains.
• As a power source in telephone exchanges, laboratories and broadcasting stations.
• For emergency lighting at hospitals, banks, rural areas where electricity supplies are not possible.

Some more basic questions on Electrical Engineering.

http://www.electricaltechnology.org/2013/09/Basic-Electrical-and-Electronics-Interview-Questions-and-Answers-Electrical-electronics-notes.html

Basics of our Home electrical system

Electricity flows to your lights and appliances from the power company through your panel, its
breakers, out on your circuits and back. Here is a schematic picture of all the major parts of your
home electrical system.
There are many connections along these paths that can be disrupted or fail, and there are many
ways that electricity could go places you don't want it to.

The Power Company. Your electrical utility company and its distribution system bring power over
wires and through switches and transformers from the generating plant all the way to a point of
connection at your home.

The utility's system itself can have trouble that can affect things in your home. Its built-in safety
features can stop power in time, but other connections, broken lines, storms, imperfections, or
mistakes can sometimes allow unusual voltages into your system, possibly damaging parts of it.
The sensitivity of home electronic equipment to this has made us more aware of this possibility, so
that our use of surge protectors has become common. But some surges are difficult to protect
against and can be similar to lightning strikes in their effects.

This diagram gives a closer look at the source of 120 and 240 volts in the company's transformer.
Your Main Panel. Your central breaker panel (or fusebox) directs electricity through your home as a
number of separate circuits, each flowing "out" from its own circuit breaker (or fuse) on one wire
and returning from whatever is using the electricity to another connection in the panel by means of
another wire. The breaker or fuse will interrupt the current (the flow) if it ever starts to approach a
dangerous level. This diagram compares a main panel as I have diagrammed it so far, with how a
typical panel is arranged:

There may be in the panel a distinct "main" breaker that can shut off power to most or all the
circuits. If not, there could be one near the power company's meter. These devices automatically
turn power off, but connections at any one of these points -- at the meter, at the main breaker,
inside the main breaker -- can fail or become unreliable, disrupting some or all the power in your
home.
Circuits. A circuit is a path over which electric current can flow from and to an electric source. This
concept could use some clarification. If it were always as simple as current from the source
following only one possible path out to one light and back by one return path, then the operation or
malfunction of a circuit would be easy to grasp. But it is not so simple. This diagram lets you trace
the path of one circuit as it goes through your system:

Code and convention define a circuit in a home as having its source at one of the home's circuit
breakers or fuses. Taking this as the starting place of the electrical source, then, we will find that
most circuits in a home are complex, involving sub-branches like those of a tree.

By Code, a dedicated circuit is used for each of most large appliances like the electric range, electric
water heater, air conditioner, or electric dryer; these as well as electric heaters will have two
(joined) breakers in order to use 240 volts rather than the 120 volts used by most other items. A
dedicated circuit of 120 volts is usually provided for each dishwasher, disposal, gas or oil furnace,
and clothes washer. Most other 120-volt circuits tend to serve a number (from 2 to 20) of lights and
plug-in outlets. There are usually two circuits for the outlets in the kitchen/dining area, and these
use a heavier wire capable of 20 amps of flow.

Circuits serving more than one outlet or light pass power on to successive locations by means of
connections in the device itself or in the box the device is mounted in. So, on any one circuit there
are many places where electricity can fail to get through -- from the circuit breaker and its
connections, through a number of connections at devices and boxes, through switches, and at the
contacts of a receptacle where you plug something in. Troubleshooting electrical problems in your house will depend on a basic grasp of these matters.

Sometimes the behavior of electricity in a home is explained by a comparison with plumbing.


Water and what it does are less abstract. But the analogy is very limited. It is true that water
pressure (voltage) through a certain size of pipe or showerhead (resistance) can result in a certain
rate of flow (current), and that a certain number of gallons (kilowatt-hours) will thereby be
delivered. But what would a circuit mean -- in plumbing terms? Water pressure ends at the sink or
out on the lawn beyond the sprinkler. The return of water back to a reservoir is very round-about.
Electrical wiring is a tighter system, a more closed system.

Wires: hot, neutral, ground. To understand the function that different wires in a circuit play,
consider first our use of terms. Because a house is provided with alternating current, the terms
"positive" and "negative" do not apply as they do to direct current in batteries and cars. Instead, the
power company is providing electricity that will flow back and forth 60 times per second. The
electricity flows through the transformer, on the one hand, and the operating household items, on
the other hand, by way of the continuous wire paths between them.

Two of the transformer's terminals are isolated from the earth and the third is connected to the
earth. We call these isolated wires "hot" or "live" because anything even slightly connected to the
earth (like us!), when touching a hot wire, provides, along with the earth, an accidental path for
electricity to flow between that wire and the transformer's "grounded" terminal.

A circuit's hot wire is, we might say, one half of the path the circuit takes between the electrical
source and the operating items ("loads"). The other half, in the case of a 120-volt circuit, is the
"neutral" wire. For a 240-volt circuit, the other half is a hot wire from the other phase -- the other
hot coming from the transformer. When they are turned on (operating, running), the loads are part
of the path of the current and are where the electricity is doing its intended work.
Hot wires are distributed into your home from a number of circuit breakers or fuses in your panel.
Hot wires are typically black, occasionally red or even white, and never green or bare. The earth-
related neutral wires in your home are also distributed from your panel, but from one or two
"neutral bars". Neutral wires are always supposed to be white. Contact with them should not
normally shock you because they are connected to the earth much better (we assume) than you can
be. But contact with a hot, even one that is white-colored, will tend to shock you. Even when they
are switched off, we call these wires hot to remind ourselves that they will be, and to distinguish
them from neutrals and grounds.

Besides black, red, and white wires, the cables in homes wired since the 1960's also contain a bare
or green "ground(ing)" wire. Like the neutral, it is ultimately connected to the transformer's
grounded terminal, but this wire is not connected so as to be part of the normal path of flow around
the circuit. Instead, it is there to connect to the metal parts of lights and appliances, so that a path
is provided "to ground" if a hot wire should contact such parts; otherwise you or I could be the best
available path.

In other words, when a ground wire does carry current, it is taking care of an otherwise dangerous
situation; in fact, it usually carries so much flow suddenly, that it causes the breaker of the circuit
to trip, thereby also alerting us that a problem needs attention.
Major electrical Parameters of India

Voltage – 230 V

Frequency – 50 Hz

India’s transmission and distribution loss – 23.4% (as of July 2014)

Why is supply frequency 50 Hz in India not 60 Hz?


50 Hz is not a universal frequency; it is simply one of two common choices near the practical optimum. The United States and several other countries use 60 Hz, which is close to 50 Hz. Lower frequencies increase the size, weight and hence the cost of machines and transformers, and lamp flicker is more noticeable at lower frequencies, which is why 60 Hz is preferred in some countries. Both frequencies have pros and cons; some countries adopted 50 Hz partly to fit their metric system of standards.

Note that countries with a 60 Hz supply generally do not use 220–240 V; a 110–120 V supply is used instead.

Why do ceiling fan rotates in anticlockwise direction and table fan in clockwise
direction?

The direction depends on the blade pitch and on which side the fan is viewed from. A ceiling fan is viewed from below, and its blades are angled so that anticlockwise rotation pushes air downward into the room. A table fan is viewed from the front, and its blades are angled so that clockwise rotation pushes air forward. The same motor viewed from the opposite side would appear to rotate the other way, so the difference is one of convention and blade design rather than of the motor itself.

Different Kinds of Circuit Breakers

An electrical circuit breaker is a switching device which can be operated both manually and automatically for the control and protection of an electrical power system. As modern power systems deal with very large currents, special attention must be given in the design of a circuit breaker to the safe interruption of the arc produced during its opening/closing operation.
According to their arc quenching (rapid cooling) media, circuit breakers can be divided as:

 1) Air circuit breaker


 2) Oil circuit breaker
 3) Vacuum circuit breaker
 4) SF6 circuit breaker

Air circuit breakers (ACB)

An air circuit breaker operates in air at atmospheric pressure.


The working principle of this breaker is rather different from that of other types of circuit breakers. The main aim of every circuit breaker is to prevent the re-establishment of arcing after current zero by creating a situation in which the contact gap can withstand the system recovery voltage. The air circuit breaker does the same, but in a different manner: to interrupt the arc, it creates an arc voltage in excess of the supply voltage. Arc voltage is defined as the minimum voltage required to maintain the arc. This circuit breaker increases the arc voltage in three main ways:

 It may increase the arc voltage by cooling the arc plasma. As the temperature of arc
plasma is decreased, the mobility of the particle in arc plasma is reduced; hence more
voltage gradient is required to maintain the arc.

 It may increase the arc voltage by lengthening the arc path. As the length of arc path is
increased, the resistance of the path is increased, and hence to maintain the same arc
current more voltage is required to be applied across the arc path. That means arc
voltage is increased.

 Splitting up the arc into a number of series arcs also increases the arc voltage.

Oil Circuit Breakers (OCB)

Mineral oil has better insulating property than air. The oil is used to insulate between the phases
and between the phases and the ground, and to extinguish the arc. When electric arc is drawn
under oil, the arc vaporizes the oil and creates a large bubble of hydrogen that surrounds the arc.
The oil surrounding the bubble conducts the heat away from the arc and thus also contributes to
deionization and extinction of the arc. Disadvantages of oil circuit breakers are the flammability of the oil and the maintenance necessary (i.e. changing and purifying the oil). The oil circuit breaker is one of the oldest types of circuit breaker.

Vacuum Circuit Breakers (VCB)

Vacuum circuit breakers are used mostly for low and medium voltages. Vacuum interrupters are developed for up to 36 kV and can be connected in series for higher voltages. The interrupting chambers are made of porcelain and are sealed. They cannot be opened for maintenance, but their life is expected to be about 20 years, provided the vacuum is maintained. Because of the high dielectric strength of vacuum, the interrupters are small: the gap between the contacts is about 1 cm for 15 kV interrupters and 2 mm for 3 kV interrupters.
2. ECE Questions

2.1 Computer Networking

1. Guided and unguided transmission medium.

Signals are usually transmitted over transmission media that are broadly classified into two categories.
Guided media provide a conduit from one device to another; they include twisted-pair cable, coaxial cable and fiber-optic cable. A signal traveling along any of these media is directed and contained by the physical limits of the medium. Twisted-pair and coaxial cable use metallic conductors that accept and transport signals in the form of electrical current. Optical fiber is a glass or plastic cable that accepts and transports signals in the form of light.
Unguided media are wireless media that transport electromagnetic waves without using a physical conductor. Signals are broadcast through the air or free space, as in radio communication, satellite communication and cellular telephony.
2. Difference between baud rate and bit rate.

Bits, as you probably know, are the 0s and 1s that binary data consists of. The bit rate is a measure of the number of bits transmitted per unit of time, generally given in bits per second. This means that if a network is transmitting 15,000 bits every second, the bit rate is 15,000 bps, where bps stands for bits per second.

The baud rate, which is also known as symbol rate, measures the number of symbols that are
transmitted per unit of time. A symbol typically consists of a fixed number of bits depending on
what the symbol is defined as. The baud rate is measured in symbols per second.

To summarize, the bit rate measures the number of bits transmitted per second, whereas the baud
rate measures the number of symbols transmitted per second – and that is the major difference
between the two.

Can the baud rate equal the bit rate?

Because symbols are comprised of bits, the baud rate will equal the bit rate only when there is just
one bit per symbol.
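The relation can be written as bit rate = baud rate × bits per symbol. A minimal sketch (the example numbers are illustrative assumptions):

```python
def bit_rate(baud, bits_per_symbol):
    # bit rate (bits/s) = symbol rate (baud) * bits carried per symbol
    return baud * bits_per_symbol

print(bit_rate(2400, 4))   # 9600: 2400 symbols/s, each carrying 4 bits (e.g. 16 levels)
print(bit_rate(2400, 1))   # 2400: with one bit per symbol, baud rate equals bit rate
```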

3. Difference between packet switching and circuit switching.

Circuit Switching

In circuit switching network dedicated channel has to be established before the call is made
between users. The channel is reserved between the users till the connection is active. For half
duplex communication, one channel is allocated and for full duplex communication, two channels
are allocated. It is mainly used for voice communication requiring real time services without any
much delay.

As shown in figure 1, if user-A wants to use the network, it must first request a channel; once the channel is obtained, user-A can communicate with user-C. During the connection phase, if user-B tries to call/communicate with user-D or any other user, it will get a busy signal from the network.

Packet Switching

In a packet switching network, unlike a CS network, it is not required to establish a connection initially. The channel is shared by many users, but when the capacity or the number of users increases, this can lead to congestion in the network. Packet switched networks are mainly used for data applications and other non-real-time scenarios.
As shown in figure 2, if user-A wants to send data to user-C while user-B sends data to user-D, both transfers are possible simultaneously. Each packet carries a header containing the source and destination addresses; intermediate switching nodes inspect this header to determine the route to the destination.

4. Comparison between CS vs. PS networks

As shown above, in packet switched (PS) networks quality of service (QoS) is not guaranteed, while in circuit switched (CS) networks quality is guaranteed.
PS is used for time-insensitive applications such as internet browsing, email, SMS and MMS.
In CS, even if the user is not talking, the channel cannot be used by other users; this wastes resource capacity during those intervals.
An example of a circuit switched network is the PSTN; an example of a packet switched network is GPRS/EDGE.

5. What is Manchester coding?

In telecommunication and data storage, Manchester coding (also known as phase encoding,
or PE) is a line code in which the encoding of each data bit has at least one transition and occupies
the same time. It therefore has no DC component, and is self-clocking, which means that it may be
inductively or capacitively coupled, and that a clock signal can be recovered from the encoded data.
As a result, electrical connections using a Manchester code are easily galvanically isolated using a
network isolator—a simple one-to-one isolation transformer.

An example of Manchester encoding showing both conventions
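A minimal encoder sketch, assuming the IEEE 802.3 convention (a 0 is sent as a high-to-low transition, a 1 as low-to-high; the G.E. Thomas convention is the opposite); the function name is illustrative:

```python
def manchester_encode(bits):
    """Encode a bit list into half-bit signal levels, IEEE 802.3 style:
    0 -> high-to-low (1, 0), 1 -> low-to-high (0, 1)."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

levels = manchester_encode([1, 0, 1, 1])
print(levels)                 # [0, 1, 1, 0, 0, 1, 0, 1]
# Every bit contributes one high and one low half-period, so the encoded
# signal has no DC component and a transition in every bit, from which
# the receiver can recover the clock.
```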

6. Name the layers of ISO-OSI layer model of network.

1. Physical layer
2. Data link layer
3. Network layer
4. Transport layer
5. Session layer
6. Presentation layer
7. Application layer

7. What is IEEE 802.3 MAC?

IEEE 802.3 is a working group and a collection of IEEE standards produced by the working group
defining the physical layer and data link layer's media access control (MAC) of wired Ethernet. This
is generally a local area network technology with some wide area network applications. Physical
connections are made between nodes and/or infrastructure devices (hubs, switches, routers) by
various types of copper or fiber cable.

802.3 is a technology that supports the IEEE 802.1 network architecture.

802.3 also defines LAN access method using CSMA/CD.

8. CDMA

Code division multiple access (CDMA) is a channel access method used by various radio
communication technologies.

CDMA is an example of multiple access, which is where several transmitters can send information
simultaneously over a single communication channel. This allows several users to share a band of
frequencies (see bandwidth). To permit this without undue interference between the users, CDMA
employs spread-spectrum technology and a special coding scheme (where each transmitter is
assigned a code).

CDMA is used as the access method in many mobile phone standards such as cdmaOne,
CDMA2000 (the 3G evolution of cdmaOne), and WCDMA (the 3G standard used by GSM carriers),
which are often referred to as simply CDMA.

9. Token ring & Token Bus.

Token ring local area network (LAN) technology is a protocol which resides at the data link layer
(DLL) of the OSI model. It uses a special three-byte frame called a token that travels around the
ring.

Two examples of token ring networks: a) Using a single MAU b) Using several MAUs connected to
each other
Token bus is a network implementing the token ring protocol over a "virtual ring" on a coaxial
cable.[1] A token is passed around the network nodes and only the node possessing the token may
transmit. If a node doesn't have anything to send, the token is passed on to the next node on the
virtual ring. Each node must know the address of its neighbour in the ring, so a special protocol is
needed to notify the other nodes of connections to, and disconnections from, the ring.[2]

Token passing in a Token bus network

10. IEEE 802.11

IEEE 802.11 is a set of media access control (MAC) and physical layer (PHY) specifications for
implementing wireless local area network (WLAN) computer communication in the 2.4, 3.6, 5, and
60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards
Committee (IEEE 802). The base version of the standard was released in 1997, and has had
subsequent amendments. The standard and amendments provide the basis for wireless network
products using the Wi-Fi brand. While each amendment is officially revoked when it is
incorporated in the latest version of the standard, the corporate world tends to market to the
revisions because they concisely denote capabilities of their products. As a result, in the market
place, each revision tends to become its own standard.

11. Difference between TCP and UDP.

There are two types of Internet Protocol (IP) traffic: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). TCP is connection oriented: once a connection is established, data can be sent bidirectionally. UDP is a simpler, connectionless Internet protocol in which messages are sent as independent packets (datagrams).

Comparison chart

Acronym — TCP: Transmission Control Protocol. UDP: User Datagram Protocol (also Universal Datagram Protocol).

Connection — TCP is a connection-oriented protocol. UDP is a connectionless protocol.

Function — TCP carries a message across the internet from one computer to another over an established connection. UDP is also a protocol used in message transport or transfer, but it is not connection-based: one program can send a load of packets to another and that would be the end of the relationship.

Usage — TCP is suited to applications that require high reliability and where transmission time is relatively less critical. UDP is suitable for applications that need fast, efficient transmission, such as games; UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients.

Use by other protocols — TCP: HTTP, HTTPS, FTP, SMTP, Telnet. UDP: DNS, DHCP, TFTP, SNMP, RIP, VoIP.

Ordering of data packets — TCP rearranges data packets into the order in which they were sent. UDP has no inherent order, as all packets are independent of each other; if ordering is required, it has to be managed by the application layer.

Speed of transfer — TCP is slower than UDP. UDP is faster because it performs no connection setup, acknowledgement, or retransmission.

Reliability — TCP gives an absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent. UDP gives no guarantee that the messages or packets sent will arrive at all.

Header size — TCP header size is 20 bytes. UDP header size is 8 bytes.

Common header fields — Source port, destination port, checksum.

Streaming of data — TCP data is read as a byte stream; no distinguishing indications are transmitted to signal message (segment) boundaries. UDP packets are sent individually and are checked for integrity only if they arrive; packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent.

Weight — TCP is heavyweight: it requires three packets to set up a socket connection before any user data can be sent, and it handles reliability and congestion control. UDP is lightweight: there is no ordering of messages and no tracking of connections; it is a small transport layer designed on top of IP.

Data flow control — TCP performs flow control. UDP has no option for flow control.

Error checking — TCP does error checking and recovery. UDP does error checking, but has no recovery options.

Header fields — TCP: sequence number, acknowledgement number, data offset, reserved, control bits, window, urgent pointer, options, padding, checksum, source port, destination port. UDP: length, source port, destination port, checksum.

Acknowledgement — TCP uses acknowledgement segments. UDP has no acknowledgement.

Handshake — TCP: SYN, SYN-ACK, ACK. UDP: no handshake (connectionless protocol).

Checksum — Both carry a checksum to detect errors (the checksum is optional for UDP over IPv4).
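The connection and message-boundary differences in the chart can be seen directly with Python's standard socket API. A minimal loopback sketch (the OS picks free ports; payloads are made up for illustration):

```python
import socket

# --- UDP: connectionless; each datagram is sent with no handshake and
# keeps its message boundary.
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))                 # port 0: OS picks a free port
udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"hello", udp_rx.getsockname())
udp_tx.sendto(b"world", udp_rx.getsockname())
msg1, _ = udp_rx.recvfrom(1024)               # one recvfrom = one whole datagram
msg2, _ = udp_rx.recvfrom(1024)

# --- TCP: a three-way handshake first, then a byte stream in which the
# two writes below may arrive merged (no message boundaries).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())          # SYN, SYN-ACK, ACK happen here
conn, _ = server.accept()
client.sendall(b"hello")
client.sendall(b"world")
client.close()

data = b""
while chunk := conn.recv(1024):               # read the stream until EOF
    data += chunk
print(msg1, msg2)    # two intact datagrams
print(data)          # b'helloworld' -- the stream merged the two writes
for s in (udp_rx, udp_tx, server, conn):
    s.close()
```

The TCP receiver gets one undifferentiated byte stream, while each UDP read returns exactly one datagram, matching the "Streaming of data" row above.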

12. Congestion control & Leaky Bucket algorithm

When one part of the subnet (e.g. one or more routers in an area) becomes overloaded, congestion results. Because routers are receiving packets faster than they can forward them, one of two things must happen: the excess packets are queued, increasing delay and consuming buffer space, or, once the buffers fill, packets must be discarded.
The leaky bucket is an algorithm used in packet switched computer networks and
telecommunications networks. It can be used to check that data transmissions, in the form of
packets, conform to defined limits on bandwidth and burstiness (a measure of the unevenness or
variations in the traffic flow). It can also be used as a scheduling algorithm to determine the timing
of transmissions that will comply with the limits set for the bandwidth and burstiness: see network
scheduler. The leaky bucket algorithm is also used in leaky bucket counters, e.g. to detect when the
average or peak rate of random or stochastic events or stochastic processes exceed defined limits.
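The leaky bucket described above can be sketched as a fixed-capacity queue drained at a constant rate. A minimal Python illustration (the capacity and leak rate are arbitrary example values):

```python
from collections import deque

class LeakyBucket:
    """Sketch of a leaky-bucket shaper: arrivals fill a finite queue
    (the bucket); output leaks at a constant rate regardless of bursts."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity     # maximum packets the bucket can hold
        self.leak_rate = leak_rate   # packets released per time unit
        self.queue = deque()

    def arrive(self, packet):
        """Accept a packet, or drop it (return False) if the bucket is full."""
        if len(self.queue) >= self.capacity:
            return False             # non-conforming packet is discarded
        self.queue.append(packet)
        return True

    def tick(self):
        """One time unit: release up to leak_rate packets at the steady rate."""
        n = min(self.leak_rate, len(self.queue))
        return [self.queue.popleft() for _ in range(n)]

bucket = LeakyBucket(capacity=3, leak_rate=1)
accepted = [bucket.arrive(p) for p in ["p1", "p2", "p3", "p4"]]  # 4-packet burst
out = bucket.tick()
print(accepted)   # [True, True, True, False] -- the burst overflows the bucket
print(out)        # ['p1'] -- output is smoothed to one packet per tick
```

However bursty the arrivals, the output never exceeds one packet per tick, which is the bandwidth/burstiness limit the algorithm enforces.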

13. IP address. Why is it unique and universal?

An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g.,
computer, printer) participating in a computer network that uses the Internet Protocol for
communication.[1] An IP address serves two principal functions: host or network interface
identification and location addressing. Its role has been characterized as follows: "A name indicates
what we seek. An address indicates where it is. A route indicates how to get there.”
Your router has a public IP address, assigned by your ISP; this is how everything on the internet reaches you, because all traffic destined for your network is sent to that external IP address, which every computer behind the router shares. The router (and its firewall) then forwards the data to an internal IP address, from a separate private range, which the router itself assigns to each computer on the local network. So if you moved to another country, your external IP would change only because you changed ISPs.

Your external IP can also change from time to time: ISPs commonly assign addresses dynamically, so it may change at random intervals or when you restart your router.

14. Data privacy & Digital signature technique.

Data privacy, also called information privacy, is the aspect of information technology (IT) that
deals with the ability an organization or individual has to determine what data in a computer
system can be shared with third parties.

A digital signature is a mechanism by which a message is authenticated, i.e. it proves that a message effectively comes from a given sender, much like a signature on a paper document. For instance, suppose that Alice wants to digitally sign a message to Bob. To do so, she uses her private key to encrypt a digest (hash) of the message; she then sends the message along with her public key (typically, the public key is attached to the signed message). Since Alice's public key is the only key that can decrypt what her private key encrypted, a successful decryption that matches the message digest constitutes digital signature verification, meaning that there is no doubt it was Alice's private key that produced the signature.
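The sign-with-private-key, verify-with-public-key idea can be sketched with textbook RSA. The tiny key below (n = 61·53 = 3233) is purely illustrative; real signatures use large keys, padding schemes, and a vetted crypto library, never hand-rolled arithmetic:

```python
import hashlib

# Toy textbook RSA key pair: n = 61*53, e public exponent, d private
# exponent, with d*e = 1 mod (61-1)*(53-1). Illustrative only.
n, e, d = 3233, 17, 2753

def digest(message: bytes) -> int:
    # Hash the message, reduced mod n so it fits the toy modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)        # apply the PRIVATE exponent

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)  # undo with the PUBLIC one

sig = sign(b"hello Bob")
print(verify(b"hello Bob", sig))   # True: only the private key could make sig
```

Anyone holding Alice's public exponent can run `verify`, but only the holder of `d` can produce a signature that passes it.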

15. Firewall

In computing, a firewall is a network security system that controls the incoming and outgoing
network traffic based on an applied rule set. A firewall establishes a barrier between a trusted,
secure internal network and another network (e.g., the Internet) that is assumed not to be secure
and trusted.[1] Firewalls exist both as a software solution and as a hardware appliance. Many
hardware-based firewalls also offer other functionality to the internal network they protect, such as
acting as a DHCP server for that network.

Many personal computer operating systems include software-based firewalls to protect against
threats from the public Internet. Many routers that pass data between networks contain firewall
components and, conversely, many firewalls can perform basic routing functions

16. ISDN

Integrated Services for Digital Network (ISDN) is a set of communication standards for
simultaneous digital transmission of voice, video, data, and other network services over the
traditional circuits of the public switched telephone network. It was first defined in 1988 in the
CCITT red book.[1] Prior to ISDN, the telephone system was viewed as a way to transport voice,
with some special services available for data. The key feature of ISDN is that it integrates speech
and data on the same lines, adding features that were not available in the classic telephone system.
There are several kinds of access interfaces to ISDN defined as Basic Rate Interface (BRI), Primary
Rate Interface (PRI), Narrowband ISDN (N-ISDN), and Broadband ISDN (B-ISDN).

ISDN is a circuit-switched telephone network system, which also provides access to packet
switched networks, designed to allow digital transmission of voice and data over ordinary
telephone copper wires, resulting in potentially better voice quality than an analog phone can
provide. It offers circuit-switched connections (for either voice or data), and packet-switched
connections (for data), in increments of 64 kilobit/s. A major market application for ISDN in some
countries is Internet access, where ISDN typically provides a maximum of 128 kbit/s in both
upstream and downstream directions. Channel bonding can achieve a greater data rate; typically
the ISDN B-channels of three or four BRIs (six to eight 64 kbit/s channels) are bonded.

ISDN should not be mistaken for its use with a specific protocol, such as Q.931 whereas ISDN is
employed as the network, data-link and physical layers in the context of the OSI model. In a broad
sense ISDN can be considered a suite of digital services existing on layers 1, 2, and 3 of the OSI
model. ISDN is designed to provide access to voice and data services simultaneously.

However, common use reduced ISDN to be limited to Q.931 and related protocols, which are a set
of protocols for establishing and breaking circuit switched connections, and for advanced calling
features for the user. They were introduced in 1986.[2]

In a videoconference, ISDN provides simultaneous voice, video, and text transmission between
individual desktop videoconferencing systems and group (room) videoconferencing systems.
2.2 Analog Electronics

1. Crossover distortion. How it can be minimized?

Crossover distortion is a type of distortion caused by switching between the devices driving a load, most often when the devices (such as transistors) are matched. It is most commonly seen in complementary, or "push-pull", Class-B amplifier stages, although it is occasionally seen in other types of circuits as well. It can be minimized by biasing both output devices slightly into conduction (Class-AB operation), so that neither transistor is fully off as the signal crosses zero.

Input-Output characteristic of a Class-B complementary emitter follower stage.

The term crossover signifies the "crossing over" of the signal between devices, in this case, from the
upper transistor to the lower and vice-versa. The term is not related to the audio crossover—a
filtering circuit which divides an audio signal into frequency bands.

2. CMRR in differential amplifier

The common-mode rejection ratio (CMRR) of a differential amplifier (or other device) is the
rejection by the device of unwanted input signals common to both input leads, relative to the
wanted difference signal. An ideal differential amplifier would have infinite CMRR; this is not
achievable in practice. A high CMRR is required when a differential signal must be amplified in the
presence of a possibly large common-mode input. An example is audio transmission over balanced
lines.
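Numerically, CMRR is the ratio of the differential gain to the common-mode gain, usually quoted in decibels. A small sketch with made-up gain figures:

```python
import math

def cmrr_db(a_diff, a_cm):
    """Common-mode rejection ratio in dB: 20 * log10(Ad / Acm)."""
    return 20 * math.log10(a_diff / a_cm)

# Assumed illustrative gains: differential gain 100 000, common-mode gain 0.1
print(round(cmrr_db(100_000, 0.1)))   # 120  (dB)
```

An ideal differential amplifier has Acm = 0 and hence infinite CMRR; the 120 dB figure above is typical of the order quoted for good op-amps.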

3. Characteristics of ideal OP-Amp

An ideal op-amp is usually considered to have the following properties:

 Infinite open-loop gain G = vout / vin.
 Infinite input impedance Rin, and so zero input current.
 Zero input offset voltage.
 Infinite voltage range available at the output.
 Infinite bandwidth with zero phase shift and infinite slew rate.

4. Concept of virtual ground.


A voltage divider, using two resistors, can be used to create a virtual ground node. If two voltage sources of opposite polarity are connected in series with two resistors, it can be shown that the midpoint becomes a virtual ground if the ratio of the source voltages equals the ratio of the resistances (V1/V2 = R1/R2).

Op-amp inverting amplifier

An active virtual ground circuit is sometimes called a rail splitter. Such a circuit uses an op-amp or some other circuit element that has gain. Since an operational amplifier has very high open-loop gain, the potential difference between its inputs tends to zero when a feedback network is implemented. To achieve a reasonable voltage at the output (and thus equilibrium in the system),
the output supplies the inverting input (via the feedback network) with enough voltage to reduce
the potential difference between the inputs to microvolts. The non-inverting (+) input of the
operational amplifier is grounded; therefore, its inverting (-) input, although not connected to
ground, will assume a similar potential, becoming a virtual ground if the opamp is working in its
linear region (i.e., outputs have not saturated).

5. How is a class B push-pull amplifier better than a class A push-pull amplifier.

To improve the full power efficiency of the previous Class A amplifier by reducing the wasted power
in the form of heat, it is possible to design the power amplifier circuit with two transistors in its
output stage producing what is commonly termed as a Class B Amplifier also known as a push-
pull amplifier configuration.

Push-pull amplifiers use two “complementary” or matching transistors, one being an NPN-type
and the other being a PNP-type with both power transistors receiving the same input signal
together that is equal in magnitude, but in opposite phase to each other. This results in one transistor amplifying only one half, or 180°, of the input waveform cycle while the other transistor amplifies the other half, or remaining 180°, of the input waveform cycle, with the resulting "two-halves" being put back together again at the output terminal.

Then the conduction angle for each transistor in this type of amplifier circuit is only 180°, or 50% of the input signal. This pushing and pulling effect of the alternating half cycles by the transistors gives this type of circuit its amusing "push-pull" name, but such circuits are more generally known as Class B amplifiers, with the two output half-cycles being combined to re-form the sine wave in the output transformer's primary winding, which then appears across the load.

Class B amplifier operation has zero DC bias, as the transistors are biased at cut-off, so each transistor only conducts when the input signal is greater than the base-emitter voltage. Therefore, at zero input there is zero output and no power is being consumed. This means that the actual Q-point of a Class B amplifier is on the Vce axis of the load line, as shown below.

Class B Output Characteristics Curves

The Class B amplifier has the big advantage over its Class A cousins that no current flows through the transistors when they are in their quiescent state (i.e. with no input signal). Each transistor conducts for only half of the waveform, greatly reducing the power dissipated. This has the effect of almost doubling the efficiency of the amplifier to around 70%.

Class B amplifiers are greatly preferred over Class A designs for high-power applications such as
audio power amplifiers and PA systems. Like the Class-A Amplifier circuit, one way to greatly boost
the current gain ( Ai ) of a Class B push-pull amplifier is to use Darlington transistors pairs instead
of single transistors in its output circuitry.
The Class B amplifier circuit above uses complementary transistors for each half of the waveform, and while Class B amplifiers have a much higher efficiency than the Class A types, one of the main disadvantages of Class B push-pull amplifiers is that they suffer from an effect known commonly as crossover distortion.
6. Maximum efficiency of class B amplifier.

The maximum theoretical efficiency of a Class B amplifier is π/4 ≈ 78.5%, reached when the output swings over the full supply range.

7. Tuned amplifier.

A tuned amplifier is an electronic amplifier which includes bandpass filtering components within the amplifier circuitry. They are widely used in all kinds of wireless applications.

8. Schimdt Trigger.

A Schmitt trigger is a comparator circuit with hysteresis, implemented by applying positive feedback to the noninverting input of a comparator or differential amplifier. It is an active circuit
which converts an analog input signal to a digital output signal. The circuit is named a "trigger"
because the output retains its value until the input changes sufficiently to trigger a change. In the
non-inverting configuration, when the input is higher than a certain chosen threshold, the output is
high. When the input is below a different (lower) chosen threshold, the output is low, and when the
input is between the two levels, the output retains its value. This dual threshold action is called
hysteresis and implies that the Schmitt trigger possesses memory and can act as a bistable circuit
(latch or flip-flop). There is a close relation between the two kinds of circuits: a Schmitt trigger can
be converted into a latch and a latch can be converted into a Schmitt trigger.

Schmitt trigger devices are typically used in signal conditioning applications to remove noise from
signals used in digital circuits, particularly mechanical switch bounce. They are also used in closed
loop negative feedback configurations to implement relaxation oscillators, used in function
generators and switching power supplies.
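The dual-threshold behaviour can be sketched in software; the threshold levels and sample values below are arbitrary illustrations:

```python
def schmitt(samples, v_low=1.0, v_high=2.0):
    """Non-inverting Schmitt trigger sketch: go high above v_high, low
    below v_low, and keep the previous state in between (hysteresis)."""
    state = 0
    out = []
    for v in samples:
        if v > v_high:
            state = 1
        elif v < v_low:
            state = 0
        # between the two thresholds the output retains its value (memory)
        out.append(state)
    return out

# A plain single-threshold comparator would chatter on this noisy input;
# the dual thresholds clean it up:
noisy = [0.5, 1.4, 2.1, 1.8, 1.1, 0.9, 1.3]
print(schmitt(noisy))   # [0, 0, 1, 1, 1, 0, 0]
```

Samples between the thresholds (1.4, 1.8, 1.1, 1.3) simply hold the last state, which is exactly the switch-debouncing property mentioned above.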

9. PLL

A phase-locked loop or phase lock loop (PLL) is a control system that generates an output
signal whose phase is related to the phase of an input signal. While there are several differing types,
it is easy to initially visualize as an electronic circuit consisting of a variable frequency oscillator
and a phase detector. The oscillator generates a periodic signal. The phase detector compares the
phase of that signal with the phase of the input periodic signal and adjusts the oscillator to keep the
phases matched. Bringing the output signal back toward the input signal for comparison is called a
feedback loop since the output is 'fed back' toward the input forming a loop.
10. Advantage and disadvantages of feedback.

(Negative feedback)

Advantages: better frequency response, less distortion, less gain and voltage drift, less temperature drift, better CMRR and SVRR, bias-point stability.

Disadvantages: less gain; possible oscillation if not properly designed.

(Positive feedback)

Advantages: high gain.

Disadvantages: very prone to oscillation; poor frequency response; more distortion; more drift.

11. Voltage follower.

This connection forces the op-amp to adjust its output voltage to be simply equal to the input voltage (Vout follows Vin, so the circuit is named an op-amp voltage follower). The importance of this circuit comes not from any change in voltage, but from the input and output impedances of the op-amp.

A voltage buffer amplifier is used to transfer a voltage from a first circuit, having a high output
impedance level, to a second circuit with a low input impedance level. The interposed buffer
amplifier prevents the second circuit from loading the first circuit unacceptably and interfering
with its desired operation. In the ideal voltage buffer in the diagram, the input resistance is infinite,
the output resistance zero (impedance of an ideal voltage source is zero). Other properties of the
ideal buffer are: perfect linearity, regardless of signal amplitudes; and instant output response,
regardless of the speed of the input signal.

If the voltage is transferred unchanged (the voltage gain Av is 1), the amplifier is a unity gain
buffer; also known as a voltage follower because the output voltage follows or tracks the input
voltage. Although the voltage gain of a voltage buffer amplifier may be (approximately) unity, it
usually provides considerable current gain and thus power gain. However, it is commonplace to say
that it has a gain of 1 (or the equivalent 0 dB), referring to the voltage gain.

As an example, consider a Thévenin source (voltage VA, series resistance RA) driving a resistor load
RL. Because of voltage division (also referred to as "loading") the voltage across the load is only VA
RL / ( RL + RA ). However, if the Thévenin source drives a unity gain buffer such as that in Figure 1 (top, with unity gain), the voltage input to the amplifier is VA, with no voltage division, because the amplifier input resistance is infinite. At the output the dependent voltage source delivers
voltage Av VA = VA to the load, again without voltage division because the output resistance of the
buffer is zero. A Thévenin equivalent circuit of the combined original Thévenin source and the
buffer is an ideal voltage source VA with zero Thévenin resistance.
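The loading example above can be checked numerically; the component values below are assumed for illustration:

```python
# Numeric check of the loading example, with assumed values:
# a 10 V Thevenin source with RA = 1 kOhm driving a 1 kOhm load RL.
VA, RA, RL = 10.0, 1e3, 1e3

v_direct = VA * RL / (RL + RA)   # plain voltage division: half the voltage is lost
v_buffered = VA                  # ideal unity-gain buffer: the load sees all of VA
print(v_direct, v_buffered)      # 5.0 10.0
```

With equal source and load resistances, direct connection delivers only half of VA, while the ideal buffer delivers all of it, which is the whole point of interposing the follower.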

12. Voltage regulator.

A voltage regulator is designed to automatically maintain a constant voltage level. A voltage regulator may be a simple "feed-forward" design or may include negative feedback control loops. It may use an electromechanical mechanism, or electronic components.
Electronic voltage regulators are found in devices such as computer power supplies where they
stabilize the DC voltages used by the processor and other elements. In automobile alternators and
central power station generator plants, voltage regulators control the output of the plant. In an
electric power distribution system, voltage regulators may be installed at a substation or along
distribution lines so that all customers receive steady voltage independent of how much power is
drawn from the line.

13. Bridge rectifier.

A bridge rectifier is an arrangement of four or more diodes in a bridge circuit configuration which
provides the same output polarity for either input polarity. It is used for converting an alternating
current (AC) input into a direct current (DC) output. A bridge rectifier provides full-wave
rectification from a two-wire AC input, therefore resulting in lower weight and cost when compared
to a rectifier with a 3-wire input from a transformer with a center-tapped secondary winding.
14. Wien-Bridge oscillator

A Wien bridge oscillator is a type of electronic oscillator that generates sine waves. It can
generate a large range of frequencies. The oscillator is based on a bridge circuit originally
developed by Max Wien in 1891.[1] The bridge comprises four resistors and two capacitors. The
oscillator can also be viewed as a positive gain amplifier combined with a bandpass filter that
provides positive feedback.

The modern circuit is derived from William Hewlett's 1939 Stanford University master's degree
thesis. Hewlett figured out how to make the oscillator with a stable output amplitude and low
distortion.[2][3] Hewlett, along with David Packard, co-founded Hewlett-Packard, and Hewlett-
Packard's first product was the HP200A, a precision Wien bridge oscillator.

The frequency of oscillation, for R1 = R2 = R and C1 = C2 = C, is given by:

f = 1 / (2πRC)

In this version of the oscillator, Rb is a small incandescent lamp. Usually R1 = R2 = R and C1 = C2 = C. In normal operation, Rb self-heats to the point where its resistance is Rf/2.
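Assuming the usual equal-component formula f = 1/(2πRC), a quick numeric sketch (the R and C values are chosen arbitrarily):

```python
import math

def wien_frequency(R, C):
    """Oscillation frequency of a Wien bridge with R1 = R2 = R, C1 = C2 = C."""
    return 1 / (2 * math.pi * R * C)

# Assumed component values: R = 10 kOhm, C = 16 nF -> close to 1 kHz
f = wien_frequency(10e3, 16e-9)
print(round(f))   # 995  (Hz)
```

Because f depends on the RC product, a ganged variable capacitor or resistor pair tunes the oscillator over the wide frequency range mentioned above.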

15. Early effect.

The Early effect is the variation in the width of the base in a bipolar junction transistor (BJT) due
to a variation in the applied base-to-collector voltage, named after its discoverer James M. Early. A
greater reverse bias across the collector–base junction, for example, increases the collector–base
depletion width, decreasing the width of the charge neutral portion of the base.

In Figure 1 the neutral (i.e. active) base is green, and the depleted base regions are hashed light
green. The neutral emitter and collector regions are dark blue and the depleted regions hashed
light blue. Under increased collector–base reverse bias, the lower panel of Figure 1 shows a
widening of the depletion region in the base and the associated narrowing of the neutral base
region.

The collector depletion region also increases under reverse bias, more than does that of the base,
because the collector is less heavily doped. The principle governing these two widths is charge
neutrality. The narrowing of the collector does not have a significant effect as the collector is much
longer than the base. The emitter–base junction is unchanged because the emitter–base voltage is
the same.

Figure 2. The Early voltage (VA) as seen in the output-characteristic plot of a BJT.

Base-narrowing has two consequences that affect the current:

 There is a lesser chance for recombination within the "smaller" base region.
 The charge gradient is increased across the base, and consequently, the current of minority
carriers injected across the emitter junction increases.

Both these factors increase the collector or "output" current of the transistor with an increase in the
collector voltage. This increased current is shown in Figure 2. Tangents to the characteristics at
large voltages extrapolate backward to intercept the voltage axis at a voltage called the Early
voltage, often denoted by the symbol VA.

16. Schottky diode.

The Schottky diode is what is called a majority carrier device. This gives it tremendous
advantages in terms of speed. By making the devices small, the normal RC (resistance-
capacitance) type time constants can be reduced, making the Schottky diode an order of
magnitude faster than the conventional PN diodes. This factor is the prime reason why they are
so popular in RF applications.
The Schottky diode also has a much higher current density than an ordinary PN junction. This
means that forward-voltage drops are lower, making these diodes ideal for use in power-
rectification applications. The main drawback of the diode is found in the level of its reverse
current, which is relatively high. For many uses this may not be a problem, but it is a factor
which is worth watching when using Schottky diodes in more exacting applications.

17. Emitter follower.

In electronics, a common collector amplifier (also known as an emitter follower) is one of three basic single-stage bipolar junction transistor (BJT) amplifier topologies, typically used as a voltage buffer.
In a common-collector amplifier circuit, the output voltage at the emitter terminal follows the
applied input signal at the base terminal. Therefore, a common-collector amplifier circuit is called
emitter follower.

18. Low and High pass filters.

A high-pass filter is an electronic filter that passes signals with a frequency higher than a certain
cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency. The
amount of attenuation for each frequency depends on the filter design. A high-pass filter is usually
modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-cut
filter.[1] High-pass filters have many uses, such as blocking DC from circuitry sensitive to non-zero
average voltages or radio frequency devices. They can also be used in conjunction with a low-pass
filter to produce a bandpass filter.

Figure 1: A passive, analog, first-order high-pass filter, realized by an RC circuit

Figure 2: An active high-pass filter

A low-pass filter is a filter that passes signals with a frequency lower than a certain cutoff
frequency and attenuates signals with frequencies higher than the cutoff frequency. The amount of
attenuation for each frequency depends on the filter design. The filter is sometimes called a high-
cut filter, or treble cut filter in audio applications. A low-pass filter is the opposite of a high-
pass filter. A band-pass filter is a combination of a low-pass and a high-pass filter.

Low-pass filters exist in many different forms, including electronic circuits (such as a hiss filter
used in audio), anti-aliasing filters for conditioning signals prior to analog-to-digital conversion,
digital filters for smoothing sets of data, acoustic barriers, blurring of images, and so on. The
moving average operation used in fields such as finance is a particular kind of low-pass filter, and
can be analyzed with the same signal processing techniques as are used for other low-pass filters.
Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations, and
leaving the longer-term trend.

An optical filter can correctly be called a low-pass filter, but conventionally is called a longpass
filter (low frequency is long wavelength), to avoid confusion.

An active low-pass filter


A simple low-pass RC filter

21.Transistor Biasing

Transistor Biasing is the process of setting a transistor's DC operating voltage or current conditions to the correct level so that any AC input signal can be amplified correctly by the transistor. A transistor's steady state of operation depends a great deal on its base current, collector voltage, and collector current; therefore, if a transistor is to operate as a linear amplifier, it must be properly biased to have a suitable operating point.

Establishing the correct operating point requires the proper selection of bias resistors and load resistors to provide the appropriate input current and collector voltage conditions. The correct biasing point for a bipolar transistor, either NPN or PNP, generally lies somewhere between the two extremes of operation with respect to it being either "fully-ON" or "fully-OFF" along its load line. This central operating point is called the "Quiescent Operating Point", or Q-point for short.

When a bipolar transistor is biased so that the Q-point is near the middle of its operating range,
that is approximately halfway between cut-off and saturation, it is said to be operating as a Class-A
amplifier. This mode of operation allows the output current to increase and decrease around the
amplifier's Q-point without distortion as the input signal swings through a complete cycle. In other words, the output current flows for the full 360° of the input cycle.

So how do we set this Q-point biasing of a transistor? The correct biasing of the transistor is achieved using a process known commonly as Base Bias.

The function of the "DC bias level", or "no input signal level", is to correctly set the transistor's Q-point by setting its collector current ( IC ) to a constant and steady-state value without an input signal applied to the transistor's base.

This steady-state or DC operating point is set by the value of the circuit's DC supply voltage ( Vcc ) and the value of the biasing resistors connected to the transistor's base terminal.

Since the transistor's base bias currents are steady-state DC currents, the appropriate use of coupling and bypass capacitors will help prevent the bias conditions set up for one transistor stage from affecting those of the next. Base bias networks can be used for common-base (CB), common-collector (CC) or common-emitter (CE) transistor configurations. In this simple transistor biasing tutorial we will look at the different biasing arrangements available for a common emitter amplifier.

Voltage Divider Transistor Biasing


The common emitter transistor is biased using a voltage divider network. The name of this biasing configuration comes from the fact that the two resistors RB1 and RB2 are connected to the transistor's base terminal across the supply.

This voltage divider configuration is the most widely used transistor biasing method, as the emitter diode of the transistor is forward biased by the voltage dropped across resistor RB2. Also, voltage divider network biasing makes the transistor circuit independent of changes in beta, as the voltages at the transistor's base, emitter, and collector are dependent on external circuit values.

To calculate the voltage developed across resistor RB2 and therefore the voltage applied to the base
terminal we simply use the voltage divider formula for resistors in series. The current flowing
through resistor RB2 is generally set at 10 times the value of the required base current IB so that it
has no effect on the voltage divider current or changes in Beta.
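The divider formula can be sketched numerically. The supply and resistor values below are assumed for illustration, along with the usual Vbe ≈ 0.7 V base-emitter drop:

```python
# Voltage-divider bias, sketched with assumed values: Vcc = 12 V,
# RB1 = 47 kOhm (top resistor), RB2 = 10 kOhm (bottom resistor),
# and the usual Vbe ~ 0.7 V base-emitter drop.
Vcc, RB1, RB2 = 12.0, 47e3, 10e3

VB = Vcc * RB2 / (RB1 + RB2)        # series voltage-divider formula from the text
VE = VB - 0.7                       # emitter sits one diode drop below the base
print(round(VB, 2), round(VE, 2))   # 2.11 1.41
```

The base voltage is set almost entirely by RB1 and RB2, not by beta, which is why this configuration gives the stable Q-point described above.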

The goal of Transistor Biasing is to establish a known Q-point in order for the transistor to work
efficiently and produce an undistorted output signal. Correct biasing of the transistor also
establishes its initial AC operating region with practical biasing circuits using either a two or four-
resistor bias network.

In bipolar transistor circuits, the Q-point is represented by ( VCE, IC ) for the NPN transistors or
( VEC, IC ) for PNP transistors. The stability of the base bias network and therefore the Q-point is
generally assessed by considering the collector current as a function of both Beta (β) and
temperature.

Here we have looked briefly at different configurations for biasing a transistor using resistive networks. But we can also bias a transistor using silicon diodes, zener diodes or active networks, all connected to the base terminal of the transistor, or by biasing the transistor from a dual power supply.

22.Darlington Connection

A Darlington pair is two transistors that act as a single transistor but with a much higher current gain. This means that a tiny amount of current from a sensor, micro-controller or similar can be used to drive a larger load. An example circuit is shown below:
The Darlington Pair can be made from two transistors as shown in the diagram or Darlington Pair
transistors are available where the two transistors are contained within the same package.

What is current gain?

Transistors have a characteristic called current gain. This is referred to as its hFE.

The amount of current that can pass through the load in the circuit above when the transistor is
turned on is:

Load current = input current x transistor gain (hFE)

The current gain varies for different transistors and can be looked up in the data sheet for the
device. For a normal transistor this would typically be about 100. This would mean that the current
available to drive the load would be 100 times larger than the input to the transistor.

Why use a Darlington Pair?

In some applications the amount of input current available to switch on a transistor is very low.
This may mean that a single transistor may not be able to pass sufficient current required by the
load.

As stated earlier this equals the input current x the gain of the transistor (hFE). If it is not possible
to increase the input current then the gain of the transistor will need to be increased. This can be
achieved by using a Darlington Pair.

A Darlington Pair acts as one transistor but with a current gain that equals:

Total current gain (hFE total) = current gain of transistor 1 (hFE t1) x current gain of transistor 2
(hFE t2)

So for example if you had two transistors with a current gain (hFE) = 100:

(hFE total) = 100 x 100

(hFE total) = 10,000

You can see that this gives a vastly increased current gain when compared to a single transistor.
Therefore this will allow a very low input current to switch a much bigger load current.
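As a quick numeric check, the gain relationships above can be sketched in Python; the 0.5 mA input current is an illustrative value, not taken from the text:

```python
def darlington_gain(hfe1, hfe2):
    """Total current gain of a Darlington pair: the product of the two gains."""
    return hfe1 * hfe2

def load_current(input_current, gain):
    """Load current = input current x transistor gain (hFE)."""
    return input_current * gain

total = darlington_gain(100, 100)   # two typical transistors, hFE = 100 each
print(total)                        # 10000
print(load_current(0.5e-3, total))  # 5.0 -- 0.5 mA of input current drives 5 A
```

This is why a source that can only supply a fraction of a milliamp can still switch a load drawing several amps through a Darlington pair.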
23.Half and full wave rectifier.
A rectifier is an electrical device that converts alternating current (AC), which periodically
reverses direction, to direct current (DC), which flows in only one direction. The process is known
as rectification.

Rectifier circuits may be single-phase or multi-phase (three being the most common number of
phases). Most low power rectifiers for domestic equipment are single-phase, but three-phase
rectification is very important for industrial applications and for the transmission of energy as DC
(HVDC).

Single-phase rectifiers

Half-wave rectification

In half wave rectification of a single-phase supply, either the positive or negative half of the AC
wave is passed, while the other half is blocked. Because only one half of the input waveform reaches
the output, mean voltage is lower. Half-wave rectification requires a single diode in a single-phase
supply, or three in a three-phase supply. Rectifiers yield a unidirectional but pulsating direct
current; half-wave rectifiers produce far more ripple than full-wave rectifiers, and much more
filtering is needed to eliminate harmonics of the AC frequency from the output.

Half-wave rectifier

The no-load output DC voltage of an ideal half-wave rectifier for a sinusoidal input voltage is:[2]

Vdc = Vav = Vpeak / π        Vrms = Vpeak / 2

Where:

Vdc, Vav - the DC or average output voltage,
Vpeak - the peak value of the phase input voltages,
Vrms - the root-mean-square value of output voltage.
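Assuming the standard ideal half-wave relations Vdc = Vpeak/π and Vrms = Vpeak/2, a short numeric sketch (the 325 V peak, roughly a 230 V RMS mains sine wave, is an illustrative value):

```python
import math

def half_wave_vdc(v_peak):
    """Average (DC) no-load output of an ideal half-wave rectifier: Vpeak / pi."""
    return v_peak / math.pi

def half_wave_vrms(v_peak):
    """RMS no-load output of an ideal half-wave rectifier: Vpeak / 2."""
    return v_peak / 2

v_peak = 325.0  # peak of a ~230 V RMS mains sine wave, for illustration
print(round(half_wave_vdc(v_peak), 1))   # 103.5
print(round(half_wave_vrms(v_peak), 1))  # 162.5
```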

Full-wave rectification

A full-wave rectifier converts the whole of the input waveform to one of constant polarity (positive
or negative) at its output. Full-wave rectification converts both polarities of the input waveform to
pulsating DC (direct current), and yields a higher average output voltage. Two diodes and a center
tapped transformer, or four diodes in a bridge configuration and any AC source (including a
transformer without center tap), are needed.[3] Single semiconductor diodes, double diodes with
common cathode or common anode, and four-diode bridges, are manufactured as single
components.
Graetz bridge rectifier: a full-wave rectifier using 4 diodes

For single-phase AC, if the transformer is center-tapped, then two diodes back-to-back (cathode-
to-cathode or anode-to-anode, depending upon output polarity required) can form a full-wave
rectifier. Twice as many turns are required on the transformer secondary to obtain the same output
voltage than for a bridge rectifier, but the power rating is unchanged.

Full-wave rectifier using a center-tap transformer and 2 diodes.

Full-wave rectifier, with vacuum tube having two anodes

The average and root-mean-square no-load output voltages of an ideal single-phase full-wave
rectifier are:

Vdc = Vav = 2·Vpeak / π        Vrms = Vpeak / √2

Very common double-diode rectifier vacuum tubes contained a single common cathode and two
anodes inside a single envelope, achieving full-wave rectification with positive output. The 5U4 and
5Y3 were popular examples of this configuration.
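For comparison with the half-wave case, the ideal full-wave values Vdc = 2·Vpeak/π and Vrms = Vpeak/√2 can be checked numerically (the 10 V peak is assumed for illustration):

```python
import math

def full_wave_vdc(v_peak):
    """Average (DC) no-load output of an ideal full-wave rectifier: 2*Vpeak / pi."""
    return 2 * v_peak / math.pi

def full_wave_vrms(v_peak):
    """RMS no-load output of an ideal full-wave rectifier: Vpeak / sqrt(2)."""
    return v_peak / math.sqrt(2)

print(round(full_wave_vdc(10.0), 3))   # 6.366
print(round(full_wave_vrms(10.0), 3))  # 7.071
```

The average output is exactly twice that of the half-wave circuit, which is the higher average output voltage mentioned above.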

Three-phase rectifiers

24.Varactor diode.
Varactor diodes or varicap diodes are semiconductor devices that are widely used in the electronics
industry and are used in many applications where a voltage controlled variable capacitance is
required. Although the terms varactor diode and varicap diode can be used interchangeably, the
more common term these days is the varactor diode.

Although ordinary PN junction diodes exhibit the variable capacitance effect and can be used for
this application, special diodes are optimised to give the required changes in capacitance.
Varactor diodes or varicap diodes normally enable much higher ranges of capacitance change to be
gained as a result of the way in which they are manufactured. There are a variety of types of
varactor diode ranging from relatively standard varieties to those that are described as abrupt or
hyperabrupt varactor diodes.

Varactor diode basics

The varactor diode or varicap diode consists of a standard PN junction, although it is obviously
optimised for its function as a variable capacitor. In fact ordinary PN junction diodes can be used
as varactor diodes, even if their performance is not to the same standard as specially manufactured
varactors.

The basis of operation of the varactor diode is quite simple. The diode is operated under reverse
bias conditions and this gives rise to three regions. At either end of the diode are the P and N
regions where current can be conducted. However, around the junction is the depletion region
where no current carriers are available. As a result, current can be carried in the P and N regions,
but the depletion region is an insulator.

This is exactly the same construction as a capacitor. It has conductive plates separated by an
insulating dielectric.

The capacitance of a capacitor is dependent on a number of factors including the plate area, the
dielectric constant of the insulator between the plates and the distance between the two plates. In
the case of the varactor diode, it is possible to increase and decrease the width of the depletion
region by changing the level of the reverse bias. This has the effect of changing the distance
between the plates of the capacitor.
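The depletion-width effect can be illustrated with the textbook junction-capacitance model C = C0/(1 + Vr/φ)^γ; the component values below are assumptions for illustration, not data from any particular varactor:

```python
def varactor_capacitance(v_reverse, c0=30e-12, phi=0.7, gamma=0.5):
    """Textbook junction-capacitance model: C = C0 / (1 + Vr/phi)^gamma.

    c0    -- zero-bias capacitance (30 pF here, an illustrative value)
    phi   -- built-in junction potential (~0.7 V for silicon)
    gamma -- grading coefficient: ~0.5 abrupt, up to ~2 for hyperabrupt diodes
    """
    return c0 / (1 + v_reverse / phi) ** gamma

# Increasing reverse bias widens the depletion region, so capacitance falls:
for vr in (0.0, 1.0, 4.0, 9.0):
    print(vr, "V ->", round(varactor_capacitance(vr) * 1e12, 1), "pF")
```

A larger grading coefficient (hyperabrupt junction) gives a steeper capacitance-voltage curve, which is why those varieties achieve wider tuning ranges.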

25.Zener diode
A Zener diode is a diode which allows current to flow in the forward direction in the same
manner as an ideal diode, but also permits it to flow in the reverse direction when the voltage is
above a certain value known as the breakdown voltage, "Zener knee voltage", "Zener voltage",
"avalanche point", or "peak inverse voltage".

26.Avalanche and Zener breakdown.

Zener breakdown
In Zener breakdown the electrostatic attraction between the negative electrons and a large positive
voltage is so great that it pulls electrons out of their covalent bonds and away from their parent
atoms, i.e. electrons are transferred from the valence to the conduction band. In this situation the
current can still be limited by the limited number of free electrons produced by the applied voltage
so it is possible to cause Zener breakdown without damaging the semiconductor.

Avalanche breakdown
Avalanche breakdown occurs when the applied voltage is so large that electrons that are pulled
from their covalent bonds are accelerated to great velocities. These electrons collide with the silicon
atoms and knock off more electrons. These electrons are then also accelerated and subsequently
collide with other atoms. Each collision produces more electrons which leads to more collisions etc.
The current in the semiconductor rapidly increases and the material can quickly be destroyed.

27.Direct and Indirect band gap semiconductors

The band gap represents the minimum energy difference between the top of the valence band and
the bottom of the conduction band, However, the top of the valence band and the bottom of the
conduction band are not generally at the same value of the electron momentum. In a direct band
gap semiconductor, the top of the valence band and the bottom of the conduction band occur at
the same value of momentum, as in the schematic below.

In an indirect band gap semiconductor, the maximum energy of the valence band occurs at a
different value of momentum to the minimum in the conduction band energy:

The difference between the two is most important in optical devices. As has been mentioned in the
section charge carriers in semiconductors, a photon can provide the energy to produce an electron-
hole pair.

Each photon of energy E has momentum p = E/c, where c is the velocity of light. An optical photon
has an energy of the order of 10^-19 J, and, since c = 3 × 10^8 m s^-1, a typical photon has a very
small amount of momentum.
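Plugging the representative numbers above into p = E/c:

```python
E = 3.0e-19  # energy of a visible photon in joules, of order 10^-19 J
c = 3.0e8    # velocity of light in m/s
p = E / c    # photon momentum
print(p)     # 1e-27 kg m/s -- negligible on the scale of crystal momenta
```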
A photon of energy Eg, where Eg is the band gap energy, can produce an electron-hole pair in a
direct band gap semiconductor quite easily, because the electron does not need to be given very
much momentum. However, an electron must also undergo a significant change in its momentum
for a photon of energy Eg to produce an electron-hole pair in an indirect band gap semiconductor.
This is possible, but it requires such an electron to interact not only with the photon to gain energy,
but also with a lattice vibration called a phonon in order to either gain or lose momentum.

The indirect process proceeds at a much slower rate, as it requires three entities to interact in
order to proceed: an electron, a photon and a phonon. This is analogous to chemical reactions,
where, in a particular reaction step, a reaction between two molecules will proceed at a much
greater rate than a process which involves three molecules.

The same principle applies to recombination of electrons and holes to produce photons. The
recombination process is much more efficient for a direct band gap semiconductor than for an
indirect band gap semiconductor, where the process must be mediated by a phonon.

As a result of such considerations, gallium arsenide and other direct band gap semiconductors are
used to make optical devices such as LEDs and semiconductor lasers, whereas silicon, which is an
indirect band gap semiconductor, is not. The table in the next section lists a number of different
semiconducting compounds and their band gaps, and it also specifies whether their band gaps are
direct or indirect.

28.Class AB amplifier

We know that we need the base-emitter voltage to be greater than 0.7v for a silicon bipolar
transistor to start conducting, so if we were to replace the two voltage divider biasing resistors
connected to the base terminals of the transistors with two silicon diodes, the biasing voltage
applied to the transistors would now be equal to the forward voltage drop of the diode. These two
diodes are generally called Biasing Diodes or Compensating Diodes and are chosen to match
the characteristics of the matching transistors. The circuit below shows diode biasing.

Class AB Amplifier
The Class AB Amplifier circuit is a compromise between the Class A and the Class B
configurations. This very small diode biasing voltage causes both transistors to slightly conduct
even when no input signal is present. An input signal waveform will cause the transistors to operate
as normal in their active region thereby eliminating any crossover distortion present in pure Class
B amplifier designs.

A small collector current will flow when there is no input signal but it is much less than that for the
Class A amplifier configuration. This means then that the transistor will be “ON” for more than half
a cycle of the waveform but much less than a full cycle, giving a conduction angle of between 180°
and 360°, or 50 to 100% of the input signal, depending upon the amount of additional biasing used. The
amount of diode biasing voltage present at the base terminal of the transistor can be increased in
multiples by adding additional diodes in series.

2.3 Digital Electronics

1. Principal uses of laser. Quantum noise.

A laser is a device that emits light through a process of optical amplification based on the
stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for
"light amplification by stimulated emission of radiation".[1][2] A laser differs from other
sources of light because it emits light coherently. Spatial coherence allows a laser to be focused to a
tight spot, enabling applications like laser cutting and lithography. Spatial coherence also allows a
laser beam to stay narrow over long distances (collimation), enabling applications such as laser
pointers. Lasers can also have high temporal coherence which allows them to have a very narrow
spectrum, i.e., they only emit a single color of light. Temporal coherence can be used to produce
pulses of light—as short as a femtosecond.
Lasers have many important applications. They are used in common consumer devices such as
optical disk drives, laser printers, and barcode scanners. Lasers are used for both fiber-optic and
free-space optical communication. They are used in medicine for laser surgery and various skin
treatments, and in industry for cutting and welding materials. They are used in military and law
enforcement devices for marking targets and measuring range and speed. Laser lighting displays
use laser light as an entertainment medium.

Quantum noise refers to the uncertainty of a physical quantity, due to its quantum origin. In
certain situations, quantum noise appears as shot noise; for example, most optical communications
use amplitude modulation, and thus, the quantum noise appears as shot noise only. For the case of
uncertainty in the electric field in some lasers, the quantum noise is not just shot noise;
uncertainties of both amplitude and phase contribute to the quantum noise. This issue becomes
important in the case of noise of a quantum amplifier, which preserves the phase. The phase noise
becomes important when the energy of the frequency modulation or phase modulation of waves is
comparable to the energy of the signal (which is believed to be more robust with respect to additive
noise than an amplitude modulation).

2. Different types of losses in an optical fibre.

An optical fiber (or optical fibre) is a flexible, transparent fiber made of extruded glass (silica)
or plastic, slightly thicker than a human hair. It can function as a waveguide, or “light pipe”,[1] to
transmit light between the two ends of the fiber.[2] The field of applied science and engineering
concerned with the design and application of optical fibers is known as fiber optics.

Optical fibers are widely used in fiber-optic communications, where they permit transmission over
longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead of
metal wires because signals travel along them with less loss and are also immune to
electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so
that they may be used to carry images, thus allowing viewing in confined spaces. Specially designed
fibers are used for a variety of other applications, including sensors and fiber lasers.

Optical fibers typically include a transparent core surrounded by a transparent cladding material
with a lower index of refraction. Light is kept in the core by total internal reflection. This causes the
fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are
called multi-mode fibers (MMF), while those that only support a single mode are called single-
mode fibers (SMF). Multi-mode fibers generally have a wider core diameter, and are used for
short-distance communication links and for applications where high power must be transmitted.
Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).

3. SSB.

In radio communications, Single-SideBand modulation (SSB) or Single-SideBand


Suppressed-Carrier (SSB-SC) is a refinement of amplitude modulation which uses transmitter
power and bandwidth more efficiently. Amplitude modulation produces an output signal that has
twice the bandwidth of the original baseband signal. Single-sideband modulation avoids this
bandwidth doubling, and the power wasted on a carrier, at the cost of increased device complexity
and more difficult tuning at the receiver.

4. Antenna

An antenna (or aerial) is an electrical device which converts electric power into radio waves, and
vice versa.[1] It is usually used with a radio transmitter or radio receiver. In transmission, a radio
transmitter supplies an electric current oscillating at radio frequency (i.e. a high frequency
alternating current (AC)) to the antenna's terminals, and the antenna radiates the energy from the
current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the
power of an electromagnetic wave in order to produce a tiny voltage at its terminals, that is applied
to a receiver to be amplified.

Antennas are essential components of all equipment that uses radio. They are used in systems such
as radio broadcasting, broadcast television, two-way radio, communications receivers, radar, cell
phones, and satellite communications, as well as other devices such as garage door openers,
wireless microphones, Bluetooth-enabled devices, wireless computer networks, baby monitors, and
RFID tags on merchandise.

5. Skip Distance MUF & Fading

A skip distance is the distance a radio wave travels, usually including a hop in the ionosphere. A
skip distance is a distance on the Earth's surface between the two points where radio waves from a
transmitter, refracted downwards by different layers of the ionosphere, fall. It also represents how
far a radio wave has travelled per hop on the Earth's surface, for radio waves such as the short wave
(SW) radio signals that employ continuous reflections for transmission.

MUF

In radio transmission maximum usable frequency (MUF) is the highest radio frequency that
can be used for transmission between two points via reflection from the ionosphere ( skywave or
"skip" propagation) at a specified time, independent of transmitter power. This index is especially
useful in regard to shortwave transmissions.

In shortwave radio communication, a major mode of long distance propagation is for the radio
waves to reflect off the ionized layers of the atmosphere and return diagonally back to Earth. In this
way radio waves can travel beyond the horizon, around the curve of the Earth. However the
refractive index of the ionosphere decreases with increasing frequency, so there is an upper limit to
the frequency which can be used. Above this frequency the radio waves are not reflected by the
ionosphere but are transmitted through it into space.

On a given day, communications may or may not succeed at the MUF. Commonly, the optimal
operating frequency for a given path is estimated at 80 to 90% of the MUF. As a rule of thumb the
MUF is approximately 3 times the critical frequency.[1]

MUF = fcritical / cos θ [2]

where the critical frequency is the highest frequency reflected for a signal propagating directly
upward and θ is the angle of incidence at the ionospheric layer.
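A sketch of the secant-law relationship MUF = fcritical/cos θ, which reproduces the "roughly 3 times" rule of thumb at grazing incidence angles; the 7 MHz critical frequency is an assumed value:

```python
import math

def muf(critical_frequency_hz, incidence_angle_deg):
    """Secant law: MUF = f_c / cos(theta), theta measured from the vertical."""
    return critical_frequency_hz / math.cos(math.radians(incidence_angle_deg))

f_c = 7e6  # assumed 7 MHz critical frequency
print(round(muf(f_c, 0) / 1e6, 1))     # 7.0 -- vertical incidence, MUF = f_c
print(round(muf(f_c, 70.5) / 1e6, 1))  # ~21 -- close to the 3x rule of thumb
```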

Fading
In wireless communications, fading is deviation of the attenuation affecting a signal over certain
propagation media. The fading may vary with time, geographical position or radio frequency, and is
often modeled as a random process. A fading channel is a communication channel that
experiences fading. In wireless systems, fading may either be due to multipath propagation,
referred to as multipath induced fading, or due to shadowing from obstacles affecting the wave
propagation, sometimes referred to as shadow fading.
6. Satellite Communication.
In satellite communication, signal transfer between the sender and receiver is done with the
help of satellite. In this process, the signal which is basically a beam of modulated microwaves is
sent towards the satellite. Then the satellite amplifies the signal and sent it back to the receiver’s
antenna present on the earth’s surface. So, all the signal transferring is happening in space. Thus,
this type of communication is known as space communication.

The signals coming from earth are received by the transponders attached to the satellite. Then,
after amplification, these signals are transmitted back to earth. This sending can be done at the
same time or after some delay: the amplified signals are stored in the memory of the satellite, and
when earth properly faces the satellite, the satellite starts sending the signals to earth. Some
active satellites also have programming and recording features, so these recordings can be easily
played and watched. The first active satellite was launched by Russia in 1957. The signals coming
from the satellite, when they reach the earth, are of very low intensity. Their amplification is
done by the receivers themselves. After amplification these become available for further use.

Microwave communication is possible only if the position of satellite becomes stationary with
respect to the position of earth. So, these types of satellites are known as geostationary
satellites.

7. Superheterodyne receiver.

In electronics, a superheterodyne receiver (often shortened to superhet) uses frequency


mixing to convert a received signal to a fixed intermediate frequency (IF) which can be more
conveniently processed than the original radio carrier frequency. It was invented by US engineer
Edwin Armstrong in 1918 during World War I. Virtually all modern radio receivers use the
superheterodyne principle. At the cost of an extra frequency converter stage, the superheterodyne
receiver provides superior selectivity and sensitivity compared with simpler designs.
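The frequency mixing at the heart of the superhet can be sketched as follows; the 1000 kHz station and 1455 kHz local oscillator are assumed example values for the common 455 kHz AM intermediate frequency:

```python
def intermediate_frequencies(f_rf, f_lo):
    """Mixing produces the sum and difference of the two frequencies;
    the IF filter then keeps one of them (usually the difference)."""
    return (f_lo + f_rf, abs(f_lo - f_rf))

f_sum, f_if = intermediate_frequencies(1000e3, 1455e3)
print(f_if)  # 455000.0 -- every station lands on the same fixed IF
```

Retuning the receiver only moves the local oscillator; the IF stages can then be optimised once for selectivity and sensitivity, which is where the design's advantage comes from.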
8. TDM. FDM.

TDM
Time-division multiplexing (TDM) is a method of transmitting and receiving independent
signals over a common signal path by means of synchronized switches at each end of the
transmission line so that each signal appears on the line only a fraction of time in an alternating
pattern.

TDM versus packet-mode communication

In its primary form, TDM is used for circuit mode communication with a fixed number of channels
and constant bandwidth per channel.

Bandwidth reservation distinguishes time-division multiplexing from statistical multiplexing such


as statistical time division multiplexing i.e. the time slots are recurrent in a fixed order and pre-
allocated to the channels, rather than scheduled on a packet-by-packet basis.
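The fixed, recurrent slot ordering described above can be sketched with a round-robin interleaver; the channel contents here are arbitrary labels:

```python
def tdm_multiplex(channels):
    """Interleave one sample per channel per frame: slots are recurrent
    and pre-allocated in a fixed order, as in circuit-mode TDM."""
    line = []
    for t in range(len(channels[0])):
        for ch in channels:
            line.append(ch[t])
    return line

def tdm_demultiplex(line, n_channels):
    """The synchronized switch at the far end: every n-th slot belongs
    to the same channel."""
    return [line[i::n_channels] for i in range(n_channels)]

a, b, c = ["a0", "a1"], ["b0", "b1"], ["c0", "c1"]
muxed = tdm_multiplex([a, b, c])
print(muxed)                      # ['a0', 'b0', 'c0', 'a1', 'b1', 'c1']
print(tdm_demultiplex(muxed, 3))  # [['a0', 'a1'], ['b0', 'b1'], ['c0', 'c1']]
```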

FDM

In telecommunications, frequency-division multiplexing (FDM) is a technique by which the


total bandwidth available in a communication medium is divided into a series of non-overlapping
frequency sub-bands, each of which is used to carry a separate signal.

9. Cellular Telephone network.

A cellular network or mobile network is a wireless network distributed over land areas called
cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a
cellular network, each cell uses a different set of frequencies from neighboring cells, to avoid
interference and provide guaranteed bandwidth within each cell.

When joined together these cells provide radio coverage over a wide geographic area. This enables
a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with
each other and with fixed transceivers and telephones anywhere in the network, via base stations,
even if some of the transceivers are moving through more than one cell during transmission.

Cellular networks offer a number of desirable features:

 More capacity than a single large transmitter, since the same frequency can be used for
multiple links as long as they are in different cells
 Mobile devices use less power than with a single transmitter or satellite since the cell towers
are closer
 Larger coverage area than a single terrestrial transmitter, since additional cell towers can be
added indefinitely and are not limited by the horizon

Major telecommunications providers have deployed voice and data cellular networks over most of
the inhabited land area of the Earth. This allows mobile phones and mobile computing devices to
be connected to the public switched telephone network and public Internet. Private cellular
networks can be used for research[1] or for large organizations and fleets, such as dispatch for local
public safety agencies or a taxicab company.

10. CRO.

The cathode-ray oscilloscope (CRO) is a common laboratory instrument that provides accurate
time and amplitude measurements of voltage signals over a wide range of frequencies. Its
reliability, stability, and ease of operation make it suitable as a general purpose laboratory
instrument. The heart of the CRO is a cathode-ray tube shown schematically in Fig. 1.
The cathode ray is a beam of electrons which are emitted by the heated cathode (negative
electrode) and accelerated toward the fluorescent screen. The assembly of the cathode, intensity
grid, focus grid, and accelerating anode (positive electrode) is called an electron gun. Its purpose is
to generate the electron beam and control its intensity and focus. Between the electron gun and the
fluorescent screen are two pair of metal plates - one oriented to provide horizontal deflection of the
beam and one pair oriented ot give vertical deflection to the beam. These plates are thus referred to
as the horizontal and vertical deflection plates. The combination of these two deflections allows
the beam to reach any portion of the fluorescent screen. Wherever the electron beam hits the
screen, the phosphor is excited and light is emitted from that point. This coversion of electron
energy into light allows us to write with points or lines of light on an otherwise darkened screen.

In the most common use of the oscilloscope the signal to be studied is first amplified and then
applied to the vertical (deflection) plates to deflect the beam vertically and at the same time a
voltage that increases linearly with time is applied to the horizontal (deflection) plates thus causing
the beam to be deflected horizontally at a uniform (constant> rate. The signal applied to the verical
plates is thus displayed on the screen as a function of time. The horizontal axis serves as a uniform
time scale.

11. CCD(charge coupled device).

A charge-coupled device (CCD) is a device for the movement of electrical charge, usually
from within the device to an area where the charge can be manipulated, for example conversion
into a digital value. This is achieved by "shifting" the signals between stages within the device one
at a time.

12. PLA

A programmable logic array (PLA) is a kind of programmable logic device used to implement
combinational logic circuits. The PLA has a set of programmable AND gate planes, which link to a
set of programmable OR gate planes, which can then be conditionally complemented to produce an
output. This layout allows for a large number of logic functions to be synthesized in the sum of
products (and sometimes product of sums) canonical forms.

PLAs differ from Programmable Array Logic devices (PALs and GALs) in that both the AND and
OR gate planes are programmable.
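A minimal software model of the two programmable planes, with an assumed programming that realises XOR and AND of two inputs as sums of products:

```python
def pla(inputs, and_plane, or_plane):
    """Tiny PLA model. Each AND-plane term is a dict mapping input index ->
    required value (a programmed connection); each OR-plane output is the
    set of product-term indices it ORs together."""
    products = [all(inputs[i] == v for i, v in term.items()) for term in and_plane]
    return [any(products[p] for p in terms) for terms in or_plane]

and_plane = [{0: 1, 1: 0},  # P0 = A AND (NOT B)
             {0: 0, 1: 1},  # P1 = (NOT A) AND B
             {0: 1, 1: 1}]  # P2 = A AND B
or_plane = [{0, 1},         # output 0: XOR = P0 + P1
            {2}]            # output 1: AND = P2
print(pla([1, 0], and_plane, or_plane))  # [True, False]
print(pla([1, 1], and_plane, or_plane))  # [False, True]
```

Because both planes are programmable here, either can be rewired; in a PAL only the AND plane would accept new connections.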

PLA schematic example

13. Pass transistors.

In electronics, pass transistor logic (PTL) describes several logic families used in the design of
integrated circuits. It reduces the count of transistors used to make different logic gates, by
eliminating redundant transistors. Transistors are used as switches to pass logic levels between
nodes of a circuit, instead of as switches connected directly to supply voltages.[1] This reduces the
number of active devices, but has the disadvantage that the difference of the voltage between high
and low logic levels decreases at each stage. Each transistor in series is less saturated at its output
than at its input.[2] If several devices are chained in series in a logic path, a conventionally
constructed gate may be required to restore the signal voltage to the full value. By contrast,
conventional CMOS logic switches transistors so the output connects to one of the power supply
rails, so logic voltage levels in a sequential chain do not decrease. Since there is less isolation
between input signals and outputs, designers must take care to assess the effects of unintentional
paths within the circuit. For proper operation, design rules restrict the arrangement of circuits, so
that sneak paths, charge sharing, and slow switching can be avoided.[3] Simulation of circuits may
be required to ensure adequate performance.

14. CMOS transmission gates.

A transmission gate is similar to a relay that can conduct in both directions or block, controlled
by a signal of almost any voltage potential.

In principle, a transmission gate is made up of two field effect transistors in which - in contrast
to traditional discrete field effect transistors - the substrate terminal (bulk) is not connected
internally to the source terminal. The two transistors, an n-channel MOSFET and a p-channel
MOSFET, are connected in parallel; however, only the drain and source terminals of the
two transistors are connected together. Their gate terminals are connected to each other via a NOT
gate (inverter) to form the control terminal.

In discrete transistors the substrate terminal is connected to the source connection, so there
is a diode in parallel with the transistor (the body diode) through which the transistor conducts
in the reverse direction. However, since a transmission gate must block flow in either direction,
the substrate terminals are instead connected to the respective supply voltage potentials in order
to ensure that the substrate diode is always operated in the reverse direction. The substrate
terminal of the n-channel MOSFET is thus connected to the negative supply voltage potential and
the substrate terminal of the p-channel MOSFET to the positive supply voltage potential.

15. Domino CMOS logic.

Domino logic is a CMOS-based evolution of the dynamic logic techniques based on either PMOS
or NMOS transistors. It allows a rail-to-rail logic swing. It was developed to speed up circuits.
The term derives from the fact that in domino logic (cascade structure consisting of several stages),
each stage ripples the next stage for evaluation, similar to a Domino falling one after the other.

In Dynamic logic, problem arises when cascading one gate to the next. The precharge "1" state of
the first gate may cause the second gate to discharge prematurely, before the first gate has reached
its correct state. This uses up the "precharge" of the second gate, which cannot be restored until the
next clock cycle, so there is no recovery from this error.[1]

In order to cascade dynamic logic gates, one solution is Domino Logic, which inserts an ordinary
static inverter between stages. While this might seem to defeat the point of dynamic logic, since the
inverter has a pFET (one of the main goals of Dynamic Logic is to avoid pFETs where possible, due
to speed), there are two reasons it works well. First, there is no fanout to multiple pFETs. The
dynamic gate connects to exactly one inverter, so the gate is still very fast. And since the inverter
connects to only nFETs in dynamic logic gates, it too is very fast. Second, the pFET in an inverter
can be made smaller than in some types of logic gates.[2]

In Domino logic cascade structure of several stages, the evaluation of each stage ripples the next
stage evaluation, similar to a domino falling one after the other. Once fallen, the node states cannot
return to "1" (until the next clock cycle) just as dominos, once fallen, cannot stand up, justifying the
name Domino CMOS Logic. It contrasts with other solutions to the cascade problem in which
cascading is interrupted by clocks or other means.

16. Ripple counter. Synchronous counter.

Ripple Counter: A ripple counter is an asynchronous counter where only the first flip-flop is
clocked by an external clock. All subsequent flip-flops are clocked by the output of the preceding
flip-flop. Asynchronous counters are also called ripple counters because of the way the clock pulse
ripples its way through the flip-flops.

The MOD of the ripple counter or asynchronous counter is 2^n if n flip-flops are used. For a 4-bit
counter, the range of the count is 0000 to 1111 (2^4 − 1 = 15). A counter may count up or count down or
count up and down depending on the input control. The count sequence usually repeats itself.
When counting up, the count sequence goes from 0000, 0001, 0010, ... 1110 , 1111 , 0000, 0001, ...
etc. When counting down the count sequence goes in the opposite manner: 1111, 1110, ... 0010,
0001, 0000, 1111, 1110, ... etc.

The complement of the count sequence counts in reverse direction. If the uncomplemented output
counts up, the complemented output counts down. If the uncomplemented output counts down,
the complemented output counts up.

There are many ways to implement the ripple counter depending on the characteristics of the flip
flops used and the requirements of the count sequence.
 Clock Trigger: Positive edged or Negative edged
 JK or D flip-flops
 Count Direction: Up, Down, or Up/Down

Asynchronous counters are slower than synchronous counters because of the delay in the
transmission of the pulses from flip-flop to flip-flop. With a synchronous circuit, all the bits in the
count change synchronously with the assertion of the clock. Examples of synchronous counters are
the Ring and Johnson counter.

It can be implemented using D-type flip-flops or JK-type flip-flops.

The circuit below uses 2 D flip-flops to implement a divide-by-4 ripple counter (2^n = 2^2 = 4). It
counts down.
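The divide-by-4 behaviour can be sketched in Python. This models only the toggling and the ripple clocking, not the gate-level circuit; the clocking condition is chosen so that the counter counts down, matching the description above:

```python
# Behavioral sketch of a divide-by-4 ripple counter built from two
# D flip-flops, each with D wired to its own Q' so every active clock
# edge toggles the flip-flop.

class DFlipFlop:
    def __init__(self):
        self.q = 0

    def clock_edge(self):
        self.q ^= 1          # D = Q', so each active edge toggles Q
        return self.q

def ripple_count_down(pulses):
    """FF0 takes the external clock; FF1 is clocked by FF0 (the 'ripple').
    FF1 toggles when Q0 rises, which gives the down-count 3, 2, 1, 0, ..."""
    ff0, ff1 = DFlipFlop(), DFlipFlop()
    counts = []
    for _ in range(pulses):
        q0 = ff0.clock_edge()
        if q0 == 1:          # a rising edge on Q0 clocks FF1
            ff1.clock_edge()
        counts.append(2 * ff1.q + ff0.q)
    return counts

print(ripple_count_down(8))  # [3, 2, 1, 0, 3, 2, 1, 0] -- MOD-4, repeating
```

Note how FF1 only changes after FF0 has, which is exactly the cumulative delay that makes ripple counters slower than synchronous ones.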

Synchronous counter

A 4-bit synchronous counter using JK flip-flops

In synchronous counters, the clock inputs of all the flip-flops are connected together and are
triggered by the input pulses. Thus, all the flip-flops change state simultaneously (in parallel). The
circuit below is a 4-bit synchronous counter. The J and K inputs of FF0 are connected to HIGH.
FF1 has its J and K inputs connected to the output of FF0, and the J and K inputs of FF2 are
connected to the output of an AND gate that is fed by the outputs of FF0 and FF1. A simple way of
implementing the logic for each bit of an ascending counter (which is what is depicted in the image
to the right) is for each bit to toggle when all of the less significant bits are at a logic high state. For
example, bit 1 toggles when bit 0 is logic high; bit 2 toggles when both bit 1 and bit 0 are logic high;
bit 3 toggles when bit 2, bit 1 and bit 0 are all high; and so on.

Synchronous counters can also be implemented with hardware finite state machines, which are
more complex but allow for smoother, more stable transitions.

Hardware-based counters are of this type.
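The toggle rule for an ascending synchronous counter (bit i toggles when all less significant bits are 1) can be sketched in a few lines of Python; the function name and 4-bit width are illustrative:

```python
def sync_count(bits, pulses):
    """Synchronous up counter: on every common clock edge, bit i toggles
    when all less-significant bits are currently 1 (the AND-gate chain)."""
    q = [0] * bits           # q[0] is the least significant bit
    seq = []
    for _ in range(pulses):
        # All toggle decisions are taken from the same pre-clock state,
        # because every flip-flop is clocked simultaneously.
        toggle = [all(q[j] for j in range(i)) for i in range(bits)]
        q = [qi ^ int(t) for qi, t in zip(q, toggle)]
        seq.append(sum(qi << i for i, qi in enumerate(q)))
    return seq

print(sync_count(4, 5))  # [1, 2, 3, 4, 5]
```

Because every toggle condition is evaluated from the same pre-clock state, all bits change together on the clock edge; there is no ripple delay.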

17. Race around condition. Master slave flip-flop.

We know that when J = K = 1 in a J-K flip-flop, the output is the complement of the previous
output, i.e. if J = K = 1 and Y = 0, then after the clock pulse Y becomes 1. But if the propagation
delay of the gates is much less than the pulse duration, then during the same pulse Y first becomes
1, after another propagation delay Y becomes 0 again, and so on. Thus, within the same pulse
duration, because of the very small propagation delays, the output oscillates back and forth
between 0 and 1. This condition is called the race-around condition, and at the end of the pulse the
output is uncertain.

A master-slave flip-flop is a cascade of two flip-flops in which the first (the master) responds to the
data inputs while the clock is high, whereas the second (the slave) responds to the output of the
first while the clock is low. Thus, the final output changes only when the clock is low, when the data
inputs have no effect, and so the race-around condition is eliminated.
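The master-slave behaviour can be sketched behaviourally, assuming ideal level-triggered master and slave halves (all names are illustrative):

```python
def jk_master_slave(j, k, q0, pulses):
    """Master-slave J-K: the master samples J, K and the present output
    while the clock is high; the slave copies the master when the clock
    falls. The output therefore changes at most once per clock pulse,
    which is how the race-around condition is eliminated."""
    q = q0
    for _ in range(pulses):
        # Clock high: master evaluates the J-K next-state rule once.
        if j and k:
            master = 1 - q       # toggle
        elif j:
            master = 1           # set
        elif k:
            master = 0           # reset
        else:
            master = q           # hold
        # Clock low: slave transfers the master's state to the output.
        q = master
    return q

print(jk_master_slave(1, 1, 0, 1))  # 1 -- exactly one toggle per pulse
```

With J = K = 1 the output toggles exactly once per clock pulse, never oscillating within a pulse.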

18. Flip-flop. D and T-type flip-flops

A: In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store
state information. A flip-flop is a bistable multivibrator. The circuit can be made to change state by
signals applied to one or more control inputs and will have one or two outputs. It is the basic
storage element in sequential logic. Flip-flops and latches are a fundamental building block of
digital electronics systems used in computers, communications, and many other types of systems.

Flip-flops and latches are used as data storage elements. Such data storage can be used for storage
of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the
output and next state depend not only on its current input, but also on its current state (and hence,
previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed
input signals to some reference timing signal.

Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered).


Although the term flip-flop has historically referred generically to both simple and clocked circuits,
in modern usage it is common to reserve the term flip-flop exclusively for clocked circuits; the
simple ones are commonly called latches.[1][2]

Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when
a latch is enabled it becomes transparent, while a flip flop's output only changes on a single type
(positive going or negative going) of clock edge.

An SR latch, constructed from a pair of cross-coupled NOR gates.
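The D and T types named in the question can be summarised by their next-state rules; the sketch below models only that behaviour (D: output follows the data input on the clock edge; T: output toggles when T = 1):

```python
def d_flip_flop(d, q):
    """D (data/delay) type: on the active clock edge, Q takes the value of D."""
    return d

def t_flip_flop(t, q):
    """T (toggle) type: on the active clock edge, Q toggles when T = 1
    and holds its previous state when T = 0."""
    return q ^ t

print(d_flip_flop(1, 0))  # 1 -- Q follows D
print(t_flip_flop(1, 1))  # 0 -- Q toggled
```

A T flip-flop is what each stage of a binary counter needs; a D flip-flop is the basic one-bit storage cell in registers.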

19. XOR/OR gate with NAND gates.

First rewrite the XOR equation in sum-of-products form: A ⊕ B = A'B + AB'.

Step 1: A'B + AB' = (A + B')' + (A' + B)', using De Morgan's theorems.
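The standard gate-level result is four NANDs for XOR (with n = NAND(A, B), A ⊕ B = NAND(NAND(A, n), NAND(B, n))) and three NANDs for OR. Both can be checked exhaustively; the helper names below are illustrative:

```python
def nand(a, b):
    """2-input NAND on 0/1 values."""
    return 1 - (a & b)

def xor_from_nands(a, b):
    """Four-NAND XOR: with n = NAND(A, B),
    A XOR B = NAND(NAND(A, n), NAND(B, n))."""
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def or_from_nands(a, b):
    """Three-NAND OR via De Morgan: A + B = (A'B')' = NAND(A', B')."""
    return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == (a ^ b)
        assert or_from_nands(a, b) == (a | b)
print("all 4 input combinations check out")
```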
20. AND gate with NOR gates.

NOR AS AND
An AND gate gives a 1 output when both inputs are 1; a NOR gate gives a 1 output only when both
inputs are 0. Therefore, an AND gate is made by inverting the inputs to a NOR gate.
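This three-NOR construction (invert each input with a NOR wired as an inverter, then NOR the results) can be verified over all input combinations; a sketch with illustrative names:

```python
def nor(a, b):
    """2-input NOR on 0/1 values."""
    return 1 - (a | b)

def and_from_nors(a, b):
    """AND from three NOR gates: each input is inverted by a NOR wired
    as an inverter, then the inverted inputs are NORed together:
    (A' + B')' = A.B by De Morgan's theorem."""
    return nor(nor(a, a), nor(b, b))

for a in (0, 1):
    for b in (0, 1):
        assert and_from_nors(a, b) == (a & b)
print("NOR-based AND matches the AND truth table")
```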

21. Logic Gates

A logic gate is an elementary building block of a digital circuit. Most logic gates have two inputs
and one output. At any given moment, every terminal is in one of the two binary conditions low (0)
or high (1), represented by different voltage levels.
22. Function of diode. Types of diode.

Light Emitting Diode (LED): This is one of the most popular types of diode. When the diode is
switched on or forward biased, electrons recombine with holes and release energy in the form of
light (electroluminescence). In many LEDs the emission is infrared and therefore invisible to the
eye. The color of the light depends on the energy gap of the semiconductor.

Avalanche Diode: This type of diode operates in reverse bias and uses the avalanche effect for its
operation. Avalanche breakdown takes place across the entire PN junction when the voltage drop is
constant and independent of current. The avalanche diode is generally used for photo-detection,
where high levels of sensitivity can be obtained through the avalanche process.

Laser Diode: This type of diode is different from the LED type, as it produces coherent light.
These diodes find their application in DVD and CD drives, laser pointers, etc. Laser diodes are
more expensive than LEDs. However, they are cheaper than other forms of laser generators.
Moreover, laser diodes have a limited life.

Schottky Diodes: These diodes feature lower forward voltage drop as compared to the ordinary
silicon PN junction diodes. The voltage drop may be somewhere between 0.15 and 0.4 volts at low
currents, as compared to the 0.6 volts for a silicon diode. In order to achieve this performance,
these diodes are constructed differently from normal diodes, with metal to semiconductor contact.
Schottky diodes are used in RF applications, rectifier applications and clamping diodes.

Zener diode: This type of diode provides a stable reference voltage, thus is a very useful type and
is used in vast quantities. The diode runs in reverse bias, and breaks down on the arrival of a
certain voltage. A stable voltage is produced, if the current through the resistor is limited. In power
supplies, these diodes are widely used to provide a reference voltage.

Photodiode: Photodiodes are used to detect light and feature wide, transparent junctions.
Generally, these diodes operate in reverse bias, where even the small currents resulting from the
light can be detected with ease. Photodiodes can also be used to generate electricity, and are used
as solar cells and even in photometry.

Varicap Diode or Varactor Diode: This type of diode is operated under reverse bias, which varies
the width of the depletion layer according to the voltage placed across the diode. The diode acts as
a capacitor: the conducting regions form the capacitor plates and the depletion region the
insulating dielectric. By altering the bias on the diode, the width of the depletion region changes,
thereby varying the capacitance.

Rectifier Diode: These diodes are used to rectify alternating power inputs in power supplies. They
can rectify current levels from an amp upwards. Where low voltage drops are required, Schottky
diodes can be used; generally, however, these diodes are PN junction diodes.

Gunn diode - This diode is made of materials like GaAs or InP that exhibit a negative differential
resistance region.

Crystal diode - These are a type of point-contact diode, also called a cat's-whisker diode. The diode
consists of a thin, sharpened metal wire pressed against a semiconducting crystal. The metal wire
is the anode and the semiconducting crystal is the cathode. These diodes are obsolete.

Avalanche diode - This diode conducts in the reverse-bias condition, where the reverse voltage
applied across the p-n junction creates a wave of ionization leading to the flow of a large current.
These diodes are designed to break down at a specific reverse voltage in order to avoid any damage.

Silicon controlled rectifier - As the name implies, this diode can be controlled or triggered into the
ON condition by the application of a small voltage. It belongs to the family of thyristors and is used
in DC motor control, generator field regulation, lighting-system control and variable-frequency
drives. It is a three-terminal device with an anode, a cathode and a third control lead, the gate.

Vacuum diodes - This diode is two electrode vacuum tube which can tolerate high inverse
voltages.
Ideally, a diode should behave as a short circuit in the direction indicated by the arrowhead and as
an open circuit in the direction opposite to the arrowhead. Diodes are designed to meet these ideal
characteristics in theory, but they are never achieved in practice, so practical diode characteristics
are only close to the desired ideal.
Diodes are used widely in the electronics industry, from design to production to repair. Besides the
types listed above, other diodes include the PIN diode, point-contact diode, signal diode, step-
recovery diode, tunnel diode and gold-doped diodes. The choice of diode depends on the type and
amount of current to be handled, as well as on the specific application.

23. PIN diode.

A PIN diode is a diode with a wide, undoped intrinsic semiconductor region between a p-type
semiconductor and an n-type semiconductor region. The p-type and n-type regions are typically
heavily doped because they are used for ohmic contacts.

The wide intrinsic region is in contrast to an ordinary PN diode. The wide intrinsic region makes
the PIN diode an inferior rectifier (one typical function of a diode), but it makes the PIN diode
suitable for attenuators, fast switches, photodetectors, and high voltage power electronics
applications.

Layers of a PIN diode

24. TRIAC. SCR.

TRIACs are part of the thyristor family and are closely related to silicon-controlled rectifiers (SCR).
However, unlike SCRs, which are unidirectional devices (that is, they can conduct current only in
one direction), TRIACs are bidirectional and so allow current in either direction. Another
difference from SCRs is that TRIAC current can be enabled by either a positive or negative current
applied to its gate electrode, whereas SCRs can be triggered only by current into the gate. To create
a triggering current, a positive or negative voltage has to be applied to the gate with respect to the
MT1 terminal (otherwise known as A1).

Once triggered, the device continues to conduct until the current drops below a certain threshold
called the holding current.

The bidirectionality makes TRIACs very convenient switches for alternating current circuits, also
allowing them to control very large power flows with milliampere-scale gate currents. In addition,
applying a trigger pulse at a controlled phase angle in an A.C. cycle allows control of the percentage
of current that flows through the TRIAC to the load (phase control), which is commonly used, for
example, in controlling the speed of low-power induction motors, in dimming lamps, and in
controlling A.C. heating resistors.

TRIAC schematic symbol

SCR - A silicon-controlled rectifier (or semiconductor-controlled rectifier) is a four-layer solid-
state current-controlling device. The name "silicon controlled rectifier" is General Electric's trade
name for a type of thyristor. The SCR was developed by a team of power engineers led by Gordon
Hall[1] and commercialized by Frank W. "Bill" Gutzwiller in 1957.

Some sources define silicon controlled rectifiers and thyristors as synonymous,[2] other sources
define silicon controlled rectifiers as a subset of a larger family of devices with at least four layers of
alternating N and P-type material, this entire family being referred to as thyristors.[3][4] According
to Bill Gutzwiller, the terms "SCR" and "Controlled Rectifier" were earlier, and "Thyristor" was
applied later as usage of the device spread internationally.[5]

SCRs are unidirectional devices (i.e. can conduct current only in one direction) as opposed to
TRIACs which are bidirectional (i.e. current can flow through them in either direction). SCRs can
be triggered normally only by currents going into the gate as opposed to TRIACs which can be
triggered normally by either a positive or a negative current applied to its gate electrode.
SCR schematic symbol

25. Digital-to Analog Converter.

Digital-to-analog conversion is a process in which signals having a few (usually two) defined levels
or states (digital) are converted into signals having a theoretically infinite number of states
(analog). A common example is the processing, by a modem, of computer data into audio-frequency
(AF) tones that can be transmitted over a twisted pair telephone line. The circuit that performs this
function is a digital-to-analog converter (DAC).

Basically, digital-to-analog conversion is the opposite of analog-to-digital conversion. In most
cases, if an analog-to-digital converter (ADC) is placed in a communications circuit after a DAC,
the digital signal output is identical to the digital signal input. Also, in most instances when a DAC
the digital signal output is identical to the digital signal input. Also, in most instances when a DAC
is placed after an ADC, the analog signal output is identical to the analog signal input.

Binary digital impulses, all by themselves, appear as long strings of ones and zeros, and have no
apparent meaning to a human observer. But when a DAC is used to decode the binary digital
signals, meaningful output appears. This might be a voice, a picture, a musical tune, or mechanical
motion.

Both the DAC and the ADC are of significance in some applications of digital signal processing. The
intelligibility or fidelity of an analog signal can often be improved by converting the analog input to
digital form using an ADC, then clarifying the digital signal, and finally converting the "cleaned-up"
digital impulses back to analog form using a DAC.
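An ideal DAC transfer function can be sketched as follows; the 5 V reference and the 4-bit codes used below are assumed values for illustration, not from the text:

```python
def dac_output(bits, v_ref=5.0):
    """Ideal N-bit DAC sketch: maps a digital code (a list of bits, MSB
    first) to an analog level between 0 and v_ref. With N bits there are
    2^N discrete output levels approximating the continuous analog range."""
    n = len(bits)
    code = sum(b << (n - 1 - i) for i, b in enumerate(bits))
    return v_ref * code / (2 ** n - 1)

print(dac_output([1, 1, 1, 1]))  # 5.0 -- full-scale output
print(dac_output([0, 0, 0, 0]))  # 0.0
```

Real DACs add quantization error and settling time on top of this ideal mapping, which is why post-DAC filtering is common.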

26. Inverting Amplifier.

The Inverting Operational Amplifier

We saw in the last tutorial that the Open Loop Gain, ( Avo ) of an operational amplifier can be
very high, as much as 1,000,000 (120dB) or more. However, this very high gain is of no real use to
us, as it makes the amplifier both unstable and hard to control: the smallest of input signals, just a
few microvolts (μV), would be enough to cause the output voltage to saturate and swing towards
one or the other of the voltage supply rails, losing all control of the output.

As the open-loop DC gain of an operational amplifier is extremely high, we can therefore afford to
lose some of this high gain by connecting a suitable resistor across the amplifier from the output
terminal back to the inverting input terminal, to both reduce and control the overall gain of the
amplifier. This produces an effect known commonly as Negative Feedback, and thus produces a
very stable operational-amplifier-based system.
Negative Feedback is the process of “feeding back” a fraction of the output signal back to the
input, but to make the feedback negative, we must feed it back to the negative or “inverting input”
terminal of the op-amp using an external Feedback Resistor called Rƒ. This feedback connection
between the output and the inverting input terminal forces the differential input voltage towards
zero.

This effect produces a closed loop circuit to the amplifier resulting in the gain of the amplifier now
being called its Closed-loop Gain. Then a closed-loop inverting amplifier uses negative feedback
to accurately control the overall gain of the amplifier, but at a cost in the reduction of the amplifiers
gain.

This negative feedback results in the inverting input terminal having a different signal on it than
the actual input voltage as it will be the sum of the input voltage plus the negative feedback voltage
giving it the label or term of a Summing Point. We must therefore separate the real input signal
from the inverting input by using an Input Resistor, Rin.

As we are not using the positive non-inverting input this is connected to a common ground or zero
voltage terminal as shown below, but the effect of this closed loop feedback circuit results in the
voltage potential at the inverting input being equal to that at the non-inverting input producing a
Virtual Earth summing point because it will be at the same potential as the grounded reference
input. In other words, the op-amp becomes a “differential amplifier”.

Inverting Operational Amplifier Configuration

In this Inverting Amplifier circuit the operational amplifier is connected with feedback to
produce a closed loop operation. When dealing with operational amplifiers there are two very
important rules to remember about inverting amplifiers, these are: “No current flows into the input
terminal” and that “V1 always equals V2”. However, in real world op-amp circuits both of these
rules are slightly broken.

This is because the junction of the input and feedback signal ( X ) is at the same potential as the
positive ( + ) input, which is at zero volts or ground; the junction is therefore a "Virtual Earth".
Because of this virtual earth node the input resistance of the amplifier is equal to the value of the
input resistor, Rin and the closed loop gain of the inverting amplifier can be set by the ratio of the
two external resistors.

We said above that there are two very important rules to remember about Inverting Amplifiers
or any operational amplifier for that matter and these are.

 1. No Current Flows into the Input Terminals


 2. The Differential Input Voltage is Zero as V1 = V2 = 0 (Virtual Earth)

Then by using these two rules we can derive the equation for calculating the closed-loop gain of an
inverting amplifier, using first principles.

Current ( i ) flows through the resistor network as shown.

Then, the Closed-Loop Voltage Gain of an Inverting Amplifier is given as:

Gain (Av) = Vout / Vin = −Rƒ / Rin

and this can be transposed to give Vout as:

Vout = −( Rƒ / Rin ) × Vin

Linear Output

The negative sign in the equation indicates an inversion of the output signal with respect to the
input, as it is 180° out of phase. This is due to the feedback being negative in value.

The equation for the output voltage Vout also shows that the circuit is linear in nature for a fixed
amplifier gain as Vout = Vin x Gain. This property can be very useful for converting a smaller
sensor signal to a much larger voltage.
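The Vout = Vin × Gain relationship can be illustrated numerically, assuming the standard ideal-op-amp result Gain = −Rƒ/Rin; the 100 kΩ / 10 kΩ resistor values below are assumptions chosen to give a gain of −10:

```python
def inverting_amp_output(v_in, r_f, r_in):
    """Ideal inverting op-amp: Vout = -(Rf / Rin) * Vin.
    Assumes an ideal op-amp (virtual earth, no input current)."""
    return -(r_f / r_in) * v_in

# A 100k feedback resistor with a 10k input resistor gives a gain of -10,
# so a 0.5 V sensor signal becomes a -5 V output:
print(inverting_amp_output(0.5, 100e3, 10e3))  # -5.0
```

The linearity is visible directly: doubling Vin doubles Vout for fixed resistor values.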

Another useful application of an inverting amplifier is that of a “transresistance amplifier” circuit.


A Transresistance Amplifier also known as a “transimpedance amplifier”, is basically a
current-to-voltage converter (Current “in” and Voltage “out”). They can be used in low-power
applications to convert a very small current generated by a photo-diode or photo-detecting device
etc, into a usable output voltage which is proportional to the input current as shown.

27. JFETS

The junction gate field-effect transistor (JFET or JUGFET) is the simplest type of field-effect
transistor. JFETs are three-terminal semiconductor devices that can be used as electronically
controlled switches, amplifiers, or voltage-controlled resistors.

Unlike bipolar transistors, JFETs are exclusively voltage-controlled in that they do not need a
biasing current. Electric charge flows through a semiconducting channel between source and drain
terminals. By applying a reverse bias voltage to a gate terminal, the channel is "pinched", so that
the electric current is impeded or switched off completely. A JFET is usually on when there is no
potential difference between its gate and source terminals. If a potential difference of the proper
polarity is applied between its gate and source terminals, the JFET will be more resistive to current
flow, which means less current would flow in the channel between the source and drain terminals.
Thus, JFETs are sometimes referred to as depletion-mode devices.

JFETs can have an n-type or p-type channel. In the n-type, if the voltage applied to the gate is less
than that applied to the source, the current will be reduced (similarly in the p-type, if the voltage
applied to the gate is greater than that applied to the source). A JFET has a large input impedance
(sometimes on the order of 10^10 ohms), which means that it has a negligible effect on external
components or circuits connected to its gate.
28. CMOS

Complementary metal–oxide–semiconductor (CMOS) is a technology for constructing
integrated circuits. CMOS technology is used in microprocessors, microcontrollers, static RAM,
and other digital logic circuits. CMOS technology is also used for several analog circuits such as
image sensors (CMOS sensor), data converters, and highly integrated transceivers for many types
of communication. Frank Wanlass patented CMOS in 1963.

CMOS is also sometimes referred to as complementary-symmetry metal–oxide–semiconductor
(or COS-MOS).[1] The words "complementary-symmetry" refer to the fact that the
typical design style with CMOS uses complementary and symmetrical pairs of p-type and n-type
metal oxide semiconductor field effect transistors (MOSFETs) for logic functions.[2]

Two important characteristics of CMOS devices are high noise immunity and low static power
consumption. Since one transistor of the pair is always off, the series combination draws significant
power only momentarily during switching between on and off states. Consequently, CMOS devices
do not produce as much waste heat as other forms of logic, for example transistor–transistor logic
(TTL) or NMOS logic, which normally have some standing current even when not changing state.
CMOS also allows a high density of logic functions on a chip. It was primarily for this reason that
CMOS became the most used technology to be implemented in VLSI chips.
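The complementary-pair behaviour described above can be modelled at logic level; this is a sketch only (real devices have switching transients, during which both transistors briefly conduct):

```python
def cmos_inverter(v_in):
    """Logic-level model of a CMOS inverter: a high input turns the nMOS
    on and pulls the output to ground; a low input turns the pMOS on and
    pulls the output to Vdd. Exactly one transistor of the complementary
    pair conducts in any static state, which is why static power
    consumption is so low."""
    nmos_on = (v_in == 1)
    pmos_on = (v_in == 0)
    assert nmos_on != pmos_on    # never both on in a static state
    return 0 if nmos_on else 1

print([cmos_inverter(0), cmos_inverter(1)])  # [1, 0]
```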

The phrase "metal–oxide–semiconductor" is a reference to the physical structure of certain field-
effect transistors, having a metal gate electrode placed on top of an oxide insulator, which in turn is
on top of a semiconductor material. Aluminium was once used but now the material is polysilicon.
Other metal gates have made a comeback with the advent of high-k dielectric materials in the
CMOS process, as announced by IBM and Intel for the 45 nanometer node and beyond.

CMOS inverter (NOT logic gate)

29. Enhancement and Depletion MOSFET

The metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET) is a
type of transistor used for amplifying or switching electronic signals.

Although the MOSFET is a four-terminal device with source (S), gate (G), drain (D), and body (B)
terminals,[1] the body (or substrate) of the MOSFET is often connected to the source terminal,
making it a three-terminal device like other field-effect transistors. Because these two terminals are
normally connected to each other (short-circuited) internally, only three terminals appear in
electrical diagrams. The MOSFET is by far the most common transistor in both digital and analog
circuits, though the bipolar junction transistor was at one time much more common.
The main advantage of a MOSFET transistor over a regular transistor is that it requires very little
current to turn on (less than 1mA), while delivering a much higher current to a load (10 to 50A or
more). However, the MOSFET requires a higher gate voltage (3-4V) to turn on.[2]

In enhancement mode MOSFETs, a voltage drop across the oxide induces a conducting channel
between the source and drain contacts via the field effect. The term "enhancement mode" refers to
the increase of conductivity with increase in oxide field that adds carriers to the channel, also
referred to as the inversion layer. The channel can contain electrons (called an nMOSFET or
nMOS), or holes (called a pMOSFET or pMOS), opposite in type to the substrate, so nMOS is made
with a p-type substrate, and pMOS with an n-type substrate (see article on semiconductor devices).
In the less common depletion mode MOSFET, detailed later on, the channel consists of carriers in a
surface impurity layer of opposite type to the substrate, and conductivity is decreased by
application of a field that depletes carriers from this surface layer.[3]

The "metal" in the name MOSFET is now often a misnomer because the previously metal gate
material is now often a layer of polysilicon (polycrystalline silicon). Aluminium had been the gate
material until the mid-1970s, when polysilicon became dominant, due to its capability to form self-
aligned gates. Metallic gates are regaining popularity, since it is difficult to increase the speed of
operation of transistors without metal gates.

Likewise, the "oxide" in the name can be a misnomer, as different dielectric materials are used with
the aim of obtaining strong channels with smaller applied voltages.

An insulated-gate field-effect transistor or IGFET is a related term almost synonymous with
MOSFET. The term may be more inclusive, since many "MOSFETs" use a gate that is not metal,
and a gate insulator that is not oxide. Another synonym is MISFET for metal–insulator–
semiconductor FET.

30. VCO

A voltage-controlled oscillator or VCO is an electronic oscillator whose oscillation frequency is
controlled by a voltage input. The applied input voltage determines the instantaneous oscillation
frequency. Consequently, modulating signals applied to control input may cause frequency
modulation (FM) or phase modulation (PM). A VCO may also be part of a phase-locked loop
31. AM. Ratio detector.

Amplitude modulation (AM) is a modulation technique used in electronic communication,
most commonly for transmitting information via a radio carrier wave. AM works by varying the
strength (amplitude) of the carrier in proportion to the waveform being sent. That waveform may,
for instance, correspond to the sounds to be reproduced by a loudspeaker, or the light intensity of
television pixels. This contrasts with frequency modulation, in which the frequency of the carrier
signal is varied, and phase modulation, in which its phase is varied, by the modulating signal.

AM was the earliest modulation method used to transmit voice by radio. It was developed during
the first two decades of the 20th century beginning with Reginald Fessenden's radiotelephone
experiments in 1900. It remains in use today in many forms of communication; for example it is
used in portable two way radios, VHF aircraft radio and in computer modems. "AM" is
often used to refer to mediumwave AM radio broadcasting.
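The "strength of the carrier in proportion to the waveform being sent" can be written as the standard AM expression s(t) = Ac(1 + m·cos(2πfm·t))·cos(2πfc·t), where m is the modulation index; a sketch with illustrative parameter values:

```python
import math

def am_signal(t, fc, fm, m, a_c=1.0):
    """Standard AM waveform: the carrier cos(2*pi*fc*t) is scaled by the
    envelope Ac*(1 + m*cos(2*pi*fm*t)). Keeping m <= 1 avoids
    over-modulation (the envelope never crosses zero)."""
    envelope = a_c * (1 + m * math.cos(2 * math.pi * fm * t))
    return envelope * math.cos(2 * math.pi * fc * t)

# At t = 0 both cosines equal 1, so the signal peaks at Ac*(1 + m):
print(am_signal(0.0, fc=1000.0, fm=10.0, m=0.5))  # 1.5
```

The envelope swings between Ac(1 − m) and Ac(1 + m), which is exactly what an envelope detector in an AM receiver recovers.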

32. Finite State Machines.

A finite-state machine (FSM) or finite-state automaton (plural: automata), or simply a
state machine, is a mathematical model of computation used to design both computer programs
and sequential logic circuits. It is conceived as an abstract machine that can be in one of a finite
number of states. The machine is in only one state at a time; the state it is in at any given time is
called the current state. It can change from one state to another when initiated by a triggering
event or condition; this is called a transition. A particular FSM is defined by a list of its states, and
the triggering condition for each transition.

The behaviour of state machines can be observed in many devices in modern society which perform
a predetermined sequence of actions depending on a sequence of events with which they are
presented. Simple examples are vending machines which dispense products when the proper
combination of coins is deposited, elevators which drop riders off at upper floors before going
down, traffic lights which change sequence when cars are waiting, and combination locks which
require the input of combination numbers in the proper order.
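A combination lock like the one mentioned above can be written directly as a transition table; the states, inputs, and the 1-2-3 code below are all illustrative:

```python
def run_lock_fsm(inputs):
    """FSM for a combination lock that opens on the digit sequence 1-2-3.
    The machine is in exactly one state at a time; any digit with no
    listed transition resets it to the start state."""
    transitions = {
        ('start', 1): 'got1',
        ('got1', 2): 'got12',
        ('got12', 3): 'open',
    }
    state = 'start'
    for digit in inputs:
        state = transitions.get((state, digit), 'start')
    return state

print(run_lock_fsm([1, 2, 3]))     # open
print(run_lock_fsm([1, 2, 2, 3]))  # start -- a wrong digit resets the lock
```

The dictionary is the FSM definition itself: a list of states and the triggering condition for each transition, exactly as in the formal description above.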

Transformers:
1. Primary and secondary transformers. Efficiency and comparison.

(The Voltage Transformer can be thought of as an electrical component rather than an electronic
component. A transformer is basically a very simple static (or stationary) electromagnetic passive
electrical device that works on the principle of Faraday's law of induction, converting electrical
energy from one value to another.)

A single phase voltage transformer basically consists of two electrical coils of wire, one called the
“Primary Winding” and another called the “Secondary Winding”. For this tutorial we will define the
“primary” side of the transformer as the side that usually takes power, and the “secondary” as the
side that usually delivers power. In a single-phase voltage transformer the primary is usually the
side with the higher voltage.

These two coils are not in electrical contact with each other but are instead wrapped together
around a common closed magnetic iron circuit called the “core”. This soft iron core is not solid but
made up of individual laminations connected together to help reduce the core’s losses.

The two coil windings are electrically isolated from each other but are magnetically linked through
the common core, allowing electrical power to be transferred from one coil to the other. When an
electric current is passed through the primary winding, a magnetic field is developed which induces
a voltage into the secondary winding as shown.

Single Phase Voltage Transformer

In other words, for a transformer there is no direct electrical connection between the two coil
windings, thereby giving it the name also of an Isolation Transformer. Generally, the primary
winding of a transformer is connected to the input voltage supply and converts or transforms the
electrical power into a magnetic field. While the job of the secondary winding is to convert this
alternating magnetic field into electrical power producing the required output voltage as shown.

Transformer Construction (single-phase)

 Where:
 VP - is the Primary Voltage
 VS - is the Secondary Voltage
 NP - is the Number of Primary Windings
 NS - is the Number of Secondary Windings
 Φ (phi) - is the Flux Linkage

Notice that the two coil windings are not electrically connected but are only linked magnetically. A
single-phase transformer can operate to either increase or decrease the voltage applied to the
primary winding. When a transformer is used to “increase” the voltage on its secondary winding
with respect to the primary, it is called a Step-up transformer. When it is used to “decrease” the
voltage on the secondary winding with respect to the primary it is called a Step-down
transformer.

However, a third condition exists in which a transformer produces the same voltage on its
secondary as is applied to its primary winding. In other words, its output is identical with respect to
voltage, current and power transferred. This type of transformer is called an “Impedance
Transformer” and is mainly used for impedance matching or the isolation of adjoining electrical
circuits.

The difference in voltage between the primary and the secondary windings is achieved by changing
the number of coil turns in the primary winding ( NP ) compared to the number of coil turns on the
secondary winding ( NS ).

As the transformer is a linear device, a ratio exists between the number of turns of the primary coil and the number of turns of the secondary coil. This ratio, called the ratio of transformation and more commonly known as a transformer's "turns ratio" ( TR ), dictates the operation of the transformer and the corresponding voltage available on the secondary winding.

It is necessary to know the ratio of the number of turns of wire on the primary winding compared to the secondary winding. The turns ratio, which has no units, compares the two windings in order and is written with a colon, such as 3:1 (3-to-1). In this example, if there are 3 volts on the primary winding there will be 1 volt on the secondary winding. It follows that if the ratio between the number of turns changes, the resulting voltages must change by the same ratio.

A transformer is all about “ratios”, and the turns ratio of a given transformer will be the same as its
voltage ratio. In other words for a transformer: “turns ratio = voltage ratio”. The actual number of
turns of wire on any winding is generally not important, just the turns ratio and this relationship is
given as:

A Transformer's Turns Ratio:  TR = NP / NS = VP / VS
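The turns-ratio relationship can be checked numerically. The following sketch (the step-down figures in the second call are illustrative, not from the text) computes the secondary voltage of an ideal transformer from its turns:

```python
def secondary_voltage(vp, n_primary, n_secondary):
    """Ideal transformer: VS = VP * (NS / NP)."""
    return vp * n_secondary / n_primary

# The 3:1 example from the text: 3 V on the primary gives 1 V on the secondary.
print(secondary_voltage(3.0, 3, 1))         # 1.0
# A hypothetical 230 V supply with a 10:1 turns ratio (1000:100 turns).
print(secondary_voltage(230.0, 1000, 100))  # 23.0
```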

Transformer Action

We have seen that the number of coil turns on the secondary winding compared to the primary
winding, the turns ratio, affects the amount of voltage available from the secondary coil. But if the
two windings are electrically isolated from each other, how is this secondary voltage produced?

We have said previously that a transformer basically consists of two coils wound around a common soft iron core. When an alternating voltage ( VP ) is applied to the primary coil, current flows through the coil which in turn sets up a magnetic field around itself. Through mutual inductance, this changing field links the secondary winding in accordance with Faraday's law of electromagnetic induction. The strength of the magnetic field builds up as the current rises from zero to its maximum value, and the rate of change of the resulting flux is given as dΦ/dt.

As the magnetic lines of force set up by this electromagnet expand outward from the coil, the soft iron core forms a path for and concentrates the magnetic flux. This magnetic flux links the turns of both windings as it increases and decreases in opposite directions under the influence of the AC supply.

However, the strength of the magnetic field induced into the soft iron core depends upon the
amount of current and the number of turns in the winding. When current is reduced, the magnetic
field strength reduces.

When the magnetic lines of flux flow around the core, they pass through the turns of the secondary
winding, causing a voltage to be induced into the secondary coil. The amount of voltage induced
will be determined by: N.dΦ/dt (Faraday’s Law), where N is the number of coil turns. Also this
induced voltage has the same frequency as the primary winding voltage.

Then we can see that the same voltage is induced in each coil turn of both windings because the
same magnetic flux links the turns of both the windings together. As a result, the total induced
voltage in each winding is directly proportional to the number of turns in that winding. However,
the peak amplitude of the output voltage available on the secondary winding will be reduced if the
magnetic losses of the core are high.

If we want the primary coil to produce a stronger magnetic field to overcome the core's magnetic losses, we can either send a larger current through the coil, or keep the same current and increase the number of coil turns ( NP ) of the winding. The product of amperes times turns is called the "ampere-turns", which determines the magnetising force of the coil.
Suppose we have a transformer with a single turn in the primary and a single turn in the secondary. If one volt is applied to the one turn of the primary coil, then assuming no losses, enough current must flow and enough magnetic flux be generated to induce one volt in the single turn of the secondary. That is, each winding supports the same number of volts per turn.

As the magnetic flux varies sinusoidally, Φ = Φmax sin(ωt), the basic relationship for the induced emf ( E ) in a coil winding of N turns is:

emf = N dΦ/dt = N ω Φmax cos(ωt), which has an RMS value of E = 4.44 ƒ N Φmax

 Where:
 ƒ - is the flux frequency in Hertz, = ω/2π
 Ν - is the number of coil windings
 Φmax - is the maximum flux in webers

This is known as the Transformer EMF Equation. For the primary winding emf, N will be the
number of primary turns, ( NP ) and for the secondary winding emf, N will be the number of
secondary turns, ( NS ).
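As a quick numeric sketch of the EMF equation (the supply frequency, turn count and peak flux below are assumed values, not from the text):

```python
def transformer_emf(f, turns, phi_max):
    """RMS emf from the transformer EMF equation: E = 4.44 * f * N * Phi_max."""
    return 4.44 * f * turns * phi_max

# Assumed example: 50 Hz supply, 100 turns, peak flux of 0.01 Wb.
print(transformer_emf(50, 100, 0.01))  # about 222.0 V
```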

Also please note that as transformers require an alternating magnetic flux to operate correctly,
transformers cannot therefore be used to transform or supply DC voltages or currents, since the
magnetic field must be changing to induce a voltage in the secondary winding. In other words,
Transformers DO NOT Operate on DC Voltages, ONLY AC.

If a transformer's primary winding were connected to a DC supply, the inductive reactance of the winding would be zero as DC has no frequency, so the effective impedance of the winding would be very low and equal only to the resistance of the copper used. The winding would therefore draw a very high current from the DC supply, causing it to overheat and eventually burn out, because as we know I = V/R.

Transformer Basics – Efficiency

A transformer does not require any moving parts to transfer energy, so there are none of the friction or windage losses associated with other electrical machines. However, transformers do suffer from other types of losses, called "copper losses" and "iron losses", though generally these are quite small.
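Efficiency is the ratio of output power to input power, where the input must also supply the copper and iron losses. A small sketch with assumed loss figures:

```python
def efficiency(p_out, copper_loss, iron_loss):
    """Efficiency = output power / (output power + losses)."""
    return p_out / (p_out + copper_loss + iron_loss)

# Assumed figures: 950 W delivered, 30 W copper loss, 20 W iron loss.
print(efficiency(950, 30, 20))  # 0.95, i.e. 95% efficient
```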
General Question:
Why do we use alternating ac voltages and currents in our homes?
A: One of the main reasons we use alternating AC voltages and currents in our homes and workplaces is that AC can be easily generated at a convenient voltage, transformed to a much higher voltage and then distributed around the country using a national grid of pylons and cables over very long distances. The reason for transforming the voltage is that higher distribution voltages imply lower currents and therefore lower losses along the grid.
These high AC transmission voltages and currents are then reduced to a much lower, safer and usable voltage level where they can supply electrical equipment in our homes and workplaces, and all this is possible thanks to the basic voltage transformer.
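The benefit of transmitting at a higher voltage can be shown with a small calculation (the power and cable figures are illustrative): for the same delivered power, a higher voltage means a lower current, and the I²R loss in the cable falls with the square of that current.

```python
def line_loss(power_w, voltage_v, cable_resistance_ohm):
    """I^2 * R loss in the cable for a given delivered power and voltage."""
    current = power_w / voltage_v
    return current ** 2 * cable_resistance_ohm

# Illustrative: 11 kW delivered through a cable with 1 ohm of resistance.
print(line_loss(11_000, 230, 1.0))     # about 2287 W lost at 230 V
print(line_loss(11_000, 11_000, 1.0))  # 1 W lost at 11 kV
```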
3. Computer Engineering
3.1 Operating System
Operating systems exist for two main purposes. One is to make sure a computer system performs well by managing its computational activities. The other is to provide users with an environment for the development and execution of programs.

Definition

An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs. In technical terms, it is a software
which manages hardware. An operating System controls the allocation of resources and services
such as memory, processors, devices and information.

Following are some of important functions of an operating System.

 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
Types of Operating System

Operating systems have existed since the very first computer generation and have kept evolving over time. Following are a few of the most commonly used types of operating system.
Batch operating system
The user cannot interact with the computer directly. The computer reads jobs sequentially from a pool of jobs (called a batch) and executes the program for each job.

Time-sharing operating systems


Time sharing is a technique which enables many people, located at various terminals, to use a
particular computer system at the same time. Time-sharing or multitasking is a logical extension
of multiprogramming. The main difference between Multi-programmed Batch Systems and
Time-Sharing Systems is that in case of Multi-programmed batch systems, objective is to
maximize processor use, whereas in Time-Sharing Systems objective is to minimize response
time.

Distributed operating System


Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors according to which one can perform each job most efficiently.

Network operating System (NOS)


A Network Operating System runs on a server and provides the server with the capability to manage data, users, groups, security, applications, and other networking functions. It allows a computer to communicate with other devices over the network, including file/folder sharing.

Real Time operating System


Real-time systems are used when rigid time requirements have been placed on the operation of a processor. They have well-defined, fixed time constraints. Real-time processing is always online, whereas an online system need not be real-time. Examples: scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, home-appliance controllers, air traffic control systems, etc.

SMP
SMP is short for Symmetric Multi-Processing, and is the most common type of multiple-processor system. In this system, each processor runs an identical copy of the operating system, and these copies communicate with one another as needed.

Kernel
Kernel is the core of every operating system. It connects applications to the actual processing of
data. It also manages all communications between software and hardware components to
ensure usability and reliability. It provides the required abstraction to hide low level hardware
details to system or application programs.

BIOS
BIOS is an acronym for Basic Input/Output System. It is the boot firmware program on a PC,
and controls the computer from the time you start it up until the operating system takes over.
When you turn on a PC, the BIOS first conducts a basic hardware check, called a Power-On Self-
Test (POST), to determine whether all of the attachments are present and working. Then it loads
the operating system into your computer's random-access memory, or RAM.
The BIOS also manages data flow between the computer's operating system and attached
devices such as the hard disk, video card, keyboard, mouse, and printer.

Different types of CPU register in a typical operating system design


– Accumulators
– Index Registers
– Stack Pointer
– General Purpose Registers

Purpose of an I/O status information


I/O status information provides information about which I/O devices are to be allocated for a
particular process. It also shows which files are opened, and other I/O device states.

Why is partitioning and formatting a prerequisite to installing an operating system?
Partitioning and formatting create a preparatory environment on the drive so that the operating system can be copied and installed properly. This includes allocating space on the drive, designating a drive name, and determining and creating the appropriate file system structure.

Caching
Caching is the processing of utilizing a region of fast memory for a limited data and process. A
cache memory is usually much efficient because of its high access speed.

Spooling
Spooling is normally associated with printing. When different applications want to send output to the printer at the same time, spooling collects all of these print jobs into a disk file and queues them for the printer.

Assembler
An assembler acts as a translator for low level language. Assembly codes, written using
mnemonic commands are translated by the Assembler into machine language.

Interrupts
Interrupts are part of a hardware mechanism that sends a notification to the CPU when a device wants attention or access to a particular resource. An interrupt handler receives this interrupt signal and "tells" the processor to take action based on the interrupt request.

GUI
GUI is short for Graphical User Interface. It provides users with an interface wherein actions
can be performed by interacting with icons and graphical symbols. People find it easier to
interact with the computer when in a GUI especially when using the mouse. Instead of having to
remember and type commands, users just click on buttons to perform a process.

Plumbing / Piping
It is the process of using the output of one program as an input to another. For example, instead
of sending the listing of a folder or drive to the main screen, it can be piped and sent to a file, or
sent to the printer to produce a hard copy.

Process
A program in execution is called a process; it may also be called a unit of work. A process needs system resources such as CPU time, memory, files, and I/O devices to accomplish its task. Each process is represented in the operating system by a process control block (PCB), also called a task control block.

Process State
New – means a process is being created
Running – means instructions are being executed
Waiting – means a process is waiting for certain conditions or events to occur
Ready – means a process is waiting to be assigned to a processor
Terminated – means a process has finished executing

Thread
A thread is a program line under execution. A thread, sometimes called a light-weight process, is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. Threads are of two types:
1. User thread
2. Kernel thread
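As a minimal sketch, Python's threading module shows several threads sharing one address space; the lock serializes updates to the shared counter (and previews the race conditions discussed later in this section):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, holding the lock for each update."""
    global counter
    for _ in range(n):
        with lock:          # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```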
Socket
A socket provides a connection between two applications. Each endpoint of a communication is
a socket.
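A minimal sketch of two connected endpoints using Python's socket module; socketpair creates a pair of already-connected sockets, standing in for a client and a server:

```python
import socket

# Each end of the pair is one endpoint (socket) of the communication.
server_end, client_end = socket.socketpair()
client_end.sendall(b"ping")
print(server_end.recv(4))  # b'ping'
server_end.close()
client_end.close()
```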

Context Switching
Transferring control from one process to another requires saving the state of the old process and loading the saved state of the new process. This task is known as context switching. The time taken to switch from one process to another is pure overhead, because the system does no useful work while switching. One solution is therefore to use threads wherever possible.

Process Scheduling
Definition
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.

Schedulers
A scheduler is special system software which handles process scheduling in various ways. Its main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:

 Long Term Scheduler- Long term schedulers are the job schedulers that select processes
from the job queue and load them into memory for execution.
 Short Term Scheduler- The short term schedulers are the CPU schedulers that select a
process from the ready queue and allocate the CPU to it.
 Medium Term Scheduler- Medium term scheduling is part of the swapping function.
This relates to processes that are in a blocked or suspended state. They are swapped out
of real-memory until they are ready to execute.

Scheduling Queues
1. Job queue- When a process enters the system it is placed in the job queue.
2. Ready queue- The processes that are residing in the main memory and are ready and
waiting to execute are kept on a list called the ready queue.
3. Device queue- A list of processes waiting for a particular I/O device is called device
queue.

Scheduling algorithms
We'll discuss five major scheduling algorithms here, which are the following:
• First Come First Serve (FCFS) Scheduling- The process that requests the CPU first is
allocated the CPU first. Implementation is managed by a FIFO queue.
• Shortest-Job-First (SJF) Scheduling- The process that has the smallest next CPU burst is
allocated the CPU first.
• Priority Scheduling- A priority is associated with each process. The CPU is allocated to
the process with the highest priority.
• Round Robin (RR) Scheduling- It is primarily aimed at time-sharing systems. A circular
queue is set up in such a way that the CPU scheduler goes around the queue, allocating the CPU
to each process for a time interval of around 10 to 100 milliseconds.
• Multilevel Queue Scheduling- It partitions the ready queue into several separate queues.
Each queue has its own scheduling algorithm. The processes are permanently assigned to one
queue based on some property of the process.
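The first two policies can be sketched in a few lines: FCFS serves bursts in arrival order, and with all jobs arriving at time 0, SJF is simply FCFS over the sorted bursts (the burst times below are illustrative):

```python
def fcfs_waits(bursts):
    """Waiting time of each process when served in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

def sjf_waits(bursts):
    """Shortest-job-first: FCFS over the bursts sorted ascending."""
    return fcfs_waits(sorted(bursts))

bursts = [24, 3, 3]
print(fcfs_waits(bursts))  # [0, 24, 27] -> average wait 17
print(sjf_waits(bursts))   # [0, 3, 6]   -> average wait 3
```

Note how serving the shortest bursts first sharply reduces the average waiting time for the same workload.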

Memory Management
Memory management is the functionality of an operating system which handles or manages primary memory. Memory management keeps track of each and every memory location, whether it is allocated to some process or free. It decides how much memory is to be allocated to each process, tracks whenever some memory gets freed or unallocated, and updates the status correspondingly.

Logical versus Physical Address Space


An address generated by the CPU is a logical address, whereas an address actually available on the memory unit is a physical address. A logical address is also known as a virtual address.

Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution. It allows more processes to be run than can fit into memory at one time.

Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. Eventually processes cannot be allocated to memory blocks because the remaining free blocks are too small, and those blocks stay unused. This problem is known as fragmentation.

Paging
External fragmentation is avoided by using the paging technique. Paging is a technique in which physical memory is broken into fixed-size blocks called frames, and logical memory into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). When a process is to be executed, its corresponding pages are loaded into any available memory frames.
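Address translation under paging can be sketched as splitting the logical address into a page number and an offset, then substituting the frame number from the page table (the page size and table entries below are assumed for illustration):

```python
PAGE_SIZE = 4096  # 4 KB pages, a power of 2

# Assumed page table: page number -> frame number
page_table = {0: 5, 1: 9, 2: 1}

def translate(logical_address):
    """Map a logical address to its physical address via the page table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4 -> frame 9, offset 4.
print(translate(4100))  # 36868
```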
Belady's Anomaly
Belady's anomaly occurs with the FIFO page replacement policy in the OS. Intuitively, giving a program more page frames should reduce its page faults, but under FIFO, for some reference strings, increasing the number of frames actually increases the number of page faults. This anomaly does not occur with the LRU (least recently used) replacement algorithm.
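A short FIFO simulation demonstrates the anomaly on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: with 3 frames there are 9 page faults, but with 4 frames there are 10.

```python
from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()  # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9
print(fifo_page_faults(refs, 4))  # 10 -- more frames, yet more faults
```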

Virtual Memory
Virtual memory is a technique that allows the execution of processes which are not completely
available in memory. The main visible advantage of this scheme is that programs can be larger
than physical memory.

Demand Paging: Demand paging is the paging policy that a page is not read into memory until it
is requested, that is, until there is a page fault on the page.
Page fault interrupt: A page fault interrupt occurs when a memory reference is made to a page
that is not in memory. The present bit in the page table entry will be found to be off by the
virtual memory hardware and it will signal an interrupt.
Thrashing: The problem of many page faults occurring in a short time is called "page thrashing". It happens when the system spends more time paging than executing.

Cache Memory
Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from main memory.

Process Synchronization
A situation, where several processes access and manipulate the same data concurrently and the
outcome of the execution depends on the particular order in which the access takes place, is
called race condition. To guard against the race condition we need to ensure that only one
process at a time can be manipulating the same data. The technique we use for this is called
process synchronization.
Race condition
When two processes try to access and modify the same variable simultaneously, we call it a race condition.

Critical section problem


A critical section is the code segment of a process in which the process may be changing common variables, updating tables, writing a file, and so on. Only one process is allowed to be in its critical section at any given time (mutual exclusion). The critical section problem is to design a protocol that the processes can use to co-operate. The three basic requirements of a critical section solution are:
1. Mutual exclusion
2. Progress
3. Bounded waiting

Bakery algorithm is one of the solutions to CS problem.


Semaphore: It is a synchronization tool used to solve complex critical section problems. A
semaphore is an integer variable that, apart from initialization, is accessed only through two
standard atomic operations: Wait and Signal.

Bounded-Buffer Problem
Here we assume that a pool consists of n buffers, each capable of holding one item. The
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1.The empty and full semaphores count the number of empty and full buffers, respectively.
Empty is initialized to n, and full is initialized to 0.
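A runnable sketch of this scheme using Python semaphores, with one producer and one consumer (n = 5 and the ten items are arbitrary choices for illustration):

```python
import collections
import threading

N = 5
buffer = collections.deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer pool
empty = threading.Semaphore(N)   # counts empty slots, initialized to n
full = threading.Semaphore(0)    # counts full slots, initialized to 0

consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty): block if no free slot
        with mutex:
            buffer.append(item)
        full.release()           # signal(full): one more item available

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full): block if nothing to consume
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```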

Reader-Writer Problem
The Readers and Writers problem is useful for modeling processes which are competing for a
limited shared resource. A practical example of a Readers and Writers problem is an airline
reservation system consisting of a huge data base with many processes that read and write the
data.
Reading information from the data base will not cause a problem since no data is changed. The
problem lies in writing information to the data base. If no constraints are put on access to the
data base, data may change at any moment. By the time a reading process displays the result of a
request for information to the user, the actual data in the data base may have changed. What if,
for instance, a process reads the number of available seats on a flight, finds a value of one, and
reports it to the customer? Before the customer has a chance to make their reservation, another
process makes a reservation for another customer, changing the number of available seats to
zero.
We can provide permission to a number of readers to read same data at same time. But a writer
must be exclusively allowed to access. There are two solutions to this problem:
1. No reader will be kept waiting unless a writer has already obtained permission to use the
shared object. In other words, no reader should wait for other readers to complete simply
because a writer is waiting.

2. Once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.

Deadlock
Consider the following situation: Suppose that two processes (A and B) are running on the same
machine, and both processes require the use of the local printer and tape drive. Process A may
have been granted access to the machine's printer but be waiting for the tape drive, while
process B has access to the tape drive but is waiting for the printer. Such a condition is known as
deadlock. Since both processes hold a resource the other process needs, the processes will wait
indefinitely for the resources to be released and neither will finish executing.
In order for deadlock to be possible, the following four conditions must be present in the computer:
1. Mutual exclusion: Only one process may use a resource at a time.
2. Hold-and-wait: A process may hold allocated resources while awaiting assignment of others.
3. No pre-emption: No resource can be forcibly removed from a process holding it.
4. Circular wait: A closed chain of processes exists such that each process p[i] is waiting for a resource held by the next process in the chain (p[i] waits for p[i+1] for i = 1, 2, …, n-1, and p[n] waits for p[1]).

Dining Philosopher’s Problem


The dining philosophers problem was invented by E. W. Dijkstra. Imagine five philosophers who spend their lives just thinking and eating. A circular table with five chairs has a big plate of spaghetti, but there are only five forks available, as shown in the following figure. Each philosopher thinks. When he gets hungry, he sits down and picks up the two forks that are closest to him. If a philosopher can pick up both forks, he eats for a while. After a philosopher finishes eating, he puts down the forks and starts to think.
Note that the Dining Philosophers problem meets the deadlock conditions presented earlier. Only one philosopher can use a fork at a time, so the fork resource is mutually exclusive. Hungry philosophers with only one fork will hold this resource until another fork becomes available, thus resulting in hold-and-wait. And since the philosophers are peaceful and courteous, no one is willing to forcibly remove a fork from his neighbour, so there is no pre-emption. If every philosopher picks up his left fork at the same moment, a circular wait results.
One solution to avoid the possibility of deadlock could be making even numbered philosophers
pick up the forks in a different order from the rest. That is, left first rather than right first
[Magee and Kramer 1999]. Notice that this solution differs from the other approaches
presented. Rather than attacking one of the necessary conditions for deadlock, the solution
imposes a particular resource allocation order which makes it impossible for all five
philosophers to be holding a single fork.
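A sketch of that asymmetric solution with Python locks as forks: even-numbered philosophers pick up the left fork first and odd-numbered ones the right, so a circular wait can never form (the meal count is arbitrary, just to give the threads finite work):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    # Even philosophers pick up the left fork first, odd ones the right,
    # breaking the symmetry that would allow a circular wait.
    first, second = (left, right) if i % 2 == 0 else (right, left)
    for _ in range(3):
        with first:
            with second:
                meals[i] += 1   # "eat" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [3, 3, 3, 3, 3] -- every philosopher eats, no deadlock
```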

Deadlock avoidance algorithms

A deadlock avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources, and the maximum demand of each process. There are two such algorithms:

1. Resource-allocation graph algorithm - a graphical description of deadlocks.
2. Banker's algorithm - which itself consists of the safety algorithm and the resource-request algorithm.

Banker's Algorithm
Banker's algorithm is one form of deadlock avoidance in a system. For the Banker's algorithm to work, it needs to know three things:
• How many of each resource each process could possibly request [MAX]
• How many of each resource each process is currently holding [ALLOCATED]
• How many of each resource the system currently has available [AVAILABLE]

Resources may be allocated to a process only if the following conditions are satisfied:

• Request ≤ max, else set an error condition, as the process has exceeded the maximum claim it made.
• Request ≤ available, else the process waits until resources are available.

Some of the resources that are tracked in real systems are memory, semaphores and interface access.

The Banker's Algorithm derives its name from the fact that this algorithm could be used in a
banking system to ensure that the bank does not run out of resources, because the bank would
never allocate its money in such a way that it can no longer satisfy the needs of all its customers.
By using the Banker's algorithm, the bank ensures that when customers request money the bank
never leaves a safe state. If the customer's request does not cause the bank to leave a safe state,
the cash will be allocated; otherwise the customer must wait until some other customer deposits
enough.

Safe state and a safe sequence


A system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, …, Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i.
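A sketch of the safety algorithm at the heart of the Banker's algorithm (the resource figures below are the usual illustrative textbook values, not from the text): it tries to build a safe sequence by repeatedly finding a process whose remaining need fits within the available resources, then reclaiming that process's allocation.

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety algorithm: return a safe sequence, or None if unsafe."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_demand[i], allocation[i])]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pi can run to completion; reclaim its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None  # no process can proceed: the state is unsafe
    return sequence

# Illustrative state: 5 processes, 3 resource types.
print(is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
# [1, 3, 4, 0, 2] -- a safe sequence exists, so the state is safe
```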

What factors determine whether a detection algorithm must be utilized in a deadlock avoidance system?
One is how often a deadlock is likely to occur under the implementation of this algorithm. The other is how many processes will be affected by deadlock when this algorithm is applied.

Direct Access Method


Direct Access method is based on a disk model of a file, such that it is viewed as a numbered
sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct access is
advantageous when accessing large amounts of information.

How would a file named EXAMPLEFILE.TXT appear when viewed under the DOS command
console operating in Windows 98?
The filename would appear as EXAMPL~1.TXT. The reason is that filenames under this operating system are limited to 8 characters when working in the DOS environment.

Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request was submitted until the first response is produced, not the complete output (for time-sharing environments)

Linux
Linux is a popular version of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind, and its functionality list is quite similar to that of UNIX.
Components of Linux System
Linux Operating System has primarily three components
• Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.
• System Library - System libraries are special functions or programs through which application programs or system utilities access the kernel's features. These libraries implement most of the functionality of the operating system and do not require the kernel module's code access rights.
• System Utility - System utility programs are responsible for specialized, individual-level tasks.

Architecture
Linux System Architecture consists of following layers:
• Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
• Kernel - Core component of Operating System, interacts directly with hardware, provides
low level services to upper layer components.
• Shell - An interface to kernel, hiding complexity of kernel's functions from users. Takes
commands from user and executes kernel's functions.
• Utilities - Utility programs that give the user most of the functionality of an operating system.

Kernel Mode vs. User Mode


Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services and protected hardware access to the processes.
Support code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use the system libraries to access kernel functions for the system's low-level tasks.

Basic Features
Following are some of the important features of the Linux operating system.
• Portable - Portability means the software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.
• Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.
• Multi-User - Linux is a multi-user system, meaning multiple users can access system resources such as memory, RAM and application programs at the same time.
• Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.
• Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.
• Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.
• Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Windows Operating System


Microsoft Windows (or simply Windows) is a family of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry. Active Windows families include Windows NT, Windows Server and Windows Phone.
Windows NT: Started as a family of operating systems with Windows NT 3.1, an operating system for server computers and workstations. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel.
Windows: The operating system for mainstream personal computers. The latest version is Windows 8.1. It is almost impossible for someone unfamiliar with the subject to identify the members of this family by name, because they do not adhere to any specific rule; e.g. Windows Vista, Windows 7 and Windows RT are members of this family, but Windows 3.1 is not.
Windows Server: The operating system for server computers. The latest version is Windows Server 2012 R2. Unlike its client sibling, it has adopted a strong naming scheme.
Windows Phone: An operating system sold only to manufacturers of smartphones. The first version was Windows Phone 7.
Mac Operating System
Mac OS is a series of graphical user interface-based operating systems developed by Apple Inc.
for their Macintosh line of computer systems.
Design Concept
Apple's original concept for the Macintosh deliberately sought to minimize the user's conceptual
awareness of the operating system. Tasks which required more operating system knowledge on
other systems would be accomplished by mouse gestures and graphic controls on a Macintosh.
This would differentiate it from then-current systems, such as MS-DOS, which used a command line interface consisting of tersely abbreviated textual commands.
The core of the system software was held in ROM, with updates originally provided on floppy disk, freely copyable at Apple dealers. The user's involvement in an upgrade of the operating system was also minimized to running an installer or replacing system files using the file manager. This simplicity meant that the early releases lacked any access controls, in effect giving its single user root privileges at all times.
3.2 Computer Organization

A computer has five functionally independent parts:

Input Unit
Computers accept coded information through input units, which read the data. The most well-known input device is the keyboard. Whenever a key is pressed, the corresponding letter or digit is automatically translated into its binary code and transmitted over a cable to either the memory or the processor.

Memory Unit
The function of the memory unit is to store programs and data. There are two classes of storage,
called Primary and Secondary.

 Primary storage is fast memory that operates at electronic speeds. Programs must be stored in the memory while they are being executed. Memory in which any location can be reached in a short and fixed amount of time after specifying its address is called Random Access Memory (RAM). The time required to access one word is called the Memory Access Time.
 Although primary storage is essential, it tends to be expensive. Thus additional, cheaper secondary storage is used when large amounts of data and many programs have to be stored, particularly for information that is accessed infrequently. Examples: magnetic disks, tapes and optical disks.
Arithmetic and Logic Unit (ALU)
All arithmetic and logical operations are performed by the ALU, and these operations are initiated once the operands are brought into the processor.

 Output Unit - Its function is to send processed results to the outside world. Example: printer.
 Control Unit - The control unit is used to coordinate the operations of the memory, ALU, input and output units. It can be thought of as the nerve centre that sends control signals to other units and senses their states.

Basic Operational Concepts


The computer works based on some given instruction. Let us consider an example:
Add LOCA, R0

 This instruction adds the operand at memory location LOCA to the operand in register R0 and places the sum into register R0. It appears that this instruction is done in one step, but it actually performs several steps internally.
 First, the instruction is fetched from the memory into the processor. Next, the operand at LOCA is fetched and added to the contents of R0.

The above instruction can also be written as:


Load LOCA, R1
Add R1, R0

Let us now analyze how the memory and processor are connected:
The Processor contains a number of registers used for several purposes.

 IR: The IR (Instruction Register) holds the instruction that is currently being
executed.
 PC: The PC (Program Counter) is another specialized register, which contains the memory address of the next instruction to be fetched.
 MAR (Memory Address Register): This register facilitates communication with
the memory. MAR holds the address of the location to be accessed.
 MDR: This register facilitates communication with the memory. The MDR
contains the data to be written into or read out of the addressed location.
 There are n general purpose registers R0 to Rn-1.
 Program execution starts when the PC is set to point to the first instruction. The content of the PC is transferred to the MAR and a Read control signal is sent to memory. Then the addressed word is read out of memory and loaded into the MDR. Next, the contents of the MDR are transferred to the IR. The instruction is then decoded and, if it involves arithmetic or logical calculations, it is sent to the ALU. The n general purpose registers are used during the calculations to store results. The result is then sent to the MDR, the address of the location where the result is to be stored is sent to the MAR, and a write cycle is initiated. Then the PC is incremented and the process continues.
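The register-transfer steps above can be illustrated with a toy simulation. The memory layout, instruction tuples and values below are invented purely for illustration and do not model any real instruction set:

```python
# Toy sketch of the fetch-decode-execute cycle using PC, MAR, MDR and IR.
# Instructions are stored as tuples; "LOCA" holds a data word.

memory = {
    0: ("Load", "LOCA", "R1"),   # Load LOCA, R1
    1: ("Add", "R1", "R0"),      # Add R1, R0
    "LOCA": 5,                   # data word at address LOCA
}
regs = {"R0": 10, "R1": 0}
PC = 0

while PC in memory and isinstance(memory[PC], tuple):
    MAR = PC                     # address of next instruction -> MAR
    MDR = memory[MAR]            # memory read -> MDR
    IR = MDR                     # instruction -> IR
    PC += 1                      # PC now points to the next instruction
    op, src, dst = IR            # decode
    if op == "Load":
        regs[dst] = memory[src]            # fetch operand from memory
    elif op == "Add":
        regs[dst] = regs[dst] + regs[src]  # ALU adds, result to register

print(regs["R0"])  # prints 15
```

After the Load, R1 holds 5; the Add then leaves 10 + 5 = 15 in R0, matching the Add LOCA, R0 example above.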
Bus Structures
The Bus is used to connect all the individual parts of a computer in an organized manner. The
simplest way to interconnect functional u nits is to use a single bus. The main advantage of the
single bus structure is its low cost and its flexibility. The multiple bus achieve more concurrency
in operation by allowing two or more transfers to be carried at the same time.

Multiprocessors and Multi computers


A large computer system which contains a number of processor units is called Multiprocessor
System. These systems either execute a number of different application tasks in parallel, or they
execute subtasks of a single large task in parallel. The high performance of these systems comes
with much increased complexity and cost.

The Multicomputer System is formed by interconnecting a group of complete computers to achieve high total computational power. They can access only their own memory. They communicate data by exchanging messages over a communication network.

Performance

 Performance means how quickly a computer can execute programs.
 For best performance, it is necessary to design the compiler, the machine instruction set, and the hardware in a coordinated way.
 Processor circuits are controlled by a timing signal called the clock. The processor divides the actions to be performed into basic steps, such that each step can be completed in one clock cycle.
 The basic performance equation is T = (N × S) / R, where N = actual number of instructions executed, S = average number of basic steps needed to execute one machine instruction, and R = clock rate (cycles/second).
 To achieve high performance, the value of T should be reduced, which can be done by reducing N and S, or by increasing R.
 A substantial improvement can also be obtained by overlapping the execution of successive instructions. This concept is known as pipelining.
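As a quick worked example of the basic performance equation (the numbers below are made up purely for illustration):

```python
# Worked example of the basic performance equation T = (N * S) / R.

N = 100_000_000    # instructions executed
S = 4              # average basic steps per instruction
R = 2_000_000_000  # clock rate in cycles per second (2 GHz)

T = (N * S) / R    # execution time in seconds
print(T)           # prints 0.2
```

Halving S (a better instruction set) or doubling R (a faster clock) would each halve T, which is exactly the trade-off the equation captures.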

3.3 Data Structure


Overview

Data can be organized in many ways, and a data structure is one of these ways. It is used to represent data in the memory of the computer so that the data can be processed more easily. In other words, a data structure is the logical and mathematical model of a particular organization of data.

The data structures can be of the following types:


1. Linear Data Structures - In these data structures the elements form a sequence. Arrays, linked lists, stacks and queues are linear data structures.

2. Non-Linear Data Structures - In these data structures the elements do not form a sequence. Trees and graphs are non-linear data structures.

Array

An array is a linear data structure. It is a collection of data elements of a similar (homogeneous) type, represented by a single name. An array is a random-access data structure, where each element can be accessed directly and in constant time. A typical illustration of random access is a book: each page of the book can be opened independently of the others. Random access is critical to many algorithms, for example binary search.

It has the following features:


1. The elements are stored in continuous memory locations.
2. The n elements are numbered by consecutive numbers i.e. 1, 2, 3, . . . . . , n.

Stack
A stack is a container of objects that are inserted and removed according to the last-in first-out
(LIFO) principle. In the pushdown stacks only two operations are allowed: push the item into
the stack, and pop the item out of the stack. A stack is a limited access data structure - elements
can be added and removed from the stack only at the top. Push adds an item to the top of the
stack; pop removes the item from the top. A helpful analogy is to think of a stack of books: you can remove only the top book, and you can add a new book only on the top. The simplest application of a stack is to reverse a word: you push a given word onto the stack, letter by letter, and then pop the letters off the stack. Another application is the "undo" mechanism in text editors; this operation is accomplished by keeping all text changes in a stack.
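The word-reversal application described above can be sketched in a few lines:

```python
# Reverse a word with a stack: push each letter, then pop in LIFO order.

def reverse_word(word):
    stack = []
    for letter in word:
        stack.append(letter)     # push onto the top
    out = []
    while stack:
        out.append(stack.pop())  # pop from the top
    return "".join(out)

print(reverse_word("stack"))  # prints kcats
```

Because the last letter pushed is the first letter popped, the LIFO discipline reverses the word automatically.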
Queue
A queue is a container of objects (a linear collection) that are inserted and removed according to the first-in first-out (FIFO) principle. An excellent example of a queue is a line of students in the food court. New additions to the line are made at the back of the queue, while removal (or serving) happens at the front. In a queue only two operations are allowed: enqueue and dequeue. Enqueue means to insert an item at the back of the queue; dequeue means removing the front item.

The difference between stacks and queues is in removing: in a stack we remove the item most recently added; in a queue, we remove the item least recently added.
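A minimal sketch of enqueue and dequeue, using Python's collections.deque so both operations are O(1) (a plain list would make dequeue O(n)):

```python
# FIFO queue: enqueue at the back, dequeue from the front.
from collections import deque

queue = deque()
for student in ["ann", "bob", "cid"]:
    queue.append(student)    # enqueue at the back

served = queue.popleft()     # dequeue from the front
print(served)                # prints ann
print(list(queue))           # prints ['bob', 'cid']
```

The first student to join the line is the first one served, which is exactly the FIFO behaviour described above.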

Linked List
A linked list is a linear data structure where each element is a separate object. Each element
(node) of a list comprises two items - the data and a reference to the next node. The last node
has a reference to null. The entry point into a linked list is called the head of the list. It should be
noted that head is not a separate node, but the reference to the first node. If the list is empty
then the head is a null reference. A linked list is a dynamic data structure. The number of nodes
in a list is not fixed and can grow and shrink on demand. Any application which has to deal with an unknown number of objects can benefit from using a linked list.
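A minimal singly linked list matching this description might look like the following sketch:

```python
# Each node holds data plus a reference to the next node; the head is just
# a reference to the first node (None represents an empty list).

class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def to_list(head):
    """Walk the chain from the head until the terminating None."""
    items = []
    while head is not None:
        items.append(head.data)
        head = head.next
    return items

head = Node(1, Node(2, Node(3)))   # 1 -> 2 -> 3 -> None
print(to_list(head))               # prints [1, 2, 3]
```

Growing the list is just allocating a new Node and adjusting one reference, which is why the structure can grow and shrink on demand.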

Tree
A tree is a non-linear data structure made up of nodes or vertices and edges without having any
cycle. The tree with no nodes is called the null or empty tree. A tree that is not empty consists of
a root node and potentially many levels of additional nodes that form a hierarchy.

Binary Tree
A worthwhile simplification is to consider only binary trees. A binary tree is one in which each
node has at most two descendants - a node can have just one but it can't have more than two.
Clearly each node in a binary tree can have a left and/or a right descendant. The importance of a
binary tree is that it can create a data structure that mimics a "yes/no" decision making process.

Graph
A graph is a non-linear data structure consisting of a finite set of nodes or vertices, together with a set of ordered pairs of these nodes (or, in some cases, a set of unordered pairs). These pairs are known as edges or arcs.
Searching
Linear Search
Linear search or sequential search is a method for finding an element within a list. It
sequentially checks each element of the list until a match is found or the whole list has been
searched.
Worst Case: Θ(n) ; search key not present or last element
Best Case: Θ(1) ; first element
Binary Search
In computer science, a binary search or half-interval search algorithm finds the position of a
specified input value (the search "key") within an array sorted by key value. In each step, the
algorithm compares the search key value with the key value of the middle element of the array.
If the keys match then a matching element has been found and its index, or position, is returned.
Otherwise, if the search key is less than the middle element's key, the algorithm repeats its
action on the sub-array to the left of the middle element or if the search key is greater, on the
sub-array to the right.

Worst case/Average case: Θ(logn)


Best Case: Θ(1) ; when key is middle element
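An iterative sketch of the half-interval procedure described above:

```python
# Binary search over a sorted array: compare with the middle element and
# recurse into the left or right half until the key is found or the
# interval is empty.

def binary_search(arr, key):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid        # found: return the index
        elif key < arr[mid]:
            hi = mid - 1      # continue in the left sub-array
        else:
            lo = mid + 1      # continue in the right sub-array
    return -1                 # key not present

print(binary_search([2, 5, 7, 11, 13], 11))  # prints 3
```

Each comparison halves the remaining interval, which is where the Θ(log n) bound comes from.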

Depth First Search


Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures. One starts at the root (selecting some arbitrary node as the root in the case of a
graph) and explores as far as possible along each branch before backtracking.

Breadth First Search


Breadth-first search (BFS) is a strategy for searching in a graph when search is limited to
essentially two operations: (a) visit and inspect a node of a graph; (b) gain access to visit the
nodes that neighbour the currently visited node.
Time Complexity and Space Complexity of Searching Algorithm

Sorting
Insertion Sort
Insertion sort iterates, consuming one input element each repetition, and growing a sorted output
list. Insertion sort removes one element from the input data in each iteration, finds the location it
belongs within the sorted list, and inserts it there. It repeats until no input elements remain.

 Average Case / Worst Case: Θ(n2); happens when input is already sorted in
descending order
 Best Case: Θ(n); when input is already sorted
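The insert-into-sorted-prefix idea can be sketched as:

```python
# Insertion sort: take one element per iteration and insert it into its
# correct place within the already-sorted prefix.

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:  # shift larger elements right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                # insert into its position
    return arr

print(insertion_sort([5, 2, 4, 1, 3]))  # prints [1, 2, 3, 4, 5]
```

On an already-sorted input the inner while loop never runs, which gives the Θ(n) best case noted above.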
Selection Sort
The selection sort method requires (n-1) passes to sort an array. In the first pass, find the smallest element in the entire array and swap it with the first element a[0]. In the second pass, find the smallest element from a[1] to a[n-1] and swap it with a[1], and so on.

 Average Case / Worst Case / Best Case: Θ(n2)

Merge Sort
Mergesort is a divide and conquer algorithm that was invented by John von Neumann in 1945.
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sub-lists, each containing 1 element (a list of 1 element is
considered sorted).
2. Repeatedly merge sub-lists to produce new sorted sub-lists until there is only 1 sub-list
remaining. This will be the sorted list.

Average Case / Worst Case / Best case: Θ(nlogn) ; It doesn't matter at all whether the input is
sorted or not.
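The two steps above can be sketched as a simple (not in-place) merge sort:

```python
# Merge sort: split down to single-element lists, then merge sorted halves.

def merge_sort(arr):
    if len(arr) <= 1:
        return arr               # a list of 0 or 1 elements is sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # divide
    right = merge_sort(arr[mid:])
    merged = []                    # merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])        # one of these is already empty
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # prints [3, 9, 10, 27, 38, 43, 82]
```

The merge step always does linear work per level and there are log n levels, which is why the Θ(n log n) cost is independent of the input order.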

Quick Sort
Quicksort is a divide and conquer algorithm. Quicksort first divides a large array into two
smaller sub-arrays: the low elements and the high elements. Quicksort can then recursively sort
the sub-arrays.

The steps are:


1. Pick an element, called a pivot, from the array.

2. Reorder the array so that all elements with values less than the pivot come before the pivot,
while all elements with values greater than the pivot come after it (equal values can go either
way). After this partitioning, the pivot is in its final position. This is called the partition
operation.

3. Recursively apply the above steps to the sub-array of elements with smaller values and
separately to the sub-array of elements with greater values.

Worst Case: Θ(n2); happens when the input is already sorted and a naive pivot (first or last element) is chosen


Best Case: Θ(nlogn); when pivot divides array in exactly half
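The three steps above can be sketched as a simple (not in-place) quicksort, here choosing the first element as the pivot:

```python
# Quicksort: pick a pivot, partition into smaller/greater sub-arrays,
# then recursively sort each side.

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]                                # step 1: pick a pivot
    less = [x for x in arr[1:] if x < pivot]      # step 2: partition
    greater = [x for x in arr[1:] if x >= pivot]
    return quick_sort(less) + [pivot] + quick_sort(greater)  # step 3: recurse

print(quick_sort([3, 6, 1, 8, 2, 9]))  # prints [1, 2, 3, 6, 8, 9]
```

With a first-element pivot, a pre-sorted input puts everything in one partition and degrades to Θ(n2); production implementations pick the pivot more carefully (e.g. median-of-three).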

Bubble Sort
It is the least efficient sorting technique. The basic idea underlying the bubble sort is to pass through the file sequentially several times. Each pass consists of comparing each element in the file with its successor and interchanging the two elements if they are not in the proper order.

 Worst Case: Θ(n2)


 Best Case: Θ(n); on already sorted

Time Complexity and Space Complexity of Sorting Algorithm

Algorithm designing techniques


Divide and Conquer
A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-
problems of the same (or related) type, until these become simple enough to be solved directly. The
solutions to the sub-problems are then combined to give a solution to the original problem.

This technique is the basis of efficient algorithms for all kinds of problems, such as sorting (e.g.,
quicksort, merge sort).

Greedy Method
This method suggests that one can design an algorithm that works in stages, considering one input at a time. At each stage a decision is made as to whether a particular input is part of an optimal solution. Examples: Prim's algorithm, Kruskal's algorithm.

Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a connected
weighted undirected graph. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is minimized.

In Kruskal's algorithm, the edges of the graph are considered in non-decreasing order of cost. It selects edges into a set T such that T can still be completed into a tree (i.e., adding the edge creates no cycle).

Dijkstra's Algorithm
Dijkstra's algorithm is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative edge costs, producing a shortest-path tree. This algorithm is often used in routing and as a subroutine in other graph algorithms.
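A compact sketch of Dijkstra's algorithm using Python's heapq module as the priority queue; the example graph below is hypothetical:

```python
# Dijkstra's shortest paths with a binary heap. The graph is an adjacency
# dict: node -> list of (neighbour, non-negative weight) pairs.
import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": [("B", 1), ("C", 4)],
         "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)],
         "D": []}
print(dijkstra(graph, "A"))  # prints {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

The non-negative-weight requirement is essential: once a node is popped with its final distance, no later edge can improve it.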

3.4 Software Engineering


Software engineering deals with tools and techniques through the systematic application of which new, reliable software can be developed and old software can be maintained.

Process Life Cycle

The process life cycle describes the organizational activities of the engineers. It includes all the phases, which are:
1. Requirement Analysis
2. Design
3. Implementation
4. Testing

Software Development Life Cycle Models


Waterfall Model
The waterfall model is a sequential design process, used in software development processes, in
which progress is seen as flowing steadily downwards (like a waterfall) through the phases of
Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation
and Maintenance.

Prototype Model
This type of System Development Method is employed when it is very difficult to obtain exact
requirements from the customer (unlike waterfall model, where requirements are clear). The
basic idea here is that instead of freezing the requirements before any design or coding can
proceed, a throwaway prototype is built to help understand the requirements. Development of
the prototype obviously undergoes design, coding and testing, but each of these phases is not
done very formally. By using the prototype, the client can get an actual feel of the system, which
can enable the client to better understand the requirements of the desired system.

Spiral Model
Spiral Model includes the iterative nature of the prototyping model and the linear nature of the
waterfall model. This approach is ideal for developing software that is revealed in various versions.

In each iteration of the spiral approach, software development process follows the phase-wise
linear approach. At the end of first iteration, the customer evaluates the software and provides
the feedback. Based on the feedback, software development process enters into the next
iteration. The process of iteration continues throughout the life of the software.

An example of the spiral model is the evolution of Microsoft Windows Operating system from
Windows 3.1 to windows 2003. We may refer to Microsoft windows 3.1 Operating System as the
first iteration in the spiral approach. The product was released and evaluated by the customers, which included the market at large. After getting feedback from customers about Windows 3.1, Microsoft planned to develop a new version of the Windows operating system. Windows 95 was released with enhancements and graphical flexibility. Similarly, other versions of the Windows operating system were released.
Incremental Model
Each increment includes three phases: design, implementation and analysis. During the design
phase of the first increment, the functionality with topmost priority from the project activity list is
selected and the design is prepared. In the implementation phase, the design is implemented and
tested. In the analysis phase, the functional capability of the partially developed product is analyzed.
The development process is repeated until all the functions of the project are implemented.

Consider an example where a bank wants to develop software to automate the banking process
for insurance services, personal banking, and home and automobile loans. The bank wants the
automation of personal banking system immediately because it will enhance the customer
services. You can implement the incremental approach to develop the banking software. In the
first increment, you can implement the personal banking feature and deliver it to the customer.
In the later increments, you can implement the insurance services, home loans, and automobile
loans features of the bank.
Four P's of Software Project Management
The effective software project management focuses on four P's:

 The People
 The Product
 The Process
 The Project

The People
The following categories of people are involved in the software process.

 Senior Managers
 Project Managers
 Practitioners
 Customers
 End Users

Senior managers define the business issues. Project managers plan, motivate, organize and control the practitioners who do the software work. Practitioners deliver the technical skills that are necessary to engineer a product or application. Customers specify the requirements for the software to be developed. End users interact with the software once it is released.

The Product
Before a software project is planned, the product objectives and scope should be established, and technical and management constraints should be identified. Without this information it is impossible to define a reasonable cost, the amount of risk involved, the project schedule, etc.

The Process
Here the important point is to select an appropriate process model to develop the software.
There are different process models available: Water fall model, Iterative water fall model,
Prototyping model, Evolutionary model, Spiral model. In practice we may use any one of the
above models or a combination of the above models.

The Project
In order to manage a successful software project, the different activities should be defined clearly. A project is a series of steps in which we need to make accurate decisions in order to deliver a successful project.

Project Estimation
For effective management, accurate estimation of various measures is a must. With correct estimation, managers can manage and control the project more efficiently and effectively. Project estimation may involve the following:

 Software size estimation


 Effort estimation
 Time estimation
 Cost estimation

Project Estimation Techniques


Decomposition Technique
This technique views the software as a composition of various components. There are two main models -
Lines of Code: estimation is based on the number of lines of code in the software product.
Function Points: estimation is based on the number of function points in the software product.

Empirical Estimation Technique


This technique uses empirically derived formulae to make estimation. These formulae are based
on LOC or FPs.

Putnam Model
This model was developed by Lawrence H. Putnam and is based on Norden's frequency distribution (the Rayleigh curve). The Putnam model maps the time and effort required to software size.

COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry W. Boehm. It divides the
software product into three categories of software: organic, semi-detached and embedded.
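As a hedged sketch of the basic COCOMO effort formula: effort = a × (KLOC)^b person-months, using the standard basic-model coefficients for the three categories named above (the 32 KLOC project size below is invented for illustration):

```python
# Basic COCOMO effort estimation. The (a, b) coefficient pairs are the
# standard basic-model constants for each project category.

COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc, category):
    a, b = COEFFS[category]
    return a * kloc ** b      # estimated effort in person-months

print(round(cocomo_effort(32, "organic"), 1))
```

For the same size, the embedded coefficients yield a noticeably larger estimate than the organic ones, reflecting the tighter constraints of embedded projects.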

Project Risk Management


Risk management involves all activities pertaining to identification, analyzing and making
provision for predictable and non-predictable risks in the project. Risk may include the
following:

 Experienced staff leaving the project and new staff coming in.
 Change in organizational management.
 Requirement change or misinterpreting requirement.
 Under-estimation of required time and resources.
 Technological changes, environmental changes, business competition.
Risk Management Process
There are following activities involved in risk management process:

 Identification - Make note of all possible risks, which may occur in the project.
 Categorize - Categorize known risks into high, medium and low risk intensity as
per their possible impact on the project.
 Manage - Analyze the probability of occurrence of risks at various phases. Make
plan to avoid or face risks. Attempt to minimize their side-effects.
 Monitor - Closely monitor the potential risks and their early symptoms. Also
monitor the effects of steps taken to mitigate or avoid them.
3.5 Automata

Overview

Automata theory is the study of abstract computing devices, or “machines”. This topic goes back
to the days before digital computers and describes what is possible to compute using an abstract
machine. These ideas directly apply to creating compilers, programming languages, and
designing applications. They also provide a formal framework to analyze new types of
computing devices, e.g. bio computers or quantum computers.

Practical Examples of the implications of Automata Theory and the formal Languages
Grammars and languages are closely related to automata theory and are the basis of many
important software components like:
• Compilers and interpreters
• Text editors and processors
• Text searching
• System verification

Difference between string and word


Any combination of letters of the alphabet that follows the rules of the language is called a word. A string is a finite sequence of symbols from an alphabet.

Non-Determinism and Determinism


Determinism means that our computational model (machine) knows what to do for every
possible input.
In the case of non-determinism, our machine may or may not know what it has to do on all possible inputs.
As the definition above suggests, a non-deterministic machine cannot be executed directly on a computer unless it is converted into an equivalent deterministic machine.

Difference between Palindrome and Reverse function


Reverse (623) = 326
Palindrome is a string where the reverse of the string is the string itself. X=Reverse(X)
PALINDROME={Λ , a, b, aa, bb, aaa, aba, bab, bbb, …}

Differentiate between (a,b) and (a+b)


(a, b) = Represents a and b
(a + b) = Represents either a or b
What is the concept of FA also known as FSM (Finite State Machine)?
FA (Finite Automaton) is a finite state machine that recognizes a regular language. In computer
science, a finite-state machine (FSM) or finite-state automaton (FSA) is an abstract machine that has
only a finite, constant amount of memory. The internal states of the machine carry no further
structure. This kind of model is very widely used in the study of computation and languages.

A Finite automaton (FA) is a collection of the followings:


 Finite number of states, having one initial and some (maybe none) final states.
 Finite set of input letters (Σ, the alphabet) from which input strings are formed.
 Finite set of transitions, i.e. for each state and for each input letter there is a transition showing how to move from one state to another.
A finite state machine is thus defined by a 5-tuple: states, input symbols, initial state, accepting states and transition function.
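The 5-tuple can be exercised with a tiny simulator. The example machine below, accepting binary strings with an even number of 1s, is a made-up illustration rather than one from the text:

```python
# DFA simulator over the 5-tuple: states, start state, accepting states,
# transition function delta, and an input string over the alphabet.

def run_dfa(states, start, accepting, delta, string):
    state = start
    for symbol in string:
        state = delta[(state, symbol)]   # exactly one transition per pair
    return state in accepting

# Transition table for "even number of 1s" over alphabet {0, 1}.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd",  "0"): "odd",  ("odd",  "1"): "even"}

print(run_dfa({"even", "odd"}, "even", {"even"}, delta, "1011"))  # prints False
print(run_dfa({"even", "odd"}, "even", {"even"}, delta, "1001"))  # prints True
```

Note that delta is total: every (state, letter) pair has exactly one entry, which is precisely the determinism requirement discussed in the FA vs. NFA comparison below.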

Nondeterministic Finite Automaton (NFA)


Non-determinism plays a key role in the theory of computing. A nondeterministic finite state
automaton is one in which the current state of the machine and the current input do not
uniquely determine the next state. This just means that a number of subsequent states (zero or
more) are possible next states of the automaton at every step of a computation. Of course, non-
determinism is not realistic, because in real life, computers must be deterministic. Still, we can
simulate non-determinism with deterministic programs. Furthermore, as a mathematical tool
for understanding computability, non-determinism is invaluable.

As with deterministic finite state automata, a nondeterministic finite state automaton


has five components.
 A set of states
 A finite input alphabet from which input strings can be constructed
 A transition function that describes how the automaton changes states
as it processes an input string
 A single designated starting state
 A set of accepting states

The only difference lies in the transition function, which can now target subsets of the states of
the automaton rather than a single next state for each state, input pair.
Difference between FA’s and NFA’s
FA stands for finite automata while NFA stands for non-deterministic finite automata
In an FA there must be a transition for each letter of the alphabet from each state. So, in an FA the number of transitions must be equal to (number of states × number of letters in the alphabet). In an NFA, by contrast, there may be more than one transition for a letter from a state, or none at all. Finally, every FA is an NFA, while an NFA may or may not be an FA.
Regular Expression
A regular expression (RE) is used to express a finite or infinite language. An RE is constructed so that it generates exactly the strings of that language. To cross-check that a defined RE describes a specified language, the RE should accept all strings of that language, and all strings of the language should be accepted by that RE.

How Moore and Mealy machine works in Computer Memory what is their importance in
Computing?
Mealy & Moore Machines work in computing as incrementing machine & 1’s complement
machine etc. These operations as basic computer operations so these machines are very
important.
In a Mealy machine we read the input letters and generate output while moving along the paths from one state to another, whereas in a Moore machine we generate output on reaching a state. The output of a Moore machine therefore contains one extra letter, because output is also generated for the initial state q0, where nothing has yet been read.
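The 1's-complement machine mentioned above can be sketched as a one-state Mealy machine that emits output on each transition:

```python
def mealy_ones_complement(bits):
    """Mealy machine: on each transition, read a bit and emit its complement.

    A single state suffices; output is produced while moving, so the
    output string has exactly the same length as the input string.
    """
    out = []
    state = "q0"                 # one state with a self-loop on both inputs
    for b in bits:
        out.append("1" if b == "0" else "0")   # output on the transition
    return "".join(out)

print(mealy_ones_complement("1011"))   # 0100
```

A Moore machine for the same task would also emit an output letter for the initial state before any input is read, making its output one letter longer than the input.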

Why are Context Free Grammars called "Context Free"?


Context Free Grammars are called context free because, in the words of their languages (for example "aaabbb"), the value of a letter (a, b) is the same at whatever position it appears. In context-sensitive grammars, by contrast, a symbol's value depends on the position at which it appears in the word. A simple example follows.

Suppose we have the decimal number 141 in our language. When the compiler reads it, it arrives in the form of a string. The compiler calculates its decimal value so that we can perform mathematical functions on it. In calculating that value, the weight of the first "1" is different from that of the second "1", which means the notation is context sensitive (it depends on the position in which the "1" appears).

141 = 1 × 100 + 4 × 10 + 1 × 1

(the value contributed by each "1" depends on its position)


That is not the case with the words of Context Free Languages. (The value of "a" is always the same in whatever position "a" appears.)

Pushdown Automata
In computer science, a pushdown automaton (PDA) is a type of automaton that employs a stack. Pushdown automata are used in theories about what can be computed by machines. A PDA is an enhancement of an FA: memory (the stack) is attached to the machine that recognizes a language. PDAs are more capable than finite-state machines but less capable than Turing machines. Deterministic pushdown automata can recognize all deterministic context-free languages, while nondeterministic ones can recognize all context-free languages. The former are used mainly in parser design.
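The stack enhancement can be illustrated with a small sketch recognizing {a^n b^n : n ≥ 1}, a classic context-free language that no finite automaton can accept (function name and control structure are illustrative):

```python
def pda_accepts_anbn(s):
    """Deterministic PDA sketch for a^n b^n (n >= 1): push a marker for
    each 'a', pop one for each 'b', accept when the input is consumed
    and the stack is empty."""
    stack = []
    phase = "push"                 # two control states: pushing a's, popping b's
    for ch in s:
        if ch == "a":
            if phase != "push":    # an 'a' after a 'b' is rejected
                return False
            stack.append("A")
        elif ch == "b":
            phase = "pop"
            if not stack:          # more b's than a's
                return False
            stack.pop()
        else:
            return False
    return phase == "pop" and not stack

print(pda_accepts_anbn("aaabbb"))  # True
print(pda_accepts_anbn("aabbb"))   # False
```

The stack is exactly the extra memory an FA lacks: it counts the unmatched a's, which no fixed number of states could do for all n.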
Turing machine
A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc.
Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer
algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
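A table-of-rules machine like the one described can be simulated in a few lines. The rule table below (walk right, flipping every bit, then halt on the blank) is a made-up example, not from the text:

```python
def run_tm(tape, rules, state="s", halt="h"):
    """Minimal Turing machine simulator. rules maps (state, symbol) to
    (write_symbol, move, next_state), with move in {-1, 0, +1}.
    The blank symbol is "_"."""
    cells = dict(enumerate(tape))      # sparse tape
    head = 0
    while state != halt:
        sym = cells.get(head, "_")
        write, move, state = rules[(state, sym)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical rule table: flip bits while moving right, halt on blank.
flip = {
    ("s", "0"): ("1", +1, "s"),
    ("s", "1"): ("0", +1, "s"),
    ("s", "_"): ("_", 0, "h"),
}
print(run_tm("1010", flip))   # 0101
```

Despite the simulator's brevity, any algorithm could in principle be encoded as such a rule table, which is exactly the sense in which the Turing machine models a CPU.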
3.6 Data Base Management System
A database is a collection of data related by some aspect. Data is a collection of facts and figures that can be processed to produce information. A student's name, age, class and subjects can be counted as data for recording purposes.
A database management system stores data in such a way that it is easier to retrieve and manipulate, and helps to produce information.

Characteristics

Traditionally, data was organized in file formats. The DBMS was then an entirely new concept, and research was done to make it overcome the deficiencies of the traditional style of data management. A modern DBMS has the following characteristics:

 Real-world Entity: A modern DBMS is more realistic and uses real-world entities
to design its architecture, along with their behavior and attributes. For example, a
school database may use student as an entity and age as its attribute.

 Relation-based Tables: A DBMS allows entities and the relations among them to be
formed into tables. This simplifies the concept of data storage: a user can understand
the architecture of the database just by looking at the table names.

 Isolation of Data and Application: A database system is entirely different from its
data. The database is the active entity, while the data is the passive one on
which the database works and which it organizes. A DBMS also stores metadata, which is
data about data, to ease its own processes.

 Less Redundancy: A DBMS follows the rules of normalization, which split a relation
when any of its attributes has redundant values. Normalization, itself a
mathematically rich and scientific process, keeps redundancy in the database to a
minimum.

 Consistency: A DBMS maintains a state of consistency, which earlier data-storage
approaches such as file processing did not guarantee. Consistency is the state in
which every relation in the database remains consistent. There exist methods and
techniques that can detect attempts to leave the database in an inconsistent state.

 Query Language: A DBMS is equipped with a query language, which makes retrieving and
manipulating data more efficient. A user can apply as many different filtering options as he
or she wants, which was not traditionally possible with file-processing systems.

 ACID Properties: A DBMS follows the ACID properties, which stand
for Atomicity, Consistency, Isolation and Durability. These concepts are applied
to the transactions that manipulate data in the database. The ACID properties keep the
database in a healthy state in multi-transactional environments and in case of
failure.

 Multiuser and Concurrent Access: A DBMS supports a multi-user environment and allows users
to access and manipulate data in parallel. There are restrictions on transactions that
attempt to handle the same data item, but users remain unaware of them.

 Multiple Views: A DBMS offers multiple views for different users. A user in the
sales department will have a different view of the database than a person working in
the production department. This enables each user to have a concentrated view of the
database according to their requirements.

 Security: Features like multiple views offer security to some extent, since users
are unable to access the data of other users and departments. A DBMS offers methods
to impose constraints while entering data into the database and while retrieving it at a
later stage, and provides many different levels of security features, which enables
multiple users to have different views with different privileges. For example, that a user
in the sales department cannot see the data of the purchase department is one thing;
additionally, how much of the sales department's data he can see can also be managed.
Because a DBMS does not save its data on disk as a plain traditional file system does,
it is much harder for an intruder to read it directly.

Entity-Relationship Model
The Entity-Relationship model is based on the notion of real-world entities and the relationships among them. While formulating a real-world scenario into a database model, the ER model creates entity sets, relationship sets, general attributes and constraints.
The ER model is best used for the conceptual design of a database. It is based on:

 Entities and their attributes

 Relationships among entities

Database Schema
The database schema represents the logical view of the entire database. It tells how the data is organized and how the relations among the data are associated. It formulates all the database constraints that are to be applied to the data in the relations residing in the database.
Database schema can be divided broadly in two categories:
 Physical Database Schema: This schema pertains to the actual storage of data and its form
of storage, like files, indices etc. It defines how the data will be stored in secondary storage.
 Logical Database Schema: This defines all logical constraints that need to be
applied on data stored. It defines tables, views and integrity constraints etc.

Normalization
If a database design is not perfect, it may contain anomalies, which are like a bad dream for the
database itself. Managing a database with anomalies is next to impossible.

 Update anomalies: If data items are scattered and are not linked to each other
properly, then there may be instances when we try to update one data item that
has copies of it scattered at several places, few instances of it get updated
properly while few are left with their old values. This leaves database in an
inconsistent state.
 Deletion anomalies: When we try to delete a record, parts of it may be left
undeleted because of unawareness, and the same data may also remain saved somewhere
else.
 Insert anomalies: When we try to insert data in a record that does not exist at all.

Normalization is a method to remove all these anomalies and bring the database to a consistent
state, free from anomalies of any kind.
First Normal Form:
This is defined in the definition of relations (tables) itself. This rule defines that all the attributes
in a relation must have atomic domains. Values in atomic domain are indivisible units.
Second Normal Form:

 Prime attribute: an attribute, which is part of prime-key, is prime attribute.


 Non-prime attribute: an attribute, which is not a part of prime-key, is said to be a
non-prime attribute.

Second normal form says that every non-prime attribute should be fully functionally dependent
on the prime-key attributes. That is, if X → A holds, then there should be no proper subset Y of
X for which Y → A also holds.
Third Normal Form:
For a relation to be in Third Normal Form it must be in Second Normal form and the following
must satisfy:

 No non-prime attribute is transitively dependent on prime key attribute


 For any non-trivial functional dependency, if X → A then either X is a superkey
or A is prime attribute.
Boyce-Codd Normal Form:
BCNF is an extension of Third Normal Form in strict way. BCNF states that

 For any non-trivial functional dependency, if X → A, then X must be a super-key.
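The superkey test in the BCNF condition reduces to computing an attribute closure: X is a superkey iff the closure of X under the functional dependencies covers every attribute of the relation. A sketch, using a hypothetical relation R(A, B, C):

```python
def closure(attrs, fds):
    """Closure of a set of attributes under functional dependencies.

    fds is a list of (lhs, rhs) pairs, lhs a frozenset of attributes
    and rhs a single attribute.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the whole left side is already derivable, so is the right.
            if lhs <= result and rhs not in result:
                result.add(rhs)
                changed = True
    return result

# Hypothetical relation R(A, B, C) with A -> B and B -> C.
fds = [(frozenset("A"), "B"), (frozenset("B"), "C")]
all_attrs = {"A", "B", "C"}

print(closure({"A"}, fds) == all_attrs)   # True:  A is a superkey
print(closure({"B"}, fds) == all_attrs)   # False: B is not a superkey
```

Here B → C has a non-superkey left side, so R violates BCNF (and, since C depends transitively on A through B, it also violates 3NF).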

DBMS - Indexing
We know that information in the DBMS files is stored in the form of records. Every record is
equipped with some key field, which helps it to be recognized uniquely.
Indexing is a data structure technique to efficiently retrieve records from database files based on
some attributes on which the indexing has been done. Indexing in database systems is similar to
the one we see in books.
Indexing is defined based on its indexing attributes. Indexing can be one of the following types:

 Primary Index: If index is built on ordering 'key-field' of file it is called Primary


Index. Generally, it is the primary key of the relation.
 Secondary Index: If index is built on non-ordering field of file it is called
Secondary Index.
 Clustering Index: If index is built on ordering non-key field of file it is called Clustering
Index.

Ordering field is the field on which the records of file are ordered. It can be different from
primary or candidate key of a file. Ordered Indexing is of two types:

 Dense Index
 Sparse Index

Dense Index
In a dense index, there is an index record for every search key value in the database. This makes
searching faster but requires more space to store the index records themselves. An index record
contains the search key value and a pointer to the actual record on the disk.
Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains
a search key and an actual pointer to the data on the disk. To search for a record, we first follow the
index to reach an actual location of the data. If the data we are looking for is not where we land by
following the index, the system starts a sequential search until the desired data is found.
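The sparse-index lookup described above (one index entry per block, then a sequential scan within the block) can be sketched as follows; the block contents are illustrative:

```python
import bisect

# Sparse index sketch: one index entry per data block, holding the
# smallest key in that block.
blocks = [[3, 7, 12], [15, 21, 30], [34, 42, 51]]   # sorted data blocks
index = [b[0] for b in blocks]                      # [3, 15, 34]

def lookup(key):
    # Binary-search the index for the last entry <= key ...
    i = bisect.bisect_right(index, key) - 1
    if i < 0:
        return None
    # ... then scan that block sequentially for the actual key.
    return key if key in blocks[i] else None

print(lookup(21))   # 21
print(lookup(22))   # None
```

The index holds 3 entries instead of 9, which is the space saving a sparse index trades against the extra sequential scan.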
Multilevel Index
Index records comprise search-key value and data pointers. This index itself is stored on the
disk along with the actual database files. As the size of database grows, so does the size of
indices. There is an immense need to keep the index records in the main memory so that the
search can speed up. If single level index is used then a large size index cannot be kept in
memory as whole and this leads to multiple disk accesses.

DBMS - Transaction
A transaction can be defined as a group of tasks. A single task is the minimum processing unit of
work, which cannot be divided further.

An example of a transaction is a transfer between the bank accounts of two users, say A and B. When
a bank employee transfers an amount from A's account to B's account, a number of tasks are executed
behind the screen. This very simple and small transaction includes several steps: decrease A's
bank balance by the transferred amount, then increase B's balance by the same amount.
States of Transactions:
A transaction in a database can be in one of the following states:
Transaction States

 Active: In this state the transaction is being executed. This is the initial state of
every transaction.
 Partially Committed: When a transaction executes its final operation, it is said to be in this
state. After execution of all operations, the database system performs some checks e.g. the
consistency state of database after applying output of transaction onto the database.
 Failed: If any checks made by database recovery system fail, the transaction is
said to be in failed state, from where it can no longer proceed further.
 Aborted: If a check fails and the transaction has reached the Failed state, the recovery manager
rolls back all its write operations on the database to restore the database to the state it was in
prior to the start of the transaction's execution. Transactions in this state are called aborted.
The database recovery module can select one of two operations after a transaction aborts:
re-start the transaction or kill the transaction.
 Committed: If transaction executes all its operations successfully it is said to be
committed. All its effects are now permanently made on database system.
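The commit/rollback path through these states can be illustrated with Python's built-in sqlite3 module; the table and account figures are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INT)")
con.execute("INSERT INTO account VALUES ('A', 1000), ('B', 200)")
con.commit()

try:
    # Active: both steps of the transfer execute as one transaction.
    con.execute("UPDATE account SET balance = balance - 500 WHERE name = 'A'")
    con.execute("UPDATE account SET balance = balance + 500 WHERE name = 'B'")
    con.commit()                 # Partially Committed -> Committed
except sqlite3.Error:
    con.rollback()               # Failed -> Aborted: all writes undone

print(dict(con.execute("SELECT name, balance FROM account")))
# {'A': 500, 'B': 700}
```

If either UPDATE raised an error, the rollback would leave both balances exactly as they were before the transaction started, which is the Aborted state described above.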

Concurrency Control
In a multiprogramming environment where more than one transaction can be executed concurrently,
protocols are needed to control the concurrency of transactions so as to ensure their
atomicity and isolation properties.
Concurrency control protocols that ensure serializability of transactions are the most desirable.
Concurrency control protocols can be broadly divided into two categories:

 Lock based protocols


 Time stamp-based protocols

Lock based protocols


Database systems equipped with lock-based protocols use a mechanism by which a
transaction cannot read or write data until it acquires an appropriate lock on it first. Locks are of
two kinds:

 Binary Locks: A lock on data item can be in two states; it is either locked or
unlocked.
 Shared/exclusive: This type of locking mechanism differentiates locks by their use. If a
lock is acquired on a data item to perform a write operation, it is an exclusive lock, because
allowing more than one transaction to write to the same data item would lead the database into
an inconsistent state. Read locks are shared because no data value is being changed.

Time stamp based protocols


The most commonly used concurrency protocol is time-stamp based protocol. This protocol uses
either system time or logical counter to be used as a time-stamp.

Lock-based protocols manage the order between conflicting pairs of transactions at the time of
execution, whereas timestamp-based protocols start working as soon as a transaction is created.
Every transaction has a timestamp associated with it, and the ordering is determined by the age
of the transaction: a transaction created at clock time 0002 would be older than all
transactions that come after it. For example, a transaction 'y' entering the system at clock time
0004 is two seconds younger, and priority may be given to the older one.
In addition, every data item is given the latest read and write-timestamp. This lets the system
know, when last read and write operation were made on the data item.

Different Types of Structured Query Language statements


 DDL – Data Definition Language: DDL is used to define the structure that holds
the data. For example, Create, Alter, Drop and Truncate table.

 DML – Data Manipulation Language: DML is used for manipulation of the data
itself. Typical operations are Insert, Delete, Update and retrieving data from
the table. The Select statement is considered a limited version of DML, since it
cannot change data in the database; it can, however, perform operations on data
retrieved from the DBMS before the results are returned to the calling function.

 DCL – Data Control Language: DCL is used to control the visibility of data, for
example granting database access and setting privileges to create tables. Examples: Grant
and Revoke a user's permission to access data in the database.
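A short session with Python's sqlite3 module can illustrate DDL and DML side by side (SQLite does not implement DCL statements such as Grant/Revoke, so those are omitted; table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# DDL: define the structure that holds the data.
con.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")

# DML: manipulate the data itself.
con.execute("INSERT INTO student (id, name) VALUES (1, 'Asha')")
con.execute("UPDATE student SET name = 'Asha K' WHERE id = 1")

# Select retrieves data without changing anything in the database.
rows = list(con.execute("SELECT id, name FROM student"))
print(rows)   # [(1, 'Asha K')]

# DDL again: Drop removes the table definition along with its data.
con.execute("DROP TABLE student")
```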

Integrity Rules
There are two Integrity rules:

 Entity Integrity: States that "a primary key cannot have a NULL value."
 Referential Integrity: States that "a foreign key can be either NULL or
must be a primary-key value of another relation."
3.7 Networking

Network

A network is a set of devices connected by physical media links. Recursively, a network is a
connection of two or more nodes by a physical link, or two or more networks connected by one or
more nodes.

Node
A network can consist of two or more computers directly connected by some physical medium
such as coaxial cable or optical fiber. Such a physical medium is called a link, and the
computers it connects are called nodes.

Gateway or Router
A node that is connected to two or more networks is commonly called a router or gateway. It
generally forwards messages from one network to another.

Routing
The process of determining systematically how to forward messages towards the destination
nodes based on its address is called routing.

Advantages of Distributed Processing


a. Security/Encapsulation

b. Distributed database

c. Faster Problem solving

d. Security through redundancy

e. Collaborative Processing

Protocol
A protocol is a set of rules that govern all aspects of information communication. The key
elements of protocols are:

 Syntax: It refers to the structure or format of the data that is the order in which
they are presented.
 Semantics: It refers to the meaning of each section of bits.
 Timing: Timing refers to two characteristics: when data should be sent and how
fast they can be sent.
Bandwidth and Latency
Network performance is measured in Bandwidth (throughput) and Latency (Delay). Bandwidth
of a network is given by the number of bits that can be transmitted over the network in a certain
period of time. Every line has an upper limit and a lower limit on the frequency of signals it can
carry. This limited range is called the bandwidth.

Latency corresponds to how long it takes a message to travel from one end of a network to the
other. It is strictly measured in terms of time.
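A rough back-of-the-envelope model combines the two metrics: delivery time ≈ one-way latency plus the time to push the message's bits onto the link (size / bandwidth). The figures below are illustrative, not measurements:

```python
def transfer_time(size_bits, bandwidth_bps, latency_s):
    """Approximate one-way delivery time: propagation latency plus
    transmission time (size / bandwidth)."""
    return latency_s + size_bits / bandwidth_bps

# 1 MB message over a 10 Mbps link with 50 ms one-way latency:
size = 8 * 1_000_000                              # bits
print(transfer_time(size, 10_000_000, 0.050))     # 0.85 seconds
```

For small messages latency dominates; for large ones bandwidth does, which is why both metrics matter when characterizing a network.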

When a switch is said to be congested?


If a switch receives packets faster than the shared link can accommodate and stores them in its
memory for an extended period of time, the switch will eventually run out of buffer space and some
packets will have to be dropped. In this state the switch is said to be congested.

Round Trip Time


The duration of time it takes to send a message from one end of a network to the other and back,
is called RTT.

Unicasting, Multicasting and Broadcasting


If the message is sent from a source to a single destination node, it is called Unicasting.
If the message is sent to some subset of the other nodes, it is called Multicasting.
If the message is sent to all the nodes in the network, it is called Broadcasting.

Multiplexing
Multiplexing is the set of techniques that allows the simultaneous transmission of multiple
signals across a single data link.

TCP/IP
TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or
protocol of the Internet. It can also be used as a communications protocol in a private network
(either an intranet or an extranet). When you are set up with direct access to the Internet, your
computer is provided with a copy of the TCP/IP program just as every other computer that you
may send messages to or get information from also has a copy of TCP/IP.
TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the
assembling of a message or file into smaller packets that are transmitted over the Internet and
received by a TCP layer that reassembles the packets into the original message. The lower layer,
Internet Protocol, handles the address part of each packet so that it gets to the right destination.
Each gateway computer on the network checks this address to see where to forward the
message. Even though some packets from the same message are routed differently than others,
they'll be reassembled at the destination.

TCP/IP Model
TCP/IP is based on a four-layer reference model. All protocols that belong to the TCP/IP
protocol suite are located in the top three layers of this model.
As shown in the following illustration, each layer of the TCP/IP model corresponds to one or
more layers of the seven-layer Open Systems Interconnection (OSI) reference model proposed
by the International Standards Organization (ISO).

The types of services performed and protocols used at each layer within the TCP/IP model are
described in more detail in the following table.

Layer: Application
Description: Defines TCP/IP application protocols and how host programs interface with
transport layer services to use the network.
Protocols: HTTP, Telnet, FTP, TFTP, SNMP, DNS, SMTP, X Windows, other application protocols

Layer: Transport
Description: Provides communication session management between host computers. Defines the
level of service and status of the connection used when transporting data.
Protocols: TCP, UDP, RTP

Layer: Internet
Description: Packages data into IP datagrams, which contain source and destination address
information that is used to forward the datagrams between hosts and across networks.
Performs routing of IP datagrams.
Protocols: IP, ICMP, ARP, RARP

Layer: Network interface
Description: Specifies details of how data is physically sent through the network, including
how bits are electrically signaled by hardware devices that interface directly with a network
medium, such as coaxial cable, optical fiber, or twisted-pair copper wire.
Protocols: Ethernet, Token Ring, FDDI, X.25, Frame Relay, RS-232, v.35

Bridges
These operate in both the physical and data link layers of LANs of the same type. They divide a
larger network into smaller segments. They contain logic that allows them to keep the traffic for
each segment separate; thus they are repeaters that relay a frame only to the side of the segment
containing the intended recipient, and they control congestion.

Error Detection

Data can be corrupted during transmission. For reliable communication, errors must be
detected and corrected. Error detection uses the concept of redundancy, which means adding
extra bits for detecting errors at the destination.
The common error detection methods are:
a. Vertical Redundancy Check (VRC)
b. Longitudinal Redundancy Check (LRC)
c. Cyclic Redundancy Check (CRC)
d. Checksum
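VRC, the simplest of these methods, appends a single parity bit to each data unit; a sketch using even parity (the data word is illustrative):

```python
def add_parity(bits):
    """VRC sender side: append an even-parity bit so the transmitted
    unit always has an even number of 1s."""
    return bits + str(bits.count("1") % 2)

def check_parity(bits_with_parity):
    """VRC receiver side: an odd count of 1s reveals a single-bit error."""
    return bits_with_parity.count("1") % 2 == 0

word = add_parity("1101001")
print(check_parity(word))          # True: no error

corrupted = word[:-1] + ("0" if word[-1] == "1" else "1")
print(check_parity(corrupted))     # False: single-bit error detected
```

The parity bit is pure redundancy in the sense defined below: it carries no new data, existing only so the receiver can compare.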
Redundancy
The technique of including extra information in the transmission solely for the purpose of
comparison is called redundancy.

Encoder
A device or program that uses predefined algorithms to encode, or compress audio or video data
for storage or transmission use. Example: a circuit that is used to convert between digital video
and analog video.

Decoder
A device or program that translates encoded data into its original format (e.g. it decodes the
data). The term is often used in reference to MPEG-2 video and sound data, which must be
decoded before it is output.

Framing
Framing in the data link layer separates a message from one source to a destination, or from
other messages to other destinations, by adding a sender address and a destination address. The
destination address defines where the packet has to go and the sender address helps the
recipient acknowledge the receipt.

Automatic Repeat Request (ARQ)


Error control is both error detection and error correction. It allows the receiver to inform the
sender of any frames lost or damaged in transmission and coordinates the retransmission of
those frames by the sender. In the data link layer, the term error control refers primarily to
methods of error detection and retransmission. Error control in the data link layer is often
implemented simply: Any time an error is detected in an exchange, specified frames are
retransmitted. This process is called automatic repeat request (ARQ).

Stop-and-Wait Protocol
In Stop and wait protocol, sender sends one frame, waits until it receives confirmation from the
receiver (okay to go ahead), and then sends the next frame.

Piggy Backing
A technique called piggybacking is used to improve the efficiency of the bidirectional protocols.
When a frame is carrying data from A to B, it can also carry control information about arrived
(or lost) frames from B; when a frame is carrying data from B to A, it can also carry control
information about the arrived (or lost) frames from A.

Transmission technology
(i) Broadcast and (ii) point-to-point
Point-to-Point Protocol
A communications protocol used to connect computers to remote networking services including
Internet service providers.

Difference between the communication and transmission


Transmission is a physical movement of information and concern issues like bit polarity,
synchronization, clock etc.
Communication means the meaning-full exchange of information between two communication
media.
Data Exchange. Possible ways:
(i) Simplex (ii) Half-duplex (iii) Full-duplex

RAID: A method for providing fault tolerance by using multiple hard disk drives
Attenuation: The degeneration of a signal over distance on a network cable is called attenuation.
What is MAC address?
The address for a device as it is identified at the Media Access Control (MAC) layer in the network
architecture. MAC address is usually stored in ROM on the network adapter card and is unique.

What is the range of addresses in the classes of internet addresses?


Class A - 0.0.0.0 - 127.255.255.255
Class B - 128.0.0.0 - 191.255.255.255
Class C - 192.0.0.0 - 223.255.255.255
Class D - 224.0.0.0 - 239.255.255.255
Class E - 240.0.0.0 - 247.255.255.255
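Because the class of an address follows from its first octet alone, a classifier is a short sketch (function name is illustrative; ranges per the classful addressing scheme):

```python
def ip_class(address):
    """Classify an IPv4 address by its first octet under classful addressing."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(ip_class("10.0.0.1"))      # A
print(ip_class("172.16.0.1"))    # B
print(ip_class("192.168.1.1"))   # C
print(ip_class("224.0.0.5"))     # D
```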

Topologies for networks


BUS topology: In this each computer is directly connected to primary network cable in a single line.
Advantages: Inexpensive, easy to install, simple to understand, easy to extend.

STAR topology: In this all computers are connected using a central hub.
Advantages: Can be inexpensive, easy to install and reconfigure and easy to trouble shoot
physical problems.

RING topology: In this all computers are connected in loop.


Advantages: All computers have equal access to network media, installation can be simple, and
signal does not degrade as much as in other topologies because each computer regenerates it.

Mesh network
A network in which there are multiple network-links between computers to provide multiple
paths for data to travel.

Baseband and Broadband transmission


In a baseband transmission, the entire bandwidth of the cable is consumed by a single signal. In
broadband transmission, signals are sent on multiple frequencies, allowing multiple signals to
be sent simultaneously.

5-4-3 Rule
In an Ethernet network, between any two points on the network there can be no more than five
network segments or four repeaters, and of those five segments only three can be populated.

Traffic Shaping
One of the main causes of congestion is that traffic is often bursty. If hosts could be made to
transmit at a uniform rate, congestion would be less common. Another open-loop method to
help manage congestion is forcing packets to be transmitted at a more predictable rate. This
is called traffic shaping.

Silly Window Syndrome


It is a problem that can ruin TCP performance. This problem occurs when data are passed to the
sending TCP entity in large blocks, but an interactive application on the receiving side reads
the data 1 byte at a time.

Local Area Networks Local area networks (LANs) are used to connect networking devices that
are in a very close geographic area, such as a floor of a building, a building itself, or a campus
environment.

Wide Area Networks Wide area networks (WANs) are used to connect LANs together. Typically,
WANs are used when the LANs that must be connected are separated by a large distance.
Metropolitan Area Networks: A metropolitan area network (MAN) is a hybrid between a LAN
and a WAN.

Intranet
An intranet is basically a network that is local to a company. In other words, users from within
this company can find all of their resources without having to go outside of the company. An
intranet can include LANs, private WANs and MANs.

Extranet
An extranet is an extended intranet, where certain internal services are made available to known
external users or external business partners at remote locations.

Internet
An internet is used when unknown external users need to access internal resources in your
network. In other words, your company might have a web site that sells various products, and
you want any external user to be able to access this service.

VPN
A virtual private network (VPN) is a special type of secured network. A VPN is used to provide a
secure connection across a public network, such as an internet. Extranets typically use a VPN to
provide a secure connection between a company and its known external users or offices.
Authentication is provided to validate the identities of the two peers. Confidentiality provides
encryption of the data to keep it private from prying eyes. Integrity is used to ensure that the
data sent between the two devices or sites has not been tampered with.

ICMP
ICMP is Internet Control Message Protocol, a network layer protocol of the TCP/IP suite used by
hosts and gateways to send notification of datagram problems back to the sender. It uses the
echo test / reply to test whether a destination is reachable and responding. It also handles both
control and error messages.

OSI Model

OSI stands for Open Systems Interconnection. It has been developed by ISO –
‘International Organization of Standardization‘, in the year 1974. It is a 7 layer
architecture with each layer having specific functionality to perform. All these 7 layers
work collaboratively to transmit the data from one person to another across the globe.

1. Physical Layer (Layer 1) :

The lowest layer of the OSI reference model is the physical layer. It is responsible for the
actual physical connection between the devices. The physical layer contains information
in the form of bits. When receiving data, this layer gets the received signal, converts it
into 0s and 1s, and sends them to the Data Link layer, which puts the frame back together.

The functions of the physical layer are :


1. Bit synchronization: The physical layer provides the synchronization of the bits
by providing a clock. This clock controls both sender and receiver thus providing
synchronization at bit level.
2. Bit rate control: The Physical layer also defines the transmission rate i.e. the
number of bits sent per second.
3. Physical topologies: Physical layer specifies the way in which the different
devices/nodes are arranged in a network, i.e. bus, star or mesh topology.
4. Transmission mode: Physical layer also defines the way in which the data flows
between the two connected devices. The various transmission modes possible are:
Simplex, half-duplex and full-duplex.
Hub, Repeater, Modem, Cables are Physical Layer devices.
Network Layer, Data Link Layer and Physical Layer are also known as Lower
Layers or Hardware Layers.

2. Data Link Layer (DLL) (Layer 2):

The data link layer is responsible for the node to node delivery of the message. The main
function of this layer is to make sure data transfer is error free from one node to
another, over the physical layer. When a packet arrives in a network, it is the
responsibility of DLL to transmit it to the Host using its MAC address.

Data Link Layer is divided into two sub layers :


1. Logical Link Control (LLC)
2. Media Access Control (MAC)
The packet received from Network layer is further divided into frames depending on the
frame size of NIC(Network Interface Card). DLL also encapsulates Sender and
Receiver’s MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP(Address Resolution
Protocol) request onto the wire asking “Who has that IP address?” and the destination
host will reply with its MAC address.

The functions of the data Link layer are :


1. Framing: Framing is a function of the data link layer. It provides a way for a
sender to transmit a set of bits that are meaningful to the receiver. This can be
accomplished by attaching special bit patterns to the beginning and end of the
frame.
2. Physical addressing: After creating frames, Data link layer adds physical
addresses (MAC address) of sender and/or receiver in the header of each frame.
3. Error control: Data link layer provides the mechanism of error control in which
it detects and retransmits damaged or lost frames.
4. Flow Control: The data rate must be constant on both sides; otherwise the data may
get corrupted. Flow control therefore coordinates the amount of data that can be sent
before receiving an acknowledgement.
5. Access control: When a single communication channel is shared by multiple
devices, MAC sub-layer of data link layer helps to determine which device has
control over the channel at a given time.
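The framing function described above can be sketched with byte stuffing, one common way of attaching special byte patterns to the beginning and end of a frame. (A minimal illustration; the FLAG/ESC values follow HDLC-style conventions, and real link layers also add checksums.)

```python
FLAG, ESC = b'\x7e', b'\x7d'  # frame delimiter and escape byte (HDLC-style values)

def frame(payload: bytes) -> bytes:
    """Wrap a payload in FLAG bytes, escaping any FLAG/ESC inside it.
    ESC bytes must be escaped first, or the FLAG escapes would be doubled."""
    stuffed = payload.replace(ESC, ESC + ESC).replace(FLAG, ESC + FLAG)
    return FLAG + stuffed + FLAG

def deframe(data: bytes) -> bytes:
    """Strip the FLAG delimiters and undo the escaping."""
    body = data[1:-1]
    return body.replace(ESC + FLAG, FLAG).replace(ESC + ESC, ESC)

msg = b'hello\x7eworld'          # payload that happens to contain a FLAG byte
assert deframe(frame(msg)) == msg  # round trip recovers the payload
```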

A packet in the Data Link layer is referred to as a Frame.


Data Link layer is handled by the NIC (Network Interface Card) and device drivers of
host machines.
Switch & Bridge are Data Link Layer devices.

3. Network Layer (Layer 3):

Network layer works for the transmission of data from one host to the other located in
different networks. It also takes care of packet routing, i.e. selection of the shortest path
to transmit the packet from the number of routes available. The sender's and receiver's IP
addresses are placed in the header by the network layer.

The functions of the Network layer are:


1. Routing: The network layer protocols determine which route is suitable from
source to destination. This function of network layer is known as routing.
2. Logical Addressing: In order to identify each device on an internetwork uniquely,
the network layer defines an addressing scheme. The sender's and receiver's IP addresses
are placed in the header by the network layer. Such an address distinguishes each device
uniquely and universally.

A segment in the Network layer is referred to as a Packet.


Network layer is implemented by networking devices such as routers.
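The routing function above, selecting a suitable path from source to destination, is classically solved with shortest-path algorithms such as Dijkstra's, which link-state routing protocols like OSPF use internally. A minimal sketch over a made-up topology with link costs:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over a weighted graph given as
    {node: {neighbour: link_cost}}; returns (total_cost, path)."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)  # cheapest frontier node first
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float('inf'), []

# Hypothetical four-router topology:
net = {'A': {'B': 1, 'C': 4}, 'B': {'C': 2, 'D': 5}, 'C': {'D': 1}, 'D': {}}
print(shortest_path(net, 'A', 'D'))  # (4, ['A', 'B', 'C', 'D'])
```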

4. Transport Layer (Layer 4):


Transport layer provides services to application layer and takes services from network
layer. The data in the transport layer is referred to as Segments. It is responsible for the
End to End delivery of the complete message. Transport layer also provides the
acknowledgment of the successful data transmission and re-transmits the data if an
error is found.

• At sender’s side:
Transport layer receives the formatted data from the upper layers,
performs Segmentation and also implements Flow & Error control to ensure
proper data transmission. It also adds Source and Destination port numbers in its header
and forwards the segmented data to the Network Layer.
Note: The sender needs to know the port number associated with the receiver's
application.
Generally, this destination port number is configured, either by default or manually. For
example, when a web browser makes a request to a web server, it typically uses port
number 80, because this is the default port assigned to web traffic. Many
applications have default ports assigned.

• At receiver’s side:
Transport Layer reads the port number from its header and forwards the Data which it
has received to the respective application. It also performs sequencing and reassembling
of the segmented data.
The functions of the transport layer are:
1. Segmentation and Reassembly: This layer accepts the message from the
(session) layer and breaks the message into smaller units. Each of the segments
produced has a header associated with it. The transport layer at the destination
station reassembles the message.
2. Service Point Addressing: In order to deliver the message to correct process,
transport layer header includes a type of address called service point address or
port address. Thus, by specifying this address, transport layer makes sure that the
message is delivered to the correct process.
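Segmentation and reassembly can be sketched in a few lines: the sender numbers each segment, and the receiver uses the sequence numbers to rebuild the message even when segments arrive out of order. (A toy illustration of the idea, not a real TCP implementation.)

```python
def segment(message: bytes, mss: int):
    """Split a message into numbered segments of at most `mss` bytes,
    mimicking the transport layer's segmentation step."""
    return [(seq, message[i:i + mss])
            for seq, i in enumerate(range(0, len(message), mss))]

def reassemble(segments):
    """Receiver side: re-order segments by sequence number and
    stitch the original message back together."""
    return b''.join(data for _, data in sorted(segments))

msg = b'transport layer demo payload'
segs = segment(msg, 5)
segs.reverse()                   # simulate out-of-order arrival
assert reassemble(segs) == msg   # sequencing restores the message
```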

The services provided by transport layer:


1. Connection Oriented Service: It is a three-phase process which includes
– Connection Establishment
– Data Transfer
– Termination / disconnection
In this type of transmission, the receiving device sends an acknowledgment back
to the source after a packet or group of packets is received. This type of transmission
is reliable and secure.
2. Connectionless Service: It is a one-phase process and includes only Data Transfer.
In this type of transmission, the receiver does not acknowledge receipt of a packet.
This approach allows for much faster communication between devices. Connection
oriented Service is more reliable than connection less Service.

Data in the Transport Layer is called Segments.


The transport layer is operated by the Operating System. It is a part of the OS and
communicates with the Application Layer by making system calls. The Transport Layer is
called the heart of the OSI model.
5. Session Layer (Layer 5):

This layer is responsible for the establishment of connections, maintenance of sessions,
and authentication, and also ensures security.
The functions of the session layer are:
1. Session establishment, maintenance and termination: The layer allows
the two processes to establish, use and terminate a connection.
2. Synchronization: This layer allows a process to add checkpoints, considered
synchronization points, into the data. These synchronization points
help to identify errors so that the data is re-synchronized properly, ends of
messages are not cut prematurely, and data loss is avoided.
3. Dialog Controller: The session layer allows two systems to start communication
with each other in half-duplex or full-duplex.
The top 3 layers (Session, Presentation and Application) are integrated as a single layer in
the TCP/IP model, the “Application Layer”.
Implementation of these 3 layers is done by the network application itself. These are
also known as Upper Layers or Software Layers.

SCENARIO:
Let’s consider a scenario where a user wants to send a message through some Messenger
application running in a browser. The “Messenger” here acts as the application layer,
which provides the user with an interface to create the data. This message or so-called
Data is compressed, encrypted (if it is secure data) and converted into bits (0s and 1s)
so that it can be transmitted.

6. Presentation Layer (Layer 6):

Presentation layer is also called the Translation layer. The data from the application
layer is extracted here and manipulated as per the required format to transmit over the
network.
The functions of the presentation layer are:
1. Translation: For example, ASCII to EBCDIC.
2. Encryption/ Decryption: Data encryption translates the data into another form
or code. The encrypted data is known as the cipher text and the decrypted data is
known as plain text. A key value is used for encrypting as well as decrypting data.
3. Compression: Reduces the number of bits that need to be transmitted on the
network.
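The compression function is easy to demonstrate: lossless compression reduces the number of bits transmitted, and the receiving side's presentation layer restores the original. A sketch using Python's standard zlib module (the sample data is an arbitrary example):

```python
import zlib

# Repetitive data compresses well; the receiver restores the
# original with the inverse transform (lossless round trip).
text = b"the quick brown fox " * 50
compressed = zlib.compress(text)

print(len(text), "->", len(compressed))     # far fewer bytes on the wire
assert zlib.decompress(compressed) == text  # nothing is lost
```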

7. Application Layer (Layer 7):

At the very top of the OSI Reference Model stack of layers, we find Application layer
which is implemented by the network applications. These applications produce the data,
which has to be transferred over the network. This layer also serves as a window for the
application services to access the network and for displaying the received information to
the user.
Ex: Applications – Browsers, Skype Messenger, etc.
The Application Layer is also called the Desktop Layer.

The functions of the Application layer are:


1. Network Virtual Terminal
2. FTAM-File transfer access and management
3. Mail Services
4. Directory Services
The OSI model acts as a reference model and is not implemented on the Internet because of
its late arrival. The model currently in use is the TCP/IP model.
3.8 Cloud Computing
“The cloud” is a metaphor for the internet. Cloud computing provides on-demand access to virtualized IT
resources that can be shared by others or subscribed to by you. It provides an easy way to obtain
configurable resources from a shared pool. The pool consists of networks, servers,
storage, applications and services.

In basic terms, cloud computing is the phrase used to describe different scenarios in which
computing resource is delivered as a service over a network connection (usually, this is the
internet). Cloud computing is therefore a type of computing that relies on sharing a pool of
physical and/or virtual resources, rather than deploying local or personal hardware and
software. It is somewhat synonymous with the term ‘utility computing’ as users are able to tap
into a supply of computing resource rather than manage the equipment needed to generate it
themselves; much in the same way as a consumer tapping into the national electricity supply,
instead of running their own generator.

Cloud computing consists of 3 layers in the hierarchy, and these are as follows:

• Infrastructure as a Service (IaaS) provides cloud infrastructure in terms of
hardware like memory, processor speed etc.
• Platform as a Service (PaaS) provides a cloud application platform for
developers.
• Software as a Service (SaaS) provides cloud applications which are used by the
user directly without installing anything on the system. The application remains
on the cloud, where it can be saved and edited.

What resources are provided by Infrastructure as a Service?


Infrastructure as a Service provides the physical and virtual resources that are used to build a cloud.
This layer deals with the complexities of maintaining and deploying the services it provides.
The infrastructure here consists of servers, storage and other hardware systems.

IaaS providers such as AWS supply virtual server instances and storage, as well as application
program interfaces (APIs) that let users migrate workloads to a virtual machine (VM). Users
have an allocated storage capacity and can start, stop, access and configure the VM and storage as
desired. IaaS providers offer small, medium, large, extra-large, and memory- or compute-
optimized instances, in addition to customized instances, for various workload needs.

How important is Platform as a Service?


Platform as a Service is an important layer in cloud architecture. It is built on the infrastructure
model, which provides resources like computers, storage and network. This layer organizes and
operates the resources provided by the layer below. It is also responsible for providing
complete virtualization of the infrastructure layer, making it look like a single server and
keeping it hidden from the outside world.
In the PaaS model, providers host development tools on their infrastructures. Users access those
tools over the Internet using APIs, web portals or gateway software. PaaS is used for general
software development, such as in Java or Perl, and to create custom applications. A PaaS provider
often hosts the software after it's developed. Common PaaS offerings include Salesforce.com's
Force.com, AWS Elastic Beanstalk and Google App Engine.

What does Software as a Service provide?


Software as a Service is another layer of cloud computing, which provides cloud applications,
as Google does with Google Docs, letting users create documents and save them on the cloud.
It allows applications to be used on the fly without adding or installing
any extra software components, and provides built-in software to create a wide variety of
applications and documents and share them with other people online.

Microsoft Office 365 is a SaaS offering for productivity software and email services. Users can
access SaaS applications and services from any location using a computer or mobile device that
has Internet access.

Cloud computing supports many deployment models and they are as follows:
• Private Cloud

Organizations choose to build a private cloud to keep strategic, operational and other
matters to themselves, and they feel more secure doing so. It is a complete platform which is fully
functional and can be owned, operated and restricted to a single organization or industry.
More organizations have moved to private clouds due to security concerns. Virtual private
clouds operated by a hosting company are also used.

• Public Cloud

These are platforms which are public, meaning open to people for use and deployment; for
example, Google and Amazon. They focus on a few layers such as cloud applications,
infrastructure provision and platform markets.

• Hybrid Clouds

It is the combination of public and private clouds. It is the most robust approach to implementing cloud
architecture as it includes the functionalities and features of both worlds. It allows organizations
to create their own cloud and also to hand control over to someone else.

Use of API’s in cloud services:


API stands for Application Programming Interface. APIs are very useful in cloud platforms as they
allow easy integration of cloud services with a system, removing the need to write full-fledged
programs. An API provides the instructions for communication between one or more applications,
allows creating applications with ease, and links cloud services with other systems.
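As an illustration of how a call to a cloud API is typically assembled, the sketch below builds the pieces of an HTTP request without sending it. (The endpoint, token and parameter names are hypothetical placeholders, not any real provider's API.)

```python
import json

def build_api_request(action: str, params: dict) -> dict:
    """Assemble the pieces of a hypothetical cloud REST API call:
    method, URL, auth headers and a JSON body."""
    return {
        "method": "POST",
        "url": f"https://cloud.example.com/v1/{action}",  # placeholder endpoint
        "headers": {"Authorization": "Bearer <token>",    # placeholder token
                    "Content-Type": "application/json"},
        "body": json.dumps(params),
    }

req = build_api_request("instances", {"size": "small", "image": "ubuntu"})
print(req["url"])  # https://cloud.example.com/v1/instances
```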
Security Aspects
Security is one of the major aspects of any application and service used by users.
Companies and organizations remain much concerned with the security provided by the cloud.
There are many levels of security which have to be provided within a cloud environment, such as:

• Identity management: It authorizes the application service or hardware
component to be used by authorized users.
• Access control: Permissions have to be provided to users so that they can
control the access of other users entering the cloud environment.
• Authorization and Authentication: Provision should be made to allow only
authorized and authenticated people to access and change the applications and
data.

Traditional data centers vs. Cloud


• The cost of a traditional data center is higher, due to heating issues and other
hardware/software related issues, but this is not the case with a cloud
computing infrastructure.
• The cloud scales when demand increases. In a data center, most of the cost is spent
on maintenance, whereas a cloud platform requires minimal maintenance and does
not need a very expert hand to handle it.

VPN
VPN stands for virtual private network. It is a technology which protects the security of
data during transport in a cloud environment. A VPN allows an organization to use a public
network as a private network and to transfer files and other resources over it.
A VPN involves two important things:

 Firewall: It acts as a barrier between the public network and the private network.
It filters the messages that are getting exchanged between the networks. It also
protects the network from any malicious activity being done on the network.
• Encryption: It is used to protect sensitive data from hackers and
other eavesdroppers who remain active trying to obtain the data. A message is
encrypted with a key, and only a matching key can decrypt it.
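The key-matching idea can be illustrated with a toy symmetric cipher: the same key that encrypts also decrypts. (This XOR scheme is for illustration only; real VPNs use vetted ciphers such as AES.)

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying it twice with the same key restores the plaintext.
    Illustrative only -- not secure for real use."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = b'payroll.csv contents'
key = b'shared-key'
ciphertext = xor_cipher(secret, key)          # encrypt
assert xor_cipher(ciphertext, key) == secret  # the matching key decrypts
```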

Few platforms which are used for large scale cloud computing:
There are many platforms available for cloud computing, but the main platforms used to model
large-scale distributed computing are as follows:

• MapReduce: It is software built by Google to support distributed
computing. It is a framework that works on large sets of data. It utilizes cloud
resources and distributes the data to several other computers, known as clusters.
It has the capability to deal with both structured and unstructured data.
• Apache Hadoop: It is an open-source distributed computing platform written in
Java. It creates a pool of computers, each running the Hadoop Distributed File
System (HDFS), distributes data elements across the cluster, and keeps replicated
copies of data blocks for fault tolerance.
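The MapReduce model can be sketched with its canonical example, word count: a map phase emits (key, value) pairs and a reduce phase aggregates them per key. (A single-machine illustration of the programming model, not the distributed framework itself.)

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    """Map step: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce step: sum the counts for each distinct word (key)."""
    totals = defaultdict(int)
    for word, n in pairs:
        totals[word] += n
    return dict(totals)

docs = ["the cloud stores the data", "the cluster processes data"]
pairs = chain.from_iterable(map_phase(d) for d in docs)  # shuffle stage, simplified
counts = reduce_phase(pairs)
print(counts)  # {'the': 3, 'cloud': 1, 'stores': 1, 'data': 2, 'cluster': 1, 'processes': 1}
```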

Examples of large cloud providers and their databases:


Cloud computing has many providers and is supported on a large scale. Some providers with
their databases are as follows:
• Google Bigtable: It is a distributed storage system built around a big table that is split into
rows and columns; MapReduce is used for modifying and generating the data.
• Amazon SimpleDB: It is a web service that is used for indexing and querying the data. It
allows the storing, processing and creating query on the data set within the cloud platform. It
has a system that automatically indexes the data.
• Cloud based SQL: Introduced by Microsoft and based on the SQL database, it provides
data storage through a relational model in the cloud. The data can be accessed from the
cloud using a client application.

Some open source cloud computing platform databases:


Various open-source databases have been developed to support cloud computing
platforms, including the following:
• MongoDB: It is an open-source, schema-free, document-oriented database
system. It is written in C++ and stores data in collections of documents,
providing high storage capacity.
• CouchDB: It is an open-source, document-oriented database system developed by the
Apache project and used to store data efficiently.
 LucidDB: It is the database made in Java/C++ for data warehousing. It provides
features and functionalities to maintain data warehouse.

Eucalyptus in cloud computing environment


Eucalyptus stands for “Elastic Utility Computing Architecture for Linking Your Programs to
Useful Systems” and provides an open source software infrastructure to implement clusters in
cloud computing platforms. It is used to build private, public and hybrid clouds. It can also
turn your own datacenter into a private cloud and allow you to extend the functionality to
many other organizations. Eucalyptus provides APIs to be used with web services to cope
with the demand for resources in private clouds.

Utility Computing
Utility computing offers the user a ‘pay per use’ model, which means that whatever they are
using, they pay only for that. It is a plug-in that needs to be managed by organizations when
deciding what type of services are to be deployed from the cloud. Utility computing allows users to
choose and implement services as they see fit. Most organizations go for a hybrid strategy that
combines internally delivered services with hosted or outsourced services.

Is there any difference in cloud computing and computing for mobiles?


Mobile cloud computing uses the same concept, but adds mobile devices. Cloud
computing comes into action when a task or data is kept on the internet rather than on individual
devices, providing users on-demand access to the data they need to retrieve.
Applications run on a remote server and are then delivered to the user, who can access, store and
manage them from a mobile platform.
3.9 Mobile Computing
Mobile computing is human–computer interaction by which a computer is expected to be
transported during normal usage. It is a technology that allows transmission of data, via a
computer, without having to be connected to a fixed physical link. Mobile computing involves
mobile communication, mobile hardware, and mobile software.

Mobile voice communication is widely established throughout the world and has had a very rapid
increase in the number of subscribers to the various cellular networks over the last few years. An
extension of this technology is the ability to send and receive data across these cellular networks.
This is the principle of mobile computing.

Mobile data communication has become a very important and rapidly evolving technology as it
allows users to transmit data from remote locations to other remote or fixed locations. This proves
to be the solution to the biggest problem of business people on the move - mobility.

Understanding Existing Cellular Network Architecture


Mobile telephony took off with the introduction of cellular technology, which allowed the efficient
utilization of frequencies, enabling the connection of a large number of users. During the 1980s,
analogue technology was used. In the 1990s, digital cellular technology was introduced, with GSM
(Global System for Mobile Communications) being the most widely accepted system around the world.

A cellular network consists of mobile units linked to switching equipment, which
interconnects the different parts of the network and allows access to the fixed Public Switched
Telephone Network (PSTN). The technology is hidden from view; it's incorporated in a number of
transceivers called Base Stations (BS). Every BS covers a given area or cell - hence the name
cellular communications. BSs communicate through a so called Mobile Switching Centre (MSC)
which is the heart of a cellular radio system. It is responsible for routing, or switching, calls from
the originator to the destination. The MSC may be connected to other MSCs on the same network or
to the PSTN.

For GSM, the 890–915 MHz range is used for transmission and 935–960 MHz for reception. The DCS
(Digital Communication System) technology uses frequencies in the 1800 MHz range, while PCS uses
the 1900 MHz range.
When a Mobile Station (MS) becomes 'active' it registers with the nearest BS. The corresponding
MSC stores the information about that MS and its position. This information is used to direct
incoming calls to the MS.
If during a call the MS moves to an adjacent cell, then a change of frequency will necessarily occur,
since adjacent cells never use the same channels. This procedure is called handover and is the key
to mobile communications.

Data Communication
Data Communications is the exchange of data using existing communication networks. The term
data covers a wide range of applications including File Transfer (FT), interconnection between
Wide-Area-Networks (WAN), facsimile (fax), electronic mail, access to the internet and the World
Wide Web (WWW).

Data Communications have been achieved using a variety of networks such as PSTN, leased-lines
and more recently ISDN (Integrated Services Digital Network) and ATM (Asynchronous Transfer
Mode)/Frame Relay. These networks are partly or totally analogue or digital, using technologies
such as circuit - switching, packet - switching etc.
Circuit switching implies that data from one user (sender) to another (receiver) has to follow a
pre-specified path. If a link on the path is busy, the message cannot be redirected, a property
which causes many delays.
Packet switching is an attempt to make better utilization of the existing network by splitting the
message to be sent into packets. Each packet contains information about the sender, the receiver,
the position of the packet in the message, as well as part of the actual message. There are many
protocols defining the way packets can be sent from the sender to the receiver. The most widely
used are the Virtual Circuit Switching system, which implies that packets have to be sent through
the same path, and the Datagram system, which allows packets to be sent along various paths
depending on network availability. Packet switching requires more equipment at the receiver,
where reconstruction of the message has to be done.
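The datagram-style packet switching described above can be sketched as follows: each packet carries the sender, the receiver and its position in the message, so the receiver can reconstruct the message regardless of arrival order. (A toy illustration.)

```python
def packetize(sender, receiver, message, size=8):
    """Split a message into datagrams carrying the header fields the
    text describes: sender, receiver, and position in the message."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"src": sender, "dst": receiver, "pos": n, "data": c}
            for n, c in enumerate(chunks)]

def rebuild(packets):
    """Receiver-side reconstruction: datagrams may arrive in any order,
    so sort by position before joining."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["pos"]))

pkts = packetize("alice", "bob", "a message over a packet-switched net")
pkts.reverse()  # datagram routing may deliver packets out of order
assert rebuild(pkts) == "a message over a packet-switched net"
```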

The introduction of mobility in data communications required a move from the Public Switched
Data Network (PSDN) to other networks like the ones used by mobile phones. PCSI has come up
with an idea called CDPD (Cellular Digital Packet Data) technology which uses the existing mobile
network (frequencies used for mobile telephony).

Mobility implemented in data communications has a significant difference compared to voice
communications. Mobile phones allow the user to move around and talk at the same time; the brief
loss of the connection during the handover is undetectable by the user. When it comes to data, such
a loss is not only detectable but causes huge distortion to the message. Therefore, data can be
transmitted from a mobile station under the assumption that it remains stable or within the same cell.

CDPD Technology
Today, the mobile data communications market is becoming dominated by a technology called
CDPD.
There are other alternatives to this technology namely Circuit Switched Cellular, Specialised Mobile
Radio and Wireless Data Networks. As can be seen from the table below the CDPD technology is
much more advantageous than the others.
CDPD's principle lies in the usage of the idle time in between existing voice signals that are being
sent across the cellular networks. The major advantage of this system is the fact that the idle time is
not chargeable and so the cost of data transmission is very low.
CDPD networks allow fixed or mobile users to connect to the network across a fixed link and a
packet switched system respectively. Fixed users have a fixed physical link to the CDPD network. In
the case of a mobile end user, the user can, if CDPD network facilities are non-existent, connect to
existing circuit switched networks and transmit data via these networks. This is known as Circuit
Switched CDPD (CS-CDPD).

Applications of Mobile Computing


The question that always arises when a business is thinking of buying a mobile computer is "Will it
be worth it?"
In many fields of work, the ability to keep on the move is vital in order to utilize time efficiently.
Efficient utilization of resources (i.e. staff) can mean substantial savings in transportation costs and
other non-quantifiable costs such as increased customer attention, impact of onsite maintenance
and improved intercommunication within the business.

The importance of Mobile Computers has been highlighted in many fields of which a few are
described below:

 For Estate Agents: Estate agents can work either at home or out in the field. With
mobile computers they can be more productive.
 Emergency Services: Ability to receive information on the move is vital where the
emergency services are involved. Information regarding the address, type and other
details of an incident can be dispatched quickly, via a CDPD system using mobile
computers, to one or several appropriate mobile units which are in the vicinity of
the incident. Here the reliability and security implemented in the CDPD system
would be of great advantage.
 In courts: Defense counsels can take mobile computers into court. When the opposing
counsel references a case with which they are not familiar, they can use the computer to
get direct, real-time access to online legal database services, where they can gather
information on the case and related precedents.
 In companies: Managers can use mobile computers in, say, critical presentations to
major customers. They can access the latest market share information. At a small
recess, they can revise the presentation to take advantage of this information.
 Stock Information Collation/Control: In environments where access to stock is very
limited, the use of small portable electronic databases accessed via a mobile
computer would be ideal.
 Credit Card Verification: At Point of Sale (POS) terminals in shops and
supermarkets, when customers use credit cards for transactions, the
intercommunication required between the bank central computer and the POS
terminal, in order to effect verification of the card usage, can take place quickly and
securely over cellular channels using a mobile computer unit. This can speed up the
transaction process and relieve congestion at the POS terminals.
 Electronic Mail/Paging: Usage of a mobile unit to send and read emails is a very
useful asset for any business individual, as it allows him/her to keep in touch with
colleagues as well as with any urgent developments that may affect their work.

Difference between Cloud Computing and Mobile Computing


Both cloud computing and mobile computing have to do with using wireless systems to transmit
data. Beyond this, these two terms are quite different.

 Cloud computing relates to the specific design of new technologies and services that
allow data to be sent over distributed networks, through wireless connections, to a
remote secure location that is usually maintained by a vendor. Cloud service
providers usually serve multiple clients. They arrange access between the client's
local or closed networks, and their own data storage and data backup systems. That
means that the vendors can intake data, which is sent to them, and store it securely,
while delivering services back to a client through these carefully maintained
connections.
 Mobile computing relates to the emergence of new devices and interfaces. Smartphones and
tablets are mobile devices that can do a lot of what traditional desktop and laptop computers do.
Mobile computing functions include accessing the Internet through browsers, supporting
multiple software applications with a core operating system, and sending and receiving different
types of data. The mobile operating system, as an interface, supports users by providing
intuitive icons, familiar search technologies and easy touch-screen commands.
 While mobile computing is largely a consumer-facing service, cloud computing is
something that is used by many businesses and companies. Individuals can also
benefit from cloud computing, but some of the most sophisticated and advanced
cloud computing services are aimed at enterprises.

For example, big businesses and even smaller operations use specific cloud computing services to
make different processes like supply-chain management, inventory handling, customer
relationships and even production more efficient. An emerging picture of the difference between
cloud computing and mobile computing involves the emergence of smart phone and tablet
operating systems and, on the cloud end, new networking services that may serve these and other
devices.

3.10 Artificial Intelligence

Artificial Intelligence
Since the invention of computers or machines, their capability to perform various tasks has grown
exponentially. Humans have developed the power of computer systems in terms of their diverse
working domains, their increasing speed, and their reducing size with respect to time. A branch of
Computer Science named Artificial Intelligence pursues creating computers or machines as
intelligent as human beings.

What is Artificial Intelligence


According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering
of making intelligent machines, especially intelligent computer programs”.
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or
software think intelligently, in a manner similar to how intelligent humans think. AI is
accomplished by studying how the human brain thinks and how humans learn, decide, and work
while trying to solve a problem, and then using the outcomes of this study as a basis for
developing intelligent software and systems.

Philosophy of AI
While exploiting the power of computer systems, the curiosity of humans led them to wonder,
“Can a machine think and behave like humans do?”
Thus, the development of AI started with the intention of creating similar intelligence in machines
that we find and regard high in humans.

Goals of AI

• To Create Expert Systems - systems which exhibit intelligent behavior,
learn, demonstrate, explain, and advise their users.
• To Implement Human Intelligence in Machines - creating systems that
understand, think, learn, and behave like humans.
What Contributes to AI?
Artificial intelligence is a science and technology based on disciplines such as Computer Science,
Biology, Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is the development
of computer functions associated with human intelligence, such as reasoning, learning, and
problem solving.
Out of the following areas, one or multiple areas can contribute to build an intelligent system.

Programming Without and With AI


Programming without and with AI differs in the following ways:

1. Without AI: A computer program can answer only the specific questions it is meant to solve.
   With AI: A computer program can answer the generic questions it is meant to solve.

2. Without AI: Modification in the program leads to a change in its structure.
   With AI: AI programs can absorb new modifications by putting highly independent pieces of
   information together; hence one can modify even a minute piece of information of the program
   without affecting its structure.

3. Without AI: Modification is not quick and easy; it may lead to affecting the program adversely.
   With AI: Program modification is quick and easy.

What is an AI Technique?
In the real world, knowledge has some unwelcome properties:

 Its volume is huge, next to unimaginable.
 It is not well-organized or well-formatted.
 It keeps changing constantly.

An AI technique is a way to organize and use the knowledge efficiently in such a way that:

 It should be perceivable by the people who provide it.
 It should be easily modifiable to correct errors.
 It should be useful in many situations even though it is incomplete or inaccurate.

AI techniques elevate the speed of execution of the complex programs they are equipped with.

Applications of AI: AI has been dominant in various fields such as -

1. Gaming - AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe,
etc., where a machine can evaluate a large number of possible positions based on
heuristic knowledge.

2. Natural Language Processing - It is possible to interact with a computer that
understands natural language spoken by humans.
3. Expert Systems - These applications integrate machine, software, and special
information to impart reasoning and advising. They provide explanation and advice
to the users.
4. Vision Systems - These systems understand, interpret, and comprehend visual
input on the computer. For example,
 A spying aeroplane takes photographs, which are used to figure out spatial
information or map of the areas.
 Doctors use clinical expert system to diagnose the patient.
 Police use computer software that can recognize the face of a criminal against the stored
portrait made by a forensic artist.

5. Speech Recognition - Some intelligent systems are capable of hearing and
comprehending language in terms of sentences and their meanings while a human
talks to them. They can handle different accents, slang words, noise in the
background, change in a human’s voice due to cold, etc.

6. Handwriting Recognition - Handwriting recognition software reads the text
written on paper by a pen or on screen by a stylus. It can recognize the shapes of the
letters and convert them into editable text.

7. Intelligent Robots - Robots are able to perform the tasks given by a human. They
have sensors to detect physical data from the real world such as light, heat,
temperature, movement, sound, bump, and pressure. They have efficient
processors, multiple sensors and huge memory to exhibit intelligence. In addition,
they are capable of learning from their mistakes and can adapt to a new environment.
4. Chemical Engineering
1 Fluid flow Phenomena:
1.1 Boundary layer:
A boundary layer is the layer of fluid in the immediate vicinity of a bounding surface where the
effects of viscosity are significant.
• Laminar Boundary Layer Flow
The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains
swirls or "eddies." The laminar flow creates less skin friction drag than the turbulent flow,
but is less stable. As the flow continues back from the leading edge, the laminar boundary
layer increases in thickness.
• Turbulent Boundary Layer Flow
At some distance back from the leading edge, the smooth laminar flow breaks down and
transitions to a turbulent flow. The low energy laminar flow, however, tends to break down
more suddenly than the turbulent layer.

1.2 Types of fluid:


• Newtonian fluid:
In continuum mechanics, a Newtonian fluid is a fluid in which the viscous stresses arising
from its flow, at every point, are linearly proportional to the local strain rate—the rate of
change of its deformation over time.
• Non-Newtonian fluid:
A non-Newtonian fluid is one whose viscous stresses are not linearly proportional to the
local strain rate; its apparent viscosity changes with the applied shear. Examples include
paints, polymer solutions, and ketchup.
1.3 Viscosity:
Viscosity is a property arising from collisions between neighboring particles in a fluid that are
moving at different velocities. When the fluid is forced through a tube, the particles which comprise
the fluid generally move more quickly near the tube's axis and more slowly near its walls: therefore
some stress (such as a pressure difference between the two ends of the tube) is needed to overcome
the friction between particle layers and keep the fluid moving. For the same velocity pattern, the
stress required is proportional to the fluid's viscosity.

1.4 Types of flow:


Laminar flow: In fluid dynamics, laminar flow (or streamline flow) occurs when a fluid flows in
parallel layers, with no disruption between the layers. At low velocities, the fluid tends to flow
without lateral mixing, and adjacent layers slide past one another like playing cards. There are no
cross-currents perpendicular to the direction of flow, nor eddies or swirls of fluids.
Turbulent flow: Flows at Reynolds numbers larger than 5000 are typically (but not necessarily)
turbulent, while those at low Reynolds numbers usually remain laminar.
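The Reynolds-number criterion above can be sketched numerically. The fluid properties and pipe size below are illustrative assumptions; the turbulent threshold matches the 5000 quoted above, and the laminar cutoff of 2300 is the usual textbook convention.

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu for flow in a circular pipe."""
    return density * velocity * diameter / viscosity

def classify_flow(re, laminar_limit=2300.0, turbulent_limit=5000.0):
    """Classify pipe flow. The turbulent threshold matches the value
    quoted in the text; the laminar cutoff is an assumed convention."""
    if re < laminar_limit:
        return "laminar"
    if re > turbulent_limit:
        return "turbulent"
    return "transitional"

# Water at about 20 degC (rho ~ 998 kg/m^3, mu ~ 1.0e-3 Pa*s)
# flowing at 1 m/s through a 50 mm pipe -- illustrative values.
re = reynolds_number(density=998.0, velocity=1.0, diameter=0.05, viscosity=1.0e-3)
print(round(re), classify_flow(re))  # ~49900, turbulent
```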

1.5 Fully developed flow:


The entrance region refers to that portion of pipe until the velocity profile is fully developed. When
a fluid is entering a pipe at a uniform velocity, the fluid particles in the layer in contact with the
surface of the pipe come to a complete stop due to the no-slip condition. Due to viscosity of the fluid,
this layer in contact with the pipe surface, resists the motion of adjacent layers and slows them
down gradually. For the conservation of mass to hold true the velocity of middle layers of the fluid in
the pipe increases (since the layers of fluid near the pipe surface have reduced velocities). This
develops a velocity gradient across the cross section of the pipe.

1.6 Bernoulli’s principle:


Bernoulli's principle can be derived from the principle of conservation of energy. This states that,
in a steady flow, the sum of all forms of energy in a fluid along a streamline is the same at all
points on that streamline. This requires that the sum of kinetic energy, potential energy and
internal energy remains constant. Thus an increase in the speed of the fluid – implying an increase
in both its dynamic pressure and kinetic energy – occurs with a simultaneous decrease in (the sum
of) its static pressure, potential energy and internal energy. If the fluid is flowing out of a reservoir,
the sum of all forms of energy is the same on all streamlines because in a reservoir the energy per
unit volume (the sum of pressure and gravitational potential ρ g h) is the same everywhere.
Bernoulli's principle can also be derived directly from Newton's 2nd law. If a small volume of fluid
is flowing horizontally from a region of high pressure to a region of low pressure, then there is
more pressure behind than in front. This gives a net force on the volume, accelerating it along the
streamline.
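In symbols (using the same quantities as above, with the gravitational term written ρ g h), the constancy along a streamline for steady incompressible flow is usually written as:

```latex
p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{constant along a streamline}
```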

1.7 Stagnation point:


In fluid dynamics, a stagnation point is a point in a flow field where the local velocity of the fluid is
zero. Stagnation points exist at the surface of objects in the flow field, where the fluid is brought to
rest by the object. The Bernoulli equation shows that the static pressure is highest when the
velocity is zero and hence static pressure is at its maximum value at stagnation points. This static
pressure is called the stagnation pressure.

1.8 Stuffing boxes:


A gland is a general type of stuffing box, used to seal a rotating or reciprocating shaft against a
fluid. The most common example is in the head of a tap where the gland is usually packed with
string which has been soaked in tallow or similar grease. The gland nut allows the packing material
to be compressed to form a watertight seal and prevent water leaking up the shaft when the tap is
turned on. The gland at the rotating shaft of a centrifugal pump may be packed in a similar way
and graphite grease used to accommodate continuous operation. The linear seal around the piston
rod of a double acting steam piston is also known as a gland, particularly in marine applications

1.9 Gate valve:


A gate valve, also known as a sluice valve, is a valve that opens by lifting a round or rectangular
gate/wedge out of the path of the fluid. The distinct feature of a gate valve is the sealing surfaces
between the gate and seats are planar, so gate valves are often used when a straight-line flow of
fluid and minimum restriction is desired. The gate faces can form a wedge shape or they can be
parallel. Gate valves are primarily used to permit or prevent the flow of liquids, but typical gate
valves shouldn't be used for regulating flow, unless they are specifically designed for that purpose.

1.10 Globe valve:


A globe valve, different from ball valve, is a type of valve used for regulating flow in a pipeline,
consisting of a movable disk-type element and a stationary ring seat in a generally spherical body.
Globe valves are named for their spherical body shape with the two halves of the body being
separated by an internal baffle. This has an opening that forms a seat onto which a movable plug
can be screwed in to close (or shut) the valve.
1.11 NPSH:
Net Positive Suction Head (NPSH) is the margin between the pressure at the pump suction and the
liquid's vapour pressure; if it falls too low, the liquid flashes to vapour and the pump cavitates.
Vapour pressure is strongly dependent on temperature. Centrifugal pumps are particularly
vulnerable especially when pumping heated solution near the vapor pressure, whereas positive
displacement pumps are less affected by cavitation, as they are better able to pump two-phase
flow (the mixture of gas and liquid), however, the resultant flow rate of the pump will be
diminished because of the gas volumetrically displacing a disproportionate amount of liquid. Careful design is
required to pump high temperature liquids with a centrifugal pump when the liquid is near its
boiling point.
The violent collapse of the cavitation bubble creates a shock wave that can carve material from
internal pump components (usually the leading edge of the impeller) and creates noise often
described as "pumping gravel". Additionally, the inevitable increase in vibration can cause other
mechanical faults in the pump and associated equipment.
Using the above application of Bernoulli to eliminate the velocity term and local pressure terms in
the definition of NPSHA:
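The referenced equation is not reproduced in these notes. A commonly quoted form of the available NPSH, assuming a pump drawing from an open vessel at atmospheric pressure (an assumption here, with h_z the static head of liquid above the pump, h_f the friction head loss in the suction line, and p_v the vapour pressure), is:

```latex
\mathrm{NPSH_A} = \frac{p_{\mathrm{atm}}}{\rho g} + h_z - h_f - \frac{p_v}{\rho g}
```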

1.12 Reciprocating pump:
A reciprocating pump is a positive-displacement pump in which a piston or plunger moves back and
forth within a cylinder, drawing liquid in through a suction check valve on one stroke and forcing it
out through a discharge check valve on the other. Its delivery is pulsating but nearly independent of
the discharge pressure.

1.13 Rotary pump (Gear pump):


A gear pump uses the meshing of gears to pump fluid by displacement. They are one of the most
common types of pumps for hydraulic fluid power applications.
Gear pumps are also widely used in chemical installations to pump high viscosity fluids. There are
two main variations; external gear pumps which use two external spur gears, and internal gear
pumps which use external and internal spur gears (internal spur gear teeth face inwards, see
below). Gear pumps are positive displacement (or fixed displacement), meaning they pump a
constant amount of fluid for each revolution. Some gear pumps are designed to function as either a
motor or a pump.

1.14 Centrifugal pump:


Centrifugal pumps are used to transport fluids by the conversion of rotational kinetic energy to the
hydrodynamic energy of the fluid flow. The rotational energy typically comes from an engine or
electric motor. The fluid enters the pump impeller along or near to the rotating axis and is
accelerated by the impeller, flowing radially outward into a diffuser or volute chamber (casing),
from where it exits.
Common uses include water, sewage, petroleum and petrochemical pumping. The reverse function
of the centrifugal pump is a water turbine converting potential energy of water pressure into
mechanical rotational energy.
• Principle:
Like most pumps, a centrifugal pump converts mechanical energy from a motor to energy
of a moving fluid. A portion of the energy goes into kinetic energy of the fluid. Fluid enters
axially through eye of the casing, is caught up in the impeller blades, and is whirled
tangentially and radially outward until it leaves through all circumferential parts of the
impeller into the diffuser part of the casing. The fluid gains both velocity and pressure while
passing through the impeller. The doughnut-shaped diffuser, or scroll, section of the casing
decelerates the flow and further increases the pressure.
• Priming:
Typically, a liquid pump can't simply draw air. The feed line of the pump must first be filled
with the liquid that requires pumping. An operator must introduce liquid into the system to
initiate the pumping. This is called priming the pump. Loss of prime is usually due to
ingestion of air into the pump. The clearances and displacement ratios in pumps for liquids,
whether thin or more viscous, usually cannot displace air due to its compressibility.

1.15 Jet ejector:
A jet ejector has no moving parts: a high-pressure motive fluid (often steam) expands through a
nozzle to a high velocity, entrains the suction fluid in a mixing section, and the combined stream is
recompressed in a diffuser. Ejectors are widely used to produce vacuum, for example on evaporators
and condensers.


1.16 Flow meters:
• Venturi meter:
An equation for the drop in pressure due to the Venturi effect may be derived from a
combination of Bernoulli's principle and the continuity equation.
Using Bernoulli's equation in the special case of incompressible flows (such as the flow of
water or other liquid, or low-speed flow of gas), the theoretical pressure drop at the
constriction is given by:

p1 − p2 = (ρ/2) (v2² − v1²)

where ρ is the density of the fluid, v1 is the (slower) fluid velocity where the pipe is wider,
and v2 is the (faster) fluid velocity where the pipe is narrower. This assumes the flowing
fluid (or other substance) is not significantly compressible - even though pressure varies,
the density is assumed to remain approximately constant.
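A small numerical sketch of this relation; the pipe dimensions, fluid, and flow rate below are illustrative assumptions, with the two velocities obtained from continuity (v = Q/A):

```python
import math

def venturi_pressure_drop(density, flow_rate, d_wide, d_throat):
    """Theoretical p1 - p2 across a Venturi constriction:
    velocities from continuity (v = Q/A), then
    p1 - p2 = (rho/2) * (v2**2 - v1**2) from Bernoulli."""
    a1 = math.pi * d_wide ** 2 / 4.0    # wide-section area
    a2 = math.pi * d_throat ** 2 / 4.0  # throat area
    v1 = flow_rate / a1
    v2 = flow_rate / a2
    return 0.5 * density * (v2 ** 2 - v1 ** 2)

# Water, 2 L/s through a 50 mm pipe necking down to a 25 mm throat
dp = venturi_pressure_drop(998.0, 0.002, 0.05, 0.025)
print(round(dp))  # roughly 7.8 kPa
```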

• Orifice meter:
An orifice plate is a device used for measuring flow rate, for reducing pressure or for
restricting flow. Either a volumetric or mass flow rate may be determined, depending on
the calculation associated with the orifice plate. It uses the same principle as a Venturi
nozzle, namely Bernoulli's principle which states that there is a relationship between the
pressure of the fluid and the velocity of the fluid. When the velocity increases, the pressure
decreases and vice versa.
• Rotameter:
A rotameter consists of a tapered tube, typically made of glass with a 'float', made either
of anodized aluminum or a ceramic, actually a shaped weight, inside that is pushed up by
the drag force of the flow and pulled down by gravity.
A higher volumetric flow rate through a given area increases flow speed and drag force, so
the float will be pushed upwards. However, as the inside of the rotameter is cone shaped
(widens), the area around the float through which the medium flows increases, the flow
speed and drag force decrease until there is mechanical equilibrium with the float's
weight.

• Pitot tube:
The basic pitot tube consists of a tube pointing directly into the fluid flow. As this tube
contains fluid, a pressure can be measured; the moving fluid is brought to rest (stagnates)
as there is no outlet to allow flow to continue. This pressure is the stagnation pressure of
the fluid, also known as the total pressure or (particularly in aviation) the pitot pressure.
The measured stagnation pressure cannot itself be used to determine the fluid flow
velocity (airspeed in aviation). However, Bernoulli's equation states:

Stagnation pressure = static pressure + dynamic pressure

This can also be written as:

p_t = p_s + (ρ v²) / 2

Solving that for flow velocity we get:

v = √( 2 (p_t − p_s) / ρ )

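A quick numeric check of the pitot relation; the air density and pressure readings below are illustrative assumptions, not values from the text:

```python
import math

def pitot_velocity(p_total, p_static, density):
    """v = sqrt(2 * (p_total - p_static) / rho), from
    stagnation pressure = static pressure + dynamic pressure."""
    return math.sqrt(2.0 * (p_total - p_static) / density)

# 500 Pa dynamic pressure in air (rho ~ 1.225 kg/m^3) -- illustrative
v = pitot_velocity(p_total=101825.0, p_static=101325.0, density=1.225)
print(round(v, 1))  # ~28.6 m/s
```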

2 Heat Transfer:
2.1 Conduction:
 Fourier’s Law:
The differential form of Fourier's Law of thermal conduction shows that the local heat
flux density q is equal to the product of the thermal conductivity k and the negative
local temperature gradient −∇T. The heat flux density is the amount of energy that
flows through a unit area per unit time.

q = −k ∇T

where (including the SI units) q is the local heat flux density, W·m−2; k is the material's
conductivity, W·m−1·K−1; and ∇T is the temperature gradient, K·m−1.

The thermal conductivity , is often treated as a constant, though this is not always true.
While the thermal conductivity of a material generally varies with temperature, the
variation can be small over a significant range of temperatures for some common materials.
In anisotropic materials, the thermal conductivity typically varies with orientation; in this
case k is represented by a second-order tensor. In non-uniform materials, k varies with
spatial location.
For many simple applications, Fourier's law is used in its one-dimensional form. In the x-
direction,

q_x = −k dT/dx

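The one-dimensional form lends itself to a short sketch; the wall thickness and the conductivity value (typical of brick) below are illustrative assumptions:

```python
def conduction_flux(k, t_hot, t_cold, thickness):
    """One-dimensional Fourier's law, q_x = -k * dT/dx, evaluated
    with a linear temperature profile across a plane wall."""
    return -k * (t_cold - t_hot) / thickness

# 20 cm wall with k ~ 0.72 W/m/K (an assumed, brick-like value),
# 25 degC on one face and 5 degC on the other
q = conduction_flux(k=0.72, t_hot=25.0, t_cold=5.0, thickness=0.20)
print(q)  # ~72 W/m^2, flowing from the hot face to the cold face
```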
2.2 Radiation:
 Wien’s displacement law:
Wien's displacement law states that the black body radiation curve for different
temperatures peaks at a wavelength inversely proportional to the temperature. The shift of
that peak is a direct consequence of the Planck radiation law, which describes the spectral
brightness of black body radiation as a function of wavelength at any given temperature.

λmax = b / T

where T is the absolute temperature in kelvins and b is a constant of
proportionality called Wien's displacement constant.

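A short numeric sketch of the displacement law; the solar photosphere temperature used below is the commonly quoted ~5778 K:

```python
B_WIEN = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength(temperature_kelvin):
    """lambda_max = b / T."""
    return B_WIEN / temperature_kelvin

# The Sun's photosphere at ~5778 K peaks near green visible light
lam = peak_wavelength(5778.0)
print(round(lam * 1e9, 1))  # ~501.5 nm
```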
2.3 Stefan-Boltzmann law:


The Stefan–Boltzmann law, also known as Stefan's law, describes the power radiated from
a black body in terms of its temperature. Specifically, the Stefan–Boltzmann law states
that the total energy radiated per unit surface area of a black body across all
wavelengths per unit time, j*, is directly proportional to the fourth power of the black
body's thermodynamic temperature T:

j* = σ T⁴

The constant of proportionality σ is called the Stefan–Boltzmann constant or Stefan's
constant.

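The fourth-power law is easy to evaluate; the 100 °C black body below is purely illustrative:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_emissive_power(temperature_kelvin):
    """j* = sigma * T**4, total power radiated per unit area."""
    return SIGMA * temperature_kelvin ** 4

flux = blackbody_emissive_power(373.15)  # black body at 100 degC
print(round(flux, 1))  # ~1100 W/m^2
```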
2.4 View Factor:


In radiative heat transfer, a view factor, F(A→B), is the proportion of the radiation which
leaves surface A that strikes surface B.
In a complex 'scene' there can be any number of different objects, which can be divided in
turn into even more surfaces and surface segments.
View factors are also sometimes known as configuration factors, form factors or shape
factors.

2.5 Black Body:


A black body (also, blackbody) is an idealized physical body that absorbs all incident
electromagnetic radiation, regardless of frequency or angle of incidence. A white body is
one with a rough surface that reflects all incident rays completely and uniformly in all
directions. A black body in thermal equilibrium (that is, at a constant temperature) emits
electromagnetic radiation called black-body radiation. The radiation is emitted according
to Planck's law, meaning that it has a spectrum that is determined by the temperature
alone, not by the body's shape or composition.
A black body in thermal equilibrium has two notable properties:

 It is an ideal emitter: at every frequency, it emits as much energy as – or more energy than – any other body at the same temperature.

 It is a diffuse emitter: the energy is radiated isotropically, independent of direction.

2.6 Flash evaporation:


Flash (or partial) evaporation is the partial vaporization that occurs when a saturated liquid stream
undergoes a reduction in pressure by passing through a throttling valve or other throttling device.
If the throttling valve or device is located at the entry into a pressure vessel so that the flash
evaporation occurs within the vessel, then the vessel is often referred to as a flash drum.
If the saturated liquid is a single-component liquid (for example, liquid propane or liquid
ammonia), a part of the liquid immediately "flashes" into vapor. Both the vapor and the residual
liquid are cooled to the saturation temperature of the liquid at the reduced pressure.
2.7 Forward feed evaporator:
In a forward feed multiple-effect evaporator, the feed enters the first (hottest, highest-pressure)
effect and flows in the same direction as the vapour. No pumps are needed between effects, but the
most concentrated (most viscous) liquor ends up in the coolest effect.

2.8 Backward feed evaporator:
In a backward feed arrangement, the feed enters the last (coolest, lowest-pressure) effect and flows
counter to the vapour, so the most concentrated liquor is evaporated at the highest temperature.
Pumps are required to move the liquor from effect to effect against the pressure gradient.


3 Mass transfer:
3.1 Fick’s first law of diffusion:
Fick's first law relates the diffusive flux to the concentration under the assumption of steady state.
It postulates that the flux goes from regions of high concentration to regions of low concentration,
with a magnitude that is proportional to the concentration gradient (spatial derivative), or in
simplistic terms the concept that a solute will move from a region of high concentration to a region
of low concentration across a concentration gradient. In one (spatial) dimension, the law is:

J = −D (dφ/dx)

where
J is the "diffusion flux" [(amount of substance) per unit area per unit time], for example
mol·m−2·s−1; J measures the amount of substance that will flow through a small area during a
small time interval,
D is the diffusion coefficient or diffusivity, in dimensions of [length2 time−1], for example m2/s,
φ (for ideal mixtures) is the concentration, in dimensions of [amount of substance per unit
volume], for example mol/m3,
x is the position [length], for example m.
D is proportional to the squared velocity of the diffusing particles, which depends on the
temperature, viscosity of the fluid and the size of the particles according to the
Stokes–Einstein relation.

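A minimal sketch of the one-dimensional flux, assuming a linear concentration profile across a thin film; the diffusivity and concentration values are illustrative, not from the text:

```python
def fick_flux(diffusivity, c_high, c_low, distance):
    """Fick's first law in one dimension, J = -D * dphi/dx,
    assuming a linear concentration profile between two planes."""
    return -diffusivity * (c_low - c_high) / distance

# Illustrative values: D ~ 2e-9 m^2/s (a small solute in water),
# 0.25 mol/m^3 dropping to zero across a 1 mm stagnant film
j = fick_flux(2.0e-9, c_high=0.25, c_low=0.0, distance=1.0e-3)
print(j)  # 5e-7 mol m^-2 s^-1, directed down the gradient
```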
3.2 Gas Absorption:


Gas absorption (also known as scrubbing) is an operation in which a gas mixture is contacted with
a liquid for the purpose of preferentially dissolving one or more components of the gas mixture and
to provide a solution of them in the liquid.
Therefore, we can see that there is a mass transfer of the component of the gas from the gas phase
to the liquid phase. The solute so transferred is said to be absorbed by the liquid.

In gas desorption (or stripping), the mass transfer is in the opposite direction, i.e. from the liquid
phase to the gas phase. The principles for both systems are the same.
In addition, we assume there is no chemical reaction in the system and that it is operating at
isothermal condition. The process of gas absorption thus involves the diffusion of solute from the
gas phase through a stagnant or non-diffusing liquid.

3.3 Humidity:
Humidity is the amount of water vapor in the air. Water vapor is the gaseous state of water and is
invisible. Humidity indicates the likelihood of precipitation, dew, or fog.

3.4 Absolute humidity and Relative humidity:


• Absolute humidity is the mass of water vapor divided by the mass of dry air in
a volume of air at a given temperature.
• Relative humidity is the ratio of the current absolute humidity to the highest
possible absolute humidity (which depends on the current air temperature). A
reading of 100 percent relative humidity means that the air is totally saturated
with water vapor and cannot hold any more.
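A hedged numerical sketch of the ratio just defined: computing relative humidity from an actual vapour pressure needs a saturation-pressure correlation, and the Magnus formula used below is one common empirical choice assumed here, not something specified in the text.

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapour pressure of water in Pa via the Magnus
    approximation -- an empirical correlation assumed here for
    illustration."""
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def relative_humidity(actual_vapor_pressure, t_celsius):
    """RH (%) = actual vapour pressure / saturation pressure * 100."""
    return 100.0 * actual_vapor_pressure / saturation_vapor_pressure(t_celsius)

# Air at 25 degC holding 1500 Pa of water vapour -- illustrative
rh = relative_humidity(1500.0, 25.0)
print(round(rh, 1))  # roughly 47%
```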

3.5 Percentage humidity:


The ratio, expressed as a percentage, of the weight of water vapor in a pound of dry air to the
weight of water vapor if the same weight of air were saturated.

3.6 Dew point:


The dew point is the temperature at which the water vapor in a sample of air at constant
barometric pressure condenses into liquid water at the same rate at which it evaporates. At
temperatures below the dew point, water will leave the air. The condensed water is called dew
when it forms on a solid surface. The condensed water is called either fog or a cloud, depending on
its altitude, when it forms in the air.

3.7 Dry bulb and wet bulb temperature:


• The Dry Bulb temperature, usually referred to as the air temperature, is the air
property that is most commonly used. When people refer to the temperature of
the air, they are normally referring to its dry bulb temperature. The Dry Bulb
Temperature refers basically to the ambient air temperature. It is called "Dry
Bulb" because the air temperature is indicated by a thermometer not affected by
the moisture of the air.

• The Wet Bulb temperature is the temperature of adiabatic saturation. This is


the temperature indicated by a moistened thermometer bulb exposed to the air
flow.

Wet Bulb temperature can be measured by using a thermometer with the bulb wrapped in
wet muslin. The adiabatic evaporation of water from the thermometer and the cooling
effect is indicated by a "wet bulb temperature" lower than the "dry bulb temperature" in the
air.
The rate of evaporation from the wet bandage on the bulb, and the temperature difference
between the dry bulb and wet bulb, depends on the humidity of the air. The evaporation is
reduced when the air contains more water vapor.
The wet bulb temperature is always lower than the dry bulb temperature, but the two are
identical at 100% relative humidity (when the air is at the saturation line).

4 Chemical Engineering Thermodynamics:


4.1 Quasi-static process:
In thermodynamics, a quasistatic process is a thermodynamic process that happens
"infinitely slowly". No real process is quasistatic, but such processes can be approximated
by performing them very slowly.

4.2 Reversible process:


In thermodynamics, a reversible process -- or reversible cycle if the process is cyclic -- is a
process that can be "reversed" by means of infinitesimal changes in some property of the
system without entropy production (i.e. dissipation of energy). Due to these infinitesimal
changes, the system is in thermodynamic equilibrium throughout the entire process. Since
it would take an infinite amount of time for the reversible process to finish, perfectly
reversible processes are impossible. However, if the system undergoing the changes
responds much faster than the applied change, the deviation from reversibility may be
negligible. In a reversible cycle, the system and its surroundings will be exactly the same
after each cycle.
4.3 State postulate:
The state postulate is a thermodynamic principle that specifies the number of
properties required to fix the state of a system in equilibrium. It allows a
finite number of properties to be specified in order to fully describe a state of
thermodynamic equilibrium. Once those properties are given, the remaining unspecified
properties must assume certain values.

4.4 Laws of thermodynamics:

• Zeroth law of thermodynamics: If two systems are in thermal equilibrium


respectively with a third system, they must be in thermal equilibrium with
each other. This law helps define the notion of temperature.
• First law of thermodynamics: When energy passes, as work, as heat, or
with matter, into or out from a system, its internal energy changes in accord
with the law of conservation of energy. Equivalently, perpetual motion
machines of the first kind are impossible.
• Kelvin–Planck statement : The Kelvin–Planck statement of the
second law of thermodynamics states that it is impossible to devise a cyclically
operating device, the sole effect of which is to absorb energy in the form of
heat from a single thermal reservoir and to deliver an equivalent amount of
work. This implies that it is impossible to build a heat engine that has 100%
thermal efficiency
• Clausius statement : The Clausius statement of the second law of
thermodynamics states that it is impossible for a self acting machine working
in a cyclic process without any external force, to transfer heat from a body at
a lower temperature to a body at a higher temperature. It considers
transformation of heat between two heat reservoirs.
• Third law of thermodynamics: The entropy of a system approaches a
constant value as the temperature approaches absolute zero. With the
exception of glasses, the entropy of a system at absolute zero is typically
close to zero, and is equal to the log of the multiplicity of the quantum
ground state.

4.5 Clausius inequality:


The Clausius theorem states that for a system undergoing a cyclic process (i.e. a process
which ultimately returns a system to its original state):

∮ δQ/T ≤ 0

where δQ is the amount of heat absorbed by the system. The equality holds in the reversible
case and the strict inequality holds in the irreversible case. The reversible case is used to
introduce the entropy state function. This is because in a cyclic process the variation of a
state function is zero.
4.6 Entropy:
In thermodynamics, entropy is a measure of the number of specific ways in which a
thermodynamic system may be arranged, commonly understood as a measure of disorder.
According to the second law of thermodynamics the entropy of an isolated system never
decreases; such a system will spontaneously evolve toward thermodynamic equilibrium, the
configuration with maximum entropy. Systems that are not isolated may decrease in
entropy, provided they increase the entropy of their environment by at least that same
amount. Since entropy is a state function, the change in the entropy of a system is the same
for any process that goes from a given initial state to a given final state, whether the process
is reversible or irreversible. However, irreversible processes increase the combined
entropy of the system and its environment.
The change in entropy (ΔS) of a system was originally defined for a thermodynamically
reversible process as

dS = δQ / T

where T is the absolute temperature of the system, dividing an incremental reversible
transfer of heat into that system (δQ). (If heat is transferred out, the sign would be reversed,
giving a decrease in entropy of the system.)

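As a small worked example of the defining relation dS = δQ_rev/T for a reversible transfer at constant temperature (the latent-heat figure for ice below is the commonly quoted value, used here as an assumption):

```python
def entropy_change_isothermal(q_reversible, temperature_kelvin):
    """For a reversible transfer at constant T, the defining
    integral dS = dQ_rev / T reduces to delta_S = Q_rev / T."""
    return q_reversible / temperature_kelvin

# Melting 1 kg of ice reversibly at 0 degC; the latent heat of
# fusion (~334 kJ/kg) is the commonly quoted value, assumed here
ds = entropy_change_isothermal(334000.0, 273.15)
print(round(ds, 1))  # ~1222.8 J/K
```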
4.7 Gibbs free energy:


The Gibbs free energy is the maximum amount of non-expansion work that can be
extracted from a closed system; this maximum can be attained only in a completely
reversible process. When a system changes from a well-defined initial state to a well-defined
final state, the Gibbs free energy ΔG equals the work exchanged by the system with its surroundings,
minus the work of the pressure forces, during a reversible transformation of the system
from the same initial state to the same final state.

4.8 Helmholtz free energy:


The negative of the difference in the Helmholtz energy is equal to the maximum amount of
work that the system can perform in a thermodynamic process in which temperature is held
constant. If the volume is not held constant, part of this work will be performed as
boundary work. The Helmholtz energy is commonly used for systems held at constant
volume. Since in this case no work is performed on the environment, the drop in the
Helmholtz energy is equal to the maximum amount of useful work that can be extracted
from the system. For a system at constant temperature and volume, the Helmholtz energy
is minimized at equilibrium.

4.9 Fugacity:
In chemical thermodynamics, the fugacity (f) of a real gas is an effective pressure which
replaces the true mechanical pressure in accurate chemical equilibrium calculations. It is
equal to the pressure of an ideal gas which has the same chemical potential as the real gas.
4.10 Gibbs’ Phase rule:

The Phase Rule describes the possible number of degrees of freedom in a (closed) system at
equilibrium, in terms of the number of separate phases and the number of chemical
constituents in the system; for a non-reactive system it reads F = C − P + 2, where C is the
number of components and P the number of phases. The Degrees of Freedom [F] is the number of
independent intensive variables (i.e. those that are independent of the quantity of material
present) that need to be specified in value to fully determine the state of the system. Typical
such variables might be temperature, pressure, or concentration.

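A minimal sketch of Gibbs' phase rule for a non-reactive system, F = C − P + 2, applied to pure water:

```python
def degrees_of_freedom(components, phases):
    """Gibbs' phase rule for a non-reactive system: F = C - P + 2."""
    return components - phases + 2

# Pure water (C = 1): one phase, two coexisting phases, triple point
print(degrees_of_freedom(1, 1))  # 2 -- T and p can vary independently
print(degrees_of_freedom(1, 2))  # 1 -- e.g. boiling: fixing p fixes T
print(degrees_of_freedom(1, 3))  # 0 -- the triple point is invariant
```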