1. ELECTRICAL ENGINEERING
1.1 SOME MAJOR ELECTRICAL LAWS
1.2 TRANSFORMERS
1.3 DC MOTOR
1.4 INDUCTION MOTOR
2. ECE QUESTIONS
2.1 COMPUTER NETWORKING
2.2 ANALOG ELECTRONICS
2.3 DIGITAL ELECTRONICS
3. COMPUTER ENGINEERING
3.1 OPERATING SYSTEM
3.2 COMPUTER ORGANIZATION
3.3 DATA STRUCTURE
3.4 SOFTWARE ENGINEERING
3.5 AUTOMATA
3.6 DATA BASE MANAGEMENT SYSTEM
3.7 NETWORKING
3.8 CLOUD COMPUTING
3.9 MOBILE COMPUTING
3.10 ARTIFICIAL INTELLIGENCE
4. CHEMICAL ENGINEERING
1. Electrical Engineering
1.1 Some Major Electrical Laws
Faraday's law of induction is a basic law of electromagnetism predicting how a magnetic field will
interact with an electric circuit to produce an electromotive force (EMF)—a phenomenon
called electromagnetic induction. It is the fundamental operating principle of transformers,
inductors, and many types of electrical motors, generators and solenoids.
Thevenin Norton
As originally stated in terms of DC resistive circuits only, Thévenin's theorem holds that:
Any linear electrical network with voltage and current sources and only resistances can be
replaced at terminals A-B by an equivalent voltage source Vth in series connection with an
equivalent resistance Rth.
This equivalent voltage Vth is the voltage obtained at terminals A-B of the network with
terminals A-B open circuited.
This equivalent resistance Rth is the resistance obtained at terminals A-B of the network
with all its independent current sources open circuited and all its independent voltage
sources short circuited.
In circuit theory terms, the theorem allows any one-port network to be reduced to a single voltage
source and a single impedance.
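As a sketch of the two steps above, the Thevenin equivalent of a simple voltage divider can be computed directly (the component values below are arbitrary examples):

```python
# Thevenin equivalent of a voltage divider: source Vs feeds R1 in series,
# with R2 from the A-B terminals to the return (example values are arbitrary).
def thevenin_divider(vs, r1, r2):
    # Open-circuit voltage at A-B: no load current, so V_AB = Vs * R2/(R1+R2)
    vth = vs * r2 / (r1 + r2)
    # Dead-network resistance: short the voltage source, leaving R1 || R2
    rth = r1 * r2 / (r1 + r2)
    return vth, rth

vth, rth = thevenin_divider(vs=10.0, r1=2.0, r2=3.0)
print(vth, rth)  # 6.0 V and 1.2 ohm
```

With the equivalent in hand, the current into any load RL connected across A-B is simply Vth / (Rth + RL).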
Kirchhoff’s Law
This law is also called Kirchhoff's first law, Kirchhoff's point rule, or Kirchhoff's junction rule (or
nodal rule). The principle of conservation of electric charge implies that:
At any node (junction) in an electrical circuit, the sum of currents flowing into that node
is equal to the sum of currents flowing out of that node
or equivalently: the algebraic sum of currents in a network of conductors meeting at a point is zero.
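As a minimal numeric sketch of the junction rule (the current values are arbitrary):

```python
def kcl_residual(currents_in, currents_out):
    # Kirchhoff's current law: the sum of currents into a node equals
    # the sum of currents out, so this residual should be zero.
    return sum(currents_in) - sum(currents_out)

# Node with 2 A and 3 A flowing in and 5 A flowing out
print(kcl_residual([2.0, 3.0], [5.0]))  # 0.0
```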
1.2 Transformers
In a transformer the electrical energy transfer takes place without the use of moving parts; it therefore
has the highest possible efficiency of all electrical machines and requires the least maintenance.
Transformer efficiencies are typically in the range of 97-99%.
Basic Principles
As mentioned earlier the transformer is a static device working on the principle of Faraday’s law of
induction. Faraday’s law states that a voltage appears across the terminals of an electric coil when
the flux linkages associated with the same changes. This emf is proportional to the rate of change of
flux linkages. Putting mathematically,
e = dψ / dt (1)
Where e is the induced emf in volts and ψ is the flux linkage in weber-turns. In a coil of N turns, all
N turns link flux lines of φ weber, resulting in Nφ flux linkages. In such a case,
ψ = Nφ (2)
e = N dφ/dt volt (3)
The change in the flux linkage can be brought about in a variety of ways:
1. The coil may be static and unmoving, but the flux linking it may change with time.
2. The flux lines may be constant and not changing in time, but the coil may move in space, linking
a different value of flux with time.
3. Both 1 and 2 above may take place: the flux lines may change in time while the coil moves in
space.
These three cases are now elaborated in sequence below, with the help of a coil with a simple
geometry.
Fig. 2 shows a region of length L m, of uniform flux density B Tesla, the flux lines being normal to
the plane of the paper. A loop of one turn links part of this flux. The flux φ linked by the turn is L ∗
B ∗ X Weber. Here X is the length of overlap in meters as shown in the figure. If now B does not
change with time and the loop is unmoving then no emf is induced in the coil as the flux linkages
do not change. Such a condition does not yield any useful machine. On the other hand, if the value
of B varies with time a voltage is induced in the coil linking the same coil even if the coil does not
move. The magnitude of B is assumed to be varying sinusoidally, and can be expressed as,
B = Bm sinωt (4)
where Bm is the peak amplitude of the flux density. ω is the angular rate of change with time. Then,
the instantaneous value of the flux linkage is given by,
ψ = Nφ = NLXBm sin ωt (5)
The instantaneous value of the induced emf is given by
e = dψ/dt = NLXBm ω cos ωt = Nφm ω cos ωt (6)
Here φm = Bm.L.X. The peak value of the induced emf is em = Nφm ω (7) and the rms value is given
by E = em / √2 = 2πf N φm / √2 = 4.44 f N φm volt.
Further, this induced emf has a phase difference of π/2 radian with respect to the flux linked by the
turn. This emf is termed as ‘transformer’ emf and this principle is used in a transformer. Polarity of
the emf is obtained by the application of Lenz’s law. Lenz’s law states that the reaction to the
change in the flux linkages would be such as to oppose the cause. The emf if permitted to drive a
current would produce a counter mmf to oppose this changing flux linkage. In the present case,
presented in Fig. 2 the flux linkages are assumed to be increasing. The polarity of the emf is as
indicated. The loop also experiences a compressive force.
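The peak and rms relations above reduce to the familiar emf equation E = 4.44 f N φm; a small numeric check, with illustrative values for the turns, frequency and peak flux:

```python
import math

def transformer_emf(n_turns, f, phi_m):
    # Peak emf: e_m = N * phi_m * omega, with omega = 2*pi*f
    e_peak = n_turns * phi_m * 2 * math.pi * f
    # RMS emf: e_m / sqrt(2), which works out to 4.44 * f * N * phi_m
    e_rms = e_peak / math.sqrt(2)
    return e_peak, e_rms

e_peak, e_rms = transformer_emf(n_turns=100, f=50.0, phi_m=0.01)
print(round(e_peak, 2), round(e_rms, 2))  # 314.16 V peak, 222.14 V rms
```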
Fig. 2(b) shows the second case: the flux density B is constant, and the flux linked by the single-turn
coil is φ = B.L.X weber. The conductor is moved with a velocity v = dx/dt normal to the flux, cutting the
flux lines and changing the flux linkages. The induced emf, as per the application of Faraday’s law of
induction, is e = N.B.L.dx/dt = B.L.v volt (here N = 1).
Please note, the actual flux linked by the coil is immaterial. Only the change in the flux linkages is
needed to be known for the calculation of the voltage. The induced emf is in step with the change in
ψ and there is no phase shift. If the flux density B is distributed sinusoidally over the region in the
horizontal direction, the emf induced also becomes sinusoidal. This type of induced emf is termed
as speed emf or rotational emf, as it arises out of the motion of the conductor. The polarity of the
induced emf is obtained by the application of Lenz’s law as before. Here the change in flux
linkages is produced by the motion of the conductor. The current in the conductor, when the coil ends
are closed, makes the conductor experience a force urging the same to the left. This is how the
polarity of the emf shown in fig.2b is arrived at. Also, the mmf of the loop aids the field mmf to
oppose change in flux linkages. This principle is used in d.c machines and alternators.
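A one-line sketch of the speed-emf relation e = B·L·v (the field, length and velocity values are arbitrary):

```python
def speed_emf(b, length, v):
    # Motional (speed) emf for a single conductor: e = B * L * v
    return b * length * v

# 1 T field, 0.5 m conductor, moving at 10 m/s
print(speed_emf(b=1.0, length=0.5, v=10.0))  # 5.0 V
```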
The third case under the application of the Faraday’s law arises when the flux changes and also the
conductor moves. This is shown in Fig. 2(c).
The uniform flux density in space is assumed to be varying in magnitude in time as B = Bm sin ωt.
The conductor is moved with a uniform velocity of dx/dt = v m/sec. The change in the flux linkages,
and hence the induced emf, is given by
e = N d(Bm sin ωt.L.X)/dt = N.L.X.Bm ω cos ωt + N.Bm sin ωt.L.dx/dt volt. (8)
The first term is due to the changing flux and hence is a transformer emf. The second term is due to
moving conductor or is a speed emf. When the terminals are closed such as to permit a current the
conductor experiences a force and also the mmf of the coil opposes the change in flux linkages. This
principle is used in a.c. machines where the field is time varying and conductors are moving under
the same.
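Equation (8) can be cross-checked by differentiating the flux linkage numerically and comparing against the sum of the transformer and speed terms; all parameter values here are arbitrary:

```python
import math

def flux_linkage(t, n, bm, omega, length, x0, v):
    # psi(t) = N * B(t) * L * x(t), with B = Bm*sin(wt) and x = x0 + v*t
    return n * bm * math.sin(omega * t) * length * (x0 + v * t)

def emf_analytic(t, n, bm, omega, length, x0, v):
    # Equation (8): transformer term plus speed term
    x = x0 + v * t
    transformer_term = n * length * x * bm * omega * math.cos(omega * t)
    speed_term = n * bm * math.sin(omega * t) * length * v
    return transformer_term + speed_term

# A central finite difference of psi(t) should match the analytic emf
params = dict(n=10, bm=1.2, omega=2 * math.pi * 50, length=0.4, x0=0.1, v=2.0)
t, h = 3e-3, 1e-7
numeric = (flux_linkage(t + h, **params) - flux_linkage(t - h, **params)) / (2 * h)
print(abs(numeric - emf_analytic(t, **params)) < 1e-3)  # True
```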
The first case, where there is a time varying field and a stationary coil resulting in a transformer emf,
is the subject matter of the present section. Case two will be revisited under the study of
d.c. machines and synchronous machines. Case three will be used extensively under the study of a.c.
machines such as induction machines and also in a.c. commutator machines.
Next in the study of transformers comes the question of creating a time varying field. This is
easily achieved by passing a time varying current through a coil. The winding which establishes the
field is called the primary. The other winding, which is kept in that field and has a voltage induced
in it, is called a secondary. It should not be forgotten that the primary also sees the same time
varying field set up by it linking its turns and has an induced emf in the same. These aspects will be
examined in the later sections. At first the common constructional features of a transformer used in
electric power supply system operating at 50 Hz are examined.
Transformer Construction
Types of Transformer
Core type transformers are popular in High voltage applications like Distribution transformers,
Power transformers, and obviously auto transformers. Reasons are:
High voltage corresponds to high flux. So, to keep your iron loss down you have to
use a thicker core. So, core type is the better choice.
At high voltage you require heavy insulation. In a core type winding, putting insulation in place is
easier. In fact, the LV (low voltage) winding itself acts as insulation between the HV (high
voltage) winding and the core.
Whereas, Shell type transformers are popular in Low voltage applications like transformers used in
electronic circuits and power electronic converters etc. Reasons are:
At low voltage, comparatively you require more volume for the copper wires than that of
iron core. So, the windows cut on the laminated sheets have to be of bigger proportion with
respect to the whole size of the transformer. So, shell type is a better choice.
Here you don't need to care about the insulation much, and the insulation is thin and light. So, you can
put the winding any way you want in the shell.
An iron core increases magnetic flux density, thus making the transformer smaller and more
efficient. It also provides an armature to wind around, providing mechanical support to the
transformer windings, resulting in less physical movement of the wire during normal operation,
and especially during fault conditions.
Why is iron chosen as the material for the core of the transformer? Why don't we use
aluminium?
Aluminium is paramagnetic. It offers high reluctance to flux, almost as high as air. Hence, we
don't use aluminium as the core of transformers. Iron is a ferromagnetic material which offers far
less reluctance, hence we use iron for the transformer core.
Although a circular cross-section would be compatible with circular windings, it is highly
inconvenient because you would require stacking laminations of different sizes. In a rectangular
cross-sectioned core, the laminations are all of the same size. Moreover, the mechanical strength of
the core falls when riveting the laminations, which is why an approximation of the circular
cross-section, the cruciform cross-section, is adopted universally.
In the case of transformers, does the core flux change with a change in load (considering
resistive and inductive loads separately)?
In the ideal case covered by introductory simplified transformer theory, the answer is 'no'. When
secondary current flows, this creates a MMF which is in opposition to the primary MMF and which
therefore tends to demagnetize the core, but the primary current (I) increases to compensate for
this, and there is no net change in flux. The reason for this is that the flux in the core must always
be sufficient to induce a back emf in the primary which is equal to the applied primary voltage (Vs).
In reality, and in more complex models for the transformer there is a small reduction in flux. This
is because the primary current flows in an effective primary resistance (R) and sets up a voltage
drop IR. There is also the primary leakage reactance (X) to consider. This is also in series with the
primary circuit, and a further voltage drop IX occurs. The voltage applied to the ideal transformer
is then Vs - IZ where Z is the phasor sum of R and X. This reduced voltage requires a smaller flux.
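The drop described above can be sketched with complex phasors; the current, resistance and leakage reactance values below are invented for illustration:

```python
# Voltage actually applied to the ideal transformer: Vs - I*Z,
# where Z = R + jX is the primary resistance plus leakage reactance.
vs = 230 + 0j                  # applied primary voltage, V
i = 10 * complex(0.8, -0.6)    # 10 A at 0.8 power factor lagging
z = complex(0.5, 1.2)          # R = 0.5 ohm, X = 1.2 ohm

v_ideal = vs - i * z
# The loaded magnitude sits slightly below 230 V, so the core flux falls a little
print(round(abs(v_ideal), 2))
```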
At first, one might expect this secondary coil current to cause additional magnetic flux in the core.
In fact, it does not. If more flux were induced in the core, it would cause more voltage to be
induced in the primary coil (remember that e = dΦ/dt). This cannot happen, because the primary
coil's induced voltage must remain at the same magnitude and phase in order to balance with the
applied voltage, in accordance with Kirchhoff's voltage law. Consequently, the magnetic flux in the
core cannot be affected by secondary coil current. However, what does change is the amount of
mmf in the magnetic circuit.
Magnetomotive force is produced any time electrons move through a wire. Usually, this mmf is
accompanied by magnetic flux, in accordance with the mmf=ΦR “magnetic Ohm's Law” equation.
In this case, though, additional flux is not permitted, so the only way the secondary coil's mmf may
exist is if a counteracting mmf is generated by the primary coil, of equal magnitude and opposite
phase. Indeed, this is what happens: an alternating current forms in the primary coil -- 180° out
of phase with the secondary coil's current -- to generate this counteracting mmf and prevent
additional core flux. Polarity marks and current direction arrows have been added to the
illustration to clarify phase relations :
Flux remains constant with application of a load. However, a counteracting mmf is produced by the
loaded secondary.
If you find this process a bit confusing, do not worry. Transformer dynamics is a complex subject.
What is important to understand is this: when an AC voltage is applied to the primary coil, it
creates a magnetic flux in the core, which induces AC voltage in the secondary coil in-phase with
the source voltage. Any current drawn through the secondary coil to power a load induces a
corresponding current in the primary coil, drawing current from the source.
In a transformer, why is no load primary current very small compared to full load primary current?
Consider there is a small river dividing two cities and you need to build a bridge over it to connect
the two cities. In that case, only a few skilled workers will be required to build it, not all the
people of both cities.
But once the bridge is built, the whole city or township living near the river will be able to cross it
whenever needed depending on requirements or whatever.
In the case of transformer also, Flux is the bridge between primary and secondary winding as they
are not electrically connected to each other. Without flux (bridge) you won't be able to cross from
Primary to Secondary.
On NO LOAD, a very small amount of current (a few workers) is required to set up the required flux
(bridge). Also, we can say that since the transformer core is made of high-permeability magnetic
material, very little current is required to set up the required flux. But once the flux is set up and the
secondary is loaded (the second city calls on the people of the first city for help) and requires more
current, a high current (more people) will flow, depending on the secondary
demand.
Why does the transformer current have two components at no load?
A transformer has a primary winding, an iron core and a secondary winding. A winding is mostly a
copper wire, which has resistance and inductance. The iron core carries flux; that is
inductance. Also, the iron core has losses due to eddy currents and hysteresis. These losses are modelled
as a resistance (resistance only has power loss).
Now, to answer your question: when on no load, the current drawn by the secondary is zero. So the so-
called image current in the primary due to the secondary is also zero. This leaves us with only the current
that is required to set up flux in the transformer. As mentioned, the iron core has inductance and
resistance. To simplify our calculations, we model the equivalent circuit as an inductance and
resistance in parallel. So the no-load current has two components, one to set up flux in the core and the
other to take care of losses.
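A minimal sketch of the two components, assuming the usual no-load equivalent circuit of a core-loss resistance Rc in parallel with a magnetizing reactance Xm (the values are illustrative):

```python
import math

def no_load_current(v, r_core, x_mag):
    # Core-loss (active) component through the parallel resistance
    i_w = v / r_core
    # Magnetizing (reactive) component through the parallel reactance
    i_m = v / x_mag
    # The two are in quadrature, so the no-load current is their phasor sum
    i0 = math.hypot(i_w, i_m)
    return i_w, i_m, i0

i_w, i_m, i0 = no_load_current(v=230.0, r_core=1150.0, x_mag=460.0)
print(i_w, i_m, round(i0, 3))  # 0.2 A, 0.5 A, 0.539 A
```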
1.3 DC Motor
In any electric motor, operation is based on simple electromagnetism. A current-carrying
conductor generates a magnetic field; when this is then placed in an external magnetic field, it will
experience a force proportional to the current in the conductor, and to the strength of the external
magnetic field. As you are well aware of from playing with magnets as a kid, opposite (North and
South) polarities attract, while like polarities (North and North, South and South) repel. The
internal configuration of a DC motor is designed to harness the magnetic interaction between
a current-carrying conductor and an external magnetic field to generate rotational motion.
Let's start by looking at a simple 2-pole DC electric motor (here red represents a magnet or winding
with a "North" polarization, while green represents a magnet or winding with a "South"
polarization).
Every DC motor has six basic parts -- axle, rotor (a.k.a., armature), stator, commutator, field
magnet(s), and brushes. In most common DC motors (and all that BEAMers will see), the external
magnetic field is produced by high-strength permanent magnets. The stator is the stationary part
of the motor -- this includes the motor casing, as well as two or more permanent magnet pole
pieces. The rotor (together with the axle and attached commutator) rotate with respect to the
stator. The rotor consists of windings (generally on a core), the windings being electrically
connected to the commutator. The above diagram shows a common motor layout -- with the rotor
inside the stator (field) magnets.
The geometry of the brushes, commutator contacts, and rotor
windings are such that when power is applied, the polarities of
the energized winding and the stator magnet(s) are misaligned,
and the rotor will rotate until it is almost aligned with the stator's
field magnets. As the rotor reaches alignment, the brushes move
to the next commutator contacts, and energize the next winding.
Given our example two-pole motor, the rotation reverses the
direction of current through the rotor winding, leading to a "flip"
of the rotor's magnetic field, driving it to continue rotating.
In real life, though, DC motors will always have more than two
poles (three is a very common number). In particular, this avoids
"dead spots" in the commutator. You can imagine how with our
example two-pole motor, if the rotor is exactly at the middle of its
rotation (perfectly aligned with the field magnets), it will get
"stuck" there. Meanwhile, with a two-pole motor, there is a
moment where the commutator shorts out the power supply (i.e.,
both brushes touch both commutator contacts simultaneously).
This would be bad for the power supply, waste energy, and
damage motor components as well. Yet another disadvantage of
such a simple motor is that it would exhibit a high amount
of torque "ripple" (the amount of torque it could produce is cyclic
with the position of the rotor).
You'll notice a few things from this -- namely, one pole is fully energized at a time (but two others
are "partially" energized). As each brush transitions from one commutator contact to the next, one
coil's field will rapidly collapse, while the next coil's field rapidly charges up (this occurs within a
few microseconds). We'll see more about the effects of this later, but in the meantime, you can see
that this is a direct result of the coil windings' series wiring:
The use of an iron core armature (as in the Mabuchi, above) is quite common, and has a number of
advantages. First off, the iron core provides a strong, rigid support for the windings -- a
particularly important consideration for high-torque motors. The core also conducts heat away
from the rotor windings, allowing the motor to be driven harder than might otherwise be the case.
Iron core construction is also relatively inexpensive compared with other construction types.
But iron core construction also has several disadvantages. The iron armature has a relatively high
inertia which limits motor acceleration. This construction also results in high winding inductances
which limit brush and commutator life.
In small motors, an alternative design is often used which features a 'coreless' armature winding.
This design depends upon the coil wire itself for structural integrity. As a result, the armature is
hollow, and the permanent magnet can be mounted inside the rotor coil. Coreless DC motors have
much lower armature inductance than iron-core motors of comparable size, extending brush and
commutator life.
The coreless design also allows manufacturers to build smaller motors; however, due to the lack
of iron in their rotors, coreless motors are somewhat prone to overheating. As a result, this design
is generally used just in small, low-power motors. BEAMers will most often see coreless DC motors
in the form of pager motors.
Types of DC Motor
1.4 Induction Motor
An electric motor is an electromechanical device which converts electrical energy into
mechanical energy. In the case of three-phase AC operation, the most widely used motor is the three-phase
induction motor, as this type of motor does not require any starting device; in other words, it is a
self-starting induction motor.
For a better understanding of the principle of the three-phase induction motor, the basic
constructional features of this motor must be known. This motor consists of two major parts, the stator and the rotor.
The stator of the motor consists of overlapping winding offset by an electrical angle of 120°. When
the primary winding or the stator is connected to a 3 phase AC source, it establishes a
rotating magnetic field which rotates at the synchronous speed.
According to Faraday’s law, an emf induced in any circuit is due to the rate of change of magnetic
flux linkage through the circuit. As the rotor windings in an induction motor are either closed
through an external resistance or directly shorted by end rings, and cut the stator's rotating magnetic
field, an emf is induced in the rotor copper bars, and due to this emf a current flows through the
rotor conductors.
Here the relative velocity between the rotating flux and static rotor conductor is the cause
of current generation; hence as per Lenz’s law the rotor will rotate in the same direction to reduce
the cause i.e. the relative velocity.
Thus, from the working principle of the three-phase induction motor, it may be observed that the rotor
speed should not reach the synchronous speed produced by the stator. If the speeds were equal, there
would be no such relative velocity, so no emf would be induced in the rotor and no current would flow,
and therefore no torque would be generated. Consequently, the rotor cannot reach the
synchronous speed. The difference between the stator (synchronous) speed and the rotor speed is
called the slip. The rotation of the magnetic field in an induction motor has the advantage that no
electrical connections need to be made to the rotor.
The three-phase induction motor offers several advantages:
Self-starting.
Less armature reaction and brush sparking because of the absence of commutators and
brushes that may cause sparks.
Robust in construction.
Economical.
Easier to maintain.
For lighting and general purposes in homes, offices, shops and small factories, the single-phase system is
widely used as compared to the three-phase system, as the single-phase system is more economical and
the power requirement of most houses, shops and offices is small and can be easily met by the
single-phase system. Single-phase motors are simple in construction, cheap in cost, reliable
and easy to repair and maintain. Due to all these advantages, the single-phase motor finds
application in vacuum cleaners, fans, washing machines, centrifugal pumps, blowers, small toys, etc.
Like any other electrical motor, the asynchronous motor also has two main parts, namely the rotor and
the stator.
Stator: As its name indicates stator is a stationary part of induction motor. A single-
phase AC supply is given to the stator of single-phase induction motor.
Rotor: The rotor is the rotating part of the induction motor. The rotor connects to the
mechanical load through the shaft. The rotor in the single-phase induction motor is
of the squirrel cage type.
The construction of single-phase induction motor is almost similar to the squirrel cage
three-phase induction motor. But in case of a single phase induction motor, the stator has two
windings instead of one three-phase winding in three-phase induction motor.
The stator of the single-phase induction motor has laminated stamping to reduce eddy current
losses on its periphery. The slots are provided on its stamping to carry stator or main winding.
Stampings are made up of silicon steel to reduce the hysteresis losses. When we apply a single
phase AC supply to the stator winding, the magnetic field gets produced, and the motor rotates at
a speed slightly less than the synchronous speed Ns. The synchronous speed Ns (in rpm) is given by
Ns = 120f / P
Where,
f = supply frequency,
P = No. of poles of the motor.
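The relation Ns = 120 f / P can be evaluated directly (typical 50 Hz examples):

```python
def synchronous_speed(f, poles):
    # Ns = 120 * f / P, in revolutions per minute
    return 120.0 * f / poles

print(synchronous_speed(50.0, 4))  # 1500.0 rpm
print(synchronous_speed(50.0, 2))  # 3000.0 rpm
```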
The construction of the stator of the single-phase induction motor is similar to that of three phase
induction motor except there are two dissimilarities in the winding part of the single-phase
induction motor.
1. Firstly, single-phase induction motors are mostly provided with concentric coils. We
can easily adjust the number of turns per coil with the help of concentric coils. The mmf
distribution is almost sinusoidal.
2. Except for shaded pole motor, the asynchronous motor has two stator windings namely the
main winding and the auxiliary winding. These two windings are placed in space
quadrature to each other.
The construction of the rotor of the single-phase induction motor is similar to the squirrel cage
three-phase induction motor. The rotor is cylindrical and has slots all over its periphery. The slots
are not made parallel to each other but are a little bit skewed as the skewing prevents magnetic
locking of stator and rotor teeth and makes the working of induction motor more smooth and
quieter, i.e., less noise. The squirrel cage rotor consists of aluminum, brass or copper bars. These
aluminum or copper bars are called rotor conductors and placed in the slots on the periphery of the
rotor. The copper or aluminum rings, called the end rings, permanently short the rotor conductors.
To provide mechanical strength, these rotor conductors are braced to the end rings and hence form
a complete closed circuit resembling a cage, which is how it got the name squirrel cage induction
motor. As the end rings permanently short the bars, the rotor electrical resistance is very small, and it is
not possible to add external resistance as the bars get permanently shorted. The absence of slip ring
and brushes make the construction of single-phase induction motor very simple and robust.
When we apply a single-phase AC supply to the stator winding of a single-phase induction motor, it
produces a flux of magnitude φm. According to the double field revolving theory, this alternating
flux φm is divided into two components of magnitude φm/2. Each of these components rotates in the
opposite direction to the other with the synchronous speed Ns: if one φm/2 rotates in the clockwise
direction, then the other φm/2 rotates in the anticlockwise direction. Let us call these two components
the forward component of flux, φf, and the backward component of flux, φb. The resultant of these
two components of flux at any instant of time gives the value of the instantaneous stator flux at that
particular instant.
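The double field revolving theory can be checked numerically: two counter-rotating phasors of magnitude φm/2 always sum to a flux that pulsates along a single axis (the phase reference here puts the pulsation on the real axis as φm·cos ωt):

```python
import math

def resultant_flux(phi_m, omega, t):
    # Forward and backward components, each of magnitude phi_m/2,
    # rotating at synchronous speed in opposite directions
    fwd = (phi_m / 2) * complex(math.cos(omega * t), math.sin(omega * t))
    bwd = (phi_m / 2) * complex(math.cos(omega * t), -math.sin(omega * t))
    return fwd + bwd

phi_m, omega = 1.0, 2 * math.pi * 50
for t in (0.0, 1e-3, 5e-3):
    r = resultant_flux(phi_m, omega, t)
    # The imaginary parts cancel: the resultant pulsates along one axis
    print(round(r.real, 4), round(r.imag, 4))
```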
Now at starting condition, both the forward and backward components of flux are exactly opposite
to each other. Also, both of these components of flux are equal in magnitude. So, they cancel each
other and hence the net torque experienced by the rotor at the starting condition is zero. So, the
single-phase induction motors are not self-starting motors.
Single phase induction motors are simple, robust, reliable and cheaper for small ratings. They
are available up to 1 kW rating.
2. Star-Delta Starter
A three-phase motor will give three times the power output when the stator windings are connected
in delta than when connected in star, but will draw one-third of the supply current in star compared
with delta. Since torque is proportional to the square of the winding voltage, the starting torque
developed in star is one-third of that when starting in delta.
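Both ratios follow from the √3 reduction in winding voltage in star; a numeric confirmation with an arbitrary line voltage and per-phase impedance:

```python
import math

def star_delta_ratios(v_line, z_phase):
    # Star: winding voltage = V_line / sqrt(3); line current = phase current
    i_line_star = (v_line / math.sqrt(3)) / z_phase
    # Delta: winding voltage = V_line; line current = sqrt(3) * phase current
    i_line_delta = math.sqrt(3) * v_line / z_phase
    # Torque varies as the square of the winding voltage
    torque_ratio = (v_line / math.sqrt(3)) ** 2 / v_line ** 2
    return i_line_delta / i_line_star, torque_ratio

current_ratio, torque_ratio = star_delta_ratios(v_line=400.0, z_phase=8.0)
print(round(current_ratio, 3), round(torque_ratio, 3))  # 3.0 and 0.333
```

This is why a star-delta starter trades starting torque for a much gentler starting current.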
1. A two-position switch (manual or automatic) is provided through a timing relay.
2. Starting in star reduces the starting current.
3. When the motor has accelerated up to speed and the current is reduced to its normal value, the
starter is moved to run position with the windings now connected in delta.
4. More complicated than the DOL starter, a motor with a star-delta starter may not produce
sufficient torque to start against full load, so output is reduced in the start position. The motors
are thus normally started under a light load condition.
5. Switching causes a transient current which may have peak values in excess of those with DOL.
A star-delta starter is preferred with an induction motor for the following reasons:
• The starting current is reduced to one-third of the direct-on-line value, so the supply voltage dip
and the associated losses are smaller.
• The star connection is in circuit first during starting; it reduces the voltage across each winding
by √3, which is why the line current reduces to one-third, causing less heating of the motor.
• The starting torque is also reduced to one-third, which limits mechanical stress and helps prevent
damage to the motor winding.
A generator and an alternator are two devices which convert mechanical energy into electrical energy.
Both work on the same principle of electromagnetic induction; the only difference is in their
construction. A DC generator has a stationary magnetic field and a rotating armature whose conductors
connect to a commutator, with brushes riding against it, so the induced emf is delivered as direct
current to the external load, whereas an alternator has a stationary armature and a rotating
magnetic field for high voltages; for low-voltage output, a rotating armature and stationary
magnetic field may be used.
Power engineering is a subdivision of electrical engineering. It deals with the generation, transmission
and distribution of energy in electrical form. The design of all power equipment also comes under
power engineering. Power engineers may work on the design and maintenance of the power grid
(so-called on-grid systems), and they might also work on off-grid systems that are not connected to the
grid.
Cables, which are used for transmitting power, can be categorized in three forms:
• Low-tension cables, which can transmit voltages up to 1000 volts.
• High-tension cables, which can transmit voltages up to 23000 volts.
• Super-tension cables, which can transmit voltages from 66 kV to 132 kV.
The emf developed in the rotating armature conductors of a DC motor as they cut the magnetic flux
between the poles, which opposes the current flowing through the conductors, is called back emf.
Its value depends upon the speed of rotation of the armature conductors. At starting, the value of
back emf is zero.
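The relation can be rearranged as Eb = V − Ia·Ra; a one-line sketch with illustrative supply, current and armature resistance values:

```python
def back_emf(v_supply, i_armature, r_armature):
    # Eb = V - Ia * Ra; zero at standstill, rising as the motor speeds up
    return v_supply - i_armature * r_armature

# 220 V supply, 40 A armature current, 0.5 ohm armature resistance
print(back_emf(v_supply=220.0, i_armature=40.0, r_armature=0.5))  # 200.0 V
```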
Slip can be defined as the difference between the flux (synchronous) speed Ns and the rotor speed N.
The speed of the rotor of an induction motor is always less than its synchronous speed. Slip is usually
expressed as a percentage of the synchronous speed, S = (Ns − N) / Ns × 100, and is represented by the
symbol ‘S’.
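The slip definition can be evaluated directly (a typical 4-pole, 50 Hz machine running at 1440 rpm):

```python
def slip_percent(ns, n):
    # S = (Ns - N) / Ns * 100
    return (ns - n) / ns * 100.0

print(slip_percent(ns=1500.0, n=1440.0))  # 4.0 percent slip
```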
Storage batteries are used for various purposes, some of the applications are mentioned below:
• For the operation of protective devices and for emergency lighting at generating stations and
substations.
• For starting, ignition and lighting of automobiles, aircraft, etc.
• For lighting on steam and diesel railway trains.
• As a supply power source in telephone exchanges, laboratories and broadcasting stations.
• For emergency lighting at hospitals, banks and rural areas where an electricity supply is not available.
http://www.electricaltechnology.org/2013/09/Basic-Electrical-and-Electronics-Interview-
Questions-and-Answers-Electrical-electronics-notes.html
Electricity flows to your lights and appliances from the power company through your panel, its
breakers, out on your circuits and back. Here is a schematic picture of all the major parts of your
home electrical system.
There are many connections along these paths that can be disrupted or fail, and there are many
ways that electricity could go places you don't want it to.
The Power Company. Your electrical utility company and its distribution system bring power over
wires and through switches and transformers from the generating plant all the way to a point of
connection at your home.
The utility's system itself can have trouble that can affect things in your home. Its built-in safety
features can stop power in time, but other connections, broken lines, storms, imperfections, or
mistakes can sometimes allow unusual voltages into your system, possibly damaging parts of it.
The sensitivity of home electronic equipment to this has made us more aware of this possibility, so
that our use of surge protectors has become common. But some surges are difficult to protect
against and can be similar to lightning strikes in their effects.
This diagram gives a closer look at the source of 120 and 240 volts in the company's transformer.
Your Main Panel. Your central breaker panel (or fusebox) directs electricity through your home as a
number of separate circuits, each flowing "out" from its own circuit breaker (or fuse) on one wire
and returning from whatever is using the electricity to another connection in the panel by means of
another wire. The breaker or fuse will interrupt the current (the flow) if it ever starts to approach a
dangerous level. This diagram compares a main panel as I have diagrammed it so far, with how a
typical panel is arranged:
The panel may contain a distinct "main" breaker that can shut off power to most or all of the
circuits. If not, there may be one near the power company's meter. These devices turn power off
automatically, but connections at any one of these points -- at the meter, at the main breaker,
inside the main breaker -- can fail or become unreliable, disrupting some or all of the power in your
home.
Circuits. A circuit is a path over which electric current can flow from and to an electric source. This
concept could use some clarification. If it were always as simple as current from the source
following only one possible path out to one light and back by one return path, then the operation or
malfunction of a circuit would be easy to grasp. But it is not so simple. This diagram lets you trace
the path of one circuit as it goes through your system:
Code and convention define a circuit in a home as having its source at one of the home's circuit
breakers or fuses. Taking this as the starting place of the electrical source, then, we will find that
most circuits in a home are complex, involving sub-branches like those of a tree.
By Code, a dedicated circuit is used for each of most large appliances like the electric range, electric
water heater, air conditioner, or electric dryer; these as well as electric heaters will have two
(joined) breakers in order to use 240 volts rather than the 120 volts used by most other items. A
dedicated circuit of 120 volts is usually provided for each dishwasher, disposal, gas or oil furnace,
and clothes washer. Most other 120-volt circuits tend to serve a number (from 2 to 20) of lights and
plug-in outlets. There are usually two circuits for the outlets in the kitchen/dining area, and these
use a heavier wire capable of 20 amps of flow.
Circuits serving more than one outlet or light pass power on to successive locations by means of
connections in the device itself or in the box the device is mounted in. So, on any one circuit there
are many places where electricity can fail to get through -- from the circuit breaker and its
connections, through a number of connections at devices and boxes, through switches, and at the
contacts of a receptacle where you plug something in. Troubleshooting electrical problems in your
house will depend on a basic grasp of these matters.
Wires: hot, neutral, ground. To understand the function that different wires in a circuit play,
consider first our use of terms. Because a house is provided with alternating current, the terms
"positive" and "negative" do not apply as they do to direct current in batteries and cars. Instead, the
power company is providing electricity that will flow back and forth 60 times per second. The
electricity flows through the transformer, on the one hand, and the operating household items, on
the other hand, by way of the continuous wire paths between them.
Two of the transformer's terminals are isolated from the earth and the third is connected to the
earth. We call these isolated wires "hot" or "live" because anything even slightly connected to the
earth (like us!), when touching a hot wire, provides, along with the earth, an accidental path for
electricity to flow between that wire and the transformer's "grounded" terminal.
A circuit's hot wire is, we might say, one half of the path the circuit takes between the electrical
source and the operating items ("loads"). The other half, in the case of a 120-volt circuit, is the
"neutral" wire. For a 240-volt circuit, the other half is a hot wire from the other phase -- the other
hot coming from the transformer. When they are turned on (operating, running), the loads are part
of the path of the current and are where the electricity is doing its intended work.
Hot wires are distributed into your home from a number of circuit breakers or fuses in your panel.
Hot wires are typically black, occasionally red or even white, and never green or bare. The earth-
related neutral wires in your home are also distributed from your panel, but from one or two
"neutral bars". Neutral wires are always supposed to be white. Contact with them should not
normally shock you because they are connected to the earth much better (we assume) than you can
be. But contact with a hot, even one that is white-colored, will tend to shock you. Even when they
are switched off, we call these wires hot to remind ourselves that they will be, and to distinguish
them from neutrals and grounds.
Besides black, red, and white wires, the cables in homes wired since the 1960s also contain a bare
or green "ground(ing)" wire. Like the neutral, it is ultimately connected to the transformer's
grounded terminal, but this wire is not connected so as to be part of the normal path of flow around
the circuit. Instead, it is there to connect to the metal parts of lights and appliances, so that a path
is provided "to ground" if a hot wire should contact such parts; otherwise you or I could be the best
available path. (In this diagram, see if you can picture the different paths taken by normal current
and a short-to-ground.)
In other words, when a ground wire does carry current, it is taking care of an otherwise dangerous
situation; in fact, it usually carries so much flow suddenly, that it causes the breaker of the circuit
to trip, thereby also alerting us that a problem needs attention.
Major Electrical Parameters of India
• Voltage – 230 V
• Frequency – 50 Hz
PS: Countries with a 60 Hz supply frequency do not use a 220 V/240 V supply; instead, a 110–120 V
supply is used.
Why does a ceiling fan rotate anticlockwise while a table fan rotates clockwise?
A ceiling fan is mounted on the ceiling and must blow air downward, while a table fan must push
the air in front of it forward. The direction of rotation and the pitch of the blades are chosen so
that each fan moves air in the required direction: if a ceiling fan with the same blades rotated
clockwise, it would draw air upward toward the ceiling instead of blowing it down, whereas a table
fan has free space behind it from which to draw air and blow it forward.
An electrical circuit breaker is a switching device, operated either manually or automatically, for
the control and protection of an electrical power system. Since modern power systems deal with
huge currents, special attention must be given in the design of a circuit breaker to the safe
interruption of the arc produced during the opening/closing operation of the circuit breaker.
Circuit breakers can be classified according to their arc quenching (rapid cooling) media.
Whatever the medium, it extinguishes the arc by increasing the arc voltage in one or more of the
following ways:
• By cooling the arc plasma. As the temperature of the arc plasma decreases, the mobility of the
particles in the arc plasma is reduced; hence a greater voltage gradient is required to maintain
the arc.
• By lengthening the arc path. As the length of the arc path increases, the resistance of the path
increases, and hence to maintain the same arc current more voltage must be applied across the
arc path. That is, the arc voltage is increased.
• By splitting the arc into a number of series arcs, which also increases the arc voltage.
Mineral oil has better insulating property than air. The oil is used to insulate between the phases
and between the phases and the ground, and to extinguish the arc. When electric arc is drawn
under oil, the arc vaporizes the oil and creates a large bubble of hydrogen that surrounds the arc.
The oil surrounding the bubble conducts the heat away from the arc and thus also contributes to
deionization and extinction of the arc. Disadvantages of oil circuit breakers are the flammability of
the oil and the maintenance necessary (i.e. changing and purifying the oil). The oil circuit breaker
is one of the oldest types of circuit breaker.
Vacuum circuit breakers are used mostly for low and medium voltages. Vacuum interrupters are
developed for up to 36 kV and can be connected in series for higher voltages. The interrupting
chambers are made of porcelain and sealed. They cannot be opened for maintenance, but their life
is expected to be about 20 years, provided that the vacuum is maintained. Because of the high
dielectric strength of vacuum, the interrupters are small. The gap between the contacts is about 1
cm for 15 kV interrupters, 2 mm for 3 kV interrupters.
2. ECE Questions
2.1 Computer Networking
1. Transmission media.
Signals are usually transmitted over some transmission media, which are broadly classified into two
categories.
Guided Media: These are those that provide a conduit from one device to another that include
twisted-pair, coaxial cable and fiber-optic cable. A signal traveling along any of these media is
directed and is contained by the physical limits of the medium. Twisted-pair and coaxial cable use
metallic conductors that accept and transport signals in the form of electrical current. Optical fiber
is a glass or plastic cable that accepts and transports signals in the form of light.
Unguided Media: This is wireless media that transports electromagnetic waves without using a
physical conductor. Signals are broadcast through the air, by means of radio communication,
satellite communication and cellular telephony.
2. Difference between baud rate and bit rate.
Bits, as you probably know, are the 0′s and 1′s that binary data consists of. The bit rate is a
measure of the number of bits that are transmitted per unit of time – where the unit of time can be
anything. Generally, you will see the bit rate measured in bits per second. This means that if a
network is transmitting 15,000 bits every second, the bit rate is 15,000 bps – where bps obviously
stands for bits per second.
The baud rate, which is also known as symbol rate, measures the number of symbols that are
transmitted per unit of time. A symbol typically consists of a fixed number of bits depending on
what the symbol is defined as. The baud rate is measured in symbols per second.
To summarize, the bit rate measures the number of bits transmitted per second, whereas the baud
rate measures the number of symbols transmitted per second – and that is the major difference
between the two.
Because symbols are composed of bits, the baud rate equals the bit rate only when there is just
one bit per symbol.
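The relation between the two rates is simply bit rate = baud × bits per symbol; a minimal sketch (the names and figures are illustrative):

```python
def bit_rate(baud, bits_per_symbol):
    """Bit rate (bps) = symbols per second * bits carried by each symbol."""
    return baud * bits_per_symbol

# With one bit per symbol the two rates coincide:
print(bit_rate(2400, 1))   # 2400 bps at 2400 baud
# With a richer symbol set (e.g. a 16-point constellation, 4 bits/symbol):
print(bit_rate(2400, 4))   # 9600 bps at the same 2400 baud
```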
Circuit Switching
In a circuit-switched network, a dedicated channel has to be established before a call can be made
between users. The channel is reserved between the users for as long as the connection is active.
For half-duplex communication one channel is allocated, and for full-duplex communication two
channels are allocated. Circuit switching is mainly used for voice communication, which requires
real-time service without much delay.
As shown in figure 1, if user-A wants to use the network, it first needs to request a channel, and
only then can user-A communicate with user-C. During the connection, if user-B tries to
call/communicate with user-D or any other user, it will get a busy signal from the network.
Packet Switching
In a packet-switched network, unlike a circuit-switched network, no connection needs to be
established in advance. The channel is available for use by many users, but when the capacity is
exceeded or the number of users increases, congestion arises in the network. Packet-switched
networks are mainly used for data applications and for voice applications that do not require
real-time delivery.
As shown in the figure 2, if user-A wants to send data/information to user-C and if user-B wants to
send data to user-D, it is simultaneously possible. Here information is padded with header which
contains addresses of source and destination. This header is sniffed by intermediate switching
nodes to determine their route and destination.
As shown above, in packet-switched (PS) networks quality of service (QoS) is not guaranteed, while
in circuit-switched (CS) networks quality is guaranteed.
PS is used for time-insensitive applications such as internet/email/SMS/MMS/VoIP etc.
In CS, even when a user is not talking, the channel cannot be used by any other user; this wastes
the resource capacity during those intervals.
An example of a circuit-switched network is the PSTN; an example of a packet-switched network is
GPRS/EDGE.
In telecommunication and data storage, Manchester coding (also known as phase encoding,
or PE) is a line code in which the encoding of each data bit has at least one transition and occupies
the same time. It therefore has no DC component, and is self-clocking, which means that it may be
inductively or capacitively coupled, and that a clock signal can be recovered from the encoded data.
As a result, electrical connections using a Manchester code are easily galvanically isolated using a
network isolator—a simple one-to-one isolation transformer.
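The encoding rule can be sketched in a few lines of Python. Note that two opposite conventions exist (IEEE 802.3 vs. G. E. Thomas); the mapping below assumes the IEEE 802.3 one, where a 1 is a low-to-high transition and a 0 is high-to-low:

```python
def manchester_encode(bits):
    """Manchester-encode a bit string: each data bit becomes two half-bit
    levels, guaranteeing a transition in the middle of every bit period
    (IEEE 802.3 convention assumed: '1' -> low,high and '0' -> high,low)."""
    table = {'1': '01', '0': '10'}
    return ''.join(table[b] for b in bits)

encoded = manchester_encode('1011')
print(encoded)  # 01100101
# Every bit period contains a transition, so the signal is self-clocking,
# and equal time is spent high and low, so there is no DC component.
```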
The OSI reference model has seven layers:
1. Physical layer
2. Data link layer
3. Network layer
4. Transport layer
5. Session layer
6. Presentation layer
7. Application layer
IEEE 802.3 is a working group and a collection of IEEE standards produced by the working group
defining the physical layer and data link layer's media access control (MAC) of wired Ethernet. This
is generally a local area network technology with some wide area network applications. Physical
connections are made between nodes and/or infrastructure devices (hubs, switches, routers) by
various types of copper or fiber cable.
8. CDMA
Code division multiple access (CDMA) is a channel access method used by various radio
communication technologies.
CDMA is an example of multiple access, which is where several transmitters can send information
simultaneously over a single communication channel. This allows several users to share a band of
frequencies (see bandwidth). To permit this without undue interference between the users, CDMA
employs spread-spectrum technology and a special coding scheme (where each transmitter is
assigned a code).
CDMA is used as the access method in many mobile phone standards such as cdmaOne,
CDMA2000 (the 3G evolution of cdmaOne), and WCDMA (the 3G standard used by GSM carriers),
which are often referred to as simply CDMA.
Token ring local area network (LAN) technology is a protocol which resides at the data link layer
(DLL) of the OSI model. It uses a special three-byte frame called a token that travels around the
ring.
Two examples of token ring networks: a) Using a single MAU b) Using several MAUs connected to
each other
Token bus is a network implementing the token ring protocol over a "virtual ring" on a coaxial
cable.[1] A token is passed around the network nodes and only the node possessing the token may
transmit. If a node doesn't have anything to send, the token is passed on to the next node on the
virtual ring. Each node must know the address of its neighbour in the ring, so a special protocol is
needed to notify the other nodes of connections to, and disconnections from, the ring.[2]
IEEE 802.11 is a set of media access control (MAC) and physical layer (PHY) specifications for
implementing wireless local area network (WLAN) computer communication in the 2.4, 3.6, 5, and
60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards
Committee (IEEE 802). The base version of the standard was released in 1997, and has had
subsequent amendments. The standard and amendments provide the basis for wireless network
products using the Wi-Fi brand. While each amendment is officially revoked when it is
incorporated in the latest version of the standard, the corporate world tends to market to the
revisions because they concisely denote the capabilities of their products. As a result, in the
marketplace, each revision tends to become its own standard.
There are two types of Internet Protocol (IP) traffic. They are TCP or Transmission Control
Protocol and UDP or User Datagram Protocol. TCP is connection oriented – once a
connection is established, data can be sent bidirectional. UDP is a simpler, connectionless Internet
protocol. Multiple messages are sent as packets in chunks using UDP.
Comparison chart
• Acronym for: TCP – Transmission Control Protocol; UDP – User Datagram Protocol (or Universal
Datagram Protocol).
• Connection: TCP is a connection-oriented protocol; UDP is a connectionless protocol.
• Use by other protocols: TCP – HTTP, HTTPS, FTP, SMTP, Telnet; UDP – DNS, DHCP, TFTP,
SNMP, RIP, VoIP.
• Speed of transfer: TCP is slower than UDP; UDP is faster because there is no error-checking for
packets.
• Header size: TCP header size is 20 bytes; UDP header size is 8 bytes.
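The 8-byte UDP header noted above can be reproduced by packing its four 16-bit fields as defined in RFC 768 (source port, destination port, length, checksum); the function name and example ports below are illustrative:

```python
import struct

def udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the four 16-bit big-endian fields of a UDP header (RFC 768):
    source port, destination port, total length (header + payload), checksum."""
    return struct.pack('!HHHH', src_port, dst_port, 8 + payload_len, checksum)

hdr = udp_header(12345, 53, 32)  # e.g. a 32-byte DNS query payload
print(len(hdr))  # 8 -- matching the "UDP header size is 8 bytes" row
```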
When one part of the subnet (e.g., one or more routers in an area) becomes overloaded,
congestion results. Because the routers are receiving packets faster than they can forward them,
one of two things must happen: the excess packets are queued (increasing delay), or, once the
buffers are full, packets are discarded.
The leaky bucket is an algorithm used in packet switched computer networks and
telecommunications networks. It can be used to check that data transmissions, in the form of
packets, conform to defined limits on bandwidth and burstiness (a measure of the unevenness or
variations in the traffic flow). It can also be used as a scheduling algorithm to determine the timing
of transmissions that will comply with the limits set for the bandwidth and burstiness: see network
scheduler. The leaky bucket algorithm is also used in leaky bucket counters, e.g. to detect when the
average or peak rate of random or stochastic events or stochastic processes exceed defined limits.
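The metering use described above can be sketched in a few lines (a rough illustration; the class name, parameters, and continuous-time drain model are assumptions, not a standard implementation):

```python
class LeakyBucket:
    """Leaky bucket as a meter: the bucket drains at a fixed rate, and an
    arriving packet conforms if its size fits in the remaining capacity."""
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity    # burst limit, e.g. in bytes
        self.leak_rate = leak_rate  # drain rate, e.g. bytes per second
        self.level = 0.0
        self.last = 0.0

    def allow(self, size, now):
        # Drain the bucket for the time elapsed since the last arrival.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + size <= self.capacity:
            self.level += size
            return True   # conforming packet
        return False      # non-conforming: drop or mark it

bucket = LeakyBucket(capacity=1000, leak_rate=100)  # 100 B/s, 1000 B bursts
print(bucket.allow(800, now=0.0))   # True  -- fits in the empty bucket
print(bucket.allow(800, now=0.0))   # False -- 800 + 800 would overflow
print(bucket.allow(800, now=10.0))  # True  -- 10 s of draining made room
```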
An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g.,
computer, printer) participating in a computer network that uses the Internet Protocol for
communication.[1] An IP address serves two principal functions: host or network interface
identification and location addressing. Its role has been characterized as follows: "A name indicates
what we seek. An address indicates where it is. A route indicates how to get there.”
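The two principal functions, interface identification and location addressing, can be illustrated with Python's standard ipaddress module (the address and prefix below are arbitrary examples):

```python
import ipaddress

# An IPv4 address with a /24 prefix: the network part locates the subnet,
# while the host part identifies the interface within it.
iface = ipaddress.ip_interface('192.168.1.42/24')
print(iface.network)              # 192.168.1.0/24 -- the "where"
print(iface.ip)                   # 192.168.1.42   -- the specific interface
print(iface.ip in iface.network)  # True
```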
Your router has an IP address; that is how everything on the internet connects to you, because it
sends its information to your router. This is called the external IP address, so every computer
behind that router shares the same external IP address. The data then goes through the router and
firewall, which send it on to the next IP address (the internal IP address), which is completely
different and is assigned to each computer on the router by the router itself. And in the midst of it
all, the ISP assigns the router's IP. So if you went to another country, your IP would only be
different if you changed ISPs.
But your IP is changed every now and then, randomly, by your ISP, or even when you restart your
router. Technically, those are the only ways your IP would change.
Data privacy, also called information privacy, is the aspect of information technology (IT) that
deals with the ability an organization or individual has to determine what data in a computer
system can be shared with third parties.
Digital signature is a mechanism by which a message is authenticated i.e. proving that a message is
effectively coming from a given sender, much like a signature on a paper document. For instance,
suppose that Alice wants to digitally sign a message to Bob. To do so, she uses her private-key to
encrypt the message; she then sends the message along with her public-key (typically, the public
key is attached to the signed message). Since Alice’s public key is the only key that can decrypt a
message encrypted with her private key, a successful decryption constitutes a Digital Signature
Verification, meaning that there is no doubt that it is Alice’s private key that encrypted the message.
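The sign-with-private-key / verify-with-public-key idea can be sketched with a toy RSA key pair. This uses tiny textbook primes purely for illustration; real systems use 2048-bit (or larger) keys and sign a hash of the message rather than the message itself:

```python
# Toy RSA-style signing (illustration only -- do not use small keys in practice).
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 65
signature = pow(message, d, n)    # Alice "encrypts" with her private key
recovered = pow(signature, e, n)  # Bob "decrypts" with her public key

print(recovered == message)  # True: only Alice's private key could produce this
```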
15. Firewall
In computing, a firewall is a network security system that controls the incoming and outgoing
network traffic based on an applied rule set. A firewall establishes a barrier between a trusted,
secure internal network and another network (e.g., the Internet) that is assumed not to be secure
and trusted.[1] Firewalls exist both as a software solution and as a hardware appliance. Many
hardware-based firewalls also offer other functionality to the internal network they protect, such as
acting as a DHCP server for that network.
Many personal computer operating systems include software-based firewalls to protect against
threats from the public Internet. Many routers that pass data between networks contain firewall
components and, conversely, many firewalls can perform basic routing functions
16. ISDN
Integrated Services for Digital Network (ISDN) is a set of communication standards for
simultaneous digital transmission of voice, video, data, and other network services over the
traditional circuits of the public switched telephone network. It was first defined in 1988 in the
CCITT red book.[1] Prior to ISDN, the telephone system was viewed as a way to transport voice,
with some special services available for data. The key feature of ISDN is that it integrates speech
and data on the same lines, adding features that were not available in the classic telephone system.
There are several kinds of access interfaces to ISDN defined as Basic Rate Interface (BRI), Primary
Rate Interface (PRI), Narrowband ISDN (N-ISDN), and Broadband ISDN (B-ISDN).
ISDN is a circuit-switched telephone network system, which also provides access to packet
switched networks, designed to allow digital transmission of voice and data over ordinary
telephone copper wires, resulting in potentially better voice quality than an analog phone can
provide. It offers circuit-switched connections (for either voice or data), and packet-switched
connections (for data), in increments of 64 kilobit/s. A major market application for ISDN in some
countries is Internet access, where ISDN typically provides a maximum of 128 kbit/s in both
upstream and downstream directions. Channel bonding can achieve a greater data rate; typically
the ISDN B-channels of three or four BRIs (six to eight 64 kbit/s channels) are bonded.
ISDN should not be mistaken for a specific protocol such as Q.931; rather, ISDN encompasses the
network, data-link and physical layers in the context of the OSI model. In a broad sense, ISDN can
be considered a suite of digital services existing on layers 1, 2, and 3 of the OSI model. ISDN is
designed to provide access to voice and data services simultaneously.
However, common use reduced ISDN to be limited to Q.931 and related protocols, which are a set
of protocols for establishing and breaking circuit switched connections, and for advanced calling
features for the user. They were introduced in 1986.[2]
In a videoconference, ISDN provides simultaneous voice, video, and text transmission between
individual desktop videoconferencing systems and group (room) videoconferencing systems.
2.2 Analog Electronics
The term crossover signifies the "crossing over" of the signal between devices, in this case, from the
upper transistor to the lower and vice-versa. The term is not related to the audio crossover—a
filtering circuit which divides an audio signal into frequency bands.
The common-mode rejection ratio (CMRR) of a differential amplifier (or other device) is the
rejection by the device of unwanted input signals common to both input leads, relative to the
wanted difference signal. An ideal differential amplifier would have infinite CMRR; this is not
achievable in practice. A high CMRR is required when a differential signal must be amplified in the
presence of a possibly large common-mode input. An example is audio transmission over balanced
lines.
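CMRR is usually quoted in decibels as 20·log10(Ad/Acm), where Ad is the differential gain and Acm the common-mode gain; a quick sketch (the gain figures below are arbitrary examples):

```python
import math

def cmrr_db(diff_gain, common_mode_gain):
    """Common-mode rejection ratio in decibels: 20 * log10(Ad / Acm)."""
    return 20 * math.log10(diff_gain / common_mode_gain)

# e.g. a differential gain of 10,000 against a common-mode gain of 0.1:
print(cmrr_db(10_000, 0.1))  # 100.0 dB
```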
A voltage divider, using two resistors, can be used to create a virtual ground node. If two voltage
sources are connected in series with two resistors, it can be shown that the midpoint becomes a
virtual ground if V1/R1 = V2/R2.
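Under an assumed polarity (+V1 through R1 to the midpoint, then through R2 to −V2), the midpoint voltage follows from Ohm's law; this small sketch (function name illustrative) shows it vanishing exactly when V1/R1 = V2/R2:

```python
def midpoint_voltage(v1, r1, v2, r2):
    """Midpoint of +v1 --R1--*--R2-- -v2 (two supplies in series with two
    resistors). The node sits at (v1*r2 - v2*r1)/(r1 + r2), which is zero --
    a virtual ground -- exactly when v1/r1 == v2/r2."""
    return (v1 * r2 - v2 * r1) / (r1 + r2)

print(midpoint_voltage(12, 1000, 12, 1000))  # 0.0  (balanced rail splitter)
print(midpoint_voltage(12, 1000, 9, 1000))   # 1.5  (unbalanced: node drifts)
```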
An active virtual ground circuit is sometimes called a rail splitter. Such a circuit uses an op-amp or
some other circuit element that has gain. Since an operational amplifier has very high open-loop
gain, the potential difference between its inputs tend to zero when a feedback network is
implemented. To achieve a reasonable voltage at the output (and thus equilibrium in the system),
the output supplies the inverting input (via the feedback network) with enough voltage to reduce
the potential difference between the inputs to microvolts. The non-inverting (+) input of the
operational amplifier is grounded; therefore, its inverting (-) input, although not connected to
ground, will assume a similar potential, becoming a virtual ground if the opamp is working in its
linear region (i.e., outputs have not saturated).
To improve the full power efficiency of the previous Class A amplifier by reducing the wasted power
in the form of heat, it is possible to design the power amplifier circuit with two transistors in its
output stage producing what is commonly termed as a Class B Amplifier also known as a push-
pull amplifier configuration.
Push-pull amplifiers use two “complementary” or matching transistors, one being an NPN-type
and the other being a PNP-type with both power transistors receiving the same input signal
together that is equal in magnitude, but in opposite phase to each other. This results in one
transistor only amplifying one half or 180o of the input waveform cycle while the other transistor
amplifies the other half or remaining 180o of the input waveform cycle with the resulting “two-
halves” being put back together again at the output terminal.
Then the conduction angle for this type of amplifier circuit is only 180o or 50% of the input signal.
This pushing and pulling effect of the alternating half-cycles by the transistors gives this type of
circuit its amusing “push-pull” name, but it is more generally known as the Class B Amplifier,
with the two output half-cycles being combined to re-form the sine wave in the output transformer's
primary winding, which then appears across the load.
Class B Amplifier operation has zero DC bias as the transistors are biased at the cut-off, so each
transistor only conducts when the input signal is greater than the base-emitter voltage. Therefore,
at zero input there is zero output and no power is being consumed. This then means that the actual
Q-point of a Class B amplifier is on the Vce part of the load line as shown below.
The Class B Amplifier has the big advantage over its Class A amplifier cousin that no current
flows through the transistors when they are in their quiescent state (i.e. with no input signal);
power is dissipated only while each half of the output waveform is being amplified, thereby
reducing the wasted power. This has the effect of almost doubling the efficiency of the amplifier to
around 70%.
Class B amplifiers are greatly preferred over Class A designs for high-power applications such as
audio power amplifiers and PA systems. Like the Class-A Amplifier circuit, one way to greatly boost
the current gain ( Ai ) of a Class B push-pull amplifier is to use Darlington transistors pairs instead
of single transistors in its output circuitry.
The Class B amplifier circuit above uses complementary transistors for each half of the waveform,
and while Class B amplifiers have a much higher efficiency than the Class A types, one of the main
disadvantages of Class B push-pull amplifiers is that they suffer from an effect known commonly
as Crossover Distortion.
6. Maximum efficiency of a class B amplifier.
The theoretical maximum efficiency is 78.5% (π/4), reached at full output swing.
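The 78.5% figure is π/4, which falls out of the ideal class B analysis: output power is Vpk²/(2RL) while the supply delivers an average power of 2·Vpk·Vcc/(π·RL), so η = (π/4)·Vpk/Vcc, maximized at full swing Vpk = Vcc. A quick numerical check:

```python
import math

def class_b_efficiency(vpk, vcc):
    """Ideal class B efficiency: Pout / Pdc = (pi/4) * (Vpk / Vcc)."""
    return (math.pi / 4) * (vpk / vcc)

print(round(class_b_efficiency(1.0, 1.0) * 100, 1))  # 78.5 -- maximum, at full swing
print(round(class_b_efficiency(0.5, 1.0) * 100, 1))  # 39.3 -- falls at lower drive
```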
7. Tuned amplifier.
8. Schmitt Trigger.
Schmitt trigger devices are typically used in signal conditioning applications to remove noise from
signals used in digital circuits, particularly mechanical switch bounce. They are also used in closed
loop negative feedback configurations to implement relaxation oscillators, used in function
generators and switching power supplies.
9. PLL
A phase-locked loop or phase lock loop (PLL) is a control system that generates an output
signal whose phase is related to the phase of an input signal. While there are several differing types,
it is easy to initially visualize as an electronic circuit consisting of a variable frequency oscillator
and a phase detector. The oscillator generates a periodic signal. The phase detector compares the
phase of that signal with the phase of the input periodic signal and adjusts the oscillator to keep the
phases matched. Bringing the output signal back toward the input signal for comparison is called a
feedback loop since the output is 'fed back' toward the input forming a loop.
10. Advantages and disadvantages of feedback.
(Negative feedback)
Disadvantages: less gain; possible oscillation if not properly designed.
(Positive feedback)
Advantages: high gain.
Disadvantages: very prone to oscillation; poor frequency response; more distortion; more drift.
This connection forces the op-amp to adjust its output voltage to be equal to the input voltage
(Vout follows Vin, so the circuit is named the op-amp voltage follower). The importance of this
circuit comes not from any change in voltage, but from the input and output impedances of the
op-amp.
A voltage buffer amplifier is used to transfer a voltage from a first circuit, having a high output
impedance level, to a second circuit with a low input impedance level. The interposed buffer
amplifier prevents the second circuit from loading the first circuit unacceptably and interfering
with its desired operation. In the ideal voltage buffer in the diagram, the input resistance is infinite,
the output resistance zero (impedance of an ideal voltage source is zero). Other properties of the
ideal buffer are: perfect linearity, regardless of signal amplitudes; and instant output response,
regardless of the speed of the input signal.
If the voltage is transferred unchanged (the voltage gain Av is 1), the amplifier is a unity gain
buffer; also known as a voltage follower because the output voltage follows or tracks the input
voltage. Although the voltage gain of a voltage buffer amplifier may be (approximately) unity, it
usually provides considerable current gain and thus power gain. However, it is commonplace to say
that it has a gain of 1 (or the equivalent 0 dB), referring to the voltage gain.
As an example, consider a Thévenin source (voltage VA, series resistance RA) driving a resistor load
RL. Because of voltage division (also referred to as "loading") the voltage across the load is only VA
RL / ( RL + RA ). However, if the Thévenin source drives a unity gain buffer such as that in Figure 1
(top, with unity gain), the voltage input to the amplifier is VA, with no voltage division, because
the amplifier input resistance is infinite. At the output the dependent voltage source delivers
voltage Av VA = VA to the load, again without voltage division because the output resistance of the
buffer is zero. A Thévenin equivalent circuit of the combined original Thévenin source and the
buffer is an ideal voltage source VA with zero Thévenin resistance.
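The loading argument above can be checked numerically; a small sketch with illustrative component values (not taken from the text):

```python
# Thevenin source (V_A, series R_A) driving a load R_L, first directly and
# then through an ideal unity-gain buffer, as described in the text above.
V_A = 10.0    # source voltage in volts (illustrative value)
R_A = 1000.0  # Thevenin (source) resistance in ohms
R_L = 1000.0  # load resistance in ohms

# Direct connection: voltage division ("loading") reduces the load voltage.
V_load_direct = V_A * R_L / (R_L + R_A)

# Through an ideal buffer: infinite input resistance (no division at the input)
# and zero output resistance (no division at the output) deliver the full V_A.
V_load_buffered = V_A

print(V_load_direct, V_load_buffered)  # 5.0 10.0
```

With equal source and load resistances, half the source voltage is lost to loading; the buffer recovers all of it.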
A bridge rectifier is an arrangement of four or more diodes in a bridge circuit configuration which
provides the same output polarity for either input polarity. It is used for converting an alternating
current (AC) input into a direct current (DC) output. A bridge rectifier provides full-wave
rectification from a two-wire AC input, therefore resulting in lower weight and cost when compared
to a rectifier with a 3-wire input from a transformer with a center-tapped secondary winding.
14. Wien-Bridge oscillator
A Wien bridge oscillator is a type of electronic oscillator that generates sine waves. It can
generate a large range of frequencies. The oscillator is based on a bridge circuit originally
developed by Max Wien in 1891.[1] The bridge comprises four resistors and two capacitors. The
oscillator can also be viewed as a positive gain amplifier combined with a bandpass filter that
provides positive feedback.
The modern circuit is derived from William Hewlett's 1939 Stanford University master's degree
thesis. Hewlett figured out how to make the oscillator with a stable output amplitude and low
distortion.[2][3] Hewlett, along with David Packard, co-founded Hewlett-Packard, and Hewlett-
Packard's first product was the HP200A, a precision Wien bridge oscillator.
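With equal resistors and equal capacitors in the frequency-determining arms, the bridge's standard oscillation frequency is f = 1/(2πRC); a quick sketch with illustrative values:

```python
import math

def wien_frequency(R, C):
    """Oscillation frequency of a Wien bridge with equal resistors and equal
    capacitors in the frequency-determining arms: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * R * C)

# Illustrative values: R = 10 kOhm, C = 16 nF gives roughly 1 kHz.
f = wien_frequency(10e3, 16e-9)
print(round(f))  # ~995 Hz
```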
The Early effect is the variation in the width of the base in a bipolar junction transistor (BJT) due
to a variation in the applied base-to-collector voltage, named after its discoverer James M. Early. A
greater reverse bias across the collector–base junction, for example, increases the collector–base
depletion width, decreasing the width of the charge neutral portion of the base.
In Figure 1 the neutral (i.e. active) base is green, and the depleted base regions are hashed light
green. The neutral emitter and collector regions are dark blue and the depleted regions hashed
light blue. Under increased collector–base reverse bias, the lower panel of Figure 1 shows a
widening of the depletion region in the base and the associated narrowing of the neutral base
region.
The collector depletion region also increases under reverse bias, more than does that of the base,
because the collector is less heavily doped. The principle governing these two widths is charge
neutrality. The narrowing of the collector does not have a significant effect as the collector is much
longer than the base. The emitter–base junction is unchanged because the emitter–base voltage is
the same.
Figure 2. The Early voltage (VA) as seen in the output-characteristic plot of a BJT.
There is a lesser chance for recombination within the "smaller" base region.
The charge gradient is increased across the base, and consequently, the current of minority
carriers injected across the emitter junction increases.
Both these factors increase the collector or "output" current of the transistor with an increase in the
collector voltage. This increased current is shown in Figure 2. Tangents to the characteristics at
large voltages extrapolate backward to intercept the voltage axis at a voltage called the Early
voltage, often denoted by the symbol VA.
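One practical consequence of the Early voltage is the finite small-signal output resistance of the transistor, commonly approximated as r_o = (V_A + V_CE) / I_C; a sketch with illustrative values:

```python
def output_resistance(V_A, V_CE, I_C):
    """Small-signal output resistance implied by the Early effect:
    r_o = (V_A + V_CE) / I_C (often further approximated as V_A / I_C)."""
    return (V_A + V_CE) / I_C

# Illustrative values: V_A = 100 V, V_CE = 5 V, I_C = 1 mA.
r_o = output_resistance(V_A=100.0, V_CE=5.0, I_C=1e-3)
print(r_o)  # 105000.0 ohms
```

A larger Early voltage means flatter output characteristics and a higher output resistance.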
The Schottky diode is what is called a majority carrier device. This gives it tremendous
advantages in terms of speed. By making the devices small, the normal RC (resistance-
capacitance) type time constants can be reduced, making the Schottky diode an order of
magnitude faster than the conventional PN diodes. This factor is the prime reason why they are
so popular in RF applications.
The Schottky diode also has a much higher current density than an ordinary PN junction. This
means that forward-voltage drops are lower, making these diodes ideal for use in power-
rectification applications. The main drawback of the diode is found in the level of its reverse
current, which is relatively high. For many uses this may not be a problem, but it is a factor
which is worth watching when using Schottky diodes in more exacting applications.
A high-pass filter is an electronic filter that passes signals with a frequency higher than a certain
cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency. The
amount of attenuation for each frequency depends on the filter design. A high-pass filter is usually
modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-cut
filter.[1] High-pass filters have many uses, such as blocking DC from circuitry sensitive to non-zero
average voltages or radio frequency devices. They can also be used in conjunction with a low-pass
filter to produce a bandpass filter.
A low-pass filter is a filter that passes signals with a frequency lower than a certain cutoff
frequency and attenuates signals with frequencies higher than the cutoff frequency. The amount of
attenuation for each frequency depends on the filter design. The filter is sometimes called a high-
cut filter, or treble cut filter in audio applications. A low-pass filter is the opposite of a high-
pass filter. A band-pass filter is a combination of a low-pass and a high-pass filter.
Low-pass filters exist in many different forms, including electronic circuits (such as a hiss filter
used in audio), anti-aliasing filters for conditioning signals prior to analog-to-digital conversion,
digital filters for smoothing sets of data, acoustic barriers, blurring of images, and so on. The
moving average operation used in fields such as finance is a particular kind of low-pass filter, and
can be analyzed with the same signal processing techniques as are used for other low-pass filters.
Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations, and
leaving the longer-term trend.
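The moving-average low-pass filter mentioned above can be demonstrated in a few lines; the signal values here are illustrative:

```python
def moving_average(signal, window):
    """A moving average is a finite-impulse-response low-pass filter: it keeps
    the slow trend and suppresses fluctuations shorter than the window."""
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

# A slow ramp plus a fast alternating fluctuation (illustrative data).
trend = [1, 2, 3, 4, 5, 6, 7, 8]
noise = [1 if i % 2 == 0 else -1 for i in range(8)]
noisy = [t + n for t, n in zip(trend, noise)]

# Averaging over one full period of the alternation cancels it exactly,
# leaving only the smooth trend.
print(moving_average(noisy, 2))  # [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5]
```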
An optical filter can correctly be called a low-pass filter, but conventionally is called a longpass
filter (low frequency is long wavelength), to avoid confusion.
21.Transistor Biasing
Transistor Biasing is the process of setting a transistor's DC operating voltage or current conditions
to the correct level so that any AC input signal can be amplified correctly by the transistor. A
transistor's steady state of operation depends a great deal on its base current, collector voltage, and
collector current and therefore, if a transistor is to operate as a linear amplifier, it must be properly
biased to have a suitable operating point.
Establishing the correct operating point requires the proper selection of bias resistors and load
resistors to provide the appropriate input current and collector voltage conditions. The correct
biasing point for a transistor, either NPN or PNP, generally lies somewhere between the two extremes
of operation with respect to it being either “fully-ON” or “fully-OFF” along its load line. This central
operating point is called the “Quiescent Operating Point”, or Q-point for short.
When a bipolar transistor is biased so that the Q-point is near the middle of its operating range,
that is approximately halfway between cut-off and saturation, it is said to be operating as a Class-A
amplifier. This mode of operation allows the output current to increase and decrease around the
amplifier's Q-point without distortion as the input signal swings through a complete cycle. In other
words, the output current flows for the full 360° of the input cycle.
So how do we set this Q-point biasing of a transistor? – The correct biasing of the
transistor is achieved using a process commonly known as Base Bias.
The function of the “DC Bias level” or “no input signal level” is to correctly set the transistor's Q-
point by setting its Collector current ( IC ) to a constant and steady-state value without an input
signal applied to the transistor's Base.
This steady-state or DC operating point is set by the values of the circuit's DC supply voltage ( Vcc )
and the values of the biasing resistors connected to the transistor's Base terminal.
Since the transistor's Base bias currents are steady-state DC currents, the appropriate use of
coupling and bypass capacitors will help stop the bias current set up for one transistor stage from
affecting the bias conditions of the next. Base bias networks can be used for Common-base (CB),
common-collector (CC) or common-emitter (CE) transistor configurations. In this simple transistor
biasing tutorial we will look at the different biasing arrangements available for a Common Emitter
Amplifier.
This voltage divider configuration is the most widely used transistor biasing method, as the emitter
diode of the transistor is forward biased by the voltage dropped across resistor RB2. Also, voltage
divider network biasing makes the transistor circuit independent of changes in beta as the voltages
at the transistor's base, emitter, and collector are dependent on external circuit values.
To calculate the voltage developed across resistor RB2 and therefore the voltage applied to the base
terminal we simply use the voltage divider formula for resistors in series. The current flowing
through resistor RB2 is generally set at 10 times the value of the required base current IB so that it
has no effect on the voltage divider current or changes in Beta.
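The voltage-divider calculation described above can be sketched as follows (the component values and the 0.7 V base-emitter drop are illustrative assumptions):

```python
def divider_bias(Vcc, RB1, RB2, RE, Vbe=0.7):
    """Approximate Q-point of a voltage-divider-biased CE stage, assuming a
    stiff divider (divider current >> base current) and IC ~ IE."""
    VB = Vcc * RB2 / (RB1 + RB2)  # voltage divider formula for the base
    VE = VB - Vbe                 # emitter sits one diode drop below the base
    IC = VE / RE                  # emitter (~collector) current
    return VB, IC

# Illustrative values: Vcc = 12 V, RB1 = 10k, RB2 = 2k, RE = 1k.
VB, IC = divider_bias(12.0, 10e3, 2e3, 1e3)
print(VB, IC)  # 2.0 V at the base, ~1.3 mA of collector current
```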
The goal of Transistor Biasing is to establish a known Q-point in order for the transistor to work
efficiently and produce an undistorted output signal. Correct biasing of the transistor also
establishes its initial AC operating region with practical biasing circuits using either a two or four-
resistor bias network.
In bipolar transistor circuits, the Q-point is represented by ( VCE, IC ) for the NPN transistors or
( VEC, IC ) for PNP transistors. The stability of the base bias network and therefore the Q-point is
generally assessed by considering the collector current as a function of both Beta (β) and
temperature.
Here we have looked briefly at five different configurations for “biasing a transistor” using resistive
networks. But we can also bias a transistor using either silicon diodes, zener diodes or active
networks all connected to the base terminal of the transistor or by biasing the transistor from a
dual power supply.
22.Darlington Connection
A Darlington pair is two transistors that act as a single transistor but with a much higher current
gain. This means that a tiny amount of current from a sensor, micro-controller or similar can be
used to drive a larger load. An example circuit is shown below:
The Darlington Pair can be made from two transistors as shown in the diagram or Darlington Pair
transistors are available where the two transistors are contained within the same package.
Transistors have a characteristic called current gain. This is referred to as its hFE.
The amount of current that can pass through the load in the circuit above when the transistor is
turned on is:
The current gain varies for different transistors and can be looked up in the data sheet for the
device. For a normal transistor this would typically be about 100. This would mean that the current
available to drive the load would be 100 times larger than the input to the transistor.
In some applications the amount of input current available to switch on a transistor is very low.
This may mean that a single transistor may not be able to pass sufficient current required by the
load.
As stated earlier this equals the input current x the gain of the transistor (hFE). If it is not possible
to increase the input current then the gain of the transistor will need to be increased. This can be
achieved by using a Darlington Pair.
A Darlington Pair acts as one transistor but with a current gain that equals:
Total current gain (hFE total) = current gain of transistor 1 (hFE t1) x current gain of transistor 2
(hFE t2)
So for example, if you had two transistors each with a current gain (hFE) of 100, the total current gain would be 100 × 100 = 10,000.
You can see that this gives a vastly increased current gain when compared to a single transistor.
Therefore this will allow a very low input current to switch a much bigger load current.
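The gain multiplication stated above can be sketched as:

```python
def darlington_gain(hfe1, hfe2):
    """Overall current gain of a Darlington pair: the product of the two
    individual gains, as stated in the text."""
    return hfe1 * hfe2

total = darlington_gain(100, 100)
print(total)  # 10000

# A 0.1 mA control current can therefore switch a load current of:
print(0.1e-3 * total)  # 1.0 A
```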
23.Half and full wave rectifier.
A rectifier is an electrical device that converts alternating current (AC), which periodically
reverses direction, to direct current (DC), which flows in only one direction. The process is known
as rectification.
Rectifier circuits may be single-phase or multi-phase (three being the most common number of
phases). Most low power rectifiers for domestic equipment are single-phase, but three-phase
rectification is very important for industrial applications and for the transmission of energy as DC
(HVDC).
Single-phase rectifiers
Half-wave rectification
In half wave rectification of a single-phase supply, either the positive or negative half of the AC
wave is passed, while the other half is blocked. Because only one half of the input waveform reaches
the output, mean voltage is lower. Half-wave rectification requires a single diode in a single-phase
supply, or three in a three-phase supply. Rectifiers yield a unidirectional but pulsating direct
current; half-wave rectifiers produce far more ripple than full-wave rectifiers, and much more
filtering is needed to eliminate harmonics of the AC frequency from the output.
Half-wave rectifier
The no-load output DC voltage of an ideal half wave rectifier for a sinusoidal input voltage is:[2]
Vdc = Vpeak / π ≈ 0.318 Vpeak
where Vdc is the average (DC) output voltage and Vpeak is the peak value of the input voltage.
Full-wave rectification
A full-wave rectifier converts the whole of the input waveform to one of constant polarity (positive
or negative) at its output. Full-wave rectification converts both polarities of the input waveform to
pulsating DC (direct current), and yields a higher average output voltage. Two diodes and a center
tapped transformer, or four diodes in a bridge configuration and any AC source (including a
transformer without center tap), are needed.[3] Single semiconductor diodes, double diodes with
common cathode or common anode, and four-diode bridges, are manufactured as single
components.
Graetz bridge rectifier: a full-wave rectifier using 4 diodes
For single-phase AC, if the transformer is center-tapped, then two diodes back-to-back (cathode-
to-cathode or anode-to-anode, depending upon output polarity required) can form a full-wave
rectifier. Twice as many turns are required on the transformer secondary to obtain the same output
voltage as for a bridge rectifier, but the power rating is unchanged.
The average and root-mean-square no-load output voltages of an ideal single-phase full-wave
rectifier are:
Vdc = 2 Vpeak / π ≈ 0.637 Vpeak and Vrms = Vpeak / √2 ≈ 0.707 Vpeak
Very common double-diode rectifier vacuum tubes contained a single common cathode and two
anodes inside a single envelope, achieving full-wave rectification with positive output. The 5U4 and
5Y3 were popular examples of this configuration.
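The half-wave and full-wave averages can be compared numerically (the 10 V peak here is an illustrative value):

```python
import math

def half_wave_dc(v_peak):
    """No-load DC (average) output of an ideal half-wave rectifier."""
    return v_peak / math.pi

def full_wave_dc(v_peak):
    """No-load DC (average) output of an ideal full-wave rectifier."""
    return 2.0 * v_peak / math.pi

# For a 10 V peak sine input, full-wave rectification doubles the
# average output compared with half-wave rectification.
print(round(half_wave_dc(10.0), 2))  # 3.18 V
print(round(full_wave_dc(10.0), 2))  # 6.37 V
```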
Three-phase rectifiers
24.Varactor diode.
Varactor diodes or varicap diodes are semiconductor devices that are widely used in the electronics
industry and are used in many applications where a voltage controlled variable capacitance is
required. Although the terms varactor diode and varicap diode can be used interchangeably, the
more common term these days is the varactor diode.
Although ordinary PN junction diodes exhibit the variable capacitance effect and can be used for
this application, special diodes optimised to give the required changes in capacitance are normally used.
Varactor diodes or varicap diodes normally enable much higher ranges of capacitance change to be
gained as a result of the way in which they are manufactured. There are a variety of types of
varactor diode ranging from relatively standard varieties to those that are described as abrupt or
hyperabrupt varactor diodes.
The varactor diode or varicap diode consists of a standard PN junction, although it is obviously
optimised for its function as a variable capacitor. In fact ordinary PN junction diodes can be used
as varactor diodes, even if their performance is not to the same standard as specially manufactured
varactors.
The basis of operation of the varactor diode is quite simple. The diode is operated under reverse
bias conditions and this gives rise to three regions. At either end of the diode are the P and N
regions where current can be conducted. However, around the junction is the depletion region
where no current carriers are available. As a result, current can be carried in the P and N regions,
but the depletion region is an insulator.
This is exactly the same construction as a capacitor. It has conductive plates separated by an
insulating dielectric.
The capacitance of a capacitor is dependent on a number of factors including the plate area, the
dielectric constant of the insulator between the plates and the distance between the two plates. In
the case of the varactor diode, it is possible to increase and decrease the width of the depletion
region by changing the level of the reverse bias. This has the effect of changing the distance
between the plates of the capacitor.
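The bias dependence of the capacitance is often described by the textbook junction-capacitance law C = C0 / (1 + V/φ)^n; a sketch under that assumed model (the C0, φ and n values are illustrative, not data for a real part):

```python
def varactor_capacitance(C0, V_reverse, phi=0.7, n=0.5):
    """Textbook junction-capacitance model C = C0 / (1 + V/phi)**n.
    n ~ 0.5 for an abrupt junction; hyperabrupt diodes have larger n.
    C0 and phi here are illustrative assumptions."""
    return C0 / (1.0 + V_reverse / phi) ** n

# More reverse bias -> wider depletion region -> smaller capacitance.
c_at_1V = varactor_capacitance(100e-12, 1.0)
c_at_10V = varactor_capacitance(100e-12, 10.0)
print(c_at_1V > c_at_10V)  # True
```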
25.Zener diode
A Zener diode is a diode which allows current to flow in the forward direction in the same
manner as an ideal diode, but also permits it to flow in the reverse direction when the voltage is
above a certain value known as the breakdown voltage, "Zener knee voltage", "Zener voltage",
"avalanche point", or "peak inverse voltage".
Zener breakdown
In Zener breakdown the electrostatic attraction between the negative electrons and a large positive
voltage is so great that it pulls electrons out of their covalent bonds and away from their parent
atoms, i.e. electrons are transferred from the valence band to the conduction band. In this situation the
current can still be limited by the limited number of free electrons produced by the applied voltage
so it is possible to cause Zener breakdown without damaging the semiconductor.
Avalanche breakdown
Avalanche breakdown occurs when the applied voltage is so large that electrons that are pulled
from their covalent bonds are accelerated to great velocities. These electrons collide with the silicon
atoms and knock off more electrons. These electrons are then also accelerated and subsequently
collide with other atoms. Each collision produces more electrons which leads to more collisions etc.
The current in the semiconductor rapidly increases and the material can quickly be destroyed.
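This controlled breakdown is what makes the Zener useful as a simple shunt regulator; a minimal sizing sketch (the component values and the 5 mA minimum Zener current are assumptions for illustration):

```python
def series_resistor(V_in, V_z, I_load, I_z_min=5e-3):
    """Series resistor for a basic Zener shunt regulator: it must drop
    V_in - V_z while carrying the load current plus a minimum Zener
    current that keeps the diode in breakdown."""
    return (V_in - V_z) / (I_load + I_z_min)

# Illustrative values: 12 V supply, 5.1 V Zener, 20 mA load.
R = series_resistor(12.0, 5.1, 20e-3)
print(round(R))  # ~276 ohms
```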
The band gap represents the minimum energy difference between the top of the valence band and
the bottom of the conduction band. However, the top of the valence band and the bottom of the
conduction band are not generally at the same value of the electron momentum. In a direct band
gap semiconductor, the top of the valence band and the bottom of the conduction band occur at
the same value of momentum, as in the schematic below.
In an indirect band gap semiconductor, the maximum energy of the valence band occurs at a
different value of momentum to the minimum in the conduction band energy:
The difference between the two is most important in optical devices. As has been mentioned in the
section charge carriers in semiconductors, a photon can provide the energy to produce an electron-
hole pair.
Each photon of energy E has momentum p = E/c, where c is the velocity of light. An optical photon
has an energy of the order of 10–19 J, and, since c =3 × 108 ms–1, a typical photon has a very small
amount of momentum.
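The order-of-magnitude claim can be checked directly:

```python
c = 3.0e8    # speed of light, m/s
E = 1.0e-19  # typical optical photon energy, J (order of magnitude from the text)
p = E / c    # photon momentum p = E / c
print(p)     # ~3.3e-28 kg*m/s, tiny on the scale of crystal momenta
```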
A photon of energy Eg, where Eg is the band gap energy, can produce an electron-hole pair in a
direct band gap semiconductor quite easily, because the electron does not need to be given very
much momentum. However, an electron must also undergo a significant change in its momentum
for a photon of energy Eg to produce an electron-hole pair in an indirect band gap semiconductor.
This is possible, but it requires such an electron to interact not only with the photon to gain energy,
but also with a lattice vibration called a phonon in order to either gain or lose momentum.
The indirect process proceeds at a much slower rate, as it requires three entities to interact in
order to proceed: an electron, a photon and a phonon. This is analogous to chemical reactions,
where, in a particular reaction step, a reaction between two molecules will proceed at a much
greater rate than a process which involves three molecules.
The same principle applies to recombination of electrons and holes to produce photons. The
recombination process is much more efficient for a direct band gap semiconductor than for an
indirect band gap semiconductor, where the process must be mediated by a phonon.
As a result of such considerations, gallium arsenide and other direct band gap semiconductors are
used to make optical devices such as LEDs and semiconductor lasers, whereas silicon, which is an
indirect band gap semiconductor, is not. The table in the next section lists a number of different
semiconducting compounds and their band gaps, and it also specifies whether their band gaps are
direct or indirect.
28.Class AB amplifier
We know that we need the base-emitter voltage to be greater than 0.7 V for a silicon bipolar
transistor to start conducting, so if we were to replace the two voltage divider biasing resistors
connected to the base terminals of the transistors with two silicon diodes, the biasing voltage
applied to the transistors would now be equal to the forward voltage drop of the diode. These two
diodes are generally called Biasing Diodes or Compensating Diodes and are chosen to match
the characteristics of the matching transistors. The circuit below shows diode biasing.
Class AB Amplifier
The Class AB Amplifier circuit is a compromise between the Class A and the Class B
configurations. This very small diode biasing voltage causes both transistors to slightly conduct
even when no input signal is present. An input signal waveform will cause the transistors to operate
as normal in their active region thereby eliminating any crossover distortion present in pure Class
B amplifier designs.
A small collector current will flow when there is no input signal but it is much less than that for the
Class A amplifier configuration. This means that the transistor will be “ON” for more than half
a cycle of the waveform but much less than a full cycle, giving a conduction angle of between 180°
and 360°, or 50 to 100% of the input signal, depending upon the amount of additional biasing used. The
amount of diode biasing voltage present at the base terminal of the transistor can be increased in
multiples by adding additional diodes in series.
A laser is a device that emits light through a process of optical amplification based on the
stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for
"light amplification by stimulated emission of radiation".[1][2] A laser differs from other
sources of light because it emits light coherently. Spatial coherence allows a laser to be focused to a
tight spot, enabling applications like laser cutting and lithography. Spatial coherence also allows a
laser beam to stay narrow over long distances (collimation), enabling applications such as laser
pointers. Lasers can also have high temporal coherence which allows them to have a very narrow
spectrum, i.e., they only emit a single color of light. Temporal coherence can be used to produce
pulses of light—as short as a femtosecond.
Lasers have many important applications. They are used in common consumer devices such as
optical disk drives, laser printers, and barcode scanners. Lasers are used for both fiber-optic and
free-space optical communication. They are used in medicine for laser surgery and various skin
treatments, and in industry for cutting and welding materials. They are used in military and law
enforcement devices for marking targets and measuring range and speed. Laser lighting displays
use laser light as an entertainment medium.
Quantum noise refers to the uncertainty of a physical quantity, due to its quantum origin. In
certain situations, quantum noise appears as shot noise; for example, most optical communications
use amplitude modulation, and thus, the quantum noise appears as shot noise only. For the case of
uncertainty in the electric field in some lasers, the quantum noise is not just shot noise;
uncertainties of both amplitude and phase contribute to the quantum noise. This issue becomes
important in the case of noise of a quantum amplifier, which preserves the phase. The phase noise
becomes important when the energy of the frequency modulation or phase modulation of waves is
comparable to the energy of the signal (which is believed to be more robust with respect to additive
noise than an amplitude modulation).
An optical fiber (or optical fibre) is a flexible, transparent fiber made of extruded glass (silica)
or plastic, slightly thicker than a human hair. It can function as a waveguide, or “light pipe”,[1] to
transmit light between the two ends of the fiber.[2] The field of applied science and engineering
concerned with the design and application of optical fibers is known as fiber optics.
Optical fibers are widely used in fiber-optic communications, where they permit transmission over
longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead of
metal wires because signals travel along them with less loss and are also immune to
electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so
that they may be used to carry images, thus allowing viewing in confined spaces. Specially designed
fibers are used for a variety of other applications, including sensors and fiber lasers.
Optical fibers typically include a transparent core surrounded by a transparent cladding material
with a lower index of refraction. Light is kept in the core by total internal reflection. This causes the
fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are
called multi-mode fibers (MMF), while those that only support a single mode are called single-
mode fibers (SMF). Multi-mode fibers generally have a wider core diameter, and are used for
short-distance communication links and for applications where high power must be transmitted.
Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).
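The total-internal-reflection condition mentioned above can be quantified by the critical angle θc = arcsin(n_cladding / n_core); a sketch with typical (assumed) silica indices:

```python
import math

def critical_angle(n_core, n_cladding):
    """Critical angle (degrees, measured from the normal) for total internal
    reflection at the core/cladding boundary: arcsin(n2 / n1)."""
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative indices for a silica fiber: n_core = 1.48, n_cladding = 1.46.
theta_c = critical_angle(1.48, 1.46)
print(round(theta_c, 1))  # rays striking the wall beyond this angle stay guided
```

A small index step puts the critical angle close to 90°, which is why only shallow, near-axial rays are guided.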
3. SSB.
4. Antenna
An antenna (or aerial) is an electrical device which converts electric power into radio waves, and
vice versa.[1] It is usually used with a radio transmitter or radio receiver. In transmission, a radio
transmitter supplies an electric current oscillating at radio frequency (i.e. a high frequency
alternating current (AC)) to the antenna's terminals, and the antenna radiates the energy from the
current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the
power of an electromagnetic wave in order to produce a tiny voltage at its terminals, that is applied
to a receiver to be amplified.
Antennas are essential components of all equipment that uses radio. They are used in systems such
as radio broadcasting, broadcast television, two-way radio, communications receivers, radar, cell
phones, and satellite communications, as well as other devices such as garage door openers,
wireless microphones, Bluetooth-enabled devices, wireless computer networks, baby monitors, and
RFID tags on merchandise.
A skip distance is the distance a radio wave travels, usually including a hop in the ionosphere. A
skip distance is a distance on the Earth's surface between the two points where radio waves from a
transmitter, refracted downwards by different layers of the ionosphere, fall. It also represents how
far a radio wave has travelled per hop on the Earth's surface, for radio waves such as the short wave
(SW) radio signals that employ continuous reflections for transmission.
MUF
In radio transmission maximum usable frequency (MUF) is the highest radio frequency that
can be used for transmission between two points via reflection from the ionosphere ( skywave or
"skip" propagation) at a specified time, independent of transmitter power. This index is especially
useful in regard to shortwave transmissions.
In shortwave radio communication, a major mode of long distance propagation is for the radio
waves to reflect off the ionized layers of the atmosphere and return diagonally back to Earth. In this
way radio waves can travel beyond the horizon, around the curve of the Earth. However the
refractive index of the ionosphere decreases with increasing frequency, so there is an upper limit to
the frequency which can be used. Above this frequency the radio waves are not reflected by the
ionosphere but are transmitted through it into space.
On a given day, communications may or may not succeed at the MUF. Commonly, the optimal
operating frequency for a given path is estimated at 80 to 90% of the MUF. As a rule of thumb the
MUF is approximately 3 times the critical frequency:[1]
MUF = fc / cos θ [2]
where the critical frequency fc is the highest frequency reflected for a signal propagating directly
upward and θ is the angle of incidence on the ionospheric layer.
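Assuming the standard secant-law relation MUF = fc / cos θ, both the formula and the 3× rule of thumb can be sketched numerically (the critical frequency here is an illustrative value):

```python
import math

def muf(critical_frequency, incidence_angle_deg):
    """Secant-law estimate: MUF = f_c / cos(theta), with theta the angle of
    incidence on the ionospheric layer, measured from the vertical."""
    return critical_frequency / math.cos(math.radians(incidence_angle_deg))

f_c = 5.0e6  # illustrative critical frequency, Hz
print(muf(f_c, 0.0))  # vertical incidence: MUF equals the critical frequency
# Oblique incidence near 70.5 degrees reproduces the "MUF ~ 3 x f_c" rule of thumb.
print(round(muf(f_c, 70.5) / f_c, 1))  # ~3.0
```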
Fading
In wireless communications, fading is deviation of the attenuation affecting a signal over certain
propagation media. The fading may vary with time, geographical position or radio frequency, and is
often modeled as a random process. A fading channel is a communication channel that
experiences fading. In wireless systems, fading may either be due to multipath propagation,
referred to as multipath induced fading, or due to shadowing from obstacles affecting the wave
propagation, sometimes referred to as shadow fading.
6. Satellite Communication.
In satellite communication, signal transferring between the sender and receiver is done with the
help of satellite. In this process, the signal which is basically a beam of modulated microwaves is
sent towards the satellite. Then the satellite amplifies the signal and sends it back to the receiver’s
antenna present on the earth’s surface. So, all the signal transferring is happening in space. Thus,
this type of communication is known as space communication.
These signals are received by the transponders attached to the satellite. Then, after amplifying,
these signals are transmitted back to earth. This sending can be done at the same time or after some
delay. The amplified signals can be stored in the memory of the satellite; when the earth properly
faces the satellite, the satellite starts sending the signals to earth. Some active satellites also have
programming and recording features, so these recordings can be easily played and watched. The
first active satellite was launched by Russia in 1957. The signals coming from the satellite, when
they reach the earth, are of very low intensity. Their amplification is done by the receivers
themselves. After amplification these become available for further use.
Microwave communication is possible only if the position of satellite becomes stationary with
respect to the position of earth. So, these types of satellites are known as geostationary
satellites.
7. Superheterodyne receiver.
TDM
Time-division multiplexing (TDM) is a method of transmitting and receiving independent
signals over a common signal path by means of synchronized switches at each end of the
transmission line so that each signal appears on the line only a fraction of time in an alternating
pattern.
In its primary form, TDM is used for circuit mode communication with a fixed number of channels
and constant bandwidth per channel.
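As a rough sketch of the idea (function names hypothetical), round-robin TDM can be modeled as interleaving one sample per channel into successive frames, with demultiplexing done by the synchronized switch at the far end:

```python
# Toy TDM model: one sample per channel per frame, in a fixed
# alternating pattern, exactly as the synchronized switches would produce.
def tdm_multiplex(channels):
    """Interleave one sample from each channel into successive frames."""
    return [ch[i] for i in range(len(channels[0])) for ch in channels]

def tdm_demultiplex(stream, n_channels):
    """Recover channel k by taking every n-th sample starting at offset k."""
    return [stream[k::n_channels] for k in range(n_channels)]

voice = [1, 2, 3]
data = [10, 20, 30]
line = tdm_multiplex([voice, data])
print(line)                                        # [1, 10, 2, 20, 3, 30]
print(tdm_demultiplex(line, 2) == [voice, data])   # True
```

Each signal occupies the line only a fraction of the time, but since the switching is synchronized, both channels are recovered intact.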
FDM
Frequency-division multiplexing (FDM) carries several independent signals over a common medium by assigning each signal its own non-overlapping frequency band. Cellular systems apply the same idea: a cellular network or mobile network is a wireless network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed bandwidth within each cell.
When joined together these cells provide radio coverage over a wide geographic area. This enables
a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with
each other and with fixed transceivers and telephones anywhere in the network, via base stations,
even if some of the transceivers are moving through more than one cell during transmission.
A cellular network offers several advantages:
More capacity than a single large transmitter, since the same frequency can be used for multiple links as long as they are in different cells
Lower power use by mobile devices than with a single transmitter or satellite, since the cell towers are closer
Larger coverage area than a single terrestrial transmitter, since additional cell towers can be added indefinitely and are not limited by the horizon
Major telecommunications providers have deployed voice and data cellular networks over most of
the inhabited land area of the Earth. This allows mobile phones and mobile computing devices to
be connected to the public switched telephone network and public Internet. Private cellular
networks can be used for research[1] or for large organizations and fleets, such as dispatch for local
public safety agencies or a taxicab company.
10. CRO.
The cathode-ray oscilloscope (CRO) is a common laboratory instrument that provides accurate
time and amplitude measurements of voltage signals over a wide range of frequencies. Its
reliability, stability, and ease of operation make it suitable as a general purpose laboratory
instrument. The heart of the CRO is a cathode-ray tube shown schematically in Fig. 1.
The cathode ray is a beam of electrons which are emitted by the heated cathode (negative
electrode) and accelerated toward the fluorescent screen. The assembly of the cathode, intensity
grid, focus grid, and accelerating anode (positive electrode) is called an electron gun. Its purpose is
to generate the electron beam and control its intensity and focus. Between the electron gun and the fluorescent screen are two pairs of metal plates: one pair oriented to provide horizontal deflection of the beam and one pair oriented to give vertical deflection to the beam. These plates are thus referred to as the horizontal and vertical deflection plates. The combination of these two deflections allows the beam to reach any portion of the fluorescent screen. Wherever the electron beam hits the screen, the phosphor is excited and light is emitted from that point. This conversion of electron energy into light allows us to write with points or lines of light on an otherwise darkened screen.
In the most common use of the oscilloscope the signal to be studied is first amplified and then
applied to the vertical (deflection) plates to deflect the beam vertically and at the same time a
voltage that increases linearly with time is applied to the horizontal (deflection) plates, thus causing the beam to be deflected horizontally at a uniform (constant) rate. The signal applied to the vertical plates is thus displayed on the screen as a function of time. The horizontal axis serves as a uniform
time scale.
11. CCD
A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example conversion into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time.
12. PLA
A programmable logic array (PLA) is a kind of programmable logic device used to implement
combinational logic circuits. The PLA has a set of programmable AND gate planes, which link to a
set of programmable OR gate planes, which can then be conditionally complemented to produce an
output. This layout allows for a large number of logic functions to be synthesized in the sum of
products (and sometimes product of sums) canonical forms.
PLAs differ from Programmable Array Logic devices (PALs and GALs) in that both the AND and OR gate planes are programmable.
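The sum-of-products structure can be sketched in software (a simplified model, not any particular device): the AND plane evaluates product terms against the inputs, and the OR plane sums selected terms for each output. Here XOR = A'B + AB' serves as the example:

```python
# Toy PLA model (names hypothetical). The AND plane lists product terms
# as {input: required value}; the OR plane lists, for each output, which
# product terms are summed.
def pla_eval(inputs, and_plane, or_plane):
    products = [all(inputs[v] == val for v, val in term.items())
                for term in and_plane]
    return [int(any(products[i] for i in terms)) for terms in or_plane]

and_plane = [{"A": 0, "B": 1},   # product term 0: A'B
             {"A": 1, "B": 0}]   # product term 1: AB'
or_plane = [[0, 1]]              # single output = term 0 OR term 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", pla_eval({"A": a, "B": b}, and_plane, or_plane)[0])
```

Programming a real PLA amounts to choosing which literals enter each AND row and which rows feed each OR column, exactly the two tables above.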
In electronics, pass transistor logic (PTL) describes several logic families used in the design of
integrated circuits. It reduces the count of transistors used to make different logic gates, by
eliminating redundant transistors. Transistors are used as switches to pass logic levels between
nodes of a circuit, instead of as switches connected directly to supply voltages.[1] This reduces the
number of active devices, but has the disadvantage that the difference of the voltage between high
and low logic levels decreases at each stage. Each transistor in series is less saturated at its output
than at its input.[2] If several devices are chained in series in a logic path, a conventionally
constructed gate may be required to restore the signal voltage to the full value. By contrast,
conventional CMOS logic switches transistors so the output connects to one of the power supply
rails, so logic voltage levels in a sequential chain do not decrease. Since there is less isolation
between input signals and outputs, designers must take care to assess the effects of unintentional
paths within the circuit. For proper operation, design rules restrict the arrangement of circuits, so
that sneak paths, charge sharing, and slow switching can be avoided.[3] Simulation of circuits may
be required to ensure adequate performance.
A transmission gate is similar to a relay that can conduct in both directions or block, under a control signal with almost any voltage potential.
In principle, a transmission gate is made up of two field-effect transistors in which, in contrast to traditional discrete field-effect transistors, the substrate terminal (bulk) is not connected internally to the source terminal. The two transistors, an n-channel MOSFET and a p-channel MOSFET, are connected in parallel; only the drain and source terminals of the two transistors are connected together. Their gate terminals are connected to each other via a NOT gate (inverter), to form the control terminal.
If, as with discrete transistors, the substrate terminal were connected to the source terminal, the transistor would have a diode in parallel (the body diode), through which the transistor conducts backwards.
However, since a transmission gate must block flow in either direction, the substrate terminals are
connected to the respective supply voltage potential in order to ensure that the substrate diode is
always operated in the reverse direction. The substrate terminal of the n-channel MOSFET is thus
connected to the positive supply voltage potential and the substrate terminal of the p-channel
MOSFET connected to the negative supply voltage potential.
Domino logic is a CMOS-based evolution of the dynamic logic techniques based on either PMOS
or NMOS transistors. It allows a rail-to-rail logic swing. It was developed to speed up circuits.
The term derives from the fact that in domino logic (a cascade structure consisting of several stages), each stage ripples the next stage for evaluation, similar to dominoes falling one after the other.
In dynamic logic, a problem arises when cascading one gate to the next. The precharge "1" state of
the first gate may cause the second gate to discharge prematurely, before the first gate has reached
its correct state. This uses up the "precharge" of the second gate, which cannot be restored until the
next clock cycle, so there is no recovery from this error.[1]
In order to cascade dynamic logic gates, one solution is Domino Logic, which inserts an ordinary
static inverter between stages. While this might seem to defeat the point of dynamic logic, since the
inverter has a pFET (one of the main goals of Dynamic Logic is to avoid pFETs where possible, due
to speed), there are two reasons it works well. First, there is no fanout to multiple pFETs. The
dynamic gate connects to exactly one inverter, so the gate is still very fast. And since the inverter
connects to only nFETs in dynamic logic gates, it too is very fast. Second, the pFET in an inverter
can be made smaller than in some types of logic gates.[2]
In Domino logic cascade structure of several stages, the evaluation of each stage ripples the next
stage evaluation, similar to a domino falling one after the other. Once fallen, the node states cannot
return to "1" (until the next clock cycle) just as dominos, once fallen, cannot stand up, justifying the
name Domino CMOS Logic. It contrasts with other solutions to the cascade problem in which
cascading is interrupted by clocks or other means.
Ripple Counter: A ripple counter is an asynchronous counter where only the first flip-flop is
clocked by an external clock. All subsequent flip-flops are clocked by the output of the preceding
flip-flop. Asynchronous counters are also called ripple counters because of the way the clock pulse ripples its way through the flip-flops.
The MOD of the ripple counter or asynchronous counter is 2^n if n flip-flops are used. For a 4-bit counter, the range of the count is 0000 to 1111 (2^4 − 1). A counter may count up or count down or
count up and down depending on the input control. The count sequence usually repeats itself.
When counting up, the count sequence goes from 0000, 0001, 0010, ... 1110 , 1111 , 0000, 0001, ...
etc. When counting down the count sequence goes in the opposite manner: 1111, 1110, ... 0010,
0001, 0000, 1111, 1110, ... etc.
The complement of the count sequence counts in reverse direction. If the uncomplemented output
counts up, the complemented output counts down. If the uncomplemented output counts down,
the complemented output counts up.
There are many ways to implement the ripple counter depending on the characteristics of the flip
flops used and the requirements of the count sequence.
Clock Trigger: Positive edged or Negative edged
JK or D flip-flops
Count Direction: Up, Down, or Up/Down
Asynchronous counters are slower than synchronous counters because of the delay in the
transmission of the pulses from flip-flop to flip-flop. With a synchronous circuit, all the bits in the
count change synchronously with the assertion of the clock. Examples of synchronous counters are
the Ring and Johnson counter.
The circuit below uses 2 D flip-flops to implement a divide-by-4 ripple counter (2^n = 2^2 = 4). It counts down.
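The ripple behavior can be sketched behaviorally (a simplified model, not a gate-level simulation): the external clock toggles the first stage, and each stage's 0-to-1 output transition clocks the next stage, which for positive-edge-triggered stages gives a down count:

```python
def ripple_down_counter(n_bits, n_clocks):
    """Behavioral model: stage i toggles on the rising edge of stage i-1."""
    q = [0] * n_bits
    counts = []
    for _ in range(n_clocks):
        i, rising = 0, True           # the external clock toggles bit 0
        while rising and i < n_bits:
            old = q[i]
            q[i] ^= 1                 # this stage toggles
            rising = (old == 0)       # a 0 -> 1 output clocks the next stage
            i += 1
        counts.append(sum(bit << k for k, bit in enumerate(q)))
    return counts

# 2 flip-flops -> MOD 2^2 = 4, counting down: 3, 2, 1, 0, 3, ...
print(ripple_down_counter(2, 5))   # [3, 2, 1, 0, 3]
```

Note how the toggle propagates stage by stage rather than all at once; in hardware this propagation is exactly the cumulative delay that makes ripple counters slower than synchronous ones.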
Synchronous counter
In synchronous counters, the clock inputs of all the flip-flops are connected together and are
triggered by the input pulses. Thus, all the flip-flops change state simultaneously (in parallel). The
circuit below is a 4-bit synchronous counter. The J and K inputs of FF0 are connected to HIGH.
FF1 has its J and K inputs connected to the output of FF0, and the J and K inputs of FF2 are
connected to the output of an AND gate that is fed by the outputs of FF0 and FF1. A simple way of
implementing the logic for each bit of an ascending counter (which is what is depicted in the image
to the right) is for each bit to toggle when all of the less significant bits are at a logic high state. For
example, bit 1 toggles when bit 0 is logic high; bit 2 toggles when both bit 1 and bit 0 are logic high;
bit 3 toggles when bit 2, bit 1 and bit 0 are all high; and so on.
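The toggle rule above can be sketched directly (a behavioral model with hypothetical names, not a gate-level circuit): on each clock, every bit's toggle decision is computed from the state before the edge, then all bits update in parallel:

```python
def sync_up_counter(n_bits, n_clocks):
    """All bits update on the same clock edge, from the pre-edge state."""
    q = [0] * n_bits
    history = []
    for _ in range(n_clocks):
        # Bit i toggles when every less significant bit is high (bit 0 always,
        # since all() of an empty list is True).
        toggles = [all(q[:i]) for i in range(n_bits)]
        q = [bit ^ t for bit, t in zip(q, toggles)]
        history.append(sum(bit << k for k, bit in enumerate(q)))
    return history

print(sync_up_counter(4, 6))   # [1, 2, 3, 4, 5, 6]
```

Computing all toggles before applying any of them is what models the simultaneous (parallel) state change of a synchronous counter.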
Synchronous counters can also be implemented with hardware finite state machines, which are
more complex but allow for smoother, more stable transitions.
Hardware-based counters are of this type.
We know that when J=K=1 in a JK flip-flop, the output is the complement of the previous output, i.e. if J=K=1 and Y=0 then after the clock pulse Y becomes 1. But if the propagation delay of the gates is much less than the pulse duration, then during the same pulse Y first becomes 1, and after another propagation delay becomes 0 again, and so on. Thus, within the same pulse duration, because of the very small propagation delays, the output oscillates back and forth between 0 and 1. This condition is called the race-around condition, and at the end of the pulse the output is uncertain.
A master-slave flip-flop is a cascade of two flip-flops in which the first one responds to the data inputs when the clock is high, whereas the second one responds to the output of the first one when the clock is low. Thus, the final output changes only when the clock is low, when the data inputs are not effective. The race-around condition is thereby eliminated.
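The master-slave behavior can be sketched as follows (a behavioral model with hypothetical names; each call represents one full clock pulse, high then low):

```python
class MasterSlaveJK:
    """Master responds to J, K while the clock is high; the slave copies the
    master when the clock goes low, so the output changes once per pulse."""
    def __init__(self):
        self.master = 0
        self.q = 0

    def pulse(self, j, k):
        # Clock high: master latches the JK action exactly once.
        if j and k:
            self.master = self.q ^ 1   # toggle
        elif j:
            self.master = 1
        elif k:
            self.master = 0            # (j=k=0 holds the previous state)
        # Clock low: slave takes the master's value and drives the output.
        self.q = self.master
        return self.q

ff = MasterSlaveJK()
print(ff.pulse(1, 1), ff.pulse(1, 1), ff.pulse(1, 1))   # 1 0 1
```

With J=K=1 the output toggles exactly once per pulse instead of oscillating, which is precisely why the master-slave arrangement avoids race-around.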
18. Flip-flop. D and T-type flip-flops
A: In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store
state information. A flip-flop is a bistable multivibrator. The circuit can be made to change state by
signals applied to one or more control inputs and will have one or two outputs. It is the basic
storage element in sequential logic. Flip-flops and latches are a fundamental building block of
digital electronics systems used in computers, communications, and many other types of systems.
Flip-flops and latches are used as data storage elements. Such data storage can be used for storage
of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the
output and next state depend not only on its current input, but also on its current state (and hence,
previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed
input signals to some reference timing signal.
Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when
a latch is enabled it becomes transparent, while a flip flop's output only changes on a single type
(positive going or negative going) of clock edge.
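The level-sensitive versus edge-sensitive distinction can be sketched like this (a behavioral model, class names hypothetical):

```python
class DLatch:
    """Level-sensitive: transparent while enable is high."""
    def __init__(self):
        self.q = 0

    def step(self, d, enable):
        if enable:          # transparent: output follows D
            self.q = d
        return self.q

class DFlipFlop:
    """Edge-sensitive: captures D only on a rising clock edge."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def step(self, d, clk):
        if clk and not self.prev_clk:   # 0 -> 1 transition only
            self.q = d
        self.prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
print(latch.step(1, enable=1), latch.step(0, enable=1))   # 1 0 (follows D)
print(ff.step(1, clk=1), ff.step(0, clk=1))               # 1 1 (edge only)
```

The latch tracks D for as long as it is enabled, while the flip-flop ignores D until the next rising edge, matching the transparency distinction described above.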
NOR AS AND
An AND gate gives a 1 output when both inputs are 1; a NOR gate gives a 1 output only when both
inputs are 0. Therefore, an AND gate is made by inverting the inputs to a NOR gate.
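This construction can be checked in a few lines (a truth-table sketch, function names hypothetical): inverting an input is done by tying both NOR inputs together, and NOR-ing the two inverted inputs yields AND by De Morgan's law:

```python
def nor(a, b):
    """Two-input NOR gate."""
    return int(not (a or b))

def and_from_nor(a, b):
    # NOT x == NOR(x, x); then AND(a, b) == NOR(NOT a, NOT b).
    return nor(nor(a, a), nor(b, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_from_nor(a, b))   # 1 only when a = b = 1
```

This uses three NOR gates in total, which is why NOR (like NAND) is called a universal gate.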
A logic gate is an elementary building block of a digital circuit. Most logic gates have two inputs
and one output. At any given moment, every terminal is in one of the two binary conditions low (0)
or high (1), represented by different voltage levels.
22. Function of diode. Types of diode.
Light Emitting Diode (LED): It is one of the most popular types of diodes; when this diode permits the transfer of electric current between the electrodes, light is produced. In most of these diodes, the light (infrared) cannot be seen, as it is at frequencies that do not permit visibility.
When the diode is switched on or forward biased, the electrons recombine with the holes and
release energy in the form of light (electroluminescence). The color of light depends on the energy
gap of the semiconductor.
Avalanche Diode: This type of diode operates in reverse bias, and uses the avalanche effect for
its operation. The avalanche breakdown takes place across the entire PN junction, when the voltage
drop is constant and is independent of current. Generally, the avalanche diode is used for photo-
detection, wherein high levels of sensitivity can be obtained by the avalanche process.
Laser Diode: This type of diode is different from the LED type, as it produces coherent light.
These diodes find their application in DVD and CD drives, laser pointers, etc. Laser diodes are
more expensive than LEDs. However, they are cheaper than other forms of laser generators.
Moreover, these laser diodes have limited life.
Schottky Diodes: These diodes feature lower forward voltage drop as compared to the ordinary
silicon PN junction diodes. The voltage drop may be somewhere between 0.15 and 0.4 volts at low
currents, as compared to the 0.6 volts for a silicon diode. In order to achieve this performance,
these diodes are constructed differently from normal diodes, with metal to semiconductor contact.
Schottky diodes are used in RF applications, rectifier applications and clamping diodes.
Zener diode: This type of diode provides a stable reference voltage, thus is a very useful type and
is used in vast quantities. The diode runs in reverse bias, and breaks down on the arrival of a
certain voltage. A stable voltage is produced, if the current through the resistor is limited. In power
supplies, these diodes are widely used to provide a reference voltage.
Photodiode: Photodiodes are used to detect light and feature wide, transparent junctions.
Generally, these diodes operate in reverse bias, wherein even small amounts of current flow,
resulting from the light, can be detected with ease. Photodiodes can also be used to generate
electricity, used as solar cells and even in photometry.
Varicap Diode or Varactor Diode: This type of diode features a reverse bias placed upon it, which varies the width of the depletion layer as per the voltage placed across the diode. The diode acts as a capacitor: the conducting regions form the capacitor plates and the depletion region acts as the insulating dielectric. By altering the bias on the diode, the width of the depletion region changes, thereby varying the capacitance.
Rectifier Diode: These diodes are used to rectify alternating power inputs in power supplies.
They can rectify current levels that range from an amp upwards. If low voltage drops are required, then Schottky diodes can be used; however, these diodes are generally PN junction diodes.
Gunn diode - This diode is made of materials like GaAs or InP that exhibit a negative differential
resistance region.
Crystal diode - These are a type of point-contact diode, also called cat's-whisker diodes. This diode comprises a thin sharpened metal wire pressed against a semiconducting crystal. The metal wire is the anode and the semiconducting crystal is the cathode. These diodes are obsolete.
Avalanche diode - This diode conducts in the reverse bias condition, where the reverse bias voltage applied across the p-n junction creates a wave of ionization leading to the flow of a large current. These diodes are designed to break down at a specific reverse voltage in order to avoid any damage.
Silicon controlled rectifier - As the name implies, this diode can be controlled or triggered into the ON condition by the application of a small voltage. It belongs to the family of thyristors and is used in various fields such as DC motor control, generator field regulation, lighting system control and variable-frequency drives. It is a three-terminal device with an anode, a cathode and a third control lead, or gate.
Vacuum diodes - This diode is two electrode vacuum tube which can tolerate high inverse
voltages.
Ideally, in the direction indicated by the arrowhead, a diode must behave as a short circuit, and in the direction opposite to the arrowhead it must behave as an open circuit. Diodes are designed to meet these ideal characteristics in theory, but they are not achieved in practice. So the practical diode characteristics are only close to the desired ones.
Diodes are used widely in the electronics industry, right from electronics design to production, to
repair. Besides the above mentioned types of diodes, the other diodes are PIN diode, point contact
diode, signal diode, step recovery diode, tunnel diode and gold doped diodes. The type of diode to
transfer electric current depends on the type and amount of transmission, as well as on specific
applications.
A PIN diode is a diode with a wide, undoped intrinsic semiconductor region between a p-type
semiconductor and an n-type semiconductor region. The p-type and n-type regions are typically
heavily doped because they are used for ohmic contacts.
The wide intrinsic region is in contrast to an ordinary PN diode. The wide intrinsic region makes
the PIN diode an inferior rectifier (one typical function of a diode), but it makes the PIN diode
suitable for attenuators, fast switches, photodetectors, and high voltage power electronics
applications.
24. TRIAC. SCR.
TRIACs are part of the thyristor family and are closely related to silicon-controlled rectifiers (SCR).
However, unlike SCRs, which are unidirectional devices (that is, they can conduct current only in
one direction), TRIACs are bidirectional and so allow current in either direction. Another
difference from SCRs is that TRIAC current can be enabled by either a positive or negative current
applied to its gate electrode, whereas SCRs can be triggered only by current into the gate. To create
a triggering current, a positive or negative voltage has to be applied to the gate with respect to the
MT1 terminal (otherwise known as A1).
Once triggered, the device continues to conduct until the current drops below a certain threshold
called the holding current.
The bidirectionality makes TRIACs very convenient switches for alternating current circuits, also
allowing them to control very large power flows with milliampere-scale gate currents. In addition,
applying a trigger pulse at a controlled phase angle in an A.C. cycle allows control of the percentage
of current that flows through the TRIAC to the load (phase control), which is commonly used, for
example, in controlling the speed of low-power induction motors, in dimming lamps, and in
controlling A.C. heating resistors.
Some sources define silicon controlled rectifiers and thyristors as synonymous,[2] other sources
define silicon controlled rectifiers as a subset of a larger family of devices with at least four layers of
alternating N and P-type material, this entire family being referred to as thyristors.[3][4] According
to Bill Gutzwiller, the terms "SCR" and "Controlled Rectifier" were earlier, and "Thyristor" was
applied later as usage of the device spread internationally.[5]
SCRs are unidirectional devices (i.e. can conduct current only in one direction) as opposed to
TRIACs which are bidirectional (i.e. current can flow through them in either direction). SCRs can
be triggered normally only by currents going into the gate as opposed to TRIACs which can be
triggered normally by either a positive or a negative current applied to its gate electrode.
SCR schematic symbol
Digital-to-analog conversion is a process in which signals having a few (usually two) defined levels
or states (digital) are converted into signals having a theoretically infinite number of states
(analog). A common example is the processing, by a modem, of computer data into audio-frequency
(AF) tones that can be transmitted over a twisted pair telephone line. The circuit that performs this
function is a digital-to-analog converter (DAC).
Binary digital impulses, all by themselves, appear as long strings of ones and zeros, and have no
apparent meaning to a human observer. But when a DAC is used to decode the binary digital
signals, meaningful output appears. This might be a voice, a picture, a musical tune, or mechanical
motion.
Both the DAC and the ADC are of significance in some applications of digital signal processing. The intelligibility or fidelity of an analog signal can often be improved by converting the analog input to digital form using an ADC, then clarifying the digital signal, and finally converting the "cleaned-up" digital impulses back to analog form using a DAC.
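The mapping an ideal DAC performs can be sketched numerically (a simplified model; the function name and reference voltage are assumptions for illustration):

```python
def dac(bits, v_ref=5.0):
    """Ideal n-bit DAC: map a binary word (MSB first) onto 0..v_ref."""
    code = sum(b << (len(bits) - 1 - i) for i, b in enumerate(bits))
    return v_ref * code / (2 ** len(bits) - 1)

print(dac([0, 0, 0, 0]))              # 0.0
print(dac([1, 1, 1, 1]))              # 5.0
print(round(dac([1, 0, 0, 0]), 3))    # 2.667  (code 8 out of 15)
```

A 4-bit word gives only 16 distinct output levels; the "theoretically infinite number of states" of a true analog signal is approached by increasing the bit width.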
We saw in the last tutorial that the Open Loop Gain, ( Avo ) of an operational amplifier can be
very high, as much as 1,000,000 (120dB) or more. However, this very high gain is of no real use to
us as it makes the amplifier both unstable and hard to control as the smallest of input signals, just a
few micro-volts, (μV) would be enough to cause the output voltage to saturate and swing towards
one or the other of the voltage supply rails losing complete control of the output.
As the open-loop DC gain of an operational amplifier is extremely high, we can afford to lose some of this high gain by connecting a suitable resistor across the amplifier from the output terminal back to the inverting input terminal to both reduce and control the overall gain of the amplifier. This then produces an effect known commonly as Negative Feedback, and thus produces a very stable operational-amplifier-based system.
Negative Feedback is the process of “feeding back” a fraction of the output signal back to the
input, but to make the feedback negative, we must feed it back to the negative or “inverting input”
terminal of the op-amp using an external Feedback Resistor called Rƒ. This feedback connection
between the output and the inverting input terminal forces the differential input voltage towards
zero.
This effect produces a closed loop circuit to the amplifier resulting in the gain of the amplifier now being called its Closed-loop Gain. Then a closed-loop inverting amplifier uses negative feedback to accurately control the overall gain of the amplifier, but at the cost of a reduction in the amplifier's gain.
This negative feedback results in the inverting input terminal having a different signal on it than
the actual input voltage as it will be the sum of the input voltage plus the negative feedback voltage
giving it the label or term of a Summing Point. We must therefore separate the real input signal
from the inverting input by using an Input Resistor, Rin.
As we are not using the positive non-inverting input this is connected to a common ground or zero
voltage terminal as shown below, but the effect of this closed loop feedback circuit results in the
voltage potential at the inverting input being equal to that at the non-inverting input producing a
Virtual Earth summing point because it will be at the same potential as the grounded reference
input. In other words, the op-amp becomes a “differential amplifier”.
In this Inverting Amplifier circuit the operational amplifier is connected with feedback to
produce a closed loop operation. When dealing with operational amplifiers there are two very
important rules to remember about inverting amplifiers, these are: “No current flows into the input
terminal” and that “V1 always equals V2”. However, in real world op-amp circuits both of these
rules are slightly broken.
This is because the junction of the input and feedback signal ( X ) is at the same potential as the
positive ( + ) input which is at zero volts or ground then, the junction is a “Virtual Earth”.
Because of this virtual earth node the input resistance of the amplifier is equal to the value of the
input resistor, Rin and the closed loop gain of the inverting amplifier can be set by the ratio of the
two external resistors.
We said above that there are two very important rules to remember about Inverting Amplifiers, or any operational amplifier for that matter, and these are: no current flows into the input terminals, and V1 always equals V2 (the differential input voltage is zero). Then by using these two rules we can derive the equation for calculating the closed-loop gain of an inverting amplifier, using first principles.
Linear Output
The negative sign in the equation indicates an inversion of the output signal with respect to the input, as it is 180° out of phase. This is due to the feedback being negative in value.
The equation for the output voltage Vout also shows that the circuit is linear in nature for a fixed
amplifier gain as Vout = Vin x Gain. This property can be very useful for converting a smaller
sensor signal to a much larger voltage.
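The closed-loop relationship can be sketched numerically (an idealized model that assumes the two op-amp rules hold exactly; function names are for illustration):

```python
def inverting_gain(r_f, r_in):
    """Ideal closed-loop gain of the inverting amplifier: -Rf / Rin."""
    return -r_f / r_in

def vout(v_in, r_f, r_in):
    """Vout = -(Rf / Rin) * Vin for the ideal inverting configuration."""
    return inverting_gain(r_f, r_in) * v_in

# Rf = 100 kOhm, Rin = 10 kOhm  ->  gain = -10, output inverted
print(inverting_gain(100e3, 10e3))   # -10.0
print(vout(0.2, 100e3, 10e3))        # -2.0
```

The gain is set purely by the resistor ratio, which is the practical payoff of trading away open-loop gain for negative feedback.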
27. JFETS
The junction gate field-effect transistor (JFET or JUGFET) is the simplest type of field-effect transistor. JFETs are three-terminal semiconductor devices that can be used as electronically controlled switches, amplifiers, or voltage-controlled resistors.
Unlike bipolar transistors, JFETs are exclusively voltage-controlled in that they do not need a
biasing current. Electric charge flows through a semiconducting channel between source and drain
terminals. By applying a reverse bias voltage to a gate terminal, the channel is "pinched", so that
the electric current is impeded or switched off completely. A JFET is usually on when there is no
potential difference between its gate and source terminals. If a potential difference of the proper
polarity is applied between its gate and source terminals, the JFET will be more resistive to current
flow, which means less current would flow in the channel between the source and drain terminals.
Thus, JFETs are sometimes referred to as depletion-mode devices.
JFETs can have an n-type or p-type channel. In the n-type, if the voltage applied to the gate is less
than that applied to the source, the current will be reduced (similarly in the p-type, if the voltage
applied to the gate is greater than that applied to the source). A JFET has a large input impedance
(sometimes on the order of 10^10 ohms), which means that it has a negligible effect on external
components or circuits connected to its gate.
28. CMOS
Two important characteristics of CMOS devices are high noise immunity and low static power
consumption. Since one transistor of the pair is always off, the series combination draws significant
power only momentarily during switching between on and off states. Consequently, CMOS devices
do not produce as much waste heat as other forms of logic, for example transistor–transistor logic
(TTL) or NMOS logic, which normally have some standing current even when not changing state.
CMOS also allows a high density of logic functions on a chip. It was primarily for this reason that
CMOS became the most used technology to be implemented in VLSI chips.
Although the MOSFET is a four-terminal device with source (S), gate (G), drain (D), and body (B)
terminals,[1] the body (or substrate) of the MOSFET is often connected to the source terminal,
making it a three-terminal device like other field-effect transistors. Because these two terminals are
normally connected to each other (short-circuited) internally, only three terminals appear in
electrical diagrams. The MOSFET is by far the most common transistor in both digital and analog
circuits, though the bipolar junction transistor was at one time much more common.
The main advantage of a MOSFET transistor over a regular transistor is that it requires very little
current to turn on (less than 1mA), while delivering a much higher current to a load (10 to 50A or
more). However, the MOSFET requires a higher gate voltage (3-4V) to turn on.[2]
In enhancement mode MOSFETs, a voltage drop across the oxide induces a conducting channel
between the source and drain contacts via the field effect. The term "enhancement mode" refers to
the increase of conductivity with increase in oxide field that adds carriers to the channel, also
referred to as the inversion layer. The channel can contain electrons (called an nMOSFET or
nMOS), or holes (called a pMOSFET or pMOS), opposite in type to the substrate, so nMOS is made
with a p-type substrate, and pMOS with an n-type substrate (see article on semiconductor devices).
In the less common depletion mode MOSFET, detailed later on, the channel consists of carriers in a
surface impurity layer of opposite type to the substrate, and conductivity is decreased by
application of a field that depletes carriers from this surface layer.[3]
The "metal" in the name MOSFET is now often a misnomer because the previously metal gate
material is now often a layer of polysilicon (polycrystalline silicon). Aluminium had been the gate
material until the mid-1970s, when polysilicon became dominant, due to its capability to form self-
aligned gates. Metallic gates are regaining popularity, since it is difficult to increase the speed of
operation of transistors without metal gates.
Likewise, the "oxide" in the name can be a misnomer, as different dielectric materials are used with
the aim of obtaining strong channels with smaller applied voltages.
30. VCO
AM was the earliest modulation method used to transmit voice by radio. It was developed during
the first two decades of the 20th century beginning with Reginald Fessenden's radiotelephone
experiments in 1900. It remains in use today in many forms of communication; for example it is
used in portable two-way radios, VHF aircraft radio and in computer modems. "AM" is
often used to refer to medium-wave AM radio broadcasting.
The behaviour of state machines can be observed in many devices in modern society which perform
a predetermined sequence of actions depending on a sequence of events with which they are
presented. Simple examples are vending machines which dispense products when the proper
combination of coins is deposited, elevators which drop riders off at upper floors before going
down, traffic lights which change sequence when cars are waiting, and combination locks which
require the input of combination numbers in the proper order.
Transformers:
1. Primary and secondary transformers. Efficiency and comparison.
A single phase voltage transformer basically consists of two electrical coils of wire, one called the
“Primary Winding” and another called the “Secondary Winding”. For this tutorial we will define the
“primary” side of the transformer as the side that usually takes power, and the “secondary” as the
side that usually delivers power. In a single-phase voltage transformer the primary is usually the
side with the higher voltage.
These two coils are not in electrical contact with each other but are instead wrapped together
around a common closed magnetic iron circuit called the “core”. This soft iron core is not solid but
made up of individual laminations connected together to help reduce the core’s losses.
The two coil windings are electrically isolated from each other but are magnetically linked through
the common core allowing electrical power to be transferred from one coil to the other. When an
electric current passes through the primary winding, a magnetic field is developed which induces a
voltage into the secondary winding.
In other words, for a transformer there is no direct electrical connection between the two coil
windings, thereby giving it the name also of an Isolation Transformer. Generally, the primary
winding of a transformer is connected to the input voltage supply and converts or transforms the
electrical power into a magnetic field. While the job of the secondary winding is to convert this
alternating magnetic field into electrical power, producing the required output voltage.
Where:
VP - is the Primary Voltage
VS - is the Secondary Voltage
NP - is the Number of Primary Windings
NS - is the Number of Secondary Windings
Φ (phi) - is the Flux Linkage
Notice that the two coil windings are not electrically connected but are only linked magnetically. A
single-phase transformer can operate to either increase or decrease the voltage applied to the
primary winding. When a transformer is used to “increase” the voltage on its secondary winding
with respect to the primary, it is called a Step-up transformer. When it is used to “decrease” the
voltage on the secondary winding with respect to the primary it is called a Step-down
transformer.
However, a third condition exists in which a transformer produces the same voltage on its
secondary as is applied to its primary winding. In other words, its output is identical with respect to
voltage, current and power transferred. This type of transformer is called an “Impedance
Transformer” and is mainly used for impedance matching or the isolation of adjoining electrical
circuits.
The difference in voltage between the primary and the secondary windings is achieved by changing
the number of coil turns in the primary winding ( NP ) compared to the number of coil turns on the
secondary winding ( NS ).
As the transformer is a linear device, a ratio exists between the number of turns of the primary
coil and the number of turns of the secondary coil. This ratio, called the ratio of
transformation, is more commonly known as the transformer's "turns ratio" ( TR ). This turns ratio
value dictates the operation of the transformer and the corresponding voltage available on the
secondary winding.
It is necessary to know the ratio of the number of turns of wire on the primary winding compared
to the secondary winding. The turns ratio, which has no units, compares the two windings in order
and is written with a colon, such as 3:1 (3-to-1). This means in this example, that if there are 3 volts
on the primary winding there will be 1 volt on the secondary winding, 3-to-1. Then we can see that
if the ratio between the number of turns changes the resulting voltages must also change by the
same ratio, and this is true.
A transformer is all about “ratios”, and the turns ratio of a given transformer will be the same as its
voltage ratio. In other words for a transformer: “turns ratio = voltage ratio”. The actual number of
turns of wire on any winding is generally not important, just the turns ratio, and this relationship is
given as:
TR = NP / NS = VP / VS
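As a quick numerical sketch of the turns-ratio relationship (VP/VS = NP/NS for an ideal transformer), the winding counts below are made-up illustrative values:

```python
def secondary_voltage(vp, n_primary, n_secondary):
    """Ideal transformer: VS = VP * (NS / NP)."""
    return vp * n_secondary / n_primary

# A 3:1 step-down transformer with 3 volts on the primary
# gives 1 volt on the secondary, matching the 3-to-1 example.
print(secondary_voltage(3.0, 300, 100))   # 1.0
```

The same function works for step-up ratios simply by making NS larger than NP.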
Transformer Action
We have seen that the number of coil turns on the secondary winding compared to the primary
winding, the turns ratio, affects the amount of voltage available from the secondary coil. But if the
two windings are electrically isolated from each other, how is this secondary voltage produced?
We have said previously that a transformer basically consists of two coils wound around a common
soft iron core. When an alternating voltage ( VP ) is applied to the primary coil, current flows
through the coil which in turn sets up a magnetic field around itself. This field links the
secondary coil through mutual inductance, in accordance with Faraday's Law of electromagnetic
induction. The strength of the magnetic field builds up as the current rises from zero to its
maximum value, the rate of change of flux being dΦ/dt.
As the magnetic lines of force setup by this electromagnet expand outward from the coil the soft
iron core forms a path for and concentrates the magnetic flux. This magnetic flux links the turns of
both windings as it increases and decreases in opposite directions under the influence of the AC
supply.
However, the strength of the magnetic field induced into the soft iron core depends upon the
amount of current and the number of turns in the winding. When current is reduced, the magnetic
field strength reduces.
When the magnetic lines of flux flow around the core, they pass through the turns of the secondary
winding, causing a voltage to be induced into the secondary coil. The amount of voltage induced
will be determined by: N.dΦ/dt (Faraday’s Law), where N is the number of coil turns. Also this
induced voltage has the same frequency as the primary winding voltage.
Then we can see that the same voltage is induced in each coil turn of both windings because the
same magnetic flux links the turns of both the windings together. As a result, the total induced
voltage in each winding is directly proportional to the number of turns in that winding. However,
the peak amplitude of the output voltage available on the secondary winding will be reduced if the
magnetic losses of the core are high.
If we want the primary coil to produce a stronger magnetic field to overcome the core's magnetic
losses, we can either send a larger current through the coil, or keep the same current flowing, and
instead increase the number of coil turns ( NP ) of the winding. The product of amperes times turns
is called the “ampere-turns”, which determines the magnetising force of the coil.
Assume we have a transformer with a single turn in the primary and only one turn in the
secondary. If one volt is applied to the one turn of the primary coil, then (assuming no losses)
enough current must flow and enough magnetic flux be generated to induce one volt in the single
turn of the secondary. That is, each winding supports the same number of volts per turn.
As the magnetic flux varies sinusoidally, Φ = Φmax sinωt, then the basic relationship between
induced emf, ( E ) in a coil winding of N turns is given by:
E = 4.44 ƒ N Φmax
Where:
ƒ - is the flux frequency in Hertz, = ω/2π
Ν - is the number of coil turns
Φmax - is the maximum flux in webers
This is known as the Transformer EMF Equation. For the primary winding emf, N will be the
number of primary turns, ( NP ) and for the secondary winding emf, N will be the number of
secondary turns, ( NS ).
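The transformer EMF relation (in rms terms, E = 4.44 ƒ N Φmax, where 4.44 is 2π/√2 rounded) can be evaluated numerically; the supply frequency, turns count and flux below are arbitrary example values:

```python
def transformer_emf(freq_hz, turns, flux_max_wb):
    """RMS EMF of one winding: E = 4.44 * f * N * Phi_max,
    where 4.44 is 2*pi/sqrt(2) rounded to two decimals."""
    return 4.44 * freq_hz * turns * flux_max_wb

# 50 Hz supply, 200 turns, 2 mWb peak flux -> roughly 88.8 V
print(round(transformer_emf(50, 200, 0.002), 1))   # 88.8
```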
Also please note that as transformers require an alternating magnetic flux to operate correctly,
transformers cannot therefore be used to transform or supply DC voltages or currents, since the
magnetic field must be changing to induce a voltage in the secondary winding. In other words,
Transformers DO NOT Operate on DC Voltages, ONLY AC.
If a transformer's primary winding were connected to a DC supply, the inductive reactance of the
winding would be zero as DC has no frequency, so the effective impedance of the winding will
therefore be very low and equal only to the resistance of the copper used. Thus the winding will
draw a very high current from the DC supply causing it to overheat and eventually burn out,
because as we know I = V/R.
A transformer does not require any moving parts to transfer energy. This means that there are no
friction or windage losses associated with other electrical machines. However, transformers do
suffer from other types of losses called “copper losses” and “iron losses” but generally these are
quite small.
General Question:
Why do we use alternating ac voltages and currents in our homes?
A: One of the main reasons we use alternating (AC) voltages and currents in our homes and
workplaces is that AC can be easily generated at a convenient voltage, transformed into a much
higher voltage and then distributed around the country over very long distances using a national
grid of pylons and cables. The reason for transforming the voltage is that higher distribution
voltages imply lower currents and therefore lower losses along the grid.
These high AC transmission voltages and currents are then reduced to a much lower, safer and
usable voltage level where they can be used to supply electrical equipment in our homes and
workplaces, and all this is possible thanks to the basic voltage transformer.
3. Computer Engineering
3.1 Operating System
Operating systems exist for two main purposes. One is that it is designed to make sure a
computer system performs well by managing its computational activities. Another is that it
provides users an environment for the development and execution of programs.
Definition
An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs. In technical terms, it is a software
which manages hardware. An operating System controls the allocation of resources and services
such as memory, processors, devices and information.
Memory Management
Processor Management
Device Management
File Management
Security
Control over system performance
Job accounting
Error detecting aids
Types of Operating System
Operating systems have existed since the very first computer generation and keep evolving over
time. Following are a few of the important and most commonly used types of operating system.
Batch operating system
The user cannot interact with the computer directly. The computer reads jobs sequentially from a
pool of jobs (called a batch) and executes the program for each job.
SMP
SMP is short for Symmetric Multi-Processing, and is the most common type of multiple-
processor systems. In this system, each processor runs an identical copy of the operating system,
and these copies communicate with one another as needed.
Kernel
Kernel is the core of every operating system. It connects applications to the actual processing of
data. It also manages all communications between software and hardware components to
ensure usability and reliability. It provides the required abstraction to hide low level hardware
details to system or application programs.
BIOS
BIOS is an acronym for Basic Input/Output System. It is the boot firmware program on a PC,
and controls the computer from the time you start it up until the operating system takes over.
When you turn on a PC, the BIOS first conducts a basic hardware check, called a Power-On Self-
Test (POST), to determine whether all of the attachments are present and working. Then it loads
the operating system into your computer's random-access memory, or RAM.
The BIOS also manages data flow between the computer's operating system and attached
devices such as the hard disk, video card, keyboard, mouse, and printer.
Caching
Caching is the practice of keeping a limited amount of data in a region of fast memory. A
cache memory is much more efficient because of its high access speed.
Spooling
Spooling is normally associated with printing. When different applications want to send an
output to the printer at the same time, spooling takes all of these print jobs into a disk file and
queues them accordingly to the printer.
Assembler
An assembler acts as a translator for low-level language. Assembly code, written using
mnemonic commands, is translated by the assembler into machine language.
Interrupts
Interrupts are part of a hardware mechanism that sends a notification to the CPU when it wants
to gain access to a particular resource. An interrupt handler receives this interrupt signal and
“tells” the processor to take action based on the interrupt request.
GUI
GUI is short for Graphical User Interface. It provides users with an interface wherein actions
can be performed by interacting with icons and graphical symbols. People find it easier to
interact with the computer when in a GUI especially when using the mouse. Instead of having to
remember and type commands, users just click on buttons to perform a process.
Plumbing / Piping
It is the process of using the output of one program as an input to another. For example, instead
of sending the listing of a folder or drive to the main screen, it can be piped and sent to a file, or
sent to the printer to produce a hard copy.
Process
A program in execution is called a process, or it may also be called a unit of work. A process
needs system resources such as CPU time, memory, files, and I/O devices to accomplish its
task. Each process is represented in the operating system by a process control block (PCB),
also called a task control block.
Process State
New – means a process is being created
Running – means instructions are being executed
Waiting – means a process is waiting for certain conditions or events to occur
Ready – means a process is waiting to be assigned to a processor
Terminated – means a process is done executing
Thread
A thread is a line of execution within a program. A thread, sometimes called a lightweight
process, is the basic unit of CPU utilization; it comprises a thread ID, a program counter, a
register set, and a stack.
1. User thread
2. Kernel thread
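As an illustrative sketch (Python's threading module, not part of the original text), threads created within one process share that process's memory, which is what makes them "lightweight":

```python
import threading

results = []

def worker(n):
    # Every thread runs this function concurrently; all of them
    # append to the same `results` list because they share one
    # address space.
    results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for every thread to finish

print(sorted(results))   # [0, 1, 4, 9]
```

Sorting is needed only because thread completion order is not guaranteed.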
Socket
A socket provides a connection between two applications. Each endpoint of a communication is
a socket.
Context Switching
Transferring control from one process to another requires saving the state of the old
process and loading the saved state of the new process. This task is known as context switching.
The time taken to switch from one process to another is pure overhead, because the system
does no useful work while switching. One way to reduce this cost is to use threads wherever
possible.
Process Scheduling
Definition
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Schedulers
A Scheduler is special system software which handles process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types
Long Term Scheduler- Long term schedulers are the job schedulers that select processes
from the job queue and load them into memory for execution.
Short Term Scheduler- The short term schedulers are the CPU schedulers that select a
process from the ready queue and allocate the CPU to it.
Medium Term Scheduler- Medium term scheduling is part of the swapping function.
This relates to processes that are in a blocked or suspended state. They are swapped out
of real-memory until they are ready to execute.
Scheduling Queues
1. Job queue- When a process enters the system it is placed in the job queue.
2. Ready queue- The processes that are residing in the main memory and are ready and
waiting to execute are kept on a list called the ready queue.
3. Device queue- A list of processes waiting for a particular I/O device is called device
queue.
Scheduling algorithms
We'll discuss four major scheduling algorithms here which are following
• First Come First Serve (FCFS) Scheduling- The process that requests the CPU first is
allocated the CPU first. Implementation is managed by a FIFO queue.
• Shortest-Job-First (SJF) Scheduling- The process that has the smallest next CPU burst is
allocated the CPU first.
• Priority Scheduling- A priority is associated with each process. The CPU is allocated to
the process with the highest priority.
• Round Robin (RR) Scheduling- It is aimed primarily at time-sharing systems. The ready
queue is treated as a circular queue: the CPU scheduler goes around the queue, allocating the CPU
to each process for a time interval (quantum) of typically 10 to 100 milliseconds.
• Multilevel Queue Scheduling- It partitions the ready queue into several separate queues.
Each queue has its own scheduling algorithm. The processes are permanently assigned to one
queue based on some property of the process.
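The FCFS policy above can be sketched in a few lines; the burst times are made-up example values, and each process's waiting time is simply the total burst time of everything ahead of it in the FIFO queue:

```python
def fcfs_waiting_times(burst_times):
    """First Come First Serve: each process waits for the total
    burst time of all processes ahead of it in the queue."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Bursts of 24, 3 and 3 time units, all arriving at time 0.
waits = fcfs_waiting_times([24, 3, 3])
print(waits)                     # [0, 24, 27]
print(sum(waits) / len(waits))   # average waiting time: 17.0
```

Running the short jobs first (as SJF would) drops the average waiting time to 3.0, which is why SJF is provably optimal for this metric.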
Memory Management
Memory management is the functionality of an operating system which handles or manages
primary memory. Memory management keeps track of each memory location, whether it is
allocated to some process or free. It decides how much memory is to be allocated to each
process, tracks whenever memory is freed or unallocated, and updates the status
accordingly.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory
to a backing store, and then brought back into memory for continued execution. It allows more
processes to be run than can fit into memory at one time.
Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little
pieces. Eventually processes cannot be allocated to these memory blocks because the blocks are
too small, so the blocks remain unused. This problem is known as fragmentation.
Paging
External fragmentation is avoided by using paging technique. Paging is a technique in which
physical memory is broken into blocks of the same size called pages (size is power of 2, between
512 bytes and 8192 bytes). When a process is to be executed, its corresponding pages are loaded
into any available memory frames.
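Because the page size is a power of 2, splitting a logical address into a (page number, offset) pair is simple integer arithmetic; the 4 KB page size below is just an example value:

```python
PAGE_SIZE = 4096  # 4 KB, a power of 2 as the text requires

def split_address(logical_addr):
    """Return (page_number, offset) for a logical address."""
    return logical_addr // PAGE_SIZE, logical_addr % PAGE_SIZE

print(split_address(10000))   # (2, 1808)
```

In hardware this is just taking the high and low bits of the address, which is exactly why page sizes are powers of 2.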
Belady's Anomaly
Belady's anomaly occurs with the FIFO page replacement policy. For certain page reference
strings, increasing the number of available page frames actually increases the number of page
faults, contrary to the intuition that more frames should always mean fewer faults. This anomaly
does not occur with the LRU (least recently used) replacement algorithm.
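A short FIFO simulation demonstrates the anomaly on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5: with 3 frames it produces 9 faults, but with 4 frames it produces 10.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO page replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:       # frames full:
                frames.discard(order.popleft())  # evict oldest page
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10  <- more frames, more faults
```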
Virtual Memory
Virtual memory is a technique that allows the execution of processes which are not completely
available in memory. The main visible advantage of this scheme is that programs can be larger
than physical memory.
Demand Paging: Demand paging is the paging policy that a page is not read into memory until it
is requested, that is, until there is a page fault on the page.
Page fault interrupt: A page fault interrupt occurs when a memory reference is made to a page
that is not in memory. The present bit in the page table entry will be found to be off by the
virtual memory hardware and it will signal an interrupt.
Thrashing: The occurrence of many page faults in a short period of time is called "page thrashing".
It happens when the system spends more time paging than executing.
Cache Memory
Cache memory is random access memory (RAM) that a computer microprocessor can access
more quickly than it can access regular RAM. As the microprocessor processes data, it looks first
in the cache memory; if it finds the data there (from a previous read), it does not
have to perform the more time-consuming read from main memory.
Process Synchronization
A situation, where several processes access and manipulate the same data concurrently and the
outcome of the execution depends on the particular order in which the access takes place, is
called race condition. To guard against the race condition we need to ensure that only one
process at a time can be manipulating the same data. The technique we use for this is called
process synchronization.
Race situation
When two processes try to access and modify the same variable simultaneously, and the result
depends on the order of access, we call it a race condition.
Bounded-Buffer Problem
Here we assume that a pool consists of n buffers, each capable of holding one item. The mutex
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1. The empty and full semaphores count the number of empty and full buffers, respectively.
Empty is initialized to n, and full is initialized to 0.
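The semaphore setup just described (mutex = 1, empty = n, full = 0) can be sketched with Python's threading primitives; the buffer size and item count here are arbitrary example values:

```python
import threading
from collections import deque

N = 4                                  # number of buffer slots
buffer = deque()
mutex = threading.Semaphore(1)         # mutual exclusion, init 1
empty = threading.Semaphore(N)         # empty slots, init n
full = threading.Semaphore(0)          # full slots, init 0
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # wait for an empty slot
        with mutex:
            buffer.append(item)
        full.release()                 # signal one more full slot

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait for a full slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal one more empty slot

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, 2, ..., 9], in FIFO order
```

The producer blocks when all N slots are full and the consumer blocks when all are empty, so neither ever over- or under-runs the buffer.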
Reader-Writer Problem
The Readers and Writers problem is useful for modeling processes which are competing for a
limited shared resource. A practical example of a Readers and Writers problem is an airline
reservation system consisting of a huge data base with many processes that read and write the
data.
Reading information from the data base will not cause a problem since no data is changed. The
problem lies in writing information to the data base. If no constraints are put on access to the
data base, data may change at any moment. By the time a reading process displays the result of a
request for information to the user, the actual data in the data base may have changed. What if,
for instance, a process reads the number of available seats on a flight, finds a value of one, and
reports it to the customer? Before the customer has a chance to make their reservation, another
process makes a reservation for another customer, changing the number of available seats to
zero.
We can permit a number of readers to read the same data at the same time, but a writer
must be given exclusive access. There are two solutions to this problem:
1. No reader will be kept waiting unless a writer has already obtained permission to use the
shared object. In other words, no reader should wait for other readers to complete simply
because a writer is waiting.
2. Once a writer is ready, that writer performs its write as soon as possible. In other words,
if a writer is waiting to access the object, no new readers may start reading.
Deadlock
Consider the following situation: Suppose that two processes (A and B) are running on the same
machine, and both processes require the use of the local printer and tape drive. Process A may
have been granted access to the machine's printer but be waiting for the tape drive, while
process B has access to the tape drive but is waiting for the printer. Such a condition is known as
deadlock. Since both processes hold a resource the other process needs, the processes will wait
indefinitely for the resources to be released and neither will finish executing.
In order for deadlock to be possible, the following four conditions must be present in the
computer:
1. Mutual exclusion: Only one process may use a resource at a time.
2. Hold-and-wait: A process may hold allocated resources while awaiting assignment of others.
3. No pre-emption: No resource can be forcibly removed from a process holding it.
4. Circular wait: A closed chain of processes exists such that each process p[i] waits for a
resource held by the next process p[i+1], and the last process in the chain waits for a resource
held by the first.
Bankers Algorithm
Banker’s algorithm is one form of deadlock-avoidance in a system. For the Banker's algorithm to
work, it needs to know three things:
• How many of each resource each process could possibly request[MAX]
• How many of each resource each process is currently holding[ALLOCATED]
• How many of each resource the system currently has available[AVAILABLE]
The Banker's Algorithm derives its name from the fact that this algorithm could be used in a
banking system to ensure that the bank does not run out of resources, because the bank would
never allocate its money in such a way that it can no longer satisfy the needs of all its customers.
By using the Banker's algorithm, the bank ensures that when customers request money the bank
never leaves a safe state. If the customer's request does not cause the bank to leave a safe state,
the cash will be allocated; otherwise the customer must wait until some other customer deposits
enough.
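The safety test at the heart of the Banker's algorithm can be sketched using the three tables named above (MAX, ALLOCATED, AVAILABLE); the matrices below are illustrative textbook-style values, not from the original text:

```python
def is_safe(available, max_need, allocated):
    """Banker's safety test: True if every process can finish
    in some order starting from this state."""
    work = list(available)
    # NEED = MAX - ALLOCATED, per process and resource type.
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocated)]
    finished = [False] * len(max_need)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion with what is
                # available, then releases everything it holds.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Example with 3 resource types and 5 processes.
available = [3, 3, 2]
max_need  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocated = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocated))   # True
```

A request is granted only if pretending to grant it still leaves the system in a state where this function returns True.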
How would a file named EXAMPLEFILE.TXT appear when viewed under the DOS command
console operating in Windows 98?
The filename would appear as EXAMPL~1.TXT. The reason is that filenames under
this operating system are limited to 8 characters when working in the DOS environment.
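A simplified sketch of that 8.3 short-name rule (real DOS also strips invalid characters and bumps the ~1 suffix on name collisions, which this ignores):

```python
def short_name(filename):
    """Truncate a long filename to DOS 8.3 form: first six
    characters, then '~1', keeping up to three extension chars."""
    name, _, ext = filename.rpartition('.')
    if len(name) <= 8:
        return filename.upper()          # already fits 8.3
    return (name[:6] + '~1.' + ext[:3]).upper()

print(short_name('examplefile.txt'))   # EXAMPL~1.TXT
```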
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request was submitted until the first
response is produced, not the final output (for time-sharing environments)
Linux
Linux is a popular version of the UNIX operating system. It is open source, as its source code is
freely available, and it is free to use. Linux was designed with UNIX compatibility in mind; its
functionality list is quite similar to that of UNIX.
Components of Linux System
Linux Operating System has primarily three components
• Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the
operating system. It consists of various modules and interacts directly with the underlying
hardware. The kernel provides the required abstraction to hide low-level hardware details from
system and application programs.
• System Library - System libraries are special functions or programs through which
application programs or system utilities access the kernel's features. These libraries implement
most of the functionality of the operating system and do not require the kernel module's code
access rights.
• System Utility - System utility programs are responsible for specialized, individual-
level tasks.
Architecture
Linux System Architecture consists of following layers:
• Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
• Kernel - Core component of Operating System, interacts directly with hardware, provides
low level services to upper layer components.
• Shell - An interface to kernel, hiding complexity of kernel's functions from users. Takes
commands from user and executes kernel's functions.
• Utilities - Utility programs giving user most of the functionalities of an operating
systems.
Basic Features
Following are some of the important features of Linux Operating System.
• Portable - Portability means the software can work on different types of hardware in the same
way. The Linux kernel and application programs support installation on any kind of hardware
platform.
• Open Source - Linux source code is freely available and it is community-based
development project. Multiple teams work in collaboration to enhance the capability of Linux
operating system and it is continuously evolving.
• Multi-User - Linux is a multiuser system, meaning multiple users can access system
resources such as memory, RAM and application programs at the same time.
• Multiprogramming - Linux is a multiprogramming system, meaning multiple applications
can run at the same time.
• Hierarchical File System - Linux provides a standard file structure in which system files/
user files are arranged.
• Shell - Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to do various types of operations, call
application programs etc.
• Security - Linux provides user security using authentication features like password
protection/ controlled access to specific files/ encryption of data.
Input Unit
Computers accept coded information through input units, which read the data. The most well-
known input device is the keyboard. Whenever a key is pressed, the corresponding letter or digit is
automatically translated into its binary code and transmitted over a cable to
either the memory or the processor.
Memory Unit
The function of the memory unit is to store programs and data. There are two classes of storage,
called Primary and Secondary.
Output Unit- Its function is to send processed result to the outside world. Ex:-
Printer.
Control unit- The Control Unit is used to co-ordinate the operations of memory,
ALU, input and output units. The Control Unit can be said as the nerve center
that sends control signals to other units and senses their states.
Consider an instruction such as Add LOCA, R0. This instruction adds the operand at
memory location LOCA to the operand in register R0 and places the sum into register R0. It
seems that this instruction is done in one step, but it actually performs several steps
internally. First, the instruction is fetched from memory into the processor. Next, the
operand at LOCA is fetched and added to the contents of R0.
Let us now analyze how the memory and processor are connected:
The Processor contains a number of registers used for several purposes.
IR: The IR (Instruction Register) holds the instruction that is currently being
executed.
PC: The PC (Program Counter) is another specialized register which contains the
memory address of next instruction to be fetched.
MAR (Memory Address Register): This register facilitates communication with
the memory. MAR holds the address of the location to be accessed.
MDR: This register facilitates communication with the memory. The MDR
contains the data to be written into or read out of the addressed location.
There are n general purpose registers R0 to Rn-1.
Program execution starts when the PC is set to point to the first instruction. The
content of the PC is transferred to the MAR and a Read control signal is sent to
memory. Then the addressed word is read out of memory and loaded into the
MDR. Next the contents of the MDR are transferred to the IR. The instruction is then
decoded, and sent to the ALU if it involves arithmetic or logical calculations.
The n general purpose registers are used during the calculations to store
results. The result is then sent to the MDR, the address of the location where the result
is to be stored is sent to the MAR, and a write cycle is initiated. Then the PC is
incremented and the process continues.
Bus Structures
The bus is used to connect all the individual parts of a computer in an organized manner. The
simplest way to interconnect functional units is to use a single bus. The main advantage of the
single bus structure is its low cost and its flexibility. Multiple-bus structures achieve more
concurrency in operation by allowing two or more transfers to be carried out at the same time.
Performance
3.3 Data Structure
Data can be organized in many ways, and a data structure is one of these ways. It is used to
represent data in the memory of the computer so that the data can be processed more easily. In
other words, a data structure is the logical and mathematical model of a particular organization
of data.
1. Linear Data Structures- In these data structures the elements form a sequence, such as
Arrays, Stacks, Queues and Linked Lists.
2. Non-Linear Data Structures- In these data structures the elements do not form a sequence;
Trees and Graphs are non-linear data structures.
Array
An array is a collection of elements of the same type stored at contiguous memory locations,
each accessed directly by its index.
Stack
A stack is a container of objects that are inserted and removed according to the last-in first-out
(LIFO) principle. In the pushdown stacks only two operations are allowed: push the item into
the stack, and pop the item out of the stack. A stack is a limited access data structure - elements
can be added and removed from the stack only at the top. Push adds an item to the top of the
stack; pop removes the item from the top. A helpful analogy is to think of a stack of books: you
can remove only the top book, and you can add a new book only on the top. The simplest
application of a stack is to reverse a word. You push a given word onto the stack, letter by letter,
and then pop the letters off the stack. Another application is the "undo" mechanism in text
editors; this operation is accomplished by keeping all text changes in a stack.
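The word-reversal application above can be sketched in Python, using a list as the pushdown stack:

```python
def reverse_word(word):
    stack = []                   # Python list used as a pushdown stack
    for letter in word:
        stack.append(letter)     # push each letter onto the stack
    out = []
    while stack:
        out.append(stack.pop())  # pop: last in, first out
    return "".join(out)

print(reverse_word("stack"))  # kcats
```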
Queue
A queue is a container of objects (a linear collection) that are inserted and removed according to
the first-in first-out (FIFO) principle. An excellent example of a queue is a line of students in the
food court. New additions to the line are made at the back of the queue, while removal (or
serving) happens at the front. In a queue only two operations are allowed: enqueue and dequeue.
Enqueue means to insert an item at the back of the queue; dequeue means removing the front
item.
The difference between stacks and queues is in removing. In a stack we remove the item the
most recently added; in a queue, we remove the item the least recently added.
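A minimal sketch of enqueue and dequeue, using Python's `collections.deque` as the queue:

```python
from collections import deque

queue = deque()
for student in ["ann", "bob", "cid"]:
    queue.append(student)      # enqueue: insert at the back
first = queue.popleft()        # dequeue: remove from the front (FIFO)
print(first)  # ann
```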
Linked List
A linked list is a linear data structure where each element is a separate object. Each element
(node) of a list comprises two items - the data and a reference to the next node. The last node
has a reference to null. The entry point into a linked list is called the head of the list. It should be
noted that head is not a separate node, but the reference to the first node. If the list is empty
then the head is a null reference. A linked list is a dynamic data structure. The number of nodes
in a list is not fixed and can grow and shrink on demand. Any application which has to deal with
an unknown number of objects will need to use a linked list.
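A minimal linked-list sketch with a `Node` class (the names here are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data       # the data item
        self.next = next       # reference to the next node (None = null)

head = None                    # empty list: head is a null reference
for value in [3, 2, 1]:
    head = Node(value, head)   # insert each new node at the head

items = []
node = head
while node is not None:        # walk the chain until the null reference
    items.append(node.data)
    node = node.next
print(items)  # [1, 2, 3]
```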
Tree
A tree is a non-linear data structure made up of nodes or vertices and edges without having any
cycle. The tree with no nodes is called the null or empty tree. A tree that is not empty consists of
a root node and potentially many levels of additional nodes that form a hierarchy.
Binary Tree
A worthwhile simplification is to consider only binary trees. A binary tree is one in which each
node has at most two descendants - a node can have just one but it can't have more than two.
Clearly each node in a binary tree can have a left and/or a right descendant. The importance of a
binary tree is that it can create a data structure that mimics a "yes/no" decision making process.
Graph
A graph is a non-linear data structure consisting of a finite set of nodes or vertices, together
with a set of ordered pairs of these nodes (or, in some cases, a set of unordered pairs). These
pairs are known as edges or arcs.
Searching
Linear Search
Linear search or sequential search is a method for finding an element within a list. It
sequentially checks each element of the list until a match is found or the whole list has been
searched.
Worst Case: Θ(n) ; search key not present or last element
Best Case: Θ(1) ; first element
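A straightforward sketch of linear search:

```python
def linear_search(items, key):
    for i, item in enumerate(items):  # check each element sequentially
        if item == key:
            return i                  # match found (best case: first element)
    return -1                         # whole list searched (worst case)

print(linear_search([4, 2, 7], 7))  # 2
```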
Binary Search
In computer science, a binary search or half-interval search algorithm finds the position of a
specified input value (the search "key") within an array sorted by key value. In each step, the
algorithm compares the search key value with the key value of the middle element of the array.
If the keys match then a matching element has been found and its index, or position, is returned.
Otherwise, if the search key is less than the middle element's key, the algorithm repeats its
action on the sub-array to the left of the middle element or if the search key is greater, on the
sub-array to the right.
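The halving step described above can be sketched as:

```python
def binary_search(arr, key):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid            # keys match: return the index
        elif key < arr[mid]:
            hi = mid - 1          # repeat on the left sub-array
        else:
            lo = mid + 1          # repeat on the right sub-array
    return -1                     # key not present

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```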
Sorting
Insertion Sort
Insertion sort iterates, consuming one input element each repetition, and growing a sorted output
list. Insertion sort removes one element from the input data in each iteration, finds the location it
belongs within the sorted list, and inserts it there. It repeats until no input elements remain.
Average Case / Worst Case: Θ(n²); the worst case happens when the input is sorted in
descending order
Best Case: Θ(n); when the input is already sorted
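A sketch of insertion sort as described, shifting larger elements right to make room:

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]                      # take one element from the input
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]             # shift larger elements one slot right
            j -= 1
        a[j + 1] = key                  # insert where it belongs
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```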
Selection Sort
The selection sort method requires (n-1) passes to sort an array. In the first pass, find the
smallest element in the entire array and swap with the first element a[0]. In the second pass,
find the smallest element from a[1] to a[n-1] and swap with a[1] and so on.
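The pass-by-pass selection described above can be sketched as:

```python
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):                     # (n-1) passes
        smallest = i
        for j in range(i + 1, n):              # find smallest in a[i..n-1]
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]  # swap it into position i
    return a

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```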
Merge Sort
Mergesort is a divide and conquer algorithm that was invented by John von Neumann in 1945.
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sub-lists, each containing 1 element (a list of 1 element is
considered sorted).
2. Repeatedly merge sub-lists to produce new sorted sub-lists until there is only 1 sub-list
remaining. This will be the sorted list.
Average Case / Worst Case / Best Case: Θ(n log n); it does not matter whether the input is
sorted or not.
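The divide and merge steps above can be sketched as:

```python
def merge_sort(a):
    if len(a) <= 1:
        return a                      # a list of 1 element is sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # divide into sub-lists and sort each
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge two sorted sub-lists
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever remainder is left

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```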
Quick Sort
Quicksort is a divide and conquer algorithm. Quicksort first divides a large array into two
smaller sub-arrays: the low elements and the high elements. Quicksort can then recursively sort
the sub-arrays. The steps are:
1. Pick an element, called a pivot, from the array.
2. Reorder the array so that all elements with values less than the pivot come before the pivot,
while all elements with values greater than the pivot come after it (equal values can go either
way). After this partitioning, the pivot is in its final position. This is called the partition
operation.
3. Recursively apply the above steps to the sub-array of elements with smaller values and
separately to the sub-array of elements with greater values.
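The three steps can be sketched compactly; this list-based version trades the in-place partition operation for clarity:

```python
def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[0]                                   # 1. pick a pivot
    low = [x for x in a[1:] if x <= pivot]         # 2. partition: smaller or equal
    high = [x for x in a[1:] if x > pivot]         #    and greater elements
    return quicksort(low) + [pivot] + quicksort(high)  # 3. recurse on each side

print(quicksort([3, 6, 1, 8, 2, 8]))  # [1, 2, 3, 6, 8, 8]
```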
Bubble Sort
It is the least efficient sorting technique. The basic idea underlying the bubble sort is to pass
through a file sequentially several times. Each pass consists of comparing each element in the
file with its successors and interchanging the two elements if they are not in proper order.
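The repeated compare-and-interchange passes can be sketched as:

```python
def bubble_sort(a):
    n = len(a)
    for _ in range(n - 1):                       # several sequential passes
        for i in range(n - 1):
            if a[i] > a[i + 1]:                  # compare element with successor
                a[i], a[i + 1] = a[i + 1], a[i]  # interchange if out of order
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```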
Divide and Conquer
A divide and conquer algorithm works by recursively breaking a problem down into sub-problems
of the same type until these become simple enough to be solved directly, then combining their
solutions. This technique is the basis of efficient algorithms for all kinds of problems, such as
sorting (e.g., quicksort, merge sort).
Greedy Method
This method suggests that one can design an algorithm that works in stages, considering one
input at a time. At each step a decision is made whether a particular input is part of an optimal
solution. Examples: Prim's algorithm, Kruskal's algorithm.
Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a connected
weighted undirected graph. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is minimized.
In Kruskal’s algorithm, the edges of the graph are considered in non-decreasing order of cost. It
selects edges into a set T such that it remains possible to complete T into a tree (no cycle).
Dijkstra's Algorithm
Dijkstra's algorithm, conceived by Edsger W. Dijkstra, is a graph search algorithm that solves the
single-source shortest path problem for a graph with non-negative edge costs, producing a
shortest-path tree. This
algorithm is often used in routing and as a subroutine in other graph algorithms.
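A common priority-queue formulation of Dijkstra's algorithm can be sketched as follows; the adjacency-list encoding of the graph is an illustrative choice:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbour, non-negative edge cost), ...]}
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)            # closest unsettled node
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```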
3.4 Software Engineering
The process life cycle describes the organizational activities of the engineers. It includes all the
phases that are:
1. Requirement Analysis
2. Design
3. Implementation
4. Testing
Prototype Model
This type of System Development Method is employed when it is very difficult to obtain exact
requirements from the customer (unlike waterfall model, where requirements are clear). The
basic idea here is that instead of freezing the requirements before any design or coding can
proceed, a throwaway prototype is built to help understand the requirements. Development of
the prototype obviously undergoes design, coding and testing, but each of these phases is not
done very formally. By using the prototype, the client can get an actual feel of the system, which
can enable the client to better understand the requirements of the desired system.
Spiral Model
Spiral Model includes the iterative nature of the prototyping model and the linear nature of the
waterfall model. This approach is ideal for developing software that is revealed in various versions.
In each iteration of the spiral approach, software development process follows the phase-wise
linear approach. At the end of first iteration, the customer evaluates the software and provides
the feedback. Based on the feedback, software development process enters into the next
iteration. The process of iteration continues throughout the life of the software.
An example of the spiral model is the evolution of the Microsoft Windows operating system from
Windows 3.1 to Windows 2003. We may refer to Windows 3.1 as the first iteration in the spiral
approach. The product was released and evaluated by the customers, including the market at
large. After getting feedback from customers about Windows 3.1, Microsoft planned to develop a
new version of the operating system. Windows 95 was released with enhancements and greater
graphical flexibility. Similarly, other versions of the Windows operating system were released.
Incremental Model
Each increment includes three phases: design, implementation and analysis. During the design
phase of the first increment, the functionality with topmost priority from the project activity list is
selected and the design is prepared. In the implementation phase, the design is implemented and
tested. In the analysis phase, the functional capability of the partially developed product is analyzed.
The development process is repeated until all the functions of the project are implemented.
Consider an example where a bank wants to develop software to automate the banking process
for insurance services, personal banking, and home and automobile loans. The bank wants the
automation of personal banking system immediately because it will enhance the customer
services. You can implement the incremental approach to develop the banking software. In the
first increment, you can implement the personal banking feature and deliver it to the customer.
In the later increments, you can implement the insurance services, home loans, and automobile
loans features of the bank.
Four P's of Software Project Management
The effective software project management focuses on four P's:
The People
The Product
The Process
The Project
The People
The following categories of people are involved in the software process.
Senior Managers
Project Managers
Practitioners
Customers
End Users
Senior managers define the business issues. Project managers plan, motivate, organize and
control the practitioners who do the software work. Practitioners deliver the technical skills that
are necessary to engineer a product or application. Customers specify the requirements for the
software to be developed. End users interact with the software once it is released.
The Product
Before a software project is planned, the product objectives and scope should be established,
technical and management constraints should be identified. Without this information it is
impossible to define a reasonable cost, amount of risk involved, the project schedule etc.
The Process
Here the important point is to select an appropriate process model to develop the software.
There are different process models available: the Waterfall model, Iterative Waterfall model,
Prototyping model, Evolutionary model and Spiral model. In practice we may use any one of
these models or a combination of them.
The Project
In order to manage a successful software project, the different activities should be defined
clearly. A project is a series of steps where we need to make accurate decisions in order to
deliver a successful project.
Project Estimation
For effective management, accurate estimation of various measures is a must. With correct
estimation, managers can manage and control the project more efficiently and effectively.
Project estimation may use the following models:
Putnam Model
This model is made by Lawrence H. Putnam, which is based on Norden’s frequency distribution
(Rayleigh curve). Putnam model maps time and efforts required with software size.
COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry W. Boehm. It divides the
software product into three categories of software: organic, semi-detached and embedded.
Risk Management
Risks in a software project may include:
Experienced staff leaving the project and new staff coming in.
Change in organizational management.
Requirement change or misinterpreting requirement.
Under-estimation of required time and resources.
Technological changes, environmental changes, business competition.
Risk Management Process
There are following activities involved in risk management process:
Identification - Make note of all possible risks, which may occur in the project.
Categorize - Categorize known risks into high, medium and low risk intensity as
per their possible impact on the project.
Manage - Analyze the probability of occurrence of risks at various phases. Make
plan to avoid or face risks. Attempt to minimize their side-effects.
Monitor - Closely monitor the potential risks and their early symptoms. Also
monitor the effects of steps taken to mitigate or avoid them.
3.5 Automata
Overview
Automata theory is the study of abstract computing devices, or “machines”. This topic goes back
to the days before digital computers and describes what is possible to compute using an abstract
machine. These ideas directly apply to creating compilers, programming languages, and
designing applications. They also provide a formal framework to analyze new types of
computing devices, e.g. bio computers or quantum computers.
Practical Examples of the implications of Automata Theory and the formal Languages
Grammars and languages are closely related to automata theory and are the basis of many
important software components like:
• Compilers and interpreters
• Text editors and processors
• Text searching
• System verification
A non-deterministic finite automaton is defined like a deterministic one; the only difference lies
in the transition function, which can now target a subset of the states of the automaton rather
than a single next state for each state, input pair.
Difference between FA’s and NFA’s
FA stands for finite automaton, while NFA stands for non-deterministic finite automaton.
In an FA there must be a transition for each letter of the alphabet from each state, so the
number of transitions must equal (number of states × number of letters in the alphabet).
In an NFA there may be more than one transition for a letter from a state. Finally, every
FA is an NFA, while an NFA may or may not be an FA.
Regular Expression
A regular expression (RE) is used to express a finite or infinite language. REs are built so that
they generate exactly the strings of that language. To cross-check that a defined RE describes a
specified language, the RE should accept every string of that language, and every string the RE
accepts should belong to the language.
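As a practical illustration (Python's `re` syntax is richer than formal regular expressions, but the accept/reject idea is the same), here is an RE for the language of binary strings ending in 0:

```python
import re

# RE describing binary strings that end in 0 (an illustrative language).
ends_in_zero = re.compile(r"^[01]*0$")

assert ends_in_zero.match("1010") is not None  # in the language: accepted
assert ends_in_zero.match("101") is None       # not in the language: rejected
```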
How Moore and Mealy machine works in Computer Memory what is their importance in
Computing?
Mealy and Moore machines work in computing as, for example, incrementing machines and 1's
complement machines. These are basic computer operations, so these machines are very
important.
In Mealy Machine we read input string letters and generate output while moving along the paths
from one state to another while in Moore machine we generate output on reaching the state so
the output pattern of Moore machine contains one extra letter because we generated output for
state q0 where we read nothing.
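A one-state Mealy machine for the 1's complement can be sketched as a transition table mapping (state, input) to (next state, output); the state name `q0` is an illustrative choice:

```python
# delta maps (state, input letter) -> (next state, output letter).
delta = {("q0", "0"): ("q0", "1"),
         ("q0", "1"): ("q0", "0")}

def run_mealy(bits, state="q0"):
    out = []
    for b in bits:
        state, o = delta[(state, b)]   # output generated while moving along the path
        out.append(o)
    return "".join(out)

print(run_mealy("0101"))  # prints 1010
```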
Suppose we have a decimal number 141 in our language. When compiler reads it, it would be in the
form of string. The compiler would calculate its decimal equivalent so that we can perform
mathematical functions on it. In calculating its decimal value, weight of first “1” is different than the
second “1” it means it is context sensitive (depends on in which position the “1” has appeared).
Pushdown Automata
In computer science, a pushdown automaton (PDA) is a type of automaton that employs a stack.
Pushdown automata are used in theories about what can be computed by machines. A PDA is just
an enhancement of an FA, i.e. memory is attached to the machine that recognizes the language. They
are
more capable than finite-state machines but less capable than Turing machines. Deterministic
pushdown automata can recognize all deterministic context-free languages while nondeterministic
ones can recognize all context-free languages. Mainly the former are used in parser design.
Turing machine
A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according
to a table of rules. On this tape are symbols, which the machine can read and write, one at a
time, using a tape head. Operation is fully determined by a finite set of elementary instructions
such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state
17; in state 17, if the symbol seen is 0, write a 1 and change to state 6", etc.
Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer
algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
3.6 Data Base Management System
A database is a collection of data related by some aspect. Data is a collection of facts and
figures which can be processed to produce information. The name of a student, her age, class
and subjects can be counted as data for recording purposes.
A database management system stores data in such a way that it is easier to retrieve and
manipulate the data and to produce information.
Characteristics
Traditionally, data was organized in file formats. The DBMS was a new concept then, and
research was done to make it overcome the deficiencies of the traditional style of data
management. A modern DBMS has the following characteristics:
Real-world Entity: A modern DBMS is more realistic and uses real-world entities
to design its architecture. It uses their behavior and attributes too. For example, a
school database may use student as an entity and age as an attribute.
Relation-based Tables: A DBMS allows entities and the relations among them to be formed
as tables. This eases the concept of data storage. A user can understand the
architecture of the database just by looking at the table names.
Isolation of Data and Application: A database system is entirely different from its
data. The database is said to be the active entity, whereas the data is the passive one
on which the database works and organizes. A DBMS also stores metadata, which is
data about data, to ease its own process.
Consistency: A DBMS always maintains a state of consistency, which earlier forms of
data-storing applications like file processing did not guarantee.
Consistency is a state where every relation in the database remains consistent. There
exist methods and techniques which can detect an attempt to leave the database in an
inconsistent state.
Query Language: A DBMS is equipped with a query language, which makes it more efficient to
retrieve and manipulate data. A user can apply as many different filtering options as he
or she wants. Traditionally this was not possible where a file-processing system was used.
ACID Properties: A DBMS follows the ACID properties, which stand
for Atomicity, Consistency, Isolation and Durability. These concepts are applied
to transactions, which manipulate data in the database. The ACID properties keep the
database in a healthy state in a multi-transactional environment and in case of
failure.
Multiuser and Concurrent Access: A DBMS supports a multi-user environment and allows users
to access and manipulate data in parallel. Although there are restrictions on transactions that
attempt to handle the same data item, users are always unaware of them.
Multiple Views: A DBMS offers multiple views for different users. A user in the
sales department will have a different view of the database than a person working in
the production department. This enables users to have a concentrated view of the database
according to their requirements.
Security: Features like multiple views offer security to some extent, in that users
are unable to access data of other users and departments. A DBMS offers methods
to impose constraints while entering data into the database and while retrieving it at a
later stage. A DBMS offers many different levels of security features, which enables
multiple users to have different views with different features. For example, a user
in the sales department cannot see the data of the purchase department; additionally,
how much of the sales department's data he can see can also be managed.
Because DBMS data is not saved on disk the way a traditional file system saves it, it is
very hard for an intruder to read it directly.
Entity-Relationship Model
Entity-Relationship model is based on the notion of real-world entities and relationship among
them. While formulating real-world scenario into database model, ER Model creates entity set,
relationship set, general attributes and constraints.
ER Model is best used for the conceptual design of a database. The ER Model is based on
entities and their attributes, and on relationships among entities.
Database Schema
A database schema represents the logical view of the entire database. It describes how the data
is organized and how the relations among the data are associated. It formulates all the
constraints that are to be applied to the data in the relations residing in the database.
Database schema can be divided broadly in two categories:
Physical Database Schema: This schema pertains to the actual storage of data and its form
of storage, like files and indices. It defines how data will be stored in secondary storage.
Logical Database Schema: This defines all logical constraints that need to be
applied on data stored. It defines tables, views and integrity constraints etc.
Normalization
If a database design is not perfect it may contain anomalies, which are like a bad dream for
database itself. Managing a database with anomalies is next to impossible.
Update anomalies: If data items are scattered and are not linked to each other
properly, then there may be instances when we try to update one data item that
has copies of it scattered at several places, few instances of it get updated
properly while few are left with their old values. This leaves database in an
inconsistent state.
Deletion anomalies: When we try to delete a record, parts of it may be left
undeleted because, unknown to us, the data is also saved somewhere else.
Insert anomalies: When we try to insert data in a record that does not exist at all.
Normalization is a method to remove all these anomalies and bring database to consistent state
and free from any kinds of anomalies.
First Normal Form:
This is defined in the definition of relations (tables) itself. This rule defines that all the attributes
in a relation must have atomic domains. Values in atomic domain are indivisible units.
Second Normal Form:
Second normal form says that every non-prime attribute should be fully functionally dependent
on the prime key attribute. That is, if X → A holds, then there should be no proper subset Y of
X for which Y → A also holds.
Third Normal Form:
For a relation to be in Third Normal Form, it must be in Second Normal Form and no non-prime
attribute may be transitively dependent on the prime key attribute.
DBMS - Indexing
We know that information in the DBMS files is stored in the form of records. Every record is
equipped with some key field, which helps it to be recognized uniquely.
Indexing is a data structure technique to efficiently retrieve records from database files based on
some attributes on which the indexing has been done. Indexing in database systems is similar to
the one we see in books.
Indexing is defined based on its indexing attributes. Indexing can be one of the following types:
The ordering field is the field on which the records of the file are ordered. It can be different
from the primary or candidate key of the file. Ordered indexing is of two types:
Dense Index
Sparse Index
Dense Index
In dense index, there is an index record for every search key value in the database. This makes
searching faster but requires more space to store index records itself. Index record contains
search key value and a pointer to the actual record on the disk.
Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains
a search key and an actual pointer to the data on the disk. To search a record, we first proceed by the
index record and reach the actual location of the data. If the data we are looking for is not where we
land by following the index, the system starts a sequential search until the desired data is found.
Multilevel Index
Index records comprise search-key value and data pointers. This index itself is stored on the
disk along with the actual database files. As the size of database grows, so does the size of
indices. There is an immense need to keep the index records in the main memory so that the
search can speed up. If single level index is used then a large size index cannot be kept in
memory as whole and this leads to multiple disk accesses.
DBMS - Transaction
A transaction can be defined as a group of tasks. A single task is the minimum processing unit of
work, which cannot be divided further.
An example of a transaction can be the bank accounts of two users, say A and B. When a bank
employee transfers an amount from A's account to B's account, a number of tasks are executed
behind the screen. This very simple and small transaction includes several steps: decrease A's
bank balance by the amount, increase B's balance by the same amount, and record both operations.
States of Transactions
A transaction in a database can be in one of the following states:
Active: In this state the transaction is being executed. This is the initial state of
every transaction.
Partially Committed: When a transaction executes its final operation, it is said to be in this
state. After execution of all operations, the database system performs some checks e.g. the
consistency state of database after applying output of transaction onto the database.
Failed: If any checks made by database recovery system fail, the transaction is
said to be in failed state, from where it can no longer proceed further.
Aborted: If any check fails and the transaction reaches the failed state, the recovery manager rolls
back all its write operations on the database to bring the database back to the state it was in
prior to the start of execution of the transaction. Transactions in this state are called aborted. The
database recovery module can select one of two operations after a transaction aborts:
Re-start the transaction or Kill the transaction
Committed: If transaction executes all its operations successfully it is said to be
committed. All its effects are now permanently made on database system.
Concurrency Control
In a multiprogramming environment where more than one transaction can be executed
concurrently, there is a need for protocols to control the concurrency of transactions to ensure
the atomicity and isolation properties of transactions.
Concurrency control protocols, which ensure serializability of transactions, are most desirable.
Concurrency control protocols can be broadly divided into two categories:
Binary Locks: A lock on data item can be in two states; it is either locked or
unlocked.
Shared/exclusive: This type of locking mechanism differentiates locks based on their use. If a
lock is acquired on a data item to perform a write operation, it is an exclusive lock, because
allowing more than one transaction to write to the same data item would lead the database into
an inconsistent state. Read locks are shared because no data value is being changed.
Lock-based protocols manage the order between conflicting pairs among transactions at the time of
execution, whereas timestamp-based protocols start working as soon as a transaction is created.
Every transaction has a timestamp associated with it, and the ordering is determined by the age
of the transaction. A transaction created at clock time 0002 would be older than all other
transactions that come after it; for example, any transaction 'y' entering the system at 0004 is
two seconds younger, and priority may be given to the older one.
In addition, every data item is given the latest read and write-timestamp. This lets the system
know, when last read and write operation were made on the data item.
DML– Data Manipulation Language: DML is used for manipulation of the data
itself. Typical operations are Insert, Delete, Update and retrieving the data from
the table. Select statement is considered as a limited version of DML, since it
can't change data in the database. But it can perform operations on data retrieved
from DBMS, before the results are returned to the calling function.
DCL– Data Control Language: DCL is used to control the visibility of data, like
granting database access and setting privileges to create tables etc. Examples - Grant,
Revoke access permission to the user to access data in the database.
Integrity Rules
There are two Integrity rules:
Entity Integrity: States that "Primary key cannot have NULL value"
Referential Integrity: States that "Foreign Key can be either a NULL value or
should be a Primary Key value of another relation."
3.7 Networking
Network
Node
A network can consist of two or more computers directly connected by some physical medium
such as coaxial cable or optical fiber. Such a physical medium is called a link, and the
computers it connects are called nodes.
Gateway or Router
A node that is connected to two or more networks is commonly called a router or gateway. It
generally forwards messages from one network to another.
Routing
The process of determining systematically how to forward messages towards the destination
nodes based on its address is called routing.
Protocol
A protocol is a set of rules that govern all aspects of information communication. The key
elements of protocols are:
Syntax: It refers to the structure or format of the data that is the order in which
they are presented.
Semantics: It refers to the meaning of each section of bits.
Timing: Timing refers to two characteristics: when data should be sent and how
fast they can be sent.
Bandwidth and Latency
Network performance is measured in Bandwidth (throughput) and Latency (Delay). Bandwidth
of a network is given by the number of bits that can be transmitted over the network in a certain
period of time. Every line has an upper limit and a lower limit on the frequency of signals it can
carry. This limited range is called the bandwidth.
Latency corresponds to how long it takes a message to travel from one end of a network to the
other. It is strictly measured in terms of time.
Multiplexing
Multiplexing is the set of techniques that allows the simultaneous transmission of multiple
signals across a single data link.
TCP/IP
TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or
protocol of the Internet. It can also be used as a communications protocol in a private network
(either an intranet or an extranet). When you are set up with direct access to the Internet, your
computer is provided with a copy of the TCP/IP program just as every other computer that you
may send messages to or get information from also has a copy of TCP/IP.
TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the
assembling of a message or file into smaller packets that are transmitted over the Internet and
received by a TCP layer that reassembles the packets into the original message. The lower layer,
Internet Protocol, handles the address part of each packet so that it gets to the right destination.
Each gateway computer on the network checks this address to see where to forward the
message. Even though some packets from the same message are routed differently than others,
they'll be reassembled at the destination.
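The split-and-reassemble behaviour described above can be sketched in Python. This is a simplification: real TCP segments carry full headers, but the sequence-number idea is the same.

```python
# TCP breaks a message into numbered packets; IP may deliver them out of
# order, and the receiving TCP layer reassembles them by sequence number.

def packetize(message: bytes, size: int):
    """Split a message into (seq, chunk) packets of at most `size` bytes."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(packets))

msg = b"TCP/IP reassembles packets at the destination"
packets = packetize(msg, 8)
packets.reverse()                      # simulate out-of-order arrival
assert reassemble(packets) == msg
```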
TCP/IP Model
TCP/IP is based on a four-layer reference model. All protocols that belong to the TCP/IP
protocol suite are located in the top three layers of this model.
As shown in the following illustration, each layer of the TCP/IP model corresponds to one or
more layers of the seven-layer Open Systems Interconnection (OSI) reference model proposed
by the International Standards Organization (ISO).
The types of services performed and protocols used at each layer within the TCP/IP model are
described in more detail in the following table.
Internet layer: Packages data into IP datagrams, which contain source and destination
address information that is used to forward the datagrams between hosts and across
networks; performs routing of IP datagrams. Protocols: IP, ICMP, ARP, RARP.
Network interface layer: Specifies details of how data is physically sent through the
network, including how bits are electrically signaled by hardware devices that interface
directly with a network medium, such as coaxial cable, optical fiber, or twisted-pair
copper wire. Protocols: Ethernet, Token Ring, FDDI, X.25, Frame Relay, RS-232, V.35.
Bridges
These operate in both the physical and data link layers of LANs of the same type. They
divide a larger network into smaller segments. They contain logic that allows them to keep
the traffic for each segment separate; thus they act as repeaters that relay a frame only to
the side of the segment containing the intended recipient, which controls congestion.
Error Detection
Data can be corrupted during transmission. For reliable communication, errors must be
detected and corrected. Error detection uses the concept of redundancy, which means adding
extra bits for detecting errors at the destination.
The common Error Detection methods are:
a. Vertical Redundancy Check (VRC)
b. Longitudinal Redundancy Check (LRC)
c. Cyclic Redundancy Check (CRC)
d. Checksum
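Two of these schemes can be sketched in Python: VRC appends one (even) parity bit per byte, and a checksum sums the data so the sender's and receiver's totals can be compared. These are simplified illustrations, not the exact bit layouts used on real links.

```python
# VRC: a parity bit per byte; checksum: a 16-bit truncated sum of the data.

def even_parity_bit(byte: int) -> int:
    """VRC: the bit that makes the total number of 1-bits even."""
    return bin(byte).count("1") % 2

def simple_checksum(data: bytes) -> int:
    """Sum of all bytes truncated to 16 bits (a simplified checksum)."""
    return sum(data) & 0xFFFF

assert even_parity_bit(0b1011001) == 0  # four 1-bits: already even
assert even_parity_bit(0b1011000) == 1  # three 1-bits: parity bit set
assert simple_checksum(b"network") == simple_checksum(bytearray(b"network"))
```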
Redundancy
The technique of including extra information in the transmission solely for the purpose of
comparison is called redundancy.
Encoder
A device or program that uses predefined algorithms to encode, or compress audio or video data
for storage or transmission use. Example: a circuit that is used to convert between digital video
and analog video.
Decoder
A device or program that translates encoded data into its original format (e.g. it decodes the
data). The term is often used in reference to MPEG-2 video and sound data, which must be
decoded before it is output.
Framing
Framing in the data link layer separates a message from one source to a destination, or from
other messages to other destinations, by adding a sender address and a destination address. The
destination address defines where the packet has to go and the sender address helps the
recipient acknowledge the receipt.
Stop-and-Wait Protocol
In the stop-and-wait protocol, the sender sends one frame, waits until it receives
confirmation from the receiver (okay to go ahead), and then sends the next frame.
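The send-then-wait ordering can be sketched as a toy exchange over a hypothetical lossless in-memory channel: each frame's acknowledgement must arrive before the next send.

```python
# A toy stop-and-wait run: the sender transmits one frame, then blocks
# until the matching acknowledgement before sending the next frame.

def stop_and_wait(frames):
    """Return the send/ack event log for a lossless channel."""
    log = []
    for seq, _frame in enumerate(frames):
        log.append(f"send {seq}")  # sender transmits frame `seq`
        log.append(f"ack {seq}")   # receiver confirms before the next send
    return log

print(stop_and_wait(["frame A", "frame B"]))
```

In a real implementation the sender also runs a timer and retransmits on a lost acknowledgement; that path is omitted here.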
Piggy Backing
A technique called piggybacking is used to improve the efficiency of the bidirectional protocols.
When a frame is carrying data from A to B, it can also carry control information about arrived
(or lost) frames from B; when a frame is carrying data from B to A, it can also carry control
information about the arrived (or lost) frames from A.
Transmission technology
(i) Broadcast and (ii) point-to-point
Point-to-Point Protocol
A communications protocol used to connect computers to remote networking services including
Internet service providers.
RAID: A method for providing fault tolerance by using multiple hard disk drives
Attenuation: The degeneration of a signal over distance on a network cable is called attenuation.
What is MAC address?
The address for a device as it is identified at the Media Access Control (MAC) layer in the network
architecture. MAC address is usually stored in ROM on the network adapter card and is unique.
STAR topology: In this all computers are connected using a central hub.
Advantages: Can be inexpensive, easy to install and reconfigure, and easy to troubleshoot
physical problems.
Mesh network
A network in which there are multiple network-links between computers to provide multiple
paths for data to travel.
5-4-3 Rule
In an Ethernet network, between any two points on the network, there can be no more than
five network segments or four repeaters, and of those five segments only three can be
populated.
Traffic Shaping
One of the main causes of congestion is that traffic is often bursty. If hosts could be made to
transmit at a uniform rate, congestion would be less common. Another open loop method to
help manage congestion is forcing the packet to be transmitted at a more predictable rate. This
is called traffic shaping.
Local Area Networks Local area networks (LANs) are used to connect networking devices that
are in a very close geographic area, such as a floor of a building, a building itself, or a campus
environment.
Wide Area Networks Wide area networks (WANs) are used to connect LANs together. Typically,
WANs are used when the LANs that must be connected are separated by a large distance.
Metropolitan Area Networks: A metropolitan area network (MAN) is a hybrid between a LAN
and a WAN.
Intranet
An intranet is basically a network that is local to a company. In other words, users from within
this company can find all of their resources without having to go outside of the company. An
intranet can include LANs, private WANs and MANs.
Extranet
An extranet is an extended intranet, where certain internal services are made available to known
external users or external business partners at remote locations.
Internet
An internet is used when unknown external users need to access internal resources in your
network. In other words, your company might have a web site that sells various products, and
you want any external user to be able to access this service.
VPN
A virtual private network (VPN) is a special type of secured network. A VPN is used to provide a
secure connection across a public network, such as an internet. Extranets typically use a VPN to
provide a secure connection between a company and its known external users or offices.
Authentication is provided to validate the identities of the two peers. Confidentiality provides
encryption of the data to keep it private from prying eyes. Integrity is used to ensure that the
data sent between the two devices or sites has not been tampered with.
ICMP
ICMP is Internet Control Message Protocol, a network layer protocol of the TCP/IP suite used by
hosts and gateways to send notification of datagram problems back to the sender. It uses the
echo test / reply to test whether a destination is reachable and responding. It also handles both
control and error messages.
OSI Model
OSI stands for Open Systems Interconnection. It was developed by ISO, the
‘International Organization for Standardization‘, in 1984. It is a 7-layer
architecture with each layer having specific functionality to perform. All these 7 layers
work collaboratively to transmit the data from one person to another across the globe.
The lowest layer of the OSI reference model is the physical layer. It is responsible for the
actual physical connection between the devices. The physical layer contains information
in the form of bits. When receiving data, this layer gets the received signal, converts it
into 0s and 1s and sends them to the data link layer, which puts the frame back together.
The data link layer is responsible for the node-to-node delivery of the message. The main
function of this layer is to make sure data transfer is error-free from one node to
another, over the physical layer. When a packet arrives in a network, it is the
responsibility of the DLL to transmit it to the host using its MAC address.
The network layer handles the transmission of data from one host to another located in a
different network. It also takes care of packet routing, i.e. selection of the shortest path
to transmit the packet from the routes available. The sender’s and receiver’s IP
addresses are placed in the header by the network layer.
• At sender’s side:
The transport layer receives the formatted data from the upper layers,
performs segmentation and also implements flow and error control to ensure
proper data transmission. It also adds source and destination port numbers in its header
and forwards the segmented data to the network layer.
Note: The sender needs to know the port number associated with the receiver’s
application.
Generally, this destination port number is configured, either by default or manually. For
example, when a web application makes a request to a web server, it typically uses port
number 80, because this is the default port assigned to web applications. Many
applications have default ports assigned.
• At receiver’s side:
Transport Layer reads the port number from its header and forwards the Data which it
has received to the respective application. It also performs sequencing and reassembling
of the segmented data.
The functions of the transport layer are:
1. Segmentation and Reassembly: This layer accepts the message from the
(session) layer, breaks the message into smaller units. Each of the segment
produced has a header associated with it. The transport layer at the destination
station reassembles the message.
2. Service Point Addressing: In order to deliver the message to correct process,
transport layer header includes a type of address called service point address or
port address. Thus, by specifying this address, transport layer makes sure that the
message is delivered to the correct process.
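Service-point addressing can be sketched with Python's struct module, packing two 16-bit port numbers ahead of the payload in network byte order, as TCP and UDP headers do. The port values below are illustrative.

```python
# Two 16-bit ports prepended to the payload, then read back so the
# receiving host can hand the data to the right application.
import struct

def add_ports(payload: bytes, src_port: int, dst_port: int) -> bytes:
    """Prepend source and destination port numbers to the payload."""
    return struct.pack("!HH", src_port, dst_port) + payload

def read_ports(segment: bytes):
    """Recover the ports and the payload from a segment."""
    src, dst = struct.unpack("!HH", segment[:4])
    return src, dst, segment[4:]

seg = add_ports(b"GET /", 49152, 80)  # 80 is the default web-server port
print(read_ports(seg))
```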
SCENARIO:
Let’s consider a scenario where a user wants to send a message through some Messenger
application running in his browser. The “Messenger” here acts as the application layer
which provides the user with an interface to create the data. This message or so-called
Data is compressed, encrypted (if any secure data) and converted into bits (0’s and 1’s)
so that it can be transmitted.
Presentation layer is also called the Translation layer. The data from the application
layer is extracted here and manipulated as per the required format to transmit over the
network.
The functions of the presentation layer are :
1. Translation: For example, ASCII to EBCDIC.
2. Encryption/ Decryption: Data encryption translates the data into another form
or code. The encrypted data is known as the cipher text and the decrypted data is
known as plain text. A key value is used for encrypting as well as decrypting data.
3. Compression: Reduces the number of bits that need to be transmitted on the
network.
At the very top of the OSI Reference Model stack of layers, we find Application layer
which is implemented by the network applications. These applications produce the data,
which has to be transferred over the network. This layer also serves as a window for the
application services to access the network and for displaying the received information to
the user.
Ex: Application – Browsers, Skype Messenger etc.
Application Layer is also called as Desktop Layer.
In basic terms, cloud computing is the phrase used to describe different scenarios in which
computing resource is delivered as a service over a network connection (usually, this is the
internet). Cloud computing is therefore a type of computing that relies on sharing a pool of
physical and/or virtual resources, rather than deploying local or personal hardware and
software. It is somewhat synonymous with the term ‘utility computing’ as users are able to tap
into a supply of computing resource rather than manage the equipment needed to generate it
themselves; much in the same way as a consumer tapping into the national electricity supply,
instead of running their own generator.
Cloud computing consists of three layers in the hierarchy: Infrastructure as a Service
(IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
IaaS providers such as AWS supply a virtual server instance and storage, as well as application
program interfaces (APIs) that let users migrate workloads to a virtual machine (VM). Users
have an allocated storage capacity and start, stop, access and configure the VM and storage as
desired. IaaS providers offer small, medium, large, extra-large, and memory- or compute-
optimized instances, in addition to customized instances, for various workload needs.
Microsoft Office 365 is a SaaS offering for productivity software and email services. Users can
access SaaS applications and services from any location using a computer or mobile device that
has Internet access.
Cloud computing supports many deployment models and they are as follows:
• Private Cloud
Organizations choose to build a private cloud to keep strategic, operational and other
matters to themselves, and they feel more secure doing so. It is a complete platform which is
fully functional and can be owned, operated and restricted to a single organization or industry.
Many organizations have moved to private clouds due to security concerns. Virtual private
clouds operated by a hosting company are also used.
• Public Cloud
These are platforms which are open to the public for use and deployment, for example
Google and Amazon. They focus on a few layers such as cloud applications, infrastructure
provisioning and platform markets.
• Hybrid Clouds
It is the combination of public and private clouds. It is the most robust approach to
implementing cloud architecture, as it includes the functionality and features of both worlds.
It allows organizations to create their own cloud and also to hand control over to someone else.
VPN
VPN stands for virtual private network. It is a private network which manages the security of
data during transport in the cloud environment. A VPN allows an organization to use a
public network as a private network and to transfer files and other resources over it.
VPN consists of two important things:
Firewall: It acts as a barrier between the public network and the private network.
It filters the messages that are getting exchanged between the networks. It also
protects the network from any malicious activity being done on the network.
Encryption: It is used to protect sensitive data from professional hackers and
other attackers who remain active trying to obtain it. Each message is
encrypted with a key, which must match the key provided to the recipient.
Platforms used for large-scale cloud computing:
There are many platforms available for cloud computing, but the following are used to
model large-scale distributed computing:
Utility Computing
Utility computing allows the user to pay per use, meaning that they pay only for what they
use. It is a model in which organizations must decide what type of services to deploy from the
cloud. Utility computing allows users to think about and implement services as they see fit.
Most organizations go for a hybrid strategy that combines internally delivered services with
hosted or outsourced services.
Mobile voice communication is widely established throughout the world and has had a very rapid
increase in the number of subscribers to the various cellular networks over the last few years. An
extension of this technology is the ability to send and receive data across these cellular networks.
This is the principle of mobile computing.
Mobile data communication has become a very important and rapidly evolving technology as it
allows users to transmit data from remote locations to other remote or fixed locations. This proves
to be the solution to the biggest problem of business people on the move - mobility.
A cellular network consists of mobile units linked together to switching equipment, which
interconnect the different parts of the network and allow access to the fixed Public Switched
Telephone Network (PSTN). The technology is hidden from view; it's incorporated in a number of
transceivers called Base Stations (BS). Every BS covers a given area or cell - hence the name
cellular communications. BSs communicate through a so called Mobile Switching Centre (MSC)
which is the heart of a cellular radio-system. It is responsible for routing, or switching, calls from
the originator to the destination. The MSC may be connected to other MSCs on the same network or
to the PSTN.
For GSM, the 890 - 915 MHz range is used for transmission and 935 - 960 MHz for reception. The DCS
(Digital Communication System) technology uses frequencies in the 1800 MHz range, while PCS uses
the 1900 MHz range.
When a Mobile Station (MS) becomes 'active' it registers with the nearest BS. The corresponding
MSC stores the information about that MS and its position. This information is used to direct
incoming calls to the MS.
If during a call the MS moves to an adjacent cell then a change of frequency will necessarily occur -
since adjacent cells never use the same channels. This procedure is called hand over and is the key
to Mobile communications.
Data Communication
Data Communications is the exchange of data using existing communication networks. The term
data covers a wide range of applications including File Transfer (FT), interconnection between
Wide-Area-Networks (WAN), facsimile (fax), electronic mail, access to the internet and the World
Wide Web (WWW).
Data Communications have been achieved using a variety of networks such as PSTN, leased-lines
and more recently ISDN (Integrated Services Data Network) and ATM (Asynchronous Transfer
Mode)/Frame Relay. These networks are partly or totally analogue or digital using technologies
such as circuit - switching, packet - switching etc.
Circuit switching implies that data from one user (sender) to another (receiver) has to follow a pre-
specified path. If a link to be used is busy, the message cannot be redirected, a property which
causes many delays.
Packet switching is an attempt to make better utilization of the existing network by splitting the
message to be sent into packets. Each packet contains information about the sender, the receiver,
the position of the packet in the message as well as part of the actual message. There are many
protocols defining the way packets can be sent from the sender to the receiver. The most widely
used are the Virtual Circuit-Switching system, which implies that packets have to be sent through
the same path, and the Datagram system, which allows packets to be sent along various paths
depending on the network availability. Packet switching requires more equipment at the receiver,
where reconstruction of the message will have to be done.
The introduction of mobility in data communications required a move from the Public Switched
Data Network (PSDN) to other networks like the ones used by mobile phones. PCSI has come up
with an idea called CDPD (Cellular Digital Packet Data) technology which uses the existing mobile
network (frequencies used for mobile telephony).
CDPD Technology
Today, the mobile data communications market is becoming dominated by a technology called
CDPD.
There are other alternatives to this technology namely Circuit Switched Cellular, Specialised Mobile
Radio and Wireless Data Networks. As can be seen from the table below the CDPD technology is
much more advantageous than the others.
CDPD's principle lies in the usage of the idle time in between existing voice signals that are being
sent across the cellular networks. The major advantage of this system is the fact that the idle time is
not chargeable and so the cost of data transmission is very low.
CDPD networks allow fixed or mobile users to connect to the network across a fixed link and a
packet switched system respectively. Fixed users have a fixed physical link to the CDPD network. In
the case of a mobile end user, the user can, if CDPD network facilities are non-existent, connect to
existing circuit switched networks and transmit data via these networks. This is known as Circuit
Switched CDPD (CS-CDPD).
The importance of Mobile Computers has been highlighted in many fields of which a few are
described below:
For Estate Agents: Estate agents can work either at home or out in the field. With
mobile computers they can be more productive.
Emergency Services: Ability to receive information on the move is vital where the
emergency services are involved. Information regarding the address, type and other
details of an incident can be dispatched quickly, via a CDPD system using mobile
computers, to one or several appropriate mobile units which are in the vicinity of
the incident. Here the reliability and security implemented in the CDPD system
would be of great advantage.
In courts: Defense counsels can take mobile computers into court. When the opposing
counsel references a case with which they are not familiar, they can use the computer to
get direct, real-time access to on-line legal database services, where they can gather
information on the case and related precedents.
In companies: Managers can use mobile computers in, say, critical presentations to
major customers. They can access the latest market share information. At a small
recess, they can revise the presentation to take advantage of this information.
Stock Information Collation/Control: In environments where access to stock is very
limited, the use of small portable electronic databases accessed via a mobile
computer would be ideal.
Credit Card Verification: At Point of Sale (POS) terminals in shops and
supermarkets, when customers use credit cards for transactions, the
intercommunication required between the bank central computer and the POS
terminal, in order to effect verification of the card usage, can take place quickly and
securely over cellular channels using a mobile computer unit. This can speed up the
transaction process and relieve congestion at the POS terminals.
Electronic Mail/Paging: Usage of a mobile unit to send and read emails is a very
useful asset for any business individual, as it allows him/her to keep in touch with
any colleagues as well as any urgent developments that may affect their work.
Cloud computing relates to the specific design of new technologies and services that
allow data to be sent over distributed networks, through wireless connections, to a
remote secure location that is usually maintained by a vendor. Cloud service
providers usually serve multiple clients. They arrange access between the client's
local or closed networks, and their own data storage and data backup systems. That
means that the vendors can intake data, which is sent to them, and store it securely,
while delivering services back to a client through these carefully maintained
connections.
Mobile computing relates to the emergence of new devices and interfaces. Smartphones and
tablets are mobile devices that can do a lot of what traditional desktop and laptop computers do.
Mobile computing functions include accessing the Internet through browsers, supporting
multiple software applications with a core operating system, and sending and receiving different
types of data. The mobile operating system, as an interface, supports users by providing
intuitive icons, familiar search technologies and easy touch-screen commands.
While mobile computing is largely a consumer-facing service, cloud computing is
something that is used by many businesses and companies. Individuals can also
benefit from cloud computing, but some of the most sophisticated and advanced
cloud computing services are aimed at enterprises.
For example, big businesses and even smaller operations use specific cloud computing services to
make different processes like supply-chain management, inventory handling, customer
relationships and even production more efficient. An emerging picture of the difference between
cloud computing and mobile computing involves the emergence of smart phone and tablet
operating systems and, on the cloud end, new networking services that may serve these and other
devices.
Artificial Intelligence
Since the invention of computers or machines, their capability to perform various tasks has
grown exponentially. Humans have developed the power of computer systems in terms of
their diverse working domains, their increasing speed, and their reducing size with respect
to time. A branch of Computer Science named Artificial Intelligence pursues creating
computers or machines that are as intelligent as human beings.
Philosophy of AI
While exploiting the power of computer systems, human curiosity led people to wonder,
“Can a machine think and behave like humans do?”
Thus, the development of AI started with the intention of creating similar intelligence in machines
that we find and regard high in humans.
Goals of AI
Programming without AI: A computer program without AI can answer the specific
questions it is meant to solve.
Programming with AI: A computer program with AI can answer the generic questions
it is meant to solve.
What is AI Technique?
In the real world, knowledge has some unwelcome properties.
An AI technique is a manner of organizing and using knowledge efficiently in such a way that:
1. Gaming - AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe,
etc., where a machine can think of a large number of possible positions based on
heuristic knowledge.
7. Intelligent Robots - Robots are able to perform the tasks given by a human. They
have sensors to detect physical data from the real world such as light, heat,
temperature, movement, sound, bumps, and pressure. They have efficient
processors, multiple sensors and huge memory to exhibit intelligence. In addition,
they are capable of learning from their mistakes and can adapt to a new environment.
Chemical Engineering
1 Fluid flow Phenomena:
1.1 Boundary layer:
A boundary layer is the layer of fluid in the immediate vicinity of a bounding surface where the
effects of viscosity are significant.
• Laminar Boundary Layer Flow
The laminar boundary is a very smooth flow, while the turbulent boundary layer contains
swirls or "eddies." The laminar flow creates less skin friction drag than the turbulent flow,
but is less stable. As the flow continues back from the leading edge, the laminar boundary
layer increases in thickness.
• Turbulent Boundary Layer Flow
At some distance back from the leading edge, the smooth laminar flow breaks down and
transitions to a turbulent flow. The low energy laminar flow, however, tends to break down
more suddenly than the turbulent layer.
1.12 Reciprocating pump:
p1 - p2 = (ρ/2)(v2² - v1²)
where ρ is the density of the fluid, v1 is the (slower) fluid velocity where the pipe is wider,
and v2 is the (faster) fluid velocity where the pipe is narrower (as seen in the figure). This
assumes the flowing fluid (or other substance) is not significantly compressible: even though
pressure varies, the density is assumed to remain approximately constant.
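The incompressibility assumption gives the continuity relation A1·v1 = A2·v2, so the fluid speeds up where the pipe narrows. A quick numeric check in Python; the areas and inlet velocity are illustrative assumptions.

```python
# Continuity for incompressible flow: A1 * v1 = A2 * v2.

def narrow_section_velocity(v1_m_s, a1_m2, a2_m2):
    """Velocity in the narrow section from the wide-section velocity."""
    return v1_m_s * a1_m2 / a2_m2

# Halving the cross-section (0.02 m^2 -> 0.01 m^2) doubles the speed:
v2 = narrow_section_velocity(1.5, 0.02, 0.01)
print(f"{v2:.1f} m/s")
```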
• Orifice meter:
An orifice plate is a device used for measuring flow rate, for reducing pressure or for
restricting flow. Either a volumetric or mass flow rate may be determined, depending on
the calculation associated with the orifice plate. It uses the same principle as a Venturi
nozzle, namely Bernoulli's principle which states that there is a relationship between the
pressure of the fluid and the velocity of the fluid. When the velocity increases, the pressure
decreases and vice versa.
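The Bernoulli-based working equation for an orifice meter is Q = Cd·A·sqrt(2·Δp/ρ). A sketch in Python; the discharge coefficient Cd and the plate geometry below are illustrative assumptions, not values from the text.

```python
# Volumetric flow through an orifice from the measured pressure drop.
import math

def orifice_flow_m3_s(cd, area_m2, dp_pa, rho_kg_m3):
    """Flow rate (m^3/s): Q = Cd * A * sqrt(2 * dp / rho)."""
    return cd * area_m2 * math.sqrt(2.0 * dp_pa / rho_kg_m3)

# Water (rho = 1000 kg/m^3), 5 kPa drop across a 1 cm^2 orifice, Cd = 0.6:
q = orifice_flow_m3_s(0.6, 1e-4, 5000.0, 1000.0)
print(f"{q * 1000:.3f} L/s")
```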
• Rotameter:
A rotameter consists of a tapered tube, typically made of glass with a 'float', made either
of anodized aluminum or a ceramic, actually a shaped weight, inside that is pushed up by
the drag force of the flow and pulled down by gravity.
A higher volumetric flow rate through a given area increases flow speed and drag force, so
the float will be pushed upwards. However, as the inside of the rotameter is cone shaped
(widens), the area around the float through which the medium flows increases, the flow
speed and drag force decrease until there is mechanical equilibrium with the float's
weight.
Pitot tube:
The basic pitot tube consists of a tube pointing directly into the fluid flow. As this tube
contains fluid, a pressure can be measured; the moving fluid is brought to rest (stagnates)
as there is no outlet to allow flow to continue. This pressure is the stagnation pressure of
the fluid, also known as the total pressure or (particularly in aviation) the pitot pressure.
The measured stagnation pressure cannot itself be used to determine the fluid flow
velocity (airspeed in aviation). However, Bernoulli's equation states:
Stagnation pressure = static pressure + dynamic pressure
This can also be written as:
pt = ps + (1/2)ρv²
The thermal conductivity, k, is often treated as a constant, though this is not always true.
While the thermal conductivity of a material generally varies with temperature, the
variation can be small over a significant range of temperatures for some common materials.
In anisotropic materials, the thermal conductivity typically varies with orientation; in this
case k is represented by a second-order tensor. In non-uniform materials, k varies with
spatial location.
For many simple applications, Fourier's law is used in its one-dimensional form. In the x-
direction,
qx = -k (dT/dx)
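For a plane slab with a linear temperature profile, the one-dimensional form reduces to q = -k·(T_cold − T_hot)/thickness. A sketch in Python; the wall material and temperatures below are illustrative assumptions.

```python
# Fourier's law for a slab: heat flux q = -k * dT/dx, with a linear
# temperature gradient dT/dx = (T_cold - T_hot) / thickness.

def heat_flux(k_w_mk, t_hot, t_cold, thickness_m):
    """Conductive heat flux (W/m^2) through a plane slab."""
    return -k_w_mk * (t_cold - t_hot) / thickness_m

# 0.1 m wall with k = 1.0 W/(m.K), 20 C on one face and 0 C on the other:
q = heat_flux(1.0, 20.0, 0.0, 0.10)
print(f"{q:.0f} W/m^2")
```

The positive sign means heat flows from the hot face towards the cold one, as the negative sign in the law requires.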
2.2 Radiation:
Wien’s displacement law:
Wien's displacement law states that the black body radiation curve for different
temperatures peaks at a wavelength inversely proportional to the temperature. The shift of
that peak is a direct consequence of the Planck radiation law which describes the spectral
brightness of black body radiation as a function of wavelength at any given temperature.
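The law can be evaluated numerically using Wien's displacement constant, b ≈ 2.898×10⁻³ m·K, with λ_max = b/T; the solar temperature below is an illustrative value.

```python
# Wien's displacement law: lambda_max = b / T, so hotter bodies peak
# at shorter wavelengths.

WIEN_B = 2.898e-3  # Wien's displacement constant, m.K

def peak_wavelength_m(temp_k):
    """Wavelength (m) at which black-body emission peaks."""
    return WIEN_B / temp_k

# The Sun's surface (~5778 K) peaks near 0.5 micrometres (visible light):
print(f"{peak_wavelength_m(5778) * 1e9:.0f} nm")
```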
Fick's first law of diffusion relates the diffusive flux to the concentration gradient:
J = -D (dφ/dx), where φ is the concentration and x is position. Here
J is the "diffusion flux" [(amount of substance) per unit area per unit time], for example
mol m⁻² s⁻¹. It measures the amount of substance that will flow through a small area during
a small time interval.
D is the diffusion coefficient, which is proportional to the squared velocity of the diffusing
particles and depends on the temperature, viscosity of the fluid and the size of the particles
according to the Stokes-Einstein relation.
In gas desorption (or stripping), the mass transfer is in the opposite direction, i.e. from the liquid
phase to the gas phase. The principles for both systems are the same.
In addition, we assume there is no chemical reaction in the system and that it is operating at
isothermal condition. The process of gas absorption thus involves the diffusion of solute from the
gas phase through a stagnant or non-diffusing liquid.
3.3 Humidity:
Humidity is the amount of water vapor in the air. Water vapor is the gaseous state of water and is
invisible. Humidity indicates the likelihood of precipitation, dew, or fog.
Wet Bulb temperature can be measured by using a thermometer with the bulb wrapped in
wet muslin. The adiabatic evaporation of water from the thermometer and the cooling
effect is indicated by a "wet bulb temperature" lower than the "dry bulb temperature" in the
air.
The rate of evaporation from the wet bandage on the bulb, and the temperature difference
between the dry bulb and wet bulb, depends on the humidity of the air. The evaporation is
reduced when the air contains more water vapor.
The wet bulb temperature is always lower than the dry bulb temperature, but the two will be
identical at 100% relative humidity (when the air is at the saturation line).
∮ δQ/T ≤ 0
where δQ is the amount of heat absorbed by the system. The equality holds in the reversible
case and the inequality holds in the irreversible case. The reversible case is used to
introduce the entropy state function, because in a cyclic process the variation of a
state function is zero.
4.6 Entropy:
In thermodynamics, entropy is a measure of the number of specific ways in which a
thermodynamic system may be arranged, commonly understood as a measure of disorder.
According to the second law of thermodynamics the entropy of an isolated system never
decreases; such a system will spontaneously evolve toward thermodynamic equilibrium, the
configuration with maximum entropy. Systems that are not isolated may decrease in
entropy, provided they increase the entropy of their environment by at least that same
amount. Since entropy is a state function, the change in the entropy of a system is the same
for any process that goes from a given initial state to a given final state, whether the process
is reversible or irreversible. However, irreversible processes increase the combined
entropy of the system and its environment.
The change in entropy (ΔS) of a system was originally defined for a thermodynamically
reversible process as
ΔS = ∫ δQrev / T
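For a reversible isothermal process, T is constant and comes out of the integral, so the definition reduces to ΔS = Q_rev/T. A quick numeric sketch; the heat and temperature values are illustrative assumptions.

```python
# Entropy change for heat absorbed reversibly at constant temperature.

def entropy_change_isothermal(q_rev_joules, temp_k):
    """Delta S (J/K) = Q_rev / T for a reversible isothermal process."""
    return q_rev_joules / temp_k

# Illustrative: 1000 J absorbed reversibly at 300 K.
ds = entropy_change_isothermal(1000.0, 300.0)
print(f"{ds:.3f} J/K")
```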
4.9 Fugacity:
In chemical thermodynamics, the fugacity (f) of a real gas is an effective pressure which
replaces the true mechanical pressure in accurate chemical equilibrium calculations. It is
equal to the pressure of an ideal gas which has the same chemical potential as the real gas.
4.10 Gibbs’ Phase rule:
The Phase Rule describes the possible number of degrees of freedom in a (closed) system at
equilibrium, in terms of the number of separate phases and the number of chemical
constituents in the system. The Degrees of Freedom [F] is the number of independent
intensive variables (i.e. those that are independent of the quantity of material present) that
need to be specified in value to fully determine the state of the system. Typical such
variables might be temperature, pressure, or concentration.