Answer.
Binary arithmetic
Arithmetic in binary is much like arithmetic in other numeral systems. Addition,
subtraction, multiplication, and division can be performed on binary numerals.
Addition
(Figure: the circuit diagram for a binary half adder, which adds two bits together, producing sum and carry bits.)
The simplest arithmetic operation in binary is addition. Adding two single-digit
binary numbers is relatively simple, using a form of carrying:
0 + 0 → 0
0 + 1 → 1
1 + 0 → 1
1 + 1 → 0, carry 1 (since 1 + 1 = 0 + 1 × 10₂)
Adding two "1" digits produces a digit "0", while 1 will have to be added to the next
column. This is similar to what happens in decimal when certain single-digit
numbers are added together; if the result equals or exceeds the value of the radix
(10), the digit to the left is incremented:
5 + 5 → 0, carry 1 (since 5 + 5 = 0 + 1 × 10)
7 + 9 → 6, carry 1 (since 7 + 9 = 6 + 1 × 10)
This is known as carrying. When the result of an addition exceeds the value of a
digit, the procedure is to "carry" the excess amount divided by the radix (that is,
10/10) to the left, adding it to the next positional value. This is correct since the
next position has a weight that is higher by a factor equal to the radix. Carrying
works the same way in binary:
  1 1 1 1 1    (carried digits)
    0 1 1 0 1
+   1 0 1 1 1
-------------
= 1 0 0 1 0 0
In this example, two numerals are being added together: 01101₂ (13₁₀) and
10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost
column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom
of the rightmost column. The second column from the right is added: 1 + 0 + 1 =
10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1
+ 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding
like this gives the final answer 100100₂ (36₁₀).
When computers must add two numbers, the rule that x XOR y = (x + y) mod 2 for
any two bits x and y allows for very fast calculation as well.
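To make the connection between this XOR rule and the carry procedure concrete, here is a minimal Python sketch (illustrative, not part of the original text) of a ripple-carry adder built from full adders:

def full_adder(x, y, carry_in):
    # Sum bit: x XOR y XOR carry, using x XOR y = (x + y) mod 2
    s = x ^ y ^ carry_in
    # Carry out: a 1 must be passed to the next column
    carry_out = (x & y) | (carry_in & (x ^ y))
    return s, carry_out

def add_binary(a, b):
    # Ripple-carry addition of two equal-length bit lists (MSB first)
    carry, result = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    if carry:
        result.append(carry)
    return list(reversed(result))

# 01101 (13) + 10111 (23) = 100100 (36), matching the worked example above
print(add_binary([0, 1, 1, 0, 1], [1, 0, 1, 1, 1]))   # [1, 0, 0, 1, 0, 0]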
A simplification for many binary addition problems is the Long Carry Method or
Brookhouse Method of Binary Addition. This method is generally useful in any binary
addition where one of the numbers has a long string of “1” digits. For example, the
following large binary numbers can be added in two simple steps without multiple
carries from one place to the next:

    1 1 1 0 1 1 1 1 1 0    (958₁₀)
+   1 0 1 0 1 1 0 0 1 1    (691₁₀)
-----------------------
= 1 1 0 0 1 1 1 0 0 0 1    (1649₁₀)

(Carry the 1 one place past the end of the run of 1s, cross out the run and the digit
that was added to it, then add the remaining digits.)
Addition table

+ | 0    1
--+--------
0 | 0    1
1 | 1   10
Subtraction
Subtraction works in much the same way:
0 − 0 → 0
0 − 1 → 1, borrow 1
1 − 0 → 1
1 − 1 → 0
Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be
subtracted from the next column. This is known as borrowing. The principle is the
same as for carrying. When the result of a subtraction is less than 0, the least
possible value of a digit, the procedure is to "borrow" the deficit divided by the radix
(that is, 10/10) from the left, subtracting it from the next positional value.
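The borrowing rules can be sketched the same way; this illustrative Python snippet (not from the original text, and assuming equal-length inputs with a ≥ b) applies them column by column:

def subtract_binary(a, b):
    # Ripple-borrow subtraction a - b of bit lists (MSB first), a >= b
    borrow, result = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        d = x - y - borrow
        borrow = 1 if d < 0 else 0   # 0 - 1 -> 1, borrow 1 from the next column
        result.append(d % 2)
    return list(reversed(result))

# 10111 (23) - 01101 (13) = 01010 (10)
print(subtract_binary([1, 0, 1, 1, 1], [0, 1, 1, 0, 1]))   # [0, 1, 0, 1, 0]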
Multiplication
Multiplication in binary is similar to its decimal counterpart. Two numbers A and B
can be multiplied by partial products: for each digit in B, the product of that digit in
A is calculated and written on a new line, shifted leftward so that its rightmost digit
lines up with the digit in B that was used. The sum of all these partial products gives
the final result.
Since there are only two digits in binary, there are only two possible outcomes of
each partial multiplication: if the digit in B is 0, the partial product is 0; if the digit
in B is 1, the partial product is a copy of A.
For example, the binary numbers 1011 and 1010 are multiplied as follows:
        1 0 1 1    (A)
      × 1 0 1 0    (B)
      ---------
        0 0 0 0    ← Corresponds to a zero in B
+     1 0 1 1      ← Corresponds to a one in B
+   0 0 0 0
+ 1 0 1 1
---------------
= 1 1 0 1 1 1 0
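The partial-product method can be sketched in a few lines of Python (an illustration, not code from the original text):

def multiply_binary(a, b):
    # Shift-and-add multiplication of bit lists (MSB first):
    # each 1 in B contributes a copy of A shifted left to line up with it
    partials = []
    for shift, bit in enumerate(reversed(b)):
        if bit:
            partials.append([*a, *([0] * shift)])
    # Sum the partial products (plain integer addition used for brevity)
    total = sum(int("".join(map(str, p)), 2) for p in partials)
    return [int(d) for d in bin(total)[2:]]

# 1011 (11) x 1010 (10) = 1101110 (110), matching the worked example above
print(multiply_binary([1, 0, 1, 1], [1, 0, 1, 0]))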
Binary numbers can also be multiplied with bits after a binary point.

Division

Here, the divisor is 101₂, or 5 decimal, while the dividend is 11011₂, or 27 decimal.
The procedure is the same as that of decimal long division; here, the divisor 101₂
goes into the first three digits 110₂ of the dividend one time, so a "1" is written on
the top line. This result is multiplied by the divisor, and subtracted from the first
three digits of the dividend; the next digit (a "1") is included to obtain a new three-
digit sequence:
      1
  ___________
101 )11011
    −101
     ---
     011
The procedure is then repeated with the new sequence, continuing until the digits in
the dividend have been exhausted:
      101
  ___________
101 )11011
    −101
     ---
     011
    −000
     ---
     111
    −101
     ---
      10
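The same long-division procedure, one dividend bit at a time, as an illustrative Python sketch (bit lists are MSB first):

def divide_binary(dividend, divisor):
    # Binary long division: returns (quotient bits, remainder bits)
    d = int("".join(map(str, divisor)), 2)
    remainder, quotient = 0, []
    for bit in dividend:
        remainder = (remainder << 1) | bit   # bring down the next digit
        if remainder >= d:                   # divisor goes in: write 1, subtract
            quotient.append(1)
            remainder -= d
        else:                                # divisor does not go in: write 0
            quotient.append(0)
    return quotient, [int(x) for x in bin(remainder)[2:]]

# 11011 (27) / 101 (5) = 101 (5) remainder 10 (2), matching the example above
print(divide_binary([1, 1, 0, 1, 1], [1, 0, 1]))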
Q-2. Explain the Boolean rules with relevant examples.
Ans-
Boolean algebra finds its most practical use in the simplification of logic
circuits. If we translate a logic circuit's function into symbolic (Boolean) form,
and apply certain algebraic rules to the resulting equation to reduce the
number of terms and/or arithmetic operations, the simplified equation may
be translated back into circuit form for a logic circuit performing the same
function with fewer components. If an equivalent function can be achieved with
fewer components, the result will be increased reliability and decreased cost
of manufacture.
To this end, there are several rules of Boolean algebra presented in this
section for use in reducing expressions to their simplest forms. The identities
and properties already reviewed in this chapter are very useful in Boolean
simplification, and for the most part bear similarity to many identities and
properties of "normal" algebra. However, the rules shown in this section are
all unique to Boolean mathematics.
Example: A + AB = A
This rule may be proven symbolically by factoring an "A" out of the two terms, then applying
the rules of A + 1 = 1 and 1A = A to achieve the final result:
A + AB = A(1 + B) = A(1) = A
Please note how the rule A + 1 = 1 was used to reduce the (B + 1) term to 1. When a rule like "A
+ 1 = 1" is expressed using the letter "A", it doesn't mean it only applies to expressions
containing "A". What the "A" stands for in a rule like A + 1 = 1 is any Boolean variable or
collection of variables. This is perhaps the most difficult concept for new students to master in
Boolean simplification: applying standardized identities, properties, and rules to expressions not
in standard form.
For instance, the Boolean expression ABC + 1 also reduces to 1 by means of the "A + 1 = 1"
identity. In this case, we recognize that the "A" term in the identity's standard form can represent
the entire "ABC" term in the original expression.
The next rule, A + A′B = A + B, looks similar to the first one shown in this section, but is
actually quite different and requires a more clever proof:
A + A′B = (A + AB) + A′B = A + B(A + A′) = A + B(1) = A + B
Note how the last rule (A + AB = A) is used to "un-simplify" the first "A" term
in the expression, changing the "A" into an "A + AB". While this may seem
like a backward step, it certainly helped to reduce the expression to
something simpler! Sometimes in mathematics we must take "backward"
steps to achieve the most elegant solution. Knowing when to take such a
step and when not to is part of the art-form of algebra, just as a victory in a
game of chess almost always requires calculated sacrifices.
(1a) x · y = y · x                      (commutativity)
(1b) x + y = y + x
(1c) 1 + x = 1
(2a) x · (y · z) = (x · y) · z          (associativity)
(2b) x + (y + z) = (x + y) + z
(3a) x · (y + z) = (x · y) + (x · z)    (distributivity)
(3b) x + (y · z) = (x + y) · (x + z)
(4a) x · x = x                          (idempotence)
(4b) x + x = x
(5a) x · (x + y) = x                    (absorption)
(5b) x + (x · y) = x
(6a) x · x′ = 0                         (complementation)
(6b) x + x′ = 1
(7)  (x′)′ = x                          (involution)
(8a) (x · y)′ = x′ + y′                 (De Morgan's laws)
(8b) (x + y)′ = x′ · y′
These rules can be used to translate and simplify logic gates directly, although some of
them can themselves be derived from the simpler identities.
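Because each rule quantifies over only the values 0 and 1, every identity above can be checked exhaustively. A small Python sketch (not part of the original text) verifying a few of them:

from itertools import product

def check(name, lhs, rhs):
    # Compare both sides of an identity over all 0/1 assignments
    ok = all(lhs(*bits) == rhs(*bits) for bits in product((0, 1), repeat=2))
    print(f"{name}: {'holds' if ok else 'FAILS'}")

check("(5b) absorption: x + xy = x", lambda x, y: x | (x & y), lambda x, y: x)
check("(8a) De Morgan: (xy)' = x' + y'",
      lambda x, y: 1 - (x & y), lambda x, y: (1 - x) | (1 - y))
check("A + A'B = A + B", lambda a, b: a | ((1 - a) & b), lambda a, b: a | b)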
x y | x ∨ y
F F |   F
F T |   T
T F |   T
T T |   T

x y | x ∧ y
F F |   F
F T |   F
T F |   F
T T |   T

x | ¬x
F |  T
T |  F
Q-3 Explain Karnaugh maps with your own examples illustrating the
formation.
Ans-
Karnaugh map
(Figure: K-map showing minterms and boxes covering the desired minterms. The brown
region is the overlap of the red (square) and green regions.)
The binary digits in the map represent the function's output for any given
combination of inputs. So 0 is written in the upper leftmost corner of the map
because ƒ = 0 when A = 0, B = 0, C = 0, D = 0. Similarly we mark the bottom
right corner as 1 because A = 1, B = 0, C = 1, D = 0 gives ƒ = 1. Note that
the values are ordered in a Gray code, so that precisely one variable
changes between any pair of adjacent cells.
After the Karnaugh map has been constructed the next task is to find the
minimal terms to use in the final expression. These terms are found by
encircling groups of 1s in the map. The groups must be rectangular and must
have an area that is a power of two (i.e. 1, 2, 4, 8…). The rectangles should
be as large as possible without containing any 0s. The optimal groupings in
this map are marked by the green, red and blue lines. Note that groups may
overlap. In this example, the red and green groups overlap. The red group is
a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is
indicated in brown.
The grid is toroidally connected, which means that the rectangular groups
can wrap around the edges, so AD′ is a valid term, although not part of the
minimal set; it covers minterms 8, 10, 12, and 14.
Perhaps the hardest-to-visualize wrap-around term is B′D′, which covers the
four corners; it covers minterms 0, 2, 8, and 10.
Examples:
• f(A,B,C,D) = ∑ m(6, 8, 9, 10, 11, 12, 13, 14). Note: the values inside ∑ m are the
minterms to map (i.e. which rows have output 1 in the truth table).
Truth table

#    A B C D   f(A,B,C,D)
0    0 0 0 0       0
1    0 0 0 1       0
2    0 0 1 0       0
3    0 0 1 1       0
4    0 1 0 0       0
5    0 1 0 1       0
6    0 1 1 0       1
7    0 1 1 1       0
8    1 0 0 0       1
9    1 0 0 1       1
10   1 0 1 0       1
11   1 0 1 1       1
12   1 1 0 0       1
13   1 1 0 1       1
14   1 1 1 0       1
15   1 1 1 1       0
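The minimal cover usually quoted for this map is f = AC′ + AB′ + BCD′. The following illustrative Python sketch checks that this expression reproduces the truth table above (it does not, of course, prove minimality):

from itertools import product

minterms = {6, 8, 9, 10, 11, 12, 13, 14}

# Candidate minimal cover read off the map: f = AC' + AB' + BCD'
f = lambda a, b, c, d: ((a & ~c) | (a & ~b) | (b & c & ~d)) & 1

for i, (a, b, c, d) in enumerate(product((0, 1), repeat=4)):
    assert f(a, b, c, d) == (1 if i in minterms else 0)
print("AC' + AB' + BCD' reproduces ∑ m(6,8,9,10,11,12,13,14)")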
Parity generator
The picture shows a parity-generator function: the output is 1 (0) when the input
contains an even (or odd) number of 1s. This function is the worst case for a 4-input
Karnaugh map: there is no way to simplify it, because no 1 (0) in the map has
another 1 (0) as a neighbor.
Karnaugh map for 5 (five) inputs
Many people do not know how to create a map for five inputs. The next picture shows
how it can be done, and illustrates the different grouping rules that apply on a
five-input Karnaugh map:
The adder
The basic circuit is an adder. Below is an example of the basic component of an adder
(half-adders can also be used).
Q-4. With a neat labeled diagram explain the working of binary half
and full adders.
Ans-
Timing Diagrams:
Timing diagrams are the main key to understanding digital systems. They describe how
digital circuitry behaves over time, and they help in understanding how a digital circuit
or subcircuit should work or fit into a larger system. Learning how to read timing
diagrams will therefore improve your ability to work with digital systems and to
integrate them.
Below is a list of the most commonly used timing-diagram fragments:
• Low level to supply voltage:
• Bus signals – parallel signals transitioning from one level to another:
As you can see, timing diagrams together with the digital circuit can completely describe
the circuit's working. To understand timing diagrams, you should follow all the symbols
and transitions they contain.
You will find plenty of different symbols in timing diagrams; the exact set depends on the
designer or the circuit manufacturer. But once you understand the whole picture, you can
easily read any timing diagram, as in this example:
1. The clock signal must be distributed to every flip-flop in the circuit. As the
clock is usually a high-frequency signal, this distribution consumes a
relatively large amount of power and dissipates much heat. Even the flip-
flops that are doing nothing consume a small amount of power, thereby
generating waste heat in the chip.
2. The maximum possible clock rate is determined by the slowest logic path
in the circuit, otherwise known as the critical path. This means that every
logical calculation, from the simplest to the most complex, must complete in
one clock cycle. One way around this limitation is to split complex operations
into several simple operations, a technique known as 'pipelining'. This
technique is prominent within microprocessor design, and helps to improve
the performance of modern processors.
Complexity:
Although more practical than Karnaugh mapping when dealing with more
than four variables, the Quine–McCluskey algorithm also has a limited range
of use since the problem it solves is NP-hard: the runtime of the Quine–
McCluskey algorithm grows exponentially with the number of variables. It
can be shown that for a function of n variables the upper bound on the
number of prime implicants is 3ⁿ/n. If n = 32 there may be over 6.5 × 10¹⁵
prime implicants. Functions with a large number of variables have to be
minimized with potentially non-optimal heuristic methods, of which the
Espresso heuristic logic minimizer is the de facto standard.
     A B C D   f
m0   0 0 0 0   0
m1   0 0 0 1   0
m2   0 0 1 0   0
m3   0 0 1 1   0
m4   0 1 0 0   1
m5   0 1 0 1   0
m6   0 1 1 0   0
m7   0 1 1 1   0
m8   1 0 0 0   1
m9   1 0 0 1   x
m10  1 0 1 0   1
m11  1 0 1 1   1
m12  1 1 0 0   1
m13  1 1 0 1   0
m14  1 1 1 0   x
m15  1 1 1 1   1
One can easily form the canonical sum of products expression from this table, simply by
summing the minterms (leaving out don't-care terms) where the function evaluates to one:
f(A,B,C,D) = A′BC′D′ + AB′C′D′ + AB′CD′ + AB′CD + ABC′D′ + ABCD
Of course, that's certainly not minimal. So to optimize, all minterms that evaluate to one are first
placed in a minterm table. Don't-care terms are also added into this table, so they can be
combined with minterms:
At this point, one can start combining minterms with other minterms. If two
terms vary by only a single digit changing, that digit can be replaced with a
dash indicating that the digit doesn't matter. Terms that can't be combined
any more are marked with a "*". When going from Size 2 to Size 4, treat '-' as
a third bit value. Ex: -110 and -100 or -11- can be combined, but not -110
and 011-. (Trick: Match up the '-' first.)
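The combining step can be sketched as a small function. This is an illustration (not from the original text) of the "differ in exactly one position" rule, including the requirement that the dashes line up:

def combine(t1, t2):
    # Combine two implicant strings (e.g. '-110') that differ in exactly one
    # position, replacing that position with '-'; return None otherwise
    diff = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diff) == 1 and t1[diff[0]] != '-' and t2[diff[0]] != '-':
        i = diff[0]
        return t1[:i] + '-' + t1[i + 1:]
    return None

print(combine('-110', '-100'))   # '-1-0': the dashes match up
print(combine('-110', '011-'))   # None: they differ in two positions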
                  4   8   10  11  12  15   ⇒   A B C D
m(4,12)*          X               X             − 1 0 0
m(8,9,10,11)          X   X   X                 1 0 − −
m(8,10,12,14)         X   X       X             1 − − 0
m(10,11,14,15)*           X   X       X         1 − 1 −
Here, each of the essential prime implicants has been starred - the second
prime implicant can be 'covered' by the third and fourth, and the third prime
implicant can be 'covered' by the second and first, and neither is thus
essential. If a prime implicant is essential then, as would be expected, it is
necessary to include it in the minimized boolean equation. In some cases,
the essential prime implicants do not cover all minterms, in which case
additional procedures for chart reduction can be employed. The simplest
"additional procedure" is trial and error, but a more systematic way is
Petrick's Method. In the current example, the essential prime implicants do
not handle all of the minterms, so, in this case, one can combine the
essential implicants with one of the two non-essential ones to yield one of
these two equations:
f = BC′D′ + AB′ + AC
f = BC′D′ + AD′ + AC
Both of those final equations are functionally equivalent to the original, verbose expression.
Early computer buses were literally parallel electrical buses with multiple
connections, but the term is now used for any physical arrangement that
provides the same logical functionality as a parallel electrical bus. Modern
computer buses can use both parallel and bit-serial connections, and can be
wired in either a multidrop (electrical parallel) or daisy chain topology, or
connected by switched hubs, as in the case of USB.
Description of a bus:
At one time, "bus" meant an electrically parallel system, with electrical
conductors similar or identical to the pins on the CPU. This is no longer the
case, and modern systems are blurring the lines between buses and
networks.
Buses can be parallel buses, which carry data words in parallel on multiple
wires, or serial buses, which carry data in bit-serial form. The addition of
extra power and control connections, differential drivers, and data
connections in each direction usually means that most serial buses have
more conductors than the minimum of one used in the 1-Wire and UNI/O
serial buses. As data rates increase, the problems of timing skew, power
consumption, electromagnetic interference and crosstalk across parallel
buses become more and more difficult to circumvent. One partial solution to
this problem has been to double pump the bus. Often, a serial bus can
actually be operated at higher overall data rates than a parallel bus, despite
having fewer electrical connections, because a serial bus inherently has no
timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this.
Multidrop connections do not work well for fast serial buses, so most modern
serial buses use daisy-chain or hub designs.
Most computers have both internal and external buses. An internal bus
connects all the internal components of a computer to the motherboard (and
thus, the CPU and internal memory). These types of buses are also referred
to as a local bus, because they are intended to connect to local devices, not
to those in other machines or external to the computer. An external bus
connects external peripherals to the motherboard.
Network connections such as Ethernet are not generally regarded as buses,
although the difference is largely conceptual rather than practical. The
arrival of technologies such as InfiniBand and HyperTransport is further
blurring the boundaries between networks and buses. Even the lines
between internal and external are sometimes fuzzy: I²C can be used as both
an internal bus and an external bus (where it is known as ACCESS.bus), and
InfiniBand is intended to replace both internal buses like PCI as well as
external ones like Fibre Channel. In the typical desktop application, USB
serves as a peripheral bus, but it also sees some use as a networking utility
and for connectivity between different computers, again blurring the
conceptual distinction.
Answer.
The CPU, which is the heart of a computer, consists of registers, a control unit and
an arithmetic logic unit. The following tasks are to be performed by the CPU (a toy
sketch of this cycle follows the list):
1. Fetch instructions: The CPU must read instructions from the memory.
2. Interpret instructions: The instructions must be decoded to determine
what action is required.
3. Fetch data: The execution of an instruction may require reading data
from memory or an I/O module.
4. Process data: The execution of an instruction may require performing
some arithmetic or logical operations on data.
5. Write data: The results of an execution may require writing data to the
memory or an I/O module.
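As a toy illustration of these five tasks (a hypothetical one-accumulator machine, not any real instruction set), the instruction cycle can be sketched as a loop:

memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
          10: 5, 11: 7, 12: 0}
pc, acc = 0, 0
while True:
    opcode, operand = memory[pc]                   # 1. fetch, 2. interpret
    pc += 1
    if opcode == "LOAD":
        acc = memory[operand]                      # 3. fetch data
    elif opcode == "ADD":
        acc += memory[operand]                     # 4. process data
    elif opcode == "STORE":
        memory[operand] = acc                      # 5. write data
    elif opcode == "HALT":
        break
print(memory[12])   # 12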
Q.9: Explain the organization of 8085 processor with the relevant diagrams.
Ans- The Intel 8085 is an 8-bit microprocessor introduced by Intel in 1977.
It was binary-compatible with the more-famous Intel 8080 but required less
supporting hardware, thus allowing simpler and less expensive
microcomputer systems to be built.
The "5" in the model number came from the fact that the 8085 required only
a +5-volt (V) power supply rather than the +5V, -5V and +12V supplies the
8080 needed. Both processors were sometimes used in computers running
the CP/M operating system, and the 8085 later saw use as a microcontroller,
by virtue of its low component count. Both designs were eclipsed for desktop
computers by the compatible Zilog Z80, which took over most of the CP/M
computer market as well as taking a share of the booming home computer
market in the early-to-mid-1980s.
The 8085 had a long life as a controller. Once designed into such products as
the DECtape controller and the VT100 video terminal in the late 1970s, it
continued to serve for new production throughout the life span of those
products (generally longer than the product life of desktop computers).
The 8085 architecture follows the "von Neumann architecture", with a 16-bit
address bus and an 8-bit data bus. The 8085 used a multiplexed data bus, i.e. the
16-bit address was split between the 8-bit address bus and the 8-bit data bus
(to save on the number of pins).
Registers:
The 8085 can access 2¹⁶ (= 65,536) individual 8-bit memory locations, or in
other words, its address space is 64 KB. Unlike some other microprocessors
of its era, it has a separate address space for up to 2⁸ (= 256) I/O ports. It
also has a built-in register array, usually labeled A (Accumulator),
B, C, D, E, H, and L. Further special-purpose registers are the 16-bit Program
Counter (PC), Stack Pointer (SP), and 8-bit flag register F. The microprocessor
has three maskable interrupts (RST 7.5, RST 6.5 and RST 5.5), one non-
maskable interrupt (TRAP), and one externally serviced interrupt (INTR). The
RST n.5 interrupts refer to actual pins on the processor, a feature which
permitted simple systems to avoid the cost of a separate interrupt controller
chip.
Buses:
* Address bus - 16-line bus accessing 2¹⁶ memory locations (64 KB) of memory.
* Data bus - 8-line bus accessing one (8-bit) byte of data in one operation.
Data bus width is the traditional measure of processor bit designations, as
opposed to address bus width, resulting in the 8-bit microprocessor
designation.
* Control buses - carry the essential signals for various operations.
b) ARBITRATION METHOD
In all but the simplest systems, more than one module may need to control the bus.
For example, an I/O module may need to read or write directly to memory without
sending the data through the CPU. Because only one unit at a time can successfully
transmit data over the bus, some method of arbitration is needed. The various
methods can broadly be classified as centralized and distributed. In a centralized
method, a single hardware device, known as the bus controller or arbiter, is
responsible for allocating time on the bus; it may be part of the CPU or a separate
module.
In a distributed method, there is no central controller. Rather, each module
contains access-control logic, and the modules work together to share the bus.
With either method of arbitration, the goal is to designate one device, the
CPU or an I/O module, as master. The master can then initiate a data
transfer (e.g., a read or write) with some other device, which acts as slave for
this particular data exchange.
C) Bus timing:
The timing diagram example on the right describes the Serial Peripheral
Interface (SPI) Bus. Most SPI master nodes have the ability to set the clock
polarity (CPOL) and clock phase (CPHA) with respect to the data. This timing
diagram shows the clock for both values of CPOL as well as the values for the
two data lines (MISO & MOSI) for each value of CPHA. Note that when
CPHA=1 then the data is delayed by one-half clock cycle.
When a slave's SS line is high, both its MISO and MOSI lines should be high
impedance so as to avoid disrupting a transfer to a different slave. Prior to
SS being pulled low, the MISO & MOSI lines are indicated with a "z" for high
impedance. Also, prior to SS being pulled low, the "cycle #" row is
meaningless and is shown greyed-out.
Note that for CPHA=1 the MISO & MOSI lines are undefined until after the
first clock edge and are also shown greyed-out before that.
A more typical timing diagram has just a single clock and numerous data lines.

… the specified memory location. For a store operation, the processor sends the
address of the memory location where the data is to be written on the address
lines, places the data to be written on the data lines, and generates the
appropriate write signal to indicate the store operation.
The memory identifies the addressed memory location and writes the data
sent by the processor at that location, destroying the former contents of that
location. This operation is illustrated in Fig. 3.2.
The data items or instructions which are transferred between memory and
processor may be either a byte or a word. They can be transferred using a single
operation. Mainly this transfer takes place between processor registers and
memory locations. We know that the processor has a small number of built-in
registers. Each register is capable of holding a word of data. These registers are
either the source or the destination of a transfer to or from the memory.
R2 ← [LOC]
This expression states that the contents of memory location LOC are
transferred into the processor register R2.
When a key is pressed, the corresponding character code is stored in the DATA
IN register and the SIN status bit is set to indicate that a valid character code is
available in the DATA IN register. Under program control, the processor checks
the SIN bit, and when it finds SIN = 1, it reads the contents of the DATA IN
register. After completion of read operation SIN is automatically cleared to 0.
If another key is pressed, the corresponding character code is entered into
the DATA IN register, SIN is again set to 1 and the process repeats.
• Increment
• Decrement
• Negate (Change sign of operand)
Example: ADD R1, R2, R3
This expression states that the contents of processor registers R1 and R2 are
added and the result is stored in the register R3.
The multiplication instruction does the multiplication of two operands and
stores the result in the destination operand. On the other hand the division
instruction does the division of two operands and stores the result in the
destination operand. For example
• AND
• OR
• NOT
• EXOR
• Compare
• Shift
• Rotate
Example: AND R1, R2, R3
This expression states that the contents of processor registers R1 and R2 are
logically ANDed and the result is stored in the register R3.
These operations shift or rotate the bits of the operand right or left by some
specified number of bit positions.
Logical Shift: There are two logical shift instructions, logical shift left
(LShiftL) and logical shift right (LShiftR). These two instructions shift
an operand by a number of bit positions specified in a count operand
contained in the instruction. The general syntax for these instructions is:
LShiftL count, dst
LShiftR count, dst
Arithmetic Shift: In a logical shift operation we have seen that the vacant
positions created within the register by the shift are filled with zeroes. In an
arithmetic right shift it is necessary to repeat the sign bit as the fill-in bit for
the vacated position. This requirement on right shifting distinguishes
arithmetic shifts from logical shifts; otherwise the two shift operations are
very similar. Fig. 3.5 shows an example of an arithmetic right shift (AShiftR).
The arithmetic left shift (AShiftL) is the same as the logical left shift.
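The difference between the two right shifts can be sketched on an 8-bit value. This Python illustration (not from the original text) fills the vacated positions with zeroes for the logical shift and repeats the sign bit for the arithmetic shift:

def lshiftr(value, count):
    # Logical shift right: vacated positions are filled with zeroes
    return (value & 0xFF) >> count

def ashiftr(value, count):
    # Arithmetic shift right: the sign bit is repeated as the fill-in bit
    value &= 0xFF
    sign = value & 0x80
    for _ in range(count):
        value = (value >> 1) | sign
    return value

x = 0b10010110                  # a negative value in 8-bit two's complement
print(f"{lshiftr(x, 2):08b}")   # 00100101 (zero fill)
print(f"{ashiftr(x, 2):08b}")   # 11100101 (sign fill)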
Master of Computer Application (MCA) – Semester 1
MC0062 – Digital Systems, Computer Organization and Architecture
(Book ID: B0680 & B0684)
Assignment Set – 1
Answer.
Gated SR Latch
Two possible circuits for gated SR latch are shown in Figure 1. The graphical symbol for gated SR latch is
shown in Figure 2.
(Figure 1: (a) gated SR latch with NOR and AND gates; (b) gated SR latch with NAND gates.)
(Figure 2: the graphical symbol for the gated SR latch, with inputs S and R, clock Clk, and outputs Q and Q̅.)
Clk  S  R | Q⁺   Comments
 0   ×  × | Q    No change; stable states Q = 0, Q̅ = 1 or Q = 1, Q̅ = 0
 1   0  0 | Q    No change; stable states Q = 0, Q̅ = 1 or Q = 1, Q̅ = 0
 1   0  1 | 0    Reset
 1   1  0 | 1    Set
 1   1  1 | ×    Avoid this setting
Figure 3 shows an example timing diagram for gated SR latch (assuming negligible propagation
delays through the logic gates). Notice that during the last clock cycle when Clk = 1, both R = 1 and S = 1.
So as Clk returns to 0, the next state will be uncertain. This explains why we need to avoid the setting in
the last row of the above characteristic table in normal operation of a gated SR latch.
Figure 3. An example timing diagram for gated SR latch.
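The characteristic table can also be written as a next-state function. The following Python sketch (illustrative only) reproduces the behavior, including the setting that must be avoided:

def gated_sr_next(clk, s, r, q):
    # Next state of a gated SR latch, following the characteristic table
    if clk == 0 or (s, r) == (0, 0):
        return q                # no change
    if (s, r) == (0, 1):
        return 0                # reset
    if (s, r) == (1, 0):
        return 1                # set
    raise ValueError("S = R = 1 must be avoided: the next state is uncertain")

q = 0
for clk, s, r in [(1, 1, 0), (0, 1, 1), (1, 0, 1)]:
    q = gated_sr_next(clk, s, r, q)
    print(q)                    # 1, 1, 0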
Gated D Latch
A possible circuit for gated D latch is shown in Figure 4. The graphical symbol for gated D latch is shown
in Figure 5.
(Figure 4: circuit for the gated D latch — a gated SR latch with S driven by the data input D and R by its complement.)
(Figure 5: the graphical symbol for the gated D latch.)
Clk  D | Q⁺   Comments
 0   × | Q    No change
 1   0 | 0
 1   1 | 1
Figure 6 shows an example timing diagram for gated D latch (assuming negligible propagation delays
through the logic gates).
(Figure: an example timing diagram for a negative-edge-triggered master-slave D flip-flop, showing Clock, D, the master output Qm, and the slave output Q = Qs.)
A Positive-edge-triggered D Flip-Flop
Besides building a positive-edge-triggered master-slave D flip-flop as mentioned in our preceding
discussion, we can accomplish the same task by a circuit presented in Figure 10. It requires only six
NAND gates and, hence, fewer logic gates.
(Figure 10: a positive-edge-triggered D flip-flop built from six NAND gates, numbered 1-6; the output latch comprises gates 5 and 6, and the internal signals are labeled P1-P4.)
The operation of the circuit in Figure 10 is as follows. When Clock = 0, the outputs of gates 2 and 3 are
high. Thus P1 = P2 = 1, which maintains the output latch, comprising gates 5 and 6, in its present state.
At the same time, the signal P3 is equal to D, and P4 is equal to its complement D̅. When Clock changes
to 1, the following changes take place. The values of P3 and P4 are transmitted through gates 2 and 3 to
cause P1 = D̅ and P2 = D, which sets Q = D and Q̅ = D̅. To operate reliably, P3 and P4 must be stable
when Clock changes from 0 to 1. Hence the setup time of the flip-flop is equal to the delay from the D
input through gates 4 and 1 to P3. The hold time is given by the delay through gate 3 because once P2
is stable, the changes in D no longer matter.
For proper operation it is necessary to show that, after Clock changes to 1, any further changes in D
will not affect the output latch as long as Clock = 1. We have to consider two cases. Suppose first that D =
0 at the positive edge of the clock. Then P2 = 0, which will keep the output of gate 4 equal to 1 as long as
Clock = 1, regardless of the value of the D input. The second case is if D = 1 at the positive edge
of the clock. Then P1 = 0, which forces the outputs of gates 1 and 3 to be equal to 1, regardless of the D
input. Therefore, the flip-flop ignores changes in the D input while Clock = 1.
Figure 11 shows the graphical symbol for a positive-edge-triggered D flip-flop.
Figure 14. The graphical symbol for master−slave D flip−flop with Clear and Preset.
A similar modification can be done on the positive-edge-triggered D flip-flop of Figure 10, as indicated
in Figure 15. A graphical symbol for this flip-flop is shown in Figure 16. Again, both Clear and Preset
inputs are active low. They do not disturb the flip-flop when they are equal to 1.
Figure 16. The graphical symbol for positive−edge−triggered D flip−flop with Clear and Preset.
In the circuits in Figures 13 and 15, the effect of a low signal on either the Clear or Preset input is
immediate. For example, if Clear = 0 then the flip-flop goes into the state Q = 0 immediately, regardless of
the value of the clock signal. In such a circuit, where the Clear signal is used to clear a flip-flop without
regard to the clock signal, we say that the flip-flop has an asynchronous clear. In practice, it is often
preferable to clear the flip-flops on the active edge of the clock. Such synchronous clear can be
accomplished as shown in Figure 17. The flip-flop operates normally when the Clear input is equal to 1.
But if Clear goes to 0, then on the next positive edge of the clock the flip-flop will be cleared to 0.
(Figure 17: circuit for a D flip-flop with synchronous clear.)
The input signals J and K are connected to the gated "master" SR flip-flop
which "locks" the input condition while the clock (Clk) input is "HIGH" at logic
level "1". As the clock input of the "slave" flip-flop is the inverse
(complement) of the "master" clock input, the "slave" SR flip-flop does not
toggle. The outputs from the "master" flip-flop are only "seen" by the gated
"slave" flip-flop when the clock input goes "LOW" to logic level "0". When the
clock is "LOW", the outputs from the "master" flip-flop are latched and any
additional changes to its inputs are ignored. The gated "slave" flip-flop now
responds to the state of its inputs passed over by the "master" section. Then
on the "Low-to-High" transition of the clock pulse the inputs of the "master"
flip-flop are fed through to the gated inputs of the "slave" flip-flop and on the
"High-to-Low" transition the same inputs are reflected on the output of the
"slave" making this type of flip-flop edge or pulse-triggered.
Then, the circuit accepts input data when the clock signal is "HIGH", and
passes the data to the output on the falling edge of the clock signal. In other
words, the master-slave JK flip-flop is a "synchronous" device, as it only
passes data with the timing of the clock signal.
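The behavior at each active clock edge (hold, reset, set, toggle) is summarized by the JK characteristic equation Q⁺ = JQ′ + K′Q; a minimal Python sketch (not from the original text):

def jk_next(j, k, q):
    # Characteristic equation Q+ = JQ' + K'Q of the JK flip-flop
    return ((j & ~q) | (~k & q)) & 1

q = 0
for j, k in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
    q = jk_next(j, k, q)
    print(q)   # 1 (set), 1 (hold), 0 (toggle), 1 (toggle), 0 (reset)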
Answer.
A single flip-flop gives two output states and is referred to as a mod-2
counter. With two flip-flops, four output states can be counted, in ascending or
descending order, and the counter is referred to as a mod-4 or mod-2² counter.
With n flip-flops, mod-2ⁿ counting is possible, of either ascending or descending
type.
To design an asynchronous counter to count till M (a mod-M counter) where M
is not a power of 2, the following procedure is used (a small sketch of it follows
the list):
• Find the number of flip-flops required: n = log₂ M. If M ≠ 2ⁿ the calculated
value is not an integer, so select n by rounding up to the next integer.
• First write the sequence of counting till M, either in ascending or in
descending order.
• Tabulate the value at which the flip-flops must be reset in a mod-M count.
• From the tabulated value, find the flip-flop outputs which are 1 at the
reset value.
• Tap the outputs from these flip-flops and feed them to a NAND gate whose
output is connected to the clear pin.
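A minimal Python sketch of the first and last steps of this procedure (illustrative only; Q0 denotes the least significant flip-flop output):

import math

def mod_m_counter_design(m):
    # Number of flip-flops: round log2(M) up when M is not a power of 2
    n = math.ceil(math.log2(m))
    # The counter must be cleared the moment it reaches the value M
    reset_state = format(m, f"0{n}b")
    # Outputs that are 1 at the reset value feed the NAND gate on the clear pin
    taps = [f"Q{n - 1 - i}" for i, bit in enumerate(reset_state) if bit == "1"]
    return n, reset_state, taps

# A mod-10 (decade) counter: 4 flip-flops, reset on 1010, NAND fed by Q3 and Q1
print(mod_m_counter_design(10))   # (4, '1010', ['Q3', 'Q1'])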
Ans.
Serial-in, serial-out shift registers delay data by one clock time for each
stage. They will store a bit of data for each register. A serial-in, serial-out
shift register may be one to 64 bits in length, longer if registers or packages
are cascaded.
Below is a single-stage shift register receiving data which is not synchronized
to the register clock. The "data in" at the D pin of the type D FF (flip-flop)
does not change levels when the clock changes from low to high. We may want
to synchronize the data to a system-wide clock in a circuit board to improve
the reliability of a digital logic circuit.
The obvious point (as compared to the figure below) illustrated above is that
whatever "data in" is present at the D pin of a type D FF is transferred from
D to output Q at clock time. Since our example shift register uses positive
edge sensitive storage elements, the output Q follows the D input when the
clock transitions from low to high as shown by the up arrows on the diagram
above. There is no doubt what logic level is present at clock time because
the data is stable well before and after the clock edge. This is seldom the
case in multi-stage shift registers. But, this was an easy example to start
with. We are only concerned with the positive, low to high, clock edge. The
falling edge can be ignored. It is very easy to see Q follow D at clock time
above. Compare this to the diagram below where the "data in" appears to
change with the positive clock edge.
Since "data in" appears to changes at clock time t1 above, what does the
type D FF see at clock time? The short over simplified answer is that it sees
the data that was present at D prior to the clock. That is what is transfered
to Q at clock time t1. The correct waveform is QC. At t1 Q goes to a zero if it is
not already zero. The D register does not see a one until time t2, at which
time Q goes high.
Since the data present at D is clocked to Q at clock time, and Q cannot
change until the next clock time, the D FF delays data by one clock period,
provided that the data is already synchronized to the clock. The QA waveform
is the same as "data in" with a one-clock-period delay.
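The one-clock-per-stage delay can be sketched as follows (illustrative Python with idealized timing, ignoring setup/hold issues):

def shift_register(data_in, stages=4):
    # Serial-in, serial-out: on each clock edge every stage copies the
    # stage before it, so data emerges delayed by one clock per stage
    reg = [0] * stages
    out = []
    for bit in data_in:
        out.append(reg[-1])          # serial output just before the edge
        reg = [bit] + reg[:-1]       # each Q follows the D to its left
    return out

print(shift_register([1, 0, 1, 1, 0, 0, 0, 0]))   # [0, 0, 0, 0, 1, 0, 1, 1]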
Answer.
Main Memory
The main memory stores data and instructions. Main memories are usually
built from dynamic ICs known as dynamic RAMs (DRAMs). Semiconductor ICs
can also implement static memories, referred to as static RAMs (SRAMs).
SRAMs are faster, but their cost per bit is higher. They are often used to build
caches.
Types of Random-Access Semiconductor Memory
Dynamic RAM: stores each bit as charge on a capacitor, and therefore requires
periodic refreshing. Static RAM: stores each bit in a flip-flop of logic gates;
applying power is enough, and no refreshing is needed. A dynamic RAM cell is
simpler and hence smaller than a static RAM cell, and is therefore denser and
less expensive, but it requires supporting refresh circuitry. Static RAMs are
faster than dynamic RAMs.
ROM: The data is actually wired in the factory. Can never be altered.
PROM: Programmable ROM. It can only be programmed once after its
fabrication. It requires a special device to program.
EPROM: Erasable Programmable ROM. It can be programmed multiple times,
but the whole capacity must be erased by ultraviolet radiation before a new
programming activity.
EEPROM: Electrically Erasable Programmable ROM. Erased and programmed
electrically, and can be partially programmed. A write operation takes
considerably longer than a read operation.
Each more functional ROM is more expensive to build and has a smaller
capacity than less functional ROMs.
Ans.
There are several replacement algorithms that require less overhead than
the LRU method. One method is to remove the oldest block from a full set when
a new block must be brought in; this method is referred to as FIFO. In this
technique no updating is needed when a hit occurs. However, because the
algorithm does not consider the recent pattern of accesses to blocks in the
cache, it is not as effective as the LRU approach in choosing the best block to
remove. There is another method, called least frequently used (LFU), that
replaces the block in the set which has experienced the fewest references. It
is implemented by associating a counter with each slot. Yet another, the
simplest algorithm, called random replacement, is to choose the block to be
overwritten at random.
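A sketch of these three policies acting on a single cache set (illustrative Python; the class and method names are hypothetical, not from any real cache library):

import random
from collections import OrderedDict

class CacheSet:
    # One set of a set-associative cache with FIFO, LFU or random replacement
    def __init__(self, size, policy):
        self.size, self.policy = size, policy
        self.blocks = OrderedDict()            # block -> reference count

    def access(self, block):
        if block in self.blocks:               # hit: FIFO needs no update
            self.blocks[block] += 1            # LFU's per-slot counter
            return "hit"
        if len(self.blocks) >= self.size:      # miss on a full set: evict
            if self.policy == "FIFO":
                victim = next(iter(self.blocks))                 # oldest block
            elif self.policy == "LFU":
                victim = min(self.blocks, key=self.blocks.get)   # fewest refs
            else:
                victim = random.choice(list(self.blocks))        # random
            del self.blocks[victim]
        self.blocks[block] = 1
        return "miss"

s = CacheSet(2, "LFU")
print([s.access(b) for b in ["A", "A", "B", "C", "A"]])
# ['miss', 'hit', 'miss', 'miss', 'hit'] -- C evicts B, the least referenced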
Functional Requirements
Control Signals
Answer.
Functional Requirements
The functional requirements of the control unit are those functions that the
control unit must perform, and these are the basis for the design and
implementation of the control unit.
A three-step process leads to the characterization of the control unit:
• Define the basic elements of the processor
• Describe the micro-operations that the processor performs
• Determine the functions that the control unit must perform to cause
the micro-operations to be performed.
For the control unit to perform its function, it must have inputs that allow it
to determine the state of the system and outputs that allow it to control the
behavior of the system. These are the external specifications of the control
unit. Internally, the control unit must have the logic required to perform
sequencing and execution functions.
There are two major types of control organization: hardwired control and
microprogrammed control. In the hardwired organization, the control logic is
implemented with gates, flip-flops, decoders, and other digital circuits. It has
the advantage that it can be optimized to produce a fast mode of operation.
In the microprogrammed organization, the control information is stored in a
control memory. The control memory is programmed to initiate the required
sequence of microoperations. A hardwired control, as the name implies, re-
quires changes in the wiring among the various components if the design
has to be modified or changed. In the microprogrammed control, any
required changes or modifications can be done by updating the
microprogram in control memory.
The block diagram of the control unit is shown in Fig. 5-6. It consists of two
decoders, a sequence counter, and a number of control logic gates. An
instruction read from memory is placed in the instruction register (IR). The
position of this register in the common bus system is indicated in Fig. 5-4.
The instruction register is shown again in Fig. 5-6, where it is divided into
three parts: the I bit, the operation code, and bits 0 through 11. The
operation code in bits 12 through 14 is decoded with a 3 × 8 decoder. The
eight outputs of the decoder are designated by the symbols D0 through D7.
The subscripted decimal number is equivalent to the binary value of the
corresponding operation code. Bit 15 of the instruction is transferred to a
flip-flop designated by the symbol I. Bits 0 through 11 are applied to the
control logic gates. The 4-bit sequence counter can count in binary from 0
through 15. The outputs of the counter are decoded into 16 timing signals T0
through T15.
Q. 12. Explain:
CISC Characteristics
The design of an instruction set for a computer must take into consideration
not only machine language constructs, but also the requirements imposed
on the use of high-level programming languages. The translation from high-
level to machine language programs is done by means of a compiler
program. One reason for the trend to provide a complex instruction set is
the desire to simplify the compilation and improve the overall computer
performance. The task of a compiler is to generate a sequence of machine
instructions for each high-level language statement. The task is simplified if
there are machine instructions that implement the statements directly. The
essential goal of a CISC architecture is to attempt to provide a single
machine instruction for each statement that is written in a high-level
language. Examples of CISC architectures are the Digital Equipment
Corporation VAX computer and the IBM 370 computer.
RISC Characteristics