
While each logical element or condition must always have a logic value of either "0" or "1", we also need to have ways to combine different logical signals or conditions to provide a logical result.

For example, consider the logical statement: "If I move the switch on the wall up, the light will turn on." At first glance, this seems to be a correct statement. However, if we look at a few other factors, we realize that there's more to it than this. In this example, a more complete statement would be: "If I move the switch on the wall up and the light bulb is good and the power is on, the light will turn on."

If we look at these two statements as logical expressions and use logical terminology, we can reduce the first statement to:

Light = Switch

This means nothing more than that the light will follow the action of the switch, so that when the switch is up/on/true/1 the light will also be on/true/1. Conversely, if the switch is down/off/false/0 the light will also be off/false/0.

Looking at the second version of the statement, we have a slightly more complex expression:

Light = Switch and Bulb and Power

Normally, we use symbols rather than words to designate the AND function that we're using to combine the separate variables of Switch, Bulb, and Power in this expression. The symbol normally used is a dot, which is the same symbol used for multiplication in some mathematical expressions. Using this symbol, our three-variable expression becomes:

Light = Switch · Bulb · Power

When we deal with logical circuits (as in computers), we not only need to deal with logical functions; we also need some special symbols to denote these functions in a logical diagram. There are three fundamental logical operations, from which all other functions, no matter how complex, can be derived. These functions are named AND, OR, and NOT. Each of these has a specific symbol and a clearly-defined behavior, as follows:
The AND Gate

The AND gate implements the AND function. With the gate shown to the left, both inputs must have logic 1 signals applied to them in order for the output to be a logic 1. With either input at logic 0, the output will be held at logic 0. There is no limit to the number of inputs that may be applied to an AND function, so there is no functional limit to the number of inputs an AND gate may have. However, for practical reasons, commercial AND gates are most commonly manufactured with 2, 3, or 4 inputs. A standard Integrated Circuit (IC) package contains 14 or 16 pins, for practical size and handling. A standard 14-pin package can contain four 2-input gates, three 3-input gates, or two 4-input gates, and still have room for two pins for power supply connections.

The OR Gate

The OR gate is in some ways the reverse of the AND gate. The OR function, like its verbal counterpart, allows the output to be true (logic 1) if any one or more of its inputs are true. Verbally, we might say, "If it is raining OR if I turn on the sprinkler, the lawn will be wet." Note that the lawn will still be wet if the sprinkler is on and it is also raining. This is correctly reflected by the basic OR function. In symbols, the OR function is designated with a plus sign (+). In logical diagrams, the symbol to the left designates the OR gate. As with the AND function, the OR function can have any number of inputs. However, practical commercial OR gates are mostly limited to 2, 3, and 4 inputs, as with AND gates.

The NOT Gate, or Inverter

The inverter is a little different from AND and OR gates in that it always has exactly one input as well as one output. Whatever logical state is applied to the input, the opposite state will appear at the output. The NOT function, as it is called, is necessary in many applications and highly useful in others. A practical verbal application might be:

The door is NOT locked = You may enter

The NOT function is denoted by a horizontal bar over the value to be inverted, as shown in the figure to the left. In some cases a single quote mark (') may also be used for this purpose: 0' = 1 and 1' = 0. For greater clarity in some logical expressions, we will use the overbar most of the time. In the inverter symbol, the triangle actually denotes only an amplifier, which in digital terms means that it "cleans up" the signal but does not change its logical sense. It is the circle at the output which denotes the logical inversion. The circle could have been placed at the input instead, and the logical meaning would still be the same.
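Since these three behaviors fully define the gates, they can be sketched as small Python functions (Python is my choice of illustration language, not the document's; the function names are mine):

```python
# A minimal sketch of the three fundamental gates as Python functions.
# Logic values are the integers 0 and 1.

def AND(*inputs):
    """Output is 1 only when every input is 1 (any number of inputs)."""
    return 1 if all(i == 1 for i in inputs) else 0

def OR(*inputs):
    """Output is 1 when one or more inputs are 1."""
    return 1 if any(i == 1 for i in inputs) else 0

def NOT(a):
    """Output is the opposite of the input."""
    return 1 - a

# Light = Switch AND Bulb AND Power, from the example above
print(AND(1, 1, 1))  # 1: switch up, bulb good, power on -> light on
print(AND(1, 0, 1))  # 0: a bad bulb keeps the light off
print(OR(0, 1))      # 1
print(NOT(0))        # 1
```

Note that, just as the text says, there is no functional limit on the number of inputs to AND or OR here.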

An Introduction to the Microprocessor


The key to a microprocessor is its ability to communicate with itself. It is a sequential logic circuit. Previously, we have only been talking about combinational circuits, whose outputs are only a function of the inputs. In a sequential circuit like a microprocessor, the output is a function of the input as well as the current, or previous, state. As we build parts of the microprocessor, we will repeatedly use our answers from one part as an input to something else, or even loop them back to the beginning.

A typical microprocessor is organized like this:

Here the ROM section contains the read-only memory, while the RWM section contains read/write memory. There are two main types of RWM, RAM and sequential access memory. RAM is the key memory where we store the data that we are working on, while sequential access memory is memory like a magnetic tape or CD-ROM disc. In both of these types of memory, each data location is represented by a binary address. This address is how we are able to reach the contents of the memory, by calling its address through our address bus. The I/O section is our input/output section that lets us talk to the rest of the universe, as well as receive input.

Our basic microprocessor, like most, is based on a fetch/execute cycle. When the microprocessor is in the fetch state it must receive a command from one of its memory locations, or from the I/O lines. Once this instruction is fetched, our microprocessor switches into the execute state. In this state the processor induces changes in memory at various locations, as the command instructs. There is only one exception to this otherwise infinite process: halt. After a halt call, a reset is necessary to place the processor back into a fetch state. This can be used more extensively in multiprocessor designs, although there are many different implementations there. For now, we will use the halt line only in relation to our memory stack.

Let's take a closer look at just how the processor actually fetches an instruction. To fetch an instruction, our microprocessor sends out the address of the next instruction on the address bus. The external memory returns the instruction along the data bus. The internal Program Counter is responsible for determining the next instruction address. The fetch process therefore begins by transferring the contents of the Program Counter to an internal address register. The internal address register is responsible for storing this information and then feeding the address bus.
Our now-fetched instruction then enters the microprocessor along our data bus. The fetched instruction first passes into a temporary memory location known as the instruction register. It then flows into the instruction decoder. This decoder is responsible for translating our instruction into the binary control values which are used by the control unit of the microprocessor. The instruction that is passed may contain both an operation and data to operate on. This is usually handled within the internal processor memory, or cache, but can be placed in external memory as well. The control unit then issues the appropriate signals to the Arithmetic Logic Unit, or ALU. This passes the correct operands to the ALU as well as any other parts of the microprocessor or I/O system.

The actual microprocessor computation takes place inside the processor's ALU and several accumulators. The accumulator holds one operand while the temporary accumulator holds the other operand. The ALU performs a specified arithmetic or logical operation on these operands and stores the result in the accumulator. The ALU also generates a set of flag signals which are passed on to a (you guessed it) Flag register. These flags indicate certain specific results, such as a result of 0 or an overflow error.

The next major stage that we will be covering will be to make the accumulators and ALU talk to memory, and use PLAs. After we finish with that, our introduction to our processor will be complete. This will lead us into the more fascinating world of actually building a microprocessor.
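The fetch/execute cycle described above can be sketched as a toy Python loop. Everything here (the four opcodes, the memory layout, the single accumulator) is an invented stand-in for illustration, not the actual processor built later in this document:

```python
# A toy sketch of the fetch/execute cycle. The tiny instruction set
# (LOAD/ADD/STORE/HALT) and the memory layout are hypothetical.

LOAD, ADD, STORE, HALT = 0, 1, 2, 3   # invented opcodes

def run(memory):
    pc = 0    # Program Counter: address of the next instruction
    acc = 0   # accumulator: holds one operand / the result
    while True:
        # Fetch: the PC feeds the address bus; memory returns the instruction
        opcode, operand = memory[pc]
        pc += 1
        # Execute: the decoded instruction drives changes to the accumulator/memory
        if opcode == LOAD:
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            return memory   # only a reset would put us back in the fetch state

# Instructions are (opcode, operand) pairs; data lives at addresses 10-12.
mem = {0: (LOAD, 10), 1: (ADD, 11), 2: (STORE, 12), 3: (HALT, 0),
       10: 4, 11: 3, 12: 0}
print(run(mem)[12])  # 7
```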

Boolean Algebra-- The Fundamental Backbone


Forget the math you learned in Elementary School

Although there were some wonderfully useful skills that they taught you as a child, the mathematics you learned does not hold true here. Most people starting in microelectronic engineering (or semiconductor design) stumble with these concepts. The very start of Boolean algebra is actually quite simple; in fact, it only involves 0's and 1's. Boolean algebra is a mathematical way to solve and optimize logic problems involving only true (1) and false (0). Why only 0's and 1's? Because the electronic equipment that we will be using cannot read complex voltages or values; to a computer there is only on and off. On is symbolically true. In an actual computer this is usually any voltage that is greater than the threshold value (typically 2.2V) up to the maximum voltage (typically 5V). This range of voltages to represent true is necessary to allow for small power losses caused by resistance in the circuit, and other fluctuations. Any value that is below the threshold value is considered to be false (0). By using voltages to represent true and false values, we make it possible to feed electricity through a circuit and activate different stages. But we will touch more on this later.

The very first thing that we need to learn to do using Boolean algebra is count. The number system of 0's and 1's is known as binary. Binary is very simple to learn. As you begin, you just keep carrying over the numbers. Here is an example, counting from zero to seven:

000  001  010  011  100  101  110  111

For our purposes, however many digits (bits) you start with, that is as high as you can count. Now that we are used to only using 0's and 1's, we need to learn some very simple techniques, like addition and subtraction. Don't be fooled: it is not as simple as you might think. The first step is addition, and this should be very familiar. In fact, it is the same as the decimal math that you are used to. Here are a few examples:
  0001      0100       1000
+ 0010    + 0110     + 1000
------    ------     ------
  0011      1010     1 0000
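These examples can be reproduced with a short Python sketch of fixed-width addition (the function name is my own):

```python
# A sketch of fixed-width binary addition: the word holds only `bits`
# bits, so anything above that comes out as a separate carry flag.

def add_fixed_width(a, b, bits=4):
    """Add two bit strings; return (sum as a bit string, carry flag)."""
    total = int(a, 2) + int(b, 2)
    carry = total >> bits          # 1 if the result overflowed the word
    total &= (1 << bits) - 1       # keep only the low `bits` bits
    return format(total, f'0{bits}b'), carry

print(add_fixed_width('0001', '0010'))  # ('0011', 0)
print(add_fixed_width('0100', '0110'))  # ('1010', 0)
print(add_fixed_width('1000', '1000'))  # ('0000', 1)  <- "0000" with a carry
```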

Pay close attention to the final case, where there is a "carry." This last floating 1 is too high to be included in the sequence, so our answer is "0000" with a carry, not "10000." This distinction is very important in Boolean algebra, since in hardware each digit will be represented as a bit, and you cannot increase the number of bits your processor can handle at a time. That is, if you have an 8-bit processor and you have a carry, you must have a way to handle that 9th digit, because only 8 can fit on the bus at one time. You don't need to worry about a bus yet, however. For now, just know that you cannot have more bits than the number that you started with.

This leads us into subtraction. This once-simple operation is not necessarily so easy anymore. Our processors really only do addition. In order to subtract, we need to add a negative. This is simple in concept, but here is where it gets tricky: there are no negatives in Boolean algebra. This is why the two's complement number system was devised. Two's complement is a way that lets us represent the entire numeric range in binary. This is done by taking the most significant bit and using it much like a +/- sign. A "1" preceding a number means that number is negative, while a "0" means it is positive. Here are some examples:

Decimal   Binary
   3    =   011
   2    =   010
   1    =   001
   0    =   000
  -1    =   111
  -2    =   110
  -3    =   101
  -4    =   100

Keep in mind that when we are dealing with two's complement numbers for subtraction, 111 does not represent the decimal "7", but rather "-1." However, when we are adding, or not dealing with two's complement, it does. Are you still with me? I hope so, because I'm sure by now you see the problem with two's complement. If your processor can only handle 8 bits, then in order to handle negative values it can really only compute 7 numeric bits plus one sign bit! This limits the maximum word length that a CPU can handle. However, for now, we won't worry about that. What is more important is a way to have the processor compute the two's complement number for itself. Luckily, this is very simple. There is one simple process that converts normal binary to two's complement, and vice versa. All that you need to do is take your number, let's say 101, invert every bit and add 1, like this:
  0101   (number in unsigned binary)
  1010   (bits inverted)
+ 0001   (add a 1)
------
  1011   (your finished two's complement number)

You can see from here that this number is now a negative 101. Notice how I added a bit to the front to hold the sign. This gives us a 4-bit number. For our first processor, we will leave it as a 4-bit design, although adding more bits would be quite simple. The main advantage of this conversion scheme is that it works both ways. Say we are done with two's complement processing and would like to return to normal; just apply the method again:

  1011   (number in two's complement)
  0100   (bits inverted)
+ 0001   (add a 1)
------
  0101   (your finished unsigned binary number)
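Both directions of the invert-and-add-one rule can be checked with a small Python sketch (the function name and the 4-bit default are my own):

```python
# The invert-and-add-one rule: flip every bit, add 1, keep `bits` bits.
# Applying it twice returns the original pattern.

def twos_complement(bits_str, bits=4):
    """Invert every bit, add 1, and keep the result to `bits` bits."""
    mask = (1 << bits) - 1
    inverted = int(bits_str, 2) ^ mask         # flip all bits
    return format((inverted + 1) & mask, f'0{bits}b')

print(twos_complement('0101'))  # '1011'  (matching the worked example)
print(twos_complement('1011'))  # '0101'  (applying it again gets us back)
```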

Simple! Now that we understand how two's complement works, we can move on to subtraction... see, I told you it was not what you were used to! Okay, let's start with an example.
  0100                    0100                     0100
- 0011      ---->      + -0011       ---->       + 1101
------                 -------                   ------
                                                 1 0001

First convert this to an addition problem, then represent our negative as two's complement, then complete the addition!
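The same recipe (negate with two's complement, add, discard the carry) can be sketched as a Python function, with a hypothetical name of my own:

```python
# Subtraction by adding the two's complement of the subtrahend.
# The carry out of the word is simply discarded.

def subtract(a, b, bits=4):
    """Compute a - b on bit strings via two's complement addition."""
    mask = (1 << bits) - 1
    neg_b = ((int(b, 2) ^ mask) + 1) & mask   # two's complement of b
    result = (int(a, 2) + neg_b) & mask       # add, drop the carry out
    return format(result, f'0{bits}b')

print(subtract('0100', '0011'))  # '0001'  (4 - 3 = 1, as in the example)
```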

The decimal equivalent of what we just did was 4 - 3 = 1. Only, you can see that we actually ran out of significant bits, causing a carry situation. If you were watching closely, you would realize that this two's complement scheme has a hole in it: overflows. This is a situation where you actually run out of significant bits and the number rolls over. This is best shown with another example:
   5                  0101
 + 3                + 0011
 ---                ------
   8                  1000

Our decimal problem looks like the example on the left. However, in binary the numbers end up like the example on the right, meaning 5 + 3 = -8.

This common overflow problem can only be encountered when you are adding two numbers of the same sign (i.e., negative plus negative, or positive plus positive). What happens is that the two's complement system is really a number "wheel," and the number following +7 is -8. If you were counting up in binary, the two's complement equivalents would look like this:
0000  +0        1000  -8
0001  +1        1001  -7
0010  +2        1010  -6
0011  +3        1011  -5
0100  +4        1100  -4
0101  +5        1101  -3
0110  +6        1110  -2
0111  +7        1111  -1
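A short Python sketch (helper names are mine) can read a 4-bit pattern off this wheel and detect the same-sign overflow case described above:

```python
# Interpret a 4-bit pattern as a signed (two's complement) value, and
# flag an overflow: two same-sign operands whose sum has the other sign.

def to_signed(bits_str, bits=4):
    """Read a bit string off the two's complement wheel."""
    value = int(bits_str, 2)
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

def add_signed(a, b, bits=4):
    """Add two bit strings; return (signed result, overflow flag)."""
    mask = (1 << bits) - 1
    raw = format((int(a, 2) + int(b, 2)) & mask, f'0{bits}b')
    sa, sb, sr = to_signed(a), to_signed(b), to_signed(raw)
    overflow = (sa >= 0) == (sb >= 0) and (sr >= 0) != (sa >= 0)
    return sr, overflow

print(to_signed('1101'))           # -3
print(add_signed('0101', '0011'))  # (-8, True): 5 + 3 rolls over the wheel
```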

Alrighty... by now I hope you have a light grasp on two's complement. As you use it more, it gets much easier.
The Rules of the Game

Now that we understand the most basic of the Boolean/binary functions, it is time to move on to logic. Logic is perhaps the single most straightforward topic you'll ever use. Digital circuit logic is something you have been practicing since you learned to talk; you just didn't know it. There are two key operators that you must learn first: 'And' and 'Or'. The first operator, And, is represented by the multiplication symbol (*) in Boolean algebra, while Or is represented by the addition symbol (+). A few good examples of + and * will clarify their usage:

1 and 1 = 1        1 or 1 = 1
1 and 0 = 0        1 or 0 = 1
0 and 1 = 0        0 or 0 = 0
0 and 0 = 0        0 or 1 = 1

As you can see, for an And statement to be true, all of the statements must be true. For an Or statement, only one of the statements must be true. Now let's look at that very same problem represented in true Boolean algebra style.
1 * 1 = 1        1 + 1 = 1
1 * 0 = 0        1 + 0 = 1
0 * 1 = 0        0 + 0 = 0
0 * 0 = 0        0 + 1 = 1

There we have it! Boolean algebra! Now that we know that, there are really only a few small additional rules to hammer out. Instead of deriving all of the rules we will use, I am just going to give them to you. If you are ambitious enough, all of these rules can be proved using the Boolean algebra techniques that you've already learned.
The set B contains at least two elements a, b such that a does not equal b.

Closure: For every a, b in B
    a + b is in B
    a * b is in B

Commutative Laws: For every a, b in B
    a + b = b + a
    a * b = b * a

Associative Laws: For every a, b, c in B
    (a + b) + c = a + (b + c) = a + b + c
    (a * b) * c = a * (b * c) = a * b * c

Distributive Laws (these are different than the ones you are used to!):
    a + (b * c) = (a + b) * (a + c)
    a * (b + c) = (a * b) + (a * c)

Complement: Where a' is the complement of a
    a + a' = 1
    a * a' = 0

Lemma 1: X + X = X, and through duality, X * X = X

Lemma 2: X + 1 = 1, and through duality, X * 0 = 0
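Because B is just {0, 1}, every one of these laws can be brute-force verified in a few lines of Python, reading + as OR and * as AND (min/max as the gates is my shorthand):

```python
# Brute-force check of the Boolean algebra laws over B = {0, 1}.
from itertools import product

OR  = max               # a + b over {0, 1}
AND = min               # a * b over {0, 1}
NOT = lambda a: 1 - a   # a'

for a, b, c in product((0, 1), repeat=3):
    assert OR(a, b) == OR(b, a) and AND(a, b) == AND(b, a)    # commutative
    assert OR(OR(a, b), c) == OR(a, OR(b, c))                 # associative
    assert OR(a, AND(b, c)) == AND(OR(a, b), OR(a, c))        # distributive
    assert OR(a, NOT(a)) == 1 and AND(a, NOT(a)) == 0         # complement
    assert OR(a, a) == a and OR(a, 1) == 1                    # lemmas 1 & 2
print("all laws hold over {0, 1}")
```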

Combinational Logic and K-Maps


Now that we've looked through Boolean algebra, we are going to take a leap into the world of schematics and more advanced logic. To begin with, you need to know the schematic symbols for our combinational logic circuits. Here are the key ones you should know:
  * = AND
  + = OR
  ! = NOT
(+) = XOR

You see here a new symbol as well: XOR. When two numbers are XOR'd, the answer is only 1 if one or the other is 1, not both. Pretty easy. From these we will build more complex parts like adders and multiplexers. Instead of re-writing the entire circuit as we go along, we will place these in a square box with their name. You will see this as we go along.

Okay then, time to build our first circuit, a half adder. This is a circuit that will add two binary numbers together, ignoring the possibility of a carry in. The lack of ability to handle a carry gives it its name. The theory behind a half adder's logic is pretty basic. There are two inputs, binary number A and binary number B. Also, there are two outputs, Sum and Carry. Now let's use what is called a truth table to look at what the values of each number should be. We begin by making a truth table that has every possible input combination, like this:
A | B | Sum | Carry
--+---+-----+------
0 | 0 |     |
0 | 1 |     |
1 | 0 |     |
1 | 1 |     |

Next, we fill out what the "answers" to our problem would need to be, like this:
A | B | Sum | Carry
--+---+-----+------
0 | 0 |  0  |   0
0 | 1 |  1  |   0
1 | 0 |  1  |   0
1 | 1 |  0  |   1

There, we have now completed our first truth table. The next stage is to develop a formula to figure out which combinations give us our desired values. First let us look at Sum. Sum is equal to one only when A or B is equal to 1, but not both. This sounds just like a XOR gate, so let's put it together in an equation: Sum = A (+) B. Now, let's look at the Carry. The Carry is only equal to one when both A and B are equal to 1, so let's use an And gate, leaving us: Carry = A * B. Now we have the two equations needed to make the circuit. Let's look at it like this:

Okay, this is a good start. However, half adders are not really useful for anything other than a building block. For any real processing, we need to have a full adder that is capable of receiving a carry as well as outputting one. The full adder works in the same way that we did the Boolean math, carrying the numbers over for addition. This lets us chain them together to make a higher-level (more bits) implementation of the same circuit. The three inputs of the full adder will be represented as A, B, and CI (carry in). The outputs will be named S (sum) and CO (carry out). We will write the sum as S = CI (+) A (+) B, and the carry out as CO = CI * (A + B) + A * B. Great, now let's just rewrite that in schematic form:
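As a behavioral check, both adders can be sketched in Python, using ^ for XOR, & for AND, and | for OR on 0/1 integers:

```python
# Behavioral sketches of the half adder and full adder.

def half_adder(a, b):
    """Sum = A XOR B, Carry = A AND B -- straight from the truth table."""
    return a ^ b, a & b

def full_adder(a, b, ci):
    """S = CI XOR A XOR B; CO = CI*(A + B) + A*B, as derived above."""
    s = ci ^ a ^ b
    co = (ci & (a | b)) | (a & b)
    return s, co

print(half_adder(1, 1))     # (0, 1): 1 + 1 = 10b
print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 11b
```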

We are now done learning the most basic combinational logic circuits. Although the method of writing out the truth table and looking to discover a way of implementing the answer works, it is really not very efficient. With the more complex circuits that we will be creating, simply generating an answer without a good method does not work. That is why we use Karnaugh maps. Karnaugh maps are the basic foundation of how to develop single-stage implementations of combinational logic circuits. I am going to give you a short introduction to Karnaugh maps; however, they take lots of practice. Because of the complexity of Karnaugh maps, I will show you the basic steps, and let you explore more on your own. For an excellent source to learn more, I would highly recommend Randy H. Katz's book, Contemporary Logic Design. He has excellent examples, and goes very in-depth. Well, enough of where else to look. Let's jump in now!
K-Maps

A K-map is going to be our key for developing a foundation of pieces to use in our processor. Based on the theory of truth table adjacencies, K-maps provide a simple approach to determining realizations of complex combinational logic problems.

In order to use a K-map, we must first talk briefly about Gray code. This is the binary ordering in which only one bit changes between successive values. All K-maps will be written using Gray code, so that our adjacencies can be properly extracted. First, let's take a look at a simple K-map structure.
Map of Z, as a function of A, B, and C

        AB
        00    01    11    10
    --+-----+-----+-----+-----+
C 0   |     |     |     |     |
    --+-----+-----+-----+-----+
  1   |     |     |     |     |

This empty K-map shows us the basic setup that goes into a logic problem. Remember those old elementary logic puzzles where you filled in a grid pattern to solve a given problem? This is very similar. First, we need to pay special attention to the way that the grid is laid out. The top is representative of the A and B values, while the side is that of C. This will be easier to see when we map a problem to our K-map. Let's use the following truth table to develop a function.
A, B, and C are inputs, while Z is our output.

A | B | C || Z
--+---+---++---
0 | 0 | 0 || 1
0 | 0 | 1 || 0
0 | 1 | 0 || 1
0 | 1 | 1 || 0
1 | 0 | 0 || 0
1 | 0 | 1 || 1
1 | 1 | 0 || 0
1 | 1 | 1 || 1

Alright, now let's take that truth table, and map the Z's into our K-map based on their A, B and C values.
        AB
        00    01    11    10
    --+-----+-----+-----+-----+
C 0   |  1  |  1  |  0  |  0  |
    --+-----+-----+-----+-----+
  1   |  0  |  0  |  1  |  1  |

Perfect. You can now see that our function, Z, is equal to one in 4 locations. Now, let's use what is called Sum of Products form (or SOP) to determine a simple formula. We need to group our adjacent 1's in powers of 2 (i.e., 2, 4, 8...) first. This leaves us with two circles of grouped ones: one on the top left, the other on the bottom right. Now, just take the values for A, B, and C that make up those circles. The first (top left) is made up of A=0 and C=0. As you can see, it is 1 for both values of B, so we ignore B. We can then write the term simply as (A'C'). We next need to look at the bottom right corner. This is made up of the values A=1, C=1, or (AC). Our final formula is the sum of these two products, or (A'C') + (AC). This is equal to our Z. By grouping these adjacencies, we can determine any size problem of any type, quickly and easily.

That is really all there is to Karnaugh maps, except understanding what is adjacent. You need to remember that our K-maps really wrap around, so the top right cell is adjacent to the top left, as if the map were really on a sphere. I know this will take a little getting used to, but as we do K-maps throughout the next several sections, you will pick it up quickly.
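The grouping can be brute-force checked against the truth table: the top-left circle (A=0, C=0, B ignored) gives the term A'C', the bottom-right circle gives AC, and their sum reproduces every row:

```python
# Verify the K-map result Z = A'C' + AC against the original truth table.

truth_table = {            # (A, B, C) -> Z, copied from the table above
    (0, 0, 0): 1, (0, 0, 1): 0, (0, 1, 0): 1, (0, 1, 1): 0,
    (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 0, (1, 1, 1): 1,
}

for (a, b, c), z in truth_table.items():
    sop = ((1 - a) & (1 - c)) | (a & c)   # A'C' + AC
    assert sop == z                       # every row must match

print("Z = A'C' + AC matches all 8 rows")
```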

Designing and Building an ALU


Our Arithmetic Logic Unit (ALU) is going to be our most complex building block. The ALU will have two 4-bit inputs and one 4-bit output. If our circuit is designed correctly, we can use the control circuits designed earlier to feed the inputs from memory to the ALU, and the output back out into memory. But let's not jump ahead just yet. As you recall from our introduction to microprocessor theory, we need to have a set of commands for our microprocessor. These commands are simply a set of control signals fed into the ALU. The user's commands will contain the address of the memory location where the correct instructions are found. Also, we will want these instructions to be able to store the output in memory somewhere, so we need to design circuits that take a binary bit string and use it to activate only one output line. We will call these circuits decoders.

This is the section where we have the widest space to expand our microprocessor. Our ALU will have all the basic functions, but there is no reason why we couldn't keep adding functions. These additional functions are what set the average processor apart from the outstanding processor. Of course, the more functions the processor has, the larger the design, and the more expensive it is to build. There is a very narrow gate where we decide the time/money point at which enough features have been implemented. For our processor we will add all the basic math functions, as well as the basic logical operations. If you want to take this processor further, spend some extra time in this section and add more functions to your processor. Let's take a look at one of the very important parts we developed earlier, the two's complement generator:

We are now going to make this device a lot more powerful and give it several functions. These functions will include a 4-bit two's complement generator, a one's complement generator, and a pass-through. A set of select lines will let us choose which function we want the circuit to perform. Let's take a look at the function table that we will use to control this chip:
NEG/NOT | PASS | Function
--------+------+--------------
   0    |  0   | Pass-through
   0    |  1   | Two's Comp.
   1    |  0   | Pass-through
   1    |  1   | One's Comp.

As you can see, anytime the PASS line is 0, the chip sends the input straight to the output, and anytime it is 1, the value of the NEG/NOT line determines the function. Here's the schematic that gives us those functions from our little two's complement generator:

Let's now take that NEG/NOT chip and place it into a box for use as a building block in our ALU:

That is the first major component of our ALU. Let's dive right into the next one, the ADD/OR circuit. If you think about it, using the building blocks we've already built, this circuit is child's play. We have two 4-bit inputs and one 4-bit output. We want to be able to add the two signals, or 'OR' them. How would we add them? By using the 4-bit adder we developed. How would we 'OR' them? Simple: by using four OR gates. Now, if we connect the inputs to both of these circuits at once, we will have two 4-bit outputs, one with the 'OR' result, the other with the 'ADD' result. Remember that 4-bit multiplexer we designed? I hope so, because we will just add that to the end of the circuit to decide which answer we will pass on! Let's take a look at this simple circuit:

To reiterate, this circuit performs both the Add and the Or, and then, using the multiplexer control line, you decide which answer will go to the output lines. These two functions are nice, but it is also very handy to have a pass-through on our circuits, so we'll just add that in with a second multiplexer. Now the circuit is an ADD/OR/PASS circuit, looking like:

Just to keep things straight before our circuit gets too large, let's take a look at the function table for this chip.
ADD/OR | PASS | Function
-------+------+--------------
   0   |  0   | Pass-through
   0   |  1   | Add
   1   |  0   | Pass-through
   1   |  1   | Or
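A behavioral sketch of this block in Python (integers 0-15 stand in for the four wires; the function and parameter names are mine):

```python
# Behavioral model of the ADD/OR/PASS block on 4-bit values.

def add_or_pass(a, b, add_or, pas):
    """PASS=0 passes `a` through; otherwise ADD/OR selects the operation."""
    if pas == 0:
        return a                     # pass-through
    if add_or == 0:
        return (a + b) & 0b1111      # add, carry out dropped
    return a | b                     # bitwise OR of the two 4-bit inputs

print(add_or_pass(0b0101, 0b0011, add_or=0, pas=1))  # 8  (5 + 3)
print(add_or_pass(0b0101, 0b0011, add_or=1, pas=1))  # 7  (0101 | 0011)
print(add_or_pass(0b0101, 0b0011, add_or=1, pas=0))  # 5  (pass-through)
```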

Let's put this powerful circuit into a box, to be a major component of our ALU.

Let's take these circuits we've designed and put them together into an ALU now, our largest building block in our microprocessor. At this point, our ALU has the ability to perform one's complements, make a two's complement, add, OR, and pass through data. These functions can be combined in our instruction set to make more complex instructions, like a subtract function that first converts a number to two's complement and then adds it. So let's take a look at the ALU circuit that we've developed:

Now, let's finalize the building block process by placing this ALU into a box:

Our microprocessor is at a good point to take a moment and discuss where we could really make this chip powerful. In the modern world, a 4-bit chip wouldn't get you very far. However, think of how you would build an 8-, 16-, or 32-bit chip. Let's take our adder, for example. We started with a 1-bit adder, connected 4 of them, and made a 4-bit adder. Now that we have a 4-bit adder, we could connect 2 of those and make an 8-bit adder. With an 8-bit adder, we could connect 2 of those to get a 16-bit adder. Now, connect two of those and you have your powerful 32-bit adder. There is no reason why our design could not be very easily scaled up to a 32- or 64-bit design, except for time and space. If you take this building-block approach, a 64-bit adder is not that complex a thing. However, if you tried to tackle the entire project all at the same time, you might lose your mind in all the AND/OR/INVERTER gates staring at you.

Every once in a while it is worth evaluating your circuit to see if there are ways you can optimize it further, especially if your chip is going to be used for a high-speed, leading-edge design. There are computer programs available that will search for removable gates in your circuit and reduce its size. Or, you can take the entire circuit into a Karnaugh map and optimize it by hand. The ability to just use a K-map will vanish very shortly, however, when we add memory and sequential circuits into the mix. Why worry about the gate count? Other than straight cost, every gate that you put in a circuit adds a small delay until the signal can be processed. Although for only a couple of gates this time is negligible, when you start creating complex circuits like ALUs that go through many gates, you must keep the clock speed of your processor slower than the time it takes all of the gates to propagate. This explains the difference between machines like the SNES, which can only operate at 3 MHz, and the newer Ultra 64, which can operate at 93 MHz. The U64 processor is a lesson in good processor design; SGI has always been known for some of the best CPU designs in the industry.

Back to building: let's take a turn from adding new functions to our processor to making the glue that will hold all of our pieces together. The first is a decoder. A 1-bit decoder is actually just a demultiplexer, but since it is used for a different purpose, we rename the in and out lines. Decoders are used to activate a specific memory location. They do this by taking the binary input number, and then putting a 1 on whichever line is specified, and 0's on every other line. The decoder also has an enable line. If this enable line is ever a 0, the decoder is turned off and all lines output a 0. This is important when we stack decoders to make more memory addresses available. Let's take a look at using this 1-bit decoder to make a 2-bit decoder:

Which behaves according to this truth table:


EN | A1 | A0 || 0 | 1 | 2 | 3
---+----+----++---+---+---+---
 0 |  0 |  0 || 0 | 0 | 0 | 0
 0 |  0 |  1 || 0 | 0 | 0 | 0
 0 |  1 |  0 || 0 | 0 | 0 | 0
 0 |  1 |  1 || 0 | 0 | 0 | 0
 1 |  0 |  0 || 1 | 0 | 0 | 0
 1 |  0 |  1 || 0 | 1 | 0 | 0
 1 |  1 |  0 || 0 | 0 | 1 | 0
 1 |  1 |  1 || 0 | 0 | 0 | 1
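This truth table is small enough to model directly in Python (a behavioral sketch, not the gate-level circuit; the function name is mine):

```python
# Behavioral model of the 2-bit decoder with enable: when enabled,
# exactly one of the four output lines goes high.

def decode2(en, a1, a0):
    """Return the four output lines for the given enable and address bits."""
    outputs = [0, 0, 0, 0]
    if en:
        outputs[(a1 << 1) | a0] = 1   # the addressed line goes high
    return outputs

print(decode2(1, 1, 0))  # [0, 0, 1, 0]  (line 2 selected)
print(decode2(0, 1, 0))  # [0, 0, 0, 0]  (disabled: all lines low)
```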

Now, let's just expand that to make a 4-bit decoder for use with our processor. And we'll label that in a box as:

And building still further, we get:

This marks the end of all of the combinational logic circuits that we will use in our processor. However, there is a lot more to come in the form of sequential circuits. These are circuits whose output depends not only on their inputs, but on their current output as well. Join in for the next segment, as we cover buffers, registers, and our basic memory.

Designing and Building our 4-bit Addition Engine


To begin with, we need to develop some of the basic building blocks for use in our processor. One of the biggest jobs of our processor will be math, so let's start with the half adder. If you recall from earlier, we have already developed the schematic for one. Let's take this schematic:

and put it into a simpler form by just making a box labeled HA. If we didn't do this, by the time our processor schematic was complete, you wouldn't be able to follow it! So let's take a look at how that looks now:

Looks great! Now, a half adder won't do a whole lot for us if we stop here. So let's use that half adder to build the next big step, a 4-bit incrementer. A 4-bit incrementer has 4 input lines (hence the 4 bits) and one increment flag. If the increment flag is set to 1, it outputs the input plus 1; if the flag is 0, it just outputs the input unchanged. The output is just 4 output lines and a carry line. We will use 4 half adders to do this.

Now is also a good time to introduce the hex keypad and display. These will just be used to show how things work in the schematics. If you recall, hex is very useful: since it is base 16, it allows us to represent any 4-bit binary combination with only 1 character. Thus a 4-bit binary output can be viewed as a single hex character (i.e., 0010b = 2h). Although these displays make things look very clean, remember that under the surface it is all just 0's and 1's. Well, let's look at that schematic!
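Before looking at the drawing, the chain of half adders can be sketched behaviourally (bits are listed LSB first; this models the logic, not the wiring):

```python
def half_adder(a, b):
    """Half adder: sum is XOR, carry is AND."""
    return a ^ b, a & b

def increment4(bits, inc):
    """4-bit incrementer built from four chained half adders.
    bits is [b0, b1, b2, b3], LSB first; inc is the increment flag.
    The flag enters the first half adder; each carry ripples on."""
    carry = inc
    out = []
    for b in bits:
        s, carry = half_adder(b, carry)
        out.append(s)
    return out, carry

print(increment4([0, 1, 0, 0], 1))  # → ([1, 1, 0, 0], 0), i.e. 2 + 1 = 3
```

With the flag low, every half adder just passes its bit through (b XOR 0 = b), so the input appears unchanged at the output.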

That's it! Now, let's connect that to those hex displays and keypads we were talking about, and view the output. The letter darkened by the hex keypad represents the key that is pressed:

As you can see, our 4-bit incrementer works! This is the first 4-bit component we've built. But what good is it, you ask? Great question... This device will be used to generate our two's complement numbers! Let's take a look using binary switches and probes:
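The invert-then-increment trick for two's complement can be sketched the same way (bits LSB first):

```python
def twos_complement4(bits):
    """Negate a 4-bit number: invert every bit, then run the
    inverted bits through the incrementer with the flag set."""
    inverted = [b ^ 1 for b in bits]
    carry = 1            # the increment flag
    out = []
    for b in inverted:   # chained half adders, LSB first
        out.append(b ^ carry)
        carry = b & carry
    return out

print(twos_complement4([1, 0, 0, 0]))  # → [1, 1, 1, 1], i.e. 1 becomes -1 (1111b)
```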

Alright, we now have a very important and necessary part of our microprocessor. Let's keep right on going building these basic devices. Remember, if you get stuck, go back to some of the early sections and read up. If that still doesn't help, you can always e-mail me at team-0@gamezero.com... So, on to the full adder. Instead of just designing a basic full adder, let's make it a 4-bit full adder so it'll be useful. Let's start with the schematic for a 1-bit full adder (you remember this in a slightly different form from earlier, right?):

Now we will take this 1-bit full adder, and place it into a box as a building block for bigger circuits:

We now want to turn that 1-bit full adder into a 4-bit full adder. Remember that a 4-bit adder should be able to add two 4-bit numbers and a carry to get one 4-bit answer and a single carry out. If you think about it, it is very easy. The output from a 1-bit adder is simply one output line and a carry line; the input is 2 bits plus a carry in. To design a 4-bit adder, we will take the carry out from one adder and bring it into the carry in of the next. Each 1-bit adder then handles 2 input lines, and the device as a whole has 1 carry in feeding the first adder. Before you look at the schematic, think about this one. You should be able to come up with it on your own without much of a problem using our current building blocks. Let's look at it now:
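The ripple-carry chain just described can also be sketched behaviourally (bits LSB first):

```python
def full_adder(a, b, cin):
    """1-bit full adder: sum and carry out from two bits plus carry in."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def add4(a_bits, b_bits, cin=0):
    """4-bit ripple-carry adder: the carry out of each 1-bit full
    adder feeds the carry in of the next. Bits are LSB first."""
    out = []
    carry = cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

print(add4([1, 0, 1, 0], [1, 1, 0, 0]))  # → ([0, 0, 0, 1], 0), i.e. 5 + 3 = 8
```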

Now that's not too tough, is it? We have designed all of the very basic mathematical functions for our processor. Technically speaking, these are enough functions to constitute a very rudimentary ALU. However, to make our processor better than just an adding machine, we will continue to add more devices. Why don't you take a break, read through this section well, and then join me for the next section, where we design multiplexers, which will control the flow of data through our system.
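And just as promised in the earlier scaling discussion, two 4-bit adders chain into an 8-bit adder the same way. A behavioural sketch, treating each 4-bit adder as a black box working on values 0-15:

```python
def add_nibble(a, b, cin):
    """Behavioural 4-bit adder: values 0-15, returns (sum, carry out)."""
    total = a + b + cin
    return total & 0xF, total >> 4

def add_byte(a, b, cin=0):
    """8-bit adder built from two 4-bit adders: the carry out of the
    low nibble feeds the carry in of the high nibble."""
    lo, carry = add_nibble(a & 0xF, b & 0xF, cin)
    hi, carry = add_nibble(a >> 4, b >> 4, carry)
    return (hi << 4) | lo, carry

print(add_byte(200, 100))  # → (44, 1): 300 overflows 8 bits, leaving 44 plus a carry
```

The same chaining, repeated, gives 16-, 32-, and 64-bit adders.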

Designing Multiplexers and Demultiplexers


A microprocessor that can't change the way data flows through it is useless. Therefore, we need to design a circuit that will let us route data through our chip. Such a device is known as a multiplexer. A multiplexer has two data inputs, one data output, and a select line. Depending on whether the select line is high or low, either the data from input 1 or the data from input 2 appears on the output line. In our multiplexer, we will call input 1 A, input 2 B, the select line C, and the output line Y. Let's first look at the truth table I've just described:
 A | B | C | Y
---+---+---+---
 0 | 0 | 0 | 0
 1 | 0 | 0 | 0
 0 | 1 | 0 | 1
 1 | 1 | 0 | 1
 0 | 0 | 1 | 0
 1 | 0 | 1 | 1
 0 | 1 | 1 | 0
 1 | 1 | 1 | 1
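Reading the table off: when C is 1 the output follows A, and when C is 0 it follows B. One POS reduction (worth checking against your own K-map) is Y = (B + C)(A + C'), sketched here with bitwise operators:

```python
def mux2(a, b, c):
    """2-to-1 multiplexer per the truth table above:
    C = 1 routes A to Y, C = 0 routes B to Y.
    Implemented in POS form: Y = (B OR C) AND (A OR NOT C)."""
    return (b | c) & (a | (c ^ 1))

print(mux2(1, 0, 1))  # → 1, A passes through when C = 1
print(mux2(1, 0, 0))  # → 0, B passes through when C = 0
```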

As you can see, it has only three inputs that affect a single output. Remember those Karnaugh maps from earlier? Well, now is the time to use them. Several implementations are possible, so if you develop a different one, feel free to use it. This POS implementation's schematic looks like this:

Now let's put that multiplexer into a box to use as another building block, giving us:

This is a very useful circuit we have just designed, but it is not the only control circuit that we need. At times we will have one data input and will want to route where it goes. We call this a demultiplexer. Go ahead and design this one on your own; it is pretty simple. Just to be clear, here is the truth table. Remember, in this circuit D is our input, Y/Z-bar is our select, Y is output 1, and Z is output 2.
     _
 D | Y/Z | Y | Z
---+-----+---+---
 0 |  0  | 0 | 0
 1 |  0  | 0 | 1
 0 |  1  | 0 | 0
 1 |  1  | 1 | 0
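If you want to check your own design against the table, here is a behavioural sketch:

```python
def demux(d, sel):
    """1-to-2 demultiplexer: routes input D to output Y when the
    select line is 1, and to output Z when it is 0."""
    y = d & sel
    z = d & (sel ^ 1)
    return y, z

print(demux(1, 1))  # → (1, 0), D routed to Y
print(demux(1, 0))  # → (0, 1), D routed to Z
```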

We'll use the building block representation of this circuit here:

These are nice, but a 1-bit anything isn't too useful in our circuit. Therefore, just like we did with the adder, we will combine several of our 1-bit multiplexer circuits to make a 4-bit multiplexer. Remember, to control the four inputs we only need 2 select lines; in binary this gives us 4 combinations: 00, 01, 10, 11. This is known as addressing. In fact, this is the basic scheme that all computers use to route information, and it is how we will talk to specific memory locations later on. Well, let's take a look at our new 4-bit multiplexer:
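At the behavioural level, the addressing scheme is just an index built from the two select lines:

```python
def mux4(inputs, s1, s0):
    """4-to-1 multiplexer: the two select lines form a 2-bit
    address (00, 01, 10, 11) that picks one of the four inputs."""
    return inputs[s1 * 2 + s0]

print(mux4([0, 1, 1, 0], 1, 0))  # → 1, address 10 picks input 2
```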

Okay, this gives us some good control circuitry now. We will use this control circuitry in our ALU. Let's jump to the next section and design some basic logical operation circuits, and combine them all into an ALU.

The JK Flip-Flop
To prevent any possibility of a "race" condition occurring when both the S and R inputs are at logic 1 as the CLK input falls from logic 1 to logic 0, we must somehow prevent one of those inputs from having an effect on the master latch in the circuit. At the same time, we still want the flip-flop to be able to change state on each falling edge of the CLK input, if the input logic signals call for this. Therefore, which of the S or R inputs is disabled depends on the current state of the slave latch outputs.

If the Q output is a logic 1 (the flip-flop is in the "Set" state), the S input can't make it any more set than it already is. Therefore, we can disable the S input without disabling the flip-flop under these conditions. In the same way, if the Q output is logic 0 (the flip-flop is Reset), the R input can be disabled without causing any harm. If we can accomplish this without too much trouble, we will have solved the problem of the "race" condition.

The circuit below shows the solution. To the RS flip-flop we have added two new connections from the Q and Q' outputs back to the original input gates. Remember that a NAND gate may have any number of inputs, so this causes no trouble. To show that we have done this, we change the designations of the logic inputs and of the flip-flop itself. The inputs are now designated J (instead of S) and K (instead of R). The entire circuit is known as a JK flip-flop.

In most ways, the JK flip-flop behaves just like the RS flip-flop. The Q and Q' outputs will only change state on the falling edge of the CLK signal, and the J and K inputs will control the future output state pretty much as before. However, there are some important differences.

Since one of the two logic inputs is always disabled according to the output state of the overall flip-flop, the master latch cannot change state back and forth while the CLK input is at logic 1. Instead, the enabled input can change the state of the master latch once, after which this latch will not change again. This was not true of the RS flip-flop.

If both the J and K inputs are held at logic 1 and the CLK signal continues to change, the Q and Q' outputs will simply change state with each falling edge of the CLK signal. (The master latch circuit will change state with each rising edge of CLK.) We can use this characteristic to advantage in a number of ways. A flip-flop built specifically to operate this way is typically designated as a T (for Toggle) flip-flop. The lone T input is in fact the CLK input for other types of flip-flops.

The JK flip-flop must be edge triggered in this manner. Any level-triggered JK latch circuit will oscillate rapidly if all three inputs are held at logic 1. This is not very useful. For the same reason, the T flip-flop must also be edge triggered. For both types, this is the only way to ensure that the flip-flop will change state only once on any given clock pulse.
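The falling-edge behaviour, including the J = K = 1 toggle mode, can be modelled in a few lines (a behavioural model, not the two-latch gate circuit):

```python
class JKFlipFlop:
    """Behavioural model of an edge-triggered JK flip-flop:
    the state can only update on a falling clock edge."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def tick(self, j, k, clk):
        if self.prev_clk == 1 and clk == 0:   # falling edge
            if j and k:        # J = K = 1: toggle
                self.q ^= 1
            elif j:            # J only: set
                self.q = 1
            elif k:            # K only: reset
                self.q = 0     # J = K = 0: hold
        self.prev_clk = clk
        return self.q

ff = JKFlipFlop()
for _ in range(3):             # three full clock pulses with J = K = 1
    ff.tick(1, 1, 1)
    print(ff.tick(1, 1, 0))    # prints 1, 0, 1 -- Q toggles each falling edge
```

Tying J and K permanently high and treating CLK as the lone input gives exactly the T flip-flop described above.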

Because the behavior of the JK flip-flop is completely predictable under all conditions, it is the preferred type of flip-flop for most logic circuit designs. The RS flip-flop is only used in applications where it can be guaranteed that R and S are never logic 1 at the same time. At the same time, there are some additional useful configurations of both latches and flip-flops. In the next pages, we will look first at the major configurations and note their properties. Then we will see how multiple flip-flops or latches can be combined to perform useful functions and operations.

Using the ALU and Accumulators


When an actual program runs on our processor, it will not flow in a straight line. Instead it will branch depending on which direction you press the controller, or on any other interrupt. Because we need to be able to jump to a different path at any time, we need the ability to cease one task, save its data for later, and start up another one. We will need the registers during the execution of the second task, but when it is complete we want to restore them to how they started so the first task can continue. Enter: the stack. The stack got its name because it acts just like a stack of books: the last book you put on the stack is the first one that you can remove. Therefore it is referred to as a LIFO stack, for Last In, First Out.

Well, we have now introduced all of the key components of a microprocessor. If you recall, that leaves us with: an ALU, accumulators, flag registers, a LIFO stack, an instruction register, an instruction decoder, a program counter, address and data registers, and a control unit. Whoa, you say -- what is the control unit? This is a state machine that controls the overall function of the processor. The instructions that come out of the control unit are generated by reading the output of the instruction decoder into an internal ROM, and then using a look-up table in the ROM to output a sequence of sets of control signals. This makes up our assembly language. Only tasks that are programmed into our ROM can be executed on our processor. The beauty of this design style is that the programmers really only need to know a simple set of assembly calls that will be interpreted by the processor as a sequence of actions.

Here's a good example -- subtraction. In order to subtract, the assembly programmer only needs to type "Subtr." Internally, however, to subtract our processor must first take the read-in numbers, convert them properly to two's complement, load number one into the register, and tell the ALU to subtract using the second number placed on the data bus. Then the address bus must be set to place the result into RAM, and then changed to the display address to show the result to the user. That is quite a few steps for only one command. Thus we have a simple assembly language in which a single instruction performs many steps. Assembly is not as direct a line to the CPU as many people think.

How many instructions should you have in your ROM? That brings up a big argument in microprocessor design circles right now -- CISC vs. RISC. A complex instruction set (CISC) makes writing programs a little easier by providing a large number of possible instructions for the programmer to use. However, a reduced instruction set (RISC) makes microprocessor design easier by reducing the number of operations that the actual hardware needs to do. As long as sequences of RISC instructions can do everything the CISC set offers, no loss of functionality exists. Where does this leave us? We are going to go with a RISC set for several reasons: one, it is faster to develop and build; two, it has proven to be just as fast as CISC chips with less overhead; and three, most of the new video game systems are RISC. At this point, we are actually ready to begin construction, part by part, of our new processor. This will be accomplished in several stages.
First, we will develop the core parts that will be used in our processor, including the adder, the OR circuitry, and other ALU components. By combining these with our other logic, we will build a brainless (no ROM) microprocessor. After this, we will put on a ROM and add more memory (enough to be mildly useful), and we will have built a fully functional microprocessor. However, we will for now be leaving out some components, which will limit the functionality of our processor -- namely the stack and advanced ROM commands. This will lead us into the next installment, where we add that functionality, as well as more I/O and some video game console tricks. Well, don't stick around here -- let's go make our microprocessor!
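The subtraction walk-through above suggests how the control unit's ROM might be organized. A sketch of the idea in Python, with an invented mnemonic and invented step names (a real control unit would emit sets of control wires, not strings):

```python
# Hypothetical microcode ROM: each assembly mnemonic the programmer
# types expands into the sequence of control steps the control unit
# walks through. Mnemonic and step names here are for illustration.
MICROCODE_ROM = {
    "Subtr": [
        "convert operand 2 to two's complement",
        "load operand 1 into the register",
        "ALU: add register to the data bus",
        "address bus: write result to RAM",
        "address bus: select display address",
    ],
}

def run(mnemonic):
    """Play back the control-step sequence for one instruction."""
    return MICROCODE_ROM[mnemonic]

print(len(run("Subtr")))  # → 5: one command, five internal steps
```

This is why a small RISC instruction set costs so little: each mnemonic is just one short row in the ROM, and longer operations are built from sequences of them.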
