
Representation of Negative Numbers

Binary is not complicated. Once you learn how number systems work, it's pretty easy
to go from decimal to binary and back, to add binary numbers, multiply them, and so on.
There's one part of binary numbers that is not as straightforward, though, and that is
the representation of negative binary numbers.
Signed Magnitude
The simplest method to represent negative binary numbers is called Signed
Magnitude: you use the leftmost digit as a sign indication, and treat the remaining
bits as if they represented an unsigned integer. The convention is that if the leftmost
digit (also called the most significant digit or most significant bit) is 0 the number is
positive, and if it's 1 the number is negative. So:
00001010 = decimal 10
10001010 = decimal -10
That is why the range of positive numbers you can store in unsigned integers is
larger than in signed ones. For example, most computers use a 32-bit architecture
these days, so integers will have 32 bits as well in C.
This means that an unsigned int can go up to 4,294,967,295 (which is 2^32 - 1).
You need to subtract one because the 2^32 bit patterns start counting from 1, while the first
binary representation is 0.
Now if the int is signed you won't be able to use the leftmost bit for the magnitude. This means that
your positive range will go up to 2,147,483,647 (which is 2^31 - 1). However you
also have the negative values, and they go down to -2,147,483,647.
The main problem with this system is that it doesn't support binary arithmetic (which
is what the computer would naturally do). That is, if you add 10 and -10 in binary you
won't get 0 as a result.

  00001010 (decimal 10)
+ 10001010 (decimal -10)
----------
  10010100 (decimal -20)
This doesn't make much sense, and that's why people came up with representations
more suitable for a computer. Nonetheless there were some very early computers
that used this system to represent negative numbers.
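To make the encoding concrete, here is a small C sketch (the helper names sm_encode and sm_decode are mine, not from any standard library) that packs an integer into an 8-bit signed-magnitude pattern and shows that adding the raw bit patterns of 10 and -10 reproduces the wrong -20 result above:

#include <stdio.h>
#include <stdint.h>

/* Encode a small integer (-127..127) as an 8-bit signed-magnitude pattern:
   bit 7 is the sign, bits 0-6 are the magnitude. */
uint8_t sm_encode(int value) {
    uint8_t sign = (value < 0) ? 0x80 : 0x00;
    uint8_t magnitude = (uint8_t)(value < 0 ? -value : value) & 0x7F;
    return sign | magnitude;
}

/* Decode an 8-bit signed-magnitude pattern back to an ordinary int. */
int sm_decode(uint8_t pattern) {
    int magnitude = pattern & 0x7F;
    return (pattern & 0x80) ? -magnitude : magnitude;
}

int main(void) {
    uint8_t a = sm_encode(10);    /* 00001010 */
    uint8_t b = sm_encode(-10);   /* 10001010 */
    uint8_t sum = a + b;          /* plain binary addition: 10010100 */
    printf("%d + %d added as raw bits decodes to %d\n",
           sm_decode(a), sm_decode(b), sm_decode(sum)); /* prints -20, not 0 */
    return 0;
}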
One's Complement

The one's complement of a binary number is basically another binary number
which, when added to the original number, will make the result a binary number with
1s in all bits.
To obtain the one's complement you simply need to flip all the bits. Suppose we are
working with unsigned integers.
Decimal 10 is represented as: 00001010
Its one's complement would be: 11110101
Notice that the complement is 245, which is 255 - 10. That is no coincidence. The
complement of a number (again, we're talking unsigned) is the largest number
representable with the number of bits available minus the number itself. Since we are
using 8 bits here the maximum number representable is 255 (2^8 - 1). So the
complement of 10 will be 245.
If we add the number and its complement the result should be 1s on all bits.

  00001010 (decimal 10)
+ 11110101 (decimal 245)
----------
  11111111 (decimal 255)
We could now say that the leftmost bit will again indicate the sign of the number.
So 11110101 would be a negative number. What number? -10, because the
complement of 11110101 is 00001010 (i.e., decimal 10).
Another example: suppose we want to represent -12 with the one's complement
system. First we need to represent 12 in binary, which is 00001100. Now we find its
one's complement, which is 11110011, and that is -12.
As you can see, using the one's complement system to represent negative numbers
we would have two zeroes: 00000000 (could be seen as +0) and 11111111 (could
be seen as -0).
Just as with the signed magnitude method, the range of numbers here goes from -(2^(n-1) - 1) to +(2^(n-1) - 1), where n is the number of bits used to represent the
numbers. If we have 8 bits the range goes from -127 up to 127.
With this representation the binary arithmetic problem is partially solved. If we add 12
and -12, for example, we'll get -0 as the result, which makes sense.

  00001100 (decimal 12)
+ 11110011 (decimal -12)
----------
  11111111 (decimal -0)
I said this system partially solves the binary arithmetic problem because there are
some special cases left.
For example, let's add 3 and -2.

  00000011 (decimal 3)
+ 11111101 (decimal -2)
----------
 100000000 (decimal 256)

But since we have only 8 bits to represent the numbers, the leftmost 1 will be
discarded, and the result would be 00000000 (decimal +0).
This is not the answer we expected.
To fix the problem we just need to add the leftmost 1 (i.e., the carry) back into the
least significant bit.

  00000011 (decimal 3)
+ 11111101 (decimal -2)
----------
  00000000 (decimal +0)
+ 00000001 (the carry)
----------
  00000001 (decimal 1)

Now it works. Let's do one more example, adding 10 and -5.

  00001010 (decimal 10)
+ 11111010 (decimal -5)
----------
  00000100 (decimal 4)
+ 00000001 (the carry)
----------
  00000101 (decimal 5)

Works again!
This system was used by many computers at one point in time. For example, the
PDP-1 (DEC's first computer) used it.
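As a rough sketch of that rule in C (8-bit values only; the names oc_negate and oc_add are mine), negation flips the bits and addition folds any carry out of the top bit back into the bottom bit:

#include <stdio.h>
#include <stdint.h>

/* One's complement negation: flip all 8 bits. */
uint8_t oc_negate(uint8_t x) {
    return (uint8_t)~x;
}

/* One's complement addition: add the two patterns and, if the addition
   overflows out of 8 bits, add that carry back into the least significant
   bit (the end-around carry). */
uint8_t oc_add(uint8_t a, uint8_t b) {
    uint16_t wide = (uint16_t)a + (uint16_t)b;
    if (wide > 0xFF)
        wide = (wide & 0xFF) + 1;   /* fold the carry back in */
    return (uint8_t)wide;
}

int main(void) {
    uint8_t ten = 0x0A;                    /* 00001010 */
    uint8_t minus_five = oc_negate(0x05);  /* 11111010 */
    printf("10 + (-5) = %02X\n", oc_add(ten, minus_five)); /* prints 05 */
    return 0;
}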
Two's Complement
The two's complement of a binary number is basically another number which, when
added to the original, will make all bits become zeroes. You find a two's complement
by first finding the one's complement and then adding 1 to it. If you think about it,
it makes perfect sense: the one's complement, when added to the original number,
will produce a binary number with 1s on all the bits. Add 1 to that and you'll cause an
overflow, setting every bit back to 0.
For example, let's find the two's complement of 12. The binary representation of 12
is 00001100. Its one's complement is 11110011. Add one to that and we have its
two's complement.

  11110011 (one's complement of 12)
+ 00000001 (decimal 1)
----------
  11110100 (two's complement of 12)
Now if we add 12 and its two's complement we should get all 0s.

  00001100 (decimal 12)
+ 11110100 (two's complement of 12)
----------
  00000000 (decimal 0)
Once more we'll use the most significant bit (i.e., the leftmost one) to represent the
sign of the number. Let's suppose we want to represent -5. First we find the one's
complement of 5, which is 11111010, and then we add 1 to it. So -5 is represented as
11111011 in binary under the two's complement system.
Now let's add 12 and -5 to see if we'll have the same problem that we had when
using the one's complement system:

  00001100 (decimal 12)
+ 11111011 (decimal -5)
----------
  00000111 (decimal 7)

As you can see the result is correct, without the need to keep track of and add the carry in
case of overflow. Additionally, the number zero has a single representation now:
00000000.
This means that the two's complement system pretty much solves all the binary
arithmetic problems, and that is why it's used by most computers these days.
If you have a negative binary number under the two's complement system and want
to convert it to decimal, you simply subtract 1 from it and then find its one's
complement.
Say we have this number in binary: 10010101
Subtracting one, it becomes 10010100. Its one's complement then is 01101011, which
is 107 in decimal. So the original number represented -107.
As I mentioned before, this method has only one representation for zero, which is
00000000. 11111111 (which was also zero under the one's complement system) will
now be -1. And 10000000 will now be -128, meaning we gained one more number in
the range.
That is, using the two's complement system the range of numbers goes from -2^(n-1) up to +2^(n-1) - 1. If we are using 8 bits this means that numbers will go from -128
up to 127.
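Here is a small C sketch of those conversions (my own helper names, restricted to 8 bits): tc_negate builds a two's complement by flipping the bits and adding 1, and tc_decode reverses a negative pattern by subtracting 1 and flipping, exactly as described above.

#include <stdio.h>
#include <stdint.h>

/* Two's complement negation: one's complement plus one. */
uint8_t tc_negate(uint8_t x) {
    return (uint8_t)(~x + 1);
}

/* Decode an 8-bit two's complement pattern by the rule in the text:
   if the sign bit is set, subtract 1, flip the bits, and negate the magnitude. */
int tc_decode(uint8_t pattern) {
    if (pattern & 0x80) {
        uint8_t magnitude = (uint8_t)~(pattern - 1);
        return -(int)magnitude;
    }
    return (int)pattern;
}

int main(void) {
    printf("%d\n", tc_decode(0x95));    /* 10010101 -> -107 */
    printf("%02X\n", tc_negate(0x05));  /* -5 -> FB (11111011) */
    printf("%d\n", tc_decode((uint8_t)(0x0C + tc_negate(0x05)))); /* 12 + (-5) -> 7 */
    return 0;
}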

Adder
In electronics, an adder or summer is a digital logic circuit that performs addition of
numbers. In many computers and other kinds of processors, adders are used not only in
the arithmetic logic units, but also in other parts of the processor, where they are used to
calculate addresses, table indices, increment and decrement operators, and similar
operations.
Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases
where two's complement or ones' complement is being used to represent negative
numbers, it is trivial to modify an adder into an adder-subtractor. Other signed number
representations require a more complex adder.
Half adder

Half adder logic diagram


The half adder adds two single binary digits A and B. It has two outputs, sum (S) and carry
(C). The carry signal represents an overflow into the next digit of a multi-digit addition. The
value of the sum is 2C + S. The simplest half-adder design, pictured above,
incorporates an XOR gate for S and an AND gate for C. With the addition of an OR gate to
combine their carry outputs, two half adders can be combined to make a full adder.[1]
The half adder adds two input bits and generates a carry and sum, which are the two
outputs of a half adder. The input variables of a half adder are called the augend and
addend bits. The output variables are the sum and carry. The truth table for the half adder
is:

Inputs    Outputs
A  B      C  S
0  0      0  0
0  1      0  1
1  0      0  1
1  1      1  0
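Expressed as bit operations in C (a sketch with my own names, not a hardware description), the half adder is just one XOR and one AND; the test loop reproduces the truth table above:

#include <stdio.h>

/* Half adder on single bits: sum is A XOR B, carry is A AND B. */
void half_adder(int a, int b, int *sum, int *carry) {
    *sum = a ^ b;
    *carry = a & b;
}

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int s, c;
            half_adder(a, b, &s, &c);
            printf("A=%d B=%d -> C=%d S=%d\n", a, b, c, s);
        }
    return 0;
}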

Full adder

Schematic symbol for a 1-bit full adder with Cin and Cout drawn on sides of block to
emphasize their use in a multi-bit adder
A full adder adds binary numbers and accounts for values carried in as well as out. A one-bit
full adder adds three one-bit numbers, often written as A, B, and Cin; A and B are the
operands, and Cin is a bit carried in from the previous less significant stage.[2] The full adder
is usually a component in a cascade of adders, which add 8, 16, 32, etc. bit binary numbers.
The circuit produces a two-bit output, an output carry and a sum, typically represented by the
signals Cout and S, where sum = 2 × Cout + S.

Full-adder logic diagram

The one-bit full adder's truth table is:

Inputs        Outputs
A  B  Cin     Cout  S
0  0  0       0     0
1  0  0       0     1
0  1  0       0     1
1  1  0       1     0
0  0  1       0     1
1  0  1       1     0
0  1  1       1     0
1  1  1       1     1

A full adder can be implemented in many different ways such as with a custom transistor-level circuit or composed of other gates. One example implementation is with

S = A XOR B XOR Cin

and

Cout = (A AND B) OR (Cin AND (A XOR B)).
In this implementation, the final OR gate before the carry-out output may be replaced by
an XOR gate without altering the resulting logic. Using only two types of gates is convenient
if the circuit is being implemented using simple IC chips which contain only one gate type
per chip.
A full adder can be constructed from two half adders by connecting A and B to the input of
one half adder, connecting the sum from that to an input of the second half adder,
connecting Cin to the other input, and ORing the two carry outputs. The critical path of a full
adder runs through both XOR gates and ends at the sum bit S. Assuming that an XOR gate
takes 3 gate delays to complete, the delay imposed by the critical path of a full adder is equal to
2 × 3 = 6 gate delays.
The carry-block subcomponent consists of 2 gates and therefore has a delay of 2 gate delays.

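As a sketch, the two-half-adder construction translates directly into C bit operations (the names are mine); the small test loop reproduces the truth table above:

#include <stdio.h>

/* One-bit full adder built from the two-half-adder construction described
   above: the first half adder adds A and B, the second adds that sum to Cin,
   and the two carries are ORed together. */
void full_adder(int a, int b, int cin, int *sum, int *cout) {
    int s1 = a ^ b;          /* sum of the first half adder */
    int c1 = a & b;          /* carry of the first half adder */
    *sum  = s1 ^ cin;        /* sum of the second half adder */
    *cout = c1 | (s1 & cin); /* OR of the two half-adder carries */
}

int main(void) {
    /* Reproduce the truth table above. */
    for (int cin = 0; cin <= 1; cin++)
        for (int b = 0; b <= 1; b++)
            for (int a = 0; a <= 1; a++) {
                int s, cout;
                full_adder(a, b, cin, &s, &cout);
                printf("A=%d B=%d Cin=%d -> Cout=%d S=%d\n", a, b, cin, cout, s);
            }
    return 0;
}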
More complex adders


Ripple-carry adder

4-bit adder with logic gates shown


It is possible to create a logical circuit using multiple full adders to add N-bit numbers. Each
full adder inputs a Cin, which is the Cout of the previous adder. This kind of adder is called
a ripple-carry adder, since each carry bit "ripples" to the next full adder. Note that the first
(and only the first) full adder may be replaced by a half adder (under the assumption
that Cin = 0).
The layout of a ripple-carry adder is simple, which allows for fast design time; however, the
ripple-carry adder is relatively slow, since each full adder must wait for the carry bit to be
calculated from the previous full adder. The gate delay can easily be calculated by inspection
of the full adder circuit. Each full adder requires three levels of logic. In a 32-bit ripple-carry
adder, there are 32 full adders, so the critical path (worst case) delay is 3 (from input to
carry in first adder) + 31 × 2 (for carry propagation in later adders) = 65 gate delays. The
general equation for the worst-case delay for an n-bit carry-ripple adder is 3 + (n - 1) × 2 = 2n + 1 gate delays.

The delay from bit position 0 to the carry-out is a little different: it is 3 (for the first adder) plus (n - 1) × 2 for the remaining carry blocks. The carry-in, on the other hand, must travel through n carry-generator blocks to have an effect on the carry-out, for a delay of n × 2 gate delays.

A design with alternating carry polarities and optimized AND-OR-Invert gates can be about
twice as fast.[3]
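A software sketch of the same structure in C (my own function name, 32 bits assumed) chains the full-adder equations so that each bit's carry-out feeds the next bit's carry-in, mirroring how the carry ripples in hardware:

#include <stdint.h>

/* N-bit ripple-carry addition: each bit position is a full adder whose
   carry-out feeds the next position's carry-in (the carry "ripples"). */
uint32_t ripple_carry_add(uint32_t a, uint32_t b, int *carry_out) {
    uint32_t result = 0;
    int carry = 0;                            /* Cin of bit 0 is 0, so a half adder would do here */
    for (int i = 0; i < 32; i++) {
        int ai = (a >> i) & 1;
        int bi = (b >> i) & 1;
        int s  = ai ^ bi ^ carry;             /* full-adder sum */
        carry  = (ai & bi) | (carry & (ai ^ bi)); /* full-adder carry */
        result |= (uint32_t)s << i;
    }
    if (carry_out) *carry_out = carry;        /* final Cout */
    return result;
}

Calling ripple_carry_add(10, (uint32_t)-10, NULL), for instance, returns 0, since this bit-level addition and the two's complement representation agree.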

4-bit adder with carry lookahead


To reduce the computation time, engineers devised faster ways to add two binary numbers
by using carry-lookahead adders. They work by creating two signals (P and G) for each bit
position, based on whether a carry is propagated through from a less significant bit position
(at least one input is a '1'), generated in that bit position (both inputs are '1'), or killed in
that bit position (both inputs are '0'). In most cases, P is simply the sum output of a half
adder and G is the carry output of the same adder. After P and G are generated, the carries
for every bit position are created. Some advanced carry-lookahead architectures are
the Manchester carry chain, Brent-Kung adder, and the Kogge-Stone adder.
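The generate/propagate idea can be sketched in C as follows (my own names, a 4-bit block assumed). Note that the loop computing the carries is written sequentially here for clarity, whereas the hardware expands those expressions so all carries are produced in parallel:

#include <stdint.h>

/* Carry-lookahead sketch for a 4-bit block: compute propagate (P) and
   generate (G) for each bit, then derive every carry from them directly
   instead of waiting for a ripple. */
uint8_t lookahead_add4(uint8_t a, uint8_t b, int cin, int *cout) {
    int p[4], g[4], c[5];
    for (int i = 0; i < 4; i++) {
        p[i] = ((a >> i) & 1) ^ ((b >> i) & 1);  /* propagate: the half-adder sum */
        g[i] = ((a >> i) & 1) & ((b >> i) & 1);  /* generate: the half-adder carry */
    }
    c[0] = cin;
    for (int i = 0; i < 4; i++)
        c[i + 1] = g[i] | (p[i] & c[i]);  /* expanded in parallel in real hardware */
    uint8_t sum = 0;
    for (int i = 0; i < 4; i++)
        sum |= (uint8_t)((p[i] ^ c[i]) << i);
    if (cout) *cout = c[4];
    return sum;
}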
Some other multi-bit adder architectures break the adder into blocks. It is possible to vary
the length of these blocks based on the propagation delay of the circuits to optimize
computation time. These block-based adders include the carry-skip (or carry-bypass)
adder, which will determine P and G values for each block rather than each bit, and the carry-select adder, which pre-generates the sum and carry values for either possible carry input (0
or 1) to the block, using multiplexers to select the appropriate result when the carry bit is
known.
Other adder designs include the carry-select adder, conditional sum adder, carry-skip adder,
and carry-complete adder.
Lookahead carry unit

A 64-bit adder
By combining multiple carry-lookahead adders, even larger adders can be created. This can
be applied at multiple levels to make even larger adders. For example, the adder pictured above is a
64-bit adder that uses four 16-bit CLAs with two levels of LCUs.
Carry-save adders
If an adding circuit is to compute the sum of three or more numbers it can be advantageous
to not propagate the carry result. Instead, three input adders are used, generating two
results: a sum and a carry. The sum and the carry may be fed into two inputs of the
subsequent 3-number adder without having to wait for propagation of a carry signal. After
all stages of addition, however, a conventional adder (such as the ripple carry or the
lookahead) must be used to combine the final sum and carry results.
3:2 compressors

We can view a full adder as a 3:2 lossy compressor: it sums three one-bit inputs and returns
the result as a single two-bit number; that is, it maps 8 input values to 4 output values. Thus,
for example, a binary input of 101 results in an output of 1 + 0 + 1 = 10 (decimal number '2').
The carry-out represents bit one of the result, while the sum represents bit zero. Likewise, a
half adder can be used as a 2:2 lossy compressor, compressing four possible inputs into
three possible outputs.
Such compressors can be used to speed up the summation of three or more addends. If the
addends are exactly three, the layout is known as the carry-save adder. If the addends are
four or more, more than one layer of compressors is necessary, and there are various
possible designs for the circuit: the most common are Dadda and Wallace trees. This kind of
circuit is most notably used in multipliers, which is why these circuits are also known as
Dadda and Wallace multipliers.
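A carry-save step, i.e. one layer of 3:2 compression, can be sketched in C like this (my own names, 32-bit words assumed): the three addends are reduced to a sum word and a carry word with no carry propagation, and a conventional adder combines the two at the end.

#include <stdint.h>

/* Carry-save step: compress three addends into a sum word and a carry word
   using the full-adder equations bitwise (a 3:2 compressor per bit position).
   No carry propagates inside this step. */
void carry_save_add(uint32_t x, uint32_t y, uint32_t z,
                    uint32_t *sum, uint32_t *carry) {
    *sum   = x ^ y ^ z;                          /* per-bit full-adder sum */
    *carry = ((x & y) | (x & z) | (y & z)) << 1; /* per-bit carry, weighted one position up */
}

/* To finish, the sum and carry words are combined once with a conventional
   adder (ripple-carry, lookahead, etc.): result = sum + carry. */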

Half subtractor

Logic diagram for a half subtractor


The half subtractor is a combinational circuit which is used to perform subtraction of two
bits. It has two inputs, the minuend X and subtrahend Y, and two outputs, the
difference D and borrow out Bout. The borrow out signal is set when the subtractor needs
to borrow from the next digit in a multi-digit subtraction. That is, Bout = 1
when X < Y. Since X and Y are bits, Bout = 1 if and only if X = 0 and Y = 1.
An important point worth mentioning is that the half subtractor diagram above
implements X - Y and not Y - X, since Bout on the diagram is given by Bout = (NOT X) AND Y.
This is an important distinction to make since subtraction itself is not commutative, but the
difference bit D is calculated using an XOR gate, which is commutative.
The truth table for the half subtractor is:

Inputs    Outputs
X  Y      D  Bout
0  0      0  0
0  1      1  1
1  0      1  0
1  1      0  0

Using the table above and a Karnaugh map, we find the following logic equations for D and Bout:
D = X XOR Y
Bout = (NOT X) AND Y.
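In C, those two equations become one XOR and one AND with an inverted input (a sketch with my own names); the test loop reproduces the truth table above:

#include <stdio.h>

/* Half subtractor on single bits, computing X - Y:
   difference D = X XOR Y, borrow-out Bout = (NOT X) AND Y. */
void half_subtractor(int x, int y, int *d, int *bout) {
    *d    = x ^ y;
    *bout = (x ^ 1) & y;
}

int main(void) {
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++) {
            int d, bout;
            half_subtractor(x, y, &d, &bout);
            printf("X=%d Y=%d -> D=%d Bout=%d\n", x, y, d, bout);
        }
    return 0;
}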
Full subtractor
The full subtractor is a combinational circuit which is used to perform subtraction of three
input bits: the minuend X, subtrahend Y, and borrow in Bin. The full subtractor
generates two output bits: the difference D and borrow out Bout. Bin is set when the
previous digit borrowed from X; thus, Bin is also subtracted from X as well as the
subtrahend Y. Or in symbols: X - Y - Bin. Like the half subtractor, the full subtractor
generates a borrow out when it needs to borrow from the next digit. Since we are
subtracting X by Y and Bin, a borrow out needs to be generated when X < Y + Bin.
When a borrow out is generated, 2 is added in the current digit. (This is similar to the
subtraction algorithm in decimal. Instead of adding 2, we add 10 when we borrow.)
Therefore, D = X - Y - Bin + 2 × Bout.

The truth table for the full subtractor is:


Inputs         Outputs
X  Y  Bin      D  Bout
0  0  0        0  0
0  0  1        1  1
0  1  0        1  1
0  1  1        0  1
1  0  0        1  0
1  0  1        0  0
1  1  0        0  0
1  1  1        1  1
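A C sketch of the full subtractor equations (my own names) that reproduces the truth table above:

#include <stdio.h>

/* Full subtractor on single bits, computing X - Y - Bin:
   D    = X XOR Y XOR Bin
   Bout = ((NOT X) AND Y) OR (Bin AND NOT (X XOR Y)) */
void full_subtractor(int x, int y, int bin, int *d, int *bout) {
    *d    = x ^ y ^ bin;
    *bout = ((x ^ 1) & y) | (bin & ((x ^ y) ^ 1));
}

int main(void) {
    /* Reproduce the truth table above. */
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            for (int bin = 0; bin <= 1; bin++) {
                int d, bout;
                full_subtractor(x, y, bin, &d, &bout);
                printf("X=%d Y=%d Bin=%d -> D=%d Bout=%d\n", x, y, bin, d, bout);
            }
    return 0;
}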
