Encoder

From Wikipedia, the free encyclopedia

An encoder is a device used to change a signal (such as a bitstream) or data into a code.
The code may serve any of a number of purposes such as compressing information for
transmission or storage, encrypting or adding redundancies to the input code, or
translating from one code to another. This is usually done by means of a programmed
algorithm, especially if any part is digital, while most analog encoding is done with
analog circuitry.

Contents

• 1 Single bit 4 to 2 Encoder
• 2 Priority encoder
• 3 Connecting Priority Encoders
• 4 Examples
• 5 See also
• 6 External links

Single bit 4 to 2 Encoder


An encoder has 2^n input lines and n output lines. The output lines generate a binary
code corresponding to the input value. For example, a single-bit 4-to-2 encoder takes in
4 bits and outputs 2 bits. It is assumed that only one of the four inputs is active at a
time, so the only valid input patterns are: 0001, 0010, 0100, 1000.

[Figure: gate-level circuit diagram of a single-bit 4-to-2 line encoder]

I3 I2 I1 I0 | O1 O0
 0  0  0  1 |  0  0
 0  0  1  0 |  0  1
 0  1  0  0 |  1  0
 1  0  0  0 |  1  1

Table: 4-to-2 encoder

The encoder has the limitation that only one input can be active at any given time. If two
inputs are active simultaneously, the output is an undefined combination. To prevent
this, a priority encoder is used.
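The truth table above reduces to two OR gates (O1 = I3 OR I2, O0 = I3 OR I1), which can be sketched behaviorally in Python. This is an illustrative model, not a gate-level netlist, and the function name is ours:

```python
def encoder_4to2(i3, i2, i1, i0):
    """Single-bit 4-to-2 encoder (behavioral sketch).

    Assumes exactly one input is active, as the truth table requires:
    O1 = I3 OR I2, O0 = I3 OR I1.
    """
    o1 = i3 | i2
    o0 = i3 | i1
    return o1, o0

# Each one-hot input maps to its binary index:
print(encoder_4to2(0, 0, 0, 1))  # (0, 0)
print(encoder_4to2(1, 0, 0, 0))  # (1, 1)
```

Feeding an invalid pattern such as 0110 into this sketch produces 11, illustrating why the single-active-input restriction matters.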

Priority encoder


A priority encoder is designed so that if two or more inputs are active at the same time,
the input having the highest priority takes precedence. The truth table of a single-bit
4-to-2 priority encoder is shown below.

I3 I2 I1 I0 | O1 O0
 0  0  0  d |  0  0
 0  0  1  d |  0  1
 0  1  d  d |  1  0
 1  d  d  d |  1  1

Table: 4-to-2 priority encoder (d = don't care)
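The don't-care entries correspond to an if/else chain that checks inputs from highest priority down, as in this behavioral sketch (the function name is ours):

```python
def priority_encoder_4to2(i3, i2, i1, i0):
    """4-to-2 priority encoder sketch; I3 has the highest priority.

    Once a higher-priority input is found, lower-priority inputs are
    ignored, matching the 'd' entries in the truth table.
    """
    if i3:
        return 1, 1
    if i2:
        return 1, 0
    if i1:
        return 0, 1
    return 0, 0

# I3 takes precedence even when several inputs are active at once:
print(priority_encoder_4to2(1, 0, 1, 1))  # (1, 1)
```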

Connecting Priority Encoders


Priority encoders can be easily connected in arrays to make larger encoders, such as a 16
to 4 encoder made from six 4 to 2 priority encoders (four encoders having the signal
source connected to their inputs, and two encoders that take the output of the first four as
input).
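The two-stage idea can be modeled behaviorally: first-stage encoders report a "valid" flag for each group of four inputs, a second-stage encoder on those flags picks the winning group, and the winning group's code supplies the low output bits. This is a functional sketch of the array, not the exact six-encoder gate wiring, and the names are ours:

```python
def pri4to2(i3, i2, i1, i0):
    """4-to-2 priority encoder with a 'valid' flag (I3 highest priority)."""
    if i3:
        return 1, 1, 1
    if i2:
        return 1, 0, 1
    if i1:
        return 0, 1, 1
    if i0:
        return 0, 0, 1
    return 0, 0, 0

def pri16to4(bits):
    """16-to-4 priority encoder built from 4-to-2 stages (behavioral sketch).

    bits[15] has the highest priority. Four first-stage encoders each handle
    a group of four inputs; a second-stage encoder on the group 'valid'
    flags picks the winning group, whose 2-bit code supplies the low bits.
    """
    groups = [bits[12:16], bits[8:12], bits[4:8], bits[0:4]]  # high to low
    stage1 = [pri4to2(g[3], g[2], g[1], g[0]) for g in groups]
    g1, g0, any_valid = pri4to2(stage1[0][2], stage1[1][2],
                                stage1[2][2], stage1[3][2])
    if not any_valid:
        return 0, 0, 0, 0
    o1, o0, _ = stage1[3 - (2 * g1 + g0)]  # select the winning group's code
    return g1, g0, o1, o0

bits = [0] * 16
bits[13] = 1
print(pri16to4(bits))  # (1, 1, 0, 1): binary 1101 for input line 13
```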

Data compression


"Source coding" redirects here. For the term in computer programming, see Source
code.

In computer science and information theory, data compression or source coding is the
process of encoding information using fewer bits (or other information-bearing units)
than an unencoded representation would use, through the use of specific encoding
schemes. For example, the ZIP file format, which provides compression, also acts as an
archiver, storing many source files in a single destination output file.
As with any communication, compressed data communication only works when both the
sender and receiver of the information understand the encoding scheme. For example,
this text makes sense only if the receiver understands that it is intended to be interpreted
as characters representing the English language. Similarly, compressed data can only be
understood if the decoding method is known by the receiver.

Compression is useful because it helps reduce the consumption of expensive resources,
such as hard disk space or transmission bandwidth. On the downside, compressed data
must be decompressed to be used, and this extra processing may be detrimental to some
applications. For instance, a compression scheme for video may require expensive
hardware for the video to be decompressed fast enough to be viewed as it's being
decompressed (the option of decompressing the video in full before watching it may be
inconvenient, and requires storage space for the decompressed video). The design of data
compression schemes therefore involves trade-offs among various factors, including the
degree of compression, the amount of distortion introduced (if using a lossy compression
scheme), and the computational resources required to compress and uncompress the data.

Contents

• 1 Lossless versus lossy compression
• 2 Applications
• 3 Theory
• 4 See also
  o 4.1 Data compression topics
  o 4.2 Compression algorithms
    - 4.2.1 Lossless data compression
    - 4.2.2 Lossy data compression
    - 4.2.3 Example implementations
  o 4.3 Corpora
• 5 References
• 6 External links

Lossless versus lossy compression


Lossless compression algorithms usually exploit statistical redundancy in such a way as
to represent the sender's data more concisely without error. Lossless compression is
possible because most real-world data has statistical redundancy. For example, in
English text, the letter 'e' is much more common than the letter 'z', and the probability that
the letter 'q' will be followed by the letter 'z' is very small.

Another kind of compression, called lossy data compression or perceptual coding, is
possible if some loss of fidelity is acceptable. Generally, lossy data compression will be
guided by research on how people perceive the data in question. For example, the human
eye is more sensitive to subtle variations in luminance than it is to variations in color.
JPEG image compression works in part by "rounding off" some of this less-important
information. Lossy data compression provides a way to obtain the best fidelity for a given
amount of compression. In some cases, transparent (unnoticeable) compression is
desired; in other cases, fidelity is sacrificed to reduce the amount of data as much as
possible.

Lossless compression schemes are reversible so that the original data can be
reconstructed, while lossy schemes accept some loss of data in order to achieve higher
compression.

However, lossless data compression algorithms will always fail to compress some files;
indeed, any compression algorithm will necessarily fail to compress any data containing
no discernible patterns. Attempts to compress data that has been compressed already will
therefore usually result in an expansion, as will attempts to compress encrypted data.

In practice, lossy data compression will also come to a point where compressing again
does not work, although an extremely lossy algorithm, like for example always removing
the last byte of a file, will always compress a file up to the point where it is empty.

An example of lossless vs. lossy compression is the following string:

25.888888888

This string can be compressed as:

25.[9]8

Interpreted as "twenty-five point nine eights", the original string is perfectly recreated,
just written in a smaller form. In a lossy system, using

26

instead, the original data is lost, with the benefit of a smaller file size.

Applications
The above is a very simple example of run-length encoding, wherein long runs of
consecutive identical data values are replaced by a simple code giving the data value and
the length of the run. This is an example of lossless data compression. It is often used to
optimize disk space on office computers, or to make better use of bandwidth in a
computer network. For symbolic data such as spreadsheets, text, executable programs,
etc., losslessness is essential because changing even a single bit cannot be tolerated
(except in some limited cases).
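The run-length idea can be sketched in a few lines of Python. This is a minimal illustration rather than a production codec, and the (count, value) pair representation is our own choice:

```python
def rle_encode(s):
    """Run-length encode a string into (count, char) pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][1] == ch:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, ch])      # start a new run
    return [(count, ch) for count, ch in runs]

def rle_decode(pairs):
    """Reverse the encoding: repeat each character `count` times."""
    return "".join(ch * count for count, ch in pairs)

encoded = rle_encode("25.888888888")
print(encoded)  # [(1, '2'), (1, '5'), (1, '.'), (9, '8')]
assert rle_decode(encoded) == "25.888888888"  # lossless: round-trips exactly
```

The assertion demonstrates the defining property of lossless compression: decoding recovers the input bit for bit.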
For visual and audio data, some loss of quality can be tolerated without losing the
essential nature of the data. By taking advantage of the limitations of the human sensory
system, a great deal of space can be saved while producing an output which is nearly
indistinguishable from the original. These lossy data compression methods typically offer
a three-way tradeoff between compression speed, compressed data size and quality loss.

Lossy image compression is used in digital cameras, to increase storage capacities with
minimal degradation of picture quality. Similarly, DVDs use the lossy MPEG-2 codec for
video compression.

In lossy audio compression, methods of psychoacoustics are used to remove non-audible
(or less audible) components of the signal. Compression of human speech is often
performed with even more specialized techniques, so that "speech compression" or "voice
coding" is sometimes distinguished as a separate discipline from "audio compression".
Different audio and speech compression standards are listed under audio codecs. Voice
compression is used in Internet telephony for example, while audio compression is used
for CD ripping and is decoded by audio players.

Theory
The theoretical background of compression is provided by information theory (which is
closely related to algorithmic information theory) and by rate-distortion theory. These
fields of study were essentially created by Claude Shannon, who published fundamental
papers on the topic in the late 1940s and early 1950s. Cryptography and coding theory are
also closely related. The idea of data compression is deeply connected with statistical
inference.

Many lossless data compression systems can be viewed in terms of a four-stage model.
Lossy data compression systems typically include even more stages, including, for
example, prediction, frequency transformation, and quantization.

The Lempel-Ziv (LZ) compression methods are among the most popular algorithms for
lossless storage. DEFLATE is a variation on LZ which is optimized for decompression
speed and compression ratio, although compression can be slow. DEFLATE is used in
PKZIP, gzip and PNG. LZW (Lempel-Ziv-Welch) is used in GIF images. Also
noteworthy are the LZR (LZ-Renau) methods, which serve as the basis of the Zip
method. LZ methods utilize a table-based compression model where table entries are
substituted for repeated strings of data. For most LZ methods, this table is generated
dynamically from earlier data in the input. The table itself is often Huffman encoded (e.g.
SHRI, LZX). A current LZ-based coding scheme that performs well is LZX, used in
Microsoft's CAB format.
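The table-based model can be illustrated with a compressor in the spirit of LZW: the table starts with all single bytes and grows dynamically from earlier input, and table indices are emitted in place of repeated strings. This is a simplified sketch, not the exact variant used in GIF (which adds fixed code widths and reset codes):

```python
def lzw_compress(data):
    """LZW-style compression sketch over a bytes input.

    The string table is generated dynamically from earlier data in the
    input; each emitted code is the table index of the longest match.
    """
    table = {bytes([i]): i for i in range(256)}  # start with single bytes
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = len(table)      # add the new string to the table
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes))  # 16 codes for 24 input bytes
```

Repeated substrings like "TOBEOR" are replaced by single table indices, which is where the compression comes from.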

The very best compressors use probabilistic models whose predictions are coupled to an
algorithm called arithmetic coding. Arithmetic coding, invented by Jorma Rissanen, and
turned into a practical method by Witten, Neal, and Cleary, achieves superior
compression to the better-known Huffman algorithm, and lends itself especially well to
adaptive data compression tasks where the predictions are strongly context-dependent.
Arithmetic coding is used in the bilevel image-compression standard JBIG, and the
document-compression standard DjVu. The text entry system Dasher is an inverse
arithmetic coder.

There is a close connection between machine learning and compression: a system that
predicts the posterior probabilities of a sequence given its entire history can be used for
optimal data compression (by using arithmetic coding on the output distribution), while
an optimal compressor can be used for prediction (by finding the symbol that compresses
best, given the previous history). This equivalence has been used as justification for data
compression as a benchmark for "general intelligence" [1].

Decoder

A decoder is a device which does the reverse of an encoder, undoing the encoding so that
the original information can be retrieved. The same method used to encode is usually just
reversed in order to decode.

In digital electronics this would mean that a decoder is a multiple-input, multiple-output
logic circuit that converts coded inputs into coded outputs, where the input and output
codes are different, e.g. n-to-2^n decoders and BCD decoders.

Enable inputs must be on for the decoder to function; otherwise its outputs assume a
single "disabled" output code word. Decoding is necessary in applications such as data
multiplexing, seven-segment displays, and memory address decoding.

The simplest decoder circuit would be an AND gate, because the output of an AND gate
is "high" (1) only when all its inputs are high; such an output is called an "active-high"
output. If a NAND gate is used instead of the AND gate, the output will be "low" (0)
only when all its inputs are high; such an output is called an "active-low" output.

Example: A 2-to-4 Line Single Bit Decoder

A slightly more complex decoder is the n-to-2^n binary decoder. These decoders are
combinational circuits that convert binary information from n coded inputs to a
maximum of 2^n unique outputs. We say a maximum of 2^n outputs because if the n-bit
coded information has unused bit combinations, the decoder may have fewer than 2^n
outputs. Examples include the 2-to-4, 3-to-8 and 4-to-16 decoders. We can form a
3-to-8 decoder from two 2-to-4 decoders (with enable signals).
Similarly, we can also form a 4-to-16 decoder by combining two 3-to-8 decoders. In this
type of circuit design, the enable inputs of both 3-to-8 decoders originate from a 4th
input, which acts as a selector between the two 3-to-8 decoders. This allows the 4th input
to enable either the top or bottom decoder, which produces outputs of D(0) through D(7)
for the first decoder, and D(8) through D(15) for the second decoder.
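The enable-based composition described above can be sketched behaviorally: the extra input selects which of the two smaller decoders is enabled, and the disabled one contributes the all-zero "disabled" code word. Function names are ours:

```python
def decoder_2to4(a1, a0, enable=1):
    """2-to-4 line decoder sketch with an active-high enable.

    When enable is 0, every output is 0 (the "disabled" code word);
    otherwise exactly one of the four outputs goes high.
    """
    if not enable:
        return [0, 0, 0, 0]
    out = [0, 0, 0, 0]
    out[2 * a1 + a0] = 1
    return out

def decoder_3to8(a2, a1, a0):
    """3-to-8 decoder from two 2-to-4 decoders: the extra input a2
    enables either the low half (D0-D3) or the high half (D4-D7)."""
    low = decoder_2to4(a1, a0, enable=(a2 == 0))
    high = decoder_2to4(a1, a0, enable=(a2 == 1))
    return low + high  # outputs D0..D7

print(decoder_3to8(1, 0, 1))  # D5 high: [0, 0, 0, 0, 0, 1, 0, 0]
```

The 4-to-16 case follows the same pattern, with a fourth input selecting between two 3-to-8 decoders.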

It is important to note that a decoder that contains enable inputs is also known as a
decoder-demultiplexer. Thus, we have a 4-to-16 decoder produced by adding a 4th input
shared among both decoders, producing 16 outputs.
