
ABSTRACT

Chaos functions have mainly been used to develop mathematical models of nonlinear systems. They have attracted the attention of many mathematicians owing to their extreme sensitivity to initial conditions and their wide applicability to everyday problems. In this project we investigate an algorithm for symmetric key cryptography based on one of the simplest and oldest chaotic functions, f(x) = 4·x·(1−x), which can be used for secure communication. This work aims to design and implement a hardware model of a chaos-based cryptosystem that can generate multiple keys for symmetric key cryptography. The encryption step proposed in the algorithm consists of just a simple XOR operation. In this context, implementation using re-programmable devices such as FPGAs is a highly attractive option, since they provide hardware agility, parameterization and cost-efficient development.

Contents

1. Introduction
1.1 Motivation
2. Description of Chaos and Cryptography
2.1 Basic Introduction to Chaos
2.1.1 Logistic Equation
2.1.2 Bifurcation Diagram and Logistic Map
2.2 Chaos and Cryptography
2.2.1 Confusion and Diffusion
2.3 Cryptography
2.3.1 Classification of Cryptosystems
2.3.1.1 Symmetric and Asymmetric Distribution Keys
2.3.1.2 Block and Stream Cipher
2.3.1.3 Hardware versus Software Implementation
2.3.2 Characteristics of a Cryptosystem
2.3.3 Analysis of Cryptographic Systems
2.3.3.1 Brute Force (Exhaustive Key Search)
2.3.3.2 Codebook
2.3.3.3 Differential Cryptanalysis

1. Introduction

The need for a reliable method of encryption has persisted throughout history. Encryption applications range from military and intelligence use to daily commercial activities. As technology has improved to allow for easier and better encryption and transmission, it has also allowed improvements in interception and cryptanalysis. Codes have become more advanced, progressing from simple character-replacement ciphers to today's algorithms of large pseudo-primes, exponents, and modular congruences. But the concept has remained simple: it is desirable to be able to send information from one point to another without anyone in the middle being able to understand it. Ideally, the encrypted information should contain no shadows of the original message that could be identified by careful observation. That is, the ideal code would encrypt a message so that it would be indistinguishable from random noise during transmission.

Most current research on symmetric key cryptography concentrates on block-based algorithms, in which blocks of the message are subjected to numerous rounds of computation with the help of one or more keys. Such algorithms are mainly based on the DES algorithm, which uses a 56-bit key on 64-bit blocks with 16 rounds of key-dependent computation [shein]. But with ever-increasing computation speeds, such algorithms have become more and more vulnerable. With currently available computational resources, an attacker can try each of the 2^56 possible DES keys in a matter of about 48 hours and thus crack the encrypted message. It is therefore no longer secure to use a single key to encrypt all data. The obvious solution is to use multiple keys, a different key for each block of data. But this has practical limitations, as all the sets of keys have to be maintained at both the sending and receiving ends. Moreover, unless the number of keys is extremely large, they will pose no real challenge to an attacker who, we assume, has unlimited resources at his disposal.

Another solution is to generate one-time pads for encryption with the help of a single key and various chaining algorithms. But since encryption algorithms are publicly known, this procedure depends critically on the security of the single key. The idea of using mathematical functions to generate multiple keys or one-time pads has been largely unexplored. Some such functions are suggested in [euro].

However, simple mathematical functions are not sufficient. It is always assumed that the encryption algorithm is public, which means that the function used to generate the multiple keys is known to the attacker as well. Once the attacker discovers one key, he immediately has access to all the others. This is where chaotic functions can play a major role. In this project an algorithm using the chaotic function f(x) = r·x·(1−x) is explored to generate multiple keys for symmetric key cryptography. The encryption step proposed in the algorithm consists of just a simple XOR operation, which should be sufficient unless a known plaintext-ciphertext attack is possible.
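The scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the hardware implementation developed in this project: the byte-extraction step (scaling each iterate to 0-255) and the burn-in length are illustrative choices, not part of the original algorithm.

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x).

    The first `burn_in` iterates are discarded, then each iterate is
    scaled to a byte (an illustrative extraction step).
    """
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def xor_crypt(data, x0, r=4.0):
    """Encrypt or decrypt: XOR with the chaotic keystream (symmetric)."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"attack at dawn"
ciphertext = xor_crypt(plaintext, x0=0.3141592653)
recovered = xor_crypt(ciphertext, x0=0.3141592653)
```

Because XOR is its own inverse, the same function serves for encryption and decryption; the initial value x0 (together with r) plays the role of the secret key.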

1.1 Motivation

The advent of the Internet has made the security of data and the protection of privacy a major cause of concern for everyone. Increasing efforts have therefore been made to use chaotic systems to enhance features of communication systems. The highly unpredictable and random-looking nature of chaotic signals is the most attractive feature of deterministic chaotic systems and may lead to novel engineering applications. Chaos and cryptography share some common features, the most prominent being sensitivity to changes in variables and parameters. During the past decade a large number of chaos-based encryption systems have been suggested and investigated. The idea is to exploit the complex dynamics, yet simple mathematical descriptions, of chaotic systems for the purpose of encryption.

2. Description of Chaos and Cryptography

2.1 Basic Introduction to Chaos

Innumerable definitions have been given in attempts to formally describe chaos and chaos theory as a branch of mathematics.

Let us begin this introduction to chaos with the mathematical problem of two bodies in space. By solving the system of differential equations that arises from applying Newton's law, the trajectories of the two bodies are completely described in space and time by the well-known law of gravity. This means that, given the initial conditions, all the parameters that govern the system, i.e. position, velocity and acceleration, can be determined for each of the two bodies at any time. Because of this, the system is said to be deterministic. When a third body is introduced into the system, the property of having a closed-form solution no longer holds. The system of differential equations still describes the behaviour of the three bodies completely, but the position of each element at a given time t can be calculated only by iterating the differential equations, written in discrete form, from the initial condition up to time t. In other words, the output of the previous step is the input of the next iteration, and a general solution cannot be expressed with a single equation. The system belongs to the category of non-linear systems.

Figure 2.2: Divergence due to sensitive dependence on initial conditions.

A third characteristic distinguishes the system of three bodies in space: density. A system is dense when, regardless of the distance between two legal points of the system, there always exists a third point between them. These properties lead, for reasons that will not be discussed here, to another property by which chaotic systems are characterized: sensitive dependence on initial conditions. To give a general idea of this dependence without entering the mathematical realm, suppose we record the trajectory followed by the system starting from the initial condition t0. Suppose we also choose another condition t1 very close, thanks to the denseness property, to t0. If the trajectory from t1 diverges sharply from that of t0 (see figure), it is impossible, having chosen a third initial condition t2, to predict what trajectory the system will follow. In this sense, the system appears to behave randomly. Summarizing the notions introduced so far, a chaotic system might be defined as a non-linear deterministic system so sensitive to initial conditions that it appears random. It is remarkable that two antonyms, deterministic and random, are used in the same sentence: chaos theory effectively forms a bridge between two dissimilar fields, mathematics and probability.

2.1.1 The Logistic Equation

The iterative equation

x_{n+1} = r · x_n · (1 − x_n)        ……… (1)

known as the logistic equation, is historically interesting as one of the earliest proposed sources of chaos functions. Ulam and von Neumann suggested its use in 1947, partly because it had a "known algebraic distribution". This means that even though the sequence of numbers generated by repeatedly applying the function to itself is not uniformly distributed, an algebraic transformation will yield the uniform distribution. The equation was mentioned again in 1949 by von Neumann, and much later in 1969 by Knuth, but it was never used for random number generation.

Lately, however, chaotic functions have caught the interest of researchers, as they have been found to possess numerous interesting properties. The iterative values generated by such functions are completely random in nature, although confined within bounds, and are never seen to converge no matter how many iterations are performed. The most fascinating aspect of these functions, however, is their extreme sensitivity to initial conditions. For example, even if the initial value is subjected to a disturbance as small as 10^-100, the iterates generated after some number of iterations are completely different [adcom]. It is this extreme sensitivity to initial conditions that makes chaotic functions very attractive for applications in cryptography.
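This divergence is easy to observe numerically. The sketch below is an illustration only: double-precision floats limit the perturbation to about 10^-10 rather than the 10^-100 quoted above, but the qualitative behaviour is the same.

```python
def logistic_orbit(x0, r, n):
    """Return the first n iterates of x -> r*x*(1-x) starting from x0."""
    orbit = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.3, 4.0, 60)
b = logistic_orbit(0.3 + 1e-10, 4.0, 60)  # perturbed seed

# The early iterates still agree closely ...
assert abs(a[0] - b[0]) < 1e-8
# ... but after about 40 iterations the two orbits have fully diverged.
assert max(abs(x - y) for x, y in zip(a[40:], b[40:])) > 0.1
```

The tiny initial error is roughly doubled at every step, so after a few dozen iterations the two orbits are statistically unrelated.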

2.1.2 Bifurcation Diagram and Logistic Map

This is a plot of the parameter r against the values obtained after some number of iterations. For 0 < r < 3, the function is seen to converge to a particular value after some number of iterations. As r is increased just beyond 3, the curve splits into two branches: the values generated by the function now oscillate between two different values. As the parameter r is increased further, the curves bifurcate again and the oscillations now occur among 4 values. As r increases further still, the bifurcations come faster and faster: 8, 16, then 32 values. Beyond a certain value of r known as the "point of accumulation", periodicity gives way to complete chaos. This occurs for r > 3.57. The chaotic values generated at this point are seen to be restricted to two separate bands. As r is increased further, the two bands merge into a single band. Moreover, the range over which chaotic values are produced grows steadily as r is increased. Finally, at r = 4, we observe that chaotic values are generated over the complete range from 0 to 1. It is this point that we are interested in.

Figure: The bifurcation diagram for the logistic equation

Therefore, the chaos function that we investigate in this project for generating multiple keys and one-time pads for symmetric key encryption is r·x·(1−x). As mentioned earlier, a slight difference in the initial value x0 leads to a substantial difference in the iterates obtained. For an error as small as the order of 10^-30, differences greater than 0.0625 can theoretically be achieved after about 100 iterations [adcom].

A clearer picture can be obtained by dividing the parameter r into three segments and analyzing the behavior of Equation 1; refer to the figure. When r ∈ [0,3], as shown in Fig. (a), the iterates settle to the same value after several iterations, without any chaotic behavior. When r ∈ [3,3.57], the phase space contains only a few points, as shown in Fig. (b): the system is periodic. When r ∈ [3.57,4], it becomes a chaotic system and the periodicity disappears, as shown in Fig. (c). Note also that x_n takes values in the interval [0,1]. So we can draw the following conclusions:

(1) When r ∈ [0,3.57], the points concentrate on a few values and cannot be used for our cryptosystem.

(2) For r ∈ [3.57,4], the logistic map exhibits chaotic behavior, so it can be used for our cryptosystem.

Fig 2: Analysis of the logistic map. Iteration behaviour for (a) r = 2.8, (b) r = 3.2, (c) r = 3.8.
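The three regimes in Fig 2 can be checked numerically. The sketch below is an illustrative check (not from the report) using the same three values of r:

```python
def iterate(x0, r, n):
    """Apply x -> r*x*(1-x) to x0 a total of n times."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# (a) r = 2.8: the orbit converges to the fixed point x* = 1 - 1/r.
x = iterate(0.3, 2.8, 1000)
assert abs(x - (1 - 1 / 2.8)) < 1e-6

# (b) r = 3.2: the orbit settles onto a period-2 cycle,
# so the value repeats every two steps.
x = iterate(0.3, 3.2, 1000)
assert abs(x - iterate(x, 3.2, 2)) < 1e-6

# (c) r = 3.8: chaotic regime - two nearly identical seeds
# end up at different values.
a = iterate(0.3, 3.8, 1000)
b = iterate(0.3 + 1e-9, 3.8, 1000)
assert a != b
```

The fixed point 1 − 1/r and the period-2 check follow from Equation 1 directly; for r ∈ [3.57,4] no such simple prediction exists, which is exactly the property exploited for key generation.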

2.2 Chaos and Cryptography

Chaos has attracted much attention in the field of cryptography due to its deterministic nature and its sensitivity to initial values. Such properties mean that chaos has real potential for creating new ways of securing important information to be transmitted or stored.

The close relationship between chaos and cryptography makes chaos-based cryptographic algorithms a natural candidate for the design of encryption techniques suited to practical use, as these techniques offer a good combination of speed, high security, complexity and reasonable computational overhead.

Chaos describes a system which is sensitive to initial conditions and generates apparently random behaviour while being completely deterministic, as already explained above. These properties have much potential for applications in cryptography, as it is hard to make long-term predictions about chaotic systems.

Firstly, being completely deterministic means that we can always obtain the same sequence of values provided we have exactly the same mapping function and initial value. Unlike conventional true random number generators, where a string of random numbers cannot be regenerated, chaos allows us to reproduce the same string of numbers as long as we have the mapping function and the initial value used. The apparent randomness of the system also makes attacks such as the 'codebook' attack impractical.

Next, since chaotic functions are sensitive to initial conditions, any slight difference in the initial value used will produce drastically different ciphertext. This makes the system strong against brute force attacks, as the number of possible keys is astronomical, provided the precision of the initial values, which depends on the hardware used, is high.

2.2.1 Confusion and Diffusion

The main objective of cryptology is to achieve perfect secrecy [shein], whereby no information about the plaintext can be extracted from the ciphertext. The only cryptographic algorithm capable of such performance is the one-time pad, in which each character of the plain message is enciphered with exactly one random number drawn from a truly random sequence that may be used only once, for only one message. Since the one-time pad has only theoretical validity and every other cipher is an approximation of it, every ciphertext unavoidably yields some information about the corresponding plaintext. In part this is due to the redundancy of natural language, i.e. the fact that a plaintext contains more symbols than are necessary to convey the same amount of information. A good algorithm will tend to reduce redundancy to a minimum. According to [shein], who cites Shannon, the two basic techniques for concealing redundancy, besides using a compression algorithm, are called confusion and diffusion.

Confusion seeks to reduce the correlation between the input plaintext and the output ciphertext. The task is generally accomplished by substituting every fundamental block of data with another one according to the rules dictated by the cryptographic algorithm. Despite this, repetitions or well-known sequences of blocks in the plaintext are still preserved at the output. This problem is addressed by diffusion: data at one position of the input block is transposed to other coordinates of the output block. Put another way, diffusion changes the position of the data, while in a confusion process the data itself is modified. Note that diffusion implies a block cipher, whereas confusion can also operate on streams of data.
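As a toy illustration of the two techniques (not part of the proposed cryptosystem; the substitution and the permutation below are arbitrary examples), confusion can be modelled as a keyed byte substitution and diffusion as a permutation of byte positions:

```python
# Confusion: substitute each byte (here a simple keyed additive substitution).
def confuse(block, k):
    return bytes((b + k) % 256 for b in block)

# Diffusion: transpose byte positions according to a fixed permutation.
def diffuse(block, perm):
    return bytes(block[i] for i in perm)

block = b"HELLOHELLO"
perm = [9, 3, 7, 1, 5, 0, 8, 2, 6, 4]

# Substitution alone preserves the repeated "HELLO" pattern;
# the transposition step then breaks it up.
c = diffuse(confuse(block, 42), perm)
```

Both steps are invertible given the key and the permutation, so the receiver can undo the diffusion and then the confusion to recover the plaintext.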

2.3 Cryptography

The word cryptography refers to the science of keeping messages exchanged between a sender and a receiver secret over an insecure channel. The objective is achieved by encoding data so that it can only be decoded by specific individuals. The original message M to be sent is called the plaintext, since it is clearly intelligible, whereas the message C transmitted over the insecure channel is called the ciphertext. The process E of transforming a plaintext into a ciphertext is called encryption, while the opposite procedure D, which turns a ciphertext back into a plaintext at the receiver's side, is called decryption. In symbols:

E (M) = C

D (C) = M

A cryptographic algorithm is composed of the mathematical function used for encryption and its related inverse function for decryption. A cryptographic algorithm is sometimes referred to as a cipher. The security of an algorithm can rely on the secrecy of its function when quality, standardization and mass utilization are not a concern. Where these restrictions cannot be tolerated (i.e. in practically any real situation), the problem is solved by means of a key, denoted k. This key might be any one of a large number of values, which together form the keyspace K. The security of a cryptosystem largely depends on the strength of its key.

Fig 1 Encryption and decryption with a key

Two different keys, ke for encryption and kd for decryption, might be used. In symbols:

E_ke (M) = C

D_kd (C) = M

Finally, a cryptosystem is an algorithm plus all possible plaintexts, ciphertexts and keys.
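For the XOR-based encryption step used in this project, E and D are the same operation, since (M ⊕ k) ⊕ k = M. A minimal sketch, using an illustrative single-byte key rather than the chaos-generated keys of the actual scheme:

```python
def E(m, k):
    """Encryption: XOR every byte of the message with the key byte k."""
    return bytes(b ^ k for b in m)

D = E  # decryption is the identical operation, because (b ^ k) ^ k == b

M = b"secret"
k = 0x5A
C = E(M, k)
assert D(C, k) == M   # D(E(M)) = M
assert C != M
```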

2.3.1 Classification of Cryptosystems

As seen, the definitions and notation introduced in the previous section can be summarized by the general concept of a cryptosystem, pictured in figure 1. Depending on the method of key distribution, the way a cipher treats the plaintext, and the type of implementation support chosen, a cryptosystem can be viewed from several points of view. A look at these will give us an idea of where the present algorithm sits within the vast field of cryptography.

2.3.1.1 Symmetric and Asymmetric Distribution Keys

The first broad classification of cryptographic algorithms is by the method with which keys are distributed. When the encryption key ke and the decryption key kd are identical, sender and receiver must agree on a secure channel through which to transmit the key without anybody else finding out. This is the most widely used method and is often described as symmetric because of the equivalence of the keys.

By contrast, the asymmetric method makes use of a pair of keys for each individual: one public and the other private.

2.3.1.2 Block and Stream Cipher

Another broad classification of cryptographic algorithms subdivides ciphers into two categories: stream ciphers and block ciphers. A stream cipher is so called because it works on a stream of data, normally one bit (but sometimes one byte or 32 bits) at a time: as soon as a new plaintext value arrives, the corresponding cipher value is computed. A block cipher, on the other hand, operates on the plaintext one block at a time: a new block of ciphertext cannot be evaluated until the previous block is finished. Moreover, a block cipher always encrypts the same plaintext under the same key to the same ciphertext, while for a stream cryptosystem the output also depends on the history of the cipher, which leads to the problem of synchronization between the encryption and decryption processes.

A cipher can operate in several cryptographic modes, which govern how the plaintext, key and ciphertext interact. Electronic Codebook (ECB) mode is the most straightforward and simple solution for a block cipher. Once the key is fixed, the system will always encrypt the same block of plaintext into the same block of ciphertext, regardless of any other parameter. This mode can be thought of as a double-entry look-up table. While implementations can be extremely fast, this mode is also very memory-demanding, since a table is necessary for every pair of plaintext and ciphertext blocks and for each key k. The counterpart of ECB is Cipher Block Chaining (CBC) mode, in which each new ciphertext block depends, by means of a feedback mechanism, on the previous outputs.
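The difference between the two modes can be sketched with a deliberately toy one-byte "block cipher" (an arbitrary affine map, chosen only so the example is self-contained):

```python
def block_encrypt(b):
    """Toy one-byte 'block cipher' (an affine map), for illustration only."""
    return (7 * b + 13) % 256

def ecb(blocks):
    # ECB: identical plaintext blocks give identical ciphertext blocks.
    return [block_encrypt(b) for b in blocks]

def cbc(blocks, iv):
    # CBC: each plaintext block is XORed with the previous ciphertext
    # block (the IV for the first block) before being encrypted.
    out, prev = [], iv
    for b in blocks:
        c = block_encrypt(b ^ prev)
        out.append(c)
        prev = c
    return out

pt = [0x41, 0x41, 0x41]             # three identical plaintext blocks
assert len(set(ecb(pt))) == 1       # ECB leaks the repetition
assert len(set(cbc(pt, iv=1))) > 1  # CBC hides it
```

The chaining feedback is what makes repeated plaintext blocks encrypt differently, at the cost of the synchronization issues mentioned above.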

Besides ECB and CBC, Cipher Feedback (CFB) is a mode that runs a block cipher as a stream cipher. This means that the output values of the cryptosystem are serialized as in a stream cipher, but depend on previously computed values as in a block cipher. The mechanism used to realize this mode generally consists of a shift register into which new values are pushed and on which the encryption algorithm operates.

Another solution consists of using a Pseudo-Random Number Generator (PRNG). It is worth noting that in any case the mechanism has to be initialized with an initialization vector which, together with the key, affects the encipherment output for a given sequence of plaintext data. This means that the ciphertext depends on previous blocks, as in a stream cipher. Nevertheless, if the initialization vector depends on the key and the keys ke and kd match, there are no synchronization problems and the ciphertext can be correctly deciphered.

2.3.1.3 Hardware versus Software Implementation

The classes of algorithms described in this subsection do not have well-defined boundaries; that is, an algorithm can move from one class to another owing to a technological improvement or a smarter implementation. In fact, any cryptographic algorithm might potentially be designed either as a hardware project or as a software program, but often the operations involved render one of the two solutions (and sometimes both) impracticable or inconvenient. A software implementation has, on its side, flexibility across different applications, portability from one platform to another, ease of use and ease of upgrade of the binary or source code. Its disadvantages are speed, especially if the algorithm belongs to the category of stream ciphers, and ease of modification and manipulation by third parties.

On the other hand, a hardware implementation suffers mainly from limited mathematical capability, since operations such as multiplication and division are normally difficult or cost-prohibitive to realize. Nonetheless, the advantages amply outweigh the inadequacies. The first is speed: dedicated hardware, possibly a separate chip beside the main CPU, will always win a speed race against a general-purpose processor, especially if the cryptographic algorithm is a kind of stream cipher. Besides speed, security plays a great role. Dedicated hardware presents a physical barrier that must be surmounted before internal variables can be read. Code can be burned into the chip, and tamper-proofing can prevent someone from modifying a hardware encryption device [shein]: chemical substances can be used to destroy the chip's logic if a third party probes its interior. A final advantage is the ease of installation as a simple device between two existing peripherals.

2.3.2 Characteristics of a Cryptosystem

A good information security system is able to protect confidential messages not only in text form but also in image form. In general, there are three basic characteristics in the information security field: privacy, integrity, and availability.

1. Privacy: An unauthorized user cannot disclose the message.

2. Integrity: An unauthorized user cannot modify or corrupt the message.

3. Availability: The messages are made faithfully available to authorized users.

A perfect cryptosystem is not only flexible in its security mechanism but also offers high overall performance. Thus, besides the above characteristics, it must also satisfy the following:

1. The encryption system should be computationally secure: it must require an extremely long computation time to break. Unauthorized users should not be able to read the protected images.

2. Encryption and decryption should be fast enough not to degrade system performance. The algorithms for encryption and decryption must be simple enough to be carried out by users on a personal computer.

3. The security mechanism should be as widely applicable as possible. It must be widely acceptable to design the cryptosystem as a commercial product.

4. The security mechanism should be flexible.

5. There should not be a large expansion of the encrypted data, so as not to strain memory. The system should also be able to work on compressed files.

2.3.3 Analysis of Cryptographic Systems

There is no theory that proves the strength of any conventional cipher; therefore, ciphers have traditionally been regarded as "strong" when they have been in use for a long time with no known easy method of breaking them. Cryptanalysis seeks to improve this process by testing ciphers against certain known attack strategies and by looking for new ones. But while cryptanalysis can show the weakness of a cipher against certain attacks, it cannot prove that no simpler attack exists: "lack of proof of weakness is not proof of strength". We cannot assume that a particular cryptographic system is strong just because we cannot find a weakness in it, for there may be vulnerabilities not yet discovered that will be exposed as technology advances. We can only demonstrate the strength of the system against the attacks known at the moment.

There are various methods that code-breakers use to break or compromise cryptographic systems. Some of the more common ones are described below.

2.3.3.1 Brute Force (Exhaustive Key Search)

Try every possible key on the encrypted message (ciphertext) until readable messages are produced. This means that the longer or more sophisticated the key, the more effort is needed to break the cipher. The required computing power increases exponentially with the length of the key: a 32-bit key takes 2^32 steps to cover all possible combinations, which can be performed within 24 hours on modern computers.
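With a deliberately tiny keyspace the attack is trivial. The sketch below uses a hypothetical 16-bit toy cipher (`toy_encrypt` and `brute_force` are illustrative names, not a real cipher) and recovers the key by trying all 2^16 candidates:

```python
def toy_encrypt(data, key16):
    """Toy cipher: XOR each byte with alternating halves of a 16-bit key."""
    k = [(key16 >> 8) & 0xFF, key16 & 0xFF]
    return bytes(b ^ k[i % 2] for i, b in enumerate(data))

def brute_force(ciphertext, known_plaintext):
    """Exhaustive key search: try all 2**16 keys until the plaintext appears."""
    for key in range(1 << 16):
        # XOR is self-inverse, so 'decryption' is toy_encrypt again.
        if toy_encrypt(ciphertext, key) == known_plaintext:
            return key
    return None

secret_key = 0xBEEF
ct = toy_encrypt(b"attack at dawn", secret_key)
assert brute_force(ct, b"attack at dawn") == secret_key
```

The same loop over a 56-bit or 128-bit keyspace is what the exponential cost estimate above refers to: each extra key bit doubles the work.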

2.3.3.2 Codebook

One simply tries to build or collect a codebook of all the possible transformations between plaintext (the original message) and ciphertext under a single key. This is the classic approach we normally think of as "code-breaking". Such attacks can be defeated if the plaintext data are randomized and thus evenly and independently distributed among the possible values.

2.3.3.3 Differential Cryptanalysis

Cryptanalysts choose pairs of plaintexts such that there is a specified difference between the members of each pair, and then study the difference between the corresponding pairs of ciphertexts. Statistics of these plaintext-pair/ciphertext-pair differences can yield information about the key used in the encryption.

References

1. W. Stallings, "Cryptography and Network Security: Principles and Practice," Prentice-Hall, New Jersey, 1999.

2. Bruce Schneier, "Applied Cryptography: Protocols, Algorithms, and Source Code in C," John Wiley & Sons, Inc., New York, second edition, 1996.

3. David A. Patterson, John L. Hennessy, "Computer Organization and Design," Elsevier, San Francisco, 2007.

4. Hosam El-din et al., "An Efficient Chaos-based Feedback Stream Cipher for Image Encryption and Decryption," Informatica, 2007.

5. J. Bhaskar, "A VHDL Primer," Prentice Hall, New Jersey, 1999.

6. Charles H. Roth, "Introduction to VHDL," 2003.

7. Proceedings of the Asia and South Pacific Design Automation Conference 2000 (ASP-DAC'00).
