
1. INTRODUCTION

1.1 MOTIVATION:
Over the past decade, there has been a notable increase in the widespread use of personal digital assistants, mobile phones with advanced functionality, and consumer devices that contain limited yet non-trivial processors. Such devices have had, and will continue to have, a significant effect on the flow of social and business communication. At the same time, we have seen personal computers handling more and more highly sensitive data such as bank and business transactions. The next logical step is for these limited devices to handle such transactions on the go in a secure fashion. Human nature and the pace of business demand that all of these interactions take place effectively and efficiently. Thus follows the need for an efficient, lightweight way of securely establishing a secret key for communication between two parties.

Key exchange protocols have existed since men have kept secrets from other men. Stories of Julius Caesar and his armies encrypting messages are legendary to those who know the history of espionage. The appropriately named Caesar cipher is a simple algorithm which, given a simple key, maps out an alternate alphabet by shifting each letter of a message a fixed number of positions to the left. If the key is 1, then any b in your message becomes an a, any c becomes a b, any d becomes a c, and so on. The result is a scrambled mess of a message: the encrypted, or ciphertext, message.

The receiving party simply shifts each letter of the encrypted message one letter to the right, recovering the decrypted, or plaintext, message. The key can be any number between 1 and 26. Ignoring the fact that there are only 26 possible keys, this simple encryption algorithm depends on each party knowing the secret key. In the old days, this would most likely be arranged by a trusted courier or, in the worst case, a personal conversation between the two communicating parties.
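To make the shift concrete, here is a minimal sketch in Java, the implementation language of this project (the class and method names are our own illustration):

    public class CaesarCipher {
        // Shift each lower-case letter left by `key` positions to encrypt,
        // as described above; shifting right by the same amount decrypts.
        static String encrypt(String text, int key) {
            StringBuilder out = new StringBuilder();
            for (char ch : text.toCharArray()) {
                if (ch >= 'a' && ch <= 'z') {
                    out.append((char) ('a' + ((ch - 'a' - key) % 26 + 26) % 26));
                } else {
                    out.append(ch); // leave non-letters unchanged
                }
            }
            return out.toString();
        }

        static String decrypt(String text, int key) {
            return encrypt(text, -key); // decryption is the opposite shift
        }

        public static void main(String[] args) {
            String c = encrypt("attack at dawn", 1);
            System.out.println(c);             // "zsszbj zs czvm"
            System.out.println(decrypt(c, 1)); // "attack at dawn"
        }
    }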

Modifications of this algorithm allowed for more complex keys, which could consist of words or larger numbers. Given this technique, a key could be a previously agreed word, such as a last name, a country, or a favorite food. This was still very easy to break, and it still required a prior agreement on a key-establishment method. Later advancements in encryption led to more complex mathematical structures requiring hundreds, thousands, or millions of calculations, which realistically demands efficient computational machines such as computers.

Algorithms used today, such as RSA or Diffie-Hellman, depend on modular exponentiation of very large numbers, which would take an enormous amount of time if done by hand. These algorithms can be used in such a way that two parties end up with the same data without revealing that data over an open communication channel. These are called public key crypto-systems. Public key crypto-systems are based on the concept of a one-way function.

To understand one-way functions, we must first grasp the concept of a two-way function. An example of a two-way function is addition. With the equation A + B = C, and given A and B, it is easy to calculate C. Likewise, if we are given B and C, we can easily compute A by using the inverse of addition, subtraction. This gives us the equation C - B = A. A one-way function, on the other hand, is one where it is not computationally feasible to come up with one of the arguments given the remaining arguments and the solution. An example of a one-way function is modular exponentiation. A function of the form y = x^n (mod p) is a one-way function. Given x, y, and p, we would be unable to efficiently determine n if we were using large enough values. This computationally intensive process is the basis and starting point for this paper.
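This asymmetry can be seen directly with the standard java.math.BigInteger class; the following is a hedged sketch (the parameter sizes are illustrative only):

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class OneWayDemo {
        public static void main(String[] args) {
            SecureRandom rnd = new SecureRandom();
            BigInteger p = BigInteger.probablePrime(512, rnd); // large prime modulus
            BigInteger x = new BigInteger(256, rnd);           // base
            BigInteger n = new BigInteger(256, rnd);           // secret exponent

            // Forward direction: y = x^n mod p computes in milliseconds...
            BigInteger y = x.modPow(n, p);
            System.out.println("y = " + y.toString(16));
            // ...but recovering n from x, y, and p (the discrete logarithm)
            // has no known efficient algorithm at these sizes.
        }
    }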

1.2 PROBLEM DEFINITION:

In this project we are implementing a system that uses the ECC technique to provide encryption offering the same security as DH but with a much smaller key size. This is a great benefit to small devices, which have less computing power. Another remarkable advantage of the ECC technique over DH is that it requires only the addition of points over an elliptic curve, compared to the modular exponentiation required to implement the DH algorithm.

1.3 OBJECTIVE OF THE PROJECT

This project gives an introduction to elliptic curve cryptography and how it is used in the implementation of key agreement algorithms. This report discusses the implementation of ECC over a prime field. Our implementation attempts to measure the performance of the DH and ECC algorithms in terms of key generation times and encryption times.

1.4 LIMITATIONS OF THE PROJECT:

Even though our goal was to analyze ECC and DH for their applicability to mobile platforms, we are comparing the times on desktops only. A future enhancement will be to run these algorithms on actual mobile devices.

1.5 ORGANIZATION OF PROJECT:

The organization of the document is as follows:

 Perform a literature survey of the application, which includes the analysis of the existing system, the proposed system, and the various features and functionalities provided by the application developed as part of the project.
 Discuss the requirement specification, which includes the system perspective, specific requirements pertaining to the user, software and hardware requirements, functions and the function point analysis, along with non-functional requirements like performance, safety, security and reliability analysis.
 Discuss the design of the system through the use of UML and flow diagrams.
 Highlight the implementation issues with an explanation of key functions and an in-depth analysis of the methods of implementation.
 Discuss the test cases used to test and validate the system. Give the conclusion and future enhancements made to the application.

2. LITERATURE REVIEW
2.1 INTRODUCTION:
2.1.1 Cryptography:

In the cryptography world, the message that needs to be secured is called plaintext or clear text. The scrambled form of the message is called ciphertext. The process of converting a plaintext into ciphertext is called encryption. The process of reconverting the ciphertext into plaintext is called decryption.

In the information age, cryptography has become one of the major methods of protection in all applications. Cryptography allows people to carry over the confidence found in the physical world to the electronic world. It allows people to do business electronically without worries of deceit and deception. In the distant past, cryptography was used to assure only secrecy; wax seals, signatures, and other physical mechanisms were typically used to assure integrity of the message and authenticity of the sender.

When people started doing business online and needed to transfer funds electronically, the applications of cryptography for integrity began to surpass its use for secrecy. Hundreds of thousands of people interact electronically every day, whether through e-mail, e-commerce (business conducted over the Internet), ATMs, or cellular phones. The constant increase of information transmitted electronically has led to an increased reliance on cryptography and authentication.

There are many aspects to security and many applications, ranging from secure commerce and payments to private communications and protecting passwords. One essential aspect of secure communications is cryptography; but it is important to note that while cryptography is necessary for secure communications, it is not by itself sufficient.

2.1.2 Purpose of Cryptography:
Cryptography is the science of writing in secret code and is an ancient art; the first documented use of cryptography in writing dates back to circa 1900 B.C., when an Egyptian scribe used non-standard hieroglyphs in an inscription. Some experts argue that cryptography appeared spontaneously sometime after writing was invented, with applications ranging from diplomatic missives to wartime battle plans. It is no surprise, then, that new forms of cryptography came soon after the widespread development of computer communications. In data and telecommunications, cryptography is necessary when communicating over any untrusted medium, which includes just about any network, particularly the Internet.
Within the context of any application-to-application communication, there are
some specific security requirements, including:

• Authentication: The process of proving one's identity. (The primary forms of host-to-host authentication on the Internet today are name-based or address-based, both of which are notoriously weak.)
• Privacy/confidentiality: Ensuring that no one can read the message except the intended receiver.
• Integrity: Assuring the receiver that the received message has not been altered in any way from the original.
• Non-repudiation: A mechanism to prove that the sender really sent this message.

Cryptography, then, not only protects data from theft or alteration, but can also be used
for user authentication. There are, in general, three types of cryptographic schemes
typically used to accomplish these goals: secret key (or symmetric) cryptography, public-
key (or asymmetric) cryptography, and hash functions, each of which is described below.
In all cases, the initial unencrypted data is referred to as plaintext. It is encrypted
into ciphertext, which will in turn (usually) be decrypted into usable plaintext.

In many of the descriptions below, two communicating parties will be referred to as Alice and Bob; this is the common nomenclature in the crypto field and literature to make it easier to identify the communicating parties. If there is a third or fourth party to the communication, they will be referred to as Carol and Dave. Mallory is a malicious party, Eve is an eavesdropper, and Trent is a trusted third party.

2.1.3 Types of cryptographic algorithms:

There are several ways of classifying cryptographic algorithms. For purposes of this paper, they will be categorized based on the number of keys that are employed for encryption and decryption, and further defined by their application and use. The three types of algorithms are:

• Secret Key Cryptography (SKC): Uses a single key for both encryption and decryption
• Public Key Cryptography (PKC): Uses one key for encryption and another for decryption
• Hash Functions: Uses a mathematical transformation to irreversibly "encrypt" information

Secret Key Cryptography:

With secret key cryptography, a single key is used for both encryption and
decryption. The sender uses the key (or some set of rules) to encrypt the plaintext and
sends the cipher text to the receiver. The receiver applies the same key (or rule set) to
decrypt the message and recover the plaintext. Because a single key is used for both
functions, secret key cryptography is also called symmetric encryption.

With this form of cryptography, it is obvious that the key must be known to both
the sender and the receiver; that, in fact, is the secret. The biggest difficulty with this
approach, of course, is the distribution of the key.

Secret key cryptography schemes are generally categorized as being either stream
ciphers or block ciphers. Stream ciphers operate on a single bit (byte or computer word)
at a time and implement some form of feedback mechanism so that the key is constantly
changing. A block cipher is so-called because the scheme encrypts one block of data at a
time using the same key on each block. In general, the same plaintext block will always
encrypt to the same cipher text when using the same key in a block cipher whereas the
same plaintext will encrypt to different cipher text in a stream cipher.

Stream ciphers come in several flavors but two are worth mentioning here. Self-
synchronizing stream ciphers calculate each bit in the key stream as a function of the
previous n bits in the key stream. It is termed "self-synchronizing" because the decryption
process can stay synchronized with the encryption process merely by knowing how far
into the n-bit key stream it is. One problem is error propagation; a garbled bit in
transmission will result in n garbled bits at the receiving side. Synchronous stream
ciphers generate the key stream in a fashion independent of the message stream but by
using the same key stream generation function at sender and receiver. While stream
ciphers do not propagate transmission errors, they are, by their nature, periodic so that the
key stream will eventually repeat.

Block ciphers can operate in one of several modes; the following four are the
most important:

• Electronic Codebook (ECB) mode is the simplest, most obvious application: the secret key is used to encrypt the plaintext block to form a ciphertext block. Two identical plaintext blocks, then, will always generate the same ciphertext block. Although this is the most common mode of block ciphers, it is susceptible to a variety of brute-force attacks.

• Cipher Block Chaining (CBC) mode adds a feedback mechanism to the encryption scheme. In CBC, the plaintext is exclusive-ORed (XORed) with the previous ciphertext block prior to encryption. In this mode, two identical blocks of plaintext never encrypt to the same ciphertext.
• Cipher Feedback (CFB) mode is a block cipher implementation as a self-synchronizing stream cipher. CFB mode allows data to be encrypted in units smaller than the block size, which might be useful in some applications such as encrypting interactive terminal input. If we were using 1-byte CFB mode, for example, each incoming character is placed into a shift register the same size as the block, encrypted, and the block transmitted. At the receiving side, the ciphertext is decrypted and the extra bits in the block (i.e., everything above and beyond the one byte) are discarded.

• Output Feedback (OFB) mode is a block cipher implementation conceptually similar to a synchronous stream cipher. OFB prevents the same plaintext block from generating the same ciphertext block by using an internal feedback mechanism.
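The ECB/CBC contrast can be demonstrated with the standard javax.crypto API. The following is a hedged sketch (the transformation strings are standard JCE names; NoPadding is chosen only so the two-block structure stays visible):

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;
    import java.security.SecureRandom;
    import java.util.Arrays;

    public class ModeDemo {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            byte[] block = "16-byte block!!!".getBytes("US-ASCII"); // one AES block
            byte[] twoBlocks = new byte[32];
            System.arraycopy(block, 0, twoBlocks, 0, 16);  // two identical
            System.arraycopy(block, 0, twoBlocks, 16, 16); // plaintext blocks

            // ECB: identical plaintext blocks yield identical ciphertext blocks.
            Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding");
            ecb.init(Cipher.ENCRYPT_MODE, key);
            byte[] c1 = ecb.doFinal(twoBlocks);
            System.out.println("ECB blocks equal: "
                + Arrays.equals(Arrays.copyOfRange(c1, 0, 16),
                                Arrays.copyOfRange(c1, 16, 32))); // true

            // CBC: XOR with the previous ciphertext block breaks the pattern.
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv);
            Cipher cbc = Cipher.getInstance("AES/CBC/NoPadding");
            cbc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
            byte[] c2 = cbc.doFinal(twoBlocks);
            System.out.println("CBC blocks equal: "
                + Arrays.equals(Arrays.copyOfRange(c2, 0, 16),
                                Arrays.copyOfRange(c2, 16, 32))); // false
        }
    }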

Public-Key Cryptography:

Public-key cryptography has been said to be the most significant new development in cryptography in the last 300-400 years. Modern PKC was first described publicly by Stanford University professor Martin Hellman and graduate student Whitfield Diffie in 1976. Their paper described a two-key crypto system in which two parties could engage in a secure communication over a non-secure communications channel without having to share a secret key.

PKC depends upon the existence of so-called one-way functions, or mathematical functions that are easy to compute whereas their inverse function is relatively difficult to compute. Let me give you two simple examples:

1. Multiplication vs. factorization: Suppose I tell you that I have two numbers, 9 and 16, and that I want to calculate the product; it should take almost no time to calculate the product, 144. Suppose instead that I tell you that I have a number, 144, and I need you to tell me which pair of integers I multiplied together to obtain that number. You will eventually come up with the solution, but whereas calculating the product took milliseconds, factoring will take longer because you first need to find the 8 pairs of integer factors and then determine which one is the correct pair.
2. Exponentiation vs. logarithms: Suppose I tell you that I want to take the number 3 to the 6th power; again, it is easy to calculate 3^6 = 729. But if I tell you that I have the number 729 and want you to tell me the two integers that I used, x and y, so that log_x 729 = y, it will take you longer to find all possible solutions and select the pair that I used.

While the examples above are trivial, they do represent two of the functional pairs
that are used with PKC; namely, the ease of multiplication and exponentiation versus the
relative difficulty of factoring and calculating logarithms, respectively. The mathematical
"trick" in PKC is to find a trap door in the one-way function so that the inverse
calculation becomes easy given knowledge of some item of information.

Generic PKC employs two keys that are mathematically related, although knowledge of one key does not allow someone to easily determine the other key. One key is used to encrypt the plaintext and the other key is used to decrypt the ciphertext. The important point here is that it does not matter which key is applied first, but that both keys are required for the process to work. Because a pair of keys is required, this approach is also called asymmetric cryptography.

In PKC, one of the keys is designated the public key and may be advertised as widely as the owner wants. The other key is designated the private key and is never revealed to another party. It is straightforward to send messages under this scheme. Suppose Alice wants to send Bob a message. Alice encrypts some information using Bob's public key; Bob decrypts the ciphertext using his private key. This method could also be used to prove who sent a message; Alice, for example, could encrypt some plaintext with her private key; when Bob decrypts using Alice's public key, he knows that Alice sent the message and Alice cannot deny having sent the message (non-repudiation).

Public-key cryptography algorithms that are in use today for key exchange or
digital signatures include:

RSA:

The first, and still most common, PKC implementation, named for the three MIT mathematicians who developed it — Ronald Rivest, Adi Shamir, and Leonard Adleman. RSA today is used in hundreds of software products and can be used for key exchange, digital signatures, or encryption of small blocks of data. RSA uses a variable-size encryption block and a variable-size key. The key pair is derived from a very large number, n, that is the product of two prime numbers chosen according to special rules; these primes may be 100 or more digits in length each, yielding an n with roughly twice as many digits as the prime factors. The public key information includes n and a derivative of one of the factors of n; an attacker cannot determine the prime factors of n (and, therefore, the private key) from this information alone, and that is what makes the RSA algorithm so secure. (Some descriptions of PKC erroneously state that RSA's safety is due to the difficulty in factoring large prime numbers. In fact, large prime numbers, like small prime numbers, only have two factors!) The ability of computers to factor large numbers, and therefore attack schemes such as RSA, is rapidly improving, and systems today can find the prime factors of numbers with more than 200 digits. Nevertheless, if a large number is created from two prime factors that are roughly the same size, there is no known factorization algorithm that will solve the problem in a reasonable amount of time; a 2005 test to factor a 200-digit number took 1.5 years and over 50 years of compute time. Regardless, one presumed protection of RSA is that users can easily increase the key size to always stay ahead of the computer processing curve. As an aside, the patent for RSA expired in September 2000, which does not appear to have affected RSA's popularity one way or the other.
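The key derivation just described can be written out as a textbook sketch with java.math.BigInteger. This is illustrative only; real RSA implementations add padding and many safeguards, and with random primes the common exponent 65537 is assumed (near-certainly) coprime to phi:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class RsaKeySketch {
        public static void main(String[] args) {
            SecureRandom rnd = new SecureRandom();
            // Two primes of roughly equal size, as the text recommends.
            BigInteger p = BigInteger.probablePrime(512, rnd);
            BigInteger q = BigInteger.probablePrime(512, rnd);
            BigInteger n = p.multiply(q);                 // public modulus
            BigInteger phi = p.subtract(BigInteger.ONE)
                              .multiply(q.subtract(BigInteger.ONE));
            BigInteger e = BigInteger.valueOf(65537);     // common public exponent
            BigInteger d = e.modInverse(phi);             // private exponent

            // Encrypt and decrypt a small message m < n.
            BigInteger m = new BigInteger("42");
            BigInteger c = m.modPow(e, n);
            System.out.println(c.modPow(d, n));           // prints 42
        }
    }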

DH:

Whitfield Diffie and Martin Hellman devised an amazing solution to the problem of key agreement, or key exchange, in 1976. This solution is called the Diffie-Hellman Key Exchange/Agreement Algorithm. Diffie-Hellman key exchange (D-H) is a cryptographic protocol that allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure communications channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher. The beauty of this scheme is that the two parties, who want to communicate securely, can agree on a symmetric key using this technique. This key can then be used for encryption/decryption. However, we must note that the Diffie-Hellman key exchange algorithm can be used only for key agreement, not for encryption or decryption of messages. Once both parties agree on the key to be used, they need to use other symmetric key encryption algorithms.

Although the Diffie-Hellman key exchange algorithm is based on mathematical principles, it is quite simple to understand. The Diffie-Hellman key exchange algorithm gets its security from the difficulty of calculating discrete logarithms in a finite field, as compared with the ease of calculating exponentiation in the same field. Although Diffie-Hellman key agreement itself is an anonymous (non-authenticated) key agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provide perfect forward secrecy in Transport Layer Security's ephemeral modes.

Authenticated two-party Diffie-Hellman key exchange allows two principals A and B, communicating over a public network, each holding a pair of matching public/private keys, to agree on a session key. Protocols designed to deal with this problem ensure A (resp. B) that no other principals aside from B (resp. A) can learn any information about this value. These protocols additionally often ensure A and B that their respective partner has actually computed the shared secret value.

Hash Functions:

Hash functions, also called message digests and one-way encryption, are algorithms that, in some sense, use no key. Instead, a fixed-length hash value is computed based upon the plaintext, which makes it impossible for either the contents or length of the plaintext to be recovered. Hash algorithms are typically used to provide a digital fingerprint of a file's contents, often used to ensure that the file has not been altered by an intruder or virus. Hash functions are also commonly employed by many operating systems to encrypt passwords. Message Digest (MD) algorithms are one well-known family: a series of byte-oriented algorithms that produce a fixed-length hash value from an arbitrary-length message. Hash functions are sometimes misunderstood, and some sources claim that no two files can have the same hash value. This is, in fact, not correct. Consider a hash function that provides a 128-bit hash value. There are, obviously, 2^128 possible hash values. But there are a lot more than 2^128 possible files. Therefore, there have to be multiple files — in fact, there have to be an infinite number of files! — that can have the same 128-bit hash value.

The difficulty is finding two files with the same hash! What is, indeed, very hard to do is to create a file that has a given hash value so as to force a hash value collision, which is the reason that hash functions are used extensively for information security and computer forensics applications. Alas, researchers in 2004 and 2005 found that practical collision attacks could be launched on MD5, SHA-1, and other hash algorithms.
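Computing such a digital fingerprint takes one call through the standard java.security.MessageDigest class; a small sketch (the input string is arbitrary):

    import java.security.MessageDigest;

    public class HashDemo {
        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest("The quick brown fox".getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            System.out.println(hex); // fixed-length fingerprint, any input length
            // Changing even one input byte yields a completely different digest.
        }
    }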

ECC:
ECC relies on an algebra based on curves of the form y^2 = x^3 + ax + b. Similar levels of security can be obtained by using numbers that are several times smaller than the numbers used in the Diffie-Hellman (DH) algorithm. It is believed that ECC gives remarkably similar levels of security at a fraction of the cost of DH. With a further understanding of ECC, small devices may utilize the technology to exchange encryption keys. These keys may be used to encrypt bank transactions, credit card numbers, and other sensitive information.

Encryption of a value using ECC is conceptually simple. The idea is that you take a random point on the curve and add it to itself some number of times (your plaintext), using a specialized algebra created specifically for elliptic curves; multiplication over elliptic curves is simply repeated addition of a point to itself. Using this concept, we are able to encode an integer onto the curve by multiplying the point by the integer we wish to encode. So far, all that has been discussed is the underlying concepts that make up ECC.
To use ECC in reality, the actual encryption algorithm that sits on top of the ECC algebra is as follows:
1. Alice and Bob agree to use a curve, a base point c on it, and a prime modulus p.
2. Alice chooses a secret integer a, computes c * a, and sends this new point over to Bob.
3. Bob chooses a secret integer b, computes c * b, and sends this new point over to Alice.
4. Alice computes (c * b) * a and has the secret point.
5. Bob computes (c * a) * b and has the secret point.

The analogy between the normal discrete logarithm problem and the elliptic curve logarithm problem centers on the basic operations: multiplication for the normal discrete logarithm and addition of points for elliptic curves. The main operation is exponentiation in the discrete logarithm setting and scalar multiplication for elliptic curves. This scalar multiplication is the result of calculations which are not modular exponentiation. This is the main reason that elliptic curve cryptography boasts such significant gains over normal Diffie-Hellman.
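As a hedged toy illustration of the five steps (textbook-sized numbers, far too small for real use): take the curve y^2 = x^3 + 2x + 2 over the integers modulo 17 with base point c = (5, 1). If Alice picks a = 3 she sends c * 3 = (10, 6), and if Bob picks b = 9 he sends c * 9 = (7, 6). Alice then computes (c * 9) * 3 = c * 27 = (13, 7), and Bob computes (c * 3) * 9 = c * 27 = (13, 7); both hold the same secret point although neither a nor b ever crossed the channel.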

Key Management:

One of the most secure cryptosystems known is also one of the simplest - the one-
time pad. A random stream of bits is xor'ed into the plaintext to produce the cipher text,
requiring one random bit for each bit of plaintext. The key is the stream of random bits
itself, which must be known to both the sender and receiver, and which is discarded after
a single use. A simple implementation would be to generate the random bits from an
electronic white noise source, write the bits to a CD-ROM, and make two copies only.
Such a pair of CD-ROMs could encode about 600MB of data before they would be used
up and have to be rewritten or destroyed.
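The XOR mechanics are a few lines of Java; this sketch uses an in-memory pad rather than a CD-ROM, but the principle is the same:

    import java.security.SecureRandom;

    public class OneTimePad {
        public static void main(String[] args) throws Exception {
            byte[] plaintext = "meet at noon".getBytes("US-ASCII");
            byte[] pad = new byte[plaintext.length];
            new SecureRandom().nextBytes(pad); // one random bit per plaintext bit

            byte[] ciphertext = new byte[plaintext.length];
            for (int i = 0; i < plaintext.length; i++)
                ciphertext[i] = (byte) (plaintext[i] ^ pad[i]); // encrypt

            byte[] recovered = new byte[ciphertext.length];
            for (int i = 0; i < ciphertext.length; i++)
                recovered[i] = (byte) (ciphertext[i] ^ pad[i]); // same XOR decrypts

            System.out.println(new String(recovered, "US-ASCII")); // "meet at noon"
            // The pad must never be reused; it is destroyed after a single use.
        }
    }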

Of course, the great problem with a one-time pad is key management. In this example, the CD-ROMs containing the key would have to be physically transported to both the sender and receiver. The key couldn't be transmitted across the network, because that would expose it to capture and violate the single-use principle of the one-time pad. Furthermore, precautions might need to be taken to ensure that the CD-ROMs weren't surreptitiously copied en route. Once used, the CD-ROMs couldn't just be discarded, because the used key might then fall into malicious hands and be used to decrypt previously recorded crypto text. These issues arise for perhaps the simplest cryptosystem ever devised, so imagine how much more complex key management becomes for practical systems.

Two major issues in key management are key lifetime and key exposure. The lifetime of a key is the limit of its use, which can be measured as a duration of time or as the amount of data encrypted with the key. Every time a key is used, crypto text is generated and transmitted, potentially into the hands of an attacker trying to cryptanalyze the traffic and discover the plaintext. The more crypto text an attacker possesses, the more information he possesses and the better his chances of unraveling the key and discovering its secrets. Therefore, lifetime and exposure are closely related concepts. Long-duration keys should be infrequently used, and oft-used keys should have a short, carefully limited lifetime.

To meet the conflicting demands of encrypting a significant amount of data using a long-lived key, keys are usually chained. For example, consider an email message encrypted with PGP, using the recipient's RSA public key to ensure that only he can read the message. An RSA key pair used to establish identity would be a fairly long-lived key, since the public key would probably be published and made available for anyone wanting to send the owner secure email. Therefore, the entire message would not be encrypted with this key. Instead, the sender would randomly generate a block cipher key, perhaps a 128-bit key for IDEA or a 168-bit key for Triple DES, and encrypt the message using this key. Then, the block cipher key (only 128 or 168 bits) would be encrypted using the RSA public key and attached to the message in a header. An attempt to crack the message would have less than a hundred bytes of RSA crypto text to work with. Even if the main body of the message could be cracked to reveal the block cipher key, this would only compromise the single message, not any other message encoded using the RSA key. Key chaining is a common technique used by almost every major cryptographic protocol.
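A minimal sketch of key chaining with the JCE wrap/unwrap API follows (the key sizes and transformation string are our choices; a long-lived RSA key wraps a fresh per-message AES key, standing in for the PGP header described above):

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.security.Key;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.util.Arrays;

    public class KeyChaining {
        public static void main(String[] args) throws Exception {
            // Long-lived asymmetric identity key.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair rsa = kpg.generateKeyPair();

            // Fresh per-message symmetric key: only this short key is RSA-encrypted.
            SecretKey sessionKey = KeyGenerator.getInstance("AES").generateKey();

            Cipher wrap = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            wrap.init(Cipher.WRAP_MODE, rsa.getPublic());
            byte[] wrapped = wrap.wrap(sessionKey); // goes into the message header

            Cipher unwrap = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            unwrap.init(Cipher.UNWRAP_MODE, rsa.getPrivate());
            Key recovered = unwrap.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
            System.out.println(Arrays.equals(recovered.getEncoded(),
                                             sessionKey.getEncoded())); // true
            // The message body itself would be encrypted with the AES session key.
        }
    }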

Another key management issue is the encryption of the key itself. Long-lived keys used only periodically (PGP keys for encrypting and authenticating email, for example) are often encrypted to protect them while stored on disk. The encryption is typically done using a conventional symmetric cipher, its key formed from the hash of a user-supplied pass phrase. To use the key, its owner must enter the pass phrase when prompted. The entered pass phrase is then hashed, and the hash used to decrypt the key, which is formatted so that if the wrong pass phrase is provided, the decryption will produce gibberish, which won't match the format. Use of the key thus requires knowledge of the pass phrase. Obviously, a key stored in this manner can't be used by automated processes.
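A minimal sketch of the idea, hashing the pass phrase to form an AES key (real systems use a salted, iterated derivation such as PBE rather than a bare hash):

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;
    import java.security.MessageDigest;

    public class PassphraseKey {
        public static void main(String[] args) throws Exception {
            // Hash the pass phrase to form a symmetric key, as described above.
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest("correct horse battery".getBytes("UTF-8"));
            SecretKeySpec key = new SecretKeySpec(hash, 0, 16, "AES"); // 128-bit key

            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, key);
            byte[] protectedKey = c.doFinal("long-lived key material".getBytes("UTF-8"));
            // A wrong pass phrase hashes to a different key; decryption then
            // yields gibberish that fails the expected format check.
            System.out.println(protectedKey.length + " bytes stored on disk");
        }
    }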

2.2 EXISTING SYSTEM:

Current security applications use the RSA and DH algorithms to provide encryption while doing key exchange. Both of these algorithms rely on large prime numbers and modular exponentiation as the key operations in their implementation. These operations are quite time consuming, and hence might not be suitable for small devices.

2.3 PROPOSED SYSTEM:

In this project we are implementing a system that uses the ECC technique to provide encryption offering the same security as DH but with a much smaller key size. This is a great benefit to small devices, which have less computing power. Another remarkable advantage of the ECC technique over DH is that it requires only the addition of points over an elliptic curve, compared to the modular exponentiation required to implement the DH algorithm.

2.4 FEASIBILITY STUDY:


2.4.1 ECONOMIC FEASIBILITY:
Economic feasibility attempts to weigh the costs of developing and implementing a new system against the benefits that would accrue from having the new system in place. This feasibility study gives the top management the economic justification for the new system. A simple economic analysis which gives the actual comparison of costs and benefits is much more meaningful in this case. In addition, it proves to be a useful point of reference to compare actual costs as the project progresses. There could be various types of intangible benefits on account of automation. These could include increased customer satisfaction, improvement in product quality, better and more timely decision making, expedited activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale.

2.4.2 OPERATIONAL FEASIBILITY:

A proposed project is beneficial only if it can be turned into an information system that will meet the organization's operating requirements. Simply stated, this test of feasibility asks whether the system will work when it is developed and installed, and whether there are major barriers to implementation. Here are questions that help test the operational feasibility of a project: Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that persons will not be able to see reasons for change, there may be resistance. Are the current business methods acceptable to the users? If they are not, users may welcome a change that will bring about a more operational and useful system. Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system and in general increases the likelihood of a successful project. Since the proposed system helps to reduce the hardships encountered in the existing manual system, the new system was considered to be operationally feasible.

2.4.3 TECHNICAL FEASIBILITY:

Evaluating the technical feasibility is the trickiest part of a feasibility study. This is because, at this point in time, not much detailed design of the system has been done, making it difficult to assess issues like performance and costs (on account of the kind of technology to be deployed). A number of issues have to be considered while doing a technical analysis: understand the different technologies involved in the proposed system; before commencing the project, be very clear about which technologies are required for the development of the new system; and find out whether the organization currently possesses the required technologies.

2.5 SOFTWARE PROFILE:
TECHNOLOGY:
INTRODUCTION TO JAVA:

Java has been around since 1991, developed by a small team of Sun Microsystems
developers in a project originally called the Green project. The intent of the project was
to develop a platform-independent software technology that would be used in the
consumer electronics industry. The language that the team created was originally called
Oak.
The first implementation of Oak was in a PDA-type device called Star Seven (*7)
that consisted of the Oak language, an operating system called GreenOS, a user interface,
and hardware. The name *7 was derived from the telephone sequence that was used in
the team's office and that was dialed in order to answer any ringing telephone from any
other phone in the office.
Around the time the First Person project was floundering in consumer electronics,
a new craze was gaining momentum in America; the craze was called "Web surfing." The
World Wide Web, a name applied to the Internet's millions of linked HTML documents
was suddenly becoming popular for use by the masses. The reason for this was the
introduction of a graphical Web browser called Mosaic, developed by NCSA.
The browser simplified Web browsing by combining text and graphics into a
single interface to eliminate the need for users to learn many confusing UNIX and DOS
commands. Navigating around the Web was much easier using Mosaic.
It has only been since 1994 that Oak technology has been applied to the Web. In 1994, two Sun developers created the first version of HotJava, then called WebRunner, which is a graphical browser for the Web that exists today. The browser was coded entirely in the Oak language, by this time called Java. Soon after, the Java compiler was rewritten in the Java language from its original C code, thus proving that Java could be used effectively as an application language. Sun introduced Java in May 1995 at the SunWorld 95 convention.

Web surfing has become an enormously popular practice among millions of computer users. Until Java, however, the content of information on the Internet had been a bland series of HTML documents. Web users are hungry for applications that are interactive, that users can execute no matter what hardware or software platform they are using, and that travel across heterogeneous networks and do not spread viruses to their computers. Java can create such applications.

WORKING OF JAVA:
For those who are new to object-oriented programming, the concept of a class will be new to you. Simplistically, a class is the definition for a segment of code that can contain both data and functions.
When the interpreter executes a class, it looks for a particular method by the name of main, which will sound familiar to C programmers. The main method is passed an array of strings as a parameter (similar to the argv[] of C) and is declared as a static method. To output text from the program, we execute the println method of System.out, which is Java's standard output stream. UNIX users will appreciate the theory behind such a stream, as it is actually standard output. For those who are instead used to the Wintel platform, it will write the string passed to it to the user's screen.
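The canonical minimal program illustrates both points at once:

    // A class whose static main method receives the command-line arguments
    // (like C's argv[]) and writes to the standard output stream System.out.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello from Java");
        }
    }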
Compiling and interpreting Java Source Code:

During run time, the Java interpreter tricks the byte code into thinking that it is running on a Java Virtual Machine; in reality, the underlying hardware could be an Intel Pentium running Windows 95 or a Sun SPARC station.

[Figure: Java source code is compiled into platform-independent byte code, which is then executed by a platform-specific Java interpreter (PC, Macintosh, SPARC, and so on).]

Figure 6. Java Architecture

Machines running Solaris, Apple Macintosh systems, and others can all receive code from any computer through the Internet and run the applets.

SWING:
INTRODUCTION TO SWING:
Swing contains all the components. It's a big library, but it's designed to have appropriate complexity for the task at hand: if something is simple, you don't have to write much code, but as you try to do more, your code becomes increasingly complex. This means an easy entry point, but you've got the power if you need it.
Swing has great depth. This section does not attempt to be comprehensive, but instead introduces the power and simplicity of Swing to get you started using the library. Please be aware that what you see here is intended to be simple.

If you need to do more, then Swing can probably give you what you want if
you’re willing to do the research by hunting through the online documentation from Sun.

BENEFITS OF SWING:

Swing components are Beans, so they can be used in any development environment that supports Beans. Swing provides a full set of UI components. For speed, all the components are lightweight, and Swing is written entirely in Java for portability.
One benefit of Swing could be called "orthogonality of use": once you pick up the general ideas about the library, you can apply them everywhere, primarily because of the Beans naming conventions.
Keyboard navigation is automatic: you can use a Swing application without the mouse, and you don't have to do any extra programming. Scrolling support is effortless: you simply wrap your component in a JScrollPane as you add it to your form. Other features such as tool tips typically require a single line of code to implement.
Swing also supports something called "pluggable look and feel," which means that the appearance of the UI can be dynamically changed to suit the expectations of users working under different platforms and operating systems. It's even possible to invent your own look and feel.
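These claims can be seen in a few lines. This small sketch wraps a text area in a JScrollPane and attaches a tool tip with a single call each:

    import javax.swing.JFrame;
    import javax.swing.JScrollPane;
    import javax.swing.JTextArea;

    public class SwingDemo {
        public static void main(String[] args) {
            JFrame frame = new JFrame("Swing demo");
            JTextArea area = new JTextArea(10, 40);
            area.setToolTipText("Type here");     // tool tip: one line of code
            frame.add(new JScrollPane(area));     // scrolling: wrap in JScrollPane
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        }
    }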

Java Swing Class Hierarchy:

The class JComponent, descended directly from Container, is the root class for most of Swing's user interface components.
Swing contains components that you'll use to build a GUI. Some of the commonly used Swing components are listed below. To learn and understand these Swing programs, AWT programming knowledge is not required.

Fig: Java Swing Class Hierarchy

Swing introduced a mechanism that allowed the look and feel of every component
in an application to be altered without making substantial changes to the application code.
The introduction of support for a pluggable look and feel allows Swing components to
emulate the appearance of native components while still retaining the benefits of platform
independence. This feature also makes it easy to make an application written in Swing
look very different from native programs if desired.

Debugging:
Swing application debugging can be difficult because of the toolkit's visual nature. In contrast to non-visual applications, GUI applications cannot be as easily debugged using step-by-step debuggers. One of the reasons is that Swing normally performs painting into an off-screen buffer (double buffering) first and then copies the entire result to the screen. This makes it impossible to observe the impact of each separate graphical operation on the user interface using a general-purpose Java debugger.
There are also some common problems related to the painting thread. Swing uses the AWT event dispatching thread for painting components. In accordance with Swing standards, all components must be accessed only from the AWT event dispatch thread. If the application violates this rule, it may cause unpredictable behavior. If long-running operations are performed in the AWT event dispatch thread, repainting of the Swing user interface temporarily becomes impossible, causing screen freezes.
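The standard way to honor this rule is SwingUtilities.invokeLater, shown here with an anonymous Runnable in the JDK 1.6 style used by this project:

    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    public class EdtDemo {
        public static void main(String[] args) {
            final JLabel status = new JLabel();
            // Hand all component updates to the AWT event dispatch thread.
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    status.setText("updated safely on the EDT");
                }
            });
            // Long-running work belongs on a background thread, never on the
            // EDT, or repainting stalls and the UI appears frozen.
        }
    }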

3. SYSTEM ANALYSIS

3.1 INTRODUCTION:
System analysis is the first stage according to the System Development Life Cycle model. System analysis is the process that starts with the analyst.
Analysis is a detailed study of the various operations performed by a system and their relationships within and outside the system. One aspect of analysis is defining the boundaries of the system and determining whether or not a candidate system should consider other related systems. During analysis, data is collected from the available files, decision points and transactions handled by the present system.
Logical system models and tools are used in analysis. Training, experience and common sense are required for collection of the information needed to do the analysis.

In this project we are implementing a system that uses the ECC technique to provide encryption offering the same security as DH but with a much smaller key size. This is a great benefit to small devices, which have less computing power. Another remarkable advantage of the ECC technique over DH is that it requires only the addition of points over an elliptic curve, compared to the modular exponentiation required to implement the DH algorithm.

3.2 Algorithms Used:

• DH Algorithm
• ECC Algorithm

DH Algorithm
The DH algorithm relies on the concept, described above, of a one-way function called modular exponentiation of large numbers. The security of DH relies on the difficulty of solving the Diffie-Hellman problem, which is a variation of the discrete logarithm problem. The discrete logarithm problem is to find a such that x = g^a (mod p), given x, g, and p, where p is a large prime number and g a generator for a large subgroup of Z_p. The Diffie-Hellman problem is to find g^ab (mod p) given x = g^a (mod p) and y = g^b (mod p), g, and p (but not a and b). Since there is no efficient algorithm known which can solve this problem, the Diffie-Hellman algorithm can be used as an encryption algorithm given a large enough prime number.

The DH implementation is as follows:

1. Alice and Bob agree to use a prime number p and a generator g from Z_p.
2. Alice chooses a secret integer a, computes g^a (mod p), and sends this over to Bob.
3. Bob chooses a secret integer b, computes g^b (mod p), and sends this over to Alice.
4. Alice computes (g^b (mod p))^a (mod p) and has the secret key.
5. Bob computes (g^a (mod p))^b (mod p) and has the secret key.

Since exponents commute under modular arithmetic, (g^a (mod p))^b (mod p) and (g^b (mod p))^a (mod p) are identical, yet neither a nor b has been sent over the insecure communication channel. It must be noted that the use of DH usually depends on extremely large prime numbers; the numbers must be large to ensure sufficient security. This is where the problem of DH comes in: while it is very secure, it is also very costly.
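The five steps map directly onto java.math.BigInteger. The following is a toy-sized sketch (real deployments use standardized, much larger parameters; g = 2 is chosen purely for illustration and is not guaranteed to generate a large subgroup of a randomly chosen prime):

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class DiffieHellmanSketch {
        public static void main(String[] args) {
            SecureRandom rnd = new SecureRandom();
            BigInteger p = BigInteger.probablePrime(512, rnd); // public prime
            BigInteger g = BigInteger.valueOf(2);              // public generator

            BigInteger a = new BigInteger(256, rnd);           // Alice's secret
            BigInteger b = new BigInteger(256, rnd);           // Bob's secret

            BigInteger A = g.modPow(a, p); // step 2: Alice sends g^a mod p
            BigInteger B = g.modPow(b, p); // step 3: Bob sends g^b mod p

            BigInteger aliceKey = B.modPow(a, p); // step 4: (g^b)^a mod p
            BigInteger bobKey   = A.modPow(b, p); // step 5: (g^a)^b mod p
            System.out.println(aliceKey.equals(bobKey)); // true: shared secret
        }
    }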

ECC Algorithm
ECC relies on an algebra based on curves of the form y^2 = x^3 + ax + b. Similar levels of security can be obtained by using numbers that are several times smaller than the numbers used in the Diffie-Hellman (DH) algorithm. It is believed that ECC gives remarkably similar levels of security at a fraction of the cost of DH. With a further understanding of ECC, small devices may utilize the technology to exchange encryption keys. These keys may be used to encrypt bank transactions, credit card numbers, and other sensitive information.

Encryption of a value using ECC is conceptually simple. The idea is that you take a random point on the curve and add it to itself some number of times (your plaintext), using a specialized algebra created specifically for elliptic curves; multiplication over elliptic curves is simply repeated addition of a point to itself. Using this concept, we are able to encode an integer onto the curve by multiplying the point by the integer we wish to encode. So far, all that has been discussed is the underlying concepts that make up ECC.

To use ECC in reality, the actual encryption algorithm that sits on top of the ECC algebra is as follows:
1. Alice and Bob agree to use a curve, a base point c on it, and a prime modulus p.
2. Alice chooses a secret integer a, computes c * a, and sends this new point over to Bob.
3. Bob chooses a secret integer b, computes c * b, and sends this new point over to Alice.
4. Alice computes (c * b) * a and has the secret point.
5. Bob computes (c * a) * b and has the secret point.

The analogy between the normal discrete logarithm problem and the elliptic curve logarithm problem centers on the basic operations: multiplication for the normal discrete logarithm and addition of points for elliptic curves. The main operation is exponentiation in the discrete logarithm setting and scalar multiplication for elliptic curves. This scalar multiplication is the result of calculations which are not modular exponentiation. This is the main reason that elliptic curve cryptography boasts such significant gains over normal Diffie-Hellman.

3.3 REQUIREMENT SPECIFICATION:


The requirements specification is a technical specification of requirements for the software product. It is the first step in the requirements analysis process; it lists the requirements of a particular software system, including functional, performance and security requirements. The requirements also provide usage scenarios from a user, an operational and an administrative perspective. The purpose of the software requirements specification is to provide a detailed overview of the software project, its parameters and goals. It describes the project's target audience and its user interface, hardware and software requirements. It defines how the client, team and audience see the project and its functionality.

3.3.1 User requirement:

This module is used to run the key-exchange experiments over a given set of inputs. The user is presented with a graphical user interface through which the experiments can be carried out on the given datasets.

3.3.1.1 Functional requirements:


DATA MANAGEMENT:
This part is made up of gathering the datasets that are required to test the performance of the ECC and DH algorithms.

DESIGN OF EXPERIMENTS:
The aim of this part is the design of the desired experimentation over the selected inputs, providing many options for the selection of the required number of bits for greater security.

3.3.1.2 Non-functional requirements:

Performance Requirements:

The system should be reliable and fast and must produce accurate results.

It should be fast while transferring data to different nodes.

Software System Attributes:

Reliability: The system is reliable and produces accurate results owing to the implemented security measures. Security is provided using password protection, release on time-out, etc.; by using these techniques the system is made more secure and more authentic.

Maintainability: The system will be designed as a closed system. New methods can be added easily with little or no change to the existing architecture.

Portability: The system should be portable. It can be deployed on any platform and should be usable on all platforms.

Security: The protection of computer-based resources, including hardware, software, data, procedures and people, against unauthorized use or natural disaster is known as system security.

System Security refers to the technical innovations and procedures applied to the hardware and operating systems to protect against deliberate or accidental damage from a defined threat.

System Integrity refers to the proper functioning of hardware and programs, appropriate physical security, and safety against external threats such as eavesdropping and wiretapping.

Privacy defines the rights of users or organizations to determine what information they are willing to share with or accept from others, and how the organization can be protected against unwelcome, unfair or excessive dissemination of information about it.

Confidentiality is a special status given to sensitive information in a database to minimize the possible invasion of privacy. It is an attribute of information that characterizes its need for protection.

3.4 Software and Hardware requirements:

SOFTWARE REQUIREMENTS:

 Programming language : Java (JDK 1.6)
 Operating system : Windows 98 or higher
 Technologies : Java Swing, AWT

HARDWARE REQUIREMENTS:

 Processor : Pentium 4
 RAM : 128 MB
 Hard disk capacity : At least 40 GB

The analysis of the system gives a profound idea of the system, which helps us understand its further study. Since our application has considered all the issues related to the software project and its quality is also assured, the implementation of the application finds a better place with the user.
Therefore, the analysis phase of any software product plays a major role in its development cycle, not only to analyze the features of the product and the requirements but also to understand the project completely.


4. DESIGN

4.1 INTRODUCTION:

 It is a Graphical User Interface that allows the design of experiments for comparing the key-exchange algorithms over the selected inputs.
 Once an experiment is designed, it generates the directory structure and files required for running it on any local machine with Java.
 The experiments are graphically modeled. They represent multiple connections among data, algorithms and analysis/visualization modules.
 Aspects such as type of learning, validation, number of runs and algorithm parameters can be easily configured.
 Once the experiment is created, our system will generate a GUI which helps us run it on real-world data.

4.1.1 Flow Diagrams:

[Figure: User → domain parameter sharing (selection of curve and random point) → ECC selection of private key → computation of public key → public and private key creation → result.]

Fig: Flow Diagram for ECC

[Figure: User → experiments design → performs the operation with a 128-bit, 255-bit, or 512-bit key → result.]

Fig: Flow Diagram for Experiments Design

4.2 UML DIAGRAMS:

The Unified Software Development Process is representative of a number of component-based development models that have been proposed in the industry. Using the Unified Modeling Language (UML), the Unified Process defines the components that will be used to build the system and the interfaces that will connect the components. Using a combination of iterative and incremental development, the Unified Process defines the function of the system by applying a scenario-based approach. It then couples this with an architectural framework that identifies the form the software will take.

The UML captures information about the static structure and dynamic behavior of a system. The static structure defines the kinds of objects. The dynamic behavior defines the history of objects.
objects.

MODEL:

A model is a representation in a certain medium of something in the same or another medium. A model is expressed in a medium that is convenient for working. A model of a software system is made in a modeling language such as UML. The model has both semantics and notation and can take various forms that include both pictures and text.

4.2.1 Use Case Diagrams:

A Use case diagram shows a set of use cases and actors (a special kind of class)
and their relationships. Use case diagrams address the static use case view of a system.
These diagrams are especially important in organizing and modeling the behaviors of a
system.
A Use case is a set of scenarios that describes the interaction between the user and
the system. The actor is the user who interacts with the system to perform the set of use
cases that are defined for the system.

Fig: Use Case Diagram

4.2.2 Class Diagram:

A class diagram shows a set of classes, interfaces, and collaborations and their
relationships. These diagrams are the most common diagrams found in modeling object
oriented systems. Class diagrams address the static design view of a system. Class
diagrams that include active classes address the static process view of a system. Classes
are composed of three things: a name, attributes, and operations.

Fig: Class Diagram

4.2.3 Sequence Diagrams:

An interaction diagram shows an interaction, consisting of a set of objects and their relationships, including the messages that may be dispatched among them. Two such diagrams in UML are sequence diagrams and collaboration diagrams. As the name suggests, sequence diagrams model interactions with an emphasis on the time ordering of messages. For this purpose, they take into consideration the various objects and the flow of messages among those objects.

Fig: Sequence Diagram

4.2.4 Collaboration Diagram:

A collaboration diagram shows the structural organization of the objects. It is used to model the interactions between objects. It maintains the ordering of messages, and the messages are labeled in a chronological manner.

Fig: Collaboration Diagram

4.2.5 Activity Diagram:

Activity diagrams describe the workflow behavior of a system. Activity diagrams
are similar to state diagrams because activities are the state of doing something. The
diagrams describe the state of activities by showing the sequence of activities performed.
Activity diagrams can show activities that are conditional or parallel.

Activity diagrams should be used in conjunction with other modeling techniques


such as interaction diagrams and state diagrams. The main reason to use activity diagrams
is to model the workflow behind the system being designed. Activity diagrams are also
useful for: analyzing a use case by describing what actions need to take place and when
they should occur; describing a complicated sequential algorithm; and modeling
applications with parallel processes.

[Figure: Start → Share Domain Parameters → ECC & DH → Output Data → Stop]

Fig: Activity Diagram

4.2.6 Component Diagrams:

The component diagram shows the structural organization of components and
relationships among them. The component diagram for this system is as follows.

Fig: Component Diagram

First, the user selects the datasets and imports or exports them; the datasets are then passed to the tool, which produces the required output from the imported file.

4.4 MODULE DESIGN:

The three modules of the project are:
a) User Interface Module
b) ECC Module
c) DH Module

The model of any software product is the blueprint of the project. The Unified Modeling Language is a proven and well-accepted engineering technique. Every system may be described from different aspects using different models, and each model is therefore a semantically closed abstraction of the system. A model may be structural, emphasizing the organization of the system, or it may be behavioral, emphasizing the dynamics of the system.

The project is designed as per the vocabulary of the Unified Modeling Language
and all the models help us to visualize the system as it is or as we want it to be. It also
permits us to specify the structure or behavior of a system. These models also give us a
template that guides us in constructing the system and also document the decisions that
we have made.


5. IMPLEMENTATION

5.1 INTRODUCTION:
The success of a software product is determined only when it is successfully implemented according to the requirements. The analysis and the design of the proposed system provide a perfect platform to implement the idea using the specified technology in the desired environment. The implementation of our system is made user-friendly.
Any software project is designed in modules, and the project is said to be successfully implemented when each of the modules is executed individually to obtain the expected result, and also when all the modules are integrated and run together without any errors.

5.2 EXPLANATION BY KEY FUNCTIONS:


Any software project is designed in modules, and the project is said to be successfully implemented when each of the modules is executed individually to obtain the expected result, and also when all the modules are integrated and run together without any errors. When such a result is obtained, it is said that the goal of the software product has been reached. The implementation of the software product can be explained more clearly through its parts rather than the system as a whole. So, let us learn about each feature so as to understand our system better. In this project we can distinguish the following four parts, which we will describe briefly:

 Domain Parameters
 ECC
 DH
 Key Computation

5.3 MODULES
There are three modules in this project.

A) User Interface Module
B) ECC Module
C) DH Module

ECC Module:

The elliptic curve library will consist of the communication protocol to exchange
the appropriate public information. When the public information is established, each of
the parties will utilize methods in the math library to come up with random integers and
calculations for the private key information. The next few subsections deal with the
classes that make up the Elliptic curve arithmetic.

Point, add (), multiply ():


The point class is quite simple: essentially a wrapper around a BigInt array
representing a point in two-dimensional space, with a little extra to accommodate
elliptic curves. Each point contains an X and a Y coordinate as well as the curve it
is associated with. The most significant methods of this class deal with point
multiplication and point addition; these methods are the reason the point needs to
know which curve it belongs to. The subtraction of a point Q = (x, y) from a
point P = (a, b) is simply addition of the point -Q = (x, -y). Point addition and
scalar multiplication follow the standard elliptic curve group formulas, sketched below.
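As a rough illustration of these formulas, the sketch below implements the chord-and-tangent rules using the standard java.math.BigInteger instead of the project's own math.BigInt; the class and method names are hypothetical, not the project's actual API, and handling of the point at infinity is omitted for brevity.

import java.math.BigInteger;

// Hypothetical, simplified point arithmetic on y^2 = x^3 + ax + b over F_p.
// The project's Point class wraps its own BigInt type instead.
public class PointSketch {
    final BigInteger x, y;

    PointSketch(BigInteger x, BigInteger y) { this.x = x; this.y = y; }

    // Add this point to q on the curve with coefficient a and prime modulus p.
    PointSketch add(PointSketch q, BigInteger a, BigInteger p) {
        BigInteger lambda;
        if (x.equals(q.x) && y.equals(q.y)) {
            // Tangent (doubling): lambda = (3x^2 + a) / (2y) mod p
            lambda = x.pow(2).multiply(BigInteger.valueOf(3)).add(a)
                      .multiply(y.shiftLeft(1).modInverse(p)).mod(p);
        } else {
            // Chord: lambda = (y2 - y1) / (x2 - x1) mod p
            lambda = q.y.subtract(y).multiply(q.x.subtract(x).modInverse(p)).mod(p);
        }
        BigInteger x3 = lambda.pow(2).subtract(x).subtract(q.x).mod(p);
        BigInteger y3 = lambda.multiply(x.subtract(x3)).subtract(y).mod(p);
        return new PointSketch(x3, y3);
    }

    // Scalar multiplication by the double-and-add method, scanning k's bits.
    PointSketch multiply(BigInteger k, BigInteger a, BigInteger p) {
        PointSketch result = null;   // null stands in for the point at infinity
        PointSketch addend = this;
        for (int i = 0; i < k.bitLength(); i++) {
            if (k.testBit(i)) {
                result = (result == null) ? addend : result.add(addend, a, p);
            }
            addend = addend.add(addend, a, p);  // doubling step
        }
        return result;
    }
}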

Elliptic Curve, Point on Curve (), generate Random Point ():


Elliptic curves defined by the formula y^2 = x^3 + ax + b (mod p) are
non-supersingular curves and are believed to be cryptographically strong. The choice
is made here to set a = b = c, which makes things easier for now, so the elliptic
curve in this situation is made up of a point, a BigInt representing c, and a prime
modulus p. The functionality of the class comes from its ability to generate a random
point. Given a point in 2D space, the elliptic curve class can determine whether the
point is on the curve by simply evaluating both sides of the non-supersingular curve
equation. For the actual encryption of the data, ECC uses a modified version of
Diffie-Hellman. Alice initiates the transmission by creating a public curve and a
public starting point. She then creates a random BigInt A and multiplies the public
point P by A. The resulting point PA is sent over to Bob, who takes that new point
and multiplies it by his random BigInt B to get the result BPA. Bob likewise sends
his own product BP to Alice, who multiplies it by her BigInt A to get ABP. The end
result is that each side holds the same shared secret (ABP = BPA) without it ever
being shared in public.
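To make this exchange concrete, the toy driver below reuses the hypothetical PointSketch class from the previous sketch on the small textbook curve y^2 = x^3 + 2x + 2 over F_17 with base point (5, 1); a real exchange would use a standardized curve over a prime of at least 160 bits.

import java.math.BigInteger;
import java.util.Random;

// Toy-sized ECDH-style exchange; names and parameters are illustrative only.
public class EcdhSketch {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(17);   // public prime modulus
        BigInteger a = BigInteger.valueOf(2);    // curve y^2 = x^3 + 2x + 2 over F_17
        PointSketch base = new PointSketch(BigInteger.valueOf(5), BigInteger.ONE);

        Random rand = new Random();
        BigInteger kA = BigInteger.valueOf(3 + rand.nextInt(5));  // Alice's secret A
        BigInteger kB = BigInteger.valueOf(3 + rand.nextInt(5));  // Bob's secret B

        PointSketch AP = base.multiply(kA, a, p);   // Alice sends PA
        PointSketch BP = base.multiply(kB, a, p);   // Bob sends BP

        PointSketch aliceKey = BP.multiply(kA, a, p);  // Alice computes ABP
        PointSketch bobKey = AP.multiply(kB, a, p);    // Bob computes BPA
        // Both now hold the same point without it ever crossing the channel.
        System.out.println(aliceKey.x.equals(bobKey.x) && aliceKey.y.equals(bobKey.y));
    }
}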

DH Module:

The basic algorithm for DH is quite simple and easy to understand. The math
behind it rests on the properties of modular exponentiation. As described in an
earlier section, p and g are the public parameters; combined with each party's
private exponent, they yield the exchanged public values and, ultimately, the shared
key. Using the modular arithmetic algorithms and supporting methods, it is quite
easy to see that DH is a very simple algorithm by definition. As described earlier,
its security rests on the Discrete Logarithm problem, which is a one-way function.
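A minimal sketch of this exchange, using the standard java.math.BigInteger's modPow in place of the project's custom math package, looks as follows; the parameter sizes are illustrative, and a real deployment would use a vetted safe prime and generator.

import java.math.BigInteger;
import java.security.SecureRandom;

// Minimal Diffie-Hellman key agreement sketch; illustrative only.
public class DhSketch {
    public static void main(String[] args) {
        SecureRandom rand = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rand);  // public prime modulus
        BigInteger g = BigInteger.valueOf(2);                // public base (illustrative)

        BigInteger a = new BigInteger(256, rand);  // Alice's private exponent
        BigInteger b = new BigInteger(256, rand);  // Bob's private exponent

        BigInteger gA = g.modPow(a, p);  // Alice sends g^a mod p
        BigInteger gB = g.modPow(b, p);  // Bob sends g^b mod p

        BigInteger aliceKey = gB.modPow(a, p);  // (g^b)^a mod p
        BigInteger bobKey = gA.modPow(b, p);    // (g^a)^b mod p
        System.out.println(aliceKey.equals(bobKey));  // prints true
    }
}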

User Interface Module:

The user interface (UI) is everything designed into an information device with
which a human being may interact -- including display screen, keyboard, mouse, light
pen, the appearance of a desktop, illuminated characters, help messages, and how an
application program or a Web site invites interaction and responds to it. In early
computers, there was very little user interface except for a few buttons at an operator's
console. The user interface was largely in the form of punched card input and report
output.

The implementation of the project is carried out to obtain user satisfaction. The
result-oriented implementation of the project is found successful when this goal is
achieved. Discovering trustworthy information will certainly obtain user satisfaction,
not just because of the design of the user interface but also because of the
user-friendliness of working with the system.

The desire to include special features will not only lead to the success of the
project; when they are implemented with ease by the developers, the project is said to
achieve its goal. This project has considered all the requirements of the end user and
the system and has been designed accordingly. The user interface also makes the
system easy for the developers to implement.

SAMPLE CODE

6. SAMPLE CODE

6.1 PSEUDO CODE / CODE:

System implementation is used to bring a developed system or subsystem into
operational use and turn it over to the user. It involves programmers, users, and
operational management.
System Implementation components include:
 Personnel Orientation:
Introduce people to the new system and their relationship to the system.
 Training:
Give employees the tools and techniques to operate and use the system.
 Hardware Installation:
Schedule for, prepare for, and then actually install new equipment.
 Procedure Writing:
Develop procedure manual to follow in operating the new system.
 Testing:
Ensure that the computer programs properly process the data.
 File Conversion:
Load the information of the present files into the new system files.
 Parallel Operation:
Use the new system at the same time as the old to verify the results.

SAMPLE CODE:

DH.java

package crypt;

import java.util.Random;
import math.BigInt;

public class DH extends EncryptionType{

private final static int SECURITY = 5;

private Random myRand = null;

private BigInt myA = null;

private BigInt myB = null;

private BigInt publicG = null;

private BigInt publicP = null;

private BigInt publicQ = null;

private BigInt secretKey = null;

private BigInt gToTheAModP = null;


private BigInt gToTheBModP = null;

public void keyGen(int seed, int pSize, int gSize){


myRand = new Random(seed);
do{
publicQ = BigInt.getPrimeWithProbability(gSize, myRand.nextInt(), SECURITY);
publicP = publicQ.times2().increment(); // p = 2q + 1
}while(!publicP.isPrime(SECURITY)); // repeat until p is also prime (safe prime); the original condition was inverted

do{
BigInt alpha = BigInt.getRandomBetween(BigInt.TWO, publicP, myRand);
publicG = alpha.square().mod(publicP);
}while(publicG.compareMags(BigInt.ONE) == 0 ||
publicG.compareMags(publicP.decrement()) == 0);
}

public void createGToTheAModP(int aSize){


myA = BigInt.getRandomNBitsLong(aSize, myRand);
gToTheAModP = publicG.modExp(myA, publicP);
}

public void keyGen1(int seed, int bSize) {


myRand = new Random(seed);
myB = BigInt.getRandomNBitsLong(bSize, myRand);

//myComputed = publicG.modExp(otherComputed, publicP);


gToTheBModP = publicG.modExp(myB, publicP);

//secretKey = gToTheAModP.modExp(myB, publicP);


}

public void keyGen2(Object otherGToThe, String whoIsPerforming){


if(whoIsPerforming.compareTo("alice") == 0){
gToTheBModP = (BigInt)otherGToThe;
secretKey = gToTheBModP.modExp(myA, publicP);
}
else{
gToTheAModP = (BigInt)otherGToThe;
secretKey = gToTheAModP.modExp(myB, publicP);
}
}

public Object getKey(){


return secretKey;
}
public String keyToString(){
return secretKey.toString();
}

public DH(){
//BITLENGTH = bitLength;
}

public Object[] getParameters(){


BigInt[] ret = {publicP, publicG, publicQ};

return ret;
}

public void setPublicKey(Object[] publicKey) {


publicP = (BigInt)publicKey[0];
publicG = (BigInt)publicKey[1];
//gToTheAModP = (BigInt)publicKey[2];
}
public String toString(){
String ret =
"gToTheAModP *:"+gToTheAModP+
"\ngTotheBModP *:"+gToTheBModP+
"\nK: =:"+secretKey;
return ret;
}

public BigInt getMyComputed() {

return gToTheAModP;
}

public BigInt getK() {


return secretKey;
}

public Object getGToTheBModP() {


return gToTheBModP;
}
public BigInt[] getGToTheBModPInBigIntArray() {
BigInt[] ret = {gToTheBModP};
return ret;
}
public Object getGToTheAModP(){
return gToTheAModP;
}
public void printParameters(){
System.out.println("DH Parameters");
System.out.println("G: "+publicG);
System.out.println("P: "+publicP);
System.out.println("Q: "+publicQ);
}
public void printGToTheAModP(){
System.out.println("g^amodP "+gToTheAModP);
}

public BigInt getBigIntKey(){
return this.secretKey;
}
}
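The class above could be exercised end to end roughly as follows; this driver is not part of the project sources, the seeds and bit sizes are illustrative, and it assumes the crypt and math packages shown here are on the classpath.

package crypt;

// Hypothetical driver wiring two DH instances together in one process.
public class DHDriver {
    public static void main(String[] args) {
        DH alice = new DH();
        DH bob = new DH();

        alice.keyGen(42, 128, 127);    // Alice generates the public p, g, q
        alice.createGToTheAModP(100);  // ...and her public value g^a mod p

        bob.setPublicKey(alice.getParameters());  // Bob adopts Alice's p and g
        bob.keyGen1(43, 100);                     // Bob computes g^b mod p

        alice.keyGen2(bob.getGToTheBModP(), "alice");  // K = (g^b)^a mod p
        bob.keyGen2(alice.getGToTheAModP(), "bob");    // K = (g^a)^b mod p

        // Both sides should now print the same secret key.
        System.out.println(alice.keyToString());
        System.out.println(bob.keyToString());
    }
}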

ECC.java:

package crypt;

import math.BigInt;
import java.util.Random;

import geometry.EllipticCurve;
import geometry.Point;

import edu.rit.crypto.hash.SHA256Hash;
import edu.rit.crypto.TooMuchDataException;
public class ECC extends EncryptionType{
private Random myRand;
private static final int PRIMESIZE = 128;
private static final int SECURITY = 10;
private static final BigInt FOUR = BigInt.makeNewBigInt(4), TWENTYSEVEN
= BigInt.makeNewBigInt(27);
private static final BigInt RMAX = null, RMIN = null;

private Point gToTheAModP;

private Point gToTheBModP;

private EllipticCurve sharedCurve;


private Point sharedPoint;
private Point myPoint;
private BigInt myK;
private Point mySecretDerivedPoint;
private Point secret;
private Point othersPublicKey;
private void ECKGP(){
System.out.println("get random");
myK = BigInt.getRandomBetween(BigInt.ZERO, sharedCurve.getP(), myRand);
System.out.println("point multiply");
myPoint = sharedPoint.pointMultiplication(myK, sharedCurve.getAorBorC(),
sharedCurve.getP());
}

public ECC(){
}
public BigInt encrypt(BigInt plainText, Point B) {
BigInt hash = doHash(mySecretDerivedPoint, plainText);
return hash;
}

private BigInt doHash(Point secretPoint2, BigInt text) {


int textLength = text.length();
byte[] tmp;
byte[] hasher = new byte[4*textLength];
BigInt X = secretPoint2.getX();
SHA256Hash hash = new SHA256Hash();
int counter = 0;
try{
for(int i = 0; i < textLength; i++){
counter++;
hash.reset();
hash.writeByteArray(X.toByteArray());
hash.writeInt(counter);
tmp = hash.hash();

hasher[(counter-1)*4] = tmp[0];
hasher[(counter-1)*4+1] = tmp[1];
hasher[(counter-1)*4+2] = tmp[2];
hasher[(counter-1)*4+3] = tmp[3];
}
}
catch(TooMuchDataException tmde){
System.out.println("ECC:doHash():Too Much Data");
}
BigInt tempBigInt = new BigInt(1, hasher);

return tempBigInt.XOR(text);
}
public BigInt decrypt(BigInt cipherText, Point R) {
Point S = R.pointMultiplication(myK, sharedCurve.getAorBorC(),
sharedCurve.getP());
//System.out.println("Decrypt:Point: "+S);
BigInt hash = doHash(S, cipherText);
return hash;
}

public Object[] getParameters() {

Object[] ret = {sharedPoint, sharedCurve};
System.out.println("on curve?: "+sharedCurve.isPointOnCurve(sharedPoint));
return ret;
}
public Point getPublicPoint(){
return sharedPoint;
}

public EllipticCurve getPublicCurve(){


return sharedCurve;
}

public void setPublicKey(Object[] key){


sharedCurve = (EllipticCurve)key[1];
sharedPoint = (Point)key[0];
sharedPoint.setCurve(sharedCurve);
}

public void computeDerivedSecretPoint(){

mySecretDerivedPoint = myPoint.pointMultiplication(myK,
sharedCurve.getAorBorC(), sharedCurve.getP());
} // closing brace was missing in the original listing
private EllipticCurve generateEllipticCurve(int size, int altSize) {


System.out.println("ECC Generate Elliptic Curve");
BigInt P = BigInt.getPrimeWithProbability(size,
myRand.nextInt(),
SECURITY);
BigInt c;
do {
c = BigInt.getRandomBetween(BigInt.ONE, P, myRand);
} while(FOUR.multiply(c).add(TWENTYSEVEN).mod(P).compareMags(BigInt.ZERO) == 0);

return new EllipticCurve(P, c);


}

public Point getMyGeneratedPoint() {

return myPoint;
}
public String toString(){
String ret = secret.toString();
return ret;
}
public Point getSecretPoint() {
return mySecretDerivedPoint;
}
public void setOtherDerivedPoint(Point otherGuysKey) {
othersPublicKey = otherGuysKey;
}
public void computeSecret() {
System.out.println(myK+" "+sharedCurve);
secret = othersPublicKey.pointMultiplication(myK,
sharedCurve.getAorBorC(),
sharedCurve.getP());

}
public void keyGen(int seed, int pSize, int gSize ) {
myRand = new Random(seed);
sharedCurve = generateEllipticCurve(pSize, gSize);
sharedPoint = sharedCurve.generateRandomPoint(myRand);
}
public void keyGen1(int seed, int bSize) {
myRand = new Random(seed);
ECKGP();
gToTheBModP = myPoint;
} // closing brace was missing in the original listing
public void keyGen2(Object otherGToThe, String whoIsPerforming){


if(whoIsPerforming.compareTo("alice") == 0){
gToTheBModP = (Point)otherGToThe;
secret =
gToTheBModP.pointMultiplication(myK,sharedCurve.getAorBorC(),
sharedCurve.getP());
}
else{
gToTheAModP = (Point)otherGToThe;
secret =
gToTheAModP.pointMultiplication(myK,sharedCurve.getAorBorC(),
sharedCurve.getP());
}
} // closing brace for keyGen2 was missing in the original listing
public Object getGToTheAModP(){
return gToTheAModP;
}
public Object getGToTheBModP(){
return gToTheBModP;
}
public String keyToString(){
return secret.toString();
}
public void createGToTheAModP(int size){
ECKGP();
gToTheAModP = myPoint;
}
public BigInt[] getGToTheBModPInBigIntArray(){
BigInt[] ret = {gToTheBModP.getX(), gToTheBModP.getY()};
return ret;
}
public Object getKey(){
return secret;
}

public void printParameters(){


System.out.println("ECC Parameters");
System.out.println("X: "+sharedPoint.getX());
System.out.println("Y: "+sharedPoint.getY());
System.out.println("C: "+sharedCurve.getAorBorC());
System.out.println("P: "+sharedCurve.getP());
}

public void printGToTheAModP(){


System.out.println("g^amodP\n"+gToTheAModP);
}

public BigInt getBigIntKey(){


return secret.getX();
}
}
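Mirroring the DH usage sketch, the ECC class above could be driven as follows; again the driver is hypothetical, the seeds and sizes are illustrative, and the crypt, math, and geometry packages are assumed to be on the classpath.

package crypt;

// Hypothetical driver wiring two ECC instances together in one process.
public class ECCDriver {
    public static void main(String[] args) {
        ECC alice = new ECC();
        ECC bob = new ECC();

        alice.keyGen(42, 128, 128);    // Alice publishes the curve and base point
        alice.createGToTheAModP(128);  // ...and her public point

        bob.setPublicKey(alice.getParameters());  // Bob adopts curve and base point
        bob.keyGen1(43, 128);                     // Bob computes his public point

        alice.keyGen2(bob.getGToTheBModP(), "alice");  // Alice derives the shared point
        bob.keyGen2(alice.getGToTheAModP(), "bob");    // Bob derives the same point

        // Both sides should now print the same secret point.
        System.out.println(alice.keyToString());
        System.out.println(bob.keyToString());
    }
}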

SCREEN SHOTS

7. SCREEN SHOTS

7.1 FORMS/TABLES/SCREENS:

Screen 1: DH with 128-bit key

Screen 2: ECC with 128-bit key

Screen 3: DH with 256-bit key

Screen 4: ECC with 256-bit key

Screen 5: DH with 512-bit key

Screen 6: ECC with 512-bit key

TESTING

8. TESTING

8.1 INTRODUCTION:
Software testing is a critical element of software quality assurance and represents
the ultimate review of specification, design, and coding. In fact, testing is the one
step in the software engineering process that could be viewed as destructive rather
than constructive.

A strategy for software testing integrates software test case design methods into a
well-planned series of steps that result in the successful construction of software.
Testing is a set of activities that can be planned in advance and conducted
systematically. The underlying motivation of program testing is to affirm software
quality with methods that can be applied economically and effectively to both large
and small-scale systems.

Testing Objectives:

 Testing is a process of executing a program with the intent of finding an error.


 A good test case is one that has a high probability of finding an as-yet-undiscovered
error.
 A successful test is one that uncovers an as-yet-undiscovered error.

Testing Principles:

 All tests should be traceable to end user requirements.


 Tests should be planned long before testing begins.
 Testing should begin on a small scale and progress toward testing in the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted by an independent third party.

8.2 TESTING METHODOLOGIES:

A strategy for software testing integrates software test cases into a series of
well-planned steps that result in the successful construction of software. Software
testing is a broad topic often referred to as Verification and Validation. Verification
refers to the set of activities that ensure that the software correctly implements a
specific function. Validation refers to the set of activities that ensure that the
software that has been built is traceable to the customer's requirements.

Unit Testing:
Unit testing focuses verification effort on the smallest unit of software design, that
is, the module. Using the procedural design description as a guide, important control
paths are tested to uncover errors within the boundaries of the module. The unit test
is normally white-box oriented, and the step can be conducted in parallel for multiple
modules.

Integration Testing:

Integration testing is a systematic technique for constructing the program structure
while conducting tests to uncover errors associated with the interfaces. The objective
is to take unit-tested modules and build a program structure that has been dictated by
design.

Top-down Integration:
Top-down integration is an incremental approach to the construction of program
structure. Modules are integrated by moving downward through the control hierarchy,
beginning with the main control program. Modules subordinate to the main program are
incorporated into the structure in either a breadth-first or depth-first manner.

Bottom-up Integration:
This method, as the name suggests, begins construction and testing with atomic
modules, i.e., modules at the lowest level. Because the modules are integrated in a
bottom-up manner, the processing required for the modules subordinate to a given level
is always available, and the need for stubs is eliminated.

System Testing:
System testing is actually a series of different tests whose primary purpose is to
fully exercise the computer-based system. Although each test has a different purpose,
all work to verify that all system elements have been properly integrated and perform
their allocated functions.
The software testing process commences once the program is created and the
documentation and related data structures are designed. Software testing is essential
for correcting errors; otherwise the program or the project is not said to be complete.
Software testing is a critical element of software quality assurance and represents
the ultimate review of specification, design, and coding. Testing is the process of
executing the program with the intent of finding an error. A good test case design is
one that has a high probability of finding a yet-undiscovered error. A successful test
is one that uncovers a yet-undiscovered error. Any engineering product can be tested
in one of two ways.

8.2.1 WHITE BOX TESTING:

This testing is also called glass-box testing. In this testing, by knowing the
internal operation of a product, tests can be conducted to ensure that "all gears
mesh," that is, that the internal operations perform according to specification and
all internal components have been adequately exercised. It is a test case design
method that uses the control structure of the procedural design to derive test cases.
Basis path testing is a white-box technique; a small worked example follows the list.
Basis path testing:
 Flow graph notation
 Cyclomatic complexity
 Deriving test cases
 Graph matrices
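As a small worked illustration (not drawn from the project's own test plan), cyclomatic complexity can be counted as the number of decision points in a unit plus one:

// Illustrative only: this method has two decision points (the loop condition
// and the if), so its cyclomatic complexity is V(G) = 2 + 1 = 3, and basis
// path testing would derive three independent paths through it.
static int sumPositive(int[] values) {
    int sum = 0;
    for (int i = 0; i < values.length; i++) {  // decision 1
        if (values[i] > 0) {                   // decision 2
            sum += values[i];
        }
    }
    return sum;
}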

8.2.2. BLACK BOX TESTING:

In this testing, by knowing the specified functions that a product has been designed
to perform, tests can be conducted that demonstrate each function is fully operational
while at the same time searching for errors in each function. It fundamentally focuses
on the functional requirements of the software.
The steps involved in black-box test case design are:

 Graph based testing methods
 Equivalence partitioning
 Boundary value analysis
 Comparison testing

8.2.3. PROGRAM TESTING:

Program testing points out logical and syntax errors. A syntax error is an error in a
program statement that violates one or more rules of the language in which it is
written. An improperly defined field dimension or an omitted keyword are common syntax
errors. These errors are shown through error messages generated by the computer. A
logic error, on the other hand, deals with incorrect data fields, out-of-range items,
and invalid combinations. Since compilers will not detect logical errors, the
programmer must examine the output.
Condition testing exercises the logical conditions contained in a module. The possible
types of elements in a condition include a Boolean operator, a Boolean variable, a
pair of Boolean parentheses, a relational operator, or an arithmetic expression. The
condition testing method focuses on testing each condition in the program; the purpose
of condition testing is to detect not only errors in the conditions of a program but
also other errors in the program.

8.2.4. VALIDATION TESTING:

At the culmination of integration testing, the software is completely assembled as a
package. Interfacing errors have been uncovered and corrected, and a final series of
software tests, validation testing, begins. Validation testing can be defined in many
ways, but a simple definition is that validation succeeds when the software functions
in a manner that is reasonably expected by the customer. Software validation is
achieved through a series of black-box tests that demonstrate conformity with
requirements. After a validation test has been conducted, one of two conditions exists:
 The function or performance characteristics conform to specifications and are
accepted.
 A deviation from specification is uncovered and a deficiency list is created.
Deviations or errors discovered at this step of the project are corrected prior to its
completion with the help of the user, by negotiating to establish a method for
resolving deficiencies. Thus the proposed system under consideration has been tested
using validation testing and found to be working satisfactorily. Though there were
deficiencies in the system, they were not catastrophic.

8.3 DESIGN OF TEST CASES AND SCENARIOS:


To test the developed system, certain test cases are defined and testing is carried
out on those test cases. Good test cases have two properties: they reduce, by a count
greater than one, the number of additional test cases that must be designed to achieve
reasonable testing; and they tell us something about the presence or absence of
classes of errors, rather than an error associated only with the specific test at hand.

8.4 VALIDATION:

The validation section of the testing phase has a major role. Validation deals with
authenticating the user based on the generic combinations that are checked to gain
access to the other features of the system.

The validation section checks the verification of input format, size, display
order, etc.

All combinations are checked, both valid and invalid, and the system has shown the
expected results.

8.5 RESULT ANALYSIS:

The testing and validation of the project is carried on to obtain the user
satisfaction. The result oriented testing of the project is found successful when this goal is
achieved.

This system will definitely obtain the user satisfaction not just because of the
design of the user interface but also for the user friendliness of working with this system.

Any software project is said to be complete only after it is tested with all the test
cases and proved to be working in all scenarios. Hence, the testing phase plays a
major role in software project development. Accordingly, both ECC and DH were tested
properly across all the modules. All exceptions in this application are safely handled.

CONCLUSION
&
FUTURE WORK

9. CONCLUSION

9.1 Conclusion:
We implemented the following functionality:
1. The underlying Math library
2. DH and ECC algorithms
3. PK Generation on the desktop

All of this was written in Java; BigInteger was used only as a test vehicle, and all
of the above was new code. The implementation described in this paper ran time trials
on alternative solutions to the problem of efficiently exchanging keys over insecure
communication channels. Two alternative approaches to the discrete logarithm problem
were discussed as solutions to the lack of efficiency. ECC and DH both provided
alternative algebras that gave us a faster yet equally secure implementation. The
implementation showed a significant increase in bit-size efficiency when using ECC,
and an even greater increase in bit-size efficiency when using DH. The time trials,
however, revealed that ECC was in fact faster than DH at the individual encryption
steps. Possible causes were discussed in previous sections. Using this data and other
research, we should be closer to an ideal situation for exchanging keys in a timely
manner. While the tests here were not entirely complete as defined, they do reveal a
trend that will no doubt be helpful in future encryption schemes.

9.2 FUTURE ENHANCEMENT:

Future enhancements of this project could include several things. First of all, using
BigInteger for the actual math library would probably be a good first step: as the
data shows, BigInteger was significantly faster than the library that we developed.
Doing this would require copying the BigInteger source into the developer's source
library and math package, and the developer would need to remove any errors that
occur. This may be a trial-and-error ordeal which may take a while. We are planning to
implement this on a mobile platform and will be interested to see how ECC works in
real-time mobile transactions.

BIBLIOGRAPHY

10. BIBLIOGRAPHY

[1]. An Elliptic Curve Cryptography Primer, Certicom "Catch the Curve" white paper
series, June 2004.
[2]. J. S. Milne, Elliptic Curves, Aug 21, 1996, v1.01.
[3]. Anoop MS, Elliptic Curve Cryptography, an implementation tutorial.
[4]. Hans Eberle, Nils Gura, and Sheueling Chang-Shantz, Cryptographic Processor for
Arbitrary Elliptic Curves over GF(2^m), Sun Microsystems Laboratory.
[5]. Hans Eberle, Nils Gura, and Sheueling Chang-Shantz, Generic Implementation of
Elliptic Curve Cryptography Using Partial Reduction, Sun Microsystems Laboratory.
[6]. Nils Gura, Arun Patel, Arvinderpal Wander, Hans Eberle, and Sheueling
Chang-Shantz, Comparing Elliptic Curve Cryptography and RSA on 8-bit CPUs.
[7]. Miguel Morales and Claudia Feregrino, Hardware Architecture for Elliptic Curve
Cryptography and Lossless Data Compression.
[8]. M. Ernst, M. Jung, F. Madlener, S. Huss, and R. Blumel, A Reconfigurable
System-on-Chip Implementation for Elliptic Curve Cryptography over GF(2^m).
[9]. Darrel Hankerson, Julio Lopez Hernandez, and Alfred Menezes, Software
Implementation of Elliptic Curve Cryptography over Binary Fields.
[10]. Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone, Handbook of
Applied Cryptography, CRC Press, 1996.

Sites Referred
 http://java.sun.com
 http://www.java2s.com
 http://www.w3schools.com
 http://www.garykessler.net
 http://www.cs.montana.edu
