


In partial fulfilment of the requirements for the award of the Degree of


Under the guidance of

Submitted By

Mr. Abhishek Shrivastav

Kapil Singhal (091226) Kishore Abhishek (091227) Ankit Rawat (091297)

2013 Jaypee University of Engineering & Technology

A.B. Road, Raghogarh, Guna, Madhya Pradesh

There has been an accelerating increase in the accumulation and communication of digital data by government, industry and other organizations in the private sector. The contents of these communicated and stored data often have very significant value and/or sensitivity. It is now common to find data transmissions which constitute funds transfers of several million dollars, purchases or sales of securities, warrants for arrest or arrest and conviction records being communicated between law enforcement agencies, airline reservations and ticketing representing investment and value both to the airline and to passengers, and health and patient care records transmitted among physicians and treatment centres. The increasing volume, value and confidentiality of these records, regularly transmitted and stored by commercial and government agencies, has led to heightened recognition of, and concern over, their exposure to unauthorized access and use. This misuse can take the form of theft or defalcation of data records representing money, malicious modification of business inventories, or the interception and misuse of confidential information about people. The need for protection is therefore apparent and urgent. It is recognized that encryption represents the only means of protecting such data during transmission, and a useful means of protecting the content of data stored on various media, provided encryption of adequate strength can be devised, validated, and integrated into the system architecture. With the fast development of cryptographic research and computer technology, cryptosystems such as RSA and Diffie-Hellman are becoming inadequate because of the large key sizes they require. Cryptosystems based on elliptic curve cryptography (ECC) are becoming the trend in public-key cryptography because of their smaller key size. This document presents an implementation of the NIST-approved elliptic curves.
The implementation involves transforming a message into an affine point on the elliptic curve over the finite field GF(p). In ECC, we normally start with an affine point which lies on the elliptic curve. In this report, we illustrate the process of encryption and decryption of a text message. It is practically infeasible to break a cryptosystem based on ECC by a brute-force attack.

On May 15, 1973, the National Bureau of Standards, NBS (now the National Institute of Standards and Technology, or NIST), published a solicitation for cryptosystems in the Federal Register. This led ultimately to the adoption of the Data Encryption Standard, or DES, which became the most widely used cryptosystem in the world. DES was developed at IBM as a modification of an earlier system known as Lucifer. DES was first published in the Federal Register of March 17, 1975. After a considerable amount of public discussion, DES was adopted as a standard on January 15, 1977. DES was reviewed approximately every five years after its adoption. Its last renewal was in January 1999; by that time, development of a replacement, the Advanced Encryption Standard, or AES, had already begun. In the cryptosystems discussed so far, the decryption rule is either the same as the encryption rule or easily derived from it. A cryptosystem of this type is known as a symmetric-key cryptosystem, since exposure of either the encryption or the decryption rule renders the system insecure. The one drawback of a symmetric-key cryptosystem is that it requires the prior communication of the key between the two parties, using a secure channel, before any cipher text is transmitted. In practice, this may be very difficult to achieve. This led to the development of public-key cryptosystems. The idea behind a public-key cryptosystem is that it might be possible to find a cryptosystem where it is computationally infeasible to determine the decryption rule given the encryption rule. Public-key cryptosystems are based on so-called one-way functions. In 1976, Diffie and Hellman introduced the notion of a public-key cryptosystem by describing a public key distribution scheme. The security of this distribution scheme is based on the presumed intractability of the discrete logarithm problem in the multiplicative group of a large finite field.
In 1985, ElGamal made use of the discrete logarithm problem to construct a practical public-key cryptosystem with security equivalent to that of the distribution scheme. Since the discrete logarithm problem can be extended to arbitrary groups, the ElGamal cryptosystem can be generalized to an arbitrary finite cyclic group. In 1985, Neal Koblitz and Victor S. Miller independently suggested the elliptic curve group for the implementation of public-key cryptosystems. Hence, the elliptic curve group can be used to implement the ElGamal public-key cryptosystem. Elliptic curve cryptography is the best choice because ECC offers considerably greater security for a given key size. The smaller key size also makes possible much more compact implementations for a given level of security, which means faster cryptographic operations, running on smaller chips or in more compact software.

In short: public-key cryptography is demanding. But if you are looking for the cryptosystem that will give you the most security per bit, you want ECC.

There are various standards bodies guiding the implementation of security protocols for industry. Some of the organizations involved in the standards activities are the Internet Engineering Task Force (IETF), the American Bankers Association, the International Telecommunication Union and the National Institute of Standards and Technology (NIST). Only NIST-approved curves are of interest here. NIST provides a list of curves to be used, specified in Federal Information Processing Standards (FIPS) 186-3, the Digital Signature Standard.

Definitions, Symbols and Abbreviations

Approved: FIPS approved or NIST Recommended. An algorithm or technique that is either (1) specified in a FIPS or NIST Recommendation, or (2) adopted in a FIPS or NIST Recommendation and specified either (a) in an appendix to the FIPS or NIST Recommendation, or (b) in a document referenced by the FIPS or NIST Recommendation.

Cryptography: The science of enabling two people to communicate over an insecure channel in such a way that an opponent cannot understand what is being said.

Cryptosystem: A five-tuple (P, C, K, E, D) satisfying the following conditions: P is a finite set of possible plain texts; C is a finite set of possible cipher texts; K, the key space, is a finite set of possible keys; and for each k ∈ K there is an encryption rule e_k ∈ E and a corresponding decryption rule d_k ∈ D, where e_k : P → C and d_k : C → P are functions such that d_k(e_k(x)) = x for all x ∈ P.

Plain Text: An original message, or message to be encrypted.

Cipher Text: A coded (encrypted) message.

Encryption: The process of converting plain text into cipher text.

Decryption: The process of restoring the plain text from the cipher text.

Cryptanalysis: Techniques used for deciphering a message without any knowledge of the encryption details.

Symmetric-key cryptosystem: A cryptosystem in which encryption and decryption are performed using the same key. Also known as a conventional or secret-key cryptosystem.

Public-key cryptosystem: A cryptosystem in which encryption and decryption are performed using different keys, called the public key and the private key.

Private key: The key used for decryption/signature generation.

Public key: The key used for encryption/signature verification.

Authentication: The assurance that the communicating entity is the one that it claims to be.

Confidentiality: The protection of data from unauthorized access.

Data Integrity: The assurance that data received are exactly as sent by an authorized entity (i.e. contain no modification, insertion, deletion, or replay).

Non-repudiation: Protection against denial, by one of the entities involved in a communication, of having participated in all or part of the communication.
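The cryptosystem five-tuple (P, C, K, E, D) defined above can be illustrated with a toy shift (Caesar) cipher, where P = C = K = {0, ..., 25}. This is a minimal sketch for illustration only, not a secure cipher:

```python
# Toy shift cipher illustrating the five-tuple (P, C, K, E, D):
# P = C = {0, ..., 25} (letters encoded as numbers), K = {0, ..., 25}.

def encrypt(k: int, x: int) -> int:
    """Encryption rule e_k : P -> C."""
    return (x + k) % 26

def decrypt(k: int, y: int) -> int:
    """Decryption rule d_k : C -> P."""
    return (y - k) % 26

# Verify the defining property d_k(e_k(x)) = x for every key k and plain text x.
assert all(decrypt(k, encrypt(k, x)) == x
           for k in range(26) for x in range(26))
```

Because the key space has only 26 elements, this cipher falls instantly to exhaustive search, which is exactly the attack that realistic key sizes are chosen to defeat.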

Symbols and Abbreviations

AES: Advanced Encryption Standard
DES: Data Encryption Standard
EC: Elliptic Curve
ECC: Elliptic Curve Cryptography
NIST: National Institute of Standards and Technology
IBM: International Business Machines
NBS: National Bureau of Standards
RSA: Rivest-Shamir-Adleman
DH: Diffie-Hellman
DSA: Digital Signature Algorithm
IETF: Internet Engineering Task Force
FIPS: Federal Information Processing Standards
DSS: Digital Signature Standard

The Purpose of cryptography

Cryptography is the science of writing in secret code, and it is an ancient art; the first documented use of cryptography in writing dates back to circa 1900 B.C., when an Egyptian scribe used non-standard hieroglyphs in an inscription. Some experts argue that cryptography appeared spontaneously sometime after writing was invented, with applications ranging from diplomatic missives to war-time battle plans. The predominant practitioners of the art were those associated with the military, the diplomatic services and government in general. Cryptography was used as a tool to protect national secrets and strategies. The proliferation of computers and communication systems in the 1960s brought with it a demand from the private sector for means to protect information in digital form and provide security services. In data and telecommunications, cryptography is necessary when communicating over any untrusted medium, which includes just about any network, particularly the Internet.

Goals of cryptography
Cryptography is the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication, non-repudiation, signature, time stamping, authorization, verification and anonymity. Cryptography is not the only means of providing information security, but rather one set of techniques. Cryptography is about the prevention and detection of cheating and other malicious activities. A fundamental goal of cryptography is to adequately address the following four areas in both theory and practice:
1. Confidentiality is a service used to keep the content of information from all but those authorized to have it. Secrecy is a term synonymous with confidentiality and privacy. There are numerous approaches to providing confidentiality, ranging from physical protection to mathematical algorithms which render data unintelligible.

2. Data integrity is a service which addresses the unauthorized alteration of data. To assure data integrity, one must have the ability to detect data manipulation by unauthorized parties. Data manipulation includes such things as insertion, deletion, and substitution.

3. Authentication is a service related to identification. This function applies to both entities and information itself. Two parties entering into a communication should identify each other. Information delivered over a channel should be authenticated as to origin, date of origin, data content, time sent, etc. For these reasons this aspect of cryptography is usually subdivided into two major classes: entity authentication and data origin authentication. Data origin authentication implicitly provides data integrity.

4. Non-repudiation is a service which prevents an entity from denying previous commitments or actions. When disputes arise due to an entity denying that certain actions were taken, a means to resolve the situation is necessary.

Cryptographic Primitives
There are several ways of classifying cryptographic primitives. For the purposes of this document, they will be categorized based on the number of keys that are employed for encryption and decryption. The figure provides a schematic listing of the primitives considered.

These primitives should be evaluated with respect to various criteria, such as:

1. Level of security: This is usually difficult to quantify. Often it is given in terms of the number of operations required to defeat the intended objective. Typically the level of security is defined by an upper bound on the amount of work necessary to defeat the objective.

2. Functionality: Primitives will need to be combined to meet various information security objectives. Which primitives are most effective for a given objective is determined by the basic properties of the primitives.

3. Methods of operation: Primitives, when applied in various ways and with various inputs, will typically exhibit different characteristics; thus, one primitive can provide very different functionality depending on its mode of operation and usage.

4. Performance: This refers to the efficiency of a primitive in a particular mode of operation. For example, an encryption algorithm may be rated by the number of bits per second which it encrypts.

5. Ease of implementation: This refers to the difficulty of realizing the primitive in a practical instantiation. This might include the complexity of implementing the primitive in either a software or hardware environment.

Symmetric-key Cryptography
With secret-key cryptography, a single key is used for both encryption and decryption. As shown in the figure, the sender uses the key, or some set of rules, to encrypt the plain text and sends the cipher text to the receiver. The receiver applies the same key or rule set to decrypt the message and recover the plain text. Because a single key is used for both functions, secret-key cryptography is also called symmetric encryption. With this form of cryptography, it is obvious that the key must be known to both the sender and the receiver; that, in fact, is the secret. The biggest difficulty with this approach, of course, is the distribution of the key. Secret-key cryptography schemes are generally categorized as being either stream ciphers or block ciphers. Stream ciphers operate on a single bit (or byte) at a time and implement some form of feedback mechanism so that the key is constantly changing. A block cipher is so called because the scheme encrypts one block of data at a time, using the same key on each block. Secret-key cryptography algorithms that are in use today include:
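The stream-cipher idea can be sketched in a few lines. The toy below derives a byte keystream from a linear congruential generator; this generator is not cryptographically secure and is used here purely to illustrate the structure (keystream XORed with the data, with the same operation serving for both encryption and decryption):

```python
def keystream(seed: int):
    """Toy keystream: a linear congruential generator emitting one byte at a time.
    Illustration only -- real stream ciphers use cryptographically strong generators."""
    state = seed
    while True:
        state = (1103515245 * state + 12345) % 2**31
        yield state & 0xFF

def stream_xor(key: int, data: bytes) -> bytes:
    """Encrypt or decrypt by XORing each byte with the keystream.
    XOR is its own inverse, so the same function performs both operations."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

msg = b"attack at dawn"
ct = stream_xor(42, msg)
assert ct != msg                  # the cipher text differs from the plain text
assert stream_xor(42, ct) == msg  # applying the same key recovers the plain text
```

Note how the key must be shared in advance by both parties, which is exactly the key-distribution problem discussed later in this report.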

Data Encryption Standard (DES): The most common SKC scheme used today, DES was designed by IBM in the 1970s and adopted by NBS in 1977 for commercial and unclassified government applications. DES is a block-cipher scheme employing a 56-bit key that operates on 64-bit blocks. DES has a complex set of rules and transformations that were designed specifically to yield fast hardware implementations and slow software implementations, although this latter point is becoming less significant today since the speed of computer processors is several orders of magnitude faster than twenty years ago.

Triple-DES (3DES): A variant of DES that applies the DES cipher three times with two or three distinct 56-bit keys, effectively increasing the key length to 112 or 168 bits.

Advanced Encryption Standard (AES): In 1997, NIST initiated a very public, four-and-a-half-year process to develop a new secure cryptosystem for U.S. government applications. The result, AES, became the official successor to DES in December 2001. AES uses an SKC scheme called Rijndael, a block cipher designed by Belgian cryptographers Joan Daemen and Vincent Rijmen. Rijndael can use variable block and key lengths; the specification allows any combination of key lengths of 128, 192, or 256 bits with block lengths of 128, 192, or 256 bits (the AES standard itself fixes the block length at 128 bits).

Public-key cryptography
Public-key Cryptography (PKC) has been said to be the most significant new development in cryptography in the last 300-400 years. Modern PKC was first described publicly by Stanford University professor Martin Hellman and graduate student Whitfield Diffie in 1976. Their paper described a two-key cryptosystem in which two parties could engage in secure communication over a non-secure communications channel without having to share a secret key. PKC depends upon the existence of so-called one-way functions: mathematical functions that are easy to compute but whose inverses are relatively difficult to compute. Here are two simple examples:

Multiplication vs. factorization: Given the two prime numbers 3 and 7, it takes almost no time to calculate their product, 21. Given instead the number 21, finding its pair of prime factors takes noticeably longer: where computing the product is a single operation, factoring requires a search. The problem becomes much harder if the primes have 400 digits or so, because the product will then have around 800 digits.

Exponentiation vs. logarithms: It is easy to calculate 3 to the 6th power: 3^6 = 729. But given the number 729, finding two integers x and y such that log_x 729 = y takes much longer.
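Both examples can be sketched directly. In each case the easy direction is a single arithmetic operation, while the hard direction is a search (over tiny numbers here, so still instant, but the search space grows explosively with the size of the inputs):

```python
def factor(n: int):
    """Naive trial division: the work grows with the size of the smaller prime factor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

# Multiplication vs. factorization.
p, q = 3, 7
assert p * q == 21            # easy direction: one multiplication
assert factor(21) == (3, 7)   # hard direction: a search over candidate divisors

# Exponentiation vs. logarithms.
assert 3 ** 6 == 729          # easy direction: one exponentiation
base, target = 3, 729
exp = next(e for e in range(1, 100) if base ** e == target)
assert exp == 6               # hard direction: a search over candidate exponents
```

For cryptographic parameter sizes the searches above become astronomically expensive, while the forward operations remain fast; that asymmetry is exactly what makes these functions usable as one-way functions.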

Generic PKC employs two keys: one key is used to encrypt the plaintext and the other key is used to decrypt the cipher text. One of the keys is designated the public key and may be advertised as widely as the owner wants. The other key is designated the private key and is never revealed to another party. The important point here is that it does not matter which key is applied first, but that both keys are required for the process to work. Because a pair of keys is required, this approach is also called asymmetric cryptography.

Public-key cryptography algorithms that are in use today for key exchange or digital signatures include:

RSA: Used in hundreds of software products, RSA can be used for key exchange, digital signatures, or encryption of small blocks of data. RSA uses a variable-size encryption block and a variable-size key.

Diffie-Hellman: Published by Diffie and Hellman in 1976, this algorithm is used for secret-key key exchange only, and not for authentication or digital signatures.

Digital Signature Standard (DSS): The algorithm specified in NIST's Digital Signature Standard provides digital signature capability for the authentication of messages.

ElGamal: Designed by Taher Elgamal, a PKC system similar to Diffie-Hellman and used for key exchange.

Elliptic Curve Cryptography (ECC): A PKC algorithm based upon elliptic curves. ECC can offer levels of security comparable to RSA and other PKC methods with much smaller keys. It is well suited to devices with limited computing power and/or memory, such as smartcards and PDAs. We discuss ECC in detail in later sections.

Hash Functions
Hash functions, also called message digests or one-way encryption, are algorithms that, in some sense, use no key. Instead, a fixed-length hash value is computed from the plaintext, in such a way that neither the contents nor the length of the plaintext can be recovered. Hash algorithms are typically used to provide a digital fingerprint of a file's contents, often used to ensure that the file has not been altered by an intruder or virus. Hash functions are also commonly employed by many operating systems to protect passwords. Hash functions, then, provide a measure of the integrity of a file. Hash algorithms that are in common use today include:

Message Digest (MD) algorithms: A series of byte-oriented algorithms that produce a 128-bit hash value from an arbitrary-length message.

Secure Hash Algorithm (SHA): The algorithm for NIST's Secure Hash Standard (SHS). SHA-1 produces a 160-bit hash value and was originally published as FIPS 180-1 and RFC 3174.
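Both families are available in Python's standard hashlib module; a short sketch of the fixed digest lengths and the fingerprint behaviour described above:

```python
import hashlib

data = b"The quick brown fox"

# SHA-1 produces a 160-bit digest: 160 bits = 20 bytes = 40 hex characters.
sha1_digest = hashlib.sha1(data).hexdigest()
assert len(sha1_digest) == 40

# MD5 produces a 128-bit digest: 128 bits = 16 bytes = 32 hex characters.
md5_digest = hashlib.md5(data).hexdigest()
assert len(md5_digest) == 32

# Any change to the input, however small, yields a completely different digest,
# which is what makes hashes useful as integrity fingerprints.
assert hashlib.sha1(b"The quick brown fox!").hexdigest() != sha1_digest
```

The digest length is fixed regardless of input size, so a hash of a multi-gigabyte file is just as compact as a hash of a one-line message.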

Why Three Encryption Techniques?

So, why are there so many different types of cryptographic schemes? Why can't we do everything we need with just one? The answer is that each scheme is optimized for some specific application(s). Hash functions, for example, are well suited for ensuring data integrity, because any change made to the contents of a message will result in the receiver calculating a different hash value than the one placed in the transmission by the sender. Since it is highly unlikely that two different messages will yield the same hash value, data integrity is ensured to a high degree of confidence. On the other hand, secret-key cryptography is ideally suited to encrypting messages, thus providing privacy and confidentiality. The sender can generate a session key on a per-message basis to encrypt the message; the receiver, of course, needs the same key to decrypt the message. Key exchange, of course, is an application of public-key cryptography. Asymmetric schemes can also be used for non-repudiation and user authentication. Public-key cryptography could, theoretically, also be used to encrypt messages, although this is rarely done, because secret-key cryptography operates about 1000 times faster than public-key cryptography.

What's wrong with Classical Cryptography?

Originally, the security of a classical cryptosystem depended on the secrecy of the entire encrypting and decrypting procedures. In such ciphers a set of specific parameters, called a key, is supplied together with the plain text as input to the encrypting algorithm, and together with the cryptogram as input to the decrypting algorithm. Once the key is established, subsequent communication involves sending cryptograms over a public channel which is vulnerable to total passive eavesdropping. However, in order to establish the key, two users who share no secret information initially must at some stage of communication use a reliable and very secure channel. Since interception is a set of measurements performed by the eavesdropper on this channel, in principle any classical key distribution can always be passively monitored, without the legitimate users being aware that any eavesdropping has taken place. The main problem with classical cryptography thus arises in the distribution of the secret key to the intended users. Mathematicians tried hard to solve the key distribution problem, and in the 1970s they arrived at an idea in which users do not agree on a secret key before they send the message. It works on the principle of a safe with two keys: a public key to lock it, and a private one to open it. Everyone has a key to lock the safe, but only one person has a key that will open it again, so anyone can put a message in the safe but only one person can take it out.

Elliptic Curve Cryptography

Over the past 30 years, PKC has become a mainstay for secure communications over the Internet and throughout many other forms of communications. It provides the foundation for both key management and digital signatures. Notably, these techniques form the basis for key management and authentication for IP encryption, web traffic and secure electronic mail. In their day these public key techniques revolutionized cryptography. Over the last twenty years, however, new techniques have been developed which offer both better performance and higher security than these first-generation public key techniques. The best assured group of new public key techniques is built on the arithmetic of elliptic curves. The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller. Unlike other popular algorithms such as RSA, ECC is based on discrete logarithms that are much more difficult to attack at equivalent key lengths. An earlier scheme, Diffie-Hellman, relies upon the difficulty of the discrete logarithm problem, calculated within the multiplicative group of a finite field, to frustrate attempts to use the public key to compromise the private key. ECC uses groups and a logarithm problem too. What ECC also offers, however, is a difference in the way the group is defined: which elements make up the group, and how the fundamental operations on them are defined. It is this difference in the way the group is defined, both the numbers in the set and the definitions of the arithmetic operations used to manipulate them, that gives ECC its more rapid increase in security as key length increases.

Elliptic Curves
Let a, b ∈ ℝ be constants such that 4a³ + 27b² ≠ 0. A non-singular elliptic curve is the set E of solutions (x, y) ∈ ℝ × ℝ to the equation

y² = x³ + ax + b

together with a special point ∞ called the point at infinity. An elliptic curve can be drawn in a standard two-dimensional Cartesian plane. In the figure, we depict the elliptic curve y² = x³ − 4x. The graphs turn out to be gently looping lines of various forms.

Elliptic Curve Groups

Let p be a prime number, and let Fp denote the field of integers modulo p. An elliptic curve E over Fp is defined by an equation of the form

y² ≡ x³ + ax + b (mod p) ------ (1)

where a, b ∈ Fp satisfy 4a³ + 27b² ≢ 0 (mod p). A pair (x, y), where x, y ∈ Fp, is a point on the curve if (x, y) satisfies equation (1). The point at infinity, denoted by ∞, is also said to be on the curve. The set of all the points on E is denoted by E(Fp).

Points on elliptic curve

To find the points on an elliptic curve, one should follow this procedure:

1. Check for non-singularity: the curve is non-singular if a, b ∈ Fp and 4a³ + 27b² ≢ 0 (mod p).
2. For each value of x ∈ Fp, compute e = (x³ + ax + b) mod p.
3. Test whether e is a quadratic residue by applying Euler's criterion: e is a residue if e^((p−1)/2) ≡ 1 (mod p).
4. If e is a quadratic residue modulo p and p ≡ 3 (mod 4), find the square roots of e as ±e^((p+1)/4) mod p.
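The procedure can be sketched as follows, using the small curve y² = x³ + x + 6 over Z11 that appears later in this report (an illustrative parameter choice only; real curves use primes hundreds of bits long):

```python
def curve_points(a: int, b: int, p: int):
    """Enumerate the affine points on y^2 = x^3 + a*x + b over F_p,
    for a prime p with p = 3 (mod 4)."""
    # Step 1: non-singularity check.
    assert (4 * a**3 + 27 * b**2) % p != 0, "curve must be non-singular"
    points = []
    for x in range(p):
        # Step 2: evaluate the right-hand side.
        e = (x**3 + a * x + b) % p
        if e == 0:
            points.append((x, 0))                 # single point with y = 0
        elif pow(e, (p - 1) // 2, p) == 1:        # Step 3: Euler's criterion
            y = pow(e, (p + 1) // 4, p)           # Step 4: root for p = 3 (mod 4)
            points.append((x, y))
            points.append((x, p - y))
    return points

pts = curve_points(1, 6, 11)   # y^2 = x^3 + x + 6 over Z_11
# 12 affine points plus the point at infinity gives the 13 points cited later.
assert len(pts) == 12
assert (2, 7) in pts
```

The square-root shortcut in step 4 works only when p ≡ 3 (mod 4); for p ≡ 1 (mod 4) a general algorithm such as Tonelli-Shanks is needed instead.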

Operations on Elliptic Curve Group

Let p > 3 be a prime and E an elliptic curve over Zp. We will define a binary operation over E which makes E into an abelian group. This operation is usually written as addition. The point at infinity, ∞, will be the identity element, so P + ∞ = ∞ + P = P for all P ∈ E.

Addition of two points. In geometric terms, the rule for addition can be stated as follows: if three points on the elliptic curve lie on a straight line, their sum is the identity element ∞. From this definition, we can derive the rules of addition over an elliptic curve. Suppose P, Q ∈ E, where P = (x1, y1) and Q = (x2, y2).

1. The point at infinity serves as the additive identity.

2. The negative of a point P is the point with the same x coordinate but the negative of the y coordinate; that is, −P = (x1, −y1). These two points can be joined by a vertical line. Note that P + (−P) = ∞.

3. We define P + Q = −R, the mirror image of the third point R at which the line through P and Q meets the curve. The coordinates (x3, y3) of P + Q are computed from the slope

λ = (y2 − y1) / (x2 − x1)   if x1 ≠ x2
λ = (3x1² + a) / (2y1)      if P = Q

as

x3 = λ² − x1 − x2
y3 = λ(x1 − x3) − y1
Example. Let E be the elliptic curve y² = x³ + x + 6 over Z11. The points on the curve are tabulated in the table.

E has 13 points on it. Since a group of prime order is cyclic, it follows that E is isomorphic to Z13, and any point other than the point at infinity is a generator of E. Suppose we take the generator α = (2, 7). Then we compute the powers of α (which we will write as multiples of α, since the group operation is addition). The multiples are listed in the table.
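The addition rules can be checked against this example. The helper below implements the chord-and-tangent formulas over Fp, with None standing for the point at infinity ∞, and then walks through the multiples of the generator (2, 7):

```python
def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + a*x + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                          # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p     # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p            # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Multiples of the generator alpha = (2, 7) on y^2 = x^3 + x + 6 over Z_11.
alpha = (2, 7)
multiples, point = [alpha], alpha
for _ in range(12):
    point = ec_add(point, alpha, 1, 11)
    multiples.append(point)

assert multiples[1] == (5, 2)   # 2*alpha, by the doubling formula
assert multiples[12] is None    # 13*alpha = infinity, confirming group order 13
```

The modular inverses are computed with Python's built-in `pow(x, -1, p)` (available from Python 3.8), which is the step that makes division by the slope denominator well defined in Fp.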

Security and Efficiency

The majority of public key systems in use today use 1024-bit parameters for RSA and Diffie-Hellman. NIST has recommended that these 1024-bit systems are sufficient for use until 2010. After that, NIST recommends that they be upgraded to something providing more security. The question is what these systems should be changed to. One option is to simply increase the public key parameter size to a level appropriate for another decade of use. Another option is to take advantage of the past 30 years of public key research and analysis and move from first-generation public key algorithms on to elliptic curves. One way judgments are made about the correct key size for a public key system is to look at the strength of the conventional (symmetric) encryption algorithms that the public key algorithm will be used to key or authenticate. Examples of these conventional algorithms are the Data Encryption Standard (DES), created in 1975, and the Advanced Encryption Standard (AES), now the new standard. The length of a key, in bits, for a conventional encryption algorithm is a common measure of security. Attacking an algorithm with a k-bit key generally requires roughly 2^(k-1) operations. Hence, to secure a public key system one would generally want to use parameters that require at least 2^(k-1) operations to attack. The following table gives the key sizes recommended by NIST to protect keys used in conventional encryption algorithms like DES and AES, together with the key sizes for RSA, Diffie-Hellman and elliptic curves that are needed to provide equivalent security.

To use RSA or Diffie-Hellman to protect 128-bit AES keys one should use 3072-bit parameters: three times the size in use throughout the Internet today. The equivalent key size for elliptic curves is only 256 bits. One can see that as symmetric key sizes increase, the required key sizes for RSA and Diffie-Hellman increase at a much faster rate than the required key sizes for elliptic curve cryptosystems. Hence, elliptic curve systems offer more security per bit of key-size increase than either RSA or Diffie-Hellman public key systems. Security is not the only attractive feature of elliptic curve cryptography. Elliptic curve cryptosystems are also more computationally efficient than the first-generation public key systems, RSA and Diffie-Hellman. Although elliptic curve arithmetic is slightly more complex per bit than either RSA or DH arithmetic, the added strength per bit more than makes up for any extra compute time. The following table shows the ratio of DH computation versus EC computation for each of the key sizes listed in the table above.

Closely related to the key size of different public key systems is the channel overhead required to perform key exchanges and digital signatures on a communications link. The public key sizes in the table are also roughly the number of bits that need to be transmitted each way over a communications channel for a key exchange. In channel-constrained environments, elliptic curves offer a much better solution than first-generation public key systems like Diffie-Hellman. The United States, the UK, Canada and certain other NATO nations have all adopted some form of ECC for future systems to protect classified information throughout and between their governments. The Cryptographic Modernization Initiative in the US Department of Defense aims at replacing almost 1.3 million pieces of existing equipment over the next 10 years. In addition, the Department's Global Information Grid will require a vast expansion of the number of security devices in use throughout the US Military. Most of these needs will be satisfied with a new generation of cryptographic equipment that uses elliptic curve cryptography for key management and digital signatures.

Elliptic Curve Intellectual Property

Despite the many advantages of elliptic curves and despite the adoption of elliptic curves by many users, many vendors and academics view the intellectual property environment surrounding elliptic curves as a major roadblock to their implementation and use. Various aspects of elliptic curve cryptography have been patented by a variety of people and companies around the world. Notably, the Canadian company Certicom Inc. holds over 350 patents related to elliptic curves and public key cryptography in general. As a way of clearing the way for the implementation of elliptic curves to protect US and allied government information, the National Security Agency purchased from Certicom a license that covers all of their intellectual property in a restricted field of use. The license is limited to implementations that are for national security uses and certified under FIPS 140-2, or approved by NSA. Further, the license is limited to prime field curves where the prime is greater than 2^255. On the NIST list of curves, 3 of the 15 fit this field of use: the prime field curves with primes of 256 bits, 384 bits and 521 bits. Certicom identified 26 patents that cover this field of use. NSA's license includes a right to sublicense these 26 patents to vendors building products within the restricted field of use. Certicom also retained a right to license vendors both within the field of use and under other terms that they may negotiate with vendors.

Commercial vendors may receive a license from NSA provided their products fit within the field of use of NSA's license. Alternatively, commercial vendors may contact Certicom for a license for the same 26 patents. Certicom is planning on developing and selling software toolkits that implement elliptic curve cryptography in the field of use. With the toolkit a vendor will also receive a license from Certicom to sell the technology licensed by NSA in the general commercial marketplace. Vendors wishing to implement elliptic curves outside the scope of the NSA license will need to work with Certicom if they wish to be licensed.

The Elliptic Curve Discrete Logarithm Problem

The inverse operation to point multiplication, finding a logarithm in a group defined on an elliptic curve over a prime field, is defined as follows: given points P and Q, find the integer k such that Q = kP. This is the elliptic curve discrete logarithm problem, and it is the inverse operation in the cryptosystem, the one you have to perform to get the plain text back from the cipher text given only the public key. One can find k by performing repeated addition operations, stepping through P, 2P, 3P and so on until kP is found. You would start by doubling P, then adding P to 2P to find 3P, then adding P to 3P to find 4P, and so on. This is the brute-force method. The trouble with this is that if a large enough prime field is used, the number of possible values for k becomes inconveniently large: so inconveniently large that it is quite practical to create a prime field big enough that searching through the possible values of k would take all the processor time currently available on the planet thousands of years.
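The brute-force search just described is easy to write down for the toy curve y² = x³ + x + 6 over Z11 from the earlier example; its cost grows linearly with the group order, which is precisely what makes it hopeless for curves over large prime fields:

```python
def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + a*x + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def brute_force_dlog(P, Q, a, p):
    """Solve Q = kP by stepping through P, 2P, 3P, ... -- O(group order) work."""
    R, k = P, 1
    while R != Q:
        R = ec_add(R, P, a, p)
        k += 1
    return k

# On the 13-point toy group generated by P = (2, 7), the search is instant.
P, a, p = (2, 7), 1, 11
k = brute_force_dlog(P, (5, 2), a, p)
assert k == 2   # (5, 2) = 2P, as computed in the earlier example
```

Here the group has only 13 elements, so the loop terminates after at most a dozen additions; on a curve over a 256-bit prime field the same loop would need on the order of 2^256 additions, which is why the brute-force attack is considered infeasible.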