
The World of Encryption explained by the

Anonymous Group

#blackhat

Author: Roy Rogers

We are Anonymous

We are Legion

We do not Forgive

We do not Forget
encryption
Encryption is the conversion of electronic data into another form, called ciphertext, which
cannot be easily understood by anyone except authorized parties.

How encryption works


Data, often referred to as plaintext, is encrypted using an encryption algorithm and an
encryption key.

plaintext
In cryptography, plaintext is ordinary readable text before being encrypted into ciphertext or
after being decrypted.

Plaintext is what you have before encryption, and ciphertext is the encrypted result.

The term cipher is sometimes used as a synonym for ciphertext, but it more properly means
the method of encryption rather than the result.

This process generates ciphertext that can only be viewed in its original form if decrypted with
the correct key.

Decryption is simply the inverse of encryption, following the same steps but reversing the
order in which the keys are applied.

Today's encryption algorithms are divided into two categories: symmetric and asymmetric.

Symmetric-key ciphers use the same key, or secret, for encrypting and decrypting a message
or file.

The most widely used symmetric-key cipher is AES, which was created to protect government
classified information.

Symmetric-key encryption is much faster than asymmetric encryption, but the sender must
exchange the key used to encrypt the data with the recipient before he or she can decrypt it.

This requirement to securely distribute and manage large numbers of keys means most
cryptographic processes use a symmetric algorithm to efficiently encrypt data, but use an
asymmetric algorithm to exchange the secret key.
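
The sketch below illustrates that hybrid pattern, assuming the third-party Python cryptography package is installed (the names and sample data are illustrative, not from the original): a random symmetric key encrypts the bulk data quickly, and the recipient's RSA public key encrypts ("wraps") only that small symmetric key.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair; in practice only the public half is shared with the sender.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: fast symmetric encryption of the bulk data...
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"a large document goes here")

# ...and slower asymmetric encryption of just the small symmetric key.
wrapped_key = recipient_public.encrypt(sym_key, oaep)

# Recipient: unwrap the symmetric key with the private key, then decrypt the data.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"a large document goes here"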

Asymmetric cryptography, also known as public-key cryptography, uses two different but
mathematically linked keys, one public and one private.

The public key can be shared with everyone, whereas the private key must be kept secret.

RSA is the most widely used asymmetric algorithm, partly because both the public and the
private keys can encrypt a message; the opposite key from the one used to encrypt a message
is used to decrypt it.

This attribute provides a method of assuring not only confidentiality, but also the integrity,
authenticity and non-repudiation of electronic communications and data at rest through the
use of digital signatures.

Cryptographic hash functions


A cryptographic hash function plays a somewhat different role than other cryptographic
algorithms.

Hash functions are widely used in many aspects of security, such as digital signatures and data
integrity checks.
They take an electronic file, message or block of data and generate a short digital fingerprint
of the content called a message digest or hash value.

The key properties of a secure cryptographic hash function are:

Output length is small compared to input
Computation is fast and efficient for any input
Any change to input affects lots of output bits
One-way -- the input cannot be determined from the output
Strong collision resistance -- two different inputs can't create the same output
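
A quick way to see the last three properties, sketched here with Python's standard hashlib module: two inputs that differ by a single character produce fixed-length digests with nothing obvious in common.

import hashlib

msg1 = b"Transfer $100 to Alice"
msg2 = b"Transfer $900 to Alice"   # a one-character change

print(hashlib.sha256(msg1).hexdigest())
print(hashlib.sha256(msg2).hexdigest())
# Both digests are the same short, fixed length regardless of input size,
# and the small change to the input produces a completely different value.
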
In 2012, the National Institute of Standards and Technology (NIST) announced Keccak as the
winner of its Cryptographic Hash Algorithm Competition to select a next-generation
cryptographic hash algorithm.

The Keccak (pronounced "catch-ack") algorithm will be known as SHA-3 and complement the
SHA-1 and SHA-2 algorithms specified in FIPS 180-4, Secure Hash Standard.

Even though the competition was prompted by successful attacks on MD5 and SHA-0 and the
emergence of theoretical attacks on SHA-1, NIST has said that SHA-2 is still "secure and
suitable for general use."

The ciphers in hash functions are built for hashing: they use large keys and blocks, can
efficiently change keys every block and have been designed and vetted for resistance to
related-key attacks.

General-purpose ciphers used for encryption tend to have different design goals.

For example, the symmetric-key block cipher AES could also be used for generating hash
values, but its key and block sizes make it nontrivial and inefficient.

Contemporary encryption issues


For any cipher, the most basic method of attack is brute force: trying each key until the right
one is found.

The length of the key determines the number of possible keys, and hence the feasibility of this
type of attack.

Encryption strength is directly tied to key size, but as the key size increases so too do the
resources required to perform the computation.
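
The arithmetic behind that trade-off is easy to see. A tiny Python sketch, using familiar key lengths (56-bit DES, 128- and 256-bit AES) purely as examples:

# Every added key bit doubles the number of keys a brute-force attacker must try.
for bits in (56, 128, 256):
    print(f"{bits}-bit key: {2 ** bits:.3e} possible keys")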

Alternative methods of breaking a cipher include side-channel attacks, which don't attack the
actual cipher but its implementation.

An error in system design or execution can allow such attacks to succeed.

Another approach is to actually break the cipher through cryptanalysis; finding a weakness in
the cipher that can be exploited with a complexity less than brute force.

The challenge of successfully attacking a cipher is easier of course if the cipher itself is flawed
in the first place.

There have always been suspicions that interference from the National Security
Agency weakened the Data Encryption Standard algorithm, and following revelations from
former NSA analyst and contractor Edward Snowden, many believe the agency has attempted
to weaken encryption products and subvert cryptography standards.
Despite these issues, one reason for the popularity and longevity of the AES algorithm is that
the process that led to its selection was fully open to public scrutiny and comment ensuring a
thorough, transparent analysis of the design.

How we use encryption today


Until the arrival of the Diffie-Hellman key exchange and RSA algorithms, governments and
their armies were the only real users of encryption. However, Diffie-Hellman and RSA led to
the broad use of encryption in the commercial and consumer realms to protect data both while
it is being sent across a network (data in transit) and stored, such as on a hard
drive, smartphone or flash drive (data at rest).

Devices like modems, set-top boxes, smartcards and SIM cards all use encryption or rely
on protocols like SSH, S/MIME, and SSL/TLS to encrypt sensitive data.

Encryption is used to protect data in transit sent from all sorts of devices across all sorts of
networks, not just the Internet; every time someone uses an ATM or buys something online
with a smartphone, makes a mobile phone call or presses a key fob to unlock a car, encryption
is used to protect the information being relayed.

Digital rights management systems, which prevent unauthorized use or reproduction of
copyrighted material, are yet another example of encryption protecting data.

Ciphertext is encrypted text.


The primary purpose of encryption is to protect the confidentiality of digital data stored on
computer systems or transmitted via the Internet or other computer networks.

Modern encryption algorithms play a vital role in the security assurance of IT systems and
communications as they can provide not only confidentiality, but also the following key
elements of security:

Authentication: the origin of a message can be verified.

Integrity: proof that the contents of a message have not been changed since it was sent.

Non-repudiation: the sender of a message cannot deny sending the message.

History of encryption
The word encryption comes from the Greek word kryptos, meaning hidden or secret.

The use of encryption is nearly as old as the art of communication itself.

As early as 1900 BC, an Egyptian scribe used non-standard hieroglyphs to hide the meaning of
an inscription.

In a time when most people couldn't read, simply writing a message was often enough, but
encryption schemes soon developed to convert messages into unreadable groups of figures to
protect the message's secrecy while it was carried from one place to another.

The contents of a message were reordered (transposition) or replaced (substitution) with other
characters, symbols, numbers or pictures in order to conceal its meaning.

In 700 BC, the Spartans wrote sensitive messages on strips of leather wrapped around sticks.

When the strip was unwound, the characters became meaningless, but with a stick of exactly
the same diameter, the recipient could recreate (decipher) the message.

Later, the Romans used what's known as the Caesar Shift Cipher, a monoalphabetic cipher in
which each letter is shifted by an agreed number.
So, for example, if the agreed number is three, then the message, "Be at the gates at six"
would become "eh dw wkh jdwhv dw vla".
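
A minimal Python sketch of the same shift cipher (the function name and arguments are illustrative, not from the original):

def caesar_shift(text, shift):
    """Shift alphabetic characters by a fixed amount; leave everything else alone."""
    result = []
    for ch in text.lower():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord("a") + shift) % 26 + ord("a")))
        else:
            result.append(ch)
    return "".join(result)

print(caesar_shift("Be at the gates at six", 3))    # eh dw wkh jdwhv dw vla
print(caesar_shift("eh dw wkh jdwhv dw vla", -3))   # be at the gates at six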

At first glance this may look difficult to decipher, but shifting the alphabet along one position
at a time until the letters make sense doesn't take long.

Also, the vowels and other commonly used letters like T and S can be quickly deduced using
frequency analysis, and that information in turn can be used to decipher the rest of the
message.
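
Frequency analysis can itself be sketched in a few lines of Python; even on the short ciphertext above, the most common letters point straight back to the most common English letters.

from collections import Counter

ciphertext = "eh dw wkh jdwhv dw vla"
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(3))   # 'w' and 'h' dominate -- the shifted forms of 't' and 'e'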

The Middle Ages saw the emergence of polyalphabetic substitution, which uses multiple
substitution alphabets to limit the use of frequency analysis to crack a cipher.

This method of encrypting messages remained popular despite many implementations that
failed to adequately conceal when the substitution changed, also known as key progression.

Possibly the most famous implementation of a polyalphabetic substitution cipher is the Enigma
electro-mechanical rotor cipher machine used by the Germans during World War Two.

It was not until the mid-1970s that encryption took a major leap forward. Until this point, all
encryption schemes used the same secret for encrypting and decrypting a message: a
symmetric key. In 1976, B. Whitfield Diffie and Martin Hellman's paper New Directions in
Cryptography solved one of the fundamental problems of cryptography, namely how to
securely distribute the encryption key to those who need it.

This breakthrough was followed shortly afterwards by RSA, an implementation of public-key
cryptography using asymmetric algorithms, which ushered in a new era of encryption.

key
In cryptography, a key is a variable value that is applied using an algorithm to a string
or block of unencrypted text to produce encrypted text, or to decrypt encrypted text.

The length of the key is a factor in considering how difficult it will be to decrypt the text in a
given message.

In a database context, a key is a field that is selected for sorting. A primary key is a key that is
unique for each record and is, as such, used to access that record; a foreign key is one that
targets a primary key in another table.

public key
In cryptography, a public key is a value provided by a designated authority as
an encryption key.

A system for using public keys is called a public key infrastructure (PKI).

The Public-Key Cryptography Standards (PKCS) are a set of intervendor standard protocols for
making possible secure information exchange on the Internet using a public key infrastructure
(PKI).

When combined with a private key that is mathematically linked to the public key, messages
and digital signatures can be effectively encrypted.

The use of combined public and private keys is known as asymmetric cryptography.

cryptography
Cryptography is a method of storing and transmitting data in a particular form so that only
those for whom it is intended can read and process it.
Cryptography is closely related to the disciplines of cryptology and cryptanalysis.

Cryptography includes techniques such as microdots, merging words with images, and other
ways to hide information in storage or transit.

However, in today's computer-centric world, cryptography is most often associated with
scrambling plaintext (ordinary text, sometimes referred to as cleartext) into ciphertext (a
process called encryption), then back again (known as decryption).

Individuals who practice this field are known as cryptographers.

Modern cryptography concerns itself with the following four objectives:

1) Confidentiality (the information cannot be understood by anyone for whom it was
unintended)

2) Integrity (the information cannot be altered in storage or transit between sender and
intended receiver without the alteration being detected)

3) Non-repudiation (the creator/sender of the information cannot deny at a later stage his or
her intentions in the creation or transmission of the information)

4) Authentication (the sender and receiver can confirm each other's identity and the
origin/destination of the information)

Procedures and protocols that meet some or all of the above criteria are known as
cryptosystems.

Cryptosystems are often thought to refer only to mathematical procedures and computer
programs; however, they also include the regulation of human behavior, such as choosing
hard-to-guess passwords, logging off unused systems, and not discussing sensitive procedures
with outsiders.

The word is derived from the Greek kryptos, meaning hidden.

The origin of cryptography is usually dated from about 2000 BC, with the Egyptian practice of
hieroglyphics.

These consisted of complex pictograms, the full meaning of which was only known to an elite
few.

The first known use of a modern cipher was by Julius Caesar (100 BC to 44 BC), who did not
trust his messengers when communicating with his governors and officers.

For this reason, he created a system in which each character in his messages was replaced by
a character three positions ahead of it in the Roman alphabet.

In recent times, cryptography has turned into a battleground of some of the world's best
mathematicians and computer scientists.

The ability to securely store and transfer sensitive information has proved a critical factor in
success in war and business.

Because governments do not wish certain entities in and out of their countries to have access
to ways to receive and send hidden information that may be a threat to national interests,
cryptography has been subject to various restrictions in many countries, ranging from
limitations of the usage and export of software to the public dissemination of mathematical
concepts that could be used to develop cryptosystems.

However, the Internet has allowed the spread of powerful programs and, more importantly,
the underlying techniques of cryptography, so that today many of the most advanced
cryptosystems and ideas are now in the public domain.

cryptology
Cryptology is the mathematics, such as number theory, and the application of formulas
and algorithms, that underpin cryptography and cryptanalysis.

Since the cryptanalysis concepts are highly specialized and complex, we concentrate here only
on some of the key mathematical concepts behind cryptography.

In order for data to be secured for storage or transmission, it must be transformed in such a
manner that it would be difficult for an unauthorized individual to be able to discover its true
meaning.

To do this, certain mathematical equations are used, which are very difficult to solve unless
certain strict criteria are met.

The level of difficulty of solving a given equation is known as its intractability. These types of
equations form the basis of cryptography.

Some of the most important are:

The Discrete Logarithm Problem: The best way to describe this problem is first to show how its
inverse concept works.

The following applies to Galois fields (groups).

Assume we have a prime number P (a number that is not divisible except by 1 and itself, P).

This P is a large prime number of over 300 digits.

Let us now assume we have two other integers, a and b.

Now say we want to find the value of N, so that value is found by the following formula:

N = a^b mod P, where 0 <= N <= (P - 1)

This is known as discrete exponentiation and is quite simple to compute.
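
In Python, discrete exponentiation is a one-liner with the built-in three-argument pow(); the sketch below uses a small stand-in prime rather than a 300-digit one.

P = 2 ** 127 - 1          # stand-in prime; a real P would have 300+ digits
a, b = 5, 123456789

N = pow(a, b, P)          # fast, even though a**b itself would be astronomically large
print(N)
# Recovering b from (P, a, N) is the discrete logarithm problem: there is no
# comparably efficient method, which is what Diffie-Hellman and ElGamal rely on.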

However, the opposite is true when we invert it.

If we are given P, a, and N and are required to find b so that the equation is valid, then we
face a tremendous level of difficulty.

This problem forms the basis for a number of public key infrastructure algorithms, such as
Diffie-Hellman and ElGamal.

This problem has been studied for many years and cryptography based on it has withstood
many forms of attacks.

The Integer Factorization Problem: This is simple in concept.

Say that one takes two prime numbers, P2 and P1, which are both "large" (a relative term, the
definition of which continues to move forward as computing power increases).

We then multiply these two primes to produce the product, N.

The difficulty arises when, being given N, we try and find the original P1 and P2.

The Rivest-Shamir-Adleman public key infrastructure encryption protocol is one of many based
on this problem.

To simplify matters to a great degree, the N product is the public key and the P1 and P2
numbers are, together, the private key.
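
The asymmetry is easy to demonstrate. In the sketch below the primes are toy-sized so naive trial division still succeeds, which it never would against a real 2048-bit modulus.

p1, p2 = 104729, 1299709          # small, known primes used purely for illustration
n = p1 * p2                       # multiplying is instantaneous

def trial_division(n):
    """Naive factoring: cost grows with the size of the smallest prime factor."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return None

print(trial_division(n))          # (104729, 1299709)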

This problem is one of the most fundamental of all mathematical concepts.

It has been studied intensely for the past 20 years and the consensus seems to be that there
is some unproven or undiscovered law of mathematics that forbids any shortcuts.

That said, the mere fact that it is being studied intensely leads many others to worry that,
somehow, a breakthrough may be discovered.

The Elliptic Curve Discrete Logarithm Problem: This is a new cryptographic protocol based
upon a reasonably well-known mathematical problem.

The properties of elliptic curves have been well known for centuries, but it is only recently that
their application to the field of cryptography has been undertaken.

First, imagine a huge piece of paper on which is printed a series of vertical and horizontal lines.

Each line represents an integer with the vertical lines forming x class components and
horizontal lines forming the y class components.

The intersection of a horizontal and vertical line gives a set of coordinates (x,y).

In the highly simplified example below, we have an elliptic curve that is defined by the
equation:

y^2 + y = x^3 - x^2

(this is way too small for use in a real-life application, but it will illustrate the general idea)

For the above, given a definable operator, we can determine any third point on the curve given
any two other points.

This definable operator forms a "group" of finite length.

To add two points on an elliptic curve, we first need to understand that any straight line that
passes through this curve intersects it at precisely three points.

Now, say we define two of these points as u and v: we can then draw a straight line through
two of these points to find another intersecting point, at w.

We can then draw a vertical line through w to find the final intersecting point at x.

Now, we can see that u + v = x.


This rule works when we define another imaginary point, the origin, or O, which exists at
(theoretically) extreme points on the curve.
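
The same group law can be written algebraically. The sketch below uses a simpler curve form than the example equation above -- a short Weierstrass curve y^2 = x^3 + ax + b over a small prime field -- with toy parameters chosen only for illustration (pow(x, -1, p) needs Python 3.8+).

# Point addition on y^2 = x^3 + a*x + b over GF(p); None is the point at infinity.
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

# Toy curve y^2 = x^3 + 2x + 2 over GF(17); G = (5, 1) lies on it.
a, p, G = 2, 17, (5, 1)
k, R = 9, None
for _ in range(k):          # compute k*G by repeated addition
    R = ec_add(R, G, a, p)
print(R)                    # recovering k from G and R alone is the EC discrete log problem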

As strange as this problem may seem, it does permit an effective encryption system, but it
does have its detractors.

On the positive side, the problem appears to be quite intractable, requiring a shorter key
length (thus allowing for quicker processing time) for equivalent security levels as compared to
the Integer Factorization Problem and the Discrete Logarithm Problem.

On the negative side, critics contend that this problem, since it has only recently begun to be
implemented in cryptography, has not had the intense scrutiny of many years that is required
to give it a sufficient level of trust as being secure.

This leads us to a more general problem of cryptology than the intractability of the various
mathematical concepts: the more time, effort and resources that can be devoted to studying a
problem, the greater the possibility that a solution, or at least a weakness, will be found.

PKI (public key infrastructure)


A public key infrastructure (PKI) supports the distribution and identification of public
encryption keys, enabling users and computers to both securely exchange data
over networks such as the Internet and verify the identity of the other party.

Without PKI, sensitive information can still be encrypted (ensuring confidentiality) and
exchanged, but there would be no assurance of the identity (authentication) of the other party.

Any form of sensitive data exchanged over the Internet is reliant on PKI for security.

Elements of PKI
A typical PKI consists of hardware, software, policies and standards to manage the creation,
administration, distribution and revocation of keys and digital certificates.

Digital certificates are at the heart of PKI as they affirm the identity of the certificate subject
and bind that identity to the public key contained in the certificate.

A typical PKI includes the following key elements:

A trusted party, called a certificate authority (CA), acts as the root of trust and provides
services that authenticate the identity of individuals, computers and other entities

A registration authority, often called a subordinate CA, certified by a root CA to issue
certificates for specific uses permitted by the root

A certificate database, which stores certificate requests and issues and revokes certificates

A certificate store, which resides on a local computer as a place to store issued certificates
and private keys

A CA issues digital certificates to entities and individuals after verifying their identity.

It signs these certificates using its private key; its public key is made available to all interested
parties in a self-signed CA certificate.

CAs use this trusted root certificate to create a "chain of trust" -- many root certificates are
embedded in Web browsers so they have built-in trust of those CAs.
Web servers, email clients, smartphones and many other types of hardware and software also
support PKI and contain trusted root certificates from the major CAs.

Along with an entity's or individual's public key, digital certificates contain information about
the algorithm used to create the signature, the person or entity identified, the digital signature
of the CA that verified the subject data and issued the certificate, the purpose of the public key
(encryption, signature and certificate signing), as well as a date range during which the
certificate can be considered valid.

Problems with PKI


PKI provides a chain of trust, so that identities on a network can be verified.

However, like any chain, a PKI is only as strong as its weakest link.

There are various standards that cover aspects of PKI -- such as the Internet X.509 Public Key
Infrastructure Certificate Policy and Certification Practices Framework (RFC2527) -- but there
is no predominant governing body enforcing these standards.

Although a CA is often referred to as a trusted third party, shortcomings in the security
procedures of various CAs in recent years have jeopardized trust in the entire PKI on which the
Internet depends.

If one CA is compromised, the security of the entire PKI is at risk.

For example, in 2011, Web browser vendors were forced to blacklist all certificates issued by
the Dutch CA DigiNotar after more than 500 fake certificates were discovered.

A Web of trust
An alternative approach to using a CA to authenticate public key information is a decentralized
trust model called a "Web of trust," a concept used in PGP and other OpenPGP-compatible
systems.

Instead of relying solely on a hierarchy of certificate authorities, certificates are signed by
other users to endorse the association of that public key with the person or entity listed in the
certificate.

One problem with this method is that a user has to trust all those in the key chain to be honest,
so it's often best suited to small user communities.

For example, an enterprise could use a Web of trust for authenticating the identity of its
internal, intranet and extranet users and devices.

It could also act as its own CA, using software such as Microsoft Certificate Services to issue
and revoke digital certificates.

certificate authority (CA)


A certificate authority (CA) is a trusted entity that issues electronic documents that verify a
digital entity's identity on the Internet.

The electronic documents, which are called digital certificates, are an essential part of secure
communication and play an important part in the public key infrastructure (PKI).

Certificates typically include the owner's public key, the expiration date of the certificate, the
owner's name and other information about the public key owner.

Operating systems (OSes) and browsers maintain lists of trusted CA root certificates to verify
certificates that a CA has issued and signed.
Although any entity that wants to issue digital certificates for secure communications can
potentially become their own certificate authority, most e-commerce websites use certificates
issued by commercial CAs.

Typically, the longer the CA has been operational, the more browsers and devices will trust the
certificates a CA issues.

Ideally, certificates are backward compatible with older browsers and operating systems, a
concept known as ubiquity.

Protocols that rely on certificate chain verification -- such as VPN and SSL/TLS -- are
vulnerable to a number of dangerous attacks, including SSL man-in-the-middle attacks.

Recently, trust in CAs has been shaken due to abuse of fraudulent certificates.

Hackers have broken into various CA networks -- DigiNotar and Comodo, for example -- and
signed bogus digital certificates in the names of trusted sites such as Twitter and Microsoft.

In response, DigiCert became the first certificate authority to implement certificate
transparency, an initiative that publicly logs issued certificates so that a certificate cannot be
issued for a domain without the domain owner being able to detect it.

e-commerce (electronic commerce or EC)


E-commerce (electronic commerce or EC) is the buying and selling of goods and services, or
the transmitting of funds or data, over an electronic network, primarily the internet.

These business transactions occur either as business-to-business, business-to-consumer,
consumer-to-consumer or consumer-to-business.

The terms e-commerce and e-business are often used interchangeably.

The term e-tail is also sometimes used in reference to transactional processes for online
shopping.

History of e-commerce
The beginnings of e-commerce can be traced to the 1960s, when businesses started
using Electronic Data Interchange (EDI) to share business documents with other companies.

In 1979, the American National Standards Institute developed ASC X12 as a universal
standard for businesses to share documents through electronic networks.

The number of individual users sharing electronic documents with each other grew through the
1980s, and in the 1990s the rise of eBay and Amazon revolutionized the e-commerce industry.

Consumers can now purchase an almost endless range of items online, both from traditional
brick-and-mortar stores with e-commerce capabilities and from one another.

E-commerce applications
E-commerce is conducted using a variety of applications, such as email, online catalogs and
shopping carts, EDI, File Transfer Protocol, and web services.

This includes business-to-business activities and outreach such as using email for unsolicited
ads (usually viewed as spam) to consumers and other business prospects, as well as to send
out e-newsletters to subscribers.

More companies now try to entice consumers directly online, using tools such as digital
coupons, social media marketing and targeted advertisements.
The benefits of e-commerce include its around-the-clock availability, the speed of access, the
wide availability of goods and services for the consumer, easy accessibility, and international
reach.

Its perceived downsides include sometimes-limited customer service, consumers not being
able to see or touch a product prior to purchase, and the wait time for product shipping.

The e-commerce market continues to grow: Online sales accounted for more than a third of
total U.S. retail sales growth in 2015, according to data from the U.S. Commerce Department.

Web sales totaled $341.7 billion in 2015, a 14.6% increase over 2014.

E-commerce conducted using mobile devices and social media is on the rise as well: Internet
Retailer reported that mobile accounted for 30% of all U.S. e-commerce activities in 2015.

And according to Invesp, 5% of all online spending was via social commerce in 2015, with
Facebook, Pinterest and Twitter providing the most referrals.

The rise of e-commerce forces IT personnel to move beyond infrastructure design and
maintenance and consider numerous customer-facing aspects such as consumer data
privacy and security.

When developing IT systems and applications to accommodate e-commerce activities, data
governance-related regulatory compliance mandates, personally identifiable
information privacy rules and information protection protocols must be considered.

Government regulations for e-commerce


In the United States, the Federal Trade Commission (FTC) and the Payment Card Industry
(PCI) Security Standards Council are among the primary agencies that regulate e-commerce
activities.

The FTC monitors activities such as online advertising, content marketing and customer
privacy, while the PCI Council develops standards and rules including PCI-DSS compliance that
outlines procedures for proper handling and storage of consumers' financial data.

To ensure the security, privacy and effectiveness of e-commerce, businesses should
authenticate business transactions, control access to resources such as webpages for
registered or selected users, encrypt communications and implement security technologies
such as the Secure Sockets Layer and two-factor authentication.

Transaction
In computer programming, a transaction usually means a sequence of information exchange
and related work (such as database updating) that is treated as a unit for the purposes of
satisfying a request and for ensuring database integrity.

For a transaction to be completed and database changes to be made permanent, a transaction
has to be completed in its entirety.

A typical transaction is a catalog merchandise order phoned in by a customer and entered into
a computer by a customer representative.

The order transaction involves checking an inventory database, confirming that the item is
available, placing the order, and confirming that the order has been placed and the expected
time of shipment.

If we view this as a single transaction, then all of the steps must be completed before the
transaction is successful and the database is actually changed to reflect the new order.
If something happens before the transaction is successfully completed, any changes to the
database must be kept track of so that they can be undone.

A program that manages or oversees the sequence of events that are part of a transaction is
sometimes called a transaction monitor.
Transactions are supported by Structured Query Language, the standard database user and
programming interface.

When a transaction completes successfully, database changes are said to be committed; when
a transaction does not complete, changes are rolled back.

In IBM's Customer Information Control System product, a transaction is a unit of application
data processing that results from a particular type of transaction request.

In CICS, an instance of a particular transaction request by a computer operator or user is
called a task.

Less frequently and in other computer contexts, a transaction may have a different meaning.

For example, in IBM mainframe operating system batch processing, a transaction is a job or
a job step.

two-factor authentication (2FA)


Two-factor authentication is a security process in which the user provides two means of
identification from separate categories of credentials; one is typically a physical token, such as
a card, and the other is typically something memorized, such as a security code.

In this context, the two factors involved are sometimes spoken of as something you
have and something you know.

A common example of two-factor authentication is a bank card: the card itself is the physical
item and the personal identification number (PIN) is the data that goes with it.

Including those two elements makes it more difficult for someone to access the user's bank
account because they would have to have the physical item in their possession and also know
the PIN.

According to proponents, two-factor authentication can drastically reduce the incidence of
online identity theft, phishing expeditions and other online fraud, because stealing the victim's
password is not enough to give a thief access to their information.

What are authentication factors?


An authentication factor is an independent category of credential used for identity verification.

The three most common categories are often described as something you know
(the knowledge factor), something you have (the possession factor) and something you are
(the inherence factor).

For systems with more demanding requirements for security, location and time are sometimes
added as fourth and fifth factors.

Single-factor authentication (SFA) is based on only one category of identifying credential.

The most common SFA method is the familiar user name and password combination
(something you know).
The security of SFA relies to some extent upon the diligence of users.

Best practices for SFA include selecting strong passwords and refraining from automatic
or social logins.

For any system or network that contains sensitive data, it's advisable to add additional
authentication factors.

Multifactor authentication (MFA) involves two or more independent credentials for more secure
transactions.

Single-factor authentication (SFA) vs. two-factor authentication (2FA)

Although ID and password are two items, because they belong to the same authentication
factor (knowledge), they are single factor authentication (SFA).

It is really because of their low cost, ease of implementation and familiarity that passwords
have remained the most common form of SFA.

As far as SFA solutions go, ID and password are not the most secure.

Multiple challenge-response questions can provide more security, depending on how they are
implemented, and standalone biometric verification methods of many kinds can also provide
more secure single-factor authentication.

One problem with password-based authentication is that it requires knowledge and diligence to
create and remember strong passwords.

Passwords also require protection from inside threats, such as carelessly discarded password
sticky notes, old hard drives and social engineering exploits.

Passwords are also prey to external threats such as hackers using brute
force, dictionary or rainbow table attacks.

Given enough time and resources, an attacker can usually breach password-based security
systems.

Two-factor authentication is designed to provide additional security.

2FA products
There are a huge number of devices and solutions for 2FA, from tokens to RFID cards to
smartphone apps.

Offerings from some well-known companies:

RSA SecurID is still very common (although RSA's SecurID system was compromised in a 2011 breach).

Microsoft Phonefactor offers 2FA for a reasonable cost and is free to small organizations of 25
members or less.

Dell Defender is a multifactor authentication suite that offers biometrics and various token
methods for 2FA and higher.

Google Authenticator is a 2FA app that works with any supporting site or service.

Apple's iOS, iTunes Store and cloud services all support 2FA to protect user accounts and
content.
2FA for mobile authentication
Apple's iOS, Google Android and BlackBerry OS 10 all have apps supporting 2FA and other
multifactor authentication.

Some have screens capable of recognizing fingerprints; a built-in camera can be used for facial
recognition or iris scanning and the microphone can be used in voice recognition.

Many smartphones have GPS to verify location as an additional factor.

Voice or SMS may also be used as a channel for out-of-band authentication.

There are also apps that provide one time password tokens, allowing the phone itself to serve
as the physical device to satisfy the possession factor.

Google Authenticator is a two-factor authentication app.

To access websites or web-based services, the user types in his username and password and
then enters a one-time passcode (OTP) that was delivered to his device in response to the
login.

The six-digit one time password changes once every 30-60 seconds and serves again to prove
possession as an authentication factor.
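
Such time-based one-time passwords follow the TOTP scheme (RFC 6238). A minimal Python sketch, assuming a Base32 shared secret that was provisioned to the app when the account was enrolled (the secret shown is purely hypothetical):

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # the server computes the same value from the same secret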

Smartphones offer a variety of possibilities for 2FA, allowing companies to use what works best
for them.
Is two-factor authentication secure?
Opponents argue (among other things) that, should a thief gain access to your computer, he
can boot up in safe mode, bypass the physical authentication processes, scan your system for
all passwords and enter the data manually, thus -- at least in this situation -- making two-
factor authentication no more secure than the use of a password alone.

Higher levels of authentication for more secure communications


Some security procedures now require three-factor authentication (3FA), which typically
involves possession of a physical token and a password used in conjunction with
biometric data, such as fingerscanning or a voiceprint.

An attacker may occasionally break an authentication factor in the physical world.

A persistent search of the target premises, for example, might yield an employee card or an ID
and password in an organization's trash or carelessly discarded storage containing password
databases.

If additional factors are required for authentication, however, the attacker would face at least
one more obstacle.
The majority of attacks come from remote internet connections.

2FA can make distance attacks much less of a threat because accessing passwords is not
sufficient for access and it is unlikely that the attacker would also possess the physical device
associated with the user account.

Each additional authentication factor makes a system more secure.

Because the factors are independent, compromise of one should not lead to the fall of others.

registration authority (RA)


A registration authority (RA) is an authority in a network that verifies user requests for a
digital certificate and tells the certificate authority (CA) to issue it.
RAs are part of a public key infrastructure (PKI), a networked system that enables companies
and users to exchange information and money safely and securely.

The digital certificate contains a public key that is used to encrypt and decrypt messages
and digital signatures.

digital certificate
A digital certificate is an electronic "passport" that allows a person, computer or organization
to exchange information securely over the Internet using the public key infrastructure (PKI).

A digital certificate may also be referred to as a public key certificate.

Just like a passport, a digital certificate provides identifying information, is forgery resistant
and can be verified because it was issued by an official, trusted agency.

The certificate contains the name of the certificate holder, a serial number, expiration dates, a
copy of the certificate holder's public key (used for encrypting messages and digital signatures)
and the digital signature of the certificate-issuing authority (CA) so that a recipient can verify
that the certificate is real.

To provide evidence that a certificate is genuine and valid, it is digitally signed by a root
certificate belonging to a trusted certificate authority.

Operating systems and browsers maintain lists of trusted CA root certificates so they can easily
verify certificates that the CAs have issued and signed.

When PKI is deployed internally, digital certificates can be self-signed.

Many digital certificates conform to the X.509 standard.

blackhole list (blacklist)


A blackhole list, sometimes simply referred to as a blacklist, is the publication of a group of ISP
addresses known to be sources of spam, a type of e-mail more formally known as unsolicited
commercial e-mail (UCE).

The goal of a blackhole list is to provide a list of IP addresses that a network can use
to filter out undesirable traffic.

After filtering, traffic coming or going to an IP address on the list simply disappears, as if it
were swallowed by an astronomical black hole.

The Mail Abuse Prevention System (MAPS) Real-time Blackhole List (RBL), which has over
3000 entries, is one of the most popular blackhole lists.

Begun as a personal project by Paul Vixie, it is used by hundreds of servers around the world.
Other popular blackhole lists include the Relay Spam Stopper and the Dialup User List.

intranet
An intranet is a private network that is contained within an enterprise. It may consist of many
interlinked local area networks and also use leased lines in the wide area network.

Typically, an intranet includes connections through one or more gateway computers to the
outside Internet.
The main purpose of an intranet is to share company information and computing resources
among employees.

An intranet can also be used to facilitate working in groups and for teleconferences.

An intranet uses TCP/IP, HTTP, and other Internet protocols and in general looks like a private
version of the Internet.

With tunneling, companies can send private messages through the public network, using the
public network with special encryption/decryption and other security safeguards to connect one
part of their intranet to another.

Typically, larger enterprises allow users within their intranet to access the public Internet
through firewall servers that have the ability to screen messages in both directions so that
company security is maintained.

When part of an intranet is made accessible to customers, partners, suppliers, or others
outside the company, that part becomes part of an extranet.

extranet
An extranet is a private network that uses Internet technology and the public
telecommunication system to securely share part of a business's information or operations with
suppliers, vendors, partners, customers, or other businesses.

An extranet can be viewed as part of a company's intranet that is extended to users outside
the company.

It has also been described as a "state of mind" in which the Internet is perceived as a way to
do business with other companies as well as to sell products to customers.

An extranet requires security and privacy.

These can include firewall server management, the issuance and use of digital certificates or
similar means of user authentication, encryption of messages, and the use of virtual private
networks (VPNs) that tunnel through the public network.

Companies can use an extranet to:

Exchange large volumes of data using Electronic Data Interchange (EDI)

Share product catalogs exclusively with wholesalers or those "in the trade"

Collaborate with other companies on joint development efforts

Jointly develop and use training programs with other companies

Provide or access services provided by one company to a group of other companies, such as an
online banking application managed by one company on behalf of affiliated banks

Share news of common interest exclusively with partner companies

RSA algorithm (Rivest-Shamir-Adleman)


RSA is a cryptosystem for public-key encryption, and is widely used for securing sensitive data,
particularly when being sent over an insecure network such as the Internet.

RSA was first described in 1977 by Ron Rivest, Adi Shamir and Leonard Adleman of the
Massachusetts Institute of Technology.
Public-key cryptography, also known as asymmetric cryptography, uses two different but
mathematically linked keys, one public and one private.

The public key can be shared with everyone, whereas the private key must be kept secret.

In RSA cryptography, both the public and the private keys can encrypt a message; the
opposite key from the one used to encrypt a message is used to decrypt it.

This attribute is one reason why RSA has become the most widely used asymmetric algorithm:
It provides a method of assuring the confidentiality, integrity, authenticity and non-repudiation
of electronic communications and data storage.

Many protocols like SSH, OpenPGP, S/MIME, and SSL/TLS rely on RSA for encryption
and digital signature functions.

It is also used in software programs -- browsers are an obvious example -- that need to
establish a secure connection over an insecure network like the Internet or validate a digital
signature.

RSA signature verification is one of the most commonly performed operations in IT.

Explaining RSA's popularity


RSA derives its security from the difficulty of factoring large integers that are the product of
two large prime numbers.

Multiplying these two numbers is easy, but determining the original prime numbers from the
total -- factoring -- is considered infeasible due to the time it would take even using today's
supercomputers.

The public and the private key-generation algorithm is the most complex part of RSA
cryptography.

Two large prime numbers, p and q, are generated using the Rabin-Miller primality test
algorithm.

A modulus n is calculated by multiplying p and q.

This number is used by both the public and private keys and provides the link between them.

Its length, usually expressed in bits, is called the key length.

The public key consists of the modulus n, and a public exponent, e, which is normally set at
65537, as it's a prime number that is not too large.

The e figure doesn't have to be a secretly selected prime number as the public key is shared
with everyone.
The private key consists of the modulus n and the private exponent d, which is calculated
using the Extended Euclidean algorithm to find the multiplicative inverse with respect to the
totient of n.

A simple, worked example

Alice generates her RSA keys by selecting two primes: p=11 and q=13.

The modulus n = p x q = 143.

The totient of n, φ(n) = (p - 1) x (q - 1) = 120.

She chooses 7 for her RSA public key e and calculates her RSA private key d using the Extended
Euclidean Algorithm, which gives her 103.

Bob wants to send Alice an encrypted message M, so he obtains her RSA public key (n, e), which
in this example is (143, 7). His plaintext message is just the number 9 and is encrypted into
ciphertext C as follows:

M^e mod n = 9^7 mod 143 = 48 = C

When Alice receives Bob's message, she decrypts it by using her RSA private key (d, n) as
follows:

C^d mod n = 48^103 mod 143 = 9 = M
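
The same toy numbers can be checked with a few lines of Python (the three-argument pow() does the modular arithmetic, and pow(e, -1, phi) computes the modular inverse on Python 3.8+):

p, q = 11, 13
n = p * q                      # 143
phi = (p - 1) * (q - 1)        # 120
e = 7
d = pow(e, -1, phi)            # 103, Alice's private exponent

M = 9                          # Bob's plaintext
C = pow(M, e, n)               # 9**7 mod 143 = 48
assert C == 48
assert pow(C, d, n) == M       # Alice recovers 9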

To use RSA keys to digitally sign a message, Alice would create a hash or message digest of
her message to Bob, encrypt the hash value with her RSA private key and add it to the
message.

Bob can then verify that the message has been sent by Alice and has not been altered by
decrypting the hash value with her public key.

If this value matches the hash of the original message, then only Alice could have sent it
(authentication and non-repudiation) and the message is exactly as she wrote it (integrity).
Alice could, of course, encrypt her message with Bob's RSA public key (confidentiality) before
sending it to Bob.

A digital certificate contains information that identifies the certificate's owner and also contains
the owner's public key.

Certificates are signed by the certificate authority that issues them, and can simplify the
process of obtaining public keys and verifying the owner.

Security of RSA
As discussed, the security of RSA relies on the computational difficulty of factoring large
integers.

As computing power increases and more efficient factoring algorithms are discovered, the
ability to factor larger and larger numbers also increases.

Encryption strength is directly tied to key size, and doubling key length delivers an exponential
increase in strength, although it does impair performance.

RSA keys are typically 1024- or 2048-bits long, but experts believe that 1024-bit keys could be
broken in the near future, which is why government and industry are moving to a minimum
key length of 2048-bits.

Barring an unforeseen breakthrough in quantum computing, it should be many years before
longer keys are required, but elliptic curve cryptography is gaining favor with many security
experts as an alternative to RSA for implementing public-key cryptography. It can create
faster, smaller and more efficient cryptographic keys. Much of today's hardware and software
is ECC-ready, and its popularity is likely to grow, as it can deliver equivalent security with lower
computing power and battery resource usage, making it more suitable for mobile apps than
RSA.

Finally, a team of researchers, which included Adi Shamir, a co-inventor of RSA, has
successfully extracted a 4096-bit RSA key using acoustic cryptanalysis; however, any
encryption algorithm is vulnerable to this type of attack.

The inventors of the RSA algorithm founded RSA Data Security in 1983. The company was
later acquired by Security Dynamics, which was in turn purchased by EMC Corporation in 2006.
The RSA algorithm was released to the public domain by RSA Security in 2000.

How RSA & PKI works and the math behind it.

https://youtu.be/Jt5EDBOcZ44

public key
In cryptography, a public key is a value provided by a designated authority as
an encryption key. A system for using public keys is called a public key infrastructure (PKI).

Asymmetric cryptography, also known as public key encryption, uses two different but
mathematically linked keys.

The complexity and length of the private key determine how feasible it is for an interloper to
carry out a brute force attack and try out different keys until the right one is found.

The challenge for this system is that significant computing resources are required to create
long, strong private keys.
Secret-key ciphers generally fall into one of two categories: stream ciphers or block ciphers.
A block cipher applies a private key and algorithm to a block of data simultaneously, whereas a
stream cipher applies the key and algorithm one bit at a time.

Symmetric-key encryption is much faster computationally than asymmetric encryption but
requires a key exchange.

secret key algorithm (symmetric algorithm)


A secret key algorithm (sometimes called a symmetric algorithm) is a
cryptographic algorithm that uses the same key to encrypt and decrypt data.

The best known secret key algorithm is the Data Encryption Standard (DES).

DES, which was developed at IBM and adopted as a U.S. federal standard in 1977, was thought
to be so difficult to break that the U.S. government restricted its exportation.

A very simple example of how a secret key algorithm might work might be substituting the
letter in the alphabet prior to the target letter for each one in a message.

The resulting text - "gdkkn," for example - would make no sense to someone who didn't know
the algorithm used (x-1), but would be easily understood by the parties involved in the
exchange as "hello."
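
A sketch of that toy substitution in Python (the function name is illustrative):

def shift_back_one(text):
    """Replace each letter with the one before it in the alphabet ('a' wraps to 'z')."""
    return "".join(
        chr((ord(c) - ord("a") - 1) % 26 + ord("a")) if c.isalpha() else c
        for c in text.lower()
    )

print(shift_back_one("hello"))   # gdkkn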

The problem with secret or symmetric keys is how to securely get the secret keys to each end
of the exchange and keep them secure after that.

For this reason, an asymmetric key system is now often used that is known as the public key
infrastructure (PKI).
asymmetric cryptography (public key cryptography)

Asymmetric cryptography, also known as public key cryptography, uses public and private
keys to encrypt and decrypt data.

The keys are simply large numbers that have been paired together but are not identical
(asymmetric).

One key in the pair can be shared with everyone; it is called the public key.

The other key in the pair is kept secret; it is called the private key.

Either of the keys can be used to encrypt a message; the opposite key from the one used to
encrypt the message is used for decryption.

Many protocols like SSH, OpenPGP, S/MIME, and SSL/TLS rely on asymmetric cryptography for
encryption and digital signature functions.

It is also used in software programs, such as browsers, which need to establish a secure
connection over an insecure network like the internet or need to validate a digital signature.

Encryption strength is directly tied to key size and doubling key length delivers an exponential
increase in strength, although it does impair performance.

As computing power increases and more efficient factoring algorithms are discovered, the
ability to factor larger and larger numbers also increases.

For asymmetric encryption to deliver confidentiality, integrity, authenticity and
non-repudiability, users and systems need to be certain that a public key is authentic, that it
belongs to the person or entity claimed and that it has not been tampered with or replaced by
a malicious third party.

There is no perfect solution to this public key authentication problem.

A public key infrastructure (PKI), where trusted certificate authorities certify ownership of key
pairs and certificates, is the most common approach, but encryption products based on
the Pretty Good Privacy (PGP) model (including OpenPGP), rely on a decentralized
authentication model called a web of trust, which relies on individual endorsements of the link
between user and public key.

Whitfield Diffie and Martin Hellman, researchers at Stanford University, first publicly proposed
asymmetric encryption in their 1976 paper, "New Directions in Cryptography." The concept had
been independently and covertly proposed by James Ellis several years before, while working
for the Government Communications Headquarters (GCHQ), the British intelligence and
security organization.

The asymmetric algorithm as outlined in the Diffie-Hellman paper uses numbers raised to
specific powers to produce decryption keys.
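
A minimal sketch of that exchange in Python, with toy parameters (a small Mersenne prime stands in for the large, standardized primes real systems use; the names are illustrative):

import secrets

p = 2 ** 127 - 1                   # toy prime, for illustration only
g = 3

a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent

A = pow(g, a, p)                   # Alice sends A over the open channel
B = pow(g, b, p)                   # Bob sends B over the open channel

# Each side combines its own secret with the other's public value and
# arrives at the same shared secret, without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)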

RSA (Rivest-Shamir-Adleman), the most widely used asymmetric algorithm, is embedded in
the SSL/TLS protocol, which is used to provide communications security over a computer
network.

RSA derives its security from the computational difficulty of factoring large integers that are
the product of two large prime numbers. Multiplying two large primes is easy, but the difficulty
of determining the original numbers from the total -- factoring -- forms the basis of public key
cryptography security.

The time it takes to factor the product of two sufficiently large primes is considered to be
beyond the capabilities of most attackers, excluding nation state actors who may have access
to sufficient computing power.

RSA keys are typically 1024- or 2048-bits long, but experts believe that 1024-bit keys could be
broken in the near future, which is why government and industry are moving to a minimum
key length of 2048-bits.

Elliptic Curve Cryptography (ECC) is gaining favor with many security experts as an alternative
to RSA for implementing public-key cryptography. ECC is a public key encryption technique
based on elliptic curve theory that can create faster, smaller, and more efficient cryptographic
keys.

ECC generates keys through the properties of the elliptic curve equation.

To break ECC, one must compute an elliptic curve discrete logarithm, and it turns out that this
is a significantly more difficult problem than factoring.

As a result, ECC key sizes can be significantly smaller than those required by RSA yet deliver
equivalent security with lower computing power and battery resource usage making it more
suitable for mobile applications than RSA.

Digital signatures and asymmetric cryptography


Digital signatures are based on asymmetric cryptography and can provide evidence of the
origin, identity and status of an electronic document, transaction or message, as well as
acknowledging informed consent by the signer.

To create a digital signature, signing software (such as an email program) creates a one-
way hash of the electronic data to be signed.
The user's private key is then used to encrypt the hash, returning a value that is unique to the
hashed data.

The encrypted hash, along with other information such as the hashing algorithm, forms the
digital signature.
Any change in the data, even to a single bit, results in a different hash value.

This attribute enables others to validate the integrity of the data by using the signer's public
key to decrypt the hash.

If the decrypted hash matches a second computed hash of the same data, it proves that the
data hasn't changed since it was signed.
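
A sketch of that sign-and-verify flow using the third-party Python cryptography package (assumed to be installed; modern schemes such as RSA-PSS combine the hash and the private-key operation internally rather than literally "encrypting the hash"):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I agree to the terms."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature has been altered.
public_key.verify(signature, message, pss, hashes.SHA256())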

If the two hashes don't match, the data has either been tampered with in some way (indicating
a failure of integrity) or the signature was created with a private key that doesn't correspond
to the public key presented by the signer (indicating a failure of authentication).

A digital signature also makes it difficult for the signing party to deny having signed something
(the property of non-repudiation).

If a signing party denies a valid digital signature, their private key has either been
compromised, or they are being untruthful.

In many countries, including the United States, digital signatures have the same legal weight
as more traditional forms of signatures.

Secure Shell (SSH)


SSH, also known as Secure Socket Shell, is a network protocol that provides administrators
with a secure way to access a remote computer.

SSH also refers to the suite of utilities that implement the protocol.

Secure Shell provides strong authentication and secure, encrypted data communications
between two computers connecting over an insecure network such as the Internet.

SSH is widely used by network administrators for managing systems and applications
remotely, allowing them to log in to another computer over a network, execute commands and
move files from one computer to another.

SSH uses the client-server model, connecting a Secure Shell client application, the end at
which the session is displayed, with an SSH server, the end at which the session runs.

Apart from Microsoft Windows, SSH software is included by default on most operating systems.
SSH also supports tunneling, forwarding arbitrary TCP ports and X11 connections while file
transfer can be accomplished using the associated secure file transfer or secure copy (SCP)
protocols.

An SSH server, by default, listens on the standard TCP port 22.

The SSH suite comprises three utilities -- slogin, ssh and scp -- that are secure versions of the
earlier insecure UNIX utilities, rlogin, rsh, and rcp.
SSH uses public-key cryptography to authenticate the remote computer and allow the remote
computer to authenticate the user, if necessary.
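
As a rough illustration of remote command execution over SSH, the sketch below uses the third-party Python library paramiko; the host name, user name and key path are placeholders, and a production script would also manage host-key verification more strictly.

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()          # authenticate the server against keys in known_hosts
client.connect("server.example.com", port=22,
               username="admin", key_filename="/home/admin/.ssh/id_rsa")

stdin, stdout, stderr = client.exec_command("uptime")   # run a command on the remote host
print(stdout.read().decode())
client.close()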

The first version of SSH appeared in 1995 and was designed by Tatu Ylönen, a researcher at
Helsinki University of Technology who founded SSH Communications Security.

Over time various flaws have been found in SSH-1 and it is now obsolete.

The current set of Secure Shell protocols is SSH-2 and was adopted as a standard in 2006.

It's not compatible with SSH-1 and uses a Diffie-Hellman key exchange and a stronger integrity
check that uses message authentication codes to improve security.

SSH clients and servers can use a number of encryption methods, the most widely used
being AES and Blowfish.

As yet, there are no known exploitable vulnerabilities in SSH-2, though information leaked
by Edward Snowden in 2013 suggests the National Security Agency may be able to decrypt
some SSH traffic.

Shellshock, a security hole in the Bash command processor, can be executed over SSH but is a
vulnerability in Bash, not in SSH.

In reality, the biggest threat to SSH is poor key management.

Without the proper centralized creation, rotation and removal of SSH keys, organizations can
lose control over who has access to which resources and when, particularly when SSH is used
in automated application-to-application processes.

OpenPGP
OpenPGP is an open and free version of the Pretty Good Privacy (PGP) standard that
defines encryption formats to enable private messaging for email and other kinds of
messages.

The standard uses a public key infrastructure (PKI) to create keys that are bound to
individual email addresses and combines symmetric encryption of the message data with
public-key techniques, such as RSA or elliptic curve cryptography, to protect the
symmetric key.

Compliant applications generate a random key that is encrypted with the receiver's public key.

That process creates an encrypted message that contains both the data and the encrypted
key.

The receiver decrypts the key and uses their private key to retrieve the original random key
and decrypt the data.

OpenPGP-compliant software products include Symantec Command Line, McAfee E-Business
Server, Diplomat OpenPGP Community Edition and many email clients.

OpenPGP clients must use up-to-date or matched versions so that settings and files created by
one application are compatible with another.

Only then can the applications share and mutually decrypt messages.

The OpenPGP Alliance promotes OpenPGP for other communications as well as email.
Facebook, for example, has added the capacity for users to add an OpenPGP key to
their profile so that notifications and messages are encrypted.
Security expert Bruce Schneier advises that open encryption standards are best for security
and privacy when dealing with such pervasive forces as NSA mass surveillance. He states that
open security and encryption standards are much harder for the NSA to back door -- especially
without getting caught.

The difficulty is increased when the standard is compatible with other services and used by
other vendors, because any one of them may discover the back door.

Schneier worked along with Edward Snowden and the Guardian newspaper in reporting
the whistleblower's revelations about NSA surveillance.

OpenPGP is specified in IETF (Internet Engineering Task Force) RFC 4880.

Snowden effect
The Snowden effect is the increase in public concern about information security and privacy
resulting from disclosures that Edward Snowden made detailing the extent of the National
Security Agency's (NSA) surveillance activities.

In 2013, Snowden, a former NSA contractor, leaked NSA documents that revealed the agency
was collecting data from the electronic communications of United States citizens.

Other disclosures included information about PRISM, the agency's data collection program;
its collection of surveillance metadata; and XKeyscore, which supplies federated search
capabilities for all NSA databases.

Snowden's revelations forced the NSA -- one of the nation's most secretive organizations -- to
publicly explain itself.

Since that time, there have been perceptible increases in the general public's knowledge about
the U.S. government's cybersecurity initiatives and awareness of how those initiatives have
impacted the privacy of individuals, businesses and foreign governments.

The leaks also raised questions about data sovereignty and how secure a company's data
really is if it's stored by a cloud provider based in the United States.

In 2014, almost 90% of respondents to a survey commissioned by security consultancy NTT
Communications said they were changing their cloud-buying behavior as a result of Snowden's
revelations.

Just over half said they are carrying out greater due diligence on cloud providers than ever
before, and more than four fifths responded that they would seek out more training on data
protection laws.

Studies have been conducted to quantify some of the effects as indicated by changes since the
Snowden revelations.

An Internet poll conducted by the Center for International Governance Innovation showed
across 24 countries that, overall, 60 percent of respondents were aware of Snowden; in many
developed countries the numbers were higher.

Germany came in highest at 94 percent.

Sweden, China, Brazil, and Hong Kong percentages were in the mid-to-low eighties.

The Canadian number at 64 percent, along with the U.S.'s 76 percent, suggests more media
coverage of the events in developed nations outside North America.
Behavior changes were reflected in the survey with 43 percent claiming they were more careful
about sites they accessed; 39 percent reported changing passwords more often than they had
before Snowden's revelations.

In February 2015, Snowden's revelations came to public attention again when Laura Poitras' film
about Snowden, Citizenfour, won the Academy Award for best documentary.

In May 2015, a U.S. Court of Appeals ruled that the NSA's mass telephone surveillance was
illegal.

S/MIME (Secure Multi-Purpose Internet Mail Extensions)


S/MIME (Secure Multi-Purpose Internet Mail Extensions) is a secure method of sending e-mail
that uses the Rivest-Shamir-Adleman encryption system.

S/MIME is included in the latest versions of the Web browsers from Microsoft and Netscape and
has also been endorsed by other vendors that make messaging products.

RSA has proposed S/MIME as a standard to the Internet Engineering Task Force (IETF). An
alternative to S/MIME is PGP/MIME, which has also been proposed as a standard.

MIME itself, described in the IETF standard called Request for Comments 1521, spells out how
an electronic message will be organized. S/MIME describes how encryption information and a
digital certificate can be included as part of the message body.

S/MIME follows the syntax provided in the Public-Key Cryptography Standards (PKCS) #7 format.

digital signature
A digital signature (not to be confused with a digital certificate) is a mathematical technique
used to validate the authenticity and integrity of a message, software or digital document.

The digital equivalent of a handwritten signature or stamped seal, but offering far more
inherent security, a digital signature is intended to solve the problem of tampering and
impersonation in digital communications.

Digital signatures can provide the added assurances of evidence to origin, identity and status
of an electronic document, transaction or message, as well as acknowledging informed consent
by the signer.

In many countries, including the United States, digital signatures have the same legal
significance as the more traditional forms of signed documents.

The United States Government Printing Office publishes electronic versions of the budget,
public and private laws, and congressional bills with digital signatures.

How digital signatures work


Digital signatures are based on public key cryptography, also known as asymmetric
cryptography.

Using a public key algorithm such as RSA, one can generate two keys that are mathematically
linked: one private and one public.

To create a digital signature, signing software (such as an email program) creates a one-way
hash of the electronic data to be signed.

The private key is then used to encrypt the hash.

The encrypted hash -- along with other information, such as the hashing algorithm -- is the
digital signature.
The reason for encrypting the hash instead of the entire message or document is that a hash
function can convert an arbitrary input into a fixed length value, which is usually much shorter.

This saves time since hashing is much faster than signing.

The value of the hash is unique to the hashed data.

Any change in the data, even changing or deleting a single character, results in a different
value.

This attribute enables others to validate the integrity of the data by using the signer's public
key to decrypt the hash.

If the decrypted hash matches a second computed hash of the same data, it proves that the
data hasn't changed since it was signed.

If the two hashes don't match, the data has either been tampered with in some way (integrity)
or the signature was created with a private key that doesn't correspond to the public key
presented by the signer (authentication).
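
The following toy Python sketch (illustrative only) walks through the same sign-and-verify flow, using the small textbook RSA key n = 3233, e = 17, d = 2753 and a SHA-256 digest reduced so it fits under that toy modulus; real signatures use much larger keys and standardized padding.

import hashlib

n, e, d = 3233, 17, 2753                      # toy public modulus/exponent and private exponent

def digest(data):
    # One-way hash of the data, reduced modulo n so it fits under the toy modulus.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data):
    return pow(digest(data), d, n)            # "encrypt" the hash with the private key

def verify(data, signature):
    return pow(signature, e, n) == digest(data)   # decrypt with the public key and compare

document = b"Pay Bob 100 dollars"
signature = sign(document)

print(verify(document, signature))                  # True: the data has not changed since signing
print(verify(b"Pay Bob 1000 dollars", signature))   # False: the altered data hashes differently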

A digital signature can be used with any kind of message -- whether it is encrypted or not --
simply so the receiver can be sure of the sender's identity and that the message arrived intact.

Digital signatures make it difficult for the signer to deny having signed something (non-
repudiation) -- assuming their private key has not been compromised -- as the digital
signature is unique to both the document and the signer, and it binds them together.

A digital certificate, an electronic document that contains the digital signature of the
certificate-issuing authority, binds together a public key with an identity and can be used to
verify a public key belongs to a particular person or entity.

Most modern email programs support the use of digital signatures and digital certificates,
making it easy to sign any outgoing emails and validate digitally signed incoming messages.

Digital signatures are also used extensively to provide proof of authenticity, data integrity and
non-repudiation of communications and transactions conducted over the Internet.

exponential function
An exponential function is a mathematical function of the following form:

f(x) = a^x

where x is a variable, and a is a constant called the base of the function.

The most commonly encountered exponential-function base is the transcendental number e,
which is equal to approximately 2.71828. Thus, the above expression becomes:

f(x) = e^x

When the exponent in this function increases by 1, the value of the function increases by a
factor of e.

When the exponent decreases by 1, the value of the function decreases by this same factor (it
is divided by e).
In electronics and experimental science, base-10 exponential functions are encountered. The
general form is:

f(x) = 10^x

When the exponent increases by 1, the value of the base-10 function increases by a factor of
10; when the exponent decreases by 1, the value of the function becomes 1/10 as great.

A change of this extent is called one order of magnitude.

For a given, constant base such as e or 10, the exponential function "undoes"
the logarithm function, and the logarithm undoes the exponential.

Thus, these functions are inverses of each other.

For example, if the base is 10 and x = 3:

log(10^x) = log(10^3) = log 1000 = 3

If the base is 10 and x = 1000:

10^(log x) = 10^(log 1000) = 10^3 = 1000
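
A quick check of this inverse relationship in Python (math.log10 is the base-10 logarithm; floating-point arithmetic may introduce tiny rounding differences):

import math

print(math.log10(10 ** 3))       # 3.0
print(10 ** math.log10(1000))    # 1000.0 (up to floating-point rounding)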

confidentiality
Confidentiality is a set of rules or a promise that limits access or places restrictions on certain
types of information.

integrity
Integrity, in terms of data and network security, is the assurance that information can only be
accessed or modified by those authorized to do so.

Measures taken to ensure integrity include controlling the physical environment of networked
terminals and servers, restricting access to data, and maintaining
rigorous authentication practices. Data integrity can also be threatened by environmental
hazards, such as heat, dust, and electrical surges.

Practices followed to protect data integrity in the physical environment include: making servers
accessible only to network administrators, keeping transmission media (such as cables and
connectors) covered and protected to ensure that they cannot be tapped, and protecting
hardware and storage media from power surges, electrostatic discharges, and magnetism.

Network administration measures to ensure data integrity include: maintaining current
authorization levels for all users, documenting system administration procedures, parameters,
and maintenance activities, and creating disaster recovery plans for occurrences such as power
outages, server failure, and virus attacks.

authentication
Authentication is the process of determining whether someone or something is, in fact, who or
what it is declared to be.

Logically, authentication precedes authorization (although they may often seem to be
combined).

The two terms are often used synonymously but they are two different processes.

Authentication vs. authorization


Authentication is a process in which the credentials provided are compared to those on file in a
database of authorized users' information, either on a local operating system or within
an authentication server.

If the credentials match, the process is completed and the user is granted authorization for
access.

The permissions and folders returned define both the environment the user sees and the way
he can interact with it, including hours of access and other rights such as the amount
of allocated storage space.

The process of an administrator granting rights and the process of checking user account
permissions for access to resources are both referred to as authorization.

The privileges and preferences granted for the authorized account depend on the user's
permissions, which are either stored locally or on the authentication server.

The settings defined for all these environment variables are set by an administrator.

User authentication vs. machine authentication


User authentication occurs within most human-to-computer interactions other than guest
accounts, automatically logged-in accounts and kiosk computer systems.

Generally, a user has to enter or choose an ID and provide their password to begin using a
system.

User authentication authorizes human-to-machine interactions in operating systems and
applications as well as both wired and wireless networks to enable access to networked and
Internet-connected systems, applications and resources.

Machines need to authenticate their automated actions within a network too.

Online backup services, patching and updating systems, and remote monitoring systems, such as
those used in telemedicine and smart grid technologies, all need to authenticate securely to
verify that the authorized system -- and not a hacker -- is involved in any interaction.

Machine authentication can be carried out with machine credentials, much like a user's ID and
password, only submitted by the device in question.

They can also use digital certificates issued and verified by a Certificate Authority (CA) as part
of a public key infrastructure to prove identification while exchanging information over the
Internet, like a type of digital password.

The importance of strong machine authentication


With the increasing number of Internet-enabled devices, reliable machine authentication is
crucial to allow secure communication in home automation and other networked environments.
In the Internet of things scenario, which is increasingly becoming a reality, almost any
imaginable entity or object may be made addressable and able to exchange data over a
network.

It is important to realize that each access point is a potential intrusion point.

Each networked device needs strong machine authentication and also, despite their normally
limited activity, these devices must be configured for limited permissions access as well, to
limit what can be done even if they are breached.
Password-based authentication
In private and public computer networks (including the Internet), authentication is commonly
done through the use of login IDs (user names) and passwords. Knowledge of the login
credentials is assumed to guarantee that the user is authentic.

Each user registers initially (or is registered by someone else, such as a systems
administrator), using an assigned or self-declared password. On each subsequent use, the user
must know and use the previously declared password.
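
As a minimal sketch of how a system can check a declared password without storing the password itself, the Python fragment below keeps only a salted, slow hash on file; the field names, salt length and iteration count are illustrative choices, not a prescription.

import hashlib, hmac, os

def register(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return {"salt": salt, "digest": digest}       # what the server keeps on file

def verify(record, attempt):
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), record["salt"], 100000)
    return hmac.compare_digest(candidate, record["digest"])

record = register("correct horse battery staple")
print(verify(record, "correct horse battery staple"))   # True
print(verify(record, "password123"))                    # False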

However, password-based authentication is not considered to provide adequately strong
security for any system that contains sensitive data.

The problem with password-based authentication:


User names are frequently a combination of the individual's first initial and last name, which
makes them easy to guess.

If constraints are not imposed, people often create weak passwords -- and even strong
passwords may be stolen, accidentally revealed or forgotten. For this reason, Internet business
and many other transactions require a more stringent authentication process.

Password-based authentication weaknesses can be addressed to some extent with smarter
user names and password rules, such as minimum length and stipulations for complexity, like
including capitals and symbols.

However, password-based authentication and knowledge-based authentication (KBA) are more
vulnerable than systems that require multiple independent methods.

An authentication factor is a category of credential used for identity verification. The three
most common categories are often described as something you know (the knowledge factor),
something you have (the possession factor) and something you are (the inherence factor).

Authentication factors:
Knowledge factors -- a category of authentication credentials consisting of information that the
user possesses, such as a personal identification number (PIN), a user name, a password or
the answer to a secret question.

Possession factors -- a category of credentials based on items that the user has with them,
typically a hardware device such as a security token or a mobile phone used in conjunction
with a software token.

Inherence factors -- a category of user authentication credentials consisting of elements that
are integral to the individual in question, in the form of biometric data.

User location and current time are sometimes considered the fourth factor and fifth factor for
authentication.

The ubiquity of smartphones can help ease the burdens of multifactor authentication for users.

Most smartphones are equipped with GPS, enabling reasonably sure confirmation of the login
location.

Lower-surety measures include checking the MAC address of the login point or physical presence
verification through cards and other possession-factor elements.

Strong authentication vs. multifactor authentication (MFA)


Strong authentication is a commonly used term that is largely without a standardized
definition.
For general purposes, any method of verifying the identity of a user or device that is
intrinsically stringent enough to ensure the security of the system it protects can be considered
strong authentication.

The term strong authentication is often used to refer to two-factor authentication (2FA) or
multifactor authentication (MFA). That usage probably came about because MFA is a widely
applied approach to strengthen authentication.

In cryptography, strong authentication is defined as a system involving multiple
challenge/response answers. Because such a system involves multiple instances from a single
factor (the knowledge factor), it is an example of single-factor authentication (SFA), regardless
of its strength.

Other definitions of strong authentication:

In some environments, any system in which the password is not transmitted in the verification
process is considered strong.

As defined by the European Central Bank, strong authentication is any combination of at least
two mutually independent factors of authentication, which must also include one non-reusable
element that is not easily reproduced or stolen from the Internet.

Although strong authentication is not necessarily multifactor, multifactor authentication
processes have become commonplace for system logins and transactions within systems with
high security requirements.

Two factor (2FA) and three factor authentication (3FA) are becoming common; four factor
(4FA) and even five factor (5FA) authentication systems are used in some high-security
installations.

The use of multiple factors increases security due to the unlikelihood that an attacker could
access all of the elements required for authentication.

Each additional factor increases the security of the system and decreases the likelihood that it
could be breached.

nonrepudiation
Nonrepudiation is the assurance that someone cannot deny something.

Typically, nonrepudiation refers to the ability to ensure that a party to a contract or a
communication cannot deny the authenticity of their signature on a document or the sending
of a message that they originated.

To repudiate means to deny.

For many years, authorities have sought to make repudiation impossible in some situations.
You might send registered mail, for example, so the recipient cannot deny that a letter was
delivered.

Similarly, a legal document typically requires witnesses to its signing so that the person who
signs cannot deny having done so.

On the Internet, a digital signature is used not only to ensure that a message or document has
been electronically signed by the person that purported to sign the document, but also, since a
valid digital signature can be created only with the signer's private key, to ensure that a person
cannot later deny having furnished the signature.
Since no security technology is absolutely fool-proof, some experts warn that a digital
signature alone may not always guarantee nonrepudiation. It is suggested that multiple
approaches be used, such as capturing unique biometric information and other data about the
sender or signer that collectively would be difficult to repudiate.

Email nonrepudiation involves methods such as email tracking that are designed to ensure that
the sender cannot deny having sent a message and/or that the recipient cannot deny having
received it.

public key certificate


A public key certificate is a digitally signed document that serves to validate the sender's
authorization and name.

The document consists of a specially formatted block of data that contains the name of the
certificate holder (which may be either a user or a system name) and the holder's public key,
as well as the digital signature of a certification authority for authentication.

The certification authority attests that the sender's name is the one associated with the public
key in the document. A user ID packet, containing the sender's unique identifier, is sent after
the certificate packet.
There are different types of public key certificates for different functions, such as authorization
for a specific action or delegation of authority.

Public key certificates are part of a public key infrastructure that deals with digitally signed
documents.

The other components are public key encryption, trusted third parties (such as the certification
authority), and mechanisms for certificate publication and issuing.

Pretty Good Privacy (PGP)


Pretty Good Privacy, or PGP, is a popular program used to encrypt and decrypt email over
the Internet, as well as to authenticate messages with digital signatures and to encrypt stored
files.

Previously available as freeware and now only available as a low-cost commercial version, PGP
was once the most widely used privacy-ensuring program by individuals and is also used by
many corporations. It was developed by Philip R. Zimmermann in 1991 and has become a de
facto standard for email security.

How PGP works


Pretty Good Privacy uses a variation of the public key system.

In this system, each user has an encryption key that is publicly known and a private key that
is known only to that user.

You encrypt a message you send to someone else using their public key.

When they receive it, they decrypt it using their private key.

Since encrypting an entire message can be time-consuming, PGP uses a faster
encryption algorithm to encrypt the message and then uses the public key to encrypt the
shorter key that was used to encrypt the entire message.

Both the encrypted message and the encrypted short key are sent to the receiver, who first uses
their private key to decrypt the short key and then uses that key to decrypt the message.
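
The toy Python sketch below mirrors that flow under stated simplifications: the textbook RSA numbers (n = 3233, e = 17, d = 2753) stand in for the receiver's real key pair, and a SHA-256 keystream XOR stands in for the fast symmetric cipher (IDEA, CAST or AES in real PGP); none of this is secure, it only shows the shape of hybrid encryption.

import hashlib, os

def keystream_xor(key, data):
    # Stand-in for a fast symmetric cipher: XOR the data with a keyed keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

n, e, d = 3233, 17, 2753                      # the receiver's toy RSA key pair

session_key = os.urandom(16)                  # random short key for this message
ciphertext = keystream_xor(session_key, b"meet at dawn")

# Wrap the session key with the receiver's public key (byte by byte, toy only).
wrapped_key = [pow(byte, e, n) for byte in session_key]

# Receiver: unwrap the session key with the private key, then decrypt the message.
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
print(keystream_xor(recovered_key, ciphertext))   # b'meet at dawn'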

PGP comes in two public key versions -- Rivest-Shamir-Adleman (RSA) and Diffie-Hellman.
The RSA version, for which PGP must pay a license fee to RSA, uses the IDEAalgorithm to
generate a short key for the entire message and RSA to encrypt the short key.

The Diffie-Hellman version uses the CAST algorithm for the short key to encrypt the message
and the Diffie-Hellman algorithm to encrypt the short key.

When sending digital signatures, PGP uses an efficient algorithm that generates a hash (a
mathematical summary) of the message and other signature information.

This hash code is then encrypted with the sender's private key.

The receiver uses the sender's public key to decrypt the hash code.

If it matches the hash code sent as the digital signature for the message, the receiver is sure
that the message has arrived securely from the stated sender.

PGP's RSA version uses the MD5 algorithm to generate the hash code.
PGP's Diffie-Hellman version uses the SHA-1 algorithm to generate the hash code.

Getting PGP
To use Pretty Good Privacy, download or purchase it and install it on your computer system.

It typically contains a user interface that works with your customary email program.

You may also need to register the public key that your PGP program gives you with a PGP
public-key server so that people you exchange messages with will be able to find your public
key.

PGP freeware is available for older versions of Windows, Mac, DOS, Unix and other operating
systems. In 2010, Symantec Corp. acquired PGP Corp., which held the rights to the PGP code,
and soon stopped offering a freeware version of the technology.

The vendor currently offers PGP technology in a variety of its encryption products, such as
Symantec Encryption Desktop, Symantec Desktop Email Encryption and Symantec Encryption
Desktop Storage.

Symantec also makes the Symantec Encryption Desktop source code available for peer review.

Though Symantec ended PGP freeware, there are other non-proprietary versions of the
technology that are available.

OpenPGP is an open source version of PGP that's supported by the Internet Engineering Task
Force (IETF).

OpenPGP is used by several software vendors, including Coviant Software, which offers a
free tool for OpenPGP encryption, and HushMail, which offers a Web-based encrypted email
service powered by OpenPGP.

In addition, the Free Software Foundation developed GNU Privacy Guard (GPG), an
OpenPGP-compliant encryption program.

Where can you use PGP?


Pretty Good Privacy can be used to authenticate digital certificates and encrypt/decrypt texts,
emails, files, directories and whole disk partitions.

Symantec, for example, offers PGP-based products such as Symantec File Share Encryption for
encrypting files shared across a network and Symantec Endpoint Encryption for full disk
encryption on desktops, mobile devices and removable storage.
When PGP technology is used for files and drives instead of messages, the Symantec
products allow users to decrypt and re-encrypt data via a single sign-on.

Originally, the U.S. government restricted the exportation of PGP technology and even
launched a criminal investigation against Zimmermann for putting the technology in the public
domain (the investigation was later dropped).

Network Associates Inc. (NAI) acquired Zimmermann's company, PGP Inc., in 1997 and was
able to legally publish the source code (NAI later sold the PGP assets and IP to ex-PGP
developers that joined together to form PGP Corp. in 2002, which was acquired by Symantec
in 2010).

Today, PGP-encrypted email can be exchanged with users outside the U.S. if you have the
correct versions of PGP at both ends.

There are several versions of PGP in use.

Add-ons can be purchased that allow backwards compatibility for newer RSA versions with
older versions. However, the Diffie-Hellman and RSA versions of PGP do not work with each
other since they use different algorithms.

A number of technology companies have also released tools or services supporting PGP.
Google, for example, introduced an OpenPGP email encryption plug-in for Chrome,
while Yahoo also began offering PGP encryption for its email service.

Diffie-Hellman key exchange (exponential key exchange)


Diffie-Hellman key exchange, also called exponential key exchange, is a method
of digital encryption that uses numbers raised to specific powers to produce decryption keys on
the basis of components that are never directly transmitted, making the task of a would-be
code breaker mathematically overwhelming.

To implement Diffie-Hellman, the two end users, Alice and Bob, while communicating over a
channel they know to be private, mutually agree on positive whole numbers p and q, such
that p is a prime number and q is a generator of p.

The generator q is a number that, when raised to positive whole-number powers less than p,
never produces the same result for any two such whole numbers.

The value of p may be large but the value of q is usually small.


Once Alice and Bob have agreed on p and q in private, they choose positive whole-number
personal keys a and b, both less than the prime-number modulus p.

Neither user divulges their personal key to anyone; ideally they memorize these numbers and
do not write them down or store them anywhere.

Next, Alice and Bob compute public keys a* and b* based on their personal keys according to
the formulas

a* = q^a mod p

and

b* = q^b mod p

The two users can share their public keys a* and b* over a communications medium assumed
to be insecure, such as the Internet or a corporate wide area network (WAN).

From these public keys, a number x can be generated by either user on the basis of their own
personal keys.

Alice computes x using the formula

x = (b*)^a mod p

Bob computes x using the formula

x = (a*)^b mod p

The value of x turns out to be the same according to either of the above two formulas.

However, the personal keys a and b, which are critical in the calculation of x, have not been
transmitted over a public medium.

Because it is a large and apparently random number, a potential hacker has almost no chance
of correctly guessing x, even with the help of a powerful computer to conduct millions of trials.

The two users can therefore, in theory, communicate privately over a public medium with an
encryption method of their choice using the decryption key x.
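
A toy run of the exchange in Python, using deliberately tiny numbers (p = 23, q = 5); real deployments use primes that are hundreds of digits long:

p, q = 23, 5                   # public prime modulus and generator

a, b = 6, 15                   # Alice's and Bob's personal (private) keys

a_star = pow(q, a, p)          # Alice's public key  a* = q^a mod p  -> 8
b_star = pow(q, b, p)          # Bob's public key    b* = q^b mod p  -> 19

x_alice = pow(b_star, a, p)    # Alice computes x = (b*)^a mod p
x_bob = pow(a_star, b, p)      # Bob computes   x = (a*)^b mod p

print(x_alice, x_bob)          # 2 2 -- both sides arrive at the same shared secret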

The most serious limitation of Diffie-Hellman in its basic or "pure" form is the lack of
authentication.

Communications using Diffie-Hellman all by itself are vulnerable to man in the middle attacks.
Ideally, Diffie-Hellman should be used in conjunction with a recognized authentication method
such as digital signatures to verify the identities of the users over the public communications
medium.

Diffie-Hellman is well suited for use in data communication but is less often used for data
stored or archived over long periods of time.

Secure Sockets Layer (SSL)


The Secure Sockets Layer (SSL) is a computer networking protocol that manages
server authentication, client authentication and encrypted communication between
servers and clients.

SSL uses a combination of public-key and symmetric-key encryption to secure a connection
between two machines, typically a Web or mail server and a client machine, communicating
over the Internet or an internal network.

Using the OSI reference model as context, SSL runs above the TCP/IP protocol, which is
responsible for the transport and routing of data over a network, and below higher-level
protocols such as HTTP and IMAP, encrypting the data of network connections in
the application layer of the Internet Protocol suite.

The "sockets" part of the term refers to the sockets method of passing data back and forth
between a client and a server program in a network, or between program layers in the same
computer.
The Transport Layer Security (TLS) protocol evolved from SSL and has largely superseded it,
although the terms SSL or SSL/TLS are still commonly used; SSL is often used to refer to what
is actually TLS.

The combination of SSL/TLS is the most widely deployed security protocol used today and is
found in applications such as Web browsers, email and basically any situation where data
needs to be securely exchanged over a network, like file transfers, VPN connections, instant
messaging and voice over IP.

How it works
The SSL protocol includes two sub-protocols: the record protocol and the "handshake"
protocol.

These protocols allow a client to authenticate a server and establish an encrypted SSL
connection.

In what's referred to as the "initial handshake process," a server that supports SSL presents
its digital certificate to the client to authenticate the server's identity.

Server certificates follow the X.509 certificate format that is defined by the Public-Key
Cryptography Standards (PKCS).

The authentication process uses public-key encryption to validate the digital certificate and
confirm that a server is in fact the server it claims to be.

Once the server has been authenticated, the client and server establish cipher settings and a
shared key to encrypt the information they exchange during the remainder of the session.

This provides data confidentiality and integrity.

This whole process is invisible to the user.

For example, if a webpage requires an SSL connection, the URL will change from HTTP
to HTTPS and a padlock icon appears in the browser once the server has been authenticated.

The handshake also allows the client to authenticate itself to the server.

In this case, after server authentication is successfully completed, the client must present its
certificate to the server to authenticate the client's identity before the encrypted SSL session
can be established.
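
A minimal sketch of this from the client side, using Python's standard ssl module (the host name is a placeholder); the library carries out the handshake described above -- certificate validation, cipher negotiation and key agreement -- before any application data is sent:

import socket, ssl

context = ssl.create_default_context()              # trusted CA store and sensible defaults

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                         # negotiated protocol version, e.g. 'TLSv1.2'
        print(tls.cipher())                          # negotiated cipher suite
        print(tls.getpeercert()["subject"])          # identity taken from the server's certificate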

The history of SSL


The SSL protocol was developed by Netscape Communications in the 1990s.

The company wanted to encrypt data in transit between its flagship Netscape Navigator
browser and Web servers on the Internet to ensure that sensitive data, such as credit card
numbers, were protected.

Version 1.0 was never publicly released and version 2.0, released in February 1995, contained
a number of security flaws.

Version 3.0 involved a complete redesign and was released in 1996.

Even though it was never formally standardized -- the 1996 draft of SSL 3.0 was published
by IETF as a historical document in RFC 6101 -- it became the de facto standard for providing
communication security over the Internet.
After the IETF officially took over the SSL protocol to standardize it via an open process,
version 3.1 of SSL was released as Transport Layer Security 1.0 and introduced security
improvements to mitigate weaknesses that had been found in earlier versions.

(The name was changed to avoid any legal issues with Netscape.)

Many attacks against SSL have focused on implementation issues, but
the POODLE vulnerability is a known flaw in the SSL 3.0 protocol itself, exploiting the way in
which it ignores padding bytes when running in cipher block chaining (CBC) mode.

This flaw could allow an attacker to decrypt sensitive information such as
authentication cookies.
TLS 1.0 is not vulnerable to this attack because it specifies that all padding bytes must have
the same value and be verified.
Other key differences between SSL and TLS that make TLS a more secure and efficient
protocol are message authentication, key material generation and the supported cipher suites
with TLS supporting newer and more secure algorithms.

TLS and SSL are not interoperable, though TLS provides backwards compatibility in order to
work with legacy systems.

TLS 1.2 is the latest version.

Transport Layer Security (TLS)


Transport Layer Security (TLS) is a protocol that provides privacy and data integrity between
two communicating applications.

It's the most widely deployed security protocol used today, and is used for Web browsers and
other applications that require data to be securely exchanged over a network, such as file
transfers, VPN connections, instant messaging and voice over IP.

TLS evolved from Netscape's Secure Sockets Layer (SSL) protocol and has largely superseded
it, although the terms SSL or SSL/TLS are still sometimes used.

Key differences between SSL and TLS that make TLS a more secure and efficient protocol are
message authentication, key material generation and the supported cipher suites, with TLS
supporting newer and more secure algorithms.

TLS and SSL are not interoperable, though TLS currently provides some backward
compatibility in order to work with legacy systems.

According to the protocol specification, TLS is composed of two layers:

the TLS Record Protocol and the TLS Handshake Protocol.

The Record Protocol provides connection security, while the Handshake Protocol allows
the server and client to authenticate each other and to
negotiate encryption algorithms and cryptographic keys before any data is exchanged.

Implementation flaws have always been a big problem with any encryption technology, and
TLS is no exception.

The infamous Heartbleed bug was the result of a surprisingly small bug in a piece of logic that
relates to OpenSSL's implementation of the TLS heartbeat mechanism, which is designed to
keep connections alive even when no data is being transmitted.
Although TLS is not vulnerable to the POODLE attack, because it specifies that all padding
bytes must have the same value and be verified, a variant of the attack has exploited certain
implementations of the TLS protocol that don't correctly validate encryption padding.

This makes some systems vulnerable to POODLE, even if they disable SSL -- one of the
recommended techniques for countering a POODLE attack.
The IETF officially took over the SSL protocol to standardize it with an open process and
released version 3.1 of SSL in 1999 as TLS 1.0.

The protocol was renamed TLS to avoid legal issues with Netscape, which developed the SSL
protocol as a key feature part of its original Web browser.

TLS 1.2 is the current version of the protocol, and as of this writing, the Transport Layer
Security Working Group of the IETF is working on TLS 1.3 to address the vulnerabilities that
have been exposed over the past few years, reduce the chance of implementation errors and
remove features no longer needed.

TLS 1.3 is expected to be based on the earlier TLS 1.1 and 1.2 specifications, but without
unnecessary options and functions, such as support for compression and non-AEAD
(Authenticated Encryption with Associated Data) ciphers.

It may not support SSL 3.0, either.


The IETF has also decided to move away from RSA-based key transport in favor of protocols
that support perfect forward secrecy and are easier to analyze.

RSA certificates will still be allowed, but key establishment will use standard Diffie-Hellman
or elliptic curve Diffie-Hellman key exchange.

Support for HMAC-SHA256 cipher suites has been added, while the IDEA and Data Encryption
Standard (DES) cipher suites have been deprecated.

The IETF's Using TLS in Applications (UTA) working group plans to offer common guidelines
and best practices for using TLS in applications, such as the use of the latest cryptographic
algorithms and eliminating the use of older TLS/SSL versions, as well as guidance on how
certain applications should use the encryption protocol.

TLS 1.3 is still a draft and has not been finalized yet, but having an updated protocol that's
faster, more secure and easier to implement is essential to ensure the privacy and security of
information exchange and maintain trust in the Internet as a whole.

elliptical curve cryptography (ECC)


Elliptical curve cryptography (ECC) is a public key encryption technique based on elliptic curve
theory that can be used to create faster, smaller, and more efficient cryptographic keys.

ECC generates keys through the properties of the elliptic curve equation instead of the
traditional method of generation as the product of very large prime numbers.

The technology can be used in conjunction with most public key encryption methods, such
as RSA and Diffie-Hellman.

According to some researchers, ECC can yield a level of security with a 164-bit key that other
systems require a 1,024-bit key to achieve.

Because ECC helps to establish equivalent security with lower computing power and battery
resource usage, it is becoming widely used for mobile applications.

ECC technology was commercially developed by Certicom, a mobile e-business security provider,
and has been licensed by Hifn, a manufacturer of integrated circuitry (IC) and network security
products. RSA has been developing its own version of ECC.

Many manufacturers, including 3COM, Cylink, Motorola, Pitney Bowes, Siemens, TRW, and
VeriFone have included support for ECC in their products.

The properties and functions of elliptic curves have been studied in mathematics for 150 years.

Their use within cryptography was first proposed in 1985, (separately) by Neal Koblitz from the
University of Washington, and Victor Miller at IBM.

An elliptic curve is not an ellipse (oval shape); it is represented as a looping line intersecting
two axes (lines on a graph used to indicate the position of a point).

ECC is based on properties of a particular type of equation created from the mathematical
group (a set of values for which operations can be performed on any two members of the
group to produce a third member) derived from points where the line intersects the axes.

Multiplying a point on the curve by a number will produce another point on the curve, but it is
very difficult to find what number was used, even if you know the original point and the result.

Equations based on elliptic curves have a characteristic that is very valuable for cryptography
purposes: they are relatively easy to perform and extremely difficult to reverse.
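
A toy sketch of this in Python over a very small curve (y^2 = x^3 + 2x + 2 over the integers mod 17, a common textbook example; Python 3.8+ is assumed for the modular inverse): multiplying the base point G by a secret whole number is straightforward, while recovering that number from the resulting point is the elliptic curve discrete logarithm problem.

p, a, b = 17, 2, 2             # curve y^2 = x^3 + 2x + 2 over the field of integers mod 17
G = (5, 1)                     # a base point on the curve

def add(P, Q):
    # Add two curve points; None represents the point at infinity (the group identity).
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p     # slope of the tangent
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p            # slope of the chord
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def multiply(k, P):
    # Compute k*P by repeated addition (real systems use double-and-add).
    R = None
    for _ in range(k):
        R = add(R, P)
    return R

print(multiply(2, G))          # (6, 3)
print(multiply(9, G))          # another curve point; recovering the 9 from it is the hard problem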


The industry still has some reservations about the use of elliptic curves.

Nigel Smart, a Hewlett Packard researcher, discovered a flaw in which certain curves are
extremely vulnerable.

However, Philip Deck of Certicom says that, while there are curves that are vulnerable, those
implementing ECC would have to know which curves could not be used.

He believes that ECC offers a unique potential as a technology that could be implemented
worldwide and across all devices.

According to Deck (quoted in Wired), "the only way you can achieve that is with elliptic curve."

hashing
Hashing is the transformation of a string of characters into a usually shorter fixed-length value
or key that represents the original string.

Hashing is used to index and retrieve items in a database because it is faster to find the item
using the shorter hashed key than to find it using the original value.

It is also used in many encryption algorithms.

As a simple example of the use of hashing in databases, a group of people could be arranged
in a database like this:

Abernathy, Sara
Epperdingle, Roscoe
Moore, Wilfred
Smith, David
(and many more, sorted into alphabetical order)

Each of these names would be the key in the database for that person's data.

A database search mechanism would first have to start looking character-by-character across
the name for matches until it found the match (or ruled the other entries out).
But if each of the names were hashed, it might be possible (depending on the number of
names in the database) to generate a unique four-digit key for each name.

For example:

7864 Abernathy, Sara
9802 Epperdingle, Roscoe
1990 Moore, Wilfred
8822 Smith, David
(and so forth)

A search for any name would first consist of computing the hash value (using the same hash
function used to store the item) and then comparing for a match using that value.

It would, in general, be much faster to find a match across four digits, each having only 10
possibilities, than across an unpredictable value length where each character had 26
possibilities.
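
A small Python sketch of the same idea; the hash function here (weighted character values reduced modulo 10,000) is invented for the example and will produce collisions, which real systems must handle:

def toy_hash(name):
    return sum(ord(ch) * (i + 1) for i, ch in enumerate(name)) % 10000

table = {}
for name in ["Abernathy, Sara", "Epperdingle, Roscoe", "Moore, Wilfred", "Smith, David"]:
    table[toy_hash(name)] = name             # store each record under its short hashed key

key = toy_hash("Moore, Wilfred")             # recompute the hash at search time...
print(key, table[key])                       # ...and look the record up by the short key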

The hashing algorithm is called the hash function -- the term is probably derived from the idea
that the resulting hash value can be thought of as a "mixed up" version of the represented
value.

In addition to faster data retrieval, hashing is also used in creating and verifying digital
signatures (used to authenticate message senders and receivers).

The digital signature is transformed with the hash function and then both the hashed value
(known as a message digest) and the signature are sent in separate transmissions to the
receiver.

Using the same hash function as the sender, the receiver derives a message-digest from the
signature and compares it with the message-digest it also received. (They should be the
same.)

The hash function is used to index the original value or key and then used later each time the
data associated with the value or key is to be retrieved.

Thus, hashing is always a one-way operation.

There's no need to "reverse engineer" the hash function by analyzing the hashed values.

In fact, the ideal hash function can't be derived by such analysis.

A good hash function also should not produce the same hash value from two different inputs.

If it does, this is known as a collision.

A hash function that offers an extremely low risk of collision may be considered acceptable.

Here are some relatively simple hash functions that have been used:

Division-remainder method:

The size of the number of items in the table is estimated.

That number is then used as a divisor into each original value or key to extract a quotient and
a remainder.

The remainder is the hashed value.

(Since this method is liable to produce a number of collisions, any search mechanism would
have to be able to recognize a collision and offer an alternate search mechanism.)
Folding method:

This method divides the original value (digits in this case) into several parts, adds the parts
together, and then uses the last four digits (or some other arbitrary number of digits that will
work) as the hashed value or key.

Radix transformation method:

Where the value or key is digital, the number base (or radix) can be changed resulting in a
different sequence of digits.

(For example, a decimal numbered key could be transformed into a hexadecimal numbered
key.)

High-order digits could be discarded to fit a hash value of uniform length.

Digit rearrangement method:

This is simply taking part of the original value or key such as digits in positions 3 through 6,
reversing their order, and then using that sequence of digits as the hash value or key.

There are several well-known hash functions used in cryptography.

These include the message-digest hash functions MD2, MD4, and MD5, used for hashing digital
signatures into a shorter value called a message-digest, and the Secure Hash Algorithm (SHA),
a standard algorithm that makes a larger (160-bit) message digest and is similar to MD4.

A hash function that works well for database storage and retrieval, however, might not work as
well for cryptographic or error-checking purposes.

brute force cracking


Brute force (also known as brute force cracking) is a trial and error method used by application
programs to decode encrypted data such as passwords or Data Encryption Standard (DES)
keys, through exhaustive effort (using brute force) rather than employing intellectual
strategies.

Just as a criminal might break into, or "crack" a safe by trying many possible combinations, a
brute force cracking application proceeds through all possible combinations of legal characters
in sequence.

Brute force is considered to be an infallible, although time-consuming, approach.
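
A minimal sketch of the idea in Python: every four-digit PIN is tried in sequence until one hashes to the stored value; the PIN length and the use of SHA-256 are assumptions made purely for the example, and real key spaces are vastly larger.

import hashlib
from itertools import product

stored = hashlib.sha256(b"7294").hexdigest()       # hash of the "unknown" secret

for digits in product("0123456789", repeat=4):     # all 10,000 candidates, in sequence
    guess = "".join(digits).encode()
    if hashlib.sha256(guess).hexdigest() == stored:
        print("recovered:", guess.decode())
        break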

Crackers are sometimes used in an organization to test network security, although their more
common use is for malicious attacks.

Some variations, such as L0phtcrack from L0pht Heavy Industries, start by making
assumptions, based on knowledge of common or organization-centered practices and then
apply brute force to crack the rest of the data.

L0phtcrack uses brute force to crack Windows NT passwords from a workstation.

PC Magazine reported that a system administrator who used the program from a Windows 95
terminal with no administrative privileges was able to uncover 85 percent of office passwords
within twenty minutes.

stream cipher
A stream cipher is a method of encrypting text (to produce ciphertext) in which a
cryptographic key and algorithm are applied to each binary digit in a data stream, one bit at a
time.

This method is not much used in modern cryptography.

The main alternative method is the block cipher in which a key and algorithm are applied to
blocks of data rather than individual bits in a stream.

going dark
Going dark is slang for the sudden termination of communication.

In the military, the term is also used to describe a scenario in which communication appears to
have ceased, but in reality has just moved from a public communication channel, where it
could be monitored, to a private communication channel that prevents eavesdropping.

According to the United States Federal Bureau of Investigation (FBI), terrorists use mobile
apps and encryption to go dark and make it difficult for the Bureau to monitor conversations in
legally-intercepted transmissions.

FBI Director James Comey described the going dark problem at the Aspen Security Conference
in July 2015: "ISIL's M.O. is to broadcast on Twitter, get people to follow them, then move
them to Twitter Direct Messaging" to evaluate them as recruits. "Then they'll move them to an
encrypted mobile-messaging app so they go dark to us."

Mobile apps that use end-to-end encryption (E2EE) are designed to protect data at rest and in
transit and keep the end user's text messages, emails and video chats private and secure.
However, the same technologies that protect user communications from intruders make it
impossible for government agencies to make sense of the transmissions they legally gather,
especially if the app uses strong encryption.

The National Security Agency (NSA) has proposed two potential encryption methods to solve
the going dark problem:
split-key encryption and encryption using "key escrow."

In split key encryption, also known as secret sharing, the technology vendor or service
provider retains half of a master key and law enforcement retains the other half so that
decryption requires participation of both parties.

For key escrow, decryption requires multiple keys, one of which is stored separately from the
user, possibly in a government agency location.

Opponents of the proposed technologies maintain that both methods would be prohibitively
complex to implement and the complexity would provide points of entry that would ultimately
endanger user data security.

strong cryptography
Strong cryptography is secreted and encrypted communication that is well protected against
cryptanalysis and decryption, ensuring it is readable only by the intended parties.

Depending on the algorithms, protocols and implementation, a cryptographic system may be
vulnerable to analysis, leading to possible cracking of the system.

The ideal is an unbreakable system of which there is just one well known example:

the one-time pad.


The one-time pad is a system in which a randomly generated single-use private key is used to
encrypt a message.

The message is then decrypted by the receiver using a matching one-time pad and key.

The challenge in this system is exchanging pads and keys without allowing them to be
compromised.
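
A toy sketch in Python, assuming a truly random pad as long as the message that is used once and then destroyed; the key-distribution difficulty described above is exactly what the sketch glosses over.

import os

message = b"attack at dawn"
pad = os.urandom(len(message))                            # single-use random key, same length as the message

ciphertext = bytes(m ^ k for m, k in zip(message, pad))   # sender XORs the message with the pad
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad)) # receiver XORs again with the matching pad
print(recovered)                                          # b'attack at dawn'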

Strong cryptography is used by most governments to protect communications.

While it is increasingly available to the general public, there are still many countries where
strong cryptography and encryption are kept from the general public, justified by the need to
protect national security.

While the definition of strong cryptography in general may be broad, the PCI Security
Standards Council defines strong cryptography requirements for use in the payment card
industry (PCI) specifically:

Cryptography based on industry-tested and accepted algorithms, along with strong key
lengths (a minimum of 112 bits of effective key strength) and proper key-management practices.

Cryptography is a method to protect data and includes both encryption (which is reversible)
and hashing (which is not reversible, or one way).

At the time of publication, examples of industry-tested and accepted standards and algorithms
for minimum encryption strength include AES (128 bits and higher), TDES (minimum
triple-length keys), RSA (2048 bits and higher), ECC (160 bits and higher), and ElGamal
(2048 bits and higher).

Demonstrating the strength of a given cryptographic system is a complex affair that requires
in-depth consideration.

As such, the demonstration is best achieved by a large number of collaborators.

Planning tests and sharing, analyzing and reviewing results are best conducted in a public
forum.

block cipher
A block cipher is a method of encrypting text (to produce ciphertext) in which a cryptographic
key and algorithm are applied to a block of data (for example, 64 contiguous bits) at once as a
group rather than to one bit at a time.

The main alternative method, used much less frequently, is called the stream cipher.

So that identical blocks of text do not get encrypted the same way in a message (which might
make it easier to decipher the ciphertext), it is common to apply the ciphertext from the
previous encrypted block to the next block in a sequence.

So that identical messages encrypted on the same day do not produce identical ciphertext,
an initialization vectorderived from a random number generator is combined with the text in
the first block and the key.

This ensures that all subsequent blocks result in ciphertext that doesn't match that of the first
encrypting.
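
The following toy Python sketch shows only the chaining and the role of the initialization vector, using a made-up four-byte "block cipher" (an XOR with the key); it is not secure and is meant solely to show why identical plaintext blocks come out as different ciphertext blocks.

import os

BLOCK = 4
key = os.urandom(BLOCK)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext, iv):
    # Each block is combined with the previous ciphertext block (the IV for the
    # first block) before being "encrypted" with the key.
    previous, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        previous = xor(key, xor(plaintext[i:i + BLOCK], previous))
        out.append(previous)
    return out

iv = os.urandom(BLOCK)
print(cbc_encrypt(b"AAAAAAAAAAAA", iv))   # three identical plaintext blocks, three different ciphertext blocks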

cryptanalysis
Cryptanalysis refers to the study of ciphers, ciphertext, or cryptosystems (that is, to secret
code systems) with a view to finding weaknesses in them that will permit retrieval of
the plaintext from the ciphertext, without necessarily knowing the key or the algorithm.

This is known as breaking the cipher, ciphertext, or cryptosystem.

Breaking is sometimes used interchangeably with weakening.

This refers to finding a property (fault) in the design or implementation of the cipher that
reduces the number of keys required in a brute force attack (that is, simply trying every
possible key until the correct one is found).

For example, assume that a symmetric cipher implementation uses a key length of 128 bits:

this means that a brute force attack would need to try up to all 2^128 possible keys (or, on
average, 2^127 keys) to be certain of finding the correct key to convert the ciphertext into
plaintext, which is not possible given present and near-future computing abilities.

However, suppose a cryptanalysis of the cipher reveals a technique that would allow the plaintext to be
found in only about 2^40 attempts.

While not completely broken, the cipher is now much weaker and the plaintext can be found
with moderate computing resources.
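As a rough, illustrative calculation (the assumed rate of one billion key trials per second is arbitrary), the following Python snippet compares the expected effort of searching a 128-bit key space against the weakened 2^40 case:

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_search(bits, keys_per_second=1_000_000_000):
    # On average, a brute force search tries half the key space.
    average_trials = 2 ** (bits - 1)
    return average_trials / keys_per_second / SECONDS_PER_YEAR

print(f"128-bit search: about {years_to_search(128):.2e} years")
print(f" 40-bit search: about {years_to_search(40) * SECONDS_PER_YEAR:.0f} seconds")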

There are numerous techniques for performing cryptanalysis, depending on what access the
cryptanalyst has to the plaintext, ciphertext, or other aspects of the cryptosystem.

Below are some of the most common types of attacks:

1) Known-plaintext analysis: With this procedure, the cryptanalyst has access to some of the
plaintext and the ciphertext that corresponds to it. Using this information, the cryptanalyst
attempts to deduce the key used to produce the ciphertext.

2) Chosen-plaintext analysis: The cryptanalyst is able to have any plaintext of his or her
choosing encrypted with a key and obtain the resulting ciphertext, but the key itself cannot be
analyzed. The cryptanalyst attempts to deduce the key by comparing the resulting ciphertext
with the chosen plaintext; differential cryptanalysis is a well-known form of this approach. The
Rivest-Shamir-Adleman (RSA) encryption technique has been shown to be somewhat
vulnerable to this type of analysis.

3) Ciphertext-only analysis: The cryptanalyst has no knowledge of the plaintext and must work
only from the ciphertext. This requires accurate guesswork as to how a message could be
worded. It helps to have some knowledge of the literary style of the ciphertext writer and/or
the general subject matter.

4) Man-in-the-middle attack: This differs from the above in that it involves tricking individuals
into surrendering their keys. The cryptanalyst/attacker places him or herself in the
communication channel between two parties who wish to exchange their keys for secure
communication (via asymmetric or public key infrastructure cryptography). The
cryptanalyst/attacker then performs a key exchange with each party, with the original parties
believing they are exchanging keys with each other.

The two parties then end up using keys that are known to the cryptanalyst/attacker.

This type of attack can be mitigated by authenticating the exchanged keys, for example by comparing key fingerprints (hash values of the public keys) over an independent channel or by using keys signed by a trusted certificate authority.


5) Timing/differential power analysis: This technique, made public in June 1998, is
particularly useful against the smart card; it measures differences in electrical consumption
over a period of time when a microchip performs a function to secure information.

This technique can be used to gain information about key computations used in the encryption
algorithm and other functions pertaining to security.

The technique can be rendered less effective by introducing random noise into the
computations, or altering the sequence of the executables to make it harder to monitor the
power fluctuations.

This type of analysis was first developed by Paul Kocher of Cryptography Research, though Bull
Systems claims it knew about this type of attack over four years before.

In addition to the above, other techniques are available, such as convincing individuals to
reveal passwords/keys, developing Trojan horse programs that steal a victim's secret key from
their computer and send it back to the cryptanalyst, or tricking a victim into using a weakened
cryptosystem.

All of these are valid techniques in cryptanalysis, even though they may be considered
unorthodox.

Successful cryptanalysis is a combination of mathematics, inquisitiveness, intuition,
persistence, powerful computing resources - and more often than many would like to admit -
luck.

However, successful cryptanalysis has made the enormous resources often devoted to it more
than worthwhile: the breaking of the German Enigma code during WWII, for example, was one
of the key factors in an early Allied victory.

Today, cryptanalysis is practiced by a broad range of organizations: governments try to break
other governments' diplomatic and military transmissions; companies developing security
products send them to cryptanalysts to test their security features; and hackers or crackers
try to break the security of Web sites by finding weaknesses in the securing protocols.

It is this constant battle between cryptographers trying to secure information and
cryptanalysts trying to break cryptosystems that moves the entire body of cryptology
knowledge forward.

International Data Encryption Algorithm (IDEA)


IDEA (International Data Encryption Algorithm) is an encryption algorithm developed at ETH in
Zurich, Switzerland.

It is a block cipher that uses a 128-bit key, and it is generally considered to be very secure.

It is considered among the best publicly known algorithms.

In the several years that it has been in use, no practical attacks on it have been published
despite a number of attempts to find some.

IDEA is patented in the United States and in most of the European countries.

The patent is held by Ascom-Tech.

Non-commercial use of IDEA is free.

Commercial licenses can be obtained by contacting Ascom-Tech.


MD5
MD5 is an algorithm that is used to verify data integrity through the creation of a 128-bit
message digest from data input (which may be a message of any length) that is claimed to be
as unique to that specific data as a fingerprint is to the specific individual.

MD5, which was developed by Professor Ronald L. Rivest of MIT, is intended for use with digital
signature applications, which require that large files must be compressed by a secure method
before being encrypted with a secret key, under a public key cryptosystem.

MD5 is currently a standard, Internet Engineering Task Force (IETF) Request for Comments
(RFC) 1321.

According to the standard, it is "computationally infeasible" that any two messages that have
been input to the MD5 algorithm could have as the output the same message digest, or that a
false message could be created through apprehension of the message digest. (In practice,
researchers have since demonstrated MD5 collisions, so the algorithm is no longer considered
suitable for security-sensitive uses such as digital signatures.)
MD5 is the third message digest algorithm created by Rivest. All three (the others are MD2 and
MD4) have similar structures, but MD2 was optimized for 8-bit machines, in comparison with
the two later formulas, which are optimized for 32-bit machines.

The MD5 algorithm is an extension of MD4, which critical review found to be fast, but
possibly not absolutely secure. In comparison, MD5 is not quite as fast as the MD4 algorithm,
but it offers much more assurance of data security.
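As a quick illustration using Python's standard hashlib module, the following computes 128-bit MD5 digests and shows how even a one-character change produces a completely different hash value:

import hashlib

print(hashlib.md5(b"The quick brown fox").hexdigest())
print(hashlib.md5(b"The quick brown fox!").hexdigest())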

user interface (UI)


In information technology, the user interface (UI) is everything designed into an information
device with which a human being may interact -- including display screen, keyboard, mouse,
light pen, the appearance of a desktop, illuminated characters, help messages, and how an
application program or a Web site invites interaction and responds to it.

In early computers, there was very little user interface except for a few buttons at an
operator's console.

The user interface was largely in the form of punched card input and report output.

Later, a user was provided the ability to interact with a computer online and the user interface
was a nearly blank display screen with a command line, a keyboard, and a set of commands
and computer responses that were exchanged.

This command line interface led to one in which menus (list of choices written in text)
predominated.

And, finally, the graphical user interface (GUI) arrived, originating mainly in Xerox's Palo Alto
Research Center, adopted and enhanced by Apple Computer, and finally effectively
standardized by Microsoft in its Windows operating systems.

The user interface can arguably include the total "user experience," which may include the
aesthetic appearance of the device, response time, and the content that is presented to the
user within the context of the user interface.

IETF (Internet Engineering Task Force)


The IETF (Internet Engineering Task Force) is the body that defines standard Internet
operating protocols such as TCP/IP.

The IETF is supervised by the Internet Society's Internet Architecture Board (IAB).

IETF members are drawn from the Internet Society's individual and organization membership.
Standards are expressed in the form of Requests for Comments (RFCs).
single sign-on (SSO)
Single sign-on (SSO) is a session and user authentication service that permits a user to use
one set of login credentials (e.g., name and password) to access multiple applications.

The service authenticates the end user for all the applications the user has been given rights to
and eliminates further prompts when the user switches applications during the same session.

On the back end, SSO is helpful for logging user activities as well as monitoring user accounts.

In a basic web SSO service, an agent module on the application server retrieves the specific
authentication credentials for an individual user from a dedicated SSO policy server, while
authenticating the user against a user repository such as a lightweight directory access protocol
(LDAP) directory.

Some SSO services use protocols such as Kerberos and the security assertion markup language
(SAML).

SAML is an XML standard that facilitates the exchange of user authentication and authorization
data across secure domains.

SAML-based SSO services involve communications between the user, an identity provider that
maintains a user directory, and a service provider.

When a user attempts to access an application from the service provider, the service provider
will send a request to the identity provider for authentication.

The service provider will then verify the authentication and log the user in.
The user will not have to log in again for the rest of his session.

In a Kerberos-based setup, once the user credentials are provided, a ticket-granting ticket
(TGT) is issued.

The TGT fetches service tickets for other applications the user wishes to access, without asking
the user to re-enter credentials.

Although single sign-on is a convenience to users, it presents risks to enterprise security.

An attacker who gains control over a user's SSO credentials will be granted access to every
application the user has rights to, increasing the amount of potential damage.

In order to avoid malicious access, it's essential that every aspect of SSO implementation be
coupled with identity governance.

Organizations can also use two factor authentication (2FA) or multifactor authentication (MFA)
with SSO to improve security.
Fermat prime
A Fermat prime is a Fermat number that is also a prime number.

A Fermat number F_n is of the form 2^m + 1, where m is the nth power of 2 (that is, m = 2^n,
where n is a non-negative integer).

To find the Fermat number F_n for an integer n, you first find m = 2^n, and then calculate
2^m + 1.

The term arises from the name of a 17th-Century French lawyer and mathematician, Pierre de
Fermat, who first defined these numbers and noticed their significance.

Fermat believed that all numbers of the above form are prime numbers; that is, that F_n is
prime for all integral values of n.

This is indeed the case for n = 0, n = 1, n = 2, n = 3, and n = 4:

When n = 0, m = 2^0 = 1; therefore
F_0 = 2^1 + 1 = 2 + 1 = 3, which is prime
When n = 1, m = 2^1 = 2; therefore
F_1 = 2^2 + 1 = 4 + 1 = 5, which is prime
When n = 2, m = 2^2 = 4; therefore
F_2 = 2^4 + 1 = 16 + 1 = 17, which is prime
When n = 3, m = 2^3 = 8; therefore
F_3 = 2^8 + 1 = 256 + 1 = 257, which is prime
When n = 4, m = 2^4 = 16; therefore
F_4 = 2^16 + 1 = 65536 + 1 = 65537, which is prime

Using computers, mathematicians have not yet found any Fermat primes for n greater than 4.
So far, Fermat's original hypothesis seems to have been wrong.

The search continues for Fermat numbers F_n that are prime when n is greater than 4.
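A short Python loop reproduces the values in the table above (the numbers shown are the first five Fermat numbers, all of which are prime):

for n in range(5):
    m = 2 ** n
    print(f"F_{n} = 2^{m} + 1 = {2 ** m + 1}")
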
integer
An integer (pronounced IN-tuh-jer) is a whole number (not a fractional number) that can be
positive, negative, or zero.

Examples of integers are: -5, 1, 5, 8, 97, and 3,043.

Examples of numbers that are not integers are: -1.43, 1 3/4, 3.14, .09, and 5,643.1.

The set of integers, denoted Z, is formally defined as follows:

Z = {..., -3, -2, -1, 0, 1, 2, 3, ...}

In mathematical equations, unknown or unspecified integers are represented by lowercase,
italicized letters from the "late middle" of the alphabet. The most common are p, q, r, and s.

The set Z is a denumerable set.


Denumerability refers to the fact that, even though there might be an infinite number of
elements in a set, those elements can be denoted by a list that implies the identity of every
element in the set.

For example, it is intuitive from the list {..., -3, -2, -1, 0, 1, 2, 3, ...} that 356,804,251 and
-67,332 are integers, but 356,804,251.5, -67,332.89, -4/3, and 0.232323 ... are not.

The elements of Z can be paired off one-to-one with the elements of N, the set of natural
numbers, with no elements being left out of either set. Let N = {1, 2, 3, ...}.

Then the pairing can proceed in this way: 0 pairs with 1, 1 with 2, -1 with 3, 2 with 4, -2 with 5,
3 with 6, -3 with 7, and so on.

In infinite sets, the existence of a one-to-one correspondence is the litmus test for
determining cardinality, or size.

The set of natural numbers and the set of rational numbers have the same cardinality as Z.

However, the sets of real numbers, imaginary numbers, and complex numbers have cardinality
larger than that of Z.
algorithm

An algorithm (pronounced AL-go-rith-um) is a procedure or formula for solving a problem,
based on conducting a sequence of specified actions.

A computer program can be viewed as an elaborate algorithm.

In mathematics and computer science, an algorithm usually means a small procedure that
solves a recurrent problem.
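Euclid's algorithm for the greatest common divisor is a classic example of such a small, recurrent procedure; a minimal Python version follows:

def gcd(a, b):
    # Repeatedly replace the pair (a, b) with (b, a mod b) until b is 0.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # prints 21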

Algorithms are widely used throughout all areas of IT (information technology).

A search engine algorithm, for example, takes search strings of keywords and operators as
input, searches its associated database for relevant web pages, and returns results.

An encryption algorithm transforms data according to specified actions to protect it.

A secret key algorithm such as the U.S. government's Data Encryption Standard
(DES), for example, uses the same key to encrypt and decrypt data.
As long as the algorithm is sufficiently sophisticated, no one lacking the key can decrypt the
data.

The word algorithm derives from the name of the mathematician, Mohammed ibn-Musa al-
Khwarizmi, who was part of the royal court in Baghdad and who lived from about 780 to 850.

Al-Khwarizmi's work is the likely source for the word algebra as well.

Khan Academy provides an introductory tutorial on algorithms:

https://youtu.be/CvSOaYi89B4

prime number
A prime number is a whole number greater than 1, whose only two whole-number factors are
1 and itself. The first few prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, and 29.

As we proceed in the set of natural numbers N = {1, 2, 3, ...}, the primes become less and
less frequent in general.

However, there is no largest prime number.

For every prime number p, there exists a prime number p' such that p' is greater than p.

This was demonstrated in ancient times by the Greek mathematician Euclid.

Suppose n is a whole number, and we want to test it to see if it is prime.

First, we take the square root (or the 1/2 power) of n;

then we round this number up to the next highest whole number.

Call the result m. We must find all of the following quotients:

qm = n / m
q(m-1) = n / (m-1)
q(m-2) = n / (m-2)
q(m-3) = n / (m-3)
...
q3 = n / 3
q2 = n / 2

The number n is prime if and only if none of the q's, as derived above, are whole numbers.
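The procedure described above translates into a short Python function; it divides n by every whole number from 2 up to the square root of n (rounding the bound down is sufficient) and reports n as prime only if none of the quotients is a whole number:

import math

def is_prime(n):
    if n < 2:
        return False
    for divisor in range(2, math.isqrt(n) + 1):
        if n % divisor == 0:   # the quotient n / divisor is a whole number
            return False
    return True

print([k for k in range(2, 30) if is_prime(k)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]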

A computer can be used to test extremely large numbers to see if they are prime.

But, because there is no limit to how large a natural number can be, there is always a point
where testing in this manner becomes too great a task even for the most powerful
supercomputers.

Various algorithms have been formulated in an attempt to generate ever-larger prime numbers.

These schemes all have limitations.


Mersenne prime (or Marsenne prime)
A Mersenne (also spelled Marsenne) prime is a specific type of prime number.

It must be reducible to the form 2^n - 1, where n is a prime number.

The term comes from the surname of the French monk Marin Mersenne, who first defined it.
The first few known values of n that produce Mersenne primes are n = 2, 3, 5, 7, 13, 17, 19,
31, 61, and 89.

With the advent of computers to perform number-crunching tasks formerly done by humans,
ever-larger Mersenne primes (and primes in general) have been found.
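For illustration, the Lucas-Lehmer test, the standard primality test for Mersenne numbers, is short enough to sketch in Python; for an odd prime exponent p, the number M = 2^p - 1 is prime exactly when the sequence below ends at zero:

def lucas_lehmer(p):
    # Tests whether 2^p - 1 is prime, for an odd prime exponent p.
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

for p in [3, 5, 7, 11, 13, 17, 19, 23, 31]:
    print(p, lucas_lehmer(p))   # True for all except p = 11 and p = 23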

The quest to find prime numbers is akin to other numerical searches done by computers.
Examples are the decimal expansions of irrational numbers such as pi (the circumference-to-
diameter ratio of a circle) or e (the natural logarithm base).

But the 'next' prime is more difficult to find than the 'next' digit in the expansion of an
irrational number.

It takes the most powerful computer a long time to check a large number to determine if it is
prime, and an even longer time to determine if it is a Mersenne prime.

For this reason, Mersenne primes are of particular interest to developers of strong encryption
methods.

In August 2008, Edson Smith, a system administrator at UCLA, found the largest prime
number known to that date. Smith had installed software for the Great Internet Mersenne
Prime Search (GIMPS), a volunteer-based distributed computing project.

The number (which is a Mersenne prime) is 12,978,189 digits long.

It would take nearly two-and-a-half months to write out and, if printed, would stretch out for
30 miles.

irrational number
An irrational number is a real number that cannot be reduced to any ratio between
an integer p and a natural number q.

The union of the set of irrational numbers and the set of rational numbers forms the set of real
numbers.
numbers.

In mathematical expressions, unknown or unspecified irrationals are usually represented
by u through z.

Irrational numbers are primarily of interest to theoreticians.

Abstract mathematics has potentially far-reaching applications in communications and
computer science, especially in data encryption and security.

Examples of irrational numbers are 2^(1/2) (the square root of 2), 3^(1/3) (the cube root of 3), the
circular ratio pi, and the natural logarithm base e.

The quantities 2^(1/2) and 3^(1/3) are examples of algebraic numbers.

Pi and e are examples of special irrationals known as transcendental numbers.

The decimal expansion of an irrational number is always nonterminating (it never ends) and
nonrepeating (the digits display no repetitive pattern).
If x and z are irrationals such that x < z, then there always exists an irrational y such
that x < y < z.

The set of irrationals is "dense" like the set Q of rationals.

But theoretically, the set of irrationals is "more dense." Unlike Q, the set of irrationals
is nondenumerable.

There are more nonterminating, nonrepeating decimals than is possible to list, even by
implication.

To prove this, suppose there is an implied list of all the nonterminating, nonrepeating decimal
numbers between 0 and 1. Every such number consists of a zero followed by a decimal point,
followed by an infinite sequence of digits from the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}.

Suppose the elements of the list are denoted x_1, x_2, x_3, ... and the digits in the numbers
are denoted a_ij (the j-th digit of x_i).

The list can be written like this:

x_1 = 0.a_11 a_12 a_13 a_14 a_15 a_16 ...
x_2 = 0.a_21 a_22 a_23 a_24 a_25 a_26 ...
x_3 = 0.a_31 a_32 a_33 a_34 a_35 a_36 ...
x_4 = 0.a_41 a_42 a_43 a_44 a_45 a_46 ...
x_5 = 0.a_51 a_52 a_53 a_54 a_55 a_56 ...
x_6 = 0.a_61 a_62 a_63 a_64 a_65 a_66 ...
...

Even though we don't know the actual values of any of the digits, it is easy to imagine a
number between 0 and 1 that can't be in this list.

Think of a number y of the following form:

y = 0.b_11 b_22 b_33 b_44 b_55 b_66 ...

such that no b_ii in y is equal to the corresponding a_ii in the list.

The resulting number y is nonterminating and nonrepeating, is between 0 and 1, but is not
equal to any x_i in the list, because there is always at least one digit that does not match.
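The diagonal construction is mechanical enough to demonstrate in a few lines of Python; given any finite prefix of such a list of decimal expansions, it builds a number that differs from the i-th entry in its i-th digit:

def diagonal_number(expansions):
    # expansions: list of digit strings, e.g. "14159..." stands for 0.14159...
    digits = []
    for i, x in enumerate(expansions):
        a_ii = int(x[i])
        digits.append(str((a_ii + 1) % 10))  # pick a digit different from a_ii
    return "0." + "".join(digits)

listed = ["141592653", "718281828", "414213562"]
print(diagonal_number(listed))   # 0.225 -- differs from each listed expansion on the diagonal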

The non-denumerability of the set of irrational numbers has far-reaching implications.

Perhaps most bizarre is the notion that "not all infinities are created equal." Although the set
of rationals and the set of irrationals are both infinite, the set of irrationals is larger in a
demonstrable way.

real number
A real number is any element of the set R, which is the union of the set of rational numbers and
the set of irrational numbers.

In mathematical expressions, unknown or unspecified real numbers are usually represented by
lowercase italic letters u through z.
The set R gives rise to other sets such as the set of imaginary numbers and the set of complex
numbers.

The idea of a real number (and what makes it "real") is primarily of interest to theoreticians.

Abstract mathematics has potentially far-reaching applications in communications and
computer science, especially in data encryption and security.

If x and z are real numbers such that x < z, then there always exists a real number y such
that x < y < z.

The set of reals is "dense" in the same sense as the set of irrationals. Both sets
are nondenumerable.
There are more real numbers than is possible to list, even by implication.
The set R is sometimes called the continuum because it is intuitive to think of the elements
of R as corresponding one-to-one with the points on a geometric line.

This notion, first proposed by Georg Cantor who also noted the difference between
the cardinalities (sizes) of the sets of rational and irrational numbers, is called the Continuum
Hypothesis.

This hypothesis can be either affirmed or denied without causing contradictions in theoretical
mathematics.

https://youtu.be/GqE9FKPZtPI

natural number
A natural number is a number that occurs commonly and obviously in nature.

As such, it is a whole, non-negative number.

The set of natural numbers, denoted N, can be defined in either of two ways:

N = {0, 1, 2, 3, ...}
N = {1, 2, 3, 4, ...}

In mathematical equations, unknown or unspecified natural numbers are represented by
lowercase, italicized letters from the middle of the alphabet.

The most common is n, followed by m, p, and q.

In subscripts, the lowercase i is sometimes used to represent a non-specific natural number
when denoting the elements in a sequence or series.

However, i is more often used to represent the positive square root of -1, the unit imaginary
number.

The set N, whether or not it includes zero, is a denumerable set.

Denumerability refers to the fact that, even though there might be an infinite number of
elements in a set, those elements can be denoted by a list that implies the identity of every
element in the set.

For example, it is intuitive from either the list {1, 2, 3, 4, ...} or the list {0, 1, 2, 3, ...} that
356,804,251 is a natural number, but 356,804,251.5, 2/3, and -23 are not.

Both of the sets of natural numbers defined above are denumerable.


They are also exactly the same size. It's not difficult to prove this; their elements can be
paired off one-to-one, with no elements being left out of either set.

In infinite sets, the existence of a one-to-one correspondence is the litmus test for
determining cardinality, or size.

The set of integers and the set of rational numbers have the same cardinality as N.

However, the sets of real numbers, imaginary numbers, and complex numbers have cardinality
larger than that of N.
pi

Pi is a numerical constant that represents the ratio of a circle's circumference to its diameter
on a flat plane surface.

The value is the same regardless of the size of the circle.

The decimal expansion of pi is a nonterminating, nonrepeating sequence of digits.


For most calculations, the value can be taken as 3.14159.

This means, for example, that a circle with a diameter of 10 centimeters, as measured on a
flat surface, has a circumference of approximately 31.4159 centimeters.

The number pi is also the ratio of the length of any great circle (geodesic) on a sphere to that
sphere's diameter.

So, for example, if the earth is considered to be a perfect sphere with a diameter of 8,000
miles, then the distance around the earth, as measured along the equator or along any great
circle, is approximately 8,000 x 3.14159, or 25,133 miles.

Pi is an irrational number. It cannot be precisely defined as the ratio of any two whole numbers.

Thus, its decimal expansion has no pattern and never ends.

The first few hundred, thousand, million, or billion digits of pi can be calculated using a
computer to add up huge initial sequences of the terms of an infinite sum known as a Fourier
series.

Mathematically, it can be shown that the following equation holds:

pi = 4 x (1 - 1/3 + 1/5 - 1/7 + 1/9 - ...)

The symbol on the left of the equal sign, written here as "pi", is conventionally printed as the
lowercase Greek letter used in mathematics, physics, and engineering to represent pi.
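A small Python loop sums the first terms of that series; convergence is slow, so many terms are needed for even a few correct digits:

terms = 1_000_000
total = 0.0
for k in range(terms):
    total += (-1) ** k / (2 * k + 1)   # 1 - 1/3 + 1/5 - 1/7 + ...
print(4 * total)   # approximately 3.14159...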

logarithm (logarithmic)
A logarithm is an exponent used in mathematical calculations to depict the perceived levels of
variable quantities such as visible light energy, electromagnetic field strength, and sound
intensity.

Suppose three real numbers a, x, and y are related according to the following equation:

x = a^y

Then y is defined as the base-a logarithm of x.

This is written as follows:

log_a x = y

As an example, consider the expression 100 = 10^2.

This is equivalent to saying that the base-10 logarithm of 100 is 2;

that is, log_10 100 = 2.

Note also that 1000 = 10^3; thus log_10 1000 = 3.

(With base-10 logarithms, the subscript 10 is often omitted, so we could write log 100 = 2 and
log 1000 = 3).

When the base-10 logarithm of a quantity increases by 1, the quantity itself increases by a
factor of 10.

A 10-to-1 change in the size of a quantity, resulting in a logarithmic increase or decrease of 1,
is called an order of magnitude.

Thus, 1000 is one order of magnitude larger than 100.

Base-10 logarithms, also called common logarithms, are used in electronics and experimental
science.

In theoretical science and mathematics, another logarithmic base is encountered: the
transcendental number e, which is approximately equal to 2.71828.

Base-e logarithms, written log_e or ln, are also known as natural logarithms.

If x = e^y, then

log_e x = ln x = y
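Python's standard math module exposes both kinds of logarithm, which makes the relationship easy to check numerically:

import math

print(math.log10(1000))   # 3.0 (base-10 logarithm)
print(math.log(math.e))   # 1.0 (natural logarithm, base e)
print(math.log(81, 3))    # approximately 4.0 (logarithm to an arbitrary base)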

exponent
An exponent is a quantity representing the power to which some other quantity is raised.

Exponents do not have to be numbers or constants; they can be variables.

They are often positive whole numbers, but they can be negative numbers, fractional
numbers, irrational numbers, or complex numbers.

Consider the following mathematical expressions:

y = e^x
x^3 + 5x^2 - 5x + 6 = 0
x^2 + y^2 = z^2

In the first expression, x is the exponent of e.

In the second expression, the numbers 3 and 2 are exponents of x.

In the third expression, the number 2 is an exponent of x, y, and z.

Exponents are important in scientific notation, when large or small quantities are denoted as
powers of 10.

Consider this expression of a large number:

534,200,000,000 = 5.342 x 10^11
Here, the exponent 11, attached to the base 10, indicates the quantity 100,000,000,000.

In conventional documentation, exponents are denoted by superscripts, as in the examples
above.

But it is not always possible to write them this way.

When sending an e-mail message, the body of text must be in plain ASCII, which does not
support specialized character attributes such as superscripts.

If x is the exponent to which some base quantity a is raised, then the quantity can be written
in ASCII as a^x.

In scientific notation, the uppercase letter E can be used to indicate that a number is raised to
a positive or negative power of 10.

That power is usually indicated by two-digit numbers between -99 and +99.

Here are some examples:

2.45 x 10^6 = 2.45E+06
6.0033 x 10^-17 = 6.0033E-17
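Both notations carry over directly into most programming languages; Python, for example, accepts E notation for floating-point literals and uses ** for exponents:

print(2.45e6)        # 2450000.0
print(6.0033e-17)    # 6.0033e-17
print(10 ** 11)      # 100000000000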
algebraic number
An algebraic number is any real number that is a solution of some single-variable polynomial
equation whose coefficients are all integers.

While this is an abstract notion, theoretical mathematics has potentially far-reaching
applications in communications and computer science, especially in data encryption and
security.

The general form of a single-variable polynomial equation is:

a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_n x^n = 0

where a_0, a_1, a_2, ..., a_n are the coefficients, and x is the unknown for which the equation is
to be solved.

A number x is algebraic if and only if there exists some equation of the above form such
that a_0, a_1, a_2, ..., a_n are all integers.

All rational numbers are algebraic.

Examples include 25, 7/9, and -0.245245245...

Some irrational numbers are also algebraic.

Examples are 2^(1/2) (the square root of 2) and 3^(1/3) (the cube root of 3).

There are irrational numbers x for which no single-variable, integer-coefficient polynomial
equation exists with x as a solution.

Examples are pi (the ratio of a circle's circumference to its diameter in a plane) and e (the
natural logarithm base).

Numbers of this type are known as transcendental numbers.

transcendental number
A transcendental number is a real number that is not the solution of any single-variable
polynomial equation whose coefficients are all integers.

All transcendental numbers are irrational numbers.

But the converse is not true; there are some irrational numbers that are not transcendental.

Examples of transcendental numbers include pi, the ratio of a circle's circumference to its
diameter in a plane, and e, the base of the natural logarithm.

The case of pi has historical significance.

The fact that pi is transcendental means that it is impossible to draw to perfection, using a
compass and straightedge and following the ancient Greek rules for geometric constructions, a
square with the same area as a given circle.

This ancient puzzle, known as squaring the circle, was, for centuries, one of the most baffling
challenges in geometry.

Schemes have been devised that provide amazingly close approximations to squaring the
circle.
But in theoretical mathematics (unlike physics and engineering), approximations are never
good enough; a solution, scheme, or method is either valid, or else it is not.

It can be difficult, and perhaps impossible, to determine whether or not a certain irrational
number is transcendental.

Some numbers defy classification (algebraic, irrational, or transcendental) to this day.

Two examples are the product of pi and e (call this quantity P) and the sum of pi
and e (call this S).

It has been proved that pi and e are both transcendental.

It has also been shown that at least one of the two quantities P and S is transcendental.

But as of this writing, no one has rigorously proved that P is transcendental, and no one
has rigorously proved that S is transcendental.

polynomial
A polynomial is a mathematical expression consisting of a sum of terms, each term including a
variable or variables raised to a power and multiplied by a coefficient.

The simplest polynomials have one variable.

A one-variable (univariate) polynomial of degree n has the following form:

a_n x^n + a_(n-1) x^(n-1) + ... + a_2 x^2 + a_1 x^1 + a_0 x^0

where the a's represent the coefficients and x represents the variable.

Because x^1 = x and x^0 = 1 for all complex numbers x, the above expression can be simplified
to:

a_n x^n + a_(n-1) x^(n-1) + ... + a_2 x^2 + a_1 x + a_0

When an nth-degree univariate polynomial is equal to zero, the result is a univariate
polynomial equation of degree n:

a_n x^n + a_(n-1) x^(n-1) + ... + a_2 x^2 + a_1 x + a_0 = 0

There may be several different values of x, called roots, that satisfy a univariate polynomial
equation.

In general, the higher the order of the equation (that is, the larger the value of n), the more
roots there are.
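As a quick illustration (assuming the third-party NumPy package is available), numpy.roots finds all the roots of a univariate polynomial from its list of coefficients, highest degree first:

import numpy as np

# Coefficients of x^2 - 3x + 2, i.e. (x - 1)(x - 2)
print(np.roots([1, -3, 2]))      # [2. 1.]

# A degree-3 equation generally has three roots (possibly complex)
print(np.roots([1, 0, 0, -8]))   # one real root (2) and two complex roots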

A univariate polynomial equation of degree 1 (n = 1) constitutes a linear equation.

When n = 2, it is a quadratic equation;

when n = 3, it is a cubic equation;

when n = 4, it is a quartic equation;

when n = 5, it is a quintic equation.


The larger the value of n, the more difficult it is to find all the roots of a univariate polynomial
equation.

Some polynomials have two, three, or more variables.

A two-variable polynomial is called bivariate; a three-variable polynomial is called trivariate.

e-mail (electronic mail or email)

E-mail (electronic mail) is the exchange of computer-stored messages by telecommunication.

(Some publications spell it email; we prefer the currently more established spelling of e-mail.)

E-mail messages are usually encoded in ASCII text.

However, you can also send non-text files, such as graphic images and sound files, as
attachments sent in binary streams.

E-mail was one of the first uses of the Internet and is still the most popular use.

A large percentage of the total traffic over the Internet is e-mail.

E-mail can also be exchanged between online service provider users and in networks other
than the Internet, both public and private.

E-mail can be distributed to lists of people as well as to individuals.

A shared distribution list can be managed by using an e-mail reflector. Some mailing lists allow
you to subscribe by sending a request to the mailing list administrator.

A mailing list that is administered automatically is called a list server.

E-mail is one of the protocols included with the Transmission Control Protocol/Internet Protocol
(TCP/IP) suite of protocols.

A popular protocol for sending e-mail is Simple Mail Transfer Protocol and a popular protocol
for receiving it is POP3.

Both Netscape and Microsoft include an e-mail utility with their Web browsers.

Internet
The Internet, sometimes called simply "the Net," is a worldwide system of computer networks
- a network of networks in which users at any one computer can, if they have permission, get
information from any other computer (and sometimes talk directly to users at other
computers).

It was conceived by the Advanced Research Projects Agency (ARPA) of the U.S. government in
1969 and was first known as the ARPANet.

The original aim was to create a network that would allow users of a research computer at one
university to "talk to" research computers at other universities.

A side benefit of ARPANet's design was that, because messages could be routed or rerouted in
more than one direction, the network could continue to function even if parts of it were
destroyed in the event of a military attack or other disaster.

Today, the Internet is a public, cooperative and self-sustaining facility accessible to hundreds
of millions of people worldwide.
Physically, the Internet uses a portion of the total resources of the currently existing public
telecommunication networks.

Technically, what distinguishes the Internet is its use of a set of protocols called TCP/IP (for
Transmission Control Protocol/Internet Protocol).

Two recent adaptations of Internet technology, the intranet and the extranet, also make use of
the TCP/IP protocol.

For most Internet users, electronic mail (email) practically replaced the postal service for short
written transactions.

People communicate over the Internet in a number of other ways including Internet Relay
Chat (IRC), Internet telephony, instant messaging, video chat or social media.

The most widely used part of the Internet is the World Wide Web (often abbreviated "WWW" or
called "the Web").

Its outstanding feature is hypertext, a method of instant cross-referencing.

In most Web sites, certain words or phrases appear in text of a different color than the rest;
often this text is also underlined.

When you select one of these words or phrases, you will be transferred to the site or page that
is relevant to this word or phrase.

Sometimes there are buttons, images, or portions of images that are "clickable." If you move
the pointer over a spot on a Web site and the pointer changes into a hand, this indicates that
you can click and be transferred to another site.

Using the Web, you have access to billions of pages of information.

Web browsing is done with a Web browser, the most popular of which
are Chrome, Firefox and Internet Explorer.

The appearance of a particular Web site may vary slightly depending on the browser you use.

Also, later versions of a particular browser are able to render more "bells and whistles" such as
animation, virtual reality, sound, and music files, than earlier versions.

The Internet has continued to grow and evolve over the years of its existence.

IPv6, for example, was designed to anticipate enormous future expansion in the number of
available IP addresses.

In a related development, the Internet of Things (IoT) is the burgeoning environment in which
almost any entity or object can be provided with a unique identifier and the ability to transfer
data automatically over the Internet.

digital signature
A digital signature (not to be confused with a digital certificate) is a mathematical technique
used to validate the authenticity and integrity of a message, software or digital document.

The digital equivalent of a handwritten signature or stamped seal, but offering far more
inherent security, a digital signature is intended to solve the problem of tampering and
impersonation in digital communications.
Digital signatures can provide the added assurances of evidence of origin, identity and status
of an electronic document, transaction or message, as well as acknowledging informed consent
by the signer.

In many countries, including the United States, digital signatures have the same legal
significance as the more traditional forms of signed documents.

The United States Government Printing Office publishes electronic versions of the budget,
public and private laws, and congressional bills with digital signatures.
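As a rough sketch of the mechanics (assuming the third-party Python "cryptography" package), the holder of a private key signs a message and anyone with the matching public key can verify the signature; verify() raises an exception if the message or signature has been altered:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"The budget figures for the fiscal year"

signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification succeeds silently; a tampered message raises InvalidSignature.
private_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")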

Cipher
A cipher (pronounced SAI-fuhr) is any method of encrypting text (concealing its readability and
meaning).

It is also sometimes used to refer to the encrypted text message itself although here the
term ciphertext is preferred.

Its origin is the Arabic sifr, meaning empty or zero.

In addition to the cryptographic meaning, cipher also means (1) someone insignificant, and (2)
a combination of symbolic letters as in an entwined weaving of letters for a monogram.

Some ciphers work by simply realigning the alphabet (for example, A is represented by F, B is
represented by G, and so forth) or otherwise manipulating the text in some consistent pattern.
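That kind of alphabet realignment is easy to sketch; the toy Python function below shifts each letter forward by five places (so A becomes F), a classic Caesar-style substitution that offers no real security:

def shift_cipher(text, shift=5):
    result = []
    for ch in text.upper():
        if ch.isalpha():
            # Realign the alphabet: move the letter 'shift' places forward, wrapping at Z.
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)
    return "".join(result)

print(shift_cipher("ATTACK AT DAWN"))   # FYYFHP FY IFBS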

However, almost all serious ciphers use both a key (a variable that is combined in some way
with the unencrypted text) and an algorithm (a formula for combining the key with the text).

A block cipher is one that breaks a message up into chunks and combines a key with each
chunk (for example, 64-bits of text).

A stream cipher is one that applies a key to each bit, one at a time.

Most modern ciphers are block ciphers.

protocol
In information technology, a protocol is the special set of rules that end points in a
telecommunication connection use when they communicate.

Protocols specify interactions between the communicating entities.

Protocols exist at several levels in a telecommunication connection.

For example, there are protocols for the data interchange at the hardware device level and
protocols for data interchange at the application program level.

In the standard model known as Open Systems Interconnection (OSI), there are one or more
protocols at each layer in the telecommunication exchange that both ends of the exchange
must recognize and observe.

Protocols are often described in an industry or international standard.

The TCP/IP Internet protocols, a common example, consist of:

Transmission Control Protocol (TCP), which uses a set of rules to exchange messages with
other Internet points at the information packet level
Internet Protocol (IP), which uses a set of rules to send and receive messages at the Internet
address level

Additional protocols that include the Hypertext Transfer Protocol (HTTP) and File Transfer
Protocol (FTP), each with defined sets of rules to use with corresponding programs elsewhere
on the Internet

There are many other Internet protocols, such as the Border Gateway Protocol (BGP) and the
Dynamic Host Configuration Protocol (DHCP).

The word protocol comes from the Greek protocollon, meaning a leaf of paper glued to a
manuscript volume that describes the contents.
end-to-end encryption (E2EE)
End-to-end encryption (E2EE) is a method of secure communication that prevents third parties
from accessing data while it's transferred from one end system or device to another.

In E2EE, the data is encrypted on the sender's system or device and only the recipient is able
to decrypt it.

Nobody in between, be they an Internet service provider, application service provider or
hacker, can read it or tamper with it.

The cryptographic keys used to encrypt and decrypt the messages are stored exclusively on
the endpoints, a trick made possible through the use of public key encryption.

Although the key exchange in this scenario is considered unbreakable using known algorithms
and currently obtainable computing power, there are at least two potential weaknesses that
exist outside of the mathematics.

First, each endpoint must obtain the public key of the other endpoint, but a would-be attacker
who could provide one or both endpoints with the attacker's public key could execute a man-
in-the-middle attack.

Additionally, all bets are off if either endpoint has been compromised such that the attacker
can see messages before and after they have been encrypted or decrypted.

The generally employed method for ensuring that a public key is in fact the legitimate key
created by the intended recipient is to embed the public key in a certificate that has
been digitally signed by a well-recognized certificate authority (CA).

Because the CA's public key is widely distributed and generally known, its veracity can be
counted on, and a certificate signed by that public key can be presumed authentic.

Since the certificate associates the recipient's name and public key, the CA would presumably
not sign a certificate that associated a different public key with the same name.

The first widely used E2EE messaging software was Pretty Good Privacy, which secured email
and stored files, as well as securing digital signatures.

Text messaging applications frequently utilize end-to-end encryption, including Jabber,
TextSecure and Apple's iMessage.

Advanced Encryption Standard (AES)


The Advanced Encryption Standard or AES is a symmetric block cipher used by the U.S.
government to protect classified information and is implemented in software and hardware
throughout the world to encrypt sensitive data.
The origins of AES date back to 1997 when the National Institute of Standards and Technology
(NIST) announced that it needed a successor to the aging Data Encryption Standard
(DES), which was becoming vulnerable to brute-force attacks.

This new encryption algorithm would be unclassified and had to be "capable of protecting
sensitive government information well into the next century." It was to be easy to implement in
hardware and software, as well as in restricted environments (for example, in a smart card)
and offer good defenses against various attack techniques.

Choosing AES
The selection process to find this new encryption algorithm was fully open to public scrutiny
and comment; this ensured a thorough, transparent analysis of the designs.

Fifteen competing designs were subject to preliminary analysis by the world cryptographic
community, including the National Security Agency (NSA). In August 1999, NIST selected five
algorithms for more extensive analysis.

These were:
MARS, submitted by a large team from IBM Research
RC6, submitted by RSA Security
Rijndael, submitted by two Belgian cryptographers, Joan Daemen and Vincent Rijmen
Serpent, submitted by Ross Anderson, Eli Biham and Lars Knudsen
Twofish, submitted by a large team of researchers including Counterpane's respected
cryptographer, Bruce Schneier
Implementations of all of the above were tested extensively in the ANSI C and Java languages for
speed and reliability in encryption and decryption, key and algorithm setup time, and
resistance to various attacks, both in hardware- and software-centric systems.

Members of the global cryptographic community conducted detailed analyses (including some
teams that tried to break their own submissions).

After much enthusiastic feedback, debate and analysis, the Rijndael cipher -- a mash of the
Belgian creators' last names Daemen and Rijmen -- was selected as the proposed algorithm for
AES in October 2000 and was published by NIST as U.S. FIPS PUB 197.

The Advanced Encryption Standard became effective as a federal government standard in 2002.

It is also included in the ISO/IEC 18033-3 standard which specifies block ciphers for the
purpose of data confidentiality.
In June 2003, the U.S. government announced that AES could be used to protect classified
information, and it soon became the default encryption algorithm for protecting classified
information as well as the first publicly accessible and open cipher approved by the NSA for
top-secret information.

AES is one of the Suite B cryptographic algorithms used by NSA's Information Assurance
Directorate in technology approved for protecting national security systems.

Its successful use by the U.S. government led to widespread use in the private sector, leading
AES to become the most popular algorithm used in symmetric key cryptography.

The transparent selection process helped create a high level of confidence in AES among
security and cryptography experts.
AES is more secure than its predecessors -- DES and 3DES -- as the algorithm is stronger and
uses longer key lengths.

It also enables faster encryption than DES and 3DES, making it ideal for software applications,
firmware and hardware that require either low latency or high throughput, such
as firewalls and routers. It is used in many protocols such as SSL/TLS and can be found in most
modern applications and devices that need encryption functionality.

How AES encryption works


AES comprises three block ciphers, AES-128, AES-192 and AES-256.

Each cipher encrypts and decrypts data in blocks of 128 bits using cryptographic keys of 128,
192 and 256 bits, respectively.

(Rijndael was designed to handle additional block sizes and key lengths, but the functionality
was not adopted in AES.)

Symmetric or secret-key ciphers use the same key for encrypting and decrypting, so both the
sender and the receiver must know and use the same secret key.
All key lengths are deemed sufficient to protect classified information up to the "Secret" level
with "Top Secret" information requiring either 192- or 256-bit key lengths.

There are 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit
keys -- a round consists of several processing steps that include substitution, transposition and
mixing of the input plaintext and that transform it into the final output of ciphertext.
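For a concrete feel of symmetric AES in practice (a sketch assuming the third-party Python "cryptography" package), the authenticated AES-GCM mode below encrypts and decrypts a short message with a 256-bit key and a random 96-bit nonce:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit secret key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # unique 96-bit nonce per message

ciphertext = aesgcm.encrypt(nonce, b"Attack at dawn", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)

print(ciphertext.hex())
print(plaintext)   # b'Attack at dawn'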

As a cipher, AES has proven reliable.


The only successful attacks against it have been side-channel attacks on weaknesses found in
the implementation or key management of certain AES-based encryption products.

(Side-channel attacks don't use brute force or theoretical weaknesses to break a cipher, but
rather exploit flaws in the way it has been implemented.)

The BEAST browser exploit against the TLS v1.0 protocol is a good example;

TLS can use AES to encrypt data, but due to the information that TLS exposes, attackers
managed to predict the initialization vector block used at the start of the encryption process.

Various researchers have published attacks against reduced-round versions of the Advanced
Encryption Standard, and a research paper published in 2011 demonstrated that using a
technique called a biclique attack could recover AES keys faster than a brute-force attack by a
factor of between three and five, depending on the cipher version.

Even this attack, though, does not threaten the practical use of AES due to its high
computational complexity.

Learning the difference between PGP and SSL

PGP has been around since 1991.

It first became popular in use with the need to secure e-mail messages.

Today it is popping up in business practices more often than not.


Using SSL or a virtual private network (VPN) to protect and share files between businesses can
be considered overkill.

The difference between PGP and SSL is that PGP encrypts stored data (data at rest), while SSL
only encrypts data while it is being transported (data in transit).

If you are transferring sensitive information such as credit card numbers you will want to use
SSL.

If you are storing data, such as on file transfer protocol (FTP) servers, you will want to use PGP.

As you can see, PGP is a powerful tool for securing FTP and can make your life much easier.

Is messaging in symmetric encryption better than PGP email security?

Suppose two people exchange messages using symmetric encryption; every time they
communicate, a session key is generated that encrypts the message using a protocol that
handles session keys like SSL.

They could, alternatively, use PGP to exchange messages.

Do you think in this scenario that PGP or symmetric encryption would offer better security?

It depends on how trusted the local environment is.

Symmetric encryption will ensure non-disclosure between "systems" by encrypting all message
packets between mail servers using a shared encryption key.

PGP ensures non-disclosure of an individual message by encrypting the actual message and
making it viewable only by the sender and recipient.

PGP is a bit more flexible as it can be used when the message traverses an unsecured network
channel between two systems or even if the recipient is on the same system.
As a general guideline, if you have a trusted messaging environment, but the network between
servers is in question, then symmetric encrypted sessions like SSL will work.

If you're exchanging messages that are so sensitive in nature that even the messaging system
administrators shouldn't have access to the message content, like legal or executive
communications, I'd use PGP.

Which public key algorithm is used for encrypting emails?

Although PGP and S/MIME both use public key encryption, expert Joel Dubin explains PGP and
S/MIME's distinct approaches to e-mail encryption.

Which public key algorithm is used for encrypting emails?

You can encrypt email using either Pretty Good Privacy (PGP) or S/MIME.

Unfortunately you can't use both, because the two applications aren't compatible and use
different methods for encryption.

However, both use public key encryption at some point in their respective processes.

Public key or asymmetric encryption is supposed to solve the fundamental problem of securely
distributing a private key over a public medium like the Internet.
It uses two keys: a public key, available to the world, and a private or secret key that is only
kept by its owner.

The two keys work as a pair: a message encrypted with one key can only be decrypted with the other.

The system is secure because even though the two keys are mathematically related, they can't
be derived from each other.

Since only the public key, which is openly available but can't be used to decrypt the message
by itself, is needed to encrypt a message, the private key doesn't have to be distributed in the
wild, where it could be exposed and its secrecy compromised.

PGP was invented by Phil Zimmermann in 1991 and uses two asymmetric algorithms: RSA and
DSA.

RSA was named after its MIT inventors, Ron Rivest, Adi Shamir and Len Adleman.

It uses key lengths ranging from 1024 to 2048 bits.

DSA, or Digital Signature Algorithm, is a U.S. government standard which PGP uses to create
a digital signature for a message to verify the authenticity of the sender.

S/MIME, on the other hand, also uses RSA and DSA, but only for providing digital signatures.

S/MIME, unlike PGP, relies on the use of a certificate authority (CA) for storing certificate
hierarchies, which are used for encrypting messages, instead of public key encryption.

As a result, such encryption is only needed for digital signatures, when necessary.

EXPECT US
