
Module 7 – Final Paper Ricardo Nevarez

Module 7 – Final Paper

OpenSSL Analysis Paper

CSOL 560 – Secure Software Design and Development

Ricardo Nevarez

May 1, 2017

Data transmitted over the internet needs to be secured. It makes sense that this data is protected while in transit across the open public internet. As we continue with online banking, purchases on Amazon, groceries from PeaPod, and email, we expect that the data flowing from our web browsers over the public internet to the servers is encrypted, protecting it from unauthorized access. Secure Sockets Layer (SSL) and its successor Transport Layer Security (TLS), which relies on PKI to authenticate the server, make this possible. Encrypting data from the browser to the web server with SSL/TLS is the security industry standard.

The goal of this analysis is to explore some of OpenSSL's inherent design vulnerabilities discovered within the last year. OpenSSL is a freely available, open-source cryptographic software library that implements the SSL and TLS protocols. We use it because it is considered the standard for encrypting data on the public internet. Even good software programmers will use OpenSSL libraries within their programs without fully understanding the behavior of some or all of the code, which can introduce bugs, including buffer overflows, into their software. This analysis will explore what processes and verification and validation techniques can be implemented within the software design lifecycle to mitigate these bugs.

It is also the goal of this analysis to point out some major design flaws within OpenSSL, such as in the handshake process between the client and the server, to provide coding best practices, and to introduce LibreSSL as an alternative to OpenSSL. LibreSSL is a fork of OpenSSL designed with the goals of better security and the implementation of software development best practices: everything OpenSSL lacks. Finally, this analysis proposes a testing and evaluation plan that includes static and dynamic analysis and verification and validation techniques to reduce software vulnerabilities.

Detailed analysis of one or more major design flaws that have resulted in vulnerabilities

OpenSSL is an open-source project that provides a cryptographic software library implementing the SSL and TLS protocols, along with other primitives such as public-key cryptography, ciphers, hash algorithms, and MACs. Specifically, the SSL/TLS cryptographic library is what encrypts the data communication between the client and the web server. It is also designed to authenticate endpoints through the use of certificates and to assure the connection through message integrity checking. More than half of the website servers on the internet use OpenSSL, with its inherent vulnerabilities. Libraries within OpenSSL include, but are not limited to, libcrypto and libssl, as well as the openssl command-line tool. The command-line tool is used for encryption and decryption with ciphers, SSL/TLS client and server tests, and management of S/MIME signed or encrypted email.
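To illustrate the command-line tool's scope, the commands below are a hedged sketch of typical usage; the cipher, password, and hostname are examples only, and the `-pbkdf2` option assumes OpenSSL 1.1.1 or later.

```shell
# Symmetric encryption with a cipher, then decryption (password-based,
# round-tripping through base64):
echo 'secret data' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -base64 \
  | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo -base64
# prints: secret data

# An SSL/TLS client test: inspect a live server's handshake and certificate
# chain (example.org is a placeholder hostname):
#   openssl s_client -connect example.org:443 -servername example.org
```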

The vulnerabilities within OpenSSL stem from more than one area. They stem from the overwhelming number of lines of code, which run into the hundreds of thousands. They also stem from coders and programmers not being adequately prepared for the features within the SSL/TLS library, which will be addressed later; from allowing obsolete code to persist; and from a lack of involvement from the development team.

The first vulnerability is the TLS renegotiation man-in-the-middle (MITM) vulnerability. This design flaw within the Transport Layer Security (TLS) protocol opens it to a man-in-the-middle attack that renegotiates on behalf of the client during the client-to-server handshake. The figure below shows an SSL/TLS handshake between a client and a web server.

Figure 1. SSL/TLS handshake between a client and a web server.

The vulnerability occurs while the initial handshake is being established. An SSL/TLS renegotiation creates a new handshake while the initial SSL/TLS connection is still being set up. When ClientA reaches out to the web server with a connection request, the server knows only that ClientA wants to connect to it, and nothing more. Next, the web server, ServerA, accepts the connection request and sends back an acknowledgement (ACK), establishing that initial handshake. The vulnerability is present during the renegotiation handshake, when ClientA requests a webpage from ServerA. With ServerA already trusting the initial handshake, the attacker can at this point inject himself on behalf of ClientA, compromising the handshake negotiation and creating a MITM attack. For major companies, and with over half of the websites on the internet using OpenSSL, this kind of vulnerability, and there are many others, should be unacceptable to the internet community.

Another design flaw within OpenSSL allows exploitation of the fragile SSLv2 code beneath the transport layer security (TLS) layer. A web server is vulnerable when both TLS and SSLv2 are enabled (Khandelwal, 2016). This vulnerability affects a large share of the web servers on the internet that use HTTPS, and it also negatively impacts other services that depend on SSL and TLS. The affected web servers and services are vulnerable to this design flaw because of old code, code whose design is not understood, and misconfiguration. The vulnerability is exploited through a cross-protocol attack, because it requires the web server to support two protocols: TLS and SSLv2. The bad agent misuses the old SSLv2 protocol on the web server to capture information about the encryption key. Once the bad agent acquires that information, it can be used in an attempt to attack the TLS security protocol on the web server. The handling of the server's key is what is at risk here under SSLv2: once key material is obtained from a web server that accepts SSLv2 connections, the bad agent can use it to attack the TLS connection.

Figure 2. Cross-protocol attack using SSLv2 against TLS.
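The standard mitigation is simply to disable SSLv2 (and SSLv3) on every server that shares the affected key. As a hedged illustration, the relevant directives in two common web servers look like the fragment below; exact protocol lists depend on the versions deployed:

```
# Apache (mod_ssl)
SSLProtocol all -SSLv2 -SSLv3

# nginx
ssl_protocols TLSv1.2 TLSv1.3;
```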

Discussing key principles and best practices for designing a safe and secure replacement
for OpenSSL

I have discussed above some of OpenSSL's inherent vulnerabilities. Because over two thirds of the websites on the internet use OpenSSL's libraries for banking, doing business, and more, it is important that a light is shed on these vulnerabilities so that the OpenSSL community can fix them. Many vulnerabilities go undiscovered for years, until somewhere the right conditions exist to trigger the vulnerability. What happens next is that the vulnerability is reported to OpenSSL. Giving OpenSSL the benefit of the doubt: because of limited funding and the enormous amount of convoluted code, it is hard to clean up and close many of the reported bugs. It seems that OpenSSL coders and programmers just add features without thoroughly going through the code, and that some reported bugs are completely ignored because they are deemed not important enough to fix. Proper credit is due to the OpenSSL community and its volunteers, who have been fixing reported bugs since 2002. Still, because of funding, programmer expertise, time, and management, OpenSSL has a long way to go before it can be considered a solid open-source toolkit for SSL/TLS.

July 2014 was the initial release of LibreSSL; its stable release came out a few months ago, in February 2017. It is written in the C programming language and assembly. LibreSSL forked off from OpenSSL under Theo de Raadt, whose team considered OpenSSL to be difficult to read and understand, with too many inherent vulnerabilities that were not being addressed, and lacking code efficiency, a proper software development lifecycle, and more. LibreSSL's focus and mission right now is on the TLS/crypto stack from OpenSSL (Project, 2017).

A big contributor to the open-source LibreSSL is Bob Beck and his cohorts (martyb, 2014). Their simple mission is to design a safe and secure replacement for OpenSSL. The intent is to comprehensively inspect and review all the code within the TLS/crypto stack, make the new code easy to read, and implement algorithms that are reliable. LibreSSL will also include modern stream ciphers, message authentication codes (MACs), and elliptic curve ciphers.

Starting from the ground up and doing it the right way is the goal of the LibreSSL team. The challenge is getting the developers to work together; not all programmers code the same way, and that is what makes it a challenge. Certainly, another best practice for producing a solid LibreSSL is to use software tools such as Coverity, and to keep running test analyzers throughout the entire software development process (martyb, 2014). As we have learned from our course content, Coverity is scalable and designed to analyze millions of lines of code in a short time.

To improve on OpenSSL's shortcomings, LibreSSL coders and programmers have already cleaned up thousands of lines of code as part of the protocol conversion project. For example, "wrapper" functions, which are used to call other subroutines, have been defanged. The argument for leaving this code in place but defanging it is that replacing these wrapper functions at runtime is not permitted. Instead, the new library will use intrinsic functions such as malloc, free, calloc, realloc, and reallocarray. These code conversions are required to remove overflow vulnerabilities. For example, where OpenSSL uses malloc plus memset, LibreSSL will use calloc, which always checks for overflow; if an overflow would be triggered, the call simply fails. Another overflow fix in LibreSSL is the use of reallocarray, unlike malloc in OpenSSL. In OpenSSL, a possible overflow occurs at the multiplication in malloc(x * y); in LibreSSL this is converted to reallocarray(x, y), which uses bounds checking to prevent a memory overflow. These are just a few examples of the work being done by the LibreSSL team toward designing a safe and secure replacement for OpenSSL. Perhaps in the future LibreSSL will attain its goal and be recognized as a safe and secure TLS crypto library.

LibreSSL's current success is based on following tried-and-true software development best practices and design implementations. Mr. Beck, who is in large part a contributor to the success of LibreSSL, has some ideas about what constitutes good software practice and implementation: if you can't do it right, don't do it at all (martyb, 2014). Documentation is a big part of this. Documentation needs to be meaningful to everyone who reads it, and it should be standard practice that documentation is kept up to date and available. It should be easy to understand by anyone reading it and coherent throughout, since this allows for better verification of the code when finding bugs.

Proposing a testing and evaluation plan that includes static analysis, dynamic analysis, and
automated verification and validation techniques to reduce or eliminate vulnerabilities

Designing a safe and secure replacement for OpenSSL is no small feat. It will require the replacement to be programmed using sound programming principles (Bosworth, Kabay, & Whyne, 2014). Designing a replacement is also a logistical challenge, especially getting everyone involved on the same page. Technology evolves quickly, and software complexity evolves even faster. Any software design should follow the established Software Design Life Cycle paradigm. Having a software design structure gives stakeholders confidence in the product and the desire to see it succeed. Following a coherent Software Design Life Cycle paradigm opens up the opportunity for a clear, and successful, program design. Other good programming practices mixed in will include heeding compiler warnings, code reviews, use of static code analyzers, unit and fuzz testing, and automation testing.

On top of this, when running test analyzers, check for input validation, bounds checking, string manipulation, error checking, the principle of least privilege, and privilege separation. Mr. Beck, who is a contributor to the success of LibreSSL, suggests applying constant unit testing within the ports tree. Where certain code sections cannot be removed, such as BIO_snprintf and CRYPTO_malloc, because they are exposed APIs, they can be left behind and simply not used. Any APIs that are left in are there for backward compatibility with other external software applications.

As I have stated above, implementing the five categories within the SDLC will help toward designing a safe and secure program. The five categories are Requirements, Design, Development, Test, and Maintenance (Michael, Carol, & Sean).

Requirements: what are the security requirements, the risks, and the threats, and how will
the software be used?
Design: what are the risks, and how will the software be used and designed?
Development: incorporate secure coding, following NIST 800-160 guidelines.
Test: perform risk analysis, run penetration testing, and mitigate any found vulnerabilities.
Maintenance: perform risk analysis again, with ongoing maintenance throughout the life of
the software.

Figure 3. The five phases of the SDLC.

It is important that the coders and programmers on the software development team integrate security within the SSL/TLS libraries to ensure a secure and safe program is designed in a manner that addresses the confidentiality, integrity, and availability of the input data. By following programming best practices during each of these stages, the coder and programmer have ample time to mitigate and effectively respond to potential issues during, and at the end of, each phase before moving on to the next phase in the SDLC. This follows the Waterfall Model (Michael, Carol, & Sean).

Figure 4. Waterfall Model.

It is also going to be important to adhere to other key design principles that directly apply to the areas LibreSSL is working to address: blatantly bad code. In no particular order of importance, coders and programmers alike need to keep it simple. Keep the code simple, because simplicity provides transparency and makes the code easier to read, while complicated code is that much harder to debug for errors. Another benefit of writing simple code comes during the operations and maintenance stage of the SDLC: the simpler the code, the easier it is to maintain.

Another best design principle is to limit users' access, not giving them unnecessary access to resources. The more access you allow, the more users are responsible for and the more damage can occur. To mitigate this risk, simply allow access to only what is needed. Test everything. Testing everything is another important component of designing a safe and secure replacement for OpenSSL. Testing proves the software is useful and works, reveals areas that need improvement, and demonstrates reliability. LibreSSL will need to test everything along the way to catch errors early. There are many methodologies and open-source tools to test code: white-box and black-box testing, unit testing, whole-system testing, regression testing, automated testing, agile testing, and more.

Another best design principle is open design. You want your code to be open so that it is readily available for the community to poke and prod and to check that it is reliable and secure. This especially applies to the security aspects of the software: security by obscurity has been proven not to work. Implementing sound code and design principles will further include separation of privileges, which is important when implementing secure code to obtain least privilege. Separating privileges helps lessen the impact of a possible vulnerability. Among other design principles, you want to permit access to resources based on granted access rights. Another principle is the psychological acceptability of the security algorithms implemented within the code: it is important that these libraries are usable and easily accessible, so that they are used rather than ignored by coders and programmers.

Listed here are eight examples of secure design principles with which to apply protection
mechanisms (Jerome & Michael).
Economy of mechanism: keep the software design simple. This allows for adequate line-
by-line inspection of the code.
Fail-safe defaults: denying access to resources by default is secure software practice.
Complete mediation: every access to every object within the application needs to be
verified and checked for authority.
Open design: the software should be open for review. As mentioned earlier, obscurity
does not provide security; that is what authentication is for.
Separation of privilege: separating the privileges needed to access information is secure.
Consider how two physical keys are used to access a safe deposit box at a bank.
Least privilege: all users and programs should have only the minimum privileges needed
to accomplish their task.
Least common mechanism: minimize the processes shared among running programs.
This mitigates one process jumping to another, intentionally or unintentionally, to
access its data without authorization.
Psychological acceptability: the GUI is easy to use, and security protection controls are
easy to understand and program.

Ensuring a safe and sound product will require testing, recoding, and testing some more. The importance of this layer over the software development life cycle cannot be overstated. Static analysis inspection is one approach, and it can be applied to code using a bug-finding tool such as Coverity, which is designed to find generic memory corruption, data races, and violations of function-ordering constraints (Bessey, et al., 2010). The static analysis approach may detect most of the bugs but miss others, and it will produce both false positive and false negative results; catching bugs this early is what saves money. The other benefit of applying the static analysis approach is verifying the code for security weaknesses.

Dynamic testing, on the other hand, can be applied to the output of the program after it has run, unlike static analysis. Other features of dynamic testing include finding incorrectly coded functions and C programming data structures. Random testing is designed to explore the paths of the execution tree through the data structure. Another benefit of dynamic over static analysis concerns two pointers: if either points to the same memory location in DRAM, the values of the pointers can be checked for equality at runtime without the need for alias analysis (Godefroid, Klarlund, & Sen).

One other process is needed to round off the software development life cycle: verification and validation. An approach is required to detect and stop coding errors early in the SDLC, and this saves money. Verification and validation is the applied procedure of inspecting that the code meets the conditions initially specified and does what it was designed to do. This is going to help develop a safe and secure replacement for OpenSSL. Verification, as mentioned, is done within the software development life cycle, within the methodology used: during each phase of the SDLC, the phase being worked on is verified to be correct before moving on to the next one. Validation is applied after the code is complete, to confirm that the software is what was initially specified. This involves what was mentioned earlier: unit testing, system testing, and integration and functional testing.

Conclusion
OpenSSL has been found to have complex code libraries, which have led it to become an insecure product that users and devices nonetheless rely on for encrypted, secure connections. This is a given within the technical community. It is such a mess that some bugs presented to the OpenSSL open-source group are disregarded because they are considered not important; there is simply too much code to go through, so the best the group can do is patch the important issues, move on, and, it seems, hope for the best. LibreSSL, on the other hand, has responded by cleaning up and fixing the bad code in OpenSSL. LibreSSL will continue toward its goal of simplifying the code, making it easy to read, and keeping it safe and secure as an alternative to OpenSSL.

Resources
Bessey, A., Block, K., Chelf, B., Chou, A., Fulton, B., Hallem, S., et al. (2010, February). A Few
Billion Lines of Code Later. Communications of the ACM, 10.
Bosworth, S., Kabay, M., & Whyne, E. (2014). Computer Security Handbook (5 ed.). Hoboken,
New Jersey: Wiley.
Godefroid, P., Klarlund, N., & Sen, K. (n.d.). Directed Automated Random Testing. 11.
Jerome, S. H., & Michael, S. D. (n.d.). The Protection of Information in Computer Systems.
Retrieved March 28, 2017
Khandelwal, S. (2016, March 01). DROWN Attack - More than 11 Million OpenSSL HTTPS
Websites at Risk. Retrieved May 01, 2017, from The Hacker News:
http://thehackernews.com/2016/03/drown-attack-openssl-vulnerability.html
martyb. (2014, May 18). Bob Beck gives a 30-day status update on LibreSSL. Retrieved April 30,
2017, from Soylentnews.org: https://soylentnews.org/article.pl?sid=14/05/18/0254237
Michael , G., Carol, W., & Sean, B. (n.d.). US-CERT. Retrieved March 27, 2017, from SDLC
Process: https://www.us-cert.gov/bsi/articles/knowledge/sdlc-process
Project Goals. (2017). Retrieved April 30, 2017, from Libressl: https://www.libressl.org/
Project, T. O. (2017). LibreSSL. Retrieved April 29, 2017, from LibreSSL:
https://www.libressl.org/
Vulnerabilities. (2017). Retrieved April 29, 2017, from OpenSSL Cryptography and SSL/TLS
Toolkit: https://www.openssl.org/
