
SOFTWARE SECURITY

In security, why should software be on the same level as crypto, access control, and protocols?

For one thing, virtually all of information security is implemented in software. In effect, software is the foundation on which all of these other security mechanisms rest.

Bad Software
1. NASA Mars Lander (cost $165 million): crashed into Mars due to an error in converting between English and metric units of measure.
2. Denver airport baggage handling system: very buggy software delayed the airport opening by 11 months, and the cost of the delay exceeded $1 million per day.
3. MV-22 Osprey, an advanced military aircraft: faulty software can be fatal.

Normal users of software find bugs and flaws more or less by accident. They hate buggy software, but they've learned to live with it, and they have become very skilled at making bad software work. Attackers look at buggy software as an opportunity, not a problem. They actively search for bugs and flaws in software, and they like bad software. Attackers try to make software misbehave, and flaws often prove very useful.

Complexity
"Complexity is the enemy of security," according to Paul Kocher of Cryptography Research, Inc. A new car contains more lines of code (LOC) than was required to land the Apollo astronauts on the moon. A conservative estimate places the number of bugs in software at 5 per 10,000 lines of code. A typical computer might have 3,000 executable files, each of which contains, perhaps, 100,000 LOC, on average. Then, on average, each executable has 50 bugs, which implies about 150,000 bugs for a single computer. If we extend this calculation to a medium-sized corporate network with 30,000 nodes, we'd expect to find about 4.5 billion bugs in the network. Of course, many of these bugs would be duplicates, but it's still a staggering number. Suppose that only 10% of these bugs are security critical and only 10% of the security-critical bugs are remotely exploitable. Then there are "only" 45 million serious security flaws due to bad software in this network.

Program Flaws
An error is a programming mistake.
An error may lead to an incorrect internal state, called a fault; a fault is internal to the program. A fault may lead to a failure, where the system departs from its expected behavior; a failure is externally observable. In software engineering, we try to ensure that a program does what is intended. Secure software engineering requires that software does what is intended and nothing more. Absolutely secure software is impossible. Program flaws are unintentional, but they can still create security risks.

Possible Attack Scenario
1. Users enter data into a Web form.
2. The Web form is sent to the server.
3. The server writes the data to an array called buffer, without checking the length of the input data.
4. The data overflows the buffer.

Such an overflow might enable an attack. If so, the attack could be carried out by anyone with Internet access.

Buffer Overflows
Suppose that a Web form asks the user to enter data, such as name, age, date of birth, and so on. This data is then sent to a server that writes the data to a buffer that can hold N characters. If the server software does not verify that the length of the data is at most N characters, then a buffer overflow can occur. At the least, it's reasonably likely that the overflowing data will cause the computer to crash. Trudy could take advantage of this to launch a denial of service (DoS) attack. And under certain conditions, Trudy can do much more: she can have code of her choosing execute on the affected machine. It's remarkable that a common programming bug can lead to such an outcome. For example, consider the C source code

    int main() {
        int buffer[10];
        buffer[20] = 37;   /* out-of-bounds write */
    }

When this code is executed, a buffer overflow occurs. The effect of this buffer overflow depends on what resides in memory at the location of buffer[20]. The buffer overflow might overwrite user data or code, or it could overwrite system data or code. Consider, for example, software that is used for authentication. Ultimately, the authentication decision resides in a single bit. If a buffer overflow overwrites this authentication bit, then Trudy can authenticate herself as, say, Alice.

If a buffer overflow overwrites the memory position where the boolean flag is stored, Trudy can overwrite F with T, and the software will then believe that Trudy has been authenticated.

A simplified view of memory organization

The text section is for code, while the data section holds static variables. The heap is for dynamic data, while the stack can be viewed as scratch paper for the processor. The stack pointer, or SP, indicates the top of the stack. Notice that the stack grows up from the bottom, while the heap grows down.

Incomplete Mediation
The C function strcpy(buffer, input) copies the contents of the input string input to the array buffer. A buffer overflow will occur if the length of input is greater than the length of buffer. To prevent such a buffer overflow, the program must validate the input by checking the length of input before attempting to write it to buffer. Failure to do so is an example of the problem of incomplete mediation. Consider data that is input to a Web form. Such data is often transferred to the server by encoding it in a URL. Suppose the URL is interpreted to mean that the customer with ID number 112 has ordered 20 of item number 55, at a cost of $10 each, which, with $5 shipping, gives a total cost of $205. Since the input was checked on the client, the developer of the server software believes it would be wasted effort to check it again on the server. But instead of using the client software, Trudy can construct her own URL, with the price field changed, and send it directly to the server. Since the server doesn't bother to validate the input, Trudy can obtain the same order as above, but for the bargain price of $25 instead of the legitimate price of $205. Recent research has revealed numerous buffer overflows in the Linux kernel, and most of these are due to incomplete mediation. This is somewhat surprising, since the Linux kernel is usually considered to be very good software. After all, it is open source, so anyone can look for flaws in the code (we'll have much more to say about this topic in the next chapter), and it is the kernel, so it must have been written by experienced programmers.
Race Conditions
Security processes should be atomic, that is, they should occur all at once. Race conditions can arise when a security-critical process occurs in stages. In such cases, an attacker may be able to make a change between the stages and thereby break the security. The term race condition refers to a race between the attacker and the next stage of the process, though it's not so much a race as a matter of careful timing for the attacker.

Consider an outdated version of the Unix command mkdir, which creates a new directory. With this version of mkdir, there is a stage that determines authorization followed by a stage that transfers ownership. If Trudy can make a change after the authorization stage but before the transfer of ownership, then she can become the owner of any directory on the system. Race conditions are probably common, but attacks based on race conditions require careful timing, which makes them much more difficult to exploit than buffer overflows. The way to prevent race conditions is to be sure that security-critical processes are atomic.

Salami Attacks
In a salami attack, a programmer slices off an amount of money from individual transactions. These slices must be difficult for the victim to detect. For example, it's a matter of computing folklore that a programmer for a bank used a salami attack to slice off the fractional cents left over from interest calculations. These fractional cents, which were not noticed by the customers or the bank, were supposedly deposited in the programmer's account. According to the legend, the programmer made a lot of money. Although it is not known whether this ever actually occurred, it certainly is possible, and there are many confirmed cases of similar insider attacks.

Linearization Attacks
Linearization attacks have been used to break security in a wide variety of situations. A real-world example of a linearization attack occurred in TENEX, a timeshare system used in ancient times. In TENEX, passwords were verified one character at a time, but careful timing was not even necessary. Instead, it was possible to arrange for a page fault to occur when the next unknown character was guessed correctly. Then a user-accessible page fault register would tell the attacker that a page fault had occurred and, therefore, that the next character had been guessed correctly. This attack could be used to crack any password in seconds.

Trusting Software
Can you trust software? Suppose that a C compiler has a virus. When compiling the login program, this virus creates a backdoor in the form of an account with a known password. Also, when the C compiler is recompiled, the virus incorporates itself into the newly compiled C compiler. Now suppose that you suspect your system is infected with a virus. To be absolutely certain that you fix the problem, you decide to start over from scratch: first, you recompile the C compiler, and then you recompile the operating system, which includes the login program. In this case, you still haven't gotten rid of the problem.
In the real world, an attacker can hide a virus in your virus scanner. Or consider the damage that could be done by a successful attack on online virus signature updates or other automated software updates.

SOFTWARE REVERSE ENGINEERING
SRE, also known as reverse code engineering or simply reversing, can be used for good or not-so-good purposes. The good uses include understanding malware or legacy code. In the attack scenario, Trudy only has an executable, or exe, file, and, in particular, she does not have access to the source code. Trudy might want to simply understand the software better in order to devise an attack, but more likely she wants to modify the software in some way. SRE is usually aimed at code for Microsoft Windows. The essential tools of the trade for SRE include a disassembler, a debugger, and a hex editor. A disassembler converts an executable into assembly code. A disassembler can't always disassemble code correctly, since, for example, it's not always possible to distinguish code from data. This implies that, in general, it's not possible to disassemble an exe file and reassemble the result into a functioning executable. IDA Pro is the hacker's disassembler of choice, and it is a powerful and flexible tool. A debugger is used to set break points, which allows Trudy to step through the code as it executes; this is necessary to completely understand the code. SoftICE is the alpha and omega of debuggers. A hex editor is also a necessary SRE tool: it is the tool Trudy will use to directly modify, or patch, the exe file. UltraEdit is a popular hex editor.

Why do we need both a disassembler and a debugger? The disassembler gives a static view that provides a good overview of the program logic. But to jump to a specific place in the disassembled code would require that the attacker mentally execute the code in order to know the state of all register values. This is an insurmountable obstacle for all but the simplest code. A debugger, on the other hand, is dynamic and allows Trudy to set break points. In this way, Trudy can treat complex code as a black box. Also, not all code disassembles correctly, and for such cases a debugger is required to make sense of the code. The bottom line is that both a disassembler and a debugger are required for any serious SRE task. The necessary technical skills for SRE include a working knowledge of the target assembly language and experience with the necessary tools, primarily a disassembler and a debugger. For Windows, knowledge of the Windows portable executable, or PE, file format is also important.

Anti-Disassembly Techniques
There are several well-known anti-disassembly methods.
For example, it's possible to encrypt the executable file, and, when the exe file is in encrypted form, it can't be disassembled correctly.

Anti-Debugging Techniques
There are several methods that can be used to make debugging more difficult. Since a debugger uses certain debug registers, a program can monitor the use of these registers and stop if they are in use. A program can also monitor for inserted breakpoints, which are another telltale sign of a debugger. Debuggers sometimes don't handle threads well, so interacting threads may confuse the debugger.

DIGITAL RIGHTS MANAGEMENT
DRM provides an excellent example of the limitations of doing security in software. What is DRM? At its most fundamental level, DRM is an attempt to provide remote control over digital content. Suppose Trudy wants to sell her new book in digital form online. Since there is a huge potential market on the Internet, and since Trudy can keep all of the profits and won't need to pay any shipping, this seems like an ideal solution. However, after a few moments of reflection, Trudy realizes that there is a serious problem. What happens if, say, Alice buys Trudy's digital book and then redistributes it for free online? The fundamental problem is that it's trivial to make a perfect copy of digital content and almost as easy to redistribute it to virtually anyone. This is a

dramatic change from the pre-digital era, when copying a book was costly and redistributing it was relatively difficult. Persistent protection is the buzzword for the required level of DRM protection. That is, we must protect the digital content so that the protection stays with the content even after it is delivered. Examples of the kinds of persistent protection restrictions that we might want to enforce on a digital book include the following:
- No copying
- Read once
- Do not open until Christmas
- No forwarding

Another DRM option is to simply give up on enforcing DRM on an open platform such as a PC. But the lure of Internet sales has created a strong interest in DRM, even if it can't be made perfectly robust, and a reasonably high level of DRM protection can be achieved. Closed systems, such as game systems, are very good at enforcing restrictions similar to the persistent protection requirements. In the standard crypto scenario, the attacker Trudy has access to the ciphertext and perhaps some plaintext and some side-channel information. In the DRM scenario, we are trying to enforce persistent protection on a remote computer. What's more, the legitimate recipient is a potential attacker. With DRM it's necessary to use encryption so that the data can be securely delivered and so that Trudy can't trivially remove the persistent protection. But if Trudy is clever, she won't try to attack the crypto directly. Instead, she will try to find the key, which is hidden somewhere in the software. One of the fundamental problems in DRM can thus be reduced to the problem of playing hide and seek with a key in software. Software-based DRM systems are forced to rely on security by obscurity; that is, the security resides in the fact that Trudy doesn't completely understand, for example, how and where the key is hidden.
There is a fundamental limit on the effectiveness of any DRM system, since the so-called analog hole is present in any such system. That is, when the content is rendered, it can be captured in analog form. For example, when digital music is played, it can be recorded using a microphone, regardless of the strength of the DRM protection. Similarly, a digital book can be captured in unprotected form by using a camera to photograph the pages displayed on a computer screen. No DRM system can prevent such attacks. Another interesting feature of DRM is the degree to which human nature matters. For software-based systems, it's clear that absolute DRM security is impossible. So the challenge is to develop something that works in practice.

OPERATING SYSTEM SECURITY FUNCTIONS
The OS must be able to deal effectively with security-critical issues, whether they occur accidentally or as part of a malicious attack. Modern OSs are designed for multiuser environments and multitasking operations, and, at a minimum, they deal with separation, memory protection, and access control.

Separation
The OS must keep users separate from each other as well as separating individual processes. Several forms of separation are possible:

- Physical separation, where users are restricted to separate devices. This provides a strong form of separation, but it is often impractical.
- Temporal separation, where processes execute one at a time. This eliminates many problems that arise due to concurrency and simplifies the job of the OS. Generally speaking, simplicity is the friend of security.
- Logical separation, which can be implemented via sandboxing, where each process has its own sandbox. A process is free to do almost anything within its sandbox, but it is highly restricted as to what it can do outside of its sandbox.
- Cryptographic separation, which can be used to make information unintelligible to an outsider.

Memory Protection
The second fundamental issue an OS must deal with is memory protection. This includes protection for the memory that the OS itself uses as well as the memory of user processes. A fence address can be used for memory protection. A fence is a particular address that users and their processes cannot cross: only the OS can operate on one side of the fence, and users are restricted to the other side. A fence could be static, in which case there is a fixed fence address. A major drawback to this approach is that it places a strict limit on the size of the OS. An alternative is a dynamic fence, which can be implemented using a fence register to specify the current fence address.

Access Control
OSs are the ultimate enforcers of access control. This is one reason why the OS is such an attractive target for attack: a successful attack on the OS will effectively nullify any protection built in at a higher level. A system is trusted if we rely on it for security. If a trusted system fails to provide the expected security, then the security of the system fails. There is a distinction between trust and security. Trust implies reliance; that is, trust is a binary choice: either we trust or we don't. Security, on the other hand, is a judgment of the effectiveness of the security mechanisms.
Security is judged relative to a specified policy or statement. However, security depends on trust, since a trusted component that fails to provide the expected security will break the security of the system. Ideally, we only trust secure systems, and, ideally, all trust relationships are explicit. Since a trusted system is one that we rely on for security, an untrusted system must be one that we don't rely on for security. As a consequence, if all untrusted systems are compromised, the security of the system is unaffected. A curious implication of this simple observation is that only a trusted system can break security. A trusted OS must securely enforce separation, memory protection, and access control. A trusted OS must also prevent information from leaking from one user to another. Any OS will use some form of memory protection and access control to prevent such leaking of information, but we expect more from a trusted OS.
