
Unit IV: Information Security

Information security means protecting information and information systems from unauthorized access,

use, disclosure, disruption, modification, perusal, inspection, recording or destruction. The terms information security, computer security and information assurance are frequently used interchangeably. These fields are interrelated and share the common goals of protecting the confidentiality, integrity and availability of information; however, there are some subtle differences between them. These differences lie primarily in the approach to the subject, the methodologies used, and the areas of concentration. Information security is concerned with the confidentiality, integrity and availability of data regardless of the form the data may take: electronic, print, or other forms. Computer security can focus on ensuring the availability and correct operation of a computer system without concern for the information stored or processed by the computer. Information assurance focuses on the reasons for assurance that information is protected, and is thus reasoning about information security.

Governments, military, corporations, financial institutions, hospitals, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Most of this information is now collected, processed and stored on electronic computers and transmitted across networks to other computers. Should confidential information about a business' customers, finances, or new product line fall into the hands of a competitor, such a breach of security could lead to negative consequences. Protecting confidential information is a business requirement, and in many cases also an ethical and legal requirement. For the individual, information security has a significant effect on privacy, which is viewed very differently in different cultures.

The field of information security has grown and evolved significantly in recent years. There are many ways of gaining entry into the field as a career. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, digital forensics science, etc.

Concepts in information security

For over twenty years, information security has held confidentiality, integrity and availability to be its core principles.

Confidentiality: Confidentiality is the term used to prevent the disclosure of information to unauthorized individuals or systems. For example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to the merchant and from the merchant to a transaction processing network. The system attempts to enforce confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred. Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer screen while you have confidential data displayed on it could be a breach of confidentiality. If a laptop computer containing sensitive information about a company's employees is stolen or sold, it could result in a breach of confidentiality. Giving out confidential information over the telephone is a breach of confidentiality if the caller is not authorized to have the information. Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information a system holds.

Integrity: In information security, integrity means that data cannot be modified undetectably. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Integrity is violated when a message is actively modified in transit. Information security systems typically provide message integrity in addition to data confidentiality.

Availability: For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High-availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks.

Authenticity: In computing, e-business, and information security, it is necessary to ensure that the data, transactions, communications or documents (electronic or physical) are

genuine. It is also important for authenticity to validate that both parties involved are who they claim to be.

Non-repudiation: In law, non-repudiation implies one's intention to fulfill one's obligations under a contract. It also implies that one party to a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. Electronic commerce uses technology such as digital signatures and public key encryption to establish authenticity and non-repudiation.

Testing and error detection

In information theory and coding theory, with applications in computer science and telecommunication, error detection and correction (or error control) are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data.

Introduction: The general idea for achieving error detection and correction is to add some redundancy (i.e., some extra data) to a message, which receivers can use to check the consistency of the delivered message and to recover data determined to be erroneous. Error-detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some deterministic algorithm. If only error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. In a system that uses a non-systematic code, the original message is transformed into an encoded message that has at least as many bits as the original message.

Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models, where errors occur randomly and with a certain probability, and dynamic models, where errors occur primarily in bursts. Consequently, error-detecting and correcting codes can generally be distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors. If the channel capacity cannot be determined, or is highly varying, an error-detection scheme may be combined with a system for retransmissions of erroneous

data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternative approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.

Implementation: Error correction may generally be realized in two different ways. Automatic repeat request (ARQ), sometimes also referred to as backward error correction, is an error control technique whereby an error detection scheme is combined with requests for retransmission of erroneous data. Every block of data received is checked using the error detection code used, and if the check fails, retransmission of the data is requested; this may be done repeatedly, until the data can be verified. With forward error correction (FEC), the sender encodes the data using an error-correcting code (ECC) prior to transmission. The additional information (redundancy) added by the code is used by the receiver to recover the original data. In general, the reconstructed data is what is deemed the "most likely" original data. ARQ and FEC may be combined, such that minor errors are corrected without retransmission, and major errors are corrected via a request for retransmission: this is called hybrid automatic repeat request (HARQ).

Error detection schemes: Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). Random-error-correcting codes based on minimum distance coding can provide a suitable alternative to hash functions when a strict guarantee on the minimum number of errors to be detected is desired. Repetition codes, described below, are special cases of error-correcting codes: although rather inefficient, they find applications for both error correction and detection due to their simplicity.

Repetition codes: A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data is divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern "1011", the four-bit block can be repeated three times, thus producing "1011 1011 1011". However, if this

twelve-bit pattern was received as "1010 1011 1011", where the first block is unlike the other two, it can be determined that an error has occurred. Repetition codes are very inefficient, and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., "1010 1010 1010" in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and they are in fact used in some transmissions of numbers stations.

Parity bits: A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect single errors, or any other odd number (i.e., three, five, etc.) of errors, in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. Extensions and variations on the parity bit mechanism are horizontal redundancy checks, vertical redundancy checks, and "double," "dual," or "diagonal" parity (used in RAID-DP).

Checksums: A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect errors resulting in all-zero messages. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Luhn algorithm and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.

Cyclic redundancy checks (CRCs): A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic code and non-secure hash function designed to detect accidental changes to digital data in computer networks. It is characterized by specification of a so-called generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend, and where the remainder becomes the result. Cyclic codes have favorable properties in that they are well suited for detecting burst errors. CRCs are particularly easy to implement in hardware, and are therefore commonly used in digital networks and storage devices such as hard disk drives. Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is generated by the divisor x + 1.
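To make the parity and CRC ideas above concrete, here is a minimal Python sketch (an illustrative example, not part of the original text) that computes an even parity bit and the check bits of a CRC by bitwise polynomial division over GF(2); the message and generator polynomial are arbitrary toy values.

def even_parity_bit(bits):
    # Parity bit that makes the total number of 1s even.
    return sum(bits) % 2

def crc_remainder(message_bits, generator_bits):
    # Polynomial long division of the message (with appended zero bits)
    # by the generator over GF(2); the remainder gives the check bits.
    degree = len(generator_bits) - 1
    dividend = list(message_bits) + [0] * degree   # room for the check bits
    for i in range(len(message_bits)):
        if dividend[i] == 1:                       # divide only where a 1 leads
            for j, g in enumerate(generator_bits):
                dividend[i + j] ^= g               # XOR is subtraction in GF(2)
    return dividend[-degree:]

data = [1, 0, 1, 1]
print(even_parity_bit(data))          # 1 (three 1s, so one more makes the count even)
print(crc_remainder(data, [1, 1]))    # [1]

Running it on the message 1011 prints 1 and [1]: with the divisor x + 1 the computed CRC coincides with the even parity bit, matching the remark above.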

Cryptographic hash functions: A cryptographic hash function can provide strong assurances about data integrity, provided that changes of the data are only accidental (i.e., due to transmission errors). Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is infeasible to find some input data (other than the one given) that will yield the same hash value. Message authentication codes, also called keyed cryptographic hash functions, provide additional protection against intentional modification by an attacker.

Error-correcting codes: Any error-correcting code can be used for error detection. A code with minimum Hamming distance d can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes, and can be used to detect single errors. The parity bit is an example of a single-error-detecting code. The Berger code is an early example of a unidirectional error(-correcting) code that can detect any number of errors on an asymmetric channel, provided that only transitions of cleared bits to set bits, or of set bits to cleared bits, can occur. A constant-weight code is another kind of unidirectional error-detecting code.

Error correction

Automatic repeat request: Automatic Repeat reQuest (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity.
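The minimum-distance rule stated above (a code with minimum Hamming distance d detects up to d − 1 errors, and can correct up to (d − 1)/2 of them, rounded down) can be checked with a short Python sketch; the three-fold repetition code used below is only an illustrative choice.

from itertools import combinations

def hamming_distance(a, b):
    # Number of positions in which two equal-length code words differ.
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(codebook):
    # Smallest pairwise distance over all distinct code words.
    return min(hamming_distance(a, b) for a, b in combinations(codebook, 2))

codebook = ["000", "111"]            # repetition code: 0 -> 000, 1 -> 111
d = minimum_distance(codebook)
print("minimum distance:", d)                    # 3
print("detects up to", d - 1, "errors")          # 2
print("corrects up to", (d - 1) // 2, "errors")  # 1, by majority vote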

Error-correcting codes: An error-correcting code (ECC) or forward error correction (FEC) code is a system of adding redundant data, or parity data, to a message, such that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) were introduced, either during the process of transmission or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a back channel is not required in forward error correction, and it is therefore suitable for simplex communication such as broadcasting. Error-correcting codes are frequently used in lower-layer communication, as well as for reliable storage in media such as CDs, DVDs, hard disks, and RAM.

Error-correcting codes are usually distinguished between convolutional codes and block codes. Convolutional codes are processed on a bit-by-bit basis. They are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding. Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes and multidimensional parity-check codes. They were followed by a number of efficient codes, Reed-Solomon codes being the most notable due to their current widespread use. Turbo codes and low-density parity-check codes (LDPC) are relatively new constructions that can provide almost optimal efficiency.

Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that, with increasing encoding length, the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols to n encoded symbols. The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of an existential nature, and did not show how to construct codes which are both optimal and have efficient encoding and decoding algorithms.

Hybrid schemes: Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches:[2] In the first, messages are always transmitted with FEC parity data (and error-detection redundancy). A receiver decodes a message using the parity information, and requests retransmission using ARQ only if the parity data was not

sufficient for successful decoding (identified through a failed integrity check). In the second, messages are transmitted without parity data (only with error-detection information). If a receiver detects an error, it requests FEC information from the transmitter using ARQ, and uses it to reconstruct the original message. The latter approach is particularly attractive on an erasure channel when using a rateless erasure code.

Applications: Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and re-transmits it, the re-sent data will arrive too late to be of any use. Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. (This is also why FEC is used in data storage systems such as RAID and distributed data stores.) Applications that use ARQ must have a return channel; applications that have no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ.

Internet: In a typical TCP/IP stack, error control is performed at multiple levels:
- Each Ethernet frame carries a CRC-32 checksum. Frames received with incorrect checksums are discarded by the receiver hardware.
- The IPv4 header contains a checksum protecting the contents of the header. Packets with mismatching checksums are dropped within the network or at the receiver. The checksum was omitted from the IPv6 header in order to minimize processing costs in network routing and because current link layer technology is assumed to provide sufficient error detection (see also RFC 3819).
- UDP has an optional checksum covering the payload and addressing information from the UDP and IP headers. Packets with incorrect checksums are discarded by the operating system network stack. The checksum is optional under IPv4 only, because the data-link layer checksum may already provide the desired level of error protection.
- TCP provides a checksum for protecting the payload and addressing information from the TCP and IP headers. Packets with incorrect checksums are discarded within the network stack, and eventually get retransmitted

using ARQ, either explicitly (such as through triple-ack) or implicitly due to a timeout.

Deep-space telecommunications: Development of error-correction codes was tightly coupled with the history of deep-space missions, due to the extreme dilution of signal power over interplanetary distances and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting from 1968 digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed-Muller codes. The Reed-Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented on the Mariner spacecraft for missions between 1969 and 1977. The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging amongst scientific information of Jupiter and Saturn. This resulted in increased coding requirements, and thus the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 probe additionally supported an implementation of a Reed-Solomon code: the concatenated Reed-Solomon-Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. The CCSDS currently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are being replaced by more powerful codes such as Turbo codes or LDPC codes. The different kinds of deep-space and orbital missions that are conducted suggest that trying to find a "one size fits all" error correction system will be an ongoing problem for some time to come. For missions close to Earth, the nature of the channel noise is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes greater.

Satellite broadcasting (DVB): The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and high-definition TV) and IP data. Transponder availability and bandwidth constraints have limited this growth, because transponder capacity is determined by the selected modulation scheme and forward error correction (FEC) rate. QPSK coupled with traditional Reed-Solomon and Viterbi codes has been used for nearly 20 years for the delivery of digital satellite TV.

Higher-order modulation schemes such as 8PSK, 16QAM and 32QAM have enabled the satellite industry to increase transponder efficiency by several orders of magnitude. This increase in the information rate in a transponder comes at the expense of an increase in the carrier power to meet the threshold requirement for existing antennas. Tests conducted using the latest chipsets demonstrate that the performance achieved by using Turbo codes may be even lower than the 0.8 dB figure assumed in early designs.

Data storage: Error detection and correction codes are often used to improve the reliability of data storage media. A "parity track" was present on the first magnetic tape data storage in 1951. The "Optimal Rectangular Code" used in group code recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include a checksum (most often CRC-32) to detect corruption and truncation, and can employ redundancy and/or parity files to recover portions of corrupted data. Reed-Solomon codes are used in compact discs to correct errors caused by scratches. Modern hard drives use CRC codes to detect, and Reed-Solomon codes to correct, minor errors in sector reads, and to recover data from sectors that have "gone bad" and store that data in the spare sectors. RAID systems use a variety of error correction techniques to correct errors when a hard drive completely fails.

Error-correcting memory: DRAM memory may provide increased protection against soft errors by relying on error-correcting codes. Such error-correcting memory, known as ECC or EDAC-protected memory, is particularly desirable for highly fault-tolerant applications, such as servers, as well as deep-space applications, due to increased radiation. Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy. Interleaving allows the effect of a single cosmic ray potentially upsetting multiple physically neighboring bits to be distributed across multiple words, by associating neighboring bits with different words. As long as a single event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.
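The interleaving technique described above can be sketched in a few lines of Python; the word size, number of words, and burst position below are arbitrary illustrative values, not taken from the text. Bits of different logical words are stored in physically adjacent cells, so a burst that upsets several neighboring cells corrupts at most one bit per word, which a single-bit error-correcting code can then repair.

def interleave(words):
    # Physical layout: word0[0], word1[0], word2[0], word0[1], word1[1], ...
    return [w[i] for i in range(len(words[0])) for w in words]

def deinterleave(cells, n_words):
    # Reassemble the logical words from the physical cell layout.
    return [cells[i::n_words] for i in range(n_words)]

words = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # three 4-bit words
cells = interleave(words)

# A single event upsets three physically adjacent cells...
for i in (4, 5, 6):
    cells[i] ^= 1

# ...but after de-interleaving, each word contains at most one flipped bit.
recovered = deinterleave(cells, len(words))
errors_per_word = [sum(a != b for a, b in zip(w, r))
                   for w, r in zip(words, recovered)]
print(errors_per_word)   # [1, 1, 1]

In this run each recovered word differs from its original in exactly one position, so per-word single-error correction (e.g., a Hamming code) would restore all three words.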

Vulnerability

A vulnerability is a weakness which allows an attacker to reduce a system's information assurance. Vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw. To exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerability is also known as the attack surface. Vulnerability management is the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. This practice generally refers to software vulnerabilities in computing systems.

A security risk may be classified as a vulnerability, but using vulnerability with the same meaning as risk can lead to confusion. The risk is tied to the potential of a significant loss, so there are vulnerabilities without risk: for example, when the affected asset has no value. A vulnerability with one or more known instances of working and fully implemented attacks is classified as an exploitable vulnerability, that is, a vulnerability for which an exploit exists. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software, to when access was removed, a security fix was available/deployed, or the attacker was disabled. Security bug is a narrower concept: there are vulnerabilities that are not related to software; hardware, site, and personnel vulnerabilities are examples of vulnerabilities that are not software security bugs. Constructs in programming languages that are difficult to use properly can be a large source of vulnerabilities.

Classification: Vulnerabilities are classified according to the asset class they are related to:
- hardware
  o susceptibility to humidity
  o susceptibility to dust
  o susceptibility to soiling
  o susceptibility to unprotected storage
- software
  o insufficient testing
  o lack of audit trail
- network
  o unprotected communication lines
  o insecure network architecture
- personnel
  o inadequate recruiting process
  o inadequate security awareness
- site
  o area subject to flood
  o unreliable power source
- organizational
  o lack of regular audits
  o lack of continuity plans
  o lack of security

Causes:
- Complexity: Large, complex systems increase the probability of flaws and unintended access points.
- Familiarity: Using common, well-known code, software, operating systems, and/or hardware increases the probability that an attacker has or can find the knowledge and tools to exploit the flaw.
- Connectivity: More physical connections, privileges, ports, protocols, and services, and the time each of those is accessible, increase vulnerability.
- Password management flaws: The computer user uses weak passwords that could be discovered by brute force, stores the password on the computer where a program can access it, or re-uses passwords between many programs and websites.
- Fundamental operating system design flaws: The operating system designer chooses to enforce suboptimal policies on user/program management. For example, operating systems with policies such as default permit grant every program and every user full access to the entire computer.[18] This operating system flaw allows viruses and malware to execute commands on behalf of the administrator.
- Internet website browsing: Some Internet websites may contain harmful spyware or adware that can be installed automatically on computer systems. After visiting those websites, the computer systems become infected, and personal information will be collected and passed on to third parties.
- Software bugs: The programmer leaves an exploitable bug in a software program. The software bug may allow an attacker to misuse an application.
- Unchecked user input: The program assumes that all user input is safe. Programs that do not check user input can allow unintended direct execution of commands or SQL statements (leading to buffer overflows, SQL injection, or other attacks via non-validated inputs); a minimal sketch of this problem and its countermeasure is given after this list.
- Not learning from past mistakes: for example, most vulnerabilities discovered in IPv4 protocol software were also discovered in the new IPv6 implementations.
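As promised above, the following Python sketch illustrates the unchecked-user-input flaw and the usual parameterized-query countermeasure, using SQLite purely as a stand-in database; the table, rows, and attacker string are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cr3t"), ("bob", "hunter2")])

user_input = "nobody' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input is pasted into the SQL text, so the OR clause
# becomes part of the query and every row is returned.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# Countermeasure: a parameterized query treats the input purely as data,
# so no matching user exists and nothing is returned.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)   # [('alice',), ('bob',)]
print(safe)     # []

The same principle applies to any interpreter or shell: treat user input as data, never as part of the executable statement.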

Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human,[24] so humans should be considered in their different roles as asset, threat, and information resource. Social engineering is an increasing security concern.

Vulnerability consequences: The impact of a security breach can be very high. The fact that IT managers, or upper management, can (easily) know that IT systems and applications have vulnerabilities and yet take no action to manage the IT risk is seen as misconduct in most legislations. Privacy law forces managers to act to reduce the impact or likelihood of that security risk. An information technology security audit is a way to let other, independent people certify that the IT environment is managed properly and to lessen the responsibilities, at least by demonstrating good faith. A penetration test is a form of verification of the weaknesses and countermeasures adopted by an organization: a white hat hacker tries to attack an organization's information technology assets to find out how easy or difficult it is to compromise the IT security.[25] The proper way to professionally manage the IT risk is to adopt an information security management system, such as ISO/IEC 27002 or Risk IT, and follow it, according to the security strategy set forth by the upper management.

One of the key concepts of information security is the principle of defence in depth, i.e. setting up a multilayer defence system that can:
- prevent the exploit
- detect and intercept the attack
- find out the threat agents and prosecute them

An intrusion detection system is an example of a class of systems used to detect attacks. Physical security is a set of measures to protect the information asset physically: if somebody can get physical access to the information asset, it is quite easy to make resources unavailable to its legitimate users. Some sets of criteria to be satisfied by a computer, its operating system and applications in order to meet a good security level have been developed: ITSEC and the Common Criteria are two examples.

Identifying and removing vulnerabilities: Many software tools exist that can aid in the discovery (and sometimes removal) of vulnerabilities in a computer system. Though these tools can provide an auditor with a good overview of possible vulnerabilities present, they cannot replace

human judgment. Relying solely on scanners will yield false positives and a limited-scope view of the problems present in the system. Vulnerabilities have been found in every major operating system, including Windows, Mac OS, various forms of Unix and Linux, OpenVMS, and others. The only way to reduce the chance of a vulnerability being used against a system is through constant vigilance, including careful system maintenance (e.g. applying software patches), best practices in deployment (e.g. the use of firewalls and access controls) and auditing (both during development and throughout the deployment lifecycle).

Examples of vulnerabilities: Vulnerabilities are related to:
- the physical environment of the system
- the personnel
- management
- administration procedures and security measures within the organization
- business operation and service delivery
- hardware
- software
- communication equipment and facilities
- and their combinations.

It is evident that a purely technical approach cannot even protect physical assets: you need administrative procedures to let maintenance personnel enter the facilities, and people with adequate knowledge of the procedures, motivated to follow them with proper care (see social engineering (security)).

Four examples of vulnerability exploits:
- an attacker finds and uses an overflow weakness to install malware to export sensitive data;
- an attacker convinces a user to open an email message with attached malware;
- an insider copies a hardened, encrypted program onto a thumb drive and cracks it at home;
- a flood damages computer systems installed on the ground floor.

Computer crime

Computer crime, or cybercrime, refers to any crime that involves a computer and a network. The computer may have been used in the commission of a crime, or it may be the target. Netcrime refers to criminal exploitation of the Internet. Cybercrimes are defined as: "Offences that are committed against

individuals or groups of individuals with a criminal motive to intentionally harm the reputation of the victim or cause physical or mental harm to the victim directly or indirectly, using modern telecommunication networks such as the Internet (chat rooms, emails, notice boards and groups) and mobile phones (SMS/MMS)". Such crimes may threaten a nation's security and financial health.[5] Issues surrounding this type of crime have become high-profile, particularly those surrounding cracking, copyright infringement, child pornography, and child grooming. There are also problems of privacy when confidential information is lost or intercepted, lawfully or otherwise. Internationally, both governmental and non-state actors engage in cybercrimes, including espionage, financial theft, and other cross-border crimes. Activity crossing international borders and involving the interests of at least one nation-state is sometimes referred to as cyber warfare. The international legal system is attempting to hold actors accountable for their actions through the International Criminal Court.

Topology: Computer crime encompasses a broad range of activities. Generally, however, it may be divided into two categories: (1) crimes that target computers directly; and (2) crimes facilitated by computer networks or devices, the primary target of which is independent of the computer network or device.

Crimes that primarily target computer networks or devices include:
- Computer viruses
- Denial-of-service attacks
- Malware (malicious code)

Crimes that use computer networks or devices to advance other ends include:
- Cyberstalking
- Fraud and identity theft
- Information warfare
- Phishing scams

Spam: Spam, or the unsolicited sending of bulk email for commercial purposes, is unlawful in some jurisdictions. While anti-spam laws are relatively new, limits on unsolicited electronic communications have existed for some time.

Fraud: Computer fraud is any dishonest misrepresentation of fact intended to cause another to do, or refrain from doing, something which causes loss. In this context, the fraud will result in obtaining a benefit by:
- Altering computer input in an unauthorized way. This requires little technical expertise and is not an uncommon form of theft by employees altering the data before entry or entering false data, or by entering unauthorized instructions or using unauthorized processes;
- Altering, destroying, suppressing, or stealing output, usually to conceal unauthorized transactions: this is difficult to detect;
- Altering or deleting stored data;
- Altering or misusing existing system tools or software packages, or altering or writing code for fraudulent purposes.

Other forms of fraud may be facilitated using computer systems, including bank fraud, identity theft, extortion, and theft of classified information. A variety of Internet scams target consumers directly.

Obscene or offensive content: The content of websites and other electronic communications may be distasteful, obscene or offensive for a variety of reasons. In some instances these communications may be illegal. Over 25 jurisdictions place limits on certain speech and ban racist, blasphemous, politically subversive, libelous or slanderous, seditious, or inflammatory material that tends to incite hate crimes. The extent to which these communications are unlawful varies greatly between countries, and even within nations. It is a sensitive area in which the courts can become involved in arbitrating between groups with strong beliefs. One area of Internet pornography that has been the target of the strongest efforts at curtailment is child pornography.

Harassment: Whereas content may be offensive in a non-specific way, harassment directs obscenities and derogatory comments at specific individuals, focusing for example on gender, race, religion, nationality, or sexual orientation. This often occurs in chat rooms, through newsgroups, and by sending hate e-mail to interested parties (see cyberbullying, cyberstalking, harassment by computer, hate crime, online predator, and stalking). Any comment that may be found derogatory or offensive is considered harassment.

Drug trafficking: Drug traffickers are increasingly taking advantage of the Internet to sell their illegal substances through encrypted e-mail and other Internet technology. Some drug traffickers arrange deals at internet cafes, use courier websites to track illegal packages of pills, and swap recipes for amphetamines in restricted-access chat rooms. The rise in Internet drug trades could also be attributed to the lack of face-to-face communication. These virtual exchanges allow more intimidated individuals to more comfortably purchase illegal drugs. The sketchy effects that are often

associated with drug trades are severely minimized, and the filtering process that comes with physical interaction fades away.

Cyber terrorism: Government officials and information technology security specialists have documented a significant increase in Internet problems and server scans since early 2001. There is a growing concern among federal officials that such intrusions are part of an organized effort by cyberterrorists, foreign intelligence services, or other groups to map potential security holes in critical systems. A cyberterrorist is someone who intimidates or coerces a government or organization to advance his or her political or social objectives by launching computer-based attacks against computers, networks, and the information stored on them. Cyber terrorism in general can be defined as an act of terrorism committed through the use of cyberspace or computer resources (Parker 1983). As such, simple propaganda on the Internet that there will be bomb attacks during the holidays can be considered cyberterrorism. There are also hacking activities directed towards individuals or families, organized by groups within networks, that tend to cause fear among people, demonstrate power, or collect information for ruining people's lives, robberies, blackmail, etc.

Cyberextortion is a form of cyberterrorism in which a website, e-mail server, or computer system is subjected to repeated denial-of-service or other attacks by malicious hackers, who demand money in return for promising to stop the attacks. According to the Federal Bureau of Investigation, cyberextortionists are increasingly attacking corporate websites and networks, crippling their ability to operate and demanding payments to restore their service. More than 20 cases are reported each month to the FBI, and many go unreported in order to keep the victim's name out of the public domain. Perpetrators typically use a distributed denial-of-service attack.

-------------------------------------------------------------------------------------------------

Securing the Web server

A secure Web server provides a protected foundation for hosting your Web applications, and Web server configuration plays a critical role in your Web application's security. Badly configured virtual directories, a common mistake, can lead to unauthorized access. A forgotten share can provide a convenient back door, while an overlooked port can be an attacker's front door. Neglected user accounts can permit an attacker to slip by your defenses unnoticed.

What makes a Web server secure? Part of the challenge of securing your Web server is recognizing your goal. As soon as you know what a secure Web server is, you can learn how to apply the configuration settings to create one. This chapter

provides a systematic, repeatable approach that you can use to successfully configure a secure Web server. The chapter begins by reviewing the most common threats that affect Web servers. It then uses this perspective to create a methodology. The chapter then puts the methodology into practice, and takes a step-by-step approach that shows you how to improve your Web server's security. While the basic methodology is reusable across technologies, the chapter focuses on securing a Web server running the Microsoft Windows 2000 or Microsoft Windows Server 2003 operating system and hosting the Microsoft .NET Framework.

This chapter provides a methodology and the steps required to secure your Web server. You can adapt the methodology for your own situation. The steps are modular and demonstrate how you can put the methodology in practice. You can use these procedures on existing Web servers or on new ones. To gain the most from this chapter:
- Read Chapter 2, "Threats and Countermeasures." This will give you a broader understanding of potential threats to Web applications.
- Use the Snapshot. The section "Snapshot of a Secure Web Server" lists and explains the attributes of a secure Web server. It reflects input from a variety of sources including customers, industry experts, and internal Microsoft development and support teams. Use the snapshot table as a reference when configuring your server.
- Use the Checklist. "Checklist: Securing Your Web Server" in the "Checklist" section of this guide provides a printable job aid for quick reference. Use the task-based checklist to quickly evaluate the scope of the required steps and to help you work through the individual steps.
- Use the "How To" Section. The "How To" section in this guide includes the following instructional articles:
  o "How To: Use URLScan"
  o "How To: Use Microsoft Baseline Security Analyzer"
  o "How To: Use IISLockdown"

Threats and Countermeasures

The fact that an attacker can strike remotely makes a Web server an appealing target. Understanding threats to your Web server and being able to identify appropriate countermeasures permits you to anticipate many attacks and thwart the ever-growing numbers of attackers. The main threats to a Web server are:
- Profiling
- Denial of service
- Unauthorized access
- Arbitrary code execution
- Elevation of privileges
- Viruses, worms, and Trojan horses

Figure 16.1 summarizes the more prevalent attacks and common vulnerabilities.

Figure 16.1: Prominent Web server threats and common vulnerabilities

Profiling

Profiling, or host enumeration, is an exploratory process used to gather information about your Web site. An attacker uses this information to attack known weak points.

Vulnerabilities: Common vulnerabilities that make your server susceptible to profiling include:
- Unnecessary protocols
- Open ports
- Web servers providing configuration information in banners

Attacks: Common attacks used for profiling include:
- Port scans
- Ping sweeps
- NetBIOS and server message block (SMB) enumeration

Countermeasures include blocking all unnecessary ports, blocking Internet Control Message Protocol (ICMP) traffic, and disabling unnecessary protocols such as NetBIOS and SMB.
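As a small companion to the port-blocking countermeasure above, here is a Python sketch of a self-audit check (an illustrative addition, not part of the original guide) that reports which TCP ports on a host you administer are accepting connections; the host address and port list are placeholder values.

import socket

def open_tcp_ports(host, ports, timeout=0.5):
    # Return the subset of 'ports' on which 'host' accepts TCP connections.
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                found.append(port)
    return found

# Audit a server you administer; 139/445 (NetBIOS/SMB) should normally be
# blocked on an Internet-facing Web server, while 80/443 are expected.
if __name__ == "__main__":
    print(open_tcp_ports("127.0.0.1", [21, 23, 80, 139, 443, 445, 3389]))

Ports reported as open that are not needed by the Web server should be closed or blocked at the firewall.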

Denial of Service

Denial of service attacks occur when your server is overwhelmed by service requests. The threat is that your Web server will be too overwhelmed to respond to legitimate client requests.

Vulnerabilities: Vulnerabilities that increase the opportunities for denial of service include:
- Weak TCP/IP stack configuration
- Unpatched servers

Attacks: Common denial of service attacks include:
- Network-level SYN floods
- Buffer overflows
- Flooding the Web server with requests from distributed locations

Countermeasures include hardening the TCP/IP stack and consistently applying the latest software patches and updates to system software.

Unauthorized Access

Unauthorized access occurs when a user without correct permissions gains access to restricted information or performs a restricted operation.

Vulnerabilities: Common vulnerabilities that lead to unauthorized access include:
- Weak IIS Web access controls, including Web permissions
- Weak NTFS permissions

Countermeasures include using secure Web permissions, NTFS permissions, and .NET Framework access control mechanisms, including URL authorization.

Arbitrary Code Execution

Code execution attacks occur when an attacker runs malicious code on your server either to compromise server resources or to mount additional attacks against downstream systems.

Vulnerabilities: Vulnerabilities that can lead to malicious code execution include:
- Weak IIS configuration
- Unpatched servers

Attacks: Common code execution attacks include:
- Path traversal
- Buffer overflow leading to code injection

Countermeasures include configuring IIS to reject URLs containing "../" to prevent path traversal, locking down system commands and utilities with restrictive access control lists (ACLs), and installing new patches and updates.

Elevation of Privileges

Elevation of privilege attacks occur when an attacker runs code by using a privileged process account.

Vulnerabilities: Common vulnerabilities that make your Web server susceptible to elevation of privilege attacks include:
- Over-privileged process accounts
- Over-privileged service accounts

Countermeasures include running processes using least privileged accounts and using least privileged service and user accounts.

Viruses, Worms, and Trojan Horses

Malicious code comes in several varieties, including:
- Viruses: programs that are designed to perform malicious acts and cause disruption to an operating system or applications.
- Worms: programs that are self-replicating and self-sustaining.
- Trojan horses: programs that appear to be useful but that actually do damage.

In many cases, malicious code is unnoticed until it consumes system resources and slows down or halts the execution of other programs. For example, the Code Red worm was one of the most notorious to afflict IIS, and it relied upon a buffer overflow vulnerability in an ISAPI filter.

Vulnerabilities: Common vulnerabilities that make you susceptible to viruses, worms, and Trojan horses include:
- Unpatched servers
- Running unnecessary services
- Unnecessary ISAPI filters and extensions

Countermeasures include the prompt application of the latest software patches, disabling unused functionality such as unused ISAPI filters and extensions, and running processes with least privileged accounts to reduce the scope of damage in the event of a compromise.

Methodology for Securing Your Web Server

To secure a Web server, you must apply many configuration settings to reduce the server's vulnerability to attack. So, how do you know where to start, and when do

you know that you are done? The best approach is to organize the precautions you must take and the settings you must configure into categories. Using categories allows you to systematically walk through the securing process from top to bottom, or pick a particular category and complete specific steps.

Configuration Categories

The security methodology in this chapter has been organized into the categories shown in Figure 16.2.

Figure 16.2: Web server configuration categories

The rationale behind the categories is as follows:

Patches and Updates

Many security threats are caused by vulnerabilities that are widely published and well known. In many cases, when a new vulnerability is discovered, the code to exploit it is posted on Internet bulletin boards within hours of the first successful attack. If you do not patch and update your server, you provide opportunities for attackers and malicious code. Patching and updating your server software is a critical first step towards securing your Web server.

Services

Services are prime vulnerability points for attackers, who can exploit the privileges and capabilities of a service to access the local Web server or other downstream servers. If a service is not necessary for your Web server's operation, do not run it on your server. If the service is necessary, secure it and maintain it. Consider monitoring any service to ensure availability. If your service software is not secure, but you need the service, try to find a secure alternative.

Protocols

Avoid using protocols that are inherently insecure. If you cannot avoid using these protocols, take the appropriate measures to provide secure authentication and communication, for example, by using IPSec policies. Examples of insecure, clear-text protocols are Telnet, Post Office Protocol (POP3), Simple Mail Transfer Protocol (SMTP), and File Transfer Protocol (FTP).

Accounts

Accounts grant authenticated access to your computer, and these accounts must be audited. What is the purpose of the user account? How much access does it have? Is it a common account that can be targeted for attack? Is it a service account that can be compromised and must therefore be contained? Configure accounts with least privilege to help prevent elevation of privilege. Remove any accounts that you do not need. Slow down brute force and dictionary attacks with strong password policies, and then audit and alert for logon failures.

Files and Directories

Secure all files and directories with restricted NTFS permissions that only allow access to necessary Windows services and user accounts. Use Windows auditing to allow you to detect when suspicious or unauthorized activity occurs.

Shares

Remove all unnecessary file shares, including the default administration shares if they are not required. Secure any remaining shares with restricted NTFS permissions. Although shares may not be directly exposed to the Internet, a defense strategy with limited and secured shares reduces risk if a server is compromised.

Ports

Services that run on the server listen on specific ports so that they can respond to incoming requests. Audit the ports on your server regularly to ensure that an insecure or unnecessary service is not active on your Web server. If you detect an active port that was not opened by an administrator, this is a sure sign of unauthorized access and a security compromise.

Registry

Many security-related settings are stored in the registry and, as a result, you must secure the registry. You can do this by applying restricted Windows ACLs and by blocking remote registry administration.

Auditing and Logging

Auditing is one of your most important tools for identifying intruders, attacks in progress, and evidence of attacks that have occurred. Use a combination of Windows and IIS auditing features to configure auditing on your Web server. Event and system logs also help you to troubleshoot security problems.

Sites and Virtual Directories

Sites and virtual directories are directly exposed to the Internet. Even though secure firewall configuration and defensive ISAPI filters such as URLScan (which ships with the IISLockdown tool) can block requests for restricted configuration files or program executables, a defense-in-depth strategy is recommended. Relocate sites and virtual directories to non-system partitions and use IIS Web permissions to further restrict access.

Note: By default, IIS 6.0 has security-related configuration settings similar to those made by the IIS Lockdown Tool. Therefore, you do not need to run the IIS Lockdown Tool on Web servers running IIS 6.0. However, if you are upgrading from a previous version of IIS (5.0 or lower) to IIS 6.0, it is recommended that you run the IIS Lockdown Tool to enhance the security of your Web server. IIS 6.0 on Windows Server 2003 has functionality equivalent to URLScan built in. Your decision whether to install UrlScan should be based on your specific organizational requirements. For more information, see "Installing UrlScan 2.5."

Script Mappings

Remove all unnecessary IIS script mappings for optional file extensions to prevent an attacker from exploiting any bugs in the ISAPI extensions that handle these types of files. Unused extension mappings are often overlooked and represent a major security vulnerability.

ISAPI Filters

Attackers have been successful in exploiting vulnerabilities in ISAPI filters. Remove unnecessary ISAPI filters from the Web server.

IIS Metabase

The IIS metabase maintains IIS configuration settings. You must be sure that the security-related settings are appropriately configured, and that access to the metabase file is restricted with hardened NTFS permissions.

Machine.config

The Machine.config file stores machine-level configuration settings applied to .NET Framework applications, including ASP.NET Web applications. Modify the settings in Machine.config to ensure that secure defaults are applied to any ASP.NET application installed on the server.

Note: The .NET Framework 2.0 provides a machine-level Web.config file with settings related to ASP.NET 2.0. You need to review these settings to ensure that secure defaults are applied to any ASP.NET application installed on the server.

Code Access Security

Restrict code access security policy settings to ensure that code downloaded from the Internet or an intranet has no permissions and, as a result, will not be allowed to execute.

Intranet

An intranet is the generic term for a collection of private computer networks within an organization. An intranet uses network technologies as a tool to facilitate communication between people or work groups, to improve the data-sharing capability and overall knowledge base of an organization's employees. Intranets utilize standard network hardware and software technologies like Ethernet, Wi-Fi, TCP/IP, Web browsers and Web servers. An organization's intranet typically includes Internet access but is firewalled so that its computers cannot be reached directly from the outside. A common extension to intranets, called extranets, opens this firewall to provide controlled access to outsiders.

Many schools and non-profit groups have deployed intranets, but an intranet is still seen primarily as a corporate productivity tool. A simple intranet consists of an internal email system and perhaps a message board service. More sophisticated intranets include Web sites and databases containing company news, forms, and personnel information. Besides email and groupware applications, an intranet generally incorporates internal Web sites, documents, and/or databases. The business value of intranet solutions is generally accepted in larger corporations, but their worth has proven very difficult to quantify in terms of time saved or return on investment. An intranet is also known as a corporate portal or private business network.

-------------------------------------------------------------------------------------------

What is Wireless Networking?

The term refers to any kind of networking that does not involve cables. It is a technique that helps entrepreneurs and telecommunications networks to save the cost of cables for networking in specific premises in their installations. The transmission system is usually implemented and administered via radio waves, where the implementation takes place at the physical level.

What are the Types of Wireless Connections?

The types of networks are defined on the basis of their size (that is, the number of machines), their range and their speed of data transfer.

Wireless PAN (Personal Area Network): Such networks interconnect devices in small premises, usually within the reach of a person; for example, invisible infrared light and Bluetooth radio interconnect a headphone to a laptop by virtue of a WPAN. With the installation of Wi-Fi in consumer electronic devices, Wi-Fi PANs are commonly encountered.

Wireless LAN (Local Area Network): The simplest wireless distribution method, used for interlinking two or more devices and providing a connection to the wider Internet through an access point. OFDM or spread-spectrum technologies give clients the freedom to move within a local coverage area while remaining connected to the LAN. LAN data transfer speed is

typically 10 Mbps for Ethernet and 1 Gbps for Gigabit Ethernet, and such networks can accommodate as many as a hundred or even a thousand users.
Wireless MAN - Metropolitan Area Network
A wireless network used to connect, at high speed, multiple wireless LANs that are geographically close (situated anywhere within a few dozen kilometers of each other). The network allows two or more nodes to communicate with each other as if they belonged to the same LAN. The setup makes use of routers or switches connected by high-speed links such as fiber optic cables. WiMAX, described in the IEEE 802.16 standard, is a type of WMAN.
Wireless WAN - Wide Area Network
A wireless WAN usually covers large outdoor areas. The speed of such a network depends on the cost of the connection, which increases with distance. The technology can be used to interconnect the branch offices of a business or to provide a public Internet access system. Developed on the 2.4 GHz band, these systems usually contain access points, base station gateways and wireless bridging relays. Because they can be powered by renewable energy sources, they can operate as stand-alone systems. The most widely available WAN is the Internet.
Mobile device networks
The advent of smartphones has added a new dimension to telecommunications; today's telephones are not meant only for conversation but also to carry data.
GSM - Global System for Mobile Communications
A GSM network is divided into the base station system, the operation and support system, and the switching system. The mobile phone first connects to the base station system, which establishes a connection with the operation and support system; this in turn connects to the switching system, where the call is routed to the specific user.
PCS - Personal Communications Service
A radio band employed in South Asia and North America; the first PCS service was launched by Sprint.
D-AMPS - Digital Advanced Mobile Phone Service
The upgraded version of AMPS, which has faded away due to technological advancement.

TAN - Tiny Area Network and CAN - Campus Area Network are two other types of networks. A TAN is similar to a LAN but comparatively smaller (two to three machines), while a CAN resembles a MAN (with limited bandwidth between each LAN it connects).

The Utility of Wireless Networks
The development of wireless networks is still in progress as their usage grows rapidly. Personal communications have been made easy by the advent of cell phones, and radio satellites are used for networking between continents. Whether small or big, businesses use wireless networks for fast, economical data sharing. Compatibility issues with new devices can sometimes arise, and wireless networks are more exposed to interception than wired ones, but the technology has made uploading and downloading large amounts of data straightforward with low maintenance costs. WEP (Wired Equivalent Privacy) as well as firewalls can be used to secure the network, although WEP is now considered weak and has largely been superseded by WPA/WPA2. Wireless networks are central to the global village.

Software audit
A software audit can mean:
- a software licensing audit, where a user of software is audited for licence compliance
- software quality assurance, where a piece of software is audited for quality
- a software audit review, where a group of people external to a software development organisation examines a software product
- a physical configuration audit
- a functional configuration audit

Software licensing audit
Software Asset Management is an organizational process outlined in ISO/IEC 19770-1. It is also now embraced within ISO 27001:2005 (Information Technology - Security Techniques - Information Security Management Systems Requirements) and ISO/IEC 17799:2005 (Information Technology - Security Techniques - Code of Practice for Information Security Management). Software Asset Management is a comprehensive strategy that has to be addressed from top to bottom in an organization to be effective and to minimize risk. A software

compliance audit is an important subset of Software Asset Management and is covered in the standards referenced above. At its simplest it involves the following steps (a minimal reconciliation sketch follows at the end of this subsection):
1. Identification of software assets.
2. Verification of the software assets, including licenses, usage, and rights.
3. Identification of gaps that may exist between what is installed, the licenses possessed, and the rights of usage.
4. Taking action to close any gaps.
5. Recording the results in a centralized location together with proof-of-purchase records.
The audit process itself should be a continuing activity, and modern SAM software identifies what is installed, where it is installed, and how it is used, and provides a reconciliation of this discovery against licenses and usage rights. This is a very useful means of controlling software installations and lowering licensing costs; large organisations could not do this without discovery and inventory applications. From time to time, internal or external audits may take a forensic approach to establish what is installed on the computers in an organisation, with the purpose of ensuring that it is all legal and authorised and that its processing of transactions or events is correct. Software audits should not be confused with code audits, which are carried out on the source code of a software project.

Software quality assurance
Software quality assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring conformance to one or more standards, such as ISO 9000, or to a model such as CMMI. SQA encompasses the entire software development process, which includes requirements definition, software design, coding, source code control, code reviews, change management, configuration management, testing, release management, and product integration. SQA is organized into goals, commitments, abilities, activities, measurements, and verifications. The American Society for Quality offers a Certified Software Quality Engineer (CSQE) certification, with exams held at least twice a year.
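As a concrete illustration of the gap analysis described in steps 1 to 4 above, the following Python sketch compares a hypothetical inventory of installed software against license records and reports any shortfall. The product names, counts, and data layout are illustrative assumptions, not the data model or output of any particular SAM product.

# Minimal sketch of a license reconciliation: compare installed software (as a
# discovery/inventory tool might report it) against purchased licenses and
# report the gaps. All names and numbers below are hypothetical examples.
from collections import Counter

# Installations found across the estate (product -> installation count)
installed = Counter({"OfficeSuite": 120, "PhotoEditor": 35, "IDE": 12})

# Seats purchased according to procurement records (product -> license count)
licensed = {"OfficeSuite": 100, "PhotoEditor": 40, "IDE": 12}

def reconcile(installed, licensed):
    gaps = {}
    for product in set(installed) | set(licensed):
        shortfall = installed.get(product, 0) - licensed.get(product, 0)
        if shortfall > 0:
            gaps[product] = shortfall  # more installations than licenses held
    return gaps

if __name__ == "__main__":
    for product, shortfall in reconcile(installed, licensed).items():
        print(f"{product}: {shortfall} installation(s) without a corresponding license")

In practice the installed counts would come from discovery and inventory applications and the license counts from centralized proof-of-purchase records, but the reconciliation itself reduces to essentially this comparison.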

Software audit review
A software audit review, or software audit, is a type of software review in which one or more auditors who are not members of the software development organization conduct "an independent examination of a software product, software process, or set of software processes to assess compliance with specifications, standards, contractual agreements, or other criteria". "Software product" mostly, but not exclusively, refers to some kind of technical document. IEEE Std. 1028 offers a list of 32 "examples of software products subject to audit", including documentary products such as various sorts of plans, contracts, specifications, designs, procedures, standards, and reports, but also non-documentary products such as data, test data, and deliverable media. Software audits are distinct from software peer reviews and software management reviews in that they are conducted by personnel external to, and independent of, the software development organization, and are concerned with the compliance of products or processes rather than with their technical content, technical quality, or managerial implications. The term "software audit review" is adopted here to designate the form of software audit described in IEEE Std. 1028.

Physical configuration audit
A Physical Configuration Audit (PCA) is the formal examination of the "as-built" configuration of a configuration item against its technical documentation to establish or verify the configuration item's product baseline. The PCA examines the actual configuration of the Configuration Item (CI) that is representative of the product configuration, in order to verify that the related design documentation matches the design of the deliverable CI. It is also used to validate many of the supporting processes that the contractor uses in the production of the CI. The PCA is further used to verify that any elements of the CI that were redesigned after the completion of the Functional Configuration Audit (FCA) also meet the requirements of the CI's performance specification. Additional PCAs may be performed later during CI production if circumstances such as the following apply:
- The original production line is shut down for several years and then production is restarted.
- The production contract for manufacture of a CI with a fairly complex, or difficult-to-manufacture, design is awarded to a new contractor or vendor.
Re-auditing in these circumstances is advisable regardless of whether the contractor or the government controls the detail production design.[1]
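To make the idea of checking an as-built configuration against its documented baseline concrete, the following short Python sketch compares recorded as-built attributes of a configuration item with the baseline values from its technical documentation and reports deviations. The attribute names and values are purely hypothetical, and a real PCA covers far more than such a field-by-field comparison.

# Illustrative sketch only: compare the "as-built" record of a configuration
# item against its documented product baseline and list any deviations.
# Attribute names and values are hypothetical examples, not a real CI.
documented_baseline = {
    "part_number": "CI-1042-A",
    "firmware_version": "3.2.1",
    "interface_standard": "RS-485",
}

as_built = {
    "part_number": "CI-1042-A",
    "firmware_version": "3.2.4",  # differs from the documented baseline
    "interface_standard": "RS-485",
}

def find_deviations(baseline, built):
    deviations = []
    for attribute, expected in baseline.items():
        actual = built.get(attribute, "<missing>")
        if actual != expected:
            deviations.append((attribute, expected, actual))
    return deviations

if __name__ == "__main__":
    for attribute, expected, actual in find_deviations(documented_baseline, as_built):
        print(f"{attribute}: documented {expected!r}, as built {actual!r}")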
______________________________________________________________

Information ethics

It has been defined as "the branch of ethics that focuses on the relationship between the creation, organization, dissemination, and use of information, and the ethical standards and moral codes governing human conduct in society".[1] It provides a critical framework for considering moral issues concerning informational privacy, moral agency (e.g. whether artificial agents may be moral), new environmental issues (especially how agents should behave in the infosphere), and problems arising from the life-cycle (creation, collection, recording, distribution, processing, etc.) of information, especially ownership and copyright, the digital divide, and digital rights. Information ethics is related to the fields of computer ethics and the philosophy of information.
Dilemmas regarding the life of information are becoming increasingly important in a society that is defined as "the information society". Information transmission and literacy are essential concerns in establishing an ethical foundation that promotes fair, equitable, and responsible practices. Information ethics broadly examines issues related to ownership, access, privacy, security, and community. Information technology affects common issues such as copyright protection, intellectual freedom, accountability, and security. Many of these issues are difficult or impossible to resolve due to fundamental tensions between Western moral philosophies (based on rules, democracy, individual rights, and personal freedoms) and traditional Eastern cultures (based on relationships, hierarchy, collective responsibilities, and social harmony). The multi-faceted dispute between Google and the government of the People's Republic of China reflects some of these fundamental tensions.
Professional codes offer a basis for making ethical decisions and applying ethical solutions to situations involving information provision and use, and they reflect an organization's commitment to responsible information service. Evolving information formats and needs require continual reconsideration of ethical principles and of how these codes are applied. Considerations regarding information ethics influence personal decisions, professional practice, and public policy.[4] Therefore, ethical analysis must provide a framework that takes into consideration many diverse domains regarding how information is distributed.
