
PDMCE SEM 5 MCA Computer Security (CS)

Unit-1
The security problem in computing:
Computer security:
Computer security, also known as cyber security or IT security, is the protection of information
systems from theft or damage to the hardware, the software, and to the information on them, as
well as from disruption or misdirection of the services they provide. It includes controlling
physical access to the hardware, as well as protecting against harm that may come via network
access, data and code injection, and due to malpractice by operators, whether intentional,
accidental, or due to them being tricked into deviating from secure procedures.
Status of security in computing:
In terms of security, computing is very close to the Wild West days.
Some computing professionals & managers do not even recognize the value of the resources they
use or control.
In the event of a computing crime, some companies do not investigate or prosecute.
Characteristics of Computer Intrusion:
A computing system: a collection of hardware, software, data, and people that an
organization uses to do computing tasks.
Any piece of the computing system can become the target of a computing crime.
The weakest point is the most serious vulnerability (the principle of easiest penetration).

Exposure: a form of possible loss or harm
Vulnerability: a weakness in the system
Attack: an exploitation of a vulnerability
Threats: human attacks, natural disasters, errors
Control: a protective measure
Assets: hardware, software, data


Types of Security Breaches


Interruption: example is DoS (Denial of Service)
Interception: peeping eyes
Modification: change of existing data
Fabrication: addition of false or spurious data

Security Goals:
Confidentiality
The assets are accessible only by authorized parties.
Integrity
The assets are modified only by authorized parties, and only in authorized ways.
Availability
Assets are accessible to authorized parties.

Computing System Vulnerabilities


Hardware vulnerabilities
Software vulnerabilities
Data vulnerabilities
Human vulnerabilities?

Software Vulnerabilities:
Destroyed (deleted) software
Stolen (pirated) software
Altered (but still run) software
Logic bomb
Trojan horse
Virus
Trapdoor
Information leaks
Data Security

The principle of adequate protection


Features
Confidentiality: preventing unauthorized access
Integrity: preventing unauthorized modification (e.g., salami attack)
Availability: preventing denial of authorized access

Other Exposed Assets


Storage media
Networks
Access
Key people

Methods of Defense

Encryption
Software controls
Hardware controls
Policies
Physical controls

Encryption
At the heart of all security methods
Confidentiality of data
Some protocols rely on encryption to ensure availability of resources.
Encryption does not solve all computer security problems.

Software controls
Internal program controls
OS controls
Development controls
Software controls are usually the first aspects of computer security that come to mind.

Policies
Policy controls can be simple but effective
Example: frequent changes of passwords
Legal and ethical controls
Gradually evolving and maturing

Principle of Effectiveness
Controls must be used to be effective.
They must be efficient (in time, memory space, and human activity), easy to use, and appropriate.

The Meaning of Computer Security


We have seen that any computer-related system has both theoretical and real weaknesses. The
purpose of computer security is to devise ways to prevent the weaknesses from being exploited.

To understand what preventive measures make the most sense, we consider what we mean when
we say that a system is "secure."

Security Goals

We use the term "security" in many ways in our daily lives. A "security system" protects our
house, warning the neighbours or the police if an unauthorized intruder tries to get in. "Financial
security" involves a set of investments that are adequately funded; we hope the investments will
grow in value over time so that we have enough money to survive later in life. And we speak of
children's "physical security," hoping they are safe from potential harm. Just as each of these terms
has a very specific meaning in the context of its use, so too does the phrase "computer security."

When we talk about computer security, we mean that we are addressing three important aspects of
any computer-related system: confidentiality, integrity, and availability.

Confidentiality ensures that computer-related assets are accessed only by authorized


parties. That is, only those who should have access to something will actually get that
access. By "access," we mean not only reading but also viewing, printing, or simply
knowing that a particular asset exists. Confidentiality is sometimes called secrecy or
privacy.

Integrity means that assets can be modified only by authorized parties or only in
authorized ways. In this context, modification includes writing, changing, changing status,
deleting, and creating.

Availability means that assets are accessible to authorized parties at appropriate times. In
other words, if some person or system has legitimate access to a particular set of objects,
that access should not be prevented. For this reason, availability is sometimes known by its
opposite, denial of service.

Security in computing addresses these three goals. One of the challenges in building a secure
system is finding the right balance among the goals, which often conflict. For example, it is easy
to preserve a particular object's confidentiality in a secure system simply by preventing everyone
from reading that object. However, this system is not secure, because it does not meet the
requirement of availability for proper access. That is, there must be a balance between
confidentiality and availability.

But balance is not all. In fact, these three characteristics can be independent, can overlap (as
shown in Figure 1-3), and can even be mutually exclusive. For example, we have seen that strong
protection of confidentiality can severely restrict availability. Let us examine each of the three
qualities in depth.


Figure 1-3 Relationship Between Confidentiality, Integrity, and Availability.

Confidentiality

You may find the notion of confidentiality to be straightforward: Only authorized people or
systems can access protected data. However, as we see in later chapters, ensuring confidentiality
can be difficult. For example, who determines which people or systems are authorized to access
the current system? By "accessing" data, do we mean that an authorized party can access a single
bit? the whole collection? pieces of data out of context? Can someone who is authorized disclose
those data to other parties?

Confidentiality is the security property we understand best because its meaning is narrower than
the other two. We also understand confidentiality well because we can relate computing examples
to those of preserving confidentiality in the real world.

Integrity

Integrity is much harder to pin down. As Welke and Mayfield [WEL90, MAY91, NCS91b] point
out, integrity means different things in different contexts. When we survey the way some people
use the term, we find several different meanings. For example, if we say that we have preserved
the integrity of an item, we may mean that the item is

precise

accurate

unmodified

modified only in acceptable ways

modified only by authorized people

modified only by authorized processes

consistent

internally consistent

meaningful and usable

Integrity can also mean two or more of these properties. Welke and Mayfield recognize three
particular aspects of integrity: authorized actions, separation and protection of resources, and
error detection and correction. Integrity can be enforced in much the same way as can
confidentiality: by rigorous control of who or what can access which resources in what ways.
Some forms of integrity are well represented in the real world, and those precise representations
can be implemented in a computerized environment. But not all interpretations of integrity are
well reflected by computer implementations.

Availability

Availability applies both to data and to services (that is, to information and to information
processing), and it is similarly complex. As with the notion of confidentiality, different people
expect availability to mean different things. For example, an object or service is thought to be
available if

It is present in a usable form.

It has capacity enough to meet the service's needs.

It is making clear progress, and, if in wait mode, it has a bounded waiting time.

The service is completed in an acceptable period of time.

We can construct an overall description of availability by combining these goals. We say a data
item, service, or system is available if

There is a timely response to our request.

Resources are allocated fairly so that some requesters are not favored over others.

The service or system involved follows a philosophy of fault tolerance, whereby hardware
or software faults lead to graceful cessation of service or to work-arounds rather than to
crashes and abrupt loss of information.

The service or system can be used easily and in the way it was intended to be used.

Concurrency is controlled; that is, simultaneous access, deadlock management, and


exclusive access are supported as required.

As you can see, expectations of availability are far-reaching. Indeed, the security community is
just beginning to understand what availability implies and how to ensure it. A small, centralized
control of access is fundamental to preserving confidentiality and integrity, but it is not clear that a
single access control point can enforce availability. Much of computer security's past success has
focused on confidentiality and integrity; full implementation of availability is security's next great
challenge.

Vulnerabilities

When we prepare to test a system, we usually try to imagine how the system can fail; we then look
for ways in which the requirements, design, or code can enable such failures. In the same way,

when we prepare to specify, design, code, or test a secure system, we try to imagine the
vulnerabilities that would prevent us from reaching one or more of our three security goals.

It is sometimes easier to consider vulnerabilities as they apply to all three broad categories of
system resources (hardware, software, and data), rather than to start with the security goals
themselves. Figure 1-4 shows the types of vulnerabilities we might find as they apply to the assets
of hardware, software, and data. These three assets and the connections among them are all
potential security weak points. Let us look in turn at the vulnerabilities of each asset.

Figure 1-4 Vulnerabilities of Computing Systems.

Hardware Vulnerabilities

Hardware is more visible than software, largely because it is composed of physical objects.
Because we can see what devices are hooked to the system, it is rather simple to attack by adding
devices, changing them, removing them, intercepting the traffic to them, or flooding them with
traffic until they can no longer function. However, designers can usually put safeguards in place.

But there are other ways that computer hardware can be attacked physically. Computers have been
drenched with water, burned, frozen, gassed, and electrocuted with power surges. People have
spilled soft drinks, corn chips, ketchup, beer, and many other kinds of food on computing devices.
Mice have chewed through cables. Particles of dust, and especially ash in cigarette smoke, have
threatened precisely engineered moving parts. Computers have been kicked, slapped, bumped,
jarred, and punched. Although such attacks might be intentional, most are not; this abuse might be
considered "involuntary machine slaughter": accidental acts not intended to do serious damage to
the hardware involved.

A more serious attack, "voluntary machine slaughter" or "machinicide," usually involves someone
who actually wishes to harm the computer hardware or software. Machines have been shot with
guns, stabbed with knives, and smashed with all kinds of things. Bombs, fires, and collisions have
destroyed computer rooms. Ordinary keys, pens, and screwdrivers have been used to short-out
circuit boards and other components. Devices and whole systems have been carried off by thieves.
The list of the kinds of human attacks perpetrated on computers is almost endless.

In particular, deliberate attacks on equipment, intending to limit availability, usually involve theft
or destruction. Managers of major computing centers long ago recognized these vulnerabilities and
installed physical security systems to protect their machines. However, the proliferation of PCs,
especially laptops, as office equipment has resulted in several thousands of dollars' worth of
equipment sitting unattended on desks outside the carefully protected computer room. (Curiously,
the supply cabinet, containing only a few hundred dollars' worth of pens, stationery, and paper
clips, is often locked.) Sometimes the security of hardware components can be enhanced greatly
by simple physical measures such as locks and guards.

Laptop computers are especially vulnerable because they are designed to be easy to carry. (See
Sidebar 1-3 for the story of a stolen laptop.) Safeware Insurance reported 600,000 laptops stolen in
2003. Credent Technologies reported that 29 percent were stolen from the office, 25 percent from
a car, and 14 percent in an airport. Stolen laptops are almost never recovered: The FBI reports 97
percent were not returned [SAI05].

Record Record Loss

The record for number of personal records lost stands at 26.5 million.

Yes, 26.5 million records were on the hard drive of a laptop belonging to the U.S. Veterans
Administration (V.A.). The lost data included names, addresses, social security numbers, and birth
dates of all veterans who left the service after 1975, as well as any disabled veterans who filed a
claim for disability after 1975, as well as some spouses. The data were contained on the hard drive
of a laptop stolen on 3 May 2006 near Washington D.C. A V.A. employee took the laptop home to
work on the data, a practice that had been going on for three years.

The unasked, and therefore unanswered, question in this case is why the employee needed names,
social security numbers, and birth dates of all veterans at home? One supposes the employee was
not going to print 26.5 million personal letters on a home computer. Statistical trends, such as
number of claims, type of claim, or time to process a claim, could be determined without birth
dates and social security numbers.

Computer security professionals repeatedly find that the greatest security threat is from insiders, in
part because of the quantity of data to which they need access to do their jobs. The V.A. chief
testified to Congress that his agency had failed to heed years of warnings of lax security
procedures. Now all employees have been ordered to attend a cybersecurity training course.

Software Vulnerabilities

Computing equipment is of little use without the software (operating system, controllers, utility
programs, and application programs) that users expect. Software can be replaced, changed, or
destroyed maliciously, or it can be modified, deleted, or misplaced accidentally. Whether
intentional or not, these attacks exploit the software's vulnerabilities.

Sometimes, the attacks are obvious, as when the software no longer runs. More subtle are attacks
in which the software has been altered but seems to run normally. Whereas physical equipment
usually shows some mark of inflicted injury when its boundary has been breached, the loss of a
line of source or object code may not leave an obvious mark in a program. Furthermore, it is
possible to change a program so that it does all it did before, and then some. That is, a malicious
intruder can "enhance" the software to enable it to perform functions you may not find desirable.
In this case, it may be very hard to detect that the software has been changed, let alone to
determine the extent of the change.

A classic example of exploiting software vulnerability is the case in which a bank worker realized
that software truncates the fractional interest on each account. In other words, if the monthly

interest on an account is calculated to be $14.5467, the software credits only $14.54 and ignores
the $.0067. The worker amended the software so that the throw-away interest (the $.0067) was
placed into his own account. Since the accounting practices ensured only that all accounts
balanced, he built up a large amount of money from the thousands of account throw-aways
without detection. It was only when he bragged to a colleague of his cleverness that the scheme
was discovered.
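To make the arithmetic concrete, here is a small Python sketch of that truncation scheme; the
account balances and interest rate are made-up figures, used only to show how the discarded
fractions accumulate.

# Illustrative sketch of the truncated-interest ("salami") scheme described above.
# The balances and rate are hypothetical; only the arithmetic matters.
balances = [1940.12, 873.55, 12504.90, 455.31]    # hypothetical accounts
rate = 0.0075                                     # hypothetical monthly interest rate

skimmed = 0.0
for balance in balances:
    interest = balance * rate
    credited = int(interest * 100) / 100          # truncate to whole cents
    skimmed += interest - credited                # the discarded fraction

print(f"Total skimmed across {len(balances)} accounts: ${skimmed:.4f}")
# Over thousands of accounts these fractions of a cent add up,
# while every individual account still "balances" to the cent.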

Software Deletion

Software is surprisingly easy to delete. Each of us has, at some point in our careers, accidentally
erased a file or saved a bad copy of a program, destroying a good previous copy. Because of
software's high value to a commercial computing center, access to software is usually carefully
controlled through a process called configuration management so that software cannot be deleted,
destroyed, or replaced accidentally. Configuration management uses several techniques to ensure
that each version or release retains its integrity. When configuration management is used, an old
version or release can be replaced with a newer version only when it has been thoroughly tested to
verify that the improvements work correctly without degrading the functionality and performance
of other functions and services.

Software Modification

Software is vulnerable to modifications that either cause it to fail or cause it to perform an
unintended task. Indeed, because software is so susceptible to "off by one" errors, it is quite easy
to modify. Changing a bit or two can convert a working program into a failing one. Depending on
which bit was changed, the program may crash when it begins or it may execute for some time
before it falters.

With a little more work, the change can be much more subtle: The program works well most of the
time but fails in specialized circumstances. For instance, the program may be maliciously
modified to fail when certain conditions are met or when a certain date or time is reached. Because
of this delayed effect, such a program is known as a logic bomb. For example, a disgruntled
employee may modify a crucial program so that it accesses the system date and halts abruptly after
July 1. The employee might quit on May 1 and plan to be at a new job miles away by July.

Another type of change can extend the functioning of a program so that an innocuous program has
a hidden side effect. For example, a program that ostensibly structures a listing of files belonging
to a user may also modify the protection of all those files to permit access by another user.

Other categories of software modification include

Trojan horse: a program that overtly does one thing while covertly doing another

virus: a specific type of Trojan horse that can be used to spread its "infection" from one
computer to another

trapdoor: a program that has a secret entry point

information leaks in a program: code that makes information accessible to unauthorized
people or programs

Software Theft

This attack includes unauthorized copying of software. Software authors and distributors are
entitled to fair compensation for use of their product, as are musicians and book authors.
Unauthorized copying of software has not been stopped satisfactorily. As we see in Chapter 11, the
legal system is still grappling with the difficulties of interpreting paper-based copyright laws for
electronic media.

Data Vulnerabilities

Hardware security is usually the concern of a relatively small staff of computing center
professionals. Software security is a larger problem, extending to all programmers and analysts
who create or modify programs. Computer programs are written in a dialect intelligible primarily
to computer professionals, so a "leaked" source listing of a program might very well be
meaningless to the general public.

Printed data, however, can be readily interpreted by the general public. Because of its visible
nature, a data attack is a more widespread and serious problem than either a hardware or software
attack. Thus, data items have greater public value than hardware and software because more
people know how to use or interpret data.

By themselves, out of context, pieces of data have essentially no intrinsic value. For example, if
you are shown the value "42," it has no meaning for you unless you know what the number
represents. Likewise, "326 Old Norwalk Road" is of little use unless you know the city, state, and
country for the address. For this reason, it is hard to measure the value of a given data item.

On the other hand, data items in context do relate to cost, perhaps measurable by the cost to
reconstruct or redevelop damaged or lost data. For example, confidential data leaked to a
competitor may narrow a competitive edge. Data incorrectly modified can cost human lives. To
see how, consider the flight coordinate data used by an airplane that is guided partly or fully by
software, as many now are. Finally, inadequate security may lead to financial liability if certain
personal data are made public. Thus, data have a definite value, even though that value is often
difficult to measure.

Typically, both hardware and software have a relatively long life. No matter how they are valued
initially, their value usually declines gradually over time. By contrast, the value of data over time
is far less predictable or consistent. Initially, data may be valued highly. However, some data items
are of interest for only a short period of time, after which their value declines precipitously.

To see why, consider the following example. In many countries, government analysts periodically
generate data to describe the state of the national economy. The results are scheduled to be
released to the public at a predetermined time and date. Before that time, access to the data could
allow someone to profit from advance knowledge of the probable effect of the data on the stock
market. For instance, suppose an analyst develops the data 24 hours before their release and then
wishes to communicate the results to other analysts for independent verification before release.
The data vulnerability here is clear, and, to the right people, the data are worth more before the
scheduled release than afterward. However, we can protect the data and control the threat in
simple ways. For example, we could devise a scheme that would take an outsider more than 24
hours to break; even though the scheme may be eminently breakable (that is, an intruder could
eventually reveal the data), it is adequate for those data because confidentiality is not needed
beyond the 24-hour period.

Data security suggests the second principle of computer security.

Principle of Adequate Protection: Computer items must be protected only until they lose
their value. They must be protected to a degree consistent with their value.

This principle says that things with a short life can be protected by security measures that are
effective only for that short time. The notion of a small protection window applies primarily to
data, but it can in some cases be relevant for software and hardware, too.

Intruders take advantage of vulnerabilities to break in by whatever means they can.

Figure 1-5 illustrates how the three goals of security apply to data. In particular, confidentiality
prevents unauthorized disclosure of a data item, integrity prevents unauthorized modification, and
availability prevents denial of authorized access.

Figure 1-5 Security of Data.

Data Confidentiality

Data can be gathered by many means, such as tapping wires, planting bugs in output devices,
sifting through trash receptacles, monitoring electromagnetic radiation, bribing key employees,
inferring one data point from other values, or simply requesting the data. Because data are often
available in a form people can read, the confidentiality of data is a major concern in computer
security.

Top Methods of Attack

In 2006, the U.K. Department of Trade and Industry (DTI) released results of its latest annual
survey of businesses regarding security incidents [PWC06]. Of companies surveyed, 62 percent
reported one or more security breaches during the year (down from 74 percent two years earlier).
The median number of incidents was 8.

In 2006, 29 percent of respondents (compared to 27 percent in 2004) reported an accidental
security incident, and 57 percent (compared to 68 percent) reported a malicious incident. The
percentage reporting a serious incident fell to 23 percent from 39 percent.

The top type of attack was virus or other malicious code at 35 percent (down significantly from 50
percent two years earlier). Staff misuse of data or resources was stable at 21 percent (versus 22
percent). Intrusion from outside (including hacker attacks) was constant at 17 percent in both
periods, incidents involving fraud or theft were down to 8 percent from 11 percent, and failure of
equipment was up slightly to 29 percent from 27 percent.

Attempts to break into a system from outside get much publicity. Of the respondents, 5 percent
reported they experienced hundreds of such attacks a day, and 17 percent reported "several a day."

Data are not just numbers on paper; computer data include digital recordings such as CDs and
DVDs, digital signals such as network and telephone traffic, and broadband communications such
as cable and satellite TV. Other forms of data are biometric identifiers embedded in passports,
online activity preferences, and personal information such as financial records and votes.
Protecting this range of data types requires many different approaches.

Data Integrity

Stealing, buying, finding, or hearing data requires no computer sophistication, whereas modifying
or fabricating new data requires some understanding of the technology by which the data are
transmitted or stored, as well as the format in which the data are maintained. Thus, a higher level
of sophistication is needed to modify existing data or to fabricate new data than to intercept
existing data. The most common sources of this kind of problem are malicious programs, errant
file system utilities, and flawed communication facilities.

Data are especially vulnerable to modification. Small and skillfully done modifications may not be
detected in ordinary ways. For instance, we saw in our truncated interest example that a criminal
can perform what is known as a salami attack: The crook shaves a little from many accounts and
puts these shavings together to form a valuable result, like the meat scraps joined in a salami.

A more complicated process is trying to reprocess used data items. With the proliferation of
telecommunications among banks, a fabricator might intercept a message ordering one bank to
credit a given amount to a certain person's account. The fabricator might try to replay that
message, causing the receiving bank to credit the same account again. The fabricator might also
try to modify the message slightly, changing the account to be credited or the amount, and then
transmit this revised message.

Other Exposed Assets

We have noted that the major points of weakness in a computing system are hardware, software,
and data. However, other components of the system may also be possible targets. In this section,
we identify some of these other points of attack.

Networks

Networks are specialized collections of hardware, software, and data. Each network node is itself a
computing system; as such, it experiences all the normal security problems. In addition, a network
must confront communication problems that involve the interaction of system components and
outside resources. The problems may be introduced by a very exposed storage medium or access
from distant and potentially untrustworthy computing systems.

Thus, networks can easily multiply the problems of computer security. The challenges are rooted
in a network's lack of physical proximity, use of insecure shared media, and the inability of a
network to identify remote users positively.

Access

Access to computing equipment leads to three types of vulnerabilities. In the first, an intruder may
steal computer time to do general-purpose computing that does not attack the integrity of the
system itself. This theft of computer services is analogous to the stealing of electricity, gas, or
water. However, the value of the stolen computing services may be substantially higher than the
value of the stolen utility products or services. Moreover, the unpaid computing access spreads the
true costs of maintaining the computing system to other legitimate users. In fact, the unauthorized
access risks affecting legitimate computing, perhaps by changing data or programs. A second
vulnerability involves malicious access to a computing system, whereby an intruding person or
system actually destroys software or data. Finally, unauthorized access may deny service to a
legitimate user. For example, a user who has a time-critical task to perform may depend on the
availability of the computing system. For all three of these reasons, unauthorized access to a
computing system must be prevented.

Key People

People can be crucial weak points in security. If only one person knows how to use or maintain a
particular program, trouble can arise if that person is ill, suffers an accident, or leaves the
organization (taking her knowledge with her). In particular, a disgruntled employee can cause
serious damage by using inside knowledge of the system and the data that are manipulated. For
this reason, trusted individuals, such as operators and systems programmers, are usually selected
carefully because of their potential ability to affect all computer users.

We have described common assets at risk. In fact, there are valuable assets in almost any computer
system. (See Sidebar 1-5 for an example of exposed assets in ordinary business dealings.)

Next, we turn to the people who design, build, and interact with computer systems, to see who can
breach the systems' confidentiality, integrity, and availability.

Computer Criminals
In television and film westerns, the bad guys always wore shabby clothes, looked mean and
sinister, and lived in gangs somewhere out of town. By contrast, the sheriff dressed well, stood
proud and tall, was known and respected by everyone in town, and struck fear in the hearts of
most criminals.

To be sure, some computer criminals are mean and sinister types. But many more wear business
suits, have university degrees, and appear to be pillars of their communities. Some are high school
or university students. Others are middle-aged business executives. Some are mentally deranged,
overtly hostile, or extremely committed to a cause, and they attack computers as a symbol. Others
are ordinary people tempted by personal profit, revenge, challenge, advancement, or job security.
No single profile captures the characteristics of a "typical" computer criminal, and many who fit
the profile are not criminals at all.

Whatever their characteristics and motivations, computer criminals have access to enormous
amounts of hardware, software, and data; they have the potential to cripple much of effective
business and government throughout the world. In a sense, then, the purpose of computer security
is to prevent these criminals from doing damage.

For the purposes of studying computer security, we say computer crime is any crime involving a
computer or aided by the use of one. Although this definition is admittedly broad, it allows us to
consider ways to protect ourselves, our businesses, and our communities against those who use
computers maliciously.

The U.S. Federal Bureau of Investigation regularly reports uniform crime statistics. The data do
not separate computer crime from crime of other sorts. Moreover, many companies do not report
computer crime at all, perhaps because they fear damage to their reputation, they are ashamed to
have allowed their systems to be compromised, or they have agreed not to prosecute if the
criminal will "go away." These conditions make it difficult for us to estimate the economic losses
we suffer as a result of computer crime; our dollar estimates are really only vague suspicions. Still,
the estimates, ranging from $300 million to $500 billion per year, tell us that it is important for us
to pay attention to computer crime and to try to prevent it or at least to moderate its effects.

One approach to prevention or moderation is to understand who commits these crimes and why.
Many studies have attempted to determine the characteristics of computer criminals. By studying
those who have already used computers to commit crimes, we may be able in the future to spot
likely criminals and prevent the crimes from occurring. In this section, we examine some of these
characteristics.

Amateurs
Amateurs have committed most of the computer crimes reported to date. Most embezzlers are not
career criminals but rather are normal people who observe a weakness in a security system that
allows them to access cash or other valuables. In the same sense, most computer criminals are
ordinary computer professionals or users who, while doing their jobs, discover they have access to
something valuable.

When no one objects, the amateur may start using the computer at work to write letters, maintain
soccer league team standings, or do accounting. This apparently innocent time-stealing may
expand until the employee is pursuing a business in accounting, stock portfolio management, or
desktop publishing on the side, using the employer's computing facilities. Alternatively, amateurs
may become disgruntled over some negative work situation (such as a reprimand or denial of

promotion) and vow to "get even" with management by wreaking havoc on a computing
installation.

Crackers or Malicious Hackers


System crackers, often high school or university students, attempt to access computing facilities
for which they have not been authorized. Cracking a computer's defenses is seen as the ultimate
victimless crime. The perception is that nobody is hurt or even endangered by a little stolen
machine time. Crackers enjoy the simple challenge of trying to log in, just to see whether it can be
done. Most crackers can do their harm without confronting anybody, not even making a sound. In
the absence of explicit warnings not to trespass in a system, crackers infer that access is permitted.
An underground network of hackers helps pass along secrets of success; as with a jigsaw puzzle, a
few isolated pieces joined together may produce a large effect. Others attack for curiosity, personal
gain, or self-satisfaction. And still others enjoy causing chaos, loss, or harm. There is no common
profile or motivation for these attackers.

Career Criminals
By contrast, the career computer criminal understands the targets of computer crime. Criminals
seldom change fields from arson, murder, or auto theft to computing; more often, criminals begin
as computer professionals who engage in computer crime, finding the prospects and payoff good.
There is some evidence that organized crime and international groups are engaging in computer
crime. Recently, electronic spies and information brokers have begun to recognize that trading in
companies' or individuals' secrets can be lucrative.

Recent attacks have shown that organized crime and professional criminals have discovered just
how lucrative computer crime can be. Mike Danseglio, a security project manager with Microsoft,
said, "In 2006, the attackers want to pay the rent. They don't want to write a worm that destroys
your hardware. They want to assimilate your computers and use them to make money" [NAR06a].
Mikko Hyppönen, Chief Research Officer with the Finnish security company F-Secure, agrees that
today's attacks often come from Russia, Asia, and Brazil and the motive is now profit, not fame
[BRA06]. Ken Dunham, Director of the Rapid Response Team for VeriSign, says he is "convinced
that groups of well-organized mobsters have taken control of a global billion-dollar crime network
powered by skillful hackers" [NAR06b].

Snow [SNO05] observes that a hacker wants a score, bragging rights. Organized crime wants a
resource; they want to stay and extract profit from the system over time. These different objectives
lead to different approaches: The hacker can use a quick-and-dirty attack, whereas the professional
attacker wants a neat, robust, and undetected method.

As mentioned earlier, some companies are reticent to prosecute computer criminals. In fact, after
having discovered a computer crime, the companies are often thankful if the criminal quietly
resigns. In other cases, the company is (understandably) more concerned about protecting its
assets and so it closes down an attacked system rather than gathering evidence that could lead to
identification and conviction of the criminal. The criminal is then free to continue the same illegal
pattern with another company.
Terrorists
The link between computers and terrorism is quite evident. We see terrorists
using computers in three ways:

Targets of attack: denial-of-service attacks and web site defacements are popular for any
political organization because they attract attention to the cause and bring undesired
negative attention to the target of the attack.

Propaganda vehicles: web sites, web logs, and e-mail lists are effective, fast, and
inexpensive ways to get a message to many people.

Methods of attack: to launch offensive attacks requires use of computers.

We cannot accurately measure the amount of computer-based terrorism because our definitions
and measurement tools are rather weak. Still, there is evidence that all three of these activities are
increasing.

Encryption:
Encryption is the process of encoding a message so that its meaning is not obvious; decryption is
the reverse process, transforming an encrypted message back into its normal, original form.
Alternatively, the terms encode and decode or encipher and decipher are used instead of encrypt
and decrypt. That is, we say that we encode, encrypt, or encipher the original message to hide its
meaning. Then, we decode, decrypt, or decipher it to reveal the original message. A system for
encryption and decryption is called a cryptosystem.

There are slight differences in the meanings of these three pairs of words, although they are not
significant in this context. Strictly speaking, encoding is the process of translating entire words or
phrases to other words or phrases, whereas enciphering is translating letters or symbols
individually; encryption is the group term that covers both encoding and enciphering.

The original form of a message is known as plaintext, and the encrypted form is called ciphertext.
This relationship is shown in Figure 2-1. For convenience, we denote a plaintext message P as a
sequence of individual characters P = <p1, p2, ..., pn>. Similarly, ciphertext is written as C = <c1,
c2, ..., cm>. For instance, the plaintext message "I want cookies" can be denoted as the message
string <I, -, w, a, n, t, -, c, o, o, k, i, e, s>, with - standing for a blank. It can be transformed into
ciphertext <c1, c2, ..., c14>, and the
encryption algorithm tells us how the transformation is done.

Figure 2-1. Encryption.

We use this formal notation to describe the transformations between plaintext and ciphertext. For
example, we write C = E(P) and P = D(C), where C represents the ciphertext, E is the encryption
rule, P is the plaintext, and D is the decryption rule. What we seek is a cryptosystem for which P =
D(E(P)). In other words, we want to be able to convert the message to protect it from an intruder,
but we also want to be able to get the original message back so that the receiver can read it
properly.
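As a toy illustration of this notation (not a real cipher), the following Python sketch uses a
trivially reversible transformation for E and D and checks that P = D(E(P)):

# Toy illustration of the notation C = E(P), P = D(C).
# Reversing a string is not real encryption; it only shows that
# D must undo exactly what E does so that P == D(E(P)).
def E(P: str) -> str:          # "encryption" rule
    return P[::-1]

def D(C: str) -> str:          # "decryption" rule
    return C[::-1]

P = "I want cookies"
C = E(P)
assert P == D(E(P))            # the property a cryptosystem must satisfy
print(C)                       # 'seikooc tnaw I'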

Encryption Algorithms
The cryptosystem involves a set of rules for how to encrypt the plaintext and how to decrypt the
ciphertext. The encryption and decryption rules, called algorithms, often use a device called a key,
denoted by K, so that the resulting ciphertext depends on the original plaintext message, the
algorithm, and the key value. We write this dependence as C = E(K, P). Essentially, E is a set of
encryption algorithms, and the key K selects one specific algorithm from the set. We see later in
this chapter that a cryptosystem, such as the Caesar cipher, is keyless but that keyed encryptions
are more difficult to break.

This process is similar to using mass-produced locks in houses. As a homeowner, it would be very
expensive for you to contract with someone to invent and make a lock just for your house. In
addition, you would not know whether a particular inventor's lock was really solid or how it
compared with those of other inventors. A better solution is to have a few well-known, well-
respected companies producing standard locks that differ according to the (physical) key. Then,
you and your neighbor might have the same model of lock, but your key will open only your lock.
In the same way, it is useful to have a few well-examined encryption algorithms that everyone
could use, but the differing keys would prevent someone from breaking into what you are trying to
protect.

Sometimes the encryption and decryption keys are the same, so P = D(K, E(K,P)). This form is
called symmetric encryption because D and E are mirror-image processes. At other times,
encryption and decryption keys come in pairs. Then, a decryption key, KD, inverts the encryption
of key KE so that P = D(KD, E(KE,P)). Encryption algorithms of this form are called asymmetric
because converting C back to P involves a series of steps and a key that are different from the
steps and key of E. The difference between symmetric and asymmetric encryption is shown in
Figure 2-2.

Figure 2-2. Encryption with Keys.


A key gives us flexibility in using an encryption scheme. We can create different encryptions of
one plaintext message just by changing the key. Moreover, using a key provides additional
security. If the encryption algorithm should fall into the interceptor's hands, future messages can
still be kept secret because the interceptor will not know the key value. Sidebar 2-1 describes how
the British dealt with written keys and codes in World War II. An encryption scheme that does not
require the use of a key is called a keyless cipher.
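The symmetric case described above, P = D(K, E(K, P)), can be sketched in Python with a toy
repeating-XOR key; this is for illustration only and assumes nothing beyond the standard library.
It is not a secure algorithm.

from itertools import cycle

# Toy symmetric cipher: XOR each plaintext byte with a repeating key byte.
# Because XOR is its own inverse, the same key both encrypts and decrypts,
# i.e. P == D(K, E(K, P)).
def E(K: bytes, P: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(P, cycle(K)))

def D(K: bytes, C: bytes) -> bytes:
    return E(K, C)             # symmetric: decryption is the same operation

K = b"secret"
P = b"attack at dawn"
C = E(K, P)
assert P == D(K, C)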

The history of encryption is fascinating; it is well documented in Kahn's book [KAH96].


Encryption has been used for centuries to protect diplomatic and military communications,
sometimes without full success. The word cryptography means hidden writing, and it refers to the
practice of using encryption to conceal text. A cryptanalyst studies encryption and encrypted
messages, hoping to find the hidden meanings.

Both a cryptographer and a cryptanalyst attempt to translate coded material back to its original
form. Normally, a cryptographer works on behalf of a legitimate sender or receiver, whereas a
cryptanalyst works on behalf of an unauthorized interceptor. Finally, cryptology is the research
into and study of encryption and decryption; it includes both cryptography and cryptanalysis.

Cryptography

Goal
Its goal is to ensure communication security over an insecure medium. We have seen that
security fundamentally has three goals: confidentiality, integrity, and availability.

Main Components in Sending Messages


Sender
Medium <===> Intruder
Receiver

Intruder can
Interrupt (make an asset unavailable, unusable) thus breaks Availability
Intercept (gain access to the asset) thus breaks Confidentiality
Modify (tamper with an asset) thus breaks Integrity
Fabricate (create objects) thus breaks Integrity

Approaches to Secure Communication

Steganography
Hide the existence of the message (recall the picture-within-a-picture example in the slides).

Cryptography
Hide the meaning of the message (the message is there, but what does it say?).

Secret Writing
Make the message difficult to read, modify, or fabricate.

Encryption is transforming plaintext to ciphertext: C = E(P), where E is the encryption rule.
Decryption is transforming ciphertext to plaintext: P = D(C), where D is the decryption rule.

Cryptosystem
Sender encrypts the original plaintext ===> ciphertext travels over the medium (the intruder does
not have access to the plaintext) ===> Receiver decrypts the ciphertext.

A cryptosystem helps us by providing privacy and integrity.

Encryption

Keyless
No key is used (algorithm doesn't take any parameters) in encryption or decryption.

Symmetric Key
The same key used in both encryption and decryption.

Asymmetric Key
Two different keys are used in encryption and decryption.

We do not use extremely long keys (such as 1 million bits) because of the computational cost of
encryption and decryption.

Cryptanalysis
Cryptanalysis is the deduction of the original meaning from the cipher text by coming up with the
decryption algorithm.

Ciphers
Important Note on Notation:
From now on UPPERCASE means PLAINTEXT, and lowercase denotes ciphertext

Substitution Ciphers are done by substituting each symbol by some other symbol.
E.g. Caesar cipher, permutation.

The Caesar cipher substitutes every letter of the alphabet with another letter a fixed distance
further along, so there are always n letters in between them. For example, for n = 2, if A becomes d, then B becomes e.
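A minimal Python sketch of the Caesar substitution, using the notes' convention of uppercase
plaintext and lowercase ciphertext (shown here with a shift of 3, i.e., two letters in between):

import string

# Caesar cipher sketch: shift each letter a fixed number of positions.
def caesar_encrypt(plaintext: str, shift: int) -> str:
    out = []
    for ch in plaintext.upper():
        if ch in string.ascii_uppercase:
            idx = (ord(ch) - ord('A') + shift) % 26
            out.append(chr(ord('a') + idx))       # ciphertext in lowercase
        else:
            out.append(ch)                        # leave blanks/punctuation alone
    return "".join(out)

print(caesar_encrypt("TREATY IMPOSSIBLE", 3))     # 'wuhdwb lpsrvvleoh'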

Permutation is another form of substitution in which each symbol is mapped to some other
symbol without following a fixed rule.
Cryptanalysis of Substitution Ciphers
Since breaks (blank characters) and repeated letters are preserved:

We can use clues like short words.

Knowledge of the language simplifies the attack (e.g., E, T, O, A occur far more often than J, Q, X, Z).

We can use a brute-force attack (26! possibilities for a permutation).

Hence substitution ciphers are easy to break.
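A short Python sketch of the frequency-counting step behind such an attack; the ciphertext string
below is a hypothetical sample, not taken from the notes:

from collections import Counter

# Frequency analysis sketch: count letter occurrences in a ciphertext and
# compare against the known high-frequency English letters (E, T, O, A, N, I).
ciphertext = "wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj"   # hypothetical sample
counts = Counter(c for c in ciphertext if c.isalpha())

for letter, n in counts.most_common(5):
    print(letter, n)
# The most common ciphertext letters likely stand for the most common
# plaintext letters, which narrows down the substitution quickly.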
Solution
We can avoid regularity if a symbol in plain text is transformed to different symbols at
different occurrences. We can do that by using one-time pads where the receiver and the sender
have identical pads.
Plaintext        V   E   R   N   A   M   C   I   P   H   E   R
Letter values    21  4   17  13  0   12  2   8   15  7   4   17
Random numbers   76  48  16  82  44  3   58  11  60  5   48  88
Sum              97  52  33  95  44  15  60  19  75  12  52  105
Sum mod 26       19  0   7   17  18  15  8   19  23  12  0   1
Ciphertext       t   a   h   r   s   p   i   t   x   m   a   b
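The table above can be reproduced with a few lines of Python; the pad values are the same random
numbers listed in the table:

# Sketch reproducing the Vernam/one-time-pad table above:
# plaintext letters are numbered A=0..Z=25, a random pad number is added,
# and the sum mod 26 gives the ciphertext letter.
plaintext = "VERNAMCIPHER"
pad = [76, 48, 16, 82, 44, 3, 58, 11, 60, 5, 48, 88]   # the random numbers above

cipher = []
for ch, k in zip(plaintext, pad):
    value = ord(ch) - ord('A')             # 21, 4, 17, ...
    cipher.append(chr(ord('a') + (value + k) % 26))

print("".join(cipher))                      # 'tahrspitxmab'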

Difficulties in practice of using one-time pads


Both the sender and the receiver need access to identical key material, such as the same telephone book.
Since a phone book is not completely random but, like the plaintext, consists largely of high-frequency
letters, the key is predictable: for standard English, the probability that both the key letter and the
plaintext letter are one of A, E, O, T, N, or I is about 0.25.


Transposition

Transposition Ciphers are done by rearranging the places of the symbols


Here is an example of columnar transposition:

THIS IS A MESSAGE TO SHOW HOW A COLUMNAR TRANSPOSITION WORKS

T H I S I
S A M E S
S A G E T
O S H O W
H O W A C
O L U M N
A R T R A
N S P O S
I T I O N
W O R K S

tssoh oaniw haaso lrsto imghw utpir seeoa mrook istwc nasns

This is also easy to break since the frequency distribution technique can be applied and also the
pattern of transposition can be identified easily.
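A small Python sketch of the columnar transposition above; it produces the same letters, grouped
column by column rather than in blocks of five:

# Columnar transposition sketch: write the message row by row into a grid
# of fixed width, then read it off column by column (as in the example above).
def columnar(message: str, width: int = 5) -> str:
    text = message.replace(" ", "").upper()
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    columns = ("".join(row[c] for row in rows if c < len(row)) for c in range(width))
    return " ".join(columns).lower()

msg = "THIS IS A MESSAGE TO SHOW HOW A COLUMNAR TRANSPOSITION WORKS"
print(columnar(msg))
# tssohoaniw haasolrsto imghwutpir seeoamrook istwcnasns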

The AES Encryption algorithm


The Advanced Encryption Standard (AES), also known as Rijndael (its original name), is a
specification for the encryption of electronic data established by the U.S. National Institute of
Standards and Technology (NIST) in 2001.
AES is based on the Rijndael cipher developed by two Belgian cryptographers, Joan Daemen and
Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a
family of ciphers with different key and block sizes.
For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits,
but three different key lengths: 128, 192 and 256 bits.
AES has been adopted by the U.S. government and is now used worldwide. It supersedes the Data
Encryption Standard (DES) which was published in 1977. The algorithm described by AES is a
symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the
data.


AES Structure
The data block of 4 columns of 4 bytes is called the state.
The key is expanded into an array of words.
The cipher has 9/11/13 full rounds (for 128/192/256-bit keys) in which the state undergoes:

byte substitution (one S-box used on every byte)

shift rows (permute bytes between groups/columns)

mix columns (substitution using matrix multiplication of groups)

add round key (XOR state with key material)

It can be viewed as alternating XOR of key material and scrambling of data bytes, with an initial
XOR of key material, an incomplete last round, and a fast XOR and table-lookup implementation.
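As an illustration of using AES in practice (rather than implementing the rounds by hand), here is
a sketch that assumes the third-party Python package cryptography is installed; AES-128 in CTR
mode is just one possible choice of key size and mode.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)           # 128-bit AES key (192- and 256-bit keys also work)
nonce = os.urandom(16)         # CTR-mode counter block

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"attack at dawn") + encryptor.finalize()

# The same key and nonce decrypt, because AES is a symmetric-key algorithm.
decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"attack at dawn"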

Public Key Encryption:


A cryptographic system that uses two keys -- a public key known to everyone and a private or
secret key known only to the recipient of the message. When John wants to send a secure message
to Jane, he uses Jane's public key to encrypt the message. Jane then uses her private key to decrypt
it.


An important element to the public key system is that the public and private keys are related in
such a way that only the public key can be used to encrypt messages and only the corresponding
private key can be used to decrypt them. Moreover, it is virtually impossible to deduce the private
key if you know the public key.
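A sketch of the John-and-Jane exchange using RSA with OAEP padding, assuming the third-party
Python package cryptography is available; the message text and key size are illustrative.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Jane generates a key pair; the public key can be shared with anyone.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# John encrypts with Jane's public key; only Jane's private key can decrypt.
ciphertext = public_key.encrypt(b"Meet me at noon", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"Meet me at noon"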
Public-key systems, such as Pretty Good Privacy (PGP), are becoming popular for transmitting
information via the Internet. They are extremely secure and relatively simple to use. The only
difficulty with public-key systems is that you need to know the recipient's public key to encrypt a
message for him or her. What's needed, therefore, is a global registry of public keys, which is one
of the promises of the new LDAP technology.
Uses of encryption:

There are many reasons for using encryption (examples are given below), and the cryptosystem
that one should use is the one best suited for one's particular purpose and which satisfies the
requirements of security, reliability and ease-of-use.

Ease-of-use is easy to understand.

Reliability means that the cryptosystem, when used as its designer intended it to be used,
will always reveal exactly the information hidden when it is needed (in other words, that
the ciphertext will always be recoverable and the recovered data will be the same as the
original plaintext).

Security means that the cryptosystem will in fact keep the information hidden from all but
those persons intended to see it despite the attempts of others to crack the system.

Ease-of-use is the quality easiest to ascertain. If the encryption key is a sequence of 64
hexadecimal digits (a 256-bit key), such as:

B923A24C98D98F83E24234CF8492C384E9AD19A128B3910F3904C324E920DA31

Then you may have a problem not only in remembering it but also in using it (try typing the
sequence above a few times). With such a key it is necessary to write it down or store it in a disk
file, in which case there is the danger that it may be discovered by someone else. Thus such a key
is not only inconvenient to use but also is a security risk.


Unit-2
Program Security:
Secure Program:
Programming errors with security implications: buffer overflows, incomplete access control
Malicious code: viruses, worms, Trojan horses
Program development controls against malicious code and vulnerabilities: software
engineering principles and practices
Controls to protect against program flaws in execution: operating system support and
administrative controls
Security implies some degree of trust that the program enforces expected
confidentiality,
integrity, and
availability.
Why is it so hard to write secure programs?
Axiom (Murphy):
Programs have bugs
Corollary:
Security-relevant programs have security bugs
Flaws, faults, and failures
A flaw is a problem with a program
A security flaw is a problem that affects security in some way
Confidentiality, integrity, availability
Flaws come in two types: faults and failures
A fault is a mistake behind the scenes
An error in the code, data, specification, process, etc.
A fault is a potential problem
A failure is when something actually goes wrong
"Goes wrong" means deviation from desired behaviour,
not necessarily from specified behaviour!

A failure is the user/outside view
The quantity and types of faults in requirements, design, and code implementation are often
used as evidence of a product's quality or security.

Types of Flaws
Intentional
Malicious
Nonmalicious
Inadvertent
Validation error (incomplete / inconsistent)
Domain error
Serialization and aliasing
Inadequate identification and authentication
Boundary condition violation
Other exploitable logic errors
Non malicious program error
Many such errors cause program malfunctions but do not lead to more serious security
vulnerabilities.
Buffer overflow
Stack overflow
Incomplete Mediation
Time-of-Check to Time-of-Use Errors
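For example, a time-of-check to time-of-use flaw can arise in Python when a program checks a
file and then opens it as two separate steps; the path below is hypothetical.

import os

# Sketch of a time-of-check to time-of-use (TOCTOU) flaw: the file that is
# checked is not necessarily the file that is later opened, because an
# attacker can swap it (e.g., replace it with a symlink) in between.
path = "/tmp/report.txt"                      # hypothetical path

if os.access(path, os.R_OK):                  # time of check
    # ... window in which the file can be replaced ...
    with open(path) as f:                     # time of use
        data = f.read()

# Safer: skip the separate check and simply open the file, handling the error,
# so that the check and the use are a single operation.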
Virus
Vital Information Resources Under Siege. A computer virus is a program or piece of code that is
loaded onto your computer without your knowledge and runs against your wishes. Viruses can
also replicate themselves. All computer viruses are man-made. A simple virus that can make a
copy of itself over and over again is relatively easy to produce.
Types of virus:
1. Macro virus
2. Overwriting virus
3. Cluster virus
4. Polymorphic virus
5. Memory resident virus
6. Self-garbling virus

Macro virus:

A macro is a series of commands and actions that help to automate some tasks - effectively
a program but usually quite short and simple. However they are created, they need to be executed
by some system which interprets the stored commands.

Some macro systems are self-contained programs, but others are built into complex applications
(for example word processors) to allow users to repeat sequences of commands easily, or to allow
developers to tailor the application to local needs. The step which has made some applications
susceptible to macro viruses was to allow macros to be stored in the very documents which are
being edited or processed by the application. This makes it possible for a document to carry a
macro, not obvious to the user, which will be executed automatically on opening the document. A
macro virus can be spread through e-mail attachments, discs, networks, modems, and
the Internet and is notoriously difficult to detect. Uninfected documents contain normal macros.
Most malicious macros start automatically when a document is opened or closed. A common way
for a macro virus to infect a computer is by replacing normal macros with the virus. The macro
virus replaces regular macros of the same name and runs when the corresponding command is
selected. In cases where the macro runs automatically, it executes without the user knowing.
Overwriting virus:
A type of computer virus that will copy its own code over the host computer system's file data,
which destroys the original program. After your computer system has been cleaned using
an antivirus program, users will need to install the original program again.
Cluster Virus:
A type of computer virus that associates itself with the execution of programs by modifying
directory table entries to ensure the virus itself will start when any program on the computer
system is started. If infected with a cluster virus it will appear as if every program on
the computer system is infected; however, a cluster virus is only in one place on the system.
Polymorphic virus:
A polymorphic virus is capable of modifying its own code; the code changes itself each time it runs.
Self-Garbling Virus:
A type of computer virus that will attempt to hide from an antivirus program by garbling its
own code. When a self-garbling virus propagates it will change the encoding of its own code to
trick antivirus programs and stay hidden on the computer system.
Memory Resident Virus:
A virus that stays in memory after it executes and after its host program is terminated. In contrast,
non-memory-resident viruses only are activated when an infected application runs.
COMPUTER THREATS
Computer threats can come in many ways, either from humans or from natural disasters. For example,
when someone is stealing your account information from a trusted bank, this threat is considered
as a human threat. However, when your computer is soaked in heavy rain, then that is a natural
disaster threat.

MALICIOUS CODE:
Malicious code is also known as a rogue program. It is a threat to computing assets because it causes
undesired effects intended by its author. The effect is caused by an agent acting with the intention to
cause damage.

VIRUS
TROJAN HORSE
LOGIC BOMB
TRAPDOOR OR BACKDOOR
WORM
HACKER
NATURAL AND ENVIRONMENTAL THREATS

Computers are also threatened by natural or environmental disasters, whether at home, in stores, in
offices, or even in automobiles. Examples of natural and environmental disasters:
Flood
Fire
Earthquakes, storms and tornados
Excessive Heat
Inadequate Power Supply

THEFT
Two types of computer theft:

1) The computer is used to steal money, goods, information, and resources.


2) Stealing of computers, especially notebooks and PDAs.
Kinds of Malicious Code
Virus: code that attaches to another program and copies itself to other programs
Transient virus: its life depends on the life of its host
Resident virus: locates itself inside memory
Trojan Horse: malicious effect is hidden from the user
Logic bomb: triggered by an event
Time bomb: triggered by a time or date
Trapdoor (backdoor): a feature that allows access to a program other than through normal
channels
Worm: a program that spreads copies of itself through a network

Rabbit: a virus/worm that self-replicates without bound

Goals of IT Security
Integrity: guaranteeing that the data are those that they are believed to be
Confidentiality: ensuring that only authorized individuals have access to the resources
being exchanged
Availability: guaranteeing the information system's proper operation
Non-repudiation: guaranteeing that an operation cannot be denied
Authentication: ensuring that only authorized individuals have access to the resources
Security Policies
System Security Policy
Data Security Policy
User Security Policy
Password Management Policy
Auditing Policy
A Security Checklist
Security Model
Access control list (ACL)
Bell-La Padula model
Capability-based security
Context-based access control
Lattice-based access control (LBAC)
Multi-level security (MLS)
Non-interference (security)
Object-capability model
Role-based access control (RBAC)
Take-grant protection model

Security in Operating System


Security breaches
Security goals
Protected objects of the general purpose operating system
Protection of objects

Security Methods of Operating Systems


Separation: keeping one user's objects separate from other users'
Physical Separation
Temporal Separation
Logical Separation
Cryptographic Separation
Granularity of Control: the larger the object controlled, the easier it is to implement
access control.

Relocation:

Relocation is the process of taking a program written as if it began at address 0 and changing all
addresses to reflect the actual address at which the program is located in memory.
Fence register can be used within relocation process. To each program address, the contents of the
fence register are added. This both relocates the address and guarantees that no one can access a
location lower than a fence address
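As a rough illustration (the addresses are made up and do not describe any particular machine),
relocation with a fence register amounts to adding the fence value to every program-generated
address and rejecting anything that falls outside the user's region:

    FENCE = 0x4000        # assumed base of the user's partition
    MEMORY_TOP = 0x8000   # assumed top of memory for this example

    def translate(program_address):
        physical = FENCE + program_address   # relocation: program address 0 maps to the fence
        if physical >= MEMORY_TOP:
            raise MemoryError("address outside the user's region")
        return physical                      # nothing below FENCE can ever be produced

    print(hex(translate(0x0010)))            # 0x4010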

Segmentation:

Segmentation divides a program into separate pieces. Each piece has a logical unity, a relationship
among all of its code or data values.
Segmentation was developed as a feasible means to have the effect of an unbounded number of
base/bounds registers: a program could be divided into many pieces having different access rights.
The operating system must maintain a table of segment names and their true addresses in memory.
The program address is in the form <name, offset>. The OS can retrieve the real address by looking
up the table and then making a simple calculation:
address of the name + offset
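A small sketch of the lookup just described, with an invented segment table; in reality the table and
its access rights live inside the operating system:

    segment_table = {                       # segment name -> base address and length
        "CODE": {"base": 0x1000, "length": 0x0400},
        "DATA": {"base": 0x5000, "length": 0x0200},
    }

    def resolve(name, offset):
        seg = segment_table[name]           # find the segment's true address
        if offset >= seg["length"]:         # per-segment bounds check
            raise MemoryError("offset outside segment")
        return seg["base"] + offset         # address of the name + offset

    print(hex(resolve("DATA", 0x10)))       # 0x5010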

Paging:
An alternative to segmentation is paging. The program is divided into equal-sized pieces called
pages, and memory is divided into the same sized units, called page frames. Each address is
represented in a form <page, offset>.
Operating system maintains a table of user page numbers and their true addresses in memory. The
page portion of every <page, offset> reference is converted to a page frame address by a table
lookup; the offset portion is added to the page frame address to produce the real memory address
of the object referred to as <page, offset>.
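The same idea for paging, again as a toy sketch with an assumed page size and page table:

    PAGE_SIZE = 1024                  # assumed page/frame size in bytes
    page_table = {0: 7, 1: 3, 2: 11}  # page number -> page frame number

    def translate(logical_address):
        page, offset = divmod(logical_address, PAGE_SIZE)  # split into <page, offset>
        frame = page_table[page]                           # table lookup
        return frame * PAGE_SIZE + offset                  # frame address + offset

    print(translate(1 * PAGE_SIZE + 100))                  # byte 100 of page 1, in frame 3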

General Objects:
Memory
a file or data set on an auxiliary storage device
an executing program in memory
a directory of files
a hardware device
a data structure, such as a stack.

A table of the operating system
instructions, especially privileged instructions
passwords
the protection mechanism itself

File Protection Mechanisms:


Basic Forms of Protection:
All-None Protection
The principal protection was trust, combined with ignorance.
Group Protection
Users in the same group have the same rights for objects.
Single Permissions
Password or other token
assign a password to a file
Temporary Acquired Permission
UNIX set-user-id (setuid) permission. If this protection is set for a file to be
executed, the protection level is that of the file's owner, not the
executor.

User Authentication:
Use of Passwords
Attacks on Passwords
Password Selection Criteria
The Authentication Process
Flaws in the Authentication Process
Authentication Other Than Passwords
Trusted Operating System
More stringent authentication.
Mandatory access control layered over discretionary access control.
Object reuse protection.
Complete mediation.
Trusted path -- prevent, for example, user spoofing during login (e.g., Windows'
"three-fingered salute", Ctrl+Alt+Del).
Accountability and audit -- log access and use.
Audit log reduction.
Intrusion detection


Military Security policy:

Unclassified

Restricted

Confidential

Secret

Top Secret

Trusted OS Design:
OS is a complex system
difficult to design
Adding the responsibility of security enforcement makes it even more difficult
Clear mapping from security requirements to the design
Design must be checked using formal reviews or simulation

Requirements, design, and testing

Operating System Function:

Security features of Trusted OS:
Identification and Authentication
Mandatory and Discretionary Access Control
Object reuse protection
Complete mediation (all accesses are checked)
Trusted path
Accountability and Audit (security log)
Audit log reduction
Intrusion detection (patterns of normal system usages, anomalies)

Unit-3

Database Security: Security Requirements


Reliability vs. Integrity

Real world definitions:

Reliability = the ability of a person or system to perform and maintain its functions in
routine circumstances, as well as hostile or unexpected circumstances.

Integrity = consistency between one's actions, values, methods, measures and principles.

The IB's definition of integrity is worded somewhat differently, but there are similarities: if data is
consistent with itself as it was at creation, it has integrity (consistent with, not necessarily identical to).

You make an entry about me in a database. It has the correct information about where I live. I move.
We change my entry in the database. Does the data still have integrity?
Yes. It is not the same as it was at creation, but it is consistent. It is still my address.

You make an entry about me in a database. It has the correct information about where I live. I move.
No changes are made in the database. Does the data still have integrity?
No. Even though no changes have been made since its creation, it is no longer representative of me.

You make an entry about me in a database. It is initially correct. A hacker enters the database and
changes my last name to Smorski. Does the data still have integrity?
No. The data was changed incorrectly. It is no longer consistent with me as it was at creation.

Your computer is open to hackers; you have no firewalls. Does it have integrity?
Possibly. It depends on whether the hackers messed things up. If the data is still untouched, you do.
What you have got is a reliability issue: you don't know if the data has integrity or not.

A computer holds data about students. It is constantly crashing, but the data is always there when you
manage to turn it back on. This computer is not reliable, but does the data still have integrity?
Yes. The data is still consistent with itself at creation.

You have songs on your iTunes. iTunes never goes down and has never lost one of your songs. It is
very reliable. However, your stupid little sister goes into iTunes and changes all of the song names to
"I Love Sponge Bob". Does your database have integrity?
No. The song names are all inconsistent with their starting condition. (Your database is reliable,
though.)

Reliability refers to the operation of hardware, the design of software, the accuracy of data or the
correspondence of data with the real world. Data may be unreliable if it has been entered
incorrectly or if it becomes outdated. The reliability of machines, software and data determines
our confidence in their value.
Integrity refers to safeguarding the accuracy and completeness of stored data. Data lacks integrity
when it
has been changed accidentally or tampered with. Examples of data losing integrity are where
information is duplicated in a relational database and only one copy is updated or where data
entries have been maliciously altered.

Data integrity and reliability refer to the quality, security and safety of the data on your
system.
The data people generate with their PCs is stored on their hard disks, and it is essential to take care
of it carefully.
Another issue is that data integrity is positively correlated with 'safe' or 'conservative' designs. It
means that all else being equal, a design with an established track record, using proven
components from a major manufacturer, is likely to be safer for your data than a brand-new design
using the very latest CPU and motherboard made by a company that has been around for less than a
year. Emphasizing performance above all else increases the potential for problems; overclocking
would be the ultimate example of this.

Sensitive data:
Sensitive data encompasses a wide range of information and can include: your ethnic or
racial origin; political opinion; religious or other similar beliefs; memberships; physical or mental
health details; personal life; or criminal or civil offences. These examples of information are
protected by your civil rights.
Sensitive data can also include information that relates to you as a consumer, client, employee,
patient or student; and it can be identifying information as well: your contact information,
identification cards and numbers, birth date, and parents' names.
All of this data belongs to you. You have full rights to access and use this information and
you also have rights to know how others are doing the same. You should be protective of this
information, just like you would be of your other belongings.

Following are some prominent examples of data protected by state and federal law and
university policy. Often, context plays a role in data sensitivity; thus, this list is not
exhaustive:

Personal and financial data, including:

o Social Security number (SSN)


o Credit card number or banking information

o Passport number

o Foreign visa number

o Tax information

o Credit reports

o Anything that can be used to facilitate identity theft (e.g., mother's maiden name)

Federally protected data, including:

o FERPA-protected information (e.g., student information and grades)

o HIPAA-protected information (e.g., health, medical, or psychological information)

State protected data

The state of Indiana has recently enacted data protection and disclosure laws, specifying
certain data as sensitive "personal information". Indiana's notification law reads:

"Personal information" means:

o An individual's:

1. First name and last name; or

2. First initial and last name; and

o At least one (1) of the following data elements:

1. Social Security number

2. Driver's license number or identification card number

3. Account number, credit card number, debit card number, security code, access code, or
password of an individual's financial account

University restricted or critical data

Human subjects research data

Passwords
Following are some examples of non-sensitive data. Again, this list is not exhaustive:

Publicly available information that is lawfully made available to the public from records of
another federal or local agency

Information that would appear in the telephone directory

The last four digits only of a Social Security number or credit card number


Multilevel security or multiple levels of security (MLS) is the application of a computer system to
process information with incompatible classifications (i.e., at different security levels), permit
access by users with different security clearances and needs-to-know, and prevent users from
obtaining access to information for which they lack authorization. There are two contexts for the
use of Multilevel Security. One is to refer to a system that is adequate to protect itself from
subversion and has robust mechanisms to separate information domains, that is, trustworthy.
Another context is to refer to an application of a computer that will require the computer to be
strong enough to protect itself from subversion and possess adequate mechanisms to separate
information domains, that is, a system we must trust. This distinction is important because systems
that need to be trusted are not necessarily trustworthy.

Proposals for Multilevel Security

As you can already tell, implementing multilevel security for databases is difficult, probably more
so than in operating systems, because of the small granularity of the items being controlled. In the
remainder of this section, we study approaches to multilevel security for databases.

Separation

As we have already seen, separation is necessary to limit access. In this section, we study
mechanisms to implement separation in databases. Then, we see how these mechanisms can help
to implement multilevel security for databases.

Partitioning

The obvious control for multilevel databases is partitioning. The database is divided into separate
databases, each at its own level of sensitivity. This approach is similar to maintaining separate files
in separate file cabinets.

This control destroys a basic advantage of databases: elimination of redundancy and improved
accuracy through having only one field to update. Furthermore, it does not address the problem of
a high-level user who needs access to some low-level data combined with high-level data.

Nevertheless, because of the difficulty of establishing, maintaining, and using multilevel
databases, many users with data of mixed sensitivities handle their data by using separate, isolated
databases.

Encryption

If sensitive data are encrypted, a user who accidentally receives them cannot interpret the data.
Thus, each level of sensitive data can be stored in a table encrypted under a key unique to the level
of sensitivity. But encryption has certain disadvantages.

First, a user can mount a chosen plaintext attack. Suppose party affiliation of REP or DEM is
stored in encrypted form in each record. A user who achieves access to these encrypted fields can
easily decrypt them by creating a new record with party=DEM and comparing the resulting
encrypted version to that element in all other records. Worse, if authentication data are encrypted,
the malicious user can substitute the encrypted form of his or her own data for that of any other
user. Not only does this provide access for the malicious user, but it also excludes the legitimate
user whose authentication data have been changed to that of the malicious user. These possibilities
are shown in Figures 6-5 and 6-6.
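The following toy Python sketch illustrates the weakness just described; an HMAC stands in for the
per-level field encryption, and the key and records are invented. Because the scheme is deterministic,
an attacker who can insert a record with a known plaintext simply compares ciphertexts to learn the
value stored in every other record.

    import hmac, hashlib

    LEVEL_KEY = b"key-for-this-sensitivity-level"   # one key per sensitivity level (assumed)

    def encrypt_field(value):
        # Deterministic stand-in for field encryption: same input -> same output.
        return hmac.new(LEVEL_KEY, value.encode(), hashlib.sha256).hexdigest()

    stored = {                        # ciphertexts visible to the attacker
        "alice": encrypt_field("DEM"),
        "bob":   encrypt_field("REP"),
    }

    # The attacker inserts a record with party=DEM; the DBMS encrypts it for him.
    probe = encrypt_field("DEM")
    print([name for name, ct in stored.items() if ct == probe])   # ['alice']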

Figure 6-5. Cryptographic Separation: Different Encryption Keys.


Figure 6-6. Cryptographic Separation: Block Chaining.

Using a different encryption key for each record overcomes these defects. Each record's fields can
be encrypted with a different key, or all fields of a record can be cryptographically linked, as with
cipher block chaining.

The disadvantage, then, is that each field must be decrypted when users perform standard database
operations such as "select all records with SALARY > 10,000." Decrypting the SALARY field,
even on rejected records, increases the time to process a query. (Consider the query that selects
just one record but that must decrypt and compare one field of each record to find the one that
satisfies the query.) Thus, encryption is not often used to implement separation in databases.

Integrity Lock

The integrity lock was first proposed at the U.S. Air Force Summer Study on Data Base Security
[AFS83]. The lock is a way to provide both integrity and limited access for a database. The
operation was nicknamed "spray paint" because each element is figuratively painted with a color
that denotes its sensitivity. The coloring is maintained with the element, not in a master database
table.

A model of the basic integrity lock is shown in Figure 6-7. As illustrated, each apparent data item
consists of three pieces: the actual data item itself, a sensitivity label, and a checksum. The
sensitivity label defines the sensitivity of the data, and the checksum is computed across both data
and sensitivity label to prevent unauthorized modification of the data item or its label. The actual
data item is stored in plaintext, for efficiency because the DBMS may need to examine many
fields when selecting records to match a query.

Figure 6-7. Integrity Lock.

The sensitivity label should be

unforgeable, so that a malicious subject cannot create a new sensitivity level for an element

unique, so that a malicious subject cannot copy a sensitivity level from another element

concealed, so that a malicious subject cannot even determine the sensitivity level of an
arbitrary element

The third piece of the integrity lock for a field is an error-detecting code, called a cryptographic
checksum. To guarantee that a data value or its sensitivity classification has not been changed, this
checksum must be unique for a given element, and must contain both the element's data value and
something to tie that value to a particular position in the database. As shown in Figure 6-8, an
appropriate cryptographic checksum includes something unique to the record (the record number),
something unique to this data field within the record (the field attribute name), the value of this
element, and the sensitivity classification of the element. These four components guard against
anyone's changing, copying, or moving the data. The checksum can be computed with a strong
encryption algorithm or hash function.

Figure 6-8. Cryptographic Checksum.
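A minimal sketch of such a checksum, assuming an HMAC key held by the trusted component and a
simple field layout: it binds the record number, attribute name, element value, and sensitivity label
together so that none of them can be changed, copied, or moved without invalidating the checksum.

    import hmac, hashlib

    CHECKSUM_KEY = b"integrity-lock-key"   # assumed secret held by the trusted component

    def integrity_checksum(record_no, attribute, value, sensitivity):
        message = f"{record_no}|{attribute}|{value}|{sensitivity}".encode()
        return hmac.new(CHECKSUM_KEY, message, hashlib.sha256).hexdigest()

    def verify(record_no, attribute, value, sensitivity, stored_checksum):
        expected = integrity_checksum(record_no, attribute, value, sensitivity)
        return hmac.compare_digest(expected, stored_checksum)

    chk = integrity_checksum(17, "SALARY", "42000", "CONFIDENTIAL")
    print(verify(17, "SALARY", "42000", "CONFIDENTIAL", chk))   # True
    print(verify(18, "SALARY", "42000", "CONFIDENTIAL", chk))   # False: moved to another record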

Sensitivity Lock

The sensitivity lock shown in Figure 6-9 was designed by Graubert and Kramer [GRA84b] to
meet these principles. A sensitivity lock is a combination of a unique identifier (such as the record
number) and the sensitivity level. Because the identifier is unique, each lock relates to one
particular record. Many different elements will have the same sensitivity level. A malicious
subject should not be able to identify two elements having identical sensitivity levels or identical
data values just by looking at the sensitivity level portion of the lock. Because of the encryption,
the lock's contents, especially the sensitivity level, are concealed from plain view. Thus, the lock is
associated with one specific record, and it protects the secrecy of the sensitivity level of that
record.

Figure 6-9. Sensitivity Lock.

Designs of Multilevel Secure Databases

This section covers different designs for multilevel secure databases. These designs show the
tradeoffs among efficiency, flexibility, simplicity, and trustworthiness.

Integrity Lock

The integrity lock DBMS was invented as a short-term solution to the security problem for
multilevel databases. The intention was to be able to use any (untrusted) database manager with a
trusted procedure that handles access control. The sensitive data were obliterated or concealed
with encryption that protected both a data item and its sensitivity. In this way, only the access
procedure would need to be trusted because only it would be able to achieve or grant access to
sensitive data. The structure of such a system is shown in Figure 6-10.

Figure 6-10. Trusted Database Manager.


The efficiency of integrity locks is a serious drawback. The space needed for storing an element
must be expanded to contain the sensitivity label. Because there are several pieces in the label and
one label for every element, the space required is significant.

Problematic, too, is the processing time efficiency of an integrity lock. The sensitivity label must
be decoded every time a data element is passed to the user to verify that the user's access is
allowable. Also, each time a value is written or modified, the label must be recomputed. Thus,
substantial processing time is consumed. If the database file can be sufficiently protected, the data
values of the individual elements can be left in plaintext. That approach benefits select and project
queries across sensitive fields because an element need not be decrypted just to determine whether
it should be selected.

A final difficulty with this approach is that the untrusted database manager sees all data, so it is
subject to Trojan horse attacks by which data can be leaked through covert channels.

Trusted Front End

The model of a trusted front-end process is shown in Figure 6-11. A trusted front end is also
known as a guard and operates much like the reference monitor of Chapter 5. This approach,
originated by Hinke and Schaefer [HIN75], recognizes that many DBMSs have been built and put
into use without consideration of multilevel security. Staff members are already trained in using
these DBMSs, and they may in fact use them frequently. The front-end concept takes advantage of
existing tools and expertise, enhancing the security of these existing systems with minimal change
to the system. The interaction between a user, a trusted front end, and a DBMS involves the
following steps.

Figure 6-11. Trusted Front End.

1. A user identifies himself or herself to the front end; the front end authenticates the user's
identity.

2. The user issues a query to the front end.


3. The front end verifies the user's authorization to data.
4. The front end issues a query to the database manager.


5. The database manager performs I/O access, interacting with low-level access control to
achieve access to actual data.

6. The database manager returns the result of the query to the trusted front end.
7. The front end analyzes the sensitivity levels of the data items in the result and selects those
items consistent with the user's security level.
8. The front end transmits selected data to the untrusted front end for formatting.
9. The untrusted front end transmits formatted data to the user.

The trusted front end serves as a one-way filter, screening out results the user should not be able to
access. But the scheme is inefficient because potentially much data is retrieved and then discarded
as inappropriate for the user.
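A toy sketch of the screening in step 7 above (the level ordering and row format are assumptions, not
any real product's interface): the DBMS has already returned candidate rows, and the trusted front end
discards every item whose sensitivity exceeds the user's level.

    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def screen_results(rows, user_level):
        # Keep only the items consistent with the user's security level.
        limit = LEVELS[user_level]
        return [row for row in rows if LEVELS[row["sensitivity"]] <= limit]

    rows = [
        {"item": "unit roster", "sensitivity": "UNCLASSIFIED"},
        {"item": "mission plan", "sensitivity": "SECRET"},
    ]
    print(screen_results(rows, "CONFIDENTIAL"))   # only the unclassified row survives

The inefficiency noted above is visible here: the SECRET row was fetched from the database only to
be thrown away.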

Commutative Filters

The notion of a commutative filter was proposed by Denning [DEN85] as a simplification of the
trusted interface to the DBMS. Essentially, the filter screens the user's request, reformatting it if
necessary, so that only data of an appropriate sensitivity level are returned to the user.

A commutative filter is a process that forms an interface between the user and a DBMS. However,
unlike the trusted front end, the filter tries to capitalize on the efficiency of most DBMSs. The
filter reformats the query so that the database manager does as much of the work as possible,
screening out many unacceptable records. The filter then provides a second screening to select
only data to which the user has access.

Filters can be used for security at the record, attribute, or element level.

When used at the record level, the filter requests desired data plus cryptographic checksum
information; it then verifies the accuracy and accessibility of data to be passed to the user.

At the attribute level, the filter checks whether all attributes in the user's query are
accessible to the user and, if so, passes the query to the database manager. On return, it
deletes all fields to which the user has no access rights.

At the element level, the system requests desired data plus cryptographic checksum
information. When these are returned, it checks the classification level of every element of
every record retrieved against the user's level.

Suppose a group of physicists in Washington works on very sensitive projects, so the current user
should not be allowed to access the physicists' names in the database. This restriction presents a
problem with this query:

retrieve NAME where ((OCCUP=PHYSICIST) ∧ (CITY=WASHDC))

Suppose, too, that the current user is prohibited from knowing anything about any people in
Moscow. Using a conventional DBMS, the query might access all records, and the DBMS would
then pass the results on to the user. However, as we have seen, the user might be able to infer

things about Moscow employees or Washington physicists working on secret projects without
even accessing those fields directly.

The commutative filter re-forms the original query in a trustable way so that sensitive information
is never extracted from the database. Our sample query would become

retrieve NAME where ((OCCUP=PHYSICIST) ∧ (CITY=WASHDC))

from all records R where

(NAME-SECRECY-LEVEL (R) ≤ USER-SECRECY-LEVEL) ∧

(OCCUP-SECRECY-LEVEL (R) ≤ USER-SECRECY-LEVEL) ∧

(CITY-SECRECY-LEVEL (R) ≤ USER-SECRECY-LEVEL)

The filter works by restricting the query to the DBMS and then restricting the results before they
are returned to the user. In this instance, the filter would request NAME, NAME-SECRECY-
LEVEL, OCCUP, OCCUP-SECRECY-LEVEL, CITY, and CITY-SECRECY-LEVEL values and
would then filter and return to the user only those fields and items that are of a secrecy level
acceptable for the user. Although even this simple query becomes complicated because of the
added terms, these terms are all added by the front-end filter, invisible to the user.

An example of this query filtering in operation is shown in Figure 6-12. The advantage of the
commutative filter is that it allows query selection, some optimization, and some subquery
handling to be done by the DBMS. This delegation of duties keeps the size of the security filter
small, reduces redundancy between it and the DBMS, and improves the overall efficiency of the
system.

Figure 6-12. Commutative Filters.
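As a sketch only (the secrecy-level naming follows the pseudo-notation above; the function is not part
of any real DBMS), a commutative filter could rewrite a query by appending one secrecy-level
condition per attribute, so that the DBMS itself discards records the user may not see:

    def rewrite_query(select_attr, conditions, attributes):
        # Original query, plus one secrecy guard per attribute mentioned in it.
        base = "retrieve {} where ({})".format(select_attr, " AND ".join(conditions))
        guards = ["({}-SECRECY-LEVEL (R) <= USER-SECRECY-LEVEL)".format(a) for a in attributes]
        return base + "\nfrom all records R where\n  " + " AND\n  ".join(guards)

    print(rewrite_query("NAME",
                        ["OCCUP=PHYSICIST", "CITY=WASHDC"],
                        ["NAME", "OCCUP", "CITY"]))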


Distributed Databases

The distributed or federated database is a fourth design for a secure multilevel database. In this
case, a trusted front end controls access to two unmodified commercial DBMSs: one for all low-
sensitivity data and one for all high-sensitivity data.

The front end takes a user's query and formulates single-level queries to the databases as
appropriate. For a user cleared for high-sensitivity data, the front end submits queries to both the
high- and low-sensitivity databases. But if the user is not cleared for high-sensitivity data, the
front end submits a query to only the low-sensitivity database. If the result is obtained from either
back-end database alone, the front end passes the result back to the user. If the result comes from
both databases, the front end has to combine the results appropriately. For example, if the query is
a join query having some high-sensitivity terms and some low, the front end has to perform the
equivalent of a database join itself.

The distributed database design is not popular because the front end, which must be trusted, is
complex, potentially including most of the functionality of a full DBMS itself. In addition, the
design does not scale well to many degrees of sensitivity; each sensitivity level of data must be
maintained in its own separate database.

Window/View

Traditionally, one of the advantages of using a DBMS for multiple users of different interests (but
not necessarily different sensitivity levels) is the ability to create a different view for each user.
That is, each user is restricted to a picture of the data reflecting only what the user needs to see.
For example, the registrar may see only the class assignments and grades of each student at a
university, not needing to see extracurricular activities or medical records. The university health
clinic, on the other hand, needs medical records and drug-use information but not scores on
standardized academic tests.

The notion of a window or a view can also be an organizing principle for multilevel database
access. A window is a subset of a database, containing exactly the information that a user is
entitled to access. Denning [DEN87a] surveys the development of views for multilevel database
security.

A view can represent a single user's subset database so that all of a user's queries access only that
database. This subset guarantees that the user does not access values outside the permitted ones,
because nonpermitted values are not even in the user's database. The view is specified as a set of
relations in the database, so the data in the view subset change as data change in the database.

For example, a travel agent might have access to part of an airline's flight information database.
Records for cargo flights would be excluded, as would the pilot's name and the serial number of
the plane for every flight. Suppose the database contained an attribute TYPE whose value was
either CARGO or PASS (for passenger). Other attributes might be flight number, origin,
destination, departure time, arrival time, capacity, pilot, and tail number.

Now suppose the airline created some passenger flights with lower fares that could be booked only
directly through the airline. The airline might assign their flight numbers a more sensitive rating to
make these flights unavailable to travel agents. The whole database, and the agent's view, might
have the logical structure shown in Table 6-18.

Table 6-18. Airline Database.

(a) Airline's View.

FLT#  ORIG  DEST  DEP   ARR   CAP  TYPE   PILOT    TAIL
362   JFK   BWI   0830  0950  114  PASS   Dosser   2463
397   JFK   ORD   0830  1020  114  PASS   Bottoms  3621
202   IAD   LGW   1530  0710  183  PASS   Jevins   2007
749   LGA   ATL   0947  1120  0    CARGO  Witt     3116
286   STA   SFO   1020  1150  117  PASS   Gross    4026

(b) Travel Agent's View.

FLT   ORIG  DEST  DEP   ARR   CAP
362   JFK   BWI   0830  0950  114
397   JFK   ORD   0830  1020  114
202   IAD   LGW   1530  0710  183
286   STA   SFO   1020  1150  117

The travel agent's view of the database is expressed as

view AGENT-INFO
FLTNO:=MASTER.FLTNO
ORIG:=MASTER.ORIG
DEST:=MASTER.DEST

DEP:=MASTER.DEP

ARR:=MASTER.ARR
CAP:=MASTER.CAP
where MASTER.TYPE='PASS'
class AGENT
auth retrieve


Because the access class of this view is AGENT, more sensitive flight numbers (flights booked
only through the airline) do not appear in this view. Alternatively, we could have eliminated the
entire records for those flights by restricting the record selection with a where clause. A view may
involve computation or complex selection criteria to specify subset data.

The data presented to a user is obtained by filtering of the contents of the original database.
Attributes, records, and elements are stripped away so that the user sees only acceptable items.
Any attribute (column) is withheld unless the user is authorized to access at least one element. Any
record (row) is withheld unless the user is authorized to access at least one element. Then, for all
elements that still remain, if the user is not authorized to access the element, it is replaced by
UNDEFINED. This last step does not compromise any data because the user knows the existence
of the attribute (there is at least one element that the user can access) and the user knows the
existence of the record (again, at least one accessible element exists in the record).

In addition to elements, a view includes relations on attributes. Furthermore, a user can create new
relations from new and existing attributes and elements. These new relations are accessible to
other users, subject to the standard access rights. A user can operate on the subset database defined
in a view only as allowed by the operations authorized in the view. As an example, a user might be
allowed to retrieve records specified in one view or to retrieve and update records as specified in
another view. For instance, the airline in our example may restrict travel agents to retrieving data.

The SeaView project described in [DEN87a, LUN90a] is the basis for a system that integrates a
trusted operating system to form a trusted database manager. The layered implementation as
described is shown in Figure 6-13. The lowest layer, the reference monitor, performs file
interaction, enforces the Bell-La Padula access controls, and does user authentication. Part of its
function is to filter data passed to higher levels. The second level performs basic indexing and
computation functions of the database. The third level translates views into the base relations of
the database. These three layers make up the trusted computing base (TCB) of the system. The
remaining layers implement normal DBMS functions and the user interface.

Figure 6-13. Secure Database Decomposition.


This layered approach makes views both a logical division of a database and a functional one. The
approach is an important step toward the design and implementation of a trustable database
management system.

Practical Issues

The multilevel security problem for databases has been studied since the 1970s. Several promising
research results have been identified, as we have seen in this chapter. However, as with trusted
operating systems, the consumer demand has not been sufficient to support many products.
Civilian users have not liked the inflexibility of the military multilevel security model, and there
have been too few military users. Consequently, multilevel secure databases are primarily of
research and historical interest.

The general concepts of multilevel databases are important. We do need to be able to separate data
according to their degree of sensitivity. Similarly, we need ways of combining data of different
sensitivities into one database (or at least into one virtual database or federation of databases). And
these needs will only increase over time as larger databases contain more sensitive information,
especially for privacy concerns.

In the next section we study data mining, a technique of growing significance, but one for which
we need to be able to address degrees of sensitivity of the data.

Security in Network:
Network security consists of the policies adopted to prevent and monitor unauthorized access,
misuse, modification, or denial of a computer network and network-accessible resources. Network
security involves the authorization of access to data in a network, which is controlled by the
network administrator.
Users choose or are assigned an ID and password or
other authenticating information that allows them access to information and programs within their
authority. Network security covers a variety of computer networks, both public and private, that
are used in everyday jobs; conducting transactions and communications among businesses,
government agencies and individuals. Networks can be private, such as within a company, and
others which might be open to public access. Network security is involved in organizations,
enterprises, and other types of institutions. It does as its title explains: It secures the network, as
well as protecting and overseeing operations being done. The most common and simple way of
protecting a network resource is by assigning it a unique name and a corresponding password.

What Is Network Security?

What is network security? How does it protect you? How does network security work? What are
the business benefits of network security?

You may think you know the answers to basic questions like, What is network security? Still, it's a
good idea to ask them of your trusted IT partner. Why? Because small and medium-sized
businesses (SMBs) often lack the IT resources of large companies. That means your network
security may not be sufficient to protect your business from today's sophisticated Internet threats.

What Is Network Security?


In answering the question What is network security?, your IT partner should explain that network
security refers to any activities designed to protect your network. Specifically, these activities
protect the usability, reliability, integrity, and safety of your network and data. Effective network
security targets a variety of threats and stops them from entering or spreading on your network.

What Is Network Security and How Does It Protect You?


After asking What is network security?, you should ask, What are the threats to my network?

Many network security threats today are spread over the Internet. The most common include:

Viruses, worms, and Trojan horses

Spyware and adware

Zero-day attacks, also called zero-hour attacks

Hacker attacks

Denial of service attacks

Data interception and theft

Identity theft

How Does Network Security Work?


To understand What is network security?, it helps to understand that no single solution protects
you from a variety of threats. You need multiple layers of security. If one fails, others still stand.

Network security is accomplished through hardware and software. The software must be
constantly updated and managed to protect you from emerging threats.

A network security system usually consists of many components. Ideally, all components work
together, which minimizes maintenance and improves security.

Network security components often include:

Anti-virus and anti-spyware

Firewall, to block unauthorized access to your network

Intrusion prevention systems (IPS), to identify fast-spreading threats, such as zero-day or
zero-hour attacks

Virtual Private Networks (VPNs), to provide secure remote access

What are the Business Benefits of Network Security?


With network security in place, your company will experience many business benefits. Your
company is protected against business disruption, which helps keep employees productive.
Network security helps your company meet mandatory regulatory compliance. Because network
security helps protect your customers' data, it reduces the risk of legal action from data theft.

Ultimately, network security helps protect a business's reputation, which is one of its most
important assets.

Assets:

Firewalls:


The term firewall originally referred to a wall intended to confine a fire or potential fire within a
building. Later uses refer to similar structures, such as the metal sheet separating the engine
compartment of a vehicle or aircraft from the passenger compartment.

Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in
terms of its global use and connectivity. The predecessors to firewalls for network security were
the routers used in the late 1980s:

Clifford Stoll's discovery of German spies tampering with his system

Bill Cheswick's "Evening with Berferd" 1992, in which he set up a simple electronic "jail"
to observe an attacker

In 1988, an employee at the NASA Ames Research Centre in California sent a memo by
email to his colleagues that read, "We are currently under attack from an Internet VIRUS!
It has hit Berkeley, UC San Diego, Lawrence Livermore, Stanford, and NASA Ames."

The Morris Worm spread itself through multiple vulnerabilities in the machines of the
time. Although it was not malicious in intent, the Morris Worm was the first large scale
attack on Internet security; the online community was neither expecting an attack nor
prepared to deal with one.

Intrusion Detection System:


An intrusion detection system (IDS) is a device or software application that monitors network or
system activities for malicious activities or policy violations and produces reports to a
management station. IDS come in a variety of flavors and approach the goal of detecting
suspicious traffic in different ways.

There are network-based (NIDS) and host-based (HIDS) intrusion detection systems. A NIDS
monitors traffic passing over the network for suspicious activity, while a HIDS monitors activity on
an individual host.

Some systems may attempt to stop an intrusion attempt but this is neither required nor expected of
a monitoring system. Intrusion detection and prevention systems (IDPS) are primarily focused on
identifying possible incidents, logging information about them, and reporting attempts. In
addition, organizations use IDPSes for other purposes, such as identifying problems with security
policies, documenting existing threats and deterring individuals from violating security policies.
IDPSes have become a necessary addition to the security infrastructure of nearly every
organization.

IDPSes typically record information related to observed events, notify security administrators of
important observed events and produce reports. Many IDPSes can also respond to a detected threat
by attempting to prevent it from succeeding. They use several response techniques, which involve
the IDPS stopping the attack itself, changing the security environment (e.g. reconfiguring a
firewall) or changing the attack's content.

Secure E-mail:


Email privacy is the broad topic dealing with issues of unauthorized access and inspection of
electronic mail. This unauthorized access can happen while an email is in transit, as well as when
it is stored on email servers or on a user computer. In countries with a constitutional guarantee of
the secrecy of correspondence, whether email can be equated with letters and get legal protection
from all forms of eavesdropping comes under question because of the very nature of email. This is
especially important as more and more communication occurs via email compared to postal mail.

Email has to go through potentially untrusted intermediate computers (email servers, ISPs) before
reaching its destination, and there is no way to tell if it was accessed by an unauthorized entity.
This is different from a letter sealed in an envelope, where by close inspection of the envelope, it
might be possible to tell if someone opened it. In that sense, an email is much like a postcard
whose contents are visible to everyone who handles it.

There are certain technological workarounds that make unauthorized access to email hard, if not
impossible. However, since email messages frequently cross nation boundaries, and different
countries have different rules and regulations governing who can access an email, email privacy is
a complicated issue.

Unit-4

Administering Security:


Administering Security
Security Planning

Risk Analysis

Security Policies

Physical Security

Security Planning:
Policy

Current state risk analysis

Requirements

Recommended controls

Accountability

Timetable

Continuing attention

Assuring Commitment to a Security Plan

Business Continuity Plans

Assess Business Impact

Develop Strategy
Develop Plan
Incident Response Plans

Advance Planning
Response Team
After the Incident is Resolved

Security Planning Policy:


Who should be allowed access?

To what system and organizational resources should access be allowed?

What types of access should each user be allowed for each resource?

What are the organization's goals on security?

Where does the responsibility for security lie?

What is the organization's commitment to security?

Risk Analysis:
Risk impact - the loss associated with an event

Risk probability - the likelihood that the event will occur

Risk control - the degree to which we can change the outcome

Risk exposure = risk impact * risk probability

Three ways to handle a risk:

Avoid the risk

Transfer the risk

Assume the risk

Risk leverage = [(risk exposure before reduction) - (risk exposure after reduction)] / cost
of risk reduction (a worked example follows this list)
Cannot guarantee systems are risk free

Security plans must address action needed should an unexpected risk becomes a problem

Steps of a Risk Analysis

Identify assets

Determine vulnerabilities

Estimate likelihood of exploitation

Compute expected annual loss

Survey applicable controls and their costs

Project annual savings of control

Security Policies:
This document provides information about security policies. It is meant only to provide ideas and
information about security policies and is not a definitive guide to computer security; readers should
use their own judgement about how to handle their own computer security.

Policies are a set of requirements or rules which are required to set a path to a specific objective.
Security policies should balance access and security. Security policies should minimize risk while
not imposing undue access restrictions on those who need access to resources.

In addition, when defining policies and when living with them from day to day, the reasons for the
policy should be kept in mind. A policy should never replace thinking. The reasons for the policy
and the potential threats of every action should always be considered regardless of policy. Then
when the actual threat possibility and potential damage is considered, it may be determined that
policy should be changed.

When writing security policies, keep in mind that just because experts recommend specific
policies, it does not make your network more secure because you try to follow the policy. Experts
in the game of chess say that a player should try to control the centre of the board, but following
this recommendation does not guarantee that I will win the game. I must still think. It is important
that those who try to follow security policies think and understand the reasons for them.

Policies should define:

1. Scope - Who the policy applies to.

2. Who performs the actions defined by the policy.

3. When the defined actions are to be done.

4. Where, or on what equipment, the policy applies.

5. The organizational level that the policy applies to, such as a division or the entire
enterprise.

6. Who enforces the policy.

7. What the consequences of failure to follow the policy are.

8. Policies may reference procedures that are used but do not define the procedures. For
example the policy may specify that passwords must be changed every 60 days but not
provide a procedure telling how to change them.

Security policies should be concise and as brief as possible while still fulfilling their purpose.

Security Policy Scope


Some security policies may pertain to everyone in the enterprise such as a password policy and
others may be specific to how the IT department will handle communications such as the system
update policy. Different people or organizations may break policies into different categories. The
listing of security policies in this document is only one way to break security policies down and is
not necessarily inclusive of all policies that an
organization should create. SANS lists security policies at http://www.sans.org/resources/policies/

Enforcement and Auditing


Another problem with security policies is enforcement and auditing. Your organization must
determine how to enforce and audit security policies or they will be worthless. Auditing is a
process of determining whether the policies are being followed. Your organization should create a
complete Information Systems Security Plan (ISSP) and incorporate the security policies into it
along with the set auditing process. This means that there must be resources and personnel set
aside to perform periodic audits and the management in the departments across the organization
must accept the Information Systems Security Plan (ISSP).

This document provides some security policies which are shown as generic examples and others
that provide guidance and ideas about how to write them.

Security policies may include:

1. Password policy * - Defines minimum and maximum length of passwords, password


complexity, how often it must be changed.

2. Network login policy - May be defined by the password policy. Defines how many bad
login attempts over what specific amount of time will cause an account to be locked. This
may be included in the password policy.

3. Remote access policy * - Specifies how remote users can connect to the main
organizational network and the requirements for each of their systems before they are
allowed to connect. This will specify the anti-virus program remote users must use, how
often it must be updated, what personal firewalls they are required to run, and other
protection against spyware or other malware. Also defines how users can connect remotely
such as dial up or VPN. It will specify how the dial up will work such as whether the
system will call the remote user back, and the authentication method. If using VPN, the
VPN protocols used will be defined. Methods to deal with attacks should be considered in
the design of the VPN system.

4. Internet connection policy * - Specifies how users are allowed to connect to the internet
and provides for IT department approval of all connections to the internet or other private
network. Requires all connections such as connections by modems or wireless media to a
private network or the internet be approved by the IT department and what is typically
required for approval such as the operation of a firewall to protect the connection. Also
defines how the network will be protected to prevent users from going to malicious web
sites. Defines whether user activity on the network will be logged and to what extent.
Specifies what system will be used to prevent unauthorized viewing of sites and what
system will log internet usage activity. Defines whether a proxy server will be used for
user internet access.

5. Approved Application policy * - Defines applications which are approved to operate on


computer systems inside or connected to the organizational network.

6. Asset control policy * - Defines how assets such as computers are tracked. This policy
will allow the locations and users of all assets to be tracked. This policy will define a
property move procedure. This policy will define what must be done when a piece of
property is moved from one building to another or one location to another. It will define
who signs off on the movement of the property. This will allow the database to be updated
so the location of all computer equipment is known. This policy will help network
administrators protect the network since they will know what user and computer is at what
station in the case of a worm infecting the network. This policy must also cover the fact
that data on the computer being moved between secure facilities may be sensitive and must
be encrypted during the move.

7. Equipment and media disposal policy - May be incorporated into the asset control
policy. Ensures that electronic equipment or media to be disposed of does not contain any
kind of harmful data that may be accessible by third parties.

8. Media use and re-use policy - May be incorporated into the asset control policy. Defines
the types of data that may be stored on removable media and whether that media may be
removed from a physically secure facility and under what conditions it would be permitted.

9. Mobile computer policy * - Defines the network security requirements for all mobile
computers which will be used on the network, who is allowed to own them, what firewall
they must run, what programs may be run on them, how the system will be protected
against malware, how often the system must be updated, and more. Also defines what data
may be stored on them and whether the data must be encrypted in case of theft.

Physical Control

Deter potential intruders (e.g. warning signs and perimeter markings);

Distinguish authorized from unauthorized people (e.g. using pass cards/badges and keys)

Delay, frustrate and ideally prevent intrusion attempts (e.g. strong walls, door locks and
safes);

Detect intrusions and monitor/record intruders (e.g. intruder alarms and CCTV systems);
and

Trigger appropriate incident responses (e.g. by security guards and police)

Ethical issues of computer security

1. Protecting Programs and Data


2. Information and the Law
3. Rights of Employees and Employers
4. Software Failures
5. Computer Crime

Spam

Fraud

Harassment

Threats

Cyber terrorism

Cyber warfare

Protecting program and data:

Data should be accurate and, where necessary, kept up to date.

Data should not be kept longer than is necessary for the purposes for which it is processed.

Data should be processed in accordance with the rights of the data subject under the Act.

Appropriate technical and organizational measures should be taken against unauthorized or
unlawful processing of personal data and against accidental loss or destruction of, or
damage to, personal data.

Legal, Privacy, and Ethical Issues in Computer Security:


Program and data protection by patents, copyrights, and trademarks
Computer Crime
Privacy
Ethical Analysis of computer security situations
Codes of professional ethics
Know what protection the law provides for computers and data
Appreciate laws that protect the rights of others with respect to computers, programs, and
data
Understand existing laws as a basis for recommending new laws to protect computers,
programs, and data
Protecting computing systems against criminals
Protecting code and data (copyright...)
Protecting programmers' and employers' rights
Protecting private data about individuals
Protecting users of programs

Ethical issues:
Privacy - the right of individuals to control personal information
Accuracy - who is responsible for the authenticity, fidelity, and accuracy of information?
Property - who owns the information? Who controls access? (e.g. buying the IP versus
access to the IP)
Accessibility - what information does an organization have the right to collect? Under
what safeguards?

Types of ethics
Consequence-Based Ethics
Priority is given to choices that lead to a good outcome (consequence)
The outcome outweighs the method
Egoism: the right choice benefits self
Utilitarianism: the right choice benefits the interests of others
Rule-based ethics
Priority is given to following the rules without undue regard to the outcome
Rules are often thought to codify principles like truthfulness, right to freedom, justice, etc.
Stress fidelity to a sense of duty and principle (never tell a lie)
Exist for the benefit of society and should be followed

Computer Crime
Computer crime refers to any crime that involves a computer and a network. Crimes
that primarily target computer networks or devices include:
Computer viruses
Denial-of-service attacks
Malware (malicious code)
Crimes that use computer networks or devices to advance other ends include:
Cyber stalking
Fraud and identity theft
Information warfare
Phishing scams
Software Failure:
A system failure occurs when the delivered service no longer complies with the specification,
the latter being an agreed description of the system's expected function and/or service.
Rights of Employers and Employees:
Employee
An employee contributes labor and expertise to an endeavor of an employer and is usually
hired to perform specific duties which are packaged into a job.
In most modern economies, the term "employee" refers to a specific defined relationship between
an individual and a corporation, which differs from those of customer or client
Finding employees or employment
Workforce organizing
Ending employment
Employment contract
Information and the Law:
The laws of information systems are a collection of observations and generalizations
characterizing the behaviour of people, hardware, software, and procedures enclosed in a certain
scope (an information system). The field of information systems is an amalgam of scientific and
humanistic disciplines, including computer science, management science, and the social sciences.
It is therefore characterized by laws as well as a number of principles. These span the range of
transaction processing, effective systems, user interfaces, and system development.

Laws
Transaction Volumes
The volume of transactions will increase with the stage of development of a society: the greater
the development, the greater the number of goods and services exchanged, and the greater the
number of transactions.

Symbol Systems
This is a corollary of the law of transaction volumes. The symbol systems by which messages are
encoded grow more complex as a society evolves: the 7-bit ASCII code gave way to 8-bit extensions,
which in turn are giving way to Unicode (with 16-bit code units in UTF-16). This trend is likely
to continue as civilization expands beyond the confines of our planet and the volume of
information exchanged becomes astronomical.
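
A small Python illustration of how richer symbol systems cost more bytes per symbol (the
sample strings are arbitrary examples):

    # Byte cost of the same information under different symbol systems.
    print(len("A".encode("ascii")))        # 1 byte: 7-bit ASCII fits in one octet
    print(len("A".encode("utf-16-le")))    # 2 bytes: 16-bit code units
    print(len("नमस्ते".encode("utf-8")))     # 18 bytes: 6 code points, 3 bytes each
    # "नमस्ते".encode("ascii") would raise UnicodeEncodeError - ASCII cannot encode it.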

Technological Evolution
Technology seeks the most efficient form, unless otherwise constrained. The efficient form is
defined qualitatively as the one that is best adapted to its application, or as the one with the
fewest problems. This is a variation on Darwin's theory of evolution and is manifested in the
case of memories, storage devices, databases, etc.

Infinite Processing Needs
The information processing needs of an organization or society will always exceed its information
processing capabilities. This can be seen as a counterpart of Moore's law (the doubling of
processing power roughly every two years). Whenever a new technology is introduced it is
overwhelmed by new applications or increased usage. For instance, current technologies are not up
to the task of
processing images streaming from space probes in real time. This is also evidenced in the case of
email, internet, search engines etc. that are being swamped by volume.

Good Systems
A good system produces benefits that are disproportionately high in comparison to the initial
investment. Any complex system, including an information system, is typically interconnected
with other systems, so a good system has ripple effects which show up as unexpected benefits.
The freeway system in the U.S., for example, led to the growth of the automotive, steel, and
motel industries. Another example is the Sabre system, which was designed for making reservations
but has also been used in crew scheduling and flight forecasting. As a corollary, a bad system
produces problems that are disproportionately high in comparison to its area of operation; a
prime example is the 640K memory limitation of DOS, which for a long time stymied software
developers.

Right Design
Every piece of software that involves users has a right design. The right design refers to the
decomposition of functions into menus/controls. The fact that some types of software are intuitive
while others are not leads to the belief that there is a right design for every software/IT
application, and it is up to the designer to find it.

Interconnected Systems
An interconnected system cannot be controlled unless each interconnect is individually
controllable. A very simple example: in one email system, the signature text and message
composition are wedded together, so that the same signature is produced every time a message is
composed. This is also known as coupling. The system provides an option to select the default
signature but doesn't allow this to be done dynamically; it should provide an option to select
which signature is used at the time of message composition (a small sketch of such a decoupled
design follows).
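
A minimal sketch of the decoupled design argued for above, with hypothetical function and
variable names; the point is only that the signature becomes an individually controllable
parameter chosen at composition time:

    DEFAULT_SIGNATURE = "Regards,\nJ. Smith"
    WORK_SIGNATURE = "J. Smith\nNetwork Administrator"

    def compose_message(body: str, signature: str = DEFAULT_SIGNATURE) -> str:
        """The signature is an explicit per-message parameter rather than being
        hard-wired into composition, so the interconnect is controllable."""
        return f"{body}\n\n--\n{signature}"

    # The caller decides, at composition time, which signature to use.
    print(compose_message("The asset database has been updated."))
    print(compose_message("Patch window is tonight.", signature=WORK_SIGNATURE))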

Complex Interfaces
There cannot be a simple interface to a complex system (here complexity is informally measured
as the number of menu options). This is a variation on the law of requisite variety, which states
that the variety in a system should be at least as great as that found in its environment. Complex
systems such as visual programming environments or CASE tools therefore cannot have simple
interfaces.

Irregular Transactions
Information systems (transaction processing systems) which cannot process irregular transactions
are doomed to fail. An irregular transaction is defined as one that deviates from the norm, in terms
of items bought, conditions or constraints. Examples include registering for two courses that are
scheduled to start at the same time or including a child safety seat in a car rental reservation.

Qualitative Decision Models
It is impossible to calculate outcomes with any certainty in a decision situation that involves
qualitative variables. This is based on the theory of computability, which states that a problem
is computable if an algorithm exists, the algorithm is efficient/tractable, and there is a well-defined
solution state. Because of their very nature, qualitative problems lack a well-defined solution
state, and hence the law.

Mental Models
The system's model must not exceed the user's mental model in complexity. The system's model
is the organization of features in the system, whereas the user's mental model is their
conceptualization of the system. When the system's model exceeds the user's model, the user will
not be able to operate the system without extensive training, and the result is often an
implementation failure.

Accelerating Returns
The rate of progress of technology is accelerating to such an extent that it produces returns that
are not linear but exponential. Much like Moore's law, every decade there is a doubling of
progress, and therefore technological advances will occur exponentially. Kurzweil [2] expects that
the next great evolutionary step of humankind will be the integration of human biological
components with machines.

Brooks' Law
Adding manpower to a late software project makes it later. Fred Brooks was the manager of
IBM's OS/360 project, one of the largest software projects ever undertaken at the time. Based on
his experience, Brooks came to the conclusion that putting additional programmers on a delayed
project will not hasten its implementation, because of the additional communication overhead; in
fact, it tends to reduce productivity.
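
The usual back-of-the-envelope reasoning behind Brooks' law is that pairwise communication
paths grow roughly with the square of team size; a small sketch (the team sizes are arbitrary):

    def communication_paths(n: int) -> int:
        """Number of pairwise communication channels in a team of n people."""
        return n * (n - 1) // 2

    for team_size in (3, 5, 10, 20):
        print(team_size, "people ->", communication_paths(team_size), "channels")
    # 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190: adding people multiplies coordination
    # overhead far faster than it adds productive capacity.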

Information Independence
Users should be able to access their information regardless of where it is physically located. This
is a variation on the concept of distribution independence in databases that has been extended to
include other types of information and other types of computing contexts.

Soft Information Principle
Information systems must incorporate soft information or they are doomed to fail. One way in
which irregular transactions can be handled is to provide additional notes on the transaction.
Systems that do not accommodate such soft information may result in a transaction failure or may
inconvenience the user/consumer.
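
A toy sketch of how a transaction record might accommodate soft information; the record type
and field names are invented for illustration:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RentalReservation:
        car_class: str
        pickup_date: str
        # Soft information: a free-form note captures irregular requirements
        # (e.g. "include a child safety seat") that the fixed fields cannot.
        notes: Optional[str] = None

    booking = RentalReservation("compact", "2024-07-01",
                                notes="Include a child safety seat")
    print(booking)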

Reapportionment Principle
Tasks that can be performed by the system (in the context of software use) should be performed by
it. Automatic filling of personal details from an SSN or phone number in a customer registration
form is one example. This is ultimately based on the simple economic principle of labor
substitution to leverage productivity; it is a widely used principle nowadays.
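
A toy sketch of the reapportionment idea; the profile store, the lookup key, and the field
names are all hypothetical:

    # Hypothetical profile store keyed by phone number.
    PROFILES = {
        "9876543210": {"name": "A. Kumar", "city": "Bahadurgarh",
                       "email": "akumar@example.com"},
    }

    def prefill_registration(phone: str) -> dict:
        """Return a registration form pre-populated from the stored profile,
        leaving empty only the fields the system cannot know."""
        form = {"phone": phone, "name": "", "city": "", "email": ""}
        form.update(PROFILES.get(phone, {}))
        return form

    print(prefill_registration("9876543210"))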

Sharing User Information

All desktop systems must share information about the user. This is a corollary of the
reapportionment principle. To the extent that desktop systems require user information (such as
an email address or phone number), it is advantageous for users to have the system obtain it from
a common profile.

Information Responsibility Principle
Those who have information are obliged to share it with those who need it. This principle is
attributed to Peter Drucker in his 1988 HBR article on the coming of the knowledge-based
organization. Since information is intangible, it is difficult for potential consumers of
information to perceive its source, hence the principle. The principle implicitly assumes that
there is no specific reason to withhold the information from a particular person.

Information Ownership
Owners of information must have access to it. This is a corollary of the information responsibility
principle. When information changes, the owner has a stake in making the change in the system so
it is reasonable to give them the access to do it. Many companies are web-enabling their systems,
thereby illustrating this principle.
