Instituto Nacional de Tecnologías de la Comunicación
In computer security, a honeypot is a tool used to lure attackers and analyse their behaviour on the Internet. This may seem contradictory, since the ordinary function of security tools is precisely the opposite: to keep attackers away and prevent their attacks. For some years now, however, honeypots have been used to draw attackers into a controlled environment, learn in detail how they carry out their attacks, and even discover new vulnerabilities.
Lance Spitzner, a security consultant and analyst, built a six-computer network in his own house at the beginning of 2000, designed to study attackers' behaviour and methods. He was one of the first researchers to adopt the idea and is today one of the most prominent experts in honeypots, a founder of the Honeynet Project (www.honeynet.org), running since 1999, and author of the book "Honeypots: Tracking Hackers".
He ran this system as a trial for almost a year, from April 2000 to February 2001, saving all the information it generated. The results spoke for themselves: during the periods of most intense activity, the usual entry points to the computers in his home network were scanned from outside, up to 14 times a day, by automated attack tools.
Since then, a whole community of developers has grown up around honeynet.org, offering all kinds of tools and advice on using them.
II Classification
Source: INTECO
III High-interaction honeypots
They are mainly used by companies in their corporate networks. These honeypots are built with real machines, or consist of a single real machine with a "standard" operating system such as any user might run, and they are placed in the corporate production network. If they are configured correctly, any attempt to access them should generate an alert to be taken seriously: since their only purpose is to be attacked, the fact that someone is trying to access the resource means, by definition, that something is wrong.
Every interaction with this honeypot is regarded as suspicious by definition. All this traffic must be appropriately monitored and stored in a secure area of the network to which a potential attacker has no access. Otherwise, in a real attack, the intruder could remove the generated traffic and the traces left behind, the attack would go unnoticed, and the honeypot would serve no real purpose.
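The alerting idea described above can be sketched in a few lines of Python. This is an illustrative outline only, not a production tool: the remote log host address, the listening port and the message format are all placeholders chosen for the example.

```python
import logging
import logging.handlers
import socketserver

# Hypothetical remote log collector: in a real deployment this would be a
# hardened machine in a secure network segment the attacker cannot reach,
# so the intruder cannot erase the traces of the attack.
LOG_HOST = ("192.0.2.10", 514)

logger = logging.getLogger("honeypot")
logger.addHandler(logging.handlers.SysLogHandler(address=LOG_HOST))

def make_alert(client_ip, port):
    """Every contact with the honeypot is suspicious by definition,
    so any connection at all becomes an alert record."""
    return f"ALERT honeypot contact from {client_ip} on port {port}"

class AlertHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Log remotely, never locally on the honeypot itself.
        logger.warning(make_alert(self.client_address[0],
                                  self.server.server_address[1]))

# To run the listener (port 2222 is an arbitrary example):
#   with socketserver.TCPServer(("0.0.0.0", 2222), AlertHandler) as srv:
#       srv.serve_forever()
```

Note that the handler performs no service at all: merely connecting is enough to raise an alert, which is exactly the property that makes honeypot traffic so easy to interpret.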
The advantage offered by high-interaction honeypots is that they can detect all kinds of attacks, both known and unknown. Since they are real systems, they contain all the known and unknown software errors that any other system may have. If an attacker attempts to take advantage of a flaw unknown so far (a "0 day" in computer slang), the attempt will still be recorded and can be studied.
Honeypots are used to mitigate companies' risks, in the sense of the traditional use of known defensive tools. What differentiates them from traditional firewalls or intrusion detection systems is their "active", rather than passive, nature. Figuratively speaking, a honeypot is a hook, not a retaining wall against attacks; on the contrary, it seeks out such attacks and "entertains" them. Many companies use honeypots as an added value to their security elements, as a complement to their ordinary tools.
Consequently, attacks can be detected and recognised easily, and companies can compile statistics from those data to configure their passive security tools more effectively. The sooner the most exploited security problems or the newest targets are known, the more effectively a company can defend itself against them.
Just like every tool aimed at enhancing security, honeypots have their own pros and cons. Their greatest usefulness lies in their simplicity. Since the only purpose of the mechanism is that attackers attempt to exploit its vulnerabilities, it performs no actual service, and the traffic passing through it will be very low. If incoming or outgoing traffic is detected on the system, it is in all probability a test, a scan or an attack.
The traffic recorded in such a system is suspicious by nature, so its management and study become greatly simplified, even though, of course, "false positives" may occur, an expression whose meaning is reversed in this case. While a "false positive" usually means a suspicious activity regarded as an attack that turns out not to be one, in a honeypot environment a false positive is traffic handled by the machine that does not represent a threat. In this simplicity of handling traffic and resources lies its greatest advantage. To sum up: little information, but extraordinarily useful.
Source: http://www.honeyd.org/
Among the problems that may arise from working with honeypots, the most notable is the possibility of the honeypot turning against its administrator. If it is not designed thoroughly, if loose ends are left untied or it is not correctly isolated, the attacker may eventually compromise a real system and obtain valuable data from the honeypot.
IV Low-interaction honeypots
They are usually created and managed by organisations devoted to online fraud research, or by any other type of organisation that needs to research new online threats. They are much more difficult to manage and maintain, the information they receive must be as extensive as possible, and it must be organised and analysed for it to be useful.
Generally, they are specific systems that emulate services, networks, TCP/IP stacks or any other element of a real system, without being real. There is a "meta-system" behind, invisible to the attacker, pretending to be whatever it is programmed to be. They do not need to behave exactly like a real system or service. They usually emulate a service and provide answers to a simple subset of requests. For instance, a honeypot might emulate an FTP server that answers login commands plausibly and rejects everything else.
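Such a service emulation can be sketched very compactly. The following outline is an assumption-laden illustration, not code from any real honeypot: the FTP reply strings are plausible examples, and the `captured` list stands in for the hidden "meta-system" that records everything for later analysis.

```python
# Minimal sketch of a low-interaction honeypot: it emulates just enough of
# an FTP dialogue to keep an automated attack tool talking, while the
# "meta-system" behind it records every command received.

captured = []  # what the meta-system stores for later analysis

def ftp_response(command: str) -> str:
    captured.append(command)
    verb = command.split(" ", 1)[0].upper()
    # Only a small subset of requests gets a plausible answer; a real
    # service would implement the full protocol, the honeypot need not.
    replies = {
        "USER": "331 Password required.",
        "PASS": "230 Login successful.",
        "SYST": "215 UNIX Type: L8",
        "QUIT": "221 Goodbye.",
    }
    return replies.get(verb, "502 Command not implemented.")
```

A scanner probing the fake server sees answers consistent enough to continue its attack, while every command it sends is quietly logged.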
This type of honeypot does not usually aim to "catch" real attackers, but automated tools. A human being could rapidly detect that the server is not real, whether through experience or through other features that arouse suspicion about the environment. However, automated systems such as automatic exploitation software, worms or viruses, crafted specifically to carry out a certain action against a service, will not detect anything unusual. They will do their job trying to exploit some vulnerability, the honeypot will pretend to be exploited, and the honeypot administrator will obtain the desired information.
This type of honeypot has the drawback that new forms of attack are harder to discover with it. It is prepared to simulate certain attacked services and to respond in a specific way, so that the attacker believes it has achieved its target. However, it can never behave in ways for which it is not programmed, for instance to simulate the exploitation of new types of threats.
Among many other possibilities, they are used to generate exploitation statistics and to detect attack patterns and new malware. This last point is particularly interesting. On most occasions, malware takes advantage of vulnerabilities to download files (viruses) from a server. To evade antivirus software and remain as unnoticed as possible, this downloaded file varies greatly, and new variants may appear within a few hours. A honeypot can simulate the exploitation of that vulnerability and allow the new file to be downloaded. A honeypot can thus be an excellent systematic and automated collector of new variants of viruses and malware in general.
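The collector idea reduces to hashing each "downloaded" sample and keeping only hashes never seen before. The sketch below assumes SHA-256 as the fingerprint; the function names are illustrative, not from any real tool.

```python
import hashlib

# Each file the simulated exploit allows to be downloaded is hashed;
# only previously unseen hashes are kept as new malware variants.
seen = set()

def collect_sample(data: bytes):
    """Return the sample's digest if it is a new variant, else None."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in seen:
        return None        # known variant, already in the collection
    seen.add(digest)
    return digest          # new variant, worth sending to analysis
```

Because new variants may appear within hours, this deduplication step is what lets an unattended honeypot build a useful sample collection rather than thousands of identical copies.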
Just like high-interaction honeypots, these systems must be very well protected so that they do not turn against their administrator. An attacker, whether automated or human, could somehow reach the "meta-system" that hosts the simulated service and attack it.
V Honeymonkeys
For several years now, the most used attack vector on the Internet has been the browser. Security measures have improved, and it is increasingly difficult to exploit vulnerabilities in email client programs, previously the most used attack vector. In addition, the widespread use of firewalls has made it even harder for attackers to exploit vulnerabilities in the operating system. Therefore, as services (forums, chats, etc.) moved to the web, the browser became the favourite target. Simply by visiting a website, it is possible to exploit all kinds of vulnerabilities in the browser in order to execute code on the client and infect it.
Honeymonkeys emerged from this observation. Their main function, just like honeypots', is to detect new types of attacks and infection methods, and, like honeypots, they consist of a "scanning" module and a data-gathering module. In the case of honeymonkeys, however, the scanning is performed actively through browsers. A honeymonkey works as an automatic browsing system that visits all kinds of websites so that some of them will attempt to exploit vulnerabilities in the browser. Honeymonkeys are much more active than honeypots, in the sense that they "patrol" the network as if they were a user compulsively visiting website links.
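A real honeymonkey drives a full, deliberately vulnerable browser inside a virtual machine and watches for unexpected changes to the system. The sketch below captures only the control loop of that idea; `fetch` and `system_changed` are stand-ins supplied by the caller for the real browser automation and VM monitoring, which are far more involved.

```python
def patrol(urls, fetch, system_changed):
    """Visit each URL and flag any visit that produces side effects.

    fetch(url)        -- stand-in for driving the instrumented browser
    system_changed()  -- stand-in for detecting unexpected changes
                         (new files, new processes, registry edits...)
    """
    suspicious = []
    for url in urls:
        fetch(url)                  # "compulsively" visit the link
        if system_changed():        # any side effect after a mere visit
            suspicious.append(url)  # candidate drive-by exploit site
    return suspicious
```

Each flagged URL would then be revisited in a clean snapshot for detailed analysis, which is where the data-gathering module of the honeymonkey takes over.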
It was Microsoft that named them "monkeys", referring to the jumping, dynamic character of the actions they perform. With this method, just as with honeypots, new exploits, worms and other threats can be found, as long as all the collected information is appropriately analysed and processed.
Source: research.microsoft.com
VI Honeypots and honeynets
The aim of honeynets, just like that of honeypots, is to examine the techniques and tools used by attackers on the Internet. Their basic difference from honeypots is that they do not consist of a single computer, but of multiple systems and applications emulating many others, pretending to expose vulnerabilities or known services, or creating "cage" environments where attacks can be observed and analysed better. The two basic and indispensable requirements for constructing a honeynet are the so-called Data Control and Data Capture.
Data Control
This is the controlled restraint of information and connections. Dealing with attackers always entails a risk that must be minimised as much as possible, so it is essential to make sure that, once the honeypot is compromised, legitimate systems will not be compromised in turn.
The challenge lies in maintaining absolute control of the data flow without the attacker noticing it. The system cannot simply be closed off completely to avoid unnecessary traffic.
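One common Data Control technique is to count rather than block outbound connections: the attacker keeps some freedom of action, but is cut off past a threshold, which limits the damage a compromised honeypot can do. The sketch below illustrates only that counting idea; the limit of 15 connections is an arbitrary example, not a recommendation.

```python
# Illustrative outbound-connection limiter for Data Control.
# The attacker may open a few connections (so the activity looks real),
# but not enough to use the honeypot as a launch pad for new attacks.

MAX_OUTBOUND = 15   # example threshold per source address

counts = {}

def allow_outbound(src_ip: str) -> bool:
    """Permit an outbound connection only while under the threshold."""
    counts[src_ip] = counts.get(src_ip, 0) + 1
    return counts[src_ip] <= MAX_OUTBOUND
```

In a real honeynet this logic lives in an inline gateway, invisible to the attacker, so that the restriction itself does not give the honeypot away.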
A conclusion summarising the art of constructing a useful honeynet can be drawn from this situation: it is necessary to find the right balance between freedom of action for attackers, which entails greater risk, and actual security of the system, which may lead to less interesting results for the study.
Data Capture
This is the tracking and storage of the information sought, i.e. logs (data registries) of the attackers' actions, which will be analysed subsequently. It is necessary to gather as much information as possible, isolated from the legitimate traffic, while preventing the attacker from knowing their actions are being captured. To achieve this, it is essential to avoid storing results locally on the honeypot, since they could be detected and erased with the obvious purpose of leaving no trace of the attack. The information must be stored remotely and in layers: capture cannot be limited to a single layer of information, but must collect it from the widest possible range of resources. By combining all the computers and data layers, the desired picture of the attack is formed.
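The layering idea can be illustrated with a simple merge: each layer (say, a firewall, a network sniffer and the honeypot itself) records events independently, and combining them by timestamp builds the full picture that no single layer provides. The event format below is an assumption made for the example.

```python
# Sketch of layered Data Capture: each layer produces a list of
# (timestamp, source, event) tuples; merging them chronologically
# reconstructs the attack as a single timeline.

def merge_layers(*layers):
    events = [event for layer in layers for event in layer]
    return sorted(events, key=lambda event: event[0])
```

For example, a firewall entry showing the initial connection, followed by the honeypot's record of a login attempt, reads as one coherent sequence once merged.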
Virtual tools
The tools for building a honeypot or honeynet are highly varied, but the most common method is to build the honeypot on physical or virtual machines. Given the potential danger of using honeypots, and due to their very nature, the use of virtual tools is highly recommended and widely accepted. The advantages of a virtual system over a physical one are obvious:
• They can be restored within a few minutes in the event of an accident, disaster or compromise: most virtual systems let users store an "ideal" state and return to it at any time, much faster than repairing a physical system and returning it to a previous state.
• They save costs: a single physical machine can host an indefinite number of virtual machines, as many as its resources allow, and with as many operating systems as desired.
• VMware: the most well-known and widely used virtualization system. It can simulate machines executing any operating system, on any operating system. Many of its virtualization utilities are offered free of charge, such as VMware Player.
• VirtualBox: a free and open-source project from Sun. Just like VMware, it can simulate machines executing any operating system, on any operating system.
Source: INTECO
Few commercial tools cover the honeypot market; in the world of open source, however, many utilities are provided that can act as honeypots, both for companies and for home users:
Image 5: Specter Console, the most well-known commercial honeypot for Windows
Source: specter.com
KFSensor is another commercial tool, acting as both a honeypot and an IDS for Windows operating systems.