Shadow Honeypots
Kostas G. Anagnostakis1, Stelios Sidiroglou2, Periklis Akritidis1,3, Michalis Polychronakis4, Angelos D. Keromytis4, Evangelos P. Markatos5

1 Niometrics R&D, Singapore (kostas@niometrics.com)
2 Computer Science and Artificial Intelligence Laboratory, MIT, USA (stelios@csail.mit.edu)
3 University of Cambridge, UK (pa280@cl.cam.ac.uk)
4 Department of Computer Science, Columbia University, USA ({mikepo, angelos}@cs.columbia.edu)
5 Institute of Computer Science, Foundation for Research & Technology – Hellas, Greece (markatos@ics.forth.gr)
anomaly detection systems, and honeypots in a way that exploits the best features of these mechanisms, while shielding their limitations. We focus on transactional applications, i.e., those that handle a series of discrete requests. Our architecture is not limited to server applications, but can be used for client-side applications such as web browsers and P2P clients. As shown in Figure 2, the architecture is composed of three main components: a filtering engine, an array of anomaly detection

expensive instrumentation to detect attacks. The shadow and the regular application fully share state to avoid attacks that exploit differences between the two; we assume that an attacker can only interact with the application through the filtering and AD stages, i.e., there are no side-channels. The level of instrumentation used in the shadow depends on the amount of latency we are willing to impose on suspicious traffic (whether truly malicious or misclassified legitimate traffic). In our implementation, described in Section 3, we focus on memory-violation attacks, but any attack that can be determined algorithmically can be

attacker lures a victim user to download data containing an attack, as with the recent buffer overflow vulnerability in Internet Explorer's JPEG handling [25]. In this scenario, the context of an attack is an important consideration in replaying the attack in the shadow. It may range from data contained in a single packet to an entire flow, or even a set of flows. Alternatively, it may be defined at the application layer. For our testing scenario using HTTP, the request/response pair is a convenient context.

Tight coupling assumes that the application can be modified. The advantage of this configuration is that attacks that exploit differences in the state of the shadow vs. the application itself become impossible. However, it is also possible to deploy shadow honeypots in a loosely coupled configuration, where the shadow resides on a different system and does not share state with the protected application. The advantage of this configuration is that management of the shadows can be “outsourced” to a third entity.

Note that the filtering and anomaly detection components can also be tightly coupled with the protected application, or may be centralized at a natural aggregation point in the network topology (e.g., at the firewall).

Finally, it is worth considering how our system would behave against different types of attacks. For most attacks we have seen thus far, once the AD component has identified an anomaly and the shadow has validated it, the filtering component will block all future instances of it from getting to the application. However, we cannot depend on the filtering component to prevent polymorphic or metamorphic [26] attacks. For low-volume events, the cost of invoking the shadow for each attack may be acceptable. For high-volume events, such as a Slammer-like outbreak, the system will detect a large number of correct AD predictions (verified by the shadow) in a short period of time; should a configurable threshold be exceeded, the system can enable filtering at the second stage, based on the unverified verdict of the anomaly detectors. Although this will cause some legitimate requests to be dropped, this could be acceptable for the duration of the incident. Once the number of (perceived) attacks seen by the ADS drops below a threshold, the system can revert to normal operation.

3. Implementation

3.3 Filtering and Anomaly Detection

During the composition of our system, we were faced with numerous design issues with respect to performance and extensibility. When considering the deployment of the shadow honeypot architecture in a high-performance environment, such as a Web server farm, where speeds of at least 1 Gbit/s are common and we cannot afford to misclassify traffic, the choice for off-the-shelf components becomes very limited. To the best of our knowledge, current solutions, both standalone PCs and network-processor-based network intrusion detection systems (NIDSes), are well under the 1 Gbit/s mark [27], [28].

Faced with these limitations, we considered a distributed design, similar in principle to [29], [30]: we use a network processor (NP) as a scalable, custom load balancer, and implement all detection heuristics on an array of (modified) Snort sensors running on standard PCs that are connected to the network processor board. We chose not to implement any of the detection heuristics on the NP for two reasons. First, currently available NPs are designed primarily for simple forwarding and lack the processing capacity required for speeds in excess of 1 Gbit/s. Second, they remain harder to program and debug than standard general-purpose processors. For our implementation, we used the IXP1200 network processor. A high-level view of our implementation is shown in Figure 4.

Figure 4. High-level diagram of prototype shadow honeypot implementation.

A primary function of the anomaly detection sensor is the ability to divert potentially malicious requests to the shadow honeypot. For web servers in particular, a reasonable definition of the attack context is the HTTP request. For this purpose, the sensor must construct a request, run the detection heuristics, and forward the request depending on the outcome. This processing must be performed at the HTTP level; thus, an HTTP proxy-like function is needed. We implemented the anomaly detection sensors for the tightly coupled shadow server case by augmenting an HTTP proxy with the ability to apply the APE detection heuristic on incoming requests and route them according to its outcome.

For the shadow client scenario, we use an alternative solution based on passive monitoring. Employing the proxy approach in this situation would be prohibitively expensive, in terms of latency, since we only require detection capabilities. For this scenario, we reconstruct the TCP streams of HTTP connections and decode the HTTP protocol to extract suspicious objects.

As part of our proof-of-concept implementation we have used three anomaly detection heuristics: payload sifting, abstract payload execution, and network-level emulation. Payload sifting, as developed in [19], derives fingerprints of rapidly spreading worms by identifying popular substrings in network traffic. It is a prime example of an anomaly
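To make the payload-sifting idea concrete, the following minimal C sketch counts fixed-length substrings across observed payloads and flags any substring whose prevalence crosses a threshold. All names and parameters here (FP_LEN, the table size, fp_is_popular()) are illustrative assumptions of ours; the actual heuristic in [19] additionally tracks address dispersion and uses incremental fingerprinting, which this sketch omits.

```c
/* Toy payload sifting: count FP_LEN-byte substrings ("fingerprints")
 * across payloads; a substring seen in many payloads is worm-like. */
#include <string.h>

#define FP_LEN   4        /* fingerprint (substring) length, illustrative */
#define TABLE_SZ 4096     /* hash-table buckets */

struct fp_entry {
    char fp[FP_LEN];
    int  count;
    int  used;
};

static struct fp_entry table[TABLE_SZ];

static unsigned fp_hash(const char *s)
{
    unsigned h = 2166136261u;              /* FNV-1a over FP_LEN bytes */
    for (int i = 0; i < FP_LEN; i++)
        h = (h ^ (unsigned char)s[i]) * 16777619u;
    return h % TABLE_SZ;
}

/* Record every FP_LEN-byte substring of one observed payload. */
void sift_payload(const char *payload, int len)
{
    for (int i = 0; i + FP_LEN <= len; i++) {
        unsigned h = fp_hash(payload + i);
        while (table[h].used && memcmp(table[h].fp, payload + i, FP_LEN))
            h = (h + 1) % TABLE_SZ;        /* linear probing */
        if (!table[h].used) {
            table[h].used = 1;
            memcpy(table[h].fp, payload + i, FP_LEN);
        }
        table[h].count++;
    }
}

/* A substring is "popular" if it has been seen at least threshold times. */
int fp_is_popular(const char *fp, int threshold)
{
    unsigned h = fp_hash(fp);
    while (table[h].used && memcmp(table[h].fp, fp, FP_LEN))
        h = (h + 1) % TABLE_SZ;
    return table[h].used && table[h].count >= threshold;
}
```

A real deployment would also age counters out over time so that long-lived benign substrings (e.g., protocol keywords) do not accumulate indefinitely.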
(IJCNS) International Journal of Computer and Network Security, 5
Vol. 2, No. 9, September 2010
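The adaptive second-stage filtering policy described earlier (enable filtering on the unverified verdicts of the anomaly detectors once shadow-verified detections exceed a configurable threshold, and revert to normal operation once the perceived attack rate subsides) amounts to a small hysteresis state machine. The structure, names, and threshold values below are illustrative assumptions, not part of the paper's implementation.

```c
/* Hysteresis sketch of the two-mode filtering policy: enter aggressive
 * filtering on a burst of shadow-verified attacks, leave it only when
 * the perceived attack rate drops below a lower watermark. */

enum mode { MODE_NORMAL, MODE_AGGRESSIVE };

struct filter_state {
    enum mode mode;
    int high_water;   /* verified attacks per window to enter aggressive mode */
    int low_water;    /* perceived attacks per window to leave it */
};

/* Called once per measurement window with that window's attack counts. */
enum mode filter_update(struct filter_state *st,
                        int verified_attacks, int perceived_attacks)
{
    if (st->mode == MODE_NORMAL && verified_attacks >= st->high_water)
        st->mode = MODE_AGGRESSIVE;       /* Slammer-like outbreak */
    else if (st->mode == MODE_AGGRESSIVE && perceived_attacks < st->low_water)
        st->mode = MODE_NORMAL;           /* incident has subsided */
    return st->mode;
}
```

Using two distinct watermarks avoids oscillating between modes when the attack rate hovers near a single threshold.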
a return statement). We take care to properly handle the sizeof construct, a fairly straightforward task with TXL. Pointer aliasing is not a problem, since we instrument the allocated memory regions; any illegal accesses to these will be caught.

For memory allocation, we use our own version of malloc(), called pmalloc(), that allocates two additional zero-filled, write-protected pages that bracket the requested buffer, as shown in Figure 5. The guard pages are mmap()'ed from /dev/zero as read-only. As mmap() operates at memory-page granularity, every memory request is rounded up to the nearest page. The pointer that is returned by pmalloc() can be adjusted to immediately catch any buffer overflow or underflow, depending on where attention is focused. This functionality is similar to that offered by the ElectricFence memory-debugging library, the difference being that pmalloc() catches both buffer overflow and underflow attacks. Because we mmap() pages from /dev/zero, we do not waste physical memory for the guards (just page-table entries). Memory is wasted, however, for each allocated buffer, since we allocate up to the next closest page. While this can lead to considerable memory waste, we note that it is only incurred when executing in shadow mode, and in practice it has proven easily manageable.

Figure 6 shows an example of such a translation. Buffers that are already allocated via malloc() are simply switched to pmalloc(). This is achieved by examining declarations in the source and transforming them to pointers where the size is allocated with a malloc() function call. Furthermore, we adjust the C grammar to free the variables before the function returns. After making changes to the standard ANSI C grammar that allow entries such as malloc() to be inserted between declarations and statements, the transformation step is trivial. For single-threaded, non-reentrant code, it is possible to use pmalloc() only once for each previously static buffer. Generally, however, this allocation needs to be done each time the function is invoked.

Any overflow (or underflow) on a buffer allocated via pmalloc() will cause the process to receive a Segmentation Violation (SEGV) signal, which is caught by a signal handler we have added to the source code in main(). The signal handler simply notifies the operating system to abort all state changes made by the process while processing this request. To do this, we added a new system call to the operating system, transaction(). This is conditionally (as directed by the shadow_enable() macro) invoked at three locations in the code:

• Inside the main processing loop, prior to the beginning of handling of a new request, to indicate to the operating system that a new transaction has begun. The operating system makes a backup of all memory-page permissions, and marks all heap memory pages as read-only. As the process executes and modifies these pages, the operating system maintains a copy of the original page and allocates a new page (which is given the permissions the original page had, from the backup) for the process to use, in exactly the same way copy-on-write works in modern operating systems. Both copies of the page are maintained until transaction() is called again, as we describe below. This call to transaction() must be placed manually by the programmer or system designer.

• Inside the main processing loop, immediately after the end of handling a request, to indicate to the operating system that a transaction has successfully completed. The operating system then discards all original copies of memory pages that have been modified during the processing of this request. This call to transaction() must also be placed manually.

• Inside the signal handler that is installed automatically by our tool, to indicate to the operating system that an exception (attack) has been detected. The operating system then discards all modified memory pages by restoring the original pages.

Although we have not implemented this, a similar mechanism can be built around the filesystem, by using a private copy of the buffer cache for the process executing in shadow mode. The only difficulty arises when the process must itself communicate with another process while servicing a request; unless the second process is also included in the transaction definition (which may be impossible if it is a remote process on another system), overall system state may change without the ability to roll it back. For example, this may happen when a web server communicates with a remote back-end database. Our system does not currently address this, i.e., we assume that any such state changes are benign or irrelevant (e.g., a DNS query). Specifically for the case of a back-end database, these inherently support the concept of a transaction rollback, so it is possible to undo any changes.

The signal handler may also notify external logic to indicate that an attack associated with a particular input from a specific source has been detected. The external logic may then instantiate a filter, either based on the network source of the request or on the contents of the payload [20].

3.5 Using Feedback to Improve Network-level Detection

A significant benefit stemming from the combination of network-level anomaly detection techniques with host-level attack prevention mechanisms is that it allows for increasing the detection accuracy of current network-level detectors. This improvement may go beyond simply increasing the sensitivity of the detector and then mitigating the extra false positives through the shadow honeypot. In certain cases, it is also possible to enhance the robustness of the anomaly detection algorithm itself against evasion attacks. In this section, we describe how shadow honeypots enhance the detection ability of network-level emulation, one of the detection techniques that we have used in our implementation.
Network-level emulation [13], [31] is a passive network monitoring approach for the detection of previously unknown polymorphic shellcode. The approach relies on a NIDS-embedded CPU emulator that executes every potential instruction sequence in the inspected traffic, aiming to identify the execution behavior of polymorphic shellcode. The principle behind network-level emulation is that the machine-code interpretation of arbitrary data results in random code which, when run on an actual CPU, usually crashes soon, e.g., due to the execution of an illegal instruction. In contrast, if some network request actually contains a polymorphic shellcode, then the shellcode runs normally, exhibiting a certain detectable behavior.

Network-level emulation does not rely on any exploit- or vulnerability-specific signatures, which allows the detection of previously unknown attacks. Instead, it uses a generic heuristic that matches the runtime behavior of polymorphic shellcode. At the same time, the actual execution of the attack code on a CPU emulator makes the detector robust to evasion techniques such as highly obfuscated or self-modifying code. Furthermore, each input is inspected autonomously, which makes the approach effective against targeted attacks, while in our experience so far with real-world deployments it has not produced any false positives.

The detector inspects either or both directions of each network flow, which may contain malicious requests towards vulnerable services, or malicious content served by some compromised server towards a vulnerable client. Each input is mapped to a random memory location in the virtual address space of the emulator, as shown in Figure 7. Since the exact position of the shellcode within the input stream is not known in advance, the emulator repeats the execution multiple times, starting from each and every position of the stream. Before the beginning of a new execution, the state of the CPU is randomized, while any accidental memory modifications in the addresses where the attack vector has been mapped are rolled back after the end of each execution. The execution of polymorphic shellcode is identified by two key behavioral characteristics: the execution of some form of GetPC code, and the occurrence of several read operations from the memory addresses of the input stream itself, as illustrated in Figure 7. The GetPC code is used for finding the absolute address of the injected code, which is mandatory for subsequently decrypting the encrypted payload, and involves the execution of some instruction from the call or fstenv instruction groups.

Figure 7. A typical execution of a polymorphic shellcode using network-level emulation.

There exist situations in which the execution of benign inputs, which are interpreted by the emulator as random code, might not stop soon, or even not at all, due to the accidental formation of loop structures that may execute for a very large number of iterations. To avoid extensive performance degradation due to stalling on such seemingly “endless” loops, if the number of executed instructions for a given input reaches a certain execution threshold, the execution is terminated.

This unavoidable precaution introduces an opportunity for evasion attacks against the detection algorithm, through the placement of a seemingly endless loop before the decryptor code. An attacker could construct a decryptor that spends millions of instructions just on reaching the execution threshold before revealing any signs of polymorphic behavior. We cannot simply skip the execution of such loops, since the loop body may perform a crucial computation for the subsequent correct execution of the decoder, e.g., computing the decryption key.

Such “endless” loops are a well-known problem in the area of dynamic code analysis [33], and we are not aware of any effective solution so far. However, employing network-level emulation as a first-stage detector for shadow honeypots mitigates this problem. Without shadow honeypot support, the network-level detector does not alert on inputs that reach the execution threshold without exhibiting signs of malicious behavior, which can potentially result in false negatives. In contrast, when coupling network-level emulation with shadow honeypots, such undecidable inputs can be treated more conservatively, by considering them as potentially dangerous and redirecting them to the shadow version of the protected service. If an undecidable input indeed corresponds to a code injection attack, then it will be detected by the shadow honeypot. In Section 4.3 we show, through analysis of real network traffic, that the number of such streams that are undecidable in reasonable time (and thus have to be forwarded to the shadow) is a small, manageable fraction of the overall traffic.

4. Experimental Evaluation

We have tested our shadow honeypot implementation against a number of exploits, including a recent Mozilla PNG bug and several Apache-specific exploits. In this section, we report on performance benchmarks that illustrate the efficacy of our implementation.

First, we measure the cost of instantiating and operating shadow instances of specific services using the Apache web server and the Mozilla Firefox web browser. Second, we evaluate the filtering and anomaly detection components, and determine the throughput of the IXP1200-based load balancer as well as the cost of running the detection heuristics. Third, we look at the false positive rates and the trade-offs associated with detection performance. Based on these results, we determine how to tune the anomaly detection heuristics in order to increase detection performance while not exceeding the budget allotted by the shadow services.
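The execution-threshold safeguard described above, and the treatment of threshold-hitting inputs as undecidable, can be illustrated with a toy interpreter that aborts once an instruction budget is exhausted. The opcode set and function names are invented for illustration; a real network-level emulator executes IA-32 instructions decoded from the traffic itself.

```c
/* Toy emulator with an execution threshold: linear "random code" halts or
 * runs off the input quickly, while loop structures burn the budget and
 * are reported as undecidable (to be forwarded to the shadow). */

enum { OP_NOP = 0, OP_JMP_BACK = 1, OP_HALT = 2 };

/* Returns the number of instructions executed before terminating, or -1
 * if the execution threshold was reached (an undecidable input). */
long emulate(const unsigned char *code, long len, long threshold)
{
    long pc = 0, executed = 0;

    while (pc >= 0 && pc < len) {
        if (++executed > threshold)
            return -1;                    /* threshold hit: undecidable */
        switch (code[pc]) {
        case OP_HALT:
            return executed;              /* terminated normally */
        case OP_JMP_BACK:
            pc -= 1;                      /* a trivial loop structure */
            break;
        default:                          /* everything else acts as a NOP */
            pc += 1;
            break;
        }
    }
    return executed;                      /* ran off the end of the input */
}
```

In a shadow honeypot deployment, the -1 (undecidable) outcome maps to “redirect to the shadow” rather than “no alert”, which is exactly the conservative policy the section argues for.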
For the standard page load configuration, the performance degradation for instrumentation was 35%. For the scrolling configuration, where in addition to the page load time the time taken to scroll through the page is recorded, the overhead was 50%.

Figure 9. Normalized Mozilla Firefox benchmark results using a modified version of i-Bench.

Figure 10. Popularity of different Mozilla versions, as measured in the logs of the CIS Department Web server at the University of Pennsylvania.

The results follow our intuition, as more calls to malloc() are required to fully render the page. Figure 9 illustrates the normalized performance results. It should be noted that, depending on the browser implementation (whether the entire page is rendered on page load), mechanisms such as the automatic scrolling need to be implemented in order to protect against targeted attacks. Attackers may hide malicious code in unrendered parts of a page or in JavaScript code activated by user-guided pointer movement.

How many different browser versions would have to be checked by the system? Figure 10 presents some statistics concerning different versions of Mozilla. The statistics were collected over a five-week period from the CIS Department web server at the University of Pennsylvania. As evidenced by the figure, one can expect to check up to six versions of a particular client. We expect that this distribution will become more stable around final release versions, and we expect to minimize the number of different versions that need to be checked based on their popularity.

4.2 Filtering and Anomaly Detection

IXP1200-based firewall/load-balancer: We first determine the performance of the IXP1200-based firewall/load balancer. The IXP1200 evaluation board we use has two Gigabit Ethernet interfaces and eight Fast Ethernet interfaces. The Gigabit Ethernet interfaces are used to connect to the internal and external network, and the Fast Ethernet interfaces to communicate with the sensors. A set of client workstations is used to generate traffic through the firewall. The firewall forwards traffic to the sensors for processing, and the sensors determine if the traffic should be dropped, redirected to the shadow honeypot, or forwarded to the internal network.

Previous studies [37] have reported forwarding rates of at least 1600 Mbit/s for the IXP1200 when used as a simple forwarder/router, which is sufficient to saturate a Gigabit Ethernet interface. Our measurements show that despite the added cost of load balancing, filtering, and coordinating with the sensors, the firewall can still handle the Gigabit interface at line rate.

Figure 11. Utilization (%) of the IXP1200 microengines, for forwarding-only (FWD), load-balancing-only (LB), both (LB+FWD), and full implementation (FULL), in stress tests with 800 Mbit/s worst-case 64-byte-packet traffic.

To gain insight into the actual overhead of our implementation, we carry out a second experiment using Intel's cycle-accurate IXP1200 simulator. We assume a clock frequency of 232 MHz for the IXP1200, and an IX bus configured to be 64-bit wide with a clock frequency of 104 MHz. In the simulated environment, we obtain detailed utilization measurements for the microengines of the IXP1200. The results, shown in Figure 11, indicate that even at line rate with worst-case traffic the implementation is quite efficient, as the microengines operate at 50.9%-71.5% of their processing capacity.

PC-based sensor performance: In this experiment, we measure the throughput of the PC-based sensors that cooperate with the IXP1200 for analyzing traffic and performing anomaly detection. We use a 2.66 GHz Pentium IV Xeon processor with hyper-threading disabled. The PC has 512 Mbytes of DDR DRAM at 266 MHz. The PCI bus is
often a benign inspected input may look “suspicious” and cause a redirection to the shadow honeypot. If the fraction of such undecidable inputs is large, then the shadow server may be overloaded with a higher request rate than it can normally handle. To evaluate this effect, we used full payload traces of real network traffic captured at ICS-FORTH and the University of Crete. The set of traces contains more than 2.5 million user requests to ports 80, 445, and 139, which are related to the most exploited vulnerabilities.

Figure 14. Percentage of benign network streams reaching the execution threshold of the network-level detector.

Figure 14 shows the percentage of streams with at least one instruction sequence that, when executed on the CPU emulator of the network-level detector, reached the given execution threshold. As the execution threshold increases, the number of streams that reach it decreases. This effect occurs only for low threshold values, due to large code blocks with no branch instructions that are executed linearly. For example, the execution of linear code blocks with more than 256 but fewer than 512 valid instructions is terminated before reaching the end when using a threshold of 256 instructions, but completes correctly with a threshold of 512 instructions. However, the occurrence probability of such blocks is inversely proportional to their length, due to the illegal or privileged instructions that accidentally occur in random code. Thus, the percentage of streams that reach the execution threshold stabilizes beyond the value of 2048. After this value, the execution threshold is reached solely due to instruction sequences with “endless” loops, which usually require a prohibitive number of instructions for the slow CPU emulator to complete.

Fortunately, for an execution threshold above 2048 instructions, which allows for accurate polymorphic shellcode detection with decent operational throughput [13], the fraction of streams that reach the execution threshold is only around 4% for port 445, 2.6% for port 139, and 0.1% for port 80. Binary traffic (ports 445 and 139) is clearly more likely to result in an instruction sequence that reaches the execution threshold than the mostly ASCII traffic of port 80. In any case, even in the worst case of binary-only traffic, the percentage of benign streams that reach the execution threshold is very small, so the extra overhead incurred to the shadow server is modest.

5. Limitations

There are two limitations of the shadow honeypot design presented in this paper that we are aware of. The effectiveness of the rollback mechanism depends on the proper placement of calls to transaction() for committing state changes, and on the latency of the detector. The detector used in this paper can instantly detect attempts to overwrite a buffer, and therefore the system cannot be corrupted. Other detectors, however, may have higher latency, and the placement of commit calls is critical to recovering from the attack. Depending on the detector latency and how it relates to the cost of implementing rollback, one may have to consider different approaches. The trade-offs involved in designing such mechanisms are thoroughly examined in the fault-tolerance literature (cf. [39]).

Furthermore, the loosely coupled client shadow honeypot is limited to protecting against relatively static attacks. The honeypot cannot effectively emulate user behavior that may be involved in triggering the attack, for example through DHTML or JavaScript. The loosely coupled version is also weak against attacks that depend on local system state on the user's host that is difficult to replicate. This is not a problem with tightly coupled shadows, because we accurately mirror the state of the real system. In some cases, it may be possible to mirror state on loosely coupled shadows as well, but we have not considered this case in the experiments presented in this paper.

6. Related Work

Much of the work in automated attack reaction has focused on the problem of network worms, which has taken truly epidemic dimensions (pun intended). For example, the system described in [24] detects worms by monitoring probes to unassigned IP addresses (“dark space”) or inactive ports, and computing statistics on scan traffic, such as the number of source/destination addresses and the volume of the captured traffic. By measuring the increase in the number of source addresses seen in a unit of time, it is possible to infer the existence of a new worm when as little as 4% of the vulnerable machines have been infected. A similar approach for isolating infected nodes inside an enterprise network [40] is taken in [23], where it was shown that as few as four probes may be sufficient to detect a new port-scanning worm.

Smirnov and Chiueh [41] describe an approximating algorithm for quickly detecting scanning activity that can be efficiently implemented in hardware. Newsome et al. [42] describe a combination of reverse sequential hypothesis testing and credit-based connection throttling to quickly detect and quarantine local infected hosts. These systems are effective only against scanning worms (not topological or “hit-list” worms), and rely on the assumption that most scans will result in non-connections. As such, they are susceptible to false positives, either accidentally (e.g., when
a host is joining a peer-to-peer network such as Gnutella, or during a temporary network outage) or on purpose (e.g., a malicious web page with many links to images in random/unused IP addresses). Furthermore, it may be possible for several instances of a worm to collaborate in providing the illusion of several successful connections, or to use a list of known repliers to blind the anomaly detector. Another algorithm for finding fast-spreading worms, using two-level filtering based on sampling from the set of distinct source-destination pairs, is described in [43].

Wu et al. [22] describe an algorithm for correlating packet payloads from different traffic flows, towards deriving a worm signature that can then be filtered [44]. The technique is promising, although further improvements are required to allow it to operate in real time. Earlybird [19] presents a more practical algorithm for doing payload sifting, and correlates these with a range of unique sources generating infections and destinations being targeted. However, polymorphic and metamorphic worms [26] remain a challenge; Spinellis [45] shows that it is an NP-hard problem. Vigna et al. [46] discuss a method for testing detection signatures against mutations of known vulnerabilities to determine the quality of the detection model and mechanism. Polygraph [47] attempts to detect polymorphic exploits by identifying common invariants among the various attack instances, such as return addresses, protocol framing, and poor obfuscation.

Toth and Kruegel [10] propose to detect buffer overflow payloads (including previously unseen ones) by treating inputs received over the network as code fragments. They use restricted symbolic execution to show that legitimate requests will appear to contain relatively short sequences of valid x86 instruction opcodes, compared to attacks that will contain long sequences. They integrate this mechanism into the Apache web server, resulting in a small performance degradation. STRIDE [48] is a similar system that seeks to detect polymorphic NOP-sleds in buffer overflow exploits. [49] describes a hybrid polymorphic-code detection engine that combines several heuristics, including a NOP-sled detector and abstract payload execution.

HoneyStat [3] runs sacrificial services inside a virtual machine, and monitors memory, disk, and network events to detect abnormal behavior. For some classes of attacks (e.g., buffer overflows), this can produce highly accurate alerts with relatively few false positives, and can detect zero-day worms. Although the system only protects against scanning worms, “active honeypot” techniques [4] may be used to make it more difficult for an automated attacker to differentiate between HoneyStats and real servers. FLIPS (Feedback Learning IPS) [50] is a similar hybrid approach that incorporates a supervision framework in the presence of

for pushing to workstations vulnerability-specific, application-aware filters expressed as programs in a simple language.

The Internet Motion Sensor [7] is a distributed blackhole monitoring system aimed at measuring, characterizing, and tracking Internet-based threats, including worms. [53] explores the various options in locating honeypots and correlating their findings, and their impact on the speed and accuracy of detecting worms and other attacks. [54] shows that a distributed worm monitor can detect non-uniform scanning worms two to four times as fast as a centralized telescope [55], and that knowledge of the vulnerability density of the population can further improve detection time. However, other recent work has shown that it is relatively straightforward for attackers to detect the placement of certain types of sensors [56], [57]. Shadow honeypots [58] are one approach to avoiding such mapping, by pushing honeypot-like functionality to the end hosts.

The HACQIT architecture [59], [60], [61], [62] uses various sensors to detect new types of attacks against secure servers, access to which is limited to small numbers of users at a time. Any deviation from expected or known behavior results in the possibly subverted server being taken off-line. A sandboxed instance of the server is used to conduct “clean room” analysis, comparing the outputs from two different implementations of the service (in their prototype, the Microsoft IIS and Apache web servers were used to provide application diversity). Machine-learning techniques are used to generalize attack features from observed instances of the attack. Content-based filtering is then used, either at the firewall or the end host, to block inputs that may have resulted in attacks, and the infected servers are restarted. Due to the feature-generalization approach, trivial variants of the attack will also be caught by the filter. [8] takes a roughly similar approach, although filtering is done based on port numbers, which can affect service availability.

Cisco's Network-Based Application Recognition (NBAR) [21] allows routers to block TCP sessions based on the presence of specific strings in the TCP stream. This feature was used to block CodeRed probes without affecting regular web-server access. Porras et al. [63] argue that hybrid defenses using complementary techniques (in their case, connection throttling at the domain gateway and a peer-based coordination mechanism) can be much more effective against a wide variety of worms.

DOMINO [64] is an overlay system for cooperative intrusion detection. The system is organized in two layers, with a small core of trusted nodes and a larger collection of nodes connected to the core. The experimental analysis demonstrates that a coordinated approach has the potential of providing early warning for large-scale attacks while
suspicious traffic. Instruction-set randomization is used to reducing potential false alarms. A similar approach using a
isolate attack vectors, which are used to train the anomaly DHT-based overlay network to automatically correlate all
detector. The authors of [51] propose to enhance NIDS relevant information is described in [65]. Malkhi and Reiter
alerts using host-based IDS information. Nemean [52] is an [66] describe an architecture for an early warning system
architecture for generating semantics-aware signatures, where the participating nodes/routers propagate alarm
which are signatures aware of protocol semantics (as reports towards a centralized site for analysis. The question
opposed to general byte strings). Shield [20] is a mechanism of how to respond to alerts is not addressed, and, similar to
(IJCNS) International Journal of Computer and Network Security, 13
Vol. 2, No. 9, September 2010
DOMINO, the use of a centralized collection and analysis facility is weak against worms attacking the early warning infrastructure.

Suh et al. [67] propose a hardware-based solution that can be used to thwart control-transfer attacks and restrict executable instructions by monitoring "tainted" input data. In order to identify "tainted" data, they rely on the operating system. If the processor detects the use of this tainted data as a jump address or an executed instruction, it raises an exception that can be handled by the operating system. The authors do not address the issue of recovering program execution and suggest the immediate termination of the offending process. DIRA [68] is a technique for automatic detection, identification, and repair of control-hijacking attacks. This solution is implemented as a GCC compiler extension that transforms a program's source code, adding heavy instrumentation so that the resulting program can perform these tasks. Unfortunately, the performance implications of the system make it unusable as a front-line defense mechanism. Song and Newsome [69] propose dynamic taint analysis for the automatic detection of overwrite attacks. Tainted data is monitored throughout the program execution, and modified buffers with tainted information result in protection faults. Once an attack has been identified, signatures are generated using automatic semantic analysis. The technique is implemented as an extension to Valgrind and does not require any modifications to the program's source code, but suffers from severe performance degradation. One way of minimizing this penalty is to make the CPU aware of memory tainting [70]. Crandall et al. report on using a taint-based system for capturing live attacks in [71].

The Safe Execution Environment (SEE) [72] allows users to deploy and test untrusted software without fear of damaging their system. This is done by creating a virtual environment where the software has read access to the real data; all writes are local to this virtual environment. The user can inspect these changes and decide whether to commit them or not. We envision using this technique for unrolling the effects of filesystem changes in our system, as part of our future work plans. A similar proposal is presented in [73] for executing untrusted Java applets in a safe "playground" that is isolated from the user's environment.

7. Conclusion

We have described a novel approach to dealing with zero-day attacks by combining features found today in honeypots and anomaly detection systems. The main advantage of this architecture is that it gives system designers the ability to fine-tune systems with impunity, since any false positives (legitimate traffic) will be filtered by the underlying components. We have implemented this approach in an architecture called Shadow Honeypots. In this approach, we employ an array of anomaly detectors to monitor and classify all traffic to a protected network; traffic deemed anomalous is processed by a shadow honeypot, a protected, instrumented instance of the application we are trying to protect. Attacks against the shadow honeypot are detected and caught before they infect the state of the protected application. This enables the system to implement policies that trade off between performance and risk, retaining the capability to re-evaluate this trade-off effortlessly.

Our experience so far indicates that, despite the considerable cost of processing suspicious traffic on our shadow honeypots and the overhead imposed by instrumentation, such systems are capable of sustaining the overall workload of protecting services such as a Web server farm, as well as vulnerable Web browsers. We have also demonstrated how the impact on performance can be minimized by reducing the rate of false positives and tuning the AD heuristics using a feedback loop with the shadow honeypot. We believe that shadow honeypots can form the foundation of a type of application community.

Acknowledgments

This material is based on research sponsored by the Air Force Research Laboratory under agreement number FA8750-06-2-0221, and by the National Science Foundation under NSF Grant CNS-09-14845. Evangelos Markatos is also with the University of Crete.

References

[1] M. Roesch. Snort: Lightweight intrusion detection for networks. In Proceedings of USENIX LISA, November 1999. (Software available from http://www.snort.org/.)
[2] N. Provos. A Virtual Honeypot Framework. In Proceedings of the 13th USENIX Security Symposium, pages 1–14, August 2004.
[3] D. Dagon, X. Qin, G. Gu, W. Lee, J. Grizzard, J. Levine, and H. Owen. HoneyStat: Local Worm Detection Using Honeypots. In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), pages 39–58, October 2004.
[4] V. Yegneswaran, P. Barford, and D. Plonka. On the Design and Use of Internet Sinks for Network Abuse Monitoring. In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), pages 146–165, October 2004.
[5] L. Spitzner. Honeypots: Tracking Hackers. Addison-Wesley, 2003.
[6] J. G. Levine, J. B. Grizzard, and H. L. Owen. Using Honeynets to Protect Large Enterprise Networks. IEEE Security & Privacy, 2(6):73–75, Nov./Dec. 2004.
[7] M. Bailey, E. Cooke, F. Jahanian, J. Nazario, and D. Watson. The Internet Motion Sensor: A Distributed Blackhole Monitoring System. In Proceedings of the 12th ISOC Symposium on Network and Distributed Systems Security (SNDSS), pages 167–179, February 2005.
[8] T. Toth and C. Kruegel. Connection-history Based Anomaly Detection. In Proceedings of the IEEE Workshop on Information Assurance and Security, June 2002.
[9] K. Wang and S. J. Stolfo. Anomalous Payload-based Network Intrusion Detection. In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), pages 201–222, September 2004.
[10] T. Toth and C. Kruegel. Accurate Buffer Overflow Detection via Abstract Payload Execution. In Proceedings of the 5th Symposium on Recent Advances in Intrusion Detection (RAID), October 2002.
[11] M. Bhattacharyya, M. G. Schultz, E. Eskin, S. Hershkop, and S. J. Stolfo. MET: An Experimental System for Malicious Email Tracking. In Proceedings of the New Security Paradigms Workshop (NSPW), pages 1–12, September 2002.
[12] C. Kruegel and G. Vigna. Anomaly Detection of Web-based Attacks. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS), pages 251–261, October 2003.
[13] M. Polychronakis, E. P. Markatos, and K. G. Anagnostakis. Network-level polymorphic shellcode detection using emulation. In Proceedings of the Third Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA), pages 54–73, July 2006.
[14] CERT Advisory CA-2001-19: 'Code Red' Worm Exploiting Buffer Overflow in IIS Indexing Service DLL. http://www.cert.org/advisories/CA-2001-19.html, July 2001.
[15] CERT Advisory CA-2003-04: MS-SQL Server Worm. http://www.cert.org/advisories/CA-2003-04.html, January 2003.
[16] S. Staniford, V. Paxson, and N. Weaver. How to Own the Internet in Your Spare Time. In Proceedings of the 11th USENIX Security Symposium, pages 149–167, August 2002.
[17] S. Staniford, D. Moore, V. Paxson, and N. Weaver. The Top Speed of Flash Worms. In Proceedings of the ACM Workshop on Rapid Malcode (WORM), pages 33–42, October 2004.
[18] US-CERT Technical Cyber Security Alert TA04-217A: Multiple Vulnerabilities in libpng. http://www.us-cert.gov/cas/techalerts/TA04-217A.html, August 2004.
[19] S. Singh, C. Estan, G. Varghese, and S. Savage. Automated worm fingerprinting. In Proceedings of the 6th Symposium on Operating Systems Design & Implementation (OSDI), December 2004.
[20] H. J. Wang, C. Guo, D. R. Simon, and A. Zugenmaier. Shield: Vulnerability-Driven Network Filters for Preventing Known Vulnerability Exploits. In Proceedings of the ACM SIGCOMM Conference, pages 193–204, August 2004.
[21] Using Network-Based Application Recognition and Access Control Lists for Blocking the "Code Red" Worm at Network Ingress Points. Technical report, Cisco Systems, Inc.
[22] H. Kim and B. Karp. Autograph: Toward Automated, Distributed Worm Signature Detection. In Proceedings of the 13th USENIX Security Symposium, pages 271–286, August 2004.
[23] J. Jung, V. Paxson, A. W. Berger, and H. Balakrishnan. Fast Portscan Detection Using Sequential Hypothesis Testing. In Proceedings of the IEEE Symposium on Security and Privacy, May 2004.
[24] J. Wu, S. Vangala, L. Gao, and K. Kwiat. An Effective Architecture and Algorithm for Detecting Worms with Various Scan Techniques. In Proceedings of the ISOC Symposium on Network and Distributed System Security (SNDSS), pages 143–156, February 2004.
[25] Microsoft Security Bulletin MS04-028, September 2004. http://www.microsoft.com/technet/security/Bulletin/MS04-028.mspx.
[26] P. Ször and P. Ferrie. Hunting for Metamorphic. Technical report, Symantec Corporation, June 2003.
[27] C. Clark, W. Lee, D. Schimmel, D. Contis, M. Kone, and A. Thomas. A Hardware Platform for Network Intrusion Detection and Prevention. In Proceedings of the 3rd Workshop on Network Processors and Applications (NP3), February 2004.
[28] L. Schaelicke, T. Slabach, B. Moore, and C. Freeland. Characterizing the Performance of Network Intrusion Detection Sensors. In Proceedings of Recent Advances in Intrusion Detection (RAID), September 2003.
[29] Top Layer Networks. http://www.toplayer.com.
[30] C. Kruegel, F. Valeur, G. Vigna, and R. Kemmerer. Stateful Intrusion Detection for High-Speed Networks. In Proceedings of the IEEE Symposium on Security and Privacy, pages 285–294, May 2002.
[31] M. Polychronakis, E. P. Markatos, and K. G. Anagnostakis. Emulation-based detection of non-self-contained polymorphic shellcode. In Proceedings of the 10th International Symposium on Recent Advances in Intrusion Detection (RAID), September 2007.
[32] A. J. Malton. The Denotational Semantics of a Functional Tree-Manipulation Language. Computer Languages, 19(3):157–168, 1993.
[33] P. Ször. The Art of Computer Virus Research and Defense. Addison-Wesley Professional, February 2005.
[34] ApacheBench: A complete benchmarking and regression testing suite. http://freshmeat.net/projects/apachebench/, July 2003.
[35] Microsoft Security Bulletin MS04-028: Buffer Overrun in JPEG Processing Could Allow Code Execution. http://www.microsoft.com/technet/security/bulletin/MS04-028.mspx, September 2004.
[36] i-Bench. http://www.veritest.com/benchmarks/i-bench/default.asp.
[37] T. Spalink, S. Karlin, L. Peterson, and Y. Gottlieb. Building a Robust Software-Based Router Using Network Processors. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), pages 216–229, Chateau Lake Louise, Banff, Alberta, Canada, October 2001.
[38] P. Akritidis, K. Anagnostakis, and E. P. Markatos. Efficient content-based fingerprinting of zero-day worms. In Proceedings of the IEEE International Conference on Communications (ICC), May 2005.
[39] E. N. Elnozahy, L. Alvisi, Y.-M. Wang, and D. B. Johnson. A survey of rollback-recovery
protocols in message-passing systems. ACM Comput. Surv., 34(3):375–408, 2002.
[40] S. Staniford. Containment of Scanning Worms in Enterprise Networks. Journal of Computer Security, 2005. (To appear.)
[41] N. Weaver, S. Staniford, and V. Paxson. Very Fast Containment of Scanning Worms. In Proceedings of the 13th USENIX Security Symposium, pages 29–44, August 2004.
[42] S. E. Schechter, J. Jung, and A. W. Berger. Fast Detection of Scanning Worm Infections. In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), October 2004.
[43] S. Venkataraman, D. Song, P. B. Gibbons, and A. Blum. New Streaming Algorithms for Fast Detection of Superspreaders. In Proceedings of the 12th ISOC Symposium on Network and Distributed Systems Security (SNDSS), pages 149–166, February 2005.
[44] D. Moore, C. Shannon, G. Voelker, and S. Savage. Internet Quarantine: Requirements for Containing Self-Propagating Code. In Proceedings of the IEEE Infocom Conference, April 2003.
[45] D. Spinellis. Reliable identification of bounded-length viruses is NP-complete. IEEE Transactions on Information Theory, 49(1):280–284, January 2003.
[46] G. Vigna, W. Robertson, and D. Balzarotti. Testing Network-based Intrusion Detection Signatures Using Mutant Exploits. In Proceedings of the 11th ACM Conference on Computer and Communications Security (CCS), pages 21–30, October 2004.
[47] J. Newsome, B. Karp, and D. Song. Polygraph: Automatically Generating Signatures for Polymorphic Worms. In Proceedings of the IEEE Security & Privacy Symposium, pages 226–241, May 2005.
[48] P. Akritidis, E. P. Markatos, M. Polychronakis, and K. Anagnostakis. STRIDE: Polymorphic Sled Detection through Instruction Sequence Analysis. In Proceedings of the 20th IFIP International Information Security Conference (IFIP/SEC), June 2005.
[49] U. Payer, P. Teufl, and M. Lamberger. Hybrid Engine for Polymorphic Shellcode Detection. In Proceedings of the Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA), July 2005.
[50] M. Locasto, K. Wang, A. Keromytis, and S. Stolfo. FLIPS: Hybrid Adaptive Intrusion Prevention. In Proceedings of the 8th Symposium on Recent Advances in Intrusion Detection (RAID), September 2005.
[51] H. Dreger, C. Kreibich, V. Paxson, and R. Sommer. Enhancing the Accuracy of Network-based Intrusion Detection with Host-based Context. In Proceedings of the Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA), July 2005.
[52] V. Yegneswaran, J. T. Giffin, P. Barford, and S. Jha. An Architecture for Generating Semantics-Aware Signatures. In Proceedings of the 14th USENIX Security Symposium, pages 97–112, August 2005.
[53] E. Cook, M. Bailey, Z. M. Mao, and D. McPherson. Toward Understanding Distributed Blackhole Placement. In Proceedings of the ACM Workshop on Rapid Malcode (WORM), pages 54–64, October 2004.
[54] M. A. Rajab, F. Monrose, and A. Terzis. On the Effectiveness of Distributed Worm Monitoring. In Proceedings of the 14th USENIX Security Symposium, pages 225–237, August 2005.
[55] D. Moore, G. Voelker, and S. Savage. Inferring Internet Denial-of-Service Activity. In Proceedings of the 10th USENIX Security Symposium, pages 9–22, August 2001.
[56] J. Bethencourt, J. Franklin, and M. Vernon. Mapping Internet Sensors With Probe Response Attacks. In Proceedings of the 14th USENIX Security Symposium, pages 193–208, August 2005.
[57] Y. Shinoda, K. Ikai, and M. Itoh. Vulnerabilities of Passive Internet Threat Monitors. In Proceedings of the 14th USENIX Security Symposium, pages 209–224, August 2005.
[58] K. G. Anagnostakis, S. Sidiroglou, P. Akritidis, K. Xinidis, E. P. Markatos, and A. D. Keromytis. Detecting Targeted Attacks Using Shadow Honeypots. In Proceedings of the 14th USENIX Security Symposium, pages 129–144, August 2005.
[59] J. E. Just, L. A. Clough, M. Danforth, K. N. Levitt, R. Maglich, J. C. Reynolds, and J. Rowe. Learning Unknown Attacks – A Start. In Proceedings of the 5th International Symposium on Recent Advances in Intrusion Detection (RAID), October 2002.
[60] J. C. Reynolds, J. Just, E. Lawson, L. Clough, and R. Maglich. The Design and Implementation of an Intrusion Tolerant System. In Proceedings of the International Conference on Dependable Systems and Networks (DSN), June 2002.
[61] J. C. Reynolds, J. Just, E. Lawson, L. Clough, and R. Maglich. Online Intrusion Protection by Detecting Attacks with Diversity. In Proceedings of the 16th Annual IFIP 11.3 Working Conference on Data and Application Security, April 2002.
[62] J. C. Reynolds, J. Just, L. Clough, and R. Maglich. On-Line Intrusion Detection and Attack Prevention Using Diversity, Generate-and-Test, and Generalization. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS), January 2003.
[63] P. Porras, L. Briesemeister, K. Levitt, J. Rowe, and Y.-C. A. Ting. A Hybrid Quarantine Defense. In Proceedings of the ACM Workshop on Rapid Malcode (WORM), pages 73–82, October 2004.
[64] V. Yegneswaran, P. Barford, and S. Jha. Global Intrusion Detection in the DOMINO Overlay System. In Proceedings of the ISOC Symposium on Network and Distributed System Security (SNDSS), February 2004.
[65] M. Cai, K. Hwang, Y.-K. Kwok, S. Song, and Y. Chen. Collaborative Internet Worm Containment. IEEE Security & Privacy Magazine, 3(3):25–33, May/June 2005.
[66] C. C. Zou, L. Gao, W. Gong, and D. Towsley. Monitoring and Early Warning for Internet Worms. In
Abstract: A low-power analog filter to be used in UMTS and WLAN applications is reported. The 4th-order low-pass continuous-time filter is to be included in the receiver path of a reconfigurable terminal. The filter is made up of the cascade of two Active-Gm-RC low-pass biquadratic cells. The unity-gain bandwidth of the opamps embedded in the Active-Gm-RC cells is comparable to the filter cut-off frequency; thus, the power consumption of the opamps can be strongly reduced. In addition, the filter can be programmed in order to process UMTS and WLAN signals. A 4th-order low-pass filter with 3 MHz cut-off frequency and a DC gain of 40 dB for a UMTS receiver has been designed in 0.18 μm CMOS technology with a 1.2 V supply voltage. The filter has a power dissipation of 122 µW and an input-referred (spot) noise of 13.25 nV @ 2 MHz.

Keywords: Analog Filters, CMOS, low voltage, zero-IF receivers.

1. Introduction

linearity. The Active Gm-RC approach for realizing the filter is therefore proposed here, in which linearity is high and power consumption is low. Simulations confirm the excellent properties of the proposed circuit.

In Section 2, the Active Gm-RC cell is proposed. The filter consists of opamps and passive elements. The opamp frequency response is taken into account in the synthesis of the overall transfer function of the filter: the opamp frequency response is fixed, and the external components are designed as a function of it. This makes the overall transfer function depend fully on the opamp.

2. Low Pass Filter

The filter consists of two biquad cells connected in cascade. The figure shows the biquad cell using the Active Gm-RC technique; it is equivalent to a second-order low-pass filter. The opamp used in the structure has a single-pole transfer function given by –
The transfer function of the filter using the Butterworth approximation is as shown below:

T(s) = 1 / [(s² + 0.7654 s + 1)(s² + 1.8478 s + 1)]    (3)

Table 1: Baseband filter requirements

Specifications       Value
Order                4th
Transfer function    Butterworth
DC gain              40 dB
Cut-off frequency    2 MHz

                     Cell 1        Cell 2
Order                2nd           2nd
Transfer function    Butterworth   Butterworth
DC gain              20 dB         20 dB
Cut-off frequency    2 MHz         2 MHz
Quality factor       1.3066        0.5412

Table 2: MOSFET and passive element device sizes used in the opamps

Devices   Opamp for Cell 1   Opamp for Cell 2
M1=M2     9.47μ/0.18μ        12.47μ/0.18μ
M3=M4     9.47μ/0.18μ        12.47μ/0.18μ
M5=M6     18.94μ/0.18μ       24.94μ/0.18μ
M7=M8     18.94μ/0.18μ       24.94μ/0.18μ
M9        0.24μ/0.18μ        0.48μ/0.18μ
Rc        680 Ω              920 Ω
Cc        2.4 pF             4.8 pF
IB        20 μA              20 μA

Table 3: Filter passive elements

Specifications   Cell 1    Cell 2
R1               3.17 kΩ   2.8 kΩ
R2               31.7 kΩ   28 kΩ
C                2.6 pF    2.9 pF

3. Operational Amplifier Design

4. Filter Design

6. Conclusion

References

Authors Profile
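As a numerical cross-check of the figures above, the sketch below evaluates the normalized 4th-order Butterworth response implied by Eq. (3) and the component values of Table 3. The biquad relations used here — DC gain K = R2/R1 and pole frequency f0 ≈ 1/(2πR2C) — are our assumption of the standard single-opamp low-pass biquad design equations, not something stated in the text, so treat the component check as illustrative only.

```python
import math

# Normalized 4th-order Butterworth sections from Eq. (3):
# (s^2 + 0.7654 s + 1)(s^2 + 1.8478 s + 1)
SECTIONS = [0.7654, 1.8478]

def butterworth_mag(w):
    """|T(jw)| of the normalized low-pass T(s) = 1 / [(s^2+a1*s+1)(s^2+a2*s+1)]."""
    h = 1.0
    for a in SECTIONS:
        h *= abs(complex(1.0 - w * w, a * w))  # |(jw)^2 + a*(jw) + 1|
    return 1.0 / h

# Section quality factors Q = 1/a for a normalized biquad s^2 + (1/Q)s + 1;
# these should match the Table 1 values 1.3066 and 0.5412.
q_factors = [1.0 / a for a in SECTIONS]

# Hypothetical cross-check of Table 3 against the 20 dB / ~2 MHz per-cell spec,
# ASSUMING the common single-opamp biquad relations K = R2/R1, f0 ~ 1/(2*pi*R2*C).
cells = {"cell1": (3.17e3, 31.7e3, 2.6e-12), "cell2": (2.8e3, 28e3, 2.9e-12)}
for name, (r1, r2, c) in cells.items():
    gain_db = 20.0 * math.log10(r2 / r1)   # per-cell DC gain
    f0 = 1.0 / (2.0 * math.pi * r2 * c)    # pole frequency
    print(name, round(gain_db, 1), "dB", round(f0 / 1e6, 2), "MHz")

print(round(butterworth_mag(1.0), 4))  # ~0.7071, i.e. -3 dB at the cut-off
```

Under these assumed relations both cells give exactly 20 dB of DC gain and pole frequencies within about 3% of 2 MHz, consistent with Table 1.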
Abstract: The research paper looks at the perception of students and their readiness for the new approach towards learning through Learning Objects. The Learning Objects for the C++ course were developed as part of this study and then tested using the LORI scale, and the performance of the students taught using Learning Objects was compared with that of the students learning the same course in the traditional way. Finally, the conclusions are drawn.

Keywords: Learning Objects, LORI Scoresheets, e-learning, learning evaluation.

1. Introduction

The development of effective content suiting the learning style of users and the prevailing learning scenarios improves the success rate of an e-Learning initiative significantly. It is, therefore, important for content to adhere to the objectives of the program and be powerful enough to engage the user. The establishment of means of quality assurance requires criteria for evaluation that support the communication of meaningful feedback to designers about content.

Development of Learning Objects that match the intended outcomes and deliver the requisite cognitive load requires careful planning and structured development. For that purpose, Nesbit and Li [10] developed a Learning Object Review Instrument (LORI 1.5) which can be used to reliably assess some aspects of Learning Objects. This approach was adopted in the design of their convergent participation model for the evaluation of Learning Objects. Their model proposed an evaluation panel drawn from different stakeholder groups and a two-cycle process, whereby participants would begin by evaluating the Learning Object independently and asynchronously. The two-stage cycle was facilitated by electronic communication tools and used the Learning Object Review Instrument (LORI) to specify the rating scale and criteria of evaluation. Subsequent research on the use of the LORI revealed that objects that were evaluated collaboratively led to greater inter-rater reliability than ones evaluated independently.

There have been a limited number of empirical studies examining the learning outcomes and the instructional effectiveness of Learning Objects, despite the fact that Learning Object repositories commonly use the review instruments. A few worth mentioning are the studies by Kay and Knaack [3][2], which examine the quality of Learning Objects through content analysis of open-ended response questions based on principles of instructional design and perceived benefit under post-hoc structured categories. They evaluated five Learning Objects with secondary school students, but this work has the limitation that it focused only on the perceived benefits of Learning Objects rather than on the actual learning outcomes resulting from the Learning Object activities.

In another study, Akpinar and Simsek [1] tested eight Learning Objects with school children in a pre-test/post-test research design. The data analysis revealed that seven of the Learning Objects helped the sample students improve their pre-test scores, but in one, the Horizontal Projectile Motion (HPM) LO for ninth-grade students, the scores did not improve.

Using a similar design, Nurmi and Jaakkola [4] conducted an experimental study with a pre-test/post-test design to evaluate the effectiveness of three Learning Objects from three different subject areas, i.e., Mathematics, Finnish Language, and Science. The Learning Objects, tested with school children, were used in different instructional settings. The results revealed no significant differences between the Learning Object and the traditional teaching conditions for students with either low or high prior knowledge.

This study, thus, developed ten Learning Objects for the C++ course using the authoring software 'Xerte' [9] and 'Moodle', and then tested them by conducting two studies. The details of Learning Object development have been deliberately kept out of this research paper, since its major focus is on the evaluation of the developed Learning Objects.

2. Study 1: To evaluate the Quality of Learning Objects using the LORI score sheets

2.1 Aim of the Study

To evaluate the quality of the developed Learning Objects, the students were asked to rate and review the Learning Objects individually using the LORI score sheets (sample sheet variables discussed below). Following the reviewing and rating process, the researcher combined the ratings and estimated the average rating for each Learning Object. Average ratings were estimated both for each of the nine issues for a particular Learning Object and for the overall rating of that Learning Object.
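The aggregation procedure described above can be sketched in a few lines: each reviewer scores a Learning Object on the nine LORI criteria, and the averages are taken per criterion and overall. The rating values below are hypothetical, used only to illustrate the arithmetic.

```python
# Sketch of the rating aggregation described above. Each row is one student
# reviewer's scores for a single Learning Object on the nine LORI criteria.
# The scores are hypothetical.
ratings = [
    [4, 5, 3, 4, 4, 5, 3, 4, 4],
    [3, 4, 4, 4, 5, 4, 3, 3, 4],
    [5, 4, 4, 3, 4, 4, 4, 4, 5],
]

num_criteria = len(ratings[0])

# Average rating per LORI criterion ("issue") across all reviewers.
per_criterion = [
    sum(row[i] for row in ratings) / len(ratings) for i in range(num_criteria)
]

# Overall rating of the Learning Object: mean of the per-criterion averages.
overall = sum(per_criterion) / num_criteria

print([round(m, 2) for m in per_criterion])
print(round(overall, 2))
```

The same two averages would be computed for each of the ten Learning Objects in turn.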
2.3 Observation

The Pearson correlations among the nine variables (V1–V9) are summarized below. The matrix is symmetric, so only the upper triangle is listed; each entry gives r with its two-tailed significance in parentheses, and N = 35 for every pair.

V1: V2 .498** (.002)   V3 -.072 (.682)   V4 -.478** (.004)   V5 -.330 (.053)   V6 .152 (.383)   V7 -.218 (.208)   V8 -.194 (.263)   V9 .412* (.014)
V2: V3 .172 (.324)   V4 .080 (.647)   V5 -.547** (.001)   V6 -.196 (.259)   V7 -.734** (.000)   V8 -.099 (.571)   V9 .464** (.005)
V3: V4 .709** (.000)   V5 .150 (.389)   V6 .056 (.751)   V7 -.511** (.002)   V8 -.119 (.494)   V9 .268 (.119)
V4: V5 .206 (.234)   V6 -.031 (.860)   V7 -.293 (.087)   V8 .099 (.570)   V9 .157 (.369)
V5: V6 .181 (.298)   V7 .210 (.226)   V8 -.237 (.170)   V9 -.361* (.033)
V6: V7 .459** (.006)   V8 -.484** (.003)   V9 .084 (.631)
V7: V8 .201 (.247)   V9 -.343* (.043)
V8: V9 -.211 (.224)

** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
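The starred significance levels in the correlation matrix can be reproduced from r and N alone via the t-statistic t = |r|·√((N−2)/(1−r²)). A minimal sketch follows; the two-tailed critical t values for df = 33 (≈2.035 at α = .05, ≈2.733 at α = .01) are standard table values quoted from memory, so treat them as approximate.

```python
import math

N = 35  # sample size used throughout the correlation matrix

def significance_star(r, n=N):
    """Return '**', '*', or '' for a Pearson r, mirroring the table's marks."""
    df = n - 2
    t = abs(r) * math.sqrt(df / (1.0 - r * r))  # t-statistic for testing r != 0
    # Approximate two-tailed critical t values for df = 33 (table values).
    if t > 2.733:   # alpha = 0.01
        return "**"
    if t > 2.035:   # alpha = 0.05
        return "*"
    return ""

# A few entries from the matrix:
print(significance_star(0.498))   # marked ** in the table
print(significance_star(0.412))   # marked *
print(significance_star(-0.072))  # unmarked
```

With N = 35 this threshold logic reproduces the table's markings: for example, r = .412 is starred once (t ≈ 2.60) while r = .498 crosses the .01 threshold (t ≈ 3.30).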
Over a period of a semester the students were taught the course in different modes. The total group size was 35, where 18 students were imparted training using the Learning Object approach and the remaining 17 using the traditional mode. Ahead of this group formation, the students with no programming experience and with previous programming experience were identified, and the groups were then formed with both types randomly distributed between the two groups.

3.3 Structure of Teaching Session

Each group was given an hour's session with the following structure:
1. A lecture about the C++ topic.
2. Individual study and use of corresponding learning materials.
3. Solution of a test.

In order to control the teaching-style variable, the same teacher conducted both sessions. In the LO condition (n=18) the students were first given an introduction to the subject content, and for the rest of the sessions the students completed LO assignments individually at their own pace. The Learning Objects were principally quite simple drill-and-practice programs, designed to be game-like and to provide instant feedback on students' input/answers. The way of working was student-led, because there was no direct teaching and there were no teacher-controlled tasks during the assignment phase. The students were briefed that, in order to be successful in this approach, they required a higher level of self-regulation and metacognitive skills (self-monitoring, controlling, maintenance of task orientation, etc.) than when working in the traditional condition.

The students (n=17) in the traditional mode were taught in the normal classroom. The teaching method resembled normal instruction, with a teacher-led introduction followed by an assignment phase in which students individually completed different paper-and-pencil tasks. These tasks were similar to the assignments completed in the LO mode.

3.4 Description of the Test

The students were evaluated on the basis of a written exam. The test consisted of two exercises: one to demonstrate the theoretical knowledge of the students and a second to judge their practical programming abilities. The complete test summed to a total maximum score of 10 points. The test was conducted for a one-hour duration.

3.5 Teaching Methodology

The traditional group was imparted training using lecture delivery and instructional material containing theory and examples. Later they were given free time to explore, study, and experiment with the topics covered in the class. The lectures were delivered to the Learning Object group, however, by using the Learning Objects designed by the researcher as part of this study. They too were given free time to explore the topic. At the end of the course both groups were tested.

3.6 Hypothesis

Traditional approach:
1) H0 = The teaching using the traditional method is not effective.
2) H1 = The teaching using the traditional method is effective.
Learning Object approach:
1) H0 = The teaching using the Learning Object method is not effective.
2) H1 = The teaching using the Learning Object method is effective.

Here H0 is the null hypothesis, an important technique used by researchers in the field of education. A null hypothesis is useful in testing the significance of a difference.

3.7 Analysis and Observations

In order to note and compare the behavioral patterns of the students studying the same course in different modes, the teacher conducting the test provided qualitative data to the analysis. They made the following observations: during the course of the evaluation, the participants of the Learning Object group were nervous, stressed, and anxious, and took more time to complete the test, whereas the traditional group was more at ease, relaxed, and less stressed. The data obtained for both groups is summarized below:

Table 2: Pre-test/post-test analysis of the Traditional and LO groups

Group                   N    Mean   Std. Deviation   Paired t-test   p-value
Traditional Pre-test    17   2.29   2.51             .483            >.05
Traditional Post-test   17   3.11   2.12
LO Pre-test             18   3.00   2.26             .388            >.05
LO Post-test            18   3.61   2.10

The results indicate that the mean of the Learning Object group was slightly superior to that of the traditional group. They performed better on both tests given to them as part of the evaluation. In the pre-test the traditional group (mean=2.29), though, did not perform better than the LO
26 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 9, September 2010
group (mean=3.00). However, the performance of the Learning Object group (mean=3.61) was also better than that of the traditional group (mean=3.11) in the post-test. When a paired t-test was applied to the pre and post observations of each group, the results were non-significant in each case (pre-test t=.483, post-test t=.388). The paired t-test thus showed no statistically significant difference between the conditions (p>.05). The Null Hypothesis therefore has to be accepted: there is no significant difference between the performance of students in the two modes.

According to the preliminary analysis, the observations revealed that in the LO condition students worked mainly on the perception that they had two different assignments: learn to use the Learning Objects and complete the test. They seemed to focus on the procedural features and concrete functions rather than on the content or instructional aspects. They were more interested in working out the logic behind the Learning Objects, i.e., how they worked. Although the students' task orientation remained good during the sessions while they were actively completing LO assignments, the depth of that orientation was limited. Working with the LOs did not engage students in thinking about the content being learnt. In contrast, the work in the traditional condition was much more focused on the learning tasks. This could be due to the larger amount of external control imposed by the teacher. It can therefore be concluded that the requirements of self-regulation in the LO condition were overwhelming and, at the same time, detrimental to the students' learning outcomes.

It can also be assumed that the sole use of Learning Objects in lecture delivery and course impartation cannot by itself bring a significant difference in the performance of students. There are other important contextual factors that may yet have to be identified to improve academic achievement. Besides, a more prolonged exposure to LOs has to be explored and measured under carefully controlled conditions. It is also worth asking whether the learning orientation of students in the LO condition was mere curiosity about a new style of learning, or whether the LO design and its supporting pedagogy need to be modified.
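The paired t-test applied to the pre/post scores above can be reproduced with a short sketch (the score lists here are illustrative, not the study's raw data, which the paper reports only in aggregate):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores of the same students:
    t = mean(d) / (stdev(d) / sqrt(n)), with d[i] = post[i] - pre[i]."""
    d = [po - pr for pr, po in zip(pre, post)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))

# Illustrative scores for five hypothetical students:
pre = [2, 1, 4, 0, 3]
post = [3, 2, 4, 1, 5]
t = paired_t(pre, post)  # compare against a t table with n-1 = 4 df
```

A non-significant t (as reported in Table 2) means the post-pre improvement is too small, relative to its variability, to reject the null hypothesis.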
Therefore, when examining the effectiveness of Learning Objects on student learning outcomes, it is essential to note that what is measured is the effect of the whole learning environment, not just that of the Learning Objects. As it is impossible to separate learning activities, learned contents and learning situations from each other, it is also not feasible to detach the educational technology applications used from the social and contextual factors of the learning processes. Thus, it is the Learning Objects and the instructional arrangements within learning environments that interact to stimulate certain student learning activities, behaviours and outcomes. Learning Objects represent only one part of the larger learning environment, not a self-contained instructional solution.
Abstract: A wireless ad hoc network is an autonomous system of mobile hosts connected by wireless links. The nodes are free to move randomly and organize themselves arbitrarily; thus the network's topology may change rapidly and unpredictably. Unlike traditional wireless networks, ad hoc networks do not rely on any fixed infrastructure. Instead, hosts rely on each other to keep the network connected. One main challenge in the design of these networks is their vulnerability to security attacks. Ad hoc networks are vulnerable due to their fundamental characteristics, such as open medium, dynamic topology, distributed cooperation and constrained capability. Routing plays an important role in the security of an ad hoc network. In ad hoc networks there are mainly two kinds of routing protocols: proactive routing protocols and on-demand routing protocols. In general, routing security in wireless ad hoc networks appears to be a problem that is not trivial to solve.
In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker receives packets at one point in the network, "tunnels" them to another point in the network, and then replays them into the network from that point. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a technique to identify wormhole attacks in wireless ad hoc networks and a solution to discover a safe route avoiding the wormhole; it is a time-based calculation that requires minimal computation.

Keywords: Ad hoc Networks, Wormholes.

1. Introduction

Ad hoc networks consist of wireless nodes that communicate with each other in the absence of a fixed infrastructure. These networks are envisioned to have dynamic, sometimes rapidly changing, random, multi-hop topologies, which are likely composed of relatively bandwidth-constrained wireless links. In such a network, each mobile node operates not only as a host but also as a router, forwarding packets for other mobile nodes in the network that may not be within direct wireless transmission range of each other. Each node participates in an ad hoc routing protocol that allows it to discover "multi-hop" paths through the network to any other node. The idea of ad hoc networking is sometimes also referred to as "infrastructureless networking", since the mobile nodes in the network dynamically establish routing among themselves to form their own network on the fly. Due to the limited transmission range of wireless network interfaces, multiple network hops may be needed for one node to exchange data with another across the network [1]. Ad hoc network technology can provide an extremely flexible method of establishing communications in situations where geographical or terrestrial constraints demand a totally distributed network system without any fixed base station, such as battlefields, military applications, and other emergency and disaster situations [2].
However, security is an important issue for ad hoc networks, especially for security-sensitive applications. The intrinsic nature of wireless ad hoc networks makes them vulnerable to attacks ranging from passive eavesdropping to active interfering. There is no guarantee that a communication path between two nodes will be free of malicious nodes that will, in some way, fail to comply with the employed protocol and attempt to interfere with the network operation. Most routing protocols cannot cope with disruptions due to malicious behavior. For example, any node could claim that it is one hop away from a given destination node, causing all routes to that destination to pass through itself.
In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker receives packets at one point in the network, "tunnels" them to another point in the network, and then replays them into the network from that point. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a technique to identify wormhole attacks in wireless ad hoc networks and a solution to discover a safe route avoiding the wormhole attack.
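The route-capture effect described above can be illustrated with a small shortest-path sketch: a wormhole tunnel between two distant nodes looks like a single hop to hop-minimizing route discovery, so it wins almost every route (the grid topology and tunnel endpoints below are hypothetical):

```python
from collections import deque
from itertools import product

def bfs_path(edges, src, dst):
    """Shortest path by hop count (BFS), as a hop-minimizing
    route-discovery protocol would find it."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    prev, queue = {src: None}, deque([src])
    while queue:
        n = queue.popleft()
        if n == dst:
            path = []
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for m in adj.get(n, ()):
            if m not in prev:
                prev[m] = n
                queue.append(m)
    return None

# Hypothetical 5x5 grid of nodes; radio range links grid neighbors.
nodes = set(product(range(5), range(5)))
edges = [((x, y), (x + dx, y + dy))
         for x, y in nodes for dx, dy in ((1, 0), (0, 1))
         if (x + dx, y + dy) in nodes]

honest = bfs_path(edges, (0, 0), (4, 4))                          # 8 hops
# The tunnel between the two corners is advertised as one hop:
wormholed = bfs_path(edges + [((0, 0), (4, 4))], (0, 0), (4, 4))  # 1 "hop"
```

With the tunnel present, the discovered route collapses to the single wormhole link, which is exactly why untunneled routes "longer than one or two hops" stop being found.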
The rest of the paper is organized as follows. Section II presents the wormhole attack in detail. Section III studies various solutions to the wormhole attack. Section IV discusses the proposed mechanism to protect ad hoc wireless networks from the wormhole attack. Section V concludes the paper.

2. Wormhole Attacks

In a wormhole attack, an attacker receives packets at one point in the network, "tunnels" them to another point in the network, and then replays them into the network from that point. For tunneled distances longer than the normal wireless transmission range of a single hop, it is simple for the attacker to make the tunneled packet arrive sooner than other packets transmitted over a normal multihop route, for example by use of a single long-range directional wireless link or through a direct wired link to a colluding attacker. It is also possible for the attacker to forward each bit over the wormhole directly, without waiting for an entire packet to be received before beginning to tunnel the bits of the packet, in order to minimize the delay introduced by the wormhole. The wormhole attack is particularly dangerous against the many ad hoc network routing protocols in which nodes that hear a single-hop transmission of a packet consider themselves to be in range of the sender.

Figure 1. Wormhole attack using an out-of-band channel

2.1 Classification

There are several ways to classify wormhole attacks.

2.1.1 Depending on whether wormhole nodes put their identity into the packet's header [12]
Here we can categorize wormhole attacks into two categories: Hidden Attacks and Exposed Attacks. In a Hidden Attack, wormhole nodes do not update packets' headers as they should, so other nodes do not realize their existence. In an Exposed Attack, wormhole nodes do not modify the content of packets, but they include their identities in the packet header as legitimate nodes do. Therefore, other nodes are aware of the wormhole nodes' existence but do not know that they are malicious.

2.1.2 Based on the techniques used for launching the wormhole attack [2]
(a) Wormhole using Encapsulation
This mode of the wormhole attack is easy to launch, since the two ends of the wormhole do not need to have any cryptographic information, nor do they need any special capabilities such as a high-speed wireline link or a high power source. A simple way of countering this mode of attack is a by-product of the secure routing protocol ARAN [10], which chooses the fastest route reply rather than the one which claims the smallest number of hops. This was not a stated goal of ARAN, whose motivation was that a longer, less congested route is better than a shorter, congested one.

(b) Wormhole using Out-of-Band Channel
This mode of the wormhole attack is launched by having an out-of-band high-bandwidth channel between the malicious nodes. This channel can be achieved, for example, by using a long-range directional wireless link or a direct wired link. This mode of attack is more difficult to launch than the previous one, since it needs specialized hardware capability.

(c) Wormhole with High Power Transmission
In this mode, when a single malicious node gets a route request, it broadcasts the request at a high power level, a capability which is not available to other nodes in the network. Any node that hears the high-power broadcast rebroadcasts it towards the destination. By this method, the malicious node increases its chance of being in the routes established between the source and the destination, even without the participation of a colluding node. A simple method to mitigate this attack is possible if each node can accurately measure the received signal strength and has models for signal propagation with distance. In that case, a node can independently determine if the transmission it receives is at a higher than allowable power level. However, this technique is approximate at best and dependent on environmental conditions. LITEWORP provides a more feasible defense against this mode.

(d) Wormhole using Packet Relay
In this mode of the wormhole attack, a malicious node relays packets between two distant nodes to convince them that they are neighbors. It can be launched by even a single malicious node.

(e) Wormhole using Protocol Deviations
In this mode, a malicious node can create a wormhole by simply not complying with the protocol and broadcasting without backing off. The purpose is to let the request packet it forwards arrive first at the destination, so that the malicious node is included in the path to the destination.

3. Solutions to Wormhole Attacks

Packet Leashes [1] are an approach in which some information is added to a packet to restrict its maximum transmission distance. There are two types of packet leashes: geographic leashes and temporal leashes. With a geographic leash, when a node A sends a packet to another node B, the node must include its location information and sending time in the packet; B can then estimate the distance between them. The geographic leash computes an upper bound on the distance, whereas the temporal leash ensures that a packet has an upper bound on its lifetime. With temporal leashes, all nodes must have tight time synchronization. The maximum difference between any
two nodes' clocks is bounded by Δ, and this value should be known to all the nodes. Using the metrics mentioned above, each node checks the expiration time in the packet and determines whether or not a wormhole attack has occurred: if the packet receiving time exceeds the expiration time, the packet is discarded.

Capkun et al. [7] presented SECTOR, which requires no clock synchronization or location information, by using Mutual Authentication with Distance-Bounding (MAD). A node estimates the distance to another node in its transmission range by sending it a one-bit challenge, to which the other node responds instantaneously; using the time of flight, it detects whether or not the other node is a neighbor. However, this approach requires special hardware that can respond to a one-bit challenge without any delay, much as Packet Leashes require special support.

The Delay per Hop Indicator (DelPHI) [9], proposed by Hon Sun Chiu and King-Shan Lui, can detect both hidden and exposed wormhole attacks. In DelPHI, attempts are made to find every available disjoint route between sender and receiver. Then the delay time and length of each route are calculated, and the average delay time per hop along each route is computed. These values are used to identify the wormhole: the route containing a wormhole link will have a greater Delay per Hop (DPH) value. This mechanism can detect both types of wormhole attack; however, it cannot pinpoint the location of the wormhole. Moreover, because the lengths of the routes are changed by every node, including the wormhole nodes, wormhole nodes can change the route length in a certain manner so that they cannot be detected.

Hu and Evans [6] use directional antennas to prevent the wormhole attack. To thwart the wormhole, each node shares a secret key with every other node and maintains an updated list of its neighbors. To discover its neighbors, a node, called the announcer, uses its directional antenna to broadcast a HELLO message in every direction. Each node that hears the HELLO message sends its identity and an encrypted message, containing the identity of the announcer and a random challenge nonce, back to the announcer. Before the announcer adds the responder to its neighbor list, it verifies the message authentication using the shared key, and that it heard the message on the directional antenna opposite to the one reported by the neighbor. This approach is suitable for secure dynamic neighbor detection. However, it only partially mitigates the wormhole problem: specifically, it only prevents the kind of wormhole attack in which malicious nodes try to deceive two nodes into believing that they are neighbors.

In [9], another statistical approach called SAM (Statistical Analysis of Multi-path) was proposed to detect exposed wormhole attacks in multi-path routing protocols. The main idea of SAM is based on the observation that certain statistics of the routes discovered by routing protocols change dramatically under wormhole attacks. Because wormhole links are extremely attractive to routing requests, a wormhole link will appear in more routes than normal links. By computing statistics on the relative frequency with which each link appears in the set of all obtained routes, they can identify wormhole attacks. This technique is only used to detect exposed attacks; it is unable to detect hidden attacks, because in that kind of attack wormhole links do not appear in the obtained routes.

In [10], the author proposed two statistical approaches to detect wormhole attacks in wireless ad hoc networks. The first, called the Neighbor Number Test, is based on the simple assumption that a wormhole will increase the number of neighbors of the nodes (fake neighbors) in its radius. The base station gets neighborhood information from all sensor nodes, computes the hypothetical distribution of the number of neighbors, and uses a statistical test to decide whether there is a wormhole. The second, called the All Distance Test, detects a wormhole by computing the distribution of the lengths of the shortest paths between all pairs of nodes. In these two algorithms, most of the workload is done in the base station to save the sensor nodes' resources. However, one of the major drawbacks is that they cannot pinpoint the location of the wormhole, which is necessary for a successful defense.

Possible solutions to wormhole attacks proposed by different researchers have been discussed in this section. A detection of wormhole attacks that needs no special hardware or additional information is proposed in this paper.

4. Proposed Detection Mechanism

In this section the proposed wormhole detection mechanism is discussed in detail. The mechanism needs no special hardware or synchronized clocks, because each node considers only its local clock to calculate the RTT.

4.1 Network model and assumptions

The network is assumed to be homogeneous (all network nodes contain the same hardware and software configuration), static (nodes do not move after deployment), and symmetric (node A can communicate with node B if and only if B can communicate with A). All nodes are uniquely identified.

Detection is based on the RTT of messages between successive intermediate nodes. The consideration is that the RTT between two fake neighbors, i.e., across a wormhole link, will be considerably higher than that between two real neighbors.

The proposed mechanism consists of two phases. The first phase finds a route between source and destination. The second phase calculates the RTT at all intermediate nodes and detects a wormhole link in the route.

4.2 Phase 1: Route Finding

In the first phase, a node sends the route request (RREQ) message to its neighbor nodes and saves its RREQ sending time TREQ. Each intermediate node also forwards the RREQ message and saves the TREQ of its sending time. When the RREQ message reaches the destination node, it sends a route reply message (RREP) back along the reverse path. When an intermediate node receives the RREP message, it saves the time of receiving of
RREP. The calculation is based on the RTT of the route request and reply:

RTT = TREP – TREQ ………….. (1)

All intermediate nodes save this information and then also send it to the source node.

4.3 Phase 2: Wormhole Attack Detection

In this phase, the source node calculates the RTT between all successive intermediate nodes. Since the wormhole attack is launched by adversary intermediate nodes, there is no need to calculate the RTT between the source and the first node, or between the last node and the destination. The source calculates the RTT between successive intermediate nodes and compares the values to check whether a wormhole attack may be present. If there is no attack, the values are nearly the same. If one RTT value is higher than those of the other successive nodes, a wormhole can be suspected on that link. In this way the mechanism can pinpoint the location of the wormhole attack.

Figure 2. Time of forwarding RREQ & receiving RREP.

4.4 Calculation of RTT

In this subsection, the detailed calculation of the RTT is discussed. The RTT is taken as the time difference between a node sending an RREQ towards the destination and receiving the RREP from the destination. During the route setup procedure, the times of sending RREQ and receiving RREP are illustrated in Figure 2. Every node saves the time it forwards the RREQ and the time it receives the RREP from the destination, calculates the RTT, and sends these values to the source node. The source node is in charge of calculating all RTT values between intermediate nodes along the established route.

Given all RTT values between the nodes in the route and the destination, the RTT between two successive nodes, say A and B, can be calculated as follows:

RTTA,B = RTTA – RTTB …………….. (2)

where RTTA is the RTT between node A and the destination, and RTTB is the RTT between node B and the destination.

For example, suppose the route from source (S) to destination (D) passes through nodes A, B, K and L:

S → A → B → K → L → D

Here T(S)REQ, T(A)REQ, T(B)REQ, T(K)REQ, T(L)REQ, T(D)REQ are the times at which nodes S, A, B, K, L, D forward the RREQ, and T(S)REP, T(A)REP, T(B)REP, T(K)REP, T(L)REP, T(D)REP are the times at which they forward the RREP.

Then the RTTs between the intermediate nodes and D are calculated from equation (1) as follows:

RTTA = T(A)REP – T(A)REQ
RTTB = T(B)REP – T(B)REQ
RTTK = T(K)REP – T(K)REQ
RTTL = T(L)REP – T(L)REQ

And the RTT values between two successive intermediate nodes along the path are calculated from equation (2):

RTTA,B = RTTA – RTTB
RTTB,K = RTTB – RTTK
RTTK,L = RTTK – RTTL

Under normal circumstances, RTTA,B, RTTB,K and RTTK,L are similar in value. If there is a wormhole link between two nodes, the corresponding RTT value will be considerably higher than the other successive RTT values, and a wormhole link may be suspected between these two nodes.

Compared to another RTT-based technique [12], our technique requires fewer calculations. It is based on the fact that the wormhole attack is launched by intermediate nodes; therefore there is no need to calculate the RTT between the source node and the first node, or between the last node and the destination node. By doing so, we reduce the number of calculations, which in turn speeds up the wormhole attack detection process.

5. Conclusions

In this paper, we have introduced the wormhole attack, a powerful attack that can have serious consequences on many proposed ad hoc network routing protocols. The countermeasures for the wormhole attack can be implemented at different layers; for example, directional antennas are used at the media access layer to defend against wormhole attacks, and packet leashes are used at the network layer. To detect and defend against the wormhole attack, we proposed an efficient mechanism based on the RTT of the route messages. The significant feature of the proposed mechanism is that it does not need any specific hardware to detect the wormhole attack, and it also reduces the number of RTT calculations. Our mechanism is better than
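The RTT bookkeeping of equations (1) and (2) and the Phase 2 comparison can be sketched as follows (the timing values and the three-times-median threshold are illustrative assumptions; the paper only says a wormhole link's RTT is "considerably higher"):

```python
from statistics import median

def node_rtts(t_req, t_rep):
    # Eq. (1): RTT_X = T(X)REP - T(X)REQ for each intermediate node,
    # using only that node's local clock.
    return [rep - req for req, rep in zip(t_req, t_rep)]

def link_rtts(t_req, t_rep):
    # Eq. (2): RTT_{X,Y} = RTT_X - RTT_Y for successive intermediate
    # nodes X, Y along the route (e.g. A, B, K, L).
    rtt = node_rtts(t_req, t_rep)
    return [rtt[i] - rtt[i + 1] for i in range(len(rtt) - 1)]

def suspect_link(t_req, t_rep, factor=3.0):
    """Phase 2: return the index of the successive-node link whose RTT
    stands far above the others (suspected wormhole), else None."""
    links = link_rtts(t_req, t_rep)
    med = median(links)
    for i, value in enumerate(links):
        if value > factor * med:
            return i
    return None

# Hypothetical times (seconds) for intermediate nodes A, B, K, L on
# S -> A -> B -> K -> L -> D, with extra delay on the B-K link:
t_req = [0.00, 0.01, 0.11, 0.12]   # T(X)REQ: when X forwarded the RREQ
t_rep = [0.40, 0.39, 0.31, 0.30]   # T(X)REP: when X received the RREP
suspected = suspect_link(t_req, t_rep)  # index 1, i.e. the B-K link
```

Note that, as in the paper, only intermediate-node timestamps are used: the source-to-first-node and last-node-to-destination RTTs are never computed.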
… discuss the performance evaluation. Section IV concludes with the future scope of the scheme.

2. Background and Proposed Scheme

This section describes the selective forward attack and reviews the existing works.

2.1 Selective Forward Attack

Figure 1 shows an example of a selective forward attack. The malicious node drops packets and refuses to forward messages to its neighbor node. If a malicious node drops the entire message, the node is called a black hole. A malicious node can also forward the message along a wrong path and give unfaithful routing information to the network. It creates unnecessary packet delay and leads to confusion in forwarding the message; it also creates false information and transmissions in the network. It is difficult to detect the malicious node when there are collisions, packet drops due to timer expiry, and link failures, since the nodes are mobile. The selective forward attack affects existing routing protocols such as DSR, GPSR, GEAR and TinyOS beaconing.

2.2 Review

The selective forward attack may corrupt some mission-critical applications such as military surveillance and forest fire monitoring in wireless sensor networks. Bin Xiao [3][4] proposed a lightweight security scheme that detects the selective forward attack using multi-hop acknowledgements. It has limitations, as it requires nodes to be loosely time-synchronized and to keep one-way key chains for authentication. Kim [5] suggested cumulative-acknowledgement-based detection; its limitation is that data-reply packets are transmitted through multiple paths, so the communication overhead is high because of the cumulative acknowledgements, thereby reducing node energy. Y. B. Reddy [7] proposed a new framework to detect the selective forward attack using a game theory model, in which a malicious node is detected between the selective acknowledgement points irrespective of the dropping rate. J. Brown [8] proposed a sequential probability ratio test for detecting the attacks in heterogeneous sensor networks; mathematical foundations can also be helpful in detecting the attack. The main idea of the existing works is to adopt a scheme in the routing protocol and analyze its performance in terms of communication overhead, network throughput, and energy consumption. In this paper, a lightweight scheme based on the dynamic source routing protocol for detecting the attack is used. The limitation of the existing schemes is high communication overhead and high energy consumption.

2.3 Assumptions

Seven assumptions are made in the detection mechanism. First, the nodes are mobile and transmit messages during different sessions. Second, the size of the window is constant, i.e., the total time duration for transmission of messages per session is kept constant. Third, the Dynamic Source Routing protocol is implemented in the nodes. Fourth, during a particular session the topology is static. Fifth, the node id differs per session. Sixth, the malicious node drops at most a maximum number of packets. And finally, the messages are authenticated using one-way hash chains.

2.4 Detection Scheme

The existing detection schemes involve the inclusion of packets such as a cumulative acknowledgement for each node, event packets, acknowledgement packets, control packets and alert packets. With the inclusion of these packets for detection, the communication overhead is higher. The proposed detection scheme consists of a cumulative acknowledgement packet between the check points of the forward path; a check point generates a trap message, which is sent to the next node of the forwarding path.

The different phases of the proposed mechanism are as follows:
1. Node id assignment and location phase
2. Topology identification
3. Forward route selection path
4. Check point assignment
5. Data transmission
6. Malicious node detection

2.4.1 Node id and Location Phase
A node id is activated only when transmission is required. The node id is configured dynamically per session by the sink node/base station. Whenever the sink node/base station needs any information, it broadcasts the set of node ids and activates the timer. A node id is valid until the timer expires. The base station stores the allotted node ids temporarily for each session.

2.4.2 Topology identification phase
After receiving the node id, each node identifies its neighbor nodes and stores the next-hop neighbor id to identify the topology of the network.

2.4.3 Forward route selection path
The source node sends the route_request packet to the destination node/base station, which responds with the route_reply packet carrying the selected forward path through which data is transmitted. The forward path is selected based on the Dynamic Source Routing protocol.

2.4.4 Check point selection phase
The base station/destination node randomly assigns the nodes to be check points in the forward path. In the downstream link, a check point generates a trap message after the successful reception of the packet.

2.4.5 Data transmission phase
Once the forward path is selected, data is transmitted from the source to the base station/destination node. Upon successful reception of data, each node sends an acknowledgement packet to its next node in the forward path. The acknowledgement packet of the next node
34 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 9, September 2010
the source and the destination.

2.4.6 Detection process
Step 1: The base station issues the node id, and it is dynamic and unique for a window.
Step 2: The base station sends the data request to all the nodes.
Step 3: The source nodes send a route request packet to the base station.

[Figure 3. Network topology with node ids, source nodes 26 and 67, and base station BS]

The check points are randomly selected; if the base station/destination selects a malicious node as a check point, that node generates the acknowledgement and trap message on its own and forwards the packet to its neighbor node. In that case, the malicious node may be suspected based on the node id and packet delivery ratio. A check point id is valid until the window expires. In Fig. 3, nodes 26 and 67 are source nodes, whereas BS is the base station and is treated as the destination node; the forward paths are 26-54-22-6-52-36 and 67-13-44-78-21-88-17-62, respectively. The check points are 22, 16 and 21. The forward path from source 26 to the base station does not contain any malicious node. But the forward path from 67 to the base station contains 21 as a check point, and it is also a malicious node. In this case the check point is a malicious node, and it is detected based on the node id and packet drop ratio.

Scenario 3: Source node detection
The base station broadcasts the request to the nodes, and the malicious node responds to the base station with a route request packet to gather the routing information and misguide the route in the network. Fig. 4 shows that malicious node 67 voluntarily responds to the base station after receiving the route request and misguides the route. The actual forward path is 67-6-16-52-3 instead of 67-13-44-78-21-88-17-62. The node is detected based on the packet drop ratio and on the cumulative acknowledgement packet.

[Figure 4. Source Node Detection]

Scenario 4: Node can be a compromised node
The existing methods such as CHEMAS and CADE [2][3][4][5] detect any two nodes in the selective forward path as malicious nodes. In CHEMAS, the authors suggest that the malicious node lies within the range of the check points. In CADE, the authors present a detection mechanism to identify the two malicious nodes in the forward path. The proposed mechanism detects the exact compromised nodes. A check point generates a trap message and forwards it to the next check point, stating that no packet drop exists up to that check point. Between two check points, the acknowledgements of each node are cumulated if the data has been transmitted successfully. Once the check point receives the cumulative acknowledgement successfully, it generates the trap message. If any node between the check points fails to forward the data packet, cumulative acknowledgement, and trap message, that node is suspected to be a compromised node. A cumulative acknowledgement packet can also be dropped through collision and timer expiry, since the nodes are mobile nodes. Overlap of windows causes packet drops in the network. A check point should not misjudge an ordinary node to be a compromised node. In Fig. 5, node 4 drops the cumulative acknowledgement packet and is treated as a compromised node. Based on the negative acknowledgement, the compromised node is identified.

[Figure 5. Node as Compromised Node]

Format of the Cumulative Acknowledgement packet:
Data | Ack0 | Ack1 | ... | AckN | NACK

Format of the Trap message:
Check point Node id | RDS | Node ids of NACK

If NACK is set to 0, it denotes a negative acknowledgement of the data packet; if it is set to 1, it denotes a negative acknowledgement of the route, in case the node has not seen the route packet sent by the base station/destination.
Received data successfully (RDS=1) denotes that data has been received up to the particular check point indicated by its node id. Once the destination/base station identifies the malicious nodes, the destination broadcasts the node id of the NACK packet. The source requests the destination to send an alternate forward path.

3. Performance Evaluation
The proposed algorithm is implemented in ns2 [6] and the performance is evaluated in terms of network throughput and packet delivery ratio.

Evaluation Metrics:
The following metrics [6][8] evaluate the effectiveness of the proposed detection scheme.
Packet delivery ratio: the ratio of the number of packets received to the number of packets sent.
Throughput: the fraction of channel capacity used for data transmission.
Communication overhead: the ratio of the overheads with and without the detection scheme.
Average latency: the mean time in seconds taken by the packets to reach their respective destinations.
Undetected ratio: the ratio of the number of undetected maliciously dropped packets to the total number of maliciously dropped packets.

3.1 Simulation parameters
The parameters used in our simulations are shown in Table 1. The window is static, and the malicious nodes are randomly located on the forward paths between the source and the base station. Node ids, check points, source and destination are assigned before the transmission starts.
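As a concrete illustration, the packet formats and the between-check-point suspicion rule described above can be sketched in Python. This is a hypothetical sketch, not the authors' ns2 implementation: the class names and the `forwarded` observation map are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical encodings of the two packet formats described above.
@dataclass
class CumulativeAck:
    data_id: int          # Data
    acks: List[int]       # Ack0 .. AckN: node ids that acknowledged
    nack: int             # 0 = negative ack of data, 1 = negative ack of route

@dataclass
class TrapMessage:
    checkpoint_id: int    # Check point Node id
    rds: int              # RDS = 1: data received up to this check point
    nack_node_ids: List[int]  # Node ids of NACK

def suspect_compromised(path: List[int], checkpoints: set,
                        forwarded: Dict[int, bool]) -> List[int]:
    """Between consecutive check points, any node that failed to forward
    the data/acknowledgement/trap packets is suspected as compromised."""
    suspects = []
    cps = [n for n in path if n in checkpoints]
    for start, end in zip(cps, cps[1:]):
        segment = path[path.index(start) + 1 : path.index(end)]
        suspects.extend(n for n in segment if not forwarded.get(n, False))
    return suspects

# Hypothetical observation: node 78 drops packets between check points 13 and 21.
path = [67, 13, 44, 78, 21, 88, 17, 62]
print(suspect_compromised(path, {13, 21}, {44: True, 78: False}))  # → [78]
```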
Table 1: Parameters used in simulations
Area: 2000m X 2000m
Nodes: 50
Packet size: 512 bytes
Transmission protocol: UDP
Application traffic: CBR
Transmission rate: 10 Mbits/sec
Pause time: 24.73 sec
Maximum speed: 31 sec
Simulation time: 100 sec
Propagation model: Radio Propagation
Maximum malicious nodes: 50
Type of attack: Selective forward attack
Examined: DSR

During the data transmission, malicious nodes are detected; it is detected that node 2 is a check point node and also a malicious node, and the other malicious nodes are 7, 14 and 41, as shown in Fig. 8.

Figure 12. Throughput
The researcher observes that the number of packets sent and the throughput vary due to the presence of malicious nodes. In Fig. 11 and Fig. 12, the malicious nodes increase the packet drop ratio and decrease the throughput of the network; the presence of malicious nodes affects the performance of the network. A cumulative acknowledgement is transmitted only up to the check point and thus reduces the communication overhead in the forward path.
The packet drop rate of the normal nodes is significantly different from that of the compromised node. The proposed detection scheme can achieve a 90% detection rate when the drop rate is low.
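The evaluation metrics defined above can be computed directly from raw simulation counters. The following Python helper is an illustrative sketch with hypothetical parameter names; it is not part of the paper's ns2 code.

```python
def evaluation_metrics(sent, received, bits_received, capacity_bps, duration_s,
                       overhead_with, overhead_without,
                       undetected_drops, malicious_drops, latencies):
    """Compute the five metrics from raw counters (names are illustrative):
    packet delivery ratio, throughput as a fraction of channel capacity,
    communication overhead ratio, average latency, and undetected ratio."""
    return {
        "packet_delivery_ratio": received / sent,
        "throughput": bits_received / (capacity_bps * duration_s),
        "communication_overhead": overhead_with / overhead_without,
        "average_latency": sum(latencies) / len(latencies),
        "undetected_ratio": undetected_drops / malicious_drops,
    }

# Hypothetical run: 1000 packets of 512 bytes sent over a 10 Mbit/s channel
# for 100 s; 900 delivered, 5 of 50 malicious drops went undetected.
m = evaluation_metrics(1000, 900, 900 * 512 * 8, 10e6, 100,
                       120, 100, 5, 50, [0.2, 0.4])
print(m["packet_delivery_ratio"])  # → 0.9
```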
The performance of the scheme is compared with the
other existing schemes and it is tabulated in Table 2. The
overall performance of the proposed scheme is better than
the existing schemes. Though the scheme consumes 60% of
node energy, it provides better accuracy than the existing
schemes.
Authors Profile
S.Sharmila received the B.E and M.E
degrees in Electronics and Communication
Engineering and Applied Electronics from
Bharathiyar University and Anna University,
India in 1999 and 2004 respectively. Her
research interest includes wireless sensor
networks,
computer networks and security.
Abstract: This article explores the learner-content interaction in the Online Top-Down Modeling Model networked learning environment. It used a graduate computer literacy course as a case to explore the phenomenon. The findings of this study expose the major factors that influenced learners to actively use online learning resources and the factors that negatively affected the learners in using online learning resources. It discusses the strategies to design effective online learning resources so as to motivate students into active involvement in Internet-assisted resource-based learning.

Keywords: learner-content interaction, networked learning, intrinsic factors, extrinsic factors, distance education.

1. Introduction
This section introduces the background of the learner-content interaction in the networked learning environment.

1.1 The Background
The Internet is impacting education profoundly. The Internet's capability of transferring information and multimedia empowers the delivery of instructional materials. The possibility of using online learning resources provides educators with a platform to generate creativity and increase the effectiveness of teaching and learning. The learning interaction has always been the core issue related to the quality of network-assisted education [14]. More and more varieties of learning resources are put online, leading to more strategies for designing and using Internet-based learning resources. Resource-based learning emerges with the increasing use of the Internet [11]. The concept of interaction is an essential element of the seven principles of good practice in education [4]. These practices include: encouraging faculty/student contact; developing reciprocity and cooperation; engaging in active learning; providing quick feedback; emphasizing the amount of time dedicated to a task; communicating high expectations; and respecting diversity. Wagner [24] defines interaction as "reciprocal events that require at least two objects and two actions. Interactions occur when these objects and events mutually influence one another."
Northrup [15] describes the purposes of the interaction as 1) Interaction for content, 2) Collaboration, 3) Conversation, 4) Intrapersonal interaction, and 5) Performance support. Hirumi [12] indicates in his study that there are three levels of interaction: Level I, within the learner; Level II, between the learner and human/non-human resources; and Level III, between learner and instruction. In summary, the learner-content interaction has been recognized as very important among all kinds of interactional perspectives.
The issue of learner-content interaction has caught the attention of educators [13, 25]. It is a defining characteristic of education [13]. The learner-content interaction is among the four types of interactions in distance education [22]. The four types of interaction have been recognized widely in the literature as learner-content, learner-learner, learner-teacher, and learner-interface [27, 28]. The learner-content interaction is the essential process of learning in which the learners intellectually interact with the cognitive content matter. It is regarded as an "internal didactic conversation" in which the learner transfers or internalizes knowledge input into his mental cognitive structure [26]. Learning from a book is a kind of learner-content interaction. Yet in the network environment, learning from online resources makes the learning input richer in variety and more challenging [19, 20, 21]. Thurmond [22] defined interaction as:
    ...the learner's engagement with the course content, other learners, the instructor, and the technological medium used in the course. True interactions with other learners, the instructor, and the technology result in a reciprocal exchange of information. The exchange of information is intended to enhance knowledge development in the learning environment. Depending on the nature of the course content, the reciprocal exchange may be absent, such as in the case of paper-printed content. Ultimately, the goal of interaction is to increase understanding of the course content or mastery of the defined goals.
The educational theorist Vygotsky [23] argues that learning is fundamentally a social process. He also contends that the most fruitful experiences in learners' educational processes occur when they interact, in a context, with more experienced partners or adults who provide an "intellectual scaffold" that allows the learner to achieve a higher level of skill or understanding than would be possible for the learner alone. Scaffolding provides individualized support based on the learner's "Zone of Proximal Development" [23]. In scaffolding instruction, as Vygotsky said, a teacher, or an experienced person, provides scaffolds, or supports, to facilitate the learner's development. The student builds on the prior knowledge and internalizes new information. The activities provided in scaffolding are just beyond the level of what the student can do alone. By scaffolding, the learners can accomplish (with assistance) tasks that would otherwise be overwhelming or impossible for them.
Scaffolding can take many forms. The form should be contingent on what the learner needs in order to move forward and achieve a higher level of skill with that support, until the next level is internalized by the learner and can be enacted without the support of the scaffolding.
In relation to active learning, constructivist learning has drawn great attention [1, 3]. The constructivist learning advocates assert that learning is experience based [6, 7]. The learners bring in different perspectives, cultures, interests and skills and actively engage in the learning activities. The learners explore additional resources and brainstorm to construct their own understanding and enhance learning through dialogue and the joint production of knowledge artifacts for meaningful learning to occur. Constructivism suggests that students need to explore subject matter in a broader context than what is provided in their reading materials, by sharing experiences and interacting [2]. Within the constructivist paradigm, the emphasis of learning is on the learner rather than on the teacher. It is the learner who interacts with his or her environment and thus gains an understanding of its features and characteristics. The teacher is actually a facilitator who supports and guides the students in the learning process.
Motivation is a component that energizes and directs behavior toward a goal [8]. Motivation is the energy to study, to learn and achieve, and to maintain these positive behaviors over time. Motivation is what stimulates students to make an effort to acquire, transform and use knowledge [10, 29]. According to Groccia, "People study and learn because the consequences of such behavior satisfy certain internal and/or external motives." Without motivation, there would be no learning.
Resource-based learning (RBL) is one instructional strategy where students construct meaning through interaction with a wide range of learning resources. Internet-assisted resource-based learning empowers learners with a large amount of information/resources and the strategies necessary to make learning a truly productive and meaningful experience [5]. RBL has a strong relationship to Inquiry Learning, Project-Based Learning and Problem-Based Learning. It is student-centered and allows learners to discover knowledge for themselves in a constructivist manner [9].
When teachers establish or design an effective course with online learning resources, there needs to be a connection to the learners. An effective course with learning resources ready is effective in connecting students to this highly efficient learning behavior. The students' needs and expectations for learning are vital to teachers. Understanding the students and what motivates them toward effective learning behavior is conducive to effective teaching and planning.
Li and his colleague [18] have created an online learning model, the Online Top-Down Modeling Model, at Alabama A&M University. The purpose of this model is to enhance learning effectiveness through the learner-resource interaction on the website of the graduate course FED 529 Computer-Based Instructional Technology at http://myspace.aamu.edu/users/sha.li. The FED 529 course is a computer literacy course for graduate teacher students. It is taught in a blended format of online resources and assisted traditional instruction. It is a project-based computer literacy class. It includes project creation, problem solving, hands-on activities and theory-based learning. Learning is based on a constructivist paradigm. The teacher teaches, but provides a resource-rich course website to scaffold learners. The course website has plenty of tutorials and project models such as Word projects, PowerPoint projects, Excel projects, video production, web page design, graphics design and tutorials. The tutorials (in FAQs) are available online in text and video formats. Through the integration of the online learning resources into class instruction, effective learning outcomes occur, and the students' motivation and positive attitude toward the use of technology-aided learning resources increase [18].
Even though the FED 529 class website provides plenty of learning resources for learners to access, this kind of interaction is learner-content, a one-way interaction. Some two-way interaction takes place in the Blackboard online learning space, such as discussion, chat room, video conferencing, email listserv, and bulletin board. The course website and the Blackboard space both constitute the online learning platforms for this class. The design of the course website interface is shown in Fig. 1.
This article tries to describe the factors that influence the learners' behavior in the learner-content interaction in the Online Top-Down Modeling Model network environment. The survey data are then analyzed to find the student perspectives and experiences in relation to the factors that influence their interaction with the online learning resources.
There are several factors that are commonly related to the student learning behavior in using online learning resources:
1) The extrinsic factors: the course content matter, the supportive resource platform, the course requirement, a trustful and reliable resource environment, grading policy, and the social influence.
2) The learner's intrinsic factors: interest, motivation, expectation to learn, and self-efficacy.

1.2 The Extrinsic Factors
Extrinsic factors are factors from the environment that relate to the students' motivation for accessing the online learning resources.

1.2.1 The Course Content Matter
The FED class is a multimedia-driven computer class. Almost every student likes multimedia, such as graphics design, animation, photo editing, sound editing and video production. Using multimedia to motivate students to learn computer literacy skills is the main strategy of this class. There are plenty of multimedia projects online that are free to education and students. Those attractive online resources are the initial motivators that engage learners in learning, critiquing, creating and mimicking the models.

1.2.2 A Supportive Resource Platform
This class is in a blended format that uses online support as a supplement to learning. The classroom provides the face-to-face interaction, and the online learning resources provide learner-content interaction for learning purposes. The website interface is user-friendly and easy to navigate. Once students found the online learning resources a trustful, reliable and convenient support, the students' motivation to use them increased.

[Figure 1. The Layout of the Online Top-Down Modeling Model Site]

1.2.3 The Course Participation Requirement
The instructor set the rules for the students to participate in the online learning activities. Each student must review three online model projects and tutorials to learn new projects. They have to post three discussions in each module to critique other people's or other teams' projects on the Blackboard Discussion Board. For the four modules of the whole semester, at least twelve accesses to the online models and twelve discussion postings are required. Actually, most students created more postings than required. The accesses to the online communication activities are monitored and marked for grades. This makes it a disciplinary guideline for the teacher to monitor and prompt the students to retrieve and respond to the learning information.

1.2.4 The Social Influence
The social influence is essential to the students' attitude toward using the online learning resources. The experienced students who had taken this class before or who had used online learning resources successfully often impacted each other in sharing information on how to use online resources successfully.

1.3 The Intrinsic Factors
Intrinsic factors are factors that lie psychologically in the learner. The intrinsic factors have the most influence in guiding the learners' behavior.

1.3.1 Interest
Interest is the major factor that stimulates learning. Without interest, there would be no learning [17]. Our students are graduate teacher students. Using effective computer-based projects in teaching has aroused their attention and interest. The students' personal interest in learning the multimedia skills guides them to invest energy and effort in learning and searching for useful resources.

1.3.2 Motivation
Motivation directs behavior toward particular goals; motivation leads to increased effort and energy, and increases initiation of and persistence in learning activities [30]. Since this course stimulates students' interest, students are motivated to be involved in the activities. While they learn, they use the online learning resources with high frequency to retrieve information and support.

2. Method
To verify the students' perspectives on the factors that influence their use of online learning resources, a survey was distributed. Then a descriptive statistical analysis was conducted. Twenty-nine students in the FED 529 Computer-Based Instructional Technology course participated in the survey in the spring semester of 2010. The data are presented below.

Table 1: Students' Responses about their Multimedia Preference related to their Learning Style
# | Question: My learning style with using multimedia is _________ | N | %
1 | I like visual information more than others. | 17 | 59
2 | I like auditory information more than others. | 0 | 0
Acknowledgement: This project is funded by the Title III Mini Research Grant. Great thanks to the Title III Office at Alabama A&M University for their support of this project.

Reference:
[1] G.J. Brooks, G.M. Brooks, In Search of Understanding: The Case for Constructivist Classrooms. Association for Supervision and Curriculum Development, Alexandria, VA, 1993.
[2] J. Bruner, Going Beyond the Information Given, Norton, New York, 1973.
[3] J.P. Byrnes, Cognitive Development and Learning in Instructional Contexts. Allyn and Bacon, Boston, 1996.
[4] A.W. Chickering, Z.F. Gamson, Seven principles for good practice in undergraduate education, AAHE Bulletin, 39(7), 3-6, 1987.
[5] C. Crook, Deferring to Resources: Collaborations around Traditional vs. Computer-based Notes, Journal of Computer-Assisted Learning, 18, 64-76, 2002.
[6] J. Dewey, Experience and Education. Macmillan, New York, 1938.
[7] J. Dewey, Democracy and Education, Free Press, New York, 1966.
[8] P. Eggen, D. Kauchak, Educational Psychology: Classroom Connections, (2nd Ed.), Macmillan Publishing Company, New York, 1994.
[9] G. Gibbs, N. Pollard, J. Farrell, Institutional Support for Resource Based Learning, Oxford Centre for Staff Development, Oxford, 1994.
[10] J.E. Groccia, The College Success Book: A Whole-Student Approach to Academic Excellence. Glenbridge Publishing Ltd, Lakewood, CO, p. 62, 1992.
[11] J.R. Hill, M.J. Hannafin, The resurgence of resource-based learning. Educational Technology, Research and Development, 49(3), 37-52, 2001.
[12] A. Hirumi, A Framework for Analyzing, Designing, and Sequencing Planned eLearning Interactions, Quarterly Review of Distance Education, 3(2), 141-160, 2002.
[13] M.G. Moore, Three types of interaction. The American Journal of Distance Education, 3(2), 1-6, 1989.
[14] B. Muirhead, Enhancing social interaction in computer-mediated distance education, USDLA Journal, 15(4), 2001.
[15] P.T. Northrup, A Framework for Designing Interactivity into Web-Based Instruction, Educational Technology, 41(2), 31-39, 2001.
[16] J. Piaget, The Psychology of Intelligence, Routledge, New York, 1950.
[17] M. Pressley, C.B. McCormick, Advanced Educational Psychology: For Educators, Researchers, and Policymakers. Harper Collins College Publisher, New York, 1995.
[18] S. Li, D. Liu, The Online Top-Down Modeling Model, Quarterly Review of Distance Education, 6(4), 343-359, 2005.
[19] G.T. Sciuto, Setting students up for success: The instructor's role in creating a positive, asynchronous, distance education experience. Virtual University Gazette, January 2, 2004. Available: http://www.geteducated.com/vug/aug02/vug0802.htm
[20] N. Shin, Beyond interaction: The relational construct of 'Transactional Presence'. Open Learning, 17, 121-137, 2002.
[21] P.L. Smith, C.L. Dillon, Comparing distance learning and classroom learning: Conceptual considerations. American Journal of Distance Education, 13(2), 6-23, 1999.
[22] V.A. Thurmond, Examination of interaction variables as predictors of students' satisfaction and willingness to enroll in future Web-based courses while controlling for student characteristics. Unpublished dissertation, University of Kansas, Parkland, FL, p. 4, 2003.
[23] L.S. Vygotsky, Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge, MA, p. 86, 1978.
[24] E.D. Wagner, In support of a functional definition of interaction. The American Journal of Distance Education, 8(2), 6-26, 1994.
[25] G. Zafeiriou, J.M. Nunes, N. Ford, Using students' perceptions of participation in collaborative learning activities in the design of online learning environments. Education for Information, 19, 83-106, 2001.
[27] H. Chen, Interaction in distance education. January 4, 2004. Available: http://seamonkey.ed.asu.eduac/disted/week2/7focushc.html
[28] M.W. Crawford, Students' perceptions of the interpersonal communication courses offered through distance education, unpublished doctoral dissertation, Ohio University, 1999. UMI Dissertation Services, (UMI No. 9929303).
[29] B. Holmberg, Growth and Structure of Distance Education. Croon Helm, London, 1986.
[30] P.R. Pintrich, V. De-Groot, Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33-40, 1990.

Author Profile
Sha Li received the doctoral degree in educational technology from Oklahoma State University in 2001. He is an Associate Professor at Alabama A&M University. His research interests are in E-learning in the networked environment, distance education, multimedia production, and instructional design with technology. He is also an instructional design facilitator for the local public school systems.
Shirley King received the Ed.D. degree in Special Education from the University of Alabama. She is an Associate Professor at Alabama A&M University. Her research interests are in special education, elementary education, and multicultural education. She is the Program Coordinator of a USAID Textbooks and Learning Materials Project serving Ethiopia.
Yujian Fu received the Ph.D. degree in Computer Science from Florida International University. She is an Assistant Professor at Alabama A&M University. Her research interests are in software verification, software quality assurance, runtime verification, and formal methods.
3.6 Studied Model
Figure 1 illustrates the video streaming scenario studied, which consists of 3 parts: a video streaming source server (blue block, Fig. 2), a video end user (brown block, Fig. 2) and a bandwidth-limited communication network (green block, Fig. 2). The implementation of DRM access control can be done either on the server side or on the end-user side.

[Figure: Video Source Server, communication network, and Video End User]

The added DRM control components are:
1- The Embedded MATLAB function block (Fig. 5) that we call "MPEG-21 REL", which controls the video user rights by parsing the XML license file:
1- The user requests a play right for the streamed video.
2- The DRM block parses the XML-based license with a parser.
3- The DRM block evaluates the license according to the user request.
4- If the play right exists, then the user can play the video.
5- If the play right doesn't exist, then a message is played informing the user that he/she is not authorized to play the video.
The MPEG-21 REL Embedded MATLAB Function Block is inserted into the studied model to control the user rights for the streamed video.

3.8 Simulation and Experimentation
3.8.1 Without DRM Control
When we run the simulation without the DRM components (Fig. 8), the Scopes plot the following quantities to evaluate performance:
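The license-evaluation steps listed above (request, parse, evaluate, then play or refuse) can be sketched outside Simulink as well. The following Python fragment is a hypothetical illustration using a deliberately simplified license format; it is not the MPEG-21 REL schema and not the authors' Embedded MATLAB block.

```python
import xml.etree.ElementTree as ET

# Hypothetical, highly simplified license; a real MPEG-21 REL license
# uses a much richer XML grammar (grants, principals, conditions).
LICENSE_XML = """
<license>
  <grant user="alice" right="play" resource="video1"/>
</license>
"""

def can_play(license_xml: str, user: str, resource: str) -> bool:
    """Steps 1-4 above: parse the XML license and evaluate whether a
    play right exists for this user and resource."""
    root = ET.fromstring(license_xml)
    return any(
        g.get("user") == user
        and g.get("right") == "play"
        and g.get("resource") == resource
        for g in root.iter("grant")
    )

# Step 5: if the right does not exist, inform the user instead of playing.
if can_play(LICENSE_XML, "alice", "video1"):
    print("playing video")
else:
    print("not authorized to play the video")
```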
[Figure 9. Simulation results using DRM control: a) output video for authorized user, b) output video for non-authorized user]

3. Conclusion
In this paper we have used DRM technologies to control the access and use of real-time streamed videos in the case of a bandwidth-limited network. The video quality and network performance were not affected by DRM processing. This makes the approach suitable for real-time DRM e-commerce. Further research is under way to convert the program into VHDL to be tested on an FPGA, since the MATLAB software supports the conversion to VHDL if all the components or programs are synthesizable, like the parser used. To secure licenses against tampering, we can convert them to binary format. More complex licenses will also be studied in order to deal with complex business models and with the parsing time of the XML licenses.

4. References
Cryptanalysis is the science of recovering the plaintext of a message without access to the key. Successful cryptanalysis may recover the plaintext or the key; it also finds weaknesses in the cryptosystem.
Brute-force attack: The attack tries every possible key on a piece of cipher text until an intelligible translation into plain text is obtained. This is tedious and may not be feasible if the key length is relatively long.

1.4 Confusion and Diffusion
These are the two important techniques for building any cryptographic system. Claude Shannon introduced the terms confusion and diffusion. According to Shannon, in an ideal cipher, "all statistics of the cipher text are independent of the particular key used". In diffusion, each plaintext digit affects many cipher text digits, which is equivalent to saying that each cipher text digit is affected by many plain text digits.
All encryption algorithms make use of diffusion and confusion layers. The diffusion layer is based upon simple linear operations such as multi-permutations, key additions, multiplication with known constants, etc. On the other hand, the confusion layer is based upon complex nonlinear operations such as the Substitution Box (S-box).

One of the most intense areas of research in the field of symmetric block ciphers is that of S-box design. A basic characteristic of an S-box is its size: an n x m S-box has n input bits and m output bits. Larger S-boxes are, by and large, more resistant to differential and linear cryptanalysis. However, a large dimension n leads to a larger lookup table, and the size of the lookup table decides the size of the program memory. Therefore, a small S-box is required for hardware with less program memory, and a large S-box can be used with hardware having more program memory. For example, AES uses a 16 x 16 S-box, which has been implemented on a suite of hardware platforms: 8051-based microcontrollers, PIC processors, ARM processors, FPGA-based processors, ASICs, etc. It is possible to implement a 256 x 256 S-box on high-end processors.

Another practical consideration is that the larger the S-box, the more difficult it is to design properly. An S-box is required for both encryption and decryption. An n x m S-box typically consists of 2^n rows of m bits each. The n bits of input select one of the rows of the S-box, and the m bits in that row are the output. For example, in an 8 x 32 S-box, if the input is 00001001, the output consists of the 32 bits in row 9 (the first row is labeled row 0).
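The row-selection rule described above can be sketched directly. In this sketch the table contents are made-up placeholder values, not any real cipher's S-box:

```python
# Model an n x m S-box as a table of 2**n rows, each holding an m-bit value.
# The n input bits select a row; that row's m bits are the output.

def sbox_lookup(table, x, n):
    """Look up input x (an n-bit integer) in an n x m S-box table."""
    assert 0 <= x < 2 ** n, "input must fit in n bits"
    return table[x]  # row x holds the m output bits

# Toy 8 x 32 S-box: 256 rows of arbitrary 32-bit values (illustrative only).
toy_table = [(i * 0x9E3779B9) & 0xFFFFFFFF for i in range(256)]

# Input 00001001 (binary) = 9 selects row 9 (the first row is row 0).
out = sbox_lookup(toy_table, 0b00001001, 8)
```

The memory trade-off in the text is visible here: the table has 2^n rows, so doubling n squares the storage requirement.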
Nb = 4, which reflects the number of 32-bit words (number of columns) in the State.
For the AES algorithm, the length of the Cipher Key, K, is 128, 192, or 256 bits. The key length is represented by Nk = 4, 6, or 8, which reflects the number of 32-bit words (number of columns) in the Cipher Key. For the AES algorithm, the number of rounds to be performed during the execution of the algorithm depends on the key size. The number of rounds is represented by Nr, where Nr = 10 when Nk = 4, Nr = 12 when Nk = 6, and Nr = 14 when Nk = 8. The only Key-Block-Round combinations that conform are shown below.

Figure 1. Key-Block-Round Combinations.

For both its Cipher and Inverse Cipher, the AES algorithm uses a round function that is composed of four different byte-oriented transformations: 1) byte substitution using a substitution table (S-box), 2) shifting rows of the State array by different offsets, 3) mixing the data within each column of the State array, and 4) adding a Round Key to the State.

4.1 The State
Internally, the AES algorithm's operations are performed on a two-dimensional array of bytes called the State. The State consists of four rows of bytes, each containing Nb bytes, where Nb is the block length divided by 32. In the State array, denoted by the symbol s, each individual byte has two indices, with its row number r in the range 0 ≤ r < 4 and its column number c in the range 0 ≤ c < Nb. This allows an individual byte of the State to be referred to as either s_{r,c} or s[r, c]. For this standard, Nb = 4, i.e., 0 ≤ c < 4.
At the start of the Cipher and Inverse Cipher, the input – the array of bytes in_0, in_1, ..., in_15 – is copied into the State array as illustrated in Fig. 2. The Cipher or Inverse Cipher operations are then conducted on this State array, after which its final value is copied to the output – the array of bytes out_0, out_1, ..., out_15.

Figure 2. State array input and output.

Hence, at the beginning of the Cipher or Inverse Cipher, the input array, in, is copied to the State array according to the scheme:
s[r, c] = in[r + 4c] for 0 ≤ r < 4 and 0 ≤ c < Nb
and at the end of the Cipher and Inverse Cipher, the State is copied to the output array out as follows:
out[r + 4c] = s[r, c] for 0 ≤ r < 4 and 0 ≤ c < Nb.

4.2 The State as an Array of Columns
The four bytes in each column of the State array form 32-bit words, where the row number r provides an index for the four bytes within each word. The State can hence be interpreted as a one-dimensional array of 32-bit words (columns), w_0...w_3, where the column number c provides an index into this array. Hence, for the example in Fig. 2, the State can be considered as an array of four words, as follows:
w_0 = s_{0,0} s_{1,0} s_{2,0} s_{3,0}    w_2 = s_{0,2} s_{1,2} s_{2,2} s_{3,2}
w_1 = s_{0,1} s_{1,1} s_{2,1} s_{3,1}    w_3 = s_{0,3} s_{1,3} s_{2,3} s_{3,3}

5. Diffusion Analysis
Diffusion analysis of any encryption algorithm enables one to estimate the strength of that algorithm. The strength of the algorithm is related to how sensitive the cipher values are to input plain text changes; in other words, how many output cipher text bits undergo changes when a single bit of the input plain text is changed.
Hamming distance is a measure of the Hamming weight of a function derived from XORing two cipher text values. The Hamming distance indicates the avalanche of the encryption algorithm. For well-diffused cipher values, higher avalanche values are required. Therefore, it is imperative to define the amount of avalanche required for a given encryption algorithm. The Strict Avalanche Criterion (SAC) is defined to indicate the required diffusion level. It is mandatory for every encryption algorithm to satisfy the SAC in order to meet the diffusion requirements.
In this paper, avalanche values are measured for this encryption algorithm for first order SAC and for higher order SAC. The measured results are shown in later sections. Flipping one bit of the input plain text and keeping the key value constant, avalanche values are measured for each round. The measured result shows a definite pattern.
With respect to CASE (1), i.e., implementation of the first order SAC keeping the plaintext constant: initially, in the first round, it is low – the number of bits that differ is 22 and the SAC value is 17. It then increases to a maximum – in the 7th round the number of bits that differ is 75, with a SAC value of 58 – and then decreases; finally, after the 10th round, it ends with 72 differing bits and a SAC value of 56, which satisfies the desired Strict Avalanche Criterion. Similarly, the same holds for all the other cases, which are shown in later sections. From the results, it is evident that the avalanche values exceed the SAC value in the initial rounds, sometimes in the second round itself.
The AES encryption algorithm is designed based upon the various criteria, and the number of rounds used here is adequate and robust, as it uses S-boxes as nonlinear components. So far, Rijndael has no known security attacks.
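The Hamming-distance-based avalanche measure used in this section can be sketched as follows. The two sample blocks here are arbitrary illustrative values, not actual AES outputs:

```python
def hamming_distance(c1: bytes, c2: bytes) -> int:
    """Number of differing bits between two equal-length blocks:
    the Hamming weight of their XOR."""
    assert len(c1) == len(c2)
    return sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))

def avalanche_fraction(c1: bytes, c2: bytes) -> float:
    """Fraction of output bits that changed; the SAC asks for roughly 0.5
    when a single input bit is flipped."""
    return hamming_distance(c1, c2) / (8 * len(c1))

# Illustrative 128-bit blocks (NOT real ciphertexts): pretend c1 encrypts a
# plaintext P and c2 encrypts P with one bit flipped.
c1 = bytes(range(16))
c2 = bytes(b ^ 0x5A for b in c1)  # flips exactly 4 bits per byte
d = hamming_distance(c1, c2)      # 64 of 128 bits differ here
```

For a 128-bit block the SAC target is around 64 differing bits, which is why the measured values of 72 and 75 in the later rounds are read as satisfying the criterion.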
54 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 9, September 2010
6. Alternate S-box
In a block cipher, the S-box provides the confusion. The S-box maps the plain text to a cipher value using nonlinear operations. Since plain text and cipher values are not related linearly, it is difficult to reconstruct the plain text from a given cipher value; this problem is generally known as "hard". Some block ciphers have used the multiplicative inverse of a byte in the GF(2^8) field for constructing the S-box. This S-box is constructed by filling in the multiplicative inverse values. The same S-box can be used for decryption, thus providing involution. However, these are not as secure as an S-box constructed using a double transformation, i.e., a separate S-box for each of encryption and decryption. But an involution S-box is extremely useful for involution ciphers, where hardware is at a premium, such as smart cards, etc. It is also used as a basic building block to construct an S-box using a double transformation.

6.1 Design Criteria for S-Box
Following are the design criteria for the S-box, appearing in order of importance:
• Non-linearity:
(a) Correlation: The maximum input-output correlation amplitude must be as small as possible.
(b) Difference propagation probability: The maximum difference propagation probability must be as small as possible.
• Algebraic complexity: The algebraic expression of S_RD in GF(2^8) has to be complex.

6.2 S-Box of AES
The S-box is constructed in the following fashion:
• Initialize the S-box with the byte values in ascending sequence row by row. The first row contains {00}, {01}, {02}, ..., {0F}; the second row contains {10}, {11}, etc.; and so on. Thus the value of the byte at row x, column y is {xy}.
• Map each byte in the S-box to its multiplicative inverse in the finite field GF(2^8); the value {00} is mapped to itself.
• Consider that each byte in the S-box consists of 8 bits labeled (b7, b6, b5, b4, b3, b2, b1, b0). Apply the following transformation to each bit of each byte in the S-box:
b'_i = b_i ⊕ b_{(i+4) mod 8} ⊕ b_{(i+5) mod 8} ⊕ b_{(i+6) mod 8} ⊕ b_{(i+7) mod 8} ⊕ c_i
where c_i is the ith bit of the byte c with the value {63}, i.e., (c7 c6 c5 c4 c3 c2 c1 c0) = (01100011). The prime (') indicates that the variable is to be updated by the value on the right. The AES standard depicts this transformation in matrix form as follows:

6.3 Proposed S-box
Here we propose that we can generate our own S-boxes by choosing a different constant value to be used in the affine transformation in the construction of the S-box.

7. Experimental Results
The AES algorithm is designed with the same three key size alternatives, i.e., 128/192/256, but limits the block length to 128 bits. The algorithm efficiently encrypts and decrypts the plaintext, and the result is tabulated. Also, diffusion analysis is used as a tool to measure the strength of the AES algorithm. This is achieved by analyzing the diffusion, which exhibits a strong avalanche effect for the first order SAC and higher order SAC, taking the following cases:
• Changing one bit at a time in a plaintext, keeping the key constant.
• Changing one bit at a time in a key, keeping the plaintext constant.
• Changing many bits at a time in a plaintext, keeping the key constant.
• Changing many bits at a time in a key, keeping the plaintext constant.
Each round's avalanche value is tabulated for all the above cases, and it is proved that the Rijndael algorithm exhibits good Strict Avalanche Criteria. Also, generation of an alternate S-box is an attempt to secure the algorithm from any attacks; the generated S-box is then used for encryption and diffusion analysis, for comparison.
The following are the results that have been achieved:

7.1 Encryption
The length of the key is entered; accordingly, the key and the plaintext are to be entered in hexadecimal. Simultaneously, the cipher text is generated.

7.2 Decryption
The key has to be entered, which was previously entered for encryption. As a result, the plain text entered during encryption and the text after decrypting is generated.

Figure 4. Result after decryption for a 128-bit key length.

7.3 Diffusion Analysis for First Order SAC
CASE 1: Changing one bit at a time in a key, keeping the plaintext constant.
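The three construction steps of Section 6.2 – and the constant swap proposed in Section 6.3 – can be sketched as below. This is a minimal sketch assuming the standard AES parameters (reduction polynomial 0x11B, affine constant {63}); the alternate constant 0xA5 is an arbitrary illustrative choice, not a vetted design:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1 (0x11B)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def gf_inv(x: int) -> int:
    """Multiplicative inverse in GF(2^8) as x^254; {00} maps to itself."""
    if x == 0:
        return 0
    r, e = 1, x
    for _ in range(7):      # builds x^(2+4+...+128) = x^254 by squaring
        e = gf_mul(e, e)
        r = gf_mul(r, e)
    return r

def affine(b: int, c: int = 0x63) -> int:
    """b'_i = b_i ^ b_{i+4} ^ b_{i+5} ^ b_{i+6} ^ b_{i+7} (indices mod 8) ^ c_i."""
    out = 0
    for i in range(8):
        bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
               ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (c >> i)) & 1
        out |= bit << i
    return out

def make_sbox(c: int = 0x63) -> list:
    """Inverse-then-affine construction; c = 0x63 gives the AES S-box,
    any other c gives an 'alternate' S-box in the sense of Section 6.3."""
    return [affine(gf_inv(x), c) for x in range(256)]

aes_sbox = make_sbox()        # standard constant {63}
alt_sbox = make_sbox(0xA5)    # hypothetical alternate constant
```

Because the affine step is a fixed bijective linear map followed by an XOR with c, any choice of c yields an invertible S-box; what changes are its algebraic properties, which is what the comparison in Section 7 examines.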
8. Conclusion
The basic design of an encryption algorithm is based upon the strength of diffusion and confusion. This dissertation explored the diffusion and confusion elements used in the AES to an extent. Based on the studies, the following techniques are developed as a security improvement:
• Diffusion analysis, which is used as a tool to measure the strength of the algorithm. From the experimental results, it is proved that AES meets the Strict Avalanche Criteria, which is mandatory for an encryption algorithm in order to meet the diffusion requirements.
• Suggesting an alternate S-box.

9. Future Enhancements
• An alternate S-box for decryption can be developed.
• All encryption algorithms, both symmetric and public key, involve arithmetic operations on integers over a finite field. The Rijndael algorithm uses the irreducible polynomial m(x) = x^8 + x^4 + x^3 + x + 1 = 0x11b (hex). So, a new irreducible polynomial of degree 8 could be used. There are 30 irreducible polynomials of degree 8.

References
[1] W. Stallings, Cryptography and Network Security, Prentice Hall, 2003.
[2] AES page available via http://www.nist.gov/CryptoToolkit.
[3] Computer Security Objects Register (CSOR): http://csrc.nist.gov/csor/.
[4] J. Daemen and V. Rijmen, AES Proposal: Rijndael, AES Algorithm Submission, September 3, 1999.
[5] J. Daemen and V. Rijmen, "The block cipher Rijndael," Smart Card Research and Applications, LNCS 1820, Springer-Verlag, pp. 288-296.
[6] B. Gladman's AES related home page: http://fp.gladman.plus.com/cryptography_tetechnolo/.
[7] A. Lee, NIST Special Publication 800-21, Guideline for Implementing Cryptography in the Federal Government, National Institute of Standards and Technology, November 1999.
[8] A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC Press, New York, 1997, pp. 81-83.
[9] J. Nechvatal, Report on the Development of the Advanced Encryption Standard (AES), National Institute of Standards and Technology, October 2, 2000.
[10] Mohan H.S. and A. Raji Reddy, "Diffusion Analysis of Mars Encryption Algorithm," International Conference on Current Trends of Information Technology, MERG-2005, Bhimavaram, Andhra Pradesh.
[11] Mohan H.S. and A. Raji Reddy, "An Effective Defense Against Distributed Denial of Service in Grid," IEEE International Conference on Integrated Intelligent Computing (ICIIC-2010), SJBIT, Bangalore-60, ISBN 978-0-7695-4152-5, pp. 84-89.

Authors Profile
Mohan H.S. received his Bachelor's degree in Computer Science and Engineering from Malnad College of Engineering, Hassan, in 1999, and his M.Tech in Computer Science and Engineering from Jawaharlal Nehru National College of Engineering, Shimoga, in 2004. He is currently pursuing a part-time Ph.D. degree at Dr. MGR University, Chennai. He is working as a professor in the Dept. of Information Science and Engineering at SJB Institute of Technology, Bangalore-60, and has a total of 12 years of teaching experience. His areas of interest are network security, image processing, data structures, computer graphics, finite automata and formal languages, and compiler design. He obtained a best teacher award for his teaching in 2008 at SJBIT, Bangalore-60. He has published and presented papers in journals and at international and national conferences.

A. Raji Reddy received his M.Sc. from Osmania University, his M.Tech in Electrical and Electronics and Communication Engineering from IIT Kharagpur in 1979, and his Ph.D. degree from IIT Kharagpur in 1986. He worked as a senior scientist in R&D at ITI Ltd., Bangalore, for about 24 years. He is currently working as a professor and head of the Department of Electronics and Communication, Madanapalle Institute of Technology & Science, Madanapalle. His current research areas are cryptography and its applications to wireless systems and network security. He has published and presented papers in journals and at international and national conferences.
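The claim in Section 9 that there are 30 irreducible polynomials of degree 8 over GF(2), and that m(x) = 0x11B is one of them, can be checked by brute force. This sketch is an illustration, not part of the paper's experiments:

```python
def poly_mod(a: int, m: int) -> int:
    """Remainder of GF(2) polynomial a modulo polynomial m (bitmask encoding:
    bit i is the coefficient of x^i)."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def is_irreducible_deg8(p: int) -> bool:
    """A degree-8 polynomial over GF(2) is irreducible iff no polynomial of
    degree 1..4 divides it (any factorization yields such a factor)."""
    return all(poly_mod(p, d) != 0 for d in range(2, 32))  # ints 2..31: deg 1..4

# All degree-8 polynomials are the integers 0x100..0x1FF.
irreducibles = [p for p in range(0x100, 0x200) if is_irreducible_deg8(p)]
```

The count agrees with the standard formula (2^8 − 2^4)/8 = 30, so any of the other 29 polynomials could in principle replace 0x11B as the field's reduction polynomial.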
Abstract: Today most scientific enterprises are highly data-intensive, computation-intensive and collaboration-intensive. This necessitates the interaction and sharing of various resources, especially knowledge, despite their heterogeneity and geographical distribution. Intelligent process automation and collaborative problem solving have to be adopted. A Semantic Web-based approach is proposed to tackle the six challenges of the knowledge lifecycle, namely those of acquiring, modeling, retrieving, reusing, publishing and maintaining knowledge. To achieve this vision, the Semantic Web community has proposed some core enabling technologies and reasoning, which provide an infrastructure for distributed information and knowledge management based on metadata, semantics, and reasoning. A Semantic Web-based approach to managing Grid resources' knowledge for Grid applications is one where a semantics-based knowledge layer is added between Grid resources and Grid applications. In this layer, the Semantic Web technologies are used to carry out knowledge acquisition, modeling, representation, publishing, storage and reuse. Ontologies are used to conduct knowledge acquisition through ontology modeling and semantic annotation. Ontology modeling provides conceptual structures for preserving knowledge, and semantic annotation captures metadata, generates semantic instances and populates them into knowledge bases.

Keywords: Ontology, Semantic Web, Client, Grid, K-Service.

1. Introduction
The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. It is the idea of having data on the Web defined and linked in a way that it can be used for more effective discovery, automation, integration, and reuse across various applications where data can be shared and processed by automated tools as well as by people. To achieve this vision, we have made use of core enabling technologies, APIs and tools, encompassing ontologies, ontology languages, annotation, semantic repositories, and reasoning, which provide an infrastructure for distributed information and knowledge management based on metadata, semantics, and reasoning [3]. A Semantic Web-based approach to KM is proposed here in which we use ontologies for knowledge acquisition and modeling, the Web Ontology Language for knowledge representation, and semantics-based reasoning for decision-making support.

Grid computing offers a promising distributed computing infrastructure where large-scale cross-organizational resource sharing and routine interactions are commonplace. Grid applications usually refer to large-scale science and engineering that are carried out through distributed global collaboration enabled by the Grid. Typically, such scientific enterprises are data-intensive and/or collaboration-intensive and/or computation-intensive, i.e., they require access to very large data collections, very large-scale computing resources, and close collaboration among multiple stakeholders. This necessitates the interaction and sharing of specific resources, despite the heterogeneity of their respective policies, platforms and technologies, and their geographical and organizational dispersal [8]. It is envisioned that Grid applications would be carried out through flexible collaborations and computations on a global scale with a higher degree of ease of use and seamless automation [1].

In Grid applications, knowledge usually resides implicitly in resource models and/or descriptions. Making domain knowledge explicit and understandable for third-party consumers can enhance effective resource reuse by supporting well-informed decisions regarding when, where, and how to use a resource [4]. By enriching metadata and knowledge with semantics, the Grid can break down the barrier of heterogeneity and move to truly seamless access and cross-organizational resource sharing. Furthermore, semantics empowers machines and software agents to understand and process resources' metadata. Consequently, it will increase the level of automation and reduce the need for manual intervention.

2. Existing System
Most of today's Web content is suitable only for human consumption. Even Web content that is generated automatically from databases is usually presented without the original structural information found in databases. Typical uses of the Web today involve people seeking and making use of information, searching for and getting in
touch with other people, reviewing catalogs of online stores and ordering products by filling out forms, and viewing adult material.
These activities are not particularly well supported by software tools. Apart from the existence of links that establish connections between documents, the main valuable, indeed indispensable, tools are search engines [9].
Keyword-based search engines, such as AltaVista, Yahoo and Google, are the main tools for using today's Web. However, there are serious problems associated with their use. Most information is available in a weakly structured form, for example, text, audio and video. From the knowledge management perspective, the current technology suffers from limitations in the following areas:

Extracting information: Human time and effort are required to browse the retrieved documents for relevant information.
Uncovering information: New knowledge implicitly existing in corporate databases is extracted using data mining. However, this task is still difficult for distributed, weakly structured collections of documents [4].
Viewing information: Often it is desirable to restrict access to certain information to certain groups of employees. "Views", which hide certain information, are known from the area of databases but are hard to realize over an intranet (or the Web).
Searching information: Currently, keyword-based search engines return too much, too little, or irrelevant information.

2.1 Proposed System
The Web represents information using natural languages such as English, Hungarian, Chinese, etc. This is fine for humans but difficult for machines. In the case of distributed applications, automatic procedures are involved, not only humans: agents try to make "sense" of resources on the Web, and a well-defined terminology for the domain is necessary. So it is appropriate to represent the Web content in a form that is easily machine-processable and to use intelligent techniques to take advantage of these representations. We refer to this plan of revolutionizing the Web as the Semantic Web initiative. It is important to understand that the Semantic Web will not be a new global information highway parallel to the existing World Wide Web; instead, it will gradually evolve out of the existing Web.
The aim of the Semantic Web is to allow much more advanced knowledge management systems. Knowledge will be organized in conceptual spaces according to its meaning. Automated tools will support maintenance by checking for inconsistencies and extracting new knowledge. Keyword-based search will be replaced by query answering: requested knowledge will be retrieved, extracted, and presented in a human-friendly way. Query answering over several documents will be supported. Defining who may view certain parts of information (even parts of documents) will be possible. Knowledge management is done using metadata, semantics and reasoning. Some of the core enabling technologies made use of are ontology modeling and semantic annotation. Ontology modeling provides conceptual structures for preserving knowledge [1]. Semantic annotation captures metadata, generates semantic instances as knowledge entities, and populates them into knowledge bases. The essence of this approach is to add a semantics-based knowledge layer between primitive Grid resources and Grid applications. Grid applications would be carried out through flexible collaborations and computations on a global scale with a high degree of ease of use and seamless automation.

3. OWL (Web Ontology Language)
The OWL (Web Ontology Language) is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full. OWL is intended to be used when the information contained in documents needs to be processed by applications, as opposed to situations where the content only needs to be presented to humans [3].
OWL can be used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. This representation of terms and their interrelationships is called an ontology. OWL has more facilities for expressing meaning and semantics than XML, RDF, and RDF-S, and thus OWL goes beyond these languages in its ability to represent machine-interpretable content on the Web. OWL is a revision of the DAML+OIL web ontology language, incorporating lessons learned from the design and application of DAML+OIL [11].

3.1 OWL Description
The Semantic Web is a vision for the future of the Web in which information is given explicit meaning, making it easier for machines to automatically process and integrate information available on the Web. The Semantic Web will build on XML's ability to define customized tagging schemes and RDF's flexible approach to representing data. The first level above RDF required for the Semantic Web is an ontology language that can formally describe the meaning of terminology used in Web documents. If machines are expected to perform useful reasoning tasks on these documents, the language must go beyond the basic semantics of RDF Schema. OWL has been designed to meet this need for a Web Ontology Language. OWL is part of the growing stack of W3C recommendations related to the Semantic Web [5], [6].

• XML provides a surface syntax for structured documents, but imposes no semantic constraints on the meaning of these documents.
• XML Schema is a language for restricting the structure of XML documents and also extends XML with data types.
• RDF is a data model for objects ("resources") and relations between them, providing a simple semantics for this
2 Dr. R.M.L. Avadh University, Faizabad, Uttar Pradesh, India
proflksingh@yahoo.com
3 Institute of Engg. & Technology, Alwar, Rajasthan, India
neelam_sr@yahoo.com
3. Adder Design

3.9 Design Algorithm
Arithmetic has played an important role in human civilization, especially in the fields of science, engineering and technology. The everlasting need for higher computing power and processing speed in a wide range of information processing applications places stringent demands for fast computation on digital computer design.
Recent advances in technologies for integrated circuits make large-scale arithmetic circuits suitable for VLSI implementation [9]. However, arithmetic operations still suffer from known problems including a limited number of bits, propagation time delay, and circuit complexity [6]. With recent advances in integrated circuit technology, higher-radix circuits are becoming a reality.
Addition is the most important arithmetic operation in digital computation. A carry-free addition is highly desirable as the number of digits becomes large. We can achieve carry-free addition by exploiting the redundancy of QSD numbers and QSD addition. The redundancy allows multiple representations of any integer quantity, e.g. (writing negative digits with a minus sign):
(-5)10 = (-2 3)QSD = (-1 -1)QSD
There are two steps involved in the carry-free addition [3]. The first step generates an intermediate carry and sum from the addend and augend. The second step combines the intermediate sum of the current digit with the carry of the lower significant digit [10].
To prevent the carry from rippling further, we define two rules. The first rule states that the magnitude of the intermediate sum must be less than or equal to 2. The second rule states that the magnitude of the carry must be less than or equal to 1. Consequently, the magnitude of the second-step output cannot be greater than 3, which can be represented by a single-digit QSD number; hence no further carry is required. In step 1, all possible input pairs of the addend and augend are considered. The output ranges from -6 to 6 as shown in Table 1.

Table 1: The Outputs of All Possible Combinations of a Pair of Addend (A) and Augend (B)

In the step 1 QSD adder, the range of output is from -6 to +6, which can be represented by an intermediate carry and sum in QSD format as shown in Table 2 [4]. We can see in the first column of Table 2 that some numbers have multiple representations, but only those that meet the above-defined two rules are chosen. The chosen intermediate carry and intermediate sum are listed in the last column of Table 2 as the QSD coded number.

Table 2: The Intermediate Carry and Sum Between -6 and 6

This addition process can be well understood by the following example.

Example: Perform the QSD addition of two numbers A = 107 and B = -233.
First convert the decimal numbers to their equivalent QSD representation:
(107)10 = 2 × 4^3 + (-2) × 4^2 + 3 × 4^1 + (-1) × 4^0 = (2 -2 3 -1)QSD
(233)10 = 3 × 4^3 + 3 × 4^2 + (-2) × 4^1 + 1 × 4^0 = (3 3 -2 1)QSD
Hence, (-233)10 = (-3 -3 2 -1)QSD
Now the addition of the two QSD numbers can be done as follows:

A = 107        2  -2   3  -1
B = -233      -3  -3   2  -1
Decimal sum   -1  -5   5  -2
IC             0  -1   1   0
IS            -1  -1   1  -2
S             -2   0   1  -2
Cout           0

3.10 Step 1 Adder Design
The step 1 QSD adder accepts QSD numbers as input and gives the intermediate carry and sum as output. Figure 1 shows the step 1 adder block as the intermediate carry and sum circuit.
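The two-step carry-free scheme above can be sketched as follows. Digit lists are most-significant first, and the split of each digit sum into intermediate carry and sum follows the two rules (|IS| ≤ 2, |IC| ≤ 1):

```python
def qsd_add(a, b):
    """Carry-free QSD addition of two equal-length digit lists
    (digits in -3..3, most-significant digit first)."""
    # Step 1: split each digit-pair sum t (in -6..6) into an intermediate
    # carry IC (|IC| <= 1) and intermediate sum IS (|IS| <= 2): t = 4*IC + IS.
    ic, isum = [], []
    for x, y in zip(a, b):
        t = x + y
        if t > 2:
            c, s = 1, t - 4
        elif t < -2:
            c, s = -1, t + 4
        else:
            c, s = 0, t
        ic.append(c)
        isum.append(s)
    # Step 2: add the carry of the lower significant digit; since
    # |IS + IC| <= 3, the result is a single QSD digit -- no further ripple.
    n = len(a)
    out = [isum[i] + (ic[i + 1] if i + 1 < n else 0) for i in range(n)]
    return ic[0], out  # (carry out, sum digits)

def qsd_value(digits):
    """Value of a QSD digit list, most-significant first (radix 4)."""
    v = 0
    for d in digits:
        v = 4 * v + d
    return v

# Paper's example: A = 107 and B = -233 (negative digits written with '-').
A = [2, -2, 3, -1]     # (107)10
B = [-3, -3, 2, -1]    # (-233)10
cout, s = qsd_add(A, B)
```

Running this reproduces the worked example: intermediate sums (-1, -1, 1, -2), intermediate carries (0, -1, 1, 0), final digits (-2, 0, 1, -2) with carry out 0, i.e., 107 + (-233) = -126. Note that step 2 involves no loop-carried dependency, which is what makes the delay independent of the number of digits.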
4. Simulation Results
The four-digit QSD adder was written in VHDL, and compiled and simulated using ModelSim SE 6.4. The simulated result for the 4-digit QSD adder is shown in Figure 6.

Figure 4. Single Digit QSD Adder Structure.

5. Result Implementation
The delay for the QSD adder is 2 ns, which is the minimum delay in comparison to the Ripple Carry Adder (RCA) and Carry Look-Ahead (CLA) adder. The QSD adders have a constant delay of 2 ns for higher numbers of bits. Figure 8 shows the timing comparison chart for the RCA, CLA adder and QSD adders.
[Figure 7 chart: adder delay versus number of bits (4, 8, 16, 32, 64, 128); series labels visible: CLA, QSD.]

Figure 7. Timing Comparison of RCA, CLA and QSD Adders.

6. Conclusion
We have presented an algorithm for radix-4 carry-free addition which is suitable for realizing high-speed compact arithmetic VLSI circuits. The QSD addition scheme is independent of the length of the processed bit strings and is thus very fast. The QSD-based addition technique is also memory-efficient, since more information can be encoded in fewer digits than in its BSD addition counterpart.

References
[1] A. Avizienis, "Signed-digit number representations for fast parallel arithmetic," IRE Trans. on Electronic Computers, vol. EC-10, pp. 389-400, 1961.
[2] Abdallah K. Cherri, "Canonical Quaternary Arithmetic Based on Optical Content-Addressable Memory (CAM)," Proc. IEEE National Aerospace and Electronics Conference, vol. 2, 1996, pp. 655-661.
[3] Reena Rani, Upasana Agrawal, Neelam Sharma, L.K. Singh, "High Speed Arithmetic Logical Unit using Quaternary Signed Digit Number System," International Journal of Electronic Engineering Research, ISSN 0975-6450, vol. 2, no. 3, 2010, pp. 383-391.
[4] Songpol Ongwattanakul, Phaisit Chewputtanagul, David J. Jackson, Kenneth G. Ricks, "Quaternary Arithmetic Logic Unit on a Programmable Logic Device," Proc. IEEE Conference, 2001.
[5] Reena Rani, Neelam Sharma, L.K. Singh, "FPGA Implementation of Fast Adders using Quaternary Signed Digit Number System," Proc. IEEE International Conference on Emerging Trends in Electronic and Photonic Devices & Systems (ELECTRO-2009), 2009, pp. 132-135.
[6] Behrooz Parhami, "Carry-Free Addition of Recoded Binary Signed-Digit Numbers," IEEE Transactions on Computers, vol. 37, no. 11, pp. 1470-1476, November 1988.
[7] A.T.M. Shafiqul Khalid, A.A.S. Awwal and O.N. Garcia, "Digital Design of Higher Radix Quaternary Carry Free Parallel Adder," Proc. 39th Midwest
[9] K. Hwang, Computer Arithmetic: Principles, Architecture and Design. New York: Wiley, 1979.
[10] Reena Rani, Neelam Sharma, L.K. Singh, "Fast Computing using Signed Digit Number System," Proc. IEEE International Conference on Control, Automation, Communication and Energy Conservation 2009, 4th-6th June 2009, pp. 1-4.
[11] N. Takagi, H. Yasuura, and S. Yajima, "High Speed VLSI Multiplication Algorithm with a Redundant Binary Addition Tree," IEEE Trans. Comp., C-34, pp. 789-795, 1985.
[12] A.A.S. Awwal, Syed M. Munir, A.T.M. Shafiqul Khalid, Howard E. Michel and O.N. Garcia, "Multivalued Optical Parallel Computation Using An Optical Programmable Logic Array," Informatica, vol. 24, no. 4, 2000, pp. 467-473.
[13] P.K. Dakhole, D.G. Wakde, "Multi Digit Quaternary Adder on Programmable Device: Design and Verification," International Conference on Electronic Design, 1-3 Dec 2008, pp. 1-4.

Authors Profile
Reena Rani obtained her M.Tech (VLSI Design) from Banasthali Vidyapith, Rajasthan, India, and is currently pursuing a Ph.D. in Electronics from Dr. Ram Manohar Lohiya Avadh University. She is a winner of the 3rd prize from the AMIETE council of India. Her research area is VLSI design. She is a Senior Lecturer in the Department of Electronics & Communication Engineering at B.S.A. College of Engineering & Technology, Mathura (U.P.), and an Associate Member of the Institution of Electronics and Telecommunication Engineering.

Lakshami Kant Singh obtained his Ph.D. (Optoelectronics) in 1976. He is currently Director and Professor at Dr. Ram Manohar Lohiya Avadh University, Faizabad, U.P., India. Posts held include Dean of the Faculty of Science and Pro-Vice Chancellor. He has over 35 years of teaching experience and has published around 30 research papers and articles. He is a member of the Institution of Engineers; the Institution of Electronics and Telecommunication Engineering, Delhi; and the Computer Society of India.

Neelam Sharma received her Ph.D. and M.Tech from U.P.T.U., Lucknow, U.P., and her B.E. from Thapar Institute of Engineering and Technology, Punjab, India. Presently she is a Professor in the Department of Electronics and Instrumentation Engineering, Institute of Engineering and Technology, Alwar, Rajasthan, India. Her current research interests are computer architecture, neural networks, VLSI, FPGA, etc. She has twenty-five research publications and has convened a number of sponsored research projects. She is a member of IEEE, IETE and IE.
(IJCNS) International Journal of Computer and Network Security, 67
Vol. 2, No. 9, September 2010
Triple-Critical Graphs
Basheer Ahamed M., and Bagyam I
3.8 Example
For subcase 3(b), when G1 = K3 and G2 = K6, G = K3 ⊗ K6 = K9. Then the chromatic numbers are χ(G) = 9, χ(G1) = 3 and χ(G2) = 6. Removing no vertex from G1 and three vertices va, vb, vc from G2, we get
χ(G − va − vb − vc) = χ(G1) + χ(G2 − va − vb − vc) = χ(G1) + χ(G2) − 3 = 3 + 6 − 3 = 6.
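The arithmetic in the example can be checked mechanically. The sketch below (illustrative, not from the paper) builds the join of K3 and K6 and colours it greedily; greedy colouring is exact for complete graphs, which is all that is needed here:

```python
def join(n1, n2):
    """Adjacency sets of K_n1 (x) K_n2.  The join of two complete graphs
    is itself complete, so this is simply K_{n1+n2} on vertices 0..n1+n2-1."""
    n = n1 + n2
    return {v: set(range(n)) - {v} for v in range(n)}

def greedy_chromatic(adj):
    """Greedy colouring; for complete graphs this equals the chromatic number."""
    colour = {}
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in used)
    return len(set(colour.values()))

g = join(3, 6)              # G = K3 (x) K6 = K9
print(greedy_chromatic(g))  # 9
for v in (8, 7, 6):         # remove three vertices v_a, v_b, v_c of the K6 part
    g.pop(v)
    for u in g:
        g[u].discard(v)
print(greedy_chromatic(g))  # 6 = chi(G1) + chi(G2) - 3
```

Removing three vertices of the K6 part leaves K3 ⊗ K3 = K6, matching the value 6 computed in the example.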
3 School of Information Technology, JNTUH, Hyderabad, India, sdurga.bhavani@gmail.com
(6)
In the process of interlacing, we associate the binary bits given above in a column-wise manner. Thus we get the new Pi as the first m columns of the matrix in (6), the new Qi as the next m columns of the matrix in (6), and so on. For example,
(7)
In a similar manner, we can obtain Qi to Wi.
Let us illustrate the above process by considering a simple example. Let
(8)
This can be written in binary form as shown below.
(10)
Here, the first row contains the first four columns of (9), the second row contains the next four columns of (9), and so on. Thus we get
(11)
This completes the process of interlacing. It may be noted here that inverse interlacing is the reverse process to interlacing.
In what follows, we present the algorithms for encryption and decryption.

Algorithm for Encryption
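The column-wise interlacing step described above can be sketched as follows. This is one plausible reading of the step (the matrices in (6) to (11) are not reproduced here), with illustrative block sizes; `interlace` and `inverse_interlace` are hypothetical helper names, not the paper's code:

```python
def interlace(blocks):
    """Column-wise interlacing: write the blocks' bits as the rows of a
    matrix, read the matrix out column by column, and cut the resulting
    stream back into equally sized blocks."""
    n = len(blocks[0])
    matrix = [list(b) for b in blocks]          # one row per block
    stream = "".join(matrix[r][c]
                     for c in range(n) for r in range(len(blocks)))
    return [stream[i:i + n] for i in range(0, len(stream), n)]

def inverse_interlace(blocks):
    """Reverse of interlace: refill the matrix column by column and read
    the rows back as the original blocks."""
    n, k = len(blocks[0]), len(blocks)
    it = iter("".join(blocks))
    matrix = [[None] * n for _ in range(k)]
    for c in range(n):
        for r in range(k):
            matrix[r][c] = next(it)
    return ["".join(row) for row in matrix]

print(interlace(["1010", "1100"]))   # ['1101', '1000']
```

By construction, `inverse_interlace(interlace(blocks))` recovers the original blocks, which is the property the decryption algorithm relies on.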
3. Illustration of the cipher

Consider the plain text given below.

“Dear brother! I have passed my B. Tech (ECE) with distinction. All this is due to the scholarship sanctioned by our Government. Now I want to enter into politics” (12)

Let us focus our attention on the first 32 characters. These are given by

“Dear brother! I have passed my B” (13)

On using the EBCDIC code, (13) can be written in the form
(14)
Let us consider a square key matrix K of size 4. We have
(15)
On using (14) and (15) and applying the encryption algorithm given in Section 2, with n = 16, we get the cipher text C in the form
(16)
On applying the decryption algorithm (see Section 2), we get back the original plain text given by (13).

Now let us study the avalanche effect for determining the strength of the algorithm. Let us change the 24th character ‘s’ to ‘t’ in (13). Due to this, 162 in (14) becomes 163, and hence P0 undergoes a change of 1 binary bit. Thus the entire plain text comprising P0 to W0 also undergoes a change of 1 binary bit. On applying the encryption algorithm to the modified plain text, we get
(17)
On comparing the cipher texts given by (16) and (17), after converting them into binary form, we find that they differ by 130 bits. As a change of 1 bit in the plain text leads to a change of 130 bits (out of 256) in the cipher text, the cipher exhibits a strong avalanche effect.

Now let us consider the effect of changing the key by one binary bit. This can be done by replacing 126 (the element in the 2nd row and 2nd column of the key) by 127. On applying the process of encryption to the original plain text (14), we get the new C given by
(18)
On comparing the cipher texts given in (16) and (18), we notice that they differ by 138 bits (out of 256). This also indicates that the cipher is a strong one.

4. Cryptanalysis

In the study of cryptography [6], the well-known methods of cryptanalysis are:
1. Cipher text only (brute-force) attack
2. Known plain text attack
3. Chosen plain text attack
4. Chosen cipher text attack
In all these attacks, it is assumed that the encryption algorithm and the cipher text are known to the attacker.

In the brute-force attack, as the key contains 128 binary bits, the size of the key space is
2^128 = (2^10)^12.8 ≈ 10^38.4
If we assume that the process of encryption with each key requires 10^−7 seconds, then the time required for the computation with all possible keys is approximately 10^38.4 × 10^−7 = 10^31.4 seconds. As this computation would take an impractically long time, the brute-force attack is computationally infeasible.

Let us now study the known plain text attack. In this case, the attacker has as many plain text and cipher text pairs as required. Here, since in carrying out the encryption process the m decimal numbers in each set of 2m decimal numbers are multiplied by the key, and since the portions of the modified plain text are interlaced at the end of each round of the iteration process, the key K cannot be determined by this attack.

In the last two cases, that is, in the chosen plain text attack and in the chosen cipher text attack, intuitively no special choice of inputs suggests itself, as the process of encryption is a complex one.

In the light of the above analysis, we conclude that the cipher withstands these cryptanalytic attacks.

5. Computations and Conclusions

In this paper, we have developed a block cipher using a Feistel structure. In this, we have taken a plain text of length 256 binary bits and made use of a key K containing 128 bits. The programs for encryption and decryption are written in the C language.

On adopting the procedure discussed in Section 3, the cipher text corresponding to the rest of the plain text (which can be divided into four parts) can be obtained in the same manner.

The avalanche effect discussed in Section 3 and the cryptanalysis discussed in Section 4 indicate that the cipher is a strong one and resists the cryptanalytic attacks considered here.

References
[1] William Stallings, Cryptography and Network Security: Principles and Practice, Third Edition, Pearson, 2003.
[2] E. Schaefer, “A Simplified Data Encryption Standard Algorithm”, Cryptologia, Jan. 1996.
[3] H. Feistel, “Cryptography and Computer Privacy”, Scientific American, May 1973.
[4] H. Feistel, W. Notz, and J. Smith, “Some Cryptographic Techniques for Machine-to-Machine Data Communications”, Proceedings of the IEEE, Nov. 1975.
[5] R. C. W. Phan and Mohammed Umar Siddiqi, “A Framework for Describing Block Cipher Cryptanalysis”, IEEE Transactions on Computers, Vol. 55, No. 11, pp. 1402–1409, Nov. 2006.
[6] D. Denning, Cryptography and Data Security, Addison-Wesley, 1982.
Abstract: Correct functioning of object-oriented software systems depends upon the successful interaction of objects and classes. While individual classes may function correctly, new faults can arise when these classes are integrated. The interaction among classes has increased the difficulty of object-oriented testing dramatically. Traditional approaches that generate testing paths from source code or UML diagrams lack analysis and hinder the automation of test case generation. In this paper, we present an integrated approach to enhance the testing of interactions among classes. The approach combines UML sequence diagrams and statecharts hierarchically and generates test paths based on a message flow graph. We have applied it to a case study to investigate its fault detection capability. The results show that the proposed approach effectively detects all the seeded faults. As a result, this work provides a solid foundation for further research on automatic test case generation and coverage criteria analysis for sequence-diagram-based object-oriented testing.

Keywords: Software testing, UML model, sequence diagram, statecharts diagram.

1. Introduction

Nowadays, the object-oriented paradigm has become a popular technology in the modern software industry due to several distinguishing features, such as encapsulation, abstraction, and reusability, which improve the quality of software. However, compared to the testing of procedural software [10, 4], OO features also introduce new challenges for testers: communications and interactions between objects may give rise to subtle errors that can be hard to detect. Although most traditional unit testing and system testing techniques may also be applicable to object-oriented testing, testing procedural software and testing object-oriented software still differ greatly, since object communication and interaction may introduce more complicated and unforeseen situations. Therefore, it is necessary to explore new and effective object-oriented testing techniques in theory and practice.

The Unified Modeling Language (UML) has emerged as the de facto standard for the analysis and design of OO systems. UML provides a variety of diagramming notations for capturing design information from different perspectives.

In recent years, researchers have realized the potential of UML models as a source of information in software testing [1, 5, 6, 7, 9, 12, 14, 15, 19, 21, 24, 26, 27, 28]. Many UML design artifacts have been used in different ways to perform different kinds of testing. For instance, UML statecharts have been used to perform unit testing, and interaction diagrams (collaboration and sequence diagrams) have been used to test class interactions.

As a major benefit of object-oriented programming, encapsulation aims at modularizing a group of related functionalities in classes. However, a complete system-level functionality (use case) is usually implemented through the interaction of objects. Typically, the complexity of an OO system lies in its object interactions, not within class methods, which tend to be small and simple. As a result, complex behaviors are observed when related classes are integrated, and several kinds of faults can arise during integration: interface faults, conflicting functions, and missing functions [4]. Thus, testing each class independently does not eliminate the need for integration testing. A large number of possible interactions between collaborating classes may need to be tested to ensure correct communication among classes and, further, the functionality of the system.

More and more software developers use UML and the associated visual modeling tools as a basis to design and implement their applications. In addition, the UML sequence diagram is widely used for specifying the dynamic behavior of classes and contains the necessary information about object communications in terms of object lifelines, which makes it well suited to object-oriented software testing. Therefore, in the research reported in this paper, UML sequence diagrams are used as a basis to generate a message flow graph (MFG) hierarchically. Firstly, we discuss an approach to generating a hierarchical MFG based on the sequence and statechart diagrams of the corresponding objects. After that, a verification method is provided for the coverage criteria.

The remainder of the paper is organized as follows. Section 2 presents a brief survey of the related works in the areas of state-based testing and UML-based test path generation, together with a classification with respect to UML 2 diagrams. Section 3 presents an approach to generating test cases based on a hierarchical message flow graph; this approach can also derive independent testing paths. A case study of a web-based information system is illustrated
in Section 5. Conclusive remarks and future work are given in Section 6.

2. Related Works

Traditional testing strategies for procedural programs, such as data flow analysis and control flow analysis, cannot be directly applied to OO programs [22]. Extensions of these techniques for OO programs have been proposed by Buy et al. [9] and Martena et al. [25]. A structural test case generation strategy by Buy et al. [8] generates test cases through symbolic execution and automated deduction for the data flow analysis of a class. Kung et al. [20] proposed an idea to extract state models from the source code, whereas others suggest test generation from pre-existing state models [12, 13, 29]. In the paragraphs below, we discuss more specific UML-based testing techniques.

Tse and Xu [30] have proposed an approach to derive test cases from Object Test Models (OTM). State space partitions of the attributes of a class are used with the OTM to generate a test tree. The actual test cases are derived from the test tree. Nori and Sreenivas [26] have proposed a specification-based testing technique that splits the specification of a class into structural and behavioral components. Structural aspects define the attributes and method specifications of a class, whereas a state machine is used to define the behavioral component that describes the sequence of method invocation. In the work of [12, 28], an idea of converting test generation into an AI planning problem was proposed. UML statecharts are processed by planning tools and used to produce AI planning specifications. The test cases are generated based on the processed statecharts. Another example of a statechart-based test case generation technique was proposed by Kim et al. [18]. These statecharts are transformed into Extended Finite State Machines (EFSMs) to generate test cases, and traditional control and data flow analysis is then applied to the generated test cases.

Several state-based approaches were proposed based on statecharts or finite state machines. In the work of [23], Li et al. presented an approach to testing specific properties of reactive systems. Kim et al. [17] used statecharts to generate test sequences for Java-based concurrent systems. Kansomkeat and Rivepiboon [16] have converted UML statecharts into an intermediate model known as a Testing Flow Graph (TFG). This graph reduces the complexity of statecharts and produces a simple flow graph. Test cases are finally generated by traversing the TFG using state and transition coverage criteria. The proposed methodology was evaluated using mutation testing. Results of an experiment carried out to validate the application of the Round Trip Test Strategy [4] on UML statecharts are presented in Briand et al. [6]. The authors also propose improvements to the strategy based on the analysis of these results. Swain et al. have proposed a statechart and activity model based testing technique by constructing an intermediate model named the state-activity diagram (SAD) [29]. Besides, some recent work [11] was proposed using a formalization of the statechart diagram to perform model-based testing. In the work of [3], a semantic model is proposed using the labeled transition system. The formalization of model-based testing represents a new trend of state-based testing.

Although many works have been done on the OO testing of sequence diagrams and statechart diagrams, this work differs from the above unit-level testing in two aspects. First, this work presents a hierarchical synthesized approach to sequence diagram testing using a message flow graph (MFG). The proposed MFG is generated from the statechart that supports message generation in the sequence diagram. Secondly, the hierarchical structure provides a novel graph-based testing technique for OO program validation.

3. Graph based Testing Approach

The run-time behavior of an object-oriented system is modeled by well-defined sequences of messages passed among collaborating objects. In the context of UML, this is usually modeled as interaction diagrams (sequence and/or collaboration diagrams). In many cases, the states of the objects sending and receiving a message at the time of message passing strongly influence their behavior in the following aspects:
• An object receiving a message can provide different functionalities in different states.
• Certain functionalities may even be variable or unavailable if the receiving object is not in the correct state.
• The functionality of the providing object may also depend on the states of other objects, including the sending object of a message.
In this work, a graph-based testing technique is proposed, based on the idea that the communication between objects (represented by the sequence diagram) should ideally be exercised for all possible states of the objects involved (statechart diagram). This is of particular importance in the context of OO software, as many classes exhibit interaction state-dependent behavior. This testing objective is implemented by generating a message flow graph (MFG) and testing paths over it according to the defined criteria. The proposed technique can be applied during the integration test phase, right after the completion of class testing. It consists of the following three steps:
1. Message Flow Graph (MFG) Generation: We investigate the sequence diagram of the (sub)system and generate the corresponding MFG following the MFG generation algorithm (discussed in the following section).
2. Hierarchical Testing Path Generation: Based on the MFG of the sequence diagram, for each object of concern, we refer to the statechart diagram and generate an MFG for the corresponding node of the MFG.
3. Coverage Criteria: We test the sequence diagram against the coverage criteria that we defined.
In the following sub-sections, we describe the proposed testing technique in greater detail with the help of a simple example.
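As a hypothetical illustration of steps 1 and 2, an MFG can be represented as a labeled directed graph over message-send events, with testing paths enumerated as simple paths through the graph. The class, node names, and messages below are illustrative only and are not the paper's MFG generation algorithm:

```python
from collections import defaultdict

class MFG:
    """Minimal message flow graph: nodes joined by message-labeled edges."""

    def __init__(self):
        self.edges = defaultdict(list)       # node -> [(message, next_node)]

    def add_message(self, src, msg, dst):
        self.edges[src].append((msg, dst))

    def paths(self, start, end, seen=()):
        """Enumerate message sequences along simple paths from start to end."""
        if start == end:
            yield []
            return
        for msg, nxt in self.edges[start]:
            if nxt not in seen:              # avoid revisiting a node
                for rest in self.paths(nxt, end, seen + (nxt,)):
                    yield [msg] + rest

g = MFG()
g.add_message("N1", "login()", "N2")
g.add_message("N2", "query()", "N3")
g.add_message("N2", "logout()", "N4")
g.add_message("N3", "logout()", "N4")
print(list(g.paths("N1", "N4")))
# [['login()', 'query()', 'logout()'], ['login()', 'logout()']]
```

Each enumerated message sequence corresponds to one candidate testing path; a node with internal state transitions would be expanded into its own sub-MFG, as step 2 describes.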
We define the dependency path as follows.

Definition 2 (Dependency Path (DP)). Given an MFG G = <N, E, L, I>, a dependency path (DPi) in G from node ni to

For example, in Fig. 1, assume node N6 involves a series of state transitions; then we can generate a subset MFG G_N6 for it (Fig. 2).
analysis, pages 60–70, New York, NY, USA, 2000. ACM.
[16] S. Kansomkeat and W. Rivepiboon. Automated-generating test case using UML statechart diagrams. In SAICSIT '03: Proceedings of the 2003 annual research conference of the South African institute of computer scientists and information technologists on Enablement through technology, pages 296–300, Republic of South Africa, 2003. South African Institute for Computer Scientists and Information Technologists.
[17] S.-K. Kim, L. Wildman, and R. Duke. A UML approach to the generation of test sequences for Java-based concurrent systems. In ASWEC '05: Proceedings of the 2005 Australian conference on Software Engineering, pages 100–109, Washington, DC, USA, 2005. IEEE Computer Society.
[18] Y. Kim, H. Hong, D. Bae, and S. Cha. Test cases generation from UML state diagrams. Software, IEE Proceedings, 146(4):187–192, 1999.
[19] Y. Kim, H. S. Hong, S. Cho, D. H. Bae, and S. D. Cha. Test cases generation from UML state diagrams. In IEEE Proceedings: Software, pages 187–192, 1999.
[20] D. C. Kung, P. Hsia, Y. Toyoshima, C. Chen, and J. Gao. Object-oriented software testing: Some research and development. In HASE '98: The 3rd IEEE International Symposium on High-Assurance Systems Engineering, pages 158–165, Washington, DC, USA, 1998. IEEE Computer Society.
[21] D. C. Kung, N. Suchak, J. Gao, P. Hsia, Y. Toyoshima, and C. Chen. On object state testing. In Proceedings of Computer Software and Applications Conference, pages 222–227. IEEE Computer Society Press, 1994.
[22] D. C. Kung, N. Suchak, J. Gao, P. Hsia, Y. Toyoshima, and C. Chen. On object state testing. In Proceedings of Computer Software and Applications Conference, pages 222–227. IEEE Computer Society Press, 1994.
[23] S. Li, J. Wang, and Z.-C. Qi. Property-oriented test generation from UML statecharts. In ASE '04: Proceedings of the 19th IEEE international conference on Automated software engineering, pages 122–131, Washington, DC, USA, 2004. IEEE Computer Society.
[24] W. Linzhang, Y. Jiesong, Y. Xiaofeng, H. Jun, L. Xuandong, and Z. Guoliang. Generating test cases from UML activity diagram based on gray-box method. In APSEC '04: Proceedings of the 11th Asia-Pacific Software Engineering Conference, pages 284–291, Washington, DC, USA, 2004. IEEE Computer Society.
[25] V. Martena, A. Orso, and M. Pezzè. Interclass testing of object oriented software. In ICECCS '02: Proceedings of the Eighth International Conference on Engineering of Complex Computer Systems, page 135, Washington, DC, USA, 2002. IEEE Computer Society.
[26] A. V. Nori and A. Sreenivas. A technique for model-based testing of classes. In Proceedings of the Second International Workshop on Software Engineering Tools and Techniques, 2001.
[27] H. Reza, K. Ogaard, and A. Malge. A model based testing technique to test web applications using statecharts. In ITNG '08: Proceedings of the Fifth International Conference on Information Technology: New Generations, pages 183–188, Washington, DC, USA, 2008. IEEE Computer Society.
[28] M. Scheetz, A. v. Mayrhauser, R. France, E. Dahlman, and A. E. Howe. Generating test cases from an OO model with an AI planning system. In ISSRE '99: Proceedings of the 10th International Symposium on Software Reliability Engineering, page 250, Washington, DC, USA, 1999. IEEE Computer Society.
[29] S. K. Swain, D. P. Mohapatra, and R. Mall. Test case generation based on state and activity models. Journal of Object Technology, 9(5):1–27, 2010.
[30] T. Tse and Z. Xu. Class-level object-oriented state testing: A formal approach. Technical Report TR-95-05, Department of Computer Science, The University of Hong Kong, 1995.
[31] Q. ul-ann Farooq, M. Z. Z. Iqbal, Z. I. Malik, and M. Riebisch. A model-based regression testing approach for evolving software systems with flexible tool support. In Engineering of Computer-Based Systems, IEEE International Conference on, pages 41–49, 2010.

Authors Profile

Yujian Fu is an assistant professor at the department of computer science. Dr. Fu received the B.S. and M.S. degrees in Electrical Engineering from Tianjin Normal University and Nankai University in 1992 and 1997, respectively. In 2007, she received her Ph.D. degree in computer science from Florida International University. Dr. Fu conducts research in software verification, software quality assurance, runtime verification, and formal methods. Dr. Fu is a member of IEEE, ACM and ASEE.

Sha Li is an associate professor at the department of curriculum, teaching and educational leadership, school of education of Alabama A&M University. Dr. Sha Li received his doctoral degree in educational technology from Oklahoma State University in 2001. Sha Li's research interests include distance education, instructional technology, instructional design and multimedia for learning.
Abstract: Peer-to-Peer (P2P) networks have been shown to be a promising approach to providing large-scale Video-on-Demand (VoD) services over the Internet, owing to their potentially high scalability. However, how a peer can efficiently schedule media data to multiple asynchronous peers for VoD services in such networks remains a major challenge. These systems dramatically reduce the server load and provide a platform for scalable content distribution, as long as there is interest in the content. The main challenges lie in ensuring that users can start watching a movie at any point in time, with small start-up times and sustainable playback rates. In this work, we address the challenges underlying the problem of near Video-on-Demand (nVoD) using P2P systems, and provide evidence that high-quality nVoD is feasible. In particular, we investigate the scheduling problem of efficiently disseminating the blocks of a video file in a P2P mesh-based system, and show that the scheduling algorithm can provide significant benefits; the experimental results will show that load-sharing scheduling performs significantly better than another dynamic algorithm, network coding.

Keywords: Networks, VoD, nVoD, P2P.

1. Introduction

Video-on-Demand (VoD) systems provide multimedia services offering more flexibility and convenience to users by allowing them to watch any kind of video at any point in time. Such systems are capable of delivering the requested information and are responsible for providing continuous multimedia visualization [5].

Currently, the traffic generated by P2P systems accounts for a major fraction of Internet traffic, and is bound to increase. The increasingly large volume of P2P traffic highlights the importance of caching such traffic to reduce the cost incurred by Internet Service Providers (ISPs) and alleviate the load on the Internet backbone.

We are faced with the problem of delivering quality video to a single receiver computer. In streaming scenarios, the entire video is not always available at every peer machine, and/or it would not be feasible to transmit the entire video from a single peer; for example, that would overload that particular peer. The beauty of streaming is the fact that we do not need to have the entire video downloaded before the playout begins [6][10]. We can simply split the video file, identify the peers that have the segments of interest available, request these segments from those peers, receive them, and play them out.

An important requirement of a VoD service is scalability, i.e., the ability to support a large number of users, as a typical video stream imposes a heavy burden both on the network and on system resources, e.g., the disk I/O of the server. The multicasting paradigm has been proposed to address the scalability issue. However, these systems require a multicast-enabled infrastructure, which unfortunately has never materialized. Peer-to-Peer networks promise to provide scalable distribution solutions without infrastructure support.
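The split/request/playout loop described above can be sketched as follows. All names (`split`, `playout`, `request`) are hypothetical, and peer selection and deadline handling are simplified away:

```python
def split(video: bytes, seg_size: int):
    """Cut the video file into fixed-size segments (last one may be short)."""
    return [video[i:i + seg_size] for i in range(0, len(video), seg_size)]

def playout(segment_ids, availability, request):
    """Fetch and play segments in order.  availability maps a segment id to
    the peers advertising it; request(peer, sid) returns the segment data.
    A segment with no available peer is a miss (None), i.e. a paused frame."""
    played = []
    for sid in segment_ids:
        peers = availability.get(sid, [])
        if not peers:
            played.append(None)          # missed: no peer holds this segment
            continue
        played.append(request(peers[0], sid))
    return played
```

A real client would request segments from several peers in parallel and buffer ahead of the playback point; the sketch only shows the sequential structure of the loop.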
The difficulty lies in the fact that users need to receive blocks “sequentially” in order to watch the movie while downloading, and, unlike streaming systems, users may be interested in different parts of the movie and may compete for system resources. In other words, we assume linear viewing, but we allow users to join at arbitrary times. The resources, especially the network bandwidth, of the server are limited, and hence users contribute their own resources to the system. Users organize themselves in an unstructured overlay mesh which resembles a random graph. The goal then is to design a P2P system which meets the VoD requirements while maintaining a high utilization of the system resources [10].

We study algorithms that provide users with a high-quality VoD service while ensuring high utilization of the system resources. We evaluate our algorithms, such as segment scheduling, using both extensive simulations and real experiments under different user arrival/departure patterns. The results will show that the load-sharing scheduling algorithm improves throughput for bulk data transfer, yields high system throughput while delivering content “pseudo-sequentially”, and provides efficient VoD with small setup delays compared to other dynamic algorithms such as segment scheduling and network coding [1][2][6].

In this paper we implement a scheduler that builds on segment scheduling, in which the entire video is divided into segments once the playout begins. It takes the video in the form of segments: it defines the segments, identifies the peers that have the segments of interest available, requests these segments from those peers, receives them, and plays them out. The load-sharing scheduling algorithm improves performance by increasing the senders' usable bandwidth, dividing segments based on the loads and the number of segments, and decreasing the number of missed segments. Section 2 describes the architecture of the peer-to-peer system with the load-sharing scheduling algorithm. Section 3 gives a brief description of the algorithm. Section 4 describes the implementation of the whole algorithm, Section 5 describes simulations, and Section 6 presents our conclusion.

The goals of our system include the following:
• To ensure a low setup time and a high sustainable playback rate for all users, regardless of their arrival time.
• To increase the total number of blocks exchanged per round, which we call throughput, and the total server bandwidth.

3. Related Work

The overall throughput improves if all the nodes seek to improve the diversity of segments in the network. If the segment policy is to upload a block from a lesser-represented segment whenever possible, throughput improves significantly for both existing and new nodes [15][19].

It is important that we provide a continuous supply of video segments once the playout begins. If only a few segments are missing once in a while, the video quality will still be acceptable by most standards. If too many segments are missing, the video and audio quality will suffer and the media will not be useful. A missing segment does not always indicate a network failure (due to congestion or other problems). It may simply mean that the segment was not available at the receiver's media player at the time it was supposed to be rendered, i.e., the segment deadline was missed. The main problem that we are solving is creating

(d) Other servers, such as log servers for logging significant events for data measurement, and transit servers for helping peers in the system [11].

We assume a large number of users interested in some video content, which initially exists on a special peer that we call the server. Users arrive at random points in time and want to watch the video sequentially from the beginning. The resources, especially the network bandwidth, of the server are limited, and hence users should contribute their own resources to the system. The upload and download capacities of the users are also limited and typically asymmetric. A client joins the system by contacting a central tracker. This tracker gives the client a small subset of active nodes. The client then contacts each of these nodes and joins the network. At any point in time, a node is connected to a small subset of the active nodes, and can exchange content and control messages only with them. We call this subset the neighborhood of the node. The neighborhood changes as a result of node arrivals and departures, and because nodes periodically try to find new neighbors to increase their download rates. We assume cooperative nodes. The file is divided into a number of segments, which are further divided into blocks. The system is media-codec agnostic; hence, nodes need to download all blocks: if a block is not available when needed, the video pauses, and this is undesirable. Clients have enough storage to keep all the blocks they have downloaded.

Our system divides the constant stream of data into stripes to improve performance and robustness. In a peer-to-peer system, the stream of data is disrupted whenever a client leaves the system, either due to a failure or a regular disconnect. Since clients receive pieces of the content from different senders, they can continue to receive some data even if one of the senders disconnects. To further hide disruption from the user, a client keeps a buffer of several seconds of data. When a client reconnects after being cut off from a sender, the buffer allows the video to play smoothly as the client catches up on the data that was missed during the disconnection. In a one-directional live video streaming system, it is allowable for the video to be viewed a few seconds after its creation.

Our algorithm hinges on having a good estimate of how well-represented a segment is. This estimate should include nodes that have the complete segment, and those that have partially downloaded the segment. In our implementation, the tracker monitors the rarity of segments in the network. Clients in our system report the fraction of blocks they have received from each segment. Those fractions are used to estimate the popularity of the segments; for example, a segment is considered under-represented if the vast majority of nodes have very few blocks from that segment [14][15].

The primary objective of the scheduling algorithm is to create a schedule such that all requested video segments are delivered to the receiver before their respective deadlines. If that is impossible to achieve, given insufficient resources, etc., we want to minimize the number of segments missed. We also have two secondary objectives. We want to make efficient use of the bandwidth and avoid transmitting segments before they are required, because there is no guarantee that they will even be used. Finally, since we do not want to overload any of the senders, we make sure that the peers are load-balanced. The sender for a particular segment is selected based on the estimated load of each sender. The estimated load of a sender is the amount of work time spent transmitting the segments that the sender has been assigned by the schedule at a particular instant of time. The time it takes a particular sender to transmit a segment depends on the segment size and the sender bandwidth. We predict the load of each sender by temporarily assigning the next scheduled segment to each sender. We select the sender with the smallest predicted estimated load. This ensures that
schedule that minimizes the number of video segments the load is shared among all the senders. The proper
missing their deadlines [14][15][18][19]. segment transmission start time is also most essential for a
Load Sharing Scheduling Algorithm provides track of successful schedule. We schedule each segment to arrive at
every sender’s estimated load. Before we can sort the list of the receiver a fixed amount of time before its deadline thus
suppliers for segments, where segment has n potential fixing the client buffer. This ensures the least amount of
suppliers, we create a look-ahead estimated load, i.e. what bandwidth wasted on unsolicited video segments.
would be the load of each one of those potential suppliers if In situations where senders do not have all segments
segment s was to be assigned to them. Once we have these available, we must make sure that we do not commit a
look-ahead estimated loads, we can sort the list of potential sender who is the only one that can deliver particular
suppliers in the increasing order of the estimated load. Since segments to do work delivering other segments that could be
we do not have a guarantee that the supplier with the delivered by other senders. When determining the
smallest estimated load is suitable to deliver segments, we appropriate peer to deliver a segment, we must therefore
iterate over the list of potential suppliers. Algorithm examine the number of potential senders that have the
minimizes the number of missed segments and the easiest segment available. It is usually more difficult to meet the
way to do this is to compare it against a couple of other deadline of a segment that has fewer potential suppliers [4]
known scheduling algorithms and the results show that that [10] [14] [15]. For this reason, the algorithm first calculates
load sharing scheduling improves the throughput of the the number of potential suppliers for each segment and the
system and decrease missed segments[20]. segments with least potential suppliers are scheduled first.
That is, the segments with one potential supplier are
4. Load Sharing Scheduling Algorithm scheduled first, then segments with two potential suppliers,
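The look-ahead load computation and least-loaded-supplier selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Segment` fields and the dictionaries mapping senders to load and bandwidth are assumed names.

```python
def pick_sender(segment, suppliers, load, bandwidth):
    """Pick a supplier for `segment` using look-ahead estimated load.

    `load` maps each sender to its current estimated load (seconds of
    scheduled transmission work); `bandwidth` maps each sender to its
    upload bandwidth. All names here are illustrative assumptions.
    """
    # Look-ahead: what each supplier's load would become if this
    # segment were assigned to it.
    lookahead = {p: load[p] + segment.size / bandwidth[p] for p in suppliers}
    # Try suppliers in increasing order of look-ahead load; the first
    # one that can still meet the segment's deadline is chosen.
    for p in sorted(suppliers, key=lookahead.get):
        finish = lookahead[p]
        if finish <= segment.deadline:
            load[p] = finish  # commit the assignment
            return p
    return None  # no supplier can meet the deadline
```

As in the text, the least-loaded supplier is only a first candidate; the loop falls through to the next supplier when the deadline check fails.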
82 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 9, September 2010
This is a potential area for improving the algorithm in the future, because it is not always necessary to schedule the segments with the fewest potential suppliers first: it is still possible to depart from the supplier-count order and create an efficient schedule in which all segment deadlines are met. The proper segment transmission start time is also essential for a successful schedule. We schedule each segment to arrive at the receiver a fixed amount of time before its deadline, thus fixing the client buffer. This ensures the least amount of bandwidth wasted on unsolicited video segments [16] [18].
In the most general form, we are given a set of n video segments, S = {S1, S2, ..., Sn}, with an associated set of segment lengths, L = {L1, L2, ..., Ln}, and a set of segment deadlines, D = {D1, D2, ..., Dn}. We also have a set of m peers, P = {P1, P2, ..., Pm}, with an associated set of peer bandwidths, B = {B1, B2, ..., Bm} [20]. Finally, we are given the set of segment ranges available at each peer. Since peers typically hold contiguous ranges of segments, we assume that the segment range at each peer is simply given by the highest segment number available, and that all segments with lower segment numbers are also available at that peer. The set of available segments is given by A = {A1, A2, ..., Am}. We need to create a schedule J in which each segment has an assigned peer and a transmission start time such that the segment will be transmitted to the receiver before its deadline. Since this may not always be possible and missed segments must be taken into consideration, we want to minimize the number of segments that miss their deadlines. We have the additional constraint that the schedule cannot exceed the incoming receiver bandwidth.
Consider an instance of the segment scheduling problem where we are given a set of n segments S = {S1, S2, ..., Sn} and a set of m peers P = {P1, P2, ..., Pm}. We apply two restrictions:
1) There is only one sender peer available, i.e., m = 1.
2) The sender peer has all the segments available.
The segment transmissions are the tasks that need to be scheduled. The release time of each segment transmission is 0. Since we only have one sender, clearly only one segment can be transmitted at a time. The execution length of each task is the segment transmission time, which can be calculated from the segment length and the sender's bandwidth. Every segment still has a deadline. At this point, it should be clear that we have an instance of single-machine sequencing to be solved by our scheduling algorithm.
Increasing the average sender bandwidth has very little effect on the execution time, but a higher average sender bandwidth decreases the number of missed segments. If we have a large-scale video, we can simply split the video file into well-defined segments, identify the peers that have the segments of interest available, request these segments from the peers, receive them, and play them out. In situations where senders do not have all segments available, we must make sure that we do not commit a sender that is the only one able to deliver particular segments to delivering segments that other senders could deliver.
The first task is to create the segment supplier table. The reason for creating this table is to enable the schedule creation code to iterate over the segments in increasing order of the number of segment suppliers; the table should hold the data in a form that is easy to retrieve. For every potential sender p for segment s, we must confirm that sender p is suitable to deliver segment s. This means that sender p must be able to deliver segment s before its deadline, and it cannot overload the receiver, i.e., the resulting bandwidth at the receiver must not exceed the maximum bandwidth. Secondly, we create a schedule that minimizes the number of video segments missing their deadlines.
It is possible that we will have any number of segments with n potential suppliers, so we must iterate over the list of segments with n potential suppliers; this data is readily available from the segment supplier table. For every segment with n potential suppliers, we need to determine its sender and transmission start time. Since we want to schedule the segments with the fewest potential suppliers first, we iterate over the segment supplier table indices in increasing index order. The segments are then transferred from senders to the receiver, those with the fewest suppliers first. Steps 5-10 of the algorithm iterate the loop for loading peers. The segments are then loaded at the client side with the increased bandwidth at the server side.
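The segment supplier table and fewest-suppliers-first ordering can be sketched as below. This is an illustrative reading of the text, using its assumption that each peer holds a contiguous range of segments identified by the highest segment number it has; all function names are ours.

```python
def build_supplier_table(segments, available):
    """Group segments by their number of potential suppliers.

    `available[p]` is the highest segment number peer p holds (peers
    hold contiguous ranges starting at segment 1, per the text).
    Returns {supplier_count: [segment numbers]}.
    """
    table = {}
    for s in segments:
        suppliers = [p for p, highest in available.items() if s <= highest]
        table.setdefault(len(suppliers), []).append(s)
    return table

def scheduling_order(table):
    """Segments with the fewest potential suppliers are scheduled first."""
    order = []
    for count in sorted(table):  # increasing supplier count
        order.extend(sorted(table[count]))
    return order
```

Iterating the table in increasing key order realizes the "one supplier first, then two, then three" rule from the text.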
The bandwidth of the client should not exceed the maximum available bandwidth at the server side. Finally, the scheduled peers are returned.
For every potential sender p for segment s, we must confirm that sender p is suitable to deliver segment s. This means that sender p must be able to deliver segment s before its deadline, and it cannot overload the receiver, i.e., the resulting bandwidth at the receiver must not exceed the maximum bandwidth. As the initial guess, we set the start time such that segment s will arrive at the receiver right before its deadline, and before we confirm this transmission slot for segment s we must make sure that it does not violate the maximum bandwidth constraint.

Figure 2. Load Sharing Scheduling Algorithm

Replication Strategy of Segments
Assuming each peer contributes some amount of hard disc storage, a P2P storage system is formed by the entire viewer population, each member of which holds segments. Once all the segments are available locally, the segment is advertised to other peers. The aim of the replication strategy is to make segments available to every user in the shortest time possible in order to meet viewing demands. Design issues regarding replication strategies include:
(i) Allowing multiple movies to be cached if there is room on the hard disc. This is referred to as a multiple movie cache (MVC), and it lets a peer watching a movie upload a different movie at the same time.
(ii) Whether or not to pre-fetch; while pre-fetching could improve performance, it could also waste uplink bandwidth resources of the peer.
(iii) Selecting which segment or movie to remove when the disc cache is full; preferred choices for many caching algorithms are least recently used (LRU) or least frequently used (LFU).

Content Discovery
Together with a good replication strategy, peers must also be able to learn who is holding the content they need without introducing too much overhead into the system. P2P systems depend on the following methods for content discovery:
(i) A tracker, to keep track of which peers are replicating which part of the movie;
(ii) A DHT, used to assign movies to trackers for load balancing purposes.

Congestion Control
• Rate control: match the rate of the video stream to the maximum available bandwidth, thus reducing congestion and segment loss. Without rate control, segments that would exceed the maximum bandwidth would be discarded. This approach focuses on the transport concept.
• Rate-adaptive video encoding: use compression to make video feeds more practical and bandwidth efficient. This approach focuses on the compression concept.
• Rate shaping: a combination of the previous two. The video feed is re-coded with rate-adaptive video encoding, and rate control makes sure there is no loss of segments.

5. Performance Evaluation

In this section we evaluate the performance of both the segment scheduling and the load sharing scheduling algorithms. Initially, we present data that compares the segment rate of both algorithms. The load sharing scheduling algorithm is then compared with another dynamic algorithm, and the results clearly show that the load sharing scheduling algorithm is better than segment scheduling, especially in terms of missed segments.

5.1 Simulation Setup
Matlab was used for all simulations. Along with the load sharing scheduling algorithm, network coding and segment scheduling were also simulated for benchmark comparison purposes. In general, the behavior of the algorithms is rather intuitive. For all three algorithms, increasing the number of senders has very little effect on the execution time, but having more senders decreases the number of missed segments. Similarly, increasing the average sender bandwidth has very little effect on the execution time, but a higher average sender bandwidth decreases the number of missed segments.
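The start-time initialization and receiver-bandwidth check described at the top of this section can be sketched as follows. This is a simplified model under our own assumptions: a fixed deadline margin stands in for the "fixed amount of time before its deadline", and transmissions are treated as constant-rate intervals.

```python
def initial_start_time(size, deadline, sender_bw, margin=2.0):
    """Initial guess: the segment should finish arriving a fixed
    `margin` seconds before its deadline (margin value is an
    assumption, not from the text)."""
    duration = size / sender_bw
    return deadline - margin - duration

def violates_receiver_cap(schedule, candidate, max_bw):
    """True if adding `candidate` (start, end, bw) would push the total
    receive rate above `max_bw` at any instant. Rates are piecewise
    constant, so checking at every interval start point suffices."""
    intervals = schedule + [candidate]
    for (s, _, _) in intervals:
        total = sum(b for (s2, e2, b) in intervals if s2 <= s < e2)
        if total > max_bw:
            return True
    return False
```

A scheduler would move the candidate start time earlier (or pick another sender) whenever the cap check fails, mirroring the confirmation step in the text.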
Finally, increasing the bit rate has similar effects, i.e., it has little effect on the execution time, but increasing the bit rate increases the number of missed segments [11][13].
The parameters that we vary include:
• Average segment size, i.e., video quality (bit rate)
• Segment count (shorter vs. longer videos)
• Number of senders
• Bandwidth of senders
• Receiver's bandwidth
For each scenario, we compare the results of each algorithm according to the following criteria:
• Number of missed segments
• Running time
In the current evaluation, the maximum receiver bandwidth constraint has been ignored for all test cases. Since we are most interested in the effects of the bit rate on the number of missed segments and on the algorithm execution time, most results are presented as execution time or number of missed segments vs. bit rate graphs. More specifically, we plot the following result graphs:
a. Execution time/missed segments vs. time for a varying number of segments
b. Execution time/missed segments vs. time for a varying number of senders
c. Execution time/missed segments vs. time for a varying average sender bandwidth
d. Missed segment and execution time algorithm comparison graph
On the other hand, increasing the number of segments (video length) has little effect on the number of missed segments, or at least little effect on the bit rate at which the number of missed segments starts to increase; but in the case of the segment scheduling algorithm, longer videos take more time to schedule. This may be caused by the fact that those two algorithms are much faster than the Load Sharing Scheduling algorithm, and the inputs provided in the tests do not stress the algorithms enough to produce trends in the execution time graphs.
When comparing the algorithms, the general trends exhibited by all three are very similar. When comparing actual values, it becomes evident that the Load Sharing Scheduling algorithm is superior to the other two when it comes to minimizing the number of missed segments, but in most cases this algorithm takes longer to compute the results.
Before presenting empirical results, we describe the simulation model. Table 1 summarizes the configuration parameters of the simulated scheduling algorithm, with increased bandwidth on the server side, used in our experiments.

Table 1: System Parameters
Parameters           Values
Segments             10
Potential Suppliers  4
Bandwidth            100 kbps
Bandwidth1           80 kbps
Deadline             60 seconds
Size                 250 kb
Available            2

5.2 Impact of Missed Segments
The graph here shows that segment loss is lower with the load sharing scheduling algorithm than with other dynamic algorithms. Measuring the available bandwidth is of great importance for predicting the end-to-end performance of applications, for dynamic path selection and traffic engineering, and for selecting between a number of differentiated classes of service. The available bandwidth is an important metric for several applications, such as grid computing, video and voice streaming, overlay routing, P2P file transfers, server selection, and inter-domain path monitoring. The overall throughput improves if all the nodes seek to improve the diversity of segments in the network.

Graph 1. Bit Rate v/s Time
From the above graph we can conclude that as the transmission time increases for the senders, the amount of data that can be sent through the sender of a particular segment decreases, and hence the data rate of the segment decreases.

Graph 2. Time v/s Bit rate for varying Segments
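As a sanity check on the Table 1 configuration, the per-segment transmission time follows directly from segment size and sender bandwidth. The sketch below reads the listed size and bandwidth literally as 250 kb and 100 kbps; the units in the table are our interpretation.

```python
def transmission_time(size_kb, bandwidth_kbps):
    """Transmission time is segment size divided by sender bandwidth."""
    return size_kb / bandwidth_kbps

t = transmission_time(250, 100)  # 2.5 seconds per segment
assert t < 60  # comfortably within the 60-second deadline of Table 1
```

At these parameter values a single sender can deliver many segments per deadline window, which is consistent with missed segments appearing only as the bit rate grows.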
From the above graph we can conclude that as the transmission time increases for the senders, the number of segments that can be handled in any given time is reduced, and hence the bit rate decreases.

5.3 Scalability
Generally speaking, scalability can be defined as adaptability to changes in the peer-to-peer system size and in the extent and nature of the load. That is, the network load should be distributed evenly among the peers, which means that every peer should be aware of approximately the same number of other peers. From the data it can be concluded that our algorithm is more scalable than other dynamic algorithms such as segment scheduling and network coding.
The second approach to dealing with the scalability issue of video streaming systems is to use P2P load sharing. "P2P networking architectures receive a lot of attention nowadays, as they enable a variety of new applications that can take advantage of the distributed storage and increased computing resources offered by such networks." Their advantage resides in their capacity for self-organization, bandwidth scalability, and network path redundancy, which are all very attractive features for effective delivery of media streams over networks. Space is also saved for the peers, as replicas are deleted.

6. Conclusion

The success of the peer-to-peer paradigm in both file distribution and live streaming applications has led to the adoption of this technology for the delivery of video-on-demand content. Fast response time is a technology factor that end-users demand. Considerable research has been performed to find better ways to arrange data such that fast response time can be achieved by increasing throughput and maximum bandwidth with low startup delay. Load sharing scheduling is better for improving the performance of peer-to-peer systems. Though further work is required towards a better understanding of the efficacy of our algorithms in more realistic scenarios, we believe that the guidelines proposed in this paper can be used to build high-performance P2P VoD systems. Thus, high quality VoD is feasible with high playback rates.
Our system was designed to guarantee that the video starts playing shortly after the beginning of the download and progresses without interruptions until the end of the movie. While we have made the implicit assumption that users watch the entire video linearly, we believe that the same principles used in our system could be extended to support non-linear viewing, i.e., where users would be able to start watching from arbitrary points in the video and perform fast-forward and rewind operations. However, if the user desires to watch a part of the video that is not available locally, the user will suffer a (moderate) waiting time as the system searches for peers to download the desired content from.

References
[1] Gnutella, http://gnutella.wego.com/.
[2] Y.-H. Chu, S. G. Rao, and H. Zhang, "A case for end system multicast," in Measurement and Modeling of Computer Systems, 2000.
[3] B. Cohen, "Incentives Build Robustness in BitTorrent," in Workshop on Economics of Peer-to-Peer Systems, 2003.
[4] V. Agarwal and R. Rejaie, "Adaptive Multi-source Streaming in Heterogeneous Peer-to-Peer Networks," in MMCN, 2005.
[5] Y. Huang, Tom Z. J. Fu, Dah-Ming Chiu, J. C. S. Lui, and C. Huang, "Challenges, design and analysis of a large-scale p2p-vod system," in Proc. ACM SIGCOMM 2008, pp. 375-388, 2008.
[6] X. Zhang, G. Neglia, J. Kurose, and D. Towsley, "On the benefits of random linear coding for unicast applications in disruption tolerant networks," in Second Workshop on Network Coding, Theory, and Applications (NETCOD), 2006.
[7] Wikipedia, "P2P," http://en.wikipedia.org/wiki/P2P, accessed 20/11/2006.
[8] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman, 1979.
[9] C. Gkantsidis, J. Miller, and P. Rodriguez, "Comprehensive view of a live network coding P2P system," in Proc. ACM SIGCOMM/USENIX IMC'06, Brazil, October 2006.
[10] S. Deering and D. Cheriton, "Multicast routing in datagram internetworks and extended LANs," ACM Transactions on Computer Systems, vol. 8, no. 2, pp. 85-110, May 1990.
[11] S. Banerjee, B. Bhattacharjee, and C. Kommareddy, "Scalable application layer multicast," in Proc. ACM SIGCOMM'02, Pittsburgh, PA, August 2002.
[12] M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, and A. Singh, "SplitStream: High-bandwidth multicast in cooperative environments," in Proc. ACM SOSP'03, New York, USA, October 2003.
[13] S. Acendanski, S. Deb, M. Medard, and R. Koetter, "How good is random linear coding based distributed networked storage?," in NetCod, 2005.
[14] PPLive internet site, http://www.pplive.com.
[15] Xiaojun Hei, Chao Liang, Jian Liang, Yong Liu, and Keith Ross, "Insight into PPLive: A Measurement Study
transmission by running a linear program. This algorithm is able to maximize the lifetime of the network given the locations of each node and the base station.
One year later, Dasgupta et al. [3] extended MLDA by applying a cluster-based heuristic algorithm called CMLDA, in which nodes are grouped into several clusters of pre-defined size. The energy of a cluster is the sum of the energies of its member nodes, and the distance between two clusters is the maximum distance between any pair of nodes from the two clusters. After cluster formation, MLDA is applied.
Tan et al. [4] study two spanning trees to which aggregation and data-gathering schemes are applied in order to extend network lifetime. In that paper, two methods are considered to manage power among nodes. The first is the power-aware version (PEDAP-PA), which attempts to extend lifetime by balancing the energy consumption among nodes; unlike it, the second method, PEDAP, is the non-power-aware version, which minimizes the total energy consumed by the system in each data-gathering round [1]. This method extends the lifetime of the last node. The edge costs are calculated in different ways: in PEDAP, the edge cost is the sum of the energy amounts for transmission and reception, while in PEDAP-PA, dividing the PEDAP edge cost by the transmitter's residual energy yields asymmetric communication costs. A node with a higher cost is considered later in the tree, so it has few incoming edges. After the edge costs are determined, Prim's minimum spanning tree rooted at the BS is formed for routing the packets. This calculation is recomputed every 100 rounds. Their assumptions are that all nodes are active and that the BS is aware of the node locations.
Jin et al. [5] utilize a GA to reduce energy consumption. This algorithm starts from a number of pre-defined independent clusters and then biases them toward an optimal solution with minimum communication distance over the iterations of the generations. They conclude that the number of cluster heads is reduced to about 10 percent of the total number of nodes. They also show that cluster-based methods decrease communication distance by 80 percent compared to the direct transmission distance.
In 2005, Ferentinos et al. [6] improved the algorithm proposed by Jin et al. with an extended fitness parameter. They investigate energy consumption optimization and the uniformity of measurement points, using a fitness function that involves the status of sensor nodes, network clustering with suitable cluster heads, and the selection between two signal ranges for normal sensor nodes.

3. Problem Statement

In this study, we suppose that every node initially has a pre-defined amount of energy; it can receive multiple data packets, monitor the environment, and transmit its children's packets as well as send a single packet of its own to its parent or the BS. This task is repeated periodically for as long as possible.
In our algorithm, all nodes first send a sample packet to the BS once they are ready. A minimum spanning tree, or likewise an aggregation tree, rooted at the BS is then formed. In this paper, the network lifetime is counted while all nodes are active.

4. Genetic Algorithm

In this study, a GA is applied in order to obtain balanced and energy-efficient spanning trees. Every chromosome represents a tree, where the gene index indicates a node and the contained value points to the corresponding parent. Using a standard GA, an optimal minimum spanning tree results.

4.1 Gene and Chromosome
A chromosome is a collection of genes, or nodes, with a fixed length according to the number of nodes.

Figure 1. Chromosome and corresponding tree example (chromosome: 3 0 5 1 2 0)

4.2 Crossover
The main step in producing a new generation is the crossover, or reproduction, process. It is a simulation of the sexual reproductive process, in which inherited characteristics are naturally transferred into the new population. To generate new offspring, crossover selects a pair of individuals as parents from the collection formed by the selection process. This process continues until a new population of the desired size is obtained. In general, various crossover operations have been developed for different aims. The simplest method is single-point crossover, in which a random point is chosen and the two parents exchange their characteristics beyond it. Table 1 shows an example of mating two chromosomes in the single-point way.

Table 1: Single-point method at random point 6
            First              Second
Parents     101101'01101101    011110'10001011
Offspring   101101'10001011    011110'01101101

4.3 Fitness Function
The fitness function is a procedure that scores a chromosome. This value lets us compare all chromosomes with each other for survival or death. Below, we propose a fitness function where N is the number of nodes and the setup energy is considered when calculating electrical power. Echildren is the energy required to forward the data packets received from the children.
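The single-point crossover of Table 1 can be sketched directly: the two parents swap their tails after the chosen crossover point. The function below is a generic illustration operating on bit strings; the paper's chromosomes are parent-pointer arrays, to which the same slicing applies.

```python
def single_point_crossover(parent1, parent2, point):
    """Single-point crossover: offspring keep the head of one parent
    and take the tail of the other after `point` (Table 1 uses 6)."""
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```

Applied at point 6 to the two parents of Table 1, this reproduces exactly the two offspring shown there.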
The mass-spring model is simplified in this paper. The springs Pi,j+1-Pi+1,j and Pi,j-Pi+1,j+1 are the shear springs in Figure 1a. It was shown by experiment that using one shear spring instead of two does not much affect the performance of the system. Therefore we remove one shear spring to simplify the model; Figure 2 shows the simplified mass-spring model.
In the dynamic simulation of the cloth, the damping force is necessary for maintaining the stability of the system. The expression for Fdi^t, the damping force on mass i due to all neighboring masses j at the cloth surface, is as follows:

    Fdi^t = Σ (j = 0..n) dij (vi - vj)    (2)

where Fdi^t is the damping force and dij is the damping coefficient.
Author Profile
Yao Yu-feng received the M.S. degree in computer science and technology from Beijing University of Posts and Telecommunications in 2008, and works as a teaching assistant at the Computer Science and Engineering College, Changshu Institute of Technology.

(c) draped cloth; (d) cloth deformation when the collision occurs (without masses)
Figure 4. Experimental results
7. Conclusions
In this paper, we present a mass-spring model to simulate cloth deformation. We use a simplified mass-spring model when creating the cloth model, reducing the number of shear springs, which greatly improves computational performance. The dynamic equation of the cloth-object model is solved by the Euler integration method, and the collision between the cloth and the object is also considered. Experimental results show that the model produces realistic simulations with good stability and is easy to implement.
MAP and the effective survival period of which is not 0. The above procedure goes on until the priority values of all MAPs have been reduced to 0, and then another MAP is selected.
The advantage of the furthest selection scheme is that it avoids frequent, repeated registration, so it is effective for an MN with a higher moving speed. The relatively farther MAP can reduce the communication cost between the HA and the CN, but it is not suitable for an MN with a slower moving speed. Furthermore, if all MNs choose the furthest MAP as their service MAP, that MAP can become the performance bottleneck of the system, and higher operation delay is generated.
The nearest selection scheme selects the nearest MAP as the current MAP of the MN. Compared with the furthest selection scheme, its cost for local registration renewal is relatively small, but the frequent changing of MAPs makes the overall cost larger. The message format of the mobility anchor point is shown in Fig. 1.

Figure 1. The message format of the mobility anchor point

3. Dynamic MAP Selection Schemes

The procedure of the dynamic scheme is as follows:
(1) The MN receives an RA containing the MAP option, and a MAP list is obtained, giving the information of each MAP (the hop distance and the load of the MAP).
(2) By estimating some particular parameters (such as the session arrival rate), a proper MAP is confirmed dynamically. According to the different selection parameters, the approach can be divided into the following schemes.

3.1 Selection Scheme Based on Mobility Characteristics
This scheme improves on the distance-based scheme: the MN selects its service MAP according to its own mobility characteristics.
The speed-based scheme decides which MAP to choose according to mobility characteristics such as the moving speed. Here, a faster-moving MN selects the farthest MAP as its service MAP, and a slower-moving MN selects the nearest MAP as its service MAP. The speed-based procedure is as follows:
(1) Estimate the speed of the MN. This is the difficult part of the scheme, since everything depends on the speed and the handover counts. Once the load balance problem is solved, a limit control algorithm and a substitution algorithm are used to balance the load between the two MAP layers and among the different MAPs in the same layer.
The topology-based scheme records two pieces of mobility history information for the MN, such as the IP address of the AR and the access time. Every MN has its own mobility history. When a new AR area is entered, the mobility history is sent to the current AR, and the MAP appropriate to the relatively high speed is computed; if the MN's speed is high, it can register with that MAP.

3.2 Adaptive MAP Selection Scheme
Because the performance of HMIPv6 relies on the activity and the mobility of the conversation, the transport cost and the binding update cost of packets cannot be ignored. These aspects are considered in the adaptive MAP selection scheme; compared with the above schemes, it is more precise and flexible.
One of the adaptive selection schemes computes the signaling overheads between the remote home registration and the local registration, namely, to decide whether the current FA can act as the MAP or not. The scheme also considers the mobility characteristics and the network load.
The familiar adaptive selection scheme selects the MAP by reckoning the ratio of the packet arrival rate to the moving speed. The smaller the ratio of an MN, the quicker its moving speed, so the furthest MAP is used as the service MAP; conversely, when the ratio is larger, the nearest MAP is selected.

4. Comparison of Schemes

In the farthest selection scheme, the MAP is relatively near the gateway, so it is often used as the gateway to the outer network. If the farthest MAP is selected, that MAP may become the bottleneck of the network. Moreover, if the MN can only move within a limited area of the outer network, it is not necessary to register with the farthest MAP; in this case, since the distance between the MN and the farthest MAP is larger than the distance to a nearer MAP, the registration delay will be larger. If we select the farther scheme, the MAP does not need to be renewed as the MN moves, so the number of MAP changes is reduced; but because the MAP area is large, the number of area switches during movement is increased.
The nearest selection scheme is the opposite of the farthest selection. If the MN registers with a nearer MAP, the registration delay is smaller, because the formed MAP domain is smaller; the number of switches within an area is relatively
the schema, because it is difficult to be reckoned precisely, smaller. But MAP needs to renew frequently, so the switch
namely, the speed is not the practical speed, the history counts between domains are larger and the signaling
speed is used as the current speed of MN. overheads is relatively large.
(2)Choose the properest MAP. When the speed is given, a The selection schema based on the mobile characters
proper MAP is selected in the list. The selective list records using the mobile characters to select the farthest MAP or the
the mapping relation between mobility type and the answer nearer MAP, it is a compromise of the two schemas, but has
MAP. Wan Zheng[5] introduces whether MN interacting an improvement in the performance.
with high layer MAP, low layer MAP or register with HA Compared with the former schemas, the adaptive
selection schema is more flexible and precise, the load of
96 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 9, September 2010
every MAP is balanced effectively, the switch counts [5] Zheng Wan, Xuezeng Pan, Jian Chen et al, “A Three-
between areas and in area is balanced, so the signaling level Mobility Management Scheme for Hierarchical
overheads is lessoned. Mobile IPv6 Networks,”Zhejiang University:
For clearance, we list the MAP load, the switch counts in Science,PP.8-13, 2006.
area, the switch counts between areas, the signaling [6] Sangheon Pack, Taekyoug Kwon, Yanghee Choi, “An
overheads and the comparing results of the register time Adaptive mobility Anchor Point Selection Scheme in
delay. Hierarchical Mobile IPv6 networks,” Computer
Communications,PP. 3066-3078,2006.
Table 1: The result 1 of several MAP selection schema
Authors Profile
The Switch Register
performanc MAP load counts in time Shan Zhong received M.S. degrees in
e schema areas delay Computer Science from Jiangsu University in
2008, her main research area is Mobile IP,
Farthest Large Much Large PETRI NE , and Artificial intelligence. Now she
Nearest Medium Less small works in school of computer science and
Based on engineering in Changshu Institute of technology.
mobile Medium Medium Mediu
characters m
Switch
The performance counts Signal
schema between overload
areas
Farthest Less Medium
Nearest Much Large
Based on Medium Small
mobile characters
5. The Conclusion
The MAP selection schemas are concluded in the current
HMIPv6, the advantages and the disadvantages are
analyzed.
The adaptive selection schema is the relative excellent
schema. The next work is to introduce the intelligence
algorism to the adaptive selection schema, then simulate it
and compare it with the other schemas.
References
[1] HU Qing,ZHANG Shu-fang,ZHANG Jing-bo, WANG
Er-shen. “A New Automatic Identification System
(AIS) Model-MIP-AIS,”.ACTA ELECTRONICA
SINICA, PP. 1186-1191, 2009.(in chinese)
[2] LU Bin., “Study of multi-layer mobile support
mechanism in IP network,” Journal on
Communication, PP.129-135,2006.
[3] SOLIMAN H, CASTELLUCCIA C, MALKI K E L,
et al. RFC 4140, “Hierarchical mobile IPv6 mobility
management [ EB/OL] ,”(2005-08-10).
http://www.ietf.org/rfc/ rfc4140.txt
[4] Jiang Liang, Guo Jiang, etc.The next network mobile
movement Ipv6 technology. Press: Beijin: Mechnism
industry Press, 2005.
2
Mir@cl: Multimedia, Information Systems and Advanced Computing Laboratory,
Higher Institute of Computer Science and Multimedia, University of Sfax,
Sfax BP 3021, 69042 TUNISIA
walid.mahdi@isimsf.rnu.tn
Consequently, based on the fact that the instances of a given program share the same visual marker, we represent the visual characteristics of one instance of a particular type in order to identify the other instances afterwards.

3.1.1 State of the art of visual content representation

Several works have dealt with visual content signature generation. A signature is a compact description that constitutes the starting point for similarity detection between visual contents. Signature determination allows a direct indexing that facilitates the identification of similar contents. Generally, signature generation requires two main steps: first, detecting the significant low-level image features; and secondly, encoding the descriptors in a compact format.

According to most of the research carried out in this context, two main types of relevant image primitives are used to compute the signature: points of interest [15] [18] and color [7] [8]. Works based on the first class of primitives (POI) were designed using different detectors, such as the Harris [6] or SIFT [15] detector. These methods share a common ground: detection and description of the most relevant points in the image, i.e., those which carry more information than the others [15]. As an example, we cite the work of Stoettinger [14], which uses a color version of a POI detector for image retrieval.

For the second class, which relies on the color feature, the main idea is to represent the image color information in a distinctive and reduced format. For example, [7] [8] use the ordinal measure method to represent the intensity distribution within the image. These methods proceed first by computing the average intensity of N blocks in the given image. Secondly, the set of average intensities is sorted in ascending order, and a rank is assigned to each block according to its mean intensity value. The ordinal measure is expressed by the ranked sequence. Other methods exploiting the color information to create the signature use color coherence vectors (CCV) [16] or vector quantization [9].

In summary, most existing video signature computations employ a feature vector extracted from a single frame, for two reasons: some approaches are designed for CBIR systems, and others rely on key-frame signature generation to detect copies of web videos. Hence, the majority of these approaches do not consider the temporal aspect, even though it improves the signature efficiency.

3.1.2 Audiovisual grammar and video signature generation

The concept of a grammar is defined as a set of formalisms allowing the representation of the relations which can exist between a set of entities. This formalization makes it possible to represent data in a structured and significant way, for a better semantic interpretation. Inspired by this definition, in the audiovisual field the grammar notion is a recent concept whose aim is to define an appropriate style for the TV channels and to deduce the typical structures of TV programs. Hence, a video grammar can be exploited in a multitude of multimedia applications; as examples, we cite the identification of TV program types [19] and the characterization of particular events in a sport program (substitutions, goals, …). Indeed, a video grammar is defined by the set of visual and sound entities and their relations, which together characterize a visual identity for a particular TV channel. This identity is created by the conception of a suitable graphic style. The graphic components of this style generally recur over a long duration (a year at minimum), and their application follows a logic specified by the "graphic chart" of the TV channel. Consequently, grammar generation consists first of detecting the visual invariants, then describing them in a formal way, and finally deducing the semantic interpretation of their appearance. For example, in the case of a sport video grammar, detecting a cautioned-player event requires extracting the text marker zones as well as the visual invariants symbolizing the yellow card.

3.1.3 A spatial-temporal video signature

Our video signature generation method is based on visual invariants (forms, colors, …) (Figure 3). This approach relies on the fact that TV channel programs (such as news or sport programs) use distinctive graphical components to identify themselves visually. As a result, we exploit visual grammars characterizing instances of the same program type in order to identify the various program types in the TV stream. The main role of this grammar is to represent the visual invariants of each visual marker (jingle) using a set of descriptors appropriate to each TV program.

Figure 3. Samples of visual invariants of different TV programs.

For an efficient representation of the visual invariants, it is necessary to extract the most relevant features that can characterize them. In fact, the signature generation process creates a compact description of these features while respecting two crucial properties: robustness and uniqueness. These properties guarantee the discriminative power required to distinguish video contents and ensure the capability of noise tolerance. For the robustness property, a signature must not vary when the video sequence contains, for example, an insignificant signal noise or a slight luminosity variation. For the uniqueness property, two different video contents must have two different signatures; in this sense, every semantically different video segment should possess a unique signature. Consequently, we propose a new spatial-temporal method to generate a signature for video segment identification that preserves these two properties.

Indeed, contrary to most of the proposed approaches, we generate the video signature from a set of frames of the audiovisual segment and not from a single (key) frame. Relying on a single frame may affect the signature efficacy (i.e., the uniqueness property), as two videos with similar key-frames do not necessarily have the same or similar content.

In order to overcome this deficiency, we opt for a bi-dimensional signature. The main idea is to create a spatial-temporal signature (1) whose generation process is carried out from a set of frames separated by a fixed time step Tstep. In this way, the generation process provides different levels of signature discrimination: two signatures will be similar or dissimilar at three levels, Nframe, Tstep and SigF, which further guarantees the uniqueness property.

Vsig = ([SigF], Nframe, Tstep)    (1)

Our signature generation process starts by selecting a number of frames Nframe and a temporal step Tstep which separates them. Secondly, for each frame of the Nframe selected frames, we compute SigF, the low-level characteristics vector derived from this frame. The signature of a video segment is defined by the whole set of frame signatures. Figure 4 illustrates our spatial-temporal signature generation process.

Figure 4. Process of spatial-temporal video signature generation.

3.1.4 Frame level signature

Several methods have been proposed to create an identifying image representation, especially those designed for CBIR applications. The most common approaches are based on low-level features (color, intensity, …), as detailed in the previous section. In our approach, and in order to ensure the robustness of the signature, we opted to use two descriptors: a colorimetric feature and a POI descriptor.

a) CCV descriptor

For the colorimetric descriptor, histograms are used to represent images in many multimedia applications; their advantage is their insensitivity to small changes. However, color histograms lack spatial information, so images with very different appearances can have similar histograms. Hence, we use a histogram-based method for representing images that incorporates spatial information. Each pixel is classified, within a given color bucket, as coherent or incoherent, based on whether or not it is part of a large similarly-colored homogeneous region. A color coherence vector (CCV) stores the number of coherent versus incoherent pixels for each color. CCV is a more sophisticated form of histogram refinement, in which buckets are partitioned based on spatial coherence. By separating coherent pixels from incoherent ones, CCV provides finer distinctions than classic histograms. The CCV computing process is composed essentially of three steps:

• Image preprocessing
This first step smoothes the image by applying a median filter to the neighboring pixels. The major aim of this preprocessing is to eliminate small variations between adjacent pixels. Then, we proceed to discretize the color space, so as to obtain only Ncolor distinct colors in the image, for the following two reasons: first, decreasing the luminance variation effects, and secondly, reducing the size of the image signature.

• Image segmentation
To classify the pixels within a given color bucket as coherent or incoherent, we proceed in the second step to region segmentation. In order to determine the pixel groups, the image is segmented into disjoint and homogeneous regions. A region is defined as a connected set of pixels for which a uniformity (homogeneity) condition is satisfied. Depending on the category of the segmentation method, a uniform region can be obtained in two different ways: it can be derived by growing from a seed block/pixel by joining other pixels, or obtained by splitting a large region which is not uniform.

Several advanced image segmentation techniques have been proposed and classified into different categories (region growing, split and merge, …). We use the Statistical Region Merging (SRM) [13] algorithm, which belongs to the family of region-growing techniques with a statistical test for region fusion. The advantages of this method are its simplicity and its performance without the use of color space transformations. In addition, we opted for the SRM method because it gives, for each segmented region, the list of pixels belonging to it and the related mean color, which afterwards facilitates the computation of the CCV bins. Note that in our work we apply the SRM method on grayscale quantified images; the image is segmented according to the color buckets, which effectively segments the image based on the discretized color space.

The SRM method is based on two major components: a merging predicate (2) and the order followed in testing this predicate. The merging predicate is defined as:

P(R, R') =  true,  if |R̄' − R̄| ≤ b(R') + b(R)
            false, otherwise                          (2)

with:  b(R) = g · sqrt( ln(|R_|R|| / δ) / (2 Q |R|) )

where R and R' denote the two regions being tested, R̄ denotes the color average in region R, and R_|R| is the set of regions with |R| pixels. The order of region merging follows a criterion f, which implies that when any test between two parts within a true region is performed, the tests inside each of those parts have already been done. g and Q are global parameters used to determine the merging predicate threshold.
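The three CCV steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain 4-connected flood fill stands in for the SRM segmentation, the median-filter preprocessing is omitted, and the bucket count and coherence threshold tau are assumed values.

```python
# Sketch of the CCV computation described above. A region whose size is
# at least tau pixels is counted as coherent; smaller regions are
# incoherent. The flood fill below replaces the SRM segmentation used
# in the paper and operates directly on the discretized color buckets.

def ccv(image, n_colors=4, tau=3):
    """image: 2-D list of intensities in [0, 255].
    Returns {bucket: (alpha, beta)} = (coherent, incoherent) counts."""
    h, w = len(image), len(image[0])
    # Step 1: discretize the color space into n_colors buckets.
    bucket = [[image[y][x] * n_colors // 256 for x in range(w)]
              for y in range(h)]
    # Step 2: region segmentation by flood fill over equal buckets.
    seen = [[False] * w for _ in range(h)]
    alpha = {b: 0 for b in range(n_colors)}
    beta = {b: 0 for b in range(n_colors)}
    for y in range(h):
        for x in range(w):
            if seen[y][x]:
                continue
            b, stack, size = bucket[y][x], [(y, x)], 0
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                size += 1
                for ny, nx in ((cy-1, cx), (cy+1, cx),
                               (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and not seen[ny][nx] and bucket[ny][nx] == b:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            # Step 3: classify the region's pixels.
            if size >= tau:
                alpha[b] += size
            else:
                beta[b] += size
    return {b: (alpha[b], beta[b]) for b in range(n_colors)}

img = [[0, 0, 255], [0, 0, 255], [255, 0, 0]]
print(ccv(img, n_colors=2, tau=3))  # → {0: (6, 0), 1: (0, 3)}
```

The six connected dark pixels form one coherent region, while the three bright pixels split into two regions of sizes 2 and 1, both below tau, and are therefore counted as incoherent.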
with:

Sim(CCVk, CCVj) = Σ_{i=1..NCCV} ( |αk(i) − αj(i)| + |βk(i) − βj(i)| )

As for the SigFPOI signature similarity measurement, we test the Euclidean distance of all the POI values defined in this signature. Two SigFPOI signatures are similar if and only if the majority of the POI descriptors (a percentage of NPOI) are similar (7):

SimSigfPOI(k, j) = ( Σ_{k=1..NPOI} Sim(POIk, POIj) ) / NPOI    (7)

with:

Sim(POIk, POIj) = 1, if Disteucd < threDist and DistHarris < threHarris; 0, otherwise

Disteucd(POIk, POIj) = sqrt( (xk − xj)² + (yk − yj)² )
DistHarris(POIk, POIj) = | r(xk, yk) − r(xj, yj) |

3.2.2 Descriptors similarity combination

Since a frame has a composed signature (SigfCCV and SigfPOI), in order to detect the video type after computing the similarity of these signatures, we combine their similarities to obtain a single decision value, SimVideo (9). This combination aims at increasing the identification rates. The combination is done using the average rule, after normalizing and weighting the coefficients of the two signature descriptors. Hence, the frame signature similarity is defined as (8):

simSigf(k, t) = ( w1 × simSigfCCV(k, t) + w2 × simSigfPOI(k, t) ) / (w1 + w2)    (8)

simVideo(k, t) = 1, if simSigf > thSig; 0, otherwise    (9)

with:

thSig = ( w1 × thPOI + w2 × thCCV ) / (w1 + w2)

According to our experimental study, we concluded that the discriminative identification power of a descriptor d differs from one channel to another. This can be explained by the fact that the graphic charter of the channel, used during the production of the generics, exploits graphic compositions that are richer in one particular descriptor (Figure 3) than in another. Therefore, the weighting coefficient wd of descriptor d must reflect its efficiency in identifying individual types of programs, so as to indicate the importance of d in the combination with the other descriptor(s) (8). In other words, the identification rate of d is as important as its weight coefficient; that is, this weight must be proportional to the identification rate (12).

The relevance of descriptor d is evaluated by its capacity to identify the maximum number of correct identifications with a minimum number of false identifications, i.e., d is significant if both recall and precision have high values at once. Hence, wd is a combination of these two metrics:

R = CId / (CId + MId)    (10)

P = CId / (CId + FId)    (11)

wd = F1 = (2 × R × P) / (R + P)    (12)

with:
CId: number of programs identified correctly using descriptor d
MId: number of missed identifications using descriptor d
FId: number of false identifications using descriptor d

In order to define these weights, we conducted a training phase to choose the optimum weight value for the descriptor d of each TV program/channel. Table 1 summarizes the weight values for various TV channels.

Table 1: w1 and w2 values for each channel.

                Recall (R)      Precision (P)    wd (F1)
  Channel       d1      d2      d1      d2       w1      w2
  M6            0.8     0.92    1       0.92     0.89    0.92
  RTV           0.95    0.66    0.75    0.87     0.84    0.75
  LCI           0.99    0.69    0.98    0.9      0.98    0.78
  itele         0.83    0.53    0.67    0.75     0.74    0.62
  Abmoteurs     0.66    0.74    1       0.67     0.79    0.70
  France 24     1       0.5     0.84    0.71     0.91    0.59

A segment localized at frame j in a video stream is identified as being of type t only if the frame signatures (NsigF) of Vsig(t) are similar to their homologues in the stream, as detailed in our similarity measurement metric (13):

VideoSegment(j) = videoType(t), if Σ_{i=1..NsigF} SimVideo(k, t) = NsigF; undefined, otherwise    (13)

4. Experimental Results

4.1 AViTyp: Automatic Video Type identification tool

To implement the proposed approach, and in order to evaluate its efficacy, we have developed a system called AViTyp (Figure 6). This system offers two main features: signature creation for the items of the reference catalogue, and program identification in files from TV channels. In addition, AViTyp provides an ergonomic interface to adjust the identification process: defining the weights and the appropriate thresholds for the current channel.

Figure 6. User interface of the AViTyp tool.

4.2 Evaluation of the video type identification performance

To evaluate our video type identification approach experimentally, we used a large and varied corpus composed of a set of video files which are long streams from 6 different TV channels. Various program and inter-program types (news, sport, varieties, documentaries, advertisements, …) are contained in these streams.

To evaluate the performance of the proposed approach, we used the recall (10) and precision (11) metrics. Table 2 presents the experimental result values grouped by TV channel.

Table 2: Experimental results grouped by channel.

  TV channel               Recall    Precision
  M6                       93.2      100
  RTV                      81.6      100
  LCI                      85.22     85.71
  Itele                    74.2      83.33
  Abmoteurs                75        100
  France 24                94.7      87.5
  All channels (average)   83.98     92.75

In this experimentation, despite the good precision value, we conclude that the recall, although rather satisfactory (83.98%), needs to be improved. The degradation of the recall is due essentially to some missed identifications, whose main cause is the quality of the broadcast streams, such as a blurred signal.

5. Conclusion and future work

We have proposed in this paper a grammar-based approach for video program identification in TV streams. The approach is composed of two steps: (i) creation of a reference catalogue and (ii) identification of TV programs in channel streams. It compares the visual similarity between the TV stream signal and the video signatures stored as grammar descriptors in a reference catalogue. This catalogue is composed of a set of visual "jingles" that characterize the start of TV programs, associated with their grammars expressed as spatial-temporal video signatures.

In order to improve the video type identification quality, our future work will focus on the integration of other descriptors, such as form or texture features, into the video grammar to characterize the visual jingles, since the currently used features are not always discriminative.

References
[1] Babaguchi, N.; Kawai, Y.; Kitahashi, T. "Event Based Indexing of Broadcasted Sports Video by Intermodal Collaboration". IEEE Transactions on Multimedia, Vol. 4, No. 1, pp. 68-75 (2002)
[2] Berrani, S. A.; Manson, G.; Lechat, P. "A Non-Supervised Approach for Repeated Sequence Detection in TV Broadcast Streams". Signal Processing: Image Communication, special issue on "Semantic Analysis for Interactive Multimedia Services", pp. 525-537 (2008)
[3] Conseil Supérieur de l'Audiovisuel. "Publicité, parrainage et téléachat à la télévision et à la radio", http://www.csa.fr, (2008) France
[4] Duan, L. Y.; Xu, M.; Tian, Q.; Xu, C. S.; Jin, J. S. "A Unified Framework for Semantic Shot Classification in Sports Video". IEEE Transactions on Multimedia, Vol. 7, Issue 6, pp. 1066-1083 (2005)
[5] Haller, M.; Hyoung-Gook, K.; Sikora, T. "Audiovisual Anchorperson Detection for Topic-Oriented Navigation in Broadcast News". IEEE International Conference on Multimedia and Expo (ICME), pp. 1817-1820, Canada (2006)
[6] Harris, C.; Stephens, M. "A Combined Corner and Edge Detector". Alvey Vision Conf., pp. 147-151 (1988)
[7] Hua, X. S.; Chen, X.; Zhang, H. J. "Robust Video Signature Based on Ordinal Measure". International Conference on Image Processing (ICIP04), pp. 685-688 (2004)
[8] Kimura, A.; Kashino, K.; Kurozumi, T.; Hiroshi, M. "A Quick Search Method for Multimedia Signals Using Feature Compression Based on Piecewise Linear Maps". Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 4, pp. 3656-3659 (2002)
[9] Kurozumi, T.; Kashino, K.; Hiroshi, M. "A Method for Robust and Quick Video Searching Using Probabilistic Dither-Voting". Proc. International Conference on Image Processing, vol. 2, pp. 653-656 (2001)
[10] Law-To, J.; Chen, L.; Joly, A.; Laptev, Y.; Buisson, O.; Gouet-Brunet, V.; Boujemaa, N.; Stentiford, F. "Video Copy Detection: A Comparative Study". ACM International Conference on Image and Video Retrieval (CIVR'07), pp. 371-378 (2007)
[11] Mahdi, W.; Ardebilian, M.; Chen, L. "Automatic Video Scene Segmentation Based on Spatial-Temporal Clues and Rhythm". Journal on Network Info Systems, v. 2(5), pp. 1-25 (2000)
[12] Naturel, X.; Gros, P. "Detecting repeats for video structuring". Multimedia Tools and Applications, Vol. 38, Issue 2, pp. 233-252 (2008)
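The weighting and combination rules of Section 3.2.2 can be sketched as follows. The recall/precision inputs are taken from the M6 row of Table 1; the frame-similarity values at the end are illustrative assumptions, not measurements from the paper.

```python
# Sketch of the descriptor weighting (equations 10-12) and of the
# weighted-average combination rule (equation 8). As a sanity check,
# the F1 weight computed from the M6 row of Table 1 (R = 0.8, P = 1
# for descriptor d1) reproduces the reported w1 = 0.89.

def f1_weight(recall, precision):
    # wd = F1 = 2RP / (R + P)                                    (12)
    return 2 * recall * precision / (recall + precision)

def combined_similarity(sim_ccv, sim_poi, w1, w2):
    # simSigf = (w1 * simSigfCCV + w2 * simSigfPOI) / (w1 + w2)   (8)
    return (w1 * sim_ccv + w2 * sim_poi) / (w1 + w2)

w1 = f1_weight(0.8, 1.0)     # M6, descriptor d1 (Table 1)
w2 = f1_weight(0.92, 0.92)   # M6, descriptor d2 (Table 1)
print(round(w1, 2), round(w2, 2))  # → 0.89 0.92

# Illustrative frame similarities (assumed values, not from the paper):
print(round(combined_similarity(0.9, 0.5, w1, w2), 2))
```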
Authors Profile

Tarek ZLITNI received the M.S. degree in information systems and new technologies in 2007 from the University of Sfax, TUNISIA, where he is pursuing the Ph.D. degree in computer science. His research interests focus on video and image processing and analysis, multimedia indexing, and content-based video segmentation and structuring.
Abstract: In this paper, we have developed a block cipher by introducing a pair of keys: one as a left multiplicant of the plaintext, and the second one as a right multiplicant of the plaintext. As we utilize the EBCDIC code for converting characters into decimal numbers, we use mod 256. We have developed an iterative procedure, which includes a permutation, for the cipher. The avalanche effect and the cryptanalysis clearly show that the cipher is a potential one.

Keywords: Encryption, Decryption, Cryptanalysis, avalanche effect, permutation, pair of keys.

1. Introduction

In recent years, several modifications of the Hill Cipher [1-5] have appeared in the literature of cryptography. In all these investigations, the modular arithmetic inverse of a key matrix plays a vital role in the processes of encryption and decryption.

It is well known that the Hill cipher, which contains the key matrix on the left side of the plaintext as a multiplicant, can be broken by the known-plaintext attack. In a recent paper, to overcome this drawback, Sastry et al. [6] have developed a block cipher which includes a key matrix on both sides of the plaintext matrix. In this analysis they have discussed the avalanche effect and the cryptanalysis, and have shown that the cipher is a strong one.

In the present paper, our objective is to modify the Hill cipher by including a pair of key matrices, one on the left side of the plaintext matrix and another one on the right side, as multiplicants, so that the strength of the cipher becomes highly significant. Here we represent each character of the plaintext under consideration in terms of the EBCDIC code and use mod 256 as a fundamental operation. The security of the cipher is expected to be higher as we have two keys: even if, in some untoward circumstance, one key becomes known to the attackers, the other remains secret and protects the secrecy of the cipher.

In what follows we present the plan of the paper. In section 2, we describe the development of the cipher. In section 3, we illustrate the cipher by giving an example and discuss the avalanche effect. Section 4 is devoted to the cryptanalysis of the cipher. In section 5 we present the summary of the results obtained in this analysis. Finally, in section 6 we present the numerical computations carried out in this analysis and draw conclusions.

2. Development of the cipher

Consider a plaintext P. Let this be written in the form of a matrix given by

P = [Pij], i = 1 to n, j = 1 to n.    (1)

Here each Pij is a decimal number lying between 0 and 255. Let us choose a pair of keys denoted by K and L, where K and L can be represented in the form

K = [Kij], i = 1 to n, j = 1 to n,    (2)
and
L = [Lij], i = 1 to n, j = 1 to n.    (3)

Here the elements of K and L are decimal numbers lying in [0, 255]. Let the ciphertext C be given by

C = [Cij], i = 1 to n, j = 1 to n,    (4)

in which all the elements of C also lie in the interval 0 to 255.

The process of encryption and the process of decryption are described by the flow charts given in Figure 1.

Figure 1. Flow Charts of the Cipher
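The core operation of the cipher described in this section can be sketched as follows. This is a minimal, single-pass illustration with 2 x 2 blocks and illustrative key values; the paper's full procedure additionally iterates the multiplication r times with an interleaved permutation, which this sketch omits.

```python
# Sketch of the two-key idea: C = (K P L) mod 256, with P holding the
# EBCDIC codes of a plaintext block. Decryption uses the modular
# inverses of K and L, which exist whenever det(K) and det(L) are
# coprime to 256, i.e. odd.

def matmul(A, B, m=256):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % m
             for j in range(n)] for i in range(n)]

def inv2x2(A, m=256):
    """Inverse of a 2 x 2 matrix mod m via the adjugate."""
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % m
    det_inv = pow(det, -1, m)          # fails unless gcd(det, m) == 1
    adj = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]
    return [[det_inv * adj[i][j] % m for j in range(2)] for i in range(2)]

def encrypt(P, K, L):
    return matmul(matmul(K, P), L)     # C = K P L mod 256

def decrypt(C, K, L):
    return matmul(matmul(inv2x2(K), C), inv2x2(L))   # K^-1 C L^-1

K = [[3, 4], [2, 5]]          # det = 7 (odd), so K is invertible mod 256
L = [[5, 2], [4, 7]]          # det = 27 (odd)
P = [[199, 133], [150, 64]]   # illustrative EBCDIC codes
C = encrypt(P, K, L)
print(decrypt(C, K, L) == P)  # → True
```

Note that `pow(det, -1, m)` (Python 3.8+) raises ValueError when the determinant shares a factor with 256, which is exactly the Hill-cipher invertibility condition on the keys.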
P =
1 0 0 0 1 1 1 0
0 0 1 0 1 0 0 0
0 1 0 1 1 0 1 0
1 1 0 0 1 0 1 0
1 1 1 1 1 0 1 0
1 0 1 0 0 1 0 1
0 0 1 1 0 1 1 0
0 0 1 0 1 1 0 0
1 1 1 1 1 0 1 0
0 1 0 1 1 0 1 0
1 1 1 0 0 1 1 0
1 0 0 1 0 1 0 1
1 0 1 1 1 0 1 1
1 0 1 0 0 1 1 0
0 0 1 0 1 0 0 0
1 0 0 1 0 0 0 0        (11)

On adopting the permutation process described in section 2, (11) can be brought into the form of a matrix, containing 16 rows and eight columns, given by

P =
0 1 0 0 1 1 1 0
0 0 1 0 1 1 0 0
1 1 0 0 1 1 1 1
1 0 1 1 1 0 1 0
0 0 0 0 0 0 1 1
1 1 0 1 1 1 0 0
0 0 1 0 0 1 1 0
0 1 1 0 1 0 1 0
0 0 0 0 0 1 1 1
1 1 1 1 1 0 0 0
1 0 0 1 0 0 1 0
0 1 1 1 1 1 0 1
1 1 1 0 1 0 0 0
1 1 0 0 0 1 1 0
0 1 1 0 1 0 1 0
0 1 0 1 0 1 0 0        (12)

In decimal notation,

P =
78  44  207 186
3   220 38  106
7   248 146 125
232 198 106 84          (13)

This is the final result of the permutation.

On using (7)-(9), and applying the encryption algorithm given in section 2 with r = 16, we get

C =
197 13  153 234
108 170 106 175
110 217 254 8
195 15  104 58          (14)

On adopting the decryption algorithm, we get back the original plaintext given by (7).

Now, in order to examine the strength of the algorithm, let us study the avalanche effect. To this end, let us modify the plaintext (6) by changing the character G to F. The EBCDIC codes of G and F are 199 and 198 respectively, and they differ in one binary bit. Thus, on using the modified plaintext and the encryption algorithm, let us compute the corresponding ciphertext. This is given by

C =
238 206 114 23
127 135 247 32
233 216 221 177
17  223 251 14          (15)

On converting (14) and (15) into their binary form, we notice that the two ciphertexts differ by 65 bits (out of 128 bits). This shows that the cipher is a strong one.

Let us now change a number in one of the keys, say the key K. Here we change the third-row, first-column element of K in (8) from 48 to 49; these differ only in one binary bit. On carrying out the process of encryption with the modified key, keeping the other key and the original plaintext intact, we get the ciphertext given by
64 186
52 62 247 108 192 65 153 113 2 103 95 161 254 228 2
174 187 58 138 54 197 192 199 131 192 107 174 29 72
222 106 203 119 15 138 35 213 167 38 187 132 67 228 115 129
19 47 99 229 114 138 22 168 129 152 78 38 66 117 152 108

On assuming that the computation of the cipher with a specified pair of values of the keys takes 10^-7 seconds, the time required for the brute-force attack is obtained as
Abstract: Intrusion detection systems (IDSs) aim at detecting attacks against computer systems and networks or, in general, against information systems. With the rapid growth of unauthorized activities in networks, intrusion detection as a component of defence is very necessary, because traditional firewall techniques cannot provide complete protection against intrusion. Network-based IDSs are designed to monitor potential attacks in enterprise network information security. Detection of intrusions falls into two categories: anomaly detection and signature detection. This paper describes various types of IDS, such as network-based IDS, host-based IDS and hybrid IDS. Further, an evaluation of average intrusive events using the signature detection technique for enterprise information security is presented.

Keywords: IDS, Attack, Security, Enterprise, Events

1. Introduction

The movement towards more secure computing systems continues to rise as management becomes cognizant of the numerous threats that exist to their enterprises [3]. Internet-based and intranet-based network systems are growing to share information and conduct business with online partners. However, hackers have also learned to use these systems to access private networks and resources. Studies show that many enterprises have suffered external and internal network intrusions, including some that resulted in sizable losses of money. Enterprise systems are subject to various types of attacks. For example, hackers can penetrate systems by taking advantage of bugs or by acquiring passwords. Traditional security products can be penetrated from the outside and can also leave an organization vulnerable to internal attacks. Network-based IDSs address these problems by detecting external and internal security breaches as they happen and immediately notifying security personnel and network administrators by email or pager [2]. This type of system covers an entire organization by deploying monitoring agents on local networks, between subnets, and even on remote networks on the internet. The rest of the paper is organized as follows. Section 2 discusses the characteristics of a good intrusion detection system and the need for an intrusion detection system to secure enterprise information. Section 3 presents the various types of intrusion detection systems, such as host-based IDS, network-based IDS and hybrid IDS, and Section 4 evaluates intrusion detection techniques for enterprise information security.

2. Intrusion Detection System

An intrusion detection system is software and/or hardware designed to detect unwanted attempts at accessing, manipulating or disabling computer systems, mainly through a network such as the internet. These attempts may take the form of attacks, e.g. by crackers, malware or disgruntled employees. An intrusion detection system is used to detect several types of malicious behavior that can compromise the security and trust of a computer system. This includes network attacks against vulnerable services, data-driven attacks on applications, host-based attacks such as privilege escalation, unauthorized logins and access to sensitive files, and malware.

2.1 Need for an Intrusion Detection System

Of the security incidents that occur on a network, the vast majority (up to 85 percent by many estimates) come from inside the network. These attacks may come from otherwise authorized users who are disgruntled employees. The remainder come from the outside, in the form of denial-of-service attacks or attempts to penetrate a network infrastructure. An intrusion detection system remains the only proactive means of detecting and responding to threats that stem from both inside and outside a corporate network. Intrusion detection systems are an integral and necessary element of a complete information security infrastructure, performing as "the logical complement to network firewalls" [1]. Simply put, IDS tools allow for complete supervision of networks, regardless of the action being taken, such that information will always exist to determine the nature of a security incident and its source. Studies show that nearly all large enterprises and most medium-sized organizations have installed some form of intrusion detection tool [6]. However, it is clear that, given the increasing frequency of security incidents, any entity with a presence on the internet should have some form of IDS running as a line of defence. Network attacks and intrusions can be motivated by financial, political, military or personal reasons, so no company should feel immune. Realistically, any network is a potential target and should have some form of IDS installed.

2.2 Characteristics of a good Intrusion Detection System

Regardless of whether an IDS is based on misuse or anomaly detection, it should possess the following characteristics [4]:
• Run continually without human supervision – it must be adequately reliable to operate in the background.
• Fault tolerant – able to survive a system crash without requiring its knowledge to be rebuilt when restarted.
• Resist subversion – able to monitor itself to ensure that it is not being subverted.
• Minimal system overhead – must not adversely affect system performance.
• Observe deviations from normal behavior.
• Easily tailored and adaptable to the changing usage patterns of the host system.
• Cope with changing system behavior over time as new applications are added.

2.3 Types of Intrusion Detection System

Intrusion detection systems take various forms and approaches towards the goal of detecting suspicious traffic in different ways. There are network-based (NIDS), host-based (HIDS) and hybrid intrusion detection systems.
• Network-Based Intrusion Detection System: Network-based IDSs are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. Ideally they would scan all inbound and outbound traffic; however, doing so might create a bottleneck that would impair the overall speed of the network.
• Host-Based Intrusion Detection System: Host-based IDSs run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected.
• Hybrid Systems: A hybrid system is simply an IDS that has features of both host-based and network-based systems. Such systems are becoming the norm, but most IDSs are still stronger in one area or the other. A host-based system complemented by a handful of inexpensive network monitoring tools can make for a complete strategy [7].

similar attempts. These specific patterns are called signatures. For a HIDS, one example of a signature is "three failed logins"; for a NIDS, a signature can be as simple as a specific pattern that matches a portion of a network packet. For instance, packet content signatures and/or header content signatures can indicate unauthorized actions, such as improper FTP initiation. The occurrence of a signature might not signify an actual attempted unauthorized access, but it is a good idea to take each alert seriously. Depending on the robustness and seriousness of the signature that is triggered, some alarm, response, or notification should be sent to the proper authorities [5].

4. Evaluation of ID Techniques for Enterprise Information Security

This section presents the implementation of intrusion detection techniques using signature detection algorithms through the Sax2 simulator, which provides a simulation platform for detecting intrusion events in an enterprise network. Sax2 is a professional intrusion detection and response system that performs real-time packet capturing, 24/7 network monitoring, advanced protocol analysis and automatic expert detection.

4.1 Results

Analysis and observation of average intrusion detection in a real-time network using the signature detection technique, as shown in Figures 1-5 with respect to different timings, was carried out. Attack events are categorized as notice-based, warning-based, information-based and others.
• The average intrusion events in percentage for 17 hosts on the network after 15 minutes are shown in Figure 1 (45% notice-based, 3% warning-based, 45% information-based and 7% others).
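The two signature styles described in this section — a NIDS payload pattern and a HIDS threshold rule such as "three failed logins" — can be sketched as follows. This is an illustrative toy, not Sax2's rule engine; the rule names and data shapes are assumptions:

```python
import re
from collections import Counter

# Illustrative NIDS payload signature: flag FTP "SITE EXEC" abuse.
NIDS_SIGNATURES = {
    "ftp-site-exec": re.compile(rb"SITE\s+EXEC", re.IGNORECASE),
}

def scan_packet(payload: bytes) -> list[str]:
    """Return the names of all payload signatures matching this packet."""
    return [name for name, pat in NIDS_SIGNATURES.items() if pat.search(payload)]

def failed_login_alerts(events: list[tuple[str, bool]], threshold: int = 3) -> set[str]:
    """HIDS-style rule: alert on any user with at least `threshold` failed logins.

    `events` is a list of (user, login_succeeded) pairs.
    """
    failures = Counter(user for user, ok in events if not ok)
    return {user for user, count in failures.items() if count >= threshold}
```

As the text notes, a match does not prove an actual intrusion; in practice each triggered signature would feed an alert or notification pipeline rather than an automatic block.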
[Figures 1 and 2: bar charts of average intrusion events per "Statistics Item", with labels 46% Notice, 3% Warning, 43% Information and 8% Others; a further chart fragment shows 3% Warning and 44% Information.]
notifying security personnel and network administrators by email or pager. The evaluation of intrusion detection techniques for enterprise information security was done using the Sax2 simulator. An evaluation of average intrusive events in a real-time network for different hosts with respect to different timings was also carried out. After analysis and observation of intrusion detection in a real-time network using the signature detection technique, as shown in the above-mentioned figures, it is concluded that notice-based and information-based attacks require more attention than warning-based and other types of attack on the network. Attacks can be detected very efficiently using the signature detection technique on an enterprise network.

References
[1] R. Bace, "An Introduction to Intrusion Detection and Assessment: System and Network Security Management", ICSA White Paper, 1998.
[2] Chris H., Detecting Attacks on Network, McGraw-Hill, 1997.
[3] M. Garuba, C. Liu and D. Fraites, "Intrusion Techniques: Comparative Study of Network Intrusion Detection Systems", Proc. of the IEEE Fifth International Conference on Information Technology, IEEE Computer Society, pp. 794-798, 2008.
[4] R. Hart, D. Morgan and H. Tran, "Introduction to automated intrusion detection approaches", Information Management and Computer Security, pp. 76-82, 1999.
[5] I. Paul and M. Oba, An Introduction to Intrusion Detection Systems, John Wiley & Sons, 2001.
[6] SANS, "Intrusion Detection and Vulnerability Testing Tools: 101 security solutions", E-Alert Newsletters, 2001.
[7] Tony B., Introduction to Intrusion Detection Systems, [Online] www.aboutids.com, 2001.

Authors Profile

conferences. He is a recipient of the Young Scientist Award of the International Academy of Physical Sciences for his research work. He is also Principal Investigator of two projects funded by national agencies in the areas of ubiquitous computing and MANET security. His research interests include mobile communication, computer networks and information security.

Amrit Pal received the B.E. degree in Computer Science and Engineering from LIET Alwar, affiliated to Rajasthan University, India, in 2005. He obtained his M.Tech. degree with distinction in Computer Science & Engineering from GJ University of Science and Technology, Hisar, India. He is working as an Assistant Professor at St MEC Alwar, India. His research interests include computer forensics and information security.
2.3 Skin problems
The high voltage of the picture tube produces an electrostatic field and a positive electric charge on the external surface of the screen. Dust particles move in all directions in the field between the positive charge and the operator's face. Although the amount of dust particles changes depending on room ventilation, flooring and other factors, they always exist.
The positive charge current in this field may cause dryness and cracking of the skin of the hands and face in people who have skin allergies.
Other studies have also shown that in people who complain about skin allergies and are under mental and psychic pressure when working more, the pressure and stress can cause hormonal changes, such as in thyroxin (the thyroid hormone) and prolactin (the pituitary hormone), and also skin loss.
Another study, conducted in Sweden, attributes the face skin loss of those working with computers to psychosocial factors and workplace issues. Of course, personal factors also play a role in the incidence of these losses.

2.4 Stress and neurotic-psychic issues
Initially, convulsion is created by a special form of epilepsy. Light-sensitive epilepsy occurs when children have a high sensitivity to flickering lights. In this case, the convulsion begins when they are in front of the screen's bright lights and the flashes caused by computer games. Symptoms vary, including headache, change in the field of vision, vertigo, dizziness, decreased cognizance and convulsion. Symptoms disappear as soon as they stop using the computer [1, 15].
High workload and remoteness from colleagues at work can lead to psychological problems. It should be noted, however, that working with a computer does not necessarily cause depression, but high workload plays a role in creating psychological stress. These diseases include nervous or mental tics such as blinking, unusual shoulder movements, nausea and vertigo [13].

2.5 Breathing hazardous gases
The latest research shows that PC hardware is full of a variety of metals and toxic substances that contaminate the environment. "Lead" (used in cathode ray tubes), "arsenic" (used in older CRTs), "antimony trioxide" (used as a fire retardant) and "polybrominated materials" (used as fire retardants in cables, circuits and plastic materials in computers) are some of the materials that could be mentioned. "Selenium" (as a rectifier in power-supply circuits), "cadmium" (used in computer circuit boards and semiconductors), "chromium" (to prevent corrosion of the metal parts of the computer), "cobalt" (used in metals for plasticity and magnetism) and "mercury" (in the computer's switches) are other toxic and pollutant substances used in PCs.
Computer frames and screens give off a special smell when they get warm. The dioxin gas produced by the computer body and the screen (because of heat) is an example of these odors. These materials are used in the frames of screens and in boards as fireproofing. Ozone gas is produced while a laser printer is working, which harms nasal mucous tissue, the eyes and the throat. Therefore, importers and producers of computers have to observe the required standards.

3. Negative effects of using computer games
Many of these people play in a fictional universe for a long time. The disadvantages of computer games are:

3.1 Physical damages
Because of staring at the screen continuously, the eyes are under strong pressure of light and will undergo complications. Observations have shown that teenagers are so absorbed by the games that they do not notice the amount of visual and mental pressure put on themselves. Since they sit in a constant fixed position, the skeleton will be afflicted by some abnormalities. Twinges and stiffness of the neck, shoulders and wrists are other complications caused by relatively fixed and long-term work with computers. The skin is also exposed to continuous radiation from the monitor. Nausea and vertigo, especially in children and teenagers with an epilepsy background, are other computer complications. Stirring computer games result in bone and nerve diseases in the hands and arms.

3.2 Psychological and nurturing injuries
3.2.1 Strengthening the sense of aggression
The main characteristic of computer games is that most of them are set in warlike settings and the gamer must fight so-called enemy forces to reach the next stage of the game. Continued playing of such games will make children aggressive and quarrelsome. "Violence" is the most important motivation used extensively in designing the newest and most attractive computer games. Hollywood celebrities, who are immoral and anti-value in our culture, are shown as insuperable heroes in these games.

3.2.2 Isolationism
Children who are continuously involved with these games tend to be introverted, are reclusive in society and have anomalies in social communication [1].

3.3 Mental retardation
In these games, because children and teenagers play programs created by others, and since they are not able to change them, their confidence in creation and improvement becomes unstable.
Most families think that the gamer has a continuous mental involvement in the games, but this involvement is not mental; rather, the games deceive the brain cells, and from the physically active point of view there is only some finger movement. If we continue in this manner and develop such games, society will have frustrated, depressed, non-active and uncreative members. They will be less self-reliant and creative, while society needs creative, innovative and contemplative people [12].
Recent research indicates that computer games lead to chronic brain damage. The games stimulate only the parts of the brain that are dedicated to vision and motion and do not help develop other parts. The frontal lobe does not develop in children who devote long hours to playing computer games. The frontal lobe plays an important role in the development of
memory, emotion and learning. People whose frontal lobes are not developed are susceptible to being violent and have less ability to control their behavior.

3.4 Impact on family relationships
Considering that life in our country, Iran, is moving towards a machinelike life, and that in some families the parents are employed or some fathers have more than one job, emotional relationships and getting together in families have automatically decreased. A lot of people are not satisfied with this situation and with the existence of the computer as a magic box, which has resulted in cold family relationships [4, 12].

3.5 Educational failure
Due to the glamorous attraction of these games, children spend a lot of their time and put their energy into playing. Some children even wake up earlier than usual in the morning to play a little before school and compensate for the wasted time in this way. One parent states that last year her son had the best scores, but since they bought a computer for him, he spends 2-3 hours a day playing and has suffered educational failure [4, 12].

4. Negative effects of using the Internet

4.1 Internet Addiction
Addicts to the Internet spend long hours during the day using this medium, in a way that influences their job and social performance. This type of abnormal usage is called internet addiction by experts. The reason for internet addiction in many of these people is to find a way to suppress the anxiety and stress in their lives. According to researchers, dissociable people and those who have problems in their social and interpersonal communications are more likely to become Internet addicts.
Known symptoms of this disorder include:
• Using computers for fun, enjoyment or stress relief.
• Extreme depression when they do not use the Internet.
• Spending a lot of time and money on software, hardware and computer-related activities.
• Being carefree towards work, school and family.
• An uncontrollable feeling of irritability while using the computer.
One of the negative aspects of the Internet is entering it anonymously. Teenagers have the opportunity to do whatever they would like on the Internet. They get disturbed when adults ask them how they use the Internet, because they regard the Internet as a private place for themselves. The Internet replaces public space for them. In this case they will have more experience and information about how to control and use this new medium. The only problem is that the relationship between youth and adults vanishes in cyberspace. Puberty is a critical stage in which an adolescent discovers and internalizes values. The Internet, with its unlimited volume of information and instant communication tools, introduces other tools for teenagers to create an identity through search. We should know that many interactions on the Internet require no human contact [14].

4.2 Immoral websites
Immoral websites have become a catastrophe on the Internet these days. By providing immoral and obscene content and images, these sites have jeopardized the mental and emotional health of teenagers and therefore the health of societies. Most of these sites try to destroy the culture and values of a society.

4.3 Chat
Chat rooms are used by a great number of teenagers on the internet. These rooms are suitable places to meet and converse with other children and teenagers around the world. But a lot of abuse is carried out by swindlers in these rooms, including:
• Presenting invalid personal information.
• Abuse of people's information.
• Deceiving adolescents by contacting them and making appointments.
These matters lead to the seduction of adolescents and therefore to corruption in societies.

4.4 Impact of internet games
Today, the development of electronic and computer games has become a great threat to teenagers and youth. It can lead to mental disorders and depression among youth as well. In the past, games were played through children's communication with each other. But today, since becoming aware of such games, they spend most hours of the day playing computer games, while this communication does not create any emotional or human relations. The effect of games on children and teenagers is especially the creation of violence among them. Research shows that the effect of games on violent behavior in children and teenagers depends on several factors [4, 3]:
• The severity of the violence in the games.
• The child's ability to discern and differentiate between the imaginary world and real life.
• The child's capability to restrain natural tendencies and motivations.
• The value framework in which the child is growing up or living now, and the values that the game content presents.

4.4.1 Social impact on the person
Relationships between individuals on the Internet are superficial and do not have depth; this type of communication lacks features such as proximity, regular contact, deep influence and the exchange of information about social context [2].

4.4.2 Being cut off
While the Internet can connect human beings electronically, it stops "face to face" communication. So it will reduce human relations and social cohesion.

4.4.3 Mental involvement
One of the problems mentioned by psychiatrists about children and teenagers who spend long hours in chat rooms
is mental involvement, which is caused by the mental images produced by the materials exchanged among people in these chat rooms. It causes mental disorders, including depression.

4.4.4 Internet effect on social skills
Online games delay the appropriate development of a child's social skills. When a child becomes addicted to the Internet, his motivation for interacting with others decreases. This has negative effects on their personal relationships and social interactions [2]. Recent studies show that using the Internet causes feelings of misery and loneliness and reduces mental health overall. People who use the Internet more keep fewer friendships. They spend less time talking to family, experience more stress and feel lonely and depressed.

4.4.5 Internet and families
Using the Internet affects family relationships for several reasons:
• Using the Internet is a time-consuming activity, so it can reduce children's interaction with the family. Dedicated time to interact with each other is a prerequisite for a high-quality relationship. In one study, 50% of families stated that when they are online they speak less, and 41% admitted they had learned anti-social behavior during this time.
• The Internet creates new conflicts within the family. When there is only one computer at home, there will be competition between children and parents to use it, which sometimes causes struggles.
• Visiting web pages whose content is inappropriate for the child's age causes arguments and conflict between parents and children.
• Sometimes conflict arises because of the child's access to the parents' private information.
• Parents are concerned that the Internet may keep children from other activities and have isolating effects on them.

5. Conclusion
In recent years, the computer and the internet have gradually replaced television to some extent, and in the near future they will likely play a more significant role than television in children's and teenagers' lives. If this technology is used correctly, it has positive effects. Yet it is inferred from the content of this paper that the risks of its uncontrolled and incorrect application threaten all users, especially children.
To ensure optimal use, and that the computer improves children's lives in the present and the future, considering the following suggestions may help in general:
• Parents should become familiar with computers, take training courses in this field, and learn some tips from their children if necessary.
• Talk with children about how to use computers and about the risks that may threaten them while they are online.
• Put the computer in a place at home where the child's activities can be supervised.
• Limit the time of computer use if the child has reduced social contacts. Excessive use of the computer usually indicates a problem.
• Accompany children when they are in chat rooms.
• Review children's e-mail and delete inappropriate messages.
• Use filtering software to prevent visits to inappropriate content. Such software can also log the addresses of the sites the child visits, so parents can review them later.
• Of course, no software can replace parents' association with their children.
• Programs should be suitable for the child's growth and development.
• Encourage the child to interact with the family rather than make excessive use of computers.
• The computer should be an educational complement, not the only way of training.
• Choose programs appropriate to the child's age.
• Control the ways of access to the computer.
• Enhance parents' and teachers' computer literacy.
• Provide educational programs for parents, teachers and others who work with children.
• Research should be done on the effects of the computer on the physical, intellectual, rational, social and psychological development of children.

References
[1] M.K. Shields, R.E. Behrman, "Children and computer technology: analysis and recommendations", The Future of Children, Vol. 10, No. 2, pp. 4-30, Fall/Winter 2000.
[2] "Internet & its effect on social life", Website, May 2005. Available: http://www.ayandehnegar.com.
[3] "ICTs and children", Website, April 2004. Available: www.wiki.media-culture.org.au.
[4] B. Affonoso, "Is the internet affecting the social skills of our children?", December 1, 1999. Available: http://www.sierrasource.com/cep612/internet.html.
[5] L.K. Wan, "Children and computer vision syndrome", 2005. Available: www.allaboutvision.com.
[6] S. Shamlou, "Mental Health", Roshd Publication, 2001, pp. 4-30.
[7] M. Emick, "Study finds direct link between computer use and vision problems in children", Mar 2002. Available: www.allaboutvision.com/cvs/productivity.htm.
[8] A. Azimi, M. Salehi, F. Salehi, H. Masoudi, "Effect of work with computer on vision performance", The Secret of Better Life, Vol. 30, pp. 33-41, Fall 2004.
[9] "What's new in health care: computers cause vision problems in children", Johns Hopkins University Website, April 1, 2002. Available: www.jhu.edu.
[10] S.S. Lang, "Cornell ergonomist offers web guidelines on how children can avoid injury while at their computers", Feb 2010. Available: http://ergo.human.cornell.edu/MBergo/schoolguide.html.
Abstract: In DTN networks an end-to-end path may not exist and disconnections may occur frequently due to propagation delay, node mobility, power outages and the operational environment (deep space, underwater). Thus transmission in such environments follows a store-carry-forward strategy, in which a node stores a message in its buffer, carries it while moving, and transmits it when a connection becomes available. In addition, multiple copies of a message may be forwarded to increase the delivery probability. In such cases the node buffer becomes the critical resource and needs to be managed effectively to overcome congestion and reduce the message drop ratio. An efficient buffer management policy decides which message to drop when congestion arises. In this paper we propose a new buffer management policy for DTN in which, when the node buffer is congested and needs to store a new message, the largest message in the buffer is dropped. The strategy is called Drop Largest (DLA). We show through simulation that our buffer management policy (DLA) outperforms the existing Drop Oldest policy.

Keywords: Delay Tolerant Network (DTN), DLA (Drop Largest), DO (Drop Oldest), Algorithm

buffer management issues, but an efficient buffer management scheme [13] is still required to overcome congestion.
In this work we propose an efficient buffer management policy (DLA) to improve message delivery, message drop, overhead ratio and buffer time average in a highly congested network. To evaluate the performance of our proposed buffer management policy we use the ONE simulator [5]. We have performed simulations with the Spray-and-Wait, Direct Contact, First Contact and Epidemic routing protocols. The proposed scheme performs well; only the delivery ratio in the case of Epidemic routing is reduced.
The rest of the paper is organized as follows. Section 2 discusses existing buffer management policies. Section 3 summarizes the performance metrics. Section 4 describes the evaluation of our buffer management policy under the routing protocols. Section 5 presents the proposed algorithm (DLA). Sections 6-7 present the simulation results and conclusion.
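The DLA policy described in the abstract — drop the largest buffered message when an incoming message does not fit — can be sketched as below. This is a minimal illustration under assumed message and buffer shapes, not the authors' ONE-simulator implementation:

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    size: int  # message size in bytes

class DLABuffer:
    """Node buffer applying the Drop Largest (DLA) policy on congestion."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.messages: list[Message] = []

    @property
    def used(self) -> int:
        return sum(m.size for m in self.messages)

    def store(self, msg: Message) -> list[str]:
        """Store msg, dropping the largest buffered messages until it fits.

        Returns the ids of the dropped messages. A message larger than the
        whole buffer is rejected after the buffer has been emptied.
        """
        dropped = []
        while self.messages and self.used + msg.size > self.capacity:
            largest = max(self.messages, key=lambda m: m.size)
            self.messages.remove(largest)
            dropped.append(largest.msg_id)
        if self.used + msg.size <= self.capacity:
            self.messages.append(msg)
        return dropped
```

Replacing the `max(..., key=lambda m: m.size)` criterion with an oldest-message criterion would yield the Drop Oldest (DO) baseline that the paper compares against.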
Trade-offs in delay-tolerant wireless networks," in SIGCOMM Workshop on Delay Tolerant Networking (WDTN), pp. 260-267, 2005.
[3] T. Spyropoulos, K. Psounis, and C. S. Raghavendra, "Spray and wait: an efficient routing scheme for intermittently connected mobile networks," in Proceedings of the ACM SIGCOMM Workshop on Delay-Tolerant Networking, pp. 252-259, 2005.
[4] T. Spyropoulos, K. Psounis, and C. S. Raghavendra, "Efficient routing in intermittently connected mobile networks: the multiple-copy case," IEEE/ACM Transactions on Networking (TON), vol. 16, pp. 77-90, Feb. 2008.
[5] Homepage of the Opportunistic Network Environment (ONE) simulator, http://www.netlab.tkk._/%7Ejo/dtn/#one, Version 1, accessed July 2010.
[6] A. Vahdat and D. Becker, "Epidemic routing for partially connected ad hoc networks," Duke University, Tech. Rep. CS-200006, Apr. 2000.
[7] K. Scott and S. Burleigh, "Bundle Protocol Specification," RFC 5050, November 2007.
[8] J. Burgess, B. Gallagher, D. Jensen, and B. N. Levine, "MaxProp: routing for vehicle-based disruption-tolerant networks," in IEEE International Conference on Computer Communications (INFOCOM), pp. 1-11, 2006.
[9] A. Balasubramanian, B. N. Levine, and A. Venkataramani, "DTN routing as a resource allocation problem," in ACM Conference on Applications, Technologies, and Protocols for Computer Communication (SIGCOMM), pp. 373-384, 2007.
[10] D. Aldous and J. Fill, "Reversible Markov chains and random walks on graphs" (monograph in preparation), http://statwww.berkeley.edu/users/aldous/RWG/book.html.
[11] A. Krifa, C. Barakat, and T. Spyropoulos, "Optimal buffer management policies for delay tolerant networks," in IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), pp. 260-268, 2008.
[12] A. Lindgren, A. Doria, and O. Schelen, "Probabilistic routing in intermittently connected networks," SIGMOBILE Mobile Computing and Communications Review, vol. 7, no. 3, 2003.
[13] E. Jenefa JebaJothi, V. Kavitha, and T. Kavitha, "Contention based routing in mobile ad hoc networks with multiple copies," Journal of Computing, vol. 2, issue 5, pp. 14-19, May 2010.
[14] J.-Y. Le Boudec and M. Vojnovic, "Perfect simulation and stationarity of a class of mobility models," in Proc. of IEEE INFOCOM, pp. 2743-2754, 2005.
[15] A. Keränen and J. Ott, "Increasing reality for DTN protocol simulations," Tech. Rep., Helsinki University of Technology, Networking Laboratory, July 2007.
[16] T. Spyropoulos, K. Psounis, and C. S. Raghavendra, "Single-copy routing in intermittently connected mobile networks," IEEE/ACM Transactions on Networking (TON), vol. 16, pp. 63-76, Feb. 2008.
[17] Y. Li, L. Zhao, Z. Liu, and Q. Liu, "N-Drop: congestion control strategy under epidemic routing in DTN," Research Center for Wireless Information Networks, Chongqing University of Posts & Telecommunications, Chongqing 400065, China, pp. 457-460, 2009.
[18] A. Lindgren and K. S. Phanse, "Evaluation of queueing policies and forwarding strategies for routing in intermittently connected networks," in Proc. of IEEE COMSWARE, pp. 1-10, Jan. 2006.

Sulma Rashid received her MS degree in computer science in 2007 from IQRA University, Islamabad, Pakistan, and her MCS degree in 2001 from UAAR, Pakistan. She has 10 years of teaching experience. Her areas of interest are DTN, ad hoc networks, security, network programming, operating systems, wireless networks and MANETs. As part of this paper she is working on optimizing and forwarding research issues in DTN routing.

Qaisar Ayub obtained his MCS computer science degree in 2005 from the COMSATS Institute of Information Technology, Pakistan, and his BCS (Hons.) computer science degree from Allama Iqbal Open University, Pakistan, in 2003. He has 5 years of experience in conducting professional trainings (Oracle, Java) and in software development. As part of this paper he is working on QoS in DTN routing.
122 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 9, September 2010
2 HOD / EEE, Government College of Engineering, Tirunelveli, Tamilnadu, India
rajaramgct@redifmail.com
Abstract: Information security and integrity are becoming more important as we use email for personal communication and business. Steganography is used to hide the very occurrence of communication. Recent suggestions in US newspapers indicate that terrorists use steganography to communicate in secret with their accomplices; in particular, images on the Internet were mentioned as the communication medium. While the newspaper articles sounded very dire, none substantiated these rumors. Today, email management is not only a filing and storage challenge: because law firms and attorneys must be equipped to take control of litigation, email authenticity must be unquestionable, with strong chains of custody, constant availability, and tamper-proof security. Email is insecure. This paper surveys how stego content can be detected with the help of steganalysis methods. The proposed work will develop a steganalysis framework that checks the email content of corporate mails by improving the DES algorithm with the help of a neural network approach. We anticipate that this paper also gives a clear picture of current trends in steganography, so that appropriate steganalysis algorithms can be developed and improved.

Keywords: Steganalysis, Steganography, Information Hiding, LSB, Stegdetect, Stego, Outguess

1. Introduction

The goal of steganalysis is to detect and/or estimate potentially hidden information from observed data with little or no knowledge about the steganography algorithm and/or its parameters. Steganalysis is both an art and a science: the art plays a major role in selecting the features or characteristics a typical stego message might exhibit, while the science helps in reliably testing the selected features for the presence of hidden information. While it is possible to design a reasonably good steganalysis technique for a specific steganographic algorithm, the long-term goal is to develop a steganalysis framework that works effectively at least for a class of steganography methods, if not for all.

The current trend in steganalysis suggests two extreme approaches: (a) make little or no statistical assumptions about the image under investigation and learn the statistics from a large database of training images, or (b) assume a parametric model for the image and compute its statistics for detection. This proposed research is going to analyze the techniques available to detect stego content in corporate emails.

1.1 Steganography vs. Steganalysis

Steganography is the art of covered or hidden writing [7]. Its purpose is covert communication that hides a message from a third party. Steganography is often confused with cryptology because both are used to protect important information [7]. The difference is that steganography hides information so that it appears no information is hidden at all: anyone who views the object in which the information is hidden will have no idea that it is there [11], and therefore will not attempt to decrypt it. In the modern sense of the word, steganography usually refers to information or a file concealed inside a digital picture, video, or audio file. New steganographic techniques are constantly being developed, and information hiding is becoming more advanced depending on the motives of its use. Beyond the hype of terrorists using steganography, there has recently been a case of corporate espionage reported by Phadnis (2007), in which confidential information was leaked to a rival firm using steganographic tools that hid the information in music and picture files [9]. Although the perpetrator was caught in this case, it gives an idea of the wide landscape in which steganography can be applied [9].

In the modern approach, depending on the nature of the cover object, steganography can be divided into five types:
• Text Steganography
• Image Steganography
• Audio Steganography
• Video Steganography
• Protocol Steganography
Many steganographic techniques have thus been designed to work with each of these cover objects.

Steganalysis is the science of detecting the presence of hidden data in cover media files and is emerging in parallel with steganography. Steganalysis has gained prominence in national security and forensic sciences, since the detection of hidden (ciphertext or plaintext) messages can
lead to the prevention of disastrous security incidents. Steganalysis is a very challenging field because of the scarcity of knowledge about the specific characteristics of the cover media (an image, an audio, or a video file) that can be exploited to hide information and to detect it. The approaches adopted for steganalysis also sometimes depend on the underlying steganography algorithm(s) used.

Clearly, it is important to choose a proper steganalysis domain, appropriate features, statistical models and parameters, a detector design, and user inputs such as the acceptable detection error probability. We discuss later some of the popular choices of current steganalysis algorithms in this regard.

3. Image steganalysis
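The palette-entropy test described below for GIF images can be sketched as follows. This is an illustrative Python fragment, not code from the paper; `palette_indices` is a hypothetical flat list of a GIF image's palette indices, and the toy "stego" perturbation is invented for demonstration:

```python
import math
from collections import Counter

def palette_entropy(palette_indices):
    """Shannon entropy (bits) of the palette-index distribution of an
    image; LSB embedding tends to increase this value by spreading
    pixels over more palette entries."""
    counts = Counter(palette_indices)
    total = len(palette_indices)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: a flat cover image uses few palette colors (low entropy);
# after hypothetical embedding, indices are perturbed and entropy rises.
cover = [3] * 90 + [7] * 10
stego = [3] * 50 + [2] * 40 + [7] * 6 + [6] * 4
print(palette_entropy(stego) > palette_entropy(cover))  # prints: True
```

A detector built on this idea would flag an image when the entropy of its palette usage is appreciably higher than expected for that class of cover images.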
embedding of a GIF image changes the 24-bit RGB value of a pixel, and this could bring about a change in the palette color (among the 256 distinct colors) of the pixel. The strength of the steganographic algorithm lies in reducing the probability of a change in the palette color of the pixel and in minimizing the visible distortion that embedding the secret image can potentially introduce. The steganalysis of a GIF stego image is conducted by performing a statistical analysis of the palette table vis-à-vis the image, and a detection is made when there is an appreciable increase in entropy (a measure of the variation in the palette colors). The change in entropy is maximal when the embedded message is of maximum length.

3.1.2. Raw Image Steganalysis

Raw image steganalysis is primarily used for BMP images, which are characterized by a lossless LSB plane. LSB embedding on such images causes flipping between the two grayscale values of a pair, and the embedding of a hidden message is likely to average out the frequencies of occurrence of the pixels with the two gray-scale values. For example, if a raw image has 20 pixels with one gray-scale value and 40 pixels with the other, then after LSB embedding the count of pixels with each of the two values is expected to be around 30. This approach was first proposed by Westfeld and Pfitzmann [5]; it is based on the assumption that the message length is comparable to the pixel count of the cover image (for longer messages) or that the location of the hidden message is known (for shorter messages).

3.1.3. JPEG Image Steganalysis

JPEG is a popular cover image format used in steganography. Two well-known steganography algorithms for hiding secret messages in JPEG images are the F5 algorithm [11] and the Outguess algorithm [6]. The F5 algorithm uses matrix embedding to embed bits in the DCT (Discrete Cosine Transform) coefficients in order to minimize the number of changes needed to embed a message.

Generic steganalysis algorithms, usually referred to as universal or blind steganalysis algorithms, are designed to work on all known and unknown steganography algorithms. These techniques exploit the changes in certain innate features of the cover images when a message is embedded. The focus is on identifying prominent features of an image that are monotonic and change statistically as a result of message embedding. Generic steganalysis algorithms are developed to distinguish these changes as precisely as possible [9]. The accuracy of the prediction heavily depends on the choice of the right features, which should not vary across images of different varieties [12].

4. Evaluation of steganalysis tools

In order to evaluate the steganalysis tools, it is essential that the whole process is forensically sound, to ensure the validity of the findings. Therefore, the following steps will be followed throughout the process:
1. Obtain the steganographic and steganalysis tools
2. Verify the tools (to ensure each tool does what it claims)
3. Obtain cover images, and generate MD5 hashes
4. Apply steganalysis on the cover images, and generate MD5 hashes
5. Generate steganographic images, and generate MD5 hashes
6. Apply steganalysis on the steganographic images, and generate MD5 hashes
In each of the steps where the cover images or the steganographic images are involved, MD5 hashes are used to verify whether the image has changed in any sense [1].

5. Detecting stego content in corporate mails

The proposed research is going to analyze the performance of an improved version of image steganalysis algorithms on corporate mails. A hybrid algorithm is under development that aims to detect stego content accurately. A large database is used to store the images. The performance and the detection ratio will be measured on corporate mails.

6. Conclusions

In this paper, we have analyzed the steganalysis algorithms available for image steganography. In summary, each carrier medium has its own special attributes and reacts differently when a message is embedded in it. Therefore, steganalysis algorithms have also been developed in a manner specific to the target stego file, and algorithms developed for one cover medium are generally not effective for a different medium. This paper has provided an overview of the steganalysis algorithms available for images and proposed a new steganalysis framework.

References
[1] Ahmed Ibrahim, "Steganalysis in Computer Forensics," Security Research Centre Conferences, Australian Digital Forensics Conference, Edith Cowan University, 2007.
[2] I. Avcibas, N. Memon, and B. Sankur, "Steganalysis using image quality metrics," IEEE Trans. on Image Processing, vol. 12, no. 2, pp. 221-229, Feb. 2003.
[3] R. Chandramouli, "A Mathematical Approach to Steganalysis," Proc. SPIE Security and Watermarking of Multimedia Contents IV, California, Jan. 2002.
[4] S. Geetha and S. Siva Sivatha Sindhu, "Detection of Stego Anomalies in Images Exploiting the Content Independent Statistical Footprints of the Steganograms," Department of Information Technology, Thiagarajar College of Engineering, Madurai, Informatica 33 (2009), pp. 25-40.