RIPPLE-2K7
Author:
Email: hemkumar_csa@yahoo.co.in
hem_love2001@yahoo.co.in
ABSTRACT
Contents:
1. Human computing
3. Approaches to measurement:
4. Research challenges
4.1 Scientific challenges
4.2 Technical challenges
5. Conclusion
6. References
HUMAN COMPUTING & MACHINE UNDERSTANDING OF
HUMAN BEHAVIOR
1. Human computing:
A widely accepted prediction is that computing is moving into the background, projecting the human user into the foreground. Today computing has become key to every technology. If this prediction is to come true, the next generation of computing, which we will call human computing, is about anticipatory user interfaces that should be human-centered, built for humans and based on a human model.
Here we will focus on expressive behavior, in particular facial
expression, and approaches to measurement.
3. Approaches to measurement:
3.1. Real time facial expression recognition for natural interaction
3.2. Message judgment
3.3. Sign measurement
3.1. Real time facial expression recognition for natural interaction:
Here the basic step is facial action detection, which proceeds as follows (take a look at the above figure). First, the face image is captured; then the most important features are extracted and normalized based on the distance between the eyes. Now let us look at behavioral recognition through real-time facial expression recognition for natural interaction.
Here mainly five distances are taken into account: the distances between the right eye and eyebrow, the left eye and eyelid, and the left eye and the left corner of the mouth, plus the width and the height of the mouth. Initially the neutral distances are stored in the database. Each parameter can then take one of three states for each of the emotions: C+, C-, and S.
State C+ means that the value of the parameter has increased with respect to the neutral one; state C- that its value has diminished with respect to the neutral one; and state S that its value has not varied with respect to the neutral one. First, we build a descriptive table of emotions according to the state of the parameters, like the one in the table below:
Based on the above table we can recognize the six basic emotions of a human. For example, take the first row: for the emotion Joy, distance D1 must diminish, D2 must be neutral or diminished, D3 and D4 must increase, D5 must diminish, and the width/height ratio must decrease or stay neutral with respect to the stored value. The remaining five emotions can be read from the table in the same way.
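The Joy rule above can be sketched in code. This is a minimal illustration, not the paper's implementation: the tolerance value, the rule encoding, and the "W/H" parameter name are assumptions, and only the Joy row of the table is encoded here, taken from the worked example.

```python
# Each distance D1..D5 (and the width/height ratio) is compared against
# its stored neutral value and mapped to one of three states:
#   "C+" (increased), "C-" (diminished), "S" (unchanged)
def to_state(value, neutral, tolerance=0.05):
    if value > neutral * (1 + tolerance):
        return "C+"
    if value < neutral * (1 - tolerance):
        return "C-"
    return "S"

# Joy, per the text: D1 diminished; D2 neutral or diminished;
# D3 increased; D4 increased; D5 diminished; width/height
# decreased or neutral. The other five rows would be added similarly.
RULES = {
    "Joy": {
        "D1": {"C-"}, "D2": {"S", "C-"}, "D3": {"C+"},
        "D4": {"C+"}, "D5": {"C-"}, "W/H": {"C-", "S"},
    },
}

def classify(states):
    """Return the first emotion whose rule matches the observed states."""
    for emotion, rule in RULES.items():
        if all(states[d] in allowed for d, allowed in rule.items()):
            return emotion
    return "unknown"

observed = {"D1": "C-", "D2": "S", "D3": "C+", "D4": "C+", "D5": "C-", "W/H": "S"}
print(classify(observed))  # Joy
```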
approach would code the face.
There are six “basic emotions”: joy, surprise, sadness, disgust, fear and anger. An example of the facial expression for each of the six basic emotions is shown in the figure above.
Some expressions contradict the underlying emotion; consider, for example, the “masking smile,” in which smiling is used to cover up or hide an underlying emotion. Such underlying emotions can be detected with a sign-based approach.
• 9 Action Units for upper face
• 18 Action Units for lower face
• 11 Action Units for head position and movement
• 9 Action Units for eye position and movement
Action Units may occur singly or in combinations. Combinations may be additive or non-additive. In additive combinations the appearance of each Action Unit is independent, whereas in non-additive combinations the Action Units modify each other's appearance. Non-additive combinations are analogous to co-articulation effects in speech, in which one phoneme modifies the sound of those with which it is contiguous.
An example of an additive combination in the Facial Action Coding System is AU 1+2, which often occurs in surprise (along with eye widening, AU 5) and in the brow-flash greeting. The combination of these two Action Units raises the inner (AU 1) and outer (AU 2) corners of the eyebrows and causes horizontal wrinkles to appear across the forehead. The appearance changes associated with AU 1+2 are the product of their joint actions.
Examples of non-additive combinations are AU 1+4 and AU 1+2+4, comparable to co-articulation effects in speech. Consider AU 1+4, which often occurs in sadness. When AU 1 occurs alone, the inner eyebrows are pulled upward. When AU 4 occurs alone, they are pulled together and downward. When AU 1 and AU 4 occur together, the downward action of AU 4 is modified: the inner eyebrows are raised and pulled together. This action typically gives an oblique shape to the brows and causes horizontal wrinkles to appear in the center of the forehead, as well as other changes in appearance.
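The additive/non-additive distinction can be sketched as a simple lookup. This is an illustrative sketch, not FACS software: the appearance strings are informal paraphrases of the text above, and only the AU 1+4 override from the sadness example is encoded.

```python
# Independent appearance of individual Action Units (paraphrased).
AU_APPEARANCE = {
    1: "inner eyebrows pulled upward",
    2: "outer eyebrow corners raised",
    4: "eyebrows pulled together and downward",
    5: "upper eyelids raised (eye widening)",
}

# Combinations whose joint appearance is NOT the sum of the parts,
# analogous to co-articulation in speech.
NON_ADDITIVE = {
    frozenset({1, 4}): "inner eyebrows raised and pulled together "
                       "(oblique brow shape, central forehead wrinkles)",
}

def describe(aus):
    """Describe a combination: non-additive overrides, additive sums."""
    combo = frozenset(aus)
    if combo in NON_ADDITIVE:
        return NON_ADDITIVE[combo]
    # Additive: each AU contributes its independent appearance.
    return "; ".join(AU_APPEARANCE[a] for a in sorted(combo))

print(describe({1, 2}))  # additive, e.g. surprise brows
print(describe({1, 4}))  # non-additive, sadness brows
```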
4. RESEARCH CHALLENGES
4.1 Scientific challenges:
1. Modalities:
We should know how many and which behavioral signals have to be combined for robust and accurate human behavior analysis. The behavioral channels include the face, the body, and the tone of a person's voice.
2. Fusion:
Here the main challenge is to know at what abstraction level the input modalities should be fused. We should know whether the fusion of the input modalities depends on the machine learning techniques used, and whether tight coupling persists when the modalities are used for human behavior analysis.
4.2 Technical challenges:
1. Robustness:
Most methods for human sensing, context sensing and human behavior understanding work only in constrained environments: the environment must be very calm, without any noise. A noisy environment causes them to fail.
2. Speed:
Many of the methods in the field do not perform fast enough to support interactivity: the signals must reach their destination quickly enough. Many researchers opt for more sophisticated processing rather than real-time processing. The main challenge is to find faster hardware.
5. CONCLUSIONS
6. REFERENCES:
[1] Aarts, E. Ambient intelligence drives open innovation. ACM Interactions, 12, 4
(July/Aug. 2005), 66-68.
[3] Ba, S.O. and Odobez, J.M. A probabilistic framework for joint head tracking and
pose estimation. In Proc. Conf. Pattern Recognition, vol. 4, 264-267, 2004.
[4] Bartlett, M.S., Littlewort, G., Frank, M.G., Lainscsek, C., Fasel, I. and Movellan, J. Fully automatic facial action recognition in spontaneous behavior. In Proc. Conf. Face & Gesture Recognition, 223-230, 2006.
[5] Bicego, M., Cristani, M. and Murino, V. Unsupervised scene analysis: A hidden
Markov model approach. Computer Vision & Image Understanding, 102, 1 (Apr.
2006), 22-41.
[6] Bobick, A.F. Movement, activity and action: The role of knowledge in the
perception of motion. Philosophical Trans. Roy. Soc. London B, 352, 1358 (Aug.
1997), 1257-1265.
[7] Bowyer, K.W., Chang, K. and Flynn, P. A survey of approaches and challenges
in 3D and multimodal 3D+2D face recognition. Computer Vision & Image
Understanding, 101, 1 (Jan. 2006), 1-15.
[9] Cacioppo, J.T., Berntson, G.G., Larsen, J.T., Poehlmann, K.M. and Ito, T.A. The psychophysiology of emotion. In Handbook of Emotions. Lewis, M. and Haviland-Jones, J.M., Eds. Guilford Press, New York, 2000, 173-191.
[10] Chiang, C.C. and Huang, C.J. A robust method for detecting arbitrarily tilted
human faces in color images. Pattern Recognition Letters, 26, 16 (Dec. 2005),
2518-2536.
[11] Costa, M., Dinsbach, W., Manstead, A.S.R. and Bitti, P.E.R. Social presence, embarrassment, and nonverbal behavior. Journal of Nonverbal Behavior, 25, 4 (Dec. 2001), 225-240.
FACIAL RECOGNITION USING BIOMETRICS
ABSTRACT
While humans have had the innate ability to recognize and distinguish
different faces for millions of years, computers are just now catching up. In this
paper, we'll learn how computers are turning your face into computer code so it can
be compared to thousands, if not millions, of other faces. We'll also look at how
facial recognition software is being used in elections, criminal investigations and
to secure your personal computer.
Introduction
People have an amazing ability to recognize and remember thousands of
faces. While humans have had the innate ability to recognize and distinguish
different faces for millions of years, computers are just now catching up. In this
paper, you'll learn how computers are turning your face into computer code so it
can be compared to thousands, if not millions, of other faces. We'll also look at how
facial recognition software is being used in elections, criminal investigations and
to secure your personal computer.
The Face
Your face is an important part of who you are and how people identify you.
Imagine how hard it would be to recognize an individual if all faces looked the
same. Except in the case of identical twins, the face is arguably a person's most
unique physical characteristic.
Facial recognition software can be used to find criminals in a crowd, turning a mass of
people into a big line up.
Facial recognition software is based on the ability to first recognize a face, which
is a technological feat in itself, and then measure the various features of each face.
If you look in the mirror, you can see that your face has certain distinguishable
landmarks. These are the peaks and valleys that make up the different facial
features. Visionics defines these landmarks as nodal points. There are about 80
nodal points on a human face. Here are a few of the nodal points that are measured
by the software:
The Software
Facial recognition software falls into a larger group of technologies known as
biometrics. Biometrics uses biological information to verify identity. The basic idea
behind biometrics is that our bodies contain unique properties that can be used to
distinguish us from others. Besides facial recognition, biometric authentication
methods also include:
• Fingerprint scan
• Retina scan
• Voice identification
Facial recognition methods may vary, but they generally involve a series of steps
that serve to capture, analyze and compare your face to a database of stored
images. Here is the basic process that is used by the FaceIt system to capture and
compare images:
1. Detection - Facial recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution. (An algorithm is a program that provides a set of instructions to accomplish a specific task.) The system switches to a high-resolution search only after a head-like shape is detected.
2. Alignment - Once a face is detected, the system determines the head's position, size and pose.
3. Normalization - The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose.
4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and, ideally, linked to at least one stored faceprint.
The heart of the FaceIt facial recognition system is the Local Feature Analysis
(LFA) algorithm. This is the mathematical technique the system uses to encode
faces. The system maps the face and creates a faceprint, a unique numerical code
for that face. Once the system has stored a faceprint, it can compare it to the
thousands or millions of faceprints stored in a database. Each faceprint is stored as
an 84-byte file.
The system can match multiple faceprints at a rate of 60 million per minute
from memory or 15 million per minute from hard disk. As comparisons are made,
the system assigns a value to the comparison using a scale of one to 10. If a score
is above a predetermined threshold, a match is declared. The operator then views
the two photos that have been declared a match to be certain that the computer is
accurate.
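The matching step can be sketched as follows. This is a hedged illustration, not FaceIt's Local Feature Analysis: the faceprint here is just 84 raw bytes, and the similarity measure (fraction of matching bytes scaled to the one-to-10 range) and the threshold value are stand-ins for the proprietary score.

```python
import os

def similarity(a: bytes, b: bytes) -> float:
    """Score two 84-byte faceprints on a scale of one to 10."""
    assert len(a) == len(b) == 84
    matching = sum(x == y for x, y in zip(a, b))
    return 1 + 9 * matching / 84

def best_match(probe, database, threshold=8.0):
    """Return (name, score) of the best match above threshold, else None.

    In the real system an operator would then view the two photos
    declared a match to confirm the computer is accurate.
    """
    name, stored = max(database.items(),
                       key=lambda kv: similarity(probe, kv[1]))
    score = similarity(probe, stored)
    return (name, score) if score >= threshold else None

alice = bytes(range(84))                      # a stored faceprint
db = {"alice": alice, "bob": os.urandom(84)}  # toy database
print(best_match(alice, db))  # ('alice', 10.0)
```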
Gotcha!
The primary users of facial recognition software like FaceIt have been law
enforcement agencies, which use the system to capture random faces in crowds.
These faces are compared to a database of criminal mug shots. In addition to law
enforcement and security surveillance, facial recognition software has several other
uses, including:
Many people who don't use banks use check-cashing machines. Facial
recognition could eliminate possible criminal activity.
This biometric technology could also be used to secure your computer files. By mounting a webcam on your computer and installing the facial recognition software, your face can become the password you use to get into your computer. IBM has incorporated the technology into a screensaver for its A, T and X series ThinkPad laptops.
[Figure: a newly captured image being compared against a database image]
Conclusion
With the following advantages, and also some drawbacks, we conclude our paper on Facial Recognition using Biometrics.
While facial recognition can be used to protect your private information, it can just as easily be used to invade your privacy by taking your picture when you are entirely unaware of the camera. As with many developing technologies, the incredible potential of facial recognition comes with drawbacks.
But if we combine facial recognition with normal password security, we get double security, which is more reliable than a single shield, just as the quote says: “Two heads are better than one”.
MOBILE COMPUTING
PRESENTED BY:
ABSTRACT
MOBILE COMPUTING
INTRODUCTION
What will computers look like in ten years, in the next century? No wholly accurate prediction can be made, but as a general feature, most computers will certainly be portable. How will users access networks with the help of computers or other communication devices? An ever-increasing number will do so without any wires, i.e., wirelessly. How will people spend much of their time at work, or during vacation? Many people will be mobile; mobility is already one of the key characteristics of today's society. Think, for example, of an aircraft with 800 seats. Modern aircraft already offer limited network access to passengers, and aircraft of the next generation will offer easy Internet access. In this scenario, a mobile network moving at high speed above the ground with a wireless link will be the only means of transporting data to and from passengers.
There are two kinds of mobility: user mobility and device portability. User
mobility refers to a user who has access to the same or similar telecommunication
services at different places, i.e., the user can be mobile, and the services will follow
him or her.
With device portability the communication device moves (with or without a
user). Many mechanisms in the network and inside the device have to make sure
that communication is still possible while it is moving.
APPLICATIONS
1. Vehicles:
Tomorrow’s cars will comprise many wireless communication systems and
mobility aware applications. Music, news, road conditions, weather reports, and
other broadcast information are received via digital audio broadcasting (DAB) with
1.5Mbits/s. For personal communication, a global system for mobile
communications (GSM) phone might be available.
Networks with a fixed infrastructure like cellular phone networks (GSM, UMTS) will be interconnected with trunked radio systems (TETRA) and wireless LANs (WLAN). Additionally, satellite communication links can be used.
2. Business:
Today's typical traveling salesman needs instant access to the company's database, to ensure that the files on his or her laptop reflect the current state, etc.
MOBILE AND WIRELESS DEVICES
Sensor:
A very simple wireless device is represented by a sensor transmitting state
information. An example for such a sensor could be a switch sensing the office
door. If the door is closed, the switch transmits this state to the mobile phone
inside the office and the mobile phone will not accept incoming calls. Thus, without
user interaction the semantics of a closed door is applied to phone calls.
Pager:
Mobile Phones:
Personal digital assistant:
Mobile Internet is all about Internet access from mobile devices. Well, it’s true, but
the ground realities are different. No doubt Internet has grown fast, well really fast!
But mobile Internet is poised to grow even faster. The fundamental difference lies
in the fact that whereas academics and scientists started the Internet, the force
behind mobile Internet access is the cash-rich mobile phone industry. On the
equipment side, the wireless devices represent the ultimate constrained computing
device with:
• Less powerful CPUs
• Less memory (ROM and RAM)
• Restricted power consumption
The Wireless Application Protocol (WAP) is the de facto world standard for the presentation and delivery of wireless information and telephony services on mobile phones and other wireless terminals.
Wireless Application Protocol – WAP
There are three essential product components that you need to extend your host applications and data to WAP-enabled devices. These three components are:
1. WAP Micro browser – residing in the client handheld device
2. WAP Gateway – typically on wireless ISP’s network infrastructure
3. WAP Server - residing either on ISP’s infrastructure or on end user
organization’s infrastructure
WAP Micro-browser:
A WAP micro-browser is client software designed to overcome the constraints of mobile handheld devices, enabling wireless access to services such as Internet information in combination with a suitable network server.
Lots of WAP browsers and emulators are available free of cost and can be used to test your WAP pages. Many of these browsers and emulators are specific to particular mobile devices.
WAP emulators can be used to see how your site will look on specific phones. As these images show, the same thing can look different on different mobile phones. So the problems that web developers face with desktop browsers (Netscape/Internet Explorer) are present here too. Make sure you test your code on different mobile phones (or emulators).
Source: WAP for web developers, anywhereyougo.com
A WAP server is simply a combined web server and WAP gateway. WAP devices do not use SSL; instead they use WTLS. Most existing web servers should be able to support WAP content as well. Some new MIME types need to be added to your web server to enable it to support WAP content. MIME stands for Multipurpose Internet Mail Extension.
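For example, on an Apache server the standard WAP MIME types could be registered with `AddType` directives. This is only a sketch: the file extensions follow common convention, and other web servers use different configuration mechanisms.

```apache
# Hypothetical Apache configuration registering the WAP MIME types
AddType text/vnd.wap.wml         .wml
AddType application/vnd.wap.wmlc .wmlc
AddType text/vnd.wap.wmlscript   .wmls
AddType image/vnd.wap.wbmp       .wbmp
```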
The WAP Protocols cover both the application (WAE), and the underlying transport
layers (WSP and WTP, WTLS, and WDP). WML and WML Script are collectively
known as WAE, the Wireless Application Environment. As described earlier the
'bearer' level of WAP depends on the type of mobile network. It could be CSD, SMS,
CDMA, or any of a large number of possible data carriers. Whichever bearer your
target client is using, the development above remains the same. Although it’s not
absolutely essential for a developer to know the details of the WAP communication
protocols, a brief understanding of the various protocols involved, their significance
and the capabilities can help a lot while looking for specific solutions.
WAE
WML and WMLScript are collectively known as WAE, the Wireless Application
Environment
WML
WML is the WAP equivalent of HTML. It is a markup language based on XML (eXtensible Markup Language). The WAE specification defines the syntax, variables, and elements used in a valid WML file.
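A minimal WML deck might look like this (an illustrative sketch following the WML 1.1 DTD; the card id and text are made up):

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- A deck contains one or more cards; the first card is shown first -->
  <card id="main" title="Hello">
    <p>Hello from WML!</p>
  </card>
</wml>
```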
WBMP
WBMP stands for Wireless Bitmap. It is the default picture format for WAP. The current version of WBMP is called type 0. As a rule of thumb, a WBMP should be no wider than 96 pixels and no higher than 48 pixels (at 72 dots per inch). There are also plug-ins available for Paint Shop Pro, Photoshop and Gimp, which let you save WBMP files from these programs.
• Teraflops online converter
• pic2wbmp
• WAP Pictus
• WAP Draw
• WBMPconv & UnWired plug-in for Photoshop/PaintShop
WTLS
The Security layer protocol in the WAP architecture is called the Wireless Transport Layer Security, WTLS. WTLS provides an interface for managing secure connections.
The differences from the Internet's TLS arise from the specific requirements of the WTLS protocol, given the constraints presented by mobile data systems:
• Long round-trip times.
• Memory limitations of mobile devices
• The low bandwidth (most of the bearers)
• The limited processing power of mobile devices
• The restrictions on exporting and using cryptography
WDP
WTP
The Transport layer protocol in the WAP architecture consists of the Wireless Transaction Protocol (WTP) and the Wireless Datagram Protocol (WDP). The WDP protocol operates above the data-capable bearer services supported by multiple network types. As a general datagram service, WDP offers a consistent service to the upper-layer protocols (Security, Transaction and Session) of WAP and communicates transparently over one of the available bearer services.
Source: WAP- WDP Specifications
WAP:
On the Internet, a WWW client requests a resource stored on a web server by identifying it with a unique URL, that is, a text string constituting an address to that resource. Standard communication protocols, like HTTP and Transmission Control Protocol/Internet Protocol (TCP/IP), manage these requests and the transfer of data between the two ends. The content transferred can be either static or dynamic.
The strength of WAP (some call it the problem with WAP) lies in the fact that it very closely resembles the Internet model. In order to accommodate wireless
access to the information space offered by the WWW, WAP is based on well-known
Internet technology that has been optimized to meet the constraints of a wireless
environment. Corresponding to HTML, WAP specifies a markup language adapted to
the constraints of low bandwidth available with the usual mobile data bearers and
the limited display capabilities of mobile devices - the Wireless Markup Language
(WML). WML offers a navigation model designed for devices with small displays and
limited input facilities.
Future Outlook For WAP:
The point raised by many analysts against WAP is that with the emergence of next-generation networks (including GPRS), as the data capabilities of these networks evolve, it will become possible to deliver full-motion video images and high-fidelity sound over mobile networks. Today's service delivers information at a speed of 9,600 bits per second; with GPRS the speed will rise to around 100,000.
Mobile commerce is one application that can open up lots of opportunities for WAP. By 2010, there could be more than 1500m mobile commerce users. M-commerce is emerging more rapidly in Europe and in Asia, where mobile services are relatively advanced, than in the US, where mobile telephony has only just begun to take off.
By allowing the mobile to be in an always-connected state, GPRS (or other services like CDPD) will bring the Internet closer to the mobile.
WAP Applications
One of the most significant advantages of Internet access from a mobile rather than your PC is the ability to instantly identify a user's geographic location. Some of the interesting applications of WAP (already existing or being worked on) are:
• Computer Sciences Corporation (CSC) and Nokia are working with a Finnish
fashion retailer who plans to send clothing offers direct to mobile telephones
using a combination of cursors.
• In Finland, children already play new versions of competitive games such as
"Battleships", via the cellular networks. In the music world, Virgin Mobile in
the UK offers to download the latest pop hits to customers in a daily offering.
• Nokia says applications that will benefit from WAP include customer care and
provisioning, message notification and call management, e-mail, mapping
and location services, weather and traffic alerts, sports and financial services,
address book and directory services and corporate intranet applications.
WIRELESS LAN
Wireless LAN technology constitutes a fast-growing market, introducing the flexibility of wireless access into office, home, or production environments. WLANs are typically restricted in their diameter to buildings, a campus, single rooms, etc., and are operated by individuals, not by large-scale network providers. The global goal of WLANs is to replace office cabling and, additionally, to introduce a higher flexibility for ad hoc communication in, e.g., group meetings.
Design:
Only wireless networks allow for the design of small, independent devices
which can for example be put into a pocket.
Robustness:
Wireless networks can survive disasters.
Cost:
While, e.g., high-speed Ethernet adapters are in the range of some 10 €, wireless LAN adapters, e.g., as PC cards, still cost some 100 €.
Proprietary Solutions:
Restrictions:
Safety and Security:
This topic introduces protocols and mechanisms developed for the network
layer to support mobility. The most prominent example is Mobile IP, which adds
mobility support to the Internet network layer protocol IP. While systems like GSM
have been designed with mobility in mind from the very beginning, the Internet
started at a time when no-one had a concept of mobile computers.
Another kind of mobility, or rather portability of equipment, is supported by DHCP. In former times computers did not change their location often; today, due to laptops and notebooks, they frequently do.
MOBILE IP
The following gives an overview of Mobile IP, the extensions needed for the
Internet to support the mobility of hosts. The following requires some familiarity
with Internet protocols especially IP.
of users. So why not simply use a mobile computer in the Internet?
The reason is quite simple: you will not receive a single packet as soon as
you leave your home network, i.e., the network your computer is configured for,
and reconnect your computer (wireless or wired) at another place. The reason for
this is quite simple if you consider routing mechanisms in the Internet. A host
sends an IP packet with the header containing a destination address besides other
fields. The destination address not only determines the receiver of the packet, but
also the physical subnet of the receiver.
Quick ‘Solutions’:
One might think of a quick solution to this problem by assigning the
computer a new, topologically correct IP address. So moving to a new location
would also mean assigning a new address. Now the problem is that nobody knows
of this new address. It is almost impossible to find a (mobile) host in the Internet
which has just changed its address. In particular, the domain name system (DNS) needs some time before it updates the internal tables necessary for mapping a logical name to an IP address. This approach does not work if the mobile node moves often.
Furthermore, there is a severe problem with higher layer protocols like TCP
that rely on IP addresses. Changing the IP address while still having a TCP
connection open means breaking the connection. A TCP connection can be
identified by the tuple (source IP address, source port, destination IP address,
destination port), also known as a socket.
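The point about the socket tuple can be illustrated with a small sketch. The addresses and port numbers are made up; this models connection lookup, not real kernel behavior.

```python
# A TCP connection is identified by the 4-tuple described above.
# If the mobile node's IP address changes, the tuple no longer matches
# any established connection, so the connection breaks.
from collections import namedtuple

Conn = namedtuple("Conn", "src_ip src_port dst_ip dst_port")

established = {Conn("10.0.0.5", 40001, "192.0.2.7", 80): "OPEN"}

def lookup(src_ip, src_port, dst_ip, dst_port):
    return established.get(Conn(src_ip, src_port, dst_ip, dst_port),
                           "NO CONNECTION")

print(lookup("10.0.0.5", 40001, "192.0.2.7", 80))    # OPEN
# After moving networks the host gets a new, topologically correct address:
print(lookup("172.16.3.9", 40001, "192.0.2.7", 80))  # NO CONNECTION
```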
Requirements:
• Compatibility: A new standard cannot require changes for applications or
network protocols already in use.
• Transparency: Mobility should remain ‘invisible’ to many higher-layer protocols and applications. Besides perhaps noticing a lower bandwidth and some interruption in service, higher layers should continue to work even if the mobile computer changes its point of attachment to the network.
• Scalability and efficiency: Introducing a new mechanism into the Internet
must not jeopardize the efficiency of the network. Enhancing IP for mobility
must not generate many new messages flooding the whole network.
• Security: Mobility poses many security problems. A minimum requirement is
the authentication of all messages related to the management of Mobile IP.
CONCLUSION
BIBLIOGRAPHY
Rajeev Gandhi Memorial College of Engineering and
Technology.
II M.C.A, II
M.C.A,
06091F0007,
06091F0036,
R.G.M.C.E.T,
R.G.M.C.E.T,
Nandyal.
Nandyal.
dharanikumar.mca@gmail.com
p.n.satishkumar@gmail.com
Cell: 9441422114.
Cell: 9704268488
ABSTRACT:
Generally, data mining is the process of analyzing data from different perspectives and summarizing it into useful information - information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases. Although data mining is a relatively new term, the technology is not.
The objective of this paper is to provide full-fledged information about the process of data mining and the steps involved. It also covers the more advantageous techniques, such as data cleaning and integration, and the schemas used for effective processing and mining.
INTRODUCTION:
Data mining is the task of discovering interesting patterns from large amounts of data, where the data can be stored in databases, data warehouses, or other information repositories. It is a young interdisciplinary field, drawing from areas such as database systems, data warehousing, statistics, machine learning, data visualization, information retrieval, and high-performance computing.
Data mining-on what kind of data?
Data mining should be applicable to any kind of information repository. This includes relational databases, data warehouses, transactional databases, advanced database systems (including object-oriented and object-relational databases), and application-oriented databases such as spatial databases, text databases, and multimedia databases. The challenges and techniques of mining may differ for each kind of repository.
DATA PREPROCESSING:
We have to preprocess the data in order to help improve its quality. Today's real-world databases are highly susceptible to noisy, missing, and inconsistent data due to their typically huge size and their likely origin from multiple heterogeneous sources.
DATA CLEANING - Removes noise and corrects inconsistencies in the data.
DATA INTEGRATION - Merges data from multiple sources into a coherent data store.
The above techniques are not mutually exclusive; they may work together.
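The two techniques can be sketched together on toy records. All field names, values, and the canonical-spelling table below are made up for illustration.

```python
# Cleaning fixes inconsistent values; integration merges records from
# two sources into one coherent store, keyed here by a customer id.
source_a = [{"id": 1, "city": "NYC"}, {"id": 2, "city": "la"}]
source_b = [{"id": 2, "age": 34}, {"id": 3, "age": 51}]

CANONICAL = {"NYC": "New York", "la": "Los Angeles"}

def clean(records):
    """Replace known inconsistent spellings with canonical values."""
    return [{**r, "city": CANONICAL.get(r.get("city"), r.get("city"))}
            for r in records]

def integrate(*sources):
    """Merge records that share an id into single rows."""
    store = {}
    for src in sources:
        for r in src:
            store.setdefault(r["id"], {}).update(r)
    return store

store = integrate(clean(source_a), source_b)
print(store[2])  # {'id': 2, 'city': 'Los Angeles', 'age': 34}
```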
OLAP TECHNOLOGY:
Data warehouses provide On-Line Analytical Processing (OLAP) tools for interactive analysis of multidimensional data at varied granularities, which facilitates data generalization and data mining. Many other data mining functions, such as association, classification, prediction and clustering, can be integrated with OLAP operations to enhance interactive mining of knowledge at multiple levels of abstraction.
MULTIDIMENSIONAL DATA MODEL:
Data warehouses and OLAP tools are based on a multidimensional data model. This model allows data to be modeled and viewed in multiple dimensions. It is defined by dimensions and facts.
between dimensions”. The fact table contains measures as well as keys to each of the related dimension tables.
3-d CUBES:
We usually think of cubes as 3-d geometric structures, but in data warehousing the data cube is n-dimensional. An example of a 3-d cube is shown in figure (2) below.
[Figure 2: a 3-d data cube with dimensions Location (California, Utah, Arizona, Washington, Colorado), Time (dates from 1/1/2003 to 12/31/2003) and Product (soda, diet soda, orange soda, lime soda), where each cell holds a sales measure]
If we want to add or view additional data we can take a 4-d approach: we can think of a 4-d cube as a series of 3-d cubes. The data cube is a metaphor for data storage.
Snowflake schema: The snowflake schema is a variant of the star schema in which some dimension tables are normalized, thereby further splitting the data into additional tables. The major difference between the snowflake and star schemas is that the dimension tables of the snowflake model are kept in normalized form to avoid redundancies.
OLAP operations in a Multidimensional Data Model: roll-up, drill-down, slice and dice, and pivot.
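The roll-up and slice operations can be sketched in plain Python. This is an illustrative toy, not a real OLAP engine: the cube is a dictionary keyed by (location, quarter, product), and all names and sales figures below are made up.

```python
from collections import defaultdict

# Toy fact data: (location, quarter, product) -> sales units.
# Every value here is hypothetical, for illustration only.
cube = {
    ("Utah", "Q1", "soda"): 40,
    ("Utah", "Q1", "diet soda"): 90,
    ("Arizona", "Q1", "soda"): 70,
    ("Arizona", "Q2", "soda"): 55,
}

def roll_up(cube, dim_index):
    """Roll-up: aggregate away one dimension of the cube."""
    result = defaultdict(int)
    for key, value in cube.items():
        reduced = key[:dim_index] + key[dim_index + 1:]
        result[reduced] += value
    return dict(result)

def slice_cube(cube, dim_index, member):
    """Slice: select the sub-cube where one dimension is fixed."""
    return {k: v for k, v in cube.items() if k[dim_index] == member}

# Roll up over the product dimension (index 2):
print(roll_up(cube, 2))            # e.g. ('Utah', 'Q1') -> 130
# Slice on location = 'Arizona':
print(slice_cube(cube, 0, "Arizona"))
```

Drill-down is the reverse of roll-up (navigating to finer granularity), and pivot merely reorders the dimensions for presentation.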
Choose the dimensions that will apply to each fact table record.
Choose the measures that will populate each fact table record.
This is considered a good choice for data warehouse development, especially for data marts, because the turnaround time is short, modifications can be done quickly, and new designs and technologies can be adopted in a timely manner.
The top tier is a client, which contains query and reporting tools, analysis tools, and/or data mining tools (e.g., trend analysis and prediction).
MOLAP TECHNOLOGY:
Partition the array into chunks. A chunk is a subcube that is small enough to fit into the memory available for cube computation. Chunking is a method for dividing an n-dimensional array into chunks, where each chunk is stored as an object on disk.
Compute aggregates by visiting (i.e., accessing the values at) cube cells. The order in which cells are visited can be optimized so as to minimize the number of times that each cell must be revisited, thereby reducing memory access and storage costs.
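The chunk-partitioning step above can be sketched as follows. This is a minimal illustration of how an n-dimensional array is split into fixed-size subcube bounds; the shapes used are hypothetical, and a real MOLAP engine would additionally serialize each chunk to disk.

```python
from itertools import product

def chunk_bounds(shape, chunk_shape):
    """Yield the (start, stop) bounds of each chunk of an
    n-dimensional array, so that each chunk can be loaded into
    memory and aggregated independently of the others."""
    ranges = [range(0, dim, c) for dim, c in zip(shape, chunk_shape)]
    for starts in product(*ranges):
        yield tuple((s, min(s + c, dim))
                    for s, c, dim in zip(starts, chunk_shape, shape))

# A 4 x 6 array cut into 2 x 3 chunks yields 2 * 2 = 4 chunks:
print(list(chunk_bounds((4, 6), (2, 3))))
```

Visiting cells chunk by chunk, rather than cell by cell across the whole array, is what lets the aggregation loop keep each subcube in memory exactly once.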
Metadata Repository:
The metadata repository includes operational metadata, such as indices and profiles that improve data access and retrieval performance, in addition to rules for the timing and scheduling of refresh, update, and replication cycles; and business metadata, which include business terms and definitions, data ownership information, and charging policies.
Data cleaning, which detects errors in the data and rectifies them when possible.
Refresh, which propagates the updates from the data sources to the warehouse.
[Figure: a typical data warehousing architecture. Detailed transactional data from operational branch databases (e.g., Bombay, Delhi, and Calcutta branches on Oracle, IMS, and SAS) and external sources such as GIS and census data are merged, cleaned, and summarized into a relational data warehouse DBMS (e.g., Red Brick). On top sit the decision support tools: direct query tools, reporting tools (e.g., Crystal Reports), OLAP tools (e.g., Essbase), and mining tools (e.g., Intelligent Miner).]
Conclusion:
Before mining the data, we have to preprocess it to prevent anomalies such as noise.
ARCHITECTURE, CHALLENGES AND RESEARCH
ISSUES
ON
MOBILE COMPUTING
BY
H.SUSHMA
III CSE
G.PULLA REDDY ENGINEERING COLLEGE
KURNOOL
Email: sushmareddy.h@gmail.com
N.KAUSER JAHAN
III CSE
G.PULLA REDDY ENGINEERING COLLEGE
KURNOOL
Email: kauser.jahan3@gmial.com
Abstract
With the advent of the Internet and the plurality and variety of fancy applications it
brought with it, the demand for more advanced services on cellular phones is becoming increasingly urgent. Unfortunately, so far the introduction of new enabling technologies has not succeeded in boosting new services. The adoption of Internet services has proven to be more difficult due to the difference between the
examine the characteristics of the mobile system and to clarify the constraints that
are imposed on existing mobile services. The paper will also investigate
successively the enabling technologies and the improvements they brought. Most
importantly, the paper will identify their limitations and capture the fundamental
requirements for future mobile service architectures namely openness, separation
of service logic and content, multi-domain services, personalization, Personal Area
Network (PAN)-based services and collaborative services. The paper also explains
the analysis of current mobile service architecture such as voice communication;
supplementary services with intelligent network, enabling services on SIM with SIM
application tool kit, text services with short message service, internet services with
WAP and dynamic applications on mobile phones with J2ME.
Further, our paper gives information on the challenges of mobile computing, which include harsh communications, connections, bandwidth, and heterogeneous networks. Under research issues, seamless connectivity over multiple overlays, scalable mobile processing, wireless communications, mobility, and portability are discussed.
1. Introduction
With digitalization, the difference between telecommunication and computer networking is fading and the same technologies are used in both fields. However, the convergence does not progress as rapidly as expected. Moving applications and services from one field to the other has proven to be very difficult or in many cases impossible. The explanation is that, although the technologies in use are rather similar, there are crucial differences in architecture and concepts. The paper starts with a study of how mobile services are implemented in mobile telecommunication systems and an identification of their limitations so as to meet the needs of the future.
An SMS Gateway can allow the system running it to act as an SMSC itself (e.g., a PC using a radio modem), together with the additional identifiers and parameters for a specific service (the protocol).
2.5 Internet access with WAP: Wireless Application Protocol (WAP) [5] was designed to provide access to the WWW on handheld terminals. A micro browser is installed on the terminal, and a gateway between the Internet and the mobile network converts Internet protocols to wireless binary protocols, as shown in Figure 3. One restriction of the technology is that it is not possible to access ordinary web pages using a WAP browser.
3. Advanced Architecture
This section aims at identifying and elucidating the advanced pieces and hence contributes to the definition of an advanced architecture.
The properties of a service depend on the architecture, and particularly on the location of its components; distinguishing the two components, service logic and service content, makes the analysis easier. In early mobile telecom services the service logic was embedded in dedicated hardware components. This has been a hindrance to the development of flexible services; such services will by default not be accessible from outside an operator domain.
To enhance the mobility of services, it is necessary to decouple the service logic from the system components.
3.3 PAN-based Services: Nowadays, each individual uses several devices, such as mobile phones, PDAs, digital cameras, and GPS receivers. With the emergence of wireless short-range technologies like Bluetooth and WLAN, Personal Area Networks can potentially be formed to allow communication between devices.
Freedom from Collocation
• Harsh communications environment:
The unfavorable communication environment brings lower bandwidth and higher latency, not good enough for videoconferencing or similar processes. It has higher error rates and more frequent disconnections. Its performance depends on the density of nearby users, but the inherent scalability of the cellular/frequency-reuse architecture helps.
• Connection/Disconnection:
Network failure is a common issue and therefore autonomous operation is highly desirable. Caching is often a good idea, e.g., a web cache.
• Low Bandwidth
– Orders-of-magnitude differences between wide-area and in-building wireless.
• Variable Bandwidth
– Applications must adapt to the changing quality of connectivity.
• Heterogeneous Networks
– "Vertical handoff" among collocated wireless networks.
5. Research Issues
• Seamless connectivity over multiple overlays
– Implementing low-latency handoffs
– Exploiting movement tracking and geography
– Performance characterization of channels
– Authentication, security, privacy
– Network management
– Integration with local- and wide-area networked servers
– Application support for adaptive connections
• Wireless Communications
– Quality of connectivity and bandwidth limitations
• Mobility
– Location transparency and location dependency
• Portability
– Power limitations; display, processing, and storage limitations
6. Conclusion
This paper presents an analysis of the evolutionary path of mobile services, from early voice communication services to prospects of future service possibilities. It is argued that increasing openness can help advance the future of mobile services. Each of the concepts discussed around mobile services in this paper is a research area of its own and must be further elaborated in separate studies. Thus, the discussions in this paper are preliminary and address only the basic structures; further work will be carried out.
7. References
Gunnar Heine, GSM Networks: Protocols, Terminology and Implementation.
http://www.iitd.ac.in
J. B. Andersen, T. S. Rappaport, S. Yoshida, "Propagation Measurements and Models for Wireless Communications Channels," IEEE Communications Magazine, pp. 42-49.
H. Forman, J. Zahorjan, "The Challenges of Mobile Computing," IEEE Computer, V 27, N
WIRELESS ATTACKS
Deploying a wireless network does not require special expertise. If a department is eager to expand its network and it can’t or doesn’t want to wait for the normal IT process, it can expand the network itself cheaply and easily, just by plugging an Access Point into an Ethernet jack and by plugging a wireless card into a laptop.
Attacks
There are several approaches to locating a wireless network. The most basic
method is a surveillance attack. You can use this technique on the spur of the
moment, as it requires no special hardware or preparation. Most significantly, it is
difficult, if not impossible, to detect. How is this type of attack launched? You
simply observe the environment around you.
• War driving
The term war driving is borrowed from the 1980s phone hacking tactic
known as war dialing. War dialing involves dialing all the phone numbers in a
given sequence to search for modems. In fact, this method of finding modems
is so effective that it's still in use today by many hackers and security
professionals. Similarly, war driving, which is now in its infancy, will most likely
be used for years to come both to hack and to help secure wireless networks.
• Client-to-client hacking
Clients exist on both wireless and wired networks. A client can be anything from a Network Attached Storage (NAS) device, to a printer, or even a server. Because the majority of consumer operating systems are Microsoft based, and since the majority of users do not know how to secure their computers, there is plenty of room to play here. An attacker can connect
to the laptop, upon which he could exploit any number of operating system
vulnerabilities, thus gaining root access to the laptop.
Denial-of-service (DoS) attacks are those that prevent the proper use of
functions or services. Such attacks can also be extrapolated to wireless
networks.
WEP, as discussed in "Cracking WEP," is fundamentally flawed, allowing an attacker to crack it. Even so, it will thwart the casual drive-by hacker, and it enables another layer of legal protection that prohibits the cracking of transmitted, encrypted signals.
For WLANs, the first step in security hardening is to focus on the access
point. Since the AP is the foundation of wireless LAN data transfer, you must
ensure that it is part of the solution, instead of the problem.
Simply enabling WEP can eliminate 99% of your threat. Similar to a car lock, WEP will protect your network from passers-by; however, just as a dedicated thief will quickly bypass the lock by smashing a car window, a dedicated hacker will put forth the effort to crack WEP if it is the only thing between him and your network.
• MAC Filtering: Every device on a wireless network has, by default, a unique address that's used to identify one WNIC from another. This address is called the MAC address. To find the MAC address of the network card, the user only has to perform a few steps, which vary by operating system. However, while this is in theory an excellent way to stop hackers from accessing your WLAN, there is a serious flaw in MAC filtering: MAC addresses can be spoofed by changing WNIC settings.
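The filtering logic itself amounts to an allow-list lookup on the claimed hardware address, which is why spoofing defeats it: the check never sees anything but the address the WNIC reports. A minimal sketch, with a hypothetical allow-listed address:

```python
def normalize_mac(mac):
    """Canonicalize a MAC address to lowercase colon-separated form,
    accepting the common dash, colon, and dot notations."""
    digits = mac.lower().replace("-", "").replace(":", "").replace(".", "")
    if len(digits) != 12 or not all(c in "0123456789abcdef" for c in digits):
        raise ValueError("not a MAC address: %r" % mac)
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

# Hypothetical allow list an access point might hold.
ALLOW_LIST = {normalize_mac("00-1A-2B-3C-4D-5E")}

def is_allowed(mac):
    """The whole of MAC filtering: trust whatever address is claimed."""
    return normalize_mac(mac) in ALLOW_LIST

print(is_allowed("00:1a:2b:3c:4d:5e"))   # allowed
print(is_allowed("aa:bb:cc:dd:ee:ff"))   # rejected
```

An attacker who observes an allowed address on the air can simply configure a WNIC to report it, passing this check unchanged.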
• Controlling Radiation zone: When a wireless network is active, it
broadcasts radio frequency (RF) signals. These signals are used to
transmit the wireless data from an access point to the WNIC and back
again. The same signal is also used in ad-hoc networks, or even between
PDAs with 802.11 WNICs. By using antenna management techniques, you
can control the range of your WLAN. In high-rise buildings or apartment
complexes, this can be a serious issue. Interference, and nosy neighbors, can quickly become a problem. By removing one antenna, reducing the output, and adjusting the position of the antenna, you can effectively keep the signal within a tight range.
• Defensive Security Through a DMZ: A DMZ, or demilitarized zone,
is a concept of protection. A DMZ typically defines where you place
servers that access the Internet. In other words, a Web server or mail
server is often set up in a DMZ. This allows any Internet user to access
the allocated resources on the server, but if the server becomes
compromised, a hacker will not be able to use the "owned" computer to
search out the rest of the network. Technically, a DMZ is actually its own
little network, separate from the internal network, and separate from the
Internet. However, while this type of protection can help protect internal
resources, it will not protect the wireless network users. Therefore, the
DMZ should be just one part of your wireless security plan.
Thus, VPNs are secure communication solutions that take advantage of public
networks to lower your costs. However, VPNs have their share of problems.
Steel-Belted Radius earns a second look because it provides extra security for WLAN users, working with existing access points to ensure that only authorized users are allowed access.
Features of Funk's Steel-Belted Radius:
• Central User Administration: Steel-Belted Radius manages remote and dial-up users who connect via remote access servers from 3Com, Cisco, Lucent, Nortel, and others.
• Internet users who connect via firewalls from Check Point, Cisco, and
others.
• Tunnel/VPN users who connect via routers from 3Com, Microsoft, Nortel,
Red Creek, V-One, and others.
• Remote users who connect via outsourced remote access services from
ISPs and other service providers.
• Wireless LAN users who connect via access points from Cisco, 3Com,
Avaya, Ericsson, Nokia and others.
• Users of any other device that supports the RADIUS protocols.
Steel-Belted Radius also makes it possible to manage both wireless LAN and
remote users from a single database and console, greatly reducing your
administrative burden by eliminating the need for two separate authentication
systems.
• TKIP: The Temporal Key Integrity Protocol (TKIP) was designed to address WEP's problems. TKIP uses RC4 as the encryption algorithm, but it removes the weak-key problem and forces a new key to be generated every 10,000 packets or 10 KB, depending on the source. TKIP also provides a stronger and more secure method of verifying the integrity of the data. Called the Message Integrity Check, this part of TKIP closes a hole that would enable a hacker to inject data into a packet so he can more easily deduce the streaming key used to encrypt the data.
• AES: Advanced Encryption Standard (AES) is a newer encryption method that was selected by the U.S. government to replace DES as its standard. It is quite strong, and is actually under review for the next version of the wireless 802.11 standard (802.11i). AES allows different sizes of keys, depending on need. The key size directly reflects the strength of the encryption, as well as the amount of processing required to encrypt and decipher the text: there are about 3.4 × 10^38 possible 128-bit keys.
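The size of the key space quoted above follows directly from the key length, since an n-bit key has 2^n possible values. A quick check:

```python
# Number of distinct keys for the common AES key sizes.
for bits in (128, 192, 256):
    print(bits, "->", 2 ** bits, "possible keys")

# The 128-bit key space in scientific notation, about 3.4 x 10^38:
print("%.1e" % 2 ** 128)
```

Each additional bit doubles the key space, which is why 256-bit keys are vastly harder to brute-force than 128-bit keys, at the cost of more processing per block.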
• SSL: The Secure Sockets Layer (SSL) protocol encrypts data before it is sent over the Internet. This provides a layer of security to any sensitive data and has been incorporated into almost all facets of online communication. Everything from Web stores, online banking, Web-based email sites, and more use SSL to keep data secure. The reason SSL is so important is that, without encryption, anyone with access to the data pipeline can sniff and read the information as plaintext. By using a Web browser with SSL enabled, an end user can make a secure and encrypted connection to a WLAN authentication server without having to deal with cumbersome software. As most wireless users will be familiar with using secure Web sites, the integration of SSL will go unnoticed. Once the connection is made, the user account information can be passed securely and safely.
[Figure: public-key encryption. Plaintext input is fed to the encryption algorithm using a key taken from the public key ring; the decryption algorithm then recovers the plaintext output.]
Two common one-way hash functions are MD5 and SHA-1. MD5 produces a 128-bit hash value and is now considered less secure. SHA-1 produces a 160-bit hash value. In PKI, hashes are used to create digital signatures.
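The output sizes above are easy to verify with Python's standard hashlib module; the message below is an arbitrary example.

```python
import hashlib

message = b"an arbitrary example message"
md5_digest = hashlib.md5(message).hexdigest()
sha1_digest = hashlib.sha1(message).hexdigest()

# MD5 yields 128 bits (32 hex characters), SHA-1 yields 160 bits (40).
print(len(md5_digest) * 4)    # bits in the MD5 digest
print(len(sha1_digest) * 4)   # bits in the SHA-1 digest

# A one-way hash is deterministic: the same input always hashes the same,
# but the digest cannot feasibly be inverted back to the message.
assert hashlib.md5(message).hexdigest() == md5_digest
```

It is this fixed-size, irreversible fingerprint that gets signed with a private key to form a digital signature.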
CONCLUSION
Today, most organizations deploying wireless LANs simply haven’t put enough effort into securing them. It isn’t right, but it is true. Just like in the wired world, organizations only began to take Internet security seriously after there had been a series of highly visible and financially damaging hacker attacks. Only a similar series of public wireless disasters will catalyze the change needed for organizations to take wireless security more seriously.
While there are a number of inherent security problems with the 802.11
technology, there are also many straightforward measures that can be taken to
mitigate them. As with many new technologies, the best way to get started is to recognize the problems and make a commitment to address the ones that can reasonably be solved in our environment.
Digital Certificates, and Cryptographic Modules are the fundamental
building blocks of strong authentication and there is no way around that. You can
make the best of it by leveraging the hefty investment for all your security needs.
ABSTRACT
The internet is widely used in all fields in the present competitive world: education, research, business, and more. But providing security for users' information, transactions, and other data in any of these fields has become paramount. This paper gives a vivid picture of e-commerce and the vulnerabilities faced in providing a secure system for users; in other words, how security attacks are made by hackers or intruders, and the ways they attack and exploit systems for illegitimate means.
This paper is an overview of the security and privacy concerns based on the
experiences as developers of e-commerce. E-commerce is a business middleware that accelerates the development of any business transaction-oriented application, from the smallest retailer to the distributor, to the consumer (user). These transactions may apply between manufacturers and distributors or suppliers. Here, the user needs to be assured of the privacy of his/her information. In this article, we focus on possible attack scenarios in an e-commerce system and provide preventive strategies, including security features that one can implement.
Here we present better ways to defend against the attacks and protect your personal data without depending on the network provider’s security, with the help of personal firewalls and honey pots.
INTRODUCTION
E-Commerce refers to the exchange of goods and services over the Internet. All
major retail brands have an online presence, and many brands have no associated
bricks and mortar presence. However, e-Commerce also applies to business to
business transactions, for example, between manufacturers and suppliers or
distributors.
E-Commerce systems are relevant for the services industry. For example,
online banking and brokerage services allow customers to retrieve bank
statements online, transfer funds, pay credit card bills, apply for and receive
approval for a new mortgage, buy and sell securities, and get financial guidance
and information.
A secure system accomplishes its task with no unintended side effects. Using the analogy of a house to represent the system: you decide to carve out a piece of your front door to give your pets easy access to the outdoors. However, the hole is too large, giving access to burglars. You have created an unintended implication
and therefore, an insecure system. While security features do not guarantee a
secure system, they are necessary to build a secure system. Security features
have four categories:
• Authentication: Verifies who you say you are. It enforces that you are the
only one allowed to logon to your Internet banking account.
• Authorization: Allows only you to manipulate your resources in specific
ways. This prevents you from increasing the balance of your account or
deleting a bill.
• Encryption: Deals with information hiding. It ensures you cannot spy on
others during Internet banking transactions.
• Auditing: Keeps a record of operations. Merchants use auditing to prove
that you bought specific merchandise.
A threat is a possible attack against a system. It does not necessarily mean that
the system is vulnerable to the attack. An attacker can threaten to throw eggs
against your brick house, but it is harmless. Vulnerability is a weakness in the system, but it is not necessarily known by the attacker. Vulnerabilities exist at
entry and exit points in the system. In a house, the vulnerable points are the
doors and windows.
Points the attacker can target
• Shopper
• Shopper's computer
• Network connection between shopper and Web site's server
• Web site's server
• Software vendor
Tricking the shopper: Some of the easiest and most profitable attacks are
based on tricking the shopper, also known as social engineering techniques. These
attacks involve surveillance of the shopper's behavior, gathering information to
use against the shopper. For example, a mother's maiden name is a common
challenge question used by numerous sites. If one of these sites is tricked into
giving away a password once the challenge question is provided, then not only
has this site been compromised, but it is also likely that the shopper used the
same logon ID and password on other sites.
Sniffing the network: In this scheme, the attacker monitors the data between
the shopper's computer and the server. There are points in the network where
this attack is more practical than others. If the attacker sits in the middle of the
network, then within the scope of the Internet, this attack becomes impractical. A
request from the client to the server computer is broken up into small pieces
known as packets as it leaves the client's computer and is reconstructed at the
server. The packets of request are sent through different routes. The attacker
cannot access all the packets of a request and cannot decipher what message was
sent.
Using server root exploits: Root exploits refer to techniques that gain super
user access to the server. This is the most coveted type of exploit because the
possibilities are limitless. When you attack a shopper or his computer, you can
only affect one individual. With a root exploit, you gain control of the merchants
and all the shoppers' information on the site. There are two main types of root
exploits: buffer overflow attacks and executing scripts against a server.
DEFENSES
Despite the existence of hackers and crackers, e-Commerce remains a safe and
secure activity. The resources available to large companies involved in e-
Commerce are enormous. These companies will pursue every legal route to
protect their customers. Figure 6 shows a high-level illustration of defenses
available against attacks.
Education: Your system is only as secure as the people who use it. If a shopper
chooses a weak password, or does not keep their password confidential, then an
attacker can pose as that user. Users need to use good judgment when giving out
information, and be educated about possible phishing schemes and other social
engineering attacks.
Secure Socket Layer (SSL): Secure Socket Layer (SSL) is a protocol that
encrypts data between the shopper's computer and the site's server. When an
SSL-protected page is requested, the browser identifies the server as a trusted
entity and initiates a handshake to pass encryption key information back and
forth. Now, on subsequent requests to the server, the information flowing back
and forth is encrypted so that a hacker sniffing the network cannot read the
contents.
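The handshake described above can be sketched from the client's side with Python's standard ssl module. This is an illustrative sketch, not a browser: the hostname passed to `fetch_page` is whatever server you choose to contact, and the context's defaults perform the "trusted entity" check against the system CA store.

```python
import socket
import ssl

# Client-side context: verifies the server's certificate chain and
# hostname by default, establishing the server as a trusted entity.
context = ssl.create_default_context()

def fetch_page(host, path="/"):
    """Perform the TLS handshake, then send one encrypted HTTP request.
    Everything after wrap_socket() is opaque to a network sniffer."""
    with socket.create_connection((host, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
            tls.sendall(request.encode())
            return tls.recv(4096)

# Certificate verification is on by default in this context:
print(context.verify_mode == ssl.CERT_REQUIRED)
```

Because the symmetric session keys are negotiated during the handshake, a sniffer positioned between shopper and server sees only ciphertext on every subsequent request.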
Server firewalls: A firewall is like the moat surrounding a castle. It ensures that
requests can only enter the system from specified ports, and in some cases,
ensures that all accesses are only from certain physical machines. A common
technique is to set up a demilitarized zone (DMZ) using two firewalls. The outer firewall has ports open that allow incoming and outgoing HTTP requests. This allows
the client browser to communicate with the server. A second firewall sits behind
the e-Commerce servers. This firewall is heavily fortified, and only requests from
trusted servers on specific ports are allowed through. Both firewalls use intrusion
detection software to detect any unauthorized access attempts. Figure shows the
firewalls and honey pots.
Password policies: Ensure that password policies are enforced for shoppers and internal users. You may choose to have different policies, such as those provided by federal information standards, for shoppers versus your internal users. For example, you may choose to lock out an administrator after 3 failed login attempts instead of 6. These password policies protect against attacks that attempt to guess the user's password. They ensure that passwords are sufficiently strong that they cannot be easily guessed.
There are many established policies and standards for avoiding security issues.
However, they are not required by law. Some of the basic rules include:
Security best practices remain largely an art rather than a science, but there are
some good guidelines and standards that all developers of e-Commerce software
should follow.
Using cookies
One of the issues faced by Web site designers is maintaining a secure session with
a client over subsequent requests. Because HTTP is stateless, unless some kind of
session token is passed back and forth on every request, the server has no way to
link together requests made by the same person. Cookies are a popular
mechanism for this. An identifier for the user or session is stored in a cookie and
read on every request. You can use cookies to store user preference information,
such as language and currency. The primary use of cookies is to store
authentication and session information, your information, and your preferences. A
secondary and controversial usage of cookies is to track the activities of users.
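The session-token mechanism can be sketched with Python's standard http.cookies module. This is an illustrative round trip, not a framework: the cookie name is hypothetical, and the HttpOnly/Secure flags shown are the usual hardening for authentication cookies.

```python
from http.cookies import SimpleCookie
import secrets

# Server side: issue a random, unguessable session token.
token = secrets.token_hex(16)
cookie = SimpleCookie()
cookie["session_id"] = token
cookie["session_id"]["httponly"] = True   # hide the token from page scripts
cookie["session_id"]["secure"] = True     # only send over SSL connections

# The header the server adds to its HTTP response:
print(cookie.output())   # Set-Cookie: session_id=...; HttpOnly; Secure

# On the next request the browser echoes "Cookie: session_id=..." back;
# the server parses it to link the stateless request to the session.
incoming = SimpleCookie()
incoming.load("session_id=" + token)
print(incoming["session_id"].value == token)
```

Because the token, not the user's credentials, travels on every request, stealing the cookie is equivalent to stealing the session, which is why the HttpOnly and Secure flags matter.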
As a shopper, you can use a security checklist to protect yourself.
Threat models are particularly important when relying on a third party vendor for
all or part of the site's infrastructure. This ensures that the suite of threat models
is complete and up-to-date.
Conclusion
This article outlined the key players and security attacks and defenses in an e-
Commerce system. Current technology allows for secure site design. It is up to
the development team to be both proactive and reactive in handling security
threats, and up to the shopper to be vigilant when shopping online.
PAPER PRESENTATION
ON
DATA WAREHOUSING & MINING
FOR
RIPPLE-2K7
IX NATIONAL LEVEL
STUDENT TECHNICAL SYMPOSIUM FOR PAPER
PRESENTATION
HELD ON
29th December 2007
Presented by:-
1. A.MADHAVI LATHA
III – CSE B
Phone 9959276798
E-mail: madhavi.latha1010@gmail.com
2. A. PRAGNYA
II – CSE A
Phone 9966829702
E-mail: pragnya1021@gmail.com
FROM
KURNOOL
ABSTRACT
Data warehousing is the process whereby organizations extract value from their informational assets through the use of special stores called data warehouses. In general, a data warehouse is defined to be subject-oriented, integrated, time-variant, and non-volatile. A data warehouse is created using data from both internal and external source systems.
Data mining, the extraction of hidden predictive information from large
databases, is a powerful new technology with great potential to help companies
focus on the most important information in their data warehouses.
This paper presents these strategies and technologies that will enhance the ROI
of Data Warehousing.
INTRODUCTION
Data warehousing primarily deals with gathering data from multiple transaction processing systems and external sources, ensuring data quality, organizing the data for information processing, and providing information retrieval and analysis through on-line analytical processing (OLAP), reporting, web-based, and data mining tools.
Data mining is the discovery of useful patterns in data. Data mining is used for predictive analysis and classification, e.g., what is the likelihood that a customer will migrate to a competitor?
DATA WAREHOUSING
A data warehouse is a combination of data from all types of sources and has the following characteristics: subject-oriented, integrated, time-variant, and non-volatile. Data warehouses are specifically designed to maximize performance on queries run against them for analysis. Data warehousing is open to an almost limitless range of definitions. Simply put, data warehouses store an aggregation of a company's data.
Data Warehouses are an important asset for organizations to maintain
efficiency, profitability and competitive advantages. Organizations collect data through many sources: online, call center, sales leads, inventory management. The data collected have varying degrees of value and business relevance.
As data is collected, it is passed through a 'conveyor belt' called Data Life Cycle Management.
An organization's data life cycle management's policy will dictate the data
warehousing design and methodology.
The goal of Data Warehousing is to generate front-end analytics that will support
business executives and operational managers.
Pre-Data Warehouse
The pre-Data Warehouse zone provides the data for data warehousing. Data
Warehouse designers determine which data contains business value for insertion.
OLTP databases are where operational data are stored. OLTP databases can reside in transactional software applications such as Enterprise Resource Management (ERP), Supply Chain, Point of Sale, and Customer Service software. OLTPs are designed for transaction speed and accuracy.
Metadata ensures the sanctity and accuracy of data entering into the data
lifecycle process. Meta-data ensures that data has the right format and
relevancy. Organizations can take preventive action in reducing cost for the ETL
stage by having a sound Metadata policy. The commonly used terminology to
describe Metadata is "data about data".
Data Cleansing
Before data enters the data warehouse, the extraction, transformation, and loading (ETL) process ensures that the data passes the data quality threshold.
ETLs are also responsible for running scheduled tasks that extract data from
OLTPs.
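The cleaning half of an ETL pass can be sketched in a few lines. This is a toy illustration: the rows, field names, and quality rules below are hypothetical, standing in for whatever threshold an organization's metadata policy defines.

```python
# Hypothetical raw rows extracted from an OLTP source.
raw_rows = [
    {"customer": "Asha", "amount": "120.50"},
    {"customer": "", "amount": "75.00"},      # noise: missing key field
    {"customer": "Ravi", "amount": "n/a"},    # inconsistency: bad number
    {"customer": "Meena", "amount": "210.00"},
]

def transform(rows):
    """Cleaning step of an ETL pass: keep only rows that meet the
    data-quality threshold, with values coerced to proper types."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])   # reject inconsistent values
        except ValueError:
            continue
        if not row["customer"]:             # reject incomplete records
            continue
        clean.append({"customer": row["customer"], "amount": amount})
    return clean

warehouse_rows = transform(raw_rows)
print(len(warehouse_rows), "of", len(raw_rows), "rows pass the threshold")
```

In a real pipeline this transform runs on the scheduled extracts from the OLTPs, so only rows that pass the threshold ever reach the warehouse repository.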
Data Repositories
The Data Warehouse repository is the database that stores active data of
business value for an organization. The Data Warehouse modeling design is
optimized for data analysis. There are variants of data warehouses: Data Marts and ODS. Data Marts are not physically any different from data warehouses; they can be thought of as smaller data warehouses built on a departmental rather than on a company-wide level. A data warehouse collects data and is the repository for historical data, so it is not always efficient for providing up-to-date analysis. This is where ODS, Operational Data Stores, come in. ODS are used to hold recent data before migration to the data warehouse, and they hold data with a deeper history than OLTPs. Keeping large amounts of data in OLTPs can tie down computer resources and slow down processing: imagine waiting at the ATM for 10 minutes between the prompts for inputs.
Front-End Analysis
The last and most critical portion of the data warehouse overview is the front-end applications that business users will use to interact with data stored in the repositories.
Data becomes active as soon as it is of interest to an organization. Data life cycle
begins with a business need for acquiring data. Active data are referenced on a
regular basis during day-to-day business operations. Over time, this data loses
its importance and is accessed less often, gradually losing its business value, and
ending with its archival or disposal.
Active Data
Active data is of business use to an organization. The ease of access for
business users to active data is an absolute necessity in order to run an efficient
business.
Understanding that all data moves through life-cycle stages is key to improving
data management. By understanding how data is used and how long it must be
retained, companies can develop a strategy to map usage patterns to the optimal
storage media, thereby minimizing the total cost of storing data over its life
cycle. When data is stored in a relational database, the challenge of managing
and storing it is compounded by the complexities inherent in data relationships.
Inactive Data
Data are put out to pasture once they are no longer active, i.e. no
longer needed for critical business tasks or analysis. Prior to the mid-nineties,
most enterprises archived data on microfilm and tape back-ups.
There are now technologies for data archival such as Storage Area Networks
(SAN), Network Attached Storage (NAS) and Hierarchical Storage
Management. These storage systems can maintain referential integrity and
business context.
What is OLAP?
OLAP allows business users to slice and dice data at will. Normally data in an
organization is distributed across multiple data sources that are incompatible
with each other. A retail example: point-of-sale data and sales made via the
call-center or the Web are stored in different locations and formats.
It would be a time-consuming process for an executive to obtain OLAP reports.
Part of the OLAP implementation process involves extracting data from the
various data repositories and making them compatible. Making data compatible
involves ensuring that the meaning of the data in one repository matches all
other repositories. It is not always necessary to create a data warehouse for
OLAP analysis. Data stored by operational systems, such as point-of-sale, are in
types of databases called OLTPs. OLTP, Online Transaction Processing, databases
do not differ structurally from other databases; the main, and only, difference
is the way in which data is stored. Examples of OLTPs include ERP, CRM, SCM,
point-of-sale, and call-center applications.
OLTPs are designed for optimal transaction speed. When a consumer makes a
purchase online, they expect the transactions to occur instantaneously. Next
cubes for OLAP database are built and at last Reports are produced.
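The flow above (extract, reconcile, build cubes, report) can be sketched with a toy cube in Python; the records, dimensions and figures here are invented purely for illustration:

```python
from collections import defaultdict

# Toy sales records from two hypothetical sources (point-of-sale and web)
# after being made compatible; all names and numbers are illustrative.
records = [
    {"channel": "pos", "region": "East", "month": "Jan", "sales": 100},
    {"channel": "web", "region": "East", "month": "Jan", "sales": 40},
    {"channel": "pos", "region": "West", "month": "Jan", "sales": 70},
    {"channel": "web", "region": "West", "month": "Feb", "sales": 55},
]

# "Cube": total sales for every (channel, region, month) cell.
cube = defaultdict(int)
for r in records:
    cube[(r["channel"], r["region"], r["month"])] += r["sales"]

def slice_(dim_index, value):
    """Slice the cube: fix one dimension, total over the others."""
    return sum(v for k, v in cube.items() if k[dim_index] == value)

print(slice_(0, "pos"))  # total point-of-sale revenue
print(slice_(2, "Jan"))  # total January revenue across channels
```

A real OLAP engine precomputes such aggregates over far more dimensions; the point is only that a cube is an aggregate indexed by dimension values, and a slice fixes one of them.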
DATA MINING
DAT-A is a data mining and OLAP platform for MySQL. The goals of DAT-A are to
design a high-end analytical product from the needs of the end-user and then
follow through on the technology. Data mining software are noted for their lack
of ease of usability. This together with the abstract nature of analytics has meant
that there has been a relatively low pick-up of data mining in the commercial
world. Data mining has a useful role to play but very often the total cost of
ownership far outweighs any business benefits. Biotechnological and financial
firms find data mining an essential part of their competitive advantage. However,
these industries find that the cost of running data warehouses and performing
analytics can be an expensive burden.
Data collected by businesses are increasing at a rapid rate. Most data have
business relevancy and cannot simply be shunted to archive storage should the
cost of storage increase. This is especially valid for analytics, where trends
need to be observed over an extended period for greater accuracy.
storage in an active medium was proving prohibitive for the client. For
competitive reasons it was not possible to simply ignore the data and pare down
cost. A distributed data center was designed, replacing the mainframe
environment that existed previously and presenting management with an
alternative to an expensive server environment: a cluster of over 200 Linux
boxes powered by MySQL databases.
Linux boxes offered a tremendous cost savings over servers. As the amount of
data stored exceeded the terabyte range it was prudent to index the data and
store in a distributed manner over the data cluster.
MySQL was not quite as obvious a choice as it may have seemed. While extremely
fast and reliable, MySQL does not have many sophisticated features that are
needed for data warehousing and data mining. This hurdle was overcome by
using a database transaction engine from InnoDB . Remote database
management systems were built for the MySQL/Linux cluster that allows data
administrators to visualize the "data spread" over the center - this is similar to a
table view found in most of the popular RDBMS databases. The data spread
diagram gave the data administrators the ability to manipulate and transfer data
from afar without the need to log on to the individual box. The data spread
application also allowed the user to build data cubes for OLAP analysis. There
are two methods by which users can perform data mining. The first uses a
stovepipe approach that packages the business needs tightly together with the
data mining algorithms; it allowed business users to directly perform data
mining, with the user given limited latitude in choosing the methodologies. The
second method gave freedom to explore data and choose from a number of
algorithms; the types of user envisioned for it are more seasoned data miners.
Intelligent Data Mining
Data access standards such as OLE-DB, XML for Analysis and JSR will minimize
the challenges of data access. Building a user-friendly software interface for
the end-user is the next step in the evolution of data mining. A comparable
analogy can be made with the increasing ease of use of OLAP client tools. The
J2EE and .NET software platforms offer a large spectrum of built-in APIs that
enable smarter software applications.
Text Mining
Text mining has been on the radar screen of corporate users since the mid-eighties.
Technical limitations and the overall complexity of utilizing data mining has been a
hurdle to text mining that has been surmounted by very few organizations. Text
mining is coming out into the open. Some of the reasons are:
• Storage cost reduction - data are stored in an electronic medium even after being
declared non-active.
• Data volume increase - the exponential growth of data with the lowering of data
transmission cost and increasing usage of the Internet.
• Fraud detection and analysis - there are compelling reasons for organizations to
redress fraud.
• Competitive advantage - It is used to better understand the realms of data in an
organization.
Text data is so called unstructured data. Unstructured data implies that the data are
freely stored in flat files (e.g. Microsoft Word) and are not classified. Structured data
are found in well-designed data warehouses. The meaning of the data is well-known,
usually through meta-data description, and the analysis can be performed directly on
the data. Unstructured data has to jump an additional hoop before it can be
meaningfully analyzed - information will need to be extracted from the raw text.
For fraud detection, the client had a distinct data repository where
model scoring was performed. Based on the model score, reports would be
queried against the data warehouse to produce the claims that were suspect.
This process was inefficient for a number of reasons:
scores onto the fraud detection team. The team performed the investigation
and evaluated the merits of the claims. This was a disjointed process whereby
the fraud detection data mining scores were not being improved based on
investigation results.
• Fraud detection was not receptive to sudden changes in claim patterns. For
metadata repository. The scripting language used by the data extraction
module is Perl, which makes the extraction module highly accessible for making
changes by end-users.
While such open-ended undirected data mining may be suitable in some cases of text
mining, the cost associated can be very high and the results will have a lower
confidence of accuracy.
Future of Data Mining
Data mining algorithms are now freely available. Database vendors have started to
incorporate data mining modules. Developers can now access data mining via open
standards such as OLE-DB for data mining on SQL Server 2000. Data mining
functionality can now be added directly to the application source code.
CONCLUSION
Data warehousing and mining architecture is used to tie together a wide range of
information technology; it also helps to integrate the business purpose and
enterprise architecture of data warehousing solutions. OLAP is best suited to
power IT users and finance/accounting users who like the spreadsheet-based
presentation.
The value of relationship mining can be a useful tool not only in sales and marketing
but also for law enforcement and science research. However there is a threshold
barrier under which relationship mining would not be cost effective.
Text mining has reduced the complexity of utilizing data mining. The security agencies
have a government mandate to intercept data traffic and evaluate for items of
interest. This also involves intercepting international fax transmission electronically
and mining for patterns of interest.
In the future of data mining, the complexity must be hidden from the end-users
before it can take true center stage in an organization. Business use cases can
be designed, with tight constraints, around data mining algorithms.
VISUAL KEYBOARD
(AN APPLICATION OF DIGITAL IMAGE SCANNING)
B.Anusha.
T.Hima bindu.
V.R.SIDDHARTHA ENGINEERING COLLEGE.
Vijaywada.
Ph no:9490907741.
ABSTRACT
Information and communication technologies can have a key role in helping people
with educational needs, considering both physical and cognitive disabilities.
Eye-scanning cameras mounted on computers, replacing the keyboard or mouse, have
become necessary tools for people without limbs or those affected by paralysis. The
camera scans the image of the character, allowing users to ‘type’ on a monitor as
they look at the visual keyboard. The paper describes an input device, based on eye
scanning techniques that allow people with severe motor disabilities to use gaze for
selecting specific areas on the computer screen. It includes a brief description
of the eye and of the visual key system, giving an overall idea of how the
process works, and also deals with the system architecture, which includes
calibration, image acquisition, segmentation, recognition, and the knowledge base.
The paper mainly includes three algorithms - one for face
positioning, one for eye area identification, and one for pupil identification -
which are based on scanning the image to find the black pixel concentration. To
implement this we use software called Dasher, which is highly appropriate for
computer users who are unable to use a two-handed keyboard. One-handed users and
users with no hands love Dasher; the only ability that is required is sight.
Dasher is used along with eye-tracking devices.
This model is a novel idea and the first of its kind in the making,
reflecting thinking that leaves no stone unturned.
1. INTRODUCTION
‘Vis-Key’ aims at replacing the conventional hardware keyboard with a ‘Visual
Keyboard’. It employs sophisticated scanning and pattern matching algorithms to
achieve the objective. It exploits the eyes’ natural ability to navigate and spot
familiar patterns. Eye typing research extends over twenty years; however, there is
little research on the design issues. Recent research indicates that the type of
feedback impacts typing speed, error rate, and the user’s need to switch her gaze
between the visual keyboard and the monitor.
2. THE EYE
Fig 2.1 shows us the horizontal cross section of the human eye.
The eye is nearly a sphere with an average diameter of approximately 20 mm.
Three membranes - the cornea and sclera cover, the choroid layer and the retina -
enclose the eye. When the eye is properly focused, light from an object is imaged
on the retina. Pattern vision is afforded by the distribution of discrete light
receptors over the surface of the retina. There are two classes of receptors -
cones and rods. The cones, typically present in the central portion of the retina
called the fovea, are highly sensitive to color. The number of cones in the human
eye ranges from 6 to 7 million. These cones can resolve fine details because each is connected to its very own
nerve end. Cone vision is also known as photopic or bright-light vision. The rods
are more numerous than the cones (75-150 million). Several rods are connected to
a single nerve, which reduces the amount of detail discernible by these
receptors. Rods give a general overall picture of the view and are not much
inclined towards color recognition. Rod vision is also known as scotopic or
dim-light vision. As illustrated in fig 2.1, the curvature of the anterior
surface of the lens is greater than the radius of its posterior surface. The
shape of the lens is controlled by the tension in the fibers of the ciliary body.
To focus on distant objects, the controlling muscles cause the lens to be
relatively flattened; similarly, to focus on nearer objects the muscles allow the
lens to become thicker. The distance between the center of the lens and the
retina (the focal length) varies from 17 mm to 14 mm as the refractive power of
the lens increases from its minimum to its maximum.
probability of success in matching exceeds the threshold value, the corresponding
character is displayed on the screen. The hardware requirements are simply a
personal computer, Vis-Key Layout (chart) and a web cam connected to the USB
port. The system design, which refers to the software level, relies on the
construction, design and implementation of image processing algorithms applied to
the captured images of the user.
4. System Architecture:
4.1. Calibration
The calibration procedure aims at initializing the system.
The first algorithm, whose goal is to identify the face position, is applied only to the
first image, and the result will be used for processing the successive images, in order
to speed up the process. This choice is acceptable since the user is supposed only to
make minor movements. If the background is completely black (easy to obtain) the
user's face appears as a white spot, and the borders can be obtained where the
number of black pixels decreases. The camera position is
below the PC monitor; if it were above, in fact, when the user looks at the bottom of
the screen the iris would be partially covered by the eyelid, making the identification
of the pupil very difficult. The user should not be far from the camera, so that
the image does not contain much besides his/her face. The algorithms that
respectively identify the face, the eye and the pupil are based on scanning the
image to find the black pixel concentration: the more complex the image is, the
slower the algorithm runs. Besides, the image resolution will be lower. The suggested distance
is about 30 cm. The user's face should also be very well illuminated, and
therefore two lamps were placed on either side of the computer screen. In fact, since the
identification algorithms work on the black and white images, shadows should not be
present on the User’s face.
4.2. Image Acquisition:
The camera image acquisition is implemented via the functions of
the AviCap window class, which is part of the Video for Windows (VFW) API. The
entire image of the problem domain is scanned every 1/30 of a second. The output
of the camera is fed to an analog-to-digital converter (digitizer), which
digitizes it. Individual frames can then be extracted from the motion picture for
further analysis and processing.
4.3. Filtering of the eye component:
The chosen algorithms work on a binary (black and white) image,
and are based on extracting the concentration of black Pixels. Three algorithms are
applied to the first acquired image, while from the second image on only the
third one is applied.
Fig 4.3.1
Algorithm 1 - Face Positioning. The first algorithm, whose goal is to identify
the face position,
is applied only to the first image, and the result will be used for processing the
successive images, in order to speed up the process. This choice is acceptable since
the user is supposed only to make minor movements. The Face algorithm converts
the image in black and white, and zooms it to obtain an image that contains only the
user’s face. This is done by scanning the original image and identifying the top,
bottom, left and right borders of the face. (Fig 4.3.1).Starting from the resulting
image, the second algorithm extracts the information about the eye position
(both left and right): the area with the highest concentration of black pixels
is the one that contains the eyes. The algorithm uses this
information to determine the top and bottom borders of the eyes area (Fig.4.3.2), so
that it is extracted from the image. The new image is then analyzed to identify the
eye: the algorithm finds the right and left borders, and generates a new image
containing the left and right eyes independently.
The procedure described up until now is applied only to the first image of the
sequence, and the data related to the right eye position are stored in a buffer and
used also for the following images. This is done to speed up the process, and is
acceptable if the user does only minor head movements. The Third algorithm extracts
the position of the center of the pupil from the right eye image. The iris
identification procedure uses the same approach as the previous algorithm. First
of all, the left and
right borders of the iris are extracted. Finding the top and bottom ones would be less
precise due to the eyelid presence, so a square area is built, that slides over the
image. The chosen area, which represents the iris, is the one that has the higher
concentration of black pixels. The center of this image represents also the center of
the pupil. The result of this phase is the coordinates of the center of the pupil for
each of the images in the sequence.
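As a rough sketch (not the authors' implementation), the sliding-square idea can be written as follows, using a binary image where 1 marks a black pixel:

```python
# Sketch of the third algorithm's idea: slide a square window over a
# binary eye image and pick the position with the highest concentration
# of black pixels; the window's centre approximates the pupil centre.
def find_pupil(image, size):
    """image: list of rows of 0 (white) / 1 (black); size: window side."""
    h, w = len(image), len(image[0])
    best, best_pos = -1, (0, 0)
    for top in range(h - size + 1):
        for left in range(w - size + 1):
            black = sum(image[top + dy][left + dx]
                        for dy in range(size) for dx in range(size))
            if black > best:
                best, best_pos = black, (top + size // 2, left + size // 2)
    return best_pos  # (row, col) of the densest window's centre

# Tiny invented example: a 2x2 blob of black pixels stands in for the iris.
eye = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(find_pupil(eye, 2))
```

The real system works on captured camera frames, but the search itself is just this maximisation of black-pixel density over window positions.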
4.4. Preprocessing:
The key function of preprocessing is to improve the image in ways that improve
the chances of success for the other processes. Here preprocessing deals with 4
important techniques:
• To enhance the contrast of the image.
• To eliminate/minimize the effect of noise on the image.
• To isolate regions whose texture indicates a likelihood of alphanumeric
information.
• To provide equalization for the image.
4.5. Segmentation:
Segmentation broadly defines the partitioning of an input image into its
constituent parts or objects. In general, autonomous segmentation is one of the most
difficult tasks in Digital Image Processing. A rugged segmentation procedure brings
the process a long way towards Successful solution of the image problem. In terms of
character recognition, the key role of segmentation is to extract individual characters
from the problem domain. The output of the segmentation stage is raw pixel data,
constituting either the boundary of a region or all points in the region itself. In either
case converting the data into suitable form for computer processing is necessary. The
first decision is to decide whether the data should be represented as a boundary or
as a complete region. Boundary representation is appropriate when the focus is on
external shape characteristics like corners and inflections. Regional
representation is appropriate when the focus is on internal shape characteristics
such as texture and skeletal shape. Description, also called feature selection,
deals with extracting features that result in some quantitative information of
interest or features that are basic for differentiating one class of objects from
another.
4.7. Knowledge Base:
Knowledge about a particular problem domain can be coded into
an image processing system in the form of a knowledge database. The knowledge
may be as simple as detailing regions of an image where the information of interest
is known, thus limiting our search in seeking that information, or it can be
quite complex, such as knowing that the image entries are of high resolution. The
key distinction of this knowledge base is that, in addition to guiding the
operation of the various components, it facilitates feedback between the various
modules of the system. The depiction in Fig 4.1 indicates that communication
between processing modules is based on prior knowledge of what a result should be.
5. Design Constraints:
Though this model is thought provoking, we need to address the design constraints
as well.
• R & D constraints severely hamper our cause for a full-fledged working model
of the Vis-Key system.
• The need for a very high resolution camera calls for a high initial investment.
• The accuracy and the processing capabilities of the algorithm are very much
sensitive to the quality of the input.
6. Alternatives/Related References:
The approaches to date have only been centered on
eye tracking. They lay more emphasis on the use of the eye as a cursor and not as a
data input device. An eye tracking device lets users select the letters from a screen.
Dasher, the prototype program taps in to the natural gaze of the eye and makes
predictable words and phrases simpler to write.
Dasher is software which is highly appropriate for computer users who cannot use a conventional keyboard. It calculates the
probability of one letter coming after another. It then presents the letters required as
if contained on infinitely expanding bookshelves. Researchers say people will be able
to write up to 25 words per minute with Dasher compared to on-screen keyboards,
which they say average about 15 words per minute. Eye-tracking devices are still
problematic. "They need re-calibrating each time you look away from the computer,"
says Willis. He controls Dasher using a trackball.
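Dasher's language model is far more sophisticated, but the core idea of "the probability of one letter coming after another" can be illustrated with simple bigram counts (the corpus here is invented):

```python
from collections import Counter, defaultdict

# Toy illustration (not Dasher's actual model): estimate the probability
# of each letter following a given letter from bigram counts in a corpus.
corpus = "the theory then"
pairs = Counter(zip(corpus, corpus[1:]))

following = defaultdict(Counter)
for (a, b), n in pairs.items():
    following[a][b] = n

def next_letter_probs(letter):
    """Relative frequency of each letter observed after `letter`."""
    counts = following[letter]
    total = sum(counts.values())
    return {b: n / total for b, n in counts.items()}

print(next_letter_probs("t"))  # in this corpus, 'h' always follows 't'
print(next_letter_probs("e"))
```

Dasher uses such conditional probabilities to give likely continuations more screen area, which is why common words are faster to select.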
7. CONCLUSION:
It opens a new dimension to how we perceive the world and
should prove to be a critical technological breakthrough, considering the fact
that there has not been sufficient research in this field of eye scanning. If
implemented, it will be one of the AWE-INSPIRING technologies to hit the market.
Bibliography:
• http://www.cs.uta.fi/~curly/publications/ECEM12-Majaranta.html
• www.inference.phy.cam.ac.uk/djw30/dasher/eye.html
• http://www.inference.phy.cam.ac.uk/dasher/
• http://www.cs.uta.fi/hci/gaze/eyetyping.php
• http://www.acm.org/sigcaph
• Ward, D. J. & MacKay, D. J. C. "Fast hands-free writing by gaze direction."
Nature, 418, 838 (2002).
• Daisheng Luo, "Pattern Recognition and Image Processing", Horwood Series in
Engineering Sciences.
Kanchikacherala
E-MAIL:
herosrikanth@yahoo.com
pradeep_seelamneni@yahoo.com
Abstract
Interest in digital image processing methods stems from two principal application
areas: improvement of pictorial information for human interpretation; and processing of
image data for storage, transmission, and representation for autonomous machine
perception.
Digital image processing refers to processing digital images by means of a
digital computer. Note that a digital image is composed of a finite number of
elements, each of which has a particular location and value.
Since images may originate as analog signals, there is a need to convert
these signals to digital form, which can be done by plotting the image using
different transfer functions, explained hereunder. A transfer function maps the
pixel values from the CCD (charge-coupled device) to the available brightness
values in the imaging software; all the images so far have been plotted using
linear transfer functions.
Filter masks and other manipulations are also discussed, in order to filter
the image and get a clear-cut form of the image.
Introduction
The first two numbers give the x and y coordinates of the pixel, and the third
gives its intensity, relative to all the other pixels in the image. The intensity is a
relative measure of the number of photons collected at that photosite on the CCD,
relative to all the others, for that exposure.
The clarity of a digital image depends on the number of “bits” the computer uses
to represent each pixel. The most common type of representation in popular usage
today is the “8-bit image”, in which the computer uses 8 bits, or 1 byte, to represent
each pixel.
This yields 2^8, or 256, brightness levels available within a given image. These
brightness levels can be used to create a black-and-white image with shades of gray
between black (0) and white (255), or assigned to relative weights of red, green, and
blue values to create a color image.
The range of intensity values in an image also depends on the way in which a
particular CCD handles its analog-to-digital (A/D) conversion. 12-bit A/D
conversion means that each image is capable of 2^12 (4096) intensity values.
If the image processing program only handles 2^8 (256) brightness levels, these
must be divided among the total range of intensity values in a given image.
The histogram below shows the number of pixels in a 12-bit image that have the
same intensity, from 0 to 4095.
Suppose you have software that only handles 8-bit information: you assign black
and white limits, so that all pixels with values to the left of the lower limit
are set to 0, while all those to the right of the upper limit are set to 255.
This allows you to look at details within a given intensity range.
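A minimal sketch of this remapping, assuming a linear transfer function between the user-chosen black and white limits:

```python
# Map 12-bit CCD intensities (0..4095) to 8-bit display values (0..255).
# Pixels below `black` clip to 0, pixels above `white` clip to 255, and
# the window in between is stretched linearly across the 256 levels.
def to_8bit(pixels, black=0, white=4095):
    span = white - black
    out = []
    for p in pixels:
        p = min(max(p, black), white)            # clip to the window
        out.append(round((p - black) * 255 / span))
    return out

print(to_8bit([0, 2048, 4095]))                  # full 12-bit range
print(to_8bit([900, 1000, 1100], black=1000, white=1100))
```

Narrowing the `black`/`white` window is exactly what "looking at details within a given intensity range" means: the 256 display levels are spent on a smaller slice of the 4096 recorded intensities.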
So, a digital image is a 2-dimensional array of numbers, where each number
represents the amount of light collected at one photosite, relative to all the other
photosites on the CCD chip.
It might look something like….
• 98 107 145 126 67 93 154 223 155 180 232 250 242 207 201
• 72 159 159 131 76 99 245 211 165 219 222 181 161 144 131
• 157 138 97 106 55 131 245 202 167 217 173 127 126 136 129
• 156 110 114 91 70 128 321 296 208 193 191 145 422 135 138
By this we can guess the brightest pixel in the “image”…
drop in temperature of the chip.
At -100 °C the dark current is negligible.
When you process an image correctly, you must account for this dark current, and
subtract it out from the image. This is done by taking a “closed shutter” image of a
dark background, and then subtracting this dark image from the “raw” image you are
observing.
The exposure time of the dark image should match that of the image of the
object or starfield you are viewing.
In fact, those who regularly take CCD images keep files of dark current
exposures that match the typical exposure times of images they are likely to
take, such as 10, 20, 45, 60 or 300 seconds, which are updated regularly, if not nightly.
4) And another dark exposure, of the same integration time as your flat exposure!
Final image = (raw image - dark_raw) / (flat - dark_flat)
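A pixel-by-pixel sketch of this calibration, assuming the corrected flat is normalised by its mean (a common convention; the exact normalisation is not spelled out here):

```python
# Flat-field calibration sketch: subtract the matching dark frame from
# both the raw image and the flat, normalise the corrected flat by its
# mean, then divide.  Images are flat lists of pixel values here.
def calibrate(raw, dark, flat, dark_flat):
    corrected_flat = [f - d for f, d in zip(flat, dark_flat)]
    mean_flat = sum(corrected_flat) / len(corrected_flat)
    return [(r - d) / (cf / mean_flat)
            for r, d, cf in zip(raw, dark, corrected_flat)]

# Invented 3-pixel example: the middle pixel sits where the flat shows
# double sensitivity, so division flattens the response.
raw = [110.0, 220.0, 150.0]
dark = [10.0, 20.0, 10.0]
flat = [2.0, 4.0, 2.0]
dark_flat = [0.0, 0.0, 0.0]
print(calibrate(raw, dark, flat, dark_flat))
```

Note how the first two pixels, which recorded different raw counts only because of uneven sensitivity, come out equal after calibration.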
By changing the visualization limits in the histogram, the user can pre-define
the black and white levels of the image, thus increasing the level of detail
available in the mid-ranges of the intensities in a given image.
HISTOGRAM EQUALIZATION
This is a means of flattening your histogram by putting equal numbers of pixels
in each "bin"; it serves to enhance mid-range features in an image with a wide
range of intensity values. When you equalize your histogram, you distribute your
4096 intensity values from your CCD equally among the intensity values available
in your software. This can be particularly useful for bringing out features that
are close to the sky background, which would otherwise be lost.
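A minimal sketch of histogram equalization on a list of pixel intensities (the 4-level example is invented to keep the arithmetic visible):

```python
# Histogram equalization sketch: remap each intensity to its scaled
# cumulative rank, so the cumulative histogram becomes roughly linear
# and pixels spread evenly across the available levels.
def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    # scale the cumulative count of each level onto 0..levels-1
    return [round((cdf[p] - 1) * (levels - 1) / (n - 1)) for p in pixels]

print(equalize([0, 0, 1, 1, 2, 3], levels=4))
```

In the toy run, the values bunched at the dark end are pushed toward the middle of the range, which is the "flattening" described above.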
After you have corrected your raw image so that you are confident
that what you are seeing really comes from incident photons and not from
the electronic noise of your CCD, you may still have unwanted components
in your image.
It is now time to perform mathematical operations on your signal which
will enhance certain features, remove unwanted noise, smooth rough
edges, or emphasize certain boundaries.
...and this brings us to our last topic for this module:
If you move this matrix across an image and matrix-multiply along, you
will end up replacing the center pixel in the window with the weighted
average intensity of all the points located inside the window.
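That window operation can be sketched directly (a simple averaging mask is used here as the low-pass example; borders are left untouched for brevity):

```python
# Sketch of moving a 3x3 mask across an image: each interior pixel is
# replaced by the weighted average of the 3x3 window around it.
def convolve(image, mask):
    h, w = len(image), len(image[0])
    weight = sum(sum(row) for row in mask) or 1   # avoid dividing by zero
    out = [row[:] for row in image]               # borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(mask[dy][dx] * image[y + dy - 1][x + dx - 1]
                      for dy in range(3) for dx in range(3))
            out[y][x] = acc // weight
    return out

mean_mask = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]     # low-pass: plain average
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]           # one bright pixel
print(convolve(img, mean_mask))
```

With the averaging mask, the isolated bright pixel is smeared down toward its neighbours, which is exactly the "smoothing" behaviour of a low-pass filter.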
108
109
High pass filters enhance the short period features in your image, giving
it a sharper look.
Some examples of high pass filters:
0 -1 0 0 -1 0 -1 -1 -1 -1 -1 -1
-1 20 -1 -1 10 -1 -1 10 -1 -1 16 -1
0 -1 0 0 -1 0 -1 -1 -1 -1 -1 -1
You can also combine processes - such as low pass, high pass, and image
subtraction - in a process called UNSHARP MASKING.
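A one-dimensional sketch of unsharp masking: blur the signal with a low-pass average, then add the difference between original and blur back to the original:

```python
# Unsharp masking sketch on a 1-D row of pixels: sharpened = original +
# amount * (original - blurred).  The blur is a 3-point moving average
# with edge clamping; everything here is illustrative.
def unsharp(pixels, amount=1.0):
    n = len(pixels)
    blurred = [(pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3
               for i in range(n)]
    return [p + amount * (p - b) for p, b in zip(pixels, blurred)]

print(unsharp([10, 10, 20, 20]))  # the step edge gets overshoot on each side
```

Flat regions are unchanged (original equals blur there), while the values on either side of the step are pushed apart, which is the perceived sharpening.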
Bibliography
1. Digital image processing by Rafael C. Gonzalez and Richard E.
Woods
2. www.physics.ucsb.edu
UBIQUITOUS COMPUTING
(PERVASIVE COMPUTING)
Presented By,
B.DHEERAJ
R.DEEPAK
III/IV B.TECH
III/IV B.TECH
K.L.C.E
K.L.C.E
Email:dheeru_bunny@yahoo.com
Email:deepakravuri@yahoo.com
Abstract. One is happy when one's desires are fulfilled. The highest ideal of
Ubicomp is to make a computer so imbedded, so fitting, so natural, that we use it
without even thinking about it. Pervasive computing is referred to as Ubiquitous
computing throughout the paper. One of the goals of ubiquitous computing is to
enable devices to sense changes in their environment and to automatically adapt
and act based on these changes.
a lifetime of six months. The devices (bats) also contain two
Mouse Field co-ordinates "Placing" (detecting an object) and "Moving"
(detecting its movement). Mouse Field is a device which combines an ID reader
and motion sensing devices into one package. Fig. 4 shows an implementation of
Mouse Field, which
Fig. 4. Basic concept of Mouse Field
Resources
Application Coordination Infrastructure for Ubiquitous Computing
Rooms.
Ubiquitous Bio-Information Computing (UBIC 2)
What is Ubiquitous Computing? – Overview and resources.
How Ubiquitous Networking will work? – Kevin Bensor.
Panasonic Center: Realizing a Ubiquitous network society.
Ubiquitous Computing Management Architecture.
Introduction to UC.
UC in Education.
Designing Ubiquitous Computer – Resources.
Research works on UC
Ichiro Satoh’s Research work on UC.
Bill Schilit’s work on UC.
Matthias Lampe’s work on UC.
Pekka Ala – Siuru’s work on UC.
Louise Barkhuus’ work on UC.
George Roussos’ work on ubiquitous commerce.
Dr. Albrecht Schmidt’s Research work on Ubiquitous Computing.
UC Research
Research in UC and Applications at University of California, Irvine.
Fuego: Future Mobile and Ubiquitous Computing Research.
The Ubiquitous Computing Research Group at the University of
Victoria.
Computing Department Research themes – Mobile and Ubiquitous
computing.
Research in Ubiquitous Computing.
GGF Ubiquitous Computing Research Group.
Distributed Software Engineering Group Research into Ubiquitous
Computing.
Mobile Ubiquitous Security Environment (MUSE).
Bibliography
www.ubiq.com
www.ubiqcomputing.org
www.teco.edu
www.personalubicom.com
www.searchnetworking.techtarget.com
www.gseacademic.harvard.edu
www.comp.lancs.as.uk
www.Ice.eng.cam.ac.uk
CRYPTOGRAPHY
THE ART OF SECRET WRITING
PRESENTED BY:
Strong cryptography:
Cryptography can be strong or weak, as explained above.
Cryptographic strength is measured in the time and resources it would
require to recover the plaintext. The result of strong cryptography is
ciphertext that is very difficult to decipher without possession of the
appropriate decoding tool. How difficult? Given all of today’s computing
power and available time—even a billion computers doing a billion checks a
second—it is not possible to decipher the result of strong cryptography
before the end of the universe.
How does cryptography work?
A cryptographic algorithm, or cipher, is a mathematical function used
in the encryption and decryption process. A cryptographic algorithm works in
combination with a key - a word, number, or phrase - to encrypt the
plaintext. The same plaintext encrypts to different ciphertext with different
keys. The security of encrypted data is entirely dependent on two things: the
strength of the cryptographic algorithm and the secrecy of the key.
A cryptographic algorithm, plus all possible keys and all the protocols
that make it work, comprise a cryptosystem. PGP is a cryptosystem.
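As an illustration of how a cipher works in combination with a key, here is a minimal sketch in Python. It is a toy XOR stream cipher with a SHA-256-derived keystream, not any real cryptosystem and certainly not PGP; it only shows that the same plaintext encrypts to different ciphertext under different keys, and that applying the same key again recovers the plaintext.

```python
import hashlib

def keystream(key: str, length: int) -> bytes:
    # Derive a pseudo-random keystream from the key by repeated hashing.
    out = b""
    block = key.encode()
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def toy_encrypt(plaintext: bytes, key: str) -> bytes:
    # XOR the plaintext with the key-derived stream; XOR again decrypts.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

msg = b"attack at dawn"
c1 = toy_encrypt(msg, "hunter2")
c2 = toy_encrypt(msg, "hunter3")
assert c1 != c2                            # same plaintext, different keys, different ciphertext
assert toy_encrypt(c1, "hunter2") == msg   # only the right key recovers the plaintext
```

The key names and messages here are arbitrary illustrations; the point is that the security rests entirely on the secrecy of the key, exactly as the text states.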
Conventional cryptography:
In conventional cryptography, also called secret-key or symmetric-key
encryption, one key is used both for encryption and decryption. The Data
Encryption Standard (DES) is an example of a conventional cryptosystem
that is widely employed by the U.S. government.
Keys:
A key is a value that works with a cryptographic algorithm to produce
a specific ciphertext. Keys are basically really, really, really big numbers. Key
size is measured in bits; the number representing a 2048-bit key is huge. In
public-key cryptography, the bigger the key, the more secure the ciphertext.
However, public key size and conventional cryptography’s secret key size are
totally unrelated. A conventional 80-bit key has the equivalent strength of a
1024-bit public key. A conventional 128-bit key is equivalent to a 3000-bit
public key. Again, the bigger the key, the more secure, but the algorithms
used for each type of cryptography are very different.
While the public and private keys are mathematically related, it’s very
difficult to derive the private key given only the public key; however, deriving
the private key is always possible given enough time and computing power.
This makes it very important to pick keys of the right size; large enough to
be secure, but small enough to be applied fairly quickly.
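To get a feel for how big these "really big numbers" are, a quick check shows that a 2048-bit key corresponds to a number with over 600 decimal digits:

```python
# The number representing a 2048-bit key has 617 decimal digits.
n_digits = len(str(2 ** 2048))
print(n_digits)  # 617
```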
Digital signatures:
A major benefit of public key cryptography is that it provides a method
for employing digital signatures. Digital signatures let the recipient of
information verify the authenticity of the information’s origin, and also verify
that the information was not altered while in transit. Thus, public key digital
signatures provide authentication and data integrity. A digital signature also
provides non-repudiation, which means that it prevents the sender from
claiming that he or she did not actually send the information. These features
are every bit as fundamental to cryptography as privacy, if not more.
A digital signature serves the same purpose as a handwritten
signature. However, a handwritten signature is easy to counterfeit. A digital
signature is superior to a handwritten signature in that it is nearly impossible
to counterfeit, plus it attests to the contents of the information as well as to
the identity of the signer.
Some people tend to use signatures more than they use encryption.
Instead of encrypting information using someone else’s public key, you
encrypt it with your private key. If the information can be decrypted with
your public key, then it must have originated with you.
Hash functions:
The system described above has some problems. It is slow, and it
produces an enormous volume of data—at least double the size of the
original information. An improvement on the above scheme is the addition of
a one-way hash function in the process. A one-way hash function takes
variable-length input—in this case, a message of any length, even thousands
or millions of bits—and produces a fixed-length output; say, 160 bits. The
hash function ensures that, if the information is changed in any way—even
by just one bit—an entirely different output value is produced.
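This avalanche property is easy to demonstrate; the sketch below uses SHA-1, which has exactly the 160-bit output mentioned above (the two messages are arbitrary examples):

```python
import hashlib

m1 = b"Pay to the order of Bob: $10"
m2 = b"Pay to the order of Bob: $11"   # a change as small as one character...

d1 = hashlib.sha1(m1).hexdigest()      # fixed 160-bit (40 hex digit) digest
d2 = hashlib.sha1(m2).hexdigest()
print(d1)
print(d2)
assert d1 != d2                        # ...produces an entirely different digest
assert len(d1) == 40 and len(d2) == 40
```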
PGP uses a cryptographically strong hash function on the plaintext the
user is signing. This generates a fixed-length data item known as a message
digest. Then PGP uses the digest and the private key to create the
“signature.” PGP transmits the signature and the plaintext together. Upon
receipt of the message, the recipient uses PGP to recompute the digest, thus
verifying the signature. PGP can encrypt the plaintext or not; signing
plaintext is useful if some of the recipients are not interested in or capable of
verifying the signature.
As long as a secure hash function is used, there is no way to take
someone’s signature from one document and attach it to another, or to alter
a signed message in any way. The slightest change to a signed document will
cause the digital signature verification process to fail. Digital signatures play
a major role in authenticating and validating the keys of other PGP users.
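The digest-then-sign idea can be sketched with textbook RSA and toy parameters. This is not PGP's actual implementation, and keys this small are hopelessly insecure; the digest is reduced modulo the tiny modulus purely so the numbers fit:

```python
import hashlib

# Toy textbook-RSA parameters (illustration only, never use keys this small).
p, q = 61, 53
n = p * q                  # 3233
e = 17                     # public exponent
d = 2753                   # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def digest(msg: bytes) -> int:
    # Fixed-length message digest, reduced mod n so it fits the toy key.
    return int(hashlib.sha256(msg).hexdigest(), 16) % n

def sign(msg: bytes) -> int:
    # "Encrypt" the digest with the private key: this is the signature.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # "Decrypt" with the public key and compare against a fresh digest.
    return pow(sig, e, n) == digest(msg)

msg = b"I owe you 10 dollars"
sig = sign(msg)
assert verify(msg, sig)
```

Changing even one character of the message changes the digest, so verification of the old signature against the altered message will fail, which is exactly the tamper-evidence described above.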
DATA REPRESENTATION:
Two bases are used:
• Vertical
• Diagonal
THE EXCHANGE:
• The sequence of events:
- A generates a random key and random encoding bases.
- A sends the polarized photons to B.
- B generates random measurement bases.
- B measures the photons with those bases.
- A announces the encoding basis used for each bit.
- B announces which of his bases are the same as A’s.
• Finally, the matching bits are used as the key for a classical channel.
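The exchange can be simulated in a few lines; this is a minimal sketch assuming an idealized channel with no eavesdropper and no detector noise:

```python
import random

random.seed(7)
n = 32
# A's random key bits and random encoding bases (0 = vertical, 1 = diagonal).
a_bits  = [random.randint(0, 1) for _ in range(n)]
a_bases = [random.randint(0, 1) for _ in range(n)]
# B measures each photon with his own random basis; a mismatched basis
# yields a random result.
b_bases = [random.randint(0, 1) for _ in range(n)]
b_bits  = [a_bits[i] if a_bases[i] == b_bases[i] else random.randint(0, 1)
           for i in range(n)]
# They publicly compare bases and keep only the bits where the bases matched.
a_key = [a_bits[i] for i in range(n) if a_bases[i] == b_bases[i]]
b_key = [b_bits[i] for i in range(n) if a_bases[i] == b_bases[i]]
assert a_key == b_key    # the sifted keys agree
```

On average half of the bases match, so roughly half of the transmitted bits survive sifting to become the shared key.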
SEQUENTIAL VIEW: (figure: the exchange between A and B)
EAVESDROPPING:
• Photon emitters and detectors are far from perfect, causing a lot of
errors.
• Most protocols require a classical channel.
REFERENCES:
Presented by:
K.SANTHOSH.
II/IV –B-Tech
ID: 06091A0582
santhoshsanthosh@hotmail.com
ksanthoshsanthosh@gmail.com
Ph: 9908538218
T.RUPESH ROHITH.
II/IV –B-Tech
ID: 06091A0577
rtirumalla@yahoo.co.in
Ph: 9441190808
Mobile map information helps field workers obtain critical information
on natural calamities.
INTRODUCTION:
Mobile devices
Security
Security and privacy are of specific concerns in wireless
communication because of the ease of connecting to the wireless link
anonymously. Common problems are impersonation, denial of service and
tapping. The main technique used is encryption. Personal profiles of users
are used to restrict access to the mobile units.
RECENT APPLICATIONS:
Estate agents can work either at home or out in the field. With
mobile computers they can be more productive. They can obtain
current real estate information by accessing multiple listing services,
which they can do from home, office or car when out with clients. They
can provide clients with immediate feedback regarding specific homes
or neighborhoods, and with faster loan approvals, since applications
can be submitted on the spot. Therefore, mobile computers allow them
to devote more time to clients.
In courts
In companies
Government:
Applications center around assessments, inspections, and work
orders. Most of these applications involve auditing some sort of
facility or process (food service, restaurant, nursing home, child
care, schools, commercial and residential buildings).
Healthcare:
The focus in this industry has been on automating patient
records, medication dispensing, and sample collection. A common
goal is to leverage mobile computing in the implementation of
positive patient identification.
Uses like the above are endless. As people find ones that serve their needs, more
and more are subscribing to mobile computing.
NEW ERA:
THE FUTURE:
REFERENCE:
BIOMETRICS
FOR FOOL PROOF SECURITY
Submitted by
M.AVINASH, Roll.No.660751011, 3rd year CSE, avinashm.india@gmail.com
M.CHIRANJEEVI, Roll.No.660751017, 3rd year CSE, chiru.btech@gmail.com
FROM
ANDHRA PRADESH
ABSTRACT
BIOMETRIC MODEL
(Figure: the generic biometric model. Data collection captures the biometric;
signal processing extracts features into a template; enrollment stores the
template; verification matches a new capture against the stored template and
produces a matching score, e.g. 95%, on which the decision is made.)
BIOMETRIC TECHNIQUES:
KEYSTROKE BIOMETRICS:
“The keystroke biometrics makes use of the inter-stroke gap
that exists between consecutive characters of the user identification
code.”
When a user types his authentication code, there is a particular rhythm
or fashion to the typing. Provided there is no abrupt change in this rhythmic
manner, its uniqueness can be used as an additional security constraint. It
has been proved experimentally that the manner of typing the same code
varies from user to user, so it can be used as a suitable biometric. Further,
if the user knows beforehand about the existence of this mechanism, he can
intentionally adapt the rhythm to suit his needs.
IMPLEMENTATION DETAILS:
As the user logs onto the system for the first time, a database entry is
created for the user. He is then put through a training period consisting of
15-20 iterations, during which the inter-stroke timings of all the keys of the
identification code are obtained. The inter-stroke interval between keys is
measured in milliseconds; the system’s delay routine can be used to record
the delay incurred between successive strokes.
The mean and standard deviation of the code are calculated. This is
done in order to provide some leverage to the user typing the code. The
reference level that we chose is the mean of the training period and the
rounded standard deviation is used as the leverage allotted per user. These
values are fed into the database of the user. These details can also be
incorporated into the system’s password files in order to save the additional
overhead incurred.
The mean and the standard deviation can be determined by
using the relationship given below.
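The relationship itself is not reproduced in the text; presumably it is the usual mean and (population) standard deviation of the recorded inter-stroke timings, which can be sketched as follows. The timing values below are hypothetical training samples for one key pair:

```python
import statistics

# Hypothetical inter-stroke timings (ms) for one key pair over the training period.
timings = [100, 110, 90, 105, 95]

mean_m = statistics.mean(timings)    # reference level M
sd = statistics.pstdev(timings)      # standard deviation over the training samples
leverage = round(sd)                 # rounded S.D. used as the per-user leverage
print(mean_m, leverage)              # 100 7
```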
Once the database entry has been allotted for the user, this can be used in
all further references to the user. The next time the user tries to login, one
would obtain the entered inter-stroke timing along with the password. A
combination of all these metrics is used as a security check of the user. The
algorithm given below gives the details of obtaining the authorization for a
particular user. The algorithm assumes that the database already exists in
the system and one has a system delay routine available.
PERFORMANCE MEASURES:
While considering any system for authenticity, one needs to
consider the false acceptance rate (FAR) and the false rejection rate
(FRR).
ALGORITHM
main ()
{
    if (User == New)
    {
        read (User);               // getting User name, User_id, Password
        read (Inter-stroke gap);   // time interval between consecutive characters
        Add user (database);       // add the User to the database
        User count = 1;
    }
    else if (User == Training)
    {
        read (User);
        read (Inter-stroke gap);
        if (Check (User, Password))
        {
            if (User count < 15)
            {
                update (User count);   // User count = User count + 1
                add (Inter-stroke gap);
            }
            else if (User count == 15)
            {
                update (User count);
                add (Inter-stroke gap);
                Calculate Mean (M), Standard deviation (S.D);
            }
        }
    }
    else if (User == Existing)
    {
        read (User);
        read (deviation);
        if (Check (User, Password, deviation))
            Login;
        else
            exit(0);
    }
}
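The final check for an existing user can be sketched in Python. The `authenticate` helper, the reference means, and the entered timings below are hypothetical illustrations, not part of the original system; the rule shown is simply "accept only if every inter-stroke gap is within the leverage of its reference mean":

```python
def authenticate(entered_gaps, ref_means, leverage):
    # Accept only if every inter-stroke gap (ms) deviates from its
    # reference mean by no more than the allotted leverage.
    return all(abs(g - m) <= leverage for g, m in zip(entered_gaps, ref_means))

ref_means = [100, 150, 120]   # per key-pair means from the training period (ms)
leverage = 7                  # rounded standard deviation

print(authenticate([103, 145, 118], ref_means, leverage))  # True
print(authenticate([130, 150, 120], ref_means, leverage))  # False: first gap off by 30 ms
```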
(Figure: decision lattice ending in ACCESS GRANTED.)
A MULTIMODAL BIOMETRIC SYSTEM:
A biometric system which relies only on a single biometric
(Figure: the verification module. A face locator and face extractor feed
eigenspace projection for facial scanning, and the result is compared against
templates in the database by eigenspace comparison; a speech acquisition
module performs cepstral analysis, with HMM training and scoring; a minutiae
extractor and a browser access the same database.)
APPLICATIONS:
INTERNET SECURITY: If the password is leaked out, the computer or the web server
will not be able to identify whether the original user is operating the computer. PCs
fitted with biometric sensors can sense the biometric template and transmit it to the
remote computer so that the remote server is sure about the user in the computer.
BIOMETRIC SMART CARDS: Biometric technologies are used with smart cards for ID
systems applications specifically due to their ability to identify people with minimal
ambiguity. A biometric based ID allows for the verification of “who you claim to be”
(information about the card holder stored in the card) based on “who you are” (the
biometric information stored in the smart card), instead of, or possibly in addition to,
checking “what you know” (such as a password).
A question that arises with any technology is that “Does this technology have any
constraints?” The answer to this question is that, “It purely depends upon its
implementation mechanism”. In Keystroke biometrics, the person being authenticated
must have registered their bio-identity before it can be authenticated. Registration
processes can be extremely complicated and very inconvenient for users. This is
particularly true if the user being registered is not familiar with what is happening. The
problem for the operator is that the right person will be rejected occasionally by what
might be presented as a ‘foolproof’ system. Both the FAR and the FRR depend to some
extent on the deviation allowed from the reference level and on the number of
characters in the identification code (Password). It has been observed that providing a
small deviation lowers the FAR to almost NIL but at the same time tends to increase
the FRR. This is due to the fact that the typing rhythm of the user depends to some
extent on the mental state of the user. So, a balance would have to be established
taking both the factors into consideration.
SOLUTION:
CONCLUSION:
(AUTONOMOUS)
Presented by ---
Email id:
balu_swapna2006@yahoo.com
svizays225@gmail.com
Abstract:
Find where your kids have been! Verify employee driving routes! Review family
members driving habits! Watch large shipment routes! Know where anything or
anyone has been! All this can be done merely by sitting at your own desk!
Finding your way across the land is an ancient art and science. The stars, the compass,
and good memory for landmarks helped you get from here to there. Even advice from
someone along the way came into play. But, landmarks change, stars shift position,
and compasses are affected by magnets and weather. And if you've ever sought
directions from a local, you know it can just add to the confusion. The situation has
never been perfect. This has led to the search for new technologies all over the world.
The outcome is THE GLOBAL POSITIONING SYSTEM. Focusing on the application and
usefulness of GPS for the age-old challenge of finding routes, this paper describes the
Global Positioning System, starting with an introduction, the basic idea, and
applications of GPS in the real world.
Introduction:
Global Navigation Satellite Systems (GNSS) are extended GPS systems, providing
users with sufficient accuracy and integrity information to be useable for critical
navigation applications. The NAVSTAR system, operated by the U.S. Department of
Defense, is the first GPS system widely available to civilian users. The Russian GPS
system, GLONASS, is similar in operation and may prove complementary to the
NAVSTAR system.
These systems promise radical improvements to many systems that impact all people.
By combining GPS with current and future computer mapping techniques, we will be
better able to identify and manage our natural resources. Intelligent vehicle location and
navigation systems will let us avoid congested freeways and find more efficient routes to
our destinations, saving millions of dollars in gasoline and tons of air pollution. Travel
aboard ships and aircraft will be safer in all weather conditions. Businesses with large
amounts of outside plant (railroads, utilities) will be able to manage their resources
more efficiently, reducing consumer costs. However, before all these changes can take
place, people have to know what GPS can do.
What does it do?
GPS uses the "man-made stars" as reference points to calculate positions accurate to a
matter of meters. In fact, with advanced forms of GPS you can make measurements to
better than a centimeter! In a sense it's like giving every square meter on the planet a
unique address.
GPS receivers have been miniaturized to just a few integrated circuits and so are
becoming very economical. And that makes the technology accessible to virtually
everyone. These days GPS is finding its way into cars, boats, planes, construction
equipment, movie making gear, farm machinery, even laptop computers. Soon GPS
will become almost as basic as the telephone.
The GPS receiver on Earth determines its own position from the signals it receives
from the satellites. The results are provided in longitude and latitude. If the receiver is equipped
with a computer that has a map, the position will be shown on the map. If you are
moving, a receiver may also tell you your speed, direction of travel and estimated time
of arrival at a destination.
1. The basis of GPS is "triangulation" from satellites.
2. To "triangulate," a GPS receiver measures distance using the travel time of radio
signals.
3. To measure travel time, GPS needs very accurate timing which it achieves with
some tricks.
4. Along with distance, you need to know exactly where the satellites are in space.
High orbits and careful monitoring are the secret.
5. Finally you must correct for any delays the signal experiences as it travels
through the atmosphere.
Improbable as it may seem, the whole idea behind GPS is to use satellites in space as
reference points for locations here on earth. That's right, by very, very accurately
measuring our distance from three satellites we can "triangulate" our position
anywhere on earth.
Suppose we measure our distance from one satellite and find that it is 11,000 miles
away. Knowing that distance narrows our possible location down to the surface of a
sphere centered on that satellite. Next, say we measure our distance to a second
satellite and find out that it's 12,000 miles away. That tells us that we're not only on
the first sphere but also on a sphere that's 12,000 miles from the second satellite; in
other words, we're somewhere on the circle where these two spheres intersect. If we
then make a measurement from a third satellite and find that we're 13,000 miles from
that one, our position narrows down even further, to the two points where the
13,000-mile sphere cuts through the circle that is the intersection of the first two
spheres. So by ranging from three satellites we can narrow our position to just two
points in space. To decide which one is our true location we could make a fourth
measurement. But usually one of the two points is a ridiculous answer (either too far
from Earth or moving at an impossible velocity) and can be rejected without a
measurement.
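The idea can be sketched in two dimensions, where three circles play the role of the three spheres. Subtracting the circle equations pairwise cancels the squared unknowns and leaves a linear system; the satellite positions and distances below are made-up test values, chosen so the answer is the point (3, 4):

```python
import math

def trilaterate_2d(sats, dists):
    # Subtracting the first circle equation from the other two yields
    # two linear equations in (x, y), solved here by Cramer's rule.
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

sats = [(0, 0), (10, 0), (0, 10)]
dists = [5.0, math.sqrt(65), math.sqrt(45)]  # distances measured from the point (3, 4)
x, y = trilaterate_2d(sats, dists)
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

In 3D, the same subtraction trick with four spheres pins down a single point, which is why the fourth measurement also resolves the two-point ambiguity.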
But how can you measure the distance to something that's floating around in space?
We do it by timing how long it takes for a signal sent from the satellite to arrive at our
receiver.
In a sense, the whole thing boils down to those "velocity times travel time" math
problems we did in high school. Remember the old: "If a car goes 60 miles per hour for
two hours, how far does it travel?"
In the case of GPS we're measuring a radio signal so the velocity is going to be the
speed of light or roughly 186,000 miles per second. The problem is measuring the
travel time.
The timing problem is tricky. First, the times are going to be awfully short. If a satellite
were right overhead the travel time would be something like 0.06 seconds. So we're
going to need some really precise clocks.
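The arithmetic behind both numbers, the 0.06-second travel time and the need for precise clocks, is straightforward:

```python
SPEED_OF_LIGHT_MI_S = 186_000   # speed of light, roughly, in miles per second

travel_time = 0.06              # seconds, for a satellite roughly overhead
print(round(travel_time * SPEED_OF_LIGHT_MI_S))   # 11160 miles

clock_error = 0.001             # a mere thousandth of a second of timing error...
print(round(clock_error * SPEED_OF_LIGHT_MI_S))   # 186 miles of range error
```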
But assuming we have precise clocks, how do we measure travel time? To explain it
let's use a goofy analogy:
Suppose there was a way to get both the satellite and the receiver to start playing
"The Star Spangled Banner" at precisely 12 noon. If sound could reach us from space
(which, of course, is ridiculous) then standing at the receiver we'd hear two versions of
the Star Spangled Banner, one from our receiver and one from the satellite. These two
versions would be out of sync. The version coming from the satellite would be a little
delayed because it had to travel more than 11,000 miles. If we wanted to see just how
delayed the satellite's version was, we could start delaying the receiver's version until
they fell into perfect sync. The amount we have to shift back the receiver's version is
equal to the travel time of the satellite's version. So we just multiply that time times
the speed of light and BINGO! We’ve got our distance to the satellite.
That's basically how GPS works.
Only instead of the Star Spangled Banner the satellites and receivers use something
called a "Pseudo Random Code" - which is probably easier to sing than the Star
Spangled Banner.
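The sliding-into-sync idea can be simulated with a random ±1 code: correlate the received (delayed) signal against shifted copies of the local code, and the delay pops out at the correlation peak. The 1023-chip length matches the real C/A code, but everything else here is a toy (a seeded random code and a noiseless circular delay, not the actual GPS signal structure):

```python
import random

random.seed(42)
N = 1023                       # code length (the real C/A code is 1023 chips)
code = [random.choice((-1, 1)) for _ in range(N)]

true_delay = 137
received = code[-true_delay:] + code[:-true_delay]   # circularly delayed copy

def correlation(local, rx, shift):
    # Correlate the local code, shifted by `shift` chips, against the received signal.
    return sum(local[(i - shift) % len(local)] * rx[i] for i in range(len(local)))

# Slide our local copy until the two versions "fall into perfect sync".
best = max(range(N), key=lambda s: correlation(code, received, s))
print(best)   # 137
```

At the correct shift the correlation sums N perfectly matched chips; at every other shift the products largely cancel, which is also why the codes "amplify" the weak signal out of the noise.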
A Random Code?
There are several good reasons for that complexity: First, the complex pattern helps
make sure that the receiver doesn't accidentally sync up to some other signal. The
patterns are so complex that it's highly unlikely that a stray signal will have exactly the
same shape.
Since each satellite has its own unique Pseudo-Random Code this complexity also
guarantees that the receiver won't accidentally pick up another satellite's signal. So all
the satellites can use the same frequency without jamming each other. And it makes it
more difficult for a hostile force to jam the system. In fact the Pseudo Random Code
gives the DoD a way to control access to the system.
But there's another reason for the complexity of the Pseudo Random Code, a reason
that's crucial to making GPS economical. The codes make it possible to use
"information theory" to "amplify" the GPS signal. And that's why GPS receivers don't
need big satellite dishes to receive the GPS signals. The goofy Star-Spangled Banner
analogy assumes that we can guarantee that both the satellite and the receiver start
generating their codes at exactly the same time. But how do we make sure everybody
is perfectly synced?
If measuring the travel time of a radio signal is the key to GPS, then our stop watches
had better be darn good, because if their timing is off by just a thousandth of a
second, at the speed of light, that translates into almost 200 miles of error! On the
satellite side, timing is almost perfect because they have incredibly precise atomic
clocks on board.
Both the satellite and the receiver need to be able to precisely synchronize their
pseudo-random codes to make the system work. If our receivers needed atomic clocks
(which cost upwards of $50K to $100K) GPS would be a lame duck technology. Nobody
could afford it.
Luckily the designers of GPS came up with a brilliant little trick that lets us get by with
much less accurate clocks in our receivers. This trick is one of the key elements of GPS
and as an added side benefit it means that every GPS receiver is essentially an atomic-
accuracy clock.
The secret is to make an extra satellite measurement. That's right: if three perfect
measurements can locate a point in 3-dimensional space, then four imperfect
measurements can do the same thing.
If our receiver's clocks were perfect, then all our satellite ranges would intersect at a
single point (which is our position). But with imperfect clocks, a fourth measurement,
done as a cross-check, will NOT intersect with the first three. So the receiver's
computer says "Uh-oh! there is a discrepancy in my measurements. I must not be
perfectly synced with universal time." Since any offset from universal time will affect
all of our measurements, the receiver looks for a single correction factor that it can
subtract from all its timing measurements that would cause them all to intersect at a
single point.
That correction brings the receiver's clock back into sync with universal time, and
bingo! - you've got atomic accuracy time right in the palm of your hand.
Once it has that correction, it applies it to all the rest of its measurements and now
we've got precise positioning. One consequence of this principle is that any decent GPS
receiver will need to have at least four channels so that it can make the four
measurements simultaneously. With the pseudo-random code as a rock solid timing
sync pulse, and this extra measurement trick to get us perfectly synced to universal
time, we have got everything we need to measure our distance to a satellite in space.
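A toy 2D sketch of this correction follows. The setup is entirely hypothetical: three satellites at made-up positions, a receiver at a known test position, and a common clock bias added to every measured range. The receiver's strategy from the text, finding the single correction that makes all the ranges mutually consistent, is implemented here as a simple scan over candidate biases:

```python
import math

def trilaterate(sats, dists):
    # Linearized circle intersection (Cramer's rule), the 2D analogue of
    # intersecting satellite range spheres.
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

sats = [(0, 0), (100, 0), (0, 100)]
truth = (30, 40)
bias = 5.0                          # common range error from the receiver's cheap clock
pseudo = [math.dist(truth, s) + bias for s in sats]

def residual(b):
    # How inconsistent are the three ranges after subtracting candidate bias b?
    rs = [p - b for p in pseudo]
    x, y = trilaterate(sats, rs)
    return sum(abs(math.dist((x, y), s) - r) for s, r in zip(sats, rs))

best_b = min((i / 100 for i in range(1001)), key=residual)
x, y = trilaterate(sats, [p - best_b for p in pseudo])
print(round(best_b, 2), round(x, 1), round(y, 1))   # 5.0 30.0 40.0
```

Because the clock offset shifts every pseudorange by the same amount, only the true bias makes the ranges intersect at a single point; a real receiver solves for position and bias jointly rather than scanning, but the principle is the same.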
But for the triangulation to work we not only need to know distance, we also need to
know exactly where the satellites are.
But how do we know exactly where they are? After all they're floating around 11,000
miles up in space.
That 11,000 mile altitude is actually a benefit in this case, because something that
high is well clear of the atmosphere. And that means it will orbit according to very
simple mathematics. The Air Force has injected each GPS satellite into a very precise
orbit, according to the GPS master plan. On the ground, all GPS receivers have an
almanac programmed into their computers that tells them where in the sky each
satellite is, moment by moment.
The basic orbits are quite exact but just to make things perfect the GPS satellites are
constantly monitored by the Department of Defense. They use very precise radar to
check each satellite's exact altitude, position and speed. The errors they're checking
for are called "ephemeris errors" because they affect the satellite's orbit or
"ephemeris." These errors are caused by gravitational pulls from the moon and sun
and by the pressure of solar radiation on the satellites. The errors are usually very
slight but if you want great accuracy they must be taken into account.
Once the DoD has measured a satellite's exact position, they relay that information
back up to the satellite itself. The satellite then includes this new corrected position
information in the timing signals it's broadcasting. So a GPS signal is more than just
pseudo-random code for timing purposes. It also contains a navigation message with
ephemeris information as well.
With perfect timing and the satellite's exact position you'd think we'd be ready to make
perfect position calculations. But there's trouble afoot.
Up to now we've been treating the calculations that go into GPS very abstractly, as if
the whole thing were happening in a vacuum. But in the real world there are lots of
things that can happen to a GPS signal that will make its life less than mathematically
perfect. To get the most out of the system, a good GPS receiver needs to take a wide
variety of possible errors into account. Here's what they've got to deal with.
First, one of the basic assumptions we've been using throughout this paper is not
exactly true. We've been saying that you calculate distance to a satellite by multiplying
a signal's travel time by the speed of light. But the speed of light is only constant in a
vacuum. As a GPS signal passes through the charged particles of the ionosphere and
then through the water vapor in the troposphere it gets slowed down a bit, and this
creates the same kind of error as bad clocks.
There are a couple of ways to minimize this kind of error. For one thing we can predict
what a typical delay might be on a typical day. This is called modeling and it helps but,
of course, atmospheric conditions are rarely exactly typical.
Another way to get a handle on these atmosphere-induced errors is to compare the
relative speeds of two different signals. This "dual frequency" measurement is very
sophisticated and is only possible with advanced receivers.
Trouble for the GPS signal doesn't end when it gets down to the ground. The signal
may bounce off various local obstructions before it gets to our receiver. This is called
multipath error and is similar to the ghosting you might see on a TV. Good receivers
use sophisticated signal rejection techniques to minimize this problem.
Even though the satellites are very sophisticated they do account for some tiny errors
in the system. The atomic clocks they use are very, very precise but they're not
perfect. Minute discrepancies can occur, and these translate into travel time
measurement errors. And even though the satellites positions are constantly
monitored, they can't be watched every second. So slight position or "ephemeris"
errors can sneak in between monitoring times. Basic geometry itself can magnify these
other errors with a principle called "Geometric Dilution of Precision" or GDOP. It sounds
complicated but the principle is quite simple.
There are usually more satellites available than a receiver needs to fix a position, so
the receiver picks a few and ignores the rest. If it picks satellites that are close
together in the sky the intersecting circles that define a position will cross at very
shallow angles. That increases the gray area or error margin around a position. If it
picks satellites that are widely separated the circles intersect at almost right angles
and that minimizes the error region. Good receivers determine which satellites will give
the lowest GDOP.
GPS technology has matured into a resource that goes far beyond its original design
goals. These days scientists, sportsmen, farmers, soldiers, pilots, surveyors, hikers,
delivery drivers, sailors, dispatchers, lumberjacks, fire-fighters, and people from many
other walks of life are using GPS in ways that make their work more productive, safer,
and sometimes even easier.
Applications:
In this section you will see a few examples of real-world applications of GPS. These
applications fall into five broad categories.
What is TrackStick?
Simply put, the Track-Stick is a Personal GPS - Global Positioning System with a USB
Interface!
The GPS Track Stick records its own location, time, date, speed, heading and altitude
at preset intervals. With over 1Mb of memory, it can store months of travel
information. All recorded history can be outputted to the following formats:
The Track Stick GPS Systems outputs .KML files for compatibility with Google Earth. By
exporting to Google Earth's .KML file format, each travel location can be pinpointed
using 3D mapping technology.
View 3D images of actual recordings of the Track Stick revealing where it has been.
Track Stick comes with its own HTML GPS tracking software.
The Track Stick is also compatible with www.virtualearth.com, maps.google.com,
www.mapquest.com, Microsoft Streets and Trips, Encarta, and many other third-party
mapping programs.
How it works
Where it works
Using the latest GPS mapping technologies, your exact location can be shown on
graphical maps and 3D satellite images. The Track Stick's micro computer contains
special mathematical algorithms that can calculate how long you have been indoors.
While visiting family, friends or even shopping, the Track Stick can accurately time
and map each and every place you have been. Global positioning for home or business
has never been so easy!
To conclude, the Global Positioning System is improving day by day. It is the
simplest way of replacing the traditional ways of finding routes with new technology.
This is surely a striking development, and let us hope for the best from it in the
coming years.
References:
www.google.co.in
www.computerworld.com
www.gpsworld.com
www.mapquest.com
BY
M.SINDHURI
B.TECH(COMPUTER SCIENCE)
NANDYAL-518501.
EMAIL ID:sindhu.maram@gmail.com.
J.SUJITHA
B.TECH(COMPUTER SCIENCE)
NANDYAL-518501.
EMAIL ID:sujitha_friend9@yahoo.co.in.
Abstract
Grid computing, emerging as a new paradigm for next-generation computing,
enables the sharing of resources that are geographically distributed. Availability,
usage and cost policies vary depending on the particular user, time, priorities and
goals. It enables the regulation of supply and demand for resources and motivates
users to trade off between deadline, budget, and the required level of quality of
service.
This paper focuses on the introduction, the grid definition and its evolution. It
covers grid characteristics, types of grids and an example describing a community grid.
INTRODUCTION
The Grid unites servers and storage into a single system that acts as a single
computer - all your applications tap into all your computing power. Hardware resources
are fully utilized and spikes in demand are met with ease.
THE GRID
The Grid is the computing and data management infrastructure that will provide
the electronic underpinning for a global society in business, government, research,
science and entertainment. It will integrate networking, communication, computation
and information to provide a virtual platform for computation and data management,
in the same way that the Internet integrates resources to form a virtual platform for
information. Grid infrastructure will provide us with the ability to dynamically
link together resources as an ensemble to support the execution of large-scale,
resource-intensive, and distributed applications.
Grid is a type of parallel and distributed system that enables the sharing, selection,
and aggregation of geographically distributed "autonomous" resources dynamically at
runtime depending on their availability, capability, performance, cost, and users'
quality-of-service requirements.
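The definition above, selecting and aggregating resources by availability, capability, cost, and quality-of-service requirements, can be sketched in a few lines. The resource attributes and the greedy cheapest-first policy below are illustrative assumptions, not a real grid broker:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    available: bool
    capability: int   # e.g. CPU cores
    cost: float       # cost per hour, arbitrary units

def select_resources(resources, min_capability, budget):
    """Pick available resources that meet the capability requirement,
    cheapest first, until the budget is exhausted."""
    candidates = [r for r in resources
                  if r.available and r.capability >= min_capability]
    chosen, spent = [], 0.0
    for r in sorted(candidates, key=lambda r: r.cost):
        if spent + r.cost <= budget:
            chosen.append(r)
            spent += r.cost
    return chosen

pool = [
    Resource("siteA", True, 16, 2.0),
    Resource("siteB", True, 8, 1.0),
    Resource("siteC", False, 32, 0.5),  # offline: skipped
    Resource("siteD", True, 4, 0.2),    # too small: skipped
]
picked = select_resources(pool, min_capability=8, budget=2.5)
print([r.name for r in picked])  # ['siteB']: siteA would exceed the budget
```

A real resource broker would, of course, also weigh performance history and dynamic load, but the shape of the decision is the same.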
BEGINNINGS OF THE GRID
Parallel computing in the 1980s focused researchers’ efforts on the
development of algorithms, programs and architectures that supported simultaneity.
During the 1980s and 1990s, software for parallel computers focused on providing
powerful mechanisms for managing communication between processors, and
development and execution environments for parallel machines. Successful application
paradigms were developed to leverage the immense potential of shared and distributed
memory architectures. Initially it was thought that the Grid would be most useful in
extending parallel computing paradigms from tightly coupled clusters to geographically
distributed systems. However, in practice, the Grid has been utilized more as a
platform for the integration of loosely coupled applications – some components of
which might be running in parallel on a low-latency parallel machine – and for linking
disparate resources (storage, computation, visualization, instruments). Coordination
and distribution are thus two fundamental concepts in Grid computing.
The first modern Grid is generally considered to be the Information Wide Area
Year (I-WAY). Developing infrastructure and applications for the I-WAY provided a
seminal and powerful experience for the first generation of modern Grid researchers
and projects. Grid research focuses on addressing the problems of integration and
management of software.
GRID COMPUTING CHARACTERISTICS
An enterprise-computing grid is characterized by primary features including -
• Diversity; and
• Decentralization.
Diversity:
A typical computing grid consists of many hundreds of managed resources of
various kinds including servers, storage, Database Servers, Application Servers,
Enterprise Applications, and system services like Directory Services, Security and
Identity Management Services, and others. Managing these resources and their life
cycle is a complex challenge.
Decentralization:
Traditional distributed systems have typically been managed from a central
administration point, which already poses scaling challenges. A computing grid further
compounds these challenges, since its resources can be even more decentralized and
may be geographically distributed across many different data centers within an
enterprise.
TYPES OF GRID
Grid computing can be used in a variety of ways to address various kinds of application
requirements. Often, grids are categorized by the type of solutions that they best
address. The three primary types of grids are:
Computational grid
A computational grid is focused on setting aside resources specifically for
computing power. In this type of grid, most of the machines are high-performance
servers.
Scavenging grid
A scavenging grid is most commonly used with large numbers of desktop
machines. Machines are scavenged for available CPU cycles and other resources.
Owners of the desktop machines are usually given control over when their resources
are available to participate in the grid.
Data grid
A data grid is responsible for housing and providing access to data across
multiple organizations. Users are not concerned with where this data is located as long
as they have access to the data. For example, you may have two universities doing life
science research, each with unique data. A data grid would allow them to share their
data, manage the data, and manage security issues such as who has access to what
data.
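The idea that users ask for data by name without caring where it lives can be sketched as a toy replica catalog with an access-control check; the logical names, replica URLs and user lists below are invented for illustration:

```python
# Hypothetical replica catalog for a data grid: a user asks for a logical
# name and receives a physical location, without knowing where it lives.
catalog = {
    "genome/sample1.dat": ["ftp://uniA.edu/data/s1.dat",
                           "ftp://uniB.edu/mirror/s1.dat"],
}
# Per-file access control: who may read each logical name.
acl = {"genome/sample1.dat": {"alice", "bob"}}

def locate(user, logical_name):
    """Return one physical replica of logical_name, enforcing the ACL."""
    if user not in acl.get(logical_name, set()):
        raise PermissionError(f"{user} may not read {logical_name}")
    return catalog[logical_name][0]   # first listed replica

print(locate("alice", "genome/sample1.dat"))  # ftp://uniA.edu/data/s1.dat
```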
THE KIND OF GRID TOOLS
Infrastructure components include file systems, schedulers and resource
managers, messaging systems, security applications, certificate authorities, and file
transfer mechanisms like Grid FTP.
• Directory services. Systems on a grid must be capable of discovering what
services are available to them. In short, Grid systems must be able to define
(and monitor) a grid’s topology in order to share and collaborate. Many Grid
directory services implementations are based on past successful models, such as
LDAP, DNS, network management protocols, and indexing services.
• Schedulers and load balancers. One of the main benefits of a grid is maximizing
efficiency. Schedulers and load balancers provide this function and more.
Schedulers ensure that jobs are completed in some order (priority, deadline,
urgency, for instance) and load balancers distribute tasks and data management
across systems to decrease the chance of bottlenecks.
• Developer tools. Every arena of computing endeavor requires tools that allow
developers to succeed. Tools for grid developers focus on different niches (file
transfer, communications, environment control), and range from utilities to full-
blown APIs.
• Security. Security in a grid environment can mean authentication and
authorization -- in other words, controlling who/what can access a grid’s resources
-- but it can mean a lot more. For instance, message integrity and message
confidentiality are crucial to financial and healthcare environments.
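The scheduling and load-balancing roles described in the bullets above can be sketched in a few lines. The priority queue and least-loaded policy here are minimal illustrative stand-ins for a real grid scheduler:

```python
import heapq

class Scheduler:
    """Toy priority scheduler: lower number means more urgent."""
    def __init__(self):
        self._queue, self._seq = [], 0
    def submit(self, job, priority):
        # the sequence counter breaks ties in submission order
        heapq.heappush(self._queue, (priority, self._seq, job))
        self._seq += 1
    def next_job(self):
        return heapq.heappop(self._queue)[2]

def least_loaded(loads):
    """Load balancer policy: pick the node with the fewest running tasks."""
    return min(loads, key=loads.get)

sched = Scheduler()
sched.submit("backup", priority=5)
sched.submit("payroll", priority=1)
sched.submit("report", priority=5)
print(sched.next_job())                        # payroll (most urgent)
print(least_loaded({"node1": 3, "node2": 1}))  # node2 (least loaded)
```

Real schedulers add deadlines, preemption and fairness on top of this, but priority ordering and least-loaded placement are the core ideas the bullets describe.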
GRID COMPONENTS:A HIGH LEVEL PERSPECTIVE
Depending on the grid design and its expected use, some of these components
may or may not be required, and in some cases they may be combined to form a
hybrid component.
Portal/user interface
Just as a consumer sees the power grid as a receptacle in the wall, a grid user
should not see all of the complexities of the computing grid. Although the user
interface can come in many forms and be application-specific, a grid portal typically
provides the interface for a user to launch applications that will use the resources
and services provided by the grid. From this perspective, the user sees the grid as a
virtual computing resource, just as the consumer of power sees the receptacle as an
interface to a virtual generator.
Figure 2: Possible user view of a grid
Security
A major requirement for Grid computing is security. At the base of any grid
environment, there must be mechanisms to provide security, including authentication,
authorization, data encryption, and so on. The Grid Security Infrastructure (GSI)
component of the Globus Toolkit provides robust security mechanisms. The GSI
includes an OpenSSL implementation. It also provides a single sign-on mechanism, so
that once a user is authenticated, a proxy certificate is created and used when
performing actions within the grid. When designing your grid environment, you may
use the GSI sign-in to grant access to the portal, or you may have your own security
for the portal.
Figure 5: Scheduler
Job and resource management
With all the other facilities we have just discussed in place, we now get to the
core set of services that help perform actual work in a grid environment. The Grid
Resource Allocation Manager (GRAM) provides the services to actually launch a job on
a particular resource, check its status, and retrieve its results when it is complete.
Figure 7: GRAM
Job flow in a grid environment
When enabling an application for a grid environment, it is important to keep in
mind these components and how they relate to and interact with one another.
Depending on your grid implementation and application requirements, there are many
ways in which these pieces can be put together to create a solution.
ADVANTAGES
Grid computing is about getting computers to work together. Almost every
organization is sitting on top of enormous, unused computing capacity, widely
distributed; mainframes are idle 40% of the time. With Grid computing, businesses can
optimize computing and data resources, pool them for large-capacity workloads, share
them across networks, and enable collaboration. Many consider Grid computing the
next logical step in the evolution of the Internet, and maturing standards and a drop in
the cost of bandwidth are fueling the momentum we're experiencing today.
CHALLENGES OF GRID
A word of caution should be given to the overly enthusiastic. The grid is not a
silver bullet that can take any application and run it 1,000 times faster without the
need for buying any more machines or software. Not every application is suitable or
enabled for running on a grid. Some kinds of applications simply cannot be
parallelized. For others, it can take a large amount of work to modify them to achieve
faster throughput. The configuration of a grid can greatly affect the performance,
reliability, and security of an organization's computing infrastructure. For all of these
reasons, it is important to understand how far the grid has evolved today and
which features are coming tomorrow or in the distant future.
CONCLUSION
Grid computing introduces a new concept to IT infrastructures because it
supports distributed computing over a network of heterogeneous resources and is
enabled by open standards. Grid computing works to optimize underutilized resources,
decrease capital expenditures, and reduce the total cost of ownership. This solution
extends beyond data processing and into information management as well.
Information in this context covers data in databases, files, and storage devices. In this
article, we outline potential problems and the means of solving them in a distributed
environment.
BIBLIOGRAPHY
[1] www.ibm.com/grid/index.html
[2] Foster, I. and Kesselman, C. (eds) (1999) The Grid: Blueprint for a New
Computing Infrastructure. San Francisco, CA: Morgan Kaufmann.
[3] Berman, F., Fox, G. and Hey, T. (2003) Grid Computing: Making the Global
Infrastructure a Reality. Chichester: John Wiley & Sons.
[4] Web Site associated with book, Grid Computing: Making the Global
Infrastructure a Reality,
http://www.grid2002.org.
E-Learning: Learning Through Internet
By:
Abstract:
This paper presents an approach for integrating e-learning with the traditional
education system. A conceptual map is created for this integration, leading to a
functional model for open and flexible learning. In the proposed integration,
convergence of CD-based, class-based and web-based education is recommended,
and an architecture to achieve this convergence is presented. In order to transform
existing schools, colleges and universities into digital campuses, an inclusive system
architecture is designed for the digital campus. A case study is given of an actual
implementation in a conventional school. Integration of e-learning with traditional
education is not only possible but also highly effective with the proposed model.
Introduction:
E-learning is part of the Web technologies. What is e-learning? Advances in
Information and Communication Technologies (ICT) have significantly transformed the
way we live and work today. This transformation is so ubiquitous and pervasive that
we often talk about the emergence of the “knowledge economy”, the “knowledge-based
society” and the “knowledge era”.
ICT has had three fundamental effects on human society. One, the barrier of
geography has been dissolved; “Distance is dead” is the new metaphor. Second, the
concept of time itself has undergone change: one can now interact synchronously as
well as asynchronously, events can be simulated, and activities can take place
in cyberspace on pre-defined time scales. Lastly, ICT allows the individual to carry
along the environment in which he is immersed. No wonder advances in ICT have
impacted human civilization more than anything else in history.
Experiments done at ETH Research Lab show that the success of e-learning
strategies depends on how best we can combine all the three learning modes. This is
the best method for integrating e-learning with the traditional teaching process. The e-
learning technology can create an open and flexible learning environment with the
convergence of CD based, class based and web based education.
Conceptual Model:
Traditionally our education system has two distinct processes, namely the
learning process and the learning administration or support process. The learning
system caters to the development of skills and competencies in learners through
personal learning, group learning in class, learning from teachers and experts, and
learning from the experiences of self and others.
The e-learning system architecture must address both the learning process and
the learning support process. In e-learning, the support processes are as important as
the content, and services have to be provided by educational service providers (ESPs)
just as experts normally do on campuses.
ETH Research Lab has advanced an open and flexible e-learning system
architecture, as shown in Fig. 1, providing convergence of formal, non-formal and
informal education and bringing educational service providers, activity providers and
learning resources onto one single platform for achieving mass personalized education.
The conceptual model has all the required subsystems, namely the learning system,
learning support system, delivery system, maintenance-development-production (MDP)
system, and total administration and QA system.
Learning system of traditional education system will be enhanced with the
availability and accessibility of virtual learning resources.
The Course Repository stores the course structures, which integrate the learning
objects and sequence them.
The Learning Planner assists the learner with appropriate alternative plans
corresponding to the learner's learning objective. Finally, it sends the user
information to the User Repository.
Figure: The Digital Campus portal binding Digital Schools 1 to “n” and the digital
learner through web services (WSDL, UDDI).
Case Study:
Lectures can be enhanced with electronic media and projected to students in
classrooms. Interactive CDs help learners visualize and understand the matter, and
teachers can use computers to make lectures more lively and effective through
multimedia. A document management system can help with the issue and management
of bona fide certificates, leaving certificates and other statutory certificates for
students and staff. A workflow management system allows management of processes
like lecture scheduling and leave management, while library management helps in the
issue and procurement of books.
The concept of the Digital School has been implemented where we have
digitalized the operations of the school, with CD-based content of the school syllabus
for teachers and students and web-based support and enrichment material. It has got
wide acceptance from teachers and learners, as it reflects the processes and
methodologies they have been following in the school.
This system now helps in the admission process, scheduling of lectures, managing
conventional and digital library resources, lectures on demand, student notes, group
learning through Special Interest Groups, pre- and post-examination processes,
accounts and finance management, budgetary control, and asset management of the
school, along with the day-to-day administration of attendance and payroll and the
document management of statutory certificates for students.
Currently the challenge lies in getting the schools into the e-learning framework,
orientation of teachers and making the teachers use the system on their own. Other
challenges are installation and maintenance of infrastructure as well as organizing
budgets for the digital campus initiative in each institution.
Conclusion:
After going through the survey work of implementing e-learning in several
schools and colleges with the Open and Flexible Learning Environment architecture,
we feel that success lies in the co-existence and integration of traditional and
e-learning strategies.
Current barriers to the success of these convergence models are mindset, budgets
for infrastructure, preparedness of teachers and local support. However, with the
marked reduction in the cost of PCs, laptops, networking and servers, the introduction
of IT subjects in school syllabi, the spread of affordable internet and growing
awareness of the benefits of IT in education, the perceived barriers are getting
dissolved.
However, there is a great challenge of quality content creation. This must be
compliant with emerging international standards such as SCORM/IMS to be
reusable and modifiable. Our objective in this paper was to present an implementable
architecture for the integration of e-learning with traditional education in schools,
colleges and universities. We have also presented the results of our implementations
as a case study, proving the validity of our architecture and strategies.
References:
1. White paper on “An Implementable Architecture of an e-Learning System” by
Xiaofei Liu, Abdulmotaleb El Saddik and Nicolas D. Georganas.
2. “e-Learning Interoperability Standards”, Sun white paper, January 2002 -
www.sun.com/products-n-solutions/edu/whitepapers/index.html
3. “e-Learning Theories and Pedagogical Strategies” - http://www.homestead.com
4. “Evaluating the Effectiveness of e-Learning Strategies for Small and Medium
Enterprises” - Eduardo Figueira
5. “Triple Play Solutions for the Student Residence” - Allied Telesyn
JNTU COLLEGE OF ENGINEERING ANANTAPUR
(AUTONOMOUS)
Presented by ---
Siddu_avinash_07@yahoo.co.in
rkpavan513@gmail.com
Abstract
Data warehousing provides architectures and tools for business
executives to systematically organize, understand, and use their data to make
Strategic decisions. Data Warehouses are arguably among the best resources a
modern company owns. As enterprises operate day in day out, the data warehouse
may be updated with a myriad of business process and transactional information:
orders, invoices, customers, shipments and other data together form the corporate
operations archive.
As the volume of data in the warehouse continues to grow, so does the time it
takes to mine (extract) the required data. Individual queries, as well as loading data,
can consume enormous amounts of processing power and time, impeding other data
warehouse activity, and customers experience slow response times while the
information technology (IT) budget shrinks; this is the data warehouse dilemma.
Ideal solutions, which perform all the work speedily and without cost, are
obviously impractical to consider. The near-ideal solution would 1) help reduce load
process time, and 2) optimize available resources for the analysis, and achieve these
two tasks without the need to invest in buying additional compute resources.
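The near-ideal solution's goals, spreading warehouse work over resources the organization already owns, can be sketched with a thread pool standing in for grid nodes; `parse_chunk` and the chunk names below are placeholders for real ETL work:

```python
from concurrent.futures import ThreadPoolExecutor

def parse_chunk(name):
    """Stand-in for real extract/transform/load work on one data chunk."""
    return f"{name}:loaded"

# Warehouse load split into independent chunks that can run side by side.
chunks = ["orders_1", "orders_2", "invoices_1", "shipments_1"]

with ThreadPoolExecutor(max_workers=4) as pool:   # 4 workers = 4 "grid nodes"
    results = list(pool.map(parse_chunk, chunks)) # map preserves chunk order
print(results)
```

On a real grid the workers would be idle machines across the organization rather than threads in one process, but the decomposition of the load into parallel chunks is the same.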
Grid Technology:
Grid technology is a form of distributed computing that involves
coordinating and sharing computing, application, data, storage, or network resources
across dynamic and geographically dispersed organizations, Grid technologies promise
to change the way organizations tackle complex computational problems.
With a Grid, networked resources (desktops, servers, storage, databases, even
scientific instruments) can be combined to deploy massive computing power wherever
and whenever it is needed most. Users can find resources quickly, use them efficiently,
and scale them seamlessly. There are various types of grids, viz. cluster grids, campus
grids, global grids, etc.
Conclusion:
Grid computing introduces a new concept to IT infrastructures because it
supports distributed computing over a network of heterogeneous resources and is
enabled by open standards. Grid computing, which helps optimize underutilized
resources, decrease expenses and reduce costs, has helped organizations accelerate
business processes, enable more innovative applications, enhance productivity, and
improve the resiliency of IT infrastructure.
Finally, applying grid technology to a data warehouse can reduce cost without
purchasing additional hardware.
References:
sun.com/software/grid/
gridcomputing.com
gridpartner.com
ibm.com/grid/
Network security and protocols
VULNERABILITY IN TCP/IP
PRESENTED BY:
However, there exist several security vulnerabilities in the TCP specification and
additional weaknesses in a number of widely available implementations of TCP.
These vulnerabilities may enable an intruder to "attack" TCP-based systems,
enabling him or her to "hijack" a TCP connection or cause denial of service to
legitimate users. We discuss some of the flaws present in the TCP implementations of
many widely used operating systems and provide recommendations to improve the
security state of a TCP-based system, e.g., incorporation of a "timer escape route"
from every TCP state.
There are some inherent security problems in the TCP/IP suite which make
the situation conducive to intruders. TCP sequence number prediction, IP
address spoofing, misuse of IP's source routing principle, use of Internet Control
Message Protocol (ICMP) messages, denial of service, etc. are some methods to exploit
the network's vulnerabilities. Considering the fact that most important application
programs, such as the Simple Mail Transfer Protocol (SMTP), Telnet, the
r-commands (rlogin, rsh, etc.) and the File Transfer Protocol (FTP), have TCP as their
transport layer, security flaws in TCP can prove to be very hazardous for the network.
The objectives of this paper are to identify and analyze the vulnerabilities of TCP/IP
and to develop security enhancements to overcome those flaws. Our work is based on
analyzing the state transition diagram of TCP and determining the security relevance
of some of the “improperly-defined” transitions between different states in the state
transition diagram of many widely used TCP code implementations.
TCP/IP
The TCP protocol uses a simple method to establish communications between
computers. The originating computer (the one trying to start a communications
"session") sends an initial bit of data, or packet, called a "SYN" to the computer or
other device with which it needs to communicate. The "target" computer answers the
originating computer with a combined "SYN-ACK" packet, or acknowledgement. The
originating computer then returns an "ACK" packet back to the "target" computer.
This process is referred to as the SYN-ACK "handshake" or the "three-way handshake"
and is characteristic of all TCP/IP communications. This process is illustrated in Figure
B.6-1.
Figure B.6-1: The TCP three-way handshake (SYN, SYN+ACK, ACK) between two
hosts across the network, followed by request processing, a response with data, and
connection close.
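The handshake in Figure B.6-1 can be observed from ordinary code: the operating system itself performs the SYN, SYN-ACK and ACK exchange the moment a client calls connect(). A minimal sketch with both endpoints on localhost:

```python
import socket
import threading

# The kernel performs SYN / SYN-ACK / ACK for us on connect(); this sketch
# just shows the two endpoints of that handshake on one machine.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, _addr = server.accept()    # returns once the handshake completes
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # triggers SYN -> SYN-ACK -> ACK
greeting = b""
while len(greeting) < 5:             # read until the 5-byte greeting arrives
    chunk = client.recv(5 - len(greeting))
    if not chunk:
        break
    greeting += chunk
print(greeting)                      # b'hello'
client.close()
t.join()
server.close()
```

Note that the application never sees the SYN or ACK packets themselves; capturing those requires a packet sniffer, which is exactly why the protocol-level attacks described next happen below the application's line of sight.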
One way that “hackers” exploit TCP/IP is to launch what is called a SYN attack
(sometimes called “SYN Flooding”) which takes advantage of how hosts implement
the “three-way handshake.” When the “target” computer (illustrated in B.6-1)
receives a SYN request from the originating computer, it must keep track of the
partially opened connection in what is called a "listen queue" for at least 75
seconds. This was built into TCP/IP to allow successful connections even with long
network delays.
The problem with doing this is that sometimes the TCP/IP is configured so that it
can only keep track of a limited number of connections (most do 5 connections
by default, although some can track up to 1024). A person with malicious intent
can exploit the small size of the listen queue by sending multiple SYN requests
to a computer or host, but never replying to the SYN-ACK the “target” sends
back. By doing this, the host's listen queue is quickly filled up, and it will stop
accepting new connections, until a partially opened connection in the queue is
completed or times out.
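The "listen queue" described above corresponds to the backlog argument of listen() in the BSD sockets API; a minimal sketch:

```python
import socket

# The listen queue the text describes is set by the backlog argument of
# listen(): connections whose handshake is pending (or complete but not
# yet accept()ed) wait here. A SYN flood aims to keep this queue full.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))   # port 0: let the OS assign a free port
s.listen(5)                # queue at most ~5 pending connections
port = s.getsockname()[1]  # the port the OS actually assigned
print("listening on port", port)
s.close()
```

Modern systems let administrators raise this limit (and add protections such as SYN cookies), which is why the small default queues mentioned above are far less exploitable today.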
The classic SYN flood was introduced in 1993 by members of the CompuServe
"Internet chat" community as a method of removing "undesirables" from chat
rooms or networks. The first UNIX programs to utilize this method were
synflood.c (circa 1993) and nuke.c by Satanic Mechanic (circa 1992/1993). This
ability to effectively remove a host from the network for at least 75 seconds can
be used solely as a denial-of-service attack, or it can be used as a tool to
implement other attacks, like IP spoofing.
Vulnerabilities
IP Spoofing
The Internet Protocol (IP) portion of TCP/IP is the part that carries the
information describing where a packet is coming from and where it is going.
This information is called the "IP address." "IP spoofing" is an attack where an
attacker pretends to be sending data from an IP address other than his own.
TCP/IP assumes that the source address on any IP packet it receives is the same
IP address as the system that actually sent the packet (which is a vulnerability
of TCP/IP, in that it incorporates no authentication). Many higher-level protocols
and applications also make this assumption, so anyone able to fake (or "forge")
the source address of an IP packet (called "spoofing" an address) could gain
authorized privileges despite being an unauthorized user.
There are two disadvantages to this spoofing technique. The first is that all
communication is likely to be one-way. The remote host will send all replies to
the spoofed source address, not to the host actually doing the spoofing. So, an
attacker using IP spoofing is unlikely to see output from the remote system
unless he has some other method of eavesdropping on the network between the
other two hosts. Additional information is available in the IT white paper,
Intrusion Prevention & Detection. The second disadvantage is that an attacker
needs to use the correct sequence numbers if he plans on establishing a TCP
connection with the compromised host. Many "common" applications or services
that run on many operating systems, like Telnet and FTP, use TCP. The final ACK
in the three-way handshake must contain the other host's first sequence
number, known as the initial sequence number or ISN. If TCP/IP does not "see"
this ISN in the ACK, the connection cannot be completed. Because the SYN+ACK
packet containing the ISN is sent to the real host, an attacker must "steal" this
ISN by some technical method. If he could "eavesdrop" on the packets sent from
the other host (using an IDS or protocol analyzer), he could see the ISN.
Unfortunately, attackers have developed new ways to overcome both of these
challenges to IP spoofing.
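The role of the ISN can be shown numerically: if a stack generates ISNs by a fixed increment, as some early implementations did, a blind attacker who has seen one ISN can predict the next, while a random 32-bit ISN makes the guess hopeless. The increment value below is illustrative:

```python
import random

def predictable_isn(previous):
    """Fixed-increment ISN policy in the style of early TCP stacks."""
    return (previous + 64000) % 2**32

observed = 1_000_000                      # ISN the attacker sniffed earlier
next_isn = predictable_isn(observed)      # what the victim will use next
attacker_guess = predictable_isn(observed)
print(attacker_guess == next_isn)         # True: blind spoofing succeeds

random_next = random.getrandbits(32)      # modern randomized ISN
print(attacker_guess == random_next)      # almost certainly False
```

With a random ISN the attacker's chance of guessing correctly on one try is 1 in 2^32, which is why ISN randomization became the standard defense against blind spoofing.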
Source Routing
Another way to do IP spoofing makes use of an IP option which is rarely used
called "source routing." Source routing allows the originating host to specify the
path (route) that the receiver should use to reply to it. Any attacker can take
advantage of this by specifying a route that by-passes the real host, and instead
directs replies to a path it can monitor (probably to itself). Although simple, this
attack may not be as successful now, as most routers are commonly configured
to drop packets with source routing enabled.
Note that "ignored" or discarded packets may actually generate ACKs, rather
than being completely ignored. When the other end receives packets with wrong
sequence numbers, it sends back an ACK packet with the sequence number it is
expecting. The receiver of these ACKs discards them, since they have the wrong
sequence numbers, and then sends its own ACK to notify the sender. In
this way, a large number of ACKs are generated, forming the attack. This is a
classic "signature" employed by Intrusion Detection Systems (IDS) to detect
session hijacking. Additional information on intrusion detection is available in the
IT white paper, Intrusion Prevention & Detection.
ICMP Attack
The Internet Control Message Protocol (ICMP) is used in networking to send a
one-way informational message to a host or device. The most common use of
ICMP is the "PING" utility. This application sends an ICMP "echo request" to a
host, and waits for that host to send back an ICMP "echo reply" message. This
utility application is very small, has no method of authentication, and is
consequently used as a tool by an attacker to intercept packets or cause a
denial of service.
These are just some of the more common attacks that can occur because of the
nature and design of the TCP/IP protocol. As we mentioned earlier, we must
employ additional technology and practices to compensate for this shortfall in
TCP/IP. These measures are characterized as Internet security.
Figure: A secured web configuration. An authorized user on the Internet passes
through a boundary firewall, a network switch and intrusion detection before reaching
a web server running the SSL protocol; a certificate server (authentication) verifies
the certificate of the user's web browser, and a directory server holds the user
identity.
For remote access configurations, remote access client software is installed on the
remote computing device and this software will create the VPN “tunnel” from the
remote computing device through the firewall to the VPN hardware providing the
secure connection.
This process is illustrated in figure B.6-5. Note that before the VPN establishes the
secure “session” or transmission, proper authentication must take place. If we have
determined that our remote access requires the security of a VPN, then we will usually
require at least two forms of authentication. This is known as “two-factor”
authentication and provides a higher level of security than a user name and password.
Additional information on two-factor authentication is available in the IT white paper,
Password and PIN Security.
Conclusion
The main objective of this paper was to identify and analyze some new vulnerabilities
of TCP/IP. We have discussed different attacks that can be launched by an intruder
who manipulates the security flaws in the TCP/IP specification as well as its
implementations. We have analyzed the TCP source code and identified spurious
state transitions present in the implementation of TCP in several operating systems.
We have analyzed how TCP behaves for various combinations of input packets. Finally,
we provide several recommendations to plug some of these flaws in TCP and its
implementations.
References: