
Q. To study communication guidance systems


Guidance system
A guidance system is a device or group of devices used to
navigate a ship, aircraft, missile, rocket, satellite, or other
craft. Typically, this refers to a system that navigates without
direct or continuous human control. Systems that are
intended to have a high degree of human interaction are
usually referred to as a navigation system.
One of the earliest examples of a true guidance system is that
used in the German V-1 during World War II. This system
consisted of a simple gyroscope to maintain heading, an
airspeed sensor to estimate flight time, an altimeter to
maintain altitude, and other redundant systems.
A guidance system has three major sub-sections: Inputs, Processing, and Outputs. The input
section includes sensors, course data, radio and satellite links, and other information sources. The
processing section, composed of one or more CPUs, integrates this data and determines what
actions, if any, are necessary to maintain or achieve a proper heading. This is then fed to the
outputs, which can directly affect the system's course. The outputs may control speed by
interacting with devices such as turbines and fuel pumps, or they may more directly alter course
by actuating ailerons, rudders, or other devices.
Guidance systems
Guidance systems consist of three essential parts: navigation, which tracks current location;
guidance, which leverages navigation data and target information to direct flight control "where
to go"; and control, which accepts guidance commands to effect change in aerodynamic and/or
engine controls.
Navigation is the art of determining where you are, a science that has seen tremendous focus
since the Longitude Prize of 1714. Navigation aids either measure position from a fixed point of
reference (e.g. a landmark, the North Star, a LORAN beacon), measure position relative to a
target (e.g. radar, infra-red), or track movement from a known position/starting point (e.g. IMU). Today's
complex systems use multiple approaches to determine current position. For example, one of
today's most advanced navigation systems is embodied in the anti-ballistic RIM-161 Standard
Missile 3, which leverages GPS, IMU and ground-segment data in the boost phase and relative
position data for intercept targeting. Complex systems typically have multiple redundancy to
address drift, improve accuracy (e.g. relative to a target) and address isolated system failure.
Navigation systems therefore take multiple inputs from many different sensors, both internal to
the system and external (e.g. ground-based updates). The Kalman filter provides the most common
approach to combining navigation data from multiple sensors to resolve current position.
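The fusion step described above can be sketched with a minimal one-dimensional Kalman update; the sensor variances below are illustrative values, not taken from any real system:

```python
# Minimal 1-D Kalman filter: fuse noisy position fixes from several
# sensors into a single position estimate. Variances are illustrative.

def kalman_update(est, est_var, meas, meas_var):
    """Fold one measurement into the current estimate."""
    k = est_var / (est_var + meas_var)      # Kalman gain
    new_est = est + k * (meas - est)        # corrected estimate
    new_var = (1.0 - k) * est_var           # reduced uncertainty
    return new_est, new_var

# Start from an IMU-derived position with large drift uncertainty,
# then fold in a GPS fix and a radar fix.
est, var = 105.0, 25.0                            # IMU estimate (m), variance
est, var = kalman_update(est, var, 100.0, 4.0)    # GPS fix
est, var = kalman_update(est, var, 101.0, 9.0)    # radar fix
print(round(est, 2), round(var, 2))
```

Each update pulls the estimate toward the more trustworthy measurement and shrinks the variance, which is exactly why fusing several sensors beats any one of them alone.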
Example navigation approaches:
Celestial navigation is a position fixing technique that was devised to help sailors cross
the featureless oceans without having to rely on dead reckoning to enable them to strike
land. Celestial navigation uses angular measurements (sights) between the horizon and a
common celestial object. The Sun is most often measured. Skilled navigators can use the
Moon, planets or one of 57 navigational stars whose coordinates are tabulated in nautical
almanacs. Historical tools include a sextant, watch and ephemeris data. Today's space
shuttle, and most interplanetary spacecraft, use optical systems to calibrate inertial
navigation systems: the Crewman Optical Alignment Sight (COAS)[9] and star trackers.[10]

Long-range Navigation (LORAN) : This was the predecessor of GPS and was (and to an
extent still is) used primarily in commercial sea transportation. The system fixes the ship's
position from the measured time differences between signals from pairs of known transmitters.
Global Positioning System (GPS) : GPS was designed by the US military with the
primary purpose of addressing "drift" within the inertial navigation of submarine-
launched ballistic missiles (SLBMs) prior to launch. GPS transmits two signal types: a
military signal and a civilian signal. The accuracy of the military signal is classified but can
be assumed to be well under 0.5 meters. GPS is a system of 24 satellites orbiting in six
planes roughly 10,900 nautical miles above the earth. The satellites are in well-defined orbits
and transmit highly accurate time information which can be used to compute position.
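The range-based position fix can be illustrated with a simplified two-dimensional trilateration; real GPS solves the three-dimensional problem, plus a receiver clock-bias term, from satellite time-of-flight measurements. The beacon positions and ranges below are invented for illustration:

```python
import math

# 2-D trilateration sketch: recover a receiver position from measured
# ranges to three beacons at known coordinates.

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Beacons at known positions; ranges measured to an unknown receiver.
pos = trilaterate((0, 0), 5.0, (10, 0), math.sqrt(65), (0, 10), math.sqrt(45))
print(pos)  # the receiver sits at (3.0, 4.0)
```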
Inertial Measurement Units (IMUs) are the primary inertial system for maintaining
current position (navigation) and orientation in missiles and aircraft. They are complex
machines with one or more rotating gyroscopes that can rotate freely in three degrees of
freedom within a complex gimbal system. IMUs are "spun up" and calibrated prior to
launch. A minimum of 3 separate IMUs are in place within most complex systems. In
addition to relative position, the IMUs contain accelerometers which can measure
acceleration along all axes. The position data, combined with acceleration data, provide the
necessary inputs to "track" the motion of a vehicle. IMUs have a tendency to "drift" due to
friction and limited sensor accuracy. Error correction to address this drift can be provided via ground
link telemetry, GPS, radar, optical celestial navigation and other navigation aids. When
targeting another (moving) vehicle, relative vectors become paramount. In this situation,
navigation aids which provide updates of position relative to the target are more
important. In addition to the current position, inertial navigation systems also typically
estimate a predicted position for future computing cycles. See also Inertial navigation
system.
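The drift behaviour described above can be sketched by dead reckoning: integrating measured acceleration twice. A tiny constant accelerometer bias (the figures below are illustrative) grows quadratically in the position estimate, which is why external fixes are needed:

```python
# Dead-reckoning sketch: track position by integrating acceleration
# twice, as an inertial navigation system does between external fixes.

def integrate(accels, dt):
    vel, pos = 0.0, 0.0
    for a in accels:
        vel += a * dt      # first integration: velocity
        pos += vel * dt    # second integration: position
    return pos

dt, steps = 0.01, 1000           # 10 s of samples at 100 Hz
true = [0.0] * steps             # vehicle actually at rest
biased = [0.001] * steps         # 1 mm/s^2 accelerometer bias
print(integrate(true, dt), integrate(biased, dt))
```

After only ten seconds the 1 mm/s^2 bias has already produced about 5 cm of position error, and the error keeps accelerating, so uncorrected IMU output diverges from the true position.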

Radar/Infrared/Laser : This form of navigation provides guidance with information relative
to a known target. It has both civilian (e.g. rendezvous) and military applications.
o active (employs own radar to illuminate the target),
o passive (detects the target's radar emissions),
o semiactive radar homing,
o Infrared homing : This form of guidance is used exclusively for military
munitions, specifically air-to-air and surface-to-air missiles. The missile's seeker
head homes in on the infrared (heat) signature from the target's engines (hence
the term "heat-seeking missile"),
o Ultraviolet homing, used in the FIM-92 Stinger - more resistant to countermeasures
than IR homing systems,
o Laser designation : A laser designator device calculates relative position to a
highlighted target. Most are familiar with the military uses of the technology on
Laser-guided bombs. The space shuttle crew leverages a hand-held device to feed
information into rendezvous planning. The primary limitation of this device is
that it requires a line of sight between the target and the designator.
o Terrain contour matching (TERCOM). Uses a ground scanning radar to "match"
topography against digital map data to fix current position. Used by cruise
missiles such as the BGM-109 Tomahawk.
Guidance is the "driver" of a vehicle. It takes input from the navigation system (where am I) and
uses targeting information (where do I want to go) to send signals to the flight control system
that will allow the vehicle to reach its destination (within the operating constraints of the
vehicle). The "targets" for guidance systems are one or more state vectors (position and velocity)
and can be inertial or relative. During powered flight, guidance continually calculates steering
directions for flight control. For example, the space shuttle targets an altitude, velocity vector,
and flight-path angle (gamma) to drive main engine cut-off. Similarly, an intercontinental ballistic missile also
targets a vector. The target vectors are developed to fulfill the mission and can be preplanned or
dynamically created.
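A guidance loop of this kind can be sketched as a simple proportional steering law; the gain and command limit below are illustrative assumptions, not any vehicle's actual autopilot logic:

```python
import math

# Guidance-loop sketch: from the navigated state (where am I, which way
# am I heading) and a target waypoint (where do I want to go), compute a
# bounded heading-error command for flight control.

def steer(pos, heading_deg, target, gain=1.0, max_cmd=30.0):
    bearing = math.degrees(math.atan2(target[1] - pos[1], target[0] - pos[0]))
    error = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    cmd = gain * error
    return max(-max_cmd, min(max_cmd, cmd))   # respect control limits

# Vehicle at the origin heading due east (0 deg); target to the north-east.
print(steer((0, 0), 0.0, (100, 100)))   # commands a turn, clipped to 30 deg
```

Run every computing cycle against the latest navigation fix, this is the "continually calculating steering directions" loop in miniature: the command goes to zero as the vehicle's heading converges on the target bearing.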
Control. Flight control is accomplished either aerodynamically or through powered controls
such as engines. Guidance sends signals to flight control. A Digital Autopilot (DAP) is the
common term used to describe the interface between guidance and control. Guidance and the
DAP are responsible for calculating the precise instruction for each flight control. The DAP
provides feedback to guidance on the state of flight controls.












Q. Study and verification of standard network topologies.
Network topology



Diagram of different network topologies.
Network topology is the layout pattern of interconnections of the various elements (links, nodes,
etc.) of a computer[1][2] or biological network.[3]
Network topologies may be physical or logical.
Physical topology refers to the physical design of a network including the devices, location and
cable installation. Logical topology refers to how data is actually transferred in a network, as
opposed to its physical design. In general, physical topology describes how the devices are
physically connected, whereas logical topology describes how data flows between them.
Topology can be understood as the shape or structure of a network. This shape does not
necessarily correspond to the actual physical design of the devices on the computer network. The
computers on a home network can be arranged in a circle but it does not necessarily mean that it
represents a ring topology.
Any particular network topology is determined only by the graphical mapping of the
configuration of physical and/or logical connections between nodes. The study of network
topology uses graph theory. Distances between nodes, physical interconnections, transmission
rates, and/or signal types may differ in two networks and yet their topologies may be identical.
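The graph-theoretic view above can be sketched by classifying a topology purely from its node degrees, ignoring distances and transmission rates entirely (node numbering and the link lists are illustrative):

```python
# Graph-theory sketch: a topology is just the set of links between
# nodes, so some shapes can be recognised from node degrees alone,
# which is why two physically different networks can share a topology.

def degrees(links, n):
    d = [0] * n
    for a, b in links:
        d[a] += 1
        d[b] += 1
    return sorted(d)

def classify(links, n):
    d = degrees(links, n)
    if d == [2] * n:
        return "ring"
    if d == [1] * (n - 1) + [n - 1]:
        return "star"
    if d == [1, 1] + [2] * (n - 2):
        return "line (bus-like)"
    return "other"

ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
star = [(0, 1), (0, 2), (0, 3)]
print(classify(ring, 4), classify(star, 4))
```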
A local area network (LAN) is one example of a network that exhibits both a physical topology
and a logical topology. Any given node in the LAN has one or more links to one or more nodes
in the network and the mapping of these links and nodes in a graph results in a geometric shape
that may be used to describe the physical topology of the network. Likewise, the mapping of the
data flow between the nodes in the network determines the logical topology of the network. The
physical and logical topologies may or may not be identical in any particular network.
Bus
Main article: Bus network

Bus network topology
In local area networks where bus topology is used, each node is connected to a single
cable. Each computer or server is connected to the single bus cable. A signal from the
source travels in both directions to all machines connected on the bus cable until it finds
the intended recipient. If the machine address does not match the intended address for the
data, the machine ignores the data. Alternatively, if the data matches the machine
address, the data is accepted. Since the bus topology consists of only one wire, it is rather
inexpensive to implement when compared to other topologies. However, the low cost of
implementing the technology is offset by the high cost of managing the network.
Additionally, since only one cable is utilized, it can be a single point of failure: the bus
cable must be terminated at both ends, since without termination data transfer stops, and
if the cable breaks anywhere, the entire network goes down.
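The accept/ignore behaviour described above can be sketched as follows (station addresses are illustrative strings standing in for hardware addresses):

```python
# Bus-topology sketch: every frame placed on the shared cable reaches
# all stations; each station compares the destination address with its
# own and accepts or ignores the frame.

class Station:
    def __init__(self, address):
        self.address = address
        self.received = []

    def hear(self, frame):
        # Accept only frames addressed to this station.
        if frame["dst"] == self.address:
            self.received.append(frame["data"])

def broadcast(bus, frame):
    for station in bus:        # the signal travels to every station
        station.hear(frame)

bus = [Station("A"), Station("B"), Station("C")]
broadcast(bus, {"dst": "B", "data": "hello"})
print([s.received for s in bus])   # only B accepts the frame
```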











Q. Study various types of correcting techniques.
Community Language Learning
Community language learning (CLL) is an approach in which students work together to
develop what aspects of a language they would like to learn. The teacher acts as a counsellor and
a paraphraser, while the learner acts as a collaborator, although sometimes this role can be
changed.
Examples of these types of communities have recently arisen with the explosion of educational
resources for language learning on the Web.
Background
The CLL method was developed by Charles A. Curran, a professor of psychology at Loyola
University in Chicago.[1] This method refers to two roles: that of the knower (teacher) and
student (learner). The method also draws on the counseling metaphor and refers to these
respective roles as a counselor and a client. According to Curran, a counselor helps a client
understand his or her own problems better by "capturing the essence of the client's concern
... [and] relating [the client's] affect to cognition"; in effect, understanding the client and
responding in a detached yet considerate manner.
To restate, the counselor blends what the client feels and what he is learning in order to make the
experience a meaningful one. Often, this supportive role requires greater energy expenditure than
that of an 'average' teacher.[2]

Methods
Natural Approach
The foreign language learner's tasks, according to CLL, are (1) to apprehend the sound system of
the language, (2) to assign fundamental meanings to individual lexical units, and (3) to construct a
basic grammar.
In these three steps, CLL resembles the Natural Approach to language teaching, in which a
learner is not expected to speak until he has achieved some basic level of comprehension.[3]

There are 5 stages of development in this method.
1. Birth stage: feelings of security and belonging are established.
2. As the learners' abilities improve, they achieve a measure of independence from the parent.
3. Learners can speak independently.
4. The learners are secure enough to accept criticism and correction.
5. The child becomes an adult and becomes the knower.
Online Communities
A new wave of community language learning has come into place with the growth of the internet
and the boom of social networking technologies. These online CLLs are social network services
that take advantage of the Web 2.0 concept of information sharing and collaboration tools, for
which users can help other users to learn languages by direct communication or mutual
correction of proposed exercises.
Barriers in Community Language Learning
When learning a different language while in a multilingual community, there are certain barriers
that one definitely will encounter. The reason for these barriers is that in language learning while
in a multicultural community, native and nonnative groups will think, act, and write in different
ways based on each of their own cultural norms. Research shows that students in multicultural
environments communicate less with those not familiar with their culture. Long-term problems
include that the foreign speakers will have their own terms of expression combined into the
language native to the area, which oftentimes makes for awkward sentences to a native speaker.
Native students tend to develop an exclusive attitude toward the nonnative speaker because they
feel threatened when they do not understand the foreign language. Short-term problems include
the fact that native students will usually lack in-depth knowledge of the nonnative cultures,
which makes them more likely to be unwilling to communicate with the foreign speakers.
Because these foreign students grew up and were educated in a totally different cultural
environment, their ideologies, identities and logic that form in the early age cause different ways
of expressing ideas both in written and spoken form. They will have to modify and redefine their
original identities when they enter a multicultural environment (Shen, 459). This is no easy task.
Consequently, a low level of social involvement and enculturation will occur for both native
and nonnative speakers in the community.
Bandwidth
A DS1 circuit is made up of twenty-four 8-bit channels (also known as timeslots or DS0s), each
channel being a 64 kbit/s DS0 multiplexed carrier circuit.[2] A DS1 is also a full-duplex circuit,
which means the circuit transmits and receives 1.544 Mbit/s concurrently. A total of 1.536
Mbit/s of bandwidth is achieved by sampling each of the twenty-four 8-bit DS0s 8000 times per
second. This sampling is referred to as 8-kHz sampling (See Pulse-code modulation). An
additional 8 kbit/s of overhead is obtained from the placement of one framing bit, for a total of
1.544 Mbit/s, calculated as follows:

(24 channels × 8 bits/channel + 1 framing bit) × 8000 frames/s
= 193 bits/frame × 8000 frames/s
= 1,544,000 bit/s = 1.544 Mbit/s
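The same arithmetic, written out as a short script:

```python
# DS1 arithmetic from the text: 24 channels of 8 bits plus 1 framing
# bit per 193-bit frame, sampled 8000 times per second.

channels, bits_per_channel, framing_bits, frames_per_sec = 24, 8, 1, 8000

payload = channels * bits_per_channel * frames_per_sec   # user bandwidth
overhead = framing_bits * frames_per_sec                 # framing channel
total = payload + overhead                               # line rate
print(payload, overhead, total)   # 1536000 8000 1544000 bit/s
```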
DS1 frame synchronization
Frame synchronization is necessary to identify the timeslots within each 24-channel frame.
Synchronization takes place by allocating a framing, or 193rd, bit. This results in 8 kbit/s of
framing data, for each DS1. Because this 8-kbit/s channel is used by the transmitting equipment
as overhead, only 1.536 Mbit/s is actually passed on to the user. Two types of framing schemes
are Super Frame (SF) and Extended Super Frame (ESF). A Super Frame consists of twelve
consecutive 193-bit frames, whereas an Extended Super Frame consists of twenty-four
consecutive 193-bit frames of data. Due to the unique bit sequences exchanged, the framing
schemes are not compatible with each other. These two types of framing (SF, and ESF) use their
8 kbit/s framing channel in different ways.



















Q. To study various types of routers and bridges.
Network Devices
Routers, brouters, and gateways are inter-networking devices used for connecting different
networks.
Repeaters
A repeater connects two segments of your network cable. It re-times and regenerates the
signals to proper amplitudes and sends them to the other segment. When talking about
Ethernet topology, you are probably talking about using a hub as a repeater. Repeaters require a
small amount of time to regenerate the signal. This can cause a propagation delay which can
affect network communication when there are several repeaters in a row. Many network
architectures limit the number of repeaters that can be used in a row. Repeaters work only at
the physical layer of the OSI network model.
Bridges
A bridge reads the outermost section of data on the data packet, to tell where the message is
going. It reduces the traffic on other network segments, since it does not send all packets.
Bridges can be programmed to reject packets from particular networks. Bridging occurs at the
data link layer of the OSI model, which means the bridge cannot read IP addresses, but only
the outermost hardware address of the packet. In our case the bridge can read the ethernet data
which gives the hardware address of the destination address, not the IP address. Bridges
forward all broadcast messages. Only a special bridge called a translation bridge will allow two
networks of different architectures to be connected. Bridges do not normally allow connection
of networks with different architectures. The hardware address is also called the MAC (media
access control) address. To determine the network segment a MAC address belongs to, bridges
use one of:
Transparent Bridging - They build a table of addresses (bridging table) as they receive
packets. If the address is not in the bridging table, the packet is forwarded to all
segments other than the one it came from. This type of bridge is used on ethernet
networks.
Source route bridging - The source computer provides path information inside the
packet. This is used on Token Ring networks.
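Transparent bridging as described above can be sketched as follows (the MAC addresses and segment names are illustrative):

```python
# Transparent-bridging sketch: the bridge learns which segment each MAC
# address lives on from the source addresses of frames it sees, and
# floods frames whose destination address is still unknown.

class Bridge:
    def __init__(self, segments):
        self.segments = segments
        self.table = {}            # MAC address -> segment (bridging table)

    def forward(self, src, dst, arrived_on):
        self.table[src] = arrived_on          # learn the source address
        if dst in self.table:
            out = self.table[dst]
            # Filter: no need to forward within the same segment.
            return [] if out == arrived_on else [out]
        # Unknown destination: flood to all other segments.
        return [s for s in self.segments if s != arrived_on]

b = Bridge(["seg1", "seg2", "seg3"])
print(b.forward("AA", "BB", "seg1"))   # BB unknown: flood to seg2, seg3
print(b.forward("BB", "AA", "seg2"))   # AA learned: forward to seg1 only
```

The second call shows why bridges reduce traffic: once an address is learned, frames for it go to one segment instead of being flooded everywhere.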
Routers
A router is used to route data packets between two networks. It reads the information in each
packet to tell where it is going. If it is destined for an immediate network it has access to, it
will strip the outer packet (IP packet for example), readdress the packet to the proper ethernet
address, and transmit it on that network. If it is destined for another network and must be sent
to another router, it will re-package the outer packet to be received by the next router and send

it to the next router. Routing occurs at the network layer of the OSI model. They can connect
networks with different architectures such as Token Ring and Ethernet. Although they can
transform information at the data link level, routers cannot transform information from one
data format such as TCP/IP to another such as IPX/SPX. Routers do not send broadcast
packets or corrupted packets. If the routing table does not indicate the proper address of a
packet, the packet is discarded. There are two types of routers:
1. Static routers - Are configured manually and route data packets based on information in a
router table.
2. Dynamic routers - Use dynamic routing algorithms. There are two types of algorithms:
o Distance vector - Based on hop count, and periodically broadcasts the routing
table to other routers which takes more network bandwidth especially with more
routers. RIP uses distance vectoring. Does not work on WANs as well as it does
on LANs.
o Link state - Routing tables are broadcast at startup and then only when they
change. The Open Shortest Path First (OSPF) protocol uses the link-state routing
method to configure routes.
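A single distance-vector table merge can be sketched as follows (router names and hop counts are illustrative; real RIP also handles route timeouts, split horizon, and a 15-hop limit):

```python
# Distance-vector sketch (RIP-style): each router keeps a hop-count
# table and merges the tables its neighbours broadcast, adopting any
# route that is shorter when reached via that neighbour.

def merge(my_table, neighbour_table, cost_to_neighbour=1):
    updated = dict(my_table)
    for dest, hops in neighbour_table.items():
        via = hops + cost_to_neighbour     # cost of going via the neighbour
        if via < updated.get(dest, float("inf")):
            updated[dest] = via
    return updated

a = {"A": 0, "B": 1}             # router A reaches B directly
b_broadcast = {"B": 0, "C": 1}   # neighbour B advertises a route to C
a = merge(a, b_broadcast)
print(a)   # A now reaches C in 2 hops via B
```

Because every router periodically rebroadcasts its whole table, bandwidth use grows with the number of routers, which is the scaling weakness the text attributes to distance vectoring on WANs.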
Common routing protocols include:
IS-IS -Intermediate system to intermediate system which is a routing protocol for the OSI
suite of protocols.
IPX - Internetwork Packet Exchange. Used on NetWare systems.
NLSP - NetWare Link Services Protocol - uses a link-state algorithm similar to OSPF and is
replacing IPX RIP to provide internetworking capability.
RIP - Routing information protocol uses a distance vector algorithm.
There is a device called a brouter which will function similar to a bridge for network transport
protocols that are not routable, and will function as a router for routable protocols. It functions at
the network and data link layers of the OSI network model.
Gateways
A gateway can translate information between different network data formats or network
architectures. It can translate TCP/IP to AppleTalk so computers supporting TCP/IP can
communicate with Apple brand computers. Most gateways operate at the application layer, but
can operate at the network or session layer of the OSI model. Gateways will start at the lower
level and strip information until it gets to the required level and repackage the information and
work its way back toward the hardware layer of the OSI model. To confuse issues, when talking
about a router that is used to interface to another network, the word gateway is often used. This
does not mean the routing machine is a gateway as defined here, although it could be.


Q. Case study of the VoIP concept.
Internet
This article is about the public worldwide computer network system. For other uses, see Internet
(disambiguation).
The Internet is a global system of interconnected computer networks that use the standard
Internet protocol suite (often called TCP/IP, although not all protocols use TCP) to serve billions
of users worldwide. It is a network of networks that consists of millions of private, public,
academic, business, and government networks, of local to global scope, that are linked by a
broad array of electronic, wireless and optical networking technologies. The Internet carries an
extensive range of information resources and services, such as the inter-linked hypertext
documents of the World Wide Web (WWW) and the infrastructure to support email.
Most traditional communications media including telephone, music, film, and television are
reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet
Protocol (VoIP) and Internet Protocol Television (IPTV). Newspaper, book and other print
publishing are adapting to Web site technology, or are reshaped into blogging and web feeds.
The Internet has enabled or accelerated new forms of human interactions through instant
messaging, Internet forums, and social networking. Online shopping has boomed both for major
retail outlets and small artisans and traders. Business-to-business and financial services on the
Internet affect supply chains across entire industries.
The origins of the Internet reach back to research of the 1960s, commissioned by the United
States government in collaboration with private commercial interests to build robust, fault-
tolerant, and distributed computer networks. The funding of a new U.S. backbone by the
National Science Foundation in the 1980s, as well as private funding for other commercial
backbones, led to worldwide participation in the development of new networking technologies,
and the merger of many networks. The commercialization of what was by the 1990s an
international network resulted in its popularization and incorporation into virtually every aspect
of modern human life. As of 2011, more than 2.2 billion people, nearly a third of Earth's
population, use the services of the Internet.[1]

The Internet has no centralized governance in either technological implementation or policies for
access and usage; each constituent network sets its own standards. Only the overreaching
definitions of the two principal name spaces in the Internet, the Internet Protocol address space
and the Domain Name System, are directed by a maintainer organization, the Internet
Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and
standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering
Task Force (IETF), a non-profit organization of loosely affiliated international participants that
anyone may associate with by contributing technical expertise

Terminology
Computer network types by
geographical scope
Near field (NFC)
Body (BAN)
Personal (PAN)
Near-me (NAN)
Local (LAN)
o Home (HAN)
o Storage (SAN)
Campus (CAN)
Backbone
Metropolitan (MAN)
Wide (WAN)
Internet
Interplanetary Internet
See also: Internet capitalization conventions
Internet is a short form of the technical term internetwork,[2] the result of interconnecting
computer networks with special gateways or routers. The Internet is also often referred to as the
Net.
The term the Internet, when referring to the entire global system of IP networks, has been treated
as a proper noun and written with an initial capital letter. In the media and popular culture, a
trend has also developed to regard it as a generic term or common noun and thus write it as "the
internet", without capitalization. Some guides specify that the word should be capitalized as a
noun but not capitalized as an adjective.[3][4]

The terms Internet and World Wide Web are often used in everyday speech without much
distinction. However, the Internet and the World Wide Web are not one and the same. The
Internet establishes a global data communications system between computers. In contrast, the
Web is one of the services communicated via the Internet. It is a collection of interconnected
documents and other resources, linked by hyperlinks and URLs.[5]

Routing


Internet packet routing is accomplished among various tiers of Internet Service Providers.
Internet Service Providers connect customers (thought of at the "bottom" of the routing
hierarchy) to customers of other ISPs. At the "top" of the routing hierarchy are ten or so Tier 1
networks, large telecommunication companies which exchange traffic directly "across" to all
other Tier 1 networks via unpaid peering agreements. Tier 2 networks buy Internet transit from
other ISPs to reach at least some parties on the global Internet, though they may also engage in
unpaid peering (especially for local partners of a similar size). ISPs can use a single "upstream"
provider for connectivity, or use multihoming to provide protection from problems with
individual links. Internet exchange points create physical connections between multiple ISPs,
often hosted in buildings owned by independent third parties.









Q. To study various types of LAN equipment.
A local area network (LAN) is a computer network that interconnects computers in a limited
area such as a home, school, computer laboratory, or office building using network media.[1]
The defining characteristics of LANs, in contrast to wide area networks (WANs), include their
usually higher data-transfer rates, smaller geographic area, and lack of a need for leased
telecommunication lines.
ARCNET, Token Ring and other technology standards have been used in the past, but Ethernet
over twisted pair cabling, and Wi-Fi are the two most common technologies currently used to
build LANs.
A conceptual diagram of a local area network using 10BASE5 Ethernet
The increasing demand and use of computers in universities and research labs in the late 1960s
generated the need to provide high-speed interconnections between computer systems. A 1970
report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus"
network[2][3] gave a good indication of the situation.
Cambridge Ring was developed at Cambridge University in 1974[4] but was never developed into
a successful commercial product.
Ethernet was developed at Xerox PARC in 1973–1975,[5] and filed as U.S. Patent 4,063,220. In
1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper,
"Ethernet: Distributed Packet-Switching For Local Computer Networks."[6]

ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977.[7] It had the
first commercial installation in December 1977 at Chase Manhattan Bank in New York.[8]

Standards evolution
The development and proliferation of personal computers using the CP/M operating system in
the late 1970s, and later DOS-based systems starting in 1981, meant that many sites grew to
dozens or even hundreds of computers. The initial driving force for networking was generally to
share storage and printers, which were both expensive at the time. There was much enthusiasm
for the concept and for several years, from about 1983 onward, computer industry pundits would
regularly declare the coming year to be the year of the LAN.[9][10][11]

In practice, the concept was marred by proliferation of incompatible physical layer and network
protocol implementations, and a plethora of methods of sharing resources. Typically, each
vendor would have its own type of network card, cabling, protocol, and network operating
system. A solution appeared with the advent of Novell NetWare which provided even-handed
support for dozens of competing card/cable types, and a much more sophisticated operating
system than most of its competitors. NetWare dominated[12] the personal computer LAN business
from early after its introduction in 1983 until the mid-1990s when Microsoft introduced
Windows NT Advanced Server and Windows for Workgroups.
Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but
Banyan never gained a secure base. Microsoft and 3Com worked together to create a simple
network operating system which formed the base of 3Com's 3+Share, Microsoft's LAN Manager
and IBM's LAN Server - but none of these was particularly successful.
During the same period, Unix computer workstations from vendors such as Sun Microsystems,
Hewlett-Packard, Silicon Graphics, Intergraph, NeXT and Apollo were using TCP/IP based
networking. Although this market segment is now much reduced, the technologies developed in
this area continue to be influential on the Internet and in both Linux and Apple Mac OS X
networking, and the TCP/IP protocol has now almost completely replaced IPX, AppleTalk,
NBF, and other protocols used by the early PC LANs.
Cabling
Early LAN cabling had been based on various grades of coaxial cable. Shielded twisted pair was
used in IBM's Token Ring LAN implementation. In 1984, StarLAN showed the potential of
simple unshielded twisted pair by using Cat3 cable, the same simple cable used for telephone
systems. This led to the development of 10BASE-T (and its successors) and structured cabling
which is still the basis of most commercial LANs today. In addition, fiber-optic cabling is
increasingly used in commercial applications.
As cabling is not always possible, wireless Wi-Fi is now very common in residential premises -
and elsewhere where support for mobile laptops and smartphones is important.
Technical aspects
Network topology describes the layout of interconnections between devices and network
segments. At the Data Link Layer and Physical Layer, a wide variety of LAN topologies have
been used, including ring, bus, mesh and star, but the most common LAN topology in use today
is switched Ethernet. At the higher layers, the TCP/IP protocol suite has become the
standard, replacing NetBEUI, IPX/SPX, AppleTalk and others.
Simple LANs generally consist of one or more switches. A switch can be connected to a router,
cable modem, or ADSL modem for Internet access. Complex LANs are characterized by their
use of redundant links with switches using the spanning tree protocol to prevent loops, their
ability to manage differing traffic types via quality of service (QoS), and to segregate traffic with
VLANs. A LAN can include a wide variety of network devices such as switches, firewalls,
routers, load balancers, and sensors.
LANs can maintain connections with other LANs via leased lines, leased services, or the Internet
using virtual private network technologies. Depending on how the connections are established
and secured in a LAN, and the distance involved, a LAN may also be classified as a metropolitan
area network (MAN) or a wide area network (WAN).








Q. To study various types of error correcting techniques

It is to S.P. Corder that Error Analysis owes its place as a scientific method in linguistics. As
Rod Ellis cites (p. 48), "it was not until the 1970s that EA became a recognized part of applied
linguistics, a development that owed much to the work of Corder". Before Corder, linguists
observed learners' errors, divided them into categories, tried to see which ones were common and
which were not, but not much attention was drawn to their role in second language acquisition. It
was Corder who showed to whom information about errors would be helpful (teachers,
researchers, and students) and how.

There are many major concepts introduced by S. P. Corder in his article "The significance of
learners' errors", among which we encounter the following:

1) It is the learner who determines what the input is. The teacher can present a linguistic form,
but this is not necessarily the input, but simply what is available to be learned.

2) Keeping the above point in mind, learners' needs should be considered when teachers/linguists
plan their syllabuses. Before Corder's work, syllabuses were based on theories and not so much
on learners' needs.

3) Mager (1962) points out that the learners' built-in syllabus is more efficient than the teacher's
syllabus. Corder adds that if such a built-in syllabus exists, then learners' errors would confirm
its existence and would be systematic.

4) Corder introduced the distinction between systematic and non-systematic errors. Unsystematic
errors occur in one's native language; Corder calls these "mistakes" and states that they are not
significant to the process of language learning. He keeps the term "errors" for the systematic
ones, which occur in a second language.

5) Errors are significant in three ways:
- to the teacher: they show a student's progress
- to the researcher: they show how a language is acquired, what strategies the learner uses.
- to the learner: he can learn from these errors.

6) When a learner has made an error, the most efficient way to teach him the correct form is not
by simply giving it to him, but by letting him discover it and test different hypotheses. (This is
derived from Carroll's proposal (Carroll 1955, cited in Corder), who suggested that the learner
should find the correct linguistic form by searching for it.)

7) Many errors are due to the learner using structures from his native language. Corder claims
that possession of one's native language is facilitative. Errors in this case are not inhibitory, but
rather evidence of one's learning strategies.


The above insights played a significant role in linguistic research, and in particular in the
approach linguists took towards errors. Here are some of the areas that were influenced by
Corder's work:


STUDIES OF LEARNER ERRORS

Corder introduced the distinction between errors (in competence) and mistakes (in performance).
This distinction directed the attention of researchers of SLA to competence errors and provided
for a more concentrated framework. Thus, in the 1970s researchers started examining learners'
competence errors and tried to explain them. We find studies such as Richards's "A non-
contrastive approach to error analysis" (1971), where he identifies sources of competence errors:
L1 transfer results in interference errors; incorrect (incomplete or over-generalized) application
of language rules results in intralingual errors; construction of faulty hypotheses in L2 results in
developmental errors.

Not all researchers have agreed with the above distinction; for example, Dulay and Burt (1974)
proposed the following three categories of errors: developmental, interference and unique.
Stenson (1974) proposed another category, that of induced errors, which result from incorrect
instruction of the language.
Like most research methods, error analysis has weaknesses (such as in methodology), but these do
not diminish its importance in SLA research; this is why linguists such as Taylor (1986)
reminded researchers of its importance and suggested ways to overcome these weaknesses.

As mentioned previously, Corder noted to whom (or in which areas) the study of errors would be
significant: to teachers, to researchers and to learners. In addition to studies concentrating on
error categorization and analysis, various studies concentrated on these three different areas. In
other words, research was conducted not only in order to understand errors per se, but also in
order to use what is learned from error analysis and apply it to improve language competence.

Such studies include Kroll and Schafer's "Error-Analysis and the Teaching of Composition",
where the authors demonstrate how error analysis can be used to improve writing skills. They
analyze possible sources of error in non-native-English writers, and attempt to provide a process
approach to writing where the error analysis can help achieve better writing skills.

These studies, among many others, show that thanks to Corder's work, researchers recognized
the importance of errors in SLA and started to examine them in order to achieve a better
understanding of SLA processes, i.e. of how learners acquire an L2.
























Q. Case study of various routing algorithms
A router is used to manage network traffic and find the best route for sending packets. But have
you ever thought about how routers do this? Routers need to have some information about
network status in order to make decisions regarding how and where to send packets. But how do
they gather this information?
Routers use routing algorithms to find the best route to a destination. When we say "best route,"
we consider parameters like the number of hops (the trip a packet takes from one router or
intermediate point to another in the network), time delay and communication cost of packet
transmission.


Based on how routers gather information about the structure of a network and their analysis of
information to specify the best route, we have two major routing algorithms: global routing
algorithms and decentralized routing algorithms. In decentralized routing algorithms, each
router has information about the routers it is directly connected to -- it doesn't know about every
router in the network. These algorithms are also known as DV (distance vector) algorithms.
Dijkstra's algorithm



Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1959, is a
graph search algorithm that solves the single-source shortest path problem for a graph with
nonnegative edge path costs, producing a shortest path tree. This algorithm is often used in
routing. An equivalent algorithm was developed by Edward F. Moore in 1957.

For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i.e.
the shortest path) between that vertex and every other vertex. It can also be used for finding costs
of shortest paths from a single vertex to a single destination vertex by stopping the algorithm
once the shortest path to the destination vertex has been determined. For example, if the vertices
of the graph represent cities and edge path costs represent driving distances between pairs of
cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest route
between one city and all other cities. As a result, shortest path first algorithms are widely used
in network routing protocols, most notably IS-IS and OSPF (Open Shortest Path First).

Algorithm
Let the node we are starting at be called the initial node, and let the distance of a node Y be the
distance from the initial node to Y. Dijkstra's algorithm will assign some initial distance values
and will try to improve them step by step.
1. Assign to every node a distance value. Set it to zero for our initial node and to infinity for
all other nodes.
2. Mark all nodes as unvisited. Set initial node as current.
3. For current node, consider all its unvisited neighbours and calculate their distance (from
the initial node). For example, if current node (A) has distance of 6, and an edge
connecting it with another node (B) is 2, the distance to B through A will be 6+2=8. If
this distance is less than the previously recorded distance (infinity in the beginning, zero
for the initial node), overwrite the distance.
4. When we are done considering all neighbours of the current node, mark it as visited. A
visited node will not be checked ever again; its distance recorded now is final and
minimal.
5. Set the unvisited node with the smallest distance (from the initial node) as the next
"current node" and continue from step 3.
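The five steps above can be sketched in Python with a priority queue (a minimal illustration; the example graph, its edge costs, and the node names are invented for demonstration):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps node -> {neighbour: edge_cost}."""
    dist = {node: float("inf") for node in graph}   # step 1: infinity everywhere...
    dist[source] = 0                                # ...except the initial node
    visited = set()                                 # step 2: all nodes unvisited
    queue = [(0, source)]                           # (distance, node) min-heap
    while queue:
        d, node = heapq.heappop(queue)              # step 5: smallest-distance unvisited node
        if node in visited:
            continue
        visited.add(node)                           # step 4: distance is now final
        for neighbour, cost in graph[node].items(): # step 3: relax unvisited neighbours
            new_dist = d + cost
            if new_dist < dist[neighbour]:
                dist[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return dist

# Example: A-B costs 1, B-C costs 2, A-C costs 4
graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Note that A reaches C at cost 3 via B, not at cost 4 over the direct edge, exactly the improvement step 3 describes.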
Description of the algorithm
Suppose you're marking streets on a street map (tracing each street with a marker) in a
certain order, until you have a route marked from the starting point to the destination. The
order is conceptually simple: from all the street intersections of the already marked routes, find
the closest unmarked intersection - closest to the starting point (the "greedy" part). The distance
is the whole marked route to the intersection, plus the street to the new, unmarked intersection.
Mark that street to that intersection, draw an arrow with the direction, then repeat. Never mark
any intersection twice. When you get to the destination, follow the arrows backwards. There will
be only one path back against the arrows, the shortest one.
Bellman-Ford algorithm
DV algorithms are also known as Bellman-Ford routing algorithms and Ford-Fulkerson
routing algorithms. In these algorithms, every router has a routing table that shows it the best
route for any destination. A typical graph and routing table for router J is shown below.


Destination   Weight   Line
A             8        A
B             20       A
C             28       I
D             20       H
E             17       I
F             30       I
G             18       H
H             12       H
I             10       I
J             0        ---
K             6        K
L             15       K
A typical network graph and routing table for router J
As the table shows, if router J wants to get packets to router D, it should send them to router H.
When packets arrive at router H, it checks its own table and decides how to send the packets to
D.
In DV algorithms, each router has to follow these steps:
1. It counts the weight of the links directly connected to it and saves the information to its table.
2. In a specific period of time, it sends its table to its neighbor routers (not to all routers) and
receives the routing table of each of its neighbors.
3. Based on the information in its neighbors' routing tables, it updates its own.
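The three steps can be condensed into a single table-update round (a sketch only; table entries map each destination to a (cost, next-hop) pair, and the router names and costs below are made up):

```python
INF = float("inf")

def dv_round(link_costs, neighbour_tables):
    """One distance-vector round for a single router.
    link_costs: cost of each directly connected link (step 1).
    neighbour_tables: tables received from the neighbours (step 2).
    Returns the router's updated table (step 3)."""
    dests = set(link_costs)
    for t in neighbour_tables.values():
        dests |= set(t)                      # every destination anyone knows about
    table = {}
    for dest in dests:
        # start with the direct link, if one exists
        best_cost = link_costs.get(dest, INF)
        best_hop = dest if dest in link_costs else None
        # Bellman-Ford relaxation: route via a neighbour if that is cheaper
        for n, t in neighbour_tables.items():
            via = link_costs[n] + t.get(dest, (INF, None))[0]
            if via < best_cost:
                best_cost, best_hop = via, n
        table[dest] = (best_cost, best_hop)
    return table

# Router B has unit-cost links to A and C; C advertises routes to A and D.
table_b = dv_round({"A": 1, "C": 1},
                   {"A": {"A": (0, None)},
                    "C": {"A": (2, "B"), "D": (1, "D")}})
print(table_b)  # B learns it can reach D via C at cost 2
```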
One of the most important problems with DV algorithms is called "count to infinity." Let's
examine this problem with an example:
Imagine a network with a graph as shown below. As you see in this graph, there is only one link
between A and the other parts of the network. Here you can see the graph and routing table of all
nodes:


      A     B     C     D
A     0,-   1,A   2,B   3,C
B     1,B   0,-   2,C   3,D
C     2,B   1,C   0,-   1,C
D     3,B   2,C   1,D   0,-
Network graph and routing tables
Now imagine that the link between A and B is cut. At this time, B corrects its table. After a
specific amount of time, routers exchange their tables, and so B receives C's routing table. Since
C doesn't know what has happened to the link between A and B, it says that it has a link to A
with the weight of 2 (1 for C to B, and 1 for B to A -- it doesn't know B has no link to A). B
receives this table and thinks there is a separate link between C and A, so it corrects its table and
changes infinity to 3 (1 for B to C, and 2 for C to A, as C said). Once again, routers exchange
their tables. When C receives B's routing table, it sees that B has changed the weight of its link to
A from 1 to 3, so C updates its table and changes the weight of the link to A to 4 (1 for C to B,
and 3 for B to A, as B said).
This process loops until all nodes find out that the weight of link to A is infinity. This situation is
shown in the table below. In this way, experts say DV algorithms have a slow convergence rate.


                                        B      C      D
Sum of weight to A after link cut       ∞      2,B    3,C
Sum of weight to A after 1st updating   3,C    2,B    3,C
Sum of weight to A after 2nd updating   3,C    4,B    3,C
Sum of weight to A after 3rd updating   5,C    4,B    5,C
Sum of weight to A after 4th updating   5,C    6,B    5,C
Sum of weight to A after 5th updating   7,C    6,B    7,C
Sum of weight to A after nth updating   ...    ...    ...


The "count to infinity" problem
One way to solve this problem is for routers to send information only to the neighbors that are
not exclusive links to the destination. For example, in this case, C shouldn't send any information
to B about A, because B is the only way to A.
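The count-to-infinity loop can be reproduced with a tiny synchronous simulation (a sketch: because every router here updates at the same instant from the previous round's tables, the exact numbers differ slightly from the table above, but the steady climb toward infinity is the same):

```python
INF = float("inf")

# Chain A - B - C - D with unit link costs; track each router's distance to A
# after the A-B link is cut. B's only remaining neighbour is C, and so on.
neighbours = {"B": ["C"], "C": ["B", "D"], "D": ["C"]}
dist_to_A = {"B": 1, "C": 2, "D": 3}   # correct tables before the cut

dist_to_A["B"] = INF                   # B notices the cut immediately
for round_no in range(1, 6):
    advertised = dict(dist_to_A)       # everyone exchanges last round's tables
    for router, nbrs in neighbours.items():
        # believe whichever neighbour advertises the cheapest route to A
        dist_to_A[router] = min(1 + advertised[n] for n in nbrs)
    print(round_no, dist_to_A)         # the distances to A keep climbing
```

B and C keep citing each other's stale routes, so the reported distance grows by a couple of hops every round instead of ever reaching infinity quickly.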
Q. To study and configure various types of routers and bridges
Ans: A router is a device that forwards data packets across computer networks. Routers
perform the data "traffic directing" functions on the Internet. A router is a microprocessor-
controlled device that is connected to two or more data lines from different networks. When a
data packet comes in on one of the lines, the router reads the address information in the packet to
determine its ultimate destination. Then, using information in its routing table, it directs the
packet to the next network on its journey. A data packet is typically passed from router to router
through the networks of the Internet until it gets to its destination computer. Routers also perform
other tasks such as translating the data transmission protocol of the packet to the appropriate
protocol of the next network, and preventing unauthorized access to a network by the use of
a firewall.

Types of Routers
There are several types of routers that you will want to understand. You need to know the
difference so that you can set up your network or at least so that you can understand what the
local computer guy tells you to do.
1.Broadband Routers
Broadband routers can be used to do several different types of things. They can be used to
connect two different computers or to connect two computers to the Internet. They can also be
used to create a phone connection.
If you are using Voice over IP (VoIP) technology, then you will need a broadband router to
connect your Internet to your phone. These are often a special type of modem that will have both
Ethernet and phone jacks. Although this may seem a little confusing, simply follow the
instructions that your VoIP provider sends with your broadband router - usually you must
purchase the router from the company in order to obtain the service.
2.Wireless Routers
Wireless routers connect to your modem and create a wireless signal in your home or office. So,
any computer within range can connect to your wireless router and use your broadband Internet
for free. The only way to keep anyone from connecting to your system is to secure your router.
A word of warning about wireless routers: Be sure you secure them, or you will be susceptible
to hackers and identity thieves. In order to secure your router, you simply need to come to
WhatIsMyIPAddress.com, and get your IP address. Then, you'll type that into your web browser
and log into your router (the user ID and password will come with your router).

Types of bridges
There are five main types of bridges: beam bridges, cantilever bridges, arch bridges, suspension
bridges, and cable-stayed bridges.
1.Beam bridges
Beam bridges are horizontal beams supported at each end by abutments, hence their structural
name of simply supported. When there is more than one span the intermediate supports are
known as piers. The earliest beam bridges were simple logs that sat across streams and similar
simple structures. In modern times, beam bridges are large box steel girder bridges. Weight on
top of the beam pushes straight down on the abutments at either end of the bridge. They are
made up mostly of wood or metal. Beam bridges typically do not exceed 250 feet (76 m) in
length; the longer the bridge, the weaker it is. The world's longest beam bridge is the Lake Pontchartrain
Causeway in southern Louisiana in the United States, at 23.83 miles (38.35 km), with individual
spans of 56 feet (17 m).

2.Cantilever bridges
Cantilever bridges are built using cantilevers: horizontal beams that are supported on only one
end. Most cantilever bridges use a pair of continuous spans extending from opposite sides of the
supporting piers, meeting at the center of the obstacle to be crossed. Cantilever bridges are
constructed using much the same materials and techniques as beam bridges. The difference comes
in the action of the forces through the bridge. The largest cantilever bridge is the 549-metre
(1,801 ft) Quebec Bridge in Quebec, Canada.

3.Arch bridges
Arch bridges have abutments at each end. The earliest known arch bridges were built by the
Greeks and include the Arkadiko Bridge. The weight of the bridge is thrust into the abutments at
either side. Dubai in the United Arab Emirates is currently building the Sheikh Rashid bin Saeed
Crossing which is scheduled for completion in 2012. When completed, it will be the largest arch
bridge in the world.


4.Suspension bridges
Suspension bridges are suspended from cables. The earliest suspension bridges were made of
ropes or vines covered with pieces of bamboo. In modern bridges, the cables hang from towers
that are attached to caissons or cofferdams. The caissons or cofferdams are implanted deep into
the floor of a lake or river. The longest suspension bridge in the world is the 12,826 feet
(3,909 m) Akashi Kaikyo Bridge in Japan.
See simple suspension bridge, stressed ribbon
bridge, underspanned suspension bridge, suspended-deck suspension bridge, and self-anchored
suspension bridge.

5.Cable-stayed bridges
Cable-stayed bridges, like suspension bridges, are held up by cables. However, in a cable-stayed
bridge, less cable is required and the towers holding the cables are proportionately
shorter.
The first known cable-stayed bridge was designed in 1784 by C.T. Loescher. The
longest cable-stayed bridge is the Sutong Bridge over the Yangtze River in China.










Q. Case Study of Tool Command Language (Tcl)
Description of the Tcl language
Tcl is an easily extendable scripting language which became most famous for the
TclTk GuiToolkit and TclExpect, as well as for easy embeddability, but it is
increasingly used in other spheres. Some of its most distinctive features:
Everything is a string (native representation is used behind the scenes). All
values can be used interchangeably, anywhere a string can be used
for natural serialization.
Extremely minimalistic syntax: cmd arg arg arg ...
No reserved keywords
No static language constructs (for, while etc. are just commands like any
other)
Flexible event-based programming. For example, IO and GUI events call callbacks,
which greatly reduces the need for threads
Has a threading system, too.
Everything can be dynamically changed and replaced (remember: language
constructs are also just commands too!)
The Virtual Filesystem mechanism
Doesn't force any particular programming paradigm. For instance, a wide
range of OO extensions exist from the more traditional [incr Tcl] to the more
dynamic ExtendedObjectTcl and delegation -style Snit.
Cons of the TCL Language:
Not terribly fast; Perl and Python are generally faster. Usually fast enough
though.
o As usual, it depends on exactly what you are doing, with common
benchmarks often being written to make other languages look good.
The old quote about "lies, damned lies and benchmarks" does apply.
Events in Tk are evaluated at the top level scope. No closure tricks for you.
Lots of "gotchas" with respect to quoting
o This is a bit inbetween really; the quoting rules are actually very simple,
but beginners do sometimes have problems, as they infer extra
semantics which aren't really there (typically by assuming {} marks
blocks of code like in C).
o I find it's difficult to x-ray or study all the potential levels of quoting and
escape indirection unwinding. It's sort of the quote-level version
of ThickBreadSmell. I'd rather break things up a
bit, DivideAndConquer and have a language that facilitates such.
No anonymous functions - packages that do appear to provide them create
global objects that are not garbage collected, among other problems.
o However anonymous code fragments are available, and version 8.5
allows these fragments to have scopes.
o Doesn't the "apply" command in Tcl 8.5 provide anonymous functions?
Suffered under a disgustingly hype-laden regime under scriptics. ("Leverage
your cross-functional b2b synergies with scripting!") Thankfully no longer a
problem after the dot com crash.
More on TCL
It is untyped and uncompiled [-- It does, however, utilize a bytecode compiler for
efficiency now, making it about as slow as other scripting languages]. Tcl is much
more a dynamic language than comparable scripting languages, and makes use
of data as code model to great advantage.
TCL's main attraction is its small footprint, ease of network and GUI programming,
event model, write-once-run-many (programmers usually have to go out of their way
to make Tcl programs platform specific).
It has a C API which makes embedding an interpreter easy (making it
an EmbeddedLanguage). Passing data back and forth is easy, as is writing extension
commands to the interpreter.
See http://wiki.tcl.tk/4364 (a two-player car racing game in 127 LOC)
or http://wiki.tcl.tk/4448 (pages on TheTclersWiki) for examples of why some people
like Tcl so much - brevity and automation of complex tasks.
Tcl was invented by JohnOusterhout while he was a professor at CalBerkeley.
Tcl has several implementations and is widely available on many platforms:
Unix, MacOs9/ MacOsx, Windows, Palm OS, Windows CE, MSDOS, several realtime
OS's, and as a browser plugin for Mozilla, Netscape, Windows IE. Tcl is also available
as Jacl, a Tcl port to Java, thus it also runs anywhere Java runs.
See: http://tcljava.sf.net
Hiring Tclers
You will find a list of Tclers interested in doing contract work or working full
time here: http://wiki.tcl.tk/1588
Comments
Indeed, Tk is popular; it can even be accessed from Python and Perl (and Erlang). --
FalkBruegmann
Tk is a common reason Tcl is widely used. It is a very good GUI toolkit: lightweight,
easy to use, simple to extend. It had a Motif look and feel, but was far simpler to use
than Motif. It now uses the local look-and-feel of the platform on which it is running
[- which is Motif on UNIX]. It is quite common that programs written on Unix/X11
run unchanged on Windows and Mac (the same is true for Windows to Unix).
Tcl as a language is highly dynamic and provides useful introspection facilities. If one
writes idiomatic Tcl, using MetaProgramming techniques, rather than using it in the
way one writes C or Pascal, one can write efficient and elegant programs. -- NatPryce
Nat, how are those techniques when used for Tcl? I've written quite a lot of Tcl/Tk
code but I am not sure of what you are talking about. A friend and I were thinking
about writing a Tcl module so we could make an abstraction layer for access to
databases. The idea was creating something similar to DBI in Perl or JDBC in Java, but
for Tcl (starting with postgresql). I got a nice design and when I sent it to my friend,
he had already written a prototype, which created a 'database widget', similar to a
Tk widget, indeed virtually indistinguishable. I found the idea fascinating and so we
took that approach. Is this that metaprogramming you're talking about? --
DavidDeLis



Q. Study and installation steps of the standard network simulator ns-2
ns (from network simulator) is the name of a series of discrete-event network simulators,
specifically ns-1, ns-2 and ns-3, primarily used in research and teaching. ns-3 is free
software, publicly available under the GNU GPLv2
license for research, development, and use.
The goal of the ns-3 project is to create an open simulation environment for networking research
that will be preferred inside the research community:
It should be aligned with the simulation needs of modern networking research.
It should encourage community contribution, peer review, and validation of the software.
Since the process of creation of a network simulator that contains a sufficient number of high-
quality validated, tested and actively maintained models requires a lot of work, the ns-3 project
spreads this workload over a large community of users and developers.

ns-1
The first version of ns, known as ns-1, was developed at Lawrence Berkeley National
Laboratory (LBNL) in the 1995-97 timeframe by Steve McCanne, Sally Floyd, Kevin Fall, and
other contributors. This was known as the LBNL Network Simulator, and derived from an earlier
simulator known as REAL by S. Keshav. The core of the simulator was written in C++,
with Tcl-based scripting of simulation scenarios.
Long-running contributions have also come
from Sun Microsystems, the UC Berkeley Daedelus, and Carnegie Mellon Monarch projects.
ns-2
In 1996-97, ns version 2 (ns-2) was initiated based on a refactoring by Steve McCanne. Use of
Tcl was replaced by MIT's Object Tcl (OTcl), an object-oriented dialect of Tcl. The core of ns-2
is also written in C++, but the C++ simulation objects are linked to shadow objects in OTcl and
variables can be linked between both language realms. Simulation scripts are written in the OTcl
language, an extension of the Tcl scripting language.
Presently, ns-2 consists of over 300,000 lines of source code, and there is probably a comparable
amount of contributed code that is not integrated directly into the main distribution
(many forks of ns-2 exist, both maintained and unmaintained). It runs
on GNU/Linux, FreeBSD, Solaris, Mac OS X and Windows versions that support Cygwin. It is
licensed for use under version 2 of the GNU General Public License.
ns-3
A team led by Tom Henderson (University of Washington), George Riley (Georgia Institute of
Technology), Sally Floyd, and Sumit Roy (University of Washington), applied for and received
funding from the U.S. National Science Foundation (NSF) to build a replacement for ns-2, called
ns-3. This team collaborated with the Planete project of INRIA at Sophia Antipolis, with
Mathieu Lacage as the software lead, and formed a new open source project joined by other
developers worldwide.
In the process of developing ns-3, it was decided to completely abandon backward-compatibility
with ns-2. The new simulator would be written from scratch, using the C++ programming
language. Development of ns-3 began in July 2006. A framework for generating Python bindings
(pybindgen) and use of the Waf build system were contributed by Gustavo Carneiro.
The first release, ns-3.1 was made in June 2008, and afterwards the project continued making
quarterly software releases, and more recently has moved to three releases per year. ns-3 made
its eighteenth release (ns-3.18) in the third quarter of 2013.
Current status of the three versions is:
ns-1 is no longer developed nor maintained,
the ns-2 build of 2009 is not actively maintained (and is no longer being accepted for journal
publications),
ns-3 is actively developed (but not compatible with work done on ns-2).
Design
ns-3 is built using C++ and Python with scripting capability. The ns-3 library is wrapped into
Python by the pybindgen library, which delegates the parsing of the ns-3 C++ headers to
gccxml and pygccxml to automatically generate the corresponding C++ binding glue. These
automatically-generated C++ files are finally compiled into the ns-3 python module to allow
users to interact with the C++ ns-3 models and core through python scripts. The ns-3 simulator
features an integrated attribute-based system to manage default and per-instance values for
simulation parameters. All of the configurable default values for parameters are managed by this
system, integrated with command-line argument processing, Doxygen documentation, and an
XML-based and optional GTK-based configuration subsystem.
The large majority of its users focus on wireless simulations, which involve models for Wi-Fi,
WiMAX, or LTE for layers 1 and 2 and routing protocols such as OLSR and AODV.
Components
ns-3 is split over a couple dozen modules containing one or more models for real-world network
devices and protocols.
ns-3 has more recently integrated with related projects: the Direct Code Execution extensions
allowing the use of C or C++-based applications and Linux kernel code in the simulations, and
the NetAnim offline animator based on the Qt toolkit.
Q. Case Study of Various Types of Cryptographic Algorithms (Diffie-Hellman, RSA, etc.)
In an asymmetric key encryption scheme, anyone can encrypt messages using the public key, but only the holder of
the paired private key can decrypt. Security depends on the secrecy of the private key.

In the Diffie-Hellman key exchange scheme, each party generates a public/private key pair and distributes the public
key. After obtaining an authentic copy of each other's public keys, Alice and Bob can compute a shared secret offline.
The shared secret can be used, for instance, as the key for a symmetric cipher.
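The exchange can be demonstrated with modular exponentiation (a toy sketch: the tiny prime p = 23 and generator g = 5 are illustrative only; real deployments use 2048-bit groups or elliptic curves):

```python
import secrets

# Public parameters agreed on in advance: prime modulus p and generator g.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1  # Alice's private key, kept secret
b = secrets.randbelow(p - 2) + 1  # Bob's private key, kept secret

A = pow(g, a, p)                  # Alice's public value, sent to Bob
B = pow(g, b, p)                  # Bob's public value, sent to Alice

# Each side combines its own private key with the other's public value:
shared_alice = pow(B, a, p)       # (g^b)^a mod p
shared_bob = pow(A, b, p)         # (g^a)^b mod p
assert shared_alice == shared_bob  # both arrive at the same shared secret
```

The shared value could then seed a symmetric cipher key, as the paragraph above describes; an eavesdropper sees only p, g, A and B.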
Public-key cryptography, also known as asymmetric cryptography, is a class
of cryptographic algorithms which require two separate keys, one of which is secret (or private) and
one of which is public. Although different, the two parts of this key pair are mathematically linked.
The public key is used to encrypt plaintext or to verify a digital signature, whereas the private key is
used to decrypt ciphertext or to create a digital signature. The term "asymmetric" stems from the use
of different keys to perform these opposite functions, each the inverse of the other, as contrasted
with conventional ("symmetric") cryptography, which relies on the same key to perform both.
Public-key algorithms are based on mathematical problems which currently admit no efficient
solution, inherent in certain integer factorization, discrete logarithm, and elliptic
curve relationships. It is computationally easy for a user to generate their own public and private
key pair and to use them for encryption and decryption. The strength lies in the fact that it is "impossible"
(computationally unfeasible) for a properly generated private key to be determined from its
corresponding public key. Thus the public key may be published without compromising security,
whereas the private key must not be revealed to anyone not authorized to read messages or
perform digital signatures. Public key algorithms, unlike symmetric key algorithms, do not require
a secure initial exchange of one (or more) secret keys between the parties.
Message authentication involves processing a message with a private key to produce a digital signature. Thereafter anyone can verify this signature by processing the signature value with the signer's corresponding public key and comparing that result with the message. Success confirms that the message is unmodified since it was signed and, provided the signer's private key has remained secret, that the signer, and no one else, intentionally performed the signature operation. In practice, typically only a hash or digest of the message, and not the message itself, is signed.
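The sign-the-digest flow can be sketched with toy RSA numbers. Everything here is an illustrative assumption: the tiny primes (61, 53) come from a textbook-style example, and real signature schemes add padding (e.g. PSS) and keys of 2048+ bits.

```python
import hashlib

# Toy sign-then-verify: sign a digest with the private key,
# verify with the public key. Parameters are illustrative only.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120
e = 17                              # public exponent, coprime with phi
d = pow(e, -1, phi)                 # private exponent (Python 3.8+ modular inverse)

message = b"hello"
# Hash the message, then reduce mod n so the toy numbers can carry it.
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

signature = pow(digest, d, n)       # signing: a private-key operation

# Verification: anyone holding (n, e) recomputes the digest and compares.
assert pow(signature, e, n) == digest
```

Note that only the short digest is signed, not the message itself, exactly as the paragraph above describes.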
Public-key algorithms are fundamental security ingredients in cryptosystems, applications, and protocols. They underpin such Internet standards as Transport Layer Security (TLS), PGP, and GPG. Some public-key algorithms provide key distribution and secrecy (e.g., Diffie-Hellman key exchange), some provide digital signatures (e.g., the Digital Signature Algorithm), and some provide both (e.g., RSA).
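RSA's "provides both" property follows from one pair of mathematically linked exponents. A minimal sketch, using the classic small-prime example (61 and 53, public exponent 17) purely as an assumption for readability:

```python
# Toy RSA key generation and encryption/decryption.
# Real keys use primes of ~1024+ bits; these values are illustrative.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)

m = 65                    # a message encoded as an integer < n
c = pow(m, e, n)          # encrypt with the public key (n, e)
assert pow(c, d, n) == m  # decrypt with the private key d
```

Swapping which exponent is applied first turns the same key pair into the signing scheme described earlier, which is why RSA can serve both purposes.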
How it works
The distinguishing technique used in public-key cryptography is the use of asymmetric key algorithms, where the key used to encrypt a message is not the same as the key used to decrypt it. Each user has a pair of cryptographic keys: a public encryption key and a private decryption key. Similarly, a key pair used for digital signatures consists of a private signing key and a public verification key. The public key is widely distributed, while the private key is known only to its owner. The keys are related mathematically, but the parameters are chosen so that calculating the private key from the public key is either impossible or prohibitively expensive.
In contrast, symmetric-key algorithms, variations of which have been used for thousands of years, use a single secret key, which must be shared and kept private by both the sender and the receiver, for both encryption and decryption. To use a symmetric encryption scheme, the sender and receiver must securely share a key in advance.
Because symmetric-key algorithms are nearly always much less computationally intensive than asymmetric ones, it is common to exchange a key using a key-exchange algorithm and then transmit data using that key and a symmetric-key algorithm. PGP and the SSL/TLS family of schemes use this procedure and are thus called hybrid cryptosystems.
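The hybrid pattern can be sketched end to end: an asymmetric exchange establishes a shared secret, which then keys a fast symmetric cipher. In this sketch a keystream XOR stands in for a real cipher such as AES, and all numeric parameters are illustrative assumptions, not anything a real protocol would use.

```python
import hashlib

# Step 1: toy Diffie-Hellman exchange establishes a shared secret.
p, g = 23, 5                      # public parameters (illustrative)
a, b = 6, 15                      # the two parties' private keys
shared = pow(pow(g, b, p), a, p)  # both sides derive this same value

# Step 2: derive symmetric key material from the shared secret.
key = hashlib.sha256(str(shared).encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Encryption and decryption are the same operation for an XOR stream.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# Step 3: bulk data flows over the cheap symmetric channel.
plaintext = b"bulk data goes over the symmetric channel"
ciphertext = xor_cipher(plaintext, key)
assert xor_cipher(ciphertext, key) == plaintext
```

The expensive asymmetric step runs once per session; everything after it uses the cheap symmetric operation, which is the efficiency argument the paragraph above makes.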
Description
There are two main uses for public-key cryptography:
- Public-key encryption, in which a message is encrypted with a recipient's public key. The message cannot be decrypted by anyone who does not possess the matching private key, who is thus presumed to be the owner of that key and the person associated with the public key. This is used in an attempt to ensure confidentiality.
- Digital signatures, in which a message is signed with the sender's private key and can be verified by anyone who has access to the sender's public key. This verification proves that the sender had access to the private key and is therefore likely to be the person associated with the public key. It also ensures that the message has not been tampered with, as any manipulation of the message results in a change to the encoded message digest, which otherwise remains unchanged between sender and receiver.
An analogy for public-key encryption is that of a locked mailbox with a mail slot. The mail slot is exposed and accessible to the public; its location (the street address) is, in essence, the public key. Anyone knowing the street address can go to the door and drop a written message through the slot. However, only the person who possesses the key can open the mailbox and read the message.
An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The
message can be opened by anyone, but the presence of the unique seal authenticates the sender.
A central problem with the use of public-key cryptography is confidence/proof that a particular public key is authentic: that it is correct, belongs to the person or entity claimed, and has not been tampered with or replaced by a malicious third party. The usual approach to this problem is to use a public-key infrastructure (PKI), in which one or more third parties, known as certificate authorities, certify ownership of key pairs. PGP, in addition to supporting a certificate authority structure, has used a scheme generally called the "web of trust", which replaces centralized authentication of public keys with individual endorsements of the link between user and public key. To date, no fully satisfactory solution to the "public key authentication problem" has been found.
INDEX
Q.1 To study communication guiding systems.
Q.2 Study and verification of standard network topologies.
Q.3 To study various types of correcting techniques.
Q.4 To study various types of routers and bridges.
Q.5 Case study of the VoIP concept.
Q.6 To study various types of LAN equipment.
Q.7 To study various types of error-correcting techniques.
Q.8 Case study of various routing algorithms.
Q.9 To study and configure various types of routers and bridges.
Q.10 Case study of Tool Command Language.
Q.11 Study and installation steps of the standard network simulator NS-2.
Q.12 Case study of various types of cryptographic algorithms (Diffie-Hellman, RSA, etc.).
MAHARANA PRATAP COLLEGE OF TECHNOLOGY,
GWALIOR
PRACTICAL FILE
ON
COMPUTER NETWORK
(CS-604)
Submitted to:                          Submitted by:
Mrs. Neelam Joshi                      Shiv Prabhanjan Singh Arya
Asst. Prof.                            3rd Year, 6th Sem
                                       CS/IT, 0903CS091082