
TCP/IP (Transmission Control Protocol/Internet Protocol)
TCP/IP, or the Transmission Control Protocol/Internet Protocol, is a suite of
communication protocols used to interconnect network devices on the internet. TCP/IP
can also be used as a communications protocol in a private network (an intranet or
an extranet).

The entire internet protocol suite -- a set of rules and procedures -- is commonly referred
to as TCP/IP, though other protocols are also included in the suite.

TCP/IP specifies how data is exchanged over the internet by providing end-to-end
communications that identify how it should be broken into packets, addressed,
transmitted, routed and received at the destination. TCP/IP requires little central
management, and it is designed to make networks reliable, with the ability to recover
automatically from the failure of any device on the network.

The two main protocols in the internet protocol suite serve specific functions. TCP defines
how applications can create channels of communication across a network. It also
manages how a message is assembled into smaller packets before they are then
transmitted over the internet and reassembled in the right order at the destination address.

IP, by contrast, defines how to address and route each packet so it reaches the right
destination. Each gateway computer on the network checks this IP address to determine where to
forward the message.
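
That split-and-reassemble behavior can be pictured with a small sketch. The C fragment below is purely conceptual: the chunk size, the packet struct and the pretend out-of-order arrival are invented for illustration and are not TCP's real segment format, but they show why each packet carries a sequence number that lets the receiver restore the original order.

#include <stdio.h>
#include <string.h>

#define CHUNK 8                        /* illustrative payload size, not TCP's real segment size */

struct packet {                        /* hypothetical stand-in for a TCP segment */
    int  seq;                          /* sequence number used to reorder on arrival */
    char data[CHUNK + 1];
};

int main(void) {
    const char *message = "TCP splits this message into ordered packets.";
    struct packet pkts[16];
    int count = 0;

    /* Sender side: break the message into fixed-size, numbered chunks. */
    for (size_t off = 0; off < strlen(message); off += CHUNK, count++) {
        pkts[count].seq = count;
        strncpy(pkts[count].data, message + off, CHUNK);
        pkts[count].data[CHUNK] = '\0';
    }

    /* Receiver side: packets may arrive in any order; the sequence number
       says where each chunk belongs in the reassembled message. */
    char reassembled[256] = {0};
    for (int i = count - 1; i >= 0; i--)        /* pretend they arrived in reverse order */
        memcpy(reassembled + pkts[i].seq * CHUNK, pkts[i].data, strlen(pkts[i].data));

    printf("%s\n", reassembled);
    return 0;
}

Real TCP does the same bookkeeping with byte-oriented sequence numbers, acknowledgments and retransmissions, all handled by the operating system rather than the application.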

The history of TCP/IP

The Defense Advanced Research Projects Agency (DARPA), the research branch of the U.S.
Department of Defense, created the TCP/IP model in the 1970s for use in ARPANET, a wide area
network that preceded the internet. TCP/IP was originally designed for the Unix operating system,
and it has been built into all of the operating systems that came after it.

The TCP/IP model and its related protocols are now maintained by the Internet Engineering Task
Force.
How TCP/IP works

TCP/IP uses the client/server model of communication in which a user or machine (a client) is
provided a service (like sending a webpage) by another computer (a server) in the network.

Collectively, the TCP/IP suite of protocols is classified as stateless, which means each client
request is considered new because it is unrelated to previous requests. Being stateless frees up
network paths so they can be used continuously.

The transport layer itself, however, is stateful. It transmits a single message, and its connection
remains in place until all the packets in a message have been received and reassembled at the
destination.

The TCP/IP model differs slightly from the seven-layer Open Systems Interconnection (OSI)
networking model designed after it, which defines how applications can communicate over
a network.

TCP/IP model layers

TCP/IP functionality is divided into four layers, each of which includes specific protocols.

• The application layer provides applications with standardized data exchange. Its protocols
include the Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Post Office
Protocol 3 (POP3), Simple Mail Transfer Protocol (SMTP) and Simple Network Management
Protocol (SNMP).

• The transport layer is responsible for maintaining end-to-end communications across the
network. TCP handles communications between hosts and provides flow control, multiplexing
and reliability. The transport protocols include TCP and User Datagram Protocol (UDP), which
is sometimes used instead of TCP for special purposes.

• The network layer, also called the internet layer, deals with packets and connects independent
networks to transport the packets across network boundaries. The network layer protocols are
IP and the Internet Control Message Protocol (ICMP), which is used for error reporting.

• The physical layer consists of protocols that operate only on a link -- the network component
that interconnects nodes or hosts in the network. The protocols in this layer include Ethernet for
local area networks (LANs) and the Address Resolution Protocol (ARP).
Advantages of TCP/IP

TCP/IP is nonproprietary and, as a result, is not controlled by any single company.
Therefore, the internet protocol suite can be modified easily. It is compatible with all
operating systems, so it can communicate with any other system. The internet protocol
suite is also compatible with all types of computer hardware and networks.

HTTP (Hypertext Transfer Protocol)


HTTP (Hypertext Transfer Protocol) is the set of rules for transferring files (text, graphic
images, sound, video, and other multimedia files) on the World Wide Web. As soon as a
Web user opens their Web browser, the user is indirectly making use of HTTP. HTTP is
an application protocol that runs on top of the TCP/IP suite of protocols (the foundation
protocols for the internet).

HTTP concepts include (as the Hypertext part of the name implies) the idea that files can
contain references to other files whose selection will elicit additional transfer requests.
Any Web server machine contains, in addition to the Web page files it can serve, an
HTTP daemon, a program that is designed to wait for HTTP requests and handle them
when they arrive. Your Web browser is an HTTP client, sending requests to server
machines. When the browser user enters file requests by either "opening" a Web file
(typing in a Uniform Resource Locator or URL) or clicking on a hypertext link, the browser
builds an HTTP request and sends it to the Internet Protocol address (IP address)
indicated by the URL. The HTTP daemon in the destination server machine receives the
request and sends back the requested file or files associated with the request. (A Web
page often consists of more than one file.)
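
As a concrete sketch of that request/response exchange, the C program below opens a TCP connection to a web server and sends a minimal HTTP/1.1 GET request by hand -- roughly what a browser does before any rendering happens. The host name example.com and port 80 are placeholders, and a real browser would send many more headers and normally use HTTPS.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void) {
    /* Resolve the host name from the URL to an IP address, then create a
       TCP socket and connect to the server's HTTP port. */
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;    /* TCP */

    int err = getaddrinfo("example.com", "80", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }
    freeaddrinfo(res);

    /* A minimal HTTP request: method, path and protocol version, then
       headers, terminated by a blank line. */
    const char *request =
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n";
    send(fd, request, strlen(request), 0);

    /* Read whatever the HTTP daemon sends back: a status line, headers
       and the requested file. */
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf - 1, 0)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    close(fd);
    return 0;
}

Fetching the additional files that make up a full page (images, stylesheets and so on) is just a matter of repeating this request/response cycle for each URL the page references.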

Wireless security protocols: The difference between WEP, WPA, WPA2

In wireless security, passwords are only half the battle. Choosing the proper level
of encryption is just as vital, and the right choice will determine whether your wireless LAN
is a house of straw or a shielded fortress.

Most wireless access points come with the ability to enable one of three wireless
encryption standards: Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA)
or WPA2. Read on to get a basic understanding of the differences between
WPA, WEP and WPA2, as well as the uses and mechanisms of each of these
wireless security protocols, and to find out whether WPA, WEP or WPA2 is the best
choice for your environment.

WPA vs. WPA2 vs. WEP


Looking for a slightly deeper dive on wireless security protocols? No need to crack open a
textbook -- we've got you covered here, too.

Wired Equivalent Privacy (WEP)


Developed in the late 1990s as the first encryption algorithm for the 802.11 standard,
WEP was designed with one main goal in mind: to prevent hackers from snooping on
wireless data as it was transmitted between clients and access points (APs). From the
start, however, WEP lacked the strength necessary to accomplish this.

Cybersecurity experts identified several severe flaws in WEP in 2001, eventually leading
to industrywide recommendations to phase out the use of WEP in both enterprise and
consumer devices. After a large-scale cyberattack against T.J. Maxx, disclosed in 2007,
was traced back to vulnerabilities exposed by WEP, the Payment Card Industry Data
Security Standard prohibited retailers and other entities that processed credit card data
from using WEP.
WEP uses the RC4 stream cipher for authentication and encryption. The standard
originally specified a 40-bit, preshared encryption key -- a 104-bit key was later made
available after a set of restrictions from the U.S. government was lifted. The key must be
manually entered and updated by an administrator.

The key is combined with a 24-bit initialization vector (IV) in an effort to strengthen the
encryption. However, the small size of the IV increases the likelihood that keys will be
reused, which, in turn, makes them easier to crack. This characteristic, along with several
other vulnerabilities -- including problematic authentication mechanisms -- makes WEP a
risky choice for wireless security.
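
A rough calculation shows why the 24-bit IV is the weak point. There are only 2^24 (about 16.8 million) possible IVs, and by the standard birthday-problem estimate two frames are more likely than not to share an IV after only a few thousand frames -- something a busy access point can generate in minutes. The C sketch below just performs that arithmetic; the frame counts are illustrative, and it assumes IVs are picked uniformly at random (drivers that simply increment the IV guarantee reuse once the counter wraps).

#include <stdio.h>
#include <math.h>

int main(void) {
    const double iv_space = 16777216.0;   /* 2^24 possible WEP initialization vectors */

    /* Probability that at least two of n frames reuse an IV, using the
       classic birthday-problem approximation:
       p(n) ~= 1 - exp(-n * (n - 1) / (2 * iv_space)). */
    const long frame_counts[] = {1000, 5000, 10000, 50000, 100000};

    for (size_t i = 0; i < sizeof frame_counts / sizeof frame_counts[0]; i++) {
        double n = (double)frame_counts[i];
        double p = 1.0 - exp(-n * (n - 1.0) / (2.0 * iv_space));
        printf("%7ld frames -> IV collision probability of about %.1f%%\n",
               frame_counts[i], 100.0 * p);
    }
    return 0;                              /* compile with -lm for the math library */
}

Once two frames are encrypted with the same IV and key, their RC4 keystreams are identical, which is exactly the kind of reuse attackers exploit to recover WEP traffic.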

Wi-Fi Protected Access (WPA)


The numerous flaws in WEP revealed the urgent need for an alternative, but the
deliberately slow and careful processes required to write a new security specification
posed a conflict. In response, in 2003, the Wi-Fi Alliance released WPA as an interim
standard, while the Institute of Electrical and Electronics Engineers (IEEE) worked to
develop a more advanced, long-term replacement for WEP.

WPA has discrete modes for enterprise users and for personal use. The enterprise mode,
WPA-EAP, uses more stringent 802.1x authentication with the Extensible Authentication
Protocol, or EAP. The personal mode, WPA-PSK, uses preshared keys for simpler
implementation and management among consumers and small offices. Enterprise mode
requires the use of an authentication server.

Although WPA is also based on the RC4 cipher, it introduced several enhancements to
encryption -- namely, the use of the Temporal Key Integrity Protocol (TKIP). The protocol
contains a set of functions to improve wireless LAN security: the use of 256-bit keys, per-
packet key mixing -- the generation of a unique key for each packet -- automatic broadcast
of updated keys, a message integrity check, a larger IV size (48 bits) and mechanisms to
reduce IV reuse.

WPA was designed to be backward-compatible with WEP to encourage quick, easy
adoption. Network security professionals were able to support the new standard on many
WEP-based devices with a simple firmware update. This framework, however, also meant
the security it provided was not as robust as it could be.
Wi-Fi Protected Access 2 (WPA2)
As the successor to WPA, the WPA2 standard was ratified by the IEEE in 2004
as 802.11i. Like its predecessor, WPA2 also offers enterprise and personal modes.
Although WPA2 still has vulnerabilities, it is considered the most secure wireless security
standard available.

WPA2 replaces the RC4 cipher and TKIP with two stronger encryption and authentication
mechanisms: the Advanced Encryption Standard (AES) and Counter Mode with Cipher
Block Chaining Message Authentication Code Protocol (CCMP), respectively. Also meant
to be backward-compatible, WPA2 supports TKIP as a fallback if a device cannot support
CCMP.

Developed by the U.S. government to protect classified data, AES is composed of three
symmetric block ciphers. Each encrypts and decrypts data in blocks of 128 bits using 128-
, 192- and 256-bit keys. Although the use of AES requires more computing power from
APs and clients, ongoing improvements in computer and network hardware have
mitigated performance concerns.

CCMP protects data confidentiality by allowing only authorized network users to receive
data, and it uses cipher block chaining message authentication code to ensure message
integrity.

WPA2 also introduced more seamless roaming, allowing clients to move from one AP to
another on the same network without having to reauthenticate, through the use of
Pairwise Master Key caching or preauthentication.

802.11 Standards Explained: 802.11ac, 802.11b/g/n, 802.11a
The 802.11 family explained, from 802.11a through 802.11az
Home and business owners looking to buy networking gear face an array of choices. Many products
conform to the 802.11a, 802.11b/g/n, and/or 802.11ac wireless standards collectively known as Wi-
Fi technologies. Bluetooth and various other wireless (but not Wi-Fi) technologies also exist, each
designed for specific networking applications.

This article describes the Wi-Fi standards and related technologies, comparing and contrasting them
to help you better understand the evolution of Wi-Fi technology and make educated network planning
and equipment buying decisions.

For quick reference, 802.11aj is the most recently approved standard. The protocol was approved in
May 2018. Just because a standard is approved, however, does not mean it is available to you or that
it is the standard you need for your particular situation. Standards are always being updated, much
like the way software is updated in a smartphone or on your computer.

What Is 802.11?

In 1997, the Institute of Electrical and Electronics Engineers (IEEE) created the first WLAN standard.
They called it 802.11 after the name of the group formed to oversee its development. Unfortunately,
802.11 only supported a maximum network bandwidth of 2 Mbps – too slow for most applications. For
this reason, ordinary 802.11 wireless products are no longer manufactured. However, an entire family
has sprung up from this initial standard.
The best way to look at these standards is to consider 802.11 as the foundation, and all other
iterations as building blocks upon that foundation that focus on improving both small and large
aspects of the technology. Some building blocks are minor touch-ups while others are quite large.

The largest impacts to wireless standards come when the standards are 'rolled up' to include most or
all small updates. So, for example, the most recent rollup occurred in December of 2016 with 802.11-
2016. Since then, however, minor updates are still occurring and, eventually, another large rollup will
encompass them.

Below is a brief look at the most recently approved iterations, outlined from newest to oldest. Other
iterations – 802.11ax, 802.11ay, and 802.11az – are still in the approval process.

802.11aj

Known as the China Millimeter Wave, this standard applies in China and is basically a rebranding of
802.11ad for use in certain areas of the world. The goal is to maintain backwards compatibility with
802.11ad.

802.11ah

Approved in May 2017, this standard targets lower energy consumption and creates extended range
Wi-Fi networks that can go beyond the reach of a typical 2.4 - 5 GHz network. It is expected to
compete with Bluetooth given its lower power needs.

802.11ad

Approved in December 2012, this standard is freakishly fast. However, the client device must be
located within 11 feet of the access point.

802.11ac

The generation of Wi-Fi that first signaled popular use, 802.11ac utilizes dual-band
wireless technology, supporting simultaneous connections on both the 2.4 GHz and 5 GHz Wi-Fi
bands. 802.11ac offers backward compatibility to 802.11b/g/n and bandwidth rated up to 1300 Mbps
on the 5 GHz band plus up to 450 Mbps on 2.4 GHz. Most home wireless routers are compliant with
this standard.

• Pros of 802.11ac - Fastest maximum speed and best signal range; on par with standard wired
connections
• Cons of 802.11ac - Most expensive to implement; performance improvements only noticeable
in high-bandwidth applications
802.11ac is also referred to as Wi-Fi 5.

802.11n

802.11n (also sometimes known as Wireless N) was designed to improve on 802.11g in the amount
of bandwidth supported by utilizing multiple wireless signals and antennas (called MIMO technology)
instead of one. Industry standards groups ratified 802.11n in 2009 with specifications providing for up
to 300 Mbps of network bandwidth. 802.11n also offers somewhat better range over earlier Wi-Fi
standards due to its increased signal intensity, and it is backward-compatible with 802.11b/g gear.

• Pros of 802.11n - Significant bandwidth improvement from previous standards; wide support
across devices and network gear
• Cons of 802.11n - More expensive to implement than 802.11g; use of multiple signals may
interfere with nearby 802.11b/g based networks

802.11n is also referred to as Wi-Fi 4.

802.11g

In 2002 and 2003, WLAN products supporting a newer standard called 802.11g emerged on the
market. 802.11g attempts to combine the best of both 802.11a and 802.11b. 802.11g supports
bandwidth up to 54 Mbps, and it uses the 2.4 GHz frequency for greater range. 802.11g is backward
compatible with 802.11b, meaning that 802.11g access points will work with 802.11b
wireless network adapters and vice versa.

• Pros of 802.11g - Supported by essentially all wireless devices and network equipment in use
today; least expensive option
• Cons of 802.11g - Entire network slows to match any 802.11b devices on the network;
slowest/oldest standard still in use

802.11g is also referred to as Wi-Fi 3.

802.11a

While 802.11b was in development, IEEE created a second extension to the original 802.11 standard
called 802.11a. Because 802.11b gained in popularity much faster than did 802.11a, some folks
believe that 802.11a was created after 802.11b. In fact, 802.11a was created at the same time. Due
to its higher cost, 802.11a is usually found on business networks whereas 802.11b better serves the
home market.

802.11a supports bandwidth up to 54 Mbps and signals in a regulated frequency spectrum around 5
GHz. This higher frequency compared to 802.11b shortens the range of 802.11a networks. The
higher frequency also means 802.11a signals have more difficulty penetrating walls and other
obstructions.

Because 802.11a and 802.11b utilize different frequencies, the two technologies are incompatible
with each other. Some vendors offer hybrid 802.11a/b network gear, but these products merely
implement the two standards side by side (each connected device must use one or the other).

802.11a is also referred to as Wi-Fi 2.

802.11b

IEEE expanded on the original 802.11 standard in July 1999, creating the 802.11b specification.
802.11b supports a theoretical speed up to 11 Mbps. A more realistic bandwidth of 5.9 Mbps (TCP)
and 7.1 Mbps (UDP) should be expected.

802.11b uses the same unregulated radio signaling frequency (2.4 GHz) as the original 802.11
standard. Vendors often prefer using these frequencies to lower their production costs. Being
unregulated, 802.11b gear can incur interference from microwave ovens, cordless phones, and other
appliances using the same 2.4 GHz range. However, by installing 802.11b gear a reasonable
distance from other appliances, interference can easily be avoided.

802.11b is also referred to as Wi-Fi 1.

MAC address (Media Access Control address)
In a local area network (LAN) or other network, the MAC (Media Access Control) address
is your computer's unique hardware number. (On an Ethernet LAN, it's the same as your
Ethernet address.) When you're connected to the Internet from your computer (or host as
the Internet protocol thinks of it), a correspondence table relates your IP address to your
computer's physical (MAC) address on the LAN.

The MAC address is used by the Media Access Control sublayer of the Data-Link Control (DLC) layer of
telecommunication protocols. There is a different MAC sublayer for each physical device type. The
other sublayer in the DLC layer is the Logical Link Control sublayer.
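
On a Linux host you can see that correspondence table directly: the kernel exposes its ARP cache, which maps IP addresses to MAC addresses on the local network, as the plain text file /proc/net/arp. The short C program below simply prints it; this is Linux-specific and assumes the /proc filesystem is available.

#include <stdio.h>

int main(void) {
    /* The ARP cache is the kernel's table of IP-address-to-MAC-address
       mappings for hosts on the local network. */
    FILE *arp = fopen("/proc/net/arp", "r");
    if (arp == NULL) {
        perror("/proc/net/arp");
        return 1;
    }

    char line[512];
    while (fgets(line, sizeof line, arp) != NULL)
        fputs(line, stdout);          /* columns include IP address and HW (MAC) address */

    fclose(arp);
    return 0;
}
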
port
1) On computer and telecommunication devices, a port (noun) is generally a specific place
for being physically connected to some other device, usually with a socket and plug of
some kind. Typically, a personal computer is provided with one or more serial ports and
usually one parallel port. The serial port supports sequential, one-bit-at-a-time
transmission to peripheral devices such as scanners, and the parallel port supports
multiple-bit-at-a-time transmission to devices such as printers.

2) In programming, a port (noun) is a "logical connection place" and specifically, using the
Internet's protocol, TCP/IP, the way a client program specifies a particular server program
on a computer in a network. Higher-level applications that use TCP/IP, such as the Web
protocol, Hypertext Transfer Protocol, have ports with preassigned numbers. These are
known as "well-known ports" and have been assigned by the Internet Assigned Numbers
Authority (IANA). Other application processes are given port numbers dynamically for
each connection. When a service (server program) initially is started, it is said to bind to its
designated port number. Any client program that wants to use that server then addresses
its requests to that designated port number, as sketched below.
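
In C, with the Berkeley sockets API, the bind step described above looks roughly like the following sketch. Port 8080 is an arbitrary example rather than a well-known port, and a real service would add fuller error handling and loop around accept() to serve many clients.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    /* Create a TCP socket and bind it to a designated port number so that
       clients can direct their requests to this server program. */
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* accept connections on any local interface */
    addr.sin_port        = htons(8080);         /* illustrative, not a well-known port */

    if (bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    listen(listener, 8);                         /* mark the socket as accepting connections */
    printf("bound to port 8080, waiting for a client...\n");

    /* Each client that connects to this port is handed back as a new socket. */
    int client = accept(listener, NULL, NULL);
    if (client >= 0) {
        const char *msg = "hello from the server program\n";
        send(client, msg, strlen(msg), 0);
        close(client);
    }
    close(listener);
    return 0;
}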

3) In programming, to port (verb) is to move an application program from an
operating system environment in which it was developed to another operating system
environment so it can be run there. Porting implies some work, but not nearly as much as
redeveloping the program in the new environment. Open standard
programming interfaces (such as those specified in X/Open's 1170 C language
specification and Sun Microsystems' Java programming language) minimize or eliminate
the work required to port a program.

server
A server is a computer program that provides a service to other computer programs
(and their users). In a data center, the physical computer that a server program runs on is also
frequently referred to as a server. That machine may be a dedicated server or it may be
used for other purposes as well.

In the client/server programming model, a server program awaits and fulfills requests
from client programs, which may be running in the same or other computers. A given
application in a computer may function as a client with requests for services from other
programs and also as a server of requests from other programs.

Types of servers

Servers are often categorized in terms of their purpose. A Web server, for example, is a
computer program that serves requested HTML pages or files. The program that is
requesting web content is called a client. For example, a Web browser is a client that
requests HTML files from Web servers.

Here are a few other types of servers, among a great number of other possibilities:

• An application server is a program in a computer in a distributed network that provides
the business logic for an application program.

• A proxy server is software that acts as an intermediary between an endpoint device,
such as a computer, and another server from which a user or client is requesting a
service.

• A mail server is an application that receives incoming e-mail from local users (people
within the same domain) and remote senders and forwards outgoing e-mail for
delivery.

• A virtual server is a program running on a shared server that is configured in such a
way that it seems to each user that they have complete control of a server.

• A blade server is a server chassis housing multiple thin, modular electronic circuit
boards, known as server blades. Each blade is a server in its own right, often
dedicated to a single application.

• A file server is a computer responsible for the central storage and management
of data files so that other computers on the same network can access them.

• A policy server is a security component of a policy-based network that
provides authorization services and facilitates tracking and control of files.
Choosing the right server

There are many factors to consider in the midst of a server selection, including virtual
machine (VM) and container consolidation. When choosing a server, it is important to
evaluate the importance of certain features based on the use cases. Security
capabilities are also important and there will probably be a number of protection, detection
and recovery features to consider, including native data encryption to protect data in flight
and data at rest, as well as persistent event logging to provide an indelible record of all
activity. If the server will rely on internal storage, the choice of disk types and capacity is
also important because it can have a significant influence on input/output (I/O) and
resilience.

Many organizations are shrinking the number of physical servers in their data centers as
virtualization allows fewer servers to host more workloads. The advent of cloud computing
has also changed the number of servers an organization needs to host on
premises. Packing more capability into fewer boxes can reduce overall capital expenses,
data center floor space and power and cooling demands. Hosting more workloads on
fewer boxes, however, can also pose an increased risk to the business because more
workloads will be affected if the server fails or needs to be offline for routine maintenance.
network socket
A network socket is one endpoint in a communication flow between two programs running
over a network.

Sockets are created and used with a set of programming requests or "function calls"
sometimes called the sockets application programming interface (API). The most common
sockets API is the Berkeley UNIX C interface for sockets. Sockets can also be used for
communication between processes within the same computer.

This is the typical sequence of sockets requests from a server application in
the connectionless context of the Internet, in which a server handles many client requests
and does not maintain a connection longer than the serving of the immediate request:

socket()
|
bind()
|
recvfrom()
|
(wait for a sendto request from some client)
|
(process the sendto request)
|
sendto (in reply to the request from the client...for example, send an HTML file)

A corresponding client sequence of sockets requests would be:

socket()
|
bind()
|
sendto()
|
recvfrom()

Sockets can also be used for "connection-oriented" transactions with a somewhat different
sequence of C language system calls or functions.
The Secure Sockets Layer (SSL) is a computer networking protocol that manages
server authentication, client authentication and encrypted communication
between servers and clients.
