The Internet is the global system of interconnected computer networks that use the Internet protocol
suite (TCP/IP) to link billions of devices worldwide. It is a network of networks that consists of
millions of private, public, academic, business, and government networks of local to global scope,
linked by a broad array of electronic, wireless, and optical networking technologies. The Internet
carries an extensive range of information resources and services, such as the interlinked hypertext
documents and applications of the World Wide Web (WWW), electronic mail, telephony,
and peer-to-peer networks for file sharing.
The origins of the Internet date back to research commissioned by the United States federal
government in the 1960s to build robust, fault-tolerant communication via computer networks.[1] The
primary precursor network, the ARPANET, initially served as a backbone for interconnection of
regional academic and military networks in the 1980s. The funding of the National Science
Foundation Network as a new backbone in the 1980s, as well as private funding for other
commercial extensions, led to worldwide participation in the development of new networking
technologies, and the merger of many networks.[2] The linking of commercial networks and
enterprises by the early 1990s marks the beginning of the transition to the modern Internet,[3] and
generated a sustained exponential growth as generations of institutional, personal,
and mobile computers were connected to the network.
Although the Internet has been widely used by academia since the 1980s,
the commercialization incorporated its services and technologies into virtually every aspect of
modern life. Internet use grew rapidly in the West from the mid-1990s and from the late 1990s in
the developing world.[4] In the 20 years since 1995, Internet use has grown a hundredfold, measured
over the period of one year, to reach over one third of the world population.[5][6]
Most traditional communications media, including telephony and television, are being reshaped or
redefined by the Internet, giving birth to new services such as Internet telephony and Internet
television. Newspaper, book, and other print publishing are adapting to website technology, or are
reshaped into blogging and web feeds. The entertainment industry was initially the fastest growing
segment on the Internet.[citation needed] The Internet has enabled and accelerated new forms of personal
interactions through instant messaging, Internet forums, and social networking. Online shopping has
grown exponentially both for major retailers and small artisans and traders. Business-to-business
and financial services on the Internet affect supply chains across entire industries.
The Internet has no centralized governance in either technological implementation or policies for
access and usage; each constituent network sets its own policies.[7] Only the overreaching definitions
of the two principal name spaces in the Internet, the Internet Protocol address space and
the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation
for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the
core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization
of loosely affiliated international participants that anyone may associate with by contributing
technical expertise.[8]
Terminology
The Internet Messenger by Buky Schwartz in Holon.
The term Internet, when used to refer to the specific global system of interconnected Internet
Protocol (IP) networks, is a proper noun[9] and may be written with an initial capital letter. In common
use and the media, it is often not capitalized, viz. the internet. Some guides specify that the word
should be capitalized when used as a noun, but not capitalized when used as an adjective.[10]
The Internet is also often referred to as the Net, as a short form of network.
Historically, as early as 1849, the word internetted was used uncapitalized as an adjective,
meaning interconnected or interwoven.[11] The designers of early computer networks
used internet both as a noun and as a verb in shorthand form of internetwork or internetworking,
meaning interconnecting computer networks.[12]
The terms Internet and World Wide Web are often used interchangeably in everyday speech; it is
common to speak of "going on the Internet" when invoking a web browser to view web pages.
However, the World Wide Web or the Web is only one of a large number of Internet services. The
Web is a collection of interconnected documents (web pages) and other web resources, linked
by hyperlinks and URLs.[13] As another point of comparison, Hypertext Transfer Protocol, or HTTP, is
the language used on the Web for information transfer, yet it is just one of many languages or
protocols that can be used for communication on the Internet.[14]
The term Interweb is a portmanteau of Internet and World Wide Web typically used sarcastically to
parody a technically unsavvy user.
History
Main articles: History of the Internet and History of the World Wide Web
Research into packet switching started in the early 1960s,[15] and packet switched networks such as
the ARPANET, CYCLADES,[16][17] the Merit Network,[18] NPL network,[19] Tymnet, and Telenet, were
developed in the late 1960s and 1970s using a variety of protocols.[20] The ARPANET project led to
the development of protocols for internetworking, by which multiple separate networks could be
joined into a single network of networks.[21]
ARPANET development began with two network nodes which were interconnected between the
Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli
School of Engineering and Applied Science, directed by Leonard Kleinrock, and the NLS system
at SRI International (SRI) by Douglas Engelbart in Menlo Park, California, on 29 October
1969.[22] The third site was the Culler-Fried Interactive Mathematics Center at the University of
California, Santa Barbara, followed by the University of Utah Graphics Department. In an early sign
of future growth, fifteen sites were connected to the young ARPANET by the end of 1971.[23][24] These
early years were documented in the 1972 film Computer Networks: The Heralds of Resource
Sharing.
Early international collaborations on the ARPANET were rare. European developers were concerned
with developing the X.25 networks.[25] Notable exceptions were the Norwegian Seismic Array
(NORSAR) in June 1973, followed in 1973 by Sweden with satellite links to the Tanum Earth Station
and Peter T. Kirstein's research group in the United Kingdom, initially at the Institute of Computer
Science, University of London and later at University College London.[26][27][28]
In December 1974, RFC 675 (Specification of Internet Transmission Control Program), by Vinton
Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking and
later RFCs repeated this use.[29] Access to the ARPANET was expanded in 1981 when the National
Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet
Protocol Suite (TCP/IP) was standardized, which permitted worldwide proliferation of interconnected
networks.
TCP/IP network access expanded again in 1986 when the National Science Foundation
Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first
at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.[30] Commercial Internet service
providers (ISPs) emerged in the late 1980s and early 1990s. The ARPANET was decommissioned
in 1990. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was
decommissioned, removing the last restrictions on use of the Internet to carry commercial
traffic.[31] The Internet rapidly expanded in Europe and Australia in the mid to late 1980s[32][33] and to
Asia in the late 1980s and early 1990s.[34]
The beginning of dedicated transatlantic communication between the NSFNET and networks in
Europe was established with a low-speed satellite relay between Princeton
University and Stockholm, Sweden in December 1988.[35] Although other network protocols such
as UUCP had global reach well before this time, this marked the beginning of the Internet as an
intercontinental network.
Slightly over a year later in March 1990, the first high-speed T1 (1.5 Mbit/s) link between the
NSFNET and Europe was installed between Cornell University and CERN, allowing much more
robust communications than were capable with satellites.[36] Six months later Tim Berners-Lee would
begin writing WorldWideWeb, the first web browser, after two years of lobbying CERN management.
By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText
Transfer Protocol (HTTP) 0.9,[37] the HyperText Markup Language (HTML), the first Web browser
(which was also an HTML editor and could access Usenet newsgroups and FTP files), the first
HTTP server software (later known as CERN httpd), the first web server (http://info.cern.ch), and the
first Web pages that described the project itself.
Since 1995 the Internet has tremendously impacted culture and commerce, including the rise of near
instant communication by email, instant messaging, telephony (Voice over Internet Protocol or
VoIP), two-way interactive video calls, and the World Wide Web[38] with its discussion forums,
blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at
higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more.
The Internet continues to grow, driven by ever greater amounts of online information and knowledge,
commerce, entertainment and social networking.[41] During the late 1990s, it was estimated that traffic
on the public Internet grew by 100 percent per year, while the mean annual growth in the number of
Internet users was thought to be between 20% and 50%.[42] This growth is often attributed to the lack
of central administration, which allows organic growth of the network, as well as the non-proprietary
nature of the Internet protocols, which encourages vendor interoperability and prevents any one
company from exerting too much control over the network.[43] As of 31 March 2011, the estimated
total number of Internet users was 2.095 billion (30.2% of world population).[44] It is estimated that in
1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by
2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information
was carried over the Internet.[45]
Governance
Main article: Internet governance
ICANN headquarters in the Playa Vista neighborhood of Los Angeles, California, United States.
The Internet is a global network comprising many voluntarily interconnected autonomous networks.
It operates without a central governing body.
The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of
the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated
international participants that anyone may associate with by contributing technical expertise.
To maintain interoperability, the principal name spaces of the Internet are administered by
the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an
international board of directors drawn from across the Internet technical, business, academic, and
other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use
on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers
in the transport protocols, and many other parameters. Globally unified name spaces are essential
for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the
only central coordinating body for the global Internet.[46]
The National Telecommunications and Information Administration, an agency of the United States
Department of Commerce, continues to have final approval over changes to the DNS root
zone.[47][48][49]
The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development,
evolution and use of the Internet for the benefit of all people throughout the world".[50] Its members
include individuals (anyone may join) as well as corporations, organizations, governments, and
universities. Among other activities ISOC provides an administrative home for a number of less
formally organized groups that are involved in developing and managing the Internet, including:
the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering
Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering
Group (IRSG).
Infrastructure
See also: List of countries by number of Internet users and List of countries by Internet connection
speeds
2007 map showing submarine fiberoptic telecommunication cables around the world.
The communications infrastructure of the Internet consists of its hardware components and a system
of software layers that control various aspects of the architecture.
Packet routing across the Internet involves several tiers of Internet service providers.
Internet service providers establish the worldwide connectivity between individual networks at
various levels of scope. End-users who only access the Internet when needed to perform a function
or obtain information represent the bottom of the routing hierarchy. At the top of the routing
hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly
with each other via peering agreements. Tier 2 and lower level networks buy Internet transit from
other providers to reach at least some parties on the global Internet, though they may also engage in
peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to
achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with
physical connections to multiple ISPs.
Large organizations, such as academic institutions, large enterprises, and governments, may
perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their
internal networks. Research networks tend to interconnect with large subnetworks such
as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.
It has been determined that both the Internet IP routing structure and hypertext links of the World
Wide Web are examples of scale-free networks.[51]
Computers and routers use routing tables in their operating system to direct IP packets to the next-
hop router or destination. Routing tables are maintained by manual configuration or automatically
by routing protocols. End-nodes typically use a default route that points toward an ISP providing
transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing
across the complex connections of the global Internet.
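The longest-prefix-match rule that such routing tables apply can be sketched in a few lines of Python; the prefixes and next-hop names below are invented for illustration, and a real router would hold hundreds of thousands of BGP-learned entries:

```python
import ipaddress

# A toy routing table (hypothetical entries): prefix -> next hop.
ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "upstream-isp",    # default route
    ipaddress.ip_network("10.0.0.0/8"): "internal-core",
    ipaddress.ip_network("10.1.2.0/24"): "branch-office",
}

def next_hop(address: str) -> str:
    """Pick the most specific (longest) matching prefix, as IP routers do."""
    addr = ipaddress.ip_address(address)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.7"))   # matches all three prefixes; /24 wins: branch-office
print(next_hop("10.9.9.9"))   # internal-core
print(next_hop("8.8.8.8"))    # only the default route matches: upstream-isp
```

An end-node's "default route that points toward an ISP" is simply the `0.0.0.0/0` entry: it matches every destination, but only when no more specific prefix does.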
Access
Common methods of Internet access by users include dial-up with a computer modem via telephone
circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite and cellular
telephone technology (3G, 4G). The Internet may often be accessed from computers in libraries
and Internet cafes. Internet access points exist in many public places such as airport halls and
coffee shops. Various terms are used, such as public Internet kiosk, public access terminal,
and Web payphone. Many hotels also have public terminals, though these are usually fee-based.
These terminals are widely accessed for various usages, such as ticket booking, bank deposit, or
online payment. Wi-Fi provides wireless access to the Internet via local computer
networks. Hotspots providing such access include Wi-Fi cafes, where users need to bring their own
wireless devices such as a laptop or PDA. These services may be free to all, free to customers only,
or fee-based.
Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering
large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago
and Pittsburgh. The Internet can then be accessed from such places as a park bench.[52] Apart from
Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various
high-speed data services over cellular phone networks, and fixed wireless services. High-end mobile
phones such as smartphones in general come with Internet access through the phone network. Web
browsers such as Opera are available on these advanced handsets, which can also run a wide
variety of other Internet software. More mobile phones have Internet access than PCs, though this is
not as widely used.[53] An Internet access provider and protocol matrix differentiates the methods
used to get online.
Structure
Many computer scientists describe the Internet as a "prime example of a large-scale, highly
engineered, yet highly complex system".[54] The structure was found to be highly robust to random
failures,[55] yet very vulnerable to intentional attacks.[56]
The Internet structure and its usage characteristics have been studied extensively and the possibility
of developing alternative structures has been investigated.[57]
Protocols
While the hardware components in the Internet infrastructure can often be used to support other
software systems, it is the design and the standardization process of the software that characterizes
the Internet and provides the foundation for its scalability and success. The responsibility for the
architectural design of the Internet software systems has been assumed by the Internet Engineering
Task Force (IETF).[58] The IETF conducts standard-setting work groups, open to any individual, about
the various aspects of Internet architecture. Resulting contributions and standards are published
as Request for Comments (RFC) documents on the IETF web site.
The principal methods of networking that enable the Internet are contained in specially designated
RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative,
experimental, or historical, or document the best current practices (BCP) when implementing
Internet technologies.
The Internet standards describe a framework known as the Internet protocol suite. This is a model
architecture that divides methods into a layered system of protocols, originally documented in RFC
1122 and RFC 1123. The layers correspond to the environment or scope in which their services
operate. At the top is the application layer, space for the application-specific networking methods
used in software applications. For example, a web browser program uses the client-
server application model and a specific protocol of interaction between servers and clients, while
many file-sharing systems use a peer-to-peer paradigm. Below this top layer, the transport
layer connects applications on different hosts with a logical channel through the network with
appropriate data exchange methods.
Underlying these layers are the networking technologies that interconnect networks at their borders
and hosts via the physical connections. The Internet layer enables computers to identify and locate
each other via Internet Protocol (IP) addresses, and routes their traffic via intermediate (transit)
networks. Last, at the bottom of the architecture is the link layer, which provides connectivity
between hosts on the same network link, such as a physical connection in the form of a local area
network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be
independent of the underlying hardware, which the model, therefore, does not concern itself with in
any detail. Other models have been developed, such as the OSI model, that attempt to be
comprehensive in every aspect of communications. While many similarities exist between the
models, they are not compatible in the details of description or implementation; indeed, TCP/IP
protocols are usually included in the discussion of OSI networking.
As user data is processed through the protocol stack, each abstraction layer adds encapsulation information at
the sending host. Data is transmitted over the wire at the link level between hosts and routers. Encapsulation is
removed by the receiving host. Intermediate relays update link encapsulation at each hop, and inspect the IP
layer for routing purposes.
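The layered encapsulation described above can be illustrated with a toy sketch; the fixed string tags stand in for the real binary TCP, IP, and Ethernet headers, which carry ports, addresses, checksums, and other fields:

```python
def encapsulate(payload: bytes) -> bytes:
    """Each layer prepends its own header to the data from the layer above."""
    segment = b"TCP|" + payload   # transport layer wraps application data
    packet = b"IP|" + segment     # internet layer wraps the segment
    frame = b"ETH|" + packet      # link layer wraps the packet for the wire
    return frame

def decapsulate(frame: bytes) -> bytes:
    """The receiving host strips the headers in the reverse order."""
    for header in (b"ETH|", b"IP|", b"TCP|"):
        if not frame.startswith(header):
            raise ValueError("malformed frame")
        frame = frame[len(header):]
    return frame

wire = encapsulate(b"GET / HTTP/1.1")  # application-layer data going down the stack
print(wire)                            # b'ETH|IP|TCP|GET / HTTP/1.1'
print(decapsulate(wire))               # b'GET / HTTP/1.1'
```

An intermediate router, per the caption, would strip and rebuild only the outermost `ETH|` layer at each hop while reading the `IP|` layer to choose the next hop.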
The most prominent component of the Internet model is the Internet Protocol (IP), which provides
addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and, in
essence, establishes the Internet itself. Internet Protocol Version 4 (IPv4) is the initial version used
on the first generation of the Internet and is still in dominant use. It was designed to address up to
~4.3 billion (10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address
exhaustion, which entered its final stage in 2011,[59] when the global address allocation pool was
exhausted. A new protocol version, IPv6, was developed in the mid-1990s, which provides vastly
larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in
growing deployment around the world, since Internet address registries (RIRs) began to urge all
resource managers to plan rapid adoption and conversion.[60]
IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of
the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for
internetworking or nodes must have duplicate networking software for both networks. Essentially all
modern computer operating systems support both versions of the Internet Protocol. Network
infrastructure, however, is still lagging in this development. Aside from the complex array of physical
connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral
commercial contracts, e.g., peering agreements, and by technical specifications or protocols that
describe the exchange of data over the network. Indeed, the Internet is defined by its
interconnections and routing policies.
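The disparity between the two address spaces, and the fact that they are disjoint, can be seen with Python's standard ipaddress module; the two example addresses are arbitrary:

```python
import ipaddress

# Arbitrary example addresses, one of each protocol version.
v4 = ipaddress.ip_address("93.184.216.34")
v6 = ipaddress.ip_address("2606:2800:220:1:248:1893:25c8:1946")

print(v4.version, v6.version)   # 4 6

# IPv4 uses 32-bit addresses: roughly 4.3 billion in total, now exhausted.
print(2 ** 32)                  # 4294967296

# IPv6 uses 128-bit addresses: a vastly larger space.
print(2 ** 128)

# The two namespaces never overlap; an IPv4 address is not an IPv6 address.
print(v4 == v6)                 # False
```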
Services
The Internet carries many network services, most prominently mobile apps such as social
media apps, the World Wide Web, electronic mail, multiplayer online games, Internet telephony,
and file sharing services.
This NeXT Computer was used by Tim Berners-Lee at CERN and became the world's first Web server.
Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the
two terms are not synonymous. The World Wide Web is the primary application that billions of
people use on the Internet, and it has changed their lives immeasurably.[61][62] However, the Internet
provides many other services. The Web is a global set of documents, images and other resources,
logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs
symbolically identify services, servers, and other databases, and the documents and resources that
they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide
Web. Web services also use HTTP to allow software systems to communicate in order to share and
exchange business logic and data.
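How a URI symbolically names the protocol, the server, and the resource can be seen by splitting one with Python's standard library; the host is CERN's first web server mentioned earlier, while the path is only an illustrative example:

```python
from urllib.parse import urlsplit

# Host from the article; the path is assumed for illustration.
uri = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlsplit(uri)

print(parts.scheme)   # http -> the access protocol to use
print(parts.netloc)   # info.cern.ch -> the server to contact
print(parts.path)     # /hypertext/WWW/TheProject.html -> the resource to request
```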
World Wide Web browser software, such as Microsoft's Internet Explorer, Mozilla
Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to
another via hyperlinks embedded in the documents. These documents may also contain any
combination of computer data, including graphics, sounds, text, video, multimedia and interactive
content that runs while the user is interacting with the page. Client-side software can include
animations, games, office applications and scientific demonstrations. Through keyword-
driven Internet research using search engines like Yahoo! and Google, users worldwide have easy,
instant access to a vast and diverse amount of online information. Compared to printed media,
books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization
of information on a large scale.
The Web has also enabled individuals and organizations to publish ideas and information to a
potentially large audience online at greatly reduced expense and time delay. Publishing a web page,
a blog, or building a website involves little initial cost and many cost-free services are available.
However, publishing and maintaining large, professional web sites with attractive, diverse and up-to-
date information is still a difficult and expensive proposition. Many individuals and some companies
and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some
commercial organizations encourage staff to communicate advice in their areas of specialization in
the hope that visitors will be impressed by the expert knowledge and free information, and be
attracted to the corporation as a result.
One example of this practice is Microsoft, whose product developers publish their personal blogs in
order to pique the public's interest in their work.[original research?] Collections of personal web pages
published by large service providers remain popular and have become increasingly sophisticated.
Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web,
newer offerings from, for example, Facebook and Twitter currently have large followings. These
operations often brand themselves as social network services rather than simply as web page
hosts.[citation needed]
Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and
services directly via the Web continues to grow.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web
server, formatted in HTML, complete for transmission to a web browser in response to a request.
Over time, the process of creating and serving web pages has become dynamic, creating a flexible
design, layout, and content. Websites are often created using content management software with,
initially, very little content. Contributors to these systems, who may be paid staff, members of an
organization or the public, fill underlying databases with content using editing pages designed for
that purpose while casual visitors view and read this content in HTML form. There may or may not
be editorial, approval and security systems built into the process of taking newly entered content and
making it available to the target visitors.
Communication
Email is an important communications service available on the Internet. The concept of sending
electronic text messages between parties in a way analogous to mailing letters or memos predates
the creation of the Internet. Pictures, documents, and other files are sent as email attachments.
Emails can be cc-ed to multiple email addresses.
Internet telephony is another common communications service made possible by the creation of the
Internet. VoIP stands for Voice over Internet Protocol, referring to the Internet Protocol (IP) that
underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications
for personal computers. In recent years many VoIP systems have become as easy to use and as
convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP
can be free or cost much less than a traditional telephone call, especially over long distances and
especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing
into a competitive alternative to traditional telephone service. Interoperability between different
providers has improved and the ability to call or receive a call from a traditional telephone is
available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a
personal computer.
Voice quality can still vary from call to call, but is often equal to and can even exceed that of
traditional calls. Remaining problems for VoIP include emergency telephone number dialing and
reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally
available. Older traditional phones with no "extra features" may be line-powered only and operate
during a power failure; VoIP can never do so without a backup power source for the phone
equipment and the Internet access devices. VoIP has also become increasingly popular for gaming
applications, as a form of communication between players. Popular VoIP clients for gaming
include Ventrilo and Teamspeak. Modern video game consoles also offer VoIP chat features.
Data transfer
File sharing is an example of transferring large amounts of data across the Internet. A computer
file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a
website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a
"shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to
many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these
cases, access to the file may be controlled by user authentication, the transit of the file over the
Internet may be obscured by encryption, and money may change hands for access to the file. The
price can be paid by the remote charging of funds from, for example, a credit card whose details are
also passed – usually fully encrypted – across the Internet. The origin and authenticity of the file
received may be checked by digital signatures or by MD5 or other message digests. These simple
features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of
anything that can be reduced to a computer file for transmission. This includes all manner of print
publications, software products, news, music, film, video, photography, graphics and the other arts.
This in turn has caused seismic shifts in each of the existing industries that previously controlled the
production and distribution of these products.
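The message-digest check mentioned above can be sketched with Python's hashlib; MD5 is used here only because the text names it, and a modern digest such as SHA-256 would be preferred in practice:

```python
import hashlib

# The sender publishes the digest of the file alongside the download.
data = b"example file contents"
expected = hashlib.md5(data).hexdigest()

def verify(received: bytes, digest: str) -> bool:
    """Recompute the digest of the received bytes and compare."""
    return hashlib.md5(received).hexdigest() == digest

print(verify(data, expected))         # True: the file arrived intact
print(verify(data + b"!", expected))  # False: even one changed byte is detected
```

Note that a bare digest only detects accidental corruption; authenticating the file's origin additionally requires a digital signature over the digest.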
Streaming media is the real-time delivery of digital media for the immediate consumption or
enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live
audio and video productions. They may also allow time-shift viewing or listening such as Preview,
Classic Clips and Listen Again features. These providers have been joined by a range of pure
Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected
device, such as a computer or something more specific, can be used to access on-line media in
much the same way as was previously possible only with a television or radio receiver. The range of
available types of content is much wider, from specialized technical webcasts to on-demand popular
multimedia services. Podcasting is a variation on this theme, where – usually audio – material is
downloaded and played back on a computer or shifted to a portable media player to be listened to
on the move. These techniques using simple equipment allow anybody, with little censorship or
licensing control, to broadcast audio-visual material worldwide.
Digital media streaming increases the demand for network bandwidth. For example, standard image
quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-
the-line HDX quality needs 4.5 Mbit/s for 1080p.[63]
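The cited bitrates translate directly into data volumes. As a rough illustration (decimal gigabytes, a constant bitrate assumed, protocol overhead ignored):

```python
# Streaming bitrates cited in the text, in Mbit/s.
BITRATES_MBIT_S = {"480p": 1.0, "720p": 2.5, "1080p": 4.5}

def gigabytes_per_hour(mbit_per_s):
    """Convert a stream bitrate (Mbit/s) to decimal gigabytes per hour."""
    bits_per_hour = mbit_per_s * 1_000_000 * 3600
    return bits_per_hour / 8 / 1_000_000_000

# An hour of 1080p at 4.5 Mbit/s works out to about 2.0 GB of data.
```

By this estimate, an hour of 480p streaming consumes about 0.45 GB, while an hour of 1080p consumes roughly 2 GB.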
Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-
rate video, the picture is usually either small or updates slowly. Internet users can watch animals
around an African waterhole, ships in the Panama Canal, traffic at a local roundabout, or monitor their
own premises, live and in real time. Video chat rooms and video conferencing are also popular with
many uses being found for personal webcams, with and without two-way sound. YouTube was
founded on 15 February 2005 and is now the leading website for free streaming video with a vast
number of users. It uses a Flash-based web player to stream and show video files; currently,
YouTube also uses an HTML5 player.[64] Registered users may upload an unlimited amount of video
and build their own personal profiles. YouTube claims that its users watch hundreds of millions of
videos, and upload hundreds of thousands, daily.
Social impact
The Internet has enabled new forms of social interaction, activities, and social associations. This
phenomenon has given rise to the scholarly study of the sociology of the Internet.
Users
See also: Global Internet usage and English on the Internet
Internet usage has seen tremendous growth. From 2000 to 2009, the number of Internet users
globally rose from 394 million to 1.858 billion.[69] By 2010, 22 percent of the world's population had
access to computers, with 1 billion Google searches every day, 300 million Internet users reading
blogs, and 2 billion videos viewed daily on YouTube.[70] In 2014 the world's Internet users surpassed
3 billion, or 43.6 percent of the world population, but two-thirds of the users came from the richest
countries, with 78.0 percent of Europe's population using the Internet, followed by 57.4 percent of
the Americas.[71]
The prevalent language for communication on the Internet has been English. This may be a result of
the origin of the Internet, as well as the language's role as a lingua franca. Early computer systems
were limited to the characters in the American Standard Code for Information Interchange (ASCII), a
subset of the Latin alphabet.
After English (27%), the most requested languages on the World Wide Web are Chinese (25%),
Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian
(3% each), and Korean (2%).[67] By region, 42% of the world's Internet users are based in Asia, 24%
in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in
Africa, 3% in the Middle East and 1% in Australia/Oceania.[72] The Internet's technologies have
developed enough in recent years, especially in the use of Unicode, that good facilities are available
for development and communication in the world's widely used languages. However, some glitches
such as mojibake (incorrect display of some languages' characters) still remain.
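Mojibake can be reproduced in a few lines: bytes written in one encoding and decoded in another come out garbled. A minimal Python illustration:

```python
# Mojibake arises when bytes encoded in one character set are
# decoded using another.
text = "résumé"
utf8_bytes = text.encode("utf-8")

# Decoding UTF-8 bytes as Latin-1 mangles each accented character:
garbled = utf8_bytes.decode("latin-1")   # 'rÃ©sumÃ©'

# Decoding with the correct encoding recovers the original text:
restored = utf8_bytes.decode("utf-8")    # 'résumé'
```

Unicode, transmitted consistently as UTF-8, avoids exactly this class of error, which is why its adoption has made multilingual communication on the Web far more reliable.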
In an American study in 2005, the percentage of men using the Internet was very slightly ahead of
the percentage of women, although this difference reversed in those under 30. Men logged on more
often, spent more time online, and were more likely to be broadband users, whereas women tended
to make more use of opportunities to communicate (such as email). Men were more likely to use the
Internet to pay bills, participate in auctions, and for recreation such as downloading music and
videos. Men and women were equally likely to use the Internet for shopping and banking.[73] More
recent studies indicate that in 2008, women significantly outnumbered men on most social
networking sites, such as Facebook and Myspace, although the ratios varied with age.[74] In addition,
women watched more streaming content, whereas men downloaded more.[75] In terms of blogs, men
were more likely to blog in the first place; among those who blog, men were more likely to have a
professional blog, whereas women were more likely to have a personal blog.[76]
According to forecasts by Euromonitor International, 44% of the world's population will be users of
the Internet by 2020.[77] Splitting by country, in 2012 Iceland, Norway, Sweden, the Netherlands, and
Denmark had the highest Internet penetration by the number of users, with 93% or more of the
population having access.[78]
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net")[79] refers
to those actively involved in improving online communities, the Internet in general, or surrounding
political affairs and rights such as free speech;[80][81] Internaut refers to operators or technically highly
capable users of the Internet;[82][83] and digital citizen refers to a person using the Internet in order to
engage in society, politics, and government.[84]
Usage
The Internet allows greater flexibility in working hours and location, especially with the spread of
unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous
means, including through mobile Internet devices. Mobile phones, datacards, handheld game
consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations
imposed by small screens and other limited facilities of such pocket-sized devices, the services of
the Internet, including email and the web, may be available. Service providers may restrict the
services offered and mobile data charges may be significantly higher than other access methods.
Educational material at all levels from pre-school to post-doctoral is available from websites.
Examples range from CBeebies, through school and high-school revision guides and virtual
universities, to access to top-end scholarly literature through the likes of Google Scholar.
For distance education, help with homework and other assignments, self-guided learning, whiling
away spare time, or just looking up more detail on an interesting fact, it has never been easier for
people to access educational information at any level from anywhere. The Internet in general and
the World Wide Web in particular are important enablers of both formal and informal education.
Further, the Internet allows universities, in particular, researchers from the social and behavioral
sciences, to conduct research remotely via virtual laboratories, with profound changes in reach and
generalizability of findings as well as in communication between scientists and in the publication of
results.[85]
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have
made collaborative work dramatically easier, with the help of collaborative software. Not only can a
group cheaply communicate and share ideas but the wide reach of the Internet allows such groups
more easily to form. An example of this is the free software movement, which has produced, among
other things, Linux, Mozilla Firefox, and OpenOffice.org. Internet chat, whether using an IRC chat
room, an instant messaging system, or a social networking website, allows colleagues to stay in
touch in a very convenient way while working at their computers during the day. Messages can be
exchanged even more quickly and conveniently than via email. These systems may allow files to be
exchanged, drawings and images to be shared, or voice and video contact between team members.
Content management systems allow collaborating teams to work on shared sets of documents
simultaneously without accidentally destroying each other's work. Business and project teams can
share calendars as well as documents and other information. Such collaboration occurs in a wide
variety of areas including scientific research, software development, conference planning, political
activism and creative writing. Social and political collaboration is also becoming more widespread as
both Internet access and computer literacy spread.
The Internet allows computer users to remotely access other computers and information stores
easily from any access point. Access may be with computer security, i.e. authentication and
encryption technologies, depending on the requirements. This is encouraging new ways of working
from home, collaboration and information sharing in many industries. An accountant sitting at home
can audit the books of a company based in another country, on a server situated in a third country
that is remotely maintained by IT specialists in a fourth. These accounts could have been created by
home-working bookkeepers, in other remote locations, based on information emailed to them from
offices all over the world. Some of these things were possible before the widespread use of the
Internet, but the cost of private leased lines would have made many of them infeasible in practice. An
office worker away from their desk, perhaps on the other side of the world on a business trip or a
holiday, can access their emails, access their data using cloud computing, or open a remote
desktop session into their office PC using a secure virtual private network (VPN) connection on the
Internet. This can give the worker complete access to all of their normal files and data, including
email and other applications, while away from the office. It has been referred to among system
administrators as the Virtual Private Nightmare,[86] because it extends the secure perimeter of a
corporate network into remote locations and its employees' homes.
Many people use the World Wide Web to access news, weather and sports reports, to plan and
book vacations and to pursue their personal interests. People use chat, messaging and email to
make and stay in touch with friends worldwide, sometimes in the same way as some previously
had pen pals.
Social networking websites such as Facebook, Twitter, and Myspace have created new ways to
socialize and interact. Users of these sites are able to add a wide variety of information to pages, to
pursue common interests, and to connect with others. It is also possible to find existing
acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster
commercial and business connections. YouTube and Flickr specialize in users' videos and
photographs.
While social networking sites were initially for individuals only, today they are widely used by
businesses and other organizations to promote their brands, to market to their customers and to
encourage posts to "go viral". "Black hat" social media techniques are also employed by some
organizations, such as spam accounts and astroturfing.
A risk for both individuals and organizations writing posts (especially public posts) on social
networking websites, is that especially foolish or controversial posts occasionally lead to an
unexpected and possibly large-scale backlash on social media from other Internet users. This is also
a risk in relation to controversial offline behavior, if it is widely made known. The nature of this
backlash can range widely from counter-arguments and public mockery, through insults and hate
speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the
tendency of many individuals to behave more stridently or offensively online than they would in
person. A significant number of feminist women have been the target of various forms
of harassment in response to posts they have made on social media, and Twitter in particular has
been criticised in the past for not doing enough to aid victims of online abuse.[87]
For organizations, such a backlash can cause overall brand damage, especially if reported by the
media. However, this is not always the case, as any brand damage in the eyes of people with an
opposing opinion to that presented by the organization could sometimes be outweighed by
strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to
demands that others perceive as wrong-headed, that can then provoke a counter-backlash.
Some websites, such as Reddit, have rules forbidding the posting of personal information of
individuals (also known as doxxing), due to concerns about such postings leading to mobs of large
numbers of Internet users directing harassment at the specific individuals thereby identified. In
particular, the Reddit rule forbidding the posting of personal information is widely understood to imply
that all identifying photos and names must be censored in Facebook screenshots posted to Reddit.
However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any
case, like-minded people online have many other ways they can use to direct each other's attention
to public social media posts they disagree with.
Children also face dangers online such as cyberbullying and approaches by sexual predators, who
sometimes pose as children themselves. Children may also encounter material which they may find
upsetting, or material which their parents consider to be not age-appropriate. Due to naivety, they
may also post personal information about themselves online, which could put them or their families
at risk unless warned not to do so. Many parents choose to enable Internet filtering, and/or supervise
their children's online activities, in an attempt to protect their children from inappropriate material on
the Internet. The most popular social networking websites, such as Facebook and Twitter, commonly
forbid users under the age of 13. However, these policies are typically trivial to circumvent by
registering an account with a false birth date, and a significant number of children aged under 13 join
such sites anyway. Social networking sites for younger children, which claim to provide better levels
of protection for children, also exist.[88]
The Internet has been a major outlet for leisure activity since its inception, with entertaining social
experiments such as MUDs and MOOs being conducted on university servers, and humor-
related Usenet groups receiving much traffic. Today, many Internet forums have sections devoted to
games and funny videos. Over 6 million people use blogs or message boards as a means of
communication and for the sharing of ideas. The Internet pornography and online gambling industries
have taken advantage of the World Wide Web, and often provide a significant source of advertising
revenue for other websites.[89] Although many governments have attempted to restrict both industries'
use of the Internet, in general, this has failed to stop their widespread popularity.[90]
Another area of leisure activity on the Internet is multiplayer gaming.[91] This form of recreation
creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer
games. These range from MMORPG to first-person shooters, from role-playing video
games to online gambling. While online gaming has been around since the 1970s, modern modes of
online gaming began with subscription services such as GameSpy and MPlayer.[92] Non-subscribers
were limited to certain types of game play or certain games. Many people use the Internet to access
and download music, movies and other works for their enjoyment and relaxation. Free and fee-
based services exist for all of these activities, using centralized servers and distributed peer-to-peer
technologies. Some of these sources exercise more care with respect to the original artists'
copyrights than others.
Internet usage has been correlated to users' loneliness.[93] Lonely people tend to use the Internet as
an outlet for their feelings and to share their stories with others, such as in the "I am lonely will
anyone speak to me" thread.
Cybersectarianism is a new organizational form which involves: "highly dispersed small groups of
practitioners that may remain largely anonymous within the larger social context and operate in
relative secrecy, while still linked remotely to a larger network of believers who share a set of
practices and texts, and often a common devotion to a particular leader. Overseas supporters
provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance,
and share information on the internal situation with outsiders. Collectively, members and
practitioners of such sects construct viable virtual communities of faith, exchanging personal
testimonies and engaging in the collective study via email, on-line chat rooms, and web-based
message boards."[94] In particular, the British government has raised concerns about the prospect of
young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being
persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially
committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.
Cyberslacking can become a drain on corporate resources; the average UK employee spent 57
minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business
Services.[95] Internet addiction disorder is excessive computer use that interferes with daily
life.[96] Psychologist Nicholas Carr believes that Internet use has other effects on individuals, for
instance improving skills of scan-reading while interfering with the deep thinking that leads to true
creativity.[97]
Electronic business
Electronic business (e-business) encompasses business processes spanning the entire value chain:
purchasing, supply chain management, marketing, sales, customer service, and business
relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance
relationships with clients and partners.
According to International Data Corporation, the size of worldwide e-commerce, when global
business-to-business and -consumer transactions are combined, equated to $16 trillion for 2013. A
report by Oxford Economics adds those two together to estimate the total size of the digital
economy at $20.4 trillion, equivalent to roughly 13.8% of global sales.[98]
While much has been written of the economic advantages of Internet-enabled commerce, there is
also evidence that some aspects of the Internet such as maps and location-aware services may
serve to reinforce economic inequality and the digital divide.[99] Electronic commerce may be
responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting
in increases in income inequality.[100][101][102]
Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has
recently focused on the economic effects of consolidation from Internet businesses. Keen cites a
2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for
every $10 million in sales while Amazon employs only 14. Similarly, the 700-employee room rental
start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Hotels, which
employs 152,000 people. And car-sharing Internet startup Uber employs 1,000 full-time employees
and is valued at $18.2 billion, about the same valuation as Avis and Hertz combined, which together
employ almost 60,000 people.[103]
Telecommuting
Telecommuting is the performance of work within a traditional worker-employer relationship when it is
facilitated by tools such as groupware, virtual private networks, conference
calling, videoconferencing, and voice over IP (VoIP) so that work may be performed from any
location, most conveniently the worker's home. It can be efficient and useful for companies as it
allows workers to communicate over long distances, saving significant amounts of travel time and
cost. As broadband Internet connections become commonplace, more workers have adequate
bandwidth at home to use these tools to link their home to their corporate intranet and internal
communication networks.
Crowdsourcing
The Internet provides a particularly good venue for crowdsourcing, because individuals tend to be
more open in web-based projects where they are not being physically judged or scrutinized and thus
can feel more comfortable sharing.[citation needed]
Collaborative publishing
Wikis have also been used in the academic community for sharing and dissemination of information
across institutional and international boundaries.[104] In those settings, they have been found useful
for collaboration on grant writing, strategic planning, departmental documentation, and committee
work.[105] The United States Patent and Trademark Office uses a wiki to allow the public to
collaborate on finding prior art relevant to examination of pending patent applications. Queens, New
York has used a wiki to allow citizens to collaborate on the design and planning of a local park.[106]
The English Wikipedia has the largest user base among wikis on the World Wide Web[107] and ranks
in the top 10 among all Web sites in terms of traffic.[108]
The Internet has achieved new relevance as a political tool. The presidential campaign of Howard
Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet.
Many political groups use the Internet to achieve a new method of organizing to carry out their
mission, giving rise to Internet activism, most notably practiced by rebels in the Arab
Spring.[109][110]
The New York Times suggested that social media websites, such as Facebook and Twitter, helped
people organize the political revolutions in Egypt, by helping activists organize protests,
communicate grievances, and disseminate information.[111]
The potential of the Internet as a civic tool of communicative power was explored by Simon R. B.
Berdal in his 2004 thesis:
As the globally evolving Internet provides ever new access points to virtual discourse forums, it also
promotes new civic relations and associations within which communicative power may flow and
accumulate. Thus, traditionally … national-embedded peripheries get entangled into greater,
international peripheries, with stronger combined powers... The Internet, as a consequence,
changes the topology of the "centre-periphery" model, by stimulating conventional peripheries to
interlink into "super-periphery" structures, which enclose and "besiege" several centres at once.[112]
Berdal, therefore, extends the Habermasian notion of the Public sphere to the Internet, and
underlines the inherent global and civic nature that interwoven Internet technologies provide. To limit
the growing civic potential of the Internet, Berdal also notes how "self-protective measures" are put
in place by those threatened by it:
If we consider China’s attempts to filter "unsuitable material" from the Internet, most of us would
agree that this resembles a self-protective measure by the system against the growing civic
potentials of the Internet. Nevertheless, both types represent limitations to "peripheral capacities".
Thus, the Chinese government tries to prevent communicative power to build up and unleash (as
the 1989 Tiananmen Square uprising suggests, the government may find it wise to install "upstream
measures"). Even though limited, the Internet is proving to be an empowering tool also to the
Chinese periphery: Analysts believe that Internet petitions have influenced policy implementation in
favour of the public’s online-articulated will …[112]
Incidents of politically motivated Internet censorship have now been recorded in many countries,
including western democracies.
Philanthropy
The spread of low-cost Internet access in developing countries has opened up new possibilities
for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects
for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to
direct funds to individual projects of their choice.
A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable
purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish
individual loan profiles for funding. Kiva raises funds for local
intermediary microfinance organizations which post stories and updates on behalf of the borrowers.
Lenders can contribute as little as $25 to loans of their choice, and receive their money back as
borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed
before being funded by lenders and borrowers do not communicate with lenders themselves.[113][114]
However, the recent spread of low-cost Internet access in developing countries has made genuine
international person-to-person philanthropy increasingly feasible. In 2009, the US-based
nonprofit Zidisha tapped into this trend to offer the first person-to-person microfinance platform to
link lenders and borrowers across international borders without intermediaries. Members can fund
loans for as little as a dollar, which the borrowers then use to develop business activities that
improve their families' incomes while repaying loans to the members with interest. Borrowers access
the Internet via public cybercafes, donated laptops in village schools, and even smart phones, then
create their own profile pages through which they share photos and information about themselves
and their businesses. As they repay their loans, borrowers continue to share updates and dialogue
with lenders via their profile pages. This direct web-based connection allows members themselves to
take on many of the communication and recording tasks traditionally performed by local
organizations, bypassing geographic barriers and dramatically reducing the cost of microfinance
services to the entrepreneurs.[115]
Security
Main article: Internet security
Internet resources, hardware, and software components are the target of malicious attempts to gain
unauthorized control to cause interruptions or access private information. Such attempts
include computer viruses which copy with the help of humans, computer worms which copy
themselves automatically, denial of service attacks, ransomware, botnets, and spyware that reports
on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists
have also speculated about the possibilities of cyber warfare using similar methods on a large
scale.[citation needed]
Surveillance
Main article: Computer and network surveillance
The vast majority of computer surveillance involves the monitoring of data and traffic on the
Internet.[116] In the United States for example, under the Communications Assistance For Law
Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant
messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law
enforcement agencies.[117][118][119]
Packet capture is the monitoring of data traffic on a computer network. Computers communicate
over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small
chunks called "packets", which are routed through a network of computers, until they reach their
destination, where they are assembled back into a complete "message" again. A packet capture
appliance intercepts these packets as they are traveling through the network, in order to examine
their contents using other programs. A packet capture is an information gathering tool, but not
an analysis tool. That is, it gathers "messages" but it does not analyze them and figure out what they
mean. Other programs are needed to perform traffic analysis and sift through intercepted data
looking for important/useful information. Under the Communications Assistance For Law
Enforcement Act, all U.S. telecommunications providers are required to install packet sniffing
technology to allow Federal law enforcement and intelligence agencies to intercept all of their
customers' broadband Internet and voice over Internet protocol (VoIP) traffic.[120]
The large amount of data gathered from packet capturing requires surveillance software that filters
and reports relevant information, such as the use of certain words or phrases, the access of certain
types of web sites, or communicating via email or chat with certain parties.[121] Agencies, such as
the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to
develop, purchase, implement, and operate systems for interception and analysis of data.[122]
Similar systems are operated by Iranian secret police to identify and suppress dissidents. The
required hardware and software was allegedly installed by German Siemens AG and
Finnish Nokia.[123]
Censorship
[Map: Internet censorship and surveillance by country, classified as pervasive, substantial,
selective, changing situation, little or none, or not classified.[124][125][126][127]]
Some governments, such as those of Burma, Iran, North Korea, mainland China, Saudi
Arabia, and the United Arab Emirates, restrict access to content on the Internet within their territories,
especially to political and religious content, with domain name and keyword filters.[128]
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed
to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to
contain only known child pornography sites, the content of the list is secret.[129] Many countries,
including the United States, have enacted laws against the possession or distribution of certain
material, such as child pornography, via the Internet, but do not mandate filter software. Many free or
commercially available software programs, called content-control software, are available to users to
block offensive websites on individual computers or networks, in order to limit access by children to
pornographic material or depictions of violence.
Performance
As the Internet is a heterogeneous network, the physical characteristics, including for example
the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on
its large-scale organization.[citation needed]
Outages
An Internet blackout or outage can be caused by local signalling interruptions. Disruptions
of submarine communications cables may cause blackouts or slowdowns to large areas, such as in
the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small
number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging
for scrap metal severed most connectivity for the nation of Armenia.[130] Internet blackouts affecting
almost entire countries can be achieved by governments as a form of Internet censorship, as in the
blockage of the Internet in Egypt, whereby approximately 93%[131] of networks were without access in
2011 in an attempt to stop mobilization for anti-government protests.[132]
Energy use
In 2011, researchers estimated the energy used by the Internet to be between 170 and 307 GW,
less than two percent of the energy used by humanity. This estimate included the energy needed to
build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and
100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi
transmitters and cloud storage devices use when transmitting Internet traffic.[133][134]
Applications of the Internet
The Internet is regarded as one of the biggest inventions, and it has a large number of uses:
1. Communication
2. Job searches
3. Travel
4. Entertainment
5. Shopping
6. Research
7. Business use of the Internet. Different ways in which the Internet can be used for business
are:
• Information about a product can be provided online to the customer.
• Middlemen can be eliminated, giving the business direct contact with the customer.
• Information can be provided to investors by publishing the company's background and financial
information on its website.
1. Communication:
Email is an important communications service available on the Internet. The concept of sending
electronic text messages between parties in a way analogous to mailing letters or memos predates
the creation of the Internet. Pictures, documents and other files are sent as email attachments.
Emails can be cc-ed to multiple email addresses.
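As a sketch of how this works in practice, the message below is composed and handed to a mail server using Python's standard email and smtplib modules; all addresses, the server name, and the credentials are placeholders, not real accounts.

```python
import smtplib
from email.message import EmailMessage

def build_message(attachment: bytes) -> EmailMessage:
    """Compose a message with an attachment and multiple Cc addresses."""
    msg = EmailMessage()
    msg["From"] = "alice@example.com"                   # placeholder addresses
    msg["To"] = "bob@example.com"
    msg["Cc"] = "carol@example.com, dave@example.com"   # cc-ed to several people
    msg["Subject"] = "Quarterly report"
    msg.set_content("Report attached.")
    # Pictures, documents and other files travel as MIME attachments:
    msg.add_attachment(attachment, maintype="application",
                       subtype="pdf", filename="report.pdf")
    return msg

def send(msg: EmailMessage) -> None:
    """Hand the message to an SMTP server, which relays it onward."""
    with smtplib.SMTP("smtp.example.com", 587) as server:  # placeholder host
        server.starttls()                    # encrypt the session
        server.login("alice", "password")    # placeholder credentials
        server.send_message(msg)             # delivered to To and Cc alike

msg = build_message(b"%PDF-1.4 (dummy bytes standing in for a real file)")
# send(msg)  # uncomment with a reachable SMTP server
```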
Internet telephony is another common communications service made possible by the creation of
the Internet. VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies
all Internet communication.
2. Data Transfer:
File sharing is an example of transferring large amounts of data across the Internet. A computer
file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a
website or FTP server for easy download by others. Some examples of such services are:
FTP (File Transfer Protocol)
TELNET (remote computing)
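A minimal sketch of the FTP side of this, using Python's standard ftplib module; the host name, credentials, and file names are hypothetical placeholders.

```python
from ftplib import FTP

def upload(host: str, user: str, password: str,
           local_path: str, remote_name: str) -> None:
    """Upload a local file to an FTP server for easy download by others."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f)    # binary-mode upload

def download(host: str, filename: str, local_path: str) -> None:
    """Fetch a file anonymously, as many public FTP archives allow."""
    with FTP(host) as ftp:
        ftp.login()                                     # anonymous login
        with open(local_path, "wb") as f:
            ftp.retrbinary(f"RETR {filename}", f.write)

# upload("ftp.example.com", "user", "pass", "photo.jpg", "photo.jpg")
# download("ftp.example.com", "readme.txt", "readme.txt")
```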
3. Information:
Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but
the two terms are not synonymous. The World Wide Web is a global set of documents, images
and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource
Identifiers (URIs). Hypertext Transfer Protocol (HTTP) is the main access protocol of the World
Wide Web, but it is only one of the hundreds of communication protocols used on the Internet.
The Internet is an interconnection of a large number of heterogeneous computer networks all
over the world that can share information back and forth. These interconnected networks
exchange information by using the same standards and protocols.
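The structure of a URI, mentioned above, can be illustrated with Python's standard urllib.parse module; the address below uses the documentation-reserved example.org domain.

```python
from urllib.parse import urlparse

# Dissect a hypothetical Web address into the parts an HTTP client uses:
uri = "http://www.example.org/wiki/Main_Page?action=view"
parts = urlparse(uri)

print(parts.scheme)   # access protocol, here HTTP
print(parts.netloc)   # host to contact
print(parts.path)     # resource on that host
print(parts.query)    # optional parameters
```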
Advantages of the internet
1. Unlimited Communication
The Internet has made it easy for people to communicate with others because it is cheap and
convenient. The only costs incurred are those paid to the Internet service provider. If you want
to talk to someone who is in another part of the globe, you can just fire up Skype or any other
communication app and hold a video chat.
Services such as Skype have helped people in geographically distant countries to
interact and share ideas. As such, people are able to share their thoughts and views on matters
affecting the globe. The Internet acts as a common global platform where people explore
ideologies and cultures without limitation.
You can also get the latest news, breakthroughs in all fields including
medicine and even research publications at the click of a button. The Internet
is basically a globally accessible repository of knowledge, and the best part is
everyone gets to chip in.
Easy Sharing
Thanks to the Internet, sharing information is fast and seamless. If you want to
tell your 30 friends about your latest promotion, you can do so in an instant.
You can use social media sites such as Facebook or an IM app. They will all
get the news at the same time. You can also share music, videos and any
other file.
Online Services and E-commerce
Today it is possible to carry out financial transactions online. You can transfer
funds, pay taxes and utility bills or book movie tickets over the Internet in the
comfort of your office or home.
The growth of e-commerce has made it possible for people to shop for most
things online. This has seen the emergence of retail giants such as Amazon,
Ebay and Alibaba. They sell consumer goods globally. Such a feat was
virtually impossible before the Internet.
Entertainment
This is one of the major reasons why many people enjoy surfing the Internet –
entertainment. You can watch movies, listen to music, read your favorite
celebrity gossip columns and play games over the Internet. The Internet has
become a mammoth amusement park that never closes.
Disadvantages of internet
Spam Mail
Spamming is the sending of unwanted and useless emails to random people.
These emails clutter the recipient's inbox needlessly. They are illegal in many
jurisdictions and cause frustration because they make it hard for people to find
important messages in their email accounts. Bots are used to bombard your inbox
with endless advertisements, which get mixed up with important emails.
Luckily, most email service providers have a security system in place to
prevent spam emails from going to your inbox. All emails that are deemed
suspicious get their email ID or IP address blocked or sent to the Spam folder.
Virus, Trojan & Other Malware
These are malicious programs that plague the Internet time and again. They
attack a computer with the sole intent of causing harm. They can make the
computer malfunction or even crash, which can be very costly, especially if you lose
important data. Worse yet, you can easily fall victim to malicious
software by clicking on a link on the Internet that appears genuine. Internet
viruses can be categorized into three types: those that harm your executable
boot files and system, those that destroy a specific file, and
those that keep changing things in your computer, like Word files. You can
protect yourself by installing a reliable anti-virus program.
Leakage of Private Information
The fact that the Internet has become a market place has also seen a rise in
fraud cases. Credit/debit card details are particularly vulnerable. This calls for
extreme caution when transacting online. Make sure to use a reliable payment
processor instead of sending your details directly to an individual or business.
Addiction to Internet
Just like everything else, people also get addicted to the Internet. This may
sound bizarre, but some people spend more than their fair amount of time on
the Internet. This affects their social interactions a great deal. Internet
addiction has been linked to obesity and has, in some
cases, led to conditions like carpal tunnel syndrome. With some help,
people addicted to the Internet can overcome this challenge.
Kids Exposed to Adults-Only Content
The fact that the Internet has all the information you could ever need is both a good
thing and a bad thing. This is because it contains age-inappropriate content
like pornography. Unfortunately, such content can be accessed by children as
young as ten. All parents and guardians can do about it is block harmful
sites to keep their children safe. Nevertheless, this is not a foolproof strategy,
as children can still access the Internet from other devices.
There are many ways a personal electronic device can connect to the internet. They all use
different hardware and each has a range of connection speeds. As technology changes, faster
internet connections are needed to handle those changes. I thought it would be interesting to list
some of the different types of internet connections that are available for home and personal use,
paired with their average speeds.
DSL. DSL stands for Digital Subscriber Line. It is an internet connection that is always “on”.
It uses the existing phone line but carries data in a separate frequency band, so your phone is
not tied up when your computer is connected, and there is no need to dial a phone number to
connect. DSL uses a router to transport data, and the range of connection speeds, depending
on the service offered, is between 128 Kbps and 8 Mbps.
Cable. Cable provides an internet connection through a cable modem and operates over cable
TV lines. There are different speeds depending on whether you are uploading or downloading
data transmissions. Since the coax cable provides much greater bandwidth than dial-up or DSL
telephone lines, you can get faster access. Cable speeds range from 512 Kbps to 20 Mbps.
Wireless. Wireless, or Wi-Fi, as the name suggests, does not use telephone lines or cables to
connect to the internet. Instead, it uses radio frequency. Wireless is also an always on
connection and it can be accessed from just about anywhere. Wireless networks are growing in
coverage areas by the minute, so when I say access from just about anywhere, I really mean it.
Speeds will vary, and the range is between 5 Mbps and 20 Mbps.
Satellite. Satellite accesses the internet via a satellite in Earth’s orbit. The enormous distance
that a signal travels from Earth to the satellite and back again results in a delayed connection
compared to cable and DSL. Satellite connection speeds are around 512 Kbps to 2.0 Mbps.
Cellular. Cellular technology provides wireless Internet access through cell phones. The speeds
vary depending on the provider, but the most common are 3G and 4G speeds. 3G describes a
third-generation cellular network obtaining mobile speeds of around 2.0 Mbps. 4G
is the fourth generation of cellular wireless standards. The goal of 4G is to achieve peak mobile
speeds of 100 Mbps but the reality is about 21 Mbps currently.
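Transfer time is simply the file size in bits divided by the connection rate, so the speeds listed above can be compared directly. The figures below are representative values picked from the ranges quoted; a quick sketch:

```python
# How long would a 50 MB file take at typical rates for each connection
# type? Speeds are in megabits per second; one byte is 8 bits.
FILE_MB = 50
speeds_mbps = {
    "Dial-up (56K)": 0.056,
    "DSL (8 Mbps)": 8,
    "Cable (20 Mbps)": 20,
    "Satellite (2 Mbps)": 2,
    "4G (21 Mbps)": 21,
}

for name, mbps in speeds_mbps.items():
    seconds = FILE_MB * 8 / mbps   # megabits to move / megabits per second
    print(f"{name:18s} {seconds:9.1f} s")
```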
Wireless
Radio frequency bands are used in place of telephone or cable networks. One of the greatest advantages
of wireless Internet connections is the “always-on” connection that can be accessed from any location that
falls within network coverage. Wireless connections are made possible through the use of a modem,
which picks up Internet signals and sends them to other devices.
Mobile
Many cell phone and smartphone providers offer voice plans with Internet access.
Mobile Internet connections provide good speeds and allow you to access the Internet
on the go.
Hotspots
Hotspots are sites that offer Internet access over a wireless local area network (WLAN)
by way of a router that then connects to an Internet service provider. Hotspots
utilize Wi-Fi technology, which allows electronic devices to connect to the Internet or
exchange data wirelessly through radio waves. Hotspots can be phone-based or free-
standing, commercial or free to the public.
Dial-Up
Dial-up connections require users to link their phone line to a computer in order to
access the Internet. This particular type of connection—also referred to as analog—
does not permit users to make or receive phone calls through their home phone
service while using the Internet.
Broadband
DSL
DSL, which stands for Digital Subscriber Line, uses the existing 2-wire copper telephone
line connected to one’s home, so service is delivered at the same time as landline
telephone service. Customers can still place calls while surfing the Internet.
Cable
Satellite
In certain areas where broadband connection is not yet offered, a satellite Internet
option may be available. Similar to wireless access, satellite connection utilizes a
modem.
ISDN
ISDN (Integrated Services Digital Network) allows users to send data, voice and video
content over digital telephone lines or standard telephone wires. The installation of an
ISDN adapter is required at both ends of the transmission—on the part of the user as
well as the Internet access provider.
There are quite a few other Internet connection options available, including T-1 lines, T-
3 lines, OC (Optical Carrier) and other DSL technologies.
Internet protocol suite
Application layer
BGP
DHCP
DNS
FTP
HTTP
IMAP
LDAP
MGCP
NNTP
NTP
POP
ONC/RPC
RTP
RTSP
RIP
SIP
SMTP
SNMP
SSH
Telnet
TLS/SSL
XMPP
Transport layer
TCP
UDP
DCCP
SCTP
RSVP
Internet layer
IP
IPv4
IPv6
ICMP
ICMPv6
ECN
IGMP
IPsec
Link layer
ARP
NDP
OSPF
Tunnels
L2TP
PPP
MAC
Ethernet
DSL
ISDN
FDDI
ISDN
Integrated Services for Digital Network (ISDN) is a set of communication standards for
simultaneous digital transmission of voice, video, data, and other network services over the
traditional circuits of the public switched telephone network. It was first defined in 1988 in
the CCITT red book.[1] Prior to ISDN, the telephone system was viewed as a way to transport voice,
with some special services available for data. The key feature of ISDN is that it integrates speech
and data on the same lines, adding features that were not available in the classic telephone system.
The ISDN standards define several kinds of access interfaces, such as Basic Rate
Interface (BRI), Primary Rate Interface(PRI), Narrowband ISDN (N-ISDN), and Broadband ISDN (B-
ISDN).
ISDN is a circuit-switched telephone network system, which also provides access to packet switched
networks, designed to allow digital transmission of voice and data over ordinary telephone copper
wires, resulting in potentially better voice quality than an analog phone can provide. It offers circuit-
switched connections (for either voice or data), and packet-switched connections (for data), in
increments of 64 kilobit/s. In some countries, ISDN found major market application for Internet
access, in which ISDN typically provides a maximum of 128 kbit/s bandwidth in both upstream and
downstream directions. Channel bonding can achieve a greater data rate; typically the ISDN B-
channels of three or four BRIs (six to eight 64 kbit/s channels) are bonded.
ISDN is employed as the network, data-link and physical layers in the context of the OSI model, or
could be considered a suite of digital services existing on layers 1, 2, and 3 of the OSI model. In
common use, ISDN is often limited to Q.931 and related protocols, which are a set
of signaling protocols establishing and breaking circuit-switched connections, and for
advanced calling features for the user. They were introduced in 1986.[2]
In a videoconference, ISDN provides simultaneous voice, video, and text transmission between
individual desktop videoconferencing systems and group (room) videoconferencing systems.
ISDN elements
Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections, in
any combination of data, voice, video, and fax, over a single line. Multiple devices can be attached to
the line, and used as needed. That means an ISDN line can take care of most people's complete
communications needs (apart from broadband Internet access and entertainment television) at a
much higher transmission rate, without forcing the purchase of multiple analog phone lines. It also
refers to integrated switching and transmission[3] in that telephone switching and carrier
wave transmission are integrated rather than separate as in earlier technology.
The entry level interface to ISDN is the Basic Rate Interface (BRI), a 128 kbit/s service delivered
over a pair of standard telephone copper wires.[4] The 144 kbit/s overall payload rate is broken down into two
64 kbit/s bearer channels ('B' channels) and one 16 kbit/s signaling channel ('D' channel or data
channel). This is sometimes referred to as 2B+D.[5]
The U interface is a two-wire interface between the exchange and a network terminating unit,
which is usually the demarcation point in non-North American networks.
The T interface is a serial interface between a computing device and a terminal adapter, which is
the digital equivalent of a modem.
The S interface is a four-wire bus that ISDN consumer devices plug into; the S & T reference
points are commonly implemented as a single interface labeled 'S/T' on a Network Termination
1 (NT1).
The R interface defines the point between a non-ISDN device and a terminal adapter (TA) which
provides translation to and from such a device.
BRI-ISDN is very popular in Europe but is much less common in North America. It is also common in
Japan — where it is known as INS64.[6][7]
The other ISDN access available is the Primary Rate Interface (PRI), which is carried over
an E1 (2048 kbit/s) in most parts of the world. An E1 is 30 'B' channels of 64 kbit/s, one 'D' channel
of 64 kbit/s and a timing and alarm channel of 64 kbit/s. This is often referred to as 30B+D.[8]
In North America PRI service is delivered on one or more T1 carriers (often referred to as 23B+D) of
1544 kbit/s (24 channels). A PRI has 23 'B' channels and 1 'D' channel for signalling (Japan uses a
circuit called a J1, which is similar to a T1). Interchangeably, but incorrectly, a PRI is referred to as
T1 because it uses the T1 carrier format. A true T1 (commonly called "Analog T1" to avoid
confusion) uses 24 channels of 64 kbit/s with in-band signaling. Each channel uses 56 kbit/s for data and
voice and 8 kbit/s for signaling and messaging. PRI uses out-of-band signaling, which provides the 23 B
channels with a clear 64 kbit/s for voice and data and one 64 kbit/s 'D' channel for signaling and messaging.
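The channel arithmetic above is easy to verify: every B and D channel is 64 kbit/s except the BRI's 16 kbit/s D channel. A quick check:

```python
B = 64  # kbit/s per bearer (B) channel

bri = 2 * B + 16           # BRI: 2B + one 16 kbit/s D channel ("2B+D")
e1_pri = 30 * B + 64 + 64  # E1 PRI: 30B + 64 kbit/s D + timing/alarm channel
t1_pri = 23 * B + 64       # North American PRI payload: 23B + 64 kbit/s D

print(bri)     # 144 kbit/s
print(e1_pri)  # 2048 kbit/s
print(t1_pri)  # 1536 kbit/s, carried on a 1544 kbit/s T1 (8 kbit/s is framing)
```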
In North America, Non-Facility Associated Signalling allows two or more PRIs to be controlled by a
single D channel, and is sometimes called "23B+D + n*24B". D-channel backup allows for a second
D channel in case the primary fails. NFAS is commonly used on a T3.
PRI-ISDN is popular throughout the world, especially for connecting private branch exchanges to the
public network.
Even though many network professionals use the term "ISDN" to refer to the lower-bandwidth BRI
circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are
commonplace.
Bearer channels
The bearer channel (B) is a standard 64 kbit/s voice channel of 8 bits sampled at 8 kHz
with G.711 encoding. B-Channels can also be used to carry data, since they are nothing more than
digital channels.
Most B channels can carry a 64 kbit/s signal, but some were limited to 56 kbit/s because they traveled
over robbed-bit signaling (RBS) lines. This was commonplace in the 20th century, but has since become less so.
Signaling channel
The signaling channel (D) uses Q.931 for signaling with the other side of the link.
X.25
X.25 can be carried over the B or D channels of a BRI line, and over the B channels of a PRI line.
X.25 over the D channel is used at many point-of-sale (credit card) terminals because it eliminates
the modem setup, and because it connects to the central system over a B channel, thereby
eliminating the need for modems and making much better use of the central system's telephone
lines.
X.25 was also part of an ISDN protocol called "Always On/Dynamic ISDN", or AO/DI. This allowed a
user to have a constant multi-link PPP connection to the internet over X.25 on the D channel, and
brought up one or two B channels as needed.
Frame Relay
In theory, Frame Relay can operate over the D channel of BRIs and PRIs, but it is seldom, if ever,
used.
ISDN is also used as a smart-network technology intended to add new services to the public
switched telephone network (PSTN) by giving users direct access to end-to-end circuit-switched
digital services and as a backup or failsafe circuit solution for critical use data circuits.
ISDN and broadcast industry
ISDN is used heavily by the broadcast industry as a reliable way of switching low-latency, high-
quality, long-distance audio circuits. In conjunction with an appropriate codec using MPEG or various
manufacturers' proprietary algorithms, an ISDN BRI can be used to send stereo bi-directional audio
coded at 128 kbit/s with 20 Hz – 20 kHz audio bandwidth, although commonly the G.722 algorithm is
used with a single 64 kbit/s B channel to send much lower latency mono audio at the expense of
audio quality. Where very high quality audio is required multiple ISDN BRIs can be used in parallel to
provide a higher bandwidth circuit switched connection. BBC Radio 3 commonly makes use of three
ISDN BRIs to carry a 320 kbit/s audio stream for live outside broadcasts. ISDN BRI services are used
to link remote studios, sports grounds and outside broadcasts into the main broadcast studio. ISDN
via satellite is used by field reporters around the world. It is also common to use ISDN for the return
audio links to remote satellite broadcast vehicles.
In many countries, such as the UK and Australia, ISDN has displaced the older technology of
equalised analogue landlines, with these circuits being phased out by telecommunications providers.
IP-based streaming codecs are starting to gain a foothold in the broadcast sector, using broadband
internet to connect remote studios.[9] However, reliability and latency are crucially important for
broadcasters and the quality of service offered by ISDN has not yet been matched by packet
switched alternatives.
Asymmetric digital subscriber line (ADSL) is a type of digital subscriber line (DSL) technology, a
data communications technology that enables faster data transmission over copper telephone
lines than a conventional voiceband modem can provide. ADSL differs from the less
common symmetric digital subscriber line (SDSL). Bandwidth (and bit rate) is greater toward the
customer premises (known as downstream) than the reverse (known as upstream). This is why it is
called asymmetric. Providers usually market ADSL as a service for consumers to receive Internet
access in a relatively passive mode: able to use the higher speed direction for the download from the
Internet but not needing to run servers that would require high speed in the other direction.
Overview
ADSL works by utilizing frequencies that are not used by a voice telephone call.[1] A splitter, or DSL
filter, allows a single telephone connection to be used for both ADSL service and voice calls at the
same time. ADSL can generally only be distributed over short distances from the telephone
exchange (the last mile), typically less than 4 kilometres (2 mi),[2] but has been known to exceed 8
kilometres (5 mi) if the originally laid wire gauge allows for further distribution.
At the telephone exchange the line generally terminates at a digital subscriber line access
multiplexer (DSLAM) where another frequency splitter separates the voice band signal for the
conventional phone network. Data carried by the ADSL are typically routed over the telephone
company's data network and eventually reach a conventional Internet Protocol network.
There are both technical and marketing reasons why ADSL is in many places the most common type
offered to home users. On the technical side, there is likely to be more crosstalk from other circuits
at the DSLAM end (where the wires from many local loops are close to each other) than at the
customer premises. Thus the upload signal is weakest at the noisiest part of the local loop, while the
download signal is strongest at the noisiest part of the local loop. It therefore makes technical sense
to have the DSLAM transmit at a higher bit rate than does the modem on the customer end. Since
the typical home user in fact does prefer a higher download speed, the telephone companies chose
to make a virtue out of necessity, hence ADSL.
The marketing reasons for an asymmetric connection are that, firstly, most uses of internet traffic will
require less data to be uploaded than downloaded. For example, in normal web browsing a user will
visit a number of web sites and will need to download the data that comprises the web pages from
the site, images, text, sound files etc. but they will only upload a small amount of data, as the only
uploaded data is that used for the purpose of verifying the receipt of the downloaded data or any
data inputted by the user into forms etc. This provides a justification for internet service providers to
offer a more expensive service aimed at commercial users who host websites, and who therefore
need a service which allows for as much data to be uploaded as downloaded. File sharing
applications are an obvious exception to this situation. Secondly, internet service providers, seeking
to avoid overloading of their backbone connections, have traditionally tried to limit uses such as file
sharing which generate a lot of uploads.
Operation
[Figure: frequency plan for ADSL Annex A. The red area is the frequency range used by normal voice
telephony (PSTN); the green (upstream) and blue (downstream) areas are used for ADSL.]
With commonly deployed ADSL over POTS (Annex A), the band from 26.075 kHz to 137.825 kHz is
used for upstream communication, while 138 kHz – 1104 kHz is used for downstream
communication. Under the usual DMT scheme, each of these is further divided into smaller
frequency channels of 4.3125 kHz. These frequency channels are sometimes termed bins. During
initial training to optimize transmission quality and speed, the ADSL modem tests each of the bins to
determine the signal-to-noise ratio at each bin's frequency. Distance from the telephone exchange,
cable characteristics, interference from AM radio stations, and local interference and electrical noise
at the modem's location can adversely affect the signal-to-noise ratio at particular frequencies. Bins
for frequencies exhibiting a reduced signal-to-noise ratio will be used at a lower throughput rate or
not at all; this reduces the maximum link capacity but allows the modem to maintain an adequate
connection. The DSL modem will make a plan on how to exploit each of the bins, sometimes termed
"bits per bin" allocation. Those bins that have a good signal-to-noise ratio (SNR) will be chosen to
transmit signals chosen from a greater number of possible encoded values (this range of possibilities
equating to more bits of data sent) in each main clock cycle. The number of possibilities must not be
so large that the receiver might incorrectly decode which one was intended in the presence of noise.
Noisy bins may only be required to carry as few as two bits, a choice from only one of four possible
patterns, or only one bit per bin in the case of ADSL2+, and very noisy bins are not used at all. If the
pattern of noise versus frequencies heard in the bins changes, the DSL modem can alter the bits-
per-bin allocations, in a process called "bitswap", where bins that have become more noisy are only
required to carry fewer bits and other channels will be chosen to be given a higher burden.
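The band edges quoted above, together with the 4.3125 kHz bin width, determine how many bins are available in each direction; a quick check:

```python
# Number of DMT bins implied by the Annex A frequency plan.
BIN_KHZ = 4.3125

upstream_bins = (137.825 - 26.075) / BIN_KHZ   # upstream band edges in kHz
downstream_bins = (1104 - 138) / BIN_KHZ       # downstream band edges in kHz

print(round(upstream_bins))    # about 26 upstream bins
print(round(downstream_bins))  # 224 downstream bins
```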
The data transfer capacity the DSL modem therefore reports is determined by the total of the bits-
per-bin allocations of all the bins combined. Higher signal-to-noise ratios and more bins being in use
gives a higher total link capacity, while lower signal-to-noise ratios or fewer bins being used gives a
low link capacity. The total maximum capacity derived from summing the bits-per-bin is reported by
DSL modems and is sometimes termed sync rate. The sync rate is somewhat misleading, as the true
maximum link capacity for user data transfer will be significantly lower, because extra data termed
protocol overhead are also transmitted; for PPPoA connections, usable rates of around 84-87 percent
of the sync rate, at most, are common. In addition, some ISPs will have traffic policies that limit
maximum transfer rates further in the networks beyond the exchange, and traffic congestion on the
Internet, heavy loading on servers and slowness or inefficiency in customers' computers may all
contribute to reductions below the maximum attainable. When a wireless access point is used, low
or unstable wireless signal quality can also cause reduction or fluctuation of actual speed.
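As a toy illustration of the bits-per-bin allocation and the resulting sync rate, the sketch below assigns a bit load to each bin from its measured SNR using the gap approximation. The SNR figures, the 9.8 dB gap value, and the 4 kHz symbol rate are illustrative assumptions, not taken from any ADSL standard; real modems use more elaborate loading algorithms.

```python
import math

def bits_per_bin(snr_db: float, gamma_db: float = 9.8, max_bits: int = 15) -> int:
    """Gap approximation: b = floor(log2(1 + SNR/gap)), clamped to the
    range the modem supports; bins too noisy to carry a bit go unused."""
    gap = 10 ** (gamma_db / 10)
    snr = 10 ** (snr_db / 10)
    b = int(math.log2(1 + snr / gap))
    return min(max(b, 0), max_bits)

# Hypothetical per-bin SNR measurements (dB) from the training phase:
snrs = [45, 40, 33, 21, 12, 6]
allocation = [bits_per_bin(s) for s in snrs]

# Each bin signals roughly 4000 times per second, so the sync rate is the
# summed bit allocation times 4 kbit/s:
sync_rate_kbps = sum(allocation) * 4

print(allocation)      # quieter bins carry more bits; the noisiest carries none
print(sync_rate_kbps)
```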
In fixed-rate mode, the sync rate is predefined by the operator and the DSL modem chooses a bits-
per-bin allocation that yields an approximately equal error rate in each bin.[3] In variable-rate mode,
the bits-per-bin are chosen to maximize the sync rate, subject to a tolerable error risk.[3] These
choices can either be conservative, where the modem chooses to allocate fewer bits per bin than it
possibly could, a choice which makes for a slower connection, or less conservative in which more
bits per bin are chosen, in which case there is a greater risk of error should future signal-to-noise
ratios deteriorate to the point where the bits-per-bin allocations chosen are too high to cope
with the greater noise present. This conservatism, involving a choice of using fewer bits per bin as a
safeguard against future noise increases, is reported as the signal-to-noise ratio margin or SNR
margin.
The telephone exchange can indicate a suggested SNR margin to the customer's DSL modem when
it initially connects, and the modem may make its bits-per-bin allocation plan accordingly. A high
SNR margin will mean a reduced maximum throughput, but greater reliability and stability of the
connection. A low SNR margin will mean high speeds, provided the noise level does not increase
too much; otherwise, the connection will have to be dropped and renegotiated (resynced). ADSL2+
can better accommodate such circumstances, offering a feature termed seamless rate
adaptation (SRA), which can accommodate changes in total link capacity with less disruption to
communications.
Vendors may support usage of higher frequencies as a proprietary extension to the standard.
However, this requires matching vendor-supplied equipment on both ends of the line, and will likely
result in crosstalk problems that affect other lines in the same bundle.
There is a direct relationship between the number of channels available and the throughput capacity
of the ADSL connection. The exact data capacity per channel depends on the modulation method
used.
ADSL initially existed in two versions (similar to VDSL), namely CAP and DMT. CAP was the de
facto standard for ADSL deployments up until 1996, deployed in 90 percent of ADSL installations at
the time. However, DMT was chosen for the first ITU-T ADSL standards, G.992.1 and G.992.2 (also
called G.dmt and G.lite respectively). Therefore, all modern installations of ADSL are based on the
DMT modulation scheme.
Interleaving and fastpath
ISPs (rarely, users) have the option to use interleaving of packets to counter the effects of burst
noise on the telephone line. An interleaved line has a depth, usually 8 to 64, which describes how
many Reed–Solomon codewords are accumulated before they are sent. As they can all be sent
together, their forward error correction codes can be made more resilient. Interleaving
adds latency as all the packets have to first be gathered (or replaced by empty packets) and they, of
course, all take time to transmit. 8 frame interleaving adds 5 ms round-trip-time, while 64 deep
interleaving adds 25 ms. Other possible depths are 16 and 32.
"Fastpath" connections have an interleaving depth of 1, that is, one packet is sent at a time. This
gives low latency, usually around 10 ms (lower than an interleaved connection), but the link is
extremely prone to errors, as any burst of noise can take out an entire packet and so require it all
to be retransmitted. By contrast, such a burst on a large interleaved packet blanks only part of the
packet, which can be recovered from error correction information in the rest of the packet. A
"fastpath" connection will result in extremely high latency on a poor line, as each packet will take many retries.
Installation problems
ADSL deployment on an existing plain old telephone service (POTS) telephone line presents some
problems because the DSL is within a frequency band that might interact unfavourably with existing
equipment connected to the line. It is therefore necessary to install appropriate frequency filters at
the customer's premises to avoid interference between the DSL, voice services, and any other
connections to the line (for example intruder alarms, like "RedCARE" in the UK). This is desirable for
the voice service and essential for a reliable ADSL connection.
In the early days of DSL, installation required a technician to visit the premises.
A splitter or microfilter was installed near the demarcation point, from which a dedicated data line was
installed. This way, the DSL signal is separated as close as possible to the central office and is not
attenuated inside the customer's premises. However, this procedure was costly, and also caused
problems with customers complaining about having to wait for the technician to perform the
installation. So, many DSL providers started offering a "self-install" option, in which they shipped
equipment and instructions to the customer. Instead of separating the DSL signal at the
demarcation point, the DSL signal is filtered at each telephone outlet by use of a low-pass filter for
voice and a high-pass filter for data, usually enclosed in what is known as a microfilter. This
microfilter can be plugged by an end user into any telephone jack: it does not require any rewiring at
the customer's premises.
Commonly, microfilters are only low-pass filters, so beyond them only low frequencies (voice
signals) can pass. In the data section, a microfilter is not used because digital devices that are
intended to extract data from the DSL signal will, themselves, filter out low frequencies. Voice
telephone devices will pick up the entire spectrum so high frequencies, including the ADSL signal,
will be "heard" as noise in telephone terminals, and will affect and often degrade the service in fax,
dataphones and modems. From the point of view of DSL devices, any pickup of their signal by
POTS devices means degradation of the DSL signal reaching those devices, and this is the
central reason why these filters are required.
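The low-pass behaviour of a microfilter can be illustrated with the standard first-order RC cutoff formula. The component values below are purely illustrative (not taken from any real microfilter design); the point is only that a cutoff can sit above the roughly 4 kHz voice band yet well below the ADSL band, which starts around 25 kHz:

```python
import math

# First-order RC low-pass filter cutoff frequency: f_c = 1 / (2*pi*R*C).
# R and C values are hypothetical, chosen so the cutoff lands between
# the voice band (below ~4 kHz) and the ADSL band (above ~25 kHz),
# so voice passes through while the DSL signal is attenuated.
def cutoff_hz(r_ohms, c_farads):
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# e.g. 3.9 kOhm with 4.7 nF gives a cutoff of roughly 8.7 kHz
fc = cutoff_hz(3.9e3, 4.7e-9)
```

Real microfilters are higher-order passive networks, but the same principle applies: frequencies above the cutoff are increasingly attenuated before reaching the telephone.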
A side effect of the move to the self-install model is that the DSL signal can be degraded, especially
if more than 5 voiceband (that is, POTS telephone-like) devices are connected to the line. Once a
line has had DSL enabled, the DSL signal is present on all telephone wiring in the building,
causing attenuation and echo. A way to circumvent this is to go back to the original model, and
install one filter upstream from all telephone jacks in the building, except for the jack to which the
DSL modem will be connected. Since this requires wiring changes by the customer, and may not
work on some household telephone wiring, it is rarely done. It is usually much easier to install filters
at each telephone jack that is in use.
DSL signals may be degraded by older telephone lines, surge protectors, poorly designed
microfilters, Repetitive Electrical Impulse Noise, and by long telephone extension cords. Telephone
extension cords are typically made with small-gauge, multi-strand copper conductors which do not
maintain a noise-reducing pair twist. Such cable is more susceptible to electromagnetic interference
and has more attenuation than solid twisted-pair copper wires typically wired to telephone jacks.
These effects are especially significant where the customer's phone line is more than 4 km from the
DSLAM in the telephone exchange, which causes the signal levels to be lower relative to any local
noise and attenuation. This will have the effect of reducing speeds or causing connection failures.
SDSL is short for symmetric digital subscriber line, a technology that allows more data to be sent over existing copper
telephone lines (POTS). SDSL supports data rates up to 3 Mbps.
SDSL works by sending digital pulses in the high-frequency area of telephone wires and cannot operate
simultaneously with voice connections over the same wires. It is symmetric, supporting the same data rates for
upstream and downstream traffic. A similar technology that supports different data rates for upstream and
downstream data is called asymmetric digital subscriber line (ADSL). ADSL is more popular in North America,
whereas SDSL is being developed primarily in Europe.
This VDSL modem used in Taiwan provides 4 Ethernet ports and an internal filter for voice-data separation.
Very-high-bit-rate digital subscriber line (VDSL or VHDSL)[1] is a digital subscriber line (DSL)
technology providing data transmission faster than asymmetric digital subscriber line (ADSL) over a
single flat untwisted or twisted pair of copper wires (up to 52 Mbit/s downstream and
16 Mbit/s upstream),[2] and on coaxial cable (up to 85 Mbit/s down- and upstream)[3] using the
frequency band from 25 kHz to 12 MHz.[4] These rates mean that VDSL is capable of supporting
applications such as high-definition television, as well as telephone services (voice over IP) and
general Internet access, over a single connection. VDSL is deployed over existing wiring used
for analog telephone service and lower-speed DSL connections. This standard was approved by ITU
in November 2001.
Second-generation systems (VDSL2; ITU-T G.993.2 approved in February 2006) use frequencies of
up to 30 MHz to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and
downstream directions. The maximum available bit rate is achieved at a range of about 300 meters;
performance degrades as the loop attenuation increases.
The Internet protocol suite is sometimes called the TCP/IP protocol suite,
after TCP/IP, which refers to the two most important protocols in it:
the transmission control protocol (TCP) and the Internet protocol (IP).
These were also the first two protocols in the suite to be developed.
This suite has five layers, in contrast with the seven layers of the OSI (Open
Systems Interconnection) reference model; each layer contains a number of
protocols. Among the main ones are HTTP (hypertext transfer protocol),
FTP (file transfer protocol), SSH (secure shell), Telnet and BitTorrent at the
application layer, TCP and UDP (user datagram protocol) at the transport
layer, IP at the network layer, Ethernet, FDDI (fiber distributed data
interface) and PPP (point-to-point protocol) at the data link layer,
and 10Base-T, 100Base-T and DSL (digital subscriber line) at the physical
layer.
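The five-layer view described above, with its example protocols, can be written out as a simple mapping (the layer assignments follow the text; this is one common pedagogical model, not the only way the suite is layered):

```python
# The five-layer Internet protocol suite model described in the text,
# with the example protocols the text assigns to each layer.
TCP_IP_LAYERS = {
    "application": ["HTTP", "FTP", "SSH", "Telnet", "BitTorrent"],
    "transport":   ["TCP", "UDP"],
    "network":     ["IP"],
    "data link":   ["Ethernet", "FDDI", "PPP"],
    "physical":    ["10Base-T", "100Base-T", "DSL"],
}

def layer_of(protocol):
    """Find which layer a protocol belongs to in this model."""
    for layer, protocols in TCP_IP_LAYERS.items():
        if protocol in protocols:
            return layer
    return None

print(layer_of("TCP"))  # transport
```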
The Internet protocol suite can be described by analogy with the OSI model,
although there are some major differences and not all of the layers
correspond well with their counterparts. In particular, the Internet protocol
suite model was produced as the solution to a practical engineering problem,
whereas the OSI model is a more theoretical approach and was developed at
an earlier stage in the evolution of networks.
The World Wide Web (WWW) is an information space where documents and other web
resources are identified by URLs, interlinked by hypertext links, and can be accessed via
the Internet.[1] The World Wide Web was invented by English scientist Tim Berners-Lee in 1989. He
wrote the first web browser in 1990 while employed at CERN in Switzerland.[2][3]
It has become known simply as the Web. When used attributively (as in web page, web browser,
website, web server, web traffic, web search, web user, web technology, etc.) it is invariably written
in lower case. Otherwise the initial capital is often retained (‘the Web’), but lower case is becoming
increasingly common (‘the web’).
The World Wide Web was central to the development of the Information Age and is the primary tool
billions of people use to interact on the Internet.[4][5][6]
Web pages are primarily text documents formatted and annotated with Hypertext Markup
Language (HTML). In addition to formatted text, web pages may contain images, video, and
software components that are rendered in the user's web browser as coherent pages
of multimedia content. Embedded hyperlinks permit users to navigate between web pages. Multiple
web pages with a common theme, a common domain name, or both, may be called a website.
Website content can largely be provided by the publisher, or interactive where users contribute
content or the content depends upon the user or their actions. Websites may be mostly informative,
primarily for entertainment, or largely for commercial purposes.
Contents
1 History
2 Function
o 2.1 Linking
o 2.2 Dynamic updates of web pages
o 2.3 WWW prefix
o 2.4 Scheme specifiers
3 Web security
4 Privacy
5 Standards
6 Accessibility
7 Internationalization
8 Statistics
9 Speed issues
10 Web caching
11 See also
12 References
13 Further reading
14 External links
History
Main article: History of the World Wide Web
Berners-Lee's vision of a global hyperlinked information system became a possibility by the second
half of the 1980s. By 1985, the global Internet began to proliferate in Europe, and the Domain
Name System (upon which the Uniform Resource Locator is built) came into being. In 1988 the first
direct IP connection between Europe and North America was made and Berners-Lee began to
openly discuss the possibility of a web-like system at CERN.[7]
In March 1989 Tim Berners-Lee issued a proposal to the management at CERN for a system called
"Mesh" that referenced ENQUIRE, a database and software project he had built in 1980, which used
the term "web" and described a more elaborate information management system based on links
embedded in readable text: "Imagine, then, the references in this document all being associated with
the network address of the thing to which they referred, so that while reading this document you
could skip to them with a click of the mouse." Such a system, he explained, could be referred to
using one of the existing meanings of the word hypertext, a term that he says was coined in the
1950s. There is no reason, the proposal continues, why such hypertext links could not encompass
multimedia documents including graphics, speech and video, so that Berners-Lee goes on to use the
term hypermedia.[8]
With help from his colleague and fellow hypertext enthusiast Robert Cailliau he published a more
formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" (one
word) as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server
architecture.[9] At this point HTML and HTTP had already been in development for about two months
and the first Web server was about a month from completing its first successful test.
This proposal estimated that a read-only web would be developed within three months and that it
would take six months to achieve "the creation of new links and new material by readers, [so that]
authorship becomes universal" as well as "the automatic notification of a reader when new material
of interest to him/her has become available." While the read-only goal was met, accessible
authorship of web content took longer to mature, with the wiki concept, WebDAV, blogs, Web
2.0 and RSS/Atom.[10]
The proposal was modeled after the SGML reader Dynatext by Electronic Book Technology, a spin-
off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext
system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to
Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing
policy for use in the general high energy physics community, namely a fee for each document and
each document alteration.
A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the
first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools
necessary for a working Web:[11] the first web browser (which was a web editor as well) and the first
web server. The first web site,[12] which described the project itself, was published on 20 December
1990.[13]
The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in
May 2013 that Berners-Lee gave him what he says is the oldest known web page during a 1991 visit
to UNC. Jones stored it on a magneto-optical drive and on his NeXT computer.[14]
On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on
the newsgroup alt.hypertext.[15] This date also marked the debut of the Web as a publicly available
service on the Internet, although new users could only access it after 23 August; for this reason,
23 August is considered the Internaut's Day. Several news media have reported that the first photo on the Web
was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles
Cernettes taken by Silvano de Gennaro; Gennaro has disclaimed this story, writing that media were
"totally distorting our words for the sake of cheap sensationalism."[16]
The first server outside Europe was installed at the Stanford Linear Accelerator Center (SLAC) in
Palo Alto, California, to host the SPIRES-HEP database. Accounts differ substantially as to the date
of this event. The World Wide Web Consortium's timeline says December 1992,[17] whereas SLAC
itself claims December 1991,[18][19] as does a W3C document titled A Little History of the World Wide
Web.[20]
The underlying concept of hypertext originated in previous projects from the 1960s, such as
the Hypertext Editing System(HES) at Brown University, Ted Nelson's Project Xanadu, and Douglas
Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar
Bush's microfilm-based memex, which was described in the 1945 essay "As We May Think".[21]
Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web,
he explains that he had repeatedly suggested that a marriage between the two technologies was
possible to members of both technical communities, but when no one took up his invitation, he finally
assumed the project himself. In the process, he developed three essential technologies:
a system of globally unique identifiers for resources on the Web and elsewhere, the universal
document identifier (UDI), later known as uniform resource locator (URL) and uniform resource
identifier (URI);
the publishing language HyperText Markup Language (HTML);
the Hypertext Transfer Protocol (HTTP).[22]
The World Wide Web had a number of differences from other hypertext systems available at the
time. The Web required only unidirectional links rather than bidirectional ones, making it possible for
someone to link to another resource without action by the owner of that resource. It also significantly
reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems),
but in turn presented the chronic problem of link rot.
Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it
possible to develop servers and clients independently and to add extensions without licensing
restrictions. On 30 April 1993, CERN announced that the World Wide Web would be free to anyone,
with no fees due.[23] Coming two months after the announcement that the server implementation of
the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and
towards the Web. An early popular web browser was ViolaWWW for Unix and the X Window
System.
Robert Cailliau, Jean-François Abramatic formerly of INRIA, and Tim Berners-Lee at the 10th anniversary of
the World Wide Web Consortium.
Scholars generally agree that a turning point for the World Wide Web began with the
introduction[24] of the Mosaic web browser[25] in 1993, a graphical browser developed by a team at
the National Center for Supercomputing Applications at the University of Illinois at Urbana-
Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-
Performance Computing and Communications Initiative and the High Performance Computing and
Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al
Gore.[26] Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages
and the web's popularity was less than older protocols in use over the Internet, such
as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the
Web to become, by far, the most popular Internet protocol.
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the
European Organization for Nuclear Research (CERN) in October 1994. It was founded at
the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support
from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet;
a year later, a second site was founded at INRIA (a French national computer research lab) with
support from the European Commission DG InfSo; and in 1996, a third continental site was created
in Japan at Keio University. By the end of 1994, the total number of websites was still relatively
small, but many notable websites were already active that foreshadowed or inspired today's most
popular services.
Connected by the existing Internet, other websites were created around the world, adding
international standards for domain names and HTML. Since then, Berners-Lee has played an active
role in guiding the development of web standards (such as the markup languages to compose web
pages in), and has advocated his vision of a Semantic Web. The World Wide Web enabled the
spread of information over the Internet through an easy-to-use and flexible format. It thus played an
important role in popularizing use of the Internet.[27] Although the two terms are
sometimes conflated in popular use, World Wide Web is not synonymous with Internet.[28] The Web is
an information space containing hyperlinked documents and other resources, identified by their
URIs.[29] It is implemented as both client and server software using Internet protocols such
as TCP/IP and HTTP.
Tim Berners-Lee was knighted in 2004 by Queen Elizabeth II for "services to the global development
of the Internet".[30][31]
Function
The World Wide Web functions as an application-level layer on top of the Internet. The advent of the
Mosaic web browser helped to make the Web much more usable.
The terms Internet and World Wide Web are often used without much distinction. However, the two
are not the same. The Internet is a global system of interconnected computer networks. In contrast,
the World Wide Web is a global collection of text documents and other resources, linked by
hyperlinks and URIs. Web resources are usually accessed using HTTP, which is one of many
Internet communication protocols.[32]
Viewing a web page on the World Wide Web normally begins either by typing the URL of the page
into a web browser, or by following a hyperlink to that page or resource. The web browser then
initiates a series of background communication messages to fetch and display the requested page.
In the 1990s, using a browser to view web pages—and to move from one web page to another
through hyperlinks—came to be known as 'browsing,' 'web surfing' (after channel surfing), or
'navigating the Web'. Early studies of this new behavior investigated user patterns in using web
browsers. One study, for example, found five user patterns: exploratory surfing, window surfing,
evolved surfing, bounded navigation and targeted navigation.[33]
The following example demonstrates the functioning of a web browser when accessing a page at the
URL http://www.example.org/home.html. The browser resolves the server name of the URL
(www.example.org) into an Internet Protocol address using the globally distributed Domain Name
System (DNS). This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The
browser then requests the resource by sending an HTTP request across the Internet to the computer
at that address. It requests service from a specific TCP port number that is well known for the HTTP
service, so that the receiving host can distinguish an HTTP request from other network protocols it
may be servicing. The HTTP protocol normally uses port number 80. The content of the HTTP
request can be as simple as two lines of text:
GET /home.html HTTP/1.1
Host: www.example.org
The computer receiving the request delivers it to web server software listening on that port. If the
web server can fulfil the request, it sends back an HTTP response whose header can be as simple as:
HTTP/1.0 200 OK
Content-Type: text/html; charset=UTF-8
followed by the content of the requested page. HyperText Markup Language (HTML) for a basic web
page might look like this:
<html>
<head>
<title>Example.org – The World Wide Web</title>
</head>
<body>
<p>The World Wide Web, abbreviated as WWW and commonly known ...</p>
</body>
</html>
The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph, and
such) that surrounds the words to format the text on the screen. Many web pages use HTML to
reference the URLs of other resources such as images, other embedded media, scripts that affect
page behavior, and Cascading Style Sheets that affect page layout. The browser makes additional
HTTP requests to the web server for these other Internet media types. As it receives their content
from the web server, the browser progressively renders the page onto the screen as specified by its
HTML and these additional resources.
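The request the browser assembles in the example above can be sketched with the Python standard library. Real browsers send many more headers (User-Agent, Accept, and so on); this helper is only illustrative:

```python
from urllib.parse import urlsplit

# Build the minimal two-line HTTP/1.1 request described in the text,
# for a URL such as http://www.example.org/home.html.
def build_http_request(url):
    parts = urlsplit(url)
    path = parts.path or "/"          # an empty path means the site root
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {parts.hostname}\r\n"
            f"\r\n")                  # blank line ends the header section

req = build_http_request("http://www.example.org/home.html")
# req begins with "GET /home.html HTTP/1.1"
```

Sent over a TCP connection to port 80 of the resolved address, this is enough for the server to locate and return the requested page.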
Linking
Most web pages contain hyperlinks to other related pages and perhaps to downloadable files,
source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks
like this: <a href="http://www.example.org/home.html">Example.org Homepage</a>
Such a collection of useful, related resources, interconnected via hypertext links is dubbed a web of
information. Publication on the Internet created what Tim Berners-Lee first called
the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November
1990.[9]
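Hyperlinks of the form shown above can be extracted from a page with the standard-library HTML parser; a minimal sketch:

```python
from html.parser import HTMLParser

# Collect the href targets of <a> elements, i.e. the hyperlinks that
# interconnect resources into the "web of information" described above.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

collector = LinkCollector()
collector.feed('<a href="http://www.example.org/home.html">Example.org Homepage</a>')
# collector.links == ['http://www.example.org/home.html']
```

This is essentially what web crawlers do to discover the link structure of the Web.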
The hyperlink structure of the WWW is described by the webgraph: the nodes of the webgraph
correspond to the web pages (or URLs), and the directed edges between them correspond to the
hyperlinks.
Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with
different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link
rot, and the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web
has prompted many efforts to archive web sites. The Internet Archive, active since 1996, is the best
known of such efforts.
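The webgraph and link rot can be modelled together in a few lines; the URLs below are made up for illustration:

```python
# A tiny webgraph: nodes are URLs, directed edges are hyperlinks.
# A hyperlink whose target no longer exists in the graph is a "dead
# link", the symptom of link rot described above.
webgraph = {
    "a.example/page1": ["a.example/page2", "b.example/old"],
    "a.example/page2": ["a.example/page1"],
}

def dead_links(graph):
    """Return (source, target) edges pointing at pages missing from the graph."""
    live = set(graph)
    return sorted(
        (src, dst)
        for src, targets in graph.items()
        for dst in targets
        if dst not in live
    )

# dead_links(webgraph) == [("a.example/page1", "b.example/old")]
```

Archiving efforts such as the Internet Archive effectively snapshot nodes of this graph so that dead links can still be resolved against an earlier copy.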
Dynamic updates of web pages
Main article: Ajax (programming)
JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then
of Netscape, for use within web pages.[34] The standardised version is ECMAScript.[34] To make web
pages more interactive, some web applications also use JavaScript techniques such
as Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page that can
make additional HTTP requests to the server, either in response to user actions such as mouse
movements or clicks, or based on elapsed time. The server's responses are used to modify the
current page rather than creating a new page with each response, so the server needs only to
provide limited, incremental information. Multiple Ajax requests can be handled at the same time,
and users can interact with the page while data is retrieved. Web pages may also regularly poll the
server to check whether new information is available.[35]
WWW prefix
Many hostnames used for the World Wide Web begin with www because of the long-standing
practice of naming Internet hosts according to the services they provide. The hostname of a web
server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for
a USENET news server. These host names appear as Domain Name System (DNS)
or subdomain names, as in www.example.com. The use of www is not required by any technical or
policy standard and many web sites do not use it; indeed, the first ever web server was
called nxoc01.cern.ch.[36] According to Paolo Palazzi,[37] who worked at CERN along with Tim
Berners-Lee, the popular use of www as subdomain was accidental; the World Wide Web project
page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN
home page, however the DNS records were never switched, and the practice of prepending www to
an institution's website domain name was subsequently copied. Many established websites still use
the prefix, or they employ other subdomain names such as www2, secure or en for special
purposes. Many such web servers are set up so that both the main domain name (e.g.,
example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others
require one form or the other, or they may map to different web sites.
The use of a subdomain name is useful for load balancing incoming web traffic by creating
a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be
used in a CNAME, the same result cannot be achieved by using the bare domain root.[citation needed]
When a user submits an incomplete domain name to a web browser in its address bar input field,
some web browsers automatically try adding the prefix "www" to the beginning of it and possibly
".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering
'microsoft' may be transformed to http://www.microsoft.com/ and 'openoffice'
to http://www.openoffice.org. This feature started appearing in early versions of Mozilla Firefox,
when it still had the working title 'Firebird' in early 2003, from an earlier practice in browsers such
as Lynx.[38] It is reported that Microsoft was granted a US patent for the same idea in 2008, but only
for mobile devices.[39]
In English, www is usually read as double-u double-u double-u.[40] Some users pronounce it dub-dub-
dub, particularly in New Zealand. Stephen Fry, in his "Podgrammes" series of podcasts, pronounces
it wuh wuh wuh.[citation needed] The English writer Douglas Adams once quipped in The Independent on
Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three
times longer to say than what it's short for".[41] In Mandarin Chinese, World Wide Web is commonly
translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and
literally means "myriad dimensional net",[42] a translation that reflects the design concept and
proliferation of the World Wide Web. Tim Berners-Lee's web-space states that World Wide Web is
officially spelled as three separate words, each capitalised, with no intervening hyphens.[43]
Use of the www prefix is declining as Web 2.0 web applications seek to brand their domain names
and make them easily pronounceable.[44] As the mobile web grows in popularity, services
like Gmail.com, Outlook.com, MySpace.com, Facebook.com and Twitter.com are most often
mentioned without adding "www." (or, indeed, ".com") to the domain.
Scheme specifiers
The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer
Protocol or HTTP Secure, respectively. They specify the communication protocol to use for the
request and response. The HTTP protocol is fundamental to the operation of the World Wide Web,
and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential
data, such as passwords or banking information. Web browsers usually automatically prepend http://
to user-entered URIs, if omitted.
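The browser behaviour described in this section can be sketched as a small normalization function; real browsers apply more elaborate heuristics (and increasingly default to https://), so this is only an illustration:

```python
# Mimic the browser behaviour described above: prepend "http://"
# when a user-entered address lacks a scheme specifier.
def normalize_uri(entered):
    if entered.startswith(("http://", "https://")):
        return entered
    return "http://" + entered

print(normalize_uri("www.example.com"))  # http://www.example.com
```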
Web security
For criminals, the web has become the preferred way to spread malware. Cybercrime on the web
can include identity theft, fraud, espionage and intelligence gathering.[45] Web-based
vulnerabilities now outnumber traditional computer security concerns,[46][47] and as measured
by Google, about one in ten web pages may contain malicious code.[48] Most web-based attacks take
place on legitimate websites, and most, as measured by Sophos, are hosted in the United States,
China and Russia.[49] The most common of all malware threats are SQL injection attacks against
websites.[50] Through HTML and URIs, the Web was vulnerable to attacks like cross-site
scripting (XSS) that came with the introduction of JavaScript[51] and were exacerbated to some
degree by Web 2.0 and Ajax web design that favors the use of scripts.[52] Today by one estimate,
70% of all websites are open to XSS attacks on their users.[53] Phishing is another common threat to
the Web. "RSA, the Security Division of EMC, today announced the findings of its January 2013 Fraud
Report, estimating the global losses from phishing at $1.5 Billion in 2012".[54] Two of the well-known
phishing methods are Covert Redirect and Open Redirect.
Proposed solutions vary to extremes. Large security vendors like McAfee already design
governance and compliance suites to meet post-9/11 regulations,[55] and some, like Finjan have
recommended active real-time inspection of code and all content regardless of its source.[45] Some
have argued that for enterprise to see security as a business opportunity rather than a cost
center,[56] "ubiquitous, always-on digital rights management" enforced in the infrastructure by a
handful of organizations must replace the hundreds of companies that today secure data and
networks.[57] Jonathan Zittrain has said users sharing responsibility for computing safety is far
preferable to locking down the Internet.[58]
Privacy
Main article: Internet privacy
Every time a client requests a web page, the server can identify the request's IP address and usually
logs it. Also, unless set not to do so, most web browsers record requested web pages in a
viewable history feature, and usually cache much of the content locally. Unless the server-browser
communication uses HTTPS encryption, web requests and responses travel in plain text across the
Internet and can be viewed, recorded, and cached by intermediate systems.
When a web page asks for, and the user supplies, personally identifiable information—such as their
real name, address, e-mail address, etc.—web-based entities can associate current web traffic with
that individual. If the website uses HTTP cookies, username and password authentication, or other
tracking techniques, it can relate other web visits, before and after, to the identifiable information
provided. In this way it is possible for a web-based organisation to develop and build a profile of the
individual people who use its site or sites. It may be able to build a record for an individual that
includes information about their leisure activities, their shopping interests, their profession, and other
aspects of their demographic profile. These profiles are obviously of potential interest to marketeers,
advertisers and others. Depending on the website's terms and conditions and the local laws that
apply, information from these profiles may be sold, shared, or passed to other organisations without
the user being informed. For many ordinary people, this means little more than some unexpected e-
mails in their in-box, or some uncannily relevant advertising on a future web page. For others, it can
mean that time spent indulging an unusual interest can result in a deluge of further targeted
marketing that may be unwelcome. Law enforcement, counter terrorism and espionage agencies
can also identify, target and track individuals based on their interests or proclivities on the Web.
Social networking sites try to get users to use their real names, interests, and locations. They believe
this makes the social networking experience more realistic, and therefore more engaging for all their
users. On the other hand, uploaded photographs or unguarded statements can be identified to an
individual, who may regret this exposure. Employers, schools, parents, and other relatives may be
influenced by aspects of social networking profiles that the posting individual did not intend for these
audiences. On-line bullies may make use of personal information to harass or stalk users. Modern
social networking websites allow fine grained control of the privacy settings for each individual
posting, but these can be complex and not easy to find or use, especially for beginners.[59]
Photographs and videos posted onto websites have caused particular problems, as they can add a
person's face to an on-line profile. With modern and potential facial recognition technology, it may
then be possible to relate that face with other, previously anonymous, images, events and scenarios
that have been imaged elsewhere. Because of image caching, mirroring and copying, it is difficult to
remove an image from the World Wide Web.
Standards
Main article: Web standards
Many formal standards and other technical specifications and software define the operation of
different aspects of the World Wide Web, the Internet, and computer information exchange. Many of
the documents are the work of the World Wide Web Consortium (W3C), headed by Berners-Lee, but
some are produced by the Internet Engineering Task Force (IETF) and other organizations.
Usually, when web standards are discussed, the following publications are seen as foundational:
Recommendations for markup languages, especially HTML and XHTML, from the W3C. These
define the structure and interpretation of hypertext documents.
Recommendations for stylesheets, especially CSS, from the W3C.
Standards for ECMAScript (usually in the form of JavaScript), from Ecma International.
Recommendations for the Document Object Model, from W3C.
Additional publications provide definitions of other essential technologies for the World Wide Web,
including, but not limited to, the following:
Uniform Resource Identifier (URI), which is a universal system for referencing resources on the
Internet, such as hypertext documents and images. URIs, often called URLs, are defined by the
IETF's RFC 3986 / STD 66: Uniform Resource Identifier (URI): Generic Syntax, as well as its
predecessors and numerous URI scheme-defining RFCs;
HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1 and RFC
2617: HTTP Authentication, which specify how the browser and server authenticate each other.
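As an illustration of the HTTP Authentication standard mentioned above, the Basic scheme of RFC 2617 encodes a user name and password as the base64 form of "user:password" in the Authorization header. A minimal sketch in Python follows; the credentials are the example pair used in the RFC itself, not real ones:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # RFC 2617 Basic scheme: base64-encode "user:password" and
    # prefix the result with "Basic " for the Authorization header.
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# The example credentials given in RFC 2617 itself:
print(basic_auth_header("Aladdin", "open sesame"))
# Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

Note that Basic authentication is an encoding, not encryption, which is one reason HTTPS is used alongside it.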
Accessibility
Main article: Web accessibility
There are methods for accessing the Web in alternative mediums and formats to facilitate use by
individuals with disabilities. These disabilities may be visual, auditory, physical, speech related,
cognitive, neurological, or some combination. Accessibility features also help people with temporary
disabilities, like a broken arm, or aging users as their abilities change.[60] The Web is used for
receiving information as well as for providing information and interacting with society. The World
Wide Web Consortium considers it essential that the Web be accessible, so that it can provide equal
access and equal opportunity to people with disabilities.[61] Tim Berners-Lee once noted, "The power
of the Web is in its universality. Access by everyone regardless of disability is an essential
aspect."[60] Many countries
regulate web accessibility as a requirement for websites.[62] International cooperation in the
W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as
software developers can use to make the Web accessible to persons who may or may not be using
assistive technology.[60][63]
Hyperlink
From Wikipedia, the free encyclopedia
In computing, a hyperlink is a reference to data that the reader can directly follow either by clicking,
tapping or hovering.[1] A hyperlink points to a whole document or to a specific element within a
document. Hypertext is text with hyperlinks. A software system that's used for viewing and creating
hypertext is a hypertext system, and to create a hyperlink is to hyperlink (or simply to link). A user
following hyperlinks is said to navigate or browse the hypertext.
The document containing a hyperlink is known as its source document. For example, in an online
reference work such as Wikipedia, many words and terms in the text are hyperlinked to definitions of
those terms. Hyperlinks are often used to implement reference mechanisms, such as tables of
contents, footnotes, bibliographies, indexes, letters, and glossaries.
In some hypertext, hyperlinks can be bidirectional: they can be followed in two directions, so both
ends act as anchors and as targets. More complex arrangements exist, such as many-to-many links.
The effect of following a hyperlink may vary with the hypertext system and may sometimes depend
on the link itself; for instance, on the World Wide Web, most hyperlinks cause the target document to
replace the document being displayed, but some are marked to cause the target document to open
in a new window. Another possibility is transclusion, for which the link target is a
document fragment that replaces the link anchor within the source document. Hyperlinks are not
followed only by persons browsing the document; they may also be followed automatically by
programs. A program that traverses the hypertext, following each hyperlink and gathering all the
retrieved documents, is known as a Web spider or crawler.
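The link-gathering step of such a program can be sketched with Python's standard html.parser module; the page snippet below is an invented example:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href target of every anchor (<a>) element encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Each hyperlink in HTML is an <a> element whose href
        # attribute names the target document.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A crawler would fetch a page, extract its links, then repeat for each target.
page = '<p>See the <a href="/wiki/Hypertext">hypertext</a> article.</p>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/wiki/Hypertext']
```

A full crawler would also resolve these relative targets against the source document's URL and keep a record of pages already visited.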
Hypertext vs Hyperlink
A hyperlink is a powerful tool that takes the reader or surfer directly to another web page without
having to open a new tab in the browser. It is often simply called a link, and is a reference in a
hypertext document to another document or to another place in the same text. Hypertext, on the
other hand, is the text displayed on the screen that contains these hyperlinks and can take the
reader to another web page in an instant.
In other words, hypertext is the text on the screen that contains references to other text on different
web pages, which the reader can reach immediately by clicking; those references are termed
hyperlinks. Hypertext contains only text and should not be confused with hypermedia, which
contains images and short videos in addition to text. Hypertext is the concept that has made the
WWW a flexible and easy-to-use system.
It is easy to confuse hypertext and hyperlink, as it is hypertext that contains the link to another web
page or document. These hyperlinks, or references, let you jump instantly to another document from
the document you are reading. Three terms come up whenever hyperlinks are discussed: the
anchor, the source, and the target. The hyperlinked text in the document you are reading is called
the anchor. Sometimes, when you hover over this anchor, brief information appears on the screen
about what the reference leads to. The page on which the anchor appears is called the source
document. The target is usually another web page to which the reader is directed on clicking the
anchor.
Today almost every webpage has words or phrases that have been anchored to provide additional
information through hyperlinks, and this web of links has benefited website owners.
Types of websites
1. Personal Websites
Your Internet Service Provider or Domain Registrar may offer you free server space for you to create
your own website that might include some family photos and an online diary. Usually these will have a
web address (URL) looking something like this: www.your-isp.com/~your-user-name/. This type of
site is useful for a family, teenagers, grandparents, etc. to stay in touch with each other. This type is
not advisable for a small business because the URL is not search engine friendly and the limited server
capabilities your hosting company offers may not be sophisticated enough for a small business
website.
Writer's and Author's websites are part of what's known as the Writer's or Author's Platform in the
publishing business. The platform includes a website, a Facebook presence, a blog, a Twitter account,
and the old fashioned mailing list. Many publishers will ask a prospective client about their platform. In
other words, "If we publish your book, what sort of a reader base do you already have that we can
count on to buy your new publication?" Fairly weighty request, wouldn't you say? For now, let's
concentrate on the website part. A writer's website would include a biography, a catalog of published
books and works, perhaps excerpts from some works, links to publications on sites like Amazon.com,
a link to the writer's blog, reviews and comments on the author's publications. You get the idea, and
that is to build a following, a fan base to which future publications can be directly marketed.
The use of mobile devices (smart phones, tablets, watches, etc.) has become ubiquitous. One problem
is that standard websites are difficult to view and sometimes take a long time to download on some of
these devices with their small screens and wireless connections. Websites whose pages are narrower
in width and take up less bandwidth work much better for mobile devices. A new domain designation
has been created to identify websites that are "mobile friendly". That is .mobi, as in
www.xislegraphix.mobi, if I had such a site. If you have a small business that would benefit from
being viewed on a mobile device, you should consider investigating the possibilities of creating a
mobile friendly site.
6. Blogging Websites
People took the term Web Logs and shortened it to Blogs—online diaries, journals, or editorials, if
you will. My, how Blogs have taken over the Internet. A person used to be outdated if he/she did not
have a website; now having a blog is de rigueur. A blog owner will log on daily, weekly, or whenever,
and write about whatever is going on in their lives or business, or they may comment on politics and
news. How wonderful the Internet is! Now anyone who can afford a blog can be self published and
allow their thoughts to be read by anyone in the world who has online access. How important is
blogging to the small business person?
Read more about blogs and find out...
7. Informational Websites
A major informational site is wikipedia.org, the online encyclopedia. And it is unique, because it allows
members to contribute and edit articles. Now your small business may not want such a comprehensive
site, but if you have information to share or sell, an informational website would fill the bill. Suppose
you have a landscaping business. You could create a website that lists plants with their definitions and
planting and caring instructions. This would be helpful to people, and you would use it to lead people
to your nursery. Of course you could "hybrid" this site by adding an e-commerce feature, a forum, or
even photo sharing.
8. Brochure Websites
In the days before the Internet, we used the print, radio, and television media to spread the word
about our businesses. Now we can cast a large net, reaching literally millions of people all over the
world with just one website. With your online brochure or catalog, you can show photos and
descriptions of your products or services to anyone who looks for and finds your website. To some
this may sound
like an E-commerce Website, but there are many businesses that deal in products or services that are
not sellable over the web—think hair-stylist, dentist, or day-care center.
9. Directory Websites
Just as we used to use the printed Yellow Pages in phone books to find services and businesses, today
we have website directories. The Yellow Pages has one, YP.com. Directories can be dedicated to a
certain topic or industry, or they can encompass geographical areas. Search Engines, such
as Google.com and Yahoo.com, can be considered directories, but since their databases are so large,
rather than searching alphabetically, one enters a search term in the search field.
10. E-Commerce Websites
Ever hear of Amazon.com? It's one of the grand-daddies of all e-commerce websites. But you don't
have to be an Amazon to sell your products online. There are millions of small businesses who use
their e-commerce websites to sell their products over the Internet. Just about anything that can be
sold in a brick-and-mortar store can be sold online—with much less overhead! Is an E-commerce
Website right for you?
The Internet has forever changed the way we do business. Businesses are networking better than
they ever could before due to social networking sites like Facebook and Twitter. Just 20 short years
ago we were looking through books to find addresses of sites that offered files that could be
downloaded using the latest in file sharing technology: FTP. In those days, there were no GUI
(Graphical User Interface) websites, no pop-ups or ads to get in the way, no search engines and Java
was an exotic island in Indonesia.
Today we can easily identify eight different types of websites available on the Internet since the
inception of the basic informational website which appeared in the early years.
1. Informational Websites
As mentioned above, informational based websites were the first versions to hit the Internet. They
are as they sound, sites which enable readers to find information on a particular business or topic.
3. E-Commerce Websites
E-Commerce websites take brochure websites a step further by allowing you to shop directly from
your computer. The main difference between a brochure site and an e-commerce site is that the latter
features a checkout system to enable you to order directly from the online store.
4. Blogs
Blog websites (traditionally known as web logs) did not hit the mainstream until the early to mid
2000s. However since then blogging has become very popular for both business and personal use.
5. Personal Websites
Personal websites are similar to a personal blog, where an individual will in all likelihood have their
own domain. These are created by friends and families to share their information and pictures
online with each other, allowing people to keep in contact.
A Uniform Resource Locator (URL), commonly though informally termed a web address (a term
which is not defined identically),[1] is a reference to a web resource that specifies its location on a computer
network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier
(URI),[2] although many people use the two terms interchangeably.[3] A URL implies the means to
access an indicated resource, which is not true of every URI.[4][3] URLs occur most commonly to
reference web pages (http), but are also used for file transfer (ftp), email (mailto), database access
(JDBC), and many other applications.
Most web browsers display the URL of a web page above the page in an address bar. A typical URL
could have the form http://www.example.com/index.html , which indicates a protocol ( http ),
a hostname ( www.example.com ), and a file name ( index.html ).
History
Uniform Resource Locators were defined in Request for Comments (RFC) 1738 in 1994 by Tim
Berners-Lee, the inventor of the World Wide Web, and the URI working group of the Internet
Engineering Task Force (IETF),[5] as an outcome of collaboration started at the IETF Living
Documents "Birds of a Feather" session in 1992.[6][7]
The format combines the pre-existing system of domain names (created in 1985) with file
path syntax, where slashes are used to separate directory and file names. Conventions already
existed where server names could be prepended to complete file paths, preceded by a double slash
( // ).[8]
Berners-Lee later expressed regret at the use of dots to separate the parts of the domain
name within URIs, wishing he had used slashes throughout,[8] and also said that, given the colon
following the first component of a URI, the two slashes before the domain name were unnecessary.[9]
Syntax
Main article: Uniform resource identifier § Syntax
Every HTTP URL conforms to the syntax of a generic URI. A generic URI is of the form:
scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]
It comprises:
The scheme, consisting of a sequence of characters beginning with a letter and followed by any
combination of letters, digits, plus ( + ), period ( . ), or hyphen ( - ). Although schemes are case-
insensitive, the canonical form is lowercase and documents that specify schemes must do so
with lowercase letters. It is followed by a colon ( : ). Examples of popular schemes
include http , ftp , mailto , file , and data . URI schemes should be registered with
the Internet Assigned Numbers Authority (IANA), although non-registered schemes are used in
practice.[a]
Two slashes ( // ): This is required by some schemes and not required by some others. When
the authority component (explained below) is absent, the path component cannot begin with two
slashes.[11]
An authority part, comprising:
An optional authentication section of a user name and password, separated by a colon,
followed by an at symbol ( @ )
A "host", consisting of either a registered name (including but not limited to a hostname), or
an IP address. IPv4 addresses must be in dot-decimal notation, and IPv6 addresses must be
enclosed in brackets ( [ ] ).[12][b]
An optional port number, separated from the hostname by a colon
A path, which contains data, usually organized in hierarchical form, that appears as a sequence
of segments separated by slashes. Such a sequence may resemble or map exactly to a file
system path, but does not always imply a relation to one.[14] The path must begin with a single
slash ( / ) if an authority part was present, and may also if one was not, but must not begin with
a double slash.
An optional query, separated from the preceding part by a question mark ( ? ), containing
a query string of non-hierarchical data. Its syntax is not well defined, but by convention is most
often a sequence of attribute–value pairs separated by a delimiter.
An optional fragment, separated from the preceding part by a hash ( # ). The fragment contains
a fragment identifier providing direction to a secondary resource, such as a section heading in
an article identified by the remainder of the URI. When the primary resource is
an HTML document, the fragment is often an id attribute of a specific element, and web
browsers will scroll this element into view.
A web browser will usually dereference a URL by performing an HTTP request to the specified host,
by default on port number 80. URLs using the https scheme require that requests and responses
be made over a secure connection to the website.
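The components enumerated above can be picked apart with Python's standard urllib.parse module. The URL below is an invented example in the style of RFC 3986, exercising every optional part:

```python
from urllib.parse import urlsplit

url = "https://user:pw@www.example.com:8042/over/there?name=ferret#nose"
parts = urlsplit(url)

print(parts.scheme)    # https            (before the first colon)
print(parts.username)  # user             (optional authentication section)
print(parts.password)  # pw
print(parts.hostname)  # www.example.com  (the host)
print(parts.port)      # 8042             (optional port, after a colon)
print(parts.path)      # /over/there
print(parts.query)     # name=ferret      (after the question mark)
print(parts.fragment)  # nose             (after the hash)
```

Note that urlsplit treats the URL purely syntactically; it does not check that the scheme is registered or that the host exists.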
Search engine
The results of a search for the term "lunar eclipse" in a web-based image search engine
A web search engine is a software system that is designed to search for information on the World
Wide Web. The search results are generally presented in a line of results often referred to as search
engine results pages (SERPs). The information may be a mix of web pages, images, and other
types of files. Some search engines also mine data available in databases or open directories.
Unlike web directories, which are maintained only by human editors, search engines also
maintain real-time information by running an algorithm on a web crawler.
History
Further information: Timeline of web search engines
This timeline includes engines that are still active alongside others that are now inactive. Active:
Lycos, Daum, Excite, SAPO, Naver, Exalead, Gigablast, Sogou, GoodSearch, Search.com, ChaCha,
Ask.com, DuckDuckGo, and NATE. Inactive: JumpStation, Infoseek, Magellan, Vivisimo, Scroogle,
A9.com, SearchMe, Quaero, Sproose, Picollator, Viewzi, Boogami, LeapFish, Yebol, and Cuil.
Typically when a user enters a query into a search engine it is a few keywords.[15] The index already
has the names of the sites containing the keywords, and these are instantly obtained from the index.
The real processing load is in generating the web pages that are the search results list: Every page
in the entire list must be weighted according to information in the indexes.[14] Then the top search
result item requires the lookup, reconstruction, and markup of the snippets showing the context of
the keywords matched. These are only part of the processing each search results web page
requires, and further pages (next to the top) require more of this post processing.
Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators
and search parameters to refine the search results. These provide the necessary controls for the
user engaged in the feedback loop users create by filtering and weighting while refining the search
results, given the initial pages of the first search results. For example, from 2007 the Google.com
search engine has allowed one to filter by date by clicking "Show search tools" in the leftmost
column of the initial search results page, and then selecting the desired date range.[16] It's also
possible to weight by date because each page has a modification time. Most search engines support
the use of the boolean operators AND, OR and NOT to help end users refine the search query.
Boolean operators are for literal searches that allow the user to refine and extend the terms of the
search. The engine looks for the words or phrases exactly as entered. Some search engines provide
an advanced feature called proximity search, which allows users to define the distance between
keywords.[14] There is also concept-based searching, where the research involves using statistical
analysis on pages containing the words or phrases you search for. As well, natural language queries
allow the user to type a question in the same form one would ask it to a human.[17] An example of
such a site is ask.com.[18]
The usefulness of a search engine depends on the relevance of the result set it gives back. While
there may be millions of web pages that include a particular word or phrase, some pages may be
more relevant, popular, or authoritative than others. Most search engines employ methods
to rank the results to provide the "best" results first. How a search engine decides which pages are
the best matches, and what order the results should be shown in, varies widely from one engine to
another.[14] The methods also change over time as Internet usage changes and new techniques
evolve. There are two main types of search engine that have evolved: one is a system of predefined
and hierarchically ordered keywords that humans have programmed extensively. The other is a
system that generates an "inverted index" by analyzing texts it locates. This second form relies much
more heavily on the computer itself to do the bulk of the work.
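A toy version of this second form makes the idea concrete. The sketch below, using invented example documents and simple whitespace tokenization, builds an inverted index and answers a boolean AND query by intersecting the posting sets of the query terms:

```python
from collections import defaultdict

documents = {
    1: "total lunar eclipse tonight",
    2: "solar eclipse glasses",
    3: "lunar lander mission",
}

# Inverted index: each term maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search_and(*terms):
    """Boolean AND query: intersect the posting sets of all terms."""
    results = index[terms[0].lower()].copy()
    for term in terms[1:]:
        results &= index[term.lower()]
    return sorted(results)

print(search_and("lunar", "eclipse"))  # [1]
print(search_and("eclipse"))           # [1, 2]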
Most Web search engines are commercial ventures supported by advertising revenue and thus
some of them allow advertisers to have their listings ranked higher in search results for a fee. Search
engines that do not accept money for their search results make money by running search related
ads alongside the regular search engine results. The search engines make money every time
someone clicks on one of these ads.[19]
Market share
Google is the world's most popular search engine, with a market share of 67.49 percent as of
September 2015. Bing comes in second place.[20]
The world's most popular search engines, by market share in September 2015, are:
Google 69.24%
Bing 12.26%
Yahoo! 9.19%
Baidu 6.48%
AOL 1.11%
Ask 0.24%
Lycos 0.00%
Social network
This article is about the theoretical concept as used in the social and behavioral sciences. For social
networking sites, see Social networking service. For the 2010 movie, see The Social Network. For
other uses, see Social network (disambiguation).
A social network is a social structure made up of a set of social actors (such as individuals or
organizations), sets of dyadic ties, and other social interactions between actors. The social network
perspective provides a set of methods for analyzing the structure of whole social entities as well as a
variety of theories explaining the patterns observed in these structures.[1] The study of these
structures uses social network analysis to identify local and global patterns, locate influential entities,
and examine network dynamics.
Social networks and their analysis form an inherently interdisciplinary academic field which
emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored
early structural theories in sociology emphasizing the dynamics of triads and "web of group
affiliations".[2] Jacob Moreno is credited with developing the first sociograms in the 1930s to study
interpersonal relationships. These approaches were mathematically formalized in the 1950s and
theories and methods of social networks became pervasive in the social and behavioral sciences by
the 1980s.[1][3] Social network analysis is now one of the major paradigms in contemporary sociology,
and is also employed in a number of other social and formal sciences. Together with other complex
networks, it forms part of the nascent field of network science.[4][5]
Overview
The social network is a theoretical construct useful in the social sciences to study relationships
between individuals, groups, organizations, or even entire societies (social units, see differentiation).
The term is used to describe a social structure determined by such interactions. The ties through
which any given social unit connects represent the convergence of the various social contacts of that
unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to
understanding social interaction is that social phenomena should be primarily conceived and
investigated through the properties of relations between and within units, instead of the properties of
these units themselves. Thus, one common criticism of social network theory is that individual
agency is often ignored[6] although this may not be the case in practice (see agent-based modeling).
Precisely because many different types of relations, singular or in combination, form these network
configurations, network analytics are useful to a broad range of research enterprises. In social
science, these fields of study include, but are not limited to, anthropology, biology, communication
studies, economics, geography, information science, organizational studies, social
psychology, sociology, and sociolinguistics.
History
In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social
networks in their theories and research of social groups. Tönnies argued that social groups can exist
as personal and direct social ties that either link individuals who share values and belief
(Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and
instrumental social links (Gesellschaft, German, commonly translated as "society").[7] Durkheim gave
a non-individualistic explanation of social facts, arguing that social phenomena arise when
interacting individuals constitute a reality that can no longer be accounted for in terms of the
properties of individual actors.[8] Georg Simmel, writing at the turn of the twentieth century, pointed to
the nature of networks and the effect of network size on interaction and examined the likelihood of
interaction in loosely knit networks rather than groups.[9]
Major developments in the field can be seen in the 1930s by several groups in psychology,
anthropology, and mathematics working independently.[6][10][11] In psychology, in the 1930s, Jacob L.
Moreno began systematic recording and analysis of social interaction in small groups, especially
classrooms and work groups (see sociometry). In anthropology, the foundation for social network
theory is the theoretical and ethnographic work of Bronislaw Malinowski,[12] Alfred Radcliffe-
Brown,[13][14] and Claude Lévi-Strauss.[15] A group of social anthropologists associated with Max
Gluckman and the Manchester School, including John A. Barnes,[16] J. Clyde Mitchell and Elizabeth
Bott Spillius,[17][18] often are credited with performing some of the first fieldwork from which network
analyses were performed, investigating community networks in southern Africa, India and the United
Kingdom.[6] Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that
was influential in later network analysis.[19] In sociology, the early (1930s) work of Talcott Parsons set
the stage for taking a relational approach to understanding social structure.[20][21] Later, drawing upon
Parsons' theory, the work of sociologist Peter Blau provides a strong impetus for analyzing the
relational ties of social units with his work on social exchange theory.[22][23][24] By the 1970s, a growing
number of scholars worked to combine the different tracks and traditions. One group consisted of
sociologist Harrison White and his students at the Harvard University Department of Social
Relations. Also independently active in the Harvard Social Relations department at the time
were Charles Tilly, who focused on networks in political and community sociology and social
movements, and Stanley Milgram, who developed the "six degrees of separation" thesis.[25] Mark
Granovetter[26] and Barry Wellman[27] are among the former students of White who elaborated and
championed the analysis of social networks.[26][28][29][30]
Levels of analysis
Self-organization of a network, based on Nagler, Levina & Timme (2011)[31]
In general, social networks are self-organizing, emergent, and complex, such that a globally
coherent pattern appears from the local interaction of the elements that make up the
system.[32][33] These patterns become more apparent as network size increases. However, a global
network analysis[34] of, for example, all interpersonal relationships in the world is not feasible and is
likely to contain so much information as to be uninformative. Practical limitations of computing power,
ethics and participant recruitment and payment also limit the scope of a social network
analysis.[35][36] The nuances of a local system may be lost in a large network analysis, hence the
quality of information may be more important than its scale for understanding network properties.
Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question.
Although levels of analysis are not necessarily mutually exclusive, there are three general levels into
which networks may fall: micro-level, meso-level, and macro-level.
Micro level
At the micro-level, social network research typically begins with an individual, snowballing as social
relationships are traced, or may begin with a small group of individuals in a particular social context.
Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads
may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and
tendencies toward reciprocity/mutuality.
Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may
concentrate on factors such as balance and transitivity, as well as social equality and tendencies
toward reciprocity/mutuality.[35] In the balance theory of Fritz Heider, the triad is the key to social
dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to
change to a balanced triad by a change in one of the relations. The dynamics of social friendships in
society have been modeled by balancing triads. The study is carried forward with the theory of signed
graphs.
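Heider's balance condition can be sketched in a few lines of Python (an illustrative toy, not tied to any particular SNA library): a triad is balanced exactly when the product of its three relation signs is positive.

```python
# Heider's balance condition for a triad: relations are coded +1 (friendly)
# or -1 (hostile); the triad is balanced when the product of the three
# signs is positive. Illustrative sketch only.
def is_balanced(sign_ab, sign_bc, sign_ca):
    """Return True if the signed triad satisfies structural balance."""
    return sign_ab * sign_bc * sign_ca > 0

# A rivalrous love triangle: A likes B, B likes C, but A and C are rivals.
assert not is_balanced(+1, +1, -1)   # unbalanced, under tension
# "The enemy of my enemy is my friend": two hostilities, one friendship.
assert is_balanced(-1, -1, +1)       # balanced
```

A fully hostile triad (all three signs negative) is also unbalanced, which is why such configurations tend to resolve into two-against-one alignments.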
Actor level: The smallest unit of analysis in a social network is an individual in their social setting,
i.e., an "actor" or "ego". Ego network analysis focuses on network characteristics such as size,
relationship strength, density, centrality, prestige and roles such as isolates, liaisons,
and bridges.[37] Such analyses are most commonly used in the fields of psychology or social
psychology, ethnographic kinship analysis or other genealogical studies of relationships between
individuals.
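Two of the ego-network characteristics named above, size and density, are straightforward to compute. A minimal sketch on a toy adjacency structure (all names hypothetical):

```python
# Illustrative sketch: extract an actor's ego network from an undirected
# graph stored as adjacency sets, then compute its density.
graph = {
    "ego":   {"a", "b", "c"},
    "a":     {"ego", "b"},
    "b":     {"ego", "a"},
    "c":     {"ego"},
    "other": set(),           # not tied to ego, excluded from the ego network
}

def ego_network(graph, ego):
    """Nodes: the ego plus its alters; edges: all ties among those nodes."""
    nodes = {ego} | graph[ego]
    edges = {frozenset((u, v)) for u in nodes for v in graph[u] & nodes}
    return nodes, edges

def density(nodes, edges):
    """Share of possible undirected ties that are actually present."""
    n = len(nodes)
    possible = n * (n - 1) / 2
    return len(edges) / possible if possible else 0.0

nodes, edges = ego_network(graph, "ego")
# 4 nodes allow 6 ties; present: ego-a, ego-b, ego-c, a-b -> density 4/6
```

Here "c" is a pendant contact: removing the ego would disconnect it, which is the kind of role (isolate, liaison, bridge) ego-network analysis looks for.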
Subset level: Subset levels of network research problems begin at the micro-level, but may cross
over into the meso-level of analysis. Subset level research may focus on distance and
reachability, cliques, cohesive subgroups, or other group actions or behavior.[38]
Meso level
In general, meso-level theories begin with a population size that falls between the micro- and macro-
levels. However, meso-level may also refer to analyses that are specifically designed to reveal
connections between micro- and macro-levels. Meso-level networks are low density and may exhibit
causal processes distinct from interpersonal micro-level networks.[39]
Social network diagram, meso-level
Organizations: Formal organizations are social groups that distribute tasks for a
collective goal.[40] Network research on organizations may focus on either intra-organizational or
inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks
themselves often contain multiple levels of analysis, especially in larger organizations with multiple
branches, franchises or semi-autonomous departments. In these cases, research is often conducted
at a workgroup level and organization level, focusing on the interplay between the two structures.[40]
Randomly distributed networks: Exponential random graph models of social networks became
state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to
represent social-structural effects commonly observed in many human social networks, including
general degree-based structural effects as well
as reciprocity and transitivity, and at the node-level, homophily and attribute-based activity and
popularity effects, as derived from explicit hypotheses about dependencies among network
ties. Parameters are given in terms of the prevalence of small subgraph configurations in the
network and can be interpreted as describing the combinations of local social processes from which
a given network emerges. These probability models for networks on a given set of actors allow
generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing
models to be built from theoretical structural foundations of social behavior.[41]
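The "small subgraph configurations" that parameterize such models are concrete counts. A toy sketch of three classic statistics for a directed network, i.e. the sufficient statistics of a very simple exponential random graph model (not a fitted model, just the counting step):

```python
# Count the subgraph configurations behind a minimal directed ERGM:
# edges (density), mutual dyads (reciprocity), transitive triples.
def ergm_statistics(nodes, arcs):
    arcs = set(arcs)
    edges = len(arcs)
    # each reciprocated pair u<->v is found twice, hence the division
    mutual = sum(1 for (u, v) in arcs if (v, u) in arcs) // 2
    # ordered triples u->v, v->w closed by the shortcut u->w
    transitive = sum(
        1
        for (u, v) in arcs
        for w in nodes
        if w not in (u, v) and (v, w) in arcs and (u, w) in arcs
    )
    return {"edges": edges, "mutual": mutual, "transitive": transitive}

nodes = ["a", "b", "c"]
arcs = [("a", "b"), ("b", "a"), ("b", "c"), ("a", "c")]
stats = ergm_statistics(nodes, arcs)
# one reciprocated dyad (a<->b); two transitive triples
# (a->b->c closed by a->c, and b->a->c closed by b->c)
```

In an ERGM these counts enter the model's exponent with one parameter each; the parameter estimates then describe how strongly each local process shapes the observed network.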
Examples of a random network and a scale-free network. Each graph has 32 nodes and 32 links. Note the
"hubs" (shaded) in the scale-free diagram (on the right).
Macro level
Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of
interactions, such as economic or other resource transfer interactions over a large population.
Theoretical links
Imported theories
Various theoretical frameworks have been imported for use in social network analysis. The most
prominent of these are Graph theory, Balance theory, Social comparison theory, and more recently,
the Social identity approach.[45]
Indigenous theories
Few complete theories have been produced from social network analysis. Two that have
are Structural Role Theory and Heterophily Theory.
The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be
important in seeking information and innovation, as cliques have a tendency to have more
homogeneous opinions as well as share many common traits. This homophilic tendency was the
reason for the members of the cliques to be attracted together in the first place. However, being
similar, each member of the clique would also know more or less what the other members knew. To
find new information or insights, members of the clique would have to look beyond the clique to its
other friends and acquaintances. This is what Granovetter called "the strength of weak ties".[46]
Structural holes
In the context of networks, social capital exists where people have an advantage because of their
location in a network. Contacts in a network provide information, opportunities and perspectives that
can be beneficial to the central player in the network. Most social structures tend to be characterized
by dense clusters of strong connections.[47] Information within these clusters tends to be rather
homogeneous and redundant. Non-redundant information is most often obtained through contacts in
different clusters.[48] When two separate clusters possess non-redundant information, there is said to
be a structural hole between them.[48] Thus, a network that bridges structural holes will provide
network benefits that are in some degree additive, rather than overlapping. An ideal network
structure has a vine and cluster structure, providing access to many different clusters and structural
holes.[48]
Networks rich in structural holes are a form of social capital in that they offer information benefits.
The main player in a network that bridges structural holes is able to access information from diverse
sources and clusters.[48] For example, in business networks, this is beneficial to an individual's career
because he is more likely to hear of job openings and opportunities if his network spans a wide
range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory
of weak ties, which rests on the basis that having a broad range of contacts is most effective for job
attainment.
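Redundancy in a contact network can be quantified. A sketch of Burt's "effective size" of an ego network, in its common simplified form for unweighted ties (the number of alters minus their average degree among themselves; names are hypothetical):

```python
# Burt's effective size (simplified, unweighted form): n - 2t/n, where
# n is the number of ego's contacts and t the number of ties among them.
# Contacts in separate clusters keep effective size high (many structural
# holes); a dense clique of contacts is largely redundant.
def effective_size(alters, ties_among_alters):
    """alters: ego's contacts; ties_among_alters: undirected pairs."""
    n = len(alters)
    if n == 0:
        return 0.0
    t = len(ties_among_alters)
    return n - 2 * t / n

# Four contacts who all know each other: highly redundant network.
assert effective_size(["a", "b", "c", "d"],
                      [("a", "b"), ("a", "c"), ("a", "d"),
                       ("b", "c"), ("b", "d"), ("c", "d")]) == 1.0
# Four contacts in four separate clusters: every tie is non-redundant.
assert effective_size(["a", "b", "c", "d"], []) == 4.0
```

The two extremes illustrate the argument above: both egos have four contacts, but only the second ego's network bridges structural holes and so delivers four contacts' worth of non-overlapping information.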
Research clusters
Communications
Communication Studies are often considered a part of both the social sciences and the humanities,
drawing heavily on fields such as sociology, psychology, anthropology, information
science, biology, political science, and economics as well as rhetoric, literary studies, and semiotics.
Many communications concepts describe the transfer of information from one source to another, and
can thus be conceived of in terms of a network.
Community
In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of
community ties had to do with who talked, associated, traded, and attended church with whom.
Today, however, there are extended "online" communities developed
through telecommunications devices and social network services. Such devices and services require
extensive and ongoing maintenance and analysis, often using network science methods. Community
development studies, today, also make extensive use of such methods.
Complex networks
Complex networks require methods specific to modelling and interpreting social
complexity and complex adaptive systems, including techniques of dynamic network analysis.
Mechanisms such as Dual-phase evolution explain how temporal changes in connectivity contribute
to the formation of structure in social networks.
Criminal networks
In criminology and urban sociology, much attention has been paid to the social networks among
criminal actors. For example, Andrew Papachristos[49] has studied gang murders as a series of
exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because
weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other
violent acts to maintain their reputation for strength.
Diffusion of innovations
Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to
another, or from one culture to another. This line of research seeks to explain why some become "early
adopters" of ideas and innovations, and links social network structure with facilitating or impeding the
spread of an innovation.
Demography
In demography, the study of social networks has led to new sampling methods for estimating and
reaching populations that are hard to enumerate (for example, homeless people or intravenous drug
users). For example, respondent-driven sampling is a network-based sampling technique that relies
on respondents to a survey recommending further respondents.
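The referral mechanism can be sketched as a snowball traversal over the hidden population's contact network (a toy illustration with hypothetical names, not the statistical estimator used in real respondent-driven sampling):

```python
from collections import deque

# Sketch of referral-chain (snowball) recruitment: each recruited
# respondent may refer up to `coupons` of their not-yet-sampled contacts.
def snowball_sample(contacts, seeds, coupons=2):
    sampled, queue = set(seeds), deque(seeds)
    while queue:
        person = queue.popleft()
        referrals = 0
        for peer in contacts.get(person, []):
            if peer not in sampled and referrals < coupons:
                sampled.add(peer)
                queue.append(peer)
                referrals += 1
    return sampled

contacts = {
    "seed": ["a", "b", "c"],   # seed knows three peers but holds 2 coupons
    "a": ["d"],
    "b": [],
    "c": ["e"],                # "c" is never recruited, so "e" stays hidden
    "d": [],
}
sample = snowball_sample(contacts, ["seed"], coupons=2)
# reaches seed, a, b, d; the c-e branch remains unsampled
```

The coupon limit is what makes real respondent-driven sampling statistically tractable: it bounds each respondent's influence on the sample, and the recorded referral chains are later reweighted to correct for unequal recruitment probabilities.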
Economic sociology
The field of sociology focuses almost entirely on networks of outcomes of social interactions. More
narrowly, economic sociology considers behavioral interactions of individuals and groups
through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed
core principles about the interactions of social structure, information, ability to punish or reward, and
trust that frequently recur in their analyses of political, economic and other institutions. Granovetter
examines how social structures and social networks can affect economic outcomes like hiring, price,
productivity and innovation and describes sociologists' contributions to analyzing the impact of social
structure and networks on the economy.[50]
Health care
Analysis of social networks is increasingly incorporated into health care analytics, not only
in epidemiological studies but also in models of patient communication and education, disease
prevention, mental health diagnosis and treatment, and in the study of health care organizations
and systems.[51]
Human ecology
Human ecology is an interdisciplinary and transdisciplinary study of the relationship
between humans and their natural, social, and built environments. The scientific philosophy of human
ecology has a diffuse history with connections
to geography, sociology, psychology, anthropology, zoology, and natural ecology.[52][53]
Language and linguistics
Studies of language and linguistics, particularly evolutionary linguistics, focus on the development
of linguistic forms and transfer of changes, sounds or words, from one language system to another
through networks of social interaction. Social networks are also important in language shift, as
groups of people add and/or abandon languages to their repertoire.
Literary networks
In the study of literary systems, network analysis has been applied by Anheier, Gerhards and
Romo,[54] De Nooy,[55] and Senekal,[56] to study various aspects of how literature functions. The basic
premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be
integrated with network theory and the relationships between different actors in the literary network,
e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA.
Organizational studies
This cluster comprises research studies of formal or informal organizational relationships, organizational
communication, economics, economic sociology, and other resource transfers. Social networks have
also been used to examine how organizations interact with each other, characterizing the
many informal connections that link executives together, as well as associations and connections
between individual employees at different organizations.[57] Intra-organizational networks have been
found to affect organizational commitment,[58] organizational identification,[37] and interpersonal citizenship
behaviour.[59]
Social capital
Social capital is a sociological concept which refers to the value of social relations and the role of
cooperation and confidence to achieve positive outcomes. The term refers to the value one can get
from their social ties. For example, newly arrived immigrants can make use of their social ties to
established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of
unfamiliarity with the local language). Studies show that a positive relationship exists between social
capital and the intensity of social network use.[60][61]
Mobility benefits
In many organizations, members tend to focus their activities inside their own groups, which stifles
creativity and restricts opportunities. A player whose network bridges structural holes has an
advantage in detecting and developing rewarding opportunities.[47] Such a player can mobilize social
capital by acting as a "broker" of information between two clusters that otherwise would not have
been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher
and political economist John Stuart Mill writes, "it is hardly possible to overrate the value ... of
placing human beings in contact with persons dissimilar to themselves.... Such communication [is]
one of the primary sources of progress."[62] Thus, a player with a network rich in structural holes can
add value to an organization through new ideas and opportunities. This, in turn, helps an individual's
career development and advancement.
A social capital broker also reaps control benefits of being the facilitator of information flow between
contacts. In the case of consulting firm Eden McCallum, the founders were able to advance their
careers by bridging their connections with former consultants of the big three consulting firms and mid-size
industry firms.[63] By bridging structural holes and mobilizing social capital, players can advance their
careers by executing new opportunities between contacts.
There has been research that both substantiates and refutes the benefits of information brokerage.
A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes
are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot
materialize due to the communal sharing values" of such organizations.[64] However, this study only
analyzed Chinese firms, which tend to have strong communal sharing values. Information and
control benefits of structural holes are still valuable in firms that are not quite as inclusive and
cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply
chain for one of America's largest electronics companies. He found that managers who often
discussed issues with other groups were better paid, received more positive job evaluations and
were more likely to be promoted.[47] Thus, bridging structural holes can be beneficial to an
organization, and in turn, to an individual's career.
Social media
Main article: Social media
Computer networks combined with social networking software produce a new medium for social
interaction. A relationship over a computerized social networking service can be characterized by
content, direction, and strength. The content of a relation refers to the resource that is exchanged. In
a computer mediated communication context, social pairs exchange different kinds of information,
including sending a data file or a computer program as well as providing emotional support or
arranging a meeting. With the rise of electronic commerce, information exchanged may also
correspond to exchanges of money, goods or services in the "real" world.[65] Social network
analysis methods have become essential to examining these types of computer mediated
communication.
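The three dimensions just described, content, direction, and strength, map naturally onto a labeled, directed tie. A minimal data-model sketch (names and values hypothetical):

```python
from dataclasses import dataclass

# A computer-mediated tie characterized by content, direction, and strength,
# as described above; a minimal sketch, not any particular platform's schema.
@dataclass(frozen=True)
class Tie:
    source: str      # direction: who initiates the exchange
    target: str
    content: str     # the resource exchanged (file, support, payment, ...)
    strength: float  # e.g. frequency or volume of exchange

log = [
    Tie("alice", "bob", "data file", 1.0),
    Tie("bob", "alice", "emotional support", 3.0),
    Tie("alice", "carol", "payment", 2.0),
]
# aggregate strength of everything alice sends, regardless of content
out_strength = sum(t.strength for t in log if t.source == "alice")
```

From such a log, the usual SNA measures follow directly: filtering by `content` gives one network per resource type, summing `strength` by `source` or `target` gives weighted out- and in-degree.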
See also
Bibliography of sociology
Business networking
Collective network
International Network for Social Network Analysis
Network society
Network theory
Semiotics of social networking
Scientific collaboration network
Social network analysis
Social network (sociolinguistics)
Social networking service
Social web
Structural fold
Social network
A social network is a social structure made up of a set of social actors (such as individuals or
organizations), sets of dyadic ties, and other social interactions between actors. The social network
perspective provides a set of methods for analyzing the structure of whole social entities as well as a
variety of theories explaining the patterns observed in these structures.[1] The study of these
structures uses social network analysis to identify local and global patterns, locate influential entities,
and examine network dynamics.
Social networks and their analysis constitute an inherently interdisciplinary academic field which
emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored
early structural theories in sociology emphasizing the dynamics of triads and "web of group
affiliations".[2] Jacob Moreno is credited with developing the first sociograms in the 1930s to study
interpersonal relationships. These approaches were mathematically formalized in the 1950s and
theories and methods of social networks became pervasive in the social and behavioral sciences by
the 1980s.[1][3] Social network analysis is now one of the major paradigms in contemporary sociology,
and is also employed in a number of other social and formal sciences. Together with other complex
networks, it forms part of the nascent field of network science.[4][5]
Contents
1 Overview
2 History
3 Levels of analysis
3.1 Micro level
3.2 Meso level
3.3 Macro level
4 Theoretical links
4.1 Imported theories
4.2 Indigenous theories
5 Structural holes
6 Research clusters
6.1 Communications
6.2 Community
6.3 Complex networks
6.4 Criminal networks
6.5 Diffusion of innovations
6.6 Demography
6.7 Economic sociology
6.8 Health care
6.9 Human ecology
6.10 Language and linguistics
6.11 Literary networks
6.12 Organizational studies
6.13 Social capital
6.13.1 Mobility benefits
6.14 Social media
7 See also
8 References
9 Further reading
10 External links
10.1 Organizations
10.2 Peer-reviewed journals
10.3 Textbooks and educational resources
10.4 Data sets
Overview
Evolution graph of a social network: Barabási model.
The social network is a theoretical construct useful in the social sciences to study relationships
between individuals, groups, organizations, or even entire societies (social units, see differentiation).
The term is used to describe a social structure determined by such interactions. The ties through
which any given social unit connects represent the convergence of the various social contacts of that
unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to
understanding social interaction is that social phenomena should be primarily conceived and
investigated through the properties of relations between and within units, instead of the properties of
these units themselves. Thus, one common criticism of social network theory is that individual
agency is often ignored,[6] although this may not be the case in practice (see agent-based modeling).
Precisely because many different types of relations, singular or in combination, form these network
configurations, network analytics are useful to a broad range of research enterprises. In social
science, these fields of study include, but are not limited to, anthropology, biology, communication
studies, economics, geography, information science, organizational studies, social
psychology, sociology, and sociolinguistics.
History
In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social
networks in their theories and research of social groups. Tönnies argued that social groups can exist
as personal and direct social ties that either link individuals who share values and beliefs
(Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and
instrumental social links (Gesellschaft, German, commonly translated as "society").[7] Durkheim gave
a non-individualistic explanation of social facts, arguing that social phenomena arise when
interacting individuals constitute a reality that can no longer be accounted for in terms of the
properties of individual actors.[8] Georg Simmel, writing at the turn of the twentieth century, pointed to
the nature of networks and the effect of network size on interaction and examined the likelihood of
interaction in loosely knit networks rather than groups.[9]
Moreno's sociogram of a 2nd grade class
Theoretical links[edit]
Imported theories[edit]
Various theoretical frameworks have been imported for the use of social network analysis. The most
prominent of these areGraph theory, Balance theory, Social comparison theory, and more recently,
the Social identity approach.[45]
Indigenous theories[edit]
Few complete theories have been produced from social network analysis. Two that have
are Structural Role Theory and Heterophily Theory.
The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be
important in seeking information and innovation, as cliques have a tendency to have more
homogeneous opinions as well as share many common traits. This homophilic tendency was the
reason for the members of the cliques to be attracted together in the first place. However, being
similar, each member of the clique would also know more or less what the other members knew. To
find new information or insights, members of the clique will have to look beyond the clique to its
other friends and acquaintances. This is what Granovetter called "the strength of weak ties".[46]
Structural holes[edit]
In the context of networks, social capital exists where people have an advantage because of their
location in a network. Contacts in a network provide information, opportunities and perspectives that
can be beneficial to the central player in the network. Most social structures tend to be characterized
by dense clusters of strong connections.[47] Information within these clusters tends to be rather
homogeneous and redundant. Non-redundant information is most often obtained through contacts in
different clusters.[48] When two separate clusters possess non-redundant information, there is said to
be a structural hole between them.[48] Thus, a network that bridges structural holes will provide
network benefits that are in some degree additive, rather than overlapping. An ideal network
structure has a vine and cluster structure, providing access to many different clusters and structural
holes.[48]
Networks rich in structural holes are a form of social capital in that they offer information benefits.
The main player in a network that bridges structural holes is able to access information from diverse
sources and clusters.[48] For example, inbusiness networks, this is beneficial to an individual's career
because he is more likely to hear of job openings and opportunities if his network spans a wide
range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory
of weak ties, which rests on the basis that having a broad range of contacts is most effective for job
attainment.
Research clusters[edit]
Communications[edit]
Communication Studies are often considered a part of both the social sciences and the humanities,
drawing heavily on fields such as sociology, psychology, anthropology, information
science, biology, political science, and economics, as well as rhetoric, literary studies, and semiotics.
Many communications concepts describe the transfer of information from one source to another, and
can thus be conceived of in terms of a network.
Community[edit]
In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of
community ties had to do with who talked, associated, traded, and attended church with whom.
Today, however, there are extended "online" communities developed
through telecommunications devices and social network services. Such devices and services require
extensive and ongoing maintenance and analysis, often using network science methods. Community
development studies, today, also make extensive use of such methods.
Complex networks[edit]
Complex networks require methods specific to modelling and interpreting social
complexity and complex adaptive systems, including techniques of dynamic network analysis.
Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute
to the formation of structure in social networks.
Criminal networks[edit]
In criminology and urban sociology, much attention has been paid to the social networks among
criminal actors. For example, Andrew Papachristos[49] has studied gang murders as a series of
exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because
weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other
violent acts to maintain their reputation for strength.
Diffusion of innovations[edit]
Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to
another or from one culture to another. This line of research seeks to explain why some actors become "early
adopters" of ideas and innovations, and links social network structure with facilitating or impeding the
spread of an innovation.
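The spread of an innovation along network ties can be sketched as a simple contagion process: an adopter exposes each contact, who then adopts in the next round. This is an illustrative breadth-first model over an assumed toy graph, not a model from the diffusion literature (which typically adds adoption thresholds or probabilities).

```python
from collections import deque

def diffusion_rounds(adj, seeds):
    """Return the round in which each actor adopts, spreading along ties."""
    adopted = {s: 0 for s in seeds}   # seed actors adopt at round 0
    frontier = deque(seeds)
    while frontier:
        node = frontier.popleft()
        for neighbor in adj[node]:
            if neighbor not in adopted:          # first exposure wins
                adopted[neighbor] = adopted[node] + 1
                frontier.append(neighbor)
    return adopted

# Assumed toy network: a chain of ties from "a" out to "e".
adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b", "e"], "e": ["d"]}
print(diffusion_rounds(adj, ["a"]))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2, 'e': 3}
```

The round numbers make the structural point visible: actors far from the early adopter, or reachable only through sparse ties, adopt late or not at all.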
Demography[edit]
In demography, the study of social networks has led to new sampling methods for estimating and
reaching populations that are hard to enumerate (for example, homeless people or intravenous drug
users). For example, respondent-driven sampling is a network-based sampling technique that relies
on survey respondents to recommend further respondents.
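Respondent-driven sampling can be simulated as a branching recruitment process over the hidden population's contact network. The sketch below is illustrative only: the coupon count, target size, and contact graph are assumptions, and real RDS additionally weights estimates by each respondent's reported network size.

```python
import random

def respondent_driven_sample(adj, seed, coupons=2, target=6):
    """Simulate RDS: each respondent passes up to `coupons` recruitment
    coupons to contacts who are not yet in the sample."""
    sample = [seed]
    queue = [seed]
    while queue and len(sample) < target:
        respondent = queue.pop(0)
        candidates = [c for c in adj[respondent] if c not in sample]
        recruits = random.sample(candidates, min(coupons, len(candidates)))
        for recruit in recruits:
            if len(sample) >= target:
                break
            sample.append(recruit)
            queue.append(recruit)
    return sample

# An assumed contact structure for a small hidden population.
contacts = {
    "s": ["a", "b", "c"], "a": ["s", "d"], "b": ["s", "d", "e"],
    "c": ["s"], "d": ["a", "b", "f"], "e": ["b"], "f": ["d"],
}
random.seed(1)  # deterministic run for illustration
print(respondent_driven_sample(contacts, "s"))
```

Starting from a single seed, recruitment chains reach members of the population whom no sampling frame could have listed directly.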
Economic sociology[edit]
The field of sociology focuses almost entirely on networks as outcomes of social interactions. More
narrowly, economic sociology considers behavioral interactions of individuals and groups
through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed
core principles about the interactions of social structure, information, ability to punish or reward, and
trust that frequently recur in their analyses of political, economic and other institutions. Granovetter
examines how social structures and social networks can affect economic outcomes like hiring, price,
productivity and innovation and describes sociologists' contributions to analyzing the impact of social
structure and networks on the economy.[50]
Health care[edit]
Analysis of social networks is increasingly incorporated into health care analytics, not only
in epidemiological studies but also in models of patient communication and education, disease
prevention, mental health diagnosis and treatment, and in the study of health care organizations
and systems.[51]
Human ecology[edit]
Human ecology is an interdisciplinary and transdisciplinary study of the relationship
between humans and their natural, social, and built environments. The scientific philosophy of human
ecology has a diffuse history with connections
to geography, sociology, psychology, anthropology, zoology, and natural ecology.[52][53]
Organizational studies[edit]
Research in this cluster studies formal and informal organizational relationships, organizational
communication, economics, economic sociology, and other resource transfers. Social networks have
also been used to examine how organizations interact with each other, characterizing the
many informal connections that link executives together, as well as associations and connections
between individual employees at different organizations.[57] Intra-organizational networks have been
found to affect organizational commitment,[58] organizational identification,[37] and interpersonal citizenship
behaviour.[59]
Social capital[edit]
Social capital is a sociological concept which refers to the value of social relations and the role of
cooperation and confidence to achieve positive outcomes. The term refers to the value one can get
from their social ties. For example, newly arrived immigrants can make use of their social ties to
established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of
unfamiliarity with the local language). Studies show that a positive relationship exists between social
capital and the intensity of social network use.[60][61]
Mobility benefits[edit]
In many organizations, members tend to focus their activities inside their own groups, which stifles
creativity and restricts opportunities. A player whose network bridges structural holes has an
advantage in detecting and developing rewarding opportunities.[47] Such a player can mobilize social
capital by acting as a "broker" of information between two clusters that otherwise would not have
been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher
and political economist John Stuart Mill writes, "it is hardly possible to overrate the value ... of
placing human beings in contact with persons dissimilar to themselves.... Such communication [is]
one of the primary sources of progress."[62] Thus, a player with a network rich in structural holes can
add value to an organization through new ideas and opportunities. This, in turn, helps an individual's
career development and advancement.
A social capital broker also reaps control benefits of being the facilitator of information flow between
contacts. In the case of the consulting firm Eden McCallum, the founders were able to advance their
careers by bridging their connections with former consultants of the big three consulting firms and
with mid-size industry firms.[63] By bridging structural holes and mobilizing social capital, players can advance their
careers by executing new opportunities between contacts.
There has been research that both substantiates and refutes the benefits of information brokerage.
A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes
are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot
materialize due to the communal sharing values" of such organizations.[64] However, this study only
analyzed Chinese firms, which tend to have strong communal sharing values. Information and
control benefits of structural holes are still valuable in firms that are not quite as inclusive and
cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply
chain for one of America's largest electronics companies. He found that managers who often
discussed issues with other groups were better paid, received more positive job evaluations and
were more likely to be promoted.[47] Thus, bridging structural holes can be beneficial to an
organization, and in turn, to an individual's career.
Social media[edit]
Main article: Social media
Computer networks combined with social networking software produce a new medium for social
interaction. A relationship over a computerized social networking service can be characterized by
content, direction, and strength. The content of a relation refers to the resource that is exchanged. In
a computer-mediated communication context, social pairs exchange different kinds of information,
including sending a data file or a computer program as well as providing emotional support or
arranging a meeting. With the rise of electronic commerce, information exchanged may also
correspond to exchanges of money, goods or services in the "real" world.[65] Social network
analysis methods have become essential to examining these types of computer-mediated
communication.
See also[edit]
Bibliography of sociology
Business networking
Collective network
International Network for Social Network Analysis
Network society
Network theory
Semiotics of social networking
Scientific collaboration network
Social network analysis
Social network (sociolinguistics)
Social networking service
Social web
Structural fold
References[edit]
1. Wasserman, Stanley; Faust, Katherine (1994). "Social Network Analysis in the Social
and Behavioral Sciences". Social Network Analysis: Methods and Applications. Cambridge University
Press. pp. 1–27. ISBN 9780521387071.
2. Scott, W. Richard; Davis, Gerald F. (2003). "Networks In and Around
Organizations". Organizations and Organizing. Pearson Prentice Hall. ISBN 0-13-195893-3.
3. Freeman, Linton (2004). The Development of Soc