
What Is Web 2.0?

Web 2.0

The term "Web 2.0" refers to a perceived second generation of web development and
design, that aims to facilitate communication, secure information sharing, interoperability,
and collaboration on the World Wide Web. Web 2.0 concepts have led to the development
and evolution of web-based communities, hosted services, and applications; such as social-
networking sites, video-sharing sites, wikis, blogs, and folksonomies.

The term was first used by Dale Dougherty and Craig Cline and became notable
after the first O'Reilly Media Web 2.0 conference in 2004.[1][2] Although the term suggests a new
version of the World Wide Web, it does not refer to an update to any technical
specifications, but rather to changes in the ways software developers and end-users utilize
the Web. According to Tim O'Reilly:

"Web 2.0 is the business revolution in the computer industry caused by the move to
the Internet as a platform, and an attempt to understand the rules for success on
that new platform."[3]

O'Reilly has noted that the "2.0" refers to the historical context of web businesses "coming
back" after the 2001 collapse of the dot-com bubble, in addition to the distinguishing
characteristics of the projects that survived the bust or thrived thereafter.[4]

Tim Berners-Lee, inventor of the World Wide Web, has questioned whether one can use the
term in any meaningful way, since many of the technological components of Web 2.0 have
existed since the early days of the Web.[5][6]

Definition

Web 2.0 encapsulates the idea of the proliferation of interconnectivity and interactivity of
web-delivered content. Tim O'Reilly regards Web 2.0 as the way that business embraces the
strengths of the web and uses it as a platform. O'Reilly considers that Eric Schmidt's
abridged slogan, "don't fight the Internet", encompasses the essence of Web 2.0: building
applications and services around the unique features of the Internet, as opposed to
expecting the Internet to adapt to the application (effectively "fighting the Internet").

In the opening talk of the first Web 2.0 conference, O'Reilly and John Battelle summarized
what they saw as the themes of Web 2.0. They argued that the web had become a
platform, with software above the level of a single device, leveraging the power of "The
Long Tail," and with data as a driving force. According to O'Reilly and Battelle, an
architecture of participation where users can contribute website content creates network
effects. Web 2.0 technologies tend to foster innovation in the assembly of systems and sites
composed by pulling together features from distributed, independent developers. (This
could be seen as a kind of "open source" or possibly "Agile" development process,
consistent with an end to the traditional software adoption cycle, typified by the so-called
"perpetual beta".)

Web 2.0 technology encourages lightweight business models enabled by syndication of
content and of services, and by ease of pick-up by early adopters.[7]

O'Reilly provided examples of companies and products that embody these principles in his
four-level hierarchy of Web 2.0 sites:

• Level-3 applications, the most "Web 2.0"-oriented, exist only on the Internet,
deriving their effectiveness from the inter-human connections and from the network
effects that Web 2.0 makes possible, and growing in effectiveness in proportion as
people make more use of them. O'Reilly gave eBay, Craigslist, Wikipedia, del.icio.us,
Skype, dodgeball, and AdSense as examples.
• Level-2 applications can operate offline but gain advantages from going online.
O'Reilly cited Flickr, which benefits from its shared photo-database and from its
community-generated tag database.
• Level-1 applications operate offline but gain features online. O'Reilly pointed to
Writely (now Google Docs & Spreadsheets) and iTunes (because of its music-store
portion).
• Level-0 applications work as well offline as online. O'Reilly gave the examples of
MapQuest, Yahoo! Local, and Google Maps (mapping applications that use contributions
from users to advantage could rank as "level 2", like Google Earth).

Folksonomy
Not to be confused with folk taxonomy.

Folksonomy (also known as collaborative tagging, social classification, social
indexing, and social tagging) is the practice and method of collaboratively creating and
managing tags to annotate and categorize content. Folksonomy describes the bottom-up
classification systems that emerge from social tagging.[1] In contrast to traditional subject
indexing, metadata is generated not only by experts but also by creators and consumers of
the content. Usually, freely chosen keywords are used instead of a controlled vocabulary.[2]
Folksonomy (from folk + taxonomy) is a user-generated taxonomy.

Folksonomies became popular on the Web around 2004 as part of social software
applications including social bookmarking and annotating photographs. Tagging, which is
characteristic of Web 2.0 services, allows non-expert users to collectively classify and find
information. Some websites include tag clouds as a way to visualize tags in a folksonomy.

Typically, folksonomies are Internet-based, although they are also used in other contexts.
Aggregating the tags of many users creates a folksonomy.[1] Aggregation is the pulling
together of all of the tags in an automated way.[1] Folksonomic tagging is intended to make
a body of information increasingly easy to search, discover, and navigate over time. A well-
developed folksonomy is ideally accessible as a shared vocabulary that is both originated
by, and familiar to, its primary users. Two widely cited examples of websites using
folksonomic tagging are Flickr and Delicious, although Flickr may not be a good example of
folksonomy.[3]
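
To make the aggregation step concrete, here is a minimal sketch in TypeScript (the Tagging shape and function names are illustrative, not drawn from any particular site): it pools the freely chosen tags of many users into per-tag counts, the raw material for a tag cloud:

```typescript
// One user's tagging of one item, e.g. a bookmark or a photo.
interface Tagging {
  user: string;
  item: string;
  tags: string[];
}

// Aggregate all users' tags into a folksonomy: a count of how many
// times each tag was applied to each item.
function aggregate(taggings: Tagging[]): Map<string, Map<string, number>> {
  const folksonomy = new Map<string, Map<string, number>>();
  for (const { item, tags } of taggings) {
    const counts = folksonomy.get(item) ?? new Map<string, number>();
    for (const tag of tags) {
      counts.set(tag, (counts.get(tag) ?? 0) + 1);
    }
    folksonomy.set(item, counts);
  }
  return folksonomy;
}

// Example: three users tag the same article with overlapping keywords.
const taggings: Tagging[] = [
  { user: "alice", item: "article-42", tags: ["web2.0", "tagging"] },
  { user: "bob",   item: "article-42", tags: ["web2.0", "folksonomy"] },
  { user: "carol", item: "article-42", tags: ["tagging"] },
];
console.log(aggregate(taggings).get("article-42"));
// Map { "web2.0" => 2, "tagging" => 2, "folksonomy" => 1 }
```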

As folksonomies develop in Internet-mediated social environments, users can discover who
used a given tag and see the other tags that this person has used. In this way, folksonomy
users can discover the tag sets of another user who tends to interpret and tag content in a
way that makes sense to them. The result can be a rewarding gain in the user's capacity to
find related content (a practice known as "pivot browsing"). Part of the appeal of
folksonomy is its inherent subversiveness: set against the search tools that Web sites
provide, folksonomies can be seen as a rejection of the search engine status quo in favor
of tools created by the community.
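
Pivot browsing can be sketched by extending the example above (it reuses the illustrative Tagging records): from a tag, find the users who applied it; from a user, collect everything they tag with.

```typescript
// Who has applied a given tag anywhere?
function usersForTag(taggings: Tagging[], tag: string): string[] {
  return taggings.filter(t => t.tags.includes(tag)).map(t => t.user);
}

// Everything a given user tags with, across all items.
function tagsForUser(taggings: Tagging[], user: string): Set<string> {
  return new Set(taggings.filter(t => t.user === user).flatMap(t => t.tags));
}

// Pivot: from the tag "tagging" to the users who applied it,
// then on to each of those users' full tag sets.
for (const user of usersForTag(taggings, "tagging")) {
  console.log(user, [...tagsForUser(taggings, user)]);
}
```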

Folksonomy creation and searching tools are not part of the underlying World Wide Web
protocols. Folksonomies arise in Web-based communities where provisions are made at the
site level for creating and using tags. These communities are established to enable Web
users to label and share user-generated content, such as photographs, or to collaboratively
label existing content, such as Web sites, books, works in the scientific and scholarly
literatures, and blog entries.

Software as a service

Software as a Service (SaaS, typically pronounced 'sass') is a model of software
deployment in which an application is licensed for use as a service provided to customers
on demand. On-demand licensing and use relieves the customer of the burden of equipping
every device with every application. It also reduces the complexity of traditional End User
License Agreement (EULA) software maintenance, ongoing operational patches, and patch
support within an organization. On-demand licensing turns software into a variable
expense, rather than a fixed cost at the time of purchase. It also enables licensing only the
amount of software needed, versus traditional per-device licenses. SaaS also enables the
buyer to share licenses across and between organizations, reducing the cost of acquiring
EULAs for every device in the firm.

Using SaaS can also conceivably reduce the up-front expense of software purchases,
through less costly, on-demand pricing from hosting service providers. SaaS lets software
vendors control and limit use, prohibit copying and distribution, and facilitate control of
all derivative versions of their software. SaaS centralized control often allows the vendor or
supplier to establish an ongoing revenue stream with multiple businesses and users without
preloading software on each device in an organization. The SaaS vendor may host the
application on its own web server, or download the application to the consumer device and
disable it after use or after the on-demand contract expires. The on-demand function may
be handled internally, to share licenses within a firm, or by a third-party application service
provider (ASP) sharing licenses between firms. This sharing of end-user licenses and on-
demand use may also reduce investment in server hardware, or shift server use to
SaaS suppliers of application file services.

Key characteristics

The key characteristics of SaaS software, according to IDC, include:[3]

• network-based access to, and management of, commercially available software;
• activities managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the Web;
• application delivery that is typically closer to a one-to-many model (single-instance, multi-tenant architecture; see the sketch below) than to a one-to-one model, including architecture, pricing, partnering, and management characteristics;
• centralized feature updating, which obviates the need for downloadable patches and upgrades.

SaaS is often used in a larger network of communicating software, either as part of a
mashup or as a plugin to a platform as a service. Service-oriented architecture is
naturally more complex than traditional models of software deployment.
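
As a hedged illustration of the one-to-many, single-instance multi-tenant model, the sketch below (tenant names, hostname scheme, and feature flags are all invented) shows one running application serving several customer organizations, with each request routed to a tenant by its subdomain and feature changes applied centrally rather than patched per device:

```typescript
// One application instance serves many customer organizations (tenants).
// Each tenant gets its own slice of data, not its own deployment.
interface Tenant {
  id: string;
  name: string;
  features: Set<string>; // centrally managed feature flags
}

const tenants = new Map<string, Tenant>([
  ["acme",   { id: "acme",   name: "Acme Corp",   features: new Set(["reports"]) }],
  ["globex", { id: "globex", name: "Globex Inc.", features: new Set<string>() }],
]);

// Resolve the tenant from the subdomain, e.g. acme.example-saas.com.
function tenantForHost(host: string): Tenant | undefined {
  return tenants.get(host.split(".")[0]);
}

// Every request is scoped to the resolved tenant; an upgrade to the single
// shared codebase reaches all tenants at once (no patches to distribute).
function handleRequest(host: string, path: string): string {
  const tenant = tenantForHost(host);
  if (!tenant) return "404: unknown tenant";
  if (path === "/reports" && !tenant.features.has("reports")) {
    return "403: feature not enabled for " + tenant.name;
  }
  return `200: serving ${path} for ${tenant.name}`;
}

console.log(handleRequest("acme.example-saas.com", "/reports"));   // 200
console.log(handleRequest("globex.example-saas.com", "/reports")); // 403
```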

Data and Web 2.0

Every significant internet application to date has been backed by a specialized database:
Google's web crawl, Yahoo!'s directory (and web crawl), Amazon's database of products,
eBay's database of products and sellers, MapQuest's map databases, Napster's
distributed song database. As Hal Varian remarked in a personal conversation last year,
"SQL is the new HTML." Database management is a core competency of Web 2.0
companies, so much so that we have sometimes referred to these applications as
"infoware" rather than merely software.

This fact leads to a key question: Who owns the data?

In the internet era, one can already see a number of cases where control over the
database has led to market control and outsized financial returns. The monopoly on
domain name registry initially granted by government fiat to Network Solutions (later
purchased by Verisign) was one of the first great moneymakers of the internet. While
we've argued that business advantage via controlling software APIs is much more
difficult in the age of the internet, control of key data sources is not, especially if those
data sources are expensive to create or amenable to increasing returns via network
effects.

Look at the copyright notices at the base of every map served by MapQuest,
maps.yahoo.com, maps.msn.com, or maps.google.com, and you'll see the line "Maps
copyright NavTeq, TeleAtlas," or with the new satellite imagery services, "Images
copyright Digital Globe." These companies made substantial investments in their
databases (NavTeq alone reportedly invested $750 million to build their database of
street addresses and directions. Digital Globe spent $500 million to launch their own
satellite to improve on government-supplied imagery.) NavTeq has gone so far as to
imitate Intel's familiar Intel Inside logo: Cars with navigation systems bear the imprint,
"NavTeq Onboard." Data is indeed the Intel Inside of these applications, a sole source
component in systems whose software infrastructure is largely open source or otherwise
commodified.

The now hotly contested web mapping arena demonstrates how a failure to understand
the importance of owning an application's core data will eventually undercut its
competitive position. MapQuest pioneered the web mapping category in 1995, yet when
Yahoo!, and then Microsoft, and most recently Google, decided to enter the market, they
were easily able to offer a competing application simply by licensing the same data.

Contrast, however, the position of Amazon.com. Like competitors such as
Barnesandnoble.com, its original database came from ISBN registry provider R.R.
Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-
supplied data such as cover images, table of contents, index, and sample material. Even
more importantly, they harnessed their users to annotate the data, such that after ten
years, Amazon, not Bowker, is the primary source for bibliographic data on books, a
reference source for scholars and librarians as well as consumers. Amazon also
introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN
where one is present, and creates an equivalent namespace for products without one.
Effectively, Amazon "embraced and extended" their data suppliers.
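
A rough sketch of that "embrace and extend" pattern for identifiers (the proprietary format shown is invented; only the reuse-the-ISBN rule comes from the description above): the proprietary key reuses the public namespace where it exists and mints its own elsewhere.

```typescript
interface Product {
  title: string;
  isbn?: string; // present for books, absent for most other products
}

let nextSerial = 0;

// Assign a catalog identifier: reuse the public ISBN namespace when a
// product has one, otherwise mint a proprietary identifier covering the
// rest of the catalog. (The "X"-prefixed format here is made up.)
function assignId(product: Product): string {
  if (product.isbn) return product.isbn;
  return "X" + String(nextSerial++).padStart(9, "0");
}

console.log(assignId({ title: "Design Patterns", isbn: "0201633612" })); // 0201633612
console.log(assignId({ title: "Garden Hose" }));                         // X000000000
```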

Imagine if MapQuest had done the same thing, harnessing their users to annotate maps
and directions, adding layers of value. It would have been much more difficult for
competitors to enter the market just by licensing the base data.

The recent introduction of Google Maps provides a living laboratory for the competition
between application vendors and their data suppliers. Google's lightweight programming
model has led to the creation of numerous value-added services in the form of mashups
that link Google Maps with other internet-accessible data sources. Paul Rademacher's
housingmaps.com, which combines Google Maps with Craigslist apartment rental and
home purchase data to create an interactive housing search tool, is the pre-eminent
example of such a mashup.
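
The mechanics of such a mashup reduce to a join between two services. In the sketch below, the Listing shape and the addMarker call are stand-ins for whatever the classifieds feed and the mapping API actually expose; the point is how little glue code the lightweight model demands:

```typescript
// A rental listing pulled from one source (e.g. a classifieds feed).
interface Listing {
  address: string;
  price: number;
  lat: number;
  lon: number;
}

// A stand-in for the mapping API; real APIs differ, the shape is the point.
interface MapWidget {
  addMarker(lat: number, lon: number, label: string): void;
}

// The mashup: take listings from one service, render them on a map
// from another, filtered by the user's budget.
function showListings(map: MapWidget, listings: Listing[], maxPrice: number): void {
  for (const l of listings) {
    if (l.price <= maxPrice) {
      map.addMarker(l.lat, l.lon, `${l.address}: $${l.price}/mo`);
    }
  }
}

// Minimal usage with a console-backed map widget.
const consoleMap: MapWidget = {
  addMarker: (lat, lon, label) => console.log(`pin @ (${lat}, ${lon}): ${label}`),
};
showListings(consoleMap, [
  { address: "12 Oak St", price: 1400, lat: 37.77, lon: -122.42 },
  { address: "9 Elm Ave", price: 2600, lat: 37.80, lon: -122.41 },
], 2000);
```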

At present, these mashups are mostly innovative experiments, done by hackers. But
entrepreneurial activity follows close behind. And already, one can see that for at least
one class of developer, Google has taken the role of data source away from Navteq and
inserted themselves as a favored intermediary. We expect to see battles between data
suppliers and application vendors in the next few years, as both realize just how
important certain classes of data will become as building blocks for Web 2.0 applications.

The race is on to own certain classes of core data: location, identity, calendaring of
public events, product identifiers and namespaces. In many cases, where there is
significant cost to create the data, there may be an opportunity for an Intel Inside style
play, with a single source for the data. In others, the winner will be the company that
first reaches critical mass via user aggregation, and turns that aggregated data into a
system service.

For example, in the area of identity, PayPal, Amazon's 1-click, and the millions of users
of communications systems may all be legitimate contenders to build a network-wide
identity database. (In this regard, Google's recent attempt to use cell phone numbers as
an identifier for Gmail accounts may be a step towards embracing and extending the
phone system.) Meanwhile, startups like Sxip are exploring the potential of federated
identity, in quest of a kind of "distributed 1-click" that will provide a seamless Web 2.0
identity subsystem. In the area of calendaring, EVDB is an attempt to build the world's
largest shared calendar via a wiki-style architecture of participation. While the jury's still
out on the success of any particular startup or approach, it's clear that standards and
solutions in these areas, effectively turning certain classes of data into reliable
subsystems of the "internet operating system", will enable the next generation of
applications.

A further point must be noted with regard to data, and that is user concerns about
privacy and their rights to their own data. In many of the early web applications,
copyright is only loosely enforced. For example, Amazon lays claim to any reviews
submitted to the site, but in the absence of enforcement, people may repost the same
review elsewhere. However, as companies begin to realize that control over data may be
their chief source of competitive advantage, we may see heightened attempts at control.

Much as the rise of proprietary software led to the Free Software movement, we expect
the rise of proprietary databases to result in a Free Data movement within the next
decade. One can see early signs of this countervailing trend in open data projects such
as Wikipedia, the Creative Commons, and in software projects like Greasemonkey, which
allow users to take control of how data is displayed on their computer.

Rich User Experiences

As early as Pei Wei's Viola browser in 1992, the web was being used to deliver "applets" and
other kinds of active content within the web browser. Java's introduction in 1995 was
framed around the delivery of such applets. JavaScript and then DHTML were introduced as
lightweight ways to provide client side programmability and richer user experiences. Several
years ago, Macromedia coined the term "Rich Internet Applications" (which has also been
picked up by open source Flash competitor Laszlo Systems) to highlight the capabilities of
Flash to deliver not just multimedia content but also GUI-style application experiences.

However, the potential of the web to deliver full scale applications didn't hit the mainstream
till Google introduced Gmail, quickly followed by Google Maps, web based applications with
rich user interfaces and PC-equivalent interactivity. The collection of technologies used by
Google was christened AJAX, in a seminal essay by Jesse James Garrett of web design firm
Adaptive Path. He wrote:

"Ajax isn't a technology. It's really several technologies, each flourishing in its own right,
coming together in powerful new ways. Ajax incorporates:

• standards-based presentation using XHTML and CSS;
• dynamic display and interaction using the Document Object Model;
• data interchange and manipulation using XML and XSLT;
• asynchronous data retrieval using XMLHttpRequest;
• and JavaScript binding everything together."
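
To make the asynchronous-retrieval piece concrete, here is a minimal browser-side sketch in TypeScript; the URL and element id are placeholders, not taken from any real application. The page requests data in the background and patches a single DOM element when the response arrives, with no full-page reload:

```typescript
// Fetch data in the background and update the page when it arrives;
// the full page never reloads. (URL and element id are placeholders.)
function loadMessages(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/messages.xml", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Dynamic display via the DOM: only one element changes.
      const target = document.getElementById("inbox");
      if (target && xhr.responseXML) {
        const subjects = xhr.responseXML.getElementsByTagName("subject");
        target.textContent = `${subjects.length} messages`;
      }
    }
  };
  xhr.send();
}

loadMessages();
```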

AJAX is also a key component of Web 2.0 applications such as Flickr, now part of Yahoo!,
37signals' applications Basecamp and Backpack, as well as other Google applications such
as Gmail and Orkut. We're entering an unprecedented period of user interface innovation,
as web developers are finally able to build web applications as rich as local PC-based
applications.

Interestingly, many of the capabilities now being explored have been around for many
years. In the late '90s, both Microsoft and Netscape had a vision of the kind of capabilities
that are now finally being realized, but their battle over the standards to be used made
cross-browser applications difficult. It was only when Microsoft definitively won the browser
wars, and there was a single de-facto browser standard to write to, that this kind of
application became possible. And while Firefox has reintroduced competition to the browser
market, at least so far we haven't seen the destructive competition over web standards that
held back progress in the '90s.

We expect to see many new web applications over the next few years, both truly novel
applications, and rich web reimplementations of PC applications. Every platform change to
date has also created opportunities for a leadership change in the dominant applications of
the previous platform.

Gmail has already provided some interesting innovations in email, combining the strengths
of the web (accessible from anywhere, deep database competencies, searchability) with
user interfaces that approach PC interfaces in usability. Meanwhile, other mail clients on the
PC platform are nibbling away at the problem from the other end, adding IM and presence
capabilities. How far are we from an integrated communications client combining the best of
email, IM, and the cell phone, using VoIP to add voice capabilities to the rich capabilities of
web applications? The race is on.

It's easy to see how Web 2.0 will also remake the address book. A Web 2.0-style address
book would treat the local address book on the PC or phone merely as a cache of the
contacts you've explicitly asked the system to remember. Meanwhile, a web-based
synchronization agent, Gmail-style, would remember every message sent or received, every
email address and every phone number used, and build social networking heuristics to
decide which ones to offer up as alternatives when an answer wasn't found in the local
cache. Lacking an answer there, the system would query the broader social network.
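
A loose sketch of the lookup order just described, with invented names and message frequency standing in for a real social-networking heuristic: the explicit local book is consulted first, then the mail history, then the broader network.

```typescript
interface Contact { name: string; email: string; }

// Rank addresses seen in mail history by how often they appear;
// frequency of correspondence stands in for a richer social heuristic.
function suggestFromHistory(history: string[], query: string): string[] {
  const counts = new Map<string, number>();
  for (const addr of history) {
    counts.set(addr, (counts.get(addr) ?? 0) + 1);
  }
  return [...counts.entries()]
    .filter(([addr]) => addr.includes(query))
    .sort((a, b) => b[1] - a[1])
    .map(([addr]) => addr);
}

// Tiered lookup: explicit local address book, then mail history,
// then (hypothetically) a query against the broader social network.
function lookup(
  query: string,
  localBook: Contact[],
  mailHistory: string[],
  networkSearch: (q: string) => string[],
): string[] {
  const local = localBook.filter(c => c.name.includes(query)).map(c => c.email);
  if (local.length > 0) return local;
  const fromHistory = suggestFromHistory(mailHistory, query);
  if (fromHistory.length > 0) return fromHistory;
  return networkSearch(query);
}

const history = ["pat@example.com", "pat@example.com", "sam@example.com"];
console.log(lookup("pat", [], history, () => [])); // ["pat@example.com"]
```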

A Web 2.0 word processor would support wiki-style collaborative editing, not just
standalone documents. But it would also support the rich formatting we've come to expect
in PC-based word processors. Writely is a good example of such an application, although it
hasn't yet gained wide traction.

Nor will the Web 2.0 revolution be limited to PC applications. Salesforce.com demonstrates
how the web can be used to deliver software as a service, in enterprise scale applications
such as CRM.

The competitive opportunity for new entrants is to fully embrace the potential of Web 2.0.
Companies that succeed will create applications that learn from their users, using an
architecture of participation to build a commanding advantage not just in the software
interface, but in the richness of the shared data.

Social network service

Not to be confused with social network.

A social network service focuses on building online communities of people who share
interests and/or activities, or who are interested in exploring the interests and activities of
others. Most social network services are web based and provide a variety of ways for users
to interact, such as e-mail and instant messaging services.

Social networking has created new ways to communicate and share information. Social
networking websites are being used regularly by millions of people, and it now seems that
social networking will be an enduring part of everyday life. The main types of social
networking services are those which contain directories of some categories (such as former
classmates), means to connect with friends (usually with self-description pages), and
recommender systems linked to trust. Popular methods now combine many of these, with
MySpace and Facebook being the most widely used in North America;[1] Nexopia (mostly in
Canada);[2] Bebo,[3] Facebook, Hi5, MySpace, Tagged, Xing;[4] and Skyrock in parts of
Europe;[5] Orkut and Hi5 in South America and Central America;[6] and Friendster, Orkut,
Xiaonei and Cyworld in Asia and the Pacific Islands.

There have been some attempts to standardize these services to avoid the need to duplicate
entries of friends and interests (see the FOAF standard and the Open Source Initiative), but
this has led to some concerns about privacy.

Social networks for social good

Several websites are beginning to tap into the power of the social networking model for
social good. Such models can be highly successful at connecting otherwise fragmented
industries and small organizations that lack the resources to reach a broader audience of
interested and passionate users. Users benefit by interacting with a like-minded community
and finding a channel for their energy and giving.[28] Examples include SixDegrees.org,
TakingITGlobal, G21.com, BabelUp, Care2, Change.org, Gather.org, Idealist.org,
OneWorldWiki, TakePart.com and Network for Good. The charity badge is often used in
this context.

Typical structure of a social networking service

Basics

In general, social networking services allow users to create a profile for themselves, and can
be broken down into two broad categories: internal social networking (ISN)[36] and external
social networking (ESN)[37] sites, such as Orkut, MySpace, Facebook and Bebo. Both types
can increase the feeling of community among people. An ISN is a closed/private community
consisting of a group of people within a company, association, society, education provider
or other organization, or even an "invite only" group created by a user in an ESN. An ESN is
open/public, available to all web users to communicate, and designed to attract
advertisers. ESNs can be smaller specialised communities linked by a single common
interest (e.g. TheSocialGolfer, ACountryLife.Com, Great Cooks Community) or large
generic social networking sites (e.g. MySpace, Facebook).

However, whether specialised or generic, there is commonality across the general approach
of social networking sites. Users can upload a picture of themselves, create their 'profile'
and can often be "friends" with other users. In most social networking services, both users
must confirm that they are friends before they are linked. For example, if Alice lists Bob as a
friend, then Bob would have to approve Alice's friend request before they are listed as
friends. Some social networking sites have a "favorites" feature that does not need approval
from the other user. Social networks usually have privacy controls that allow users to
choose who can view their profile or contact them.
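
The mutual-confirmation rule is straightforward to express in code. A minimal sketch with invented names: a request becomes a symmetric friendship only once the second user approves, while a "favorites" link is a one-way edge needing no approval.

```typescript
class SocialGraph {
  private pending = new Set<string>();   // "alice->bob": awaiting approval
  private friends = new Set<string>();   // symmetric: stored in both orders
  private favorites = new Set<string>(); // one-way, no approval needed

  request(from: string, to: string): void {
    this.pending.add(`${from}->${to}`);
  }

  // Approval turns a pending request into a mutual friendship.
  approve(from: string, to: string): boolean {
    if (!this.pending.delete(`${from}->${to}`)) return false;
    this.friends.add(`${from}->${to}`);
    this.friends.add(`${to}->${from}`);
    return true;
  }

  areFriends(a: string, b: string): boolean {
    return this.friends.has(`${a}->${b}`);
  }

  // "Favorites"-style links are asymmetric and immediate.
  favorite(from: string, to: string): void {
    this.favorites.add(`${from}->${to}`);
  }
}

const g = new SocialGraph();
g.request("alice", "bob");
console.log(g.areFriends("alice", "bob")); // false until Bob approves
g.approve("alice", "bob");
console.log(g.areFriends("bob", "alice")); // true: friendship is mutual
```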

Some social networking sites are created for the benefit of particular groups, such as the
parents' social networking site Gurgle, a website where parents discuss pregnancy, birth
and bringing up children.

Several social networks in Asian markets such as India, China, Japan and Korea have
reached not only high usage but also a high level of profitability. Services such as QQ
(China), Mixi (Japan), Cyworld (Korea) or the mobile-focused service Mobile Game Town by
the company DeNA in Japan (which has over 10 million users) are all profitable, setting
them apart from their western counterparts.[38]
