
Research

Publication Date: 2 August 2010 ID Number: G00205757



© 2010 Gartner, Inc. and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication in any form
without prior written permission is forbidden. The information contained herein has been obtained from sources believed to
be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although
Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal
advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors,
omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein
are subject to change without notice.
Hype Cycle for Emerging Technologies, 2010
Jackie Fenn
High-impact technologies at the Peak of Inflated Expectations during 2010 include
private cloud computing, augmented reality, media tablets (such as the iPad), wireless
power, 3D flat-panel TVs and displays, and activity streams.

TABLE OF CONTENTS

Analysis
    What You Need to Know
    The Hype Cycle
    The Priority Matrix
    Off the Hype Cycle
    On the Rise
        Human Augmentation
        Context Delivery Architecture
        Computer-Brain Interface
        Terahertz Waves
        Tangible User Interfaces
        Extreme Transaction Processing
        Autonomous Vehicles
        Video Search
        Mobile Robots
        Social Analytics
        3D Printing
        Speech-to-Speech Translation
    At the Peak
        Internet TV
        Private Cloud Computing
        Augmented Reality
        Media Tablet
        Wireless Power
        3D Flat-Panel TVs and Displays
        4G Standard
        Activity Streams
        Cloud Computing
        Cloud/Web Platforms
    Sliding Into the Trough
        Gesture Recognition
        Mesh Networks: Sensor
        Microblogging
        E-Book Readers
        Video Telepresence
        Broadband Over Power Lines
        Virtual Assistants
        Public Virtual Worlds
        Consumer-Generated Media
        Idea Management
        Mobile Application Stores
    Climbing the Slope
        Biometric Authentication Methods
        Internet Micropayment Systems
        Interactive TV
        Predictive Analytics
        Electronic Paper
        Location-Aware Applications
        Speech Recognition
    Entering the Plateau
        Pen-Centric Tablet PCs
    Appendixes
        Hype Cycle Phases, Benefit Ratings and Maturity Levels
Recommended Reading

LIST OF TABLES

Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

LIST OF FIGURES

Figure 1. Hype Cycle for Emerging Technologies, 2010
Figure 2. Priority Matrix for Emerging Technologies, 2010
Figure 3. Hype Cycle for Emerging Technologies, 2009
ANALYSIS
What You Need to Know
This Hype Cycle provides a cross-industry perspective on potentially transformative technologies.
Senior executives, CIOs, strategists, business developers and technology planners will want to
consider these technologies when developing emerging business and technology portfolios. This
report is intended as a starting point and should be selectively augmented or revised based on
input from other technology and industry Hype Cycles, as well as from detailed technology
planning that targets organizational requirements.
Technologies on the rise to the Peak of Inflated Expectations during 2010 include Internet TV (for
example, Hulu), private cloud computing, augmented reality, media tablets (for example, the
iPad), wireless power, 3D flat-panel TVs and displays, activity streams, and fourth-generation
(4G) standard. Just over the peak are cloud computing and cloud/Web platforms, while
microblogging and e-book readers have fallen toward the Trough of Disillusionment since 2009.
Transformational technologies that will hit the mainstream in less than five years include media
tablets, cloud computing and cloud/Web platforms. Longer term, beyond the five-year horizon, 3D
printing, context delivery architectures, mobile robots, autonomous vehicles, terahertz waves and
human augmentation will be transformational across a range of industries.
The Hype Cycle
The Hype Cycle for Emerging Technologies targets strategic planning, innovation and emerging-
technology professionals by highlighting a set of technologies that will have broad-ranging impact
across the business. It is the broadest aggregate Gartner Hype Cycle, selecting from the more
than 1,600 technologies featured in Gartner's Hype Cycle Special Report for 2010. It features
technologies that are the focus of attention because of particularly high levels of hype, or those
that may not be broadly acknowledged but that Gartner believes have the potential for significant
impact. Because this Hype Cycle pulls from such a broad spectrum of topics, many technologies
that are featured in a specific year because of their relative visibility will not necessarily be
tracked throughout their life cycle. Interested readers can refer to Gartner's broader collection of
Hype Cycles for items of ongoing interest.
Themes emerging from this year's Hype Cycle include:
User experience and interaction. New styles of user interaction will drive new usage
patterns, giving organizations opportunities to innovate how information and transactions
are delivered to customers and employees. This includes devices such as media tablets
and 3D flat-panel TVs and displays, and interaction styles such as gesture recognition
and tangible user interfaces (see also "Hype Cycle for Human-Computer Interaction,
2010").
Augmented reality, context and the real-world Web. The migration of the Web
phenomenon and technology in general beyond the desktop and into the context
of people's everyday lives is creating new opportunities for personalized and
contextually aware information access. Augmented reality is a hot topic in the mobile
space, with platforms and services on iPhone and Android, and it represents
the next generation as location-aware applications move toward the plateau. Other
elements such as 4G standard, sensor networks (mesh networks: sensor) and context
delivery architecture are evolving more slowly, but they will play a key role in expanding
the impact of IT in the physical world (see also the "Hype Cycle for Context-Aware
Computing, 2010" and "Hype Cycle for Mobile Device Technologies, 2010").
Data-driven decisions. The quantity and variety of digital data continue to explode,
along with the opportunities to analyze and gain insight from new sources such as
location information and social media. The techniques themselves, such as predictive
analytics, are relatively well established in many cases; the value resides in applying
them in new applications such as social analytics and sentiment analysis.
Cloud-computing implications. The adoption and impact of cloud computing continue
to expand. The separate "Hype Cycle for Cloud Computing, 2010" shows a dense mass
of activity rising up to the Peak of Inflated Expectations. On this Hype Cycle for
Emerging Technologies, we feature cloud computing overall just topping the peak, and
private cloud computing still rising. Both are of intense interest among organizations that
would like to take advantage of the scalability of cloud computing, while potentially
addressing some of the security and other issues associated with public cloud
computing. Cloud/Web platforms are featured, along with mobile application stores, to
acknowledge the growing interest in platforms for application development and delivery.
Value from the periphery. A number of technologies, such as mobile robots and 3D
printing, are not yet widely used, but they can already provide significant value when
used appropriately (see "Most Valuable Technologies: Survey Results for Emerging-
Technology Adoption and Management").
New on the 2010 Hype Cycle for Emerging Technologies
The following technologies, which were not part of the 2009 Hype Cycle, have been added to the
2010 Hype Cycle for Emerging Technologies:
Computer-brain interface, tangible user interfaces, gesture recognition and virtual
assistants highlight the shifts in user interaction.
Terahertz waves, autonomous vehicles, speech-to-speech translation, biometric
authentication and interactive TV show the progress of some long-emerging
technologies. For example, interactive TV is now climbing the slope, 14 years after we
showed it sliding into the trough on our 1996 Hype Cycle for Emerging Technologies.
Social analytics and predictive analytics showcase activity in data-driven decisions.
Private cloud computing, cloud/Web platforms and activity streams highlight key topics
in Web development and access.
Media tablets, consumer-generated media, 4G standard, mobile application stores and
Internet micropayment systems track the progress of key consumer technologies.
Extreme transaction processing shows the evolution of technology from exclusively
high-end, sophisticated organizations to mainstream users.
Broadband over power lines is included this year as an example of how technologies
can fail to emerge from the Trough of Disillusionment. Broadband over power lines will
become obsolete before reaching the Plateau of Productivity as other broadband technologies,
particularly WiMAX, evolve faster and move into position to take large portions of the
addressable market for Internet access.
Fast Movers
A number of technologies have moved along the Hype Cycle significantly since 2009:
3D flat-panel TVs and displays have moved rapidly since 2009 (when they appeared as
"3D flat-panel displays"), from shortly after the Technology Trigger to a position near the
peak, due to intense vendor activity and product announcements.
E-book readers have dropped from their peak last year, as media tablets, particularly the
iPad, threaten the value of a stand-alone reader.
Microblogging falls toward the trough as enterprises struggle to find the value, even as
consumer popularity continues.
Video telepresence is falling toward the trough due to still-high pricing, which limits
adoption. However, those who have adopted it are invariably impressed with the sense of
"being there" that the technology delivers.
Pen-centric tablet PCs (last year, these were called "tablet PCs") approach the plateau,
while potential competition gathers from touch-based media tablets.
Another Notable Technology
One technology, public virtual worlds, has shifted since 2009 from an expected two to
five years to mainstream adoption to an expected five to 10 years, due to a slowdown in
adoption and development activity.

Figure 1. Hype Cycle for Emerging Technologies, 2010

Source: Gartner (August 2010)

The Priority Matrix
The Hype Cycle for Emerging Technologies has an above-average number of technologies with a
benefit rating of transformational or high. This was deliberate: during the selection process, we
aim to highlight technologies that are worth adopting early because of their potentially high
impact. However, the actual benefit often varies significantly across industries, so planners
should ascertain which of these opportunities relate closely to their organizational requirements.
The following technologies have been rated transformational in 2010:
For transforming products, services and commerce: media tablet and 3D printing
For driving deep changes in the role and capabilities of IT: cloud computing,
cloud/Web platforms and context delivery architecture
For the ability to drive major advances in automation: mobile robots and autonomous
vehicles
For lowering the barriers to entry for transactionally demanding business models:
extreme transaction processing
For its impact on a broad range of sensing, detection and communications applications:
terahertz waves
For its impact on human capabilities: human augmentation

Figure 2. Priority Matrix for Emerging Technologies, 2010

Source: Gartner (August 2010)
Off the Hype Cycle
Because this Hype Cycle pulls from such a broad spectrum of topics, many technologies are
featured in a specific year because of their relative visibility, but not tracked over a longer period
of time. Technology planners can refer to Gartner's broader collection of Hype Cycles for items of
ongoing interest. The following technologies that appeared in last year's Hype Cycle for Emerging
Technologies do not appear in this year's report:
Surface computers: see "Hype Cycle for Human-Computer Interaction, 2010."
Corporate blogging, social network analysis, social software suites and wikis: see
"Hype Cycle for Social Software, 2010."
Quantum computing: see "Hype Cycle for Semiconductors, 2010."
Behavioral economics and green IT do not appear in any Hype Cycles this year.
Online video: see "Hype Cycle for Consumer Services and Mobile Applications, 2010."
Home health monitoring: see "Hype Cycle for Healthcare Provider Applications and
Systems, 2010."
Over-the-air mobile phone payment systems, developed markets: see "Hype Cycle for
Financial Services Payment Systems, 2010."
Web 2.0 has only application-specific and industry-specific entries this year (for
example, see "Hype Cycle for Life Insurance, 2010").
Service-oriented architecture (SOA): see "Hype Cycle for Application Architecture, 2010."
RFID (case/pallet): see "Hype Cycle for Retail Technologies, 2010."
On the Rise
Human Augmentation
Analysis By: Jackie Fenn
Definition: Human augmentation moves the world of medicine, wearable devices and implants
from techniques to restore normal levels of performance and health (such as cochlear implants
and eye laser surgery) to techniques that take people beyond levels of human performance
currently perceived as "normal." In the broadest sense, technology has long offered the ability for
superhuman performance from a simple torch that helps people see in the dark to a financial
workstation that lets a trader make split-second decisions about highly complex data. The field of
human augmentation (sometimes referred to as "Human 2.0") focuses on creating cognitive and
physical improvements as an integral part of the human body. An example is using active control
systems to create limb prosthetics with characteristics that can exceed the highest natural human
performance.
Position and Adoption Speed Justification: What has been considered a matter of science
fiction for decades (as exemplified by the bionic capabilities of the "Six Million Dollar Man") is
slowly but steadily becoming a reality. For example, the "eyeborg" project
(http://eyeborgproject.com/home.php) aims to put a working camera into the eye socket of a
Canadian TV journalist who lost an eye. The Olympic-class performances of runner Oscar
Pistorius, who has two prosthetic legs, have moved the concept of human augmentation from the
lab to the commercial foreground and inspired a new round of futuristic visions, such as Puma's
"2178" soccer cyborg advertisements. It is already possible to replace the lens in the aging
human eye and give it better-than-natural performance. Cochlear implants can do the same for
hearing.
Although most techniques and devices are developed to assist people with impaired function,
some elective use has already started. For example, a poll by the journal Nature found that one in five of
its readers had used mental-performance-enhancing drugs, such as Ritalin or Provigil. Makers of
power-assisted suits or exoskeletons, such as Raytheon, are aiming to provide increased
strength and endurance to soldiers and caregivers. In addition, some researchers are
experimenting with creating additional senses for humans, such as the ability to sense a magnetic
field and develop the homing instinct of birds and marine mammals.

Globalization and specialization are creating levels of competition that will drive more people to
experiment with enhancing themselves. Augmentation that reliably delivers moderately improved
human capabilities will become a multibillion-dollar market during the next quarter century.
However, the radical nature of the trend will limit it to a small segment of the population for most
of that period. The rate of adoption will vary according to the means of delivering the
augmentation: drugs and wearable devices are likely to be adopted more rapidly than those
involving surgery. However, the huge popularity of cosmetic surgery is an indicator that even
surgery is not a long-term barrier, given the right motivation. For example, some individuals have
implanted magnets in their fingertips in order to sense electrical activity.
User Advice: Organizations aiming to be very early adopters of technology, particularly those
whose employees are engaged in physically demanding work, should track lab advances in areas
such as strength, endurance or sensory enhancement. As the synthesis of humans and machines
continues to evolve, evaluate the broad societal implications of these changes.
Business Impact: The impact of human augmentation and the ethical and legal controversies
surrounding it will first be felt in industries and endeavors demanding extreme performance,
such as the military, emergency services and sports.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Recommended Reading: Poll: Scientists Use Brain-Boosting Drugs: Survey of Magazine's
Readers Shows 1 in 5 Take Mental-Performance-Enhancing Drugs
(www.webmd.com/brain/news/20080409/poll-scientists-use-brain-boosting-drugs)
The feelSpace Belt (http://feelspace.cogsci.uni-osnabrueck.de/en/technology_01.html)
A Sixth Sense for a Wired World (www.wired.com/gadgets/mods/news/2006/06/71087)
Context Delivery Architecture
Analysis By: William Clark; Anne Lapkin
Definition: Context-aware computing is about improving the user experience for customers,
business partners and employees by using the information about a person or object's
environment, activities, connections and preferences to anticipate the user's needs and
proactively serve up the most appropriate content, product or service. Enterprises can leverage
context-aware computing to better target prospects, increase customer intimacy, and enhance
associate productivity and collaboration. From a software perspective, context is information that
is relevant to the functioning of a software process, but is not essential to it. In the absence of this
additional information, the software is still operational, although the results of the software's
actions are not as targeted or refined.
Most context-enriched services are implemented in siloed systems, where a particular person,
group or business process profits from being situationally aware. To replicate, scale and integrate
such systems, certain repeatable patterns emerge that will require a new enterprise solution
architecture known as context delivery architecture (CoDA).
Gartner defines CoDA as an architectural style that builds on service-oriented architecture (SOA)
and event-driven architecture (EDA) interaction and partitioning styles, and adds formal
mechanisms for the software elements that discover and apply the user's context in real time.
CoDA provides a framework for solution architects that allows them to define and implement the
technology, information and process components that enable services to use context information
to improve the quality of the interactions with the user. The technologies may include context
brokers, state monitors, sensors, analytic engines and cloud-based transaction processing
engines. As context-aware computing matures, CoDA should also define data formats, metadata
schemas, interaction and discovery protocols, programming interfaces, and other formalities. As
an emerging best practice, CoDA will enable enterprises to create and tie together the siloed
context-aware applications with increased agility and flexibility. As with SOA, much of the pull for
CoDA will come from packaged-application and software vendors expanding to integrate
communication and collaboration capabilities, unified communications vendors and mobile device
manufacturers, Web megavendors (e.g., Google), social-networking vendors (e.g., Facebook),
and service providers that expand their roles to become providers and processors of context
information.
The CoDA architecture style considers information, business and technology domain viewpoints.
The technology domains are application infrastructure, communications infrastructure, network
services and endpoints (devices). Thus, CoDA provides a framework for architects to discover
gaps and overlap among system components that provide, process and analyze contextual
information. The key challenge of CoDA will be information-driven, not technology-driven: it will
revolve around identifying which information sources can provide context, which technologies
enable that information to be provided in a secure, timely and usable manner, and how this
information can be folded into processes.
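
To make the broker pattern concrete, here is a minimal Java sketch of how a context broker might aggregate context from pluggable providers and hand a service a single context map. All class and method names are hypothetical illustrations, not any vendor's CoDA API.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // One provider per information source: location, calendar, social graph and so on.
    interface ContextProvider {
        String dimension();              // e.g., "location" or "activity"
        String lookup(String userId);    // may return null if nothing is known
    }

    // The broker aggregates whatever the registered providers know about a user.
    class ContextBroker {
        private final List<ContextProvider> providers = new ArrayList<>();

        void register(ContextProvider p) { providers.add(p); }

        Map<String, String> contextFor(String userId) {
            Map<String, String> ctx = new HashMap<>();
            for (ContextProvider p : providers) {
                String value = p.lookup(userId);
                if (value != null) ctx.put(p.dimension(), value); // missing context is tolerated
            }
            return ctx;
        }
    }

    public class ContextDemo {
        public static void main(String[] args) {
            ContextBroker broker = new ContextBroker();
            broker.register(new ContextProvider() {
                public String dimension() { return "location"; }
                public String lookup(String userId) { return "airport-terminal-2"; }
            });
            Map<String, String> ctx = broker.contextFor("user-42");
            // A context-enriched service refines its answer but works without context.
            String offer = "airport-terminal-2".equals(ctx.get("location"))
                    ? "lounge upgrade" : "default offer";
            System.out.println(offer);
        }
    }

The point of the design is graceful degradation: the service still answers when a provider returns nothing, which mirrors the definition of context as relevant but nonessential information.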
Position and Adoption Speed Justification: Gartner introduced the term CoDA in 2007, based
on developments in areas such as mobile communications and cloud computing. By 2011, we
expect that aggressive enterprise architects and project managers will weave elements of CoDA
into their plans to orchestrate and build context-enriched services that rely not only on federated
information models, but also on federated delivery services. CoDA relies on SOA as an
underpinning and also is related to EDA, because enterprise architectures need to be agile and
scalable to support context-aware computing. SOA and EDA have not yet reached the Plateau of
Productivity. We expect CoDA to reach the Plateau of Productivity gradually, after 2014.
User Advice: Although CoDA is an emerging architectural style, Type A organizations can
benefit in the short term by applying its principles as they experiment with the use of context
information to provide improved user experiences in both customer-facing services and enterprise
productivity. Leading-edge organizations need to begin to incorporate CoDA constructs in
infrastructure and services to gain competitive advantages with the early use of context-aware
computing. Type A organizations should now be identifying which information sources, both
within the enterprise and external to it (e.g., from social-software sites), will provide context
information to a range of applications. Build competencies in CoDA's technology domains,
particularly in communications, because the migration of voice from silos to general applications
will be a key transformation, opening up further opportunities to create applications enhanced by
context-enriched services. An understanding of mobile development will also be key. The
refinement of your enterprise architecture to include CoDA constructs assumes prior investment
in SOA. Most mainstream, risk-averse organizations should not invest in building a CoDA
capability, but should explore the acquisition of context-enriched services through third parties.
Business Impact: Context awareness is a distinguishing characteristic of some leading software
solutions, including Amazon e-commerce, Google Search, Facebook, Apple and others. During
the next three to five years, context-aware computing will have high impact among Type A
businesses in two areas: extending e-commerce and mobile commerce initiatives toward
consumers, and increasing the efficiency and productivity of the businesses' knowledge workers
and business partners. Context-aware computing will evolve incrementally, and will gain
momentum as more information sources become available and cloud-based context-enriched
services begin to emerge. However, these will be siloed and will not use a standard or shared
CoDA model. Emergence of formal CoDA protocols and principles will translate into a new
technology category and feature set, affecting all application infrastructure and business
application providers.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Appear Networks; Apple; Google; IBM; Interactive Intelligence; Nokia; Pontis;
Sense Networks
Recommended Reading: "Fundamentals of Context Delivery Architecture: Introduction and
Definitions, 2010 Update"
"The Seven Styles of Context-Aware Computing"
"Context-Enriched Services: From Reactive Location to Rich Anticipation"
"Fundamentals of Context Delivery Architecture: Provisioning Context-Enriched Services, 2010
Update"
Computer-Brain Interface
Analysis By: Jackie Fenn
Definition: Computer-brain interfaces interpret distinct brain patterns, shifts and signals
(usually generated by voluntary user actions) as commands that can be used to guide a
computer or other device. The best results are achieved by implanting electrodes into the brain to
pick up signals. Noninvasive techniques are available commercially that use a cap or helmet to
detect the signals through external electrodes.
Position and Adoption Speed Justification: Computer-brain interfaces remain at an embryonic
level of maturity, although we have moved them slightly further along the Hype Cycle from 2009
to acknowledge the commercial availability of several game-oriented products (such as OCZ
Technology's Neural Impulse Actuator, Emotiv Systems' Epoc and NeuroSky's MindSet). The
major challenge for this technology is obtaining a sufficient number of distinctly different brain
patterns to perform a range of commands; typically, fewer than five patterns can currently be
distinguished. A number of research systems offer brain-driven typing by flashing up letters on
the screen and detecting the distinct brain pattern generated when the desired letter is
recognized by the user. Several of the commercial systems also recognize facial expressions and
eye movements as additional input. Outside of medical uses, such as communication for people
with "locked in" syndrome, other hands-free approaches, such as speech recognition or gaze
tracking, offer faster and more-flexible interaction than computer-brain interfaces. The need to
wear a cap to recognize the signals is also a serious limitation in many consumer or business
contexts.
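
As a rough, hypothetical illustration of the letter-selection scheme described above, the Java sketch below averages a detector score recorded for each flash of each letter and picks the letter with the strongest mean response. Real systems classify multichannel EEG epochs with trained models rather than a single precomputed score.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class BrainTypingSketch {
        // For each candidate letter, the detector scores recorded each time it flashed.
        static char selectLetter(Map<Character, List<Double>> responses) {
            char best = '?';
            double bestMean = Double.NEGATIVE_INFINITY;
            for (Map.Entry<Character, List<Double>> e : responses.entrySet()) {
                double sum = 0;
                for (double score : e.getValue()) sum += score;
                double mean = sum / e.getValue().size(); // averaging across flashes suppresses noise
                if (mean > bestMean) { bestMean = mean; best = e.getKey(); }
            }
            return best; // the letter whose flashes evoked the strongest average response
        }

        public static void main(String[] args) {
            Map<Character, List<Double>> responses = new HashMap<>();
            responses.put('A', Arrays.asList(0.1, 0.2, 0.1));
            responses.put('B', Arrays.asList(0.8, 0.7, 0.9)); // the user was attending to 'B'
            System.out.println(selectLetter(responses));      // prints B
        }
    }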
User Advice: Treat computer-brain interfaces as a research activity. Some niche gaming and
disability-assistance use cases might become commercially viable for simple controls; however,
these will not have capabilities that will generate significant uses in the mainstream of business
IT.
Business Impact: Most research is focused on providing severely disabled individuals with the
ability to control their surroundings. Game controls are another major area of application. Longer
term, hands-busy mobile applications may benefit from hybrid techniques combining brain, gaze
and muscle tracking to offer hands-free interaction.

Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Brain Actuated Technologies; Emotiv Systems; Neural Signals; NeuroSky;
OCZ Technology
Recommended Reading: "Cool Vendors in Emerging Technologies, 2010"
Terahertz Waves
Analysis By: Jim Tully
Definition: Terahertz radiation lies in the 300GHz to 3THz frequency range, above microwaves
and below infrared. This part of the spectrum is currently unlicensed and is available for wide
bandwidth/high data rate communication.
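
For orientation, the corresponding wavelengths follow directly from \lambda = c/f:

    \lambda_{300\,\mathrm{GHz}} = \frac{3 \times 10^{8}\ \mathrm{m/s}}{3 \times 10^{11}\ \mathrm{Hz}} = 1\ \mathrm{mm},
    \qquad
    \lambda_{3\,\mathrm{THz}} = \frac{3 \times 10^{8}\ \mathrm{m/s}}{3 \times 10^{12}\ \mathrm{Hz}} = 100\ \mu\mathrm{m}

so the band spans wavelengths from 1 mm down to 100 micrometers, sitting between microwaves and infrared.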
Most terahertz radiation is absorbed by the atmosphere and cannot travel far unless a great deal
of transmission power is used. This renders THz frequencies impractical for the next few years for
most communication applications over distances of more than a few meters, since high-power
terahertz sources currently require building-sized synchrotrons costing tens of millions of dollars.
More practical sources for communication purposes will use quantum cascade lasers as the THz
source, but these require considerable development.
However, there are a few frequencies in the range where the transmission energy is not so
heavily absorbed in the air and these represent transmission windows. There is much interest in
the use of THz waves for short-range high-speed wireless communication, in view of exponential
increases in data rate demand and of shrinking cell sizes in mobile networks. The use of THz
frequencies will be very attractive for the provision of data rates of 5 to 10 Gbits/second that will
be required in around 10 years' time.
Another important class of application relies on a property of THz radiation where pulses directed
at various materials are absorbed or reflected in differing (and precise) amounts. The pattern of
reflected energy can be used to form a unique signature that can identify the material (a
technique known as spectroscopy), even when it is hidden below a thin layer (such as clothing).
This technique is leading to a great deal of activity in the detection of hidden weapons and
explosives at airport terminals and other locations at risk of terrorist activity, and is likely to be the first
important use of terahertz radiation.
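
A toy version of that signature-matching step, in Java, compares a measured spectrum against a small library of reference signatures using cosine similarity. The spectra and material names here are invented placeholders, not real THz signatures.

    public class ThzSpectroscopySketch {
        // Cosine similarity between a measured spectrum and a reference signature,
        // both sampled at the same set of frequencies.
        static double similarity(double[] measured, double[] reference) {
            double dot = 0, normM = 0, normR = 0;
            for (int i = 0; i < measured.length; i++) {
                dot += measured[i] * reference[i];
                normM += measured[i] * measured[i];
                normR += reference[i] * reference[i];
            }
            return dot / Math.sqrt(normM * normR);
        }

        public static void main(String[] args) {
            double[] measured = {0.9, 0.1, 0.7, 0.3};           // invented sample
            double[][] library = {{0.9, 0.2, 0.6, 0.3},         // placeholder signature A
                                  {0.1, 0.8, 0.2, 0.9}};        // placeholder signature B
            String[] names = {"material A", "material B"};
            int best = 0;
            for (int i = 1; i < library.length; i++)
                if (similarity(measured, library[i]) > similarity(measured, library[best])) best = i;
            System.out.println("closest signature: " + names[best]);
        }
    }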
Position and Adoption Speed Justification: Terahertz technology will develop during the next
decade and beyond as new applications emerge. At this stage of development, the spectroscopy
and imaging applications of THz waves are dominating most of the activity, but there is still much
interest in communications applications. The areas of focus for THz frequencies are very broad
and this encourages research from many institutions and companies.
Some of the applications suited to terahertz technology are:
Physical screening security applications (using a combination of millimeter waves and
THz technology).
High data rate wireless communication.
Surface inspection of LCD displays, aircraft wings and other composite materials.
Identification of fake products, such as pharmaceuticals.
Detection of cancerous cells.
Detection of flaws in car tires.
Safety and quality control in packaged food.
Mammography systems.
Skin hydration sensing.
Early detection of tooth decay (earlier than x-rays can detect).
Detection of cracks and other imperfections in welded plastic joints.
Identification of biological agents.
Biological and medical applications appear to be moving forward more slowly than researchers
previously expected, but security and inspection applications are moving somewhat faster.
Communications applications are recognized as important, but are several years further in the
future. A great deal more work is needed in these areas, including the development of lower-cost
imaging systems based on quantum cascade lasers and high-sensitivity multi-pixel detectors.
On-chip technologies will be particularly important in driving costs down. Funding is a critical
factor in the speed of development of the technology. Much of the initial impetus has been for
security applications, and this is linked to government funding. We expect this to have a negative
impact on timescales in the current post-recession economic environment, even though there is
growing pressure for solutions in these areas.
User Advice: It is too early for significant action by most users, so R&D functions should monitor
the technology. Users in organizations that are more immediately affected (such as those in
physical security screening and manufacturing inspection) should educate themselves in the
emerging solutions and evaluate initial product offerings and research output.
Business Impact: The impact in some areas of business will be very high. Security systems in
airports will be affected by this technology. THz manufacturing inspection systems can be
expected to reduce costs and improve quality, particularly in products based on composite
materials. Telecommunications infrastructure will achieve significantly higher data rates, and this
will be important for business success and competitive positioning, starting in eight to 10 years'
time.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: German Terahertz Centre (DTZ); MIT; Picometrix; QinetiQ; SynView;
Technical University Braunschweig; TeraView; ThruVision Systems
Tangible User Interfaces
Analysis By: Jackie Fenn
Definition: In a tangible user interface, the user controls digital information and processes by
manipulating physical, real-world objects (rather than digital on-screen representations) that are
meaningful in the context of the interaction. For example, an emergency response simulation
could be driven by "game pieces" representing disasters (such as a fire) and vehicles that are
moved to different locations to automatically trigger a new simulation. In some cases, the physical
objects that are moved around are themselves displays that respond to each other's presence, as
in MIT Media Lab's Siftables project, now being commercialized as Sifteo. The increasing
numbers of consumer devices that contain accelerometers or silicon gyros are also providing a
new range of potential controller devices (for example, using a mobile phone as an "air mouse,"
or "throwing" media to transfer from one device to another).
Position and Adoption Speed Justification: Tangible interfaces are mostly still in the domain of
research projects. Microsoft Surface (where the tabletop is a screen) is among the first
commercial systems to incorporate a tangible user interface by responding to specially tagged
items placed on top of the screen (for example, to bring up product specifications). We expect this
area will take most of the next decade to develop commercially because of the need to
fundamentally rethink the user interface experience and application development toolset.
User Advice: Organizations with customer-facing, branded, kiosk-style applications (such as
retail and hospitality) should evaluate opportunities for a highly engaging (and relatively high-cost)
customer experience using tangible interfaces. Others should wait for new classes of devices and
peripherals to integrate the capability seamlessly.
Business Impact: In the long term, tangible user interfaces will provide a natural way for people
to bridge the physical and digital worlds. In the shorter term, applications will be focused primarily
on customer-facing experiences, collaborative planning, and the design of real-world objects and
places.
Benefit Rating: Low
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Microsoft; Sifteo
Extreme Transaction Processing
Analysis By: Massimo Pezzini
Definition: Extreme transaction processing (XTP) is defined as an application style aimed at
supporting design, development, deployment, management and maintenance of distributed
transaction processing (TP) applications, characterized by exceptionally demanding performance,
scalability, availability, security, manageability and dependability requirements. XTP architectures
aim at supporting even the most-demanding requirements of global-class, cloud-enabled, 21st-
century TP on mainstream technologies (such as virtualized, distributed and commodity-based
hardware infrastructures), and by leveraging mainstream programming languages (for example,
Java or C#).
In addition to traditional TP scenarios, XTP applications support one or more of the following
requirements:
Multichannel access, enabling users to trigger back-end transactions through
programmatic interfaces and through multiple access devices, including Web browsers,
rich clients, portals, mobile gadgets, sensors and other pervasive devices
Incorporation in service-oriented architecture and event-driven architecture, and
integration in multistep business processes via business process management (BPM)
tools
Interoperability with legacy applications
Servicing global audiences of users looking for access anytime from anywhere around
the world
Hybrid deployment scenarios combining on-premises and "in the cloud" physical and
application resources (this particular scenario is often called "cloud TP")
Increasingly, XTP-style architectures are adopted in the context of transactional-oriented cloud
services and software-as-a-service (SaaS) applications to support the exceptionally demanding
scalability, performance, low latency and availability requirements of these scenarios.
Position and Adoption Speed Justification: In traditional industries (such as travel,
telecommunications, financial services, defense/intelligence, transportation and logistics) and in
new and emerging sectors (for example, online gaming and betting, Web commerce, cloud
computing/SaaS, and sensor-based applications), organizations need to support unprecedented
requirements for their transactional applications: millions (if not tens or hundreds of millions) of
users, many thousands of transactions per second, low latency, multifold elastic (up and
down) scalability and 24/7 continuous availability. The extremely competitive nature of these
businesses also poses another critical challenge: How do you support ultra-high-end
requirements on top of low-cost, commodity hardware and modern software architectures based
on industry-standard, Web-oriented technologies and not, as in the previous generation of TP
systems, on specialized and dedicated software and proprietary hardware?
Some XTP application requirements can be addressed by old-style TP monitors (TPMs), as well
as Java Platform, Enterprise Edition (Java EE) or .NET enterprise application servers (EASs). But
these mainstream EASs cannot provide the elastic scalability, high performance, continuous
availability and low latency needed to support the most-extreme requirements while keeping
costs under control. Therefore, early XTP adopters have implemented their own customized
application platforms by combining XTP-specific technology (such as distributed caching or
distributed computation) with open-source development frameworks (such as Spring) and
mainstream application server technology.
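
The core pattern those distributed caching platforms enable is easy to sketch: reads are served from a cache partitioned across nodes by key hash and fall through to the database only on a miss. The Java below is illustrative only; the in-process partitions stand in for cache nodes, and real products (from IBM, Oracle, GigaSpaces and others) have their own APIs, replication and eviction.

    import java.util.HashMap;
    import java.util.Map;

    public class CacheAsideSketch {
        static final int NODES = 4; // entries are spread across 4 cache "nodes"
        @SuppressWarnings("unchecked")
        static final Map<String, String>[] partitions = new HashMap[NODES];
        static { for (int i = 0; i < NODES; i++) partitions[i] = new HashMap<>(); }

        // The key hash decides which node owns an entry, so capacity grows with node count.
        static Map<String, String> partitionFor(String key) {
            return partitions[Math.floorMod(key.hashCode(), NODES)];
        }

        static String read(String key) {
            Map<String, String> cache = partitionFor(key);
            String value = cache.get(key);
            if (value == null) {               // cache miss: fall through to the database
                value = loadFromDatabase(key);
                cache.put(key, value);         // populate so later reads stay in memory
            }
            return value;
        }

        static String loadFromDatabase(String key) { // stand-in for the slow, durable store
            return "row-for-" + key;
        }

        public static void main(String[] args) {
            System.out.println(read("order-1001")); // miss: loaded from the "database"
            System.out.println(read("order-1001")); // hit: served from the cache partition
        }
    }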
Large software vendors, such as IBM, Microsoft and Oracle, have released or announced
products aimed at supporting XTP scenarios, typically in the form of add-ons to their mainstream
platforms. Recently, vendors such as Tibco Software and VMware entered the XTP space
through acquisitions of specialized vendors (Kabira and GemStone Systems, respectively) and
internal developments (such as Tibco's ActiveSpaces distributed caching platform). Several
small, innovative software vendors (for example, Appistry, Cloudera, GigaSpaces Technologies,
Hiperware, Majitek, Microgen, OpenCloud, Paremus and Zircon) have designed and developed
XTP-specific platforms (usually Java-based) that, in some cases, also are available as open-
source software. Often, vendors that have invested in support for XTP-style applications wrap
their products and technologies into a cloud-centric value proposition, in part to ride the current
hype, but also because certain classes of cloud services (such as application platform as a
service [APaaS]) and applications have the same requirements as XTP-style applications.
XTP-specific platforms from the innovators have a relatively small installed base (up to 350 to 400
clients for the most popular products). Nevertheless, they have proved that they can support real-
life and very demanding XTP scenarios, including SaaS and cloud services deployments.
However, these products require users to familiarize themselves with new development
paradigms, typically based on loosely coupled, event-driven programming models, and to devise
best practices for application design, development processes, and ongoing operations and
maintenance.
During the past 12 to 18 months, XTP-specific point technologies (such as distributed caching
platforms) have been gradually integrated with mainstream middleware (such as EAS, BPM tools,
and enterprise service buses), innovative middleware (such as complex event processing
engines or cloud-enabled application platforms [CEAP]) and, increasingly, packaged applications,
thus contributing to familiarizing the industry with XTP-style architectures.
Well-established architectures and technologies (such as Java, Spring, OSGi, message-oriented
middleware, flow managers, grid computing architectures and model-driven development tools)
are evolving and converging with XTP-specific technologies (such as distributed caching platform
and low-latency messaging) to provide users with a comprehensive extreme transaction
processing platform (XTPP). Examples of XTPP are already available from small, innovative
vendors. XTPPs package a number of previously distinct technologies (XTP-specific and generic)
into integrated products that will reach mainstream adoption during the next five years and will
challenge, and possibly replace, the now mainstream TP platforms. Many, especially midsize,
users will take advantage of XTP-style architectures through the adoption of APaaS approaches.
Large enterprises will, instead, support XTP requirements through the adoption of XTPPs,
increasingly offered in the form of transactionally oriented CEAPs.
XTP is a key enabler for transaction-processing, global-class cloud applications and services
(such as APaaS) requiring elastic scalability, 24/7 availability, dependable quality of service
(QoS) and support for exceptionally high workloads. The requirements of this class of cloud
applications match the characteristics of XTP architectures, although with an additional
requirement to support some form of multitenancy. Although XTP will be widely adopted to
support large, business-critical on-premises TP applications, several organizations (large and
small) will unknowingly adopt XTP architectures embedded in cloud-computing solutions, SaaS
and cloud TP applications.
User Advice: Leading-edge users considering XTP architectures for high-risk/high-reward
transactional applications should evaluate whether the early forms of integrated XTPPs can
support their requirements more cost-effectively than custom-built XTP-enabling infrastructures.
Mainstream, risk-averse users should consider mainstream approaches supported by Java EE
and .NET EASs as the best fits for the next wave of their business-critical TP applications. These
users also should consider distributed caching platforms to extend EAS to support greater
performance and scalability needs in an incremental, nondisruptive way. The most conservative
users can continue to rely on traditional TPM-based architectures, typically in combination with
Java EE or .NET, to support their XTP requirements. However, they should be aware that TPM
architectures and technologies will decline in popularity during the next five years, exposing
users to rising costs and a shrinking pool of skilled resources.
Business Impact: XTP architectures and technologies will enable mainstream users to
implement a class of applications that, thus far, only the wealthiest and most technically astute
organizations could afford, due to the need to procure expensive, complex and proprietary
hardware and platform middleware. This new paradigm will lower barriers to entry into the most
transactionally demanding business models (for example, cloud computing/SaaS, social
networks, travel reservations or electronic payment processing), will enable users to reduce the costs
of their TP systems dramatically (thus significantly improving margins for their companies), and
will enable new and creative business models, for example, by providing limitlessly scalable
transactional back ends to consumer-oriented applications leveraging Web 2.0 technologies,
mobile devices and other consumer gadgets.
XTP also will impose new programming models (for example, natively supporting event
processing or enabling use of algorithms like MapReduce) that will be only partially compatible
with the traditional Java EE and .NET, new best practices for software architecture, and new
management and administration burdens on IT environments. Typically, the new disciplines will
be additions to the system environments that they will gradually complement and possibly replace.
Thus, IT departments will have to build and maintain in-house new, highly specialized skills
(for example, distributed cache administrators), or delegate highly demanding XTP projects to
specialized service providers.
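
To illustrate one of those new programming models: MapReduce splits work into a per-record map phase and a per-key reduce phase, both of which an XTP platform can spread across commodity nodes. The single-process Java sketch below shows the shape of the model, not any vendor's API.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.AbstractMap.SimpleEntry;

    public class MapReduceSketch {
        // Map phase: one record in, many (key, value) pairs out; runs independently per record.
        static List<SimpleEntry<String, Integer>> map(String record) {
            List<SimpleEntry<String, Integer>> pairs = new ArrayList<>();
            for (String word : record.split(" ")) pairs.add(new SimpleEntry<>(word, 1));
            return pairs;
        }

        // Reduce phase: all values for one key are combined; runs independently per key.
        static int reduce(List<Integer> values) {
            int sum = 0;
            for (int v : values) sum += v;
            return sum;
        }

        public static void main(String[] args) {
            String[] records = {"credit debit credit", "debit transfer", "credit"};

            // Shuffle: group the mapped pairs by key before reducing.
            Map<String, List<Integer>> grouped = new HashMap<>();
            for (String record : records)
                for (SimpleEntry<String, Integer> pair : map(record))
                    grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());

            for (Map.Entry<String, List<Integer>> e : grouped.entrySet())
                System.out.println(e.getKey() + " = " + reduce(e.getValue()));
        }
    }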
These barriers to entry will be removed gradually as mainstream organizations move into cloud
computing. By removing many operational obstacles to XTP adoption and implementation, the
low-cost availability of XTP platforms in the form of cloud services, such as APaaS, will encourage
users to take advantage of these infrastructures to implement innovative XTP-style,
cloud-enabled applications and services.
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Appistry; Cloudera; GigaSpaces Technologies; IBM; Majitek; Microgen;
Microsoft; NEC; OpenCloud; Oracle; Paremus; Tibco Software; VMware
Recommended Reading: "The Challenges of Extreme Transaction Processing in a World of
Services and Events"
"The Birth of the Extreme Transaction-Processing Platform: Enabling Service-Oriented
Architecture, Events and More"
"Competitive Landscape: Extreme Transaction Processing Platforms, Worldwide, 2009"
"Distributed Caching Platforms Are Enabling Technology for Twenty-First Century Computing"
"Who's Who in Distributed Caching Platforms: IBM, Microsoft and Oracle"
"GigaSpaces Focuses on Cloud-Enabled XTP With XAP 7.0"
"Enabling Transactions 'in the Cloud' Through Extreme Transaction Processing"
Autonomous Vehicles
Analysis By: Thilo Koslowski
Definition: An autonomous vehicle is one that can drive itself from a starting point to a
predetermined destination in "autopilot" mode using various in-vehicle technologies and sensors,
including adaptive cruise control, active steering (steer by wire), anti-lock braking systems (brake
by wire), GPS navigation technology and lasers.
Position and Adoption Speed Justification: Advancements in sensor, positioning, guidance
and communication technologies are gaining in precision to bring the autonomous vehicle closer
to reality. However, many challenges remain before autonomous vehicles can achieve the
reliability levels needed for actual consumer use cases. The development of autonomous
vehicles largely depends on sensor technologies. Many types of sensors will be deployed in the
most-effective autonomous vehicles. Sensor data needs high-speed data buses and very high-
performance processors to provide real-time route guidance, navigation and obstacle
detection. Recent U.S. Defense Advanced Research Projects Agency (DARPA) projects
promoted autonomous vehicle efforts for military and civilian purposes. Autonomous vehicles can
also help reduce vehicle emissions by applying throttle more evenly and avoiding repeated stops
at traffic lights, because driving speed is matched with traffic management data.
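
As a flavor of the sensor-driven control logic involved, the toy Java sketch below turns a ranged gap and a closing speed into a time-to-collision and a crude adaptive-cruise decision. The thresholds are arbitrary illustrations; production systems fuse many sensors with far more sophisticated control.

    public class FollowingDistanceSketch {
        // Time-to-collision: how long until we reach the obstacle at the current closing speed.
        static double timeToCollision(double gapMeters, double closingSpeedMps) {
            if (closingSpeedMps <= 0) return Double.POSITIVE_INFINITY; // pulling away: no risk
            return gapMeters / closingSpeedMps;
        }

        // A crude policy: ample time -> hold speed; shrinking gap -> coast; critical -> brake.
        static String decide(double gapMeters, double ownSpeedMps, double leadSpeedMps) {
            double ttc = timeToCollision(gapMeters, ownSpeedMps - leadSpeedMps);
            if (ttc < 2.0) return "BRAKE";
            if (ttc < 5.0) return "COAST";
            return "HOLD_SPEED";
        }

        public static void main(String[] args) {
            System.out.println(decide(50.0, 30.0, 30.0)); // same speed: HOLD_SPEED
            System.out.println(decide(30.0, 30.0, 22.0)); // closing at 8 m/s, ttc 3.75 s: COAST
            System.out.println(decide(12.0, 30.0, 22.0)); // ttc 1.5 s: BRAKE
        }
    }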
User Advice: Automotive companies should collaborate with technology vendors to share the
high cost of experimentation with the required technologies and also carefully balance accuracy
objectives with user benefits. Initial autonomous vehicle functions will be limited to slow driving
applications. Consumer education is critical to ensure that demand meets expectations once
autonomous vehicle technology is ready for broad deployment.
Business Impact: Autonomous vehicle efforts focus on safety, convenience and economical
applications, positioning this as a driver-assistance technology as well as an autopilot system in
future deployment scenarios. As infotainment applications proliferate, autonomous vehicles can
help to address driver-distraction issues arising from in-vehicle content consumption. Automotive
companies will be able to market autonomous vehicle capabilities as premium driver-assistance,
safety and convenience features, as well as an option to reduce vehicle fuel consumption.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Bosch; Continental
Recommended Reading: "German Consumers' Vehicle ICT Preferences Highlight Driver
Assistance and Connectivity Opportunities"
"Emerging Technology Analysis: Autonomous Vehicles, Automotive Electronics"
Video Search
Analysis By: Whit Andrews
Definition: The ability to search within a collection of videos, whether inside the enterprise or
outside, whether under enterprise control or in another company's domain, is attracting interest in
areas of content management as expectations sparked by YouTube, Hulu, Netflix, Vimeo and
Blinkx enter the enterprise environment. Video search will incorporate elements of social
networking, social tagging, metadata extraction and application, video and audio transcription,
and conventional enterprise search to make it possible to find videos using ordinary enterprise
search.
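
The indexing half of that approach is straightforward to sketch: transcripts (from speech-to-text or human transcription) and social tags are flattened into one text stream and posted into the same kind of inverted index that serves documents. The Java below is a toy index with invented video IDs, not a product API.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Locale;
    import java.util.Map;

    public class VideoIndexSketch {
        // Inverted index: term -> video IDs whose transcript or tags contain it.
        static final Map<String, List<String>> index = new HashMap<>();

        static void indexVideo(String videoId, String transcript, String tags) {
            // Transcript and social tags are flattened into one searchable text stream.
            for (String term : (transcript + " " + tags).toLowerCase(Locale.ROOT).split("\\W+")) {
                List<String> postings = index.computeIfAbsent(term, t -> new ArrayList<>());
                if (!postings.contains(videoId)) postings.add(videoId); // one posting per video
            }
        }

        public static void main(String[] args) {
            indexVideo("vid-001", "welcome to the quarterly sales training", "training sales");
            indexVideo("vid-002", "how to file an expense report", "finance howto");
            System.out.println(index.get("sales"));   // [vid-001]: found via transcript and tag
            System.out.println(index.get("expense")); // [vid-002]
        }
    }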
Position and Adoption Speed Justification: Expectations driven by YouTube are intriguing consumers, who now see the possibility of improved searchability in rich media; this is why the excitement around the technology is only now beginning its hype trend. Some vendors still rely on human transcription; others are adding speech-to-text facilities. Ultimately, enterprise search will subsume video search as simply another format, just as it did with audio and graphical media. More than five years will pass before video search is fully and effectively understood and exploitable because, where textual search (and, to a lesser degree, audio search) came with vocabularies and grammars intact from conventional communication, video does not. Video talk
tracks are an appropriate means of developing some searchability, but the objects, people and
actions inside videos, as well as the relationships and action paths they follow, are not yet
consistently identifiable or reliably describable either as content or query objects.
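To make the talk-track approach concrete, the following minimal sketch transcribes each video's audio and builds a simple inverted index over the spoken words. The transcribe() function is a hypothetical stub standing in for a real speech-to-text engine, and the file name is invented.

    # Minimal sketch of talk-track indexing for video search.
    from collections import defaultdict

    def transcribe(video_file):
        # Stub: a production system would call a speech-to-text engine
        # here and return the spoken words (ideally with timestamps).
        return {"training.mp4": "welcome to the quarterly safety training"}[video_file]

    index = defaultdict(set)                  # word -> set of video files
    for video in ["training.mp4"]:
        for word in transcribe(video).lower().split():
            index[word].add(video)

    print(index["safety"])                    # -> {'training.mp4'}

Objects, people and actions inside the frames would require a separate visual-recognition pipeline, which is precisely the part that remains immature.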
User Advice: Only enterprises with the greatest ambition for video in their operations should
invest in video-specific search capabilities. Others will more likely turn to cloud vendors (expect a
variety of formal and informal enterprise editions of video hosting from Web conferencing vendors
and others, including Google and Microsoft) or wait for video search as an element of enterprise
search.
Business Impact: Making video easier to locate will boost the use of non-textual elements in
training and communications in the enterprise.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Altus; Autonomy; Brainshark; Coveo; Kaltura; Kontiki; Open Text; Qumu;
Rayzz; Sonic Foundry
Recommended Reading: "Video Killed the Document Czar"
"Rich Media, Federation and Conversation are Information Access's Next Acts"
"Overview for Video Content Management Projects: You Can Have a YouTube Too"
"Beyond Videodrome: Managing and Valuing Enterprise Video Content in the YouTube Era"
"Video and the Cloud: What Does YouTube Mean To You?"
"Toolkit: Policy for Internal Video Sharing Sites"
"Pursue Four Goals With Video"
"How to Deliver Rich Media, Including Live Video, to Your Company Now"
Mobile Robots
Analysis By: Jackie Fenn
Definition: Mobile robots move and navigate in an autonomous or semiautonomous (that is, via
remote control) manner and have the ability to sense or influence their local environments. Mobile
robots may be purely functional, such as vacuum-cleaning or lawn-mowing robots, or may be
humanlike in their appearance and capabilities.
Position and Adoption Speed Justification: Mobile robots have emerged from their traditional
niches in the military, toy and hobbyist markets, and are starting to provide practical value in
home and enterprise markets. Early enterprise applications include mobile videoconferencing (for
example, for physicians in hospitals) and delivery of items to specific locations. Outdoor
autonomous vehicles are also becoming viable (for example, ore-hauling robots or self-guiding
underwater monitoring robots). Patient care in hospitals and home care for the elderly are also
receiving attention. More-advanced capabilities under development include safely lifting an object
or person.
At the high end of the market, Sony and Honda have developed human-looking robots that can
walk, run, jump and respond to gestures and voice commands. These are still research
prototypes and not yet at commercially viable prices, but they indicate the level of physical
performance and responsiveness that will be available in the next decade. Companies such as
Microsoft (Robotics Studio) and iRobot (Create) have introduced development tools that
significantly reduce robotic software development barriers. Low-cost telepresence robots, such as
the $15,000 QB announced by Anybots, will drive trials of new usage models in telecommuting
and remote collaboration.
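The kind of programming these toolkits simplify can be sketched as a basic sense-act loop. The Robot class below is a hypothetical stand-in, not the API of Microsoft Robotics Studio or iRobot Create, and the sonar readings are simulated.

    # Minimal sketch of an obstacle-avoidance sense-act loop.
    class Robot:
        def __init__(self, readings):
            self.readings = iter(readings)    # simulated sonar feed
        def read_range_cm(self):
            return next(self.readings)
        def drive(self, speed):
            print("drive", speed)
        def turn(self, degrees):
            print("turn", degrees)

    robot = Robot([120, 80, 25, 150])         # four simulated range readings
    for _ in range(4):
        if robot.read_range_cm() < 30:        # obstacle closer than 30 cm
            robot.turn(45)                    # steer away from it
        else:
            robot.drive(0.5)                  # otherwise cruise ahead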
The Hype Cycle positioning for mobile robots reflects an average of the various applications.
Some applications, such as robotic pets, are already widespread. Others, such as robots for lawn
mowing and cleaning, are progressing into regular use, while many of the potentially
transformational applications and capabilities are still in development.
User Advice: Evaluate mobile robots for cleaning, delivery, security, warehousing and mobile
videoconferencing applications. As robots start to reach price levels that are comparable to a
person's salary, prepare for mobile robots to appear as new endpoints in corporate IT networks.
Examine new robotic development tools, even for mundane mechanical system software.
Business Impact: During the next five years, applications for mobile robots will include cleaning,
delivery, security patrolling, greeting of visitors and a range of other applications enabled by
mobile videoconferencing and low-cost sensors. They can also add value in deploying
infrastructure technologies (for example, mounting RFID readers on a mobile robot to track
assets over a large area at a much lower cost than blanket coverage through fixed readers).
Longer term, mobile robots will deliver a broader spectrum of home help and healthcare
capabilities, and, as costs fall, they may play a growing role in automating low-wage tasks in
activities such as food preparation.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Aethon; Anybots; Honda; iRobot; InTouch Health; Kiva Systems; Mitsubishi;
MobileRobots; Pal Robotics
Recommended Reading: "Insights on a Future Growth Industry: An Interview With Colin Angle,
CEO, iRobot"
"Cool Vendors in Emerging Technologies, 2010"
"RoboBusiness Showcases Fledgling Mobile Robot Industry"
Social Analytics
Analysis By: Carol Rozwell
Definition: Social analytics describes the process of measuring, analyzing and interpreting the
results of interactions and associations among people, topics and ideas. These interactions may
occur on social-software applications used in the workplace, in internally or externally facing
communities, or on the social Web. "Social analytics" is an umbrella term that includes a number
of specialized analysis techniques, such as social filtering, social-network analysis, sentiment
analysis and social-media analytics.
Position and Adoption Speed Justification: Social-software vendors, such as IBM and Microsoft, have added social-analytics tools to their applications that measure adoption and growth, providing an understanding of community dynamics. The data makes individual behaviors,
content and interactions visible. Social-media monitors look for patterns in the content of
conversations across all social-media spaces. They extract actionable or predictive information
from social media and, in some cases, offline media. Jive Software's acquisition of Filtrbox is an example of a social-software platform vendor extending its social-analytics capability to include social-media monitoring.
User Advice: Organizations should ensure that their business intelligence initiatives are
positioned to take advantage of social analytics to monitor, discover and predict. Some
enterprises will be content to monitor the conversations and interactions going on around them.
Enterprises with social-software platforms that provide social analysis and reporting can use this
information to assess community engagement. They can also easily monitor what is being said
about the company, its products and the brand using simple search tools or more-sophisticated
sentiment analysis applications. The results of social analytics (e.g., discovered patterns and
connections) can be made available (possibly in real time) to the participants of the environment
from which the data was collected to help them navigate, filter and find relevant information or
people. Other enterprises will mine the social-analytics data, actively looking to discover new
insight using a wide range of business intelligence applications. At this time, the use of social-
analytics information for predictive purposes is a largely untapped source of value. Using social
analytics for prediction supports pattern-based strategies.
Business Impact: Social analytics are useful for organizations that want to uncover predictive
trends based on the collective intelligence laid open by the Internet. As an example, a biopharma
researcher could examine medical research databases for the most important researchers, first
filtering for the search terms and then generating the social network of the researchers publishing
in the biopharma's field of study. Similarly, social analytics could be used by marketers who want
to measure the impact of their advertising campaigns or uncover a new target market for their
products. They could look for behaviors among current customers or among prospects that could
enable them to spot trends (deterioration in customer satisfaction or loyalty) or behaviors
(demonstrated interest in specific topics or ideas).
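A minimal sketch of the social-network-analysis step in the biopharma example, using the open-source networkx library: build a co-authorship graph and rank researchers by degree centrality. The author names and paper list are invented for illustration.

    # Minimal sketch of co-authorship network analysis with networkx.
    import networkx as nx
    from itertools import combinations

    papers = [["Lee", "Novak"], ["Lee", "Chen", "Novak"], ["Chen", "Ortiz"]]
    G = nx.Graph()
    for authors in papers:
        G.add_edges_from(combinations(authors, 2))   # link each pair of co-authors

    # Degree centrality surfaces the most-connected researchers.
    print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))

Sentiment analysis and social-media monitoring add text analytics on top of this structural view.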
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Attensity; BuzzLogic; GalaxyAdvisors; IBM; News Patterns; Radian6; SAS;
Trampoline Systems; Visible Technologies
Recommended Reading: "Social Media Delivers Marketing Intelligence"
3D Printing
Analysis By: Pete Basiliere
Definition: 3D fabricating technologies have been available since the late 1980s and have
primarily been used in the field of prototyping for industrial design. More recently, 3D printing quality has increased, and printer and supply costs have decreased, to a level that broadens the
appeal of 3D printing to consumers and marketers.
Additive 3D printers deposit resin, plastic or another material, layer by layer, to build up a physical
model. Inkjet 3D printers image successive layers of plastic powder, hardening each layer on
contact, to build up the piece. The size of the part varies with the specific manufacturer's printer
and whether support structures are required.
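A worked sketch of the layer arithmetic behind additive building; the part height, layer thickness and deposition time are illustrative assumptions, not any manufacturer's specification.

    # Minimal sketch of slicing a part into additive layers.
    part_height_mm = 40.0
    layer_thickness_mm = 0.25                 # assumed layer height
    layers = int(part_height_mm / layer_thickness_mm)
    seconds_per_layer = 12                    # assumed deposition time
    print(layers, "layers,", layers * seconds_per_layer / 60, "minutes to build")

The layer count, and therefore the build time, scales directly with part height and inversely with layer thickness, which is why finer-resolution prints take proportionally longer.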
Position and Adoption Speed Justification: Continued quality improvements and price decreases in 3D printers and scanners mean that enterprises that need product prototypes but previously found the cost too high can now make a modest investment that streamlines their design and development programs. Multicolor 3D printers (under $20,000) and single-color, monochromatic 3D printers (approximately $10,000) are available for a wide range of
applications. 3D printing is advancing technologically but has not yet found an application or business opportunity, beyond use by product designers, engineers, architects and some schools, that will fulfill its potential.
An important tipping point was reached in 2010 when HP became the first 2D printer
manufacturer to offer its own 3D printers. The Designjet 3D printers are available in five European
markets: France, Germany, Italy, Spain and the U.K. The initial target for the machines is in-
house workgroup 3D printing, enabling multiple users to cut the time and cost of using a more
expensive external provider and improving the printer's ROI as a result. HP retained the right to
extend the distribution globally through its Imaging and Printing Group. The devices are
manufactured by Stratasys but sold and serviced by HP. Stratasys also manufactures 3D printers
under the "Dimension" brand as well as 3D production systems under the "Fortus" brand. Both
product lines, as well as the HP Designjet 3D printers, employ Stratasys' patented fused deposition modeling (FDM) technology.
As the first 2D printer manufacturer to enter the market, HP gives low-cost 3D printers credibility
and could potentially set standards that others will have to follow. 3D printers will hit the Peak of Inflated Expectations when they drop to a price point that attracts hobbyist consumers, when they are promoted for widespread use by a major printer technology provider, and/or when a "killer app" generates buzz and sustained sales.
User Advice: End users must examine 3D printing technology for product design and
prototyping, as well as for use in proposals, focus groups and marketing campaigns. Educational
institutions should use 3D printers not only in engineering and architectural courses, but also in creative arts programs (for instance, to design masks and props for a theater production).
Printer technology providers must continue research and development work while monitoring the
impact of HP's entrance on market growth.
Business Impact: Uses for 3D printers have expanded as advances in 3D scanners and 3D design tools, as well as the commercial and open-source development of additional design software, have made 3D printing practical. The cost of creating 3D models has continued to drop,
with devices available for an investment of approximately $10,000. Increasing printer shipments
will create economies of scale for the manufacturers, and when coupled with price pressure from
low-cost 3D printer kits, will continue to drive down 3D printer prices. Similarly, the supplies costs
will decrease as use increases and competitive pressures become a factor.
The commercial market for 3D print applications will continue expanding into architectural,
engineering, geospatial, medical and short-run manufacturing. In the hobbyist and consumer
markets, the technology will be used for artistic endeavors, custom or vanity applications (such as
the modeling of children, pets and gamers' avatars), and "fabbing" (the manufacture of one-off
parts). As a result, demand for scarce 3D design skills and easy-to-use consumer software tools
will explode in the consumer and business arenas.
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: 3D Systems; HP; Objet Geometries; RepRap; Solidscape; Stratasys; Z Corp.
Recommended Reading: "Cool Vendors in Imaging and Print Services, 2010"
"Emerging Technology Analysis: 3-D Printing"
Speech-to-Speech Translation
Analysis By: Jackie Fenn
Definition: Speech-to-speech translation involves translating one spoken language into another.
It combines speech recognition, machine translation and text-to-speech technology.
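The pipeline can be sketched as three composed stages, each stubbed out below; a real system would plug speech-recognition, machine-translation and text-to-speech engines into these points. All function names and the sample phrase are illustrative.

    # Minimal sketch of the speech-to-speech translation pipeline.
    def recognize_speech(audio):
        return "where is the station"         # stub: audio -> source-language text

    def translate(text, target="es"):
        return {"where is the station": "dónde está la estación"}[text]

    def synthesize_speech(text):
        print("speaking:", text)              # stub: text -> audio out

    synthesize_speech(translate(recognize_speech(b"...pcm samples...")))

The interim textual rephrasing described below would sit between the first two stages, letting the speaker confirm the recognized sentence before it is translated.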
Position and Adoption Speed Justification: A host of recent low-cost mobile phone apps from companies such as Cellictica (Trippo VoiceMagix), Jibbigo and Speechtrans is showing
the viability of speech-to-speech translation for consumer and tourist applications, where
approximate translations are often good enough. Major players, including Microsoft and IBM,
have demonstrated highly accurate speech translation in their labs. For highest accuracy,
developers need to constrain the task to a limited vocabulary, such as Spoken Translation's
system for medical professionals to converse with Spanish-speaking patients or IBM's system for
the U.S. military in Iraq. Current solutions often involve an interim stage of generating a textual
rephrasing of the spoken sentence, so the speaker can check that the system has found the right
context prior to translation into the target language. While there has been little adoption of the
technology by enterprises to date due to accuracy limitations, the availability of the low-cost
mobile consumer products may drive interest and progress for higher-end applications. For this
reason, we have moved the Hype Cycle position forward and changed the "Time to Plateau"
rating to five to 10 years (from more than 10 years in 2009 and prior years).
User Advice: Do not view automated translation as a replacement for human translation, but
rather as a way to deliver approximate translations for limited dialogues where no human
translation capability is available. Evaluate whether low-cost consumer products can help during
business travel. Leading-edge organizations can work with vendors and labs to develop custom
systems for constrained tasks.
Business Impact: Consumer mobile applications are the first to attract significant interest.
Potential enterprise applications include on-site interactions for fieldworkers, as well as
government security, emergency and social service interactions with the public. Longer term,
multinational call centers and internal communications in multinational corporations will benefit.
Internal collaborative applications may be limited by the fact that strong team relationships are
unlikely to be forged with automated translation as the only way to communicate.
The technology has the potential to be of transformational benefit in the areas where it can be
applied; however, based on current performance constraints, we are assessing the overall benefit
rating as moderate.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Cellictica; IBM; Jibbigo; Speechtrans; Spoken Translation
At the Peak
Internet TV
Analysis By: Andrew Frank; Michael McGuire
Definition: Internet TV refers to video streaming of licensed professional content (typically, TV
shows, live events and movies) over the public Internet for viewing on a PC, handset or Internet-
connected TV set. In the months since the 2009 Hype Cycle, Gartner has refined its definition of
Internet TV. We break down Internet TV into three delivery modes, with the Internet Protocol (IP)
at the base of the stack:
Level 1: Application software bound to hardware, creating a closed system embedded
within a device. Examples: iTunes, Vudu and Netflix service running on a Roku box.
Level 2: Application software (including a browser plug-in) delivered as an executable
and bound to a device's operating system. Examples: Hulu desktop player (as distinct
from the browser-based Hulu service), Boxee, iPlayer and HBO Go.
Level 3: Software (via IP delivery) bound to an open browser. Examples would include
Google's YouTube, Break.com and Boxee's base-level offering.
The focus of Internet TV is to deliver a TV-like video experience (full-screen, long-form or
professional content) using the Internet in place of conventional broadcast channels. The
motivation for isolating this usage is that such a capability is particularly disruptive to the
incumbent broadcast model for TV service delivery, which is based on limited access to a
licensed broadcast spectrum (for over-the-air, or terrestrial, delivery) and proprietary
telecommunications channels (cable, satellite and Internet Protocol television [IPTV]). In most
regions, these channels are constrained by "must carry" regulations to transmit the programming
of terrestrial broadcasters (with a few exceptions). Internet TV, in contrast, does not have must-
carry rules, compulsory licensing or most of the other regulatory restrictions of broadcasting,
creating difficult choices for broadcasters and content license holders.
Position and Adoption Speed Justification: This profile is one of the better examples of the disruptive power of the Internet; our position is essentially an average of the three levels noted in the definition section. We expect that, by next year's Hype Cycle, we will be redefining "Internet
TV" yet again as Gartner expects the market to continue to fragment and reshape in hard-to-
predict ways as it matures. We have reduced Internet TV's benefit rating to "high." While Internet TV is likely to drive changes in TV businesses, the way the TV business is monetized and consumed is, in many respects, likely to continue to resemble historic patterns. When
Internet TV first began to appear around 2005, many television service providers assumed that
professional high-definition television (HDTV) would create enough of a quality gap compared
with online video that the business models supporting both could coexist without much friction.
Over the next few years, however, a combination of technological advancement and surprising
consumer appetite for Internet TV has undermined this assumption.
Consumer interest in Internet TV has accelerated over the past few years with the rapid rise of
services such as Hulu and TV.com in the U.S. and BBC iPlayer in the U.K. The issue remains,
however, that most consumers prefer to watch long-form video on TV rather than on a PC,
especially given the parallel adoption of HDTV. In 2008, a number of new over-the-top set-top
boxes (OTT STBs) began to hit the market to bridge the gap between the Internet and TV.
However, so far, indications are that third-party dedicated OTT STBs are unlikely to enjoy wide
consumer adoption without a significant investment in marketing (such as Apple or Google might
apply) and are likely to be superseded or at least supplemented by other methods, including
Internet content gateways provided by TV service providers (using closed STBs), broadband-
connected TVs, and HDMI connections to portable devices such as media tablets or net PCs (or
possibly new hybrid devices with the form factor of a smartphone).
Such innovations pose challenges for the existing licensing arrangements under which current
Internet TV services acquire "Internet rights," as distinct from television rights. As TVs gain more
access to licensed Internet programming, channel conflicts arise. For example, cable operators
and other television service providers typically pay cable networks carriage fees to carry their
channels, and they have made it clear that offering the same programming for free over the
Internet will force them to reconsider these payments. These concerns have led to proposals to
extend conditional access for pay-TV subscribers to Internet TV, such as the "TV Everywhere"
program, which is being promoted by a large number of TV service providers in the U.S.
Meanwhile, consumer consumption of free video online continues to grow and raise concerns
that, under recessionary pressure, consumers may begin to drop expensive cable, satellite and
IPTV packages and revert to a diet of free-to-air broadcast and Internet TV (with perhaps a video-
on-demand service like Netflix thrown in).
As a result, while the technology is currently moving toward the Peak of Inflated Expectations,
many in the industry see trouble ahead, and pressure is building to resolve the business and
technical issues before the disruption becomes irreversible.
User Advice:
Content owners should keep a close watch on the evolving business models and get
intellectual property rights management (IPRM) in order to maximize syndication
opportunities.
Broadcasters and other content licensees should negotiate for complete rights packages
rather than argue over Internet, TV and mobile rights distinctions.
TV, Internet and triple-play service providers should consider the implications of Internet
TV and proactively pursue multichannel conditional access models.
All parties, as well as regulators, should recognize the significance of the net neutrality
debate, as consumer demand for Internet video quality grows and strains existing
Internet delivery infrastructures.
Business Impact: Unless redirected by concerted efforts, Internet TV will ultimately collide with
cable, satellite and IPTV services and, potentially, the DVD market. Cable networks and other
broadcasters that rely on carriage revenue are also vulnerable. Although the detriments to certain
of these businesses may outweigh the benefits, for many broadcasters, content providers and
advertisers, Internet TV opens new opportunities and markets. It also has the potential to drive demand for new devices that capitalize on the growing consumer appetite for Internet TV.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Amazon.com; Apple; BBC; Fancast; Hulu; NeuLion/JumpTV
Recommended Reading: "The Race to Dominate the Future of TV"
"Two Roads to TV 2.0"
"U.S. Copyright Office Signals New Era of Deregulation for Video Licensing"
Private Cloud Computing
Analysis By: Thomas Bittman; Donna Scott
Definition: Cloud computing is a style of computing in which scalable and elastic IT-enabled
capabilities are delivered as a service to customers using Internet technologies. In the broadest
terms, private cloud computing is a form of cloud computing in which service access is limited,
and/or the customer has some control/ownership of the service implementation. This contrasts
with public cloud computing, where access to the service is completely open, and the service
implementation is completely hidden from the customer. For our purposes here, the focus will be
on private cloud computing that is internal to an organization; in other words, the customer has control/ownership of the service, and service access is limited to the internal organization.
However, two other variants of private cloud computing (not discussed here) are community cloud
computing (in which a third-party provider offers services to a limited set of customers) and virtual
private cloud computing, in which a third-party provider offers the services, but the customer has
some control over the implementation, usually in terms of limiting hardware/software sharing.
Organizations building a private cloud service are trying to emulate public cloud-computing
providers, but within their control and on-premises. In most cases, this is based on a virtualization
foundation, but private cloud computing requires more (see "The Architecture of a Private Cloud
Service"), including standardization, automation, self-service tools and service management,
metering, and chargeback, to name a few. Many of these technologies are still evolving, and early
deployments often require custom tools. Regardless, the biggest challenges with private cloud
computing tend to be process-related, cultural, political and organizational.
Unlike public cloud providers, which maintain a small number of offered services, enterprises
have many complex and interrelated services to deliver. A private cloud-computing service can fit
within a broader portfolio of services delivered by a real-time infrastructure.
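A minimal sketch of the self-service and chargeback mechanics described above: a catalog of standardized offerings, a provisioning call, and a metering ledger. The offering names and rates are invented for illustration and do not correspond to any product's API.

    # Minimal sketch of self-service provisioning with metered chargeback.
    CATALOG = {"small-vm": {"cpus": 1, "ram_gb": 2, "rate_per_hour": 0.08}}
    ledger = []                               # usage records for chargeback

    def provision(user, offering, hours):
        spec = CATALOG[offering]              # standardized stack, not a custom build
        ledger.append({"user": user, "offering": offering,
                       "cost": hours * spec["rate_per_hour"]})
        return spec

    provision("dev-team", "small-vm", hours=72)
    print(sum(item["cost"] for item in ledger))   # 5.76 charged back to consumers

Standardizing the catalog is what makes automation and chargeback tractable; ad hoc configurations defeat both.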
Position and Adoption Speed Justification: Although some of the technologies required for
private cloud computing exist, many do not, or are immature. Many early examples of private
cloud-computing services are focused on development and test provisioning. However, the
private cloud has become a marketing buzzword for most of the largest IT vendors, and many
new products have shipped or will be shipped in 2010 to address technology gaps. Since private
cloud computing is a natural evolution for the hot server virtualization trend, no vendor wants to
miss the "next big thing." The hype is already tremendous, and it's going to increase during the
next year.
Enterprise interest is already high, with 76% of respondents in a recent poll saying they plan to
pursue a private cloud-computing strategy by 2012 (see "Private Cloud Computing Plans From
Conference Polls").
User Advice:
Let service requirements lead your private cloud-computing plans, rather than
technologies (see "Getting Started With Private Cloud: Services First").
Create a business case for developing a full private cloud service, versus using public
cloud services, or modernizing established architectures.
Consider the long-term road map for your private cloud service (see "Private Cloud
Computing: The Steppingstone to the Cloud"). Build with the potential to take advantage
of hybrid sourcing (using both your private cloud services and public cloud services) at some point in
the future.
Start slowly with development/test lab provisioning; short-term, low-service-level
agreement computing requests; and simple, non-mission-critical Web services (e.g.,
self-service requests and dynamic provisioning for Web environments). Pilot a private
cloud implementation to gain support for shared services and to build transparency in IT
service costing and chargeback.
Implement change and configuration management processes and tools prior to
implementing private cloud services to ensure that you can standardize on the software
stacks to be delivered through self-service provisioning and adequately maintain them.
Business Impact: Most private cloud implementations will evolve from a virtualization
foundation. Virtualization reduces capital costs, but private cloud computing will reduce the cost
of operations and enable faster service delivery. It's primarily attractive to the business, because
it enables agility: self-service ordering of frequently requested services, as well as dynamic
provisioning. Test lab provisioning is an early example of a private cloud service that enables
testers to improve time-to-market and efficiencies, while labor costs associated with provisioning
are reduced.
Private cloud computing also changes the relationship between the business and IT, transforming
how IT is consumed. The shift to services (rather than implementation and assets), pay-per-use
and chargeback enables the business to focus on rapidly changing service requirements and
consuming IT based on variable costs, while IT can focus on efficient implementation and
sourcing (including the potential to leverage public cloud services in the future, without negatively
affecting the business).
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Abiquo; Adaptive Computing; BMC; CA; DynamicOps; Elastra; Eucalyptus;
HP; IBM; newScale; Novell; Surgient; VMLogix; VMware
Recommended Reading: "The Architecture of a Private Cloud Service"
"Private Cloud Computing: The Steppingstone to the Cloud"
"The Spectrum of Public-to-Private Cloud Computing"
"Getting Starting With Private Cloud: Services First"
"Building Private Clouds With Real-Time Infrastructure Architectures"
"Q&A: The Many Aspects of Private Cloud Computing"
"Private Cloud Computing Plans From Conference Polls"
Augmented Reality
Analysis By: Tuong Nguyen; Jackie Fenn; CK Lu
Definition: Augmented reality (AR) is a technology that superimposes graphics, audio and other
virtual enhancements over a live view of the real world. It is this "real world" element that
differentiates AR from virtual reality. AR aims to enhance users' interaction with the environment,
rather than separating them from it. The term has existed since the early 1990s, when it
originated in aerospace manufacturing.
Position and Adoption Speed Justification: Although AR has existed for the past two decades,
it has become practical in the mobile space only in the past year. The maturity of a number of
mobile technologies such as GPS, cameras, accelerometers, digital compasses, broadband,
image processing and face/object recognition software has made AR a viable technology on
mobile devices. As all these technologies converge in maturity, AR has also benefited from a
growing number of open operating systems (promoting native development), the increasing
popularity of application stores (increasing awareness and availability of applications), and the
rising availability of overlay data, such as databases, online maps and Wikipedia. The
combination of these features and technologies also allows AR to be used in a number of
different applications, including enhancing user interfaces, providing consumers with information
and education, offering potential for marketing and advertising, and augmenting games and
entertainment applications.
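A minimal sketch of the core geometry behind these mobile AR browsers: compute the bearing from the user's GPS fix to a point of interest, compare it with the compass heading, and map the difference onto the camera's field of view. The coordinates, heading and field-of-view values are illustrative assumptions.

    # Minimal sketch of placing a point-of-interest label on screen.
    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        d_lon = math.radians(lon2 - lon1)
        lat1, lat2 = math.radians(lat1), math.radians(lat2)
        y = math.sin(d_lon) * math.cos(lat2)
        x = (math.cos(lat1) * math.sin(lat2) -
             math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
        return math.degrees(math.atan2(y, x)) % 360

    heading, fov, width_px = 90.0, 60.0, 480      # compass heading, camera FOV
    poi_bearing = bearing_deg(48.8584, 2.2945, 48.8606, 2.3376)
    offset = (poi_bearing - heading + 180) % 360 - 180
    if abs(offset) < fov / 2:                     # POI lies within the camera view
        print("draw label at x =", int(width_px * (0.5 + offset / fov)))

GPS, compass and accelerometer errors all feed directly into where the label lands, which is why overlay jitter remains a visible problem on current devices.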
We expect AR's visibility and the hype surrounding AR to increase in the next two years. For
example, the AR browser vendor Layar boasts more than 700,000 active users. The vendor is
working with LG (to preload its application on new Android devices) and Samsung (to be
supported on bada). Nokia is also promoting AR via its Point & Find solution.
Despite the hype and potential, a number of factors will slow adoption. Device requirements for
AR in mobile devices are rigorous and will restrict usage to higher-end devices. Furthermore,
although mobile services provide a great use case for AR, the experience is restricted for the
aforementioned reason. Mobile devices have smaller screens than other consumer electronics
devices, such as laptops and even handheld gaming consoles, restricting the information that can
be conveyed to the end user. The interface (a small handheld device that needs to be held in
front of you) limits usage to bursts, rather than continued interaction with the real world. GPS
technology also lacks enough precision to provide perfect location data but can be enhanced by
hardware such as accelerometers and gyroscopes or magnetometers. As with other location-
based services (LBSs), privacy is a potential concern and a hindrance to adoption. As a newer
solution, AR also has compatibility issues. Competing AR browsers use proprietary APIs and data structures, making the AR information from one vendor's browser incompatible with that from other browsers.
User Advice:
Communications service providers (CSPs): Examine whether AR would enhance the
user experience of your existing services. Compile a list of AR developers with which
you could partner, rather than building your own AR from the ground up. Provide end-to-
end professional services for specific vertical markets, including schools, healthcare
institutions and real-estate agencies, in which AR could offer significant value. A
controlled hardware and software stack from database to device will ensure a quality
user experience for these groups. Educate consumers about the impact of AR on their
bandwidth, to avoid being blamed for users going over their data allowance.
Mobile device manufacturers: Recognize that AR provides an innovative interface for
your mobile devices. Open discussions with developers about the possibility of
preinstalling application clients on your devices, and document how developers can
access device features. Build up alliances with AR database owners and game
developers to provide exclusive AR applications/services for your devices. Secure
preloading agreements and examine how you could integrate AR into your user
interfaces or operating systems.
AR developers: Take a close look at whether your business model is sustainable, and
consider working with CSPs or device manufacturers to expand your user base, perhaps
by offering white-label versions of your products. Integrate AR with existing tools, such
as browsers or maps, to provide an uninterrupted user experience. Build up your own
databases to provide exclusive services through AR applications. Extend your AR
application as a platform that individual users and third-party providers can use to create
their own content. Explore how to apply AR through different applications/services to
improve the user experience, with the aim of predicting what information users need in
different contexts.
Providers of search engines and other Web services: Get into AR as an extension of
your search business. AR is a natural way to display search results in many contexts.
Mapping vendors: Add AR to your 3D map visualizations.
Early adopters: Examine how AR can bring value to your organization and your
customers by offering branded information overlays. For workers who are mobile
(including factory, warehousing, maintenance, emergency response, queue-busting or
medical staff), identify how AR could deliver context-specific information at the point of
need or decision.
Business Impact: AR browsers and applications will be the focus area of innovation and
differentiation for players in the mobile device market in 2010. There are interesting branding
opportunities for companies and businesses. Points of interest (POIs) can be branded with a
"favicon" (that is, a favorites or website icon) that appears when the POI is selected. Companies
such as Mobilizy are offering white-label solutions that allow core Wikitude functionality to be
customized. AR products such as Wikitude can lead to numerous LBS advertising opportunities.
CSPs and their brand partners can leverage AR's ability to enhance the user experience within
their LBS offerings. This can provide revenue via a la carte charges, recurring subscription fees
or advertising. Handset vendors can incorporate AR to enhance user interfaces, and use it as a
competitive differentiator in their device portfolio. The growing popularity of AR opens up market
opportunity for application developers, Web service providers and mapping vendors to provide
value and content to partners in the value chain, as well as opportunity for CSPs, handset
vendors, brands and advertisers.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: GeoVector; Layar; Mobilizy; Nokia; Tonchidot
Recommended Reading: "Emerging Technology Analysis: Augmented Reality Shows What
Mobile Devices Can Do"
"Cool Vendors in Consumer Mobile Cloud Services, 2010"
"Context-Aware Computing: Context Drives Next-Generation User Interfaces"
Media Tablet
Analysis By: Van Baker; Angela McIntyre; Roberta Cozza
Definition: A media tablet is a device based on a touchscreen display (typically with a multitouch
interface) whose primary focus is the consumption of media. Examples of media include Web
pages, music, video and games. The device can also facilitate content entry via an on-screen
keyboard, a hardware-based slide-out keyboard, or one that is part of a clamshell design. The
device has a screen with a diagonal dimension that is a minimum of five inches and may include
screens that are as large as is practical for handheld use, roughly up to 15 inches. The media
tablet runs an operating system that is more limited than, or a subset of, traditional fully featured operating systems, such as Windows 7. Alternatively, it may be a closed operating
system under the control of the device manufacturer; examples include Android, Chrome and
Apple's iOS 4. The media tablet features wireless connectivity with either Wi-Fi, WAN or both, a
long battery life and lengthy standby times with instant-on access from a suspended state.
Examples of media tablets are the Apple iPad and the Joo Joo tablet.
Position and Adoption Speed Justification: It is tempting to assume that, with the launch of the iPad and all the hype leading up to it, the media tablet is at or beyond the Peak of Inflated Expectations, but this is not the case. Many products are planned for launch later in 2010, and momentum seems to be building around Android in addition to Apple's iPad. In
addition, at Computex, we saw tablets running Windows 7, Windows-Embedded and MeeGo.
Given the impending launch of platforms that compete with the iPad in the media tablet market, there is plenty more competition and hype to come. Additionally, this
device category has the potential to be disruptive to handheld consumer electronics, especially e-
readers and portable media players, as well as to the personal computer market. One criticism of
media tablets is the issue of unintended touches, but there are technologies that can address this problem by preventing stray touches from being recognized as input.
User Advice: The media tablet offers an attractive alternative to mini-notebooks and ultra-thin and light notebooks for consumers who are more focused on content consumption than content creation. As the technology improves, the media tablet should offer stronger capabilities for content creation, such as photo and video editing. In the three-year time frame, media tablets will be used in business mainly for customer-facing roles, for example, by salespeople giving presentations to clients, by realtors and by executives. They will be managed by enterprises in a manner similar to smartphones.
The adoption of multitouch technology in both the smartphone and media tablet categories will
bring multitouch adoption and frequent usage to two of the most popular devices that consumers
carry. This should accelerate the adoption of both smartphones and media tablets. The
proliferation of multitouch on consumer devices will put additional pressure on the PC industry to
offer multitouch technologies in all-in-one desktops, mini-notebooks, notebooks, primary and
secondary displays. This disruption could extend not only to multitouch controls, but also to the
industrial design of all the product categories mentioned. The adoption of tablet PCs, which run a
full version of an OS, such as Windows 7, and other PCs with touch will be stifled by the lack of
productivity software that incorporates touch in ways that significantly enhance the user
experience or improve productivity.
Manufacturers in the consumer electronics, personal computing and communications industries should offer incentives for software developers to build applications designed around touch that create a superior user experience and make content creation easier. Manufacturers can help
drive forward standards for touch interfaces to ensure consistency across applications and
platforms. They should move aggressively with new media tablet and consumer electronic
product designs, but be cautious about offering PCs with touch for business use due to the lack of
touch-based productivity software.
Enterprise IT architects should prepare for employees to bring consumer media tablets into the enterprise, and should apply the managed diversity model to these devices. Media tablets
should also be considered for business-to-consumer applications for delivering content, providing
high-quality graphics in a sales situation and/or where customer engagement is required for
navigating through marketing materials.
Business Impact: The adoption and use of multitouch media tablets in addition to smartphones
has the potential to disrupt the overall computing industry including product design centers,
software, controls and user interface design. If the most commonly used consumer devices are
driven by simple user interfaces with touch controls, then there will be pressure on the traditional
personal computing market to move away from the mouse and keyboard design center that has
been the central control model since the graphical user interface arrived. Media tablets in
conjunction with smartphones have the potential to fundamentally change the personal computer
use model in the longer term. This impact extends to the user interface and performance
expectations, such as instant-on. Media tablets will also affect handheld devices in applications
where small screens are a serious constraint.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Apple; Dell; Lenovo
Wireless Power
Analysis By: Jim Tully; Steve Ohr
Definition: A wireless power supply facilitates the charging or direct powering of electrical and
electronic equipment using inductive or radio frequency (RF) energy transfer. Inductive systems
are preferred for short-range wireless power transfer (a few centimeters) and can provide high
levels of power of several hundred watts or more. RF power transfer operates over longer
distances (tens or hundreds of meters or more) and provides more modest levels of power (a few
milliwatts). Inductive systems are therefore more suited for PCs and the fast charging of mobile
devices, while RF power is more applicable to radio frequency identification (RFID), sensor
networks and trickle-charging of cell phones.
In its most basic forms, inductive power has been in use for many years; for example, in electric toothbrushes. The focus today is on more flexible, efficient and addressable forms of the
technology using resonance techniques. Most users of mobile electronic devices find battery
charging to be a real annoyance. It is inconvenient and different chargers are required for
different types of equipment. The idea of wireless charging is clearly attractive and several
solutions have recently been demonstrated. For example, wireless charging schemes are being
designed for use in countertop surfaces and similar environments that will charge a mobile device
when it is placed onto the surface.
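The resonance techniques mentioned above work by tuning the transmitting and receiving coils to the same natural frequency; for a coil of inductance L paired with capacitance C, that frequency is given by the standard relationship (textbook physics, not a vendor specification):

    f_0 = \frac{1}{2\pi\sqrt{LC}}

Energy transfers efficiently only when both coils share approximately the same f_0, which is what lets resonant systems deliver useful power over somewhat larger separations than plain induction.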
Position and Adoption Speed Justification: Adoption of the technology for mobile devices or
PCs requires a degree of standardization. The Wireless Power Consortium is addressing this
issue, and considerable progress is being made. A bigger obstacle is the question of why mobile equipment makers (such as handset vendors) would be interested in this technology. Cell phone makers have recently agreed on a set of standards for chargers, and this could set back the aspirations of wireless power vendors in this area. Prominent discussion of this technology continues. Nevertheless, we saw no clear signs of market adoption over the past year, and we therefore leave the technology in the same location on the Hype Cycle as last year.
User Advice: Technology planners in organizations with many users of mobile devices should
evaluate the benefits of this technology as it becomes available. Vendors of mobile devices,
batteries and power infrastructure (such as chargers) should evaluate the alternatives and decide
on their position in relation to this technology. Users should investigate this if they need small
levels of trickle-charging for equipment where it is difficult or impossible to connect a physical
supply; for example, sensor networks and Bluetooth low-energy devices.
Business Impact: The technology is applicable to a wide range of business and consumer
situations. Some environments require mobile devices to be charged at all times, and wireless
charging is particularly suited to those situations.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Fulton Innovation; MIT; Nokia; Powercast
3D Flat-Panel TVs and Displays
Analysis By: Paul O'Donovan
Definition: Four technologies are currently used to show 3D images on flat-panel TVs and
displays: anaglyphic 3D, which uses red/cyan glasses; polarization 3D, which uses polarized
glasses but is not suitable for most TV displays; alternate-frame sequencing, which uses active
shutter glasses and requires high refresh rates on displays; and autostereoscopic displays, which
use lenticular or barrier lenses built into the display panel and which do not require glasses to
view the 3D effect.
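The refresh-rate demand of alternate-frame sequencing follows from simple arithmetic: the panel alternates left-eye and right-eye frames, so each eye sees only half the panel's refresh rate. As a worked example (standard arithmetic, not tied to any particular product):

    f_{\text{eye}} = \frac{f_{\text{panel}}}{2}, \qquad \frac{120\ \text{Hz}}{2} = 60\ \text{Hz per eye}

A panel must therefore run at 120 Hz or more simply to present each eye with conventional 60 Hz video.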
Position and Adoption Speed Justification: The revival of 3D movies has enthused the
cinema-going public, so display manufacturers have been looking into providing 3D display
technology for the home. However, bringing the technology to the consumer TV market is proving
more difficult than the successes in the cinema. There is a lack of available 3D content, and
differing display technologies (most of which still use glasses) are confusing many consumers at
this nascent stage. For example, several TV manufacturers had been offering 3D-ready Digital
Light Processing (DLP) rear-projection TVs, but then pulled out of the rear-projection TV market
in favor of flat-panel LCD TVs because of their wider mass-market appeal. Currently, the most popular method of displaying 3D content on LCD and plasma TVs and displays requires the user to wear electronic shutter glasses. However, no standard has been developed for
the technology in these electronic glasses, so they are all proprietary, and one manufacturer's
product cannot be used with another manufacturer's display. Some form of standardization would certainly help user take-up of 3D TVs and displays, and the Society of Motion Picture and
Television Engineers (SMPTE) has set up a task group to create such a standard by 2011.
However, this standard is not designed to cover the display technologies or electronic glasses
needed to view alternate-frame sequencing systems that use electronic shutter glasses.
Standards apart, the fact that glasses are needed at all is probably the biggest stumbling block to
mass-market adoption, along with a lack of content with a wide audience appeal. Film success
stories such as Avatar are few and far between.
User Advice: 3D TVs and displays will do well in the video game market, and there are already
some systems available that offer alternate-frame sequencing and glassless 3D single-user
viewing. However, the development of 3D games is still in its infancy, with only a small number
being released so far.
Business Impact: The development of 3D TVs and displays for the consumer market could well
benefit computer-aided design applications, as well as medical imaging, especially in medical
schools and training institutions. Some industrial applications already use 3D image capturing,
and 3D displays using lenticular lenses are used for public digital signage.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Hyundai; JVC; LG Display; Panasonic; Samsung Electronics; Sony
4G Standard
Analysis By: Sylvain Fabre
Definition: A fourth-generation (4G) worldwide standard being developed for a next-generation
local- and wide-area cellular platform is expected to enter commercial service between 2012 and
2015. International Mobile Telecommunications-Advanced (IMT-A) is now called 4G. The
development effort involves many organizations: the International Telecommunication Union
Radiocommunication Sector (ITU-R); the Third Generation Partnership Project (3GPP) and
3GPP2; the Internet Engineering Task Force (IETF); the Wireless World Initiative New Radio
(WINNER) project, a European Union research program; telecom equipment vendors; and
network operators.
Agreement on the initial specification has yet to be reached, but discussions point to some key
characteristics. These include: support for peak data transmission rates of 100 Mbps in WANs
and 1 Gbps in fixed or low-mobility situations (field experiments have achieved 2.5 Gbps);
handover between wireless bearer technologies such as code division multiple access (CDMA)
and Wi-Fi; purely Internet Protocol (IP) core and radio transport networks for voice, video and
data services; and support for call control and signaling. Many technologies are competing for
inclusion in the 4G standard, but they share common features such as orthogonal frequency
division multiplexing (OFDM), software-defined radio (SDR) and multiple input/multiple output
(MIMO). 4G technology will be all-IP and packet-switched.
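The role of MIMO and wide channels in reaching these peak rates can be seen from the standard capacity-scaling approximation, where N_t and N_r are the transmit and receive antenna counts and B is the channel bandwidth (an illustrative textbook relationship, not part of the specification):

    C \approx \min(N_t, N_r)\, B \log_2(1 + \mathrm{SNR})

For example, two spatial streams over a 100 MHz aggregated channel at 5 bit/s/Hz per stream yield roughly 2 x 100 MHz x 5 = 1 Gbps, which is why MIMO and wide, aggregated carriers feature in all the leading 4G candidates.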
In addition, we believe that the network architecture will be radically different from today's
networks. In particular, it will include an all-IP, low-latency, flat architecture and integration of
femtocells and picocells within the macrolayer.
Position and Adoption Speed Justification: The 4G standard is still in the early stages of
development and has to incorporate a wide range of technologies. But these are not the only
reasons why its introduction is some way off. Deployments of High-Speed Downlink Packet
Access (HSDPA), High-Speed Uplink Packet Access (HSUPA) and Long Term Evolution (LTE)
technology will extend the life of third-generation (3G) infrastructure for voice and, to some extent,
for data. Also, network operators will want to receive a worthwhile return on 3G investments
before moving to 4G. Then there is the problem of how to provide adequate backhaul capacity
cost-effectively; this is already difficult with the higher data rates supported by High-Speed Packet
Access (HSPA), and it will become harder with 4G. Ultra Mobile Broadband (UMB), which, unlike wideband code division multiple access (WCDMA) and HSPA, is not being assessed for 4G, had been under consideration as a next-generation mobile standard in the U.S. and parts of Asia and Latin America, but it failed to gain a foothold and will not be widely adopted. At this point, LTE-Advanced (LTE-A) appears to be the clear leader for 4G, with 802.16m a possible distant contender. WiMAX had been considered but is no longer in the race, although Sprint in the U.S. has been advertising WiMAX as 4G, even though this is not strictly accurate, and WiMAX is losing momentum to time division LTE (TD-LTE).
User Advice: It is too soon to plan for 4G. Instead, monitor the deployment and success of 3G
enhancements such as HSDPA, HSUPA, High-Speed Packet Access Evolution (HSPA+) and
LTE, as these need to provide a worthwhile return on investment before network operators will
commit themselves to a new generation of technology. Carriers will also need to ensure
interoperability with today's networks, as backward-compatibility might otherwise be an issue.
Additionally, carriers should recognize that the cost of deploying and operating an entirely new
4G network might be too high to justify, unless a different network business model is found,
including, for example, network sharing and femtocells.
Business Impact: The business impact areas for 4G are high-speed, low-latency
communications, multiple "pervasive" networks and interoperable systems.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Alcatel-Lucent; Ericsson; Fujitsu; Huawei; Motorola; NEC; Nokia Siemens
Networks; ZTE
Recommended Reading: "Magic Quadrant for LTE Network Infrastructure"
"Dataquest Insight: LTE and Mobile Broadband Market, 1Q10 Update"
"Emerging Technology Analysis: Self-Organizing Networks, Hype Cycle for Wireless Networking
Infrastructure"
"Dataquest Insight: IPR Issues Could Delay Growth in the Long Term Evolution Market"
Activity Streams
Analysis By: Nikos Drakos
Definition: An activity stream is a publish/subscribe notification mechanism that provides
frequent updates to subscribers about the activities or events that relate to another individual. It
can be a feature of social-networking environments that enables users to keep track of the
activities of others or a separate service that aggregates activities across multiple sites or
applications. For example, the scope of the activities covered may be limited to those undertaken
within a particular environment (such as profile changes, new connections between users and
message posts), or it may be much broader and include activities in other environments into
which the activity stream has access (such as Twitter posts, general blog posts, uploads of
pictures in Flickr, or a transaction in a business application). Activity streams are also an example
of the more general category of "personal subscriptions."
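A minimal sketch of the publish/subscribe mechanism underneath an activity stream; the class and method names are invented for illustration and do not correspond to any vendor's product.

    # Minimal sketch of an activity-stream publish/subscribe hub.
    from collections import defaultdict

    class ActivityHub:
        def __init__(self):
            self.followers = defaultdict(set)     # actor -> set of subscribers
            self.inboxes = defaultdict(list)      # subscriber -> received events

        def subscribe(self, follower, actor):
            self.followers[actor].add(follower)

        def publish(self, actor, event):
            for follower in self.followers[actor]:
                self.inboxes[follower].append((actor, event))

    hub = ActivityHub()
    hub.subscribe("ana", "raj")
    hub.publish("raj", "updated profile")
    hub.publish("raj", "posted a link")
    print(hub.inboxes["ana"])                     # ana's stream of raj's activities

Because recipients choose whom to follow, the filtering burden shifts from the sender (as with e-mail) to the subscriber, which is the property discussed under User Advice below.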
Position and Adoption Speed Justification: Activity streams are available and popular on
social-networking sites such as Facebook (News Feed and Mini-Feed) and other consumer
services (such as FriendFeed, Spokeo and Plaxo Pulse) where the focus is specifically on user
activity aggregation based on information retrieved from other services where the user has
accounts or contacts. Users can control how much of their activity stream is available to other
users. Activity streams are also beginning to be used in business environments through products
such as Socialtext Signals, NewsGator, Cubetree from SuccessFactors, Pulse from Novell, and
Chatter from salesforce.com. Some enterprise social software products are beginning to
incorporate such functionality, including IBM Lotus Connections, Microsoft SharePoint and Jive
Social Business Software. Activity streams have the potential to become a general-purpose
mechanism for any kind of information (including transactional or business events) aggregation
and personalized delivery (exemplified in specialist products such as that from Socialcast).
User Advice: Tools that help individuals to expand their "peripheral vision" with little effort can be
very useful. Being able to choose to be notified about the ideas, comments or activities of others
on the basis of who they are or your relationship with them is a powerful mechanism for
managing information from an end user's perspective. Unlike e-mail, with which the sender may
miss interested recipients or overload uninterested ones, publish and subscribe notification
mechanisms such as activity streams allow recipients to fine-tune and manage more effectively
the information they receive. At this stage, it is important to understand the consumer activity
aggregation services. As enterprise equivalents become available, more detailed assessments
are possible, both in terms of their contribution to collaborative work and in terms of their
relevance as a general-purpose information access and distribution mechanism. It is also
important to examine the plans of the collaboration or social software vendors with which you are
already working. One aspect of enterprise implementations that will require particular caution is
the richness of privacy controls that enable users to manage who sees what information about
their activities, as well as the ability to comply with local privacy regulations.
Business Impact: There is an obvious application of activity streams in managing dispersed
teams or in overseeing multiparty projects. Regular updates of status changes that are collected
automatically as individuals interact with various systems can keep those responsible up to date,
as well as keep different participants aware of the activities of their peers. Activity streams can
help a newcomer to a team or to an activity to understand who does what and, in general, how
things are done. Activity streams, over time, are likely to become a key mechanism in business
information aggregation and distribution.




Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: bluekiwi; Drupal; Facebook; FriendFeed; IBM; Jive; Microsoft; NewsGator;
Novell; Plaxo; Reply; salesforce.com; Socialcast; Socialtext; WordPress; Yammer
Recommended Reading: "Twitter for Business: Activity Streams Are the Future of Enterprise
Microblogging"
"Case Study: Social Filtering of Real-Time Business Events at Stratus With Salesforce.com's
Chatter"
Cloud Computing
Analysis By: David Mitchell Smith
Definition: Gartner defines "cloud computing" as a style of computing where scalable and elastic
IT-enabled capabilities are delivered as a service using Internet technologies.
Position and Adoption Speed Justification: Users are changing their buying behaviors.
Although it is unlikely that they will completely abandon on-premises models, or that they will
soon buy complex, mission-critical processes as services through the cloud, there will be a
movement toward consuming services in a more cost-effective way. As expected of something
near the Peak of Inflated Expectations, there is deafening hype around cloud computing. Every IT
vendor has a cloud strategy, although many aren't cloud-centric. Variations, such as private cloud
computing and hybrid approaches, compound the hype and demonstrate that one dot on a Hype
Cycle cannot adequately represent all that is cloud computing. Cloud computing has moved just
past the Peak and will likely spend some time in the future in the Trough of Disillusionment.
Subjects that generate as much hype rarely skip through the Trough quickly.
User Advice: Vendor organizations must begin to focus their cloud strategies around more-
specific scenarios, and unify them into high-level messages that encompass the breadth of their
offerings. User organizations must demand road maps for the cloud from their vendors today.
Users should look at specific usage scenarios and workloads, map their view of the cloud to
that of any potential providers, and focus on specifics rather than on general cloud ideas.
Business Impact: The cloud-computing model is changing the way the IT industry looks at user
and vendor relationships. As service provision (a critical aspect of cloud computing) grows,
vendors must become, or partner with, service providers to deliver technologies indirectly to
users. User organizations will watch portfolios of owned technologies decline as service portfolios
grow. The key activity will be to determine which cloud services will be viable, and when.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Amazon.com; Google; Microsoft; salesforce.com; VMware
Recommended Reading: "Key Issues for Cloud Computing, 2010"
"The What, Why and When of Cloud Computing"




Cloud/Web Platforms
Analysis By: Gene Phifer; David Mitchell Smith
Definition: Cloud/Web platforms use Web technologies to provide programmatic access to
functionality on the Web, including capabilities enabled not only by technology, but also by
community and business aspects. This includes, but is not limited to, storage and computing
power. We use the terms "Web platform" and "cloud platform" interchangeably, and sometimes
use the term "Web/cloud platforms." They have ecosystems similar to traditional platforms, but
Web platforms are emerging as a result of market and technology changes collectively known as
"Web 2.0." These platforms will serve as broad, general-purpose platforms, but, more specifically,
they will support business flexibility and speed requirements by exploiting new and enhanced
forms of application development and delivery. Web platforms reuse many of the capabilities and
technologies that have been accessible on websites for more than a decade through browsers by
adding programmatic access to the underlying global-class capabilities. Reuse is occurring via
Web services, and is being delivered via Web-oriented architecture (WOA) interfaces, such as
representational state transfer (REST), plain old XML (POX) and Really Simple Syndication
(RSS). In addition to the capabilities of Web 2.0, these platforms provide programmatic access to
cloud-computing capabilities. The public API phenomenon has taken WOA beyond consumer
markets (e.g., Twitter) into enterprise B2B integration.
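As a concrete illustration of the WOA/REST style described above, the sketch below fetches a resource representation with a plain HTTP GET using only the Python standard library. The base URL and resource names are hypothetical; a real Web platform such as Amazon S3 would additionally require authentication and request signing.

```python
import json
import urllib.request

# Hypothetical RESTful endpoint: real Web platforms expose similar
# resource-oriented URLs, usually behind signed, authenticated requests.
BASE_URL = "https://api.example.com/v1"

def get_resource(collection, resource_id):
    """Fetch one resource representation via a plain HTTP GET."""
    url = f"{BASE_URL}/{collection}/{resource_id}"
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Usage, assuming the endpoint existed:
# order = get_resource("orders", "12345")
```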
Position and Adoption Speed Justification: The use of Web/cloud platforms is happening in
consumer markets. In addition, the concepts are apparent in enterprises' use of service-oriented
business applications. Enterprise use of Web-based capabilities, such as Amazon Simple
Storage Service (Amazon S3) and Amazon Elastic Compute Cloud (Amazon EC2), has begun as
well. However, mainstream adoption of Web/cloud platforms hasn't begun yet. Additionally, early
adopters have limited experience with Web/cloud platforms, and will inevitably run into challenges
and issues.
User Advice: Web platforms and related phenomena have affected consumer markets first, but
enterprises should evaluate the growing space as an appropriate extension to internal computing
capabilities. Use of Web platforms will drive WOA, which enterprises should adopt where
appropriate, along with simple interfaces, such as REST, POX and RSS (wherever possible), to
exploit the interoperability, reach and real-time agility of the Internet.
Business Impact: Web platforms can be leveraged as part of business solutions, and will form
much of the basis for the next generation of interest in the virtual enterprise. Web platforms can
decrease barriers to entry, and can deliver substantial value for small and midsize businesses
that could not afford to build and maintain capabilities and infrastructures. Examples include
Amazon Web Services (including S3 and EC2), salesforce.com's Force.com, Google's App
Engine and Microsoft Azure Services Platform. Note that the term "Web/cloud platform" is
broader than and includes multiple layers in cloud-computing terminology (e.g., infrastructure as
a service [IaaS], platform as a service [PaaS] and software as a service [SaaS]), and that the use of the
term "platform" is different from the term "PaaS."
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Early mainstream
Sample Vendors: Amazon.com; Google; Microsoft; salesforce.com
Recommended Reading: "Web Platforms Are Coming to an Enterprise Near You"
"Predicts 2010: Application Platforms for Cloud Computing"




"NIST and Gartner Cloud Approaches Are More Similar Than Different"
Sliding Into the Trough
Gesture Recognition
Analysis By: Steve Prentice
Definition: Gesture recognition involves determining the movement of a user's fingers, hands,
arms, head or body in three dimensions through the use of a camera, or via a device with
embedded sensors that may be worn, held or body-mounted. One of the most visible examples is
the recently announced Microsoft Kinect (previously known as Project Natal) gaming controller. In
some cases (for example, gaming controllers such as the Nintendo Wii Balance Board or the
Microsoft Skateboard controller), weight distribution is being added to supplement the data
available.
A more limited subset of gesture recognition (in 2D only) has become common with the recent
development of multitouch interfaces (such as the Apple iPhone or Microsoft Surface), where
multiple finger touches, in pinch-and-squeeze, flick and swipe-type gestures, are used to
provide a richer and more intuitive touch-based interface.
Position and Adoption Speed Justification: The commercialization of gesture interfaces began
with handheld devices that detect motion, such as the Nintendo Wii's 3D controller, 3D mice and
high-end mobile phones with accelerometers. Camera-based systems are now entering the
market, and the recently announced Kinect from Microsoft (availability is expected in late 2010)
combines camera-based full-body gesture and movement recognition with face and voice
recognition to provide a richer interface. Such composite interfaces are likely to become more
commonplace, especially in the consumer environment; elements are expected to find their way
into Windows 8 in due course to provide automatic login and logout. Imminent availability will
spur the development of more games and other solutions from the games industry. Sony has also
revealed its alternative, called Move, which uses a handheld controller similar to the Nintendo Wii
device, but claims greater resolution. While the benefits of gestural interfaces in gaming
applications are clear, the redesign of user interfaces for business applications (to take
advantage of gesture recognition) will take many years. The logical mapping of intuitive and
standardized gestures into meaningful commands with which to control a business application
is a significant challenge.
Gesture recognition involves the effective use of a variety of input devices (to provide either 2D
movements or full 3D information) and considerable data processing to recreate wire-frame
models of body positions and vector-based dynamics (for speed and direction of movement),
followed by the interpretation of these gestures into meaningful commands to an application. The
conceptual design of a user interface based on gestures is a considerable task; both from a
technical standpoint and from a cultural and anthropological perspective, especially in a global
market where cultural sensitivity must be taken into account. Nevertheless, this is an area which
is attracting considerable interest from researchers, and the ability to create mashup-style
interfaces from readily available components makes experimentation accessible. The SixthSense
project at the Massachusetts Institute of Technology is a good example, linking the use of gesture
recognition with augmented reality to explore a new generation of interactions. At a simpler level,
the Seer application (developed by IBM for use at the Wimbledon Tennis Tournament in the U.K.)
indicates how this area is likely to develop during the next two to three years.
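To illustrate the interpretation step in miniature, the Python sketch below classifies a 2D touch trace into a swipe command from its net displacement. Production systems add wire-frame body models, velocity vectors and much richer gesture vocabularies; the distance threshold here is an arbitrary assumption.

```python
def classify_swipe(points, min_distance=50.0):
    """Classify a touch trace by its net displacement (2D gestures only)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if (dx * dx + dy * dy) ** 0.5 < min_distance:
        return "tap"                # too little movement to count as a swipe
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# Screen coordinates sampled while a finger moves (y grows downward).
trace = [(10, 200), (60, 198), (140, 202), (220, 205)]
print(classify_swipe(trace))        # swipe_right
```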
With high-profile launches in the gaming market, gesture recognition remains close to
the peak on the Hype Cycle; the growing availability of options advances it from
"emerging" to "adolescent" in terms of maturity.




While mainstream adoption in gaming will happen fairly quickly (less than five years),
"time to plateau" in the enterprise space will be considerably longer.
User Advice: Gesture recognition is just one element of a collection of technologies (including
voice recognition, location awareness, 3D displays and augmented reality) which combine well to
reinvent human-computer interaction.
- Evaluate handheld and camera-based gesture recognition for potential business applications involving controlling screen displays from a distance.
- Consider how these may be combined with location-based information and augmented reality displays.
- Look carefully at developments in the gaming sector; these will form the basis for a variety of early prototypes for business applications.
Business Impact:
- The primary application for gestural interfaces at present is in the gaming and home entertainment market. However, the potential of hands-free control of devices, and the ability for several people to interact with large datasets, opens up a wide range of business applications, including data visualization and analytics, design, retail, teaching, and medical investigation and therapy.
- As computing power moves from a single device to an "on-demand" resource, the ability to interact and control without physical contact frees the user and opens up a range of intuitive interaction opportunities, including the ability to control devices and large screens from a distance.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: 3DV Systems; Corinex Communications; eyeSight; Extreme Reality (XTR);
GestureTek; Gyration; Microsoft; Nintendo; Oblong; PrimeSense; Softkinetic; Sony
Mesh Networks: Sensor
Analysis By: Nick Jones
Definition: Sensor networks are formed ad hoc by dynamic meshes of peer nodes, each of
which includes simple networking, computing and sensing capabilities. Some implementations
offer low-power operation and multiyear battery life. We are using the term "sensor network" to
mean the entire system, consisting of multiple sensors connected in a mesh network.
Position and Adoption Speed Justification: Small-to-midsize implementations (that is, tens to
hundreds of nodes) are being deployed using technology from several vendors for applications
such as remote sensing, environmental monitoring and building management/automation. The
market is commercially and technologically fragmented, and topics such as middleware and
power-efficient routing are still areas of active academic research. Some vendors use proprietary
protocols; however, groups such as 6LoWPAN are making sensor environments more accessible
and standardized by extending mainstream IPv6 networking to sensor nodes.
Some vendors have adopted the ZigBee standard as a radio frequency bearer, some use
proprietary systems and some have formed industry alliances around technologies for specific
applications (for example, Z-Wave for home automation). Beginning in approximately 2011, a
new, low-energy Bluetooth standard will gain some traction for simple personal area sensor
networks. The market potential is enormous (and scenarios of several billion installed units are
feasible); however, the slow adoption rate means it may take decades to ramp up.
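The self-organizing behavior can be sketched in a few lines of Python: the toy simulation below floods a reading hop by hop through an ad hoc mesh, so the gateway is reached even when a relay fails. The node names and adjacency lists are invented for illustration; real protocols add duplicate suppression, sleep scheduling and power-aware routing.

```python
def flood(mesh, source):
    """Flood a message hop by hop; each node relays it exactly once."""
    delivered, frontier = {source}, [source]
    while frontier:
        node = frontier.pop()
        for neighbor in mesh[node]:
            if neighbor not in delivered:
                delivered.add(neighbor)
                frontier.append(neighbor)
    return delivered

# A four-node ad hoc mesh; "gw" is the gateway that uplinks readings.
mesh = {"s1": ["s2", "s3"], "s2": ["s1", "gw"],
        "s3": ["s1", "gw"], "gw": ["s2", "s3"]}
print(flood(mesh, "s1"))       # {'s1', 's2', 's3', 'gw'}

# Resilience: with relay s2 down, s1's reading still reaches gw via s3.
degraded = {"s1": ["s3"], "s3": ["s1", "gw"], "gw": ["s3"]}
print(flood(degraded, "s1"))   # {'s1', 's3', 'gw'}
```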
User Advice: Organizations looking for low-cost sensing and robust self-organizing networks
with small data transmission volumes should explore sensor networking. Because it is an
immature technology and market, vendor and equipment decisions could become obsolete
relatively rapidly (perhaps in less than three years); therefore, this area should be viewed as a
tactical investment. Despite these challenges, there have been successful early deployments in
such areas as building automation, agricultural sensing, automated meter reading and industrial
sensing.
Business Impact: This technology will affect a wide range of business areas, including low-cost
industrial sensing and networking; low-cost, zero-management networking; resilient networking;
military sensing; product tagging; healthcare; building automation; home automation; and
environmental management.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Arch Rock; Crossbow Technology; Dust Networks; Ember; Intel; Millennial
Net; Sigma Designs
Microblogging
Analysis By: Jeffrey Mann
Definition: "Microblogging" is the term given to a narrow-scope mode of social communication
pioneered by the social network site Twitter.com and followed by similar services from Plurk,
Yammer, Socialcast and Identi.ca. The concept is surprisingly simple: users publish a one-line
status message to their contacts, who have decided to follow their activities on the service. Users
can see the collected statuses of the people they choose to follow. Even those who do not want
to follow many people can search through the microblogging stream for topics or tags they are
interested in. Trending topics provide a condensed view of what everyone on the service is
talking about. The content of status messages (called "tweets" on Twitter) ranges from the
mundanely trivial ("I am eating eggs") to a random insight ("I think blogging is our online
biography in prose, and Twitter is the punctuation") to a reaction to an event ("A passenger plane
just landed on the Hudson River!").
Twitter's dominance has led to the practice being called "twittering," but it is also referred to as
microblogging to broaden the focus from a single vendor, as well as to point out how this style of
communication has augmented and partially replaced blogging. Even though it superficially
resembles instant messaging (IM), tweets are published to a group of interested people, making it
more similar to blogging than the person-to-person nature of IM.
The trendsetting Twitter system intentionally constrains messages to 140 characters, so that a
message and its sender's name fit within a single Short Message Service text message on a mobile phone. This simple
constraint enhances the user experience of those who consume this information. Tweets are
small tidbits of information, easily digested and just as easily ignored, as the moment dictates.
Other intentional constraints are designed to provide a high-impact user experience through
minimalist design: no categories, no attachments, no scheduled postings. These constraints are a
matter of some debate among users, leading Twitter to add more functionality in the last year
(groups or lists, trending topics and retweets). Competitors offer more full-featured alternatives
(Plurk, FriendFeed) or open-source approaches (such as Identi.ca based on Status.Net), but
have not been able to challenge the dominance of Twitter in the consumer market. One key factor
behind Twitter's success over its competitors has been its early offering of an application
programming interface to third-party developers. This has led to dozens of packages that enable
users to access the Twitter service and post content, either through a mobile device or a more
full-featured desktop client. Examples include Seesmic, TweetDeck, Twitterific and TwitterBerry.
These third-party packages can provide offline capability, as well as features that fill in the gaps
of Twitter's online offering. Twitter recently offered its own BlackBerry client as well.
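The 140-character constraint is trivial to enforce in code, which is one reason client software proliferated. The Python sketch below composes a status and truncates it to fit; the URL is purely illustrative, and actually posting to Twitter would go through its authenticated API.

```python
MAX_LEN = 140   # Twitter-style limit, sized to fit within an SMS message

def make_status(text, link=None):
    """Compose a microblog status, truncating to the 140-character limit."""
    status = f"{text} {link}" if link else text
    if len(status) > MAX_LEN:
        status = status[:MAX_LEN - 1] + "\u2026"   # ellipsis marks the cut
    return status

print(make_status("Reading the 2010 Hype Cycle for Emerging Technologies",
                  "http://example.com/hype-cycle"))
```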
Twitter's open nature makes it largely unsuitable for internal use within enterprises or for
confidential communications with partners, leaving an opportunity for new offerings. Services
including salesforce.com's Chatter, Yammer, Socialcast and Present.ly provide microblogging
services aimed at individual companies, with more control and security than the public services
like Twitter provide. Microblogging is also quickly becoming a standard feature in enterprise social
software platforms, such as Socialtext, Microsoft SharePoint 2010, IBM Lotus Connections and
Jive SBS. By 2011, some form of enterprise microblogging will be a standard feature in 80% of
the social software platforms on the market.
Position and Adoption Speed Justification: Microblogging in general, and Twitter in particular,
continue to gain in popularity, becoming a widely recognized part of popular culture. A planned
maintenance shutdown for Twitter became an international political issue when scheduled during
an election crisis in Iran. The volume of Twitter traffic makes it valuable as a real-time news feed.
Major events are almost always signaled first on Twitter before the traditional media can respond.
This high profile has led many organizations to question whether they should be using Twitter or
other microblogging platforms for communication between employees or to communicate with
customers and the public. Many companies have expanded their Web participation guidelines for
employees to include microblogging alongside the more traditional blogging and community
participation. With wide adoption comes the inevitable backlash. Microblogging's superficiality
and potential for time-wasting have led many to dismiss it as a passing fad, which is typical of a
post-peak Hype Cycle position.
Twitter has worked to stabilize its technology and reduced many (but by no means all) of the
service interruptions that previously plagued the system. Twitter problems are announced by the
appearance of the "fail whale" graphic on Twitter's home page; the term has gained wide
public currency. Twitter's dominance has made it difficult for competitors to gain a foothold,
although enterprise suppliers such as Yammer and salesforce.com's Chatter have had some
success. Several companies in the young microblogging space have already disappeared,
including Quotably, Swurl and Pownce. Summize was acquired by Twitter, FriendFeed by
Facebook and Ping.fm by Seesmic.
User Advice:
- Adopt social media sooner rather than later, because the greatest risk lies in failure to engage and being left mute in a debate in which your voice must be heard.
- Before using social media to communicate, listen to the channel, learn the language and become familiar with the social norms. Only then should you begin speaking. As with any other language, good results are achieved with regular, consistent practice, rather than with spotty participation.
- Remind employees that the policies already in place (for example, public blogging policies, protection of intellectual property and confidentiality) apply to microblogging as well. It is not always necessary to issue new guidelines.




- As Twitter is a public forum, employees should understand the limits of what is acceptable and desirable.
Business Impact: Despite its popularity, microblogging will have moderate impact overall on how
people in organizations communicate and collaborate. It has earned its place alongside other
channels (for example, e-mail, blogging and wikis), enabling new kinds of fast, witty, easy-to-
assimilate exchanges. But it remains only one of many channels available. Microblogging has
greater potential to provide enterprise value than these other channels by coordinating large
numbers of people and providing close to real-time insights into group activities. These mass-
coordination and mass-awareness possibilities are being explored by some early adopters, but
have not achieved wide adoption.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Blogtronix; Identi.ca; Jaiku; salesforce.com; Seesmic; Socialcast; Socialtext;
Tweet Scan; Twitter; Yammer
Recommended Reading: "Four Ways in Which Enterprises Are Using Twitter"
"Twitter for Business: Activity Streams Are the Future of Enterprise Microblogging"
"Case Study: Social Filtering of Real-Time Business Events at Stratus With Salesforce.com's
Chatter"
"Should Retailers Use Twitter?"
E-Book Readers
Analysis By: Van Baker; Michael McGuire; Allen Weiner
Definition: "E-book readers" refers to a class of devices the primary purpose of which is to
enable the reading of "trade books" (general fiction) and other content not dependent on color,
rich graphics or video. E-books, at this point, rely on e-ink screens suited for lengthier periods of
reading in indoor and outdoor settings. Technology that includes second-generation color e-ink
displays and LCD screens from Qualcomm and Liquavista is at least a year away from broad
commercial application. Current e-book devices may provide access to third-generation (3G)
and/or Wi-Fi connectivity, relying on USB tethering for acquiring content, and while some (Kindle,
nook) support Web browsing, the fidelity and refresh rate are not adequate for a quality content
consumption experience. We do not include Apple's iPad in this category because of its ability to
support multiple media types such as music files, movie files and streamed content. The iPad
is classified as a media tablet. Given the focus on devices of the consumer technologies Hype
Cycle, we do not include software-only readers in this description.
Position and Adoption Speed Justification: The technology for e-book readers has improved
dramatically since their previous incarnations in the late 1990s and the early years of this decade.
One of the most important developments has been the emergence of the ePub format, which,
although undergoing a new and more powerful iteration, is rapidly becoming a standard for the
majority of e-book devices (with the exception of Amazon's Kindle product line). However,
digital rights management (DRM) technologies currently restrict the users of many devices from
being able to buy titles from multiple stores, and this is proving a challenge to broader adoption.
Instead, consumers are still forced to make the bulk of their purchases from whatever online store
is tied to a particular piece of hardware; for example, Kindle users can buy titles only from
Amazon.com, and users of the Barnes & Noble nook can buy e-book titles only from Barnes &
Noble. Only cloud e-bookstores such as Kobo allow consumers to buy a title and view it across
multiple devices.
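Because an ePub file is simply a ZIP container with a standardized layout, its structure can be inspected with the Python standard library alone. The sketch below reads META-INF/container.xml, which every ePub must carry, to locate the OPF package document that lists the book's contents; the file name is hypothetical.

```python
import zipfile
from xml.dom.minidom import parseString

def epub_package_path(path):
    """Locate the OPF package document inside an ePub (a ZIP container)."""
    with zipfile.ZipFile(path) as book:
        # META-INF/container.xml points at the .opf file that holds the
        # book's metadata, manifest and reading order.
        container = parseString(book.read("META-INF/container.xml"))
        rootfile = container.getElementsByTagName("rootfile")[0]
        return rootfile.getAttribute("full-path")

# print(epub_package_path("novel.epub"))   # e.g., "OEBPS/content.opf"
```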
Providing wireless access (either through Wi-Fi or 3G wireless networks, or a combination of
these two) so that users can purchase or acquire additional content from online stores will be
important to the long-term success of e-book readers, yet Gartner believes that Wi-Fi, not 3G,
will be the most relevant to consumers interested in e-book readers.
Subsidies for the readers to reduce the acquisition costs have the potential to increase adoption
of the devices. As the number of devices being offered increases, and the file formats and DRM
technologies proliferate, there is potential for confusion among consumers. The emergence of the
ePub standard (deployed by Sony, Apple and Barnes & Noble) helps to address this confusion
but there are still multiple DRM solutions, as Sony and Barnes & Noble use Adobe DRM, while
Apple uses its own FairPlay. Amazon.com continues to use proprietary formats for both the file
format and the DRM.
Gartner sees potential confusion among the distinct classes of devices that purport to
be e-readers. There are media tablet products, such as Apple's iPad (and a number of
rumored Android-based tablets), which have e-book reading capabilities, and there are
dedicated e-book reader devices, not to mention e-book reader applications for mobile phones. In
the dedicated device category, the devices are segmenting roughly into stand-alone readers
(such as Sony's Pocket device, which does not have a wireless or online connection but instead
relies on a cable connection to the user's PC that is used to access the Sony online store to
acquire titles) and connected readers (the Kindle, nook and midrange and high-end of Sony's
device line) that have Wi-Fi and/or 3G wireless radios. As competition has increased due both to
increased numbers of reader manufacturers and competition from multifunction devices such as
the iPad, the pressure on price for e-book readers has resulted in lower-priced devices; prices
are expected to move toward $99 soon. This should make e-book readers more appealing to
more consumers. However, the transition from traditional books to e-books will still take time, as
consumer behavior will be somewhat slow to change.
The price drop is a result of the growing number of companies offering e-ink e-reader devices:
Copia, Acer, WonderMedia and Pandigital have e-readers scheduled for a 2010 release, and
Barnes & Noble is contemplating an updated nook for 2010.
A major wild card in the e-reader space will be Google, with its Google Editions service. Google
Editions is a browser-based e-book platform, but, given e-readers' currently weak support for
Web browsing, Google may enter the hardware space (perhaps with a partner) to give
consumers an optimal experience for reading books via Web browsers.
User Advice: Because book publishing is not a uniform business, advice must be tailored to the
various segments. Trade fiction publishers must prepare for an onslaught of e-ink e-reader
devices, the vast majority of which will support ePub and Adobe's ACS4 DRM. Think content over
devices and work with publishing service providers and tool and application providers to push
content into the e-book channel. Use publishing intermediaries to keep track of various devices
and their capabilities.
Educational publishers must focus more on tablets and less on e-ink devices, because they are not
selling e-books. Rather, they are selling curriculum content to varied constituents, including
students, teachers and school administrators (not to mention school IT departments). Educational
publishers are demanding cross-platform capabilities for development and consumption, so until
Apple comes up with a cost-efficient solution to this issue, the educational market may need to
look to future Android-based tablets.




All publishers need to continue to push vendors such as Amazon.com to unify around the ePub
file format, while monitoring how popular Apple's iPad becomes among serious readers (meaning
those that purchase multiple titles every year).
Manufacturers will have to decide whether to pursue lower-cost stand-alone devices or higher-
end media tablets capable of running multiple applications with the service to support these
applications, or do both.
Business Impact: This technology is likely to define primary reading experiences for a large
audience of consumers in the five-year time frame. Progress is being made on the technology
barriers that thwarted the previous incarnations of e-book reading devices. The launch of Apple's
iPad and the expected launch of several additional media tablets based on other operating
systems have accelerated interest in e-books. However, issues surrounding cross-platform
compatibility and the availability of full catalogs across the variety of reader devices are still
affecting acceptance.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Adolescent
Sample Vendors: Amazon.com; Fujitsu; Plastic Logic; Sony
Video Telepresence
Analysis By: Scott Morrison; Robert Mason
Definition: Video telepresence is a form of immersive video communication that creates the
impression of being in the same room as other conference participants. Most telepresence suites
run high-definition (HD) video at 720p or 1,080p (progressive-scan) resolution. Conference
participants appear as life-size individuals on large plasma, LCD or projection screens. Multiple
cameras and microphones pick up individuals or pairs of individuals, so that all audiovisual
information becomes directional, with good eye contact and spatial sound aligned with the
location of the person speaking.
Telepresence suites are designed and assembled by the system supplier to provide layout,
acoustics, color tones and lighting that maximize the perception of realism. Vendors have
recognized a need for more adaptive solutions, which fit into established environments and cost
considerably less than fully immersive suites. In addition, some providers offer "lite" solutions,
which have multiscreen capabilities that are basically a step up from regular HD room
videoconferencing. Operational simplicity and high availability are other key factors for
telepresence. The systems are designed to enable anyone to use them to their full potential with
little or no prior training, without the connectivity problems associated with traditional room
videoconferencing solutions.
Telepresence systems make high demands on the network, with low-compression, three-screen,
HD rooms taking anything from 8 Mbps to 45 Mbps of dedicated bandwidth for video and content.
They are typically deployed across Multiprotocol Label Switching (MPLS) networks, often
dedicated to and designed for video traffic, with minimal latency, so that users can retain natural
levels of spontaneity during interactions with other participants.
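For capacity planning, the arithmetic is straightforward. The Python sketch below estimates the dedicated bandwidth a set of immersive rooms demands, using the 8 Mbps to 45 Mbps per-room range cited above; the default per-room figure and headroom factor are illustrative assumptions, not Gartner guidance.

```python
def telepresence_bandwidth_mbps(rooms, mbps_per_room=15, headroom=0.25):
    """Rough dedicated-bandwidth estimate for immersive telepresence rooms.

    Three-screen HD rooms need roughly 8-45 Mbps each, depending on
    compression; the headroom covers bursts and shared content.
    """
    return rooms * mbps_per_room * (1 + headroom)

print(telepresence_bandwidth_mbps(4))       # 75.0 Mbps at mid compression
print(telepresence_bandwidth_mbps(4, 45))   # 225.0 Mbps at low compression
```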
Vendors are now moving quickly toward telepresence interoperability with each other and with
traditional, room-based videoconferencing systems, desktop video and unified communications
solutions. In particular, the relatively wide adoption of the Cisco-developed Telepresence
Interoperability Protocol (TIP) will offer a mechanism to deal with multiscreen environments, as
products that use TIP come onto the market during 2010/2011. Commercial
negotiations among service providers are currently slowing the progress of wide inter-carrier
telepresence, although even these are expected to be resolved over the next 18 months, paving
the way for wider adoption of the technology. But it may take two to three years for all providers to
implement the inter-carrier agreements based on the bilateral approach the major players have
adopted.
Position and Adoption Speed Justification: The primary use of telepresence in most
organizations remains for internal meetings or meetings with clients and partners who come to
the company premises as a way to reduce travel. But the collaboration features lend themselves
increasingly to project management and engineering environments, where travel is not
necessarily being avoided, but the added dimension afforded by telepresence makes new forms
of virtual communications feasible for users who benefit from increased productivity and speed of
decision-making as a result. Vertical markets with heavy usage include financial
services, telemedicine, higher education and manufacturing.
Many telepresence vendors are taking the technologies and capabilities first provided in
immersive environments and placing them in lower-cost solutions down to the desktop level.
Improved compression and traffic handling techniques include Scalable Video Coding (SVC), an
extension of H.264. SVC will increasingly allow telepresence to be delivered to a wider range of
locations, across networks with more variable performance characteristics.
Gartner expects telepresence adoption to continue to be driven by larger organizations and
specific vertical industries, including financial services, banking, pharmaceuticals, high-
technology manufacturing, consumer products and motion pictures/entertainment. The hospitality
and managed office industries are now beginning to roll out telepresence suites for pay-per-use in
their conference centers.
Growth in demand has been very strong, but from a small base, reaching a total of more than
5,700 immersive rooms by the end of 2009, a figure expected to almost double in 2010. However, the
addressable market for telepresence systems is limited to a combination of larger sites in Fortune
5000 companies and specific applications in other sectors such as government, professional
services and healthcare. Year-over-year growth in demand for desktop video is equally strong,
and reaches far more users.
User Advice: Organizations should consider video telepresence as the high end of their
videoconferencing technology requirements, rather than a stand-alone solution. Most mature
telepresence deployments have a ratio of one telepresence system per 10 regular room-based
videoconferencing systems. Telepresence can deliver a more immersive, higher-quality group
videoconferencing experience than a traditional room-based or desktop videoconferencing
solution, albeit at a substantially higher cost. When selecting a solution, organizations should
consider video telepresence in the context of their wider productivity, collaboration and unified
communications programs.
Business Impact: For regular telepresence users, travel cost reductions and improved
productivity will provide a business case with a relatively short payback period, often less than 18
months. Telepresence typically demands a utilization rate in excess of three hours per day
three to four times what most organizations achieve with traditional videoconferencing to justify
the investment, which can range from $180,000 to $400,000 or more per endpoint, and an
additional $8,000 to $18,000 per month for managed services and network connectivity. Early
adopters do indicate that telepresence has boosted usage rates into the 30% to 40% range for
organizations, based on a 10-hour business day, compared with less than an hour per day for
unmanaged videoconferencing systems. Increased usage is key to cost justification and customer
success with telepresence. Public utility services have yet to live up to their promise; there are
simply not enough rooms available yet to provide access to significant volumes of potential users.
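The payback arithmetic can be checked in a few lines of Python. The endpoint and service costs below are mid-range figures from this analysis; the monthly travel savings is an assumed illustration, not a Gartner estimate.

```python
def payback_months(endpoint_cost, monthly_service, monthly_travel_savings):
    """Months needed to recoup a telepresence endpoint from avoided travel."""
    net_monthly = monthly_travel_savings - monthly_service
    if net_monthly <= 0:
        return None           # savings never cover the running costs
    return endpoint_cost / net_monthly

# $300,000 endpoint and $12,000/month managed service (mid-range above);
# $30,000/month in avoided travel is an assumption for illustration.
print(payback_months(300_000, 12_000, 30_000))   # ~16.7 months
```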




Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Cisco; HP; LifeSize; Magor Communications; Polycom; Tandberg; Telanetix;
Teliris
Recommended Reading: "MarketScope for Video Telepresence Solutions, 2008"
"The Gartner View on Enterprise Video"
"Comparing Videoconferencing Telepresence Systems"
"Take Another Look at IP Videoconferencing"
Broadband Over Power Lines
Analysis By: Zarko Sumic
Definition: Broadband over power line (BPL) technology, also called power line
communications (PLC), is a landline means of communication that uses established electrical
power transmission and distribution lines. A service provider can transmit voice and data traffic by
superimposing an analog signal over a standard alternating electrical current of 50Hz or 60Hz.
Traditionally, the promise of BPL appeared to reside in electrical engineering domains, in which
looping the transformers was cost-effective (for example, in Europe and Asia/Pacific, where,
because of higher secondary distribution service voltage, several hundred consumers are served
by one transformer, as opposed to North America, where only up to seven consumers are served
by one transformer). However, with the recent development of new technologies and
technological improvements, embedded utility infrastructures can be used to deliver voice, video
and data services.
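The superposition at the heart of BPL can be illustrated numerically. The NumPy sketch below rides a single high-frequency carrier on a 50Hz mains waveform and recovers it unchanged; real BPL systems modulate data across many carriers at much higher frequencies, so this is strictly a toy model.

```python
import numpy as np

fs = 1_000_000                        # 1 MHz sample rate
t = np.arange(0, 0.04, 1 / fs)        # two cycles of 50 Hz mains

mains = 325 * np.sin(2 * np.pi * 50 * t)          # ~230 V RMS power waveform
carrier = 0.5 * np.sin(2 * np.pi * 100_000 * t)   # 100 kHz data carrier

line_signal = mains + carrier         # data rides on top of the mains voltage

# A receiver high-pass filters out the mains; subtracting it here confirms
# the carrier survives the superposition intact.
recovered = line_signal - mains
print(np.allclose(recovered, carrier))            # True
```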
Position and Adoption Speed Justification: Utilities, searching for options to increase revenue,
are revisiting BPL, and, at the same time, exploring its potential to improve utility functions.
Business models that highlight utility-focused applications, such as advanced metering
infrastructure (AMI), appear to be driving new implementations, particularly in Europe, where
BPL still has a strong presence. However, other broadband technologies, particularly WiMAX,
are evolving faster and moving into position to take large portions of the addressable market
for Internet access.
User Advice: BPL technology is maturing, but some technical issues still must be resolved (such
as tunneling/bypassing distribution transformers, signal attenuation and radio interference).
Distribution feeders are dynamic in nature, resulting in changing network parameters as a
consequence of capacitor and line regulator switching for voltage control, as well as
sectionalizing and transfer switching. Utilities should understand that most BPL systems must be
retuned for optimal performance every time a distribution component gets switched in or out of
the network. Therefore, close collaboration should be established between BPL personnel and
planning engineers to consider BPL dynamics in circuit design and operations.
BPL continues to lag behind other mainstream broadband communication technologies, which
are attracting substantially more R&D investments. Although the technology is not yet fully mature, electric utilities
and broadband service providers should follow BPL development and conduct technical feasibility
and economic viability studies. BPL appears to be more appropriate as a communication channel
for AMI and other utility control-monitoring functions (although some initial deployments have
raised performance concerns), but less appropriate for Internet access services. BPL must be
evaluated as a vehicle that can increase system reliability, improve the use of the distribution
asset, and enable sophisticated energy management and demand-response options, rather than
a new revenue source by a company's entry into the broadband market. Users must ensure that
business models, regulatory issues, and proper divisions between broadband service and utility
functions have been achieved before attempting rollouts. In addition, users need to consider that,
owing to its lack of scale and investment compared with other mainstream communication
technologies, BPL will become obsolete, which will impact product and supplier viability and
deployment, resulting in a "stranded asset."
Business Impact: Affected areas include broadband communications and energy management
services, including on-premises, "home-plug-type" provisioning for consumer energy
management applications.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Obsolete
Sample Vendors: Ambient; Amperion; BPL Global; MainNet
Recommended Reading: "Management Update: Top 10 Technology Trends Impacting the
Energy and Utility Industry in 2010"
Virtual Assistants
Analysis By: Johan Jacobs
Definition: A virtual assistant (VA) is a conversational, computer-generated character that
simulates a conversation to deliver voice- or text-based information to a user via a Web, kiosk or
mobile interface. A VA incorporates natural-language understanding, dialogue control, domain
knowledge (for example, about a company's products on a website) and a visual appearance
(such as photos or animation) that changes according to the content of the dialogue. The primary
interaction methods are text-to-text, text-to-speech, speech-to-text and speech-to-speech.
Position and Adoption Speed Justification: Computer-generated characters have a limited
ability to maintain an interesting dialogue with users; they need a well-structured and extensive
knowledge management engine to become efficient self-service productivity tools. As
organizational knowledge engines become increasingly better-structured and intelligent, self-
service deployments relying on this source of knowledge are increasing. VAs for service, sales
and education are starting to be deployed by some Fortune 1000 companies.
End-user acceptance of VAs, driven mostly by their growing presence, is becoming less of a
challenge than it was a few years ago. Advances in image rendering have also seen
increasingly sophisticated human-like forms take over from the cartoon-type characters
associated with Generation 1 and Generation 2 VAs, and fourth-generation VAs are more easily
accepted by many users than those early cartoon-based depictions. The organizations that
successfully deploy VAs often support the implementation
through the use of artificial-intelligence engines that assist natural-language dialogues.
First-generation VAs were stationary, with little visual appeal. Second-generation VAs brought
animation and generated customer interest. Third-generation VAs look like humans and have
excellent visual appeal, with responses to questions becoming increasingly accurate. Fourth-
generation VAs not only look human, but also are embedded with speech and text interactions.
The new fifth-generation VAs that are just emerging have excellent human-like image qualities,
are able to understand multiple questions and have highly developed natural-language support.




The first through third generations are very mature, but the technologies for the fourth and,
especially, the fifth generation are emergent.
User Advice: To use VAs successfully in customer service, focus the VA on one specific area;
do not apply it to all areas of the organization's products and services. Use
VAs to differentiate your website and increase the number of self-service channels available to
your target market. Support VAs with a strong knowledge management engine for self-service to
create meaningful and productive interaction, and focus on delivering a similar experience in this
and other self-service channels. Also, support VAs through invisible Web chat agents once the
knowledge delivery of the VAs drops below an 85% relevance-of-response rate.
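The escalation rule in the last point reduces to a simple threshold check. In the Python sketch below, the knowledge base class is a stand-in invented for illustration; a real deployment would query the organization's knowledge management engine.

```python
class ToyKnowledgeBase:
    """Stand-in for a real knowledge management engine."""
    FAQ = {"opening hours": ("We are open 9-5, Monday to Friday.", 0.95)}

    def lookup(self, question):
        return self.FAQ.get(question.lower(), ("", 0.0))

def route_reply(question, kb, threshold=0.85):
    """Answer via the VA, or escalate to live chat below the threshold."""
    answer, confidence = kb.lookup(question)
    if confidence >= threshold:
        return {"channel": "virtual_assistant", "reply": answer}
    return {"channel": "live_chat", "reply": None}   # hand off to a human

kb = ToyKnowledgeBase()
print(route_reply("Opening hours", kb))   # answered by the VA
print(route_reply("Tax advice", kb))      # escalated to live chat
```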
Business Impact: Effective use of a VA can divert customer interactions away from an
expensive phone channel to a less-expensive, self-service channel. The use of a VA that is voice-
enabled in a kiosk or an ATM can alleviate the need for typed interventions and can assist in
creating an interesting interaction for a nontraditional audience.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Emerging
Sample Vendors: Alicebot; Anboto; Artificial Solutions; Bangle Group; Cantoche; Creative
Virtual; eGain; Icogno; Umanify
Recommended Reading: "Self-Service and Live Agents Work Together"
Public Virtual Worlds
Analysis By: Steve Prentice
Definition: A public virtual world is an online networked virtual environment, hosted on a
publicly accessible infrastructure, in which participants are immersed in a 3D representation of
a virtual space and interact with other participants and the environment through an avatar (a
representation of themselves in the virtual space).
Position and Adoption Speed Justification: The growth of publicly accessible virtual worlds
has slowed, with activity moving to specific areas: virtual worlds focused on children and
preteens (such as Habbo [owned by Sulake], Whyville, Build-a-Bearville and vSide [built by
ExitReality]), education, virtual events and simulation/training. As a result, we have increased the
Time to Plateau and reduced the benefit rating. There is strong ongoing interest from public-
sector and federal users (in the U.S.), but engagement by private-sector enterprises remains
muted outside the specific niches listed above. Building on the success of the product-related
virtual environments targeted at children, a number of advertising/media companies are exploring
the use of these environments (such as the "Immersive Brandspaces" being developed by Rivers
Run Red for a number of clients) for a broader range of fast-moving consumer goods (FMCG)
products.
Second Life (owned by Linden Lab) remains the leading option for those looking for a less
constrained environment, and it supports a rich diversity of development tools allowing user-
generated content, as well as an active economic model allowing products and services to be
exchanged for Linden Dollars (which can then be converted via an exchange into real-world
money). The release of new client software with a simplified interface has eased some of the
barriers to entry that new users experienced, but Linden Lab continues to demonstrate
uncertainty over future direction. 2009 saw an increased focus on enterprise usage, but recent
announcements appear to indicate a reversal of this move and a return to a focus on individual
users and further moves to simplify the user interface with a browser-based client. Nevertheless,
with its rich tools and relatively uncontrolled environment, Second Life remains the primary option
at this point in time for enterprises and educationalists looking to trial 3D environments for a wide
variety of purposes, although training and simulations are still the primary applications. Some
vendors (such as nTeams; www.nteams.com) are using Second Life as a platform to support a
specific application.
Without a clear audience value proposition, early interest in social worlds has declined. The
majority of early commercial entrants have now scaled down or closed down their activities as the
e-commerce opportunities failed to materialize (for example, Google closed its brief experiment
with Google Lively on 31 December 2008, and There [operated by Makena Technologies] closed
in March 2010). In the consumer space, the massive growth in social-networking platforms such
as Facebook and the rise of addictive browser-based games (such as Farmville) consume the
attention and interest of the potential audience, to the detriment of those operators hoping to
create social-networking sites built around a 3D immersive environment.
User Advice: The value of virtual worlds to enterprise and educational users lies primarily in the
ability to deliver a rich and immersive environment for collaboration and interaction. While
reliability concerns are fading, security issues remain, forcing the focus of attention for many
enterprises toward alternative solutions hosted inside the firewall (see the Private Virtual Worlds
Hype Cycle entry). This trend, combined with an ongoing failure to identify a compelling business
case, means enterprise use of public virtual worlds remains limited to niche applications. Current
business conditions continue to encourage enterprises to examine a wide range of alternatives to
face-to-face meetings, but simpler solutions (such as Web conferencing) appear to be gaining
more traction with users. Travel restrictions, natural disasters and cost constraints have created a
growing opportunity for virtual events (see the Virtual Events Hype Cycle entry) as a specific
niche that is gaining credibility and acceptance. Outside this niche, business models and
compelling business cases remain problematic, while effective metrics are still under
development. Enterprises should avoid heavy investments and focus on delivering clear business
value for identified and prioritized needs.
Enterprises seeking to support existing media or physical products (especially those targeted at
preteen and early-teen markets) may find virtual worlds a more immediately useful media
channel. However, before committing, they should closely evaluate the demographics and
economics of the various hosting options against a measurable business case.
Enterprises looking to host online events (either independently or as an adjunct to conventional
"on-site" events) should evaluate the role that public virtual worlds can play in this application.
Business Impact: In the short term, public virtual worlds remain a "sandbox" environment for
experimentation in training, community outreach and collaboration, but the "buzz" has died and
enterprise interest remains static. In the longer term, virtual environments still represent useful
media channels to support and engage with communities in an immersive fashion but appear
unlikely to induce transformational change.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Linden Lab
Recommended Reading: "How to Justify Enterprise Investment in Virtual Worlds"
"Virtual Worlds: What to Expect in 2009"




"Learning Simulations Equip People to Face Crises"
"Cool Vendors in Social Software and Collaboration, 2010"
Consumer-Generated Media
Analysis By: Michael McGuire
Definition: Consumer-generated media (CGM) refers to any written, audio or video content
created by end users, using basic or semiprofessional tools. CGM can include one-consumer-to-
many applications such as photo sharing, publishing via blogs, podcasting, videoblogging and the
like, as well as "auteurs" looking to get their content to an audience.
Position and Adoption Speed Justification: CGM is filling content pipelines with material that
competes with established media for consumer time share. While much CGM is fairly generic
consumer content, such as photos and digital videos, recent events have highlighted consumer-
generated media in the form of videos and still images captured by mobile phones and uploaded
to online sites. The potential of CGM is clearly understood by companies such as Yahoo, as
evidenced by its recent acquisition of Associated Content. This acquisition, in fact, was evidence
of the rapid evolution of this Hype Cycle entry and the difficulty of tracking technologies in the
overall media space. One could see the proliferation of CGM sites in the past year or two, and the
evolution of early market leaders such as YouTube to include more premium content offerings, as
evidence that CGM passed through the Trough of Disillusionment in late 2009. As we noted in
2009's Hype Cycle, "the companies that emerge (from the trough) will be those that are able to
generate meaningful revenue streams from the large audiences they can attract." Additional
evidence for the impact of CGM can be seen in how news organizations are leveraging
crowdsourcing of information in the form of video and still images captured by consumers with
camera-equipped mobile phones or low-cost HD cameras.
User Advice: Elements of CGM must be embraced by traditional media companies and used to
their advantage, but with the caveat that vigilance is required. The proper mix of premium content
and CGM that supports many premium titles provides a 360-degree offering to consumers and
provides cross-marketing and multichannel advertising opportunities. While media companies
should be diligent about tracking CGM publishing sites for potential copyright violations, they
should also look to create relationships that allow consumers to legally share and embed snippets
of copyrighted work in their own creations.
Business Impact: Because of the relatively low barrier to entry enjoyed by individual consumer
creators, media incumbents could find it difficult to gain and maintain the attention of a meaningful
audience in the short term if their primary motivation is to drive significant new revenue streams.
However, Gartner believes that, over the long term (three to five years), CGM creators and
distribution sites will provide marketing and promotional opportunities for media incumbents, as
well as a source for breaking news and information for news organizations.
Benefit Rating: Moderate
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Demand Media (Pluck); Flickr; Twitter; YouTube
Idea Management
Analysis By: Carol Rozwell; Kathy Harris

Definition: Idea management is a structured process of generating, capturing, discussing and
improving, organizing, evaluating and prioritizing valuable insight or alternative thinking that would
otherwise not have emerged through normal processes. Idea management tools provide support
for communities (in originating and building out promising ideas), administrative support (for
capturing ideas and enabling innovation leaders to organize and track ideas), and analytical
support (for aggregating, refining, scoring, voting, prioritizing and measuring) for the leaders and
participants of innovation or ideation programs. These tools are typically used for focused
innovation campaigns or events, but most also enable continuous idea generation. In 2010, these
tools offer a wide array of approaches to idea management, such as events, crowdsourcing,
internal and external participants, multiple administrators, and other capabilities. They also offer
owned, hosted or software as a service versions of their tools, and often implementation support
and consulting services.
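The analytical support described above (scoring, voting, prioritizing) can be illustrated with a
short sketch. This is a hypothetical model, not any listed vendor's product: it blends community
votes with reviewer scores, and the weights and vote cap are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    title: str
    votes: int = 0                              # community votes
    scores: list = field(default_factory=list)  # reviewer scores, 1 to 5

    def priority(self, vote_weight=0.4, score_weight=0.6):
        # Blend normalized votes (capped at an assumed 100) with the
        # mean reviewer score; weights and cap are illustrative only.
        avg = sum(self.scores) / len(self.scores) if self.scores else 0.0
        return vote_weight * min(self.votes / 100.0, 1.0) + score_weight * (avg / 5.0)

ideas = [
    Idea("Self-service portal", votes=82, scores=[4, 5, 3]),
    Idea("Paperless invoicing", votes=35, scores=[5, 5, 4]),
]
for idea in sorted(ideas, key=lambda i: i.priority(), reverse=True):
    print(f"{idea.title}: {idea.priority():.2f}")
```

In practice, the tools layer campaign scoping, deduplication and workflow on top of a ranking
core of roughly this shape.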
Position and Adoption Speed Justification: Companies in a wide variety of industries are
turning to idea management as a way to bolster innovation that drives sales of existing products,
creates new opportunities to increase revenue, or radically changes process or cost structure.
Industries that emphasize new product development were the earliest adopters of idea
management tools. Today, service industries and government are increasingly adopting
innovation and idea management practices. In addition, collaboration platform vendors are adding
idea management tools to their offerings. During 2009 and 2010, the growth in vendors and
success stories for idea management has driven interest in innovation and confidence in
engaging employees, customers and others in idea generation and evaluation. The Web seems
tailor-made to enable idea management across enterprise ecosystems and continually provides
access to new sources of ideas and innovations.
User Advice: Organizations establish innovation programs with great fanfare, succeed at
generating ample ideas, and then have difficulty sustaining the momentum through
implementation. Users should address the organizational and cultural issues of innovation
management. They should also identify the scope, participants and processes envisioned for idea
generation programs before attempting to select an idea management tool. Organizations that
plan to use idea management tools as a front end to new product or service development should
also ensure that those tools can be integrated with community, product life cycle and project
management tools. Organizations that consider idea management as part of an overall
collaboration platform should ensure integration with directory services and other technical and
functional features of collaboration. Finally, with the growth and rapid evolution of idea
management approaches and technology, organizations should evaluate whether these tools
should be internally owned and managed, or if hosted or software as a service versions are viable
options.
Business Impact: Idea management tools were initially used to facilitate an online "suggestion
box" (with the added interactivity, features and functions made possible by the Web), plus
events or campaigns implemented under the auspices of innovation programs. In 2010, these
tools may be used in broad organizational programs that include internal and external users, full
enterprises or individual departments, and organizations looking for product, process or services
innovation. Idea management tools can also enable organizations to segregate sources of ideas
(such as employee versus customer ideas), separate types of ideas (such as product versus
process ideas), and even aggregate multiple ideas into one. The proper handling of ideas is one
of the most critical aspects of successful innovation programs; users have a large number of
choices and therefore must plan and execute well. Idea management tools also
facilitate the process of publicizing campaigns and events (so they get a wide range of
participants and input), evaluating and building on the ideas submitted, acknowledging the
submissions and archiving valuable information.
Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: BrainBank; Brightidea; Datalan; I-Nova; Idea Champions; Imaginatik; Induct;
InnoCentive; Jive Software; Kindling; MindMatters; NewsGator; salesforce.com; Sopheon; Spigit
Recommended Reading: "What a World-Class IT Innovation Charter Should Contain and Why
You Need One"
"Five Myths of Innovation, 2010 Update"
"Managing Innovation: A Primer, 2009 Update"
Mobile Application Stores
Analysis By: Monica Basso; Charlotte Patrick
Definition: Application stores offer downloadable applications to mobile users, mostly
consumers, via a storefront that is either embedded in the mobile handset or found on the fixed or
mobile Web. Application categories include games, travel, productivity, entertainment, books,
utilities, education and search. Applications are free or charged-for.
Position and Adoption Speed Justification: Mobile application stores are targeted to
smartphone users, mostly consumers and prosumers, mainly for entertainment applications, such
as games and ring tones, or phone utilities, such as screen savers. One of the original application
stores was offered by GetJar, and is still in the market today.
With Apple's App Store introduction in 2008, the market saw a revival in interest. The company
recently announced that there are now over 225,000 apps, there have been over 5 billion
downloads, and it paid out over $1 billion in revenue sharing to developers (June 2010). The
application store generated excitement in the market with free (sometimes advertisement-based)
or charged-for applications, and has been a differentiator for the iPhone.
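To illustrate the revenue-sharing mechanics behind figures like the $1 billion payout above,
here is a minimal sketch. The 70/30 developer/store split is an assumption for illustration
(widely reported for Apple's App Store, but not stated in this research), and the price and
download figures are invented.

```python
def developer_payout(price, downloads, developer_share=0.70):
    """Split gross application revenue between developer and store owner.

    developer_share is an assumption (the widely cited 70/30 split);
    actual terms vary by store and are not specified in this research.
    """
    gross = price * downloads
    return gross * developer_share, gross * (1 - developer_share)

dev, store = developer_payout(price=0.99, downloads=1_000_000)
print(f"Developer: ${dev:,.0f}, store: ${store:,.0f}")
```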
Other handset and operating system (OS) manufacturers looking to create similar excitement with
their phones and/or OSs have also introduced application stores, including Android Market
(Google), Ovi Store (Nokia), BlackBerry App World (Research In Motion), Windows Marketplace
for Mobile (Microsoft) and Palm Software Store (Palm). Carriers are also offering upgrades to
their own application stores and offerings for their feature phones, with a view to exposing
services such as billing, location and messaging to developers (e.g., Orange App Shop and
Vodafone 360). A number of third parties, such as Handmark, GetJar and Qualcomm, offer white-
label solutions to carriers.
One example of an enterprise-specific application store is Citrix Dazzle, which works across a
range of client and mobile devices, and provides a mobile app store for internal deployment (i.e.,
the enterprise runs the store).
Given the expected increase in smartphone and high-end feature phone adoption, along with
the growing popularity of applications, we expect application stores to accelerate rapidly,
reaching the Plateau of Productivity in fewer than two years.
User Advice: Application stores are a "scale game," and those offering them need to create
some unique selling points that will bring developers to their stores, rather than those of their
competitors. An "ecosystem" needs to be created in which developers have the tools to easily
write and port applications; consumers can easily access, download and use applications; and all
sides have visibility into the accounting of application sales and an efficient billing system that
allows everyone to get paid in a timely manner.

Enterprises are particularly interested in this type of smooth ecosystem, as it takes the guesswork
out of the application business.
Device manufacturers and software manufacturers are able to insert icons into the mobile device
user interface so that users can easily access the application store. However, having an
application store is not for every device manufacturer. Smartphone manufacturers that do not
offer their own applications will need to offer applications via third parties and operators in order
to compete in the market. Other handset manufacturers that primarily offer high-end feature
phones (with proprietary OSs) should look to their partners to offer applications, such as
operators or third-party application stores like Handmark or Handango. What matters is the
choice of applications, how your customers obtain them, and their ease of use on the device,
rather than owning your own application store.
In essence, operators have been offering application stores for a long time. They need to
increase their selection of applications and fight for good discoverability on the device versus
other competing stores. One option is to work with third parties to create virtual application stores
that can compete with some smartphone application stores or can work together in the Wholesale
Applications Community (WAC) initiative. Important components of these stores are ease of
search, discovery and downloadability. Operators can also use their billing functionality to
facilitate payments, location information to enhance applications and customer demographics to
improve advertising inside the applications. Being on a carrier deck can be an advantage to a
third-party application store, as this is still a strong channel. Carriers can also offer APIs to attract
developers.
Application providers and developers should look for application stores that are associated with
popular handsets and that can create a good user experience, and should weigh that against the
difficulty of developing and porting applications and the potential popularity of an application. It is
also important to choose application stores that have good distribution in terms of outlets and
service from the application development community. Other features of application stores that
would benefit developers include advertisement support (like the Google model, to allow vendors
to be "top of deck"), user reviews, rankings and recommendations (as with Amazon.com), and
good billing and reporting features.
Business Impact: Mobile application stores are likely to have an impact on:
- Smartphone manufacturers, allowing a degree of differentiation, depending on the user experience and the selection of applications offered by the store.
- Wireless carriers, primarily because of their interest in data access.
- Application providers, giving them access to additional customers in a well-organized ecosystem.
- Brands, which can advertise and segment customers based on applications.
The biggest issue for any party wishing to provide an application store is that it is unlikely to be
highly profitable, given the current market price points and the necessary startup costs. Media
reports suggest that the Apple store does not make a profit, and that it is part of the company's
broader "halo" strategy (encouraging third-party content and accessories that make the product
more attractive) for the iPhone and iPod touch family. Also, for operators, there are some
opportunities to drive data usage.
For later entrants rolling out a "me-too" strategy, the issue will be how much money is worth
investing and what, if anything, can be sold to developers, aside from the opportunity to reach a
company's customer base. A range of parties has announced stores, although, in some
initiatives, this might not have a high impact on their bottom lines. More joint initiatives like
the WAC are likely to develop, especially with vendors of app stores bringing smaller
communications service providers (CSPs) in different geographies together to gain economies
of scale.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Apple; Google; Microsoft; Nokia; O2; Orange; Palm; Research In Motion;
Vodafone
Recommended Reading: "Marketing Essentials: How to Decide Whether to Start a Mobile
Application Store"
"Dataquest Insight: Application Stores; The Revenue Opportunity Beyond the Hype"
Climbing the Slope
Biometric Authentication Methods
Analysis By: Ant Allan
Definition: Biometric authentication methods use biometric traits to verify users' claimed
identities when accessing devices, networks, networked applications or Web applications (see
"Q&A: Biometric Authentication Methods"). Across a wide range of use cases, any biometric
authentication method may be used in one of two modes: one-to-one comparison (when the
user enters a user ID) or one-to-many search mode (when the user simply presents his or her
biometric, with no explicit claim of identity, and the system determines his or her user ID from a
range of candidates).
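The two modes can be sketched as follows. The match_score function is a hypothetical stand-in
for a real biometric matcher, and the threshold is illustrative.

```python
# Minimal sketch of the two comparison modes; match_score() stands in
# for a real biometric matcher and is purely hypothetical.
THRESHOLD = 0.8

def match_score(sample, reference):
    # Placeholder similarity in [0, 1]; a real system would compare
    # fingerprint minutiae, voice features, vein patterns, etc.
    return 1.0 - abs(sample - reference)

def verify(user_id, sample, references):
    """One-to-one: the user claims an identity; compare against it only."""
    return match_score(sample, references[user_id]) >= THRESHOLD

def identify(sample, references):
    """One-to-many: search all enrolled users for the best match."""
    best_id, best = max(((uid, match_score(sample, ref))
                         for uid, ref in references.items()),
                        key=lambda pair: pair[1])
    return best_id if best >= THRESHOLD else None

refs = {"alice": 0.91, "bob": 0.42}
print(verify("alice", 0.88, refs))   # True
print(identify(0.45, refs))          # 'bob'
```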
Some devices embed biometric authentication, which impacts usage and business value, and we
treat this as a special case ("device-embedded biometric authentication") with a different
position on the Hype Cycle.
Position and Adoption Speed Justification: Biometric authentication is used in a wide range of
use cases, for workforce local access (such as login to Windows PCs and networks or
downstream applications), external users (such as login to Web applications), and, less often, for
workforce remote access (such as login to virtual private networks [VPNs]).
Many enterprises look to biometric authentication to provide increased user convenience, rather
than increased assurance or accountability. Some enterprises may want to exploit this for the
whole workforce, while others use it only for users who have particular problems with passwords
because of irregular working patterns. However, the potential for increased user convenience is
not always fully realized. Fingerprint remains the most popular choice of biometric trait, but at
least some users (one or two in a thousand) will be unable to reliably use biometric authentication
for physiological reasons (such as poorly defined ridges) or simply because they find it difficult
to properly interact with the sensor (the biometric industry unkindly refers to such users as
"goats"); many more users have problems some of the time. Adjusting comparison thresholds to
reduce false-non-match rates and increase usability means that false-match rates increase,
decreasing security.
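The threshold trade-off can be made concrete with a small sketch: raising the threshold rejects
more genuine users (higher false-non-match rate) while admitting fewer impostors (lower
false-match rate), and vice versa. The score samples below are invented for illustration.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """False-non-match rate (genuine users rejected) and false-match
    rate (impostors accepted) at a given comparison threshold."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr

# Illustrative score samples only; real distributions come from testing.
genuine = [0.92, 0.85, 0.78, 0.95, 0.70]
impostor = [0.30, 0.55, 0.62, 0.20, 0.48]

for t in (0.6, 0.75, 0.9):
    fnmr, fmr = error_rates(genuine, impostor, t)
    print(f"threshold={t}: FNMR={fnmr:.0%}, FMR={fmr:.0%}")
```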
We see more consistent success with other traits, particularly with typing rhythm used in
conjunction with an existing password. In this case, the addition of the biometric technology
allows enterprises to relax password management policies (decreasing the user burden of
passwords) without significantly compromising security.
Some enterprises, particularly judicial agencies, pharmaceutical companies, government
financial agencies and investment banks, have adopted biometric authentication, alone or in
conjunction with other technologies such as smart cards, where there is a need for a higher level
of individual accountability, at least among small constituencies within larger populations (for
example, traders in investment banks). Fingerprint remains popular here, despite usability
concerns, but there is increasing interest in vein structure.
During the next three years, we expect to see significantly increased interest in biometric
authentication for workforce and external users accessing higher-value Web applications via
mobile phones. Gartner has predicted that mobile phones will overtake PCs as the most common
Web access device worldwide by 2013 (see "Gartner's Top Predictions for IT Enterprises and
Users, 2010 and Beyond: A New Balance"). Phones are already common as an authentication
token for access from a PC (see "Q&A: Phone-Based Authentication Methods"), but the value of
a phone as a token is eroded when the phone itself (rather than the user's PC) is the endpoint.
The "second factor" is no longer distinct, and the level of assurance falls. Because users will
inevitably resist having to return to using a dedicated device for authentication, it is appealing to
exploit biometric authentication to provide the higher level of assurance appropriate for higher-risk
applications (see "Q&A: Biometric Authentication Methods"). While availability of device-
embedded biometric authentication in mobile phones remains low and inconsistent (see the
Device-Embedded Biometric Authentication section of this Hype Cycle), server- or cloud-based
biometric authentication products can exploit phones' (as well as PCs') microphones and user-
facing cameras as capture devices for voice, face topography and possibly iris structure.
User Advice: As for any authentication method, an enterprise must evaluate the potential
benefits of biometric authentication against the needs of each use case (see "How to Choose
New Authentication Methods"), and choose among the broad range of biometric technologies on
the same basis.
Biometric authentication can provide medium to high levels of assurance, but established,
nonbiometric alternatives are available at a similar price point. Enterprises should evaluate
biometric authentication as the sole method if user convenience is a primary consideration.
However, while this can free users from the need to remember passwords or carry some kind of
token, established fingerprint technologies cannot be used reliably by every user, necessitating
the provision of an alternative at a higher per-user cost. Enterprises should therefore evaluate
the use of other biometric traits, such as typing rhythm, face topography and vein structure.
Biometric authentication can, however, provide a higher level of accountability than alternatives,
and enterprises should favor biometric authentication, alone or in conjunction with other
technologies, when individual accountability is paramount.
Enterprises should also consider alternative approaches: The comparison score generated by a
biometric technology can be used as a variable input to dynamic risk assessment in adaptive
access control (see "Adaptive Access Control Emerges"), rather than as the basis for a clear-cut,
binary authentication decision. Biometric technologies can also be used as a postlogin preventive
or detective control to verify that only the legitimate user is or has been using the PC.
When biometric authentication is used in combination with smart cards, the comparison of the
probe biometric sample with the biometric reference can be performed onboard the card.
Enterprises should evaluate the benefits of such "match on card" technology in addressing
privacy, legal, scalability, and integration issues regarding biometric authentication. We see
increasing vendor support for this option (reflected in some of the sample vendors listed here).
However, this clearly increases the cost and complexity of authentication, and enterprises might
consider alternative approaches: One European investment bank chose vein structure over
fingerprint, partly to avoid the need to use "match on card" to comply with local legislation
(which precluded centralized storage of fingerprint, but not vein structure, data) and so to
reduce costs.
Business Impact: Biometric authentication can provide increased user convenience, and, when
used as an alternative to passwords, can reduce authentication-related help desk calls by up to
90%. However, usability problems remain, and Tier 2 and Tier 3 support costs will typically be
higher than for passwords (as with other authentication methods).
Biometric authentication can provide higher levels of accountability than any other kind of
authentication method, as it cannot be shared by coworkers as passwords and tokens can.
Some kinds of biometric technologies, such as face topography and typing rhythm, can also
mitigate "walk away" risks by providing assurance that only the person who logs into a PC is the
person who continues to use it, potentially eliminating the need to enforce timeouts (see "Setting
PC and Smartphone Timeouts Is a Blunt Instrument for Mitigating Risks, but an Essential One").
Biometric authentication can provide higher levels of assurance (and accountability) for access to
higher-value Web applications via mobile phones, where users (especially retail customers,
among other external users) will resist having to use any kind of authentication token separate
from the device.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: AdmitOne Security; Agnitio; BehavioSec; DigitalPersona; Fujitsu; Gemalto;
Hitachi; Imprivata; L-1 Identity Solutions; Precise Biometrics; SecuGen; Sensible Vision
Recommended Reading: "A Taxonomy of Authentication Methods"
"Market Overview: Authentication"
"How to Choose New Authentication Methods"
"Setting PC and Smartphone Timeouts Is a Blunt Instrument for Mitigating Risks, but an Essential
One"
"Q&A: Biometric Authentication Methods"
"Q&A: Phone-Based Authentication Methods"
Internet Micropayment Systems
Analysis By: Christophe Uzureau
Definition: Internet micropayment systems are used to purchase Internet content, goods and
services that have a value of less than $5.
Position and Adoption Speed Justification: The demand for payment solutions to achieve the
monetization of digital content remains strong. More Internet users access microcontent via
their mobile phones and turn to online communities, advertising companies are exploring new
models, and millions of Internet users rely on online games for their daily entertainment.
Most Internet micropayment systems are not stand-alone solutions. For instance, PayPal,
Moneybookers, ClickandBuy and Amazon Flexible Payments Service (FPS) support
micropayments as part of a portfolio of payment services (for example, cobranded cards, person-
to-person payment systems and payment gateways), and their solutions can also be used for
transactions of more than $5.
Emerging micropayment systems rely on existing payment systems:
- PayPal is expected to become one of the payment options for acquiring Facebook Credits (when the solution is fully rolled out).
- Users can buy Facebook Credits using an existing payment card (Visa, MasterCard or American Express) to fund a prepaid account. Facebook users can then buy applications on Facebook and pay with their credits, worth 10 cents each, by clicking a Pay With Facebook button that debits the prepaid account (see the sketch after this list). Facebook is testing this micropayment system for the payment of games and selected applications.
- In Australia, Visa has launched its payclick micropayment system, which can be funded from any existing Visa, MasterCard or bank account, and enables parents to set up sponsored accounts, as well as control and monitor purchases.
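As a sketch of the prepaid-account mechanics in the Facebook Credits example above (10 cents
per credit, card-funded, debited at purchase), consider the following; this is a hypothetical
model, not Facebook's actual implementation.

```python
class PrepaidCredits:
    """Hypothetical sketch of a prepaid credits account (10 cents per
    credit, as described above); not Facebook's implementation."""
    CREDIT_VALUE_USD = 0.10

    def __init__(self):
        self.balance = 0  # in credits

    def fund(self, usd_amount):
        # round() avoids float truncation (e.g., 5.0 / 0.1 == 49.99...).
        self.balance += round(usd_amount / self.CREDIT_VALUE_USD)

    def pay(self, credits):
        if credits > self.balance:
            raise ValueError("insufficient credits")
        self.balance -= credits

account = PrepaidCredits()
account.fund(5.00)        # card purchase funds 50 credits
account.pay(30)           # "Pay With Facebook" debits the account
print(account.balance)    # 20
```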
In terms of product development, both PayPal and Amazon Payments have built communities of
users and developers to support innovation in their payment systems. This will continue to
improve the maturity of payment solutions that support micropayments by ensuring that users'
requirements are captured.
New distribution models for payment applications from providers such as PayPal (with its
Adaptive Payments suite) are enabling developers to create their own payment applications and
to use new processing models that are better aligned with the distribution of digital content.
Models are emerging to support content monetization:
- Kwedit: With its Kwedit Promise option, online gaming users can acquire a digital item and promise to pay later, without disrupting their gaming experience. This alternative form of credit scoring relies on the potential negative network effect of not paying for the goods and services: damaging a credit rating may affect the use of other social-gaming applications in the future.
- Kachingle: A payment method for information freely available via content sites/blogs. Users commit to a $5 monthly withdrawal from their PayPal account, and this amount is shared among the Kachingle-enabled websites that the user visited during the month, according to the frequency of the visits (see the sketch after this list). This type of voluntary payment is another demonstration of the growing role of PayPal in supporting Internet micropayments.
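The Kachingle allocation model lends itself to a one-function sketch: the monthly pledge is
split across visited sites in proportion to visit counts. Rounding and any service fees are
ignored here, and the figures are invented.

```python
def allocate_monthly_pledge(visits, pledge=5.00):
    """Split a monthly pledge among sites in proportion to visit
    frequency, as in the Kachingle model described above (a sketch;
    rounding and fees are ignored)."""
    total = sum(visits.values())
    return {site: pledge * count / total for site, count in visits.items()}

shares = allocate_monthly_pledge({"blog-a": 12, "news-b": 6, "zine-c": 2})
for site, amount in shares.items():
    print(f"{site}: ${amount:.2f}")   # $3.00, $1.50, $0.50
```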
As a result, Internet micropayment systems are becoming more mature and making progress
toward the Plateau of Productivity. Competition is also accelerating, notably with the
announcement by Visa of its payclick micropayment system for the Australian market.
User Advice: Taking into account the growing use of online communities, use micropayment
systems to capture new banking relationships with sole proprietors and small and midsize
businesses:
- The distribution of digital content requires new payment solutions that respond to the payment requirements of entrepreneurs looking to monetize their digital content as part of online communities.
- This will provide ground to distribute banking products and services to this newly acquired customer base.

Do not reinvent the wheel, but offer a selection of micropayment solutions to your clients. Banks
have to consider offering merchants solutions and services from third-party providers, such as
PayPal. The market push for more-transparent acquiring services and the realities of
microcommerce require a change in banks' approaches to acquiring services. For banks, it's
about strengthening their portfolio of payment products and services to accommodate new
consumption patterns.
As with other emerging payment systems, there is a lot of value in the payment information:
- Banks need to capture emerging consumption patterns, notably as part of online communities and for digital content. Using the resulting payment usage data, they will be in a position to better market their existing payment and banking services.
- Banks need to capture valuable information that could be fed into their credit-rating systems.
Introduce new spending analytics and tracking services to your younger customers. As discussed
in "Banks, Check Your Fundamentals Before Launching New Payment Instruments," that
customer group is looking for those services but remains undecided in its payment preferences,
and is therefore an easier target for nonbank payment providers entering the market with new
payment solutions. Because this customer segment is more likely to use Internet
micropayment systems, ensure that the related spending activity information is fed into financial
management tools.
Business Impact: Internet micropayment systems facilitate the distribution of new Internet
content, and they create new revenue opportunities or threats for payment solution providers.
However, as mentioned above, they are not stand-alone solutions and use existing payment
systems to operate. Their benefit can be high only if banks can manage them as part of a
portfolio of payment services and are able to work with third-party providers to grow this portfolio.
If banks don't get involved, they risk providing space for nonbank payment providers that are
growing their portfolio of payment products and services. This doesn't just imply a loss of revenue
but, most importantly, a loss of connection with customers, making it more difficult to sell other
products and services.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Amazon.com; ClickandBuy; Kachingle; Kwedit; Moneybookers; PayPal
Recommended Reading: "Banks, Check Your Fundamentals Before Launching New Payment
Instruments"
"Innovation Map for Value-Added Services in Card Processing"
"PayPal and Google Checkout Show the Way for Banks' Payment Operations"
Interactive TV
Analysis By: Andrew Frank; Michael McGuire
Definition: Interactive TV refers to any platform that enables two-way television services, such as
electronic program guides (EPGs); video on demand (VOD); interactive advertising; games; and
information, transaction and communication services. Interactive TV can consist of local or
network interactions, but must support some return path to a network-based station that can
collect data, process transactions and so on.
Position and Adoption Speed Justification: Interactive TV has taken nearly 20 years to make
its Hype Cycle journey, since it first emerged in trials in the early 1990s. During this long period,
its architecture, design and business models have changed considerably, and they continue to do
so.
First-generation interactive TV applications generally did not include a return path; they
provided one-way services such as EPGs, interactive games and information services. We are
now entering an era in which a return path is seen as essential for both commerce and
measurement, and the model for providing it is a key point of competition.
Another key factor affecting the speed and nature of interactive TV adoption is the division
between "bound" and "unbound" applications. Bound applications are tied to a specific channel's
video programming context, and delivered within that channel's bandwidth, while unbound
applications are persistent and decoupled from a channel or program. Unbound applications can
be delivered through the bandwidth managed by a cable or Internet Protocol TV (IPTV) provider,
or through an open Internet connection to a TV or set-top box (STB; referred to as "over-the-top"
or OTT). Bound applications require less storage and, therefore, can run on more legacy STBs.
Bound applications are now clearly within five years of widespread adoption, while the time frame
for unbound applications is hazier, but probably also within a five-year horizon.
Bound applications tend to have limited return path capacity, and sometimes provide a return
path using out-of-band channels such as wired or wireless telephony, or store-and-forward
polling methods.
These methods are satisfactory for simple network applications such as product ordering,
requests for more information, or polling and voting applications. More-complex group
interactions such as gaming and social networking generally require more bandwidth, although
clever work-arounds are abundant and likely to proliferate in the next few years.
In the U.S., cable companies serve roughly 60% of the TV audience. Their approach to
interactive TV is invested in Canoe Ventures, a spin-off from the industry's R&D arm, CableLabs,
which has been promoting the Enhanced TV Binary Interchange Format (EBIF) standard under
the SelecTV brand, announced in early 2010. Initial applications are centered on RFI response
overlays for TV commercials. Early last year, the industry produced estimates that about 32
million U.S. households would be live with EBIF by the end of 2009; the industry fell far short of
this goal and recent estimates (June 2010) suggested 30 million by the end of 2010. Another
issue for EBIF is variation in implementations (known as user agents) that result in a relatively
small range of universally interoperable application possibilities. Nonetheless, EBIF clearly
represents the first significant penetration of interactive TV capabilities in the world's leading TV
market.
A longer-term standard from CableLabs that supports both bound and unbound applications is
tru2way (formerly OCAP), which has been licensed by STB and TV manufacturers but is lagging
behind the industry's deployment timetable by a far greater margin than EBIF, despite the
industry's public commitment in 2008 to have tru2way-capable digital cable systems in place by
1 July 2009. U.S.
satellite TV providers have also licensed tru2way, and have the capacity to serve bound
applications using EBIF or other technologies, such as the TiVo platform. IPTV providers, still a
small minority in the U.S., have already deployed proprietary unbound application platforms.
In Europe and South Korea, Digital Video Broadcasting Multimedia Home Platform (DVB-MHP) is
available over the air on at least 20 million STBs, often with a telephone-service-based return
path (wired or wireless).

In the U.K., MHEG-5 has been deployed by Freeview (a digital terrestrial TV service reaching
70% of U.K. households) and Freesat (a joint satellite venture between the BBC and ITV), while
OpenTV has been deployed by the Sky TV satellite service. These deployments offer interactivity, but
no return path. To address this, Project Canvas, another collaboration between BBC and ITV that
includes BT, is developing a new interactive platform for free-to-air television that employs
Internet connectivity to deliver two-way applications in a hybrid fashion.
All these developments add up to a picture of accelerated deployment for interactive TV over the
next five years, although questions persist about its ultimate business value, which is usually
conceived to be based on enhanced advertising and television commerce (t-commerce)
capabilities (including VOD movie rentals and games). While advertisers, merchants,
manufacturers and content licensors generally acknowledge the promise of interactivity and
addressability, there is scant proof that the value that these applications deliver will be enough in
the near term to offset the cost and complexity of these platforms, especially given the legacy of
engineering costs sunk into the pursuit of these visions over the years. Nevertheless, while
revenue from interactive advertising and t-commerce may be unproved, the infrastructure for
interactive TV will also provide access on the TV set to popular Internet-based services, such as
Internet TV and social networking. Interactive TV is far more likely to achieve mass adoption now,
compared to the 1990s, because it can leverage Internet behavior, rather than having to invent
new behavior patterns.
User Advice:
- Service providers need to align with their regional industry groups and negotiate collectively for interoperable standards that allow their network platforms (cable, satellite, IPTV, broadcast and online video) to remain competitive and economical to develop for. Service providers also need to focus on multiscreen strategies (TV, PC and mobile) for service bundling and integration.
- Broadcasters and content providers should focus on how to incorporate standards-based interactivity into programming in order to bring more value to both audiences and sponsors.
- Manufacturers should resist the temptation to create differentiation at the level of standards implementations, which would undermine interoperability, and seek advantage at the application level instead (such as better support for Internet TV and video device controls).
- Advertisers and ad agencies need to press for control over metrics and reporting standards, and work to ensure full transparency in interactive TV advertising markets.
- All commercial parties should focus in the near term on partnerships and alliances in the newly forming "ecosystem" for interactive TV services, and hedge their bets rather than relying on any single technology solution.
- Regulators should focus on ensuring fair competition among service providers and standards bodies, and be aware that technology is creating media environments in which legacy regulations are often inapplicable or irrelevant.
Business Impact: Cable, satellite and IPTV operators have a substantial opportunity to increase
their revenue share from advertisers and direct marketers by offering interactive features that can
support transactions and consumer engagement. Consumer electronics, middleware and STB
vendors face potentially decisive competition over where to strike the right balance between
features and cost. TV networks and advertisers, for which DVR-based ad skipping and Internet
advertising spending shifts are significant disruptive trends, rely on interactive features, along with
more-dynamic targeting, to shore up the value of the TV medium to advertisers.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: BBC; Canoe Ventures; Ensequence; Ericsson Television; Intel; Invidi;
MediaFriends; OpenTV; Rovi; Yahoo
Predictive Analytics
Analysis By: Gareth Herschel
Definition: The term "predictive analytics" is generally used to describe any approach to data
mining with three attributes: rapid analysis (measured in hours or days, rather than the
stereotypical months of "traditional" data mining), an emphasis on the business relevance of the
resulting insights (no "ivory tower" analyses) and, increasingly, an emphasis on ease of use,
making the tools accessible to business users (no more "Ph.D.s with lab coats").
Position and Adoption Speed Justification: The algorithms underpinning predictive analytic
applications are reasonably mature. Although new techniques continually emerge from research
laboratories, the 80/20 rule firmly applies: most of the commonly used algorithms (such as
CHAID decision trees and k-means clustering) have been in existence for over a decade. The
applications themselves are also approaching maturity, although the development of packaged
applications to address specific business problems (compared with the generic approach of
turning more-traditional data-mining workbenches into predictive analytic solutions) is less
mature and more varied in maturity. When predictive analytics applications have added project
and model management capabilities and further ease-of-use enhancements, they will have
achieved maturity.
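To show how long-established these algorithms are, here is k-means clustering, one of the
algorithms named above, reduced to a one-dimensional sketch in plain Python; production tools
add multidimensional distance measures, convergence checks and model management around
the same core.

```python
import random

def kmeans(points, k, iterations=20, seed=42):
    """Plain one-dimensional k-means; a sketch of one of the
    long-established algorithms named above."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to its cluster mean (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Illustrative customer-spend values; in practice these come from a warehouse.
spend = [12, 15, 14, 90, 95, 88, 40, 42]
centroids, clusters = kmeans(spend, k=3)
print(centroids)
```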
User Advice: Predictive analytics is effectively a more user-friendly and business-relevant
equivalent of data mining, but the distinction is more than cosmetic. Although potentially lacking some of the
mechanisms to fine-tune the model performance that a traditional data-mining workbench might
deliver, the benefits of rapid model development and easier maintenance tend to be appealing for
most analytical initiatives. The bigger distinction tends to be between predictive analytic solutions
and packaged applications built on these solutions for specific business issues. In these cases,
the selection decision should be based on the domain expertise the vendor has been able to
package into the application versus the domain expertise the business analyst can bring to the
analysis.
Business Impact: The ability to detect nonobvious patterns in large volumes of data is a
standard benefit of data-mining and predictive analytic solutions. Compared with traditional data-
mining workbenches, predictive analytic solutions deliver high value, primarily through broader
end-user access to analytic capabilities (enabling power business users to perform analysis,
rather than relying on specialist data miners) and better maintenance of existing models
(improving the reuse and the performance of the organization's models).
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: IBM (SPSS); KXEN; SAS Institute; ThinkAnalytics

Electronic Paper
Analysis By: Jim Tully
Definition: Electronic paper refers to several reflective display technologies that do not require a
backlight and can be viewed in conditions of moderate to good ambient illumination. They can be
made very thin, producing a nearly paper-thin rewritable display that gives a similar user
experience to that of printed paper. Electronic paper typically utilizes organic plastics rather than
glass, giving physical characteristics that are surprisingly rugged.
Most of these technologies involve physical movement of (or within) the pixel to facilitate a
change from light to dark or to change color. The performance achieved is, therefore, slower than
other electronic displays, such as LCDs. The most common example is E Ink, which is based on
pixels composed of charged particles suspended in a fluid. Other solutions are based around
micro-electromechanical systems (MEMS), nano chemical changes and rotation of spherical-
shaped pixels. The displays consume power only while images are changing and therefore use
virtually no power for static images. There is much interest in the development of the flexible
versions of these displays, such as those produced by Polymer Vision (now acquired by Wistron)
and Plastic Logic. Faster versions are also being developed with the ultimate aim of full video
speeds.
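A back-of-envelope comparison shows why this refresh-only power profile matters; every figure
below is an illustrative assumption, not a measured specification.

```python
BATTERY_WH = 3.7  # illustrative small-device battery capacity, watt-hours

def lcd_runtime_hours(panel_draw_watts=1.0):
    """Continuous draw of a backlit panel; assumed figure."""
    return BATTERY_WH / panel_draw_watts

def epaper_runtime_days(page_turns_per_day=500, joules_per_refresh=0.5):
    """A bistable display spends energy only when the image changes."""
    wh_per_day = page_turns_per_day * joules_per_refresh / 3600.0
    return BATTERY_WH / wh_per_day

print(f"Backlit LCD: ~{lcd_runtime_hours():.1f} hours")
print(f"E-paper:     ~{epaper_runtime_days():.0f} days")
```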
Touch sensitivity can be added to electronic paper by adding a touch layer over the front or back
of the display. Addition to the rear of the display offers the added benefit of a higher quality (and
brighter) image, since reflected light does not need to pass through the touch-sensitive layer.
Touch technology allows features such as the highlighting of words or adding handwritten notes
in electronic books.
Position and Adoption Speed Justification: The initial major applications for electronic paper
are electronic books, signage (in retail and roadside applications), and small information-centric
screens (in cell phones and music players). The most visible uses of the technology to date are in
the Amazon Kindle, Sony Reader and a number of other electronic book products. The
technology was also prominently used in Motorola's Motofone F3 cell phone. Esquire magazine
used electronic paper on the front cover of its 75th anniversary edition in October 2008, giving the
technology further visibility. Low power consumption is the main driver in most electronic paper
applications. Low cost is another driver for low-end mobile phones and other inexpensive mobile
devices that do not require full-motion video. Another driver is the readability of these displays in
bright sunlight, making them ideal for use outdoors and in car dashboard displays. These
applications are likely to drive further commercialization during the next two to three years.
Refresh speed and color support remain limiting factors for the current generation of electronic
paper technology. The declining cost of LCDs and the increasing attractiveness of organic light-
emitting diode (OLED) displays are challenging a number of application areas for electronic
paper, and this is moderating some of the growth opportunities for the technology. However, the
ultra-low-power characteristics are unmatched by any other display technology and this will be
critically important in many applications.
User Advice: Users and vendors should start to evaluate this technology in light of their specific
business needs. Early adopters are likely to be in signage and in low-power consumer products.
Automotive dashboard applications will also be important in view of the high contrast ratio of
electronic paper.
Business Impact: Use of wireless battery-powered signage is likely to bring significant benefits
to some classes of business. Electronic paper will also lead to new product and market
opportunities for vendors, especially in consumer and remote applications.
Benefit Rating: High

Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: E Ink; Magink; Ntera; SiPix; Wistron
Location-Aware Applications
Analysis By: Monica Basso
Definition: Location-aware applications use the geographical position of a mobile worker or an
asset to execute a task. Position is detected mainly through satellite technologies, such as GPS,
or through mobile location technologies in the cellular network and mobile devices.
Examples include fleet management applications with mapping, navigation and routing
functionalities, government inspections and integration with geographic information system
applications.
Position and Adoption Speed Justification: The market is maturing in all regions, with multiple
offerings for enterprise and consumer use. An increasing number of organizations have deployed
location-aware mobile business applications, most of which are based on GPS-enabled devices,
to support business processes and activities such as dispatch, routing, field force management
(for example, field personnel, field engineers, maintenance personnel and medical personnel on
ambulances), fleet management, logistics and transportation. Mobile workers typically use a PDA
or smartphone connected via Bluetooth to an external GPS receiver, a GPS-enabled wireless
device or a dedicated personal navigation device. They may also use laptops or ruggedized
devices. Location-aware applications include messaging, especially in the government sector,
where operational efficiency can be achieved (for example, so that the nearest road crew can be
dispatched to fix a water main break without a delay in determining who is where). Worker
safety applications, combining motion sensing and location, are also being deployed for roles
such as energy utility workers, lone workers in gas stations and social workers. Platforms
in this area integrate at different levels with corporate applications, such as ERP, sales force
automation and HR systems, offering capabilities to enhance, support and automate business
processes. Benefits like cost savings and increased efficiency can be achieved by deploying
these location-aware solutions.
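The "nearest road crew" dispatch described above reduces to a distance query over known
worker positions. Below is a minimal sketch using the haversine great-circle distance; the crew
data and coordinates are invented, and a real deployment would sit on a geographic information
system rather than a hand-rolled function.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + \
        cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def nearest_crew(incident, crews):
    """Pick the crew closest to an incident (the dispatch case above)."""
    return min(crews, key=lambda c: haversine_km(*incident, c["lat"], c["lon"]))

crews = [{"id": "crew-1", "lat": 40.71, "lon": -74.01},
         {"id": "crew-2", "lat": 40.76, "lon": -73.98}]
print(nearest_crew((40.75, -73.99), crews)["id"])   # crew-2
```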
Other services (often referred to as "infomobility services") enable people to move on streets and
roads more easily, through traffic flow monitoring and management. Countries in which mobile
penetration is extremely high and road traffic congestion is a serious issue have some potential
for adoption. However, these services are in their infancy, particularly when it comes to services
for citizens. During the past couple of years, local municipalities and public transportation
companies in North America and Europe have launched several pilot initiatives (for example, a
real-time service in Italy for public bus users provides the expected time of arrival, delays and
connections for buses). The limited availability and lack of integration with other key services, such
as mobile payments or booking, make such initiatives experimental, so they are still emerging.
Mapping, navigation and tracking applications and services are also available for consumers,
for example, from Nokia (Ovi Maps), Google (Maps and Latitude), Yahoo (Maps) and TeleNav.
the area of sport and fitness, multiple applications and services are available, such as Sports
Tracker and MapMyRun.
User Advice: User organizations with numerous groups of employees moving frequently outside
a fixed location, whether on a campus or on a national or international basis, should consider the
possible benefits of deploying applications that allow them to provide status information or
support to staff and/or customers based on the geographical location of a person or asset in real
time. The security and privacy of user location information should be a key consideration in the
development, deployment or acquisition of location-aware applications.
Business Impact: Location-aware applications can be deployed in field force automation, fleet
management, logistics and goods transportation in sectors such as government, healthcare,
utilities and transportation.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Appear; Gamma Engineers; Google; IBM; modomodo; Nokia; TeleNav
Recommended Reading: "Forecast: GPS in Mobile Devices, Worldwide, 2004-2010"
"Gartner's Top Predictions for IT Organizations and Users, 2007 and Beyond"
"Tracking People, Products and Assets in Real Time"
Speech Recognition
Analysis By: Jackie Fenn
Definition: Speech recognition systems interpret human speech and translate it into text or
commands. Primary applications are self-service and call routing for call center applications,
converting speech to text for desktop text entry, form-filling or voice-mail transcription, and user
interface control and content navigation for use on mobile devices and in-car systems. Control of
consumer appliances (such as TVs) and toys is also commercially available but not widely used.
Phone applications such as call routing or content navigation (for example, obtaining a weather
report) often use various techniques, ranging from directed dialogue, in which the system walks
the caller through a series of questions, to natural-language phrase recognition, in which the user
can respond freely to an open-ended question. Depending on the application, the speech
recognition may be performed on the device or on a server in the network (which offers superior
accuracy but slower response times) or as a combination of both.
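A directed dialogue, the simplest of the techniques mentioned above, can be sketched as a
sequence of prompts each constrained to a small grammar. The keyword matcher below is a
stand-in for a real recognition engine, which would return ranked hypotheses with confidence
scores rather than a single match.

```python
# Minimal sketch of a directed-dialogue call flow, in which the system
# walks the caller through a series of constrained questions.
DIALOGUE = [
    ("Which city?", {"boston", "chicago", "denver"}),
    ("Today or tomorrow?", {"today", "tomorrow"}),
]

def recognize(utterance, grammar):
    """Return the grammar item heard, or None (a real engine would
    return an n-best list with confidence scores)."""
    words = utterance.lower().split()
    return next((item for item in grammar if item in words), None)

def run_dialogue(utterances):
    answers = []
    for (prompt, grammar), heard in zip(DIALOGUE, utterances):
        result = recognize(heard, grammar)
        if result is None:
            return f"Sorry, I didn't catch that. {prompt}"  # reprompt
        answers.append(result)
    return f"Weather for {answers[0]}, {answers[1]}."

print(run_dialogue(["umm Boston please", "tomorrow"]))
```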
Position and Adoption Speed Justification: Speech recognition provides tangible benefits in a
range of applications but still falls short of its potential both in performance and in adoption levels.
Accuracy can be highly variable depending on background noise, size of the recognized
vocabulary, level of natural-language understanding attempted, clarity of the speaker's voice,
quality of the microphone and processing power available. For text entry in a quiet environment,
where some users can achieve impressive accuracy, speech recognition still has not been widely
adopted outside medical and legal dictation, possibly due to the need for most general office
workers to learn a new skill (dictation).
Speech recognition as a whole has been in this status, climbing the Slope of Enlightenment,
for over a decade. This year, we have changed its expected time to plateau (i.e., to the start of
mainstream adoption) to "2 to 5 years" (from its timing of "5 to 10 years" in 2009 and earlier) due
to a growing number of consumer applications, particularly in the mobile space, including voice
mail transcription and speech-to-speech translation. These applications provide useful
functionality even if not perfect, and we anticipate that broader use of speech recognition
embedded in interface and unified communications applications will drive a steady level of
adoption. Some applications of speech recognition are already further along with higher levels of
adoption; for example, simple self-service dialogues for call center applications are close to the
Plateau of Productivity. Others, including desktop dictation and mobile device control, are closer
to the Trough of Disillusionment as they struggle to attract broader use. Other interface advances
such as touchscreens for media tablets may also lead to broader inclusion of speech recognition
as an additional (but not sole) means of input and control, and drive more rapid adoption.
User Advice: For call center applications, the most-critical factors influencing success and
reducing risk are designing the application to work within the accuracy constraints of speech
recognition, designing the voice user interface, selecting the appropriate speech engine, and
having thorough ongoing tuning and evaluation. Professional services experienced in speech
recognition technology are recommended for a first foray into this space.
For general-purpose office text entry, deploy speech recognition "on demand" for individual users
who express interest and motivation (for example, those with repetitive-stress injuries). Users
who are already practiced in dictation are likely to be most successful. Also examine non-mission-
critical applications where a rough transcription is superior to nothing at all, such as voice mail
transcription and searching audio files.
For mobile devices, focus initial applications on selecting from lists of predefined items, such as
city names, company names or music artists, because this is where speech recognition has the
strongest value-add by avoiding scrolling and embedded lists while maintaining a high level of
accuracy.
Business Impact: Speech recognition for telephony and contact center applications enables
enterprises to automate call center functions such as travel reservations, order status, ticketing,
stock trading, call routing, directory services, auto-attendants and name dialing. Additionally, it is
used to enable workers to access and control communication systems, such as telephony, voice
mail, e-mail and calendaring applications, using their voice.
For some users, speech input can provide faster text entry for office, medical and legal dictation,
particularly in applications where speech shortcuts can be used to insert commonly repeated text
segments (for example, standard contract clauses).
For mobile devices, applications include name dialing, controlling personal productivity tools, and
accessing content (such as MP3 files) and voice-mail-to-text services. These applications are
strongly motivated by in-car use and by unified communications among voice, text and e-mail
services.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: IBM; Loquendo; LumenVox; Microsoft; Nuance; Sensory; SpinVox; telisma
Recommended Reading: "Emerging Technology Analysis: Voice-to-Text on Mobile Devices"
"MarketScope for IVR Systems and Enterprise Voice Portals"
Entering the Plateau
Pen-Centric Tablet PCs
Analysis By: Leslie Fiering; Ken Dulaney
Definition: Pen-centric tablet PCs are differentiated from media tablets, such as Apple's iPad,
by their pen focus and full, user-controlled operating systems. These tablets are
largely targeted at the business and education sectors. They meet all the criteria for a notebook
PC, are equipped with a pen and an on-screen digitizer, and run Windows XP Tablet PC Edition,
Windows Vista or Windows 7. There are two form factors: slates, which don't have a
keyboard; and convertibles, which have attached keyboards and swivel screens that lie flat on the
keyboard when in tablet mode. Slate adoption tends to be restricted to vertical applications with
walking workers and "clipboard replacement."
Position and Adoption Speed Justification: Despite the stability of the underlying tablet PC
technology, the price premium (as much as $250) over similarly configured clamshell notebooks
and lack of tablet-optimized applications have prevented broader mainstream adoption of pen-
centric tablets. In some cases, hardware OEMs are adding high-end features (such as Intel Core
i7 processors and solid-state drives) to drive the price premium even higher. Pen-centric tablet
PCs have long been a staple for certain vertical applications like healthcare, law enforcement,
field service and military.
Ruggedized models make pen-centric tablets particularly attractive for emergency services and
field applications.
Sales, which Gartner considers a "semivertical" market, is showing a strong return on
investment from the use of pen-centric tablets in some scenarios where it is critical to maintain
eye contact and customer intimacy while collecting sales information. Smaller-form-factor tablets
are especially attractive to this segment. The ability to do nontext entries (such as diagrams and
formulas) makes pen-centric tablets attractive for higher education students.
The K-12 education market has found pen-centric tablet PCs attractive, since many younger
children find direct screen manipulation an aid to learning. In an effort to improve affordability for
this market, Intel has dropped the price premium between the Classmate 2's clamshell and
convertible pen-/touch-enabled models to $100.
The greater ease of use of touch in media tablets (which are largely focused on content
consumption, rather than on content creation) is likely to limit the growth of pen-centric tablet PCs
beyond the markets where they are already established.
User Advice: Consider a pen-centric tablet PC as a solid and mature option for vertical
applications in which it solves a specific problem. Do not consider pen-centric tablet PCs for
broad, mainstream deployment because of the price premium. Consider pen-centric tablet PCs
when the user must wear gloves that would prohibit the use of capacitive touch input tablets.
Business Impact: Pen-centric tablet PCs are useful in vertical applications for clipboard
replacement. Semivertical applications for sales include note-taking in social settings. University
students find pen-centric tablet PCs useful where nontext entry is required.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Early mainstream
Sample Vendors: Acer; Dell; Fujitsu; HP; Lenovo; Toshiba
Recommended Reading: "Case Study: Model One-to-One Technology Support: Klein Forest
High School, Texas"
"Notebook PCs: Technology Overview"

Publication Date: 2 August 2010/ID Number: G00205757 Page 68 of 72
2010 Gartner, Inc. and/or its Affiliates. All Rights Reserved.



Appendixes
Figure 3. Hype Cycle for Emerging Technologies, 2009
[Figure not reproduced in this text version.]
Source: Gartner (July 2009)
Hype Cycle Phases, Benefit Ratings and Maturity Levels
Table 1. Hype Cycle Phases

Technology Trigger: A breakthrough, public demonstration, product launch or other event
generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a
flurry of well-publicized activity by technology leaders results in some successes, but more
failures, as the technology is pushed to its limits. The only enterprises making money are
conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated
expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few
cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse
range of organizations lead to a true understanding of the technology's applicability, risks and
benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and
accepted. Tools and methodologies are increasingly stable as they enter their second and third
generations. Growing numbers of organizations feel comfortable with the reduced level of risk;
the rapid growth phase of adoption begins. Approximately 20% of the technology's target
audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of
Productivity.

Source: Gartner (August 2010)
Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major
shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in
significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in
increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, through improved user experience) in ways that
will be difficult to translate into increased revenue or cost savings.

Source: Gartner (August 2010)
Table 3. Maturity Levels

Embryonic
  Status: In labs
  Products/Vendors: None

Emerging
  Status: Commercialization by vendors; pilots and deployments by industry leaders
  Products/Vendors: First generation; high price; much customization

Adolescent
  Status: Maturing technology capabilities and process understanding; uptake beyond early adopters
  Products/Vendors: Second generation; less customization

Early mainstream
  Status: Proven technology; vendors, technology and adoption rapidly evolving
  Products/Vendors: Third generation; more out of box; methodologies

Mature mainstream
  Status: Robust technology; not much evolution in vendors or technology
  Products/Vendors: Several dominant vendors

Legacy
  Status: Not appropriate for new developments; cost of migration constrains replacement
  Products/Vendors: Maintenance revenue focus

Obsolete
  Status: Rarely used
  Products/Vendors: Used/resale market only

Source: Gartner (August 2010)
RECOMMENDED READING
"Understanding Gartner's Hype Cycles, 2010"
"Cool Vendors in Emerging Technologies, 2010"
This research is part of a set of related research pieces. See "Gartner's Hype Cycle Special
Report for 2010" for an overview.

Publication Date: 2 August 2010/ID Number: G00205757 Page 72 of 72
2010 Gartner, Inc. and/or its Affiliates. All Rights Reserved.



REGIONAL HEADQUARTERS
Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
U.S.A.
+1 203 964 0096
European Headquarters
Tamesis
The Glanty
Egham
Surrey, TW20 9AW
UNITED KINGDOM
+44 1784 431611
Asia/Pacific Headquarters
Gartner Australasia Pty. Ltd.
Level 9, 141 Walker Street
North Sydney
New South Wales 2060
AUSTRALIA
+61 2 9459 4600
Japan Headquarters
Gartner Japan Ltd.
Aobadai Hills, 6F
7-7, Aobadai, 4-chome
Meguro-ku, Tokyo 153-0042
JAPAN
+81 3 3481 3670
Latin America Headquarters
Gartner do Brazil
Av. das Nações Unidas, 12551
9º andar, World Trade Center
04578-903 São Paulo SP
BRAZIL
+55 11 3443 1509
