
This research note is restricted to the personal use of kamalaksha.pai@tcs.com

G00304147

Hype Cycle for I&O Automation, 2016


Published: 6 July 2016

Analyst(s): Robert Naegle

The array of automation technologies in the IT market continues to expand, creating increased confusion. I&O leaders should proactively identify the correct automation technologies for their specific goals and objectives as part of an intentional strategic initiative.

Table of Contents

Analysis
    What You Need to Know
    The Hype Cycle
    The Priority Matrix
    Off the Hype Cycle
    On the Rise
        DevOps Toolchain Orchestration
        Network Configuration Automation
        OpenConfig
        Algorithmic IT Operations (AIOps) Platforms
        Continuous Delivery
        IT Service Orchestration
        Management SDS
        Composable Infrastructure
        Heuristic Automation
        Unified Endpoint Management
        Application Release Automation
        Virtual Network Configuration Automation
    At the Peak
        Container Management
        Continuous Configuration Automation
        DevOps
    Sliding Into the Trough
        COBIT
        Hybrid Cloud Computing
        IT Workload Automation
        Cloud Management Platforms
        Configuration Auditing
        Network Configuration and Change Management Tools
        Cloud Migration Tools
        IT Process Automation Tools
        Enterprise Mobility Management Suites
    Climbing the Slope
        ITIL
        Patch Management
        Server (Life Cycle) Automation
        bpmPaaS
    Appendixes
        Hype Cycle Phases, Benefit Ratings and Maturity Levels
Gartner Recommended Reading

List of Tables

Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

List of Figures

Figure 1. I&O Automation Architecture Map
Figure 2. Hype Cycle for I&O Automation, 2016
Figure 3. Priority Matrix for I&O Automation, 2016
Figure 4. Hype Cycle for I&O Automation, 2015


Analysis
What You Need to Know
There is no "one" automation solution for all infrastructure and operations (I&O) needs. Gartner now
tracks more than 20 automation-specific technologies, many with unique subsegments, ranging
from IT task-based automation tools to strategic service-provisioning-focused solutions.
Automation technologies, in general:

■ Increase speed of workflow completion
■ Reduce errors
■ Increase reliability
■ Make I&O more cost-efficient

Existing I&O system management and configuration tools continue to add automation capabilities, while new automation products are entering the I&O automation space from adjacent areas, including the IT task and service-provisioning markets, among others.

The majority of I&O/IT automation tools are IT-task-oriented, and tightly aligned to specific systems,
management platforms and/or software suites. Automation technologies that provide a framework
for codifying process automation across different systems (for example, servers, routers/switches)
are foundational to aggregating IT tasks or functions into IT services (for example, cloud
management platforms [CMPs], IT process automation [ITPA] tools). The evolving frontier for
automation tools is focused on orchestrating processes across multiple management or task
automation technologies. In today's market, no single vendor offers (or likely ever will) a set of automation tools that spans all purposes with the depth of functionality or breadth of integration required for every enterprise. Successful IT automation initiatives will require automation technologies (often multiple tools) in three unique layers of automation: business service, IT service and IT task (see Figure 1 and "Map I&O Automation Capabilities and Needs for a Successful Tool Strategy").
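The three layers above can be sketched as simple composition; the class, service and task names below are invented for illustration, and real automation tools expose far richer models than this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three automation layers: IT tasks
# (fixed-sequence, technically focused) aggregate into IT services,
# and IT services are coordinated into a business service.

@dataclass
class ITTask:
    name: str
    def run(self) -> str:
        return f"task:{self.name} done"

@dataclass
class ITService:
    name: str
    tasks: list = field(default_factory=list)
    def provision(self) -> list:
        # An IT service runs its constituent tasks in sequence.
        return [t.run() for t in self.tasks]

@dataclass
class BusinessService:
    name: str
    services: list = field(default_factory=list)
    def deliver(self) -> dict:
        # A business service coordinates across IT services (cross-silo).
        return {s.name: s.provision() for s in self.services}

web = ITService("web-tier", [ITTask("provision-vm"), ITTask("configure-os")])
net = ITService("network", [ITTask("set-vlan"), ITTask("update-firewall")])
storefront = BusinessService("storefront", [web, net])
print(storefront.deliver())
```

The point of the sketch is the direction of aggregation: tasks are the building blocks, and coordination happens one layer above, which is why tooling tends to be needed at each layer.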


Figure 1. I&O Automation Architecture Map

[Figure: an architecture map arranging I&O automation technologies in three layers: business service automation (cross-silo, coordination-oriented, situationally dependent), IT service automation, and IT task automation (functionally specific, fixed-sequence, technically focused), spanning cloud management, container management, DevOps, network configuration, server life cycle, client management and related tool categories.]

Source: Gartner (July 2016)


The Hype Cycle


This Hype Cycle reviews the variety of I&O automation-related technologies and initiatives, and their
relative maturities. I&O leaders should use this research to help identify relevant technologies and
map them against Gartner's Hype Cycle view of technology maturity. Early adopter/Type A
organizations will be interested in those tools that are poised to change the delivery of IT through
unique automation solutions. Type B and Type C enterprises will find the review of more mature
technology profiles a valuable guide for selecting more proven technologies. All enterprises will
benefit from examining technologies that provide the agility and speed needed for Mode 2 of
bimodal IT. The included technologies vary widely in their maturity — consider carefully the relative
position of technology profiles as compared to your organizational appetite for less proven
innovation.

The challenge for many I&O leaders is where to start or how to take the next step in their
automation journey. This Hype Cycle identifies automation initiatives that can be driven top-down —
centralized, aggregated, focused with strong governance — or evolved bottom-up, from functional
or task-based implementations focused on pragmatic "quick wins." Gartner research shows that it
is still more common to see automation technologies popping up in functional islands, with I&O
management later attempting to aggregate and orchestrate disparate technologies, scripts and
workflows.

Most organizations already have multiple automation-capable tools in their portfolios. For example,
network automation, server configuration, patch automation, IT service management (ITSM) and
workload automation are common. Thus, I&O leaders must implement a proactive automation
strategy that is designed to accommodate this dynamic market.

Automation initiatives will also be more successful when they focus on specific opportunities based
on relative task complexity, levels of manual or redundant work, or process areas with high rates of
manual error. I&O leaders should look for opportunities to automate that will provide more rapid time
to value, can make use of current resources and/or address acute pain points, and they should
focus efforts on a problem-by-problem basis.
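The prioritization approach described above can be sketched as a toy scoring exercise; the weights and candidate tasks below are invented for illustration, not Gartner data:

```python
# Rank automation opportunities by manual effort and error rate, and
# penalize complexity, since simple fixed-sequence tasks automate
# fastest and yield the quickest wins. All figures are hypothetical.

candidates = [
    # (name, weekly manual hours, manual error rate, task complexity 1-5)
    ("password resets",       40, 0.10, 1),
    ("server patching",       25, 0.05, 2),
    ("cross-team DR runbook", 10, 0.20, 5),
]

def quick_win_score(hours, error_rate, complexity):
    # High manual effort and error rates raise the score; complexity
    # lowers it. The weighting (100x on error rate) is an assumption.
    return (hours + 100 * error_rate) / complexity

ranked = sorted(candidates,
                key=lambda c: quick_win_score(c[1], c[2], c[3]),
                reverse=True)
for name, *_ in ranked:
    print(name)  # highest-priority automation candidate first
```

A real assessment would also weigh time to value and reuse of tools already owned, per the advice above; the sketch only shows the problem-by-problem ranking idea.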

Technology profiles new to this Hype Cycle are:

■ bpmPaaS, which replaces BPM platforms as the common area of BPM coverage. For more
details on the BPM market, Gartner clients should see the "Hype Cycle for Business Process
Management, 2016."
■ Composable Infrastructure refers to a set of disaggregated components, creating pools of
resources from which to configure applications and deliver services.
■ Management SDS, which includes the automation capabilities in software-defined storage.
■ OpenConfig, which is a growing standard that will impact network automation.
■ Hybrid Cloud Computing considers the unique automation needs when building and
maintaining hybrid clouds.


■ DevOps Toolchain Orchestration addresses the need to coordinate and orchestrate the large and growing number of DevOps automation technologies.
■ AIOps replaces and expands on IT Operations Analytics (ITOA).
■ Unified Endpoint Management is the convergence of client management and enterprise mobility management.

In this Hype Cycle, the automation technologies that moved the most include:

■ Cloud Management Platforms are rapidly being positioned as a partial solution, not the total
solution as was hoped. CMPs are primarily used to provide policy definition, governance and
brokering of IaaS resources (private and public).
■ IT Process Automation technologies are delivering much of the anticipated value for I&O
organizations that need technology to coordinate the delivery of IT services.
■ Cloud Migration Tools are being used for lift-and-shift cloud migrations, and increasingly for disaster recovery (DR), where enterprises want to leverage public IaaS as a secondary data center.
■ IT Service Orchestration (ITSO) moved backward as its definition evolved to include more intelligence and situational awareness in order to provide more business-valued services.


Figure 2. Hype Cycle for I&O Automation, 2016

[Figure: the Hype Cycle curve positioning the technologies profiled in this research, from the Innovation Trigger (including DevOps Toolchain Orchestration, Network Configuration Automation and OpenConfig) through the Peak of Inflated Expectations (including Container Management, Continuous Configuration Automation and DevOps), the Trough of Disillusionment and the Slope of Enlightenment toward the Plateau of Productivity, with years-to-mainstream-adoption markers for each profile. As of July 2016.]
Source: Gartner (July 2016)


The Priority Matrix


The Priority Matrix maps the time to maturity of a technology/framework on a grid in an easy-to-read format. It answers two high-priority questions:

■ How much value will an organization receive from a technology?
■ When will the technology be mature enough to provide this value?

It is important to note that for most of the technologies listed, the truly transformative impact is
delivered by interlocking technology adoption with people and process frameworks that are aligned
to a clear business objective.

As highlighted in Figure 3, the time to mainstream adoption for most of the technology profiles is
between two and 10 years. Generally, this has more to do with I&O organizational process maturity
than with product technical capabilities. More mature I&O organizations will successfully adopt
automation and transform their efficiency, reliability and predictability faster than indicated in this
matrix. Conversely, cultural resistance, skills shortages, lack of process discipline and inconsistent
governance will slow the time to adoption for many I&O organizations. Regardless of the maturity of
the organization, it is important that I&O leaders create an automation strategy that takes into
account the technologies and tools the enterprise has already acquired and is using, as well as any
innovative and transformative technologies entering this dynamic market.


Figure 3. Priority Matrix for I&O Automation, 2016

benefit: transformational
    2 to 5 years: Hybrid Cloud Computing
    5 to 10 years: Algorithmic IT Operations (AIOps) Platforms; Continuous Delivery; DevOps Toolchain Orchestration; IT Service Orchestration
    more than 10 years: Heuristic Automation

benefit: high
    less than 2 years: bpmPaaS
    2 to 5 years: DevOps; Enterprise Mobility Management Suites; IT Process Automation Tools; ITIL; Server (Life Cycle) Automation
    5 to 10 years: Application Release Automation; Composable Infrastructure; Continuous Configuration Automation; Management SDS; Unified Endpoint Management
    more than 10 years: Virtual Network Configuration Automation

benefit: moderate
    less than 2 years: Cloud Migration Tools
    2 to 5 years: Cloud Management Platforms; Configuration Auditing; IT Workload Automation; Network Configuration and Change Management Tools; Patch Management
    5 to 10 years: Container Management; Network Configuration Automation
    more than 10 years: COBIT

benefit: low
    5 to 10 years: OpenConfig

As of July 2016

Source: Gartner (July 2016)

Off the Hype Cycle


Two profiles were retired from the automation Hype Cycle this year:

■ ITOA was replaced by AIOps
■ BPM Platform was replaced by bpmPaaS

On the Rise

DevOps Toolchain Orchestration


Analysis By: David Paul Williams


Definition: DevOps toolchain orchestration refers to the ability to manage the personnel and tool activities that support a DevOps pipeline. This includes the ability to assign, collaborate on, monitor, track and report on activities; identify delays and bottlenecks in delivery; integrate DevOps tools; and automate pipeline activities from code planning through to live production.

Position and Adoption Speed Justification: As organizations seek to put together toolchains to support their DevOps processes, they must deal with many issues, including changing standards and integration with existing tools. Generally, toolchains solve specific issues, but they don't provide good visibility, feedback or consistent pipeline measurement, nor do they surface bottlenecks. DevOps orchestration tools are slowly emerging with the aim of unifying the DevOps pipeline tools and DevOps team activities. Deriving mainly from established build, test and release functions, these tools are expanding their reach across the toolchain to provide a platform with a framework for interfaces, activity management, tool integrations, workflow, pipeline measurement, chat and reporting. DevOps pipeline orchestration tool adoption continues to gain interest and grow, but these tools can be seen as restrictive and require consensus for usage across all team members. In addition, today these tools do not cover the full pipeline, can be expensive, and require additional administration and skills. Lastly, these tools do not work well together, as each regards the other DevOps tools as subordinate. These challenges will only be overcome as vendors expand effectively across the toolchain activities, and as DevOps practices and projects mature.
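The pipeline measurement gap described above can be illustrated with a minimal sketch; the stage names and timestamps are invented, and a real orchestration tool would collect them from the toolchain rather than from a hard-coded list:

```python
from datetime import datetime

# Hypothetical sketch of consistent pipeline measurement: given start
# and end timestamps for each stage from plan through release, compute
# per-stage duration and flag the slowest stage as the bottleneck.

stage_events = [
    ("plan",    "2016-07-01T09:00", "2016-07-01T10:00"),
    ("build",   "2016-07-01T10:00", "2016-07-01T10:20"),
    ("test",    "2016-07-01T10:20", "2016-07-01T14:20"),
    ("release", "2016-07-01T14:20", "2016-07-01T15:00"),
]

def stage_minutes(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

durations = {name: stage_minutes(s, e) for name, s, e in stage_events}
bottleneck = max(durations, key=durations.get)
print(bottleneck, durations[bottleneck])  # the stage to investigate first
```

The same metric, computed identically across every tool in the chain, is what gives the "consistent pipeline measurement" that individual point tools lack.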

User Advice: Effective DevOps toolchain orchestration should enable the following values:

■ Greater release agility
■ Faster time to production
■ A consistent method for continual deployment
■ Quality through automation
■ A constant feedback loop

It is critical that DevOps practices are built around business value, with DevOps toolchain orchestration using that value to establish consistent pipeline metrics throughout the DevOps toolchain, from plan through live production and back to plan. Ensure that any DevOps toolchain orchestration solution meets the needs of all DevOps team members (including role-based interfaces, access controls and tool integrations). Avoid deploying too many DevOps toolchain orchestration solutions, as they will introduce conflict and overlap because each is designed to be used as the highest-level tool. If multiple orchestration solutions are used, establish hard lines around toolchain scope and coverage (for example, one orchestration tool for development and one for release and configuration) with handoffs between the orchestrators. Not all DevOps initiatives will require DevOps toolchain orchestration; the need is determined by the complexity of the toolchain, the needs of the business, the size of the DevOps team and DevOps maturity. Because DevOps remains a highly individual practice, the adoption of DevOps toolchain orchestration solutions will require a greater level of standardization of practices and tools before this approach gains significant traction and makes business sense.


Business Impact: The primary objective of DevOps toolchain orchestration is to ensure the DevOps delivery pipeline is managed in line with business priorities. This means releases must be visible to all members of the DevOps team, including business management. This visibility is also used to identify bottlenecks and delays, as well as areas where effort, cost and time can be saved. The orchestration of the pipeline should still provide the flexibility to embrace agile practices, but it should also provide a level of control that ensures activities adhere to corporate business guidelines (for example, security and compliance regulations).

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Network Configuration Automation


Analysis By: Vivek Bhalla; Sanjit Ganguli

Definition: Network configuration automation (NCA) is the combination of both physical and virtual network-centric automation requirements and demands. It is a composite discipline that encompasses network configuration and change management, and virtual network configuration automation. Holistically, NCA meets the growing need for new approaches, tools and technologies that facilitate automated, policy-driven change and configuration management to support network virtualization, network function virtualization (NFV) and software-defined networking (SDN) adoption while maintaining heterogeneous physical infrastructures.

Position and Adoption Speed Justification: NCA is an embryonic technology that relies on two primary factors emanating from its constituent disciplines: network configuration and change management (NCCM) and virtual network configuration automation (VNCA).

First, VNCA is an emerging technology that is tied to the greater adoption of network virtualization technologies, such as NFV and SDN. Given that network virtualization itself is still in its infancy, VNCA is very much an evolving space. Gartner estimates SDN remains 18 months from mainstream adoption, and for this reason, it is reasonable to anticipate a similar lag for the supporting VNCA tools. Given that NCA is a composite that includes VNCA, its own adoption is tied to VNCA's progression.

Second, multivendor physical-infrastructure-oriented NCCM tool providers have thus far demonstrated a reluctance to migrate their capabilities across to virtual infrastructure stacks. This is hampering the availability of heterogeneous options. Gartner anticipates greater interoperability will arrive in 24 months, when end-user demand will push existing vendors to offer this capability or else attract new vendors that do.

User Advice: When evaluating NCA tools, recognize how these tools may support any overall SDN or NFV initiative, and the implications of such initiatives, while concurrently maintaining breadth of device coverage and heterogeneous support of the physical infrastructure. Currently, few vendors have the holistic and broad network automation use cases and scenarios required by advanced virtual infrastructure stacks. Those that do tend to either be focused around a single vendor's


infrastructure stack (for example, Cisco) or have a predominant emphasis toward the NCCM use
cases at present (for example, HP). However, Gartner anticipates the NCA segment of network
automation to expand over the next 18 months.

Replace manual tasks with automated NCA tools to monitor and control physical and virtual
network device configurations to improve staff efficiency, reduce risk and network outages, and
enable the enforcement of compliance policies. Prior to investing in tools, establish standard
network device configuration policies to reduce complexity and enable more effective automated
change. Holistic change and configuration management processes should include an automation
strategy. Although a discipline unto itself, network automation plays a part in the automation of end-
to-end IT service management processes, and should be viewed as an enabler for the adoption of
cloud services and technology.
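The value of standard device configuration policies can be sketched as a simple drift audit; the policy keys and device data below are invented, and real NCA tools work against vendor CLI or API output rather than plain dictionaries:

```python
# Hypothetical compliance check: compare each device's settings against
# a standard configuration policy and report every drifting value.

standard_policy = {
    "ntp_server": "10.0.0.1",
    "snmp_v2_enabled": False,
    "ssh_timeout_sec": 300,
}

device_configs = {
    "core-sw-1": {"ntp_server": "10.0.0.1", "snmp_v2_enabled": False,
                  "ssh_timeout_sec": 300},
    "edge-rtr-7": {"ntp_server": "10.9.9.9", "snmp_v2_enabled": True,
                   "ssh_timeout_sec": 300},
}

def audit(policy, configs):
    # Report, per device, each setting that drifts from the policy.
    violations = {}
    for device, cfg in configs.items():
        drift = {k: cfg.get(k) for k, v in policy.items() if cfg.get(k) != v}
        if drift:
            violations[device] = drift
    return violations

print(audit(standard_policy, device_configs))
# edge-rtr-7 is flagged for its NTP server and SNMPv2 settings
```

Establishing the policy first, as advised above, is what makes a check like this automatable at all: without an agreed baseline there is nothing to audit against.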

Consider NCA tools as components of a broader network automation strategy. IT leaders implementing NCA should consider the new pressures coming from cloud implementations, where policy-based network configuration updates must be made in lockstep with changes to other technologies (such as servers and storage) to initiate the end-to-end cloud service. This will require participation in strategic companywide deployment and configuration automation strategies (which are usually implemented as part of an IT service support management toolset), and integration with configuration management tools for other technologies, such as servers and storage.

Network managers need to identify opportunities to employ the automated features of NCA tools for
efficiency gains. Corrective actions by NCA tools will require human oversight in the first instance to
improve user confidence; however, the level of operator input should be reduced over time. With
cost minimization and service quality maximization promised by new, dynamically virtualized cloud
services and technology, automation is becoming a requirement because humans will no longer be
able to manually keep up with real-time configuration changes.
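The phased reduction of operator input described above can be sketched as a simple approval gate; the thresholds are invented for illustration:

```python
# Hypothetical oversight gate: every automated corrective action needs
# human approval until the tool has built a sufficient track record,
# after which it may run unattended. Thresholds are assumptions.

class CorrectiveAction:
    def __init__(self, min_attempts=20, required_rate=0.95):
        self.successes = 0
        self.attempts = 0
        self.min_attempts = min_attempts
        self.required_rate = required_rate

    def needs_human_approval(self):
        # Insufficient history, or a weak success rate, keeps the
        # human in the loop.
        if self.attempts < self.min_attempts:
            return True
        return (self.successes / self.attempts) < self.required_rate

    def record(self, succeeded):
        self.attempts += 1
        if succeeded:
            self.successes += 1

action = CorrectiveAction()
print(action.needs_human_approval())  # True: no track record yet
for _ in range(30):
    action.record(succeeded=True)
print(action.needs_human_approval())  # False after a clean record
```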

Business Impact: NCA delivers an automated way to maintain physical and virtual network device
configurations, thereby offering an opportunity to lower costs, reduce the number of human errors
and improve compliance with configuration policies.

As SDN gains traction, traditional network automation vendors will need to demonstrate suitable
roadmaps as organizations move toward the centralized management and orchestration of their
networks that SDN offers. With SDN, the need for traditional box-by-box configuration is greatly
reduced, dramatically reducing the value proposition associated with traditional NCCM tools. Thus,
these vendors must have enough vision to bridge the gap and fill the current NCA void. Failure to do
so will leave the door open to emerging innovative VNCA vendors to step in and start claiming a
greater share of the network automation market.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Aria Networks; Arkin; Cisco; Hewlett Packard Enterprise; Veriflow


Recommended Reading: "Market Guide for Network Automation"

"Toolkit: Network Automation RFP Template"

"Minimize Outage Exposure and Risk With Network Automation Tools"

OpenConfig
Analysis By: Andrew Lerner

Definition: OpenConfig is a working group, primarily comprising network operators, that is developing open-source software based on YANG data models to be consumed via APIs. The consortium's goal is to improve management of heterogeneous network infrastructure in a vendor-agnostic way. OpenConfig participation is driven by the buying community, not the vendor community or standards bodies, with contributions from Google, AT&T, Microsoft, BT, Facebook, Comcast, Verizon, Level 3 Communications, Cox Communications, Yahoo, Apple, Bell Canada and others.
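To illustrate the vendor-agnostic idea: the JSON below is hand-written sample data shaped loosely along the lines of the OpenConfig interfaces model, not real device output, and the lookup helper is an invention for this sketch:

```python
import json

# Because the data model is shared, the same lookup works regardless of
# which vendor's device produced the telemetry.

telemetry = json.loads("""
{
  "interfaces": {
    "interface": [
      {"name": "eth0", "state": {"oper-status": "UP",
                                 "counters": {"in-octets": 1234}}},
      {"name": "eth1", "state": {"oper-status": "DOWN",
                                 "counters": {"in-octets": 0}}}
    ]
  }
}
""")

def get_interface_state(doc, name, leaf):
    # Resolve a path such as interfaces/interface[name=eth0]/state/<leaf>.
    for intf in doc["interfaces"]["interface"]:
        if intf["name"] == name:
            return intf["state"][leaf]
    raise KeyError(name)

print(get_interface_state(telemetry, "eth0", "oper-status"))  # UP
```

Management code written against the shared model, rather than against each vendor's proprietary format, is the practical payoff the working group is pursuing.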

Position and Adoption Speed Justification: OpenConfig is an early-stage technology but carries a
high degree of promise as it has some prominent contributors, including Google. To date, there is
limited but growing support for OpenConfig APIs within mainstream networking vendors, with
Cisco, Juniper Networks and Arista Networks providing native support in some of their products. By
the end of 2018, we anticipate that most networking vendors that sell to large operators will support
OpenConfig. Thus, we anticipate early adoption in large providers (including network operators and
cloud providers) within the next three years. However, we do not anticipate OpenConfig gaining
adoption in mainstream enterprises within the next three years.

User Advice:

■ Operators of large networks should examine the OpenConfig data models and APIs.
■ Operators of large networks should prefer vendors that support or have committed roadmap
support for OpenConfig.
■ Operators of large networks should request that their network vendors provide support for
OpenConfig data models and APIs.

Business Impact: OpenConfig represents the potential to dramatically disrupt and improve the traditional network management frameworks that have dominated for the past 20 years. Initial interest in the technology is from operators of large networks, including cloud and service providers, but this is driving network vendors toward supporting it. Longer term, this could benefit mainstream enterprises as well.

Benefit Rating: Low

Market Penetration: Less than 1% of target audience

Maturity: Embryonic


Sample Vendors: Arista Networks; Cisco; Extreme Networks; Juniper Networks

Recommended Reading: "Magic Quadrant for Network Performance Monitoring and Diagnostics"

Algorithmic IT Operations (AIOps) Platforms


Analysis By: Colin Fletcher; Will Cappelli

Definition: Algorithmic IT operations (AIOps) platforms utilize big data, modern machine learning
and other advanced analytics technologies to directly and indirectly enhance all primary IT
operations functions with proactive, personal and dynamic insight. AIOps platforms enable the
concurrent use of multiple data sources, data collection methods, analytical technologies (real-time
and deep) and presentation technologies. AIOps platforms represent the evolving and expanded
use of technologies previously categorized as IT operations analytics (ITOA).

Position and Adoption Speed Justification: As operations tasks become increasingly automated,
and roles and responsibilities continue to converge (with DevOps as a leading example), the work of
analysis becomes a growing portion of all primary IT operations functions (monitoring, automation
and service desk), and, in turn, drives the need for AIOps platform capabilities. AIOps platform
technologies, in particular machine data and log management, have been most frequently adopted
to date in support of monitoring and root cause analysis efforts due to their ability to rapidly perform
and support highly complex diagnostic tasks across multiple domains.

Interest and investment in AIOps platform technologies will continue to rise due to:

■ Growing demand for increasingly proactive, intelligent and personal experiences that are a
hallmark of many digital business initiatives
■ Rising agility, cost optimization and quality expectations of IT operations
■ Continued exponential growth of data, dynamism and complexity in IT operations management
■ A growing set of early AIOps platform adoption successes in IT operations, consumer
applications and by service providers
■ ITOM tool vendors searching for meaningful competitive differentiation

This growth is tempered by broad "analytics" market confusion, intentional obfuscation by vendors,
and lingering end-user skepticism of predictive capabilities' cost and value. End users also
recognize that skill set adoption and cultural barriers will take time to overcome.

User Advice: I&O leaders must build and implement a strategic AIOps platform investment plan that
supports multiple, major IT operations functions (monitoring, automation, service desk and others)
and incorporates:

■ Identification and prioritization of high-value use cases across all of IT operations management
■ Inventorying existing skills and tooling capabilities across all of IT operations management
■ Training to address skills gaps and recognize value delivered by current AIOps platforms

■ Incentive and organizational changes to drive both skill improvement and AIOps/"data scientist"
team cultivation
■ Development of an AIOps architecture with an eye on:
■ Balancing ease of implementation/use with interchangeability of platform capabilities
■ IT operations management tool portfolio rationalization
■ Key technology gap investment
■ Staged implementation of AIOps capabilities
■ Ongoing portfolio management of AIOps tooling

Most enterprises would benefit from an audit of the various AIOps technologies they already have
acquired in the form of stand-alone products or as capabilities in domain-centric tooling. Few I&O
teams have the skills or vision needed to take full advantage of all the AIOps capabilities they
already have, and those that do rarely have incentives in place to share and grow the necessary
skills; so both areas should be addressed as soon as possible. An architected AIOps strategy can
serve as a useful mechanism for focusing both skills and tooling investments on an ongoing basis.
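The kind of automated insight generation an AIOps platform performs can be illustrated with a toy statistical baseline: flag any metric value that deviates sharply from its trailing window. Real platforms apply far richer machine learning at scale; the metric name, window size and threshold below are illustrative assumptions.

```python
from statistics import mean, stdev

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations from
    the trailing window -- a toy stand-in for the statistical baselining an
    AIOps platform performs continuously across many data sources."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady response times with one spike: the spike is surfaced automatically,
# without a human-defined static alert threshold.
latency_ms = [20, 21, 19, 20, 22, 21, 20, 19, 21, 20, 95, 20, 21]
print(detect_anomalies(latency_ms))  # [10] -- the 95 ms spike
```

The value over static thresholds is that the baseline adapts to each metric's own behavior, which is what makes such analysis reusable across domains.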

Business Impact: By enabling I&O teams to enhance and transform major operational functions
with a real, automated insight generation capability, organizations across all verticals stand to
realize:

■ Agility and productivity gains — Via active analysis of both IT and business data, yielding new
insights on user interaction, business activity and supporting IT system behavior.
■ Service improvement and cost reduction — Via a significant reduction of time and effort
required to identify the root cause of availability and performance issues. Behavior-prediction-
informed forecasting can support resource optimization efforts.
■ Risk mitigation — Via active analysis of monitoring, configuration and service desk data
identifying anomalies from both operations and security perspectives.
■ Competitive differentiation/disruption — Via superior responsiveness to market and end-user
demand based on machine-based analysis of shifts, beyond those that are immediately obvious
to human interpretation.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: BigPanda; BMC; Elastic (Elasticsearch); Hewlett Packard Enterprise; IBM;
Moogsoft; Rocana; Splunk; Sumo Logic; XpoLog

Recommended Reading: "Innovation Insight for Algorithmic IT Operations Platforms"

"Cool Vendors in Availability and Performance, 2016"

"Digital Business Initiatives Demand the Use of IT Operations Analytics to Spark Transformation"

"Apply Machine Learning and Big Data at the IT Service Desk to Support the Digital Workplace"

"Enhance IT Operations Management With IoT-Derived Context and Data"

Continuous Delivery
Analysis By: Colin Fletcher; David Paul Williams

Definition: Continuous delivery (CD) enables teams to reliably release application or infrastructure
code at any time through the creation of an automated pipeline. It is a key capability of a DevOps
initiative, enabled by a DevOps toolchain. It involves the combined use of continuous integration
(CI), automated testing, deployment orchestration and execution (often performed by application
release automation), and other tools to reduce code-to-production cycle times.

Position and Adoption Speed Justification: CD improves release reliability and simplifies
compliance enforcement via improvements in environment fidelity and automation. CI and testing
are core to CD, as these functions provide environment models that can be leveraged throughout
the life cycle to more consistently deploy application builds and updates.

CD is a nonprescriptive, evolving approach that can be delivered and/or realized in many different
ways, limiting its visibility and understanding. Given CD's emerging state, market demand and
vendor responses have been fragmented, with DevOps teams typically starting with functions that
can clearly demonstrate value through automation (application release, configuration) when
integrated with CI and testing. Serving as a logical linkage between CI and operational functions,
CD plays a critical role in the formation of scalable DevOps toolchains.
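The sequencing role CD plays can be sketched as a gated pipeline: stages run in order and a failure stops promotion toward production. This is a toy model only; the stage names are placeholders for real CI, test and release automation tooling.

```python
def run_pipeline(stages):
    """Execute pipeline stages in order; stop at the first failure so a
    broken build never reaches production -- a toy model of the CI -> test
    -> release sequencing a CD toolchain automates."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break
    return results

# Hypothetical stages standing in for real build, test and release tools.
stages = [
    ("build",  lambda: True),
    ("test",   lambda: False),   # a failing test gates the release
    ("deploy", lambda: True),
]
print(run_pipeline(stages))  # deploy never runs
```

In practice each stage is itself an orchestrated tool invocation, which is why CD serves as the linkage between CI and operational functions.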

User Advice: DevOps teams should incorporate CD processes and associated tooling to help
reduce friction throughout the application life cycle. This incorporation should take into account any
plans or investments in application release and continuous configuration automation as these tools
provide some degree of environment modeling and management, which can prove invaluable for
scaling CD capabilities across multiple applications.

At the beginning of a CD project, a role may need to be created within a DevOps project to manage
the CD process and associated technologies. For some organizations, this is an environment
manager, and, for others, it's an extension of the build role. Beyond assigning a role to manage CD,
a key facet is to establish fidelity across application environments. This will enable a higher
likelihood of CD success. DevOps teams should assume that, much like in release automation
implementation, discoveries of responsibility, skill, automation and documentation inconsistencies
will be a regular occurrence, at least initially.

DevOps teams should build requirements for CD tools with a broader view than just one
environment (development, test, quality assurance, preproduction or production) and one
application (for example, Java and .NET). Vendors claiming to offer solutions that completely enable
CD will continue to come from a variety of different market segments. This is a reflection of DevOps
initiative diversity, but will add to market confusion. Some organizations will approach CD from a
"bottom up" infrastructure provisioning automation perspective, and others will start by leveraging
and extending their CI tools. There are trade-offs either way.

Business Impact: CD is a key capability of a DevOps initiative that reduces build-to-production
cycle times, in turn speeding the positive impact of new applications, functions, features and fixes
by reducing friction across the application life cycle. These positive impacts include improved
business and end-user satisfaction, business performance through rapid response to changing
market demands, and risk mitigation through rapid delivery of updates that address security issues.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Recommended Reading: "Cool Vendors in DevOps, 2016"

"Market Guide for Application Release Automation Solutions"

"Avoid Failure by Developing a Toolchain That Enables DevOps"

IT Service Orchestration
Analysis By: Robert Naegle

Definition: IT service orchestration (ITSO) is centered around the organization, sequencing and
management of automation activities and scripts across multiple pockets of IT/I&O automation. It is
designed to help visualize, control and simplify the delivery of IT services by leveraging skills and
capabilities across multiple technologies and teams. ITSO includes intelligence-based decision
making to orchestrate provisioning of increasingly complex business-valued services.

Position and Adoption Speed Justification: Efficient I&O delivery of business-valued services
requires advanced orchestration of automated workflows. In many I&O organizations, service
context is minimal and requires manual input, while automation is typically system-driven. As
business-oriented service portfolios evolve to provision complex cross-functional deliverables, so
will the need for improved predictability, reliability, consistency and cost management of those
services. Service models exist in many management tools, but rarely transfer
easily. As ITPA tools evolve toward being functional coordinators, or the glue for larger ITOM suites,
an automation tool with an intelligent-service context will become a critical differentiator.

As the demand for more service orientation and improved service levels, reliability, predictability and
cost management increases, the discipline of automating service orchestration will continue to
evolve. The primary obstacle for most IT organizations is the definition and delivery of more
complex services, and the requisite orchestration of multiple functionally focused, task automation
capabilities. With this shift, I&O organizations will need to be more service-savvy; the future of IT
automation will be increasingly intelligent and offer the ability to enable automated decision making
fueled by better information, knowledge and learning (heuristic automation and/or algorithmic IT
operations [AIOps]) to further enhance service delivery. We reassessed the placement of this
technology on the Hype Cycle, and the lack of single-vendor solutions and the slow move to
business service alignment in I&O mean that it will require more time to reach mainstream adoption than
we anticipated last year.

User Advice: Think of ITSO tools as construction kits; each comes with different components to
create a specific process or workflow design. Out of the box (OOTB) components like connectors,
machine actions, forms and prebuilt workflows are designed to orchestrate the automation of
services across multiple functional areas, products, workflows and scripts. Tools, as they emerge,
will include a rich set of knowledge, decision models, and even prioritized sets of workflows to
support specific business outcomes. As with ITPA tools, content is king — Gartner clients should
look for solutions with an applicable library of OOTB content, as modifying vendor and community
content will require significantly less investment than creating content from scratch.
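The construction-kit idea above can be sketched in miniature: an orchestrator sequences connectors across functional silos to deliver one service, halting (and triggering rollback) if any step fails. The step names, connector behaviors and request shape are all hypothetical.

```python
def provision_service(request, connectors):
    """Sequence automation tasks across functional silos to deliver one
    business service -- a simplified sketch of an ITSO workflow built from
    out-of-the-box connectors."""
    steps = ["compute", "network", "itsm"]
    log = []
    for step in steps:
        outcome = connectors[step](request)
        log.append((step, outcome))
        if outcome != "ok":
            log.append(("rollback", "started"))  # undo earlier steps
            break
    return log

# Hypothetical connectors standing in for real functional automation tools.
connectors = {
    "compute": lambda r: "ok",
    "network": lambda r: "ok",
    "itsm":    lambda r: "ok",
}
print(provision_service({"service": "crm-dev"}, connectors))
```

The differentiation Gartner describes comes from the prebuilt connectors and decision models, not from the sequencing loop itself, which is why OOTB content matters so much.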

Business Impact: The delivery of business-valued IT services has become increasingly important
and complex. I&O organizations must have a systematic and automated delivery approach to
provide IT services in a timely, reliable and efficient manner. ITSO technologies provide the ability to
define, automate and deliver IT services in support of business-valued services. I&O leaders that
desire to run IT as a business will increasingly turn to ITSO technologies to positively impact
revenue creation, reduce operating costs and mitigate the risk of poor service delivery.

The discipline of automation orchestration will evolve as demand for service orientation, improved
service levels, reliability and cost management increases. The primary obstacle for most IT
organizations remains the definition and delivery of more complex, business-oriented services, and
the requisite orchestration of multiple functionally focused task automation capabilities.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Recommended Reading: "Map I&O Automation Capabilities and Needs for a Successful Tool
Strategy"

"Market Guide for IT Process Automation"

"Survey Analysis: The Realities, Opportunities and Challenges of I&O Automation"

"Four Steps for Optimizing Automation Implementations in Data Centers, Clouds and I&O
Environments"

"Consider Heuristics the Future of Smart I&O Automation"

Management SDS
Analysis By: Julia Palmer; Dave Russell

Definition: Management software-defined storage (SDS) coordinates the delivery of storage
services to enable greater storage agility. It can be deployed as an out-of-band technology with
robust policy management, I/O optimization and automation functions to configure, manage and
provision other storage resources. Management SDS products enable abstraction, mobility,
virtualization, SRM and I/O optimization of storage resources to reduce expenses, making external
storage virtualization software products a subset of the management SDS category.

Position and Adoption Speed Justification: While management SDS is still largely a vision, it is a
powerful notion that could revolutionize storage architectural approaches and storage consumption
models over time. The concept of abstracting and separating physical or virtual storage services via
bifurcating the control plane (action signals) regarding storage from the data plane (how data
actually flows) is foundational to SDS. This is achieved largely through programmable interfaces
(such as APIs), which are still evolving. SDS requests will negotiate capabilities through software
that, in turn, will translate those capabilities into storage services that meet a defined policy or SLA.
Storage virtualization abstracts storage resources, which is foundational to SDS, whereas the
concepts of policy-based automation and orchestration — possibly triggered and managed by
applications and hypervisors — are key differentiators between simple virtualization and SDS.

The goal of SDS is to deliver greater business value than traditional implementations via better
linkage of storage to the rest of IT, improved agility and cost optimization. This is achieved through
policy management, such that automation and storage administration are simplified with less
manual oversight required, which allows larger storage capacity to be managed with fewer people.
Due to their hardware-agnostic nature, management SDS products are more likely to provide deep
capability for data mobility between private and public clouds to enable a hybrid cloud enterprise
strategy.
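The policy-driven placement described above can be sketched as follows: given a requested SLA, pick the cheapest pool that satisfies it, regardless of which vendor backs the pool. The pool names, latency figures and cost values are invented for illustration.

```python
def place_volume(request, pools):
    """Pick the cheapest pool that satisfies the requested policy -- a toy
    illustration of SLA-driven placement in a management SDS layer that
    abstracts heterogeneous storage resources."""
    candidates = [
        p for p in pools
        if p["max_latency_ms"] <= request["max_latency_ms"]
        and p["free_gb"] >= request["size_gb"]
    ]
    if not candidates:
        raise ValueError("no pool satisfies the policy")
    return min(candidates, key=lambda p: p["cost_per_gb"])["name"]

# Hypothetical pools abstracted from underlying (possibly multivendor) arrays.
pools = [
    {"name": "all-flash", "max_latency_ms": 1,  "free_gb": 500,  "cost_per_gb": 0.40},
    {"name": "hybrid",    "max_latency_ms": 5,  "free_gb": 2000, "cost_per_gb": 0.15},
    {"name": "archive",   "max_latency_ms": 50, "free_gb": 9000, "cost_per_gb": 0.02},
]
# A request needing <= 5 ms latency lands on the cheapest qualifying pool.
print(place_volume({"size_gb": 100, "max_latency_ms": 5}, pools))  # hybrid
```

The point is that the consumer expresses a policy, not a device, which is what separates SDS from simple storage virtualization.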

User Advice: Gartner's opinion is that management SDS is targeting end-user use cases where the
ultimate goal is to improve or extend existing storage capabilities. However, value propositions and
leading use cases of management SDS are not clear, as the technology itself is fragmented by many
categories. The software-defined storage market is still in a formative stage, with many vendors
entering and exiting the marketplace and tackling different SDS use cases. When looking at different
products, identify and focus on use case applicable to your enterprise, and investigate each product
for its capabilities.

Gartner recommends proof of concept (POC) implementations to determine suitability for broader
deployment.

Top reasons for interest in SDS, as gathered from interactions with Gartner clients, include:

■ Improving the management and agility of the overall storage infrastructure through better
programmability, interoperability, automation and orchestration
■ Storage virtualization and abstraction
■ Performance improvement by optimizing and aggregating storage I/O
■ Better linkage of storage to the rest of IT and the software-defined data center

■ Operating expenditure (opex) reductions by reducing the demands of administrators
■ Capital expenditure (capex) reductions from more efficient utilization of existing storage systems

Despite the promise of SDS, some storage point solutions have simply been rebranded as SDS to
present a higher value proposition than built-in storage features; these products should be carefully
examined for their ROI benefits.

Business Impact: Management SDS's ultimate value is to provide broad capability in the policy
management and orchestration of many storage resources. While some management SDS products
are focusing on enabling provisioning and automation of storage resources, more comprehensive
solutions feature robust utilization and management of heterogeneous storage services, allowing
mobility between different types of storage platforms on-premises and in the cloud. As a subset of
management SDS, I/O optimization SDS products can reduce storage response times, improve
storage resource utilization and control costs by deferring major infrastructure upgrades. The
benefits of management SDS are in improved operational efficiency by unifying storage
management practices and providing common layers across different storage technologies. The
operational ROI of management SDS will depend on IT leaders' ability to quantify the impact of
improved ongoing data management, increased operational excellence and reduction of opex.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Atlantis Computing; DataCore Software; EMC; FalconStor; ioFABRIC; IBM;
Infinio; PernixData; Primary Data; VMware

Recommended Reading: "Top Five Use Cases and Benefits of Software-Defined Storage"

"Innovation Insight: Separating Hype From Hope for Software-Defined Storage"

"Technology Overview for I/O Optimization Software"

"Multivendor SDS: A Complex and Costly Myth"

"Should Your Enterprise Deploy a Software-Defined Data Center?"

Composable Infrastructure
Analysis By: George J. Weiss; Andrew Butler

Definition: Composable infrastructure takes physical infrastructure from a set of disaggregated
components, creating pools of resources from which to configure applications and deliver services.
Composable infrastructure can be quickly assembled and disassembled as demand arises.
Applications based on configuration templates with metadata can be stored for repeated use to
automatically assemble resources.
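The template-driven assembly described in the definition can be sketched as carving resources from disaggregated pools according to a stored template. The pool sizes, resource names and template contents are invented for illustration.

```python
def compose(template, pools):
    """Carve resources from disaggregated pools according to a stored
    template and return what was allocated -- a toy model of composition.
    Disassembly would return the amounts to the pools."""
    allocation = {}
    for resource, amount in template["resources"].items():
        if pools.get(resource, 0) < amount:
            raise RuntimeError(f"insufficient {resource}")
        pools[resource] -= amount
        allocation[resource] = amount
    return {"name": template["name"], "allocated": allocation}

# Hypothetical disaggregated pools and a reusable service template.
pools = {"cpu_cores": 256, "memory_gb": 2048, "storage_tb": 100}
web_tier = {"name": "web-tier",
            "resources": {"cpu_cores": 16, "memory_gb": 64, "storage_tb": 1}}

service = compose(web_tier, pools)
print(service)
print(pools)  # remaining pool capacity after composition
```

Because the template carries the metadata, the same composition can be repeated on demand, which is the productivity claim behind the technology.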

Position and Adoption Speed Justification: Reference architectures in use today comprise
lengthy development, test, validation and production cycles. A well-implemented composable
infrastructure will enable I&O leaders to achieve simpler, flexible resource utilization, combined with
faster application deployment configurations via software-driven automation and templates.
Composable infrastructure must still be validated by I&O leaders to confirm its production-readiness
as applicable for a broad spectrum of next-generation integrated systems and applications. The
organizational and political structure of the data center must evolve to acknowledge this technology
convergence, and to collapse day-to-day administration of servers, storage and networking. A
common user interface, dashboard and OSS management tools will be essential to make effective
use of such an architecture. This is the first appearance of composable infrastructure on the server
Hype Cycle.

User Advice: Currently, the maturity of this technology makes it appropriate only for early adopters,
but it is worth considering by other organizations for the long term. Do not consider overhauling DC
infrastructure with composable as a pervasive strategy until it is proven in end-to-end multirack
systems. Do not consider this technology for mission-critical apps at this time. Do perform POCs of
defined and limited scope. Proof points should include: network fabric, compute and storage,
detailed performance advantages, health monitoring, SLA compliance, and elastic scalability during
life cycle updates and maintenance. Use the POC to demonstrate composable infrastructure's value
compared to current IT infrastructures. Its benefits include simplicity and flexibility, productivity
(faster time to market and value), resilience and performance (for example, higher IOPS, lower
latency and greater scale). Before making major commitments, analyze vendor composable
infrastructures for continuous application delivery by evaluating time and labor savings during the
operations and life cycle of a system. The vendor most invested in developing a successful
differentiating composable strategy is Hewlett Packard Enterprise (HPE) with its Synergy system;
but most other system vendors are expected to embark on their own composable strategies (Cisco
and Dell are also publicizing their capabilities). Be cautious about making extensive commitments to
a vendor's solution since this may lead to high degrees of vendor lock-in with little choice in best-
of-breed components among other vendors.

Business Impact: Today's IT infrastructure has evolved to have much higher levels of complexity in
technological, application and ecosystem dimensions. While engineering, application and software
design may have also matured, basic challenges remain: demand for high vendor investment,
limited interoperability and manageability of silos, broader and fast-changing technical innovations
of component integration, and higher consumption demands on agility and speed to production.
Composable infrastructure has a potentially important market impact in delivering next-generation
applications, where fast development and delivery mandate rapid and continuous integration. The
challenge of managing multiple variables in performance, configurations, application designs,
resource governance and pricing will inhibit progress in composable software-driven
infrastructure and intelligence. Moreover, the potential vendor lock-in may dampen the enthusiasm
of I&O leaders and procurement management who desire maximum choice and selectivity.
Supporting its continued evolution is the drive toward a software-defined enterprise that abstracts
and decouples the underlying hardware from the applications, automation tools, and cloud
management and integration services. Composable infrastructure must complement current
evolving software-defined strategies and technologies rather than be seen as a competitive
alternative.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Cisco; Dell; Hewlett Packard Enterprise

Recommended Reading: "Composable Infrastructures Will Affect Hyperconvergence's Next
Generation"

"Infrastructure and Operations Leaders Should Examine Nine Critical Criteria When Assessing
Composable Infrastructures"

Heuristic Automation
Analysis By: Robert Naegle; Colin Fletcher

Definition: Heuristic I&O automation involves collecting, analyzing and applying human- and
machine-based learning and intelligence to tailor specific automated actions to unique situations
and dependencies. Heuristic automation is knowledge- and analytics-driven, whereas most
automation uses a deterministic, predefined, workflow-driven approach.

Position and Adoption Speed Justification: Commonly defined, heuristics are techniques derived
from experience with similar problems, using readily accessible, though loosely applicable,
information to control problem solving (in human beings, machines and abstract issues). The
potential to use process definition and automation experience to learn, apply patterns and improve
IT automation promises significant advances over workflow engines that follow a prescriptive and
predetermined set of steps. By incorporating heuristic-based automation in a task sequence, the
system can respond to "unique situations and dependencies" in more or less real time. However,
the application of heuristic-based automation is still narrowly focused on specific functional areas or
specific operations. Heuristic automation, although transformative, is immature and has yet to learn
from the mistakes made by problem management systems used with service desk technologies a
decade ago. It is not easy to implement into production and will take time to mature
and become mainstream.

User Advice: I&O leaders should proactively implement automation to: lower costs, meet the
demands of scaling the business and replace repetitive and mundane people activities. For most,
heuristic automation will be an evolutionary step in their automation maturity journey. The most
common use cases for heuristic automation today are in areas where multiple variables must be
evaluated prior to taking a next action, analysis which is facilitated by human intervention into the
deterministic workflow (either planned or exception-driven). Automation that executes using
knowledge, data, dynamic decision trees, behaviors and so on typically requires substantial setup and is
never "out-of-the-box." Deterministic automation initiatives, automation skills and use cases should
be relatively mature before attempting a heuristic-based approach. Then, I&O leaders should first
implement heuristic automation technology where intelligence is needed to address the limitations
of deterministic automation. Therefore, users should:

■ First prove their automation ability with a well-controlled and effective deterministic automation
implementation. If you have very little automation in place today, find out why.
■ Refrain from replacing existing deterministic workflows and tools based on marketing hype.
Initially supplement proven automation workflows only after rigorous testing and process
validation.
■ Use heuristic-driven automation when deterministic automation needs intelligence or insight
that helps with decision making to drive particular outcomes. Heuristic automation is not suited
for all use cases or processes.
■ Plan for considerable implementation effort when executing heuristic automation technology
given its relative immaturity.
■ Utilize Algorithmic IT Operations (AIOps) platforms to enhance heuristic automation.
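The contrast with deterministic workflows can be sketched as an experience-driven decision point inside an otherwise predefined workflow: pick the action with the best past success rate for a symptom, and fall back to a deterministic default when no experience exists. The symptom names, actions and success history below are entirely hypothetical.

```python
def choose_remediation(symptom, history):
    """Pick the action with the best past success rate for this symptom;
    fall back to a deterministic default when there is no experience yet.
    A toy illustration of heuristic (experience-driven) decision making
    embedded in an otherwise predefined automation workflow."""
    outcomes = history.get(symptom)
    if not outcomes:
        return "escalate_to_operator"  # deterministic fallback step

    def success_rate(action):
        results = outcomes[action]
        return sum(results) / len(results)

    return max(outcomes, key=success_rate)

# Hypothetical accumulated experience from prior remediation attempts.
history = {
    "high_memory": {
        "restart_service": [True, True, False, True],  # 75% success
        "add_capacity":    [True, False, False],       # 33% success
    }
}
print(choose_remediation("high_memory", history))   # learned choice
print(choose_remediation("disk_errors", history))   # no experience -> escalate
```

Note how much of the effort sits in building and maintaining `history`; this is the "LOT of setup" the advice above warns about, and where AIOps platforms can help.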

Business Impact: I&O leaders who have implemented automation in their environments recognize
that IT process automation helped their organizations become more efficient by automating manual
and repetitive activities. Heuristics drives additional improvements as it supports more complex and
variable-dependent workflows — increasing the potential to decrease required human interaction in
process workflow, increasing the speed and service consistency. The knowledge base or
intelligence required to streamline and improve policy-based execution can be expensive and time
consuming to achieve required levels of reliability.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Arago; Cortex; Digitate

Recommended Reading: "Consider Heuristics the Future of Smart I&O Automation"

"Innovation Insight for Algorithmic IT Operations Platforms"

"Digital Business Initiatives Demand the Use of IT Operations Analytics to Spark Transformation"

"Know the I&O Automation Tool Categories to Drive Efficiency Across Your Data Center and Cloud"

"Market Guide for IT Process Automation"

"Pick the Right Orchestration Technology to Power Your Cloud Initiative"

"Bimodal IT: How to Be Digitally Agile Without Making a Mess"

Unified Endpoint Management
Analysis By: Terrence Cosgrove; Nathan Hill

Definition: Unified endpoint management (UEM) is the use of a common set of tools and processes
across PCs, smartphones and tablets. UEM applies the smartphone and tablet paradigm to a wider
set of devices, most notably PCs. UEM includes the technologies of enterprise mobility
management (EMM) and client management tools (CMTs).

Position and Adoption Speed Justification: PCs (for example, Windows and Macs) possess an
open file system architecture, which allows (but also requires) CMTs and processes to perform a
wide range of management tasks, including provisioning, inventory, software distribution, patching,
configuration management and IT support. Mobile devices (for example, iOS, Android, Windows
Phone) introduced the sandboxed architecture, in which applications are isolated, with the OS
providing enterprise management APIs to EMM suites. Windows 10 and Mac OS X are transitioning
to the sandboxed application architecture, and as a result, are transitioning to the EMM suite
management architecture as well. However, classic Windows and Mac applications are entrenched
in organizations, which requires organizations to use CMTs for some devices and EMMs for others.

UEM solutions allow organizations to manage their devices through a single management tool.
Convergence will happen in three waves over the next three to five years:

■ Wave 1: Different Vendors and Products: Organizations use different vendors and products to
manage mobile devices and PCs.
■ Wave 2: Consolidated Endpoint Management: Organizations use a single vendor product set,
but with different processes and workflows, to manage mobile devices and PCs.
■ Wave 3: True Convergence: Organizations use the same technologies and processes to
manage PCs and mobile devices.

In Wave 3, the products (not just the vendors) and the processes for managing PCs, smartphones
and mobile devices will become the same. Organizations may accelerate this change by running
Windows applications remotely in a server-based computing or hosted virtual desktop environment,
while managing all endpoint devices via EMM controls on the devices.

UEM has progressed significantly over the past year. Windows 10 and Mac OS X have added
significant enhancements to their management capabilities to help facilitate UEM. Over the past
year, Microsoft added a large number of Windows 10 MDM APIs along with new ways of
provisioning Windows 10 systems. Apple continues to enhance its OS X MDM APIs as well. Still, the
persistence of Win32 applications and a continued need for many organizations to use Active
Directory Group Policy will keep organizations in Wave 1 and Wave 2 for the next several years.

User Advice: Users procuring Wave 1 device management products, referred to as MDM, EMM and
CMT, should plan on shifting to Wave 2 products within three years, as long as those products have
a path toward Wave 3. Identify the right use cases for EMM today, such as bring your own PC
(BYOPC), self-supporting users and users with few Win32 applications.


Business Impact: The third wave of UEM represents the disruption of processes and tools that
organizations have used for 15 to 20 years. It will require significant re-engineering of process.
However, there are several benefits to UEM:

■ The UEM architecture allows organizations to more easily manage their endpoint OS platforms,
which are continuously updating.
■ It allows organizations to support a wider range of devices, as UEM does not require managing
images or device drivers.
■ It will reduce the total cost of ownership of managing endpoint devices by simplifying device
management and support processes.
■ It reduces the number of tools required to manage the entire portfolio of endpoint devices.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: IBM; Landesk; Matrix42; Microsoft; VMware (AirWatch)

Recommended Reading: "Manage PCs as Mobile Devices for the Right Use Cases"

"Make Enterprise Mobility Management a Use-Case Decision for Managing Windows 10 and Mac
OS X"

"Gartner Retires the Magic Quadrant for Client Management Tools"

"IBM's Internal Apple Mac Savings Started With New Processes"

Application Release Automation


Analysis By: Colin Fletcher; David Paul Williams

Definition: Application release automation (ARA) tools enable best practices in moving application-
related artifacts, applications, configurations and even data together across the application life
cycle. To do so, ARA tools provide a combination of automation, environment modeling and release
coordination capabilities to simultaneously improve the quality and velocity of application releases.
These tools are a key part of enabling the DevOps goal of achieving continuous delivery with large
numbers of rapid, small releases.
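
As an illustration of the environment modeling and release coordination these tools combine, a release can be described declaratively. The sketch below is generic and hypothetical; it is not the syntax of any particular ARA product, and every name in it is invented:

```yaml
# Hypothetical release pipeline sketch -- not the syntax of any specific ARA product.
pipeline: order-service-release
environments: [dev, qa, prod]        # modeled environments the release moves through
artifacts:
  - order-service-1.4.2.war          # application binary promoted between stages
  - schema-migration-0017.sql        # data/config changes released alongside the code
stages:
  - name: deploy-to-qa
    environment: qa
    steps:
      - stop: app-server
      - deploy: order-service-1.4.2.war
      - run: schema-migration-0017.sql
      - start: app-server
      - verify: smoke-tests          # automated quality gate before promotion
  - name: promote-to-prod
    environment: prod
    approval: release-manager        # release coordination: human sign-off
```

The point of such a model is that the same artifacts, configurations and steps move together through every environment, rather than being reassembled by hand at each stage.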

Position and Adoption Speed Justification: IT organizations rarely manage application releases
consistently across the entire life cycle, with individual efforts led by operations, development or
combined DevOps teams. Tool acquisition remains similarly fragmented, with the majority of ARA
interest and adoption by large enterprises with DevOps initiatives and application portfolios that are
increasingly agile-developed.


As ARA vendors continue to emerge, additional acquisitions are expected. ARA tools typically target
a combination of manual processes, homegrown scripts, continuous configuration automation tools
and overextended build/continuous integration systems. The agility and quality benefits of ARA
solutions become more obvious when DevOps initiatives scale beyond a handful of applications.
The tools themselves have reached adequate maturity to support the code movement and
environmental management of very large implementations (hundreds of applications across
thousands of infrastructure elements); however, tools vary greatly in approach. Innovation is still
needed for aspects of release coordination (DevOps toolchain orchestration, interdependencies,
capacity and performance planning, communications, and similar aspects) at scale.

User Advice: Assess your application life cycle management maturity — specifically around your
deployment processes — and seek tools that can help automate the implementation of ARA
processes across multiple development and operations teams and platforms. Processes for ARA
are not, and are unlikely to become, highly standardized. Organizational and political issues remain
significant and cannot be addressed solely by a tool purchase. Additionally, the better
understanding you have of your current workflows for application release (especially if it is done
manually), the easier the transition to an automated workflow will be, which will decrease time-to-
value for the tools.

Establish requirements for applications to narrow the scope of evaluated vendors, and to determine
whether one tool or multiple tools will be required. Although most vendors provide a combination of
automation, environment modeling and release coordination, the strengths, scope (application,
platform and version support) and packaging of these respective capabilities vary significantly
across vendors. While we expect this gap to shrink, it is important to understand current support
and future roadmaps.

Include integrations with existing development and IT operations management (ITOM) tooling
(especially continuous configuration automation [CCA], continuous integration/build and cloud
management platform [CMP] tools) in product evaluation criteria, with an eye toward using these
tools in your broader provisioning and configuration environment.
Organizations that want to extend the application life cycle beyond development to production
environments using a consistent application model should evaluate development tools with ARA
features or ARA point solutions that provide out-of-the-box integration with development tools.

Business Impact: By automating the deployment of code, management of environments and
coordination of people in support of a continuous delivery pipeline, organizations across all verticals
stand to realize:

■ Agility and Productivity Gains — Via faster delivery of new applications and updates in response
to changing market demands
■ Cost Reduction — Via a significant reduction of required manual interactions by high-skill and
high-cost staff, freeing them to work on higher-value activities
■ Risk Mitigation — Via the consistent use of standardized, documented processes and
configurations across multiple technology domains

ARA tools can also provide a level of transparency to the release management process that can
prove useful in evaluating supporting infrastructure providers and the maturity of other configuration
management initiatives. The direct business impact is that applications and additive functionality
can be delivered to the business faster and more reliably to improve competitiveness.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Automic; CA Technologies; Clarive Software; Electric Cloud; IBM (UrbanCode);
Inedo; MidVision; Serena Software; VMware; XebiaLabs

Recommended Reading: "Cool Vendors in DevOps, 2016"

"Market Guide for Application Release Automation Solutions"

"Avoid Failure by Developing a Toolchain That Enables DevOps"

"How to Build a DevOps Release Team"

"Market Trends: DevOps — Not a Market, but a Tool-Centric Philosophy That Supports a
Continuous Delivery Value Chain"

Virtual Network Configuration Automation


Analysis By: Vivek Bhalla; Sanjit Ganguli

Definition: VNCA is focused on supporting network virtualization, network function virtualization
(NFV) and software-defined networking (SDN) adoption. VNCA does this by meeting the
configuration demands for new approaches, tools and technologies that facilitate automated,
policy-driven change and configuration management for virtual networking technology. This
includes components such as virtual switches and virtual routers that help facilitate abstraction of
the data plane.

Position and Adoption Speed Justification: VNCA is an emerging technology that is tied to the
greater adoption of network configuration and change management (NCCM) and network
virtualization technologies, such as NFV and SDN. Given that NCCM adoption is relatively low and
network virtualization is still in its infancy, VNCA is very much an evolving space. Gartner estimates
SDN remains 18 months from early mainstream adoption, and for this reason, it is logical to
anticipate a similar lag for the supporting VNCA tools, although the greater demand for
configuration automation by network virtualization vendors (as opposed to physical network
vendors) may offset this to a degree.

In contrast with their physical-infrastructure-oriented NCCM counterparts, VNCA tools' lack of
heterogeneous, multivendor support is apparent. Currently, VNCA tools are normally tied to the infrastructure of a specific
vendor (for example, VMware or Microsoft). Gartner anticipates greater interoperability will arrive in
24 months, when end-user demand will push existing vendors to offer this capability or else attract
new vendors that will.


User Advice: When evaluating VNCA tools, determine how these tools support any overall virtual
network infrastructure, SDN or NFV initiative, and the implications of each technology driver (see
"Hype Cycle for Networking and Communications, 2015").

Prior to investing in tools, establish standard network device configuration policies to reduce
complexity and enable more effective automated change. Unlike physical networking providers,
those involved with the virtual infrastructure recognize that automated VNCA tools are essential to
monitor and control virtual network device configurations to improve staff efficiency, reduce risk and
network outages, and enable the enforcement of compliance policies.

Network automation, although a discipline unto itself, must increasingly be considered part of the
wider automation and holistic configuration and change management processes for an end-to-end
IT service, as well as be viewed as an enabler for the adoption of cloud services and technology.
Network automation implementation should take into account the new pressures coming from cloud
implementations, where policy-based network configuration updates must be made in lockstep with
changes to other technologies (such as servers/storage) to initiate the end-to-end cloud service.

Network managers need to identify opportunities to employ the automated features of VNCA tools
for efficiency gains. Corrective actions by VNCA tools will require human oversight in the first
instance to improve user confidence; however, the level of operator input should be reduced over
time. With cost minimization and service quality maximization promised by new, dynamically
virtualized cloud services and technology, automation is becoming a requirement because humans
will no longer be able to manually keep up with real-time configuration changes.

Business Impact: These tools provide an automated way to maintain virtual network device
configurations, thereby offering an opportunity to lower costs, reduce the number of human errors
and improve compliance with configuration policies.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Arkin; Big Switch Networks; Brocade; Cisco; HP; Microsoft; Nuage Networks;
Riverbed Technology; VMware

Recommended Reading: "Market Guide for Network Automation"

"Toolkit: Network Automation RFP Template"

"Minimize Outage Exposure and Risk With Network Automation Tools"

At the Peak

Container Management
Analysis By: Dennis Smith; Lydia Leong


Definition: Container management software provides management and orchestration of OS
containers. This category of software includes container runtimes, container orchestration, job
scheduling, resource management and other container management capabilities. Container
management software is typically DevOps-oriented and dependent upon the use of a particular OS
container technology or specific container runtime.

Position and Adoption Speed Justification: Interest in OS containers is rising sharply as a result
of the introduction of container runtimes, which have made containers more easily consumable by,
and useful to, application developers and those with a DevOps approach to operations. Container
runtimes, frameworks and other management software have increased the utility of OS containers
by providing capabilities such as packaging, placement and deployment, and fault tolerance. The
most notable container runtime is part of the Docker framework, which has a core value proposition
that allows easy and efficient packaging of applications into OS containers. Together with APIs that
allow easy integration and extension of the entire Docker framework, its runtime has become the
nexus of the container-related ecosystem. Its main rival is CoreOS' rkt runtime and associated app
container specification.

Most use of container management software is focused specifically on Linux environments, where
the OS container technology is relatively immature, but improving rapidly; native containers will be
introduced to Windows with the release of Windows Server 2016. As use of OS containers,
especially in conjunction with container runtimes, has grown, there has been strong growth of the
associated ecosystem. That includes container management software (such as Apache Mesos,
Mesosphere's Enterprise DC/OS, VMware's Photon Platform, Docker Datacenter and the Google-led
Kubernetes project), lightweight, micro-PaaS frameworks (such as Flynn and Deis), and public
cloud infrastructure as a service (IaaS) solutions specifically designed to run containers (such as
Amazon Web Services' EC2 Container Service, Google Container Engine and Joyent's Triton Elastic
Container Service). Other platform as a service (PaaS) frameworks, such as Cloud Foundry and
OpenShift, have also begun to incorporate integration with container management software.

There is a high degree of interest in, and awareness of, container runtimes in early-adopter
organizations, and significant grassroots adoption from individual developers. Consequently,
container runtimes and associated software may be used with increasing frequency in development
and testing. Although few organizations use container runtimes in production environments today,
many IT organizations have begun to explore how such use would alter processes and tools in the
future. Container management software is likely to remain an early-adopter technology for at least
two years.

User Advice: Early-adopter organizations should begin exploring Docker or rkt as an alternative for
packaging and deploying applications and their runtime environments. Container management tools
should be viewed as a supplement to configuration management, not a replacement for it. As
container integration is added to existing DevOps tools and to the service offerings of cloud IaaS
and PaaS providers, DevOps-oriented organizations should experiment with altering their processes
and workflows to incorporate containers. Organizations should also look at the emerging ecosystem
around the container runtimes.
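
For teams starting that exploration, packaging with Docker typically begins with a Dockerfile. The sketch below is a minimal, hypothetical example; the base image choice, port and file paths are placeholders, not a recommended configuration:

```dockerfile
# Minimal, hypothetical Dockerfile: package a Python web app and its runtime together.
# The base image supplies the OS userland and the interpreter.
FROM python:3.5-slim
# Bake the dependencies into the image so every environment runs the same bits.
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app/
WORKDIR /app
# Document the port the application listens on.
EXPOSE 8080
# Process started when the container runs.
CMD ["python", "app.py"]
```

Once built (for example, with `docker build -t myapp .`), the same image can be run unchanged in development, test and production, which is the packaging consistency on which the runtimes' value proposition rests.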


An organization may be a good candidate to explore a native container management tool in
conjunction with OS containers as an alternative to more traditional, VM-based cloud management
platforms, if it meets the following criteria:

■ Is DevOps-oriented.
■ Has high-volume, scale-out applications; a microservice architecture; or large-scale batch
workloads.
■ Is willing to place these workloads in OS containers.
■ Can assume trust between containers.
■ Intends to use an API to automate deployment, rather than obtaining infrastructure through a
self-service portal.

Business Impact: Container runtimes make it easier to take advantage of OS container
functionality, including providing integration into DevOps tooling and workflows. Each container
runtime, along with the tools integrated with it, provides a different set of functionality. Container
runtimes typically take an application-centric view — the OS container is simply a convenient
vehicle into which an application can be deployed. Container runtimes aim to improve management
efficiency by providing applications with an apparently homogeneous OS environment. Container
runtimes should help improve both the productivity of DevOps engineers and quality via
standardization and automation.

OS containers can be rapidly provisioned and scaled, and the scaling units can be much smaller
than a typical VM; thus, container frameworks with autoscaling capabilities can further improve
utilization by dynamically allocating small increments of compute resources. This resource efficiency
potentially leads to lower costs, especially when deploying applications into IaaS and PaaS
offerings.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: ClusterHQ; CoreOS; Docker; Mesosphere; Oracle (StackEngine); Rancher Labs;
Shipyard; VMware

Recommended Reading: "How I&O Teams Can Combine CCA Tools With Containers to Achieve
Operational Efficiencies"

"Containers Will Change Your Data Center Infrastructure and Operations Strategy"

"Take (Limited) Action to Prepare Your Data Center Network for Containers"

"Virtual Machines and Containers Solve Different Problems"


Continuous Configuration Automation


Analysis By: Terrence Cosgrove

Definition: Continuous configuration automation (CCA) tools enable infrastructure (system, server
and cloud) administrators and developers to automate the deployment and configuration of settings
and software for physical and virtual infrastructure in a programmatic way. They enable the
description of configuration states, customization of settings, software binaries deployment and
configuration data reporting.
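
To make "describing configuration states" concrete, the fragment below sketches what such a declaration can look like in Ansible, one of the tools in this category. The host group, package and file paths are hypothetical examples, not a reference configuration:

```yaml
# Hypothetical Ansible playbook: declare the desired state of a web tier.
- hosts: webservers            # host group defined in the operator's inventory
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present         # declarative: converge to this state, don't script steps
    - name: Deploy site configuration
      copy:
        src: files/site.conf
        dest: /etc/nginx/conf.d/site.conf
      notify: Reload nginx     # react to configuration change via a handler
  handlers:
    - name: Reload nginx
      service:
        name: nginx
        state: reloaded
```

Running the same playbook repeatedly is idempotent: hosts already in the declared state are left untouched, which is what distinguishes this approach from imperative scripting.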

Position and Adoption Speed Justification: CCA tools are the commercial evolution of open-
source "infrastructure as code" configuration tools. Emerging in the mid-1990s, with new entrants
continuing to arrive, these tools offer a programmable framework on which to codify infrastructure
configuration and provisioning tasks. These tools originally focused on Unix/Linux platforms;
however, Windows capabilities have improved over the past few years.

Adoption of these tools is growing in line with investment in DevOps initiatives due to:

■ Programmatic appeal to application developers
■ Ease of experimentation, extensibility and access to active communities
■ Potentially lower TCO for significant configuration management capability

Enterprise adoption of these tools is hindered mainly by the IT skill sets needed to use them.
Developers and administrators may use them on a tribal basis, further inhibiting enterprisewide
adoption. The growing use of containers has also created confusion about the role of CCA tools;
however, we believe that CCA tools and containers can be used in a highly complementary manner.

Organizations are increasingly using CCA tools for a broader set of deployment and automation
functions beyond configuration management, for example, patching, compliance and application
release automation. As CCA tools are increasingly used in adjacent functions, organizations will
experience the advantages of using the tools in new ways, but also discover limitations relative to
tools that are purpose-built for functions other than configuration management.

User Advice: CCA tools address many configuration automation capabilities needed by system
administrators. They also provide a robust supporting framework for DevOps projects, in which
infrastructure as code is created in sync with application code created by developers. The OSS
versions of these tools more than proved their value to early adopters (primarily web-scale, early
internet-based companies) with community-supported content and no licensing cost. Market
entrants have generally followed a now fairly common OSS business model of providing a free,
open-source offering with minimal content that the community builds on, and then ultimately
"packaging" a combination of enterprise management, curated content, professional services
and/or support capabilities into a commercial offering.

While much of the core automation technology used in both the no-licensing-cost OSS and paid
tools is not new, commercial offerings are still maturing in areas of scalability, platform support and
usability. Each of these areas should be carefully evaluated for suitability. CCA tools are priced
competitively when compared with traditional server (life cycle) automation tools; however, they
don't yet support management of the entire infrastructure life cycle across all major platforms.
Because CCA tools provide a programmatic framework, the costs associated with them extend
beyond just the licensing cost (or lack thereof), so enterprises should include professional services
and training requirements into cost evaluations. In particular, most I&O organizations should expect
to invest in training, as not all infrastructure administrators have the skills needed to use these tools
successfully.

Business Impact: By enabling infrastructure administrators and developers to automate the
deployment and configuration of settings and software for physical and virtual infrastructure in a
programmatic way, organizations across all verticals stand to realize:

■ Agility and Productivity Gains — Via faster deployment and configuration of infrastructure in
response to changing market demands.
■ Cost Reduction — Via a significant reduction of required manual interactions by high-skill and
high-cost staff. Licensing cost reductions may also be achieved.
■ Risk Mitigation — Via the consistent use of standardized, documented processes and
configurations across physical and virtual infrastructure.

CCA tools can drive efficiencies into existing operational configuration management, as well as
provide a flexible framework for managing the infrastructure of DevOps initiatives, by natively
integrating with other toolchain components — notably continuous integration and application
release automation in support of continuous delivery.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Ansible; CFEngine; Chef; Inedo; Puppet; SaltStack

Recommended Reading: "Innovation Insight for Continuous Configuration Automation Tools"

"How I&O Teams Can Combine CCA Tools With Containers to Achieve Operational Efficiencies"

DevOps
Analysis By: George Spafford; Thomas E. Murphy

Definition: DevOps is a perspective that requires cultural change and focuses on rapid IT service
delivery through the adoption of agile, lean practices in the context of an integrated approach.
DevOps emphasizes people and culture to improve collaboration between development and
operations groups as well as other IT stakeholders, such as architecture and information security.
DevOps implementations utilize technology (especially automation tools) that can leverage an
increasingly programmable and dynamic infrastructure from a life cycle perspective.


Position and Adoption Speed Justification: DevOps doesn't have a concrete set of mandates or
standards, or a known framework, such as ITIL or Capability Maturity Model Integration (CMMI),
making it subject to a more liberal interpretation. For many, it is elusive enough to make it difficult to
know where to begin and how to measure success. This can accelerate (or potentially inhibit)
adoption, so it is key to define what DevOps means to your organization. DevOps is primarily associated
with continuous integration and continuous delivery of IT services as a means of optimizing the flow
of work across the application life cycle, from development to production. DevOps concepts are
becoming more widespread across Mode 2 initiatives, including digital business and the Internet of
Things (IoT), and in more traditional enterprise environments; yet every implementation is unique.
The creation of DevOps teams brings development and operations staff together to more effectively
and efficiently manage an end-to-end view of an application or IT service. To accomplish this and
then to continually improve requires major shifts in culture and in how objectives and metrics are set
and shared at the team level.

User Advice: DevOps projects are most successful where there is a focus on business value. There
must be executive sponsorship, with the understanding that the new team will have to make an
often-difficult shift in organizational philosophy away from traditional development and operations
practices. Focus DevOps projects on developing Mode 2 capabilities to support systems of
innovation utilizing agile development.

Recognize that DevOps hype has peaked among tool and service vendors, with the term applied
aggressively and claims outrunning demonstrated capabilities. Many tool vendors are adapting their
existing portfolios and branding them DevOps to gain attention. Some vendors are acquiring smaller
point solutions specifically developed for DevOps to boost their portfolios. IT organizations must
establish key criteria that will differentiate DevOps traits (strong toolchain integration, workflow,
automation) from traditional management tools. Both development and operations should look to
tools to replace custom scripting, improving deployment success through more predictable
configurations.

Because DevOps is not prescriptive, it will result in a variety of manifestations, making it more
difficult to know whether one is actually "doing" DevOps. However, the lack of a formal process
framework should not prevent IT organizations from developing their own repeatable processes for
agility and control.

IT organizations should approach DevOps as a set of guiding principles, not as process dogma.
Select a project with both acceptable value and risk involving development and operations teams to
determine how to approach DevOps in your enterprise. Start small and deploy DevOps iteratively,
taking into account lessons learned along the way. At a minimum, examine activities along the
existing developer-to-operations continuum, where the adoption of more-agile communication can
improve production outcomes. As development efforts leverage enterprise agile frameworks to
scale, DevOps must be addressed as well.

Business Impact: DevOps is focused on accelerating the delivery of business value via the adoption
of continuous improvement and incremental release principles adopted from agile methodologies.
While agility often equates to speed, the impact is somewhat paradoxical: smaller, more frequent
updates to production can work to improve overall quality, including both stability and
control, thus reducing risk. While not explicitly a focus for most DevOps projects, once initial
projects are successful, an adjacent but critical outcome is that clients of IT (both internal and
external) will have better experiences in application consumption.

Many new and transformational initiatives are not sufficiently focused on reducing risk, but, through
iterative use of DevOps and architectural adoption, value can be enhanced while risks and costs
can be managed.

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Recommended Reading: "Seven Steps to Start Your DevOps Initiative"

"Avoid Failure by Developing a Toolchain That Enables DevOps"

"When Using DevOps Principles, Follow Five Gartner Rules to Minimize Compliance and Audit
Findings"

"Survey Analysis: DevOps Adoption Survey Results"

"How to Build a DevOps Release Team"

"Avoid DevOps Disappointment by Setting Expectations and Taking a Product Approach"

Sliding Into the Trough

COBIT
Analysis By: Ian Head; Simon Mingay

Definition: COBIT, owned by ISACA, originated as an IT control framework, but COBIT 5, released
in 2012, is a broad IT governance and management framework. The intended purpose is to ensure
that the achievement of the business's goals is supported by the IT investments. This technology
profile considers COBIT 5 from an IT operations perspective.

Position and Adoption Speed Justification: Although COBIT 5 was released in April 2012,
organizations continue to be slow in their adoption of it, and COBIT 4.1, with its focus on Control
OBjectives for IT, still has some following.

In general, the impact of COBIT on IT operations has been limited. Compared to other frameworks
and bodies of knowledge, Gartner receives relatively few inquiries on COBIT. Gartner sees COBIT as
moving toward the Trough of Disillusionment. We do not believe it will reach the Plateau of
Productivity on the Hype Cycle in the context of IT operations in the next 10 years, which is
unchanged from last year. Some organizations continue to adopt COBIT, and some mandate
compliance for IT operations. However, few operations leaders find it useful as a broad framework
to manage and govern the creation of value, and so in-depth use of COBIT within operations is
limited.

As a control framework, COBIT has a following, especially among auditors, and while its indirect
effect on IT operations can be significant, it's unlikely to be a frequent point of reference for IT
operations management.

User Advice: Even with the COBIT 5 update and its incorporation of ISACA's many frameworks, the
focus of this high-level framework is on what must be done, not on how to do it. COBIT
complements, rather than replaces, ITIL; process engineers using COBIT must therefore draw on
other standards, such as ITIL and ISO/IEC 20000, for the additional design detail needed to apply it
pragmatically. Implementing bimodal IT also requires agile philosophies and practices.

IT operations managers who want to assess their management and governance to better mitigate
risks and reduce variations, and who are aiming toward clearer business alignment of IT services,
should use COBIT in conjunction with other frameworks, including ITIL and ISO/IEC 20000. Those IT
operations managers who want to gain insight into what auditors will look for, or into the potential
implications for compliance programs, should also take a close look at COBIT; but adoption of
COBIT can only be successful if the wider enterprise embraces the framework. Any operations team
facing a demand for wholesale implementation should push back and focus COBIT's use in areas
where there are specific risks in the context of their IT operations.

In particular, operations leaders should talk to compliance, internal audit and any other relevant
stakeholders to discuss future plans before adopting COBIT, or accepting an audit using this
framework.

Business Impact: Properly implemented, COBIT can be used to enhance governance practices
and to help manage risks, thus resulting in improved performance. COBIT links IT service goals to
business goals, so its potential has moved far beyond a simple audit tool. Note, however, that the
lack of compatibility with COBIT 4.1 necessitates an extensive training program for all those
impacted by COBIT 5 adoption.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Recommended Reading: "Leveraging COBIT for Infrastructure and Operations"

"Understanding IT Controls and COBIT"

"I&O Must Combine ITIL and DevOps to Deliver Business Value for Bimodal IT"

"Well-Defined Duties of the Process Owner and Process Manager Are Critical Success Factors for
Service Improvement Programs"

"Resourceful Midsize Enterprises Will Grow IT Vendor Management Competencies From the Inside
Out"

Hybrid Cloud Computing
Analysis By: Milind Govekar; Dennis Smith; David W. Cearley

Definition: Hybrid cloud computing is the coordinated use of cloud services across provider
boundaries among public, private and community cloud service providers to create another cloud
service. A hybrid cloud computing service is automated, scalable, elastic, has self-service interfaces
and is delivered as a shared service using internet technologies. Hybrid cloud computing implies
significant integration between the internal and external (or between two or more external) environments at
the data, process, management or security layers.

Position and Adoption Speed Justification: Hybrid cloud offers enterprises the best of both
worlds — the cost optimization, agility, flexibility, scalability and elasticity benefits of public cloud, in
conjunction with the control, compliance, security and reliability of private cloud. As a result,
virtually all enterprises have a desire to augment internal IT systems with external cloud services.
The solutions that hybrid cloud provides include: service integration, availability/disaster recovery,
cross-service security, policy-based workload placement and runtime optimization, and cloud
service composition and dynamic execution (for example, cloudbursting).
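
The policy-based workload placement use case above can be made concrete with a minimal sketch. The provider names, policy attributes and costs below are illustrative assumptions, not any vendor's model: a placement function first filters providers by policy, then optimizes on cost.

```python
# Illustrative sketch of policy-based workload placement across cloud
# providers. Provider names, attributes and costs are hypothetical.

def place_workload(workload, providers):
    """Return the cheapest provider that satisfies the workload's policy."""
    candidates = [
        p for p in providers
        if (not workload["sensitive_data"] or p["private"])  # residency/compliance rule
        and p["capacity_gb"] >= workload["needs_gb"]         # capacity rule
    ]
    if not candidates:
        raise RuntimeError("no provider satisfies the placement policy")
    return min(candidates, key=lambda p: p["cost_per_gb"])["name"]

providers = [
    {"name": "private-dc", "private": True,  "capacity_gb": 500,  "cost_per_gb": 0.12},
    {"name": "public-a",   "private": False, "capacity_gb": 8000, "cost_per_gb": 0.05},
]

# A workload flagged as sensitive is held on the private cloud ...
print(place_workload({"sensitive_data": True, "needs_gb": 100}, providers))   # private-dc
# ... while an unconstrained workload bursts to the cheaper public provider.
print(place_workload({"sensitive_data": False, "needs_gb": 100}, providers))  # public-a
```

Real hybrid cloud governors evaluate far richer policies (compliance zones, latency, runtime telemetry), but the filter-then-optimize shape is the same.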

While most organizations are integrating applications and services across service boundaries, we
estimate approximately 10% to 15% of large enterprises have implemented hybrid cloud computing
beyond this basic approach — and for relatively few services. This decreases to less than 10% for
midsize enterprises, which mostly are implementing the availability/disaster recovery use case.
While most companies will use some form of hybrid cloud computing during the next three years,
more advanced approaches lack maturity and suffer from significant setup and operational
complexity. Positioning on the Hype Cycle advances toward the Trough of Disillusionment as
organizations continue to gain experience in designing cloud-native and optimized services, and
seek to optimize their spending across on-premises and off-premises cloud services. However, this
is different from hybrid IT, which is where IT organizations act as service brokers as part of a broader
IT strategy and may use hybrid cloud computing. Hybrid IT services are professional services that
provide cloud service brokerage (CSB), multisourcing, service integration and management
capabilities to customers building and managing an integrated hybrid IT operating model. These
services are provided by vendors, such as Accenture, Wipro and Tata Consultancy Services (TCS).

User Advice: When using multiple cloud computing services, establish security, management, and
governance guidelines and standards to coordinate the use of these services with internal
applications and services to form a hybrid environment. Approach sophisticated cloudbursting and
dynamic execution cautiously, because these are the least mature and most problematic hybrid
approaches. To encourage experimentation and cost savings, and to prevent inappropriately risky
implementations, create guidelines/policies on the appropriate use of the different hybrid cloud
models. Coordinate hybrid cloud services with noncloud applications and infrastructure to support a
hybrid IT model. Consider cloud management platforms, which implement and enforce policies
related to cloud services. If your organization is implementing hybrid IT, consider using hybrid cloud
computing as the foundation for implementing a multicloud broker role and leveraging hybrid IT
services to complement your own capabilities.

Business Impact: Hybrid cloud computing enables an enterprise to scale beyond its data centers
to take advantage of the elasticity of the public cloud — and, therefore, it is transformational when
implemented because changing business requirements drive the optimum use of private and public
cloud resources. This ideal approach offers the best possible economic model and maximum agility.
It also sets the stage for new ways for enterprises to work with suppliers and partners (B2B) and
with customers (B2C), as these constituencies also move toward a hybrid cloud computing model.

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Hewlett Packard Enterprise; IBM; Microsoft; OpenStack; Rackspace; RightScale;
VMware

Recommended Reading: "How to Prepare Your Network for Private and Hybrid Cloud Computing"

"Exploring Cloud Management Trends and Actions to Take"

"Solution Path: Implementing a Hybrid Strategy for Cloud Integration"

"Is Your Colocation Provider Cloud-Enabling or a Cloud Impediment?"

IT Workload Automation
Analysis By: Robert Naegle; Biswajeet Mahapatra

Definition: Workload automation tools manage and automate the scheduling and movement of
workloads and infrastructure tasks — within and between applications, and across mainframe,
distributed, virtual and cloud environments. In addition, they manage mixed workloads based on
policies in which resources are automatically assigned or released to meet service-level objectives.

Position and Adoption Speed Justification: Workload automation tools are an evolution of the
traditional job schedulers that were application- and infrastructure-stack-specific (Windows, Linux,
mainframe, etc.). However, the move to digitalization and the adoption of mobility, cloud and IT
analytics have combined to morph basic job scheduling tools into more dynamic workload
automation tools. Workload automation tools are designed to manage jobs of varying size,
complexity and demand, enabling businesses to: classify jobs automatically based on the event
type; automatically free up or add capacity; run jobs based on type or priority; and run analytics and
publish results across different media based on need and audience. Workload automation is not
new, and business and IT operations leaders generally understand the potential of workload
automation tools. In most organizations, workload automation tools were adopted in a natural
transition from incumbent job scheduling tool providers. Most inquiries into workload automation
tools are driven by architecture changes and/or the desire to switch vendors.

Some key capabilities that will differentiate workload automation tools are:

■ Support for digital business by enabling the creation of dynamic jobs, and running, testing and
deploying those jobs instantly in addition to the regular running of jobs
■ Integration with other workload automation tools, capacity management tools and IT financial
management tools
■ Integration with capacity planning tools to help identify overused and spare resources to
schedule jobs in a cost-effective manner
■ Ability to run dynamic jobs on-premises and in the cloud, enabling monitoring and scheduling of
jobs through mobile devices, pushing alerts and messages to a variety of platforms, and
running different analytical tools on top of job scheduling tools — all hygiene factors
■ Building intelligence by leveraging IT operations analytics (ITOA)/algorithmic IT operations
(AIOps) into the workload to help differentiate jobs based on the events that triggered them
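
The event-driven job classification described above can be sketched in a few lines; the event types and priority values below are hypothetical illustrations rather than any product's actual model.

```python
import heapq

# Illustrative sketch of event-driven job classification and priority
# dispatch. Event types and priority values are hypothetical.

PRIORITY = {"sla_breach_risk": 0, "business_event": 1, "scheduled": 2}  # lower runs sooner

class JobQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving FIFO order within a priority class

    def submit(self, name, event_type):
        # Classify the job from the event that triggered it.
        heapq.heappush(self._heap, (PRIORITY.get(event_type, 3), self._seq, name))
        self._seq += 1

    def run_next(self):
        return heapq.heappop(self._heap)[2]

q = JobQueue()
q.submit("nightly-batch", "scheduled")
q.submit("order-spike-etl", "business_event")
q.submit("rebalance-capacity", "sla_breach_risk")
print(q.run_next())  # rebalance-capacity: dispatched first despite arriving last
```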

User Advice: I&O leaders should evaluate the need and applicability of workload automation tools
and the requirement for continued investment. More-diverse environments with complex business
applications and mixed operating environments will realize the greatest benefit from workload
automation tools. Job scheduler owners who are content with the existing scheduling features —
provided primarily by the applications themselves — and who need only basic capabilities, such as
scheduling and monitoring jobs across heterogeneous environments, have less need for a workload
automation solution. More advanced organizations actively leverage workload automation tools to
automate the movement of data and the scheduling of jobs, which in turn enables better, faster
decisions and increases customer satisfaction.

I&O leaders that need workload automation tools should:

■ Choose tools that have capabilities to support ITIL-like and DevOps-like processes.
■ Selectively choose workload automation tools and avoid purchasing multiple redundant tools.
■ Choose tools that enable recognition of different types of events, enable capacity evaluation,
and have provisions to run multiple jobs across heterogeneous environments.

Business Impact: Workload automation vendors continue to refine core capabilities to be more
dynamic and agile to meet new requirements from digital enterprises. Workload automation cannot
be just a monolithic application that runs on a preconfigured environment at a predetermined time.
Rather, workloads must be increasingly event-driven and should automate tasks to perform
capacity checks, provision extra capacity (virtual/in-house/in the cloud) and run jobs efficiently so as
to rapidly provide meaningful business information to end users. Workload automation tools also
need to be able to manage automated movement of jobs and/or tasks from development into
production in a predictable and agile manner. Workload automation tools can help reduce the cost
of operations by automating many manual processes, reducing redundancy, eliminating duplication
and human errors, and by making systems more agile. However, the more workload automation
tools evolve to support more general IT automation requirements, the more they will encounter
competitive pressure from IT process automation (ITPA) tools.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Advanced Systems Concepts; Automic; BMC; CA Technologies; HelpSystems;
MVP Systems Software; Redwood Software; Stonebranch

Recommended Reading: "Mapping Workload Automation to IT Bimodal Needs"

"Consider Heuristics the Future of Smart I&O Automation"

"Market Guide for IT Process Automation"

"Know the I&O Automation Tool Categories to Drive Efficiency Across Your Data Center and Cloud"

"Innovation Insight for Algorithmic IT Operations Platforms"

Cloud Management Platforms
Analysis By: Dennis Smith

Definition: Cloud management platform (CMP) tools enable organizations to manage private, public
and hybrid cloud services and resources. Their specific functionality addresses three key
management layers: access management, service management and service optimization.
Management services include accessing/requesting cloud services and provisioning and managing
them to defined SLAs. Optimization supports the orchestration and automation of cloud services, as
well as the underlying infrastructure resources, in accordance with defined policies.
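
As a rough illustration of policy enforcement in the service optimization layer, the sketch below validates a provisioning request against hypothetical governance rules (required tags and an instance quota); production CMPs express such policies far more richly.

```python
# Illustrative sketch of CMP-style policy enforcement at provisioning time.
# The policy rules (required tags, instance quota) are hypothetical.

def validate_request(request, policy, current_instance_count):
    """Return a list of policy violations; an empty list means the request is approved."""
    violations = []
    missing = [t for t in policy["required_tags"] if t not in request["tags"]]
    if missing:
        violations.append("missing required tags: " + ", ".join(missing))
    if current_instance_count + request["instances"] > policy["instance_quota"]:
        violations.append("request exceeds instance quota")
    return violations

policy = {"required_tags": ["owner", "cost_center"], "instance_quota": 20}

ok = validate_request({"tags": {"owner": "app-team", "cost_center": "cc-42"},
                       "instances": 2}, policy, current_instance_count=10)
bad = validate_request({"tags": {"owner": "app-team"}, "instances": 15},
                       policy, current_instance_count=10)
print(ok)        # [] -> approved
print(len(bad))  # 2  -> missing tag and quota exceeded
```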

Position and Adoption Speed Justification: The CMP market is changing rapidly, as vendors
struggle to keep up with evolving customer requirements (for example, interfacing to public clouds
and workload optimization). At the same time, major market consolidation has begun and will
continue over the next few years. Some of the core CMP functionality is being combined with other
features (for example, service management and container orchestration). The ability to automatically
provision infrastructure for developers, so that they can focus on business logic, is key to providing
organizations with the agility they need. This requires that CMPs be linked into the application
development process. I&O teams also use CMPs when they seek cost and operational efficiencies.
Organizations have an increasing need to address hybrid requirements and, in some
cases, they want to become internal cloud service brokers (CSBs) and manage public services that
were previously acquired (often by lines of business outside the infrastructure and operations
organization) and have become tough to manage operationally. This ability to handle hybrid
requirements tends to be a large investment for most CMP vendors.

User Advice: As CMP market volatility increases, IT organizations must:

■ Perform due diligence, not only on features, but also on the CMP vendor's viability.
■ Augment, swap out or integrate additional cloud management or traditional management tools
for some requirements, because no vendor provides a complete CMP solution.
■ Standardize, as deriving value from your CMP will depend heavily on the degree of
standardization offered by infrastructure, software and services.
■ Set realistic expectations on deployment times: mature organizations implement a CMP in a
relatively short period (one to two years), but less mature organizations may require three or
more years to design effective, repeatable and automatable standards and processes.
■ Plan for new roles (for example, cloud architects and cloud service brokers), including
development skills in the infrastructure and operations organization, financial management and
capacity management.

Business Impact: Enterprises will deploy CMPs to increase agility, reduce costs in providing
services and increase the likelihood of meeting service levels. The reduction of costs and the ability
to meet service levels are achieved because CMP deployments require adherence to standards and
increased governance and accountability. Desirable IT outcomes include:

■ Policy enforcement (e.g., on reusable standard infrastructure components).
■ Reduced lock-in to public cloud providers.
■ Enhanced ability to broker services from various cloud providers and to make informed
business decisions on providers to use.
■ Ongoing optimization to SLAs and costs, including autoscaling within and across providers.
■ Management of SLAs and enforcement of compliance requirements.
■ Accelerated development, enabling setup/tear-down of infrastructure that mimics production,
resulting in lower overall infrastructure costs and higher quality.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: BMC; Cisco; GigaSpaces Technologies; HPE; IBM; Microsoft; Red Hat;
RightScale; Scalr; VMware

Recommended Reading: "Market Guide for Cloud Management Platforms: Large, Emerging and
Open-Source Software Vendors"

"Market Guide for Integrated Infrastructure Systems Cloud Management Platforms"

"OpenStack Is Not a Cloud Management Platform"

"Synergistically Link Cloud Management Platforms With Containers to Enhance I&O Agility"

"Cool Vendors in Cloud Management, 2016"

Configuration Auditing
Analysis By: Terrence Cosgrove

Definition: Configuration auditing tools provide change detection and configuration assessment
across servers, applications, databases and networking devices, spanning internal and public cloud
infrastructure. Company-specific policies or industry-recognized security configuration assessment
templates (for example, NIST) maintain the fidelity of the system for auditing, hardening or improved
availability. Some can remediate to a desired state, while others work with configuration
management tools for remediation.
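
The compare-to-baseline mechanic at the core of these tools can be sketched minimally as follows; the setting names and desired values are hypothetical, and real products add discovery, scheduling, reporting and remediation around this step.

```python
# Minimal sketch of configuration drift detection against a desired-state
# baseline ("gold standard"). Setting names and values are illustrative.

def audit(actual, baseline):
    """Report every baseline setting whose observed value deviates."""
    drift = {}
    for key, desired in baseline.items():
        observed = actual.get(key)
        if observed != desired:
            drift[key] = {"expected": desired, "observed": observed}
    return drift

baseline = {"ssh_root_login": "no", "password_max_age_days": 90, "ntp_server": "10.0.0.1"}
actual   = {"ssh_root_login": "yes", "password_max_age_days": 90}  # ntp_server unset

report = audit(actual, baseline)
print(report)
# {'ssh_root_login': {'expected': 'no', 'observed': 'yes'},
#  'ntp_server': {'expected': '10.0.0.1', 'observed': None}}
```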

Position and Adoption Speed Justification: Configuration auditing provides visibility to
configuration changes. Organizations adopt it for external (regulatory compliance) and internal
(improved availability and security compliance) reasons. Cloud projects continue to expand and
mature, and the requirement for policy enforcement is becoming a critical "day two" requirement.
This is especially true for hybrid clouds, where cloud service providers (CSPs) will drive placement
of virtual machines (VMs) and applications based on capacity, which may violate the compliance
requirements of the business. As the number of DevOps projects focused on improving release
velocity grows, it introduces heightened risk for compliance as application changes can happen
continuously. New tools are emerging to address this requirement, and existing tools will also
expand.

Technology implementation is gated by the organization's process maturity. Organizations must first
have the ability to define and implement configuration standards, as well as governance through a
change management process.

User Advice: Develop sound configuration and change management practices before introducing
configuration auditing technology in the organization. Greater benefits can be achieved if robust,
proactive change management processes are also implemented. Process and technology
deployment should focus on systems that are material to the compliance issue being resolved.
However, broader functional requirements should also be evaluated, because many organizations
can benefit from more than one area of focus, and often need to expand support within 12 months.

Define the specific audit controls required before selecting configuration auditing technology,
because each configuration auditing tool has a different focus and breadth. IT system
administrators, network administrators and system engineers should evaluate configuration auditing
tools to maintain operational configuration standards and to provide a reporting mechanism for
change activity. Security officers should evaluate the security configuration assessment capabilities
of incumbent security technologies, to conduct a broad assessment of system hardening and
security configuration compliance independent of operational configuration auditing tools.
Enterprise and cloud architects must insist on specific compliance policies and governance
capabilities that meet company regulatory and availability requirements. DevOps project leaders
focused on increasing release cycles should evaluate configuration auditing tools that support the
ability to track application environment changes.

Business Impact: The benefits of configuration auditing are increased agility, by ensuring that
high-velocity change does not cause configuration drift; increased availability, by maintaining
desired configurations; and reduced risk, by ensuring that the right controls are in place to avoid
failed audits and, worse, exposure to attack. Build a case where controls are appropriate for the
situation, which may be industry-specific (for example, PCI for retail and commercial finance, or
NIST/CIS for government). Without these tools, it is difficult, if not impossible, to prepare for
audits and demonstrate control; without a way to demonstrate change control, organizations risk
fines. The real business benefit, however, is the potential to thwart an attack by maintaining a
consistent and trusted configuration of systems and software.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: BMC; Chef; Evolven; IBM BigFix; Puppet; Qualys; Tripwire; UpGuard

Recommended Reading: "Know the I&O Automation Tool Categories to Drive Efficiency Across
Your Data Center and Cloud"

Network Configuration and Change Management Tools
Analysis By: Vivek Bhalla; Sanjit Ganguli

Definition: Network configuration and change management (NCCM) tools are a subset of the
network automation toolset and are focused on the setup and configuration, patching, rollout and
rollback, resource use, and change history of the physical network infrastructure. These tools
discover and document network device configurations; detect, audit and alert on changes; compare
configurations with the policy or "gold standard" for that device; and deploy configuration updates
to multivendor network devices.
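
The discover-and-compare step, diffing a device's running configuration against the "gold standard" for that device class, can be illustrated with a short sketch built on Python's standard difflib; the configuration lines themselves are hypothetical.

```python
import difflib

# Illustrative sketch of the NCCM compare step: diff a device's running
# configuration against its "gold standard". Configuration lines are hypothetical.

GOLD_STANDARD = "service password-encryption\nntp server 10.0.0.1\nlogging host 10.0.0.5\n"
running_config = "ntp server 10.0.0.1\nlogging host 10.9.9.9\n"

def config_drift(running, gold):
    """Return unified-diff lines showing deviation from the gold standard."""
    return list(difflib.unified_diff(gold.splitlines(), running.splitlines(),
                                     fromfile="gold-standard", tofile="running-config",
                                     lineterm=""))

for line in config_drift(running_config, GOLD_STANDARD):
    print(line)  # '-' lines are missing settings; '+' lines are unexpected ones
```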

Position and Adoption Speed Justification: NCCM is positioned just past the Trough of
Disillusionment, not because of any deficiency in the tools, which work well and can deliver strong
benefits, but because the discipline is held back by a lack of process maturity, which pushes teams
toward pragmatic, one-off fixes for their organizations' specific requirements. This has frequently
resulted in a cultural reluctance to modify and document standard operating procedures that have
evolved organically (as opposed to systematically).

Network configuration management is frequently practiced by router experts, the only individuals
familiar with the command line interfaces for the network devices. Without sufficiently documented
procedures, it is a challenge to transform this status quo, particularly when there's resistance to
change by those feeling their skills are being undervalued. A top-down effort is required, and a
change in personnel performance review metrics must occur to convince network managers of the
business importance of documented device configuration policies, rigorous change management
procedures and tested disaster recovery (DR) capabilities.

User Advice: Replace manual processes and activities with automated NCCM tools to monitor and
control physical network device configurations to improve staff efficiency, reduce risk and network
outages, and enable the enforcement of compliance policies. Prior to investing in tools, establish
standard network device configuration policies to reduce complexity and enable more effective
automated change.

NCCM, although a discipline unto itself, must be considered part of the wider automation and
holistic configuration and change management processes for an end-to-end IT service, as well as
an enabler for the adoption of cloud services and technology. Implementation of NCCM should take
into account the new pressures coming from cloud implementations, where policy-based network
configuration updates must be made in lockstep with changes to other technologies (such as
servers and storage) to initiate the end-to-end cloud service. The implications of any drive toward
software-defined networking (SDN) and/or network function virtualization (NFV) adoption must also
be accommodated in this process, with the adoption of virtual network configuration automation
(VNCA) tools being of particular relevance.

Network managers need to identify opportunities to employ the automated features of NCCM tools
for efficiency gains. Corrective actions by NCCM tools will require human oversight in the first
instance to improve user confidence; however, the level of operator input should be reduced over
time. With cost minimization and service quality maximization promised by new, dynamically
virtualized cloud services and technology, automation is becoming a requirement because humans
will no longer be able to manually keep up with real-time configuration changes.

Business Impact: NCCM tools provide an automated way to maintain physical network device
configurations, thereby offering an opportunity to lower costs, reduce the number of human errors
and improve compliance with configuration policies.

Without such tools, network configuration management remains primarily a labor-intensive, manual process that involves remote access — for
example, via Telnet or Secure Shell (SSH) — to individual network devices and typing commands
into vendor-specific command line interfaces. These activities are fraught with opportunities for
human error. Alternative approaches — such as creating homegrown scripts to ease retyping
requirements — are used to reduce effort, as opposed to ensuring accuracy and eliminating
inconsistencies. Enterprise network managers do not often consider rigorous configuration and
change management, compliance audits and disaster recovery (DR) rollback processes when
executing network configuration alterations, even though these changes often are the root causes
of network issues. However, corporate audit and compliance initiatives have forced a shift in this
behavior.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Auconet; Dorado Software; EfficientIP; Entuity; Hewlett Packard Enterprise;
Infoblox; Infosim; ManageEngine; NetBrain; Opmantek

Recommended Reading: "Market Guide for Network Automation"

"Toolkit: Network Automation RFP Template"

"Minimize Outage Exposure and Risk With Network Automation Tools"

Cloud Migration Tools
Analysis By: Dennis Smith

Definition: Cloud migration tools support the packaging and movement of production or disaster
recovery workloads between on-premises infrastructure and public cloud facilities, as well as
between public cloud services. Enterprises will mostly use these tools through a system integrator
that has been engaged for cloud migration. These tools can import workloads from either physical
or virtual servers. Movement of the workloads can be initiated manually or through a service
governor.

Position and Adoption Speed Justification: Cloud migration tools are a rapidly maturing market,
in which vendors offer proprietary technology for workload autodiscovery, metadata creation and
importing into the cloud. Originally targeted for on-premises infrastructures (for example, moving
workloads from physical to virtual servers), these tools have evolved and been repositioned to
address use cases involving running production and disaster recovery workloads in public clouds.

Some vendors have originated from the cloud migration (moving production workloads one-time
from on-premises infrastructure to a public cloud) and the disaster recovery (moving workloads
during disaster recovery testing or after declaring an actual disaster) market areas, with their
respective technologies having strengths more geared toward one or the other of the two use
cases. Additional providers from adjacent areas are also starting to encroach on the market (for
example, backup and recovery vendors and cloud management platform vendors). Some of the
vendors with roots in cloud migration are also beginning to offer cloud management platform
features.

User Advice: Enterprises planning to use public cloud services for legacy (i.e., noncloud)
production or disaster recovery infrastructure should consider cloud migration tools. Doing so can
decrease the time, effort and cost of migrating existing workloads into a public cloud environment,
as opposed to manual efforts and/or using scripting. These tools can also aid in moving workloads
in the case of changing public cloud providers or from a public cloud provider back on-premises
(though this use case is rarely seen). Enterprises should clearly define their use cases and
understand that vendor strengths will vary. Enterprises should also ensure that the tool selected is in
line with their overall cloud toolset (for example, their selected cloud management platform). The
following should be among the questions asked of any potential vendor:

■ What hardware and software platforms do you support (source and target)?

■ What migration models do you support (e.g., physical to cloud, virtual to cloud or cloud to
cloud)?
■ Are you able to migrate specific applications or an entire image (i.e., application, OS, etc.)?
■ What specific public cloud providers do you support?
■ Are you able to migrate the workload back on-site (e.g., after moving to a public cloud, migrate
back on-premises)?
■ Are you able to automatically discover source resources (e.g., images, OSs, hardware)?
■ What migration times have your customers experienced (per GB of source images/data
migrated)?

Business Impact: The business impact associated with successfully using cloud migration tools is
shorter implementation times, less manual effort and lower implementation costs. For disaster
recovery use cases, operations can be enhanced through greater speed and lower costs, because
of faster server provisioning, which is automated, with reduced human error. For cloud migration
use cases, workloads can be deployed over a weekend, as opposed to a multiple-week effort that
requires extensive scripting and manual invocation. The organizations that have had the best results
are those with high levels of standardization in their infrastructures. Those with significant
standardization will find dramatically lower operational costs and will realize that automation can
drive tremendous benefits. IT organizations with little standardization will find that maintaining a
diverse environment is difficult and that it increases costs, thereby potentially outstripping some of
the benefits. However, vendor support and/or consulting are needed for the workload onboarding of
large, complex workloads.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: CloudVelox; Cristie Software; Racemi; RiverMeadow Software; Veeam

Recommended Reading: "I&O Leaders Should Add Cloud Migration Tools to Their DR Toolset"

IT Process Automation Tools


Analysis By: Robert Naegle

Definition: IT process automation (ITPA) tools automate IT operations processes across traditional,
virtual and public cloud resources, integrating and orchestrating multiple IT operations activities.
ITPA tools can focus on a specific IT process (e.g., server provisioning), replacing or augmenting
scripts and manual processes, or can be applied to processes that span different functional or
application domains.


Position and Adoption Speed Justification: ITPA tools provide a mechanism to help IT
organizations take manual or scripted processes and automate them via a workflow-enabled
construction model. Also, ITPA tools provide IT operations teams a way to integrate the interactions
between disparate IT operations management, monitoring and automation tools to improve process
handoffs, system calls, etc. Gartner anticipates that many IT organizations will have multiple ITPA-
capable tools, most acquired to solve specific functional problems or as an embedded capability in
a (larger) vendor enterprise agreement.

Increasingly, IT organizations are leveraging ITPA tools as the glue tying together the automation
and workflow capabilities of various tools used for task-specific process definition, automation, fault
detection and event remediation, server provisioning, etc. For example, IT organizations may be
using a service desk tool for service catalog or service requests, and are calling the ITPA tool from
the service desk tool for execution of workflows to provision the appropriate resources, commonly
by calling upon still other tools (either on- or off-premises). A key enabler of these process
automation use cases is the ability to interact with and orchestrate via web services and APIs.
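As a concrete illustration of this integration pattern, the sketch below shows a service desk integration calling an ITPA tool's workflow API over REST. The endpoint URL, payload schema and workflow name are hypothetical examples, not any vendor's actual API; real products expose comparable but vendor-specific interfaces.

```python
import json
import urllib.request

# Hypothetical ITPA endpoint and workflow path -- vendor APIs differ,
# but the request/run pattern is broadly similar.
ITPA_URL = "https://itpa.example.com/api/v1/workflows/provision-server/run"

def build_run_request(ticket_id, cpu, ram_gb):
    """Build the JSON body handed to the ITPA tool's workflow API,
    linking the execution back to the originating service desk ticket."""
    return {
        "ticket": ticket_id,
        "parameters": {"cpu": cpu, "ram_gb": ram_gb},
    }

def trigger_provisioning(ticket_id, cpu, ram_gb):
    """POST the request; the ITPA tool then orchestrates the downstream
    tools (hypervisor, IPAM, CMDB) and returns a run identifier."""
    body = json.dumps(build_run_request(ticket_id, cpu, ram_gb)).encode()
    req = urllib.request.Request(
        ITPA_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In practice the service desk tool would call `trigger_provisioning` from a workflow step when a catalog request is approved, rather than a human running it directly.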

User Advice: Select ITPA tools in the context of your organization's process maturity and the tool's
framework — first, evaluate current management and process tools before purchasing additional
capabilities. ITPA tools that have a specific operational orientation (e.g., user provisioning and
server provisioning) and provide a defined (out-of-the-box) process documentation framework can aid
in achieving rapid value, reducing costs and improving reliability. However, using a more
orchestration-oriented ITPA tool requires more organizational maturity, better-understood process
workflows, and specific skills to develop, build and maintain unique automation or integration
connector content. Expect to see ITPA tools positioned and sold within a single vendor's product
portfolio to augment and enhance their own current IT management products. Innovation in the
ITPA space is generally lacking. The real differentiators in this space are the depth of out-of-the-box
content and the breadth of integrations. Many tools in this space still do not provide basic
capabilities such as scheduling, decision tree management, process time management and load
balancing.

A second key automation success factor is defining, capturing and documenting processes. IT
operations managers should consider ITPA tools as a way to minimize redundant tasks, reduce risk
where handoffs occur, or improve efficiencies where multiple tool integrations can establish
repeatable best-practice activities. Standardizing workflow and automation disciplines to improve
repeatability and predictability is a critical prerequisite, as will be the centralization of process and
automation governance in most organizations. For more service-oriented organizations, ITPA tools
may give way to IT service orchestration (ITSO) tools, where providing more business-oriented
services requires cross-organization coordination and orchestration of IT tasks and IT services to
deliver business value.

Business Impact: ITPA tools will have a significant effect on cost, growth and agility, greatly
enhancing the ability of I&O leaders to run IT as a business by providing consistent, measurable and
repeatable services. Process definition and automation will reduce the "human factor" and
associated risks by automating safe, repeatable processes, and will lower operational costs by
integrating and leveraging the IT management tools needed to support IT operations processes
across IT domains.


However, the four most significant barriers to widespread ITPA adoption continue to be:

■ Cultural resistance to automation. Too many organizations struggle with the perception that
process automation will marginalize the value of IT staff members.
■ Lack of detailed process knowledge, cross-domain expertise and coordination to successfully
automate key processes.
■ Shortage of skills required to modify out-of-the-box content to meet specific business needs.
■ Script-based automation is often limited, user-specific and difficult to duplicate in process
automation tools.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Automic; Ayehu Software Technologies; BMC; Cortex; HP Inc.; Microsoft;
ServiceNow; VMware

Recommended Reading: "Map I&O Automation Capabilities and Needs for a Successful Tool
Strategy"

"Market Guide for IT Process Automation"

"How to Avoid the Five Most-Common IT Automation Pitfalls"

"Survey Analysis: The Realities, Opportunities and Challenges of I&O Automation"

"Consider Heuristics the Future of Smart I&O Automation"

Enterprise Mobility Management Suites


Analysis By: Manjunath Bhat

Definition: Enterprise mobility management (EMM) suites help organizations securely integrate
mobile devices into their enterprise systems. EMM suites configure devices to comply with
organizations' policies, secure and deploy applications, protect enterprise data, and optionally
provide contextual trust. There are five core EMM technical categories that help IT organizations
perform these services: mobile device management (MDM), mobile application management (MAM),
mobile content management (MCM), mobile identity and containment.

Position and Adoption Speed Justification: The foundational technologies (such as MDM and
MAM) underpinning EMM suites vary by platform. EMM suites struggle to provide consistency in
managing and securing mobility in a diverse mobile landscape. While MDM is becoming
standardized, customers still experience challenges in the implementation of the broader EMM
suites. This is largely due to the immaturity of the tools with respect to providing a good balance
between security and usability, and the lack of policy parity across all mobile platforms.
Organizations are also wrestling with end-user resistance to enrolling devices into EMM due to
perceived loss of privacy. EMM solutions vary in their capabilities to manage mobile devices versus
PCs as well as mobile apps versus SaaS apps, leading to different workflows and different
consoles.

The products will continue to broaden and leverage synergistic capabilities between EMM and IAM
to provide a unified workspace. Mac OS X and, more recently, Windows 10 introduce new
management APIs that will allow organizations to use EMM suites to manage PCs. EMM suites will
standardize functionality based on mobile-platform-specific technologies, such as Apple MDM,
Android for Work and Windows 10 Enterprise Data Protection.

User Advice: Identify critical policy controls and the mobile use cases in your organization, and
evaluate the EMM functions that are most critical in addressing those requirements. No EMM
vendor excels in all functions because of the breadth of the products.

Use mobile apps as a catalyst to drive business mobility initiatives and increase adoption of EMM
among your user base. EMM solutions have evolved to manage apps and deliver content without
device enrollment.

Train your users at the time of deployment to increase overall user satisfaction and reduce support
overhead.

Use the integration with IAM tools to provide features such as conditional or adaptive access to
corporate resources that takes into account the security posture of the managed device. Identify
the right use cases for managing PCs using EMM, such as bring your own PC (BYOPC), third-party
workers and environments with few Win32 application needs. Adopt a platform-centric approach to
containerization, and avoid lockdown to EMM-specific SDKs.

Business Impact: EMM suites help enable mobility in the enterprise, so the business impact of
EMM is tied to the business impact of mobility itself. CIOs and business leaders have realized the
growing importance of enterprise mobility as a means to gain competitive advantage. In that sense,
EMM is used as a business enablement tool to fulfill two basic requirements: enable general
productivity (email, calendar, access to documents) and improve business processes (for example,
customer interactions, real-time data entry, sales automation, field service applications). In addition,
the integration between EMM and IAM functionality enables organizations to manage both mobile
and SaaS apps through a single workspace, thus enabling a wide range of use cases that
complement an organization's move to the cloud.

The biggest risk introduced by mobility is the increased likelihood of data leakage. EMM suites help
organizations make mobility secure by implementing various measures to protect enterprise data.
EMM suites also improve IT operational efficiency by automating provisioning and configuration
management at large scale, and helping IT departments troubleshoot end-user devices.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience


Maturity: Early mainstream

Sample Vendors: BlackBerry; Citrix; IBM; Matrix42; Microsoft; MobileIron; Sophos; Soti; VMware
(AirWatch)

Recommended Reading: "Toolkit: Enterprise Mobility Management RFI and RFP Template"

"Best Practices in Choosing, Implementing and Using MDM and EMM"

"Magic Quadrant for Enterprise Mobility Management Suites"

"Critical Capabilities for Enterprise Mobility Management Suites"

Climbing the Slope

ITIL
Analysis By: Ian Head; Simon Mingay

Definition: ITIL is an IT service management framework owned by Axelos, a joint venture
between the U.K. government and Capita. ITIL is structured as five core books to cover the full
service life cycle: service strategy, service design, service transition, service operation and continual
service improvement. Specific implementation guidance is not provided; the goal of the framework
is to offer a set of good practices that an organization should adapt to achieve its business
objectives.

Position and Adoption Speed Justification: ITIL has been evolving for more than 20 years; the
current release is ITIL 2011, which focuses on integration among processes via information exchanges. ITIL
was once considered by many practitioners as the only essential source of guidance, and, thus, the
de facto service management standard. Today, it is clear that it is unwise to try to implement ITIL in
its totality, and also that ITIL should not be a professional's sole source of advice. For example,
there is inadequate reference to agile philosophies and associated high-velocity release practices.

Especially for Mode 1 practices, ITIL has the highest adoption rate of the related frameworks used
within IT operations, including COBIT, the Software Engineering Institute's Capability Maturity Model
Integration (CMMI) and Microsoft Operations Framework (MOF).

ITIL approaches are one way to meet the requirements of audits against ISO/IEC 20000, the formal
service management standard, since the two share many concepts and principles. However, the
alignment is not perfect, with differences reflecting the different origins of the two bodies of work.

Service transition and service operation are the most commonly used ITIL books and could arguably
justify a position higher on the Plateau of Productivity. By contrast, both the service strategy and
continual service improvement books have not gained momentum since the 2011 rewrite, and so
could be placed much earlier in the Hype Cycle. This unbalanced adoption is the reason penetration
is shown as 20% to 50%.


While more than 50% have used ITIL advice, our survey data suggests that most organizations have
stalled on their adoption journey for a variety of reasons, although the most mature organizations
are well on their way and pursuing continual improvement vigorously. Successful I&O leaders find
that a combination of process guidance from various sources tends to do a better job of addressing
requirements than any one framework in isolation.

User Advice: Leverage ITIL as one source of good practice that may be refined to meet your
specific business goals. Some recent key developments, such as Pace-Layered Application
Strategies (see Gartner's DevOps, ITSM leap and bimodal IT research), the digital business
revolution and changing service provider landscapes (including cloud), have yet to be reflected in
the core ITIL body of knowledge. Users should look for additional inspiration in sources such as
ISO/IEC 20000, COBIT, bimodal IT, lean, DevOps, agile development and continuous integration, as
well as some of the Axelos white papers.

ITIL helps put IT service management into a strategic context, and provides high-level guidance on
service management processes and other factors in the service life cycle. However, the digital
business revolution means that the future of technology services is radically different from that
foreseen in the last ITIL revision in 2011. To improve services, leaders must first define business-
relevant objectives, and then pragmatically leverage ITIL and other sources in transforming their
processes and organizations.

There is a large pool of ITIL-trained staff, and so ITIL familiarity should be a feature (but not the sole
feature) of your development and recruitment process.

Business Impact: IT organizations that use the ITIL service life cycle guidance will enhance the
achievement of their target business outcomes, especially when used in conjunction with agile-
focused philosophies such as DevOps. IT service management is a critical discipline in this
endeavor, and provided that the objectives are clear, when used in conjunction with other advice,
ITIL will help leaders to raise service maturity, reduce service risks, raise the quality of service
delivery and lower total costs.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Recommended Reading: "I&O Must Combine ITIL and DevOps to Deliver Business Value for
Bimodal IT"

"CIOs Must Move IT From an ITIL Operational View of Services to a Strategic Business Focus"

"Step 1 in Delivering an Agile I&O Culture Is to Know Your Target State"

"Optimize IT Operations Using ITSM, ITIL and DevOps Primer for 2016"

"IT Operations Optimization via ITSM, ITIL and DevOps Key Initiative Overview"


"Maturing the IT Service Portfolio for Strategic Business Impact"

Patch Management
Analysis By: Terrence Cosgrove

Definition: Patch management tools are used predominantly by client, server and cloud
administrators, as well as administrators of other systems and software (for example, databases), to
automate the deployment of OS and application patches. This includes providing patch content and
patch packaging, discovery, targeting and scheduling, deployment, and reporting.
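To make the discovery, targeting and reporting functions concrete, here is a minimal sketch of the kind of compliance report such tools automate. The inventory and patch catalog formats are assumptions invented for this example, not any vendor's schema.

```python
# Illustrative sketch of the discovery/targeting/reporting step a patch
# management tool automates. Catalog and inventory shapes are assumed.

REQUIRED = {                     # patch catalog: platform -> required patch IDs
    "windows": {"KB5001", "KB5002"},
    "rhel": {"RHSA-2016:001"},
}

def missing_patches(inventory):
    """Return {host: sorted missing patch IDs} for non-compliant hosts."""
    report = {}
    for host, info in inventory.items():
        needed = REQUIRED.get(info["platform"], set()) - set(info["installed"])
        if needed:
            report[host] = sorted(needed)
    return report

inventory = {
    "web01": {"platform": "windows", "installed": ["KB5001"]},
    "db01": {"platform": "rhel", "installed": ["RHSA-2016:001"]},
}
# web01 still needs KB5002; db01 is compliant and is omitted from the report.
```

A real tool layers scheduling, maintenance windows and deployment on top of exactly this kind of drift report.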

Position and Adoption Speed Justification: The patch management problem is not new, nor are
the patch management tools. Patching is an operations activity, but occasionally security staff (for
smaller organizations) will patch or use patch tools to assess patch status. Windows patching is
mature in most organizations, and most organizations have tools that provide some level of patch
deployment automation; though Windows 10 patching presents new challenges in dealing with
potential compatibility issues. Third-party desktop application patching is a high-priority area for
organizations, and tools focused on this are getting increasing adoption. Organizations are paying
increased attention to patching Linux servers, databases and public cloud environments, but they
often lack the patching tools and configuration management to successfully address these areas.
Unfortunately, there is not one patch vendor that addresses all systems and software. I&O
organizations can expect to have as many tools as they have system types if they don't try to find a
tool that has some multiplatform support.

User Advice: Robust patching tools are only one part of the patching solution. There are four
additional critical factors that influence patching success:

1. Patching must start with people. Assign individuals to work with operations, security, change
managers and business liaisons, because these stakeholders must understand both the risk of not
patching and the risk of service disruption in order to mitigate them.
2. Effective patch management requires coordination across teams, which must be documented in
a patch management process. A process must be developed that includes appropriate
constituents to assess risk, and to determine ability and timing for deployment.
3. I&O organizations must try to reduce the complexity and diversity of their systems and software,
because they have a direct impact on timing and success of deployments.
4. The ability to test patches before deploying is critical to ensure that the fix or patch doesn't
inadvertently "break" the system or application.

Beyond these four, organizations will need to use patch management tools to build in reliability and
repeatability. While patching is not a new problem, the tools continue to evolve to address broader
platforms and functions. Most I&O organizations start with Windows (desktop). When server
patching is a requirement, adjust technical requirements as well as platform-specific requirements
to ensure that the patch tool can be useful for all administrators. Recognize that application and
database patching capabilities may have to be done with an add-on tool, manually or by the vendor
itself.


Business Impact: Patch management tools provide a mechanism to remediate vulnerabilities that
put business users and their applications at risk. There was a time when patching was thought to be
a Microsoft problem — due mostly to the ubiquity of the Windows platform — but we no longer
have the luxury of a narrow scope for patching. Many vulnerabilities (for example, Heartbleed)
expose an attack surface across many different systems. Likewise, as applications are developed to traverse
different platforms and environments (for example, private cloud, public cloud and the Internet of
Things), the attack surface grows exponentially.
technology, but they are hygienic. Every I&O organization must have a patch management strategy.
Without patch tooling, the patch process cannot scale or remain resilient, especially as environment
complexity increases.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Early mainstream

Sample Vendors: BMC (BladeLogic); Flexera Software; Heat Software; Hewlett Packard Enterprise;
IBM BigFix; Landesk; Microsoft; Red Hat

Recommended Reading: "Know the I&O Automation Tool Categories to Drive Efficiency Across
Your Data Center and Cloud"

"Magic Quadrant for Client Management Tools"

"Critical Capabilities for Client Management Tools"

Server (Life Cycle) Automation


Analysis By: Terrence Cosgrove

Definition: Server life cycle automation tools manage the software configuration life cycle for
physical and virtual servers. The main functions include OS provisioning, application provisioning,
configuration management, patching, inventory and configuration auditing for compliance. Some
vendors offer functionality for the entire life cycle; others focus on certain aspects (e.g.,
configuration management, patch management, configuration auditing) and offer additional but
more limited capabilities for other life cycle management functions.
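The desired-state model underlying these tools can be sketched as follows: declare a target configuration, compute only the actions needed to close the gap, and re-running against a compliant server yields no work (idempotence). The state representation below is a simplified assumption for illustration, not any tool's actual format.

```python
# Minimal sketch of the desired-state model behind server configuration
# tools: declare the target, plan only the delta, re-run safely.

DESIRED = {
    "packages": {"nginx": "1.10.1", "openssl": "1.0.2"},
    "services_running": {"nginx"},
}

def plan(current):
    """Diff current server state against DESIRED; return ordered actions."""
    actions = []
    for pkg, version in DESIRED["packages"].items():
        if current.get("packages", {}).get(pkg) != version:
            actions.append(("install", pkg, version))
    for svc in DESIRED["services_running"]:
        if svc not in current.get("services_running", set()):
            actions.append(("start", svc))
    return actions

drifted = {"packages": {"nginx": "1.9.0"}, "services_running": set()}
compliant = {"packages": dict(DESIRED["packages"]),
             "services_running": {"nginx"}}
# plan(drifted)   -> install nginx 1.10.1, install openssl 1.0.2, start nginx
# plan(compliant) -> [] (idempotent: nothing to do)
```

This is the property that makes such tools safer than scripts: the same run can be applied repeatedly, and only drift generates change.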

Position and Adoption Speed Justification: Increasing IT infrastructure complexity across
physical, virtual and cloud infrastructures continues to add new requirements to server automation
tools. These tools continue to evolve, adding new capabilities and compliance policies. Few
vendors offer a full life cycle solution, but many vendors provide one, two or even three of the life
cycle functions. Parity of support on multiple platforms is also a differentiator.

Obstacles for broader adoption are related to isolated server platform teams, lack of standardization
and immature processes. Most large enterprises have deployed these tools for at least one or two
functions on at least one platform; few have successfully supported multiple functions on multiple
platforms due to organizational misalignment. Cloud automation tools have taken focus away from
these tools with the promise of faster provisioning. Because of the complementary nature of server
life cycle automation tools, there could be a revival of them in two to five years, as cloud adoption
matures and the subsequent need to manage configuration and compliance is a renewed priority.

User Advice: IT organizations must begin with sponsorship at an executive or director level to
ensure that all server teams participate equally in the adoption of these tools to best overcome the
major inhibitors for a successful implementation. Compromises will need to be made. Gartner
recommends looking for tools that focus on one to three functions across multiple platforms (versus
platform-specific tools). This enables better visibility across each function (for example,
compliance). For small organizations, an alternative may be to leverage existing client management
tools to manage Windows servers, but they will find limited Linux and Unix support in those tools. In
addition, for smaller I&O organizations, we recommend taking an incremental approach, which is
more successful (for example, discovery and patch or discovery and configuration).

IT organizations should implement standardization of server stacks prior to implementation
because, while these tools can automate repetitive tasks and drive down costs, value is limited if
each automation is unique. These tools can reduce the overall cost to manage and support
patching, rapid deployments and virtual machine (VM) policy enforcement, as well as provide a
mechanism to monitor and enforce compliance. The criteria should also include the capability to
address the unique requirements of virtual servers as part of a cloud. When IT standards are in
place, server automation tools significantly enhance the audit process by automating manual tasks
for repeatable and accurate configuration change control. The tools help organizations gain
efficiencies by moving from monolithic imaging strategies to a dynamic layered approach to
incremental changes.

Business Impact: The immediate value of server automation is in reducing the cost of labor spent
performing these tasks manually or with scripts. Neither approach is scalable, and both often
become fragile as they are used and extended beyond initial use cases. Server automation tools
enable a consistent and repeatable framework for enforcing standards and increasing system (and
application) availability by reducing the human element. These tools also improve the speed of
modifications to servers and software, and provide a mechanism for enforcing security and
operational policy compliance. Server automation tools are applicable to all industries. Likewise, all
sizes of organizations can benefit from server automation, but the cost and complexity of some of
the tools are prohibitive for small organizations. Adoption has been predominantly with large I&O
organizations, although midsize and small organizations have also seen some success, typically
with solutions that provide two or three functions (not the complete life cycle).

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Ansible; BMC; CFEngine; Chef; Hewlett Packard Enterprise; IBM; Microsoft;
Puppet; SaltStack


Recommended Reading: "Midsize Enterprises Should Use These Considerations to Select Server
Provisioning and Configuration Tools"

"Cool Vendors in DevOps, 2015"

"Cool Vendors in DevOps, 2016"

"Know the I&O Automation Tool Categories to Drive Efficiency Across Your Data Center and Cloud"

bpmPaaS
Analysis By: Michele Cantara

Definition: bpmPaaS refers to a basic BPM platform, a business process management suite
(BPMS) or intelligent BPMS (iBPMS) delivered via platform as a service (PaaS). IT developers or
citizen developers use basic BPM platforms to develop and compose "code-free" applications to
automate work. Business outcome owners and IT use BPMSs to accelerate process change and
improve business outcomes. Business transformation leaders and business outcome owners use
iBPMS to radically reinvent how the business operates with its value chain partners.

Position and Adoption Speed Justification: bpmPaaS has rapidly progressed from a post-Peak of
Inflated Expectations position to a pre-plateau position for several reasons. Reduction in time-to-
business-outcome benefits from bpmPaaS (versus on-premises business process management
[BPM] platforms) now outweighs concerns about cloud security, privacy and risk. Accordingly,
bpmPaaS has become the mainstream delivery model for new BPM implementations. Ten vendors
accounted for a 66% share of the BPMS market in 2015, and nine of these 10 vendors offer
bpmPaaS. New entrants to the BPM platform market, such as Effektif, offer bpmPaaS, but don't
offer corresponding on-premises software. Lastly, maturity of the offerings is high, and the adoption
of bpmPaaS is consistent and growing.

User Advice: CIOs and business outcome owners who are involved in business transformation
initiatives should use bpmPaaS as the delivery model for BPM platforms when they need to
accelerate time-to-outcome in the following scenarios:

■ Rapid development of net-new, flexible applications for systems of differentiation and
innovation, or to extend systems of record
■ Use of iBPMS capabilities to make business processes more intelligent and to optimize
outcomes for each business moment
■ Significant transformation and digitalization of business processes to deliver a highly
differentiated customer experience, or to quickly implement the new business models
necessary for digital business
■ Coordination of business outcomes spanning disparate applications, SaaS, business process
as a service (BPaaS) and individual human tasks with the operations of an enterprise and its
value chain partners


■ Trialing different business process options for different business moments, partners or
customers

Solution architects should employ bpmPaaS for the following use cases:

■ In pilot projects to build a business case for on-premises BPM solutions
■ In development and test environments to avoid additional capital expenditures on software and
hardware
■ As a mechanism for code-free development of process-centric applications
■ To help application leaders reduce IT backlog by providing citizen developers with an easy-to-
use, opportunistic application development platform for processes largely made up of human
tasks, as well as social and collaborative interactions

Business Impact: bpmPaaS makes BPM platforms widely available and cost-effective to buyers
who need to scale knowledge-intensive processes and manage the unprecedented process
variability triggered by business moments. Gartner views business processes as the coordination of
behavior and interactions of people, systems and "things" to produce specific business outcomes
that support execution of the business strategy. The ability of a bpmPaaS to produce business
benefits from the coordination of behavior and interactions depends on which type of BPM platform
is used. For example, a bpmPaaS with iBPMS capabilities is a proven path for organizations that
need to radically reinvent their business processes so they can handle deliberately unstable
business processes (see "What Does It Mean to Digitalize Work?").

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: AgilePoint; Appian; BP Logix; IBM; MatsSoft; Newgen Software; Openwork;
Pegasystems; WebRatio; XMPro

Recommended Reading: "Platform as a Service: Definition, Taxonomy and Vendor Landscape, 2016"

"Select the Right Type of BPM Platform to Achieve Your Application Development, Business
Transformation or Digital Business Goals"

"Market Guide to Business Process Management Platforms"

"Magic Quadrant for Intelligent Business Process Management Suites"

"Be Wary If Buying an ITSSM Tool to Use Beyond ITSM 2.0"

Appendixes


Figure 4. Hype Cycle for I&O Automation, 2015

[Figure: Hype Cycle chart plotting expectations against time through the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases. Technologies positioned include: DevOps; Continuous Configuration Automation; IT Service Orchestration; Container Management; Cloud Management Platforms; Virtual Network Configuration Automation; Application Release Automation; IT Workload Automation; Cloud Migration Tools; COBIT; IT Process Automation Tools; BPM Platform; Enterprise Mobility Management Suites; IT Operations Analytics; Configuration Auditing; Network Configuration and Change Management Tools; Server (Life Cycle) Automation; Patch Management; Continuous Delivery; ITIL; Heuristic Automation; Network Configuration Automation. Markers indicate time to plateau: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau. As of July 2015.]

Source: Gartner (July 2015)


Hype Cycle Phases, Benefit Ratings and Maturity Levels


Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2016)

Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2016)


Table 3. Maturity Levels

Embryonic
Status: In labs.
Products/Vendors: None.

Emerging
Status: Commercialization by vendors; pilots and deployments by industry leaders.
Products/Vendors: First generation; high price; much customization.

Adolescent
Status: Maturing technology capabilities and process understanding; uptake beyond early adopters.
Products/Vendors: Second generation; less customization.

Early mainstream
Status: Proven technology; vendors, technology and adoption rapidly evolving.
Products/Vendors: Third generation; more out of box; methodologies.

Mature mainstream
Status: Robust technology; not much evolution in vendors or technology.
Products/Vendors: Several dominant vendors.

Legacy
Status: Not appropriate for new developments; cost of migration constrains replacement.
Products/Vendors: Maintenance revenue focus.

Obsolete
Status: Rarely used.
Products/Vendors: Used/resale market only.
Source: Gartner (July 2016)

Gartner Recommended Reading


Some documents may not be available as part of your current Gartner subscription.

"Understanding Gartner's Hype Cycles"

"Know the I&O Automation Tool Categories to Drive Efficiency Across Your Data Center and Cloud"

"Survey Analysis: The Realities, Opportunities and Challenges of I&O Automation"

"Six Steps to Move IT Process Automation From Basics to Best Practices"

"The Changing ITOM Vendor Landscape Demands an Extensive Analysis Beyond Product
Capabilities"

"Pick the Right Orchestration Technology to Power Your Cloud Initiative"


"Minimize Outage Exposure and Risk With Network Automation Tools"

"Choose IT Operations Management Tools Based on Your Requirements"

"Digital Business Initiatives Demand the Use of IT Operations Analytics to Spark Transformation"

"Consider Heuristics the Future of Smart I&O Automation"

"Managing PCs, Smartphones and Tablets and the Future Ahead"

"Exploring Cloud Management Trends and the Actions to Take"


GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2016 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This
publication may not be reproduced or distributed in any form without Gartner’s prior written permission. If you are authorized to access
this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained
in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy,
completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This
publication consists of the opinions of Gartner’s research organization and should not be construed as statements of fact. The opinions
expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues,
Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company,
and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of
Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization
without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner
research, see “Guiding Principles on Independence and Objectivity.”
