
15 Top Factors that Impact Application Performance

What is it that drives the need for Application Performance Management (APM)? What are the
main factors that can negatively impact application performance? What should you be looking
out for? That is what this new APMdigest list reveals.
Many of the APM industry's top experts — from analysts and consultants to users and the top
vendors — offer their perspective on the root causes of application performance problems.
These factors are not listed in order of importance. Some of the categories overlap. Some of the
categories could actually be considered subsets of the other categories. Some of the quotes
could fit into multiple categories. But the bottom line is that the list accomplishes the goal: to
provide a broad picture of the many factors out there impacting application performance.
On this list you will see a wide range of impacts from the applications themselves, to the
environment and the network, to the people behind those applications. What this list really
shows is that all of these factors must be considered when managing application performance.
I think Julie Craig, Research Director, Application Management, Enterprise Management
Associates (EMA) said it best in her response: "When trying to pin down the top factors
impacting application performance, the right answer is that there is no right answer ... the
source of a performance problem could be almost anywhere!"
"Almost anything that touches an application either improves or degrades performance," she
adds. "Determining whether that is infrastructure, code, data, the network, the application
architecture, the endpoint, or another application is the name of the game — and the big
reason why APM solutions are so valuable."
The following are the industry's top 15 factors that impact application performance:
Today’s enterprise applications are increasingly a large collective of distributed software
components and cloud services that enable complex business services. With so many moving
parts, something is always liable to impact performance, even with a resilient architecture.
Complexity – and the fact that all these components are monitored in different silos – makes it
hard to manage a business service or application as a whole, which further degrades
performance. But that is the reality of today’s enterprise application and monitoring
architectures.
Application complexity is one of the biggest factors impacting application performance.
Today’s applications and services, particularly those delivered via the Web, are a mosaic of
components sourced from multiple places: data center, cloud, third-party, et al. While the
customer or employee looking at a browser window sees a single application, multiple moving
parts must execute in the expected manner to deliver a great end-user experience. Maybe the
Web server and app server are running fine, but if the database is faltering, user experience will
suffer. Being able to measure and keep tabs on all those moving parts is the challenge and
requires an APM tool that can provide a view into the performance of all the parts, not just
individual components. As the saying goes, "The more moving parts, the more that can go
wrong."

One of the biggest factors that impacts application performance is design. Performance must
be designed in. When applications are specified, performance goals need to be delineated along
with the details of the environment the applications will run in. Often development is left out of
this and applications are monitored, analyzed and "fixed" after they are released into
production. This never works as well as when performance is one of the key goals of the
application design before a line of code is written.

One of the biggest impacts to application performance is caused by companies
outsourcing/subcontracting their application development outside of their quality control
domain. Application quality and performance need to be built into the application platform and
cannot be an afterthought or something that "we’ll fix later". The subpar app performance that
is accepted in the development phase is bound to manifest itself in the production stage.
Modern APM solutions capture this poor performance, but can’t provide the cure. The only way
to prevent poor app performance is to expose your app development to rigorous quality
controls and processes early in the application lifecycle.

From my perspective the biggest factor affecting application performance today is poorly
optimized code and infrastructure, such as suboptimal SQL queries, poorly configured network
infrastructure, or inefficient code algorithms at the application layer. All of these problems can
be difficult to isolate, and the emphasis on DevOps processes can cause these issues to
multiply quickly by increasing the rate of change in the data center. Because of this it is
important to adequately tool the data center to monitor and report on all aspects of a deployed
application using code level instrumentation, EURT and network performance tools, and
traditional IT infrastructure monitoring solutions.
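
As a toy illustration of the first of those problems, the classic N+1 query anti-pattern issues one database round trip per row instead of a single join. This sketch uses an in-memory SQLite database with hypothetical table and column names, purely for demonstration:

```python
import sqlite3

# Hypothetical schema for illustration; real table names will differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE items (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO items VALUES (1, 'A-1'), (1, 'A-2'), (2, 'B-1');
""")

def items_per_order_slow(conn):
    # Anti-pattern: one query per order row (N+1 round trips).
    result = {}
    for (order_id,) in conn.execute("SELECT id FROM orders"):
        rows = conn.execute(
            "SELECT sku FROM items WHERE order_id = ?", (order_id,)
        ).fetchall()
        result[order_id] = [sku for (sku,) in rows]
    return result

def items_per_order_fast(conn):
    # Optimized: a single JOIN, letting the database do the work.
    query = (
        "SELECT o.id, i.sku "
        "FROM orders o JOIN items i ON i.order_id = o.id"
    )
    result = {}
    for order_id, sku in conn.execute(query):
        result.setdefault(order_id, []).append(sku)
    return result

assert items_per_order_slow(conn) == items_per_order_fast(conn)
```

Both functions return the same data, but the slow variant's round-trip count grows linearly with the number of orders, which is exactly the kind of issue that only shows up under production data volumes.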

Today's applications are often developed in simulation labs without testing performance on
real-world networks. Before applications are deployed, transport across today's highly
distributed network architectures should be monitored and optimized.

Insufficient testing of the application in the actual production environment and under varying
conditions impacts performance. Tied to that is the need for developers and testers to have a
clear understanding of the non-functional performance criteria.

Agile release cycles — the reality is that less than 5% of developers performance test their code
before it is pushed to production. The "make it work" over "make it perform" mantra is one of
the biggest factors that impacts application performance today. Most organizations don't have
the time, resources or budget to replicate production environments in test for every agile
release, which is why a growing number of customers have started to test in production outside
of working hours. When you consider that the codebase of an application changes several times
per month, you can begin to understand why performance anti-patterns and bottlenecks make
their way into production.

It’s the "Butterfly Effect" in IT, which theoretically describes a hurricane's formation being
contingent on whether or not a distant butterfly had flapped its wings weeks before. Sensitive
dependence on environmental conditions where a small change at one place (Dev env.) can
result in large differences to a later state (Production). It’s possible that a small innocuous code
change could go undetected, being promoted through each Dev/QA environment, and then
have catastrophic effects on performance once it reaches production. Environmental variances
need to be minimized and closely monitored to prevent such anomalous events. I'm
suggesting that it is not necessarily the number of features or technical stamina of each
monitoring tool to process large volumes of data that will make an APM implementation
successful — it's the choices you make in how you put them together to support the multiple
environments within IT.

Application performance is impacted by the componentry used to deliver the service to the
user, the user’s interfaces with the application, and the connectivity between these
components. The variance and complexity are what make the problem hard to solve, and often
cause approaches to fail on given architectures.
Applications are distributed by nature, and unless the underlying infrastructure is responsive
across all the different components of the application service, the entire application service is
affected.

Without a doubt, third-party web components are among the biggest factors impacting web
application performance today. To deliver the functions and features online visitors expect,
websites and web applications are actually a composite of your own resources plus numerous
third-party web components. These include content delivery networks (CDNs), site search
functions, shopping cart and payment processing functions, ad networks, multiple social
network connections, ratings and reviews for gathering feedback and web analytics. Today, the
average website includes components from eight or more different hosts, and a slowdown for
any one service can degrade performance for an entire website or web application. If anything
goes wrong (and inevitably it will), only one party will get the blame: you, as the primary
website owner. Organizations leveraging third-party web components must adopt an end-user
focused approach to APM, in order to better identify and fix performance problems associated
with third-party services beyond one’s own firewall.
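
One practical way to keep tabs on all those moving parts is to time each component independently, so the slow or failing host stands out from the rest of the page. A minimal sketch, where the component names are invented and simple sleeps stand in for real resource fetches:

```python
import time

def time_components(fetchers):
    # Time each component fetch separately so a slow or failing
    # third-party host stands out; 'fetchers' maps a component name
    # to a zero-argument callable that loads it.
    timings = {}
    for name, fetch in fetchers.items():
        start = time.perf_counter()
        try:
            fetch()
            status = "ok"
        except Exception as exc:
            status = f"error: {exc}"
        timings[name] = (time.perf_counter() - start, status)
    return timings

def broken_ratings():
    # Simulates a third-party service that never responds.
    raise TimeoutError("no response")

# Simulated loads; real code would fetch each host's actual resources.
report = time_components({
    "cdn": lambda: time.sleep(0.01),
    "ad-network": lambda: time.sleep(0.05),
    "ratings": broken_ratings,
})
```

A per-component report like this makes it obvious which external dependency to chase when the composite page slows down, even though the end user only ever sees one application.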

One of the most critical factors that affect application performance, and often the hardest to
identify and track, is application dependencies – on supporting applications, as well as the
underlying system and network components that connect them all together. With the advent of
virtualized servers and networks, the complexity of the application delivery infrastructure has
increased significantly, and so the challenge is finding an application performance monitoring
solution that can automatically discover and monitor the network and server topologies for the
entire application service.

Today's distributed applications — particularly for large organizations — can have thousands of
individual connections stretching across many tiers and even reaching outside services. We've
moved beyond simple 3-tiered web applications into complex distributed applications (made
up of load-balanced web and application servers, multiple layers of middleware and databases,
storage arrays, mainframe transactions, and even outside services). In this world, problems are
no longer concentrated in application code — instead they are randomly distributed throughout
the application infrastructure. Just this week, we've seen LDAP, anti-virus, database, firewall,
and DNS misconfiguration all create application problems; and that's just the tip of the iceberg.

As applications tie together more and more disparate services, both internal and external, they
become exposed to new opportunities for failure. These interactions are the single biggest
source of performance and availability problems for applications. Not only does a bad link in
the chain — say, an unresponsive external API — generally mean a key part of the app is
unavailable, but in fact, most systems are architected in such a way that timeouts cascade to
bring down the entire environment. A failed request means a bad experience for a single user; a
stalled request ties up resources in services shared by many users, which in the worst case
means total system failure.

Network latency and bandwidth are king for any application that isn't local (remote workforce,
customer facing website, web applications, etc.). Monitoring network bandwidth and web
application performance from multiple locations helps isolate the problem to the network tier.

The network on which the application is used impacts performance tremendously, especially for
mobile and cloud. Inconsistent bandwidth, high jitter, increased latency and packet loss all
work to degrade application performance. While you might not be able to control mobile or
most cloud networks, you can build and test apps with these network conditions in mind. This
gives organizations the best chance to optimize app performance before the network impacts
are felt by users.
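
Building and testing with those conditions in mind can start as simply as wrapping the client's request function with injected latency, jitter, and loss. A rough sketch, where the default parameters are invented rather than tuned to any real network:

```python
import random
import time

def with_network_conditions(call, latency_s=0.05, jitter_s=0.02,
                            loss_rate=0.1):
    # Wrap a request function with simulated latency, jitter, and packet
    # loss, loosely mimicking a congested mobile link.
    def degraded(*args, **kwargs):
        if random.random() < loss_rate:
            raise TimeoutError("simulated packet loss")
        time.sleep(latency_s + random.uniform(0, jitter_s))
        return call(*args, **kwargs)
    return degraded

# Exercise an app call under the degraded link before real users do.
flaky_fetch = with_network_conditions(lambda: "payload", loss_rate=0.3)
```

Running functional tests through a wrapper like this surfaces retry bugs and sluggish UIs under mobile-grade conditions long before the app ships, even when the real network cannot be controlled.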

Bandwidth bottlenecks are a big problem, as they cause network queues to develop and data to
be lost, impacting the performance of applications. My advice is to keep tabs on the number of
devices, users and new applications utilizing the network.

Applications today are an intricate mesh of multi-tier software running on servers, networks,
and storage. In addition, there is a good chance they are running on virtualized hardware that is
shared with other applications. It is very challenging in this dynamic environment to understand
what will impact your application performance as it requires intimate knowledge of your ever-
changing application structure at any given moment. Many IT organizations are very advanced
on the application side but unfortunately still struggle to move beyond managing applications
via a silo approach to the different technology tiers – application, server, network, storage, etc.
This is why many organizations will experience application performance issues without any
useful tools to help them resolve the problems. Only an application performance
management tool that uses a unified, cross-domain view of the application and its supporting
infrastructure components – with an accurate run-time update of their fast-changing inter-
dependencies – can ensure highly available and optimally performing systems.
Ariel Gordon

Virtualized environments — from the desktop layer to applications and the underlying
infrastructure — are becoming too complex to troubleshoot with traditional silo tools. There
are too many isolated metrics and alerts that don't make much sense and only confuse
administrators. The
next generation of APM requires awareness of performance across all virtual and physical
domains from the desktop to the datacenter and cloud, presented in an intuitive dashboard.
Equally important are capabilities to proactively alert admins before users call and complain
about slow apps. And not only alert about general issues but with smart auto-diagnosis that
points right to the root cause so that admins can quickly restore performance levels without
spending days troubleshooting.

The modern application is complex and a single transaction trace can sprawl across many
layers in a virtualized, cloud world – a perfect storm impacting application performance. This
growing complexity impacts application performance from the end user experience all the way
back through transactions, the application layer, application infrastructure, and IT operations.

The rapid rise of communications and content via the Cloud among increasingly dispersed
employees seeking to better serve customers and collaborate with their peers and partners will
have the greatest impact on application performance.

One of the biggest factors we see is the acceleration of mobility and IT consumerization, which
will propel the ongoing shift in application architectures required to deliver the most dynamic,
modern mobile end user interfaces.

Mobile usage numbers are soaring, which is certainly having an impact on application
performance. Users have multiplied as they engage more often using a variety of devices.

TRAC’s 2013 APM Spectrum shows that the Web browser is the key blind spot for gaining true
end-to-end visibility into application performance. With new approaches to application design
and the increased usage of Web Services, the ability to monitor the processing that takes place
within the browser has become one of the key requirements for full visibility into application
performance.

Application performance has been impacted severely by what is now known as the "chronic
change and configuration management challenges" – a big data problem with lack of actionable
insight into changes in the IT environment. A typical application includes hundreds of
thousands of different configuration parameters and is subject to numerous changes. The
volume, velocity, and variety of configuration changes overwhelm IT operations. This is
especially true when considering that any minute misconfiguration can cause a high-impact
application performance and availability incident.
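
Getting actionable insight into that change stream starts with diffing configuration snapshots, so the one altered parameter is visible among thousands. A bare-bones sketch, where flat dicts with made-up parameter names stand in for real config stores:

```python
def diff_config(baseline, current):
    # Report added, removed, and changed parameters between two snapshots.
    # Flat dicts stand in for real config stores (files, registries, etc.).
    added = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    changed = {
        k: (baseline[k], current[k])
        for k in baseline.keys() & current.keys()
        if baseline[k] != current[k]
    }
    return added, removed, changed
```

Run against a known-good baseline after every deployment window, a diff like this turns "something changed somewhere" into a short, reviewable list of candidate causes.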

One of the most important factors that affect application performance is poor understanding of
how the application will be used (i.e. how many people will simultaneously use it and for what
kind of transactions), and the corresponding application architecture and its scaling
assumptions that go into its design and deployment. This lack of understanding of real user
transactions and performance manifests itself as bottlenecks during the most critical peak
usage periods.

One of the biggest factors impacting application performance is that today’s applications are
essentially like single-speed bicycles, unable to shift for different terrain. By enabling
clients’ applications with performance "gears," our industry can have a much more positive
impact on application and business performance. Just like today’s bicycles can shift gears
depending on
the challenge, applications should be able to rev up during peak user spikes or to deliver a
premium user experience – maybe even by target customer or transaction types. They should
be able to downshift to conserve resources when demand is low. To make this functionality
work, APM technology should, at a minimum, provide a governor or feedback loop to the app
regarding end user experience and internal app/infra operations.

My vote goes to "a failure to communicate." Not to steal from Cool Hand
Luke. There are many good technologies out there, but as APM has evolved to become more
than an introverted, single-domain discipline, sharing information effectively will require an
investment in dialog. This will include next-step process awareness, so that key stakeholders
are identified and know who each other are, and clear avenues for optimizing their collective
insights. But it also requires, in many organizations, a cultural and often a political shift to
promote a willingness to step beyond traditional boundaries and ways of working. As it
matures, social media should also help. But no single technology will count more to promote
effective APM than a revitalized and intelligent willingness to communicate across roles.
Without it, most technology investments are wasted, or at least poorly optimized.

Today's application environments have become so complex that there is no single individual in
the IT organization that understands all that is required to effectively deliver that application at
the performance level the business expects. Crowd-sourcing and peer review of human
knowledge combined with existing systems-based knowledge is the only way to fill in the gaps
and ensure the organization can benefit from the knowledge everyone collectively possesses to
ensure the needed quality of service.

In my view, the single biggest factor that impacts application performance is people. Once you
correctly align your resource aptitude, skill sets, and knowledge to your application portfolio
and equip them with the tools and knowledge to be successful, applications and environments
flourish. In the fitness world, they say diet is everything. Why not apply that same rigor to your
organization? Satisfy your company’s appetite for high availability, business continuity,
rationalized portfolios, predictive environments and analytics through the targeted and
thoughtful process of feeding your best and brightest with knowledge and support! In return,
you’ll gain all that you require to mold yourself into a lean and mean organization, able to meet
the toughest challenges.
Clark Cunningham

The Unknown Unknowns include unanticipated application behavior (e.g. "We never expected
80% of our users to be using mobile devices!"), unanticipated load (e.g. "Who would have
guessed we'd get a traffic spike of 600% during the summer?") and unanticipated user behavior
(e.g. "We didn't expect users to keep hitting that button.") Application architects plan for the
known-knowns and QA checks for the known-unknowns. But APM is critical for the unknown-
unknowns that eventually impact application performance.
Russell Rothstein

A lack of visibility impacts application performance. Given today's complex and dynamic
operational environments, traditional management tools are unable to provide a complete
picture of application health, availability and risk. As more organizations move toward hybrid
and converged compute environments, it becomes challenging to provide complete visibility
across internal and external resources. Organizations need management tools that provide a
single pane of glass view across all IT assets and workloads, regardless of where they reside, to
ensure critical business applications are always available and running at peak performance.
Julia Lim

Lack of proper end-to-end monitoring can prevent IT from comprehending the health and
capacity situation of critical applications. Eventually, these applications can reach a point
where they are no longer stable, leading to degradation in performance or even downtime.
Arun Balachandran