
Sourcing and integrating managed services

Selecting the best application platforms for mid-market businesses


June 2012

Reliable software applications are as critical for mid-market businesses as they are for enterprises. Choosing the right platform is the key to ensuring that applications are available, scalable, cost effective, compliant and secure. With the growing variety of physical, virtual and cloud-based platforms to choose from for application deployment, many will value the advice of third party experts as they plan the evolution of legacy applications or deploy new ones. This paper looks at the issues that mid-market IT and business managers need to take into account when deciding how to deploy applications, and when they should consider turning to managed service providers (MSPs) for resources and advice. The document should be of interest to those who are focused on delivering their organisation's core value proposition, whilst also considering how this is best underpinned by IT.

Bob Tarzey Quocirca Ltd Tel : +44 7900 275517 Email: bob.tarzey@quocirca.com

Clive Longbottom Quocirca Ltd Tel: +44 771 1719 505 Email: clive.longbottom@quocirca.com

Copyright Quocirca 2012


Selecting the best application platforms for mid-market businesses
Most business processes are underpinned by software applications; if the application fails the organisation suffers. Although there are plenty of choices for ensuring applications have the necessary resources to run reliably, the considerations and choices are increasingly complex. Consequently, many mid-market organisations are turning to managed service providers (MSP) with integration skills for advice and resources.

Business should focus on core value

Reliable software applications are essential to the smooth running of any business but, for most organisations, IT is not a core skill. Selecting the right platform and location for running a given application, and ensuring future flexibility, involves a number of increasingly complex choices. Many will be better off turning to partners with integration skills for advice and resources, especially mid-market organisations that will not have the required expertise in-house.

Physical, virtual or cloud?

In the past it was common to run individual applications on dedicated physical servers; in some cases this may still make sense. However, much of the time it makes sense to share physical resources between applications by virtualising servers or creating private clouds. Public clouds allow even greater economies to be achieved through sharing infrastructure with other organisations.

Location, location, location

By definition, public clouds are hosted in third party data centres. However, dedicated infrastructure can be too, either by co-locating existing equipment or making use of managed hosting services. Most mid-market organisations will find that the resilience and efficiency of enterprise-class third party data centres far exceeds that of in-house facilities.

Flexible application deployment

Ensuring a given application workload can be moved from one type of platform to another is essential to achieving many of the goals of reliable application deployment. This is the best way to ensure resilience, scalability and future proofing.

Application deployment check list

There is a long list of considerations for ensuring optimal deployment that will vary from one application to another. These include how the application should be structured and its workloads broken down, what resources it needs, where it is in its life cycle and how commercial components are licensed. Considerations will also vary between in-house developed and commercially acquired applications.

Security and compliance

Two key considerations are security and compliance. Options for using certain cloud platforms may be restricted for regulated data. That said, the perception that cloud platforms are inherently insecure is slowly being overcome. Businesses are coming to realise that the reputation of cloud service providers is dependent on delivering higher levels of security than many IT departments achieve in-house.

Advice and resources for deployment

The choices are complex and most mid-market organisations will not have all the skills required in-house, which is why many turn to MSPs for advice and resources. Integrator-MSPs are one-stop shops offering end-to-end application deployment services and advice, whilst specialist MSPs offer specific services, for example managed hosting. For an all-round service the first call should be on the former, who will engage with and integrate the latter as required.

Conclusions
All businesses are now reliant on IT to a greater or lesser extent and mid-market businesses, in particular, are unlikely to have the full range of in-house skills needed to ensure the performance, scalability, security and compliance of their applications. Forward-thinking mid-market organisations are turning to integrator-MSPs and benefiting from their business processes being supported by reliable applications running on stable and flexible platforms.


Introduction: the focus on core value


Successful businesses have a clear idea of their core value proposition, often expressed through a mission statement. For most, staying focussed on this is what helps achieve other goals such as profitability, growth and delivering stakeholder value. The decisions managers make must be focussed on delivering that proposition. Only software companies are likely to say that their value comes from delivering high performance, reliable and secure applications. However, most other businesses now rely on software applications to ensure they can deliver their core value proposition effectively and Quocirca research shows that ensuring application performance is a top priority for IT managers (Figure 1). In short, information technology (IT) now lies at the heart of most businesses.

However, this is a metaphorical heart, no longer necessarily a physical one. Software applications may be essential to supporting a given business but, increasingly, the same business does not need to be expert in IT to achieve this; it does not even need to run the necessary systems on its own premises. Find the right third party to work with and the running of all or part of a given organisation's IT requirements can be entrusted to a partner who sees ensuring high performance, scalable, reliable, compliant and secure applications as its core value proposition.

Today there is huge flexibility in the choices that can be made because of the global network connectivity that has been put in place over the last 20 years. For mid-market organisations this is a double-edged sword; they can more effectively compete with larger organisations without having to build up internal IT expertise, but entrust the task to the wrong partner and the intended goal of delivering more reliable applications may not be achieved and the business could be derailed.

This paper looks at the issues mid-market businesses (500 to 5,000 employees) must consider when working out how and where to run the various applications that they rely on. It looks at whether they should keep old applications in-house and what the options are for deploying new ones. It also looks at the types of managed service providers (MSPs) available and the benefits to be expected when partnering with one. The document should be of interest to mid-market business and IT managers who are focused on delivering their core value proposition, but are also likely to have to regularly stop and consider how this is best underpinned by IT.



Platform options: physical, virtual or cloud


The 1980s saw IT arrive in the mid-market. The advent of mini computers meant that it was no longer restricted to organisations large enough to afford mainframe computing. All sorts of applications started to become available, usually running on individual dedicated servers in newly provisioned machine rooms, or sometimes in the manager's office or just under a desk somewhere. Whilst this often boosted productivity, there was, of course, a downside. First, a casually run application with no redundancy may be a benefit for a while, but when some part of the infrastructure that supported the application failed, the processes that now relied on it would collapse too. Second, IT could also get expensive; running every application on its own server, sometimes with no centralised controls or procurement, meant some applications were costing businesses more than they were saving through efficiency gains.

Slowly, this has been changing as mid-market businesses have come to realise the choices available to them. Many still have legacy applications running on dedicated servers but, equally, many are moving those applications to better provisioned, more efficient virtualised or cloud-based platforms (Figure 2). Mostly this is good news when it comes to reliability, performance and cost control. However, there is a danger that things can come full circle and, without controls, businesses could find some of the problems of servers under desks returning in the guise of cloud services subscribed to by lines-of-business whilst overlooking certain key considerations.

Before looking at those considerations, it is worth reviewing what the current platform choices are, where these platforms are best deployed and what flexibility should be sought for a given application workload. A workload is a computing task that requires access to a mix of resources, including processing power, storage, disk input/output (i/o) and network bandwidth. A workload must be deployed with an understanding of the security and compliance needs associated with it. The right platform, or mix of platforms, on which an application should be run is the one that best supports its workloads at a given time. A platform should not be selected just because it is being touted as the next big thing, but because it suits the application's needs.
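This definition of a workload as a mix of resources can be captured directly as a simple profile, which then becomes the basis for comparing platforms. The following is a minimal, illustrative sketch; the field names and figures are assumptions, not taken from the paper.

```python
# A workload described as the mix of resources it needs, mirroring the
# definition above. Field names and figures are illustrative only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_cores: float        # processing power
    storage_gb: int         # storage capacity
    disk_iops: int          # disk input/output
    bandwidth_mbps: int     # network bandwidth
    regulated_data: bool    # drives security/compliance constraints

payroll = Workload("payroll-batch", cpu_cores=4, storage_gb=200,
                   disk_iops=500, bandwidth_mbps=10, regulated_data=True)
# The "right" platform is whichever best satisfies this profile at a
# given time - not whatever is currently the next big thing.
```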

The physical platform


Whilst there is constant talk about the value of virtualisation (see below), ultimately, software must meet silicon at some point: all IT is still underpinned by physical hardware. The reason for highlighting this as a platform choice is that, in some circumstances, having an application running on dedicated hardware with a minimal layer of software between the two (middleware) is still the best choice. Certain network and security appliances are obvious examples, where high throughput is achieved using stripped-down firmware on highly tuned hardware. However, it also applies to certain applications which perform best on dedicated hardware, such as high performance databases and data warehouses. That said, even when this is the case, ensuring there is a backup for such applications when something goes wrong may no longer need a full secondary physical stack.



It may also make sense to leave an existing legacy application running on a dedicated physical server, at least for a period of time; it may be better to wait for a longer-term replacement than to upgrade to an inferior technology on a new platform today. Furthermore, the existing hardware that it runs on may have an intrinsic value that makes its continued use more cost-effective than paying for something new.

Virtual platforms
Quocirca research has consistently shown that hardware servers are underutilised. This has been the driving force behind the explosion in the last 10 years of virtualisation software for utility x86-based servers. This has seen the emergence of virtualisation platforms, including VMware vSphere, Microsoft Hyper-V, Citrix XenServer and Red Hat Enterprise Virtualization (RHEV). These are all hypervisors that provide a virtual link between hardware resources and infrastructure software. There is nothing new about the idea of virtualisation - it was at the heart of mainframe operating systems - but what is new is applying it to all sizes of computers (even PCs). Virtualisation is not just about better utilisation, it is about flexibility of deployment; for example, two applications, one running on Linux and one running on Microsoft Windows, can share the same physical hardware whereas before they would have needed their own dedicated servers. Tools supplied with virtualisation software make it relatively easy to take instances of applications running on dedicated physical servers and move them, almost seamlessly, to virtual environments. Some still perceive virtual platforms as only being suitable for non-critical workloads, but the truth is that a growing number of businesses now run all their applications in a virtualised environment as whole data centres are turned into virtual private clouds.

Private cloud
Scaling up virtualisation itself, so that many physical servers appear as a single large virtual server, is, in effect, what a computing cloud is. When this is run exclusively for the use of a single organisation then it can be termed a private cloud. Applications are deployed to the private cloud with no need to assign resources, as the cloud will assign them as needed, in so far as they are available. Of course, the availability of these resources is still limited by the underlying physical platform and this will dictate how the cloud is constructed and run and the sort of applications that can be deployed to it. Existing in-house data centres may be transformed into private clouds or cloud service providers can provision them for their customers in industrial-scale data centres, often alongside their own public cloud offerings.
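The idea that a private cloud assigns resources as needed, within the limits of the underlying hardware, can be illustrated with a simple first-fit placement routine. This is a minimal sketch with invented host names and capacities; real cloud schedulers are far more sophisticated than this.

```python
# Minimal sketch of first-fit workload placement across a pool of hosts,
# illustrating how a private cloud assigns resources "in so far as they
# are available". Hypothetical example, not any vendor's scheduler.

hosts = [
    {"name": "host-a", "free_cpu": 8, "free_ram_gb": 32},
    {"name": "host-b", "free_cpu": 2, "free_ram_gb": 8},
]

def place(workload):
    """Return the first host with enough spare capacity, or None."""
    for host in hosts:
        if (host["free_cpu"] >= workload["cpu"]
                and host["free_ram_gb"] >= workload["ram_gb"]):
            host["free_cpu"] -= workload["cpu"]
            host["free_ram_gb"] -= workload["ram_gb"]
            return host["name"]
    return None  # cloud at capacity: the physical limit still applies

print(place({"cpu": 4, "ram_gb": 16}))  # -> host-a
print(place({"cpu": 6, "ram_gb": 24}))  # -> None once capacity is used up
```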

Public cloud
Take the cloud concept and use it to build large-scale platforms with hundreds, thousands or tens of thousands of servers that can be shared by multiple organisations and you have a multi-tenant public cloud. There are two flavours: infrastructure-as-a-service (IaaS), where the deployment is to a hypervisor (i.e. an application is deployed with the operating system it requires to run), and platform-as-a-service (PaaS), where the public cloud is like a huge shared Windows or Linux platform to which applications can be deployed directly. PaaS is generally quite proprietary, for example Microsoft Azure (based on Windows), Force.com (from salesforce.com) or Google App Engine. With IaaS, deployment is to the hypervisor and this is the way most cloud service providers are building out public clouds. Examples include Amazon's EC2, which is based on Xen (an open source project that is also used by Citrix), and Rackspace Cloud Servers (based on OpenStack, an open source data centre operating environment that supports various hypervisors).
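The practical difference between the two flavours can be sketched in a few lines. Both client classes below are stubs invented purely to contrast the models; real providers (Amazon EC2, Microsoft Azure and so on) each have their own APIs.

```python
# Hypothetical sketch contrasting IaaS and PaaS deployment models.
# The two client classes are stubs invented for illustration.

class IaasClient:
    def create_server(self, image, size):
        # IaaS: you pick the OS image and machine size and get back a
        # virtual machine; deployment is "to the hypervisor" and you
        # manage the operating system yourself.
        print(f"provisioned VM: image={image}, size={size}")
        return self

    def install_and_run(self, app):
        print(f"installed and started {app} on the VM")

class PaasClient:
    def deploy(self, app_code, runtime):
        # PaaS: the platform supplies and manages the OS and runtime;
        # application code is deployed directly to the shared platform.
        print(f"deployed {app_code} to managed {runtime} runtime")

IaasClient().create_server(image="ubuntu-12.04", size="2cpu-4gb") \
            .install_and_run("my_application")
PaasClient().deploy(app_code="./my_application", runtime="python")
```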

Software-as-a-service (SaaS)
Whilst it is not strictly speaking a platform, SaaS is mentioned here as it is an option that should be considered when thinking about deploying new applications. A SaaS application is one that is provided ready to go over the internet as a service. Just as with public cloud, the infrastructure is shared, but there is no need to even install application software, although varying degrees of tailoring are possible. Perhaps the most obvious example is email; why go to the bother and expense of installing Microsoft Exchange or some other email server when you can just buy mailboxes off-the-shelf? Other applications popular as a service include IP telephony, web conferencing and customer relationship management (CRM). Approaching 40% of mid-market organisations now say they are using SaaS applications of some sort (Figure 3). If there is a SaaS option available that meets the needs of the business process in question, it may well be the best option. IT managers really should be aware of this because the lines-of-business they are there to support are quite likely to find and subscribe to SaaS applications for themselves. If this is not done within an organisation's overall governance framework, then there may be some loss of control.

Platform options: in-house or out of house?


By definition, public clouds are out there somewhere at a third party location. However, any of the other types of platform can be located in third party facilities too or, indeed, left where they are. This section looks at the considerations for locating platforms.

Keeping the platform on-site


A legacy application running on an on-site server may be best left there. It may be making use of a purpose-built data centre that has been paid for and it makes no sense to de-provision the facility at present. For some applications, such as financial trading, the delays introduced by networks can make a real difference and proximity of an application to its users is a key issue. However, for any application, if there is a case to be made for running it on-premise at a company-owned location, ensuring redundancy (i.e. that the application can fail over to another location if the primary location is incapacitated) requires either owning and running two data centres or turning to a third party. Furthermore, as in-house data centres age, they may no longer meet the efficiency, resilience and capacity requirements of a given business, and many now turn to third party data centre providers rather than revamping old facilities or building new ones.

Co-location (co-lo)

The market for third party data centres has grown fast in the last decade; placing applications and the infrastructure they rely on in such facilities is called co-location, or co-lo for short. Co-lo providers are experts at running data centres, ensuring power supplies, access to multiple network service providers, 24-hour physical security and advanced protection against fire and flood. As the costs of running such facilities are shared across multiple customers, they are accessible to all sizes of business. Their owners focus on ensuring access to utilities and on selecting low cost locations. As most applications are not so time critical that tens, hundreds or even thousands of miles of separation between users and applications makes a huge difference, many businesses now turn to co-lo providers for their data centre needs, either provisioning new equipment in these facilities or moving old kit off-premise to them.



Managed hosting
When a business is provisioning new applications or re-provisioning old ones, the direct purchase of the hardware is no longer necessary. Managed hosting providers (MHP) offer pre-provisioned dedicated hardware: this is different to public cloud because not all resources are shared, although some will be (most obviously the data centre). Some MHPs may run their own data centres but others rent space from co-lo providers. MHP infrastructure can be used to provide dedicated physical servers, virtual servers or private cloud.

Factors affecting the choice of platform and location


Whether it is a privately owned or co-lo data centre, the location will also be influenced by other considerations; for example, proximity to users when low latency is required, regulations with regard to data storage and access to specific network services. The criticality of a particular IT service to a business is also a factor; a business cannot differentiate itself much by running commodity IT functions such as data centres, hypervisors, databases or email servers better than its competitors (unless it is a provider of hosted services), but it can be more competitive if it has unique IT services that drive its core business processes (e.g. high speed share trading or a customer loyalty programme). Commodity applications are good early candidates for outsourcing, leaving IT staff free to focus on core value. Even if a critical application is kept in-house, it may make sense to outsource the management of utility parts of the stack that supports it. Figure 4 summarises who owns the stack for the different platform or location options.


The need for platform flexibility


Selecting a platform, and a location for it, is not something that needs to be, or should be, fixed, even for the lifetime of a given application. There are good reasons for ensuring there is the flexibility to move an application, or at least instances of its workloads and components. These include:

Redundancy
As outlined above, the best way to provide a failover capability for many applications deployed on-premise will be to have access to a cloud-based backup resource. For applications that are deployed in the cloud as a primary platform, failover should be to a secondary cloud resource. For new applications, this means ensuring there is sufficient flexibility to achieve this. This may be harder for legacy applications that may be tied to old hardware, which may be a good reason for accelerating their replacement.
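The failover logic implied here can be sketched very simply, assuming each platform exposes a health-check URL (the addresses below are hypothetical). In practice failover is normally handled by load balancers or DNS rather than application code.

```python
# Minimal failover sketch: try the primary platform first, then fall
# back to a secondary (e.g. cloud-based) resource. URLs are invented.
import urllib.request

PLATFORMS = [
    "https://primary.example.com/health",    # on-premise or primary cloud
    "https://secondary.example.com/health",  # cloud-based backup resource
]

def live_platform():
    """Return the first platform that answers its health check."""
    for url in PLATFORMS:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # platform unreachable; try the next one
    raise RuntimeError("no platform available - escalate to operations")
```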

Peak planning
Some applications may happily run on a given platform with limited resources most of the time, but require additional resources for short periods. In the past this required over-provisioning of hardware. However, being able to make use of additional on-demand resources only when they are needed, sometimes called cloud-bursting, avoids this problem. New applications should be acquired with this flexibility in mind.
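The cloud-bursting decision itself is simple to express. A hedged sketch follows; the threshold values and the provision/release callables are assumptions made for illustration.

```python
# Sketch of a cloud-bursting rule: run on the base platform most of the
# time, and rent on-demand capacity only during short peaks.

BURST_THRESHOLD = 0.8   # burst when the base platform is over 80% utilised
burst_active = False

def check_burst(base_utilisation, provision, release):
    """Provision on-demand capacity during peaks, release it afterwards."""
    global burst_active
    if base_utilisation > BURST_THRESHOLD and not burst_active:
        provision()          # e.g. start extra cloud instances
        burst_active = True
    elif base_utilisation < BURST_THRESHOLD * 0.75 and burst_active:
        release()            # pay only for what was used during the peak
        burst_active = False
```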

Avoiding under/over investment


Another benefit of using third party infrastructure is avoiding over or under investment. Predicting the resources required by an application to support a business plan will vary depending on the success of that plan; a third party cloud platform, be it private or public, ensures that the resources needed can be scaled accordingly.

Longer term capacity planning


In-house resources may be adequate for a given application in the short term and there may be no sense in paying for third party resources when an investment has already been made. However, if growth means that, in the long term, more capacity will be required it may make sense to provision this from the cloud rather than build more capacity in-house.

Future proofing
It is in the interests of cloud service providers to keep up with emerging technology; even if one lags, another may be innovating. Provisioning all infrastructure in-house limits an application to the technology stack that was available at the time it was initially provisioned. Ensuring an application can be easily moved means that, if a given new technology can provide considerable benefits and quick wins, the application can be moved to a platform provider that is quick on the uptake of such technology. Understanding when this is the case will depend on application-specific considerations.



A checklist for application deployment


Outlining the choice of platforms for an application, where to locate them and the flexibility needed to move application workloads is all well and good but, in reality, that choice will be limited by the application itself. This includes who is best positioned to provide support for the application and how it fits into a given organisation's governance framework. This section is a checklist to help guide those charged with making such decisions through the process.

Data centre location


The issues around data centre location have been discussed above and, whilst it is an early decision to be made, it may involve multiple choices for running the various workloads that constitute a given application (see below, application structure).

Network
As discussed earlier, if access to a preferred network service provider is required, or special connectivity to other partners, users or the public internet is needed, this may affect the choices made. There may also be special considerations for access from mobile devices, whose use is becoming more and more common amongst businesses of all sizes (Figure 5); these may require particular load balancing and data caching to be put in place.

Platform
Again, the platform options have been discussed above; different workloads may best run on different platforms, be they physical, virtual or cloud-based.

Storage
Where data is stored is an important consideration for a number of reasons, including latency, compliance and capacity. Storage is provided as an integral part of cloud and other offerings from service providers, but is also offered as a discrete resource by many; in that case, storage can be provisioned separately, for example for off-site backups or shared data stores. When considering the suitability of dedicated versus shared storage infrastructure, similar considerations apply as for deploying the applications themselves.

Encryption of stored data


Security and compliance requirements may demand that certain stored data is encrypted. Some cloud platforms provide encryption as an integral service, some charge for it as an add-on, whilst others leave it to customers to sort out for themselves.
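Where the platform leaves encryption to the customer, data can be encrypted before it ever reaches third party storage. Below is a minimal sketch using the Python cryptography library's Fernet recipe (symmetric, authenticated encryption); key management, which is the hard part in practice, is deliberately out of scope.

```python
# Client-side encryption before data leaves for third party storage,
# using the "cryptography" package's Fernet recipe (AES-based,
# authenticated). Key storage and rotation are out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key management system
f = Fernet(key)

ciphertext = f.encrypt(b"cardholder or personal data")
# ... ciphertext, not plaintext, is what gets stored off-premise ...
plaintext = f.decrypt(ciphertext)
assert plaintext == b"cardholder or personal data"
```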

Governance and compliance


Whilst there are many advantages to using multiple platforms for flexible workload management, there must always be checks that their use conforms to in-house and external governance and compliance requirements. For example, there are geographic restrictions on where personal data may be processed and stored, imposed by the Data Protection Directive (being updated to the European Data Protection Regulation in 2012). This means cloud service providers must be able to guarantee that data will not cross certain physical boundaries or that, if it does, safe harbour agreements are in place. Many organisations that handle credit card data must comply with the rules laid down by the Payment Card Industry Data Security Standard (PCI DSS), which limits what data can be retained and also demands encryption. Some organisations may want to ensure their data does not become subject to the US Patriot Act by keeping it well away from its jurisdiction.
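Such geographic restrictions translate naturally into a deployment-time check. The sketch below uses an invented policy table; real policies come from legal and compliance review, not from code.

```python
# Sketch of a data-residency check run before a workload is deployed.
# The data classes and region names are invented for illustration.

ALLOWED_REGIONS = {
    "personal_data": {"eu-west", "eu-central"},   # data protection rules
    "cardholder_data": {"eu-west"},               # PCI DSS scoped facility
    "public_content": {"eu-west", "us-east", "apac"},
}

def can_deploy(data_class, region):
    """True if this class of data may be stored/processed in the region."""
    return region in ALLOWED_REGIONS.get(data_class, set())

assert can_deploy("personal_data", "eu-west")
assert not can_deploy("personal_data", "us-east")
```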

Other security issues


When deploying any application and planning the storage and movement of data, ensuring security must always be a high priority consideration. This paper is not specifically about security, but those planning the use of flexible platforms will find that the biggest challenge they have to overcome is the perception that externally hosted IT is less secure than that deployed in-house (Figure 6). This is almost always a fallacy; cloud platforms are not inherently less secure and, if this were true, the proposition would simply not be viable. The level of security provided for a given application is ultimately the responsibility of those that deploy it, regardless of the platform. A cloud service provider will provide SLAs that cover the security of its platform, including the physical security of the data centre, but beyond that, standard considerations apply.

End user considerations


There may be special considerations for certain applications with regard to how they are accessed by end users; for example the support of virtual desktop infrastructure (VDI), which may have latency considerations, or mobile users (as discussed with regard to networks earlier).

Application structure
Much of the discussion so far has talked about applications as if they were single entities that cannot be broken down which, of course, they can be, into individual workloads and components. It may make sense to deploy different parts of an application on different platforms. An ecommerce application may consist of a web server, an order processing application and a transactions database. It may make sense to deploy the web server in the cloud, well connected to customers, whilst the order processing application may be a legacy system best run on existing hardware but co-located near the web server. For performance reasons, the database may be best deployed on a dedicated physical server offered as a managed hosting service in a nearby facility. This will only work if the application has been structured in a way that allows such componentisation in the first place, which many, especially older ones, will not have been. Even where componentisation is not possible, there are still choices to be made; a more resilient application may be achieved by moving it, and the hardware it runs on, to a co-lo facility or by re-provisioning it on a managed hosting platform.
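The e-commerce example above amounts to a per-component deployment map. One way such a map might be recorded, with invented names, so each component's platform and location can be reviewed independently:

```python
# The e-commerce example expressed as a deployment map: each component
# gets its own platform and location. All names are illustrative.

DEPLOYMENT = {
    "web_server": {
        "platform": "public cloud (IaaS)",
        "location": "provider region near customers",
        "reason": "elastic, well connected to the public internet",
    },
    "order_processing": {
        "platform": "existing physical server",
        "location": "co-lo facility near the web server",
        "reason": "legacy system tied to current hardware",
    },
    "transactions_db": {
        "platform": "dedicated server (managed hosting)",
        "location": "nearby facility",
        "reason": "performance: dedicated disk i/o and bandwidth",
    },
}

for component, plan in DEPLOYMENT.items():
    print(f"{component}: {plan['platform']} @ {plan['location']}")
```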

Application resource requirements


Applications need to be allocated four basic resources: processing power, storage capacity, disk input/output (i/o) and network bandwidth. Different applications will be hungrier for one or other of these. For example, a video conferencing (VC) system needs network bandwidth above all else, whilst a business intelligence application may rely most on processing power. This affects the choice of platform. Processing power can quite easily be shared, providing all contending applications don't need it at the same time. However, standard cloud platforms may only have limited network capacity allocated and will be ill-suited to VC deployment, which may be better deployed on its own physical server with dedicated bandwidth. Some applications require high storage capacity or generate lots of storage i/o, both of which must be catered for.
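One way to make this concrete is to profile each workload across the four resources and let the dominant need drive the platform short-list. A rough sketch with invented scores; a real sizing exercise would use measured figures.

```python
# Rough sketch: profile workloads across the four basic resources
# (0-10 scales, invented values) and let the dominant need suggest a
# platform, echoing the VC vs business intelligence contrast above.

PROFILES = {
    "video_conferencing":    {"cpu": 3, "storage": 2, "disk_io": 2, "bandwidth": 9},
    "business_intelligence": {"cpu": 9, "storage": 6, "disk_io": 5, "bandwidth": 2},
}

HINTS = {
    "bandwidth": "dedicated server with guaranteed network capacity",
    "cpu": "shared virtual/cloud platform (CPU is easily pooled)",
    "disk_io": "dedicated storage or managed hosting",
    "storage": "cloud storage with capacity on demand",
}

for name, profile in PROFILES.items():
    dominant = max(profile, key=profile.get)
    print(f"{name}: dominant need is {dominant} -> {HINTS[dominant]}")
```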


Application life cycle


Where an application is in its life cycle should also be taken into account. If there is a plan to replace an application in the longer term, then it may make sense to keep the current platform running until it is fully amortised, using the intervening time to carefully plan its replacement and take advantage of technology changes that arrive in the interim. This is especially true if the application has been written in a way that is hard to virtualise (see earlier discussion). For commercially acquired packages, it may also be that the vendor has plans that will affect the choices made (see below).

Bespoke software applications


Mid-market organisations are as likely to have developed software in-house as enterprise ones (see Figure 3). These will have been built to suit the technology of the day (for example client-server) and may well be poorly suited for virtualisation. However, the biggest problem is likely to be that the original developers are no longer around and that the application is poorly documented. This does not mean improvements are impossible; as discussed earlier, it may make sense to co-locate the hardware and redundancy may be improved.

Commercially acquired packages


Many commercially acquired applications will hit the same problems as bespoke software; for example, if the original supplier goes out of business, is acquired, or the application reaches end-of-life from the vendor's perspective. End users may want to keep such applications on life support for long periods of time, often beyond the time the supplier has committed to supporting them. Part of the planning process should be to recognise when this is the case and work out how to move forward with a new and more resilient replacement, but such goals will not be achieved overnight. For current applications that are fully supported by the vendor, there will be advice available on how the application should be run and the resources it needs. Some vendors will have adapted older applications to run in virtualised environments or have plans to do so. Continuing with vendors that have a clear road map to support flexible choice makes sense; if they do not, the long-term plan should be to ditch them and they should certainly not make it on to the short list for new acquisitions.

Software licensing
Another consideration is software licensing. First, it may be the case that an organisation has perpetual licences for certain infrastructure and application software that need to be utilised in any long-term plan to avoid unnecessary new investment. However, moving licences to a virtualised environment may not be straightforward and a renegotiation with the supplier may be necessary. This is especially true as more and more software runs on multicore physical servers, as some vendors charge for a licence per core.
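The per-core point is easiest to see with a worked example; the prices and core counts below are invented for illustration.

```python
# Worked example of per-core licensing exposure when moving to newer,
# multicore hardware. Prices and core counts are invented.

price_per_core = 2_000          # hypothetical licence cost per core
old_server_cores = 4            # legacy dual-socket, dual-core box
new_server_cores = 16           # modern dual-socket, eight-core box

old_cost = price_per_core * old_server_cores    # 8,000
new_cost = price_per_core * new_server_cores    # 32,000

print(f"same software, same sockets: {old_cost} -> {new_cost}")
# A 4x licence bill for a hardware refresh is why renegotiation with
# the supplier may be necessary before virtualising or re-platforming.
```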

When to seek the advice and resources of MSPs


Engaging with MSPs is not just about resources; integrator-MSPs act as a one-stop shop for advising on how to achieve the best results for a given application requirement. They may offer their own management and infrastructure resources, recommend those of other specialist MSPs or simply help make better use of already available in-house resources. Integrator-MSPs will have an understanding of all of the factors discussed above, considering each application requirement in turn rather than offering a one-size-fits-all approach, which may be taken by a provider that just focuses on one or two aspects of outsourcing. For example, a pure managed hosting provider will, of course, highlight and recommend the benefits of a dedicated hosted platform; an application vendor is hardly likely to point out that it is time to ditch your investment in its software and switch to an alternative SaaS option instead.

Two types of MSP

Specialist MSPs: provide point services such as managed hosting or remote sys-admin.

Integrator-MSPs: offer broader end-to-end services, which include management of applications and their chosen delivery platforms. This may include managing services and resources from specialist MSPs or other cloud providers, and integrating it all under a single SLA.



Integrator-MSPs can also scale their investments across multiple customers and will have invested in the enterprise-class management and monitoring tools that are needed to ensure applications have the resources they need at all times to perform as required. Most mid-market organisations will not have the funds to invest in such tools. It is also likely that a given integrator-MSP will have deployed a given type of application before, so many of the problems that arise will have been encountered and handled previously. For most individual mid-market organisations a new application roll-out will be a first-time experience, wherever it is deployed. A steep learning curve can be avoided by turning to a partner with existing experience. Even if a given organisation has a core application that it considers gives it truly unique business differentiation and should be kept in-house, it could outsource the management of the infrastructure to an integrator-MSP that will put in place an optimised bespoke service for the application and the infrastructure it relies on. As with any third party engagement, the contract put in place should set out the desired outcomes and service levels. Of course, there should also be a means to measure that these are being achieved.

Check List: what do integrator-MSPs do for their customers?


1. IT delivery is their core skill; the focus is 100% on the delivery of highly available, scalable, cost effective, compliant and secure applications.
2. Work to service level agreements (SLAs) and are measured on delivering against them; generally, in-house IT departments are not, especially in the mid-market.
3. Run enterprise-class facilities and make them available to businesses of all sizes. This is especially valuable to mid-market companies, many of which will not have such facilities in-house.
4. Lower the cost of delivery, as skills and resources are shared across multiple customers.
5. Understand the range of platforms available for application delivery and can offer advice on when moving a given application from one to another will be beneficial on a temporary or permanent basis.
6. Have existing relationships with IT hardware and software suppliers, accreditations for working with their products and an understanding of the latest technology developments.
7. The scale of their business allows them to keep an inventory of new equipment and spares ready to install as required.
8. Form relationships with specialist MSPs and integrate their services with their own and their customers' infrastructure.
9. Take responsibility for moving an application from one platform to another, as necessary, to maintain service levels.
10. Provide scalability, as they are able to hold resources in reserve to be called on by individual customers as and when required. Customers only pay for what they use, rather than bearing the expense of having such resources on permanent stand-by.
11. As with scalability, they also provide cost-effective failover platforms, as they provide infrastructure when it is needed rather than having it on permanent stand-by.
12. Likely to have been there before; for example, when it comes to migrating legacy applications to new locations and/or platforms.
13. Have the experience to advise and deliver on security and compliance goals. They will have done this for other customers and, in many cases, will have existing certifications for their own infrastructure and processes.


Conclusions
All businesses are now reliant on IT to a greater or lesser extent; mid-market businesses, in particular, are unlikely to have the full range of in-house skills to ensure the performance, reliability, scalability, security and compliance of the applications they rely on. To this end they should consider engaging with integrator-MSPs that have the skills and resources they lack. This will leave them free to focus on their core business activity and avoid having to invest in internal IT expertise.


References
1. Quocirca, "2012: The year of Application Performance Management", February 2012; interviews with 500 UK, USA, German and French IT managers. To be published on www.quocirca.com later in 2012, freely available on request from bob.tarzey@quocirca.com or downloadable at: http://applicationperformance.dynatrace.com/2012_Application_Performance_Management_Outlook_Survey.html
2. Goldman Sachs Global Investment Research, "A paradigm shift for IT: The Cloud", November 2009.
3. Quocirca, "Outsourcing the problem of software security", February 2012; interviews with 100 UK and US enterprises. To be published on www.quocirca.com later in 2012, freely available on request from bob.tarzey@quocirca.com or downloadable at: http://info.veracode.com/Quocirca_Outsourcing_Software_security.html
4. Quocirca, "The data sharing paradox", September 2011; http://www.quocirca.com/reports/620/the-datasharing-paradox (SMB data from the UK, France, Germany, the USA and Australia)
5. Quocirca; UK-only data taken from research published in "Next Generation Datacentre Cycle II: Cloud findings", April 2012; http://www.quocirca.com/reports/689/next-generation-datacentre-cycle-ii-cloud-findings


About niu Solutions


niu Solutions specialises in building bespoke enterprise IT and communications, as organisations need it, as-a-service, and as they grow. Harnessing the power of cloud technology and know-how gained from over 20 years of delivering managed and hosted services to the mid-market, niu provides tailored integration services that introduce resilience, security and compliance as standard. niu's professional services team creates high-performance IT environments integrated across cloud, private hosted or in-house services, on an application by application basis. Focussed on delivering applications and infrastructure that enable more efficient business processes, niu tightly integrates platforms, processes and technologies from best-of-breed providers and delivers them as a single, tailored solution with a bespoke and flexible service wrap. Taking the time to understand an individual organisation's challenges is key to niu's service values, as this quickly translates to a valuable competitive edge for the customer's business. Organisations benefitting from niu's bespoke service delivery include MetroBank, Keystone and JD Sports. Headquartered in the City of London, niu employs more than 100 people and has offices throughout the South East of England.


REPORT NOTE: This report has been written independently by Quocirca Ltd to provide an overview of the issues facing mid-market organisations as they deploy business applications. The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that mid-market organisations should take to create a more effective and efficient IT environment for running applications that will support future growth.

About Quocirca
Quocirca is a primary research and analysis company specialising in the business impact of information technology and communications (ITC). With worldwide, native-language reach, Quocirca provides in-depth insights into the views of buyers and influencers in large, mid-sized and small organisations. Its analyst team is made up of real-world practitioners with first-hand experience of ITC delivery who continuously research and track the industry and its real usage in the markets. Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption: the personal and political aspects of an organisation's environment and the pressure for demonstrable business value in any implementation. This capability to uncover and report back on end-user perceptions in the market enables Quocirca to provide advice on the realities of technology adoption, not the promises. Quocirca research is always pragmatic, business orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement through better levels of understanding and the adoption of the correct technologies at the correct time.

Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of long-term investment trends, providing invaluable information for the whole of the ITC community. Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that ITC holds for business. Quocirca's clients include Oracle, Microsoft, IBM, O2, T-Mobile, HP, Xerox, EMC, Symantec and Cisco, along with other large and medium-sized vendors, service providers and more specialist firms. Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com

Disclaimer: This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca has used a number of sources for the information and views provided. Although Quocirca has attempted wherever possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors in information received in this manner. Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented here, including any and all consequential losses incurred by any organisation or individual taking any action based on such data and advice. All brand and product names are recognised and acknowledged as trademarks or service marks of their respective holders.
