
Reducing Costs by Server Consolidation through Virtualization

Abstract
Enterprises are witnessing a server sprawl as low-cost x86 boxes make their way into datacenters. Application provisioning on dedicated servers and over-provisioning hardware for peak loads have led to server proliferation and low server utilization. Virtualization technology has the potential to play a significant role in enabling logical server consolidation. It enables enterprises to achieve higher utilization and manage their hardware resources better, reducing Total Cost of Ownership (TCO) and helping contain IT spending.

Virtually every aspect of a business depends on the services provided by corporate datacenters to stay ahead of changing business conditions. Although the tremendous resources and capabilities afforded by a large infrastructure prove invaluable, these systems are often inflexible, hampering agility as companies look to react to evolving world markets. Indeed, today's hypercompetitive environment is forcing businesses to find ways to adapt and innovate to survive and remain profitable, yet IT organizations face service-level pressures that necessitate cost reductions and greater operational efficiency. The key to success is finding the right balance. This white paper highlights how customers can use innovative virtualization and consolidation technologies to reduce cost.

In search of a solution, enterprises look to server consolidation as a viable answer to these problems. Traditional server consolidation promises cost reduction through consolidation of hardware and software; while costs are reduced, performance, availability and agility can also be compromised. Server consolidation using virtualization can not only reduce the costs of hardware and software licensing while retaining performance, but also offer a simplified architecture, improved quality of datacenter management, reduced power consumption and cooling requirements, simplified backup and recovery activities, and significantly enhanced enterprise agility, as described in Figure 1. The business and technical rationale for server consolidation using virtualization technology is clear: bottom-line benefit to the IT budget as well as improvement in the productivity of the enterprise workforce.

In today's highly competitive manufacturing environment, success requires a constant focus on cost cutting while maintaining production throughput and employee safety. For manufacturers, this includes finding new ways to lower operating expenses, a large part of which is the purchase and support of industrial systems. A significant cost stems from the inefficiencies created by the growing numbers and varieties of systems on the factory floor. For instance, system proliferation is consuming precious space and straining IT resources, especially when systems have unique support requirements for configuration, backups, spares and software patching.

Efficiency can be improved when multiple factory functions are consolidated onto a single hardware platform, thus decreasing operating expense, factory footprint, energy consumption, and integration and support effort. This can be done using advanced multi-core processors along with proven virtualization technology, which has been around since the 1960s and is most notably used in data centers where many applications are consolidated onto a single server. Still, the virtualization tools and methods used in the server environment differ from what is appropriate for the embedded environment. This white paper describes how virtualization technology running on multi-core Intel Core vPro processors can be used in industrial automation to consolidate computing devices for motion control, programmable logic control (PLC), human machine interface (HMI), machine vision, data acquisition, functional safety and so forth. This approach can help manufacturers reduce cost and complexity on the factory floor.

Introduction

Virtualization is a technique through which hardware resources, viz. processors, storage, I/O and network, on one or more machines can be transformed through hardware/software partitioning, time sharing and simulation/emulation into multiple execution environments, each of which can act as a complete system. Computing power has doubled almost every other year, as predicted by Gordon Moore. Processors based on IA-32 have particularly been able to take advantage of this, resulting in a proliferation of IA-32 based servers in most enterprises today. The high acquisition and maintenance costs of mainframes and large UNIX machines have compelled enterprises to go with off-the-shelf servers based on the IA-32 architecture, which were cheaper by several orders of magnitude. Enterprises have typically deployed applications on dedicated IA-32 based servers in their data centers for two reasons: first, to provide isolation, and second, to accommodate peak-hour utilization and sudden utilization surges. Isolation is important to ensure that applications don't interfere with each other's configuration or environment, and that security or other faults in one application don't compromise co-hosted applications. Peak-hour utilization factors have to be considered when provisioning servers in order to guarantee and maintain Service Level Agreements (SLAs) for the application; this has resulted in heavily over-provisioned server farms in the data centers. Enterprises are now looking for ways to improve the average utilization of their servers, reduce maintenance cost and retain Quality of Service (QoS). Consolidating multiple applications on a single physical server can solve the problem of low utilization, but the applications won't be isolated sufficiently to address security, quality-of-service, fault tolerance and application incompatibility concerns. Workload management solutions can perhaps address issues related to provisioning and QoS. Virtualization is the only technology currently available that makes logical server consolidation possible without compromising on any of the desired features.
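To make the idea concrete, the minimal sketch below uses the libvirt Python bindings to list the guests consolidated on one physical host: each application runs in its own virtual machine with its own vCPU and memory allocation. The QEMU/KVM connection URI and the libvirt-python dependency are assumptions for illustration, not something this paper prescribes.

```python
import libvirt

conn = libvirt.open("qemu:///system")   # hypothetical local QEMU/KVM host
try:
    for dom in conn.listAllDomains():
        # info() returns (state, maxMemKiB, memKiB, nrVirtCpu, cpuTimeNs)
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name():<20} vCPUs={vcpus} memory={mem_kib // 1024} MiB state={state}")
finally:
    conn.close()
```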

What Is Driving Virtualization


High growth and irrational exuberance quickly come to mind when defining business in the late 1990s. Consolidation within industries certainly ranks as a possible definition of the business landscape as we near the end of the first decade of the 21st century. But whether companies experienced rapid growth or found themselves on the buying or selling end of an M&A transaction, they added IT systems. They patched existing systems together to keep operations functioning and business moving. Unfortunately, that led to duplicate, incompatible and legacy systems, which in turn produced widespread inefficiency, skyrocketing costs, complexity and inflexibility, to name a few challenges. While IT was adding and patching systems, global competition forced management to look for ways to reduce costs and complexity while maintaining or increasing service levels. Executives sought to make their businesses more nimble: to respond to fast-changing global conditions, to compete more effectively, to lower costs and to improve shareholder value. The proliferation of multiple system platforms, often resulting from mergers and acquisitions and the lack of an overall IT strategy, makes it hard to respond to business demands. IT executives must deal with infrastructures that are complex and often disconnected. What follow are extra administrative, utility, facilities and management costs, escalating requirements for data center space, and service and maintenance issues. Typically when this happens, a company's technology infrastructure costs slip out of control. Server virtualization can be a compelling way to regain control over server infrastructures. Divergent paths, with complex and costly IT systems colliding with business requirements for efficiency and agility, created an environment ripe for virtualization.

The Need for Server Consolidation


Isolation

Enterprises deploy diverse applications and workloads in their data centers. These applications differ from one another in platform configuration, environment settings and usage characteristics. Most classes of enterprise applications also need to adhere to acceptable service levels in terms of response time and throughput, and they cannot meet these service-level objectives unless computing resources (processor, memory, disk, network, etc.) are committed to them. Security is another concern: any application or workload that poses a security threat to the organization must be isolated or quarantined to prevent it from impacting other applications or services. All of this requires applications to be isolated from one another and resources to be dedicated to them. The simplest, though inefficient, solution is to put each application on a dedicated server and not host any other application on that server. This leads to three problems:

1. Over-provisioning servers for peak load
2. Under-utilization of servers
3. High infrastructure maintenance cost

Improving Utilization

Deploying applications on dedicated servers is expensive for a multitude of reasons, the most important being over-provisioning. With an application deployed on a single unit of computing resource (which may also be a dedicated cluster), the resource needs to be large enough to accommodate peak volume without falling short of the guaranteed service levels. While the peak period of activity may last only a few hours in a day or a few weeks in a year, the resource remains largely underutilized the rest of the time. Given this over-provisioning, the TCO may be undesirably high, and the loss due to wasted computing resources may account for a significant share of the IT budget.

Lowering Maintenance Cost

Maintenance budgets are typically proportional to the number of servers deployed in the data center. As the number of servers increases, the maintenance cost grows non-linearly, exerting considerable economic pressure. Increased maintenance effort and cost also mean that IT staff are not spending enough time and resources building the newer applications required to give the enterprise a competitive edge. Data center applications generally require high availability, which is traditionally achieved through redundancy (duplication of services on a passive, fail-over standby server). Factoring in the redundant hardware for high availability and dedicated application deployment, the maintenance cost grows rapidly with the number of applications and services deployed.

Server Consolidation through Virtualization

In light of the issues highlighted in the earlier sections, it is apparent that consolidating applications can go a long way toward reducing TCO by cutting maintenance cost and improving server utilization. However, neither can be achieved without addressing the isolation requirements: environment isolation, security isolation and resource isolation. Numerous commercially available workload management products address the resource isolation problem satisfactorily; the security isolation and environment isolation problems, however, remain. Virtualization technology can enable workload consolidation by providing all the required levels of isolation with minimal loss in performance.
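A back-of-the-envelope calculation illustrates the over-provisioning argument. All of the numbers below (average utilization, safe utilization ceiling, hypervisor overhead) are hypothetical and ignore memory, I/O and licensing constraints; they are not figures from this paper.

```python
import math

# All figures below are assumptions for illustration only.
dedicated_servers = 100     # lightly loaded dedicated servers today
avg_util = 0.12             # assumed average CPU utilization per dedicated server
target_util = 0.60          # assumed safe utilization ceiling per consolidated host
virt_overhead = 0.10        # assumed hypervisor overhead fraction

# Capacity each consolidated host can actually offer to guest workloads.
usable_per_host = target_util * (1.0 - virt_overhead)
hosts_needed = math.ceil(dedicated_servers * avg_util / usable_per_host)

print(f"{dedicated_servers} dedicated servers -> {hosts_needed} virtualized hosts")
print(f"Consolidation ratio roughly {dedicated_servers / hosts_needed:.1f}:1")
```

With these assumed inputs, 100 dedicated servers collapse onto roughly two dozen virtualized hosts; a real sizing exercise would also have to account for memory footprints, I/O contention and failover headroom.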

Benefits from Consolidation

By consolidating devices using virtualization technology, original equipment manufacturers (OEMs) developing industrial automation solutions can provide substantial benefits to their customers, such as:

Lower overall solution cost: Although a consolidated device may cost more than any of the individual subsystems, it should cost less to manufacture than the combined subsystems because it has a smaller bill of materials (BOM). In addition, virtualization makes it easier for OEMs to add new functionality to a system and expand their offerings.

Smaller factory footprint: Consolidated equipment takes up less factory floor space than the individual systems it replaces.

Reduced overall energy consumption: The power efficiency of Intel Core vPro processors, combined with system consolidation, can yield a solution that consumes less power than the individual systems combined.

Reduced integration cost: By consolidating subsystems, OEMs effectively eliminate many integration tasks for their customers. For instance, the networking, cabling, shielding and configuration that connect multiple subsystems together are handled within the system.

Simpler to secure: The consolidated approach decreases the number of computing devices that require security software and may eliminate some varieties of security solutions the factory must support. In addition, there are fewer devices for hackers to attempt to infiltrate, thus reducing the attack surface of the factory floor.

Easier system management: When subsystems are consolidated, factory IT personnel have fewer devices to install, provision and manage. Also, a consolidated system is likely to have more capable hardware and software than the subsystems it replaces, allowing for additional manageability options and capabilities.

Higher reliability: The greater the number of systems, the larger the number of devices that can fail. Consequently, a consolidated system should have a better mean time between failures (MTBF) than the combination of subsystems it replaces (see the short calculation after this list). Furthermore, there are fewer spares for factories to carry, and maintenance and repair procedures are simpler, all ultimately leading to shorter downtimes.
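The reliability point can be made concrete with a small worked example. The MTBF figures below are invented for illustration and assume the production line stops whenever any one subsystem fails (a series configuration), so per-subsystem failure rates add.

```python
# Hypothetical per-subsystem MTBF figures (hours); the line is assumed to stop
# whenever any one subsystem fails, so failure rates add.
subsystem_mtbf_hours = [50_000, 50_000, 50_000, 50_000, 50_000]

combined_failure_rate = sum(1.0 / mtbf for mtbf in subsystem_mtbf_hours)
combined_mtbf = 1.0 / combined_failure_rate

print(f"Effective MTBF of five separate boxes: {combined_mtbf:,.0f} hours")
# A single consolidated device only has to beat this combined figure, not the
# per-subsystem figure, for the factory to see fewer hardware-related stops.
```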

Benefits of Server Consolidation


Lower TCO

With improved utilization due to consolidation and lower maintenance cost from curbing server proliferation, enterprises are seeing significant reductions in the total cost of ownership of their infrastructure. Enterprises are able to deploy newer applications without having to buy additional hardware.

High Availability

Features such as live migration with near-zero downtime allow active virtual machines to be moved during scheduled maintenance, so applications and services continue running even during regular maintenance cycles. It is also possible to build a highly available data center using virtual machines on fewer physical machines than were previously required in active/passive mode, which can further improve utilization and reduce maintenance effort.

Service Level Guarantees

Even with workloads consolidated onto a single physical server, strict resource provisioning makes it possible to use system resources optimally without compromising the service levels of individual applications. Workloads that naturally complement each other in their usage characteristics are natural candidates for consolidation on a single machine, and the resources allocated among the virtual machines can be shifted through policies so that appropriate allocations apply at specific time periods.

On Demand Resource Provisioning

Live migration, in conjunction with the dynamic resource provisioning features available in virtualization software, opens up many possibilities that make an enterprise data center better able to handle varying transactional volumes. An application that is choking under high load can be moved, along with its virtual machine, to a bigger server using live migration, or more resources can be committed to the suffering application by adjusting the resource levels of the virtual machines hosted on the same physical server.
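The two capabilities described above, live migration and dynamic resource provisioning, might look roughly like the following sketch using the libvirt Python bindings against QEMU/KVM hosts. The host and guest names are hypothetical, shared storage between the hosts is assumed, and the exact flags and sizing policies would differ in a real deployment.

```python
import libvirt

# Hypothetical hosts and guest names; shared storage between hosts is assumed.
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://bigger-host.example.com/system")

# Live migration: the guest keeps serving requests while its memory and state
# are copied to the destination host.
busy_guest = src.lookupByName("app-vm")
busy_guest.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

# Dynamic provisioning: give a struggling guest more memory at runtime
# (value is in KiB; its configured maximum must already allow this).
other_guest = src.lookupByName("reporting-vm")
other_guest.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

src.close()
dst.close()
```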

Issues to consider with Virtualization


Before implementing virtualization solutions, as with the adoption of any new technology, due diligence needs to be performed to ensure that there are no obvious, high-impact problems. The following are some areas to note:

Performance overhead: Every virtualization solution comes with a certain performance overhead. This overhead has to be measured and quantified during the evaluation stage; only after ensuring that any loss in performance is within acceptable limits for the organization should the rollout begin.

Licensing: The existing licensing models could be unsuitable for virtualization. If they charge based on the number of running instances of a particular piece of software, virtualization could end up being very costly, since it increases the number of machines, and thereby the number of running instances, in the system. Newer licensing and pricing agreements may have to be worked out with the ISVs; for example, licenses that charge based on the number of physical nodes in the system may be more suitable for such situations (see the toy comparison after this list).

Compatibility: The compatibility of existing software and systems with virtualization needs to be confirmed in a pilot rollout before proceeding with large-scale deployment. Though most virtualization products claim to support unmodified applications, organizations should not proceed before verifying that this is indeed so. The costs of incompatibility can be much higher than the benefits that virtualization brings.
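To see why the licensing model matters, consider a toy comparison. All fees and counts below are invented purely for illustration; actual terms depend on the ISV.

```python
# Invented figures for illustration only.
running_instances = 40        # software instances across all virtual machines
physical_hosts = 8            # consolidated hosts running those instances
fee_per_instance = 1_000      # assumed annual licence fee per running instance
fee_per_node = 4_000          # assumed annual licence fee per physical node

print(f"Per-instance licensing: {running_instances * fee_per_instance:>7,} per year")
print(f"Per-node licensing:     {physical_hosts * fee_per_node:>7,} per year")
```

Under per-instance terms the bill scales with the number of guests, so consolidation can actually raise software cost even as it lowers hardware cost; per-node terms leave the bill unchanged however densely the hosts are packed.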

Suggested approach to deploying virtualization technology


Any organization looking to deploy virtualization technology should carefully plan and execute the adoption of the new technology. Since virtualization is a new technology (on non-mainframe platforms at least), due diligence should be applied to all aspects before deploying it in critical production environments. The following steps can serve as guidelines for such an adoption:

1. Analyze the existing systems, looking for servers that are complementary in terms of workload patterns and resource consumption. For example, application servers with peak loads during office hours and batch-processing servers with peak loads after business hours are good candidates for consolidation through virtualization (a sketch of this analysis follows the Server Efficiency discussion below).
2. Evaluate the cost benefits of consolidating servers, considering the savings in infrastructure, administration and maintenance. Weigh this benefit against the potential issues that consolidation and virtualization can bring, such as performance overhead and impact on SLAs.
3. Once the cost-benefit analysis provides a go-ahead, pilot a virtualization project. Ideal candidates for such pilots are applications in test and development environments, since they will not affect critical production servers.
4. Ensure the compatibility of existing software and services with the new virtualization technology, and watch for problems that may arise post-virtualization.
5. Evaluate the performance of the new system and measure whether it is within acceptable limits. Parameters to measure could include system throughput, response times (in the case of service-oriented architecture systems), latency and security.
6. Run the pilot for a sufficient duration to ensure that the system runs stably under virtualization.
7. Repeat the above steps for other suitable server systems and deploy on a wide scale.

Server Efficiency

The primary sales message for advocating virtualization is that companies are leaving money on the table: a high percentage of servers are woefully underused. Statistics on server utilization, as measured by CPU utilization percentage, differ depending on the source; reported figures range anywhere from 5 percent to 40 percent, and Avanade and independent industry surveys estimate that utilization rates of 20 percent or less are more common. It warrants stating the obvious: tremendous inefficiency through excess capacity exists in data centers. One Gartner report found that even underutilized servers use high amounts of energy, and that increasing utilization levels to 60 percent and more requires only modest increases in power. The effective use of virtualization can reduce server energy consumption by up to 82 percent and floor space by 85 percent. Server virtualization can improve the use of many resources that companies have already invested in for administration, facilities and hardware.
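Step 1 of the suggested approach, finding servers with complementary load patterns, can be approximated with a simple check over hourly utilization profiles. The profiles and the 70 percent ceiling below are hypothetical placeholders; in practice the data would come from the monitoring history of the candidate servers.

```python
# Hourly CPU utilization profiles over one day (hypothetical figures).
office_hours_app = [0.05] * 8 + [0.45] * 10 + [0.05] * 6   # peaks during office hours
overnight_batch  = [0.50] * 6 + [0.05] * 14 + [0.50] * 4   # peaks overnight
CEILING = 0.70                                              # assumed safe combined load

combined = [a + b for a, b in zip(office_hours_app, overnight_batch)]
peak = max(combined)
if peak <= CEILING:
    print(f"Complementary workloads: combined load peaks at {peak:.0%}")
else:
    print(f"Not complementary: combined load peaks at {peak:.0%}")
```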

The ROI on Virtualization


In order for businesses to know whether virtualization will help meet their financial goals, it's important to conduct a financial analysis to determine the total cost of ownership of the company's server landscape. Avanade created a modeling tool that calculates and compares virtualization costs, so clients can gain a detailed picture that can help them decide if, in fact, virtualization makes financial sense. The Avanade Virtualization Business Case Estimator (Figure 1) helps quantify the impact of server virtualization on cost savings, energy consumption, IT staffing, carbon footprint and systems maintenance.

In the model, Avanade uses as many as 150 input variables that are analyzed to determine the financial impact of moving to a virtualized data center. That leads to a calculation matrix that can show clients the short- and long-term impact of data center virtualization. Clients typically realize short-term benefits when server virtualization is integrated into a strategic investment plan. Here are examples of average results garnered with the Avanade Virtualization Business Case Estimator: Figure 2 shows the difference between a non-virtualized server and a virtualized environment when it comes to power costs.
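As a rough stand-in for the kind of power-cost comparison Figure 2 illustrates, the sketch below compares the annual electricity cost of a large fleet of lightly loaded servers with a smaller consolidated fleet. The wattages, server counts and tariff are invented for illustration; they are not Avanade's figures and ignore cooling, UPS and facilities overheads.

```python
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12                                   # assumed electricity tariff

fleets = {
    "Non-virtualized (100 servers @ 250 W avg)": (100, 250),
    "Virtualized (23 hosts @ 400 W avg)":        (23, 400),
}

for label, (count, avg_watts) in fleets.items():
    kwh_per_year = count * avg_watts * HOURS_PER_YEAR / 1000.0
    print(f"{label}: {kwh_per_year * PRICE_PER_KWH:,.0f} per year in electricity")
```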

Conclusion

The economic climate is forcing companies to look for every possible means of increasing efficiency, flexibility and cost effectiveness. Strategic companies will capitalize on the current business crisis to create future opportunities, and server virtualization offers possibilities in an environment where some organizations might be paralyzed with fear. Virtualization involves a long-term strategy: it transforms how a company approaches IT and requires a change in how companies see their computing needs evolving and how they want to address them. Virtualization is the foundation for building an optimized data center that can help companies improve server efficiency, lower energy costs and enhance IT flexibility. Avanade believes that companies can confront the risks and rewards that come with virtualization in a way that optimizes their data center in the most efficient and flexible manner. But don't think of it as a quick fix; think of it as a future fix that can pay off in the long run, and Avanade is committed to helping companies realize results with virtualization.
