Introduction
Companies looking to use enterprise applications have many decisions to make when considering the deployment and ongoing support of those applications. One consideration is whether to deploy into a hardware-virtualized environment. Hardware virtualization involves hiding the specifics of a particular piece of server hardware from an operating system. It also involves the isolation of multiple instances of hardware-virtualized operating systems from one another.
Table of Contents
Introduction
Virtualization Technology
VMware
LiveCycle ES on VI3/vSphere 4
Deployment Architecture for Tests
Tools Used for Testing
Provisioning a VM for Adobe LiveCycle ES
Performance Best Practices When Building VMs
Conclusions
References
The major benefits of hardware virtualization are:
- The ability to run multiple instances of disparate operating systems on the same host. Legacy applications that run only on Windows 3.1 or Windows 95 can still be maintained alongside instances of Windows 7.
- Reduced server sprawl and the resulting energy and cost savings.
- Faster provisioning of server environments, from weeks of effort to a small number of hours.
- Quicker failover and disaster recovery.
- Higher server utilization.

Software and application testing groups were once the primary users of virtualized environments for enterprise applications, but this is no longer the case. Virtualization technology has become mature and reliable, and enterprise IT groups now have enough confidence in it to deploy applications into virtualized environments for production use.

This technical paper discusses the various aspects of deploying Adobe LiveCycle ES into a VMware ESX virtualized environment. It focuses on VMware ESX 3.5 (part of VMware Virtual Infrastructure 3) and 4.0 (part of vSphere). Other leading virtualization vendors are Microsoft, IBM (IBM POWER architecture), and Sun Microsystems (Sun/Fujitsu SPARC architecture). Tests conducted by Adobe at VMware's ISV Validation Lab show that Adobe LiveCycle ES can successfully leverage VI3 and vSphere 4.0 technologies such as VMotion, DRS, and Fault Tolerance.

Long-lived orchestrations that involve user tasks were not tested because of time constraints; developing load test scripts for long-lived orchestrations is a major effort. LiveCycle ES2 was not tested because it was released later. Determining the performance differences between virtualized and physical machines was not a goal and was not tested.
Virtualization Technology
Isolation (containment) of software execution environments has been a feature on mainframes for decades. VMware brought the technology to the x86 world circa 1999 and popularized it to such an extent that every major player in the industry now has a virtualization offering. In the x86 world, VMware continues to dominate the market.
Background
IT departments all over the world are today focused on server consolidation. These efforts are mainly driven by CIOs concerned about server sprawl and the resulting expenses related to electric power and data center space. Server sprawl wastes power because of the high heat dissipation from server components, especially CPUs, and the high-capacity air conditioners required to cool data centers. Server sprawl occurred because system administrators needed to physically contain applications so that one did not adversely affect another. The easiest way to accomplish this was to deploy each application on its own server. Also, since they could not confidently and reliably predict future growth, administrators tended to over-provision their server hardware, thus resulting in significant under-utilization [1]. Early attempts at controlling this server sprawl resulted in the widespread use of hardware server blades, which are essentially very thin servers housed inside a bigger chassis and mounted on a server rack. Later, containment was implemented in software, a technology we today call hardware virtualization.
Terminology
Although Sun Microsystems continues to use the term containers for isolated software execution environments, the popular term today is virtual machines, or VMs, especially in the x86 world. IBM uses the term logical partitions (LPARs) for its AIX operating system. Implementations of hardware virtualization can be grouped into two buckets: those that provide operating system (OS) isolation (virtual machines, or VMs) and those that provide only application isolation (virtual environments, or VEs).

Virtual Machine (VM)
A virtual machine is a representation in software of a physical machine. It presents an abstraction of the CPU, memory, storage, and other resources that would normally be present in a physical machine to the operating system and the applications that reside in it. Virtual machines require the installation of a complete operating system in each VM. In the case of VMware, you can run different operating systems on the same host.
x86 (1)
In the x86 world, VMware's ESX Server and Microsoft's Hyper-V are examples of this virtual machine model. VMware's implementation can host VMs running Windows, Linux, Novell NetWare, and Solaris x86. Microsoft's implementation can host Windows as well as Novell SUSE Linux Enterprise 10 VMs.

Virtual Environment (VE)
Also known as OS virtualization, virtual environments do not require the installation of a separate operating system in each VE. As a result, only one kind of OS can run on a given host; you cannot mix and match different operating systems on the same host as you can, for example, in a VMware ESX environment. However, Sun has technology that lets you run Solaris 8 and 9 containers on Solaris 10.
Figure 2: Virtual Environment Model (diagram: multiple VEs running on shared hardware)
x86
Parallels Virtuozzo is an example. RingCube's MojoPac is a VE implementation for Windows desktop operating systems such as Windows XP and Windows Vista.

Paravirtualization
This is a technology pioneered by Xen that allows certain operating systems, such as Red Hat Enterprise Linux 5.x, to become aware that they have been virtualized. Using special paravirtualized device drivers [2], it allows near-native performance of VMs. None of the Windows operating systems support this technology. Typically, paravirtualization implies modifying the operating system source code. VMware supports paravirtualization through its work on the paravirt_ops and Virtual Machine Interface (VMI) APIs. VMware also recommends that users install a set of tools in the Windows and Linux guest operating systems that enhance the performance of these systems in virtual machines running on ESX. These include enhancements to the networking stack and the graphics device drivers.
(1) The CPU architecture designed and implemented by Intel; AMD also implements the x86 architecture. In contrast to RISC, this architecture is CISC (Complex Instruction Set Computing).
VMware
Virtualization technologies tested and supported by Adobe for LiveCycle ES include VMware ESX, IBM LPAR, and Solaris Zones. VMware created the virtualization market in the x86 world. Their product offerings can be grouped under two buckets: hosted and bare-metal. Bare-metal virtualization offers better performance than hosted virtualization.
Hosted Virtualization
Hosted virtualization requires a host OS which hosts other guest OSes that are contained in virtual machines.
Figure 3: Hosted Virtualization Model (diagram: guest OSes, such as Solaris x86, in VMs running on a host OS above the hardware)
Workstation
This is VMware's first desktop offering. It is highly popular with quality assurance and testing groups, who face the daunting task of creating and maintaining tens or hundreds of test environments simultaneously. Some of the more advanced virtualization features, such as the ability to record and replay actions happening in a virtual machine, appear first in the VMware Workstation product.

VMware Server
This is VMware's original server virtualization offering. It requires a full host operating system. Today, VMware Server is offered as a free download and is an ideal starting point for users to experience the benefits of virtualization. VMware's expectation is that customers who would like to implement large-scale virtualization solutions can easily migrate from VMware Server to VMware Infrastructure.
Bare-metal Virtualization
A bare-metal virtualization platform is a bare-bones, operating-system-like kernel called the hypervisor, usually with a management console that runs on top of the kernel. There is no host OS. In VMware's case, the hypervisor is composed of a virtual machine monitor component and a kernel component, where the latter provides all of the hardware abstraction. These components are not based on traditional operating system technology; they were designed specifically for efficiently managing virtual machines.
Figure 4: Bare-metal Virtualization Model (diagram: guest OSes in VMs running directly on the hypervisor above the hardware)
VMware ESX
VMware ESX is part of VMware Infrastructure 3 (VI3); the latest version is part of vSphere (v4.0). It does not require a host operating system and is itself a bare-bones operating system called a bare-metal hypervisor. By eliminating the extra host OS layer, overhead is reduced, performance is significantly better than VMware Server, and the product is more suitable for large-scale virtualization deployments.

VMware ESXi
VMware ESXi has all the functionality of VMware ESX, but it removes the management console operating system (COS) that is present in VMware ESX, for improved security and manageability. Due mainly to this, its memory footprint is only 32 MB, so the entire hypervisor can fit on a single ROM chip. It is available as an installable package or embedded on a ROM chip; server vendors are expected to ship this chip on the motherboards of the servers they sell. The installable ESXi is now free to download and use, a change in VMware policy and licensing announced in late July 2008.
Virtual Infrastructure 3 (VI3) and vSphere 4.0
While VMware ESX virtualizes the hardware resources on a single server, VI3 and vSphere 4.0 can pool all of the virtualized hardware resources of all the servers in an entire data center. Instead of provisioning servers, IT departments can provision VMs from their pool of CPUs, storage, and network cards. In addition, VI3 and vSphere 4.0 contain management infrastructure that provides additional enterprise features to virtualized environments, such as live migration of running VMs from one host to another (VMotion), VMware Distributed Resource Scheduler (VMware DRS), and VMware High Availability (VMware HA). vSphere 4.0 provides an additional feature called Fault Tolerance (VMware FT). These features are based on a separately licensed product called VirtualCenter. VirtualCenter requires its own dedicated server and a back-end database (Oracle or Microsoft SQL Server 2005) to store its management data.
VMotion
VMotion is a VI3/vSphere 4 feature that allows the live migration of a running VM from one ESX host to another without users experiencing disruption. It is a memory-to-memory transfer between ESX hosts and is usually completed within 5-10 seconds. VMotion can be scheduled to run automatically or executed manually. The technology has certain prerequisites. Mainly, the processor types of the source and target machines of a VMotion-based move must be of the same family (Intel or AMD) and of compatible generations (Xeon or Opteron revision). For example, VMs running on an ESX host with Intel Xeon CPUs cannot be migrated to an ESX host with AMD Opteron CPUs, which are designed with the Non-Uniform Memory Access (NUMA) architecture, and vice versa [7]. Also, VMs with four provisioned vCPUs cannot be migrated to an ESX host with only two pCPUs.
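As a rough illustration, the two constraints just described (matching CPU vendor and enough physical CPUs on the target host) can be sketched as a simple compatibility predicate. The field names below are hypothetical, and a real VMotion compatibility check is more involved (CPU generations and feature flags, shared storage, network configuration, and so on):

```python
# Hypothetical sketch of the two VMotion constraints discussed above:
# matching CPU vendor and enough physical CPUs on the target host.
# Real compatibility checking also covers CPU generation/feature flags,
# shared storage, and network configuration.

def vmotion_candidate(vm, target_host):
    """Return True if the VM passes the two basic checks."""
    same_vendor = vm["cpu_vendor"] == target_host["cpu_vendor"]
    enough_cpus = vm["vcpus"] <= target_host["pcpus"]
    return same_vendor and enough_cpus

vm = {"cpu_vendor": "Intel", "vcpus": 4}
print(vmotion_candidate(vm, {"cpu_vendor": "Intel", "pcpus": 8}))  # True
print(vmotion_candidate(vm, {"cpu_vendor": "AMD", "pcpus": 8}))    # False: vendor mismatch
print(vmotion_candidate(vm, {"cpu_vendor": "Intel", "pcpus": 2}))  # False: only two pCPUs
```

In practice these checks are performed by VirtualCenter itself; the sketch only makes the constraints from the text explicit.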
Partial DRS
VirtualCenter will inform the system administrator that one or more ESX hosts are under heavy load and suggest migration recommendations for virtual machines. It does not initiate VM migration to other, less loaded hosts, except for the initial placement of a VM at power-on onto a DRS-enabled ESX cluster.

Fully Automated DRS
VI3/vSphere 4 will automatically initiate the migration of VMs from ESX hosts under heavy load to other, less loaded ESX hosts. VI3/vSphere 4 checks whether the ESX hosts are under load every five minutes. This default behavior can be changed by editing the file vpxd.cfg on the VirtualCenter server [8].
As the screenshot of the orchestration shows, the following actions are performed serially:
1. Read an XML file from the server file system. This XML file contains form data.
2. Set the contents of this XML file to a process variable of type XML.
3. Pass this data to the Forms ES component of LiveCycle along with a form template (.XDP) from the LiveCycle Repository (database). Keep the resulting PDF form in a process variable of type document.
4. Read another PDF file from the server file system.
5. Using Assembler ES, combine the previously created PDF form and the PDF into a single PDF.
6. Apply a Rights Management ES policy to the combined PDF.
7. Certify this PDF with a digital signature, using a document signing credential kept in the LiveCycle Trust Store.
8. Apply Reader Extensions rights to the PDF.
9. Remove the Rights Management policy that was previously applied.
10. Remove the Reader Extensions rights that were previously applied.
11. Convert the PDF to the PDF/A archival format and keep the result in a process variable of type document.
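The serial flow of the benchmark orchestration can be sketched as a simple pipeline. Every name below is hypothetical and merely stands in for the corresponding LiveCycle ES service operation; the point of the sketch is only to make the strict ordering of the steps explicit:

```python
# Hypothetical sketch of the benchmark orchestration's serial steps.
# None of these names come from the LiveCycle API; each stub simply
# records its step so the ordering of the pipeline is explicit.

def run_benchmark_orchestration():
    steps = [
        "read_form_data_xml",        # read XML data, set process variable
        "render_pdf_form",           # Forms ES + XDP template from the Repository
        "read_second_pdf",           # read another PDF from the file system
        "assemble_combined_pdf",     # Assembler ES combines the two PDFs
        "apply_rights_management",   # Rights Management ES policy
        "certify_pdf",               # Trust Store signing credential
        "apply_reader_extensions",   # Reader Extensions rights
        "remove_rights_management",  # undo the policy
        "remove_reader_extensions",  # undo the rights
        "convert_to_pdfa",           # PDF/A archival format
    ]
    executed = []
    for step in steps:  # performed serially, never in parallel
        executed.append(step)
    return executed

result = run_benchmark_orchestration()
print(len(result), result[0], result[-1])
```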
Servlet
A servlet was used to invoke the orchestration synchronously. Once the orchestration finished executing, the resulting output document was retrieved from the process variable by this servlet and streamed back to the client.

HP LoadRunner
LoadRunner from HP is a load-testing tool. Using its scripting language, a simple script was developed and run to drive the servlet that invokes the orchestration. This was the load generator used for tests conducted at the VMware ISV Validation Lab.

Borland SilkPerformer
For tests conducted at Adobe's Technical Marketing Lab, SilkPerformer from Borland was used as the load-generating tool. Using its Benchmark Description Language (BDL) scripting language, a simple script was developed and run to drive the servlet that invokes the orchestration.

Other Test Collateral
The XML data, the XDP form template, and the PDF document used for the tests are attached to this document as PDF attachments.
Tests Performed
Several tests were conducted at VMware's ISV Validation Lab, as well as in Adobe's Technical Marketing Lab, to assess the performance of LiveCycle ES on VI3/vSphere 4. No antivirus software was run on any of the VMs involved in the tests; antivirus software tends to have a negative performance impact of up to 30% on both physical and virtual machines.

Baseline Test
A baseline test was conducted to establish a performance baseline against which to compare the results of subsequent tests. Throughput achieved with the eTech Benchmark Orchestration for LiveCycle was 172 transactions per hour, where one transaction is defined as a single invocation of the eTech Benchmark Orchestration by one user. Note that the servers used for this baseline test were all virtualized.

VMotion Test
In tests conducted at VMware's ISV Validation Lab, a VM hosting Adobe LiveCycle ES 8.2.1 under load was successfully migrated from one ESX host to another without any transaction failures. Compared to the throughput achieved with the baseline test (172 transactions per hour), this test, which was conducted during the VMotion, achieved a throughput of 165 transactions per hour. The VMotion operation itself took about 1 minute.

Distributed Resource Scheduler (VMware DRS) Tests

Manual DRS Test
This scenario was not tested because it is practically the same as partial DRS.

Partial DRS Test
In tests conducted at VMware's ISV Validation Lab, a VM hosting Adobe LiveCycle ES 8.2.1 under load was successfully migrated from one ESX host to another without any transaction failures. Compared to the baseline throughput (172 transactions per hour), this test achieved a throughput of 161 transactions per hour.
Fully Automated DRS Test
DRS was configured to be aggressive (as opposed to conservative) and fully automated. There are five degrees of aggressiveness that determine how aggressively DRS tries to balance resources across ESX hosts in a VI3/vSphere 4 cluster [8]. Adobe LiveCycle VMs running on an ESX host under high load were automatically migrated (VMotioned) to another, less loaded ESX host without any transaction failures. Compared to the baseline throughput (172 transactions per hour), this test achieved a throughput of 161 transactions per hour.
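To put the reported figures in perspective, the throughput overhead of live migration can be computed directly from the baseline and test numbers. The arithmetic below is a sketch using the per-hour figures reported above:

```python
# Throughput overhead during migration, computed from the figures
# reported above (transactions per hour).
BASELINE = 172   # baseline test, all servers virtualized
VMOTION = 165    # throughput measured during the VMotion test
DRS = 161        # throughput measured during the DRS tests

def overhead_pct(test, base=BASELINE):
    """Throughput drop relative to the baseline, as a percentage."""
    return round(100.0 * (base - test) / base, 1)

print(overhead_pct(VMOTION))  # about a 4.1% drop during VMotion
print(overhead_pct(DRS))      # about a 6.4% drop during DRS-triggered migration
```

In other words, the tests showed a single-digit percentage drop in throughput while migrations were in flight, with no transaction failures.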
Performance Best Practices When Building VMs
- When building LiveCycle VMs, use independent, persistent disks for best performance.
- If possible, connect all VMs in the same LiveCycle configuration (database servers, web servers, application servers) to the same vSwitch. Configured this way, network traffic between them is transferred in memory rather than over the wire, which is slower.
- Disable screensavers and menu animations in LiveCycle VMs.
- Do not enable paravirtualization for Windows operating systems, because they currently do not support the Virtual Machine Interface (VMI).
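For the independent, persistent disk recommendation, the disk mode can be set in the VM's configuration file. The fragment below is an illustrative .vmx sketch for the first SCSI disk; the device numbering and file name are examples, and the same setting is also exposed through the VI Client's virtual disk properties, so verify the exact entries against VMware's documentation for your ESX version:

```
# Illustrative .vmx fragment: first SCSI disk set to independent,
# persistent mode so it is unaffected by snapshots.
scsi0:0.present = "TRUE"
scsi0:0.fileName = "livecycle-es.vmdk"
scsi0:0.mode = "independent-persistent"
```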
Licensing Considerations
Since Microsoft licenses Microsoft Office per installed instance, each VM hosting LiveCycle PDF Generator ES must be licensed for Microsoft Office. Since Adobe Acrobat is required for PDF Generator, it must also be licensed per VM.
Disaster Recovery
It is faster to fire up archived LiveCycle VMs than to build a server from scratch in the event of a disaster. Instead of installing the OS, the J2EE application server, and LiveCycle, the key virtual machines containing the LiveCycle installation can be quickly cloned and deployed to a secondary failover site. If a disaster occurs at the primary site, the failover site's instances of the critical VMs can be started automatically by VMware's Site Recovery Manager once the primary site goes down. This capability also allows for easy testing of the disaster recovery plan.
Conclusions
There are several compelling reasons for deploying Adobe LiveCycle ES in a VI3/vSphere 4 environment:
- Applications involving Process Management and Workspace can be deployed on virtualized clusters. However, it should be noted that long-lived orchestrations that include user tasks were not tested.
- Although Adobe's tests were conducted with IBM WebSphere ND and JBoss AS as the J2EE application server platforms, we expect the results to be essentially similar for Oracle WebLogic AS.
- LiveCycle ES works well with VMware VMotion, DRS, HA, and FT.
- LiveCycle PDF Generator ES is a very good candidate for VI3/vSphere 4 deployment if Microsoft Office or OpenOffice native documents are being converted to PDF.
- If calls to LiveCycle are essentially stateless, a farm of non-clustered but load-balanced LiveCycle VMs can be deployed on VI3/vSphere 4, taking advantage of the High Availability offered by VI3/vSphere 4. Non-clustered LiveCycle VMs are not aware of each other.
References
[1] Foxwell, H., and Rozenfeld, I., "Slicing and Dicing Servers: A Guide to Virtualization and Containment Technologies," Sun BluePrints Online, October 2005.
[2] "Xen: Enterprise Grade Open Source Virtualization, Inside Xen 3.2," Xen whitepaper, 2006.
[3] Phelps, J.R., "How to Decide on a Linux Server Platform," Gartner Research, September 28, 2007.
[4] Day, B., "Virtualization Trends On IBM's System p: Unraveling The Benefits In IBM's PowerVM," Forrester Research, February 5, 2008.
[5] Hochstetler, S., Castro, A., Griffiths, N., Ramireddy, N., and Tate, K., "Getting Started With PowerVM Lx86," IBM Redpaper, 2nd Edition, May 2008.
[6] Cherry, M., "Hyper-V Released," Directions on Microsoft, July 21, 2008.
[7] "Performance Best Practices and Benchmarking Guidelines," VMware whitepaper, 2008.
[8] "DRS Performance and Best Practices," VMware whitepaper, 2008.
[9] Gammage, B., "Gartner Interviews Ian Pratt, Virtualization Visionary," Gartner Research, August 11, 2008.
[10] "DRS Performance and Best Practices," VMware whitepaper, 2008.
[11] Wolf, C., "Let's Get Virtual: A Look at Today's Server Virtualization Architectures," Burton Group Data Center Strategies In-Depth Overview, Version 1.0, May 14, 2007.
[12] Irving, N., Jenner, M., and Kortesniemi, A., "Partitioning Implementations for IBM eServer p5 Servers," IBM Redbook, 3rd Edition, February 2005.
We welcome your comments. Please send any feedback on this technical guide to LCES-Feedback@adobe.com. Adobe, the Adobe logo, Flex, LiveCycle, PostScript, and Reader are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. All other trademarks are the property of their respective owners.
Adobe Systems Incorporated 345 Park Avenue San Jose, CA 95110-2704 USA www.adobe.com
2009 Adobe Systems Incorporated. All rights reserved. Printed in the USA. 11/09