
Caroline Boucher
English 2
Professor Von Schilling
5/7/13

Desktop Virtualization

Since the mid-twentieth century, computers have opened up a new era in communication and manufacturing. They are an essential tool in every field and are found in practically all office and workspace environments today, and computer technology is evolving faster than ever before. A notable technological movement, one that is taking over the role of the personal computer, is remote desktop virtualization: the approach of separating the user-facing operating system from the client that is used to access the data held within. Essentially, it is a relatively simple computer remotely controlling another, more powerful computer. There are several types and models of desktop virtualization, and they can be divided along the lines of whether the primary operating system is executed locally or hosted remotely. The former is a standard PC; the latter is a remotely hosted desktop in which the operating system, applications, and user data that are normally stored on the user's computer (the endpoint) are instead hosted on a server in the company's data center. With this arrangement, the user can access his or her desktop environment from anywhere on the internal network and from essentially any device, ranging from zero clients and thin clients to fat clients and even mobile devices. The advantages of desktop virtualization are numerous and include easy administration and management, stronger security and policy enforcement, and flexibility in design and deployment. However, the technology does have its downfalls, and despite seeing significant growth in certain areas, it just has not been optimized for personal and at-home use, primarily due to price and back-end overhead costs.

Just as any major city needs a solid infrastructure and plan laid out before new buildings go up, a virtual desktop environment needs an underlying layout and interconnected frameworks that provide for and support the entire network. Desktop virtualization requires an extensive and detailed network infrastructure in order to run smoothly and successfully. There are bandwidth, hardware, and licensing requirements that must first be met. Although these requirements may differ slightly from one virtualization technology to another, they all share generally the same guidelines. Network traffic capacity, or bandwidth, plays a critical role in the user experience because it carries the data exchanged between the servers and the endpoints' (users') displays. Proper network bandwidth should be estimated for different user activities such as Internet browsing and graphics work; anywhere from 43 kbps to 1800 kbps per session should be expected (Feller). These bandwidth requirements differ based on the transmission protocol used for the display, the two most common being RDP and PCoIP. RDP is the more widely used standard, but it is not optimized for bandwidth usage. PCoIP, on the other hand, is optimized for bandwidth but is not as widely supported by endpoints. The primary difference between the two protocols is how they render the screen on a redraw: RDP always redraws the entire screen, ensuring that the picture is always correct, while PCoIP redraws only the areas that have changed due to user activity (Boucher).
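
To make the sizing exercise above concrete, the short Python sketch below adds up per-session bandwidth for a mixed pool of virtual desktop users. The per-session rates are drawn from the 43 to 1800 kbps range cited above; the specific values, user counts, and split between light and graphics-heavy sessions are hypothetical assumptions, not recommendations.

    # Back-of-envelope bandwidth estimate for a pool of virtual desktops.
    # Per-session rates fall inside the 43-1800 kbps range cited above (Feller);
    # the specific values and user counts below are hypothetical examples.

    KBPS_LIGHT = 150    # assumed rate for office/text-oriented sessions
    KBPS_HEAVY = 1800   # assumed rate for graphics-rich sessions

    def aggregate_bandwidth_mbps(light_users: int, heavy_users: int) -> float:
        """Estimated peak bandwidth, in Mbps, for the given mix of sessions."""
        total_kbps = light_users * KBPS_LIGHT + heavy_users * KBPS_HEAVY
        return total_kbps / 1000.0

    if __name__ == "__main__":
        # Hypothetical office: 80 light sessions and 20 graphics-heavy sessions.
        print(f"Estimated peak demand: {aggregate_bandwidth_mbps(80, 20):.1f} Mbps")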

Next, there are numerous hardware requirements that should be considered in order to support the heavy demands of the users. Hardware requirements break down into two main categories: the server and the endpoint. On the server, the biggest factors, in no particular order, are overall processing power, memory (RAM), and disk space. All of these go hand in hand and must be built up in proportion to one another so as not to create any significant bottlenecks. For memory, the range can be anywhere from 8 GB to 2 TB of physical RAM, which will be shared among the virtualized desktops; it is hard to give a concrete number without taking into account the final deployment and the expectations for the environment. For processors, multiple multi-core processors with a large L2 cache are necessary to allow the server to handle all the requests quickly enough (Feller). As for disk space, a common practice is to create the VMs with minimal disk space, while still allowing for overhead and growth, and then provide each user with a shared network drive on which to store all personal files.

One of the last big requirements to consider is licensing. The traditional licensing model dictates that every user has a personally licensed copy of the OS and each program. This is very costly, and it can be overcome because the VMs live in a shared environment, so many licenses are now sold on a usage basis rather than a per-user basis. For example, a company with 300 employees may only ever expect 100 of them to be using a program at the same time. Instead of buying 300 separate software licenses, it might purchase only 150 and cut licensing costs in half (Boucher)! The extra 50 licenses provide headroom, so that even if more than 100 employees use the program simultaneously, the company remains in compliance with its licensing agreements. This shifts software licensing from a purely technological concern to more of a legal one, yet it is one that should not be forgotten. In addition to productivity software, it is highly encouraged to have anti-virus software and firewall protection installed on both the endpoints and the host system. Requiring a secure connection between the remote client and the desktop operating system makes the service safer to use (Hess).
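
As a rough illustration of the sizing and licensing arithmetic above, the Python sketch below estimates how many desktops fit in a server's physical RAM and how many concurrent-use licenses to purchase. The server size, per-VM memory allocation, host reserve, and 1.5x headroom factor are illustrative assumptions rather than vendor guidance; only the 100-concurrent-user figure comes from the example in the text.

    import math

    # Hedged capacity-planning arithmetic; the inputs below are illustrative.

    def desktops_per_server(server_ram_gb: int, ram_per_vm_gb: int,
                            host_reserve_gb: int = 8) -> int:
        """How many virtual desktops fit in a server's physical RAM after
        reserving some memory for the hypervisor/host itself (assumed 8 GB)."""
        return (server_ram_gb - host_reserve_gb) // ram_per_vm_gb

    def licenses_needed(expected_peak_users: int, headroom: float = 1.5) -> int:
        """Concurrent-use licenses to buy: expected peak plus a safety buffer.
        The 1.5x headroom factor is an assumption, not a vendor rule."""
        return math.ceil(expected_peak_users * headroom)

    # A 512 GB server hosting 4 GB desktops (hypothetical sizes):
    print(desktops_per_server(512, 4))   # -> 126 desktops
    # The example from the text: roughly 100 concurrent users out of 300 employees:
    print(licenses_needed(100))          # -> 150 licenses instead of 300 per-user copies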

Now that the infrastructure has been laid out, what exactly is desktop virtualization, and how do users access their data? The answer is surprisingly simple and rests on an understanding of the client-server model: a client device, such as a zero client, thin client, fat client, or mobile device, requests information over the network, and the server responds to the client and returns the results (Rouse). Desktop virtualization takes this a step further; it is all about migrating the traditional desktop session away from the physical computer and hosting it on a centralized server in the company's data center. An operating system runs within virtualization software that emulates actual hardware as a standard set of virtual hardware (Hess). Examples of virtualization software include Oracle's VirtualBox, Parallels Desktop, VMware's Workstation, QEMU, Citrix's XenApp, and Microsoft's Virtual PC. A virtual machine (VM) hosts a user's complete desktop identity, settings, applications, and rights on a server, which the user can access over the network via a remote display protocol (Metzler). All applications and data remain on the remote system, with only the display, keyboard, and mouse information communicated to the physical endpoint.
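
The request-and-response exchange at the heart of the client-server model described above can be sketched in a few lines of Python. This is only a generic, self-contained illustration of a client asking a server for something and getting an answer back; it is not an implementation of RDP, PCoIP, or any actual remote display protocol, and the address, port, and messages are hypothetical.

    # Minimal, generic illustration of the client-server model: a client sends
    # a request over the network and the server returns a result. This is NOT
    # an implementation of RDP or PCoIP; address and messages are hypothetical.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5000  # hypothetical loopback address for the demo

    def serve_one_request(listening: socket.socket) -> None:
        """Accept a single connection, read the request, and send a reply."""
        conn, _addr = listening.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"server result for: {request}".encode())

    def client_request(message: str) -> str:
        """Connect to the server, send one request, and return its response."""
        with socket.create_connection((HOST, PORT)) as sock:
            sock.sendall(message.encode())
            return sock.recv(1024).decode()

    if __name__ == "__main__":
        with socket.create_server((HOST, PORT)) as listening:  # server is ready first
            worker = threading.Thread(target=serve_one_request, args=(listening,))
            worker.start()
            print(client_request("open spreadsheet"))           # client asks, server answers
            worker.join()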

Broadly speaking, there are two types of desktop virtualization. Both types take the physical machines off users' desks and house them in racks in the datacenter (Rouse). The first and more complex type, local virtualization, runs a protected desktop environment in which applications rely more on the physical hardware and system than on the network connection (Brodkin). It is mainly achieved when a type 2 hypervisor is installed on a server and the individual operating systems are presented remotely to the endpoints. Hypervisors are a thin layer of software code used to achieve dynamic resource sharing (Rouse). The cited figure shows one physical system with a type 2 hypervisor running on a host operating system and three virtual systems using the virtual resources provided by the hypervisor (Type 2 Hypervisor). Type 2 hypervisors are mainly used on systems where efficiency is less critical and connectivity is not always guaranteed; the information and applications used are synced whenever connectivity is reestablished, which enables more offline usage. This allows for multiple users on the same system, each with his or her own identity. Type 2 hypervisors are also used for development and testing by IT research departments (Brodkin). The second type, hosted desktop virtualization, requires a constant network connection so that a user's profile, operating system, and applications stay continuously synced to the endpoint; there is no offline access. This method is usually achieved with a blade server: the user's keyboard, monitor, and mouse plug into an endpoint at the desk that is wired via a direct connection to the shared blades in the datacenter.
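
For a concrete, if simplified, sense of the type 2 hypervisor case described above, the sketch below drives Oracle's VirtualBox (one of the products named earlier) through its VBoxManage command-line tool from Python. The VM name, guest OS type, and resource sizes are hypothetical, and exact flags may differ between VirtualBox versions, so treat this as an outline rather than a recipe.

    # Hedged sketch: creating and starting one guest desktop on a type 2
    # hypervisor (Oracle VirtualBox) via its VBoxManage CLI. The VM name,
    # guest OS type, and resource sizes are hypothetical examples.
    import subprocess

    VM_NAME = "pooled-desktop-01"  # hypothetical VM name

    def vbox(*args: str) -> None:
        """Run one VBoxManage command and raise if it fails."""
        subprocess.run(["VBoxManage", *args], check=True)

    vbox("createvm", "--name", VM_NAME, "--ostype", "Windows10_64", "--register")
    vbox("modifyvm", VM_NAME, "--memory", "4096", "--cpus", "2")  # 4 GB RAM, 2 vCPUs
    vbox("startvm", VM_NAME, "--type", "headless")                # no local console window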

Furthermore, it is important to distinguish the types of client devices users deploy to access the desktop. This goes back to the client-server model, in which the device requests and eventually receives information from the server. Specifically, there are three main types of client devices: fat clients, thin clients, and zero clients. Fat clients, also called heavy or thick clients, are full-featured computers that include most or all of the essential hardware and software components on board, so they can function independently without network connectivity. A fat client can access other virtual machines over the network as well as run its own local OS. These are the computers that nearly all individuals use for personal, at-home use because they are highly customizable. Smartphones and tablets also fall into this category, but mobile virtualization is a whole other topic to be discussed! A thin client is a networked computer with far fewer built-in components and installed applications. Thin clients essentially require network access at all times in order to connect to the server, where most of the raw processing is done (Gaskin). These computers are best suited to public environments such as airports and libraries, where the system can be easily managed and secured remotely. However, according to J. Peter Bruzzese, a writer for InfoWorld, "The whole dilemma with thin clients is that they have hardware requiring support and maintenance logically, not to mention energy and cooling, but also include some form of operating system, such as Windows XP, Windows CE (Windows Embedded), or a Linux variant. They're as complex as regular PCs. As you can see, 'thin' doesn't mean low maintenance" (Bruzzese). The thin PC simply comes in a much smaller form factor. Lastly, there is the zero client: essentially a remote video output with human interface device (HID) input ports. It displays the remote computer session on a local monitor and allows the user to control his or her session on the server via keyboard and mouse. There is no true OS installed on the endpoint, only basic firmware that allows for network connectivity (Boucher). The zero client is a truly remote desktop experience.

With so many complex corners and terms factored into remote desktop virtualization, it is time to look at both the advantages and the challenges that come with this technology. As companies face restrictive budgets, uncertain funding, demands for bring-your-own-device (BYOD) support, and pressure to collaborate across a dispersed workforce, desktop virtualization allows them to provide a simple, cost-effective, and secure way for employees to do their jobs. The biggest benefit is greater comfort and ease for users, who can access their desktops virtually anywhere and anytime. Probably the greatest benefit on the network administrator's side is the simplicity of controlling desktops centrally and keeping sensitive company data secure. Because all the desktops are centralized on the company's servers, each one can be monitored and configured according to company policy. Because the desktop is fully separated from the hardware, users can customize it (to a degree) and yet be safe from malware and virus attacks. Also, updating the operating system and deploying new security patches can take a matter of hours instead of a few days. Since virtual desktops reside in the data center, they are backed up regularly without requiring expensive local backup software on individual PCs and laptops. Financially speaking, virtualization is generally cost effective for the company because most users are on thin client or zero client devices, which are far cheaper to purchase. These are just a few of the multitude of benefits that virtualization has to offer (Boucher).

However, it is not all that easy; problems and challenges do arise. Despite there being only a few challenges, they are very important and often overlooked. If companies want to increase efficiency and maximize the benefits of virtualization, they first have to address these issues. Resource allocation is very dynamic and unpredictable in a virtual environment, so in-depth case studies should be conducted prior to a virtualization rollout.

Without the proper infrastructure and bandwidth, the entire system is useless, and companies often are not prepared enough and are left scrambling for consultants' advice. Although virtualization comes with fewer maintenance and security issues, most current IT department workers do not have the skills required to manage the system. They are still in the old mindset of individual endpoints, each needing time and attention, while the virtualized server environment is a whole different scope of work. According to Sanjay Srivastava, a researcher at Complex Business Products, "It's a complex computing environment where all the parts are interconnected and to some degree also interdependent." This is why there is a higher need for individuals who possess a more varied skill set (Srivastava). Finally, the biggest challenge that companies face is licensing cost. Yes, virtualization might be cost effective overall, as mentioned above, but the initial startup costs can be enough to deter companies from advancing with this technology. Licensing rates are abstract and complicated, and they can be based either on the number of users or devices or simply on how those are configured (Yap). John Brand, vice president and principal analyst of the CIO group at Forrester Research, observed that there has been a mixed response from enterprises: "There are organizations that have proactively and aggressively pursued the vendors and negotiated themselves much better deals. But there are also a great number of companies that have simply accepted the licensing demands of the vendors and are paying a much higher price than they need to" (Yap).

In conclusion, it is safe to say that remote desktop virtualization is a complex, detailed, and immense subject that is sweeping the computer and technology field today. Although virtualization offers companies a wide range of benefits and conveniences, it also comes with unique challenges, which should be addressed with a pragmatic approach. IT executives must move from a purely technical-necessity mindset to a tactical and strategic one. They must also shift their thinking and adjust their processes from purely physical to a combination of physical and virtual. The rapid adoption of virtualization could exacerbate already strained communication among IT administrators across knowledge domains such as servers, networks, storage, security, and applications. So where exactly is desktop virtualization headed in the future? Recently announced at CES 2013 (the Consumer Electronics Show) was the NVIDIA VGX platform, which enables IT departments to deliver a virtualized desktop with the graphics and GPU computing performance of a PC or workstation to employees using any connected device (Kaufman). "With this platform, employees can now access a true cloud PC from any device -- fat client, thin client, zero client, laptop, tablet, or smartphone; regardless of its operating system, with the responsiveness previously only available on the workstation," says NVIDIA General Manager of the Professional Solutions Group Jeff Brown (Kaufman). The possibilities are seemingly endless!

Works Cited

Boucher, Joel. Virtualization SME. April 2013. Interview.
Brodkin, Jon. "Virtual Desktops Ripe for Deployment, Hindered by Cost." Networkworld.com. Network World, 19 Feb. 2009. Web.
Bruzzese, J. Peter. "Desktop Virtualization Clients: Fat, Thin, or Zero?" InfoWorld.com. InfoWorld, 17 Feb. 2010. Web.
Gaskin, James E. "Thin vs. Thick Clients." BizTech Magazine. CDW, 1 Sept. 2011. Web. 27 Apr. 2013.
Hess, Ken. "Desktop Virtualization vs. Virtual Desktop Infrastructure." ZDNet.com. ZDNet, 27 June 2011. Web.
Kaufman, Debra. "NVIDIA Introduces Virtualization and Remote Computing." Creativecow.net. Creative Cow Magazine, 13 Sept. 2012. Web.
Mears, Jennifer. "The 8 Key Challenges of Virtualizing Your Data Center." NetworkWorld.com. Network World, 22 Feb. 2007. Web.
Metzler, Jim. "The Impact of VDI." Networkworld.com. Network World, 30 Apr. 2009. Web.
Rouse, Margaret. "Blade PC (or PC Blade)." Whatis.com. TechTarget, Mar. 2011. Web.
Rouse, Margaret. "Type 2 Hypervisor (Hosted Hypervisor)." SearchServerVirtualization.com. TechTarget, Feb. 2012. Web.
Srivastava, Sanjay. "The Challenges and Pitfalls of Virtualization." ComplexBusinessProducts.com. Complex Business Products, 5 June 2012. Web.
Type 2 Hypervisor. N.d. Photograph. IBM Systems Software Information Center. IBM.com, June 2011. Web.
Yap, Jamie. "Virtualization Licensing Still a Minefield." ZDNet.com. ZDNet, 10 Apr. 2010. Web.
