
VMware ESX 3.x Server and VirtualCenter 2.x (GA Build Eval) Upgrade Guide
Document Version 1.

By Mike Laverick RTFM Education For Errors/Corrections please contact: mikelaverick@rtfm-ed.co.uk

This guide does change. Have you got the latest copy?

Table of Contents
Introduction
Module 1: Upgrading Virtual Infrastructure
  Executive Summary
  Technically Specific Issues
Module 2: In Place Upgrade of VC
  Executive Summary
  In Place Upgrade of VC 1.2 to 2.x
  Post-Upgrade Tasks
  Installing the VI Client
Module 3: In Place Upgrade of ESX
  Executive Summary
  Confirming you can upgrade
  Upgrading with CD-ROM
  Licensing ESX Server
  Upgrading VMFS-2 to VMFS-3
  Relocate the Virtual Machines
  Update VMFS Volume Labels & DataStore Name for Uniqueness
  Custom Upgrading with a Tar-Ball (Optional)
  Upgrading to VMFS-3 from the Service Console
  Fixing the Service Console Network Problems
  Clean Installation of ESX 3.x (Optional)
  Enabling SSH/SCP from your ESX host to other hosts
  Cold Migration from ESX 2.x to ESX 3.x
Module 4: Upgrading Virtual Machines
  Executive Summary
  Upgrading Virtual Machine Hardware (GUI)
  Upgrading VM Tools (GUI)
  Upgrading VMXnet to Flexible Network Adapter
  Upgrading VM Hardware & Tools for Bulk (CLI)
Module 5: Networking
  Executive Summary
  Creating an Internal Only Switch
  Creating a NIC Team vSwitch
  Creating a VMKernel Switch for VMotion
  Increasing the Number of Ports on vSwitch
  Setting Speed & Duplex
  Managing Service Console Networking
  Creating a vSwitch with VLAN Support
  Conclusion
Module 6: Storage (VMFS, SANs, NFS, iSCSI)
  Executive Summary
  Setting up NFS/NAS on RedHat Linux
  Setting up NFS/NAS on Windows with SFU
  Adding a NFS Mount Point
  Setting iSCSI (Software Emulation)
  Setting up iSCSI with Fedora Core 5 and iSCSI Target
  Setting up a Windows iSCSI Target with Stringbean WinTarget
Module 7: Deployment and Migration
  Executive Summary
  Creating a Folder Holding Templates
  Using Clone to a Template
  Using Convert to a Template
  Importing and Exporting Virtual Disks
  Manage VMs using Web Access
  Generating Remote Console URLs for VMs
  Modifying a VM: Hot Pluggable Disks
  Using Topology Maps
  VMotion and CPU Compatibility
  Checking your ESX CPU Attributes with CPU Boot CD
  Enabling and Disable CPU Attributes on a VM
  Performing a Cold Migration
  Performing a Hot Migration (VMotion)
Module 8: Resource Management
  Executive Summary
  Creating Resource Pools on an ESX Host
  Distributed Resource Service (DRS) Cluster
  Creating a DRS Cluster
  Creating VM Affinities and Anti-Affinity Rules
  Modifying Virtual Machine Options
  Removing an ESX Host from a DRS Cluster
Module 9: High Availability & Backup
  Executive Summary
  Enabling the VMware HA Cluster
  Trigging VMware High Availability
Appendix A: Reverting back to SQL Authentication

Introduction
Purpose of this guide

This guide is designed for people who already know ESX 2.x and VC 1.x very well. It is based on the delta two-day training that can optionally be attended by people with prior experience, and it also contains additional information beyond that course. This guide is certainly NOT for novices or new users! Although I've chosen to call this an upgrade guide, it's by no means a definitive statement on upgrading. For that you need to read the VMware documentation. I also view this guide as an upgrade of skills as well as software. It is not a comprehensive guide to ALL the differences, just the primary ones. I hope to make this guide gradually more comprehensive and cover all the new features. As you might gather, there are a lot of marginal GUI-like changes which are not included. The emphasis here is on the Virtual Infrastructure Client used with VC. Command-line options are introduced when they are quicker/easier or the only way to carry out a task. It's not intended as a comprehensive guide to the Service Console. It is highly recommended to attend the What's New course and read the VMware documentation. Try to view this guide as a quick "how to" guide with quick Executive Summaries at the beginning of each module. Please email me at the address at the beginning of this document if you spot any errors or screen grabs that need updating.

Hardware

This guide assumes you have at least 2 servers, each with:

2 CPUs at 1.4GHz
2GB of RAM
2x 36GB Hard Drives (SCSI Internal)
4x Network Cards (production in a bond, VMotion and Service Console eth0)

This is the specification of my server, an old Dell PowerEdge 1650 using PIII processors! My hardware isn't officially supported by VMware anymore, but it still runs. At some point I am going to buy two DL380s and re-use this hardware as my VC box and a NAS box. I have VC set up with a SQL database. The layout of my VC looks like this before the upgrade:

Two Servers, Two Virtual Machine Groups (for me and my mate, Trevor)
The domain name is rtfm-ed.co.uk
lavericm-admin is the VC Administrator, set at Server Farms
instructor is a Virtual Machine Admin from the RTFM Farm (used when I teach with my hardware)
baverstockt is a Virtual Machine User with rights only for Trevor's VMs Group
These are very much dummy permissions I use to test my migration

Software

This guide was started using ESX 3.x Beta and VC 2.x Beta. As time progressed I moved on to ESX 3.x RC1 and VC 2.x RC1. This version has been upgraded and is now based on the GA Eval released on 22nd June 2006. In my case VC Server 2.x and SQL 2000 (SP3) all run on Windows 2003 with Service Pack 1. This was the most current release of Windows at the time of writing this document.

Warning: As ever, my documents are released as-is and without warranty.

Conventions in this Guide

Blue usually represents a hyperlink in the Virtual Infrastructure Client (VI Client)
I tend to use a plain marker to represent radio buttons, as a shortcut rather than drawing the dialog, like this: Yes
I tend to use X to represent check boxes, as a shortcut, like this: X Connected at Power On
I use PuTTY to get Service Console sessions
I use nano rather than vi to edit and save files, mainly because I'm no Linux guru and I find nano more user-friendly. The vi text editor is popular in the Linux community because it is pretty much standard amongst every Linux distribution

Any major title marked in red indicates the section is broken, released as-is, and that I intend to return to fix it, but got bored banging my head against a brick wall.

Change Log from 1.1 to 1.2

General changes based on the GA Eval release of ESX 3.x and VirtualCenter 2.x
Added how to set up Fedora Core 5 with iSCSI Target, and then allowing ESX access to the iSCSI LUNs for use as VMFS or RAW LUNs with RDM files. The observant amongst you will notice slight differences in some of my screen dumps, as the documentation for iSCSI was retro-fitted to this guide after most of it had been written
Resolved upgrade problems with the VirtualCenter database
Added the new method of exporting virtual disks with vmkfstools -i
Added using the preupgrade.pl script, which flags up potential problems prior to an in-place upgrade
Added using cpuid.iso, which shows CPU attributes such as Family, Stepping, SSE version, Execute Denied/No Execute, and 64-bit support
Corrected problems in the Tar-Ball upgrade of ESX 3.x
Added how to upgrade VMFS from the Service Console
Added the procedure for relocating VM files

Change Log from 1.2 to 1.3

Added enabling the SSH client on the ESX host to allow ssh and secure copy from just one ESX host to all others
Corrected some basic typographical and spelling errors

Module 1: Upgrading Virtual Infrastructure


Note: I use the term VI-2 to refer collectively to ESX 2.x and VC 1.x. I use the term VI-3 to refer collectively to ESX 3.x and VC 2.x. This module is NOT practical, but a quick summary of the upgrade issues

Executive Summary
There is NO upgrade path from VC 1.3.1 to VC 2.x. Because of this, the guide was written with an upgrade from VC 1.2 to VC 2.x
As it is, I have to report that VC 1.x to 2.x in-place upgrades are not very reliable (at least in my limited experience). ESX upgrades have been surprisingly painless. Still, you cannot set much store by one guy's experiences; lots of people on the forum have done an upgrade to VC 2.x without an error
You can run VC 1.x and VC 2.x within an appropriately resourced VM. VMware now suggest it is possible to put the VC server in a VMware High Availability cluster (VMware HA was previously called DAS), so should the ESX host it is running on fail, it would get restarted on another ESX host in the cluster. This is possible because VC does not have to be running for HA to work
The scariest aspect of an in-place upgrade is the migration of VMFS-2 to VMFS-3. As more than one ESX server can use the SAME VMFS volume, you need to know where your storage is, where your virtual disks are located, and which ESX servers can see the LUNs. This is because ESX 2.x cannot read/write to a VMFS-3 volume, and ESX 3.x has only read-only access to VMFS-2
As with all updates of software, the big question is in-place upgrade or re-install. I imagine most people will use a twin-track approach of running both VI-2 and VI-3 side-by-side. Critical to this is how long VI-2 will be supported by VMware. There is already healthy debate about when VMware thinks this will be, and what its customers think it should be!
VMware has many strategies to choose from for the upgrade path. The best one is the one that minimizes your VM downtime
The order of the upgrade is important, and this guide obeys that order. Each stage of the upgrade process is not undoable, so backups are absolutely essential
These are the VMware recommended stages:
o Backup
o Upgrade VC 2.x
o Upgrade ESX 3.0
o Upgrade to VMFS-3
o Upgrade Virtual Machine hardware
o Upgrade VMware Tools
o Leverage new features

Technically Specific Issues


ESX 2.x Partition Scheme and Boot Processes

When ESX 2.x was released, VMware recommended 50MB for /boot and 2048MB for /. These recommendations were made in good faith at the time and were fit-for-purpose.

However, you may experience problems if you have followed these recommendations to the letter:

With a 50MB /boot you will not have space for all the boot menu options and you will lose the Debug Menu. Basically, there is not enough space for the IMG files. I am working on a work-around for this issue by deleting the old IMG files from the old ESX 2.x LILO menu
The upgrade expects around 2048MB of free space in the / partition
I have followed the ESX 2.x recommendations and have done a successful upgrade, but I did lose the Debug Menu
I've experimented briefly with repartitioning tools, but not found a tool that can move and resize partition(s) within an extended partition AND move an extended partition, so that the primary partitions can be moved to make the first partition (/boot) bigger

FYI: the boot loader is now GRUB, and the ESX Service Console is based on Red Hat Linux AS 3.0
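Before attempting an in-place upgrade, it is worth checking how much headroom those old partitions actually have. A minimal sketch from the Service Console, assuming the default ESX 2.x mount points:

df -h /boot /           # free space in /boot and / (the upgrade wants roughly 2048MB free in /)
ls -lh /boot/*.img      # the IMG files competing for space in a 50MB /boot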

VC Authentication

VC 1.x was geared up to use NT4/AD/Workgroup users in Windows
If you have a hot-backup/hot-replacement/standby of VC 1.x using Workgroup users, it's recommended to switch to an NT4/AD model
In VC 1.x the users were stored by name (this was not fun if you renamed users!); now they are referenced by UID. This means two VC 2.x servers pointing to the same SQL database would have a problem, because the accounts would have different UID values in the Workgroup/SAM file
Basically, without the move to NT4/AD you would have authentication problems
This is not an issue if you only have one VC server and you have no hot-backup/hot-replacement/standby

Clustering VMs across boxes

There is no shared mode anymore, hurrah! The only support is for RDMs to a LUN
Any clusters-across-boxes where the Quorum and Shared disks are virtual disks need converting, via cloning software, to a raw LUN
Clusters-in-a-box remain unchanged, as do physical-to-virtual clusters. However, only LSILogic and vmxnet are supported, which means switching from BusLogic/vlance if you used the default VMX options in VI-2 with Windows 2000 Advanced Server

Other Issues

Commit your redos!
o ESX 2.x redos do not work in ESX 3.x!
New Processor Compatibility issues
o Intel and AMD support NX/XD (No Execute/Execute Denied bit)
o Prevents buffer overflows and other attacks
o VI-3 supports it
o By default you can't VMotion from a non-NX/XD host to an NX/XD-enabled host. I imagine there may be a weakening of this constraint/requirement, or masking of these bits between hosts
o Check out the CPU Compatibility Boot Disk to confirm your processor attributes
DNS
o Some new features require DNS name resolution
o Some new features work less easily out-of-the-box without DNS

If you have previously managed your VI-2 environment without DNS, using IP addresses or hosts files, I would recommend integrating your hosts into your current DNS environment
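A quick way to confirm name resolution is working from the Service Console before relying on the DNS-dependent features. The hostnames and addresses below are from my lab; substitute your own:

nslookup esx2.rtfm-ed.co.uk       # forward lookup of another ESX host
nslookup 192.168.2.102            # reverse lookup by IP address
ping -c 3 esx2.rtfm-ed.co.uk      # basic reachability by name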

Module 2: In Place Upgrade of VC


Executive Summary
According to the GA documentation you can only upgrade VC 1.2 and 1.3. Anything older than VC 1.2 needs upgrading to 1.2 or 1.3 before you begin
Make sure you do NOT have IIS or any other software which claims port 80 or 443, as the new VirtualCenter Web Service will now want to use those ports. Previously the VC Web Service used different port numbers
The installer detects the old VC, installs the new VC, and de-installs the old version
An upgrade wizard upgrades the SQL/Oracle database
There is no support for MDB, and no upgrade path for MDB. There is support for MSDE
Serial numbers don't exist; instead VMware use a FlexNet license server and license files
VMware recommend the new license server be installed on the same box as the VC server
You can have a stand-alone/workgroup model for ESX with a LIC file on each host, or served licenses from the license server to a group of ESX hosts. No prizes for guessing which is easier to manage!
If the license server fails, things carry on running on a cached license
Farms don't exist anymore; the upgrade converts them to DataCenters. The principle is the same, it's just the terminology that has changed
A single client called the Virtual Infrastructure Client (VI Client) is used to manage either an ESX host or VC. The MUI no longer exists (Hurrah!)
VC 2.x now asks during the install for a service account for running the VC Server. VMware recommend using the System Account. You do have the option of specifying an account

In Place Upgrade of VC 1.2 to 2.x


Note: My guide assumes you are going for an in-place upgrade of all the software. This might not be best for your environment. Do the research first before steam-rolling in!

What you need to complete this task:
The VC 2.x CD/Software
1 Served License
Password/Username of the DB user which is currently used with the VC 1.x database

My VC was set up as a member server on W2K3+SP1, using SQL Authentication.

IMPORTANT: Actually, I would normally set it up with Windows Authentication. I had problems with this so I reverted back to SQL Authentication. If you want to know how I did this, see the Appendix at the end of this document

1. Insert the VC 2.x CD

Note: You may need to confirm the Microsoft nanny-state "are you sure you want to run this application?" style messages
2. Choose VC/Web Access
Note: After a short while the MSI runs, and the installer detects your old version of VC 1.2, like so:

3. Click Next, then Confirm the EULA, and Accept the Location for the installation
4. Choose Custom
5. Click Next for the Program Features
Note: Remember you only need the VC Web Service if you are creating custom applications with the VC SDK
6. Choose Using Existing database server
Note: The installer should detect your previous VC ODBC and authentication settings, leaving you with just a space for inputting the password. If not, review your ODBC settings as appropriate and confirm your user accounts and passwords:

7. Click Next, and you should get a pop-up warning:

Note: This is why you should back up before you begin. My VC/SQL is running within VMs in undoable mode in case this upgrade fails! Click OK
8. Following VMware's own recommendations, select Install a local VMware License Server and Configure VMware VC to use it
Warning: If you set up the SQL database for VC with Windows Authentication, you must specify your account here. Otherwise the upgrade of the database will fail due to lack of permissions. In my case this was true:

9. Click the Browse button, select a Served License for use with the license server, and click Next
Note: Don't select an unserved, ESX-host-only LIC file. Otherwise (shock/horror) it doesn't work!
10. Following the VMware recommendation, click Next to accept that VC 2.x will use the built-in System Account
11. Accept the default port numbers for the VC Web Service
12. In the VC Web Service setup dialog enable:
X Set the Apache Tomcat Service for Automatic Start-up
X Start the Apache Tomcat Service
13. Click Install
Note: At some stage you will be presented with the dialog below. Click No

Read that bit again: you're choosing No because you DO want to retain your DB so it can be upgraded!
Note: At the end of the file copy you will be prompted to upgrade the database. This utility can be run as a separate entity if you so wish; it is called VCDatabaseUpgrade.exe and is located in C:\Program Files\VMware\VMware VC 2.x
14. Choose Finish, but allow X Launch the VMware VC Server 2.x database upgrade wizard
15. Choose Next

Note: This is as far as I got in my upgrade. From this point onwards, horrible things happened to me, which I would rather not talk about.
16. Choose Next to this dialog box:

Note: The database upgrade will then proceed to update the VC DB with status messages:

Creating the new database schema
Renaming Clashing VirtualCenter 1.x tables
Creating VirtualCenter 2.0 Tables and Procedures
Creating upgrade procedures
Upgrading Hosts
Upgrading Events and Tasks
Upgrading Alarms
Initializing access control lists
Dropping Temporary Tables
Dropping VC1.x tables and procedures
Now starting the VMware VirtualCenter Service

Note: You will probably be desperate to see whether this has really worked. So go on, install the VI Client, point it at your VC server and see what's there! When I first logged on I found my ESX 2.x hosts were disconnected, so I right-clicked them and chose Connect, and that fixed that problem

Post-Upgrade Tasks
Note: Open up the Services program in Administrative Tools and confirm that your VMware VC Service has started properly; sometimes it doesn't and you have to start it manually
After the VC upgrade you will notice you still have a directory left over from the VC 1.x install. This sysprep directory was held at C:\Program Files\VMware\VMware VC\resources\windows
The location of the Resources\Windows\Sysprep files has been changed to C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep

Installing the VI Client


Note:
You can install the VI Client on the VC Server or a workstation
You can get the VI Client from the VC CD
You can download the VI Client from an ESX 3.x server
As we have yet to upgrade ESX 2.x, let's do it from the CD. I guess you will want to get into the VC Server ASAP to see how well the upgrade went

1. Insert the VC 2.x CD

2. Under Other, choose Virtual Infrastructure Client
3. Click Next and Accept the EULA
4. Run the Client and Login

5. After the upgrade your VC Inventory will look like this:

Note: You will notice that our VM Groups seem to have disappeared. They haven't; they are in a different view. This is the Hosts & Clusters view; if you switch the inventory to the Virtual Machines and Templates view you will find that the Virtual Machine groups (now referred to as merely folders) are still there
I am going to rename RTFM Education to RTFM DataCenter, just so we're clear that the term farm is not used anymore
6. Click the Arrow next to the Inventory button:

7. Choose Virtual Machines and Templates

Note: At this stage we cannot manage the ESX 2.x hosts via the VI Client, because they are yet to be upgraded. You can, however, manage the VMs that reside on them. For host management you are stuck with the MUI. Once ESX has been upgraded you will get a Configuration tab to manage the physical host; until then, to access the MUI from the VI Client do the following:
Select the ESX Host
Click the Summary Tab

Click

VMotion settings are accessible just above this option

Module 3: In Place Upgrade of ESX


Executive Summary
Downtime and VMFS Storage

You must be running ESX 2.1.1 or later, with the exception of ESX 2.3.0 and ESX 2.5.0
Upgrading ESX requires a reboot, which means VM downtime is inevitable
It also means an upgrade of VMFS-2 to VMFS-3; during this time the VMs must be powered off
ESX 2.x cannot read VMFS-3. VMFS-3 volumes appear as unformatted or unavailable in the Storage Management page of the MUI
ESX 3.x can read VMFS-2 but cannot write to it
Critically, the choice is between upgrade or migrate
VMFS Volume Labels need to be globally unique within a Farm/DataCenter. So if every ESX server has a local VMFS volume called /vmfs/local, then VC will serialise them to keep them unique. You can observe this phenomenon on the properties of each ESX host and the Summary Tab:

esx1.rtfm-ed.co.uk:

esx2.rtfm-ed.co.uk:

Note: You can also see this in the DataStores view in the VI Client:

The only disappointing thing I feel about the upgrade is the state your networking is left in afterwards. There are lots of references to legacy this'n'that, and I have (under the Beta/RC1) experienced situations where I have lost COS connectivity after an upgrade. This said, all of these problems I was able to work around

Upgrade Criteria

Few virtual disks
VMs can be brought down altogether
Downtime works out cheaper than provisioning new storage for a migrate

How:
o Shutdown all VMs
o In-place upgrade ESX/VMFS-3
o Power on VMs on the ESX 3.x server

Migrate Criteria

Many virtual disks
Impossible to shut down all VMs
Cost of provisioning new storage is cheaper than downtime, or a new environment where storage can be used temporarily during the migrate

How:
o Clean install a new ESX 3.x server
o Shutdown VMs on ESX 2.x
o Copy VMDKs/VMX from ESX 2.x to 3.x
o Power on VMs on ESX 3.x
o Repeat until the VMFS-2 partition is empty
o Replace old VMFS-2 partition with VMFS-3

Templates & ISOs are different

You can still use the old format of virtual disk for templates (COW, or what they are now referred to as: sparse format)
You can still use vmkfstools -i and -e to import and export (see the sketch below)
There is no GUI tool currently to import/export them into a VMFS partition, because of the end of the MUI
In my experience, with the advent of directories in a VMFS, VC seems to handle putting all the VM's files in one location more easily than doing it manually via the Service Console and VI Client
It is no longer recommended to use the Service Console for the storage of ISOs
You can store templates on a VMFS shared LUN as you did in VC 1.x. You can put ISOs there as well; alternatively you might want to take advantage of NAS-based storage for ISOs and Templates, as this is cheaper per MB of storage
For this reason a vmimages partition is no longer required. I would recommend off-loading your vmimages storage to NAS and then thinking of a different usage for this storage, or keeping your vmimages as a backup of stuff you would normally use on your NAS
This said, you will still find a /vmimages directory on your server even if you do a clean install and don't create a /vmimages partition. It is now used to store the ISO images for VMware Tools for various operating systems in /vmimages/tools-isoimages. There is also /vmimages/floppies, which currently contains a copy of the vmscsi-1.2.0.4.flp file
There are two ways to initiate an in-place upgrade: either booting from the ESX 3.x CD or using a tar-ball method
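As a reminder of the syntax, here is a minimal sketch of importing an old sparse-format template disk into a VMFS-3 volume with vmkfstools -i. The source and destination paths are hypothetical examples, not anything from my lab:

# Clone a sparse (COW) template disk into a directory on a VMFS-3 datastore
vmkfstools -i /vmimages/templates/w2k3-template.vmdk /vmfs/volumes/san1/w2k3/w2k3.vmdk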

Changes to ROOT Security

An upgraded ESX server has different security settings to a clean install
A clean install of ESX 3.x denies root access using SSH, whereas an upgrade inherits the security of ESX 2.x
The default in ESX 2.x allowed SSH access for root, unless you changed /etc/ssh/sshd_config from #PermitRootLogin yes to PermitRootLogin no
In a clean install, point the VI Client at the ESX host and create a new user. Then connect to the Service Console with this new user, and use the su command to elevate yourself to root, or else configure and use the sudo command
This change was introduced to enforce auditing/traceability in the Service Console
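In practice, the post-install workflow on a clean ESX 3.x host looks something like this. The username mikel is just an example account you would create via the VI Client, as described later in this module:

ssh mikel@esx1.rtfm-ed.co.uk    # log in as the ordinary user you created
su -                            # then elevate to root, picking up root's environment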

Confirming you can upgrade


1. Insert your ESX 3.x CD into the ESX 2.x server
2. Type: perl /mnt/cdrom/scripts/preupgrade.pl
3. This should give you a print-out of the status of the ESX server
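If the CD does not mount automatically in the ESX 2.x Service Console, here is a minimal sketch of mounting it by hand first (the device and mount point are the usual defaults; yours may differ):

mount /dev/cdrom /mnt/cdrom               # mount the ESX 3.x CD
perl /mnt/cdrom/scripts/preupgrade.pl     # run the pre-upgrade check
umount /mnt/cdrom                         # tidy up afterwards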

Upgrading with CD-ROM


What you need for this task: Erm, the ESX 3.x CD would be a good start!

Note: In this upgrade I chose not to make any revisions to /boot or / (root) from the recommended partition table for ESX 2.x. I would recommend the text mode option if you are working through an ILO/DRAC card, as not all of them handle mouse activity equally well.
1. Shutdown your VMs on the specific host, and reboot
2. Insert the CD
3. At the boot prompt press [Enter]
Note: At this stage drivers are loaded in the blue-background text mode. I've noticed I've sometimes had to wait quite a bit; this makes the installer look like it's hung, but it's just busy
4. Choose Skip to skip the CD-ROM media test
Note: I've done a media test once. I guess you should; you're not missing anything exciting
5. Click Next
6. Choose your keyboard type, in my case United Kingdom
Note: The mouse pointer question is only relevant if you were installing X Windows to the Service Console, which you never, never, never do. So, as in ESX 2.x, mouse selection is largely academic
The installer will then search for an existing ESX installation
7. Choose Upgrade
8. Accept the EULA
9. In the Advanced Options page choose From a drive (Install on the MBR of the drive)
Note: This should be the hard drive/LUN you originally installed ESX to; in my case it was /dev/sda
Note: The installer will briefly search for packages to upgrade
10. Click Next
Note: The installer will transfer the install image to the hard drive, prepare the RPM transaction, start the install process, and then the main file copy begins
Note:

Now go to your kitchen, make a cup of tea, have a biscuit. Or perhaps eat one of those 5 portions of fruit you have been promising to consume

Licensing ESX Server


Note: Licenses served from the license server are held in a pool, much as in things like Citrix; servers draw from this pool, and can give back their licenses when they are decommissioned
Confirm you see the FQDN of the license server in the VI Client pointed at the VC Server
Confirm you have DNS resolution from the ESX host to this server
When you license ESX you set its Host Edition as well; this indicates whether you are using a Developer Edition (VMTN) or Enterprise Edition, or whether the ESX host is to be unlicensed and its license added back into the pool

Licensing ESX
1. Before you begin, confirm the following:
In your DataCenter select your upgraded ESX Host
Select the Configuration Tab
In the Software Pane choose the Licensing Features link

Confirm the name of your license server

If this field is blank, chances are there is a lack of DNS name resolution. Confirm that DNS is working from the ESX host and the VirtualCenter server
2. Next to Host Edition, click Edit
3. Set your Host Edition (dependent on your license file)

Note: This should complete the window with the Host Edition features (iSCSI, SAN, NAS) and the number of CPU licenses the ESX host will take (in my case just two)
4. Next to Add-ons, click Edit and add any licenses for add-on features

Note: This doesn't have to be done now, and could be done later. This allows you to consume the add-on licenses as you need them. You might have a stand-alone ESX host that is not part of VMotion or isn't suitable for Virtual SMP. Licenses cost 1 per CPU, meaning a dual-CPU box will take 2, and a quad box will take 4. Remember dual-cores are licensed like the Xeons, on a per-socket basis. The window will refresh; VirtualCenter DRS and HA will be marked as Not Used, as we haven't created any DRS or HA clusters yet.

Viewing your License Pool
Note: You will probably want to track your license usage/pool to see how many licenses you have left over. To do this:
1. Click the Admin button

2. Select the Licenses Tab
3. To see each ESX host's licenses used, select your host under the Name section

Note: See how we only have 1 license for VirtualCenter, 1 used and 0 remaining. My single ESX host has taken 2 licenses from a pool of 32 because of my 2 physical CPUs. The DRS and HA columns are unchanged because I have yet to configure DRS or HA

Upgrading VMFS-2 to VMFS-3


Note: Until you do this the affected VMs will not power on.

This is because ESX 3.x only has read-only access to a VMFS-2 volume.

Another reason VMs might not power on is that you have not successfully acquired a license. Before upgrading to VMFS-3 you will have to place the ESX host in Maintenance Mode. This is a management state which stops background processes affecting the ESX host.

1. In the Inventory, select your ESX Host
2. In the Summary Tab, Click Enter Maintenance Mode

3. Choose Yes to the dialog prompt
Note: This can take some time; afterwards the ESX host status changes to reflect maintenance mode

4. Select the Configuration Tab
5. In the Hardware Pane, choose Storage (SCSI, SAN, NFS)

6. Under Storage, select your VMFS-2 volume(s)
7. Underneath Storage in the Details Pane, select Upgrade to VMFS-3

8. Choose Yes, to confirm the Upgrade

Note: You do not currently see a status bar during the upgrade. You do get a notification in the Recent Tasks view at the bottom of the VI Client:

This does seem to take quite some time, so be patient; it is reliable. Once it has completed, the Recent Tasks bar updates like so:

9. Exit Maintenance Mode: in the Summary Tab, click Exit Maintenance Mode

Note: Sometimes I've found that the GUI still shows the Upgrade to VMFS-3 link. Generally a few refreshes with F5 will clear this
IMPORTANT: At this stage you can now power on a legacy ESX 2.x VM. So it is not mandatory to do a VMware Virtual Hardware/Tools upgrade to get your VMs back online. However, until you do the VMware Virtual Hardware upgrade you will not be able to edit the definition of your virtual machines.

If you wish you can skip the remainder of this Module and proceed to Module 4, where I cover the upgrade of existing VMs originally created on the ESX 2.x platform

Relocate the Virtual Machines


Note: Currently the Virtual Machine VMX files are still on Service Console storage. This is either /home/vmware if the VM was created in VC 1.x, or /home/username/vmware if created via the ESX MUI. Until they are relocated they will not power on, and VC will give you this error:

The VMs' configuration files must reside on either SAN, iSCSI or NAS storage
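Before relocating, it can be reassuring to see exactly which VMX files are still sitting on Service Console storage. A minimal sketch from the Service Console, using the default /home paths mentioned above:

find /home -name '*.vmx' -print    # list any VM configuration files still on local ext3 storage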

1. Right-click your ESX Host
2. Choose Relocate VM Files
Note: You will see this status progress:

Update VMFS Volume Labels & DataStore Name for Uniqueness


VMFS volumes have a couple of IDs: their Volume Label, the DataStore label (normally derived from the volume label), and a UUID value as well (which never changes)
The primary label used by VC is the DataStore label. It can be different from the Volume Label and, most usefully, as the VMs' VMX files point to it rather than the volume label, you can change a DataStore label without having to update the VMX file as we did under ESX 2.x
As I mentioned earlier in this guide, VMFS DataStore names need to be globally unique within a DataCenter. So if every ESX server has a local VMFS volume called /vmfs/local, then VC will use this name as the basis for the Datastore name and serialise them to keep them unique. You can observe this phenomenon in the DataStores view and elsewhere in the VI Client

I intend to label my local storage like this: /vmfs/local-esxN, and check that the VMX files on each of my VMs are updated so they will know where the VMFS partition is located
This was something I couldn't have done before the upgrades of the VMs' Virtual Hardware and Tools, as the VMX file is locked until the upgrade

1. Select the ESX host and the Configuration Tab
2. Under the Hardware Pane, choose Storage (SCSI, SAN, and NFS)
3. Under the Storage heading, select your VMFS and choose Properties

4. In the dialog click the Change button

5. Change the DataStore label to be something more unique [than local, local(1), local(2)] such as local-esx1
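After renaming, you can confirm the new label from the Service Console; /vmfs/volumes contains a symlink for each DataStore label pointing at the never-changing UUID. The label local-esx1 and the UUID below are just illustrative examples:

ls -l /vmfs/volumes/    # e.g. local-esx1 -> 44a38c72-156b2590-be15-00065bec0eb7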

Custom Upgrading with a Tar-Ball (Optional)


What you need for this task:

Erm, the ESX 3.x tar-ball would be a start!

Note: Why do your upgrades this way? Well, it allows for unattended upgrades without having to be physically present at the server, or having to run around from one server to another with a CD!
This is a two-stage process involving two scripts and a reboot between scripts 1 and 2
I copied the tar-ball to an area of free space; this happened to be my /vmimages location (I was concerned my /tmp partition might be too small, having set the VMware recommended size of 512MB in ESX 2.x)
Upgrading ESX in this way didn't seem to help with my virtual networking. I still had some machines which thought they were on the production switch and others that thought they were connected to a Legacy Bond0, even though I never had a bond.

Run First Script & Set ESX Boot Option
1. Transfer the tar file to the ESX 2.x host using WinSCP
2. Logon to the Service Console with PuTTY, as root
3. Untar the tar.gz file with:
tar -xvzf esx-3.?.?-?????-upgrade.tar.gz
Note: This will take some time!
4. Change the boot option and reboot ESX:
lilo -R linux-up
reboot
Note: When you reconnect to the Service Console, confirm you have not loaded the VMkernel. An easy check is to ls /vmfs. No VMkernel, no VMFS
5. cd into the esx-3.?.?-?????-upgrade directory
6. Run the Perl script called ./upgrade.pl
Note: The system will verify files, and then ask you to read & accept an EULA!
7. Choose Q to quit reading the EULA, and
8. Type Yes [Enter] to accept the EULA
Note: You will receive status messages like so:
Upgrading packages (this may take a while) ... .... done
Removing obsolete packages .... done
Upgrading system configuration ...

Running vmware-config.pl ... done
then lastly
*** You must now reboot the system into Service Console mode and run upgrade2.pl ***
Reboot the server now? [y/n]
9. At the Reboot the server now [y/n] prompt, choose N and [Enter]
10. Now edit the grub.conf file to make sure the server boots to the Service Console with:
nano -w /boot/grub/grub.conf
change default=0 to default=1
11. Now do the reboot with the command:
reboot
Note: You might find you lose connectivity to the Service Console. I did on my first attempt. The second time I didn't get a script error and my Service Console networking was intact. The interesting thing is the upgrade was done on an identical ESX 2.x host. I know it was identical as it was the same physical machine cloned with Ghost, which contained an image of ESX. Perhaps these network errors were related to the boot errors?

Run Second Script & Revert to ESX Boot Option
1. Logon to the Service Console with PuTTY, as root
Note: You will know you have booted to the Service Console because of this message:
ALERT [Wed May 17 21:19:45 GMT 2006]: This VMware ESX Server machine is currently booted into troubleshooting mode.
2. cd into the esx-3.?.?-?????-upgrade directory
3. Run the Perl script called ./upgrade2.pl
Note: You will receive status messages like so:
This script will complete upgrading your ESX server to version 3.x.x
Verifying Files ... done
Upgrading Packages (this may take a while).
INIT: Version X.x.x reloading

done
*** You must now reboot the system into the ESX Server mode to finish the upgrade ***
4. At the Reboot the server now [y/n] prompt, choose N and [Enter]
5. Now edit the grub.conf file to make sure the server boots back into ESX Server mode with:
nano -w /boot/grub/grub.conf
change default=1 to default=0
6. Now do the reboot with the command:
reboot
Note: This will boot the ESX 3.x VMkernel for the first time. The remote upgrade is complete.
Note: If you wish to automate the change to grub.conf, there is a script from IBM called bootcontrol.pl; you can download the zip file directly here. The very first time you run this command it returns an error which states "Use of uninitialized value in numeric eq (==) at bootcontrol.pl line 99". It does actually shift the boot option, and the error only occurs once. I've emailed the guys who wrote it, but I have a feeling it's caused by our partially completed upgrade process. This command re-configures the grub loader so that the Service Console Only (troubleshooting only) menu is chosen. It does that by changing the value in grub.conf from default=0 to default=1
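If you prefer not to edit grub.conf by hand each time, here is a minimal sketch of flipping the default entry with sed. This assumes the sed on your Service Console supports -i (the Red Hat based console should); back the file up first:

cp /boot/grub/grub.conf /boot/grub/grub.conf.bak        # keep a copy before editing
sed -i 's/^default=1/default=0/' /boot/grub/grub.conf   # revert to booting the VMkernel entry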

Upgrading to VMFS-3 from the Service Console


Acknowledgments: I would like to thank Mostafa Khalil, VCP (SE) for his Troubleshooting presentation at VMware TSX in Paris 2006.
Note: From the man pages of vmkfstools:
The VMFS-2 to VMFS-3 file system upgrade is a two-step process. Before the file system upgrade can begin, the vmfs2 and vmfs3 drivers must be unloaded and the auxiliary file system driver, fsaux, should be loaded. The first step of the upgrade uses the -T option. Once the first step completes, the auxiliary file system driver, fsaux, is unloaded and the vmfs2 and vmfs3 drivers are reloaded. The second step of the file system upgrade makes use of the -u option.
-T, --tovmfs3 converts a VMFS-2 file system on the specified partition to VMFS-3 format, preserving all files on the file system. The conversion is in-place and the auxiliary file system driver (fsaux) module must be loaded. The ESX Server file system locking mechanisms will try to ensure that no local process or remote ESX Server is currently accessing the VMFS file system

to be converted. The conversion is a one-way operation and once the VMFS-2 file system is converted to VMFS-3, it cannot be rolled back to VMFS-2.
-u, --upgradefinish /vmfs/volumes/<label/UUID>/ once the first step of the file system upgrade has completed (using -T), the vmfs2 and vmfs3 modules are reloaded and the -u option is used to complete the upgrade
1. Logon to the Service Console as root
2. Unload the vmfs2 driver with:
vmkload_mod -u vmfs2
3. Unload the vmfs3 driver with:
vmkload_mod -u vmfs3
4. Load the FS auxiliary driver with the upgrade function:
vmkload_mod fsaux fsauxFunction=upgrade
5. Run the first stage of the upgrade with:
vmkfstools -T /vmfs/volumes/local -x zeroedthick
Note:
-x zeroedthick (default). Retains the properties of VMFS-2 thick files. With the zeroedthick file format, disk space is allocated to the files for future use and the unused data blocks are not zeroed out.
-x eagerzeroedthick. Zeroes out unused data blocks in thick files during conversion. If you use this sub-option, the upgrade process might take much longer than with the other options.
-x thin. Converts the VMFS-2 thick files into thin-provisioned VMFS-3 files. As opposed to the thick file format, the thin-provisioned format doesn't allow files to have extra space allocated for their future use, but instead provides the space on demand. During this conversion, unused blocks of the thick files are discarded.
Note: This will give you the following message (this is what I got):
/vmfs/volumes/44a38c72-156b2590-be15-00065bec0eb7
VMware ESX Server Question:
Please make sure that the VMFS-2 volume /vmfs/volumes/44a38c72-156b2590-be15-00065bec0eb7 is not in use by any local process or any remote ESX server. We do recommend the following:
1. Back up data on your volume as a safety measure.
2. Take precautions to make sure multiple servers aren't accessing this volume.

3. Please make sure you have approximately 1200MB of free space on your volume. Note that the number is an upper bound, as the actual requirements depend on the physical layout of files.
Continue converting VMFS-2 to VMFS-3?
0) Yes
1) No
6. Type 0, and press [Enter]
Note: Currently, you get no status information while this is proceeding. It should complete with this message:
Filesystem upgrade step one completed. Step two must be completed after the vmfs2 and vmfs3 modules are reloaded. When ready, run 'vmkfstools -u /vmfs/volumes/local' to complete the upgrade.
7. Once this part has completed, confirm that your format is VMFS-3:
vmkfstools -P /vmfs/volumes/local
Note: This should report something like this:
VMFS-3.21 file system spanning 1 partitions.
File system label (if any): local
Mode: public
Capacity 36238786560 (34560 file blocks * 1048576), 5511315456 (5256 blocks) avail
UUID: 44a38c72-156b2590-be15-00065bec0eb7
Partitions spanned: vmhba0:1:0:1
8. Confirm your files haven't disappeared in the process with:
ls -l /vmfs/volumes/local
9. Next, unload the auxiliary file system driver and reload your vmfs2 and vmfs3 drivers with:
vmkload_mod -u fsaux
vmkload_mod vmfs2
vmkload_mod vmfs3
Note: You should get "Module load of vmfs2 succeeded" and "Module load of vmfs3 succeeded"
10. Restart the hostd service for these changes to be reflected in the VI Client with:
service mgmt-vmware restart
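Pulling the whole sequence together, here is a minimal sketch for a volume labelled local (substitute your own label; the conversion is one-way, so back up first). Note that the vmkfstools output above asks for a final 'vmkfstools -u' once the drivers are reloaded, so that step is included here:

vmkload_mod -u vmfs2                               # unload the VMFS-2 driver
vmkload_mod -u vmfs3                               # unload the VMFS-3 driver
vmkload_mod fsaux fsauxFunction=upgrade            # load the auxiliary driver in upgrade mode
vmkfstools -T /vmfs/volumes/local -x zeroedthick   # step one: in-place conversion
vmkload_mod -u fsaux                               # unload the auxiliary driver
vmkload_mod vmfs2                                  # reload the VMFS drivers
vmkload_mod vmfs3
vmkfstools -u /vmfs/volumes/local                  # step two: finish the upgrade, as the -T output instructs
vmkfstools -P /vmfs/volumes/local                  # confirm the volume now reports VMFS-3
ls -l /vmfs/volumes/local                          # confirm the files are still there
service mgmt-vmware restart                        # make the VI Client reflect the change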

Fixing the Service Console Network Problems


Sometimes after an upgrade you find you have lost Service Console networking
Why does this error occur? I don't know; I think the Service Console is unwell
But sometimes it's as if the upgrade mis-assigns the NICs to the switch; other times it's simply not present in the system at all!
Normally, any deviation from common defaults is the primary cause in any OS upgrade. But my upgrades of ESX 2.x are standard, and do NOT deviate from the defaults in ESX 2.x
Anyway, to fix this you must be at the physical console (for obvious reasons)
At the recent Paris TSX there was a slide on this, but I found the recovery instructions didn't work for me. So I had to go in and fix it myself

First Example: Legacy eth0 absent
Note: In this example I discovered I had no switch with a Legacy eth0 port group. There was no vswif either. In some respects this was easier; all I had to do from the command line was create a new switch and vswif interface
1. Logon to the Service Console as root (using the standard ESX boot options)
2. To list your current switch configuration type:
esxcfg-vswitch -l
Note: This will give you a print-out something like this:

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch0       32         2           32                vmnic0

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy vmnic0     portgroup1    0        0           vmnic0
  vmotion           portgroup0    0        0           vmnic0

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch1       32         0           32

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy vmnet_0    portgroup3    0        0
  internal          portgroup2    0        0

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch2       32         2           32                vmnic2

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy vmnic2     portgroup5    0        0           vmnic2
  production        portgroup4    0        0           vmnic2

Note: OK, so what is this telling me? Well, I have 3 switches: one for vmotion and one for production. There is an internal switch called internal. There is no Legacy eth0. I can see what Port Group vswif thinks it is using with the command:
esxcfg-vswif -l

Name   Port Group   IP Address   Netmask   Broadcast   Enabled   DHCP

To my horror, I discover there is no vswif0. To correct this I did the following:
3. Create a new vSwitch (vSwitch3 in my case) and patch in a spare physical NIC (vmnic1 in my case) which was available:
esxcfg-vswitch --add vSwitch3
esxcfg-vswitch --link=vmnic1 vSwitch3
4. Next create a Port Group within the vSwitch:
esxcfg-vswitch --add-pg="Service Console" vSwitch3
5. Next create a vswif interface mapped to the Port Group called Service Console and set its IP and Subnet Mask:
esxcfg-vswif --add vswif0 --portgroup "Service Console" --ip=192.168.2.102 --netmask=255.255.255.0
6. Restart hostd to make the VI Client reflect the new configuration:
service mgmt-vmware restart
Note: This will result in a response like this:
Stopping VMware ESX Server Management services:
   VMware ESX Server Host Agent Services                    [OK]
   VMware ESX Server Host Agent Watchdog                    [OK]
   VMware ESX Server Host Agent                             [OK]
Starting VMware ESX Server Management services:
   VMware ESX Server Host Agent (background)                [OK]
   Availability report startup (background)                 [OK]
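With the vswif recreated, a quick sanity check from the console is worthwhile before leaving the server room. The gateway address below is an assumption for my lab network; substitute your own:

esxcfg-vswif -l              # vswif0 should now be listed against the Service Console port group
ping -c 3 192.168.2.1        # assumed default gateway for the 192.168.2.x lab network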

Second Example: Legacy eth0 is present
Note: In this case I had 4 NICs, but the first wasn't recognised by ESX 2.x, though it was recognised by ESX 3.x. I believe the location of Legacy eth0 became confused. By swapping the NICs about I was able to get Legacy eth0 up
1. Logon to the Service Console as root (using the standard ESX boot options)
2. To list your current switch configuration type:
esxcfg-vswitch -l

Note: This will give you a print-out something like this:

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch0       32         2           32                vmnic0

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy vmnic0     portgroup1    0        0           vmnic0
  vmotion           portgroup0    0        0           vmnic0

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch1       32         0           32

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy vmnet_0    portgroup3    0        0
  internal          portgroup2    0        0

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch2       32         2           32                vmnic2

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy vmnic2     portgroup5    0        0           vmnic2
  production        portgroup4    0        0           vmnic2

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch3       32         2           32                vmnic3

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy vmnic3     portgroup6    0        0           vmnic3

Switch Name    Num Ports  Used Ports  Configured Ports  Uplinks
vSwitch4       32         1           32                vmnic2

  PortGroup Name    Internal ID   VLAN ID  Used Ports  Uplinks
  Legacy eth0       portgroup7    0        1           vmnic2

Note: OK, so what is this telling me? Well, I have four switches. This is quite odd. Actually I originally only had 3 (production, internal, vmotion) and three NICs: one assigned to the Service Console, one to production, one to vmotion. My 4th network card was a new one not recognised by ESX 2.x but recognised by ESX 3.x. Interestingly, it looks as if ESX 3.x has automatically assigned it to a vSwitch. Something I wasn't expecting. Anyway, originally the Service Console was using eth0; now it looks like vSwitch4, with the port group Legacy eth0, is using vmnic2. I think it should be using vmnic0 (the first NIC found on the PCI bus?). I can see what Port Group vswif thinks it is using with the command:
esxcfg-vswif -l

Name     Port Group    IP Address      Netmask          Broadcast        Enabled  DHCP
vswif0   Legacy eth0   192.168.2.102   255.255.255.0    192.168.2.255    true     false

3. I think I will try moving the NICs around:
esxcfg-vswitch --unlink=vmnic0 vSwitch0
esxcfg-vswitch --unlink=vmnic2 vSwitch4
esxcfg-vswitch --link=vmnic2 vSwitch0
esxcfg-vswitch --link=vmnic0 vSwitch4

Note: If all else fails, assign all your NICs to the vSwitch containing the Legacy eth0 Port Group, and that will get you back into the VI Client
4. Restart hostd to make the VI Client reflect the new configuration:
service mgmt-vmware restart

Clean Installation of ESX 3.x (Optional)


Note: For this guide I've not bothered to cover a clean install of VC 2.x; that's very much like a VC 1.x install. On the other hand, an ESX 3.x install is significantly different from an ESX 2.x install

Installing the ESX Operating System
1. Boot to the ESX 3.x CD
2. Skip the CD Media Test
Note: You should MD5Sum the ISO before you burn it, and double-check the media. Alternatively, you can jump in with both feet and face the consequences later. I will let you be the judge. If you're in a test & dev environment it's not so very thrilling to do! If it is successful it says "It is OK to install from this media", which is reassuring.
3. Click Next
4. Select your Keyboard Type
5. Select your Mouse Type
6. Agree the EULA
Note: Confirm the warning about /dev/sda being unreadable. It should be blank already on a clean install
7. Confirm that under Advanced
Note: If you have wiped the Service Console of ESX 2.x but kept the VMFS volumes, enable X Keep virtual machines and the VMFS that contains them. This is one upgrade strategy: re-install the Service Console but keep your VMFS.
8. This is my recommendation for manual partitioning; my deviations from VMware are in red and are a combination of what I think is best from the documentation, automatic partitioning and the recommendations from ESX 2.x.

The black and bolded numbers come from VMware's guide. Notice there is no reference to /vmimages; as stated earlier in the guide, it is no longer required. Space required without VMFS is about 12GB.

Mount Point   File System   Fixed Size   Size in MB        Fill To   Force to Primary
/boot         ext3          X            100/250                     X
n/a (swap)    swap          X            544/1600                    X
/             ext3          X            2560(min)/5120    ???       X
/var/log      ext3          X            2048/2048
/tmp          ext3          X            NA/2048
NA            vmkcore                    100/100
/storage1     vmfs

Note: ??? VMware suggest allowing / to fill to the maximum size of the disk. If you choose this option you will have to use external storage for your VMFS and vmkcore, plus this might complicate creating separate partitions for /var/log
Note: /tmp In the past VMware have recommended using a separate partition for /tmp, which I have always done in ESX 2.x as well. For some reason this recommendation was dropped in the final GA documentation
Note: vmimages If you are doing an upgrade your /vmimages partition is retained and is browse-able from within the virtual machine. If you have done a clean install, even if you haven't created a /vmimages partition (VMware state it is no longer required), you will still have a directory called /vmimages, which contains the ISOs for VMware Tools and a floppies directory for the VMware SCSI driver (replacement for BusLogic)
Note: /home VMware no longer state you need to create a separate partition for /home. This is most likely due to the fact that VMX and other VM configuration files are no longer stored here, but on a VMFS volume
Note: VMFS Storage1 is the default volume label if you do not set one yourself. A VMFS volume is no longer required for a swapfile; the per-ESX-host swapfile has been replaced with a per-VM swapfile. Your only need for local VMFS is if you don't have iSCSI/SAN/NAS based external storage. I would not recommend creating VMFS volumes here; they are automatically labelled storage1, 2 and so on. For this reason you might prefer to leave creating VMFS partitions until later, when you will have more control. One thing I have noticed: if you do have free space at the end of the LUN where you installed the Service Console, you can't VMFS up this free space with the VI Client. It's almost as if the VI Client expects to see an empty LUN within which to put VMFS. So you either do it in the install or resort to fdisk to get this space.
Note: Swap/Var/Tmp I decided to over-allocate the Swap Partition, using the old value of max Service Console memory of 800MB and doubling it. I decided to make /tmp the same size as /var for consistency. I've never allowed /tmp to just be a directory of /. I think this over-allocation is fine; as we have lost the requirement for vmimages, it has left me with lots of free disk space.
Note: Automatic vs Manual Automatic partitioning creates a 100MB /boot, not the 250MB that was recommended in the Beta/RC1 guides. It makes root 5GB, not the 2560MB which is recommended in the VMware guides only as a workable minimum. Automatic partitioning creates a separate partition for /var/log of 2GB. There doesn't seem to be any recommendation on /var/log and /tmp in the Beta PDF guides
9. In the Advanced Options choose SCSI Disk sda; only choose a SAN LUN if you are doing SAN-based booting.
Note: The other options aren't relevant to us. Force LBA32 would allow /boot to be above the 1024 cylinder limit, normally enabled on legacy hardware if your BIOS supports it. The From a Partition option again is for legacy hardware that stores BIOS information in the MBR in a partition, like some old Compaqs have an EISA partition
10. Network Configuration
Select the NIC which will be used by the Service Console
Set your IP/SN/DG/DNS and FQDN
Set the VLAN ID, if you are using VLANs
Disable X Create a default network for virtual machines
Note: If you enable X Create a default network for virtual machines, this creates a Port Group called VM Network attached to NIC0, the same NIC used by the Service Console. You probably want to dedicate a vSwitch/NIC to the Service Console, and use your other NICs for the Virtual Machines. Remember the Service Console is just patched to a vSwitch Port Group now; technically it doesn't really use eth0 as ESX 2.x did. More about this in the next module.
11. Set your Time Zone, and if appropriate UTC Settings
Note: If you're in the UK like me, we don't strictly obey UTC. We obey GMT and BST. If you enable UTC and select Europe/London your system clock will be 1hr adrift (depending on the time of the year)
12. Set the root password
Note: There is no method to create ESX users within the installer; this has to be

done with the VI Client pointed not at VirtualCenter but the ESX 3.x Host itself Adding an ESX Server into VirtualCenter 1. Right-Click the DataCenter name, in my case RTFM DataCenter 2. Choose Add Host or click this icon

3. Complete the dialog as appropriate

4. Next your way through the dialog boxes
Note: These dialog boxes don't do much for us, as a blank ESX box has no virtual machines, and we don't have VHA or DRS Clusters created just yet
Note: You are not asked to set VMotion settings here. It is enabled elsewhere
Note: Remember to set the Host Edition and License Add-ons

Creating a local ESX User Account for SSH/PuTTy Access

Note: In a clean install, by default there is no access to the ESX Service Console for root via SSH. However, during the installation you no longer have the ability to create users. So this appears like a catch-22 situation. Here's how you should give root access to the Service Console
1. Close the VI client if it is open on your VirtualCenter
2. Re-open the VI client and login to the ESX Host
Server: type the name of your ESX Host
Username: root
Password: **********
Note: You may receive a notification that the VirtualCenter server is also managing this host. Just click OK to this warning:

3. Click the User & Groups tab
4. Right-Click the Window and Choose Add
5. Fill in the dialog box like so:

Note: You must enable X Grant shell access to this user for this user to have access to the Service Console via Shell/PuTTy. Otherwise they will only have access via the VI Client pointed at this ESX host.
6. Click OK
7. Open a SSH/PuTTy Session to the ESX Host
8. Login with your under-privileged account

9. To levitate to a higher plane of privileges type:
su -
Note: This Switches User and assumes root, unless otherwise specified. The - takes root's environmental settings (very important if you want to run any commands properly)
Note: Some applications do not support levitation to a higher plane - for example WinSCP. Sure, you could use WinSCP to gain access as an ordinary user, but then you might lack permission to copy the files you need. If you try to logon as root, WinSCP will give you access denied. If this upsets you and causes your brain to melt out of your ears you could (gasp/horror) lower the security to allow root access. This also has the net effect of undermining the auditing and traceability so important to our friends Sarbanes & Oxley. If only the governments of the world were so concerned with the devaluation of pension funds and the fact that my pension company put 10 years on my working life. I guess Sarbanes & Oxley would protect me from a company going into administration and the administrators finding a billion pound hole in the pension fund. Anyway, I digress
Anyway, security/auditing/traceability is important so proceed with extreme caution on this one
nano -w /etc/ssh/sshd_config
Locate: PermitRootLogin no
Place a # in front of PermitRootLogin no like so:
#PermitRootLogin no
Exit Nano & Save the file
Restart sshd with service sshd restart
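If you prefer to script this change rather than editing the file by hand, here is a minimal sketch of the same edit done non-interactively (run it after su -). It assumes the stock /etc/ssh/sshd_config that ships with ESX 3.x and simply comments out the PermitRootLogin line before restarting sshd:

# take a backup first, just in case
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
# comment out the line that blocks root logins
sed -i 's/^PermitRootLogin no/#PermitRootLogin no/' /etc/ssh/sshd_config
# confirm the change took effect
grep PermitRootLogin /etc/ssh/sshd_config
# restart the SSH daemon to pick up the new setting
service sshd restart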

Enabling SSH/SCP from your ESX host to other hosts


Note: One of the things I like to do is connect to one ESX host, and then use SSH at the Service Console to get to my other servers - this saves me having to open repeated PuTTy sessions. This is not allowed in ESX 3.x as the firewall denies the client (although every ESX host is an SSH server). Also you will be unable to use the SCP command to copy files to ESX servers from an ESX server:
ssh: connect to host esx2.rtfm-ed.co.uk port 22: Connection refused
To enable this kind of access we need to adjust the firewall settings

1. In the Inventory, select your ESX Host
2. Click the Configuration Tab
3. Under the Software Pane, select Security Profile

4. Click the Properties link in the far right-hand corner
5. Enable X SSH Client and Click OK
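If you would rather do this from the command-line, the esxcfg-firewall utility can toggle the same rule. This is a minimal sketch - I am assuming the service name is sshClient, which is what the firewall calls the outbound SSH rule on my ESX 3.x hosts:

# show the current state of the outbound SSH rule
esxcfg-firewall -q sshClient
# open the firewall for outbound SSH/SCP from the Service Console
esxcfg-firewall -e sshClient
# (use -d sshClient later if you want to close it again)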

Cold Migration from ESX 2.x to ESX 3.x


Note: If you have an ESX 2.x server you can cold migrate the VMs from it to an ESX 3.x host. You could use the move option to take the VM's files off a VMFS-2 partition and onto a VMFS-3 partition as part of a migration strategy. So you upgrade VC, install/upgrade an ESX 3.x server, format a VMFS volume and cold migrate the VMs from ESX 2.x to 3.x; then start the upgrade on ESX 2.x and cold migrate them back to the original server. So, if you like, the ESX 3.x server and its storage is a helper in the migration process. When you have two ESX 3.x servers looking at the same storage you could be VMotioning them back
1. Right-click a VM on an ESX 2.x host in the VC 2.x Inventory
2. Choose Migrate
3. Select an ESX 3.x host from the list
Note: The system will try to validate the ESX host and give you this dialog box:

4. Click Next and Next again
5. Choose Move virtual machine configuration files and virtual disk
6. Select a VMFS-3 datastore from the list and Finish
Note: After the move I was able to power on the VM without having to go through the next stage of upgrading the virtual hardware and VMware Tools

Module 4: Upgrading Virtual Machines


Executive Summary
Upgrading Virtual Machine hardware depends on whether you have chosen an Upgrade or a Migrate strategy
If you Upgraded from VMFS-2 to VMFS-3 the VM's hardware is automatically upgraded from VM-2 to VM-3
If you Migrated from VMFS-2 to VMFS-3 or imported old ESX 2.x virtual disks you will have to upgrade their virtual hardware from VM-2 to VM-3
This can be done individually, or in bulk using a command-line tool (W2K/3/XP Only)

What's new in VM-3?
Snapshots of a VM's state - a kind of save-as-you-go-along, discard-in-reverse-order method. Think of it as multiple undo levels like you would have in Word or Excel, but on an entire VM's system state (disk and memory)
16GB of RAM
4 vCPUs (with the license)
Improved network performance, especially for W2K3
Hot-plug virtual disks for OSes that support it, such as W2K3
Support for quiescing (silencing) the file system within a VM for VMware Consolidated Backup (VCB)

What's new in VMware Tools
Upgrade can be done manually or by CLI as mentioned earlier
The vlance NIC can metamorphose into vmxnet, which makes KB ??? a thing of the past
Driver improvements for W2K/3
Drivers will be signed by Microsoft by the time the product goes into GA. Currently, they are not, and this stops a bulk upgrade of VMware Tools from being successful.

Consequences of not upgrading VM Hardware & Tools
Snapshot feature does not work
VM-2 limits persist (3.6GB RAM/2 vCPUs)
No hot-plug disks
Important: You can't edit a VM's properties from VC 2.x, but you can still manage it from the MUI

Upgrading Virtual Machine Hardware (GUI)


Note: This upgrade must be done for you to be able to modify the properties of the VM's VMX file. You can do this by multiple selection of VMs within the Virtual Machines tab of an ESX host. But you would still have to do a VMware Tools upgrade and interact with the guest OS. So if bulk upgrades and automation are your goal you need to look at the vmware-vmupgrade tool on the hard-drive of the VC, which is covered later in this module
Warning: After the upgrade of virtual hardware I found my VMs' VMX files had lost their switch, and it was set to nothing. I had to manually tell each VM which switch to use. This happened in the RC1 but not in the Beta.

1. Right-Click a VM in the Inventory
2. Choose Upgrade Virtual Hardware
3. Choose Yes to the warning:

Note: This operation is very quick, nonetheless you do get status information

Upgrading VM Tools (GUI)


1. Power on your VM

2. Open a Console Session

Note: If this is the first time you have powered on the VM, and you have not changed the virtual disk extension from the old format (.dsk circa ESX 2.0) to .vmdk (circa ESX 2.5), then you will get this dialog:

This message ONLY appears AFTER you have upgraded your Virtual Hardware, and only if you are using the older .DSK extension. Click OK!
3. Log on to your VM with Administrative/Supervisor/Root privileges depending on your guest OS

4. From the usual places, choose Install VMware Tools and confirm the pop-up dialog box:

5. Next your way through the dialog boxes, and choose Yes to Reboot the VM

Upgrading VMXnet to Flexible Network Adapter


Note: This was a procedure in the ESX Beta, but it has now been incorporated into the upgrade of virtual hardware

Upgrading VM Hardware & Tools for Bulk (CLI)


Note: This bulk process upgrades the virtual hardware, powers on the VM, injects VMware Tools silently, and then shuts down the VM. I would recommend monitoring the whole process from the CMD prompt
If you have kept some disks using the old .DSK extension, this bulk upgrade will stall, asking you if it is OK to rename the .DSK file to .VMDK. There does seem to be a time-out on this question and the VMs do power up. If there are user questions, like my one about the DSK/VMDK file, the tool will take the default answer and process it, to make sure it doesn't stall for lack of user input. The kind of response you get is:
RTFM DataCenter/esx1.rtfm-ed.co.uk:dc1: Question -msg.disk.ConvertExtension.ask: VMware ESX Server uses the new file extension '.vmdk' for virtual disks to avoid conflicts with files from other programs. Your configuration currently has at least one virtual disk with the older extension. We highly recommend that you allow it to be updated automatically. Select OK to update all configured disks or Cancel to keep the current file extensions. If you select Cancel you will not see this dialog again.
I recommend not having any remote console windows open, as I have seen VirtualCenter crash when answering questions
This tool upgrades the virtual hardware, then powers on the VM and upgrades VMware Tools
W2K/3/XP only - no support for NT4 or any other guest OS currently
You can do the upgrade per-VM or per-Host; I will demo a per-host method
You have options to limit the number of VMs powered on at any given time; how long a VM is allowed to be powered on if the tool fails to power off the VM in a timely fashion; you can also ask to skip the VMware Tools update

Lastly, I tried this on an upgraded ESX host from 2.5.2. After the upgrade my VMs all believed they were on legacy bond0. Trouble is I never had a bond! So I had to manually tell them they were on the production network. Additionally, I found this error was NOT consistent. Out of my 5 VMs one believed it was on legacy vmnic0

1. Power down all your VMs
2. Connect to your VirtualCenter Server Desktop
3. Open a CMD prompt and navigate to C:\Program Files\VMware\VMware VirtualCenter 2.0
4. Type the following command (replacing variables where appropriate; the command is word-wrapped for readability)
vmware-vmupgrade -u rtfm-ed\lavericm-admin -p ******** -h "RTFM DataCenter/esx1.rtfm-ed.co.uk"
Note: Note that you use forward slashes / to indicate the host
Typical Errors:
Logon errors will be marked with this error message: Failed to connect to VirtualCenter server: vim.fault.InvalidLogin
Path errors such as: Failed to upgrade: failed to find object at Hosts & Clusters/RTFM DataCenter/esx1.rtfm-ed.co.uk. In this case, I was specifying the built-in container called Hosts & Clusters when I should have just specified the DataCenter as in the example
Failed to upgrade: failed to find object at RTFM DataCenter\esx1.rtfm-ed.co.uk. In this case I had the slash \ the wrong way round /
Note: I was going to document what happened, but it got too lengthy. Basically, the CLI connects to VC, enumerates the VMs, upgrades the virtual hardware, powers on each VM and installs VMware Tools, and then powers off the VM

Module 5: Networking
Executive Summary
Networking after an Upgrade
One of the downsides of the ESX upgrade is how different networking looks in the VI client on an upgraded ESX server compared to a cleanly installed one
Select your ESX Host, Configuration, and in the Hardware Pane, select Networking
My old VMotion network now has no NIC attached to it. Plus it has two port groups, one called legacy the other called vmotion

This is my old internal switch which also has two port groups. One has the old switch name internal, the other legacy vmnet_0

For some reason, SQL-VCDB wound up on legacy vmnic2 but was originally on the production switch in ESX 2.x. I had to manually put the remaining VMs on the original name of production. I also found some of their vNICs were not connected

My Service Console connection is referred to as Legacy eth0. In a clean install it's called Service Console

So I decided to do some tidying up, to make it look as if it was a clean install
o Firstly, I deleted vSwitch0 - as no adapters are connected to it I don't need it anymore
o I removed Legacy vmnet_0 from vSwitch1
o I patched SQL-VCDB into the production Port Group and then removed Legacy vmnic2
o I re-labelled Legacy eth0 to be called Service Console
o Normally, after a clean install there would be only vSwitch0 for use with the Service Console. You would then have to create and assign your remaining switches. The interface would look like this:

After clearing house, my networking looked a bit nicer to look at:

Important: I am done with the in-place upgrade. From now on, I will assume you have a clean installation

General Overview
Improvements in Networking open a new door of possibilities in ESX, and make certain network tasks much simpler than they have been in the past
We used to think of switches as vmnet/vmnic/bond - you can still have these formats for networking
But now conceptualise this based on which component uses the switch and what for. This breaks up into three types:
o Service Console Traffic
o VMkernel (for VMotion/IP Storage) Traffic
o Virtual Machine Traffic
The Service Console is increasingly becoming a virtual machine (albeit it is not shown as such in the VI Client)
There is no vmkpcidivy, as ESX 3.x owns all the hardware. Hurrah! The Service Console no longer hogs NIC0 from the PCI Bus. Instead it uses a vmkernel driver, and the NIC can be used by the vmkernel as well. This is very much like the use of vmxnet_console in ESX 2.x, used by some people who have blades with a limited number of network interfaces
Virtual Switches still exist in their vmnet, vmnic, and bond formats. Their configuration is no longer in the files that you knew in ESX 2.x in /etc/vmware

If you wish to manage vSwitches from the Service Console, use the new esxcfg-vswitch command (see the sketch below)
As the vmkernel now has a full IP stack, vSwitches are used for IP storage such as NAS and iSCSI. You can still create special vmkernel switches for such things as VMotion
It is still recommended that you use a dedicated NIC for the Service Console, VMotion, and now IP Storage
ESX now has an IP firewall which is configurable through the GUI or command-line. This is because there is now a full IP stack on ESX, and because VMware's customers demanded one
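As a flavour of what the command-line looks like, here is a minimal sketch of building a virtual machine switch from the Service Console. The switch name, port group name and vmnic are just examples from my lab, not anything you have to use:

# list the existing vSwitches, their port groups and uplinks
esxcfg-vswitch -l
# create a new vSwitch
esxcfg-vswitch -a vSwitch2
# add a port group called "production" to it
esxcfg-vswitch -A production vSwitch2
# link a physical NIC (uplink) to the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch2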

What's new in vSwitches
The default is now 56 ports, configurable for up to 1016 ports (ESX 2.x vSwitches were fixed at 32)
Load-balancing options such as out-mac, out-ip and standby are configurable through the GUI
Port Groups are not just used for VLANs, but allow you to break up a vSwitch into smaller management units. These management units within a vSwitch can have different rules and regulations imposed on them - so think of Port Groups as a way of applying different network policies depending on which port group a VM is patched to
Theoretically, one vSwitch can have many NICs with many Port Groups, each port group configured for the Service Console, vmkernel IP Storage, VMotion, or Virtual Machines. Although this is possible, we are much more likely to want to maintain physically separate vSwitches by NIC card
Remember all this work can be done with the VI client through VirtualCenter; you no longer have to point to the ESX server to do this or access a MUI

Creating an Internal Only Switch


1. Select your ESX Host in the Inventory
2. Select the Configuration Tab
3. In the Hardware Pane, select Networking
4. Click the Add Networking link
5. Choose Virtual Machine and Click Next
6. Remove the X next to any network adapters
Note: Notice when you do this the preview window updates and states under physical adapters the words No Adapters. Notice also how the ESX kernel does some clever sniffing to tell you about IP ranges, to assist you in choosing the correct NIC for the right subnet/VLAN

7. Click Next
8. In the Port Group Properties dialog, type a friendly name for this connection, such as internal
Note: Notice the option to specify a VLAN ID for the port group - not significant in this case
9. Click Finish

Creating a NIC Team vSwitch


1. Select your ESX Host in the Inventory
2. Select the Configuration Tab
3. In the Hardware Pane, select Networking
4. Click the Add Networking link
5. Choose Virtual Machine and Click Next
6. Select two or more NICs

7. Click Next
8. In the Port Group Properties dialog, type a friendly name for this connection, such as production

Creating a VMKernel Switch for VMotion


Note: Configuring a VMKernel switch for IP based storage will be covered in the next chapter

1. Select your ESX Host in the Inventory
2. Select the Configuration Tab
3. In the Hardware Pane, select Networking
4. Click the Add Networking link
5. Choose VMKernel and Click Next
Note: In this case 3 of my 4 NICs are assigned (1 to Service Console, 2 to Production, 0 to Internal). The dialog gives me the option to take NICs assigned to other switches if I wish

6. Click Next
7. In the Port Group Properties dialog, type a friendly name for this connection, such as vmotion
8. Enable X Use this port group for VMotion
9. Set an IP Address and Subnet mask for VMotion
Note: I am going to use 192.168.2.201/24 for esx1.rtfm-ed.co.uk, and 192.168.2.202/24 for esx2.rtfm-ed.co.uk. I have one big gigabit switch into which everything is plugged. In an ideal world VMotion, VMKernel IP Storage and the Service Console would all be on separate discrete networks for performance and security
10. You will then get this message:

Note: At this stage I've been told VMware have no intention of allowing VMotion across routers. However, a VMKernel port like this could be used to access iSCSI and NFS devices, and in that case you might need to cross a router. For peace of mind, I always choose Yes and enter my details.

11. In my simple lab network I use the same router for ALL of my network traffic, so I make the VMkernel and Service Console default gateway the same value.

Note: You might like to confirm the DNS configuration tab. DNS is required for VMware High Availability and if the settings here are incorrect then you could have problems later. This is what my dialog box shows

Note: Confirm that VMotion is now enabled (this is a combination of a license and a VMkernel vSwitch enabled for VMotion) in the Summary tab of your ESX host(s)
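For reference, the VMkernel port and its default gateway can also be sketched from the Service Console. This assumes the port group vmotion already exists (created either in the wizard above or with esxcfg-vswitch) and uses my lab addresses, so adjust to taste:

# add a VMkernel NIC to the "vmotion" port group with an IP address
esxcfg-vmknic -a -i 192.168.2.201 -n 255.255.255.0 vmotion
# set the VMkernel default gateway (only needed if you must cross a router)
esxcfg-route 192.168.2.1
# list the VMkernel NICs to confirm
esxcfg-vmknic -l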

Increasing the Number of Ports on vSwitch


Note: ESX 2.x only allowed 32 virtual machines per vSwitch
The default is now 56
The maximum number of ports you can have is 1016
You can increase the number in fixed steps of 24, 56, 120, 248, 504 and 1016
Warning: Changing this value requires a reboot of ESX for it to take effect

1. Select your ESX Host in the Inventory
2. Select the Configuration Tab
3. In the Hardware Pane, select Networking
4. Click the Properties link next to the vSwitch which contains the Port Group labelled production (in my case this is vSwitch2)
5. In the dialog box click the Edit button

6. Under the General Tab, click the pull down list for the number of ports:

7. Click OK
Note: The message behind reads, "Changes will not take effect until the system is restarted"

Setting Speed & Duplex


1. Select the Configuration Tab
2. In the Hardware Pane, select Networking
3. Click Properties of vSwitchN
4. Click the Network Adapters Tab

5. Select the NIC adapter and Click the Edit button

6. Select the Speed & Duplex and click OK
Note: You can now do this with the Service Console NIC without the need to edit /etc/modules.conf

Note: Remember IEEE standards now recommend that gigabit-to-gigabit connections be set to auto-negotiate
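The same setting can be made from the Service Console with esxcfg-nics. A minimal sketch - vmnic1 here is just an example adapter, check esxcfg-nics -l for your own:

# list the physical NICs with their current speed and duplex
esxcfg-nics -l
# hard-set a NIC to 1000/full
esxcfg-nics -s 1000 -d full vmnic1
# or put it back to auto-negotiate
esxcfg-nics -a vmnic1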

Managing Service Console Networking


Important: OK, well, on my 4-NIC box all my NICs are fully allocated. If I want to show/configure other possibilities, I might have to free up NICs (say by removing one NIC from the production NIC team, or by removing the VMotion switch I just created to free up its NIC). This is what my cleanly installed ESX server looks like now from a networking perspective

Creating a Switch for the Service Console
Note: There's always been a single point of failure on the Service Console network in ESX, unless you were prepared to edit files and make the Service Console part of your virtual machine network. If you have free network adapters, it's very easy to protect the Service Console network by adding another Service Console switch
1. Select your ESX Host in the Inventory
2. Select the Configuration Tab
3. In the Hardware Pane, select Networking
4. Click the Add Networking link

5. Select Service Console
Note: As we already have a Service Console port, the system labels this second service console port as Service Console 2

Click Next
6. Complete the IP Settings as befits your network infrastructure

7. Click Next Note: This appears as another vSwitch

Confirm you can connect to this new port with ping/PuTTy and by pointing the VI client to this IP/Name.
Note: As another way of protecting the Service Console network, try just adding an additional vmnic to it - by doing so you will create a bond for the Service Console. However, this configuration is set as Standby, not for MAC or IP load-balancing. This is also a safe procedure for switching the NIC on the Service Console switch: add in the 2nd NIC and remove the 1st NIC and you shouldn't lose connectivity to the Service Console.

Changing the Service Console's Network Settings

Note: We're going to modify the primary connection for the Service Console. This is likely to cause a disconnection. If this change goes horribly wrong, it is not the end of the world - we will be able to use the backup Service Console 2 network connection. If you haven't got a backup connection and proceed with this task you will get this warning message:

The safest method is to connect to the ESX host with your backup Service Console 2 network, and then change the Service Console Properties (this way you don't get disconnected in the process of changing the IP!)

1. Close your VI Client
2. Open the VI Client using the Backup Service Console IP

3. Select the Configuration Tab
4. In the Hardware Pane, select Networking
5. Select Properties of vSwitch0/vswif0 with the Port Group name of Service Console
6. In the dialog box, select the Port Group of Service Console, and click Edit

7. In the Service Console Properties Dialog box you can change your IP settings
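If you have lost GUI access altogether, the Service Console IP can also be changed at the console itself with esxcfg-vswif. A hedged sketch, assuming vswif0 is your primary Service Console interface and using made-up addresses:

# list the Service Console interfaces and their current addresses
esxcfg-vswif -l
# change the IP address and netmask of vswif0
esxcfg-vswif -i 192.168.3.101 -n 255.255.255.0 vswif0
# the default gateway for the Service Console lives in /etc/sysconfig/network
grep GATEWAY /etc/sysconfig/network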

Creating a vSwitch with VLAN Support


Note: As in ESX 2.x, Port Groups can be used for VLAN support, where multiple VMs connect to a single switch which is divided into Port Groups representing the different VLANs available
1. Select the Configuration Tab
2. In the Hardware Pane, select Networking
3. Click Properties of the vSwitchN hosting your production Port Group, in my case this is vSwitch2
4. In the vSwitchN Properties dialog, click the Production Port Group, and click the Remove button

5. Then click Add, and in the Add Network Wizard, choose Virtual Machine
6. In the Port Group Properties dialog, type a friendly name like Accounts Network
7. In the VLAN ID (Optional) field type: 95
Note: Of course, this is specific to your implementation

8. Click Next and Finish, and repeat this for other VLAN IDs
Note: Confirm your VMs are patched into the correct network. For illustration purposes I patched some different VMs into these VLANs for the screen dump below:

Note: This VLAN system was set up on an upgraded ESX system. I'm worried that despite setting the VLAN number the UI shows VLAN ID *. This normally indicates no VLAN tagging has been set. In a clean install this didn't happen
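If you see the same oddity, one way to double-check what the host really thinks is to look at the port group from the Service Console. A minimal sketch, assuming the port group and switch names used above:

# list all vSwitches, port groups and their VLAN IDs
esxcfg-vswitch -l
# if the tag is missing you can (re)set it on the port group
esxcfg-vswitch -v 95 -p "Accounts Network" vSwitch2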

Conclusion
This section on networking is by no means definitive. It's just meant to get you up and running

I've decided not to go through ALL the options (and dialog box settings) - that would take too long, and is for another guide designed for newbies
As you have seen earlier with my tar-ball upgrade example, you can modify switches through the command-line using the esxcfg-vswitch and esxcfg-vswif utilities
But as one last illustration of the flexibility introduced to networking in ESX, here's a single vSwitch set-up with ALL the features demo'd above, with its companion internal switch. This is not recommended, but if you had a development machine with only one NIC this would be possible

At the end of this module I reset my systems back to 4 vSwitches (one each for the Service Console, Production, VMotion, and Internal)
This leaves 1 NIC left out of my 4 (production is not in a bond) so I can have a NIC free for a VMKernel IP Storage vSwitch, covered in the next module. (I'm trying to obey a separate NIC/vSwitch for each component, despite the fact that each physical NIC plugs into the SAME gigabit pSwitch.)
Lastly, if you wish to see your network adapters by Make, Model, Speed, Duplex, vSwitch and IP Range, then look at the Network Adapters section in Select ESX Host, Configuration Tab, Hardware Pane, Network Adapters

Module 6: Storage (VMFS, SANs, NFS, iSCSI)


Executive Summary
Links for sources of NAS and iSCSI
Apart from the normal distributions of Linux, which will all support NAS, you might be wise to look at these instead:
http://www.nslu2-linux.org/wiki/OpenSlug/HomePage (NAS/iSCSI)
Windows Services for UNIX (NAS on Windows)
http://iscsitarget.sourceforge.net/ (iSCSI)
http://www.openfiler.com/ (NAS/iSCSI)

Links for Virtual Appliances
http://www.vmware.com/vmtn/appliances/directory/55 (NAS)
http://www.vmware.com/vmtn/appliances/directory/364 (iSCSI)
http://itservices.ne-worcs.ac.uk/pub/vmware/iscsitarget.zip (iSCSI)

General
ESX now scans up to 256 LUNs; in other words Disk.MaxLUNs is now 256
VMFS volumes are now to be found in /vmfs/volumes/volume_name
If you want to partition a disk using fdisk you would use fdisk /vmfs/disks/vmhbaA:T:L:P. There is no need to relate the vmhba syntax to /dev/sdN
vmkpcidivy only exists to assist you in relating the vmhba syntax to the Linux syntax of /dev/sd?. So you can still type: vmkpcidivy -q vmhba_devs
VMFS now supports directories; as such, all of a virtual machine's files are now in ONE location at /vmfs/volumes/volume_name/virtualmachine_name
This will make backups easier, and if you're using external storage all of your VM's data files are outside of the metal box - so if you wipe ESX, all you should have to do is register the VMs (oh, and set up all your networking again)
This means we now have a mix of big and small files. As VMFS uses a relatively large cluster size, some whiz-bang stuff has been done at VMware Engineering to make sure small files don't waste space (something like sub-clustering, I imagine)
You can still extend/span a VMFS volume, but it suffers from the same problems as ever. Even if your SAN can dynamically make a LUN bigger, there are no tools (as far as I know) that can make VMFS expand to fit the new space
VMware still only provides fault-tolerance on redundant paths to a SAN array. It uses only one path at a time (as in ESX 2.x) and does not load-balance the disk I/O

NAS Storage
NAS can be presented to the VMkernel like any other storage - when properly configured the VI Client sees it as yet another DataStore
Requires a VMKernel switch port, if not already created
In a production environment this is good enough for ISOs
Less good is using NAS as storage for templates
Not recommended: NAS for storing virtual disks. It is only good for test/dev environments where performance is not critical, and where you cannot justify SAN/iSCSI for a ten minute demo of VMotion, DRS, and VMware High Availability (all of which require shared storage)

Only NFS is supported; most NAS boxes support both NFS and SMB/CIFS
If you must use Windows you can use Services For UNIX to emulate NFS
Only NFS v3 with TCP is supported - no older formats such as NFS v2 with UDP, which was around at Red Hat Linux 7.2
The Service Console cannot be your NFS server because it hasn't got NFS v3 with TCP. It would be a mad idea anyway - I should know, I've tried it!!!
You can configure the Service Console to use NFS, but there isn't a huge point in doing so (except in troubleshooting) as we cannot browse to Service Console storage, not even for ISOs. If you configure both it is safe, because NFS handles the file locking (which would normally be done by the VMFS file system)
There are some files associated with virtual machines that cannot be stored on NAS - for example I found RDMs (Raw Disk Mapping metadata files) must reside on VMFS volumes, even in physical compatibility mode
Loss of NFS Mounts in the GUI: After a reboot, a datastore that was previously configured in the UI may no longer be visible. When the ESX Server host boots, it attempts to remount the existing datastores, but if the mount attempt fails because the server is unavailable, the operation is not retried. Workaround: From the service console, run esxcfg-nas -r. Then, restart the vmware-hostd agent with service mgmt-vmware restart at the Service Console

iSCSI Storage
At last it is here, after countless people bored us all senseless on the forum boards about whether ESX 2.x supported it, and when it was coming!
This said, if you have been using ESX for a while, you probably have a nice SAN already. So what does iSCSI offer you? Well, perhaps the possibility of shared storage, VMotion and ESX in small locations where previously it was too costly? With 10G Ethernet on its way, and SAN currently at 4Gb, the performance could be significant.
Two formats - hardware (QLA4410 card only at the moment!) or software based. No prizes for guessing which performs better. This is because iSCSI adapters have a TCP Offload Engine (TOE) which stops the VMkernel from doing too much of the work. With the software method ordinary Ethernet NICs are used, and more TCP work has to be done by the VMkernel
As iSCSI is still a network you might face routing issues!
Boot from iSCSI is supported, but for hardware only
You can create VMFS volumes on iSCSI
RDMs are supported too
And no surprises - VMotion, DRS, and VHA are supported as well
It has its own addressing system which is not unlike the WWN of a SAN
Static and Dynamic discovery of LUNs is supported
CHAP is supported as an authentication method - we're talking SCSI commands over IP, so we need a physically secure network (as the traffic is sent unencrypted) and some rudimentary access control
Currently, it's the Service Console that sends & receives this authentication - this can raise some interesting networking issues. Do you really want to plug your Service Console NIC into the SAME network as your iSCSI storage?
If you lack iSCSI hardware you can emulate the whole lot with Fedora Core 4/5 (together with an iSCSI emulator); if you must use Windows, then check out Stringbean Software's WinTarget which does the same for Windows Server
You will need a VMkernel switch for this task - this was already set up in the previous module

Setting up NFS/NAS on RedHat Linux


Note: I used RHEL AS Release 3 (Taroon Update 2) as my NFS/NAS Server
I gave the RHEL machine a 10GB boot disk, and a 50GB data disk (mount point of /data) where I will store ISOs and Virtual Disks
You can add in the NFS share (the term is actually export) by either an IP address or host/DNS name. So make sure you have name resolution if you do!
1. Logon to your NFS file server and edit the /etc/exports file like so:
/data 192.168.2.201(rw,no_root_squash) 192.168.2.202(rw,no_root_squash)
Note: This is all one continuous line; I've used returns for readability purposes. This allows the servers 192.168.2.201/202 to access the mount/volume called /data. rw gives these servers read and write access. The default is that the root user does not get full access to the volume. The option "no_root_squash" allows applications like VirtualCenter read/write access to the volume. Normally, root access to NFS volumes is squashed (in other words, denied)
You can use wildcards with the IP address if you wish - as I only have 2 ESX servers that need access I haven't bothered. I've not given the Service Console access to this share as there is no need to do so
You can specify additional security using the /etc/hosts.allow and /etc/hosts.deny files associated with the portmap service/daemon
2. Start your NFS service/daemon with: service nfs start
3. To make the NFS service start automatically at boot-up use the chkconfig utility, which allows you to make safe changes to run-levels: chkconfig --level 345 nfs on
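Before moving on, it is worth checking the export is actually visible from the Linux box itself. A quick hedged sanity-check sketch (standard NFS tooling, nothing ESX specific):

# re-read /etc/exports without restarting the daemon
exportfs -ra
# list what is currently exported and with which options
exportfs -v
# ask the local mount daemon what it is advertising
showmount -e localhost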

Setting up NFS/NAS on Windows with SFU


Note: As part of my research into NFS I took a look at Windows Services for UNIX (SFU). This allows you to run an NFS service on Windows (NFS Version 3 on TCP) which is compliant with the ESX IP Storage options for DataStores. This means you can store ISOs, FLPs and Virtual Disks (in the monolithic format) on a Windows File Server. With more than one ESX host using the Windows Share/DataStore, advanced features like VMotion and DRS are possible without the need for a SAN or iSCSI! (hurrah!)
Alternatively, you could always install Linux and save yourself this hassle!

Before I begin the setup instructions, here are some useful links:
Homepage: Windows Services for UNIX
Download: Windows Services for Unix
KB: Setting Permissions In Microsoft Services for UNIX v3.0
KB: How to perform maintenance and ancillary tasks with SFU

Installing SFU
Note: These instructions are based on Windows 2003 without a Service Pack. I have also successfully made SFU work on Windows 2000 with Service Pack 4. Windows does not allow unchallenged access to shares without authentication. As Windows and the ESX Host do NOT share a common end-user database, we need some method of mapping the users on the ESX Host to Windows. The method I have chosen is a simple mapping of the accounts using the files present on the ESX Host
1. Copy the passwd/group files from any one of your ESX servers; these are both held in /etc
2. Extract the SFU package and run the MSI package called SfuSetup.msi
3. Choose a (c) Custom Installation
4. Expand Authentication tools for NFS, User Mapping Service
5. Next your way through the setuid and case-sensitive options dialog box
6. Under Local User Name Mapping Service, select (c) Password and Group files
7. Type the name and path for the passwd/group files, for example:
c:\etc\passwd
c:\etc\group
8. Select the Windows Domain
Note: In my case I used local users & groups on a Member Server
9. Next, and accept the location for the install
Note: Watch the status bar, check your email, make a cup of coffee, wonder how long you spend watching status bars... oh, and at the end of this - reboot your Windows/NFS Server

Creating a User Mapping (Between Administrator and Root)

1. From the Start Menu, Windows Services for UNIX
2. Run the MMC, Services for UNIX Administration
3. Select the User Name Mapping node
4. Choose the Maps option and under Advanced Maps, click Show User Maps
5. Click the List Windows Users button - and select Administrator
6. Click the List Unix Users button - and select root
7. Click the Add button
Note: When the warning box appears choose OK

8. At the top of the console choose the Apply button

Sharing out a folder

1. On the Windows/NFS Server, right-click a folder, and choose Share and Security
2. Select the NFS Sharing tab
3. Choose (c) Share this folder
4. Click the Permissions button, Select X Allow root access
5. Change the Type of Access to Read-Write
6. Choose OK to exit the sharing dialogs
Note: Again, if you are just using this share for ISOs you could leave this as read-only
Warning: Watch out foR CaseSensitivity on your sHaReNaMeS. If you want to remain sane make them all lower-case with no spaces

Confirming the Windows/NFS Server is functioning

Note: There are a number of tools we can use at the Windows/NFS server to see if things are working before adding in the NFS Share as IP Storage in the VI Client
rpcinfo -p (lists listening ports on the server; notice TCP, NFS v3, Port 2049)
program version protocol port
--------------------------------------------------
100000 2 udp 111 portmapper
100000 2 tcp 111 portmapper
351455 1 tcp 904 mapsvc
351455 1 udp 905 mapsvc
351455 2 tcp 906 mapsvc
351455 2 udp 907 mapsvc
100005 1 udp 1048 mountd
100005 2 udp 1048 mountd
100005 3 udp 1048 mountd
100005 1 tcp 1048 mountd
100005 2 tcp 1048 mountd
100005 3 tcp 1048 mountd
100021 1 udp 1047 nlockmgr
100021 2 udp 1047 nlockmgr
100021 3 udp 1047 nlockmgr
100021 4 udp 1047 nlockmgr
100021 1 tcp 1047 nlockmgr
100021 2 tcp 1047 nlockmgr
100021 3 tcp 1047 nlockmgr
100021 4 tcp 1047 nlockmgr
100024 1 udp 1039 status
100024 1 tcp 1039 status
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs

showmount -e
Exports list on nfs2.rtfm-ed.co.uk:
/iso   All Machines
/data  All Machines

ls -l
D:\sources\vmware\os-isos>ls -l
total 11322628
1392128 -rwxrwxrwx+ 1 +Administrators 513 712769536 Feb  1  2005 novel6.iso
1251856 -rwxrwxrwx+ 1 +Administrators 513 640950272 Aug 25  2004 nt4.iso

Adding a NFS Mount Point


Note: I've sometimes noticed that the VI client incorrectly reports the amount of disk space/size in these mount points, but normally it does update - this is probably a GUI issue with the hostd
1. Login with the VI client
2. Choose the ESX Host from the list, and select the Configuration Tab
3. In the Hardware pane, select Storage (SCSI, SAN, NFS)
4. In the right-hand side of the VI Client, click Add Storage
5. In the Wizard, choose (C) Network File System
6. In the Locate Network File System page complete the dialog as follows, and click Next:
Server: Name of your NFS server, in my case nfs1.rtfm-ed.co.uk
Folder: Name of the mount/volume you wish to access, in my case /data
DataStore Name: nfs1-data (or anything you deem suitable)

Note: I also added in DataStores from my Windows server called /data and /isos
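The same mounts can also be added and checked from the Service Console with esxcfg-nas (the -r switch appears again later as the remount workaround). A minimal sketch using my lab names - substitute your own server, share and label:

# add an NFS datastore: -o is the NFS host, -s the exported folder, then the label
esxcfg-nas -a -o nfs1.rtfm-ed.co.uk -s /data nfs1-data
# list all NFS datastores the host knows about
esxcfg-nas -l
# force a remount of all configured NFS datastores (handy after an outage)
esxcfg-nas -r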

Note: At the end of the process this is what I have:

These also appear in the Inventory View, in the DataStores View

and also in the Service Console under /vmfs/volumes

and most importantly they are accessible to Virtual Machines:

Note: It can take some time for the NFS mount to appear. The same folder path is the kind of thing you would find in a hardware NAS like NetApp. You could use the X Mount NFS as read-only option for ISOs. But if you want to use it for creating virtual disks, well, it would have to be left unchecked and r/w would have to be enabled in the /etc/exports file
If you receive an error check for the following:
Permissions in /etc/exports
FQDN/Hostname Resolution
Routing Settings on the VMkernel Switch
If you create VMs on NAS storage and that NAS storage is down/unavailable your VMs will be flagged up like this:

Setting iSCSI (Software Emulation)


Note: The software based initiator (client) will use an ordinary network card to connect to the iSCSI Target (server)
I don't have any physical iSCSI equipment, so we will use a software iSCSI target and the software iSCSI initiator
There are two discovery methods to find LUNs - automatic and manual. Automatic sends a SCSI instruction down the wire to the IP address of the specified iSCSI target to find LUNs
You can also enable authentication (CHAP Secret) to secure it

Once correctly configured, the VI client shows this as a vmhba40 device
VMware Appliances: Dave Parson from Manchester, UK, who is active on the forums, has put one together. Check out this forum thread:
http://itservices.ne-worcs.ac.uk/pub/vmware/iscsitarget.zip
http://itservices.ne-worcs.ac.uk/pub/vmware/iscsitarget.md5
Alternative Mirrors:
http://chaz6.com/static/files/vmware/hosted/iscsitarget.zip
http://chaz6.com/static/files/vmware/hosted/iscsitarget.md5
Alternatively, you might want to look at OpenFiler, which is a Linux build designed for storage and includes iSCSI support
If you are looking for a Virtual Appliance from the VMware Support site this is available: http://www.vmware.com/vmtn/appliances/directory/364

Setting up iSCSI with Fedora Core 5 and iSCSI Target


Note: Ideally you will set this up on a physical system, but if you are just doing this for evaluation purposes or because of re-certification reasons a virtual machine will suffice. iSCSI will present a LUN to ESX just like a SAN would, so I would recommend at least one LUN for your boot disk, and two or more disks to act as iSCSI Targets (required if you are setting up clustering within the VM). I have created a VM-based iSCSI box running on an ESX host; the host in turn connects to the VM iSCSI box and you can format it as VMFS. So my VM offers up VMFS partitions to the same ESX host on which it resides! Crazy, huh!
Acknowledgements: I would like to thank Geert Baeke and Anze Vidmar respectively for their help - in this documentation I borrowed extensively from both of these links to get my iSCSI box working. It wouldn't have worked without their help.
1. Download and Install Fedora Core 5
Note: Do a standard installation of FC5 without the GUI. Make sure you enable Software Development - we will need tools like cc to compile iSCSI Target for our kernel. Remove X-Windows support as you see fit. If you do enable a firewall, you will have to enable TCP Port 3260. You can change the configuration of the firewall after the install with the setup command
2. Confirm your current kernel release with uname -r

3. Download and update your kernel with yum -y update kernel kernel-devel
Note: Yum is an automatic updater and package installer/remover for rpm systems. It automatically computes dependencies and figures out what things should occur to install packages. It makes it easier to maintain groups of machines without having to manually update each one using rpm. The -y chooses yes to all the prompts that occur.
4. Confirm the prompts from yum, and then reboot to make sure your updated kernel is in use
5. Check your kernel release again with: uname -r
6. Using the kernel name derived from above, download the latest kernel sources with yum -y install kernel-devel-kernel-release-value
Note: In my case this was yum install kernel-devel-2.6.17-1.2139_FC5. Yum will download the kernel source code and put it into the /usr/src/kernels directory
7. Next we will download and untar the iSCSI Target software to the /usr/local directory (from http://iscsitarget.sourceforge.net/)
Note: If you want to download directly to your iSCSI box, you could use a mirror hosted at Kent University in the UK with:
cd /usr/local
wget http://kent.dl.sourceforge.net/sourceforge/iscsitarget/iscsitarget-0.4.13.tar.gz
tar xvzf iscsitarget-0.4.13.tar.gz
and then cd iscsitarget-0.4.13
8. Compile and Install the iSCSI Target Software with
export KERNELSRC=/usr/src/kernels/kernel-release-value
make && make install
Note: In my case the export command was export KERNELSRC=/usr/src/kernels/2.6.17-1.2139_FC5
If you get an error stating that cc was not found, this is because you did a standard install of Fedora Core 5 which doesn't include the programming tools. cc is the program that allows you to compile programs from the C programming language/environment

9. Copy the sample configuration file of iSCSI Target to the /etc directory:
cp iscsitarget-0.4.13/etc/ietd.conf /etc
10. Edit the ietd.conf file with nano -w /etc/ietd.conf
Note: This is quite a lengthy file with verbose information to explain the syntax. In this file we specify an iSCSI Target ID (using a combination of a reversed domain name and a date for uniqueness). It also allows you to set incoming and outgoing authentication and an alias for the target itself. I have two LUNs to make available without authentication. This is how my ietd.conf reads:
Target iqn.2006-06.uk.co.rtfm-ed:storage.lvm
# Users, who can access this target
# (no users means anyone can access the target)
#IncomingUser
#OutgoingUser
# Lun definition
# (right now only block devices are possible)
Lun 0 Path=/dev/sdb
Lun 1 Path=/dev/sdc
# Alias name for this target
Alias iSCSI
# various iSCSI parameters
# Not in use..
11. Start the Service and Configure it to start on boot-up with:
service iscsi-target start
chkconfig iscsi-target on
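Before pointing ESX at the target, it is worth confirming the daemon really is listening and has picked up your LUNs. A hedged sanity-check sketch - the /proc path is what the iSCSI Enterprise Target package exposes on my build, so treat it as an assumption:

# is the target listening on the standard iSCSI port?
netstat -tln | grep 3260
# list the LUNs the target has loaded from ietd.conf
cat /proc/net/iet/volume
# later, connected initiators show up here
cat /proc/net/iet/session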

Setting up a Windows iSCSI Target with Stringbean WinTarget


Note: Stringbean WinTarget allows Windows to emulate an iSCSI Target
It does what iSCSI Target does for Fedora Core 5
However, it's not free/open-source, and it was recently acquired by Microsoft
Documentation on how to set this up will follow when I get approval for an eval

Connecting ESX to iSCSI (Software Adapter)

1. Login with the VI client
2. Choose the ESX Host from the list, and select the Configuration Tab
3. In the Hardware Pane, choose Storage Adapters

4. Select iSCSI Software Adapter and Properties

5. In the General Tab, click the Configure button
6. Select X Enable and Click OK
Note: You will receive this warning

In my case my VMkernel Port is on the SAME network as the Service Console. So if authentication requests happen they will go down the same network path to the iSCSI system, which is also on the same network! This isn't recommended in production, but heck, this is a test/dev environment... so I will choose No
Enabling does take a little time, so follow its progress in the Task Pane:

Note: If you close the dialog box the background information will update to show that it is enabled, and it will automatically generate a unique iSCSI Initiator name (iqn.1998-01.com.vmware:esx1-62dcad1) and Alias (esx1.rtfm-ed.co.uk) for you

7. Select the Dynamic Discovery tab, click the Add button, and type in the IP Address of your iSCSI Target
Note: If you close the dialog box and re-open it, the Dynamic Discovery tab will be refreshed
8. Now force a rescan by clicking the Rescan link in the top right-hand corner, and choose OK to the Rescan dialog box:

Note: If you select the vmhba40 software-device you should see the LUNs you presented in the conf file like so:
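For the command-line minded, the enable-and-rescan sequence can be sketched from the Service Console as well. This is hedged - esxcfg-swiscsi and esxcfg-rescan are the tools I have used for this, and vmhba40 is simply the name the software initiator was given on my host:

# enable the software iSCSI initiator
esxcfg-swiscsi -e
# check whether it is enabled
esxcfg-swiscsi -q
# rescan the software iSCSI adapter for new LUNs
esxcfg-rescan vmhba40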

Formatting iSCSI Volumes with VMFS

Note: This is the same as if you were formatting any LUN (Local, SAN or iSCSI)
1. Select the Configuration Tab
2. In the Hardware Pane, choose Storage (SCSI, SAN, NFS)

3. Click the Add Storage link
4. Choose Disk/LUN
5. Select the LUN you wish to format:

6. Set a datastore name such as iscsi1-lun0
7. Set your maximum file size parameters
Note: At the end of the process I now have this

Adding in an iSCSI LUN with an RDM File

Note: Again this is exactly the same as if you were using a SAN
1. Click Edit Settings
2. Click the Add button
3. Choose Disk from the list of devices
4. Select Mapped SAN LUN
5. Select your LUN from the list:

6. Choose where you want to store the RDM file
Warning: RDMs must go on a VMFS volume. If your virtual disk resides on NAS storage you MUST choose Specify Datastore, and select a VMFS Volume
7. Select a Compatibility Mode and Virtual SCSI Node
8. Click Next and Finish
Note: You can now run disk management and write a disk signature to the LUN, and format it as you see fit.
Note: vmkfstools now has a query switch (-q) which allows you to see the metadata behind the RDM file like so:
vmkfstools -q /vmfs/volumes/local-esx1/server1/server1_1.vmdk
Disk /vmfs/volumes/local-esx1/server1/server1_1.vmdk is a Non-passthrough Raw Device Mapping
Disk Id: vml.010001000020202020564952545541
Maps to: vmhba40:0:1:0

Windows on a Physical Machine Accessing an iSCSI LUN

Note: This really isn't a VMware issue but it's frequently asked
It's more a software issue within Windows - Windows doesn't natively ship with drivers for iSCSI access (whereas many Linux distributions do)

You have a choice: you could use the free Windows iSCSI Initiator or pay for one such as RocketDivision StarPort (which has lots of other neat features)
The free iSCSI Initiator is very easy to configure

1. In Windows, download the Microsoft iSCSI Software Initiator
Note: Make sure you download the right version for your platform!
2. Double-click the package, next your way through the installation and reboot!
3. Double-click the Microsoft iSCSI Initiator icon on your desktop
4. Select the Discovery Tab
5. Under Target Portals click the Add button
6. Type in the IP Address of your iSCSI Target
7. Select the Targets Tab and click the Log On button
8. In the dialog box that appears enable X Automatically restore this connection when the system boots
Note: The dialog box will refresh and show you the targets connected
9. If you select the Details button and then the Devices Tab you will see the disks/LUNs to which you have been allowed access
Note: If you then proceed to disk management you will be able to put a disk signature on the LUNs and format with NTFS

Module 7: Deployment and Migration


Note: Well, I don't have a SAN. So before starting this module, I created a couple of virtual machines on my NAS box(es), ready for migration (cold/hot migration). I guess now that I have a software iSCSI system I could be using iSCSI instead
If you have followed this guide serially you will already be able to do VMotion. The VMkernel switch and shared storage are ready to rock and roll
Yes, that's right - you can VMotion even if your shared storage is just NAS. I do it all the time. Creating new files on NAS is dog slow, but once the VM is up, with the disks not moving, VMotion works just fine for demonstration purposes. It also allows me to play with DRS and VHA
Oh, and my NAS? It's a Dell Optiplex GX1 with a PIII processor and an IDE drive! Nuts, eh?
This is what my VC looks like now:

4 VMs on each box, each running off my NAS box. In fact, I used the templating features to create them after installing the first VM.

Executive Summary
As mentioned before, VM-3 introduces 4-vCPU and 16GB RAM options
Virtual Disks are automatically named for you, like server1.vmdk, and subsequent disks become server1_1.vmdk
Snapshots now replace disk modes - they allow for multiple levels of undo, and hot backup as before

Cleanly installed VMs start off with a vlance NIC, and this is automatically upgraded/morphed into the Enhanced Flexible Adapter. With cleanly installed VMs the morphing process retains your IP settings. This is much better than the old vlance-to-vmxnet process after installing VMware Tools, where your IP settings would be lost
Templates are as before, but you have a few more options:
o Clone to Template - This copies the VM, like a conventional template in VC 1.x. You now have a choice of compacting into a series of small files (now referred to as the sparse format, rather than COW) as before (NFS/NAS will work), or remaining in the monolithic format (should really be VMFS)
o Convert to Template - This simply marks or unmarks a given VM as a template. The idea of this is that you can quickly unmark it as a template, update your build with new software/patches/service packs, and convert it back into a template without going through the whole import/export process

Incidentally, you still have the Clone option as you did in VC 1.2 - nothing new there really
There is no specific location for templates in the VI GUI as there was in VC 1.2. I tend to create a folder to hold them in the Inventory View of Virtual Machines and Templates. Of course, where they get physically stored - iSCSI, SAN, NAS - depends on your resources. Everyone can afford NAS (even if it's just a PC with shares on it like I have) so there's no excuse for not having a centralised repository for templates
Lastly, whatever I do - clone or convert - and even if the template is stored on shared storage, VC does seem to remember where that VM once resided (actually you're asked this in the wizard). Although it doesn't appear in the main inventory window, it does appear in the ESX server's Virtual Machines tab.

Creating a Folder Holding Templates


1. In the VI Client switch to the View called Virtual Machines and Templates

2. Right-click your DataCenter, in my case RTFM Education
3. Choose New Folder
4. Type in a name; in my case I used _Templates to make sure it was always at the top of the list in the view

Using Clone to a Template


1. In the Inventory View, Hosts and Clusters
2. Select a VM you have powered off as your source for the template
3. In the Summary Tab and Commands Pane choose Clone to a Template

4. Type in a friendly name like Base W2K SP4 Build, select the _Templates Folder as your location and Click Next
Note: I will choose esx1.rtfm-ed.co.uk as the location for this template. But as my physical storage will be my NAS, and more than one of my ESX hosts can see that storage, it's not a problem
5. Select the physical location for storing the template files (shared storage). I'm going to use my NAS shares

Note: The Advanced button shows you the size of the data
6. At this point you have the choice of Normal (all storage formats) or Compact (VMFS only)

Note: As I lack space and I need to keep my templates as small as possible, I'm going to risk the Compact format
7. Click Finish

Note: On my NAS this takes a very long time, but once completed it appears in the Inventory like so:

Note: Creating a new VM from a template... mmm, I don't have to show you this, do I?

Using Convert to a Template


Note: This method is newer and offers a very quick way of making a template, and of unmarking it back as a VM to update your template as software changes
The only thing that feels odd when you do this in the Inventory view of Hosts & Clusters is that the VM disappears from the list and you can only see it in the view of Virtual Machines and Templates. This isn't a bug, it's by design - but it just feels a bit odd. It's like hey, where's my VM gone... oh, there it is in another view. It actually makes perfect sense, because you are not creating (cloning) a new set of files, just marking an existing VM as a template
1. In the Inventory View, Hosts and Clusters
2. Select a VM you have powered off as your source for the template
3. In the Summary Tab and Commands Pane choose Convert to a Template

Note: Notice how the VM disappears from the list

4. Change the Inventory View to Virtual Machines and Templates; it will be located in the same VM Folder where it began, like so:

Note: If it pleases you, move it to your _Templates folder and re-name it to something a bit more generic

Importing and Exporting Virtual Disks


Note: As you might gather, the old method of creating templates with ESX is now becoming increasingly legacy
Importing virtual disks has not changed
However, exporting disks has altered. You can still do it, but the -e switch has been deprecated (whatever that means!) and the functionality of -e has been rolled up into vmkfstools -i
So to do an export of a vmdk from monolithic to COW (now correctly referred to as the sparse format) you would do the following:
vmkfstools -i /vmfs/volumes/local-esx1/dc1/dc1.vmdk -d 2gbsparse /vmimages/dc1.vmdk
Notice how the source (monolithic) is specified before the destination (sparse). This is also a change to the way vmkfstools -e used to work
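Going the other way is just the same command without the sparse disk type. A hedged sketch - the paths and names are mine, and I am assuming the destination sits on a VMFS volume:

# import a sparse (exported) disk back to the default thick format on VMFS
vmkfstools -i /vmimages/dc1.vmdk /vmfs/volumes/local-esx1/dc1/dc1-restored.vmdk
# confirm the new files exist
ls -lh /vmfs/volumes/local-esx1/dc1/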

Manage VMs using Web Access


Note: OK, we said the MUI is dead, and it has gone. But we have web-based access via VC or the ESX Hosts. This allows operator access to VMs without the need for a VI client - see it as a client-less access method. How popular this front-end will be I don't know

Supported Browsers: IE 6.0 and higher, Mozilla Firefox for Windows/Linux 1.x or higher
Typing the raw URL of an ESX host will take you to a welcome page where:
o You can download and install the VI Client
o Logon to the web-page view
o Logon to the Scripted Installer page
Typing the raw URL of VC will take you to a welcome page where:
o You can download and install the VI Client
o Logon to the web-page view
In both systems:
o The main end-user logon is https://nameofserver.domain.com/ui
o As in VI-2, the user accounts and passwords depend on what you point at
o ESX: local users and passwords (root and password, for example)
o VC: domain/local Windows Accounts (administrator and password, for example)
o In this release, I've noticed that some of the built-in links use http to get to the login, and you find that the login fields are disabled - if you manually type https they work
o Also I've noticed the login process isn't horribly quick
I've played about with both and I don't have any particularly positive or negative report, so I am not going to dwell on this topic too much
The web-system uses auto-generated certificates. I don't know about you, but you might want to look at generating your own so they will be trusted and so on, to reduce the number of SSL pop-up warnings in IE and the like

Generating Remote Console URLs for VMs


Note: As an illustration of one of the uses of web-access, here's a neat feature
It used to be called Bookmark Virtual Machine, but it's been renamed to Generating a Remote Console URL
Basically, you can create a shortcut that points to a VM and send it to a user via email
You can put some GUI restrictions on it too
Be careful where you generate these shortcuts from. Do them at the ESX host and your user will need a local account on ESX; do them from VC and they will be able to use Windows Authentication
1. Login to the Web-Access View
2. Select your VM in the Inventory

3. In the right-hand column, click the link called Generate Remote Console URL

4. Under Generated URL, drag-and-drop the blue text to your desktop, in my case server4 Note: Forward and send to end-user

Note: Users get this kind of view

Based on their rights, users can power on a VM - it's a bit like giving someone ILO access I guess
This icon allows them to go into a full-screen view

Modifying a VM: Hot Pluggable Disks


Note:
• There have been some improvements in managing VMs themselves
• You can now point a VM to your workstation's CD/Floppy - this means there's no need for end-users to access the ESX host's DataStores to find ISOs (although I prefer that)
• You can also hot-add devices - currently this is limited to virtual disks, and only if your guest OS supports it
• You cannot currently hot-remove virtual disks without powering off the VM
1. On a Powered On VM

2. In the Summary Tab and Commands Pane choose Edit Settings

3. Click the Add button 4. Select Hard Disk and Click Next

Note: The other devices are marked as (unavailable) as they are currently not hot-pluggable
5. Add in the disk as you see fit - New, Existing, SAN LUN
6. Open a VC Console on the VM, and login as Administrator
7. Open Computer Management, and expand the +Storage folder
8. Right-Click Disk Management and Choose Rescan Disks
Note: This should make the virtual disk appear. You may have to, or may be prompted to, write a disk signature in Windows
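If you prefer the command line inside the guest, diskpart (built into Windows XP/2003 - my assumption is your guest has it available) can do the same job; rescan re-scans the buses for the newly hot-added disk, and list disk should then show it:

C:\> diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> exit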

Using Topology Maps


• Topology maps give you a graphical view of Host Resources and Virtual Machine Resources
• You can create a custom map to filter the information you need
• A good example of using maps is before you VMotion. You might want to see:
o Which hosts have common shared storage
o If VMs are connected to internal networks
o Whether a VM has a connection to an ISO/FLP
• These are either requirements for, or can stop, VMotion
• There are many views, many different ways of getting them, and filters as well
• I will lead with my personal favourite and describe what's there
• Like any map, it's only as good as the map reader - what we really need is Tom-Tom SatNav for Virtual Infrastructure: at the next available point, please turn left..

1. In the Inventory, View Hosts and Clusters
2. Select your DataCenter, in my case RTFM DataCenter
3. Click the Maps button on the tool bar

Note: I've found that sometimes the map feature doesn't tick off every component, so sometimes I have to put an X in the box:

Also occasionally I have to refresh to update when I filter or add new components

Note: Here's my topology map:

Interpretation: What do we see? Two servers, each with 2 VMs, using shared storage. But VM-Server3 is patched into an internal switch. Server4 has a CD-ROM connected to the ISOs DataStore - fortunately, every ESX server can see this storage, so it would not stop VMotion. This is a custom map which shows Host to VM and Host to DataStore relationships

Other Stuff
Well, I think you get the picture (no pun intended). I won't bore you with this. Other nice stuff?
• You can drag this graphic about, and the overview window is useful to zip around big maps - for those of you who have more than a 2-node virtual infrastructure!
• You can export maps (jpeg, bmp, emf)
• You can print maps

VMotion and CPU Compatibility


• Actually doing a cold/hot (VMotion) migration hasn't changed that much - except now we are asked to place a VM in a Resource Pool (this is our next module)
• The Wizard for Cold Migration is slightly different - it's been improved to deal with the issue of whether the VMX/Virtual Disks are staying where they are (shared storage) or if they are being copied (Local to SAN, or SAN LUN to SAN LUN)
• We still have Stepping/Family issues - or, the term that was introduced midway through ESX 2.x, CPU Compatibility Groups
• ESX 3.x is able to use new CPU features that allow such things as 64-bit Guest OSes:
o Intel's eXecute Disable (XD-bit)
o AMD's No eXecute (NX-bit)
• These CPU features mark memory pages as data, inhibiting attacks via software errors and buffer overflow attacks
• Not all OSes support XD/NX, but most of the modern ones do
• We will also see hardware assist features designed to reduce some of the CPU problems we have. Most of these hardware assist features will only be accessible to 64-bit OSes
• So despite efforts for an open-source, CPU-neutral standard, at the moment the processor guys are doing what they have always done - vendor specific enhancements
• So, in a nutshell, in the short term we are likely to see more processor incompatibility, not less. VMware are going to give us more control over this - to allow or disallow VMotion on certain criteria, and to mask CPU features from one ESX host to another to allow VMotion to occur when normally it would be prevented
• In other words, we will use software methods to hide attributes of a processor from a VM to make it more likely to be VMotion-able. Loss of attributes could affect performance (hardware assist) or security (NX/XD)
• So what are the new processor gotchas in VMotion?
o AMD vs Intel
o 64-bit vs 32-bit
o SSE3 vs Non-SSE3
o Hardware Assist vs Non-Hardware Assist
o NX/XD vs Non-NX/XD
• Of course, cold migrations are always possible despite these differences

Checking your ESX CPU Attributes with CPU Boot CD


Note: If you are unsure of your CPU attributes, you can use a boot CD ISO at your server to check them
1. From F:\images copy and unzip the cpuid.iso.gz file
2. Extract the cpuid.iso and burn it to a CD
3. Reboot your server with the CD

4. Choose Standard
Note: You should receive a report like this:
Reporting CPUID for 1 logical CPU...
Family: 0f Model: 02 Stepping: 7
ID1ECX     ID1EDX     ID81ECX    ID81EDX
0x00000400 0xbfebfbff 0000000000 0000000000
Vendor                   : Intel
Processor Cores          : 1
SSE Support              : SSE2
Supports NX / ED         : No
Hyperthreading           : Yes
Supports 64-bit Longmode : No
Supports 64-bit VMware   : No
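If the host is already up and running ESX, a rough alternative is to look at what the Service Console reports - my assumption being that the console view is close enough for a quick check, although it may not decode every attribute the boot CD does:

# Show the CPU feature flags as seen by the Service Console
# look for: lm (64-bit long mode), nx (No eXecute / XD) and pni (SSE3)
grep flags /proc/cpuinfo | head -1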

Enabling and Disabling CPU Attributes on a VM


Note: Documentation is a little sketchy on this feature - I guess there will be some level of working with VMware Support on this
1. Select your VM
2. In the Summary Tab, and the Commands Pane
3. Click Edit Settings
4. Select the Options tab in the dialog box
5. In the Settings Column, select Advanced
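For the curious: whatever you set in this Advanced dialog ends up as CPUID mask entries in the VM's .vmx file. The line below is only an illustrative sketch of what hiding the NX/XD bit might look like - the exact leaf/register and mask string is my assumption, and as I say above, this is territory to cover with VMware Support rather than hand-editing:

# Illustrative only - normally set via the GUI, not typed by hand
# Each character masks one bit of the register (most significant bit first);
# "-" = leave as the host default, "0" = hide the feature from the guest
cpuid.80000001.edx = "----:----:---0:----:----:----:----:----"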

Performing a Cold Migration


Note: I have seen a VM lose its Virtual Machine Start-Up and Shutdown options after a VMotion/Cold Migration
1. Power down your VM
2. In the Summary Tab, and the Commands Pane
3. Click Migrate to New Host

4. Select the Host you want to move this VM to
5. Select the Resource Pool
Note: At this moment we don't have a resource pool because we haven't created one yet. So just select your ESX host as the resource pool
6. Next, Select a DataStore

Note: This dialog box controls what happens to your files.
• You will generate network activity if you're using local storage and you choose Move virtual machine configuration files and virtual disks, as the source ESX host would have to copy them through the network to the destination's local storage. If you choose a SAN location you will create traffic on the SAN instead
• If you choose Keep virtual machine configuration files and virtual disks in their current location, the assumption is that the ESX servers share common shared storage and the files don't need to be copied. This is obviously much quicker. To do this in VC 1.x we had to know where the files were and make sure we didn't accidentally choose a different location and create an unnecessary file copy event
• You can use Move virtual machine configuration files and virtual disks to copy VMs from one location to another - say from local storage to SAN/iSCSI/NAS - like you might have done in the past. However, it's possible to keep the VM on the SAME ESX server but use this feature to change the storage location for the VM. This is different from VC 1.x, where you could only cold-migrate to another server and VC couldn't be used in this way. So it is now possible to relocate the storage of a VM without relocating the Virtual Machine to a different ESX host
7. Click Next and Finish
Note: During this time you will see this status in the Recent Tasks pane

This message appears even if you choose Keep virtual machine configuration files and virtual disks in their current location, which is a little confusing. But it doesn't stay on screen very long. You might notice that for a short while the VM is orphaned while the VMX file is unregistered from the source ESX host, and registered on the destination ESX host

I have not seen this happen in the GA release but it was there in Beta/RC1

Performing a Hot Migration (VMotion)


Note:

• The wizard for VMotion remains unchanged - you still get the option for high and low priority
• Prior to doing a VMotion, VC validates whether your VM and ESX host meet the requirements. In the dialog it will come up as Validating and then Validation Succeeded (see the quick network sanity check sketched after this note)
• If you pick a VM that doesn't meet the requirements for VMotion, this has to be resolved before progressing, and it displays as a red exclamation mark

If, on the other hand, it is only a minor problem you will get the caution symbol. You can proceed with a VMotion even with a caution

Example: VM connected to an internal switch

Example: VM connected to ESX Hosts CD-ROM

Example: VM only recently powered up and lacking a heartbeat signal

• If CD-ROMs are connected but stored on shared storage you can still VMotion
• Oh yes (after being asked 1,000s of times in VC 1.x), you can drag & drop as well - making it dead easy to move a virtual machine by accident!
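One quick sanity check worth knowing before you blame the validation step: the Service Console has a vmkping command which pings via the VMkernel stack (the one VMotion uses) rather than the Service Console stack. The IP below is just an assumed example of the other host's VMotion/VMkernel port address:

# Ping the destination host's VMotion (VMkernel) interface
vmkping 10.0.0.102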

1. In the Inventory, select a powered on VM
2. In the Summary Tab and in the Command Pane
3. Choose Migrate to New Host

4. Select the ESX host to move your VM to
Note: What you might find a little odd is that the server which currently hosts the VM is also listed. This didn't used to happen in the past. Needless to say, there is no point VMotion-ing a VM to the server where it already resides. I guess the dialog boxes were standardised to match up with the cold migrate wizard
5. Select the Resource Pool
Note: At this moment we don't have a resource pool because we haven't created one yet. So just select your ESX host as the resource pool
6. Choose either High Priority or Low Priority
7. Click Finish
Note: As in VC 1.x we get a status bar for a VMotion

Module 8: Resource Management


Executive Summary
Resource Pools
• In a crude way we have had resource pools for a while. After all, one ESX server offers a pool of resources to running virtual machines, and collectively with VMotion we could balance the consumption of resources in a farm. To an extent this continues with the concept of many servers in a DataCenter
• Resource Pools in ESX 3.x and VC 2.x take this idea to the next logical level
• We can create resource pools labelled by respective parts of the business (sales, management, R&D, distribution) and allocate different amounts of resources depending on their criticality
• We can create resource pools on a stand-alone ESX host, but where they come into their own is in a cluster of ESX hosts. With this model we begin to care less and less about the physical host - seeing the ESX hosts as collectively offering an amount of CPU power (in GHz, not a percentage) and RAM, which we then carve up into pools that suit our needs
• Currently resource pools only handle CPU and Memory resources - disk and network allocations are dealt with using the conventional methods
• You can only create resource pools on the right-click of an ESX host or a cluster. The option doesn't exist on the right-click of a DataCenter. See the DataCenter like the old farm concept - it's merely a management unit/container
• For me, resource pools make VMware feel like Citrix. With both, you care less and less about the particular server and see the servers as just a collection of resources for running - in the case of Citrix, applications; in the case of VMware, Virtual Machines. The principle feels pretty much the same - in fact I think VMware have done a better job of resource management, which has always been pretty lame on Citrix, as it's constrained by the limits of the Windows OS

Analogy: The Power Station and Sub-station
This analogy was offered to a group of instructors at the Trainer TSX for EMEA by a VMware Instructor called David Day. I think it's an excellent analogy which I intend to use in my own courses - plagiarism is the sincerest form of flattery ;-)
• See each ESX host in a DataCenter or Cluster as a turbine in a power station - collectively they offer so many gigawatts of power
• The power station on its own is an unsuitable format for offering power to homes and businesses (the VMs)
• This is better controlled by stepping down the power into smaller units called a sub-station (resource pool)
• The sub-station can be configured by electrical engineers to draw a certain amount of power from the power station
• This can be a capped amount - so if the sub-station is depleted of resources by some runaway demand (everyone wanting to make a cup of tea in the ad-break of Friends, for instance) then it is unable to suck resources from the national grid at the expense of others
• Alternatively, if that location is important we can allow the sub-station to demand more power from the grid at the expense of other sub-stations - perhaps we treat the power demands of an inner city (sales resource pool) as being more important than a rural area which is less densely populated (research and development resource pool)
• These last two bullet points illustrate the idea of an expandable and non-expandable resource pool

DRS
• This is a new piece of software and requires a license
• DRS allows you to take several ESX hosts and present them as a unit of CPU/RAM
• The primary purpose of DRS is to distribute VMs around the pool of ESX hosts for optimum CPU and Memory usage, while at the SAME time obeying any resource pool constraints if they exist
• It can do this silently, or with prompts to the Administrator
• The way it moves the VMs around is with VMotion
• Additionally, it can automatically select the most appropriate place to power on a VM (referred to as initial placement)
• It can also enforce rules to say VMs must be kept together (perhaps they are network intensive and would be best on the SAME switch) or kept apart because they would create contention (two CPU intensive VMs on the SAME ESX host would be a bad idea). VMware refer to this as affinity and anti-affinity
• There are 3 levels of control (referred to as Automation Levels), ranging from full manual control by an operator to DRS taking full control with no human operator intervention. The default is Fully Automated
• DRS acts as a master or parent resource pool in its own right (of course, we can create sub or child resource pools within it if necessary)

Creating Resource Pools on an ESX Host


Creating the Pools
1. Select one of your ESX hosts
2. Click the Resource Pool icon

3. Type in a name like Sales Resource Pool

Note: This dialog box takes some explanation.
• Shares: we can set a share value. When contention takes place the share value is applied between the two resource pools. You can still have share values on individual VMs - their share value takes effect WITHIN the resource pool
• Reservations: We can reserve CPU and Memory - currently they are only limited by the physical limitations of the ESX host. This is just like the MIN value we used to set on VMs for CPU/RAM
• Expandable Reservations: This is a default - it means this resource pool could grab resources from another resource pool (within that ESX host) to meet its MIN requirements. If this was unchecked, and the amount reserved was too small for the demand, a virtual machine would not power on
• Limit/Unlimited: This allows us to cap the maximum amount the resource pool could demand from the power station. This is just like the MAX value we set on VMs for CPU/RAM. The amount available is what is left over after other resource pools have taken their share of the resources, or what is physically left over on that ESX host. You cannot allocate more resources than you really have - this is an upper limit which cannot be exceeded. If you have all your VMs powered off on an ESX host or Resource Pool then this option is not displayed - so it's context sensitive
In my case I am going to give 2/3 of resources to Sales, and 1/3 of resources to a resource pool called Accounts. Sales will be an expandable resource pool but Accounts will not. I will also increase the share value for the Sales resource pool
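To make the share maths concrete, here is a small worked example using assumed numbers (a host with 6GHz of CPU to hand out, and my 2:1 share split):

Under contention, CPU is split in proportion to shares:
  Sales    2000 shares -> 2000 / (2000 + 1000) = 2/3 -> roughly 4GHz
  Accounts 1000 shares -> 1000 / (2000 + 1000) = 1/3 -> roughly 2GHz
With no contention, shares do not bite - each pool can use whatever is free,
up to its Limit (if one is set), with its Reservation acting as a guaranteed floor.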

Allocating Virtual Machines to Resource Pools
Note:
• For existing VMs we can drag & drop them to the appropriate resource pool
• When a Cold/Hot Migration takes place we can allocate them to a resource pool

When you do this, if you move a VM to a Resource Pool that lacks enough of a minimum resource then it will stop the move and give you the reason why

You can also get warnings when moving a virtual machine OUT of a Resource Pool, like this

At the end of this process this is what I have:

Of course, it makes much more sense to see esx1 and esx2 as one big lump of computing resources which we can break up into resource pools, rather than having boundaries enforced by the limits of the physical server

Distributed Resource Scheduler (DRS) Cluster


Note:
• What I did after this was remove my resource pools at the ESX host level
• This is because I wanted to set up a DRS Cluster without resource pools, so I could look at DRS in isolation without resource pools offering another variable or distraction
• After this topic, I will move on to show DRS with Resource Pools
• You could decide not to have resource pools in DRS at all

DRS also changes the look of the VMotion dialog boxes like so:

Creating a DRS Cluster


With Manual Automation Level
Note: I am beginning with manual automation so I can show you what happens. Beware that manual control is irritating if you are powering on 40 virtual machines by hand - each one would present itself with an initial placement question. This would give you 40 dialog boxes to deal with!
1. Select your DataCenter, in my case RTFM DataCenter
2. Click the New Cluster icon

3. Type a friendly name, such as London Cluster
Note: I am assuming that the DataCenter is logical, but the Cluster represents physical resources that may be related to each other by geographic limits. For example, perhaps my two servers have access to the same collective storage, but two servers in Birmingham would not have access to London storage
Enable X VMware DRS
4. Choose Manual
Note: The option Partially Automated gives you recommendations for VMotion, but the cluster automatically decides on which ESX host to initially place the VM. I would recommend this if you prefer to have manual control but don't want to answer a dialog box for each VM you power on. On the other hand, with Fully Automated the DRS cluster automatically places and VMotions VMs based on the resources available. The slider bar allows you to set how aggressively the DRS cluster looks for VMs to VMotion

If you like, Partially Automated mode offers a compromise between the excessive chatty-ness of manual, and the scariness of a fully-automated VMotion process

Note: The advanced option is used in conjunction with VMware to apply custom criteria for the automatic VMotion
5. Choose Next and Finish
6. Now, Drag & Drop your ESX host to the Cluster
Note: When you do this drag & drop, you will be presented with this dialog box:

As we removed the resource pools previously created, the second radio button would do nothing. If we had kept our resource pools they would have been migrated across into the cluster - a process they call grafting
Choose Next, and Finish
Add in your other server
Note: This changes the view like so. Notice how we have a new tab called Migrations - this is where, in manual mode, we will find recommendations for which VMs would be worth VMotioning elsewhere. The Summary tab now shows not the resources of the individual host, but the collective resources of my two ESX hosts - 5GHz of CPU, and 4GB of RAM

Of course, VMs only run on one particular ESX host at any given time!
Dealing with Manual VMotion Recommendations
Note: You might already have recommendations based on the quality/quantity of your hardware. If you wish to force a recommendation, VMotion all your VMs on to one host. This will simulate an over-burdened ESX server and will generate recommendations
1. Select your DRS Cluster
2. Select the Migrations Tab
Note: If you have a recommendation to move a VM, it will look like this:

Note: VMware operates a star rating for recommendations. The larger the number of stars, the more beneficial the move would be. You will only see a 5-Star recommendation when you have broken an affinity or anti-affinity setting.

The Migrations tab shows where the VM is currently (esx2.rtfm-ed.co.uk) and where it recommends taking the VM (esx1.rtfm-ed.co.uk). One VM may have multiple recommendations, reflecting a cluster with more than 2 ESX hosts. The tab also gives the reason - fairnessCpuAvg
3. Select the desired recommendation, and click the Apply Migration Recommendation button
Note: The Tasks pane shows these recommendations, and the progress of the VMotion when you click the button

Dealing with Initial Placement Questions
Note: With manual mode you get a dialog prompt asking you which ESX host the VM should power up on (referred to as initial placement)
1. Shutdown a VM in the Cluster
2. Power On the VM in the Cluster, and this dialog box should appear

Note: Notice that, given the lack of CPU/RAM used on my hosts, both ESX1 and ESX2 have the same star rating. When the VM first powers on you might see (orphaned) in the UI, as the placement occurs.
Note: If you use Partially Automated or Fully Automated mode, this changes how you deploy from templates - you are not asked which ESX host you would like to deploy to, but which cluster:

Using the Fully Automated Mode
Note: You might find this a bit scary. But look at it this way - if a human operator must VMotion based on a recommendation, that could trigger a change management request, plus you have to pay this person a salary based on the fact they make business critical decisions. Plus, what happens if the recommendation comes at night when there is no one to take any action?
• DRS also plugs into VMware HA - so if an ESX host fails it will run the VMs on the remaining hosts. Fully automated would be desirable because if the failure happens at night, again there could be no one available to answer the recommendation
• In this case we are going to switch to fully automated mode
• Then, to trigger this, we will VMotion all the VMs to one host
• Then we will wait and watch VMotion take place automatically
1. Select your DRS Cluster, in my case London Cluster
2. Select the Summary Tab and in the Command Pane
3. Click Edit Settings

4. In the dialog box, choose VMware DRS
5. Choose Fully Automated
Note: In my case, to make sure I get VMotion events, I moved the slider bar to aggressive. There are 5 notches on the slider bar:
• Slight Improvement (Aggressive)
• Moderate Improvement
• 3-Star Recommendations
• 4-Star Recommendations
• Only apply VMotions to constraints such as VM Affinity & Host Maintenance
6. Click OK
Note: In my case I made all my VMs run on ESX1. Within a short while I began to see automatic VMotions. If you don't, then use your templates to create more VMs and VMotion them to just one ESX host - keep on doing this until an automatic VMotion is triggered
Note: These automatic VMotions get logged in the Migrations Tab under Migration History

Note: If you power-off and then power on all your VMs you should find the DRS cluster automatically places them on the ESX host it deems appropriate

Creating VM Affinities and Anti-Affinity Rules


Note:
• As mentioned earlier, there may be certain VMs that need to be kept together (I suggested network sensitive VMs) or kept apart (such as CPU intensive VMs)
• VMware refers to controlling this as Affinity and Anti-Affinity settings. I'm not sure how useful the term is, as I suspect novices might confuse this with CPU affinities - which do exist, but are not related
• These affinity rules not only affect DRS, but manually triggered VMotions as well. So if you VMotion VM1, VM2 also gets VMotioned as well
Creating an Affinity Rule
1. Select your DRS Cluster, in my case London Cluster
2. Select the Summary Tab and in the Command Pane
3. Click Edit Settings

4. In the dialog box under VMware DRS, select Rules
5. Click Add
6. In the Name field, type a name for your rule, such as VM1 and VM2 love each other
7. Leave Keep Virtual Machines Together selected

8. Click Add in the Virtual Machine Rule dialog, and select two or more of your VMs

9. Click OK
Creating an Anti-Affinity Rule
1. Select your DRS Cluster, in my case London Cluster
2. Select the Summary Tab and in the Command Pane
3. Click Edit Settings

4. In the dialog box under VMware DRS, select Rules
5. Click Add
6. In the Name field, type a name for your rule, such as VM2 hates VM3
7. Change the type to Separate Virtual Machines
8. Click Add in the Virtual Machine Rule dialog, and select two or more of your VMs

Note: At the end of this process my dialog box looks like this

So, in my case Server1 and Server2 will always be on the SAME ESX host, but never on an ESX host that is running Server3. This also affects initial placement
Affinity and Anti-Affinity Conflicts
Note: It is entirely possible to create logical conflicts

In the above example, if I created another rule that said Server3 loves Server2, there would be a conflict. This is because although Server3 loves Server2, Server2 ALSO loves Server1 and Server2 hates Server3. It's at this moment that your VM relationships resemble an Australian soap opera playing out the age-old eternal triangle saga that has bedevilled relationships between men and women since time began
If you do create one of these conflicts, the VI Client will flag it up for you

The red exclamation marks indicate where the conflict lies, and the Details button explains where the conflict exists

Modifying Virtual Machine Options


Note:
• You may have exceptions to the rules defined in DRS
• Perhaps there is a VM that is flagged up as recommended for VMotion, and you would prefer this didn't happen
• Or you are in Fully Automated mode and wish to exclude a given VM from this process
1. Select the DRS Cluster, in my case London Cluster
2. In the Summary Tab and Command Pane
3. Select Edit Settings

4. Select Virtual Machine options 5. Select the Virtual Machine that you wish to exclude from the Fully Automated mode

Removing an ESX Host from a DRS Cluster


Note: If you try to drag-and-drop an ESX host out of a DRS cluster you will get this message:

Maintenance mode puts the ESX host into a state which prevents VMotions to it

1. Shutdown the Virtual Machines active on the ESX host
2. Select the ESX host, in the Summary Tab and Commands Pane
3. Select Enter Maintenance Mode

Choose Yes to the dialog box warning. After a short while, the ESX host will be flagged up as being in maintenance mode, like so:

4. Drag-and-drop your ESX host out of the cluster
5. Select the ESX host, in the Summary Tab and Commands Pane
6. Select Exit Maintenance Mode

Note: Your ESX host has left the cluster and you should be able to power on your virtual machines

Module 9: High Availability & Backup


Executive Summary
• During this period there has been some renaming. The Distributed Availability Service (DAS) was renamed before GA to be called VMware High Availability (VHA). It was felt by VMware that the name DAS would cause confusion with storage DAS systems
• What does VHA do for you?
o If an ESX host fails in the cluster, the virtual machines crash, but get restarted on another ESX host. Then DRS makes sure they run on the best ESX host, and Fully Automated mode rebalances the cluster for performance
• What does VHA not do for you?
o It doesn't currently keep the VM running when the ESX host fails. So there is downtime, and the potential loss of data not flushed from memory to the virtual disk
o It protects against ESX host failure only - it does not protect VMs that blue screen of death. If you want to protect those you need clustering inside a virtual machine
o Currently, VMotion expects a fully working ESX host, so we cannot VMotion off a failed ESX host
• So it's high availability, but not as high as you might think ;-) I guess in future VMware could keep a VM in a constant state of VMotion but not trigger the VMotion until a loss of heartbeat from either the source VM, or the ESX host itself. I imagine this would require quite a bit of bandwidth and we would have to be selective over which VMs we marked as requiring this level of availability
• How does it work?
o It's based on Legato's Automated Availability Manager (AAM) system
o Some of the agents which run still have references to Legato
o An agent runs on each ESX host, and is independent of VirtualCenter. So if VirtualCenter fails your cluster is still intact. So this is a peer-to-peer system
o Failures are detected via a heartbeat network signal
o The system has primary and backup servers which do the monitoring - all the other servers are just secondary servers
• Admission Control
o If we have a failure, there has to be enough resources remaining to power on the virtual machines
o You can set the number of tolerated ESX host failures
o If there are not sufficient resources then the fail-over will not occur
• Split-brain Scenario
o You might get a situation where the heartbeat network has failed
o All the other servers in the cluster think that serverX is dead
o And serverX is still alive
o You can set controls to say what happens in this situation
o The recommendation is that you have two heartbeat networks to prevent this misunderstanding - it is the Service Console NIC that is used for the heartbeat (see the sketch after this summary)
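Since it is the Service Console NIC carrying the heartbeat, one practical hedge against split-brain is a second Service Console connection on a separate vSwitch/NIC. A rough Service Console sketch - the vSwitch name, NIC and IP details are just assumptions for illustration:

# Create a second vSwitch, uplink a spare NIC, and add a second
# Service Console port group with its own IP address
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "Service Console 2" vSwitch2
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 192.168.3.101 -n 255.255.255.0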

Enabling the VMware HA Cluster


1. Select the Cluster, in my case London Cluster 2. In the Summary Tab and Command Pane,

3. Select Edit Settings

4. Click X Enable VMware HA
5. In the same dialog box, select VMware HA
6. Set the appropriate Host Failures allowed and Admission Control Options

Note: In my case I cannot change Number of host failures allowed as I only have two servers!
Note: If you have too many virtual machines running you can get errors starting VHA. This happens if you have many virtual machines and few ESX hosts. This is an illustration of Admission Control - when you have more virtual machines than your cluster has capacity for, and if a failure did occur there would be insufficient resources to power on the virtual machines. If this happens you will get alerts and warnings like this:
In the main inventory:

In the Summary Tab of the Cluster:

Triggering VMware High Availability


Note:
• The best and most conclusive method of testing VHA is to pull the plug on a running ESX server
• Before pulling the plug on esx2.rtfm-ed.co.uk, I made sure only one VM was running on esx1.rtfm-ed.co.uk, and two VMs were running on esx2

This was to make sure there were plenty of resources on esx1 for the running VMs. Then I powered off esx2.rtfm-ed.co.uk, which was running server2 and server3 - in a short while (about 30s) VirtualCenter responded like so

Notice the red exclamation marks, and notice that server2 and server3 have already been registered on esx1.rtfm-ed.co.uk. Once registered, esx1 began powering up server2 and server3

With esx2 shut down there was only one host left in the cluster, so I also received this warning - which makes sense

I then powered esx2.rtfm-ed.co.uk back on. Within a short while DRS used VMotion to move one of the VMs from esx1 to balance the load
If you go to remove a server from a VMware HA Cluster you need to use maintenance mode, and you will get this warning -

In my case (with two nodes in the HA cluster) this effectively breaks clustering. If there are spare resources on the remaining servers this will trigger a DRS VMotion of the VMs (which is pretty neat!). VC will then put the server into maintenance mode - when this happens on a two-node cluster it creates a red alert on the cluster, because it has lost its HA partner

Appendix A: Reverting back to SQL Authentication


Note: My initial experiences of upgrading from VC 1.x to 2.x were very poor. This was because I had deviated from the VMware recommendation to use SQL Authentication. In the end I was forced to revert back to SQL Authentication to make my upgrade work. This is how I did it:
1. Stop the VC Service on the VC Server
2. On the SQL Server, Open Enterprise Manager
3. Expand the nodes, and then right-click your server and choose Properties
4. Select the Security Tab
5. Enable SQL Server and Windows
6. Choose Yes, to restart the SQL Services
7. In the Login node, locate the old user account associated with Windows Authentication and Choose Delete
8. Right-click the Login Node, and choose New Login
9. Locate the user account used to access the DB
10. Choose SQL Authentication and type a password *******
11. Select the default database of your VC DB, and Language (English)
12. Select the Permissions Tab and Enable X db_owner
13. Click OK, and Confirm the password set at Step 10
14. Next, de-install VC, choosing Yes to remove the database settings, and
15. then re-install VC, pointing it to the existing database and using SQL Authentication instead of Windows Authentication, choosing Yes to this dialog box
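If you prefer to do the SQL Server side from Query Analyzer rather than clicking through Enterprise Manager, something along these lines should achieve the same result - the database name (VCDB), login name and password here are just placeholders for your own:

-- Create a SQL Authentication login with the VC database as its default
EXEC sp_addlogin 'vcuser', 'Password1', 'VCDB'
GO
-- Grant the login access to the VC database and make it db_owner
USE VCDB
GO
EXEC sp_grantdbaccess 'vcuser'
EXEC sp_addrolemember 'db_owner', 'vcuser'
GO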
