VMware OpenStack Virtual Appliance
Deployment Instructions





Contents
Introducing "VOVA"
1 WARNING
2 For Help
3 Prerequisites
  3.1 vCenter Server
  3.2 Cluster Configuration
  3.3 Firewall Configuration
  3.4 Creating br100 Port Group
  3.5 VOVA Host Configuration
4 OVF Package Download
5 Working with OpenStack
  5.1 Dashboard
  5.2 CLI
  5.3 Configuring Nova Driver Post-Deployment
6 Working with vCenter Web-Client Plugin for OpenStack
7 Troubleshooting & Logs
8 Limitations
9 Future Plans




Introducing "VOVA"
VOVA is a virtual appliance to help VMware users easily get hands-on experience with
OpenStack running on top of vSphere. With VOVA, you can take empty clusters in your
vSphere environment and use the OpenStack web GUI or CLI to deploy VMs to those clusters.
Furthermore, you can use the vSphere Web Client plugin for OpenStack, deployed along with
VOVA, to get insights into your OpenStack instances from the vCenter Web Client.
1 WARNING
This is a Proof-of-Concept Appliance.
This is not a product, and is not supported in any way, shape, or form.
Do not use it for production workloads, or on a production instance of vCenter.

2 For Help
VOVA is still very new, and we're sure that users will find all kinds of issues with it. Please see
the limitations section at the end of this document, and if you have additional
questions/comments/issues, start a discussion thread on our VMware OpenStack Community
site: https://communities.vmware.com/community/vmtn/openstack
3 Prerequisites
3.1 vCenter Server
- vCenter Version: VOVA has been tested with vCenter 5.1 and newer. There is a
  known issue with older versions of vCenter, which we will resolve in a future
  version of VOVA.
- vCenter Inventory: Only use a vCenter if there is a single datacenter configured
  (this is a temporary limitation) and if there are no production workloads using
  the vCenter (a safety precaution).

3.2 Cluster Configuration
You'll need to perform a few steps to prepare your clusters and make sure they are
suitable for working with the appliance.

DRS: If there are multiple ESX hosts in the cluster, DRS should be enabled with
"Fully automated" placement turned on.

Shared Storage: If there are multiple ESX hosts in the cluster, the cluster should
have only Datastores that are shared among all hosts in the cluster. We recommend
using a single shared Datastore for the cluster.
ESX Firewall: To get VNC working with the OpenStack Web GUI, you need to open up the
VNC ports on your ESX hosts. See the Firewall Configuration section below for details.

Networking: You must have a port group "VM Network" that provides network connectivity
to your vCenter server and to whatever client host you will be using to reach the OpenStack
Web GUI. We refer to this as the Management Network. You must also create a port group
br100 on each ESX host that is part of the clusters, for private connectivity between all
VMs provisioned by OpenStack, and between those VMs and the VOVA appliance. Refer to
the Creating br100 Port Group section below.

To give you an idea of the desired end-state, here is an example "relationship map" shown
by the vCenter client with VOVA and two OpenStack VMs deployed (these OpenStack VMs
are identified by their OpenStack UUIDs) on a cluster. Note that all three VMs have an
interface on the br100 port group, while only VOVA has an interface on the "VM Network"
port group. All VMs are on a single Datastore associated with the cluster.

Figure 1: VOVA Networking Setup

As shown in the figure, VOVA has two network interfaces:
- eth0: Used as the Management Network interface. This NIC must be able to reach
  vCenter Server and be accessible to a host running the web browser you will use to
  access the OpenStack Web GUI.
- eth1: Used as the internal interface. This NIC should be attached to a new port group
  'br100' that starts out empty. OpenStack will connect VMs to this br100 port group
  automatically, and the VOVA appliance will provide them with IP addresses via DHCP
  using this interface. This interface also acts as the default gateway for OpenStack VMs
  to reach the Internet (for those of you familiar with OpenStack networking jargon, this
  appliance uses FlatDHCPManager; a sketch of what that implies follows).
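
For reference, here is a minimal sketch of the kind of nova-network settings that
FlatDHCPManager implies in this topology. This is a hypothetical excerpt, not the
appliance's actual /etc/nova/nova.conf, which may differ:

[DEFAULT]
network_manager = nova.network.manager.FlatDHCPManager
# The port group OpenStack attaches VM NICs to
flat_network_bridge = br100
# VOVA's internal NIC (eth1), which serves DHCP to instances
flat_interface = eth1
# Management NIC (eth0), used to NAT instance traffic to the Internet
public_interface = eth0
# Matches the 10.0.0.* addresses the instances receive
fixed_range = 10.0.0.0/24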

3.3 Firewall Configuration
Enable the port range 5900-6000 for VNC connections on every ESX host in all the clusters
that will be managed by the appliance.
The easiest way to do this in a non-production setting is to use (abuse) the existing
firewall profile for gdbserver, since this opens everything VNC needs and more:
- To do this in the Windows client, select the ESX host, click the "Configuration" tab,
  select "Security Profiles", then select "Properties..." in the Firewall section, select
  "gdbserver", and click OK.
- To do this in the new Web Client, go to Host -> Manage -> Security Profile ->
  Firewall -> Edit and enable gdbserver.
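
If you have SSH access to the ESXi shell, the same change can likely be made from the
command line using the standard esxcli firewall namespace; a sketch:

$> esxcli network firewall ruleset list | grep gdbserver   # check the current state
$> esxcli network firewall ruleset set --ruleset-id gdbserver --enabled true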

3.4 Creating br100 Port Group
In order to achieve the desired network configuration, you can either create a br100
port group on every ESX host using a vSphere Standard Switch, or create a single br100
port group on a vSphere Distributed Switch that spans all the clusters.

Work with your network administrator to identify an unused VLAN that is accessible to each
of the clusters and has no DHCP on it. It is important to have an isolated network with no
DHCP configured, as OpenStack will be responsible for managing it.

Port Group br100 on vSphere Standard Switch [VSS]
With the Web Client, access the Host -> Manage -> Networking tab.
With the Windows client, select the host, then select the "Configuration" tab and the
"Networking" section.

- Click "Add Networking".
- Select "Virtual Machine Port Group for a Standard Switch" as the Connection Type. Click
  Next.
- Select "New Standard Switch" as the Target Device and assign a physical adapter to
  the switch. (Note that you will have to assign a physical NIC that corresponds to the
  VLAN provided by your administrator.) Click Next.
- Type "br100" as the Network Label in the Connection Settings. Specify the VLAN ID
  provided by your network administrator. Click Next.
- Review the displayed information. It should look similar to the screenshot below.

- Click Finish.

Perform the same steps as above on each ESX host in every cluster, and be sure to specify
the VLAN ID provided by your network administrator each time you create the br100 port group.

Port Group br100 on a vSphere Distributed Switch [VDS]
- Create a single br100 port group on a vSphere Distributed Switch whose uplink
  adapters are connected to every ESX host in all the clusters that will be managed by the
  appliance.
- Make sure to specify the VLAN ID in the port group properties.

Note: If you are using the VDS option, there should not be any other port group named
br100 on any Standard Switch.
3.5 VOVA Host Configuration
Before you can deploy VOVA, you'll need to perform the same steps as above to prepare
the ESX host on which you are deploying it. We recommend deploying VOVA on a host that
will not be part of a cluster managed by the appliance. This is different from the previous
version, where the appliance was deployed on the same cluster that it managed; the change
is possible because VOVA can now manage multiple clusters in your vCenter instance.

Sample Inventory:

Important: The ESX host on which VOVA is deployed must have the same networking
configuration as that specified in the cluster prerequisites.
4 OVF Package Download
Now we are ready to deploy the OVF package on the VOVA host machine. Download the
OVF package in the .ova archive format from the following location to your local machine:

http://partnerweb.vmware.com/programs/vmdkimage/vova/VOVA_ICEHOUSE.ova
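
For example, from a shell on your local machine:

$> wget http://partnerweb.vmware.com/programs/vmdkimage/vova/VOVA_ICEHOUSE.ova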



To deploy:
" In windows client, select File -> Deploy OVF Template
" In the web client: Right-Click on the Cluster in vCenter Web Client and choose "Deploy
OVF Template"

1. Select the OVF Source as the location of the VOVA_ICEHOUSE.ova file that you
downloaded to your local machine.

2. Give an appropriate name to the appliance (e.g., VOVA) and place it on an
appropriate host.
3. Select a Datastore to be used for the appliance.
4. Specify the disk format. Recommended: Thin Format.
5. Network Mapping: Map the appliance's network interfaces to the port groups on your host as
per the image below. This step is critical for VM networking to work correctly.

6. OVF Properties
When deploying the OVF, you have to enter some information specific to your environment so
that VOVA knows what to do. There are two types of information to input:
- Nova Configuration: Details of your vCenter Server, used to configure the OpenStack Nova
  driver. You can specify the list of cluster names, comma-delimited, in the Cluster List
  field (e.g., Cluster1,Cluster2). The OpenStack appliance will manage these clusters.
- IP Configuration: VOVA needs a static IP for consistency, so providing a static IP here
  is suggested. However, you can choose to leave the fields blank if you have a DHCP
  server on your "Mgmt Network".
Warning: The appliance will not work properly if the IP address is changed post-deployment.

7. Select the option to power on the appliance automatically once it is deployed.
8. Review the OVF deployment settings and click Finish.
Now monitor the progress in the OVF deploy dialog until the deployment is complete, after
which you can access the VM and its console via your vCenter inventory.
This OVF deploy involves a 2-gigabyte download, so the time it takes depends largely on the
speed of your Internet connection.
5 Working with OpenStack
And just like that, we are ready to access OpenStack. Now for the fun part!
5.1 Dashboard
1. On first boot, you should be able to see a screen on VOVA's console from within
your vCenter client.

Note: if the console just shows a black background with white text, VOVA is still in the
process of booting; just be patient.

2. Follow the OpenStack Dashboard URL on the screen. Log in using the credentials
demo/vmware.

3. To manage instances, click on the "Instances" tab on the left side of the screen.
4. To boot a new instance, click "Launch Instance".
Your appliance comes pre-loaded with a Debian disk image.
You can use the defaults for almost everything: just select "debian-2.6.32-i686" in the "Image"
drop-down, enter text for an "Instance Name" (e.g., myVM), and click "Launch" in the lower
right-hand corner of the dialog.


5. You should see the instance move through several states within a few seconds, and then sit
in the Spawning state for a while (e.g., up to 5 minutes). Spawning the first VM can take a while
because the 1 GB Debian disk image must be copied from the file system of the VOVA
appliance to your cluster's Datastore. All subsequent instances should be significantly faster
(under 30 seconds).
6. Once the instance has moved from the "Spawning" to the "Active" state, click on the VM's
name in the Instances list to go to its detail page.
7. From its detail page, you can go to the Console tab. At the login prompt, enter a username
and password of vmware/vmware or root/vmware to log in to the VM. You can also access the
VM console via your vCenter client, but in a real OpenStack + vSphere setup, vCenter is only
accessible to the cloud administrator. The name of the VM in vCenter will match the VM UUID
shown in the detail page for the VM in the OpenStack Web GUI.
8. You can now create more VMs, experimenting with different "flavors" to give them access to
different resources. All VMs will be attached to a single network with a 10.0.0.* IP address. The
VOVA appliance acts as a network gateway for the VMs, providing access to the Internet via
iptables NAT running in the VOVA VM (roughly as sketched below).
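
If you are curious, you can inspect the NAT setup from a root shell on VOVA. The exact
rules the appliance installs may differ, but the masquerading is typically done by a rule
of roughly the shape shown on the second line:

$> iptables -t nat -L POSTROUTING -n     # list the NAT rules currently installed
$> iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE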
9. To delete an instance, click the "Terminate Instance" button and confirm.
5.2 CLI
VOVA also provides an easy way to try out the OpenStack CLI tools, which directly
access the OpenStack APIs. We do this by allowing you to SSH directly into VOVA and
run CLI commands.
Of course, in a production deployment normal users of the cloud would run the CLI on their own
client machines, but for now, use your imagination.
1. ssh root@<VOVA IP on the management network>
password: vmware
2. The environment is configured for the demo user / demo tenant. You can type Nova CLI
commands to boot an image, e.g.:
$> glance image-list
+--------------------------------------+--------------------+--------+--------+
| ID                                   | Name               | Status | Server |
+--------------------------------------+--------------------+--------+--------+
| 02127fe0-51d6-45b1-ab04-7688785ee517 | debian-2.6.32-i686 | ACTIVE |        |
+--------------------------------------+--------------------+--------+--------+

$> nova boot myVM --image debian-2.6.32-i686 --flavor 1



$> nova list

+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| d812fc8d-253e-4d2f-bb56-df237d9420da | myVM | ACTIVE | None       | Running     | private=10.0.0.2 |
+--------------------------------------+------+--------+------------+-------------+------------------+


The VOVA CLI will have direct network access to all of the 10.0.0.* IP addresses of the VMs,
so you can SSH directly to a VM using the IP address shown in nova list. The password is
nicira/nicira (soon to be vmware/vmware).
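
For example, to reach the instance booted above (whose address in the nova list output
was 10.0.0.2):

$> ssh nicira@10.0.0.2    # password: nicira (vmware/vmware on newer builds)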
Finally, you can delete VMs using the VM's name or UUID. For example:

$> nova delete myVM
Admin CLI
By default, the SSH session is configured for the demo user of OpenStack. If you would like
to issue admin commands, you need to do the following:
export OS_USERNAME=admin
Now you can issue admin commands such as nova hypervisor-list and nova hypervisor-show.
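
For example (output omitted; the ID passed to hypervisor-show is a hypothetical value
taken from the hypervisor-list output):

$> export OS_USERNAME=admin
$> nova hypervisor-list      # one entry per cluster managed by the appliance
$> nova hypervisor-show 1    # capacity and usage stats for that cluster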

5.3 Configuring Nova Driver Post-Deployment
In order to change the Nova driver configuration, you need to:
1. Power off the appliance from vCenter.
2. Click on the VM -> Edit Settings -> vApp Options.
3. Expand the "Nova VMware VCDriver Configuration" section.
4. Change the properties and click OK.
5. Power on the appliance.
6 Working with vCenter Web-Client Plugin for OpenStack
The vCenter Web-Client Plugin for OpenStack provides a convenient way for VI admins to
account for the OpenStack instances. Once you have seen the VOVA console as shown in the
last step, the vCenter Web Client plugin is ready to be configured and used.
Follow the steps below to complete the configuration of the plugin:
- In the browser, open the Web Client URL for the vCenter Server, e.g., https://<vCenter
  Server>/vsphere-client.
- Log in to the Web Client using the admin credentials.
- Inside the Web Client, navigate to Administration -> OpenStack. You should see the
  screen shown below.

Note: If you do not see the above tab in your vCenter Web Client, follow the instructions
in the Troubleshooting section and post logs for our analysis.

- Click the + button on top of the grid to register endpoints. (These are the
  endpoints that the plugin will interact with to get the OpenStack-related
  information for the vSphere inventory.)
- First, add the vCenter endpoint. Select the type VCENTER, provide the URL
  https://<vCenter Server>/sdk, and provide the credentials for accessing the vCenter
  Server. After you click the Add button, the vCenter endpoint should show up in the grid
  above with a green tick in the Active column. In case you see a red cross, verify the
  following for the vCenter:
  - the vCenter server URL;
  - that the credentials provided are correct;
  - that the VOVA appliance and the vCenter server are time-synchronized. A variance
    of more than 10 minutes will result in failure.
- Second, add the Keystone endpoint. Select the type KEYSTONE, provide the URL
  http://<VOVA IP>:5000/v2.0, and provide the admin credentials (admin/vmware in this
  case), which give access to the OpenStack inventory across all tenants. After you click
  the Add button, the Keystone endpoint should show up in the grid above with a green
  tick in the status column. In case you see a red cross in the status column, verify the
  Keystone URL and credentials (see the reachability checks after this list).
- Once you have successfully added the vCenter and Keystone endpoints, you should see a
  green tick at the top of the page. At this stage, the configuration for the Web Client
  plugin is complete.
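
If an endpoint stays red, two quick reachability checks you can run from any machine with
curl (generic checks, not part of the appliance tooling):

$> curl http://<VOVA IP>:5000/v2.0       # Keystone should answer with a JSON version document
$> curl -k https://<vCenter Server>/sdk  # the vCenter SDK endpoint should at least respond;
                                         # -k skips certificate validation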


Once configured, when you start an instance from the OpenStack UI or CLI, tags should start
being created for those VMs inside vSphere. For example, if you started an instance named
foo in OpenStack and then switch to the vSphere Web Client, you should be able to search
for foo from the top-right search box, and you will find a tag named foo associated with
the VM in the vSphere inventory that represents the foo instance in OpenStack. Also, when
you navigate to this VM's Summary tab, you should see a new portlet named OpenStack VM
displaying the data related to the OpenStack instance it is associated with (see screenshot
below).


7 Troubleshooting & Logs
If you are not able to access the OpenStack Web GUI at all:
- Make sure the host you are running the web browser on has access to the VOVA
  appliance's eth0 IP address, using ping. If not, it is likely that the problem is with
  your physical network, or that the correct networking configuration was not provided
  when the OVF was deployed.
- If the VOVA IP is reachable, access it via SSH and collect + post logs as described
  below.

If you are able to access the OpenStack Web GUI, but you are not able to provision a VM:
- Is the VM still in the SPAWNING state? Did you wait long enough? The first VM you
  provision can take 5+ minutes depending on your setup, as the disk needs to stream to
  your Datastore.
- If the VM went immediately to the ERROR state, something is wrong with your setup
  (a minimal check sequence follows this list):
  - SSH to the VOVA host and confirm that you can ping the IP address of the
    vCenter from the VOVA host.
  - You may have input incorrect data about how to connect to vCenter (e.g., a bad
    username/password or cluster name). Run "nova hypervisor-list" and note the ID
    of the reported hypervisor. Now execute "nova hypervisor-show <id>" and verify
    that the corresponding hypervisor reports the correct capacity of your cluster.
    If not, see the above instructions about resetting the configuration.
  - Make sure the cluster has enough free capacity to deploy the VM flavor you are
    requesting. If not, the nova-scheduler will refuse to deploy it.
  - Otherwise, collect + post logs as described below.
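
The checks above boil down to a short session like this (<VOVA IP>, <vCenter IP>, and
<id> are placeholders for your own values):

$> ssh root@<VOVA IP>          # password: vmware
$> ping -c 3 <vCenter IP>      # confirm VOVA can reach vCenter
$> export OS_USERNAME=admin
$> nova hypervisor-list        # note the ID of each reported hypervisor
$> nova hypervisor-show <id>   # fields such as vcpus and memory_mb should match the cluster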

If you are not able to access the vSphere Web Client plugin for OpenStack, i.e., you do not
see the Administration -> OpenStack tab in your vCenter Web Client, please collect the logs
as described below.

Collecting Logs:
1. SSH to the VOVA deployment.
2. Use tar or gzip to bundle up the files /opt/vmware/var/log/firstboot and
/opt/vmware/var/log/subsequentboot and the files in /opt/vmware/var/log/vova/
(see the example after this list).
3. Copy them to a system with web access and post them along with a discussion thread to
the VMware OpenStack Community:
https://communities.vmware.com/community/vmtn/openstack
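
For example, from a root shell on VOVA (the scp destination is a hypothetical workstation):

$> tar czf /tmp/vova-logs.tar.gz /opt/vmware/var/log/firstboot \
     /opt/vmware/var/log/subsequentboot /opt/vmware/var/log/vova/
$> scp /tmp/vova-logs.tar.gz you@your-workstation: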


8 Limitations
- No Neutron support: Neutron with vSphere requires the VMware NSX solution. We plan
  to release a future version of VOVA that can optionally leverage NSX.
- No Security Groups support: With vSphere, VMware NSX is required for security-group
  network filtering. We plan to release a future version of VOVA that can optionally
  leverage NSX.
- No exposed options to configure floating IPs: This is possible with the current appliance,
  but it has not been exposed via the OVF options.
- No support for sparse disks: If you try to upload your own disk images, the images must
  be flat, not sparse.
- No support for Swift (object storage): We have no plans for VOVA to leverage OpenStack
  Swift for object storage. You are free to deploy Swift on your own in another VM.
9 Future Plans
VOVA is not a product, and will likely be discontinued once production-quality solutions with
similar ease of use are made available (remember that there is nothing "special" about VOVA;
it is just the open-source OpenStack code running on Ubuntu, with proper configuration for
using vSphere).

Until then, we will update VOVA periodically to fix issues users find, to include new versions of
OpenStack, and to enable new features. In the months following, expect to see updated
versions of VOVA with the option of using Neutron + Security groups via VMware NSX.

We're always interested in hearing feedback + suggestions, but please remember that our goal
with VOVA is primarily to educate people about OpenStack + VMware, so we won't necessarily
prioritize requests that don't help further this goal.
