Akilesh K
Johnson D
Yogesh G
Ubuntu, the Ubuntu Logo and Canonical are registered trademarks of Canonical. Read Canonical's trademark policy here.
OpenStack and the OpenStack Logo are registered trademarks of OpenStack. Read OpenStack's trademark policy here.
All other trademarks mentioned in the book belong to their respective owners.
This book is aimed at making it easy for a beginner to build and maintain a private cloud using OpenStack. This book will be updated periodically based on the suggestions, ideas, corrections, etc., from readers.
Mail Feedback to: books@pinlabs.in
Please report bugs in the content of this book at:
https://github.com/eternaltyro/openstack-beginners-guide and we will try to fix them as early as possible and incorporate them into the next version of the book.
Released under Creative Commons - Attribution-ShareAlike 4.0 International license.
A brief description of the license
A more detailed license text
Preface
About this guide
The idea of the OpenStack Beginners Guide dates back to the OpenStack Cactus release, when we had limited documentation available. Our initial contributors were Murthyraju Manthena, Kiran Murari, Suseendran RB, Yogesh G, Atul Jha and Johnson D. We kept updating this book for every release, adding newer components. Later on, this book was merged into the official OpenStack documentation. We still find it useful to maintain this book for our own reference, and hence we are releasing this version for Icehouse on Ubuntu 14.04 LTS.
Target Audience
Our aim has been to provide a guide for beginners who are new to OpenStack. Good familiarity with virtualization is assumed, as troubleshooting OpenStack related problems requires
a good knowledge of virtualization. Similarly, familiarity with Cloud Computing concepts and
terminology will be of help. Prior exposure to AWS API and/or tools is not mandatory, though
such exposure will accelerate the learning process greatly.
Acknowledgement
Most of the content has been borrowed from web resources such as manuals, documentation and white papers from OpenStack and Canonical, as well as numerous posts on forums and discussions on the OpenStack IRC channel. We would like to thank all the authors of these resources.
License
Attribution-ShareAlike 4.0 International.
For the full version of the license text, please refer to http://creativecommons.org/licenses/by-sa/4.0/legalcode; a shorter description is available at http://creativecommons.org/licenses/by-sa/4.0/.
Feedback
We would really appreciate your feedback. We will enhance the book on an ongoing basis based
on your feedback. Please mail your feedback to books@pinlabs.in.
Contents

1.1 Cloud Computing
1.2 OpenStack
    1.2.1 Nova
        1.2.1.1 API Server (nova-api)
        1.2.1.2.2 Compute Worker (nova-compute)
        1.2.1.2.3 Scheduler (nova-scheduler)
    1.2.2 Glance
        1.2.2.1 Components of Glance
    1.2.3 Swift
        1.2.3.1 Components of Swift
        1.2.3.3 Swift Proxy Server
        1.2.3.4 Swift Object Server
        1.2.3.5 Swift Container Server
        1.2.3.6 Swift Account Server
        1.2.3.7 The Ring
    1.2.4 Keystone
    1.2.5 Neutron
        1.2.5.1 Neutron Server
        1.2.5.2 Neutron OpenvSwitch Agent
        1.2.5.3 Neutron DHCP Agent
        1.2.5.4 Neutron L3 Agent
        1.2.5.5 Neutron Metadata Agent
    1.2.6 Cinder
        1.2.6.1 Cinder API
        1.2.6.2 Cinder Scheduler
        1.2.6.3 Cinder Volume
    1.2.7 Horizon
    1.2.8 Message Queue
    1.2.9 Database

2 Installation and Configuration
2.1 Introduction
2.2 Server1
    2.2.1 Base OS
    2.2.2 Network Configuration
    2.2.3 NTP Server
    2.2.4 RabbitMQ Server
    2.2.5 MySQL
        2.2.5.1 Installation
        2.2.5.2 Creating Databases
    2.2.6 Keystone
        2.2.6.1 Creating Tenants
        2.2.6.2 Creating Users
        2.2.6.3 Creating Roles
        2.2.6.4 Adding Roles to Users
        2.2.6.5 Creating Services
        2.2.6.6 Creating Endpoints
    2.2.7 Glance
        2.2.7.1 Glance Configuration
    2.2.8 Nova
        2.2.8.1 Nova Configuration
    2.2.9 Neutron
        2.2.9.1 Neutron Configuration
    2.2.10 Cinder
        2.2.10.1 Cinder configuration
    2.2.11 Swift
        2.2.11.1 Swift Installation
        2.2.11.2 Swift Storage Backends
2.3 Server2
    2.3.1 Base OS
    2.3.2 Network Configuration
    2.3.3 NTP Client
    2.3.4 Neutron Configuration
2.4 Client1
    2.4.1 Base OS
    2.4.2 Networking Configuration
    2.4.3 NTP Client
    2.4.4 Client Tools
2.5 OpenStack Dashboard

3 Image Management
3.1 Introduction
3.2.1 OS Installation
    3.2.1.1 Ubuntu
    3.2.1.2 Fedora
    3.2.1.3 OpenSUSE
    3.2.1.4 Debian

4 Instance Management
4.1 Introduction

5 OpenStack Dashboard
5.1 Login
5.2 User overview
5.3 Instances
5.4 Instances
5.5 Images
5.6 Projects
5.7 Users

6 Storage Management
6.1.1 Cinder-volume
6.1.2 Swift

7 Network Management
7.1 Introduction

8 Security
8.1 Security Overview

9 OpenStack Commands
9.1 Nova Commands
9.2 Glance Commands
9.3 Swift Commands
9.4 Keystone Commands
9.5 Neutron Commands
9.6 Cinder Commands

List of Tables
2.1 Configuration
This is a tutorial-style beginner's guide for OpenStack on Ubuntu 14.04, Trusty Tahr. The aim is to help the reader set up a minimal installation of OpenStack.
Chapter 1

1.1 Cloud Computing
Cloud computing is a computing model, where resources such as computing power, storage, network and software are abstracted
and provided as services on the internet in a remotely accessible fashion. Billing models for these services are generally similar
to the ones adopted for public utilities. On-demand availability, ease of provisioning, dynamic and virtually infinite scalability
are some of the key attributes of cloud computing.
An infrastructure setup using the cloud computing model is generally referred to as the "cloud". The following are the broad
categories of services available on the cloud:
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
1.2 OpenStack
OpenStack is a collection of open source software projects that enterprises/service providers can use to set up and run their cloud compute and storage infrastructure. Rackspace and NASA were the key initial contributors to the stack. Rackspace contributed their "Cloud Files" platform (code) to power the Object Storage part of OpenStack, while NASA contributed their "Nebula" platform (code) to power the Compute part. The OpenStack consortium has grown to more than 150 members, including Canonical, Dell, Citrix, etc.
There are 10 main service families in OpenStack Icehouse, of which we deal with the following seven in this guide.
Nova - Compute Service
Swift - Storage Service
Glance - Imaging Service
Keystone - Identity Service
Neutron - Networking service
Cinder - Volume Service
Horizon - Web UI Service
1.2.1 Nova
Nova is the Computing Fabric controller for the OpenStack Cloud. All activities needed to support the life cycle of instances
within the OpenStack cloud are handled by Nova. This makes Nova a Management Platform that manages compute resources,
networking, authorization, and scalability needs of the OpenStack cloud. But, Nova does not provide any virtualization capabilities by itself; instead, it uses libvirt APIs to interact with the supported hypervisors. Nova exposes all its capabilities through a
web services API that is compatible with the EC2 API of Amazon Web Services.
1.2.1.1 API Server (nova-api)
The API Server provides an interface to the outside world to interact with the cloud infrastructure. API server is the only
component that the outside world uses to manage the infrastructure. The management is done through web services calls using
EC2 API. The API Server then, in turn, communicates with the relevant components of the cloud infrastructure through the
Message Queue. As an alternative to EC2 API, OpenStack also provides a native API called "OpenStack API".
1.2.1.2.2 Compute Worker (nova-compute)
Compute workers deal with the instance management life cycle. They receive requests for life cycle management via the Message Queue and carry out the operations. There are several Compute Workers in a typical production cloud deployment. An instance is deployed on any of the available compute workers based on the scheduling algorithm used.
1.2.1.2.3 Scheduler (nova-scheduler)
The scheduler maps the nova-API calls to the appropriate OpenStack components. It runs as the nova-scheduler daemon and
picks up a compute server from a pool of available resources depending upon the scheduling algorithm in place. A scheduler can
base its decisions on various factors such as load, memory, physical distance of the availability zone, CPU architecture, etc. The
nova scheduler implements a pluggable architecture.
Currently the nova-scheduler implements a few basic scheduling algorithms:
chance: In this method, a compute host is chosen randomly across availability zones.
availability zone: Similar to chance, but the compute host is chosen randomly from within a specified availability zone.
simple: In this method, hosts whose load is least are chosen to run the instance. The load information may be fetched from a
load balancer.
1.2.2 Glance
OpenStack Imaging Service is a lookup and retrieval system for virtual machine images. It can be configured to use any one of
the following storage backends:
Local filesystem (default)
OpenStack Object Store to store images
S3 storage directly
S3 storage with Object Store as the intermediate for S3 access.
HTTP (read-only)
1.2.2.1 Components of Glance
Glance-api
Glance-registry
1.2.3 Swift
Swift provides a distributed, eventually consistent virtual object store for OpenStack. It is analogous to Amazon Web Services Simple Storage Service (S3). Swift is capable of storing billions of objects distributed across nodes. Swift has built-in redundancy
and failover management and is capable of archiving and media streaming. It is extremely scalable in terms of both size (several
petabytes) and capacity (number of objects).
1.2.3.1 Components of Swift
Swift Account
Swift Container
Swift Object
Swift Proxy
The Ring
1.2.3.3 Swift Proxy Server
The consumers interact with the Swift setup through the proxy server using the Swift API. The proxy server acts as a gatekeeper and receives requests from the world. It looks up the location of the appropriate entities and routes the requests to them.
The proxy server also handles failures of entities by rerouting requests to failover entities (handoff entities).
1.2.3.4 Swift Object Server
The Object server is a blob store. Its responsibility is to handle storage, retrieval and deletion of objects stored in the local
storage. Objects are typically binary files stored in the filesystem with metadata contained as extended file attributes (xattr).
Note: xattr is supported in several filesystems such as ext3, ext4, XFS, Btrfs, JFS and ReiserFS in Linux. But it is known to work
best under XFS, JFS, ReiserFS, Reiser4, and ZFS. XFS is considered to be the best option.
1.2.3.5 Swift Container Server
The container server lists the objects in a container. The lists are stored as SQLite files. The container server also tracks the
statistics like the number of objects contained and the storage size occupied by a container.
1.2.3.6 Swift Account Server
The account server lists containers the same way a container server lists objects.
1.2.3.7 The Ring
The ring contains information about the physical location of the objects stored inside Swift. It is a virtual representation of
mapping of names of entities to their real physical locations. It is analogous to an indexing service that various processes use to look up and locate the real physical location of entities within the cluster. Entities like Accounts, Containers and Objects have their own separate rings.
1.2.4 Keystone
Keystone provides identity and access policy services for all components in the OpenStack family. It implements its own REST-based API (Identity API). It provides authentication and authorization for all components of OpenStack including (but not limited to) Swift, Glance and Nova. Authentication verifies that a request actually comes from who it claims to. Authorization verifies whether the authenticated user has access to the services he/she is requesting.
Keystone provides two ways of authentication. One is username/password based and the other is token based. Apart from that,
keystone provides the following services:
Token Service (that carries authorization information about an authenticated user)
Catalog Service (that contains a list of available services at the user's disposal)
Policy Service (that lets keystone manage access to specific services by specific users or groups).
1.2.4.1 Keystone Concepts
Endpoints - Every OpenStack service (Nova, Swift, Glance) runs on a dedicated port and on a dedicated URL (host); we call these endpoints.
Regions - A region defines a dedicated physical location inside a data center. In a typical cloud setup, most if not all services are distributed across data centers/servers, which are also called regions.
User - A keystone authenticated user.
Services - Each component that is being connected to or being administered via keystone can be called a service. For example, we can call Glance a keystone service.
Role - In order to maintain restrictions as to what a particular user can do inside the cloud infrastructure, it is important to have a role associated with each user.
Tenant - A tenant is a project with the service endpoints and roles associated with the users who are members of that particular tenant.
1.2.5 Neutron

1.2.5.1 Neutron Server
The neutron-server is the process that exposes the Neutron API to users and other OpenStack services. The neutron-server is also responsible for storing the network state and user configuration in the database for use by the other services.
1.2.5.2 Neutron OpenvSwitch Agent
The neutron-openvswitch-agent runs on every compute node and the network node and is responsible for providing L2 connectivity between Instances running on various compute servers and the agents that offer other services. The neutron-openvswitch-agent accepts user configuration from Neutron's config files and overlays a virtual network atop the physical (data) network.
1.2.5.3 Neutron DHCP Agent
The neutron-dhcp-agent runs on the network node and is responsible for offering IP configuration to OpenStack Instances. The neutron-dhcp-agent configures a dnsmasq instance for every subnet created in Neutron, obeying subnet parameters like Allocation Pools and DNS Name Servers. The dnsmasq instance is configured to bind to an appropriate port configured by the L2 agent on the network node and to offer DHCP lease addresses to the Instances in its subnet.
1.2.5.4 Neutron L3 Agent
The neutron-l3-agent runs on the network node and is responsible for L3 connectivity, FloatingIP provisioning and enforcing security groups. The neutron-l3-agent relies on the network node's IP forwarding capabilities to perform routing and on iptables rules to provision FloatingIPs and enforce security groups. Both the L3 agent and the DHCP agent make use of Linux network namespaces, which allow different tenants to use overlapping subnet CIDR ranges.
1.2.5.5 Neutron Metadata Agent
The neutron-metadata-agent provides Instance metadata for all OpenStack Instances. The Instances can pull the metadata from the URL http://169.254.169.254:80 and use it for early initialization, for example to:
Update the Instance's SSH authorized keys for passwordless SSH login
Create default users
Refresh the package cache, run a system update, etc.
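From within a running Instance, the metadata can be fetched with a plain HTTP client. For example, using the EC2-style paths that the metadata service exposes:

```shell
# Run inside an OpenStack instance: list the SSH public keys
# that were provided at boot via the metadata service.
curl http://169.254.169.254/latest/meta-data/public-keys/
```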
1.2.6 Cinder
Volume services are used for management of LVM-based instance volumes. Volume services perform volume related functions
such as creation, deletion, attaching a volume to an instance, and detaching a volume from an instance. Volumes provide a way
of providing persistent storage for the instances, as the root partition is non-persistent and any changes made to it are lost when
an instance is terminated. When a volume is detached from an instance or when an instance, to which the volume is attached, is
terminated, it retains the data that was stored on it. This data can be accessed by re-attaching the volume to the same instance or
by attaching it to other instances.
Critical data in an instance must always be written to a volume, so that it can be accessed later. This typically applies to the
storage needs of database servers etc.
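The workflow described above can be sketched with the Cinder and Nova clients; the instance and volume IDs are placeholders to be taken from `nova list` and `cinder list`:

```shell
# Create a 10 GB volume named "myvolume".
cinder create --display-name myvolume 10

# Attach it to a running instance; inside the guest it appears as /dev/vdc.
nova volume-attach <instance-id> <volume-id> /dev/vdc
```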
Cinder services include
Cinder API
Cinder Scheduler
Cinder Volume
1.2.6.1 Cinder API
Cinder API provides an interface for the external world and other services, like Nova, to interact with the Cinder services.
1.2.6.2 Cinder Scheduler
When there are two or more machines running the Cinder Volume service, the scheduler decides on which machine the new volume is to be created.
1.2.6.3 Cinder Volume
In a default setup, Cinder Volume interacts with the Logical Volume Manager (LVM) to create, delete and perform other volume-related functions.
1.2.7 Horizon
Horizon is the web-based dashboard that can be used to manage/administer OpenStack services. It can be used to manage instances and images, create keypairs, attach volumes to instances, manipulate Swift containers, etc. Apart from this, the dashboard gives the user access to the instance console and can connect to an instance through VNC. Overall, Horizon features the following:
Instance Management - Create or terminate instance, view console logs and connect through VNC, Attaching volumes, etc.
Access and Security Management - Create security groups, manage keypairs, assign floating IPs, etc.
Flavor Management - Manage different flavors or instance virtual hardware templates.
Image Management - Edit or delete images.
View service catalog.
Manage users, quotas and usage for projects.
User Management - Create user, etc.
Volume Management - Creating Volumes and snapshots.
Object Store Manipulation - Create, delete containers and objects.
Downloading environment variables for a project.
1.2.8 Message Queue
The OpenStack Cloud Controller communicates with other Nova components such as the Scheduler, Network Controller, and Volume Controller via AMQP (Advanced Message Queuing Protocol). Nova uses asynchronous calls for request/response, with a callback that gets triggered once a response is received. Since asynchronous communication is used, no user action gets stuck for long in a waiting state. This is especially important since many actions triggered by API calls, such as launching an instance or uploading an image, are time consuming.
1.2.9 Database
OpenStack services persist their state and configuration in a database. You can use MySQL, PostgreSQL or SQLite as your database server.
Depending upon your choice of database, you will need to install the necessary packages and configure the database server. In
this guide we will use MySQL.
1.2.10 OpenvSwitch
OpenvSwitch is a production-quality, multilayer software switch. In OpenStack, OpenvSwitch is used to create virtual (software) switches with connected ports to which the VMs attach. OpenvSwitch is configured by Neutron in such a way as to ensure VM connectivity with each other and with external networks.
Chapter 2

Installation and Configuration
2.1 Introduction
The following section describes how to set up a minimal cloud infrastructure based on OpenStack using 3 machines. These
machines are referred to in this and subsequent chapters as Server1, Server2 and Client1. Server1 runs all the components of
Nova, Glance, Swift, Keystone, Neutron, Cinder and Horizon (OpenStack Dashboard). Server2 runs only nova-compute and
neutron-openvswitch-agent. Since OpenStack components follow a shared-nothing policy, each component or any group of
components can be installed on any server.
Client1 is not a required component. In our sample setup, it is used for bundling images, as a client to the web interface and to run OpenStack commands to manage the infrastructure. Having this client ensures that you do not need to meddle with the servers for tasks such as bundling. Also, bundling of desktop systems, including Windows, requires a GUI, and it is better to have a dedicated machine for this purpose. We recommend that this machine be VT-enabled so that KVM can be used, which allows launching VMs during image creation for bundling.
The installation steps use certain specifics such as host names/IP addresses etc. Modify them to suit your environment before
using them. The following table summarizes these specifics.
Table 2.1: Configuration

Functionality:
    Server1 - All components of OpenStack including nova-compute
    Server2 - nova-compute and neutron-openvswitch-agent
    Client1 - Client tools
Network Interfaces:
    Server1 - eth0 (Management N/W), eth1 (Data N/W), eth2 (External N/W)
IP addresses:
    Server1 - eth0: 10.10.10.2, eth1: 192.168.3.2, eth2: 192.168.100.2
    Client1 - eth0: 10.10.10.4
Hostname:
    Server1 - server1.example.com
    Server2 - server2.example.com
    Client1 - client.example.com
DNS servers: 8.8.8.8 (all machines)
Gateway IP: 10.10.10.1 (all machines)
2.2 Server1
As shown in the figure above, Server1 runs all Nova services including nova-compute, as well as Cinder, Neutron, Glance, Swift, Keystone and Horizon. It contains three network interface cards (NICs).
2.2.1 Base OS
Install the 64-bit version of Ubuntu Server 14.04, keeping the following configuration in mind.
Create the first user with the name localadmin.
The installer lets you set up the IP address for the first interface, i.e. eth0. Set the IP address details.
During installation, select only OpenSSH server in the packages menu.
We will also be running cinder-volume on this server, and it is ideal to have a dedicated partition for the use of cinder-volume. So, ensure that you choose the manual partitioning scheme while installing Ubuntu Server and create a dedicated partition with an adequate amount of space for this purpose. We have referred to this partition in the rest of the chapter as /dev/sda6. Substitute the correct device name of this dedicated partition based on your local setup while following the instructions. Also ensure that the partition type is set to Linux LVM (8e) using fdisk, either during the install or immediately after the installation is over. If you also plan to use a dedicated partition as the Swift backend, create another partition for this purpose and follow the instructions in the "Swift Installation" section below.
Update the machine using the following commands.
apt-get update
apt-get upgrade
Reboot the server and log in as the admin user (localadmin) created during the OS installation.
2.2.2 Network Configuration
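Server1's three interfaces can be configured in /etc/network/interfaces to match the addresses in Table 2.1. The following is a sketch; the /24 netmasks are our assumption, since the table gives only the addresses:

```shell
# /etc/network/interfaces (Server1) - addresses from Table 2.1
auto eth0
iface eth0 inet static
    address 10.10.10.2
    netmask 255.255.255.0
    gateway 10.10.10.1
    dns-nameservers 8.8.8.8

auto eth1
iface eth1 inet static
    address 192.168.3.2
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 192.168.100.2
    netmask 255.255.255.0
```

Restart networking (/etc/init.d/networking restart) after editing the file.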
2.2.3 NTP Server
Install the NTP package. This server is going to act as an NTP server for the nodes. The time on all components of OpenStack will have to be in sync. We can run the NTP server on this machine and have the other components sync to it.
apt-get install ntp
Open the file /etc/ntp.conf and add the following lines to make sure that the time on the server is in sync with an external server and that, in case Internet connectivity is down, the NTP server uses its own hardware clock as the fallback.
server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10
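Restart the NTP service for the configuration to take effect:

```shell
service ntp restart
```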
2.2.4 RabbitMQ Server
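The Nova and Neutron components communicate through an AMQP message broker, for which we use RabbitMQ. On Ubuntu 14.04 it is a single package, and the default configuration is sufficient for this sample setup:

```shell
apt-get install rabbitmq-server
```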
2.2.5 MySQL
2.2.5.1 Installation
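Install the MySQL server along with the Python bindings used by the OpenStack services (package names as on Ubuntu 14.04):

```shell
apt-get install mysql-server python-mysqldb
```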
Create a root password for MySQL when prompted. The password used here is "mygreatsecret".
Change the bind address from 127.0.0.1 to 0.0.0.0 in /etc/mysql/my.cnf so that MySQL listens on all interfaces. The line will look like this:
bind-address = 0.0.0.0
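Restart MySQL so that it starts listening on all interfaces:

```shell
service mysql restart
```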
2.2.5.2 Creating Databases
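Each service needs its own database and a dedicated database user. The keystone and glance credentials below match the connection strings used later in this chapter; the remaining names follow the same pattern and are our assumption:

```shell
# Create per-service databases and grant access to dedicated users.
mysql -uroot -pmygreatsecret <<'EOF'
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystonedbadmin'@'%' IDENTIFIED BY 'keystonesecret';
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glancedbadmin'@'%' IDENTIFIED BY 'glancesecret';
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novadbadmin'@'%' IDENTIFIED BY 'novasecret';
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderdbadmin'@'%' IDENTIFIED BY 'cindersecret';
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutrondbadmin'@'%' IDENTIFIED BY 'neutronsecret';
EOF
```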
2.2.6 Keystone
Keystone is the identity service used by OpenStack. Install Keystone using the following command.
apt-get install keystone python-keystone python-keystoneclient
Open /etc/keystone/keystone.conf and change the database connection line to
connection = mysql://keystonedbadmin:keystonesecret@10.10.10.2/keystone
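With the connection string in place, populate the Keystone schema and restart the service:

```shell
keystone-manage db_sync
service keystone restart
```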
Now we need to export some environment variables which are required frequently while working with OpenStack.
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=admin
You can also add these variables to ~/.bashrc, so that you do not have to export them every time.
2.2.6.1 Creating Tenants
Create the tenants by executing the following commands. In this case, we are creating two tenants - admin and service.
keystone tenant-create --name admin
keystone tenant-create --name service
2.2.6.2 Creating Users
Create the users by executing the following commands. In this case, we are creating six users - admin, nova, glance, swift, neutron and cinder - and we use the user name as the password for each (the Glance configuration below, for example, uses glance/glance).

keystone user-create --name admin --pass admin
keystone user-create --name nova --pass nova
keystone user-create --name glance --pass glance
keystone user-create --name swift --pass swift
keystone user-create --name neutron --pass neutron
keystone user-create --name cinder --pass cinder

2.2.6.3 Creating Roles
Create the roles by executing the following commands. In this case, we are creating two roles - admin and Member.
keystone role-create --name admin
keystone role-create --name Member
2.2.6.4 Adding Roles to Users

Now we add roles to the users that have been created. A role can be assigned to a specific user in a specific tenant with the following command.
To add the role admin to the user admin of the tenant admin:
keystone user-role-add --user=admin --tenant=admin --role=admin
The service users are also given the admin role in the service tenant:

keystone user-role-add --user=nova --tenant=service --role=admin
keystone user-role-add --user=glance --tenant=service --role=admin
keystone user-role-add --user=swift --tenant=service --role=admin
keystone user-role-add --user=neutron --tenant=service --role=admin
keystone user-role-add --user=cinder --tenant=service --role=admin

2.2.6.5 Creating Services
Now we need to create the required services which the users can authenticate with. nova, glance, swift, keystone, neutron and
cinder are some of the services that we create.
keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'
keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage v2"
keystone service-create --name glance --type image --description 'OpenStack Image Service'
keystone service-create --name swift --type object-store --description 'OpenStack Storage Service'
keystone service-create --name keystone --type identity --description 'OpenStack Identity Service'
keystone service-create --name neutron --type network --description 'OpenStack Network Service'
2.2.6.6 Creating Endpoints
Create endpoints for each of the services that have been created above.
For creating an endpoint for nova-compute, execute the following command
keystone endpoint-create --service=nova --publicurl=http://10.10.10.2:8774/v2/%\(tenant_id\)s --internalurl=http://10.10.10.2:8774/v2/%\(tenant_id\)s --adminurl=http://10.10.10.2:8774/v2/%\(tenant_id\)s
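The %\(tenant_id\)s part of each URL is a template that keystone stores literally and fills in per request. The substitution can be sketched like this (the tenant id value is just an example):

```shell
# Simulate the placeholder expansion that happens when the endpoint is used
TENANT_ID=fec44bd5871a41068791756610090158
echo 'http://10.10.10.2:8774/v2/%(tenant_id)s' | sed "s/%(tenant_id)s/$TENANT_ID/"
# prints http://10.10.10.2:8774/v2/fec44bd5871a41068791756610090158
```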
2.2.7 Glance
2.2.7.1 Glance Configuration
Glance uses SQLite by default. MySQL and PostgreSQL can also be configured to work with Glance.
Open /etc/glance/glance-api.conf and make the following changes.
Replace the line containing the option "sqlite_db=" with:
connection = mysql://glancedbadmin:glancesecret@10.10.10.2/glance
In the [keystone_authtoken] section, set admin_tenant_name to service, admin_user to glance and admin_password to glance. After editing, the lines should read:
admin_tenant_name = service
admin_user = glance
admin_password = glance
To tell glance to use keystone for authentication, set the following option in the "[paste_deploy]" section:
flavor = keystone
Export the following environment variables so that the glance client can authenticate with keystone:
export SERVICE_TOKEN=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL="http://localhost:5000/v2.0/"
export SERVICE_ENDPOINT=http://localhost:35357/v2.0
We can also build our own images and upload them to glance; this is explained in detail in the "Image Management" chapter.
2.2.8 Nova
2.2.8.1 Nova Configuration
Binary           Host    Zone     Status  State Updated_At
nova-consoleauth server1 internal enabled :-)   2014-04-19 08:55:13
nova-conductor   server1 internal enabled :-)   2014-04-19 08:55:14
nova-cert        server1 internal enabled :-)   2014-04-19 08:55:13
nova-scheduler   server1 internal enabled :-)   2014-04-19 08:55:13
nova-compute     server1 nova     enabled :-)   2014-04-19 08:55:14
nova-console     server1 internal enabled :-)   2014-04-19 08:55:14
If you see something like the above with all components happy, it means that the setup is ready to be used.
2.2.9 Neutron
Neutron is the networking service of OpenStack. Let's see the steps involved in the installation of Neutron alongside other components.
Install the Neutron services using the following command:
apt-get install -y neutron-server neutron-plugin-openvswitch neutron-plugin-openvswitch-agent neutron-common neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent openvswitch-switch
2.2.9.1 Neutron Configuration
Remove all the lines from /etc/neutron/neutron.conf and add the following lines
[DEFAULT]
core_plugin = ml2
notification_driver=neutron.openstack.common.notifier.rpc_notifier
verbose=True
rabbit_host=10.10.10.2
rpc_backend=neutron.openstack.common.rpc.impl_kombu
service_plugins=router
allow_overlapping_ips=True
auth_strategy=keystone
neutron_metadata_proxy_shared_secret=openstack
service_neutron_metadata_proxy=True
notify_nova_on_port_data_changes=True
notify_nova_on_port_status_changes=True
nova_admin_auth_url=http://10.10.10.2:35357/v2.0
nova_admin_tenant_id=service
nova_url=http://10.10.10.2:8774/v2
nova_admin_username=nova
nova_admin_password=nova
[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
signing_dir = $state_path/keystone-signing
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.10.2
rabbit_port = 5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.10.10.2:8774
nova_admin_username = nova
nova_admin_tenant_id =
nova_admin_password = nova
nova_admin_auth_url = http://10.10.10.2:35357/v2.0
[database]
connection = mysql://neutrondbadmin:neutronsecret@10.10.10.2/neutron
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
We have created two physical networks, one as a flat network and the other as a VLAN network with VLAN IDs ranging from 100 to 200. We have mapped the external network to br-ex and Intnet1 to br-eth1. Now create the bridges:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl add-port br-ex eth2
Edit /etc/neutron/dhcp_agent.ini and make sure it contains the following lines:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
2.2.10 Cinder
Cinder is the volume service of OpenStack. In this section let's see how to install Cinder along with the other components of OpenStack.
Install the Cinder services using the following command:
apt-get install cinder-api cinder-scheduler cinder-volume lvm2
2.2.10.1 Cinder Configuration
Remove all the lines from the file /etc/cinder/cinder.conf and add the following lines
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.10.2
rabbit_port = 5672
rabbit_userid = guest
glance_host = 10.10.10.2
[database]
connection = mysql://cinderdbadmin:cindersecret@10.10.10.2/cinder
[keystone_authtoken]
auth_uri = http://10.10.10.2:5000
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = cinder
Cinder needs an LVM volume group named "cinder-volumes". To create it, use the following commands.
Create a physical volume on the dedicated partition /dev/sdb:
pvcreate /dev/sdb
Then create the volume group on it:
vgcreate cinder-volumes /dev/sdb
Binary           Host    Zone Status  State Updated At
cinder-scheduler server1 nova enabled :-)   2014-10-11 11:48:47
cinder-volume    server1 nova enabled :-)   2014-10-11 11:48:38
2.2.11 Swift
2.2.11.1 Swift Installation
The primary components are the proxy, account, container and object servers.
apt-get install swift swift-proxy swift-account swift-container swift-object memcached
Other components that might be needed are xfsprogs (for dealing with the XFS filesystem), python-pastedeploy (for keystone access) and curl (to test swift).
apt-get install xfsprogs curl python-pastedeploy
2.2.11.2 Storage Backend
There are two methods one can try to create/prepare the storage backend. One is to use an existing partition/volume as the storage
device. The other is to create a loopback file and use it as the storage device. Use the appropriate method as per your setup.
2.2.11.2.1 Partition as a storage device
If you have a physical partition (e.g. /dev/sdb3), all you have to do is format it to the XFS filesystem using parted or fdisk and use it as the backend. You need to specify the mount point in /etc/fstab.
CAUTION: Replace /dev/sdb with your appropriate device. I'm assuming that there is an unused/unformatted partition in /dev/sdb.
fdisk /dev/sdb
Type n for new partition
Type e for extended partition
Choose appropriate partition number ( or go with the default )
Choose first and last sectors to set the hard disk size (or go with defaults)
Note that 83 is the partition type number for Linux
Type w to write changes to the disk
This would have created a partition (something like /dev/sdb3) that we can now format to the XFS filesystem. Run fdisk -l in the terminal to view the partition table and make sure that the partition you want to use is listed there. The following would work only if you have xfsprogs installed. To verify the inode size afterwards, use xfs_info (tune2fs works only on ext filesystems):
mkfs.xfs -i size=1024 /dev/sdb3
xfs_info /dev/sdb3 | grep -i isize
Create a directory /mnt/swift_backend that can be used as a mount point for the partition that we created.
mkdir /mnt/swift_backend
Now edit /etc/fstab and append the following line to mount the partition automatically every time the system restarts.
/dev/sdb3 /mnt/swift_backend xfs noatime,nodiratime,nobarrier,logbufs=8 0 0
2.2.11.2.2 Loopback file as a storage device
We create a zero-filled file for use as a loopback device for the Swift storage backend. Here we use the disk copy command to create a file named swift-disk and allocate a million 1KiB blocks (976.56 MiB) to it, so we have a loopback disk of approximately 1GiB. We can increase this size by modifying the seek value. The disk is then formatted with the XFS filesystem, and the file command can be used to verify that it worked. The commands are as follows (the path /srv/swift-disk is illustrative):
dd if=/dev/zero of=/srv/swift-disk bs=1024 count=0 seek=1000000
mkfs.xfs -i size=1024 /srv/swift-disk
file /srv/swift-disk
Create a directory /mnt/swift_backend that can be used as a mount point for the backend that we created.
mkdir /mnt/swift_backend
2.2.11.2.3 Using the backend
Now before mounting the backend that will be used, create some nodes to be used as storage devices and set ownership to swift
user and group.
mount /dev/sdb3 /mnt/swift_backend
pushd /mnt/swift_backend
mkdir node1 node2 node3 node4
popd
chown swift:swift /mnt/swift_backend/*
for i in {1..4}; do sudo ln -s /mnt/swift_backend/node$i /srv/node$i; done;
mkdir -p /etc/swift/account-server /etc/swift/container-server /etc/swift/object-server /srv/node1/device /srv/node2/device /srv/node3/device /srv/node4/device
chown -R swift:swift /etc/swift /srv/node[1-4]/
Append the following lines in /etc/rc.local just before "exit 0". This will be run every time the system starts.
mkdir /run/swift
chown swift:swift /run/swift
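If you prefer to make that edit non-interactively, a sed one-liner can insert the lines before "exit 0". The sketch below works on a scratch copy; point it at /etc/rc.local on the real system:

```shell
# Scratch stand-in for /etc/rc.local
printf '#!/bin/sh\nexit 0\n' > /tmp/rc.local
# Insert the two lines just before the final "exit 0" (GNU sed syntax)
sed -i '/^exit 0/i mkdir /run/swift\nchown swift:swift /run/swift' /tmp/rc.local
cat /tmp/rc.local
```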
2.2.11.3 Configure Rsync
Rsync is responsible for maintaining object replicas. It is used by various swift services to maintain consistency of objects and perform update operations. It is configured for all the storage nodes.
Set RSYNC_ENABLE=true in /etc/default/rsync.
Modify /etc/rsyncd.conf as follows:
# General stuff
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1
# Account Server replication settings
[account6012]
max connections = 25
path = /srv/node1/
read only = false
lock file = /var/lock/account6012.lock
[account6022]
max connections = 25
path = /srv/node2/
read only = false
lock file = /var/lock/account6022.lock
[account6032]
max connections = 25
path = /srv/node3/
read only = false
lock file = /var/lock/account6032.lock
[account6042]
max connections = 25
path = /srv/node4/
read only = false
lock file = /var/lock/account6042.lock
# Container server replication settings
[container6011]
max connections = 25
path = /srv/node1/
read only = false
lock file = /var/lock/container6011.lock
[container6021]
max connections = 25
path = /srv/node2/
read only = false
lock file = /var/lock/container6021.lock
[container6031]
max connections = 25
path = /srv/node3/
read only = false
lock file = /var/lock/container6031.lock
[container6041]
max connections = 25
path = /srv/node4/
read only = false
lock file = /var/lock/container6041.lock
# Object Server replication settings
[object6010]
max connections = 25
path = /srv/node1/
read only = false
lock file = /var/lock/object6010.lock
[object6020]
max connections = 25
path = /srv/node2/
read only = false
lock file = /var/lock/object6020.lock
[object6030]
max connections = 25
path = /srv/node3/
read only = false
lock file = /var/lock/object6030.lock
[object6040]
max connections = 25
path = /srv/node4/
read only = false
lock file = /var/lock/object6040.lock
Restart rsync.
service rsync restart
2.2.11.4 Configure Swift Components
General server configuration options can be found at http://swift.openstack.org/deployment_guide.html. If the swift-doc package is installed, the documentation can also be viewed in the /usr/share/doc/swift-doc/html directory. Swift uses paste.deploy to manage configuration. Default configuration options are set in the [DEFAULT] section, and any options specified there can be overridden in any of the other sections BUT ONLY BY USING THE SYNTAX set option_name = value.
Here is a sample paste.deploy configuration for reference:
[DEFAULT]
name1 = globalvalue
name2 = globalvalue
name3 = globalvalue
set name4 = globalvalue
[pipeline:main]
pipeline = myapp
[app:myapp]
use = egg:mypkg#myapp
name2 = localvalue
set name3 = localvalue
set name5 = localvalue
name6 = localvalue
Create and edit /etc/swift/swift.conf and add the following lines to it:
[swift-hash]
# random unique string that can never change (DO NOT LOSE). I'm using 03c9f48da2229770.
# od -t x8 -N 8 -A n < /dev/random
# The above command can be used to generate a random string.
swift_hash_path_suffix = 03c9f48da2229770
You will need the random string when you add more nodes to the setup. So never lose the string.
You can generate a random string by running the following command:
od -t x8 -N 8 -A n < /dev/random
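Putting the two steps together, the suffix can be generated and formatted as the config line in one go (/dev/urandom is used here so the command never blocks waiting for entropy):

```shell
# Generate a 16-hex-character suffix and print the line for swift.conf
SUFFIX=$(od -t x8 -N 8 -A n < /dev/urandom | tr -d ' \n')
echo "swift_hash_path_suffix = $SUFFIX"
```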
2.2.11.4.1 Configure Swift Proxy Server
The proxy server acts as the gatekeeper to swift and takes responsibility for authenticating the user. Authentication verifies that a request actually comes from who it says it does. Authorization verifies who has access to the resource(s) the request wants. Authorization is done by identity services like keystone. Create and edit /etc/swift/proxy-server.conf and add the following lines.
[DEFAULT]
bind_port = 8080
user = swift
[pipeline:main]
Note: You can find sample configuration files at the "etc" directory in the source. Some documentation can be found under
"/usr/share/doc/swift-doc/html" if you had installed the swift-doc package using apt-get.
2.2.11.4.2 Configure Swift Account Server
Account server configuration files are also looked up under /etc/swift/account-server.conf. Here we can create several account
server configuration files each of which would correspond to a device under /srv. The files can be named 1.conf, 2.conf and so
on. Here are the contents of /etc/swift/account-server/1.conf:
[DEFAULT]
devices = /srv/node1
mount_check = false
bind_port = 6012
user = swift
log_facility = LOG_LOCAL2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
vm_test_mode = no
[account-auditor]
[account-reaper]
For the other devices, (/srv/node2, /srv/node3, /srv/node4), we create 2.conf, 3.conf and 4.conf. So we make three more copies of
1.conf and set unique bind ports for the rest of the nodes (6022, 6032 and 6042) and different local log values (LOG_LOCAL3,
LOG_LOCAL4, LOG_LOCAL5).
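The copy-and-edit step can be scripted. The sketch below works on a stripped-down 1.conf in a scratch directory (on the real system, run it in /etc/swift/account-server against the full file):

```shell
mkdir -p /tmp/swift-acct-demo && cd /tmp/swift-acct-demo
# Minimal stand-in for the real 1.conf
printf 'bind_port = 6012\nlog_facility = LOG_LOCAL2\n' > 1.conf
# Clone it, bumping the port (6022/6032/6042) and log facility (LOCAL3/4/5)
for i in 2 3 4; do
  sed -e "s/6012/60${i}2/" -e "s/LOG_LOCAL2/LOG_LOCAL$((i+1))/" 1.conf > "$i.conf"
done
grep . ./*.conf
```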
2.2.11.4.3 Configure Swift Container Server
Container server configuration files are also looked up under /etc/swift/container-server.conf. Here we can create several container server configuration files, each of which would correspond to a device under /srv. The files can be named 1.conf, 2.conf
and so on. Here are the contents of /etc/swift/container-server/1.conf:
[DEFAULT]
devices = /srv/node1
mount_check = false
bind_port = 6011
user = swift
log_facility = LOG_LOCAL2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
vm_test_mode = no
[container-updater]
[container-auditor]
[container-sync]
For the other devices, (/srv/node2, /srv/node3, /srv/node4), we create 2.conf, 3.conf and 4.conf. So we make three more copies of
1.conf and set unique bind ports for the rest of the nodes (6021, 6031 and 6041) and different local log values (LOG_LOCAL3,
LOG_LOCAL4, LOG_LOCAL5).
2.2.11.4.4 Configure Swift Object Server
Object server configuration files are also looked up under /etc/swift/object-server.conf. Here we can create several object server
configuration files each of which would correspond to a device under /srv. The files can be named 1.conf, 2.conf and so on. Here
are the contents of /etc/swift/object-server/1.conf:
[DEFAULT]
devices = /srv/node1
mount_check = false
bind_port = 6010
user = swift
log_facility = LOG_LOCAL2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
vm_test_mode = no
[object-updater]
[object-auditor]
For the other devices, (/srv/node2, /srv/node3, /srv/node4), we create 2.conf, 3.conf and 4.conf. So we make three more copies of
1.conf and set unique bind ports for the rest of the nodes (6020, 6030 and 6040) and different local log values (LOG_LOCAL3,
LOG_LOCAL4, LOG_LOCAL5).
2.2.11.4.5 Configure Swift Rings
The ring is an important component of swift. It maintains the information about the physical location of objects, their replicas and
devices. We now create the ring builder files corresponding to object service, container service and account service.
NOTE: We need to be in the /etc/swift directory when executing the following commands.
pushd /etc/swift
swift-ring-builder object.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder account.builder create 18 3 1
The numbers indicate the desired number of partitions, replicas and the time in hours to restrict moving a partition more than
once. See the man page for swift-ring-builder for more information.
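As a quick sanity check of what the first argument means: it is the partition power, so a value of 18 gives 2^18 partitions in the ring.

```shell
echo $((2**18))   # number of ring partitions for partition power 18
# prints 262144
```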
Now we add zones and balance the rings. The syntax is as follows:
swift-ring-builder <builder_file> add <zone>-<ip_address>:<port>/<device> <weight>
Execute the following commands to add the zones and rebalance the ring.
swift-ring-builder object.builder add z1-127.0.0.1:6010/device 1
swift-ring-builder object.builder add z2-127.0.0.1:6020/device 1
swift-ring-builder object.builder add z3-127.0.0.1:6030/device 1
swift-ring-builder object.builder add z4-127.0.0.1:6040/device 1
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder add z1-127.0.0.1:6011/device 1
swift-ring-builder container.builder add z2-127.0.0.1:6021/device 1
swift-ring-builder container.builder add z3-127.0.0.1:6031/device 1
swift-ring-builder container.builder add z4-127.0.0.1:6041/device 1
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder add z1-127.0.0.1:6012/device 1
swift-ring-builder account.builder add z2-127.0.0.1:6022/device 1
swift-ring-builder account.builder add z3-127.0.0.1:6032/device 1
swift-ring-builder account.builder add z4-127.0.0.1:6042/device 1
swift-ring-builder account.builder rebalance
2.2.11.5 Starting Swift Services
To start swift and the REST API, run the following commands.
swift-init main start
swift-init rest start
service swift-proxy start
2.2.11.6
Testing Swift
Swift can be tested using the swift command or the dashboard web interface (Horizon). First, make sure that the ownership of the /etc/swift directory is set to swift:swift.
chown -R swift:swift /etc/swift
Then run the following command and verify if you get the appropriate account information. The number of containers and
objects stored within are displayed as well.
swift stat
Account: AUTH_8df4664c24bc40a4889fab4517e8e599
Containers: 0
Objects: 0
Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1411522928.10863
X-Trans-Id: txcb5b324224544f8892252-0054222170
X-Put-Timestamp: 1411522928.10863
2.2.12 OpenStack Dashboard
In a browser, enter the URL http://10.10.10.2/horizon of the server on which the dashboard is installed, and you should see the OpenStack Dashboard login prompt. Log in with username admin and password admin. From the dashboard, you can create keypairs, create/edit security groups, launch new instances, attach volumes etc., as explained in the "OpenStack Dashboard" chapter.
2.3 Server2
2.3.1 Base OS
Install the 64-bit version of Ubuntu Server 14.04 keeping the following configurations in mind.
Create the first user with the name localadmin.
Installation lets you set up the IP address for the first interface, i.e. eth0. Set the IP address details.
During installation select only Openssh-server in the packages menu.
2.3.2 Network Configuration
2.3.3 NTP Client
Open the file /etc/ntp.conf and add the following line to sync to server1.
server 10.10.10.2
2.3.4 Nova Components (nova-compute)
Edit the /etc/nova/nova.conf file to look like this. This file is essentially similar to the configuration file (/etc/nova/nova.conf) of Server1, except for the VNC settings.
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 10.10.10.2
my_ip = 10.10.10.3
vncserver_listen = 10.10.10.3
vncserver_proxyclient_address = 10.10.10.3
novncproxy_base_url=http://10.10.10.3:6080/vnc_auto.html
glance_host = 10.10.10.2
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.10.10.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_metadata_proxy_shared_secret=openstack
neutron_admin_auth_url=http://10.10.10.2:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
[database]
connection = mysql://nova:nova_dbpass@10.10.10.2/nova
[keystone_authtoken]
auth_uri = http://10.10.10.2:5000
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass
If you see an output similar to the following, it means that the setup is ready to be used.
localadmin@server1:~$ nova-manage service list
Binary           Host    Zone     Status  State Updated_At
nova-consoleauth server1 internal enabled :-)   2014-04-19 08:55:13
nova-conductor   server1 internal enabled :-)   2014-04-19 08:55:14
nova-cert        server1 internal enabled :-)   2014-04-19 08:55:13
nova-scheduler   server1 internal enabled :-)   2014-04-19 08:55:13
nova-compute     server1 nova     enabled :-)   2014-04-19 08:55:14
nova-console     server1 internal enabled :-)   2014-04-19 08:55:14
nova-compute     server2 nova     enabled :-)   2014-04-19 10:45:19
2.3.5 Neutron
Neutron is the networking service of OpenStack. Let's see the steps involved in the installation of Neutron alongside other components.
Install the Neutron services using the following command:
apt-get install -y neutron-plugin-openvswitch neutron-plugin-openvswitch-agent openvswitch-switch
2.3.5.1 Neutron Configuration
Remove all the lines from /etc/neutron/neutron.conf and add the following lines
[DEFAULT]
core_plugin = ml2
notification_driver=neutron.openstack.common.notifier.rpc_notifier
verbose=True
rabbit_host=10.10.10.2
rpc_backend=neutron.openstack.common.rpc.impl_kombu
service_plugins=router
allow_overlapping_ips=True
auth_strategy=keystone
neutron_metadata_proxy_shared_secret=openstack
service_neutron_metadata_proxy=True
nova_admin_password=nova_pass
notify_nova_on_port_data_changes=True
notify_nova_on_port_status_changes=True
nova_admin_auth_url=http://10.10.10.2:35357/v2.0
nova_admin_tenant_id=service
nova_url=http://10.10.10.2:8774/v2
nova_admin_username=nova
[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
signing_dir = $state_path/keystone-signing
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.10.2
rabbit_port = 5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.10.10.2:8774
nova_admin_username = nova
nova_admin_tenant_id =
nova_admin_password = nova
nova_admin_auth_url = http://10.10.10.2:35357/v2.0
[database]
connection = mysql://neutrondbadmin:neutronsecret@10.10.10.2/neutron
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
We have created two physical networks, one as a flat network and the other as a VLAN network with VLAN IDs ranging from 100 to 200. We have mapped the external network to br-ex and Intnet1 to br-eth1. Now create the bridges:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
2.4 Client1
2.4.1 Base OS
2.4.2 Networking Configuration
2.4.3 NTP Client
Open the file /etc/ntp.conf and add the following line to sync to server1.
server 10.10.10.2
2.4.4 Client Tools
As mentioned above, this is a desktop installation of Ubuntu 14.04 to be used for tasks such as bundling of images. It will also
be used for managing the cloud infrastructure using nova, glance and swift commandline tools.
Install the required command line tools with the following command:
apt-get install python-novaclient python-glanceclient python-swiftclient python-cinderclient python-neutronclient
Install qemu-kvm
apt-get install qemu-kvm
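Optionally, check whether the client machine's CPU exposes hardware virtualization before relying on qemu-kvm (a count of 0 means KVM will fall back to slow software emulation):

```shell
# Count vmx (Intel) / svm (AMD) flags in the CPU feature list
grep -Ec '(vmx|svm)' /proc/cpuinfo || true
```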
export SERVICE_TOKEN=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL="http://10.10.10.2:5000/v2.0/"
export SERVICE_ENDPOINT=http://10.10.10.2:35357/v2.0
Execute nova and glance commands to check the connectivity to OpenStack setup.
nova list
+--------------------------------------+------------+--------+----------------------+
|                  ID                  |    Name    | Status |       Networks       |
+--------------------------------------+------------+--------+----------------------+
| 25ee9230-6bb5-4eca-8808-e6b4e0348362 | myinstance | ACTIVE | private=192.168.4.35 |
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1| ACTIVE | private=192.168.4.36 |
+--------------------------------------+------------+--------+----------------------+
glance index
ID                                   Name        Disk Format Container Format Size
------------------------------------ ----------- ----------- ---------------- ----------
65b9f8e1-cde8-40e7-93e3-0866becfb9d4 windows     qcow2       ovf              7580745728
f147e666-990c-47e2-9caa-a5a21470cc4e debian      qcow2       ovf              932904960
f3a8e689-02ed-460f-a587-dc868576228f opensuse    qcow2       ovf              1072300032
aa362fd9-7c28-480b-845c-85a5c38ccd86 centoscli   qcow2       ovf              1611530240
49f0ec2b-26dd-4644-adcc-2ce047e281c5 ubuntuimage qcow2       ovf              1471807488
2.5 OpenStack Dashboard
Start a browser and enter the URL http://10.10.10.2/horizon. You should see the dashboard login screen. Log in with the credentials username admin and password admin to manage the OpenStack setup.
Chapter 3
Image Management
3.1 Introduction
There are several pre-built images for OpenStack available from various sources. You can download such images and use them
to get familiar with OpenStack.
For any production deployment, you may like to have the ability to bundle custom images, with a custom set of applications or
configuration. This chapter will guide you through the process of creating Linux images of popular distributions from scratch.
We have also covered an approach to bundling Windows images.
There are some minor differences in the way you would bundle a Linux image, based on the distribution. Ubuntu makes it
very easy by providing cloud-init package, which can be used to take care of the instance configuration at the time of launch.
cloud-init handles importing ssh keys for password-less login, setting the host name etc. The instance acquires its instance-specific configuration from nova-compute by connecting to a metadata service running on 169.254.169.254.
While creating the image of a distro that does not have cloud-init or an equivalent package, you may need to take care of importing
the keys etc. by running a set of commands at boot time from rc.local.
The process used for creating the Linux images of different distributions is largely the same with a few minor differences, which
are explained below.
In all the cases, the documentation below assumes that you have a working KVM installation to use for creating the images. We
are using the machine called client1 as explained in the chapter on "Installation and Configuration" for this purpose.
The approach explained below will give you disk images that represent a disk without any partitions.
3.2 Creating a Linux Image
The first step would be to create a raw image on Client1. This will represent the main HDD of the virtual machine, so make sure
to give it as much space as you will need.
kvm-img create -f raw server.img 5G
3.2.1 OS Installation
Download the iso file of the Linux distribution you want to install in the image. For Ubuntu, you can download the iso from http://releases.ubuntu.com using wget or with the help of a browser.
Boot a KVM instance with the OS installer ISO in the virtual CD-ROM. This will start the installation process. The command below also sets up a VNC display at port 0.
sudo kvm -m 256 -cdrom ubuntu-14.04-server-amd64.iso -drive file=server.img,if=virtio,index=0 -boot d -net nic -net user -nographic -vnc :0
Connect to the VM through VNC (use display number :0) and finish the installation.
For example, where 10.10.10.4 is the IP address of client1:
vncviewer 10.10.10.4:0
During the creation of Linux images, create a single ext4 partition (mounted on /) and a swap partition.
After finishing the installation, relaunch the VM by executing the following command.
sudo kvm -m 256 -drive file=server.img,if=virtio,index=0 -boot c -net nic -net user -nographic -vnc :0
At this point, you can add all the packages you want to have installed, update the installation, add users and make any configuration changes you want in your image.
3.2.1.1
Ubuntu
Remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance
coming up as an interface other than eth0.
sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules
3.2.1.2
Fedora
yum update
yum install openssh-server
chkconfig sshd on
Remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance
coming up as an interface other than eth0.
sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules
Edit /etc/rc.local and add the following lines, which inject the user's ssh key at boot time:
depmod -a
modprobe acpiphp
# simple attempt to get the user ssh key using the meta-data service
mkdir -p /root/.ssh
echo >> /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"
3.2.1.3
OpenSUSE
Install curl.
zypper install curl
For ssh key injection into the instance use the following steps:
Create a file /etc/init.d/sshkey and add the following lines
echo >> /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"
Configure the firewall (not iptables directly) using the following command, and allow the ssh service.
yast2
Also remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the
instance coming up as an interface other than eth0.
rm -rf /etc/udev/rules.d/70-persistent-net.rules
3.2.1.4
Debian
Also remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the
instance coming up as an interface other than eth0.
rm -rf /etc/udev/rules.d/70-persistent-net.rules
3.2.1.5 CentOS
Remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance
coming up as an interface other than eth0.
rm -rf /etc/udev/rules.d/70-persistent-net.rules
3.2.2 Uploading the Linux image
Chapter 4
Instance Management
4.1 Introduction
An instance is a virtual machine provisioned by OpenStack on one of the nova-compute servers. When you launch an instance, a series of steps are triggered on various OpenStack components. During its life cycle, an instance moves through various stages, as shown in the diagram below:
4.2 Nova Command Line Tools
Nova has a bunch of command line tools to manage the OpenStack setup. These commands help you manage images, instances,
storage, networking etc. A few commands related to managing the instances are given below.
For a complete list of commands, see Commands chapter.
4.2.1 Creation of Key Pairs
OpenStack expects the client tools to use two kinds of credentials. One set of credentials, called the Access Key and Secret Key, is needed by all clients to make any requests to the Cloud Controller. Each user registered on the web interface has this set created for them. You can download it from the web interface as mentioned in the chapter on "Web Interface".
You will also need to generate a keypair, consisting of a private key and a public key, to be able to launch instances on OpenStack. These keys are injected into the instances to enable password-less SSH access to them. This depends on the way the necessary tools are bundled into the images. Please refer to the chapter on "Image Management" for more details.
Keypairs can also be generated using the following commands.
ssh-keygen
cd .ssh
nova keypair-add --pub_key id_rsa.pub mykey
This creates a new keypair called mykey. The private key id_rsa is saved locally in ~/.ssh which can be used to connect to an
instance launched using mykey as the keypair. You can see the available keypairs with nova keypair-list command.
nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| mykey  | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
| mykey2 | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
+--------+-------------------------------------------------+
While executing ssh-keygen, you can also specify a custom location and custom file names for the keypair you want to create.
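For example, a keypair can be generated at a custom path with ssh-keygen's -f option; the directory and file name below are purely illustrative:

```shell
# Generate a 2048-bit RSA keypair at a custom path, with no passphrase,
# instead of the default ~/.ssh/id_rsa. The path here is illustrative.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/oskey"

# Both halves of the keypair are created:
ls "$keydir"
```

The public half ($keydir/oskey.pub) can then be registered under any keypair name with nova keypair-add --pub_key, as shown above.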
To delete an existing keypair:
nova keypair-delete mykey2
4.2.2
Managing Instances
There are several commands that help in managing the instances. Here are a few examples:
localadmin@server1$ nova boot --flavor 1 --image 4b1dceb5-6c9a-4268-9bcc-4ff33e209bd2 myinstance --key_name mykey
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-SRV-ATTR:host                 |                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  |                                               |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               |                                               |
| OS-SRV-USG:terminated_at             |                                               |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | WrGtHqBCNA64                                  |
| config_drive                         |                                               |
| created                              | 2014-09-27T10:09:11Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 4017d7f2-44e6-4858-a175-b4dbb178156d          |
| image                                | Cirros (4b1dceb5-6c9a-4268-9bcc-4ff33e209bd2) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | myinstance                                    |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | fec44bd5871a41068791756610090158              |
| updated                              | 2014-09-27T10:09:11Z                          |
| user_id                              | f50fe85ae90b4381871f4018257414ed              |
+--------------------------------------+-----------------------------------------------+
$ nova list
+--------------------------------------+-------------+--------+----------------------+
|                  ID                  |     Name    | Status |       Networks       |
+--------------------------------------+-------------+--------+----------------------+
| 25ee9230-6bb5-4eca-8808-e6b4e0348362 | myinstance  | ACTIVE | private=192.168.4.35 |
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1 | ACTIVE | private=192.168.4.36 |
+--------------------------------------+-------------+--------+----------------------+
$ nova reboot 25ee9230-6bb5-4eca-8808-e6b4e0348362
$ nova list
+--------------------------------------+-------------+--------+----------------------+
|                  ID                  |     Name    | Status |       Networks       |
+--------------------------------------+-------------+--------+----------------------+
| 25ee9230-6bb5-4eca-8808-e6b4e0348362 | myinstance  | REBOOT | private=192.168.4.35 |
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1 | ACTIVE | private=192.168.4.34 |
+--------------------------------------+-------------+--------+----------------------+
$ nova delete 25ee9230-6bb5-4eca-8808-e6b4e0348362
$ nova list
+--------------------------------------+-------------+--------+----------------------+
|                  ID                  |     Name    | Status |       Networks       |
+--------------------------------------+-------------+--------+----------------------+
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1 | ACTIVE | private=192.168.4.34 |
+--------------------------------------+-------------+--------+----------------------+
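Since the table layout of nova list is stable, its output is easy to script over. As a sketch, the following extracts just the instance IDs; a captured sample of the output is used here so the snippet is self-contained, but in practice you would pipe nova list directly into awk:

```shell
# Extract the ID column from `nova list`-style output.
sample='+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1 | ACTIVE | private=192.168.4.34 |
+----+------+--------+----------+'

# Split on "|"; keep rows whose Status column holds a real state,
# then strip spaces from the ID field.
ids=$(echo "$sample" | awk -F'|' '$4 ~ /ACTIVE|BUILD|REBOOT|ERROR/ {gsub(/ /, "", $2); print $2}')
echo "$ids"
```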
The VM type (flavor) determines the hard disk size, the amount of RAM, and the number of CPUs allocated to the instance. Check
the VM types available:
nova flavor-list
Chapter 5
Web Interface
5.1
Login
Using the OpenStack Dashboard, one can manage various OpenStack services. It may be used to manage instances and images,
create keypairs, attach volumes to instances, manipulate Swift containers, etc. The OpenStack Dashboard is accessible via
http://10.10.10.2/horizon
Log in to the dashboard with username "admin" and password "admin".
5.2
User overview
After logging in, depending on their access privileges, the user is allowed access to specific projects. Below is the overview page
for a project belonging to the admin user. One can view and download some basic usage metric reports here.
5.3
Instances
This page lists the currently running instances belonging to the user admin. From this page, one can terminate, pause, or reboot
any running instance, connect to the VNC console of an instance, etc.
5.4
Instances
5.5
Images
5.6
Projects
This page lists the available projects (tenants) that have been created. One can also create new projects, assign users to
projects, etc.
5.7
Users
This page lists the users that have been created. One can also create new users and disable or delete existing ones.
Chapter 6
Storage Management
6.1
Cinder-volume
Cinder-volume provides persistent block storage for use with instances. Instance storage is non-persistent in nature, and hence
any data that you generate and store on the file system of the first disk of the instance is lost when the instance is terminated.
You will need to use the persistent volumes provided by cinder-volume if you want any data generated during the life of the
instance to persist after the instance is terminated.
Commands from the cinder client package can be used to manage these volumes.
Here are a few examples:
6.1.1
Managing Volumes
Make sure that you have sourced the creds file before running any of the following commands.
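The creds file is simply a set of OS_* environment variable exports that the clients read. A minimal sketch is shown below; the username and password match the dashboard credentials used in this guide, while the Keystone endpoint (port 5000, v2.0 API) is an assumed value you should adjust for your own setup:

```shell
# Sketch of a creds (openrc) file for the nova/cinder/neutron clients.
# The values here are assumptions; adjust them for your deployment.
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.10.10.2:5000/v2.0
```

Save it as, for example, creds, and load it into the current shell with: source creds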
Create a 10 GB volume
cinder create 10 --display-name myvolume1
A volume can only be attached to one instance at a time. When "cinder list" shows the status of a volume as "available", it means
the volume is not attached to any instance and is ready to be used. The status changes from "available" to "in-use" when it is
attached to an instance successfully.
When a volume is attached to an instance, it shows up as an additional SCSI disk on the instance. You can log in to the instance,
format the disk, mount it, and use it.
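For reference, the attach step itself is done with the nova client. A sketch with placeholder IDs (as noted below, the requested device name may not be honored exactly by the guest OS):

```shell
# Attach a volume to a running instance, requesting /dev/vdb as the device:
nova volume-attach <instance-id> <volume-id> /dev/vdb

# Then, inside the instance (first use only), format and mount it:
sudo mkfs.ext4 /dev/vdb
sudo mount /dev/vdb /mnt
```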
Detach a volume from an instance.
nova volume-detach b4aa8dd3-6d2c-42fa-882f-1ab3160ee6e7 2c6ed-8de-49c-8f7-6eda4fd
The data on the volume persists even after the volume is detached from an instance. You can see the data on reattaching the
volume to another instance.
Even though you have indicated /dev/vdb as the device on the instance, the actual device name created by the OS running inside
the instance may differ. You can find the name of the device by looking at the device nodes in /dev or by watching the syslog
when the volume is being attached.
6.1.2
Swift
Swift is a storage service that can be used for the storage and archival of objects. Swift provides a REST interface; you can also
use the swift command line client, an interface to the OpenStack object store service.
To get information about the swift account, containers, and objects:
swift stat
Output:
       Account: AUTH_8df4664c24bc40a4889fab4517e8e599
    Containers: 1
       Objects: 1
         Bytes: 1790
 Accept-Ranges: bytes
   X-Timestamp: 1413260773.11084
    X-Trans-Id: txa04183fe1f9943fa94672-00543ca5f6
  Content-Type: text/plain; charset=utf-8
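Objects live inside containers. A few common swift client operations, using an illustrative container and file name:

```shell
# Upload a file (the container is created on first upload):
swift upload mycontainer myfile.txt

# List containers, or the objects within one:
swift list
swift list mycontainer

# Download an object:
swift download mycontainer myfile.txt
```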
Chapter 7
Network Management
7.1
Introduction
In OpenStack, networking is managed by a component called Neutron. Neutron interacts with nova-compute to ensure that
instances have the right kind of networking set up for them to communicate among themselves as well as with the outside
world. Each OpenStack instance can have two IP addresses attached to it: a private IP address and a public IP address. The
private IP address is typically used for communication between instances, and the public IP address for communication of
instances with the outside world. The so-called public IP address need not be an address routable on the Internet; it can even
be an address on the corporate LAN.
The network configuration inside the instance is done with the private IP address in view. The association between the private IP
and the public IP, and the necessary routing, is handled by neutron-l3-agent; the instances need not be aware of it.
Neutron provides four different network types:
Flat
VLAN
GRE
VxLAN
A flat network should be chosen if you intend to create only one OpenStack private network per physical network. A VLAN
network is ideal for beginners to easily grasp the idea of OpenStack networking. The other modes can be chosen when you do not
have VLAN-enabled switches or when a more specialized setup is needed.
The network type is chosen using the following configuration option in /etc/neutron/plugins/ml2/ml2_conf.ini. This line can
also contain more than one network type.
tenant_network_types =
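For example, to allow VLAN tenant networks with GRE as an additional type, the line in the [ml2] section of that file would read as follows; the exact set of types is deployment-specific:

```ini
[ml2]
tenant_network_types = vlan,gre
```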
In each of these cases, run the following commands to set up private and public IP addresses for use by the instances.
You can create a private network using the following command:
neutron net-create N1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| name                      | N1                                   |
| provider:network_type     | vlan                                 |
| provider:physical_network | Intnet1                              |
| provider:segmentation_id  | 100                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 8df4664c24bc40a4889fab4517e8e599     |
+---------------------------+--------------------------------------+
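The address range for instances comes from a subnet attached to this network; the subnet named subnetone referred to below is created with neutron subnet-create. The CIDR here is an assumption chosen to match the instance addresses (192.168.4.x) shown earlier:

```shell
neutron subnet-create N1 192.168.4.0/24 --name subnetone
```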
Now you can launch instances in network N1 and they will have IP addresses in the range defined by subnetone.
The public IP network, or external network, can be created using the following command:
neutron net-create Extnet --provider:network_type flat --provider:physical_network External --router:external True --shared
We need a router to connect the private network and the public network. We can create a router using the following command:
neutron router-create EXTRouter
We have to add the private and public networks to the router's interfaces. The router is attached to the private network by
specifying the name of the corresponding subnet:
neutron router-interface-add EXTRouter subnetone
You can then create a floating IP in the network "Extnet", from the public IP range already reserved (10.10.10.10 to
10.10.10.49):
neutron floatingip-create Extnet
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| router_id           |                                      |
| status              | ACTIVE                               |
| tenant_id           | 8df4664c24bc40a4889fab4517e8e599     |
+---------------------+--------------------------------------+
Associate the floating IP with a running instance using the following command
nova floating-ip-associate instance-ID Floating-IP-Address
Chapter 8
Security
8.1
Security Overview
OpenStack provides ingress filtering for instances based on the concept of security groups. OpenStack accomplishes ingress
filtering by creating suitable iptables rules. A security group is a named set of rules that get applied to the incoming packets
of the instances. You can specify a security group while launching an instance. Each security group can have multiple rules
associated with it, and each rule specifies a source IP/network, protocol type, destination ports, etc. Any packet matching the
parameters specified in a rule is allowed in; the rest of the packets are blocked.
A security group that does not have any rules associated with it blocks all incoming traffic. The mechanism only provides
ingress filtering and does not provide any egress filtering; as a result, all outbound traffic is allowed. If you need to implement
egress filtering, you will need to implement it inside the instance using a firewall.
Horizon lets you manage security groups and also lets you specify a security group while launching an instance. You can also use
the nova client for this purpose.
Here are a few nova commands to manage security groups. As in our earlier chapters, the project name is "proj".
Create a security group named "myserver".
nova secgroup-create myserver "My default"
+--------------------------------------+----------+-------------+
| Id                                   | Name     | Description |
+--------------------------------------+----------+-------------+
| 0d524280-2e8a-4b43-9445-50e6d185daf8 | myserver | My default  |
+--------------------------------------+----------+-------------+
Add rules to the security group "myserver" allowing ICMP and TCP traffic from 192.168.1.1:
nova secgroup-add-rule myserver tcp 22 22 192.168.1.1/32
+-------------+-----------+---------+----------------+--------------+
| IP Protocol | From Port | To Port | IP Range       | Source Group |
+-------------+-----------+---------+----------------+--------------+
| tcp         | 22        | 22      | 192.168.1.1/32 |              |
+-------------+-----------+---------+----------------+--------------+
localadmin@server1$ nova secgroup-add-rule myserver icmp -1 -1 192.168.1.1/32
+-------------+-----------+---------+----------------+--------------+
| IP Protocol | From Port | To Port | IP Range       | Source Group |
+-------------+-----------+---------+----------------+--------------+
| icmp        | -1        | -1      | 192.168.1.1/32 |              |
+-------------+-----------+---------+----------------+--------------+
Remove the rule for SSH traffic from the source IP 192.168.1.1 from the security group "myserver":
nova secgroup-delete-rule myserver tcp 22 22 192.168.1.1/32
+-------------+-----------+---------+----------------+--------------+
| IP Protocol | From Port | To Port | IP Range       | Source Group |
+-------------+-----------+---------+----------------+--------------+
| tcp         | 22        | 22      | 192.168.1.1/32 |              |
+-------------+-----------+---------+----------------+--------------+
When you do not specify a security group, the instance gets associated with a built-in security group called "default".
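To launch an instance into a specific security group instead, pass the group name at boot time; the image ID below is a placeholder:

```shell
nova boot --flavor 1 --image <image-id> --key_name mykey --security-groups myserver myinstance2
```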
Chapter 9
OpenStack Commands
9.1
Nova Commands
dns-create-public-domain     Create the specified DNS domain.
dns-delete                   Delete the specified DNS entry.
dns-delete-domain            Delete the specified DNS domain.
dns-domains                  Print a list of available dns domains.
dns-list                     List current DNS entries for domain and ip or
                             domain and name.
endpoints                    Discover endpoints that get returned from the
                             authenticate services.
evacuate                     Evacuate server from failed host to specified one.
fixed-ip-get                 Retrieve info on a fixed ip.
fixed-ip-reserve             Reserve a fixed IP.
fixed-ip-unreserve           Unreserve a fixed IP.
flavor-access-add            Add flavor access for the given tenant.
flavor-access-list           Print access information about the given flavor.
flavor-access-remove         Remove flavor access for the given tenant.
flavor-create                Create a new flavor.
flavor-delete                Delete a specific flavor.
flavor-key                   Set or unset extra_spec for a flavor.
flavor-list                  Print a list of available 'flavors' (sizes of
                             servers).
flavor-show                  Show details about the given flavor.
floating-ip-associate        Associate a floating IP address to a server.
floating-ip-bulk-create      Bulk create floating ips by range.
floating-ip-bulk-delete      Bulk delete floating ips by range.
floating-ip-bulk-list        List all floating ips.
floating-ip-create           Allocate a floating IP for the current tenant.
floating-ip-delete           De-allocate a floating IP.
floating-ip-disassociate     Disassociate a floating IP address from a server.
floating-ip-list             List floating ips for this tenant.
floating-ip-pool-list        List all floating ip pools.
get-password                 Get password for a server.
get-rdp-console              Get a rdp console to a server.
get-spice-console            Get a spice console to a server.
get-vnc-console              Get a vnc console to a server.
host-action                  Perform a power action on a host.
host-describe                Describe a specific host.
host-list                    List all hosts by service.
host-update                  Update host settings.
hypervisor-list              List hypervisors.
hypervisor-servers           List servers belonging to specific hypervisors.
hypervisor-show              Display the details of the specified hypervisor.
hypervisor-stats             Get hypervisor statistics over all compute nodes.
hypervisor-uptime            Display the uptime of the specified hypervisor.
image-create                 Create a new image by taking a snapshot of a
                             running server.
image-delete                 Delete specified image(s).
image-list                   Print a list of available images to boot from.
image-meta                   Set or Delete metadata on an image.
image-show                   Show details about the given image.
interface-attach             Attach a network interface to a server.
interface-detach             Detach a network interface from a server.
interface-list               List interfaces attached to a server.
keypair-add                  Create a new key pair for use with servers.
keypair-delete               Delete keypair given by its name.
keypair-list                 Print a list of keypairs for a user.
keypair-show                 Show details about the given keypair.
list                         List active servers.
list-secgroup                List security groups of a server.
live-migration               Migrate a running server to a new machine.
lock                         Lock a server.
meta                         Set or delete metadata on a server.
migrate                      Migrate a server.
shelve-offload               Remove a shelved server from the compute node.
show                         Show details about the given server.
ssh                          SSH into a server.
start                        Start a server.
stop                         Stop a server.
suspend                      Suspend a server.
unlock                       Unlock a server.
unpause                      Unpause a server.
unrescue                     Unrescue a server.
unshelve                     Unshelve a server.
usage                        Show usage data for a single tenant.
usage-list                   List usage data for all tenants.
volume-attach                Attach a volume to a server.
volume-create                Add a new volume.
volume-delete                Remove volume(s).
volume-detach                Detach a volume from a server.
volume-list                  List all the volumes.
volume-show                  Show details about a volume.
volume-snapshot-create       Add a new snapshot.
volume-snapshot-delete       Remove a snapshot.
volume-snapshot-list         List all the snapshots.
volume-snapshot-show         Show details about a snapshot.
volume-type-create           Create a new volume type.
volume-type-delete           Delete a specific volume type.
volume-type-list             Print a list of available 'volume types'.
volume-update                Update volume attachment.
x509-create-cert             Create x509 cert for a user in tenant.
x509-get-root-cert           Fetch the x509 root cert.
bash-completion              Prints all of the commands and options to stdout
                             so that the nova.bash_completion script doesn't
                             have to hard code them.
help                         Display help about this program or one of its
                             subcommands.
host-servers-migrate         Migrate all instances of the specified host to
                             other available hosts.
list-extensions              List all the os-api extensions that are
                             available.
force-delete                 Force delete a server.
restore                      Restore a soft-deleted server.
migration-list               Print a list of migrations.
host-evacuate                Evacuate all instances from failed host to
                             specified one.
instance-action              Show an action.
instance-action-list         List actions on a server.
cell-capacities              Get cell capacities for all cells or a given
                             cell.
cell-show                    Show details of a given cell.
baremetal-interface-add      Add a network interface to a baremetal node.
baremetal-interface-list     List network interfaces associated with a
                             baremetal node.
baremetal-interface-remove   Remove a network interface from a baremetal node.
baremetal-node-create        Create a baremetal node.
baremetal-node-delete        Remove a baremetal node and any associated
                             interfaces.
baremetal-node-list          Print list of available baremetal nodes.
baremetal-node-show          Show information about a baremetal node.
host-meta                    Set or delete metadata on all instances of a host.
net                          Show a network.
net-create                   Create a network.
net-delete                   Delete a network.
net-list                     List networks.
9.2
Glance Commands
9.3
Swift Commands
9.4
Keystone Commands
9.5
Neutron Commands
agent-delete
agent-list
agent-show
agent-update
cisco-credential-create
cisco-credential-delete
cisco-credential-list
cisco-credential-show
cisco-network-profile-create
cisco-network-profile-delete
cisco-network-profile-list
cisco-network-profile-show
cisco-network-profile-update
cisco-policy-profile-list
cisco-policy-profile-show
cisco-policy-profile-update
dhcp-agent-list-hosting-net
dhcp-agent-network-add
dhcp-agent-network-remove
ext-list
ext-show
firewall-create
firewall-delete
firewall-list
firewall-policy-create
firewall-policy-delete
firewall-policy-insert-rule
firewall-policy-list
firewall-policy-remove-rule
firewall-policy-show
firewall-policy-update
firewall-rule-create
firewall-rule-delete
firewall-rule-list
firewall-rule-show
firewall-rule-update
firewall-show
firewall-update
floatingip-associate
floatingip-create
floatingip-delete
floatingip-disassociate
floatingip-list
floatingip-show
help
ipsec-site-connection-create
ipsec-site-connection-delete
ipsec-site-connection-list
ipsec-site-connection-show
ipsec-site-connection-update
l3-agent-list-hosting-router
l3-agent-router-add
l3-agent-router-remove
lb-agent-hosting-pool
lb-healthmonitor-associate
lb-healthmonitor-create
lb-healthmonitor-delete
lb-healthmonitor-disassociate
lb-healthmonitor-list
lb-healthmonitor-show
lb-healthmonitor-update
lb-member-create
lb-member-delete
lb-member-list
lb-member-show
lb-member-update
lb-pool-create
lb-pool-delete
lb-pool-list
lb-pool-list-on-agent
lb-pool-show
lb-pool-stats
lb-pool-update
lb-vip-create
lb-vip-delete
lb-vip-list
lb-vip-show
lb-vip-update
meter-label-create
meter-label-delete
meter-label-list
meter-label-rule-create
meter-label-rule-delete
meter-label-rule-list
meter-label-rule-show
meter-label-show
net-create
net-delete
net-external-list
net-gateway-connect
net-gateway-create
net-gateway-delete
net-gateway-disconnect
net-gateway-list
net-gateway-show
net-gateway-update
net-list
net-list-on-dhcp-agent
net-show
net-update
port-create
port-delete
port-list
port-show
port-update
queue-create
queue-delete
queue-list
queue-show
quota-delete
quota-list
quota-show
quota-update
router-create
router-delete
router-gateway-clear
router-gateway-set
router-interface-add
router-interface-delete
router-list
router-list-on-l3-agent
router-port-list
router-show
router-update
security-group-create
security-group-delete
security-group-list
security-group-rule-create
security-group-rule-delete
security-group-rule-list
security-group-rule-show
security-group-show
security-group-update
service-provider-list
subnet-create
subnet-delete
subnet-list
subnet-show
subnet-update
vpn-ikepolicy-create
vpn-ikepolicy-delete
vpn-ikepolicy-list
vpn-ikepolicy-show
vpn-ikepolicy-update
vpn-ipsecpolicy-create
vpn-ipsecpolicy-delete
vpn-ipsecpolicy-list
vpn-ipsecpolicy-show
vpn-ipsecpolicy-update
vpn-service-create
vpn-service-delete
vpn-service-list
vpn-service-show
vpn-service-update
9.6
Cinder Commands
encryption-type-list         List encryption type information for all volume
                             types (Admin Only).
encryption-type-show
endpoints
extend
extra-specs-list
force-delete
list
metadata
metadata-show
metadata-update-all
transfer-show
type-create
type-delete
type-key
type-list
upload-to-image
bash-completion
help
list-extensions