OpenStack Beginners Guide
(for Ubuntu - Trusty)
v4.2, 10 Nov 2014

Akilesh K
Johnson D
Yogesh G

Ubuntu, the Ubuntu logo and Canonical are registered trademarks of Canonical. Read Canonical's trademark policy here.
OpenStack and the OpenStack logo are registered trademarks of OpenStack. Read OpenStack's trademark policy here.
All other trademarks mentioned in the book belong to their respective owners.
This book is aimed at making it easy for a beginner to build and maintain a private cloud using OpenStack. This book will be updated periodically based on the suggestions, ideas, corrections, etc., from readers.
Mail feedback to: books@pinlabs.in
Please report bugs in the content of this book at:
https://github.com/eternaltyro/openstack-beginners-guide and we will try to fix them as early as possible and incorporate them into the next version of the book.
Released under Creative Commons - Attribution-ShareAlike 4.0 International license.
A brief description of the license
A more detailed license text

Preface
About this guide
The idea of the OpenStack Beginners Guide was born around the OpenStack Cactus release, when only limited documentation was available. Our initial contributors were Murthyraju Manthena, Kiran Murari, Suseendran RB, Yogesh G, Atul Jha and Johnson D. We kept releasing updated versions of this book for every OpenStack release, adding the newer components. Later on, this book got merged with the official OpenStack documentation. We felt we still needed to maintain this book for our own reference, and hence we are releasing this edition for the Icehouse version on Ubuntu 14.04 LTS.

Target Audience
Our aim has been to provide a guide for beginners who are new to OpenStack. Good familiarity with virtualization is assumed, as troubleshooting OpenStack-related problems requires a good knowledge of virtualization. Similarly, familiarity with Cloud Computing concepts and terminology will be of help. Prior exposure to the AWS API and/or tools is not mandatory, though such exposure will accelerate the learning process greatly.

Acknowledgement
Most of the content has been borrowed from web resources like manuals, documentation and white papers from OpenStack and Canonical, as well as numerous posts on forums and discussions on the OpenStack IRC channel. We would like to thank all the authors of these resources.

License
Attribution-ShareAlike 4.0 International.
For the full version of the license text, please refer to http://creativecommons.org/licenses/by-sa/4.0/legalcode; a shorter description is at http://creativecommons.org/licenses/by-sa/4.0/.

Feedback
We would really appreciate your feedback. We will enhance the book on an ongoing basis based
on your feedback. Please mail your feedback to books@pinlabs.in.


Contents

1 Introduction to OpenStack and Its Components
  1.1 Cloud Computing
  1.2 OpenStack
    1.2.1 OpenStack Compute Infrastructure (Nova)
      1.2.1.1 Functions and Features
      1.2.1.2 Components of OpenStack Compute
        1.2.1.2.1 API Server (nova-api)
        1.2.1.2.2 Compute Worker (nova-compute)
        1.2.1.2.3 Scheduler (nova-scheduler)
    1.2.2 OpenStack Imaging Service (Glance)
      1.2.2.1 Functions and Features
      1.2.2.2 Components of Glance
    1.2.3 OpenStack Storage Infrastructure (Swift)
      1.2.3.1 Functions and Features
      1.2.3.2 Components of Swift
      1.2.3.3 Swift Proxy Server
      1.2.3.4 Swift Object Server
      1.2.3.5 Swift Container Server
      1.2.3.6 Swift Account Server
      1.2.3.7 The Ring
    1.2.4 OpenStack Identity Service (Keystone)
      1.2.4.1 Components of Identity Service
    1.2.5 OpenStack Network Service (Neutron)
      1.2.5.1 Neutron Server
      1.2.5.2 Neutron OpenvSwitch Agent
      1.2.5.3 Neutron DHCP Agent
      1.2.5.4 Neutron L3 Agent
      1.2.5.5 Neutron Metadata Agent
    1.2.6 OpenStack Volume Service (Cinder)
      1.2.6.1 Cinder API
      1.2.6.2 Cinder Scheduler
      1.2.6.3 Cinder Volume
    1.2.7 OpenStack Administrative Web-Interface (Horizon)
    1.2.8 Message Queue (RabbitMQ Server)
    1.2.9 SQL Backend (MySQL)
    1.2.10 Software Switch (OpenvSwitch)

2 Installation and Configuration
  2.1 Introduction
  2.2 Server1
    2.2.1 Base OS
    2.2.2 Network Configuration
    2.2.3 NTP Server
    2.2.4 RabbitMQ Server
    2.2.5 MySQL
      2.2.5.1 Installation
      2.2.5.2 Creating Databases
    2.2.6 Keystone
      2.2.6.1 Creating Tenants
      2.2.6.2 Creating Users
      2.2.6.3 Creating Roles
      2.2.6.4 Adding Roles to Users in Tenants
      2.2.6.5 Creating Services
      2.2.6.6 Creating Endpoints
    2.2.7 Glance
      2.2.7.1 Glance Configuration
    2.2.8 Nova
      2.2.8.1 Nova Configuration
    2.2.9 Neutron
      2.2.9.1 Neutron Configuration
    2.2.10 Cinder
      2.2.10.1 Cinder Configuration
    2.2.11 Swift
      2.2.11.1 Swift Installation
      2.2.11.2 Swift Storage Backends
        2.2.11.2.1 Partition as a storage device
        2.2.11.2.2 Loopback File as a storage device
        2.2.11.2.3 Using the backend
      2.2.11.3 Configure Rsync
      2.2.11.4 Configure Swift Components
        2.2.11.4.1 Configure Swift Proxy Server
        2.2.11.4.2 Configure Swift Account Server
        2.2.11.4.3 Configure Swift Container Server
        2.2.11.4.4 Configure Swift Object Server
        2.2.11.4.5 Configure Swift Rings
      2.2.11.5 Starting Swift services
      2.2.11.6 Testing Swift
    2.2.12 OpenStack Dashboard
  2.3 Server2
    2.3.1 Base OS
    2.3.2 Network Configuration
    2.3.3 NTP Client
    2.3.4 Nova Components (nova-compute alone)
    2.3.5 Neutron Components (neutron-openvswitch-agent alone)
      2.3.5.1 Neutron Configuration
  2.4 Client1
    2.4.1 Base OS
    2.4.2 Networking Configuration
    2.4.3 NTP Client
    2.4.4 Client Tools
  2.5 OpenStack Dashboard

3 Image Management
  3.1 Introduction
  3.2 Creating a Linux Image
    3.2.1 OS Installation
      3.2.1.1 Ubuntu
      3.2.1.2 Fedora
      3.2.1.3 OpenSUSE
      3.2.1.4 Debian
      3.2.1.5 CentOS 6 and RHEL 6
    3.2.2 Uploading the Linux image

4 Instance Management
  4.1 Introduction
  4.2 OpenStack Command Line Tools
    4.2.1 Creation of Key Pairs
    4.2.2 Launch and manage instances

5 OpenStack Dashboard (Horizon)
  5.1 Login
  5.2 User overview
  5.3 Instances
  5.4 Instances
  5.5 Images
  5.6 Projects
  5.7 Users

6 Storage Management
  6.1 Cinder-volume
    6.1.1 Interacting with Storage Controller
    6.1.2 Swift

7 Network Management
  7.1 Introduction

8 Security
  8.1 Security Overview

9 OpenStack Commands
  9.1 Nova Commands
  9.2 Glance Commands
  9.3 Swift Commands
  9.4 Keystone Commands
  9.5 Neutron Commands
  9.6 Cinder Commands

List of Tables
  2.1 Configuration

This is a tutorial-style beginner's guide for OpenStack on Ubuntu 14.04, Trusty Tahr. The aim is to help the reader set up a minimal installation of OpenStack.


Chapter 1

Introduction to OpenStack and Its Components

1.1 Cloud Computing

Cloud computing is a computing model, where resources such as computing power, storage, network and software are abstracted
and provided as services on the internet in a remotely accessible fashion. Billing models for these services are generally similar
to the ones adopted for public utilities. On-demand availability, ease of provisioning, dynamic and virtually infinite scalability
are some of the key attributes of cloud computing.
An infrastructure setup using the cloud computing model is generally referred to as the "cloud". The following are the broad
categories of services available on the cloud:
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)

1.2 OpenStack

OpenStack is a collection of open source software projects that enterprises and service providers can use to set up and run their cloud compute and storage infrastructure. Rackspace and NASA were the key initial contributors to the stack. Rackspace contributed their "Cloud Files" platform (code) to power the Object Storage part of OpenStack, while NASA contributed their "Nebula" platform (code) to power the Compute part. The OpenStack consortium has grown to more than 150 members, including Canonical, Dell and Citrix.
There are 10 main service families in OpenStack Icehouse, of which the following seven are covered in this guide:
Nova - Compute Service
Swift - Storage Service
Glance - Imaging Service
Keystone - Identity Service
Neutron - Networking service
Cinder - Volume Service
Horizon - Web UI Service

1.2.1 OpenStack Compute Infrastructure (Nova)

Nova is the computing fabric controller for the OpenStack cloud. All activities needed to support the life cycle of instances within the OpenStack cloud are handled by Nova. This makes Nova a management platform that manages the compute resources, networking, authorization and scalability needs of the OpenStack cloud. Nova does not provide any virtualization capabilities by itself; instead, it uses libvirt APIs to interact with the supported hypervisors. Nova exposes all its capabilities through a web services API that is compatible with the EC2 API of Amazon Web Services.
1.2.1.1 Functions and Features

Instance life cycle management


Management of compute resources
Networking and Authorization
REST-based API
Asynchronous eventually consistent communication
Hypervisor agnostic: support for Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V
1.2.1.2 Components of OpenStack Compute

Nova Cloud Fabric is composed of the following major components:


API Server (nova-api)
Compute Workers (nova-compute)
Scheduler (nova-scheduler)
1.2.1.2.1 API Server (nova-api)

The API Server provides an interface for the outside world to interact with the cloud infrastructure. It is the only component that the outside world uses to manage the infrastructure. The management is done through web service calls using the EC2 API. The API Server then, in turn, communicates with the relevant components of the cloud infrastructure through the Message Queue. As an alternative to the EC2 API, OpenStack also provides a native API called the "OpenStack API".
1.2.1.2.2 Compute Worker (nova-compute)

Compute workers deal with the instance management life cycle. They receive requests for life cycle management via the Message Queue and carry out the corresponding operations. There are several compute workers in a typical production cloud deployment. An instance is deployed on any one of the available compute workers based on the scheduling algorithm used.
1.2.1.2.3 Scheduler (nova-scheduler)

The scheduler maps nova-api calls to the appropriate OpenStack components. It runs as a daemon named nova-scheduler and picks up a compute server from a pool of available resources depending upon the scheduling algorithm in place. A scheduler can base its decisions on various factors such as load, memory, physical distance of the availability zone, CPU architecture, etc. The nova scheduler implements a pluggable architecture.
Currently the nova-scheduler implements a few basic scheduling algorithms:
chance: In this method, a compute host is chosen randomly across availability zones.
availability zone: Similar to chance, but the compute host is chosen randomly from within a specified availability zone.
simple: In this method, hosts whose load is least are chosen to run the instance. The load information may be fetched from a
load balancer.
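For illustration, the scheduler in use is selected in nova.conf. A minimal sketch, assuming the Icehouse-era option name and the bundled ChanceScheduler class (verify against your installed version before relying on it):

# /etc/nova/nova.conf - select the "chance" scheduler instead of the default
[DEFAULT]
scheduler_driver=nova.scheduler.chance.ChanceScheduler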


1.2.2 OpenStack Imaging Service (Glance)

OpenStack Imaging Service is a lookup and retrieval system for virtual machine images. It can be configured to use any one of
the following storage backends:
Local filesystem (default)
OpenStack Object Store to store images
S3 storage directly
S3 storage with Object Store as the intermediate for S3 access.
HTTP (read-only)
1.2.2.1 Functions and Features

Provides imaging service


1.2.2.2 Components of Glance

Glance-api
Glance-registry

1.2.3 OpenStack Storage Infrastructure (Swift)

Swift provides a distributed, eventually consistent virtual object store for OpenStack. It is analogous to Amazon Web Services' Simple Storage Service (S3). Swift is capable of storing billions of objects distributed across nodes. Swift has built-in redundancy and failover management and is capable of archiving and media streaming. It is extremely scalable in terms of both size (several petabytes) and capacity (number of objects).
1.2.3.1 Functions and Features

Storage of large number of objects


Storage of large sized objects
Data Redundancy
Archival capabilities - Work with large datasets
Data container for virtual machines and cloud apps
Media Streaming capabilities
Secure storage of objects
Backup and archival
Extreme scalability
1.2.3.2 Components of Swift

Swift Account
Swift Container
Swift Object
Swift Proxy
The RING

1.2.3.3 Swift Proxy Server

Consumers interact with the Swift setup through the proxy server using the Swift API. The proxy server acts as a gatekeeper and receives requests from the outside world. It looks up the location of the appropriate entities and routes the requests to them. The proxy server also handles failures of entities by rerouting requests to failover (handoff) entities.

1.2.3.4 Swift Object Server

The Object server is a blob store. Its responsibility is to handle storage, retrieval and deletion of objects stored in the local
storage. Objects are typically binary files stored in the filesystem with metadata contained as extended file attributes (xattr).
Note: xattr is supported in several filesystems such as ext3, ext4, XFS, Btrfs, JFS and ReiserFS in Linux. But it is known to work
best under XFS, JFS, ReiserFS, Reiser4, and ZFS. XFS is considered to be the best option.
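As a hedged illustration of this mechanism, the xattrs of a stored object file on a storage node can be inspected with the getfattr tool from the attr package; the object path below is a hypothetical placeholder:

apt-get install attr
getfattr -d -m - /srv/node/sdb1/objects/<partition>/<suffix>/<hash>/<timestamp>.data    # shows user.swift.metadata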

1.2.3.5 Swift Container Server

The container server lists the objects in a container. The lists are stored as SQLite files. The container server also tracks statistics such as the number of objects contained in a container and the storage space it occupies.

1.2.3.6 Swift Account Server

The account server lists containers the same way a container server lists objects.

1.2.3.7 The Ring

The ring contains information about the physical location of the objects stored inside Swift. It is a virtual representation of the mapping of entity names to their real physical locations. It is analogous to an indexing service that various processes use to look up and locate the real physical location of entities within the cluster. Entities like Accounts, Containers and Objects each have their own separate ring.
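To give a feel for how a ring is built, here is a minimal sketch using the standard swift-ring-builder tool; the parameters (2^18 partitions, 3 replicas, 1 hour between moves) and the device shown are illustrative, not recommendations:

swift-ring-builder object.builder create 18 3 1
swift-ring-builder object.builder add z1-10.10.10.2:6000/sdb1 100
swift-ring-builder object.builder rebalance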

1.2.4 OpenStack Identity Service (Keystone)

Keystone provides identity and access policy services for all components in the OpenStack family. It implements its own REST-based API (Identity API). It provides authentication and authorization for all components of OpenStack including (but not limited to) Swift, Glance and Nova. Authentication verifies that a request actually comes from who it claims to come from. Authorization verifies whether the authenticated user has access to the services he/she is requesting.


Keystone provides two ways of authentication. One is username/password based and the other is token based. Apart from that, Keystone provides the following services:
Token Service (that carries authorization information about an authenticated user)
Catalog Service (that contains a list of available services at the user's disposal)
Policy Service (that lets Keystone manage access to specific services by specific users or groups)
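For example, once the client environment variables from Chapter 2 are set, the token and catalog services can be exercised directly with the Icehouse-era keystone client:

keystone token-get    # returns a scoped token along with its expiry and tenant
keystone catalog      # lists the endpoints of all registered services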
1.2.4.1 Components of Identity Service

Endpoints - Every OpenStack service (Nova, Swift, Glance) runs on a dedicated port and on a dedicated URL (host); we call these endpoints.
Regions - A region defines a dedicated physical location inside a data center. In a typical cloud setup, most if not all services are distributed across data centers/servers, which are also called regions.
User - A Keystone-authenticated user.
Services - Each component that is being connected to or administered via Keystone can be called a service. For example, we can call Glance a Keystone service.
Role - In order to maintain restrictions on what a particular user can do inside the cloud infrastructure, it is important to have a role associated with each user.
Tenant - A tenant is a project, with service endpoints and roles associated with the users who are members of that particular tenant.

1.2.5 OpenStack Network Service (Neutron)

Neutron is OpenStack's networking component. It is responsible for the following:

Providing an API for users to configure networking
Providing an API for OpenStack compute services
Providing L2/L3 and security services for OpenStack instances

Neutron is highly modular and its functionality is distributed among several independent components. Each component except the neutron-server is pluggable and can be replaced with another that offers the same service.

1.2.5.1 Neutron Server

The neutron-server is the process that exposes the Neutron API to users and other OpenStack services. The neutron-server is also responsible for storing the network state and user configuration in the database for use by the other services.
1.2.5.2 Neutron OpenvSwitch Agent

The neutron-openvswitch-agent runs on every compute node and on the network node, and is responsible for providing L2 connectivity between instances running on various compute servers and the agents that offer other services. The neutron-openvswitch-agent accepts user configuration from Neutron's config files and overlays a virtual network atop the physical (data) network.
1.2.5.3 Neutron DHCP Agent

The neutron-dhcp-agent runs on the network node and is responsible for offering IP configuration to OpenStack instances. The neutron-dhcp-agent configures a dnsmasq instance for every subnet created in Neutron, obeying the subnet parameters such as Allocation Pools and DNS Name Servers. Each dnsmasq instance is configured to bind to a port set up by the L2 agent on the network node and offer DHCP lease addresses to the instances in its subnet.
1.2.5.4 Neutron L3 Agent

The neutron-l3-agent runs on the network node and is responsible for L3 connectivity, FloatingIP provisioning and enforcing security groups. The neutron-l3-agent relies on the network node's IP forwarding capabilities to perform routing, and on iptables rules to provision FloatingIPs and enforce security groups. Both the L3 agent and the DHCP agent make use of Linux network namespaces, so that tenants are allowed to use overlapping subnet CIDR ranges.
1.2.5.5 Neutron Metadata Agent

The neutron-metadata-agent provides instance metadata for all OpenStack instances. Instances can pull the metadata using the URL http://169.254.169.254:80 and use it for early initialization, for example to:
Update the instance's ssh authorized keys for passwordless ssh login
Create default users
Refresh the package cache and update the system, etc.
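For instance, from inside a running instance the metadata can be fetched over plain HTTP; the EC2-style paths below are the commonly used ones:

curl http://169.254.169.254/latest/meta-data/
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key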

1.2.6 OpenStack Volume Service (Cinder)

Volume services are used for management of LVM-based instance volumes. Volume services perform volume related functions
such as creation, deletion, attaching a volume to an instance, and detaching a volume from an instance. Volumes provide a way
of providing persistent storage for the instances, as the root partition is non-persistent and any changes made to it are lost when
an instance is terminated. When a volume is detached from an instance or when an instance, to which the volume is attached, is
terminated, it retains the data that was stored on it. This data can be accessed by re-attaching the volume to the same instance or
by attaching it to other instances.
Critical data in an instance must always be written to a volume, so that it can be accessed later. This typically applies to the
storage needs of database servers etc.
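As an illustration of this workflow, a volume can be moved between instances with the nova client; the instance name and volume ID below are hypothetical placeholders:

nova volume-attach myInstance <volume-id> /dev/vdb    # the volume appears as /dev/vdb inside the guest
nova volume-detach myInstance <volume-id>             # the data on the volume is preserved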
Cinder services include
Cinder API
Cinder Scheduler
Cinder Volume

1.2.6.1 Cinder API

Cinder API provides an interface for the external world and other services, such as Nova, to interact with the Cinder services.
1.2.6.2 Cinder Scheduler

When there are two or more machines running the Cinder Volume service, the scheduler decides on which machine a new volume is to be created.
1.2.6.3 Cinder Volume

In a default setup, Cinder Volume interacts with the Logical Volume Manager (LVM) to create, delete and perform other volume-related functions.

1.2.7 OpenStack Administrative Web-Interface (Horizon)

Horizon is the web-based dashboard that can be used to manage/administer OpenStack services. It can be used to manage instances and images, create key pairs, attach volumes to instances, manipulate Swift containers, etc. Apart from this, the dashboard gives the user access to the instance console and can connect to an instance through VNC. Overall, Horizon features the following:
Instance Management - Create or terminate instances, view console logs, connect through VNC, attach volumes, etc.
Access and Security Management - Create security groups, manage key pairs, assign floating IPs, etc.
Flavor Management - Manage different flavors or instance virtual hardware templates.
Image Management - Edit or delete images.
View service catalog.
Manage users, quotas and usage for projects.
User Management - Create users, etc.
Volume Management - Create volumes and snapshots.
Object Store Manipulation - Create and delete containers and objects.
Download environment variables for a project.

1.2.8 Message Queue (RabbitMQ Server)

The OpenStack Cloud Controller communicates with other Nova components such as the Scheduler, Network Controller and Volume Controller via AMQP (Advanced Message Queuing Protocol). Nova uses asynchronous calls for request/response, with a callback that gets triggered once a response is received. Since asynchronous communication is used, none of the user actions get stuck for long in a waiting state. This is especially important since many actions triggered by API calls, such as launching an instance or uploading an image, are time consuming.

1.2.9 SQL Backend (MySQL)

OpenStack services persist their state and configuration in a database. You can use MySQL, PostgreSQL or SQLite as your database server. Depending upon your choice of database, you will need to install the necessary packages and configure the database server. In this guide we will use MySQL.

1.2.10 Software Switch (OpenvSwitch)

OpenvSwitch is a production-quality, multilayer software switch. In OpenStack, OpenvSwitch is used to create virtual (software) switches with connected ports to which the VMs attach. OpenvSwitch is configured by Neutron in such a way as to ensure that VMs have connectivity with each other and with the external networks.
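Once Neutron has configured the switches (see Chapter 2), the resulting layout can be inspected on any node, for example:

ovs-vsctl show              # lists bridges such as br-int, br-eth1, br-ex and their ports
ovs-vsctl list-ports br-int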


Chapter 2

Installation and Configuration

2.1 Introduction

The following section describes how to set up a minimal cloud infrastructure based on OpenStack using 3 machines. These
machines are referred to in this and subsequent chapters as Server1, Server2 and Client1. Server1 runs all the components of
Nova, Glance, Swift, Keystone, Neutron, Cinder and Horizon (OpenStack Dashboard). Server2 runs only nova-compute and
neutron-openvswitch-agent. Since OpenStack components follow a shared-nothing policy, each component or any group of
components can be installed on any server.
Client1 is not a required component. In our sample setup, it is used for bundling images, as a client to the web interface and to run OpenStack commands that manage the infrastructure. Having this client ensures that you do not need to meddle with the servers for tasks such as bundling. Also, bundling desktop systems, including Windows, requires a GUI, and it is better to have a dedicated machine for this purpose. We recommend that this machine be VT-enabled so that KVM can be run, which allows launching VMs during image creation for bundling.
The installation steps use certain specifics such as host names/IP addresses etc. Modify them to suit your environment before
using them. The following table summarizes these specifics.

                     Server1                   Server2                   Client1
Functionality        All components of         nova-compute and          Client tools
                     OpenStack including       neutron-plugin-
                     nova-compute              openvswitch-agent
Network Interfaces   eth0 - Management N/W,    eth0 - Management N/W,    eth0 - Management N/W
                     eth1 - Data N/W,          eth1 - Data N/W
                     eth2 - External N/W
IP addresses         eth0 - 10.10.10.2,        eth0 - 10.10.10.3,        eth0 - 10.10.10.4
                     eth1 - 192.168.3.2,       eth1 - 192.168.3.3
                     eth2 - 192.168.100.2
Hostname             server1.example.com       server2.example.com       client.example.com
DNS servers          8.8.8.8                   8.8.8.8                   8.8.8.8
Gateway IP           10.10.10.1                10.10.10.1                10.10.10.1

Table 2.1: Configuration

2.2 Server1

Server1 contains all the OpenStack services: the Nova services including nova-compute, along with Cinder, Neutron, Glance, Swift, Keystone and Horizon. It contains three network interface cards (NICs), as summarized in Table 2.1.

2.2.1 Base OS

Install the 64-bit version of Ubuntu Server 14.04, keeping the following configuration in mind.
Create the first user with the name localadmin.
The installation lets you set up the IP address for the first interface, i.e. eth0. Set the IP address details.
During installation, select only OpenSSH server in the packages menu.
We will also be running cinder-volume on this server, and it is ideal to have a dedicated partition for its use. So, ensure that you choose a manual partitioning scheme while installing Ubuntu Server and create a dedicated partition with an adequate amount of space for this purpose. We have referred to this partition in the rest of the chapter as /dev/sda6. Substitute the correct device name of this dedicated partition based on your local setup while following the instructions. Also ensure that the partition type is set as Linux LVM (8e) using fdisk, either during install or immediately after installation is over. If you also plan to use a dedicated partition as the Swift backend, create another partition for this purpose and follow the instructions in the "Swift Installation" section below.
Update the machine using the following commands.


apt-get update
apt-get upgrade

Install vlan and bridge-utils


apt-get install vlan bridge-utils

Edit the following lines in the file /etc/sysctl.conf


net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Reboot the server and log in as the admin user (localadmin) created during the OS installation.

2.2.2 Network Configuration

Edit the /etc/network/interfaces file so that it looks like this:


auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.10.10.2
netmask 255.255.255.0
broadcast 10.10.10.255
gateway 10.10.10.1
dns-nameservers 10.10.8.3
auto eth1
iface eth1 inet static
address 192.168.3.2
netmask 255.255.255.0
network 192.168.3.0
broadcast 192.168.3.255
auto eth2
iface eth2 inet static
address 192.168.100.2
netmask 255.255.255.0
network 192.168.100.0
broadcast 192.168.100.255

Restart the network now


/etc/init.d/networking restart

2.2.3 NTP Server

Install the NTP package. This server is going to act as an NTP server for the nodes. The time on all components of OpenStack will have to be in sync. We run the NTP server on this machine and have all the other components sync to it.
apt-get install ntp

Open the file /etc/ntp.conf and add the following lines to make sure that the time of the server is in sync with an external server
and in case Internet connectivity is down, NTP server uses its own hardware clock as the fallback.

server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10

Restart NTP service to make the changes effective


/etc/init.d/ntp restart
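To verify that the server is syncing, query its peer list; ntp.ubuntu.com and the local clock (127.127.1.0) should show up as peers:

ntpq -p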

2.2.4 RabbitMQ Server

Install RabbitMQ Server


apt-get install rabbitmq-server

2.2.5 MySQL

2.2.5.1 Installation

Install mysql-server and python-mysqldb package


apt-get install mysql-server python-mysqldb

Create the root password for MySQL. The password used here is "mygreatsecret".
Change the bind address from 127.0.0.1 to 0.0.0.0 in /etc/mysql/my.cnf so that it looks like this:
bind-address = 0.0.0.0

Change the following lines under "[mysqld]"


collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Restart MySQL server to ensure that it starts listening on all interfaces.


restart mysql
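To confirm that MySQL is now listening on all interfaces rather than only on localhost, a quick check:

netstat -ntlp | grep 3306    # should show mysqld bound to 0.0.0.0:3306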

2.2.5.2 Creating Databases

Create the MySQL databases to be used with Nova, Glance, Keystone, Neutron and Cinder.


Create a database named nova.
mysql -uroot -pmygreatsecret -e 'CREATE DATABASE nova;'

Create a user named novadbadmin.


mysql -uroot -pmygreatsecret -e 'CREATE USER novadbadmin;'

Grant all privileges for novadbadmin on the database "nova".


mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%';"

Create a password for the user "novadbadmin".


mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'novadbadmin'@'%' = PASSWORD('novasecret');"

Create a database named glance.


mysql -uroot -pmygreatsecret -e 'CREATE DATABASE glance;'

Create a user named glancedbadmin.


mysql -uroot -pmygreatsecret -e 'CREATE USER glancedbadmin;'

Grant all privileges for glancedbadmin on the database "glance".


mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'%';"

Create a password for the user "glancedbadmin".


mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'glancedbadmin'@'%' = PASSWORD('glancesecret');"

Create a database named keystone.


mysql -uroot -pmygreatsecret -e 'CREATE DATABASE keystone;'

Create a user named keystonedbadmin.


mysql -uroot -pmygreatsecret -e 'CREATE USER keystonedbadmin;'

Grant all privileges for keystonedbadmin on the database "keystone".


mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystonedbadmin'@'%';"

Create a password for the user "keystonedbadmin".


mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'keystonedbadmin'@'%' = PASSWORD('keystonesecret');"

Create a database named neutron.


mysql -uroot -pmygreatsecret -e 'CREATE DATABASE neutron;'

Create a user named neutrondbadmin.


mysql -uroot -pmygreatsecret -e 'CREATE USER neutrondbadmin;'

Grant all privileges for neutrondbadmin on the database "neutron".


mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutrondbadmin'@'%';"

Create a password for the user "neutrondbadmin".


mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'neutrondbadmin'@'%' = PASSWORD('neutronsecret');"

Create a database named cinder.


mysql -uroot -pmygreatsecret -e 'CREATE DATABASE cinder;'

Create a user named cinderdbadmin.

mysql -uroot -pmygreatsecret -e 'CREATE USER cinderdbadmin;'

Grant all privileges for cinderdbadmin on the database "cinder".


mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinderdbadmin'@'%';"

Create a password for the user "cinderdbadmin".


mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'cinderdbadmin'@'%' = PASSWORD('cindersecret');"
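To spot-check a database and its grants, log in as one of the newly created users; for example (the table list will be empty until the corresponding db_sync command is run later):

mysql -unovadbadmin -pnovasecret -h10.10.10.2 nova -e 'SHOW TABLES;'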

2.2.6 Keystone

Keystone is the identity service used by OpenStack. Install Keystone using the following command.
apt-get install keystone python-keystone python-keystoneclient

Open /etc/keystone/keystone.conf, uncomment and change the line


#admin_token = ADMIN

so that it looks like


admin_token = admin

(We have used admin as the token in this book.)


Since a MySQL database is used to store the Keystone data, edit the following line in /etc/keystone/keystone.conf from
connection = sqlite:////var/lib/keystone/keystone.db

to
connection = mysql://keystonedbadmin:keystonesecret@10.10.10.2/keystone

Restart Keystone by issuing this command:


service keystone restart

To synchronize the database, execute the following command:


keystone-manage db_sync

Now we need to export some environment variables which are required frequently while working with OpenStack.
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=admin

You can also add these variables to ~/.bashrc, so that you do not have to export them every time.
2.2.6.1 Creating Tenants

Create the tenants by executing the following commands. In this case, we are creating two tenants - admin and service.
keystone tenant-create --name admin
keystone tenant-create --name service
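To verify, list the tenants; both admin and service should appear with their generated IDs:

keystone tenant-list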

2.2.6.2 Creating Users

Create the users by executing the following commands. In this case, we are creating six users - admin, nova, glance, swift, neutron and cinder.
keystone user-create --name admin --pass admin --email admin@foobar.com
keystone user-create --name nova --pass nova --email nova@foobar.com
keystone user-create --name glance --pass glance --email glance@foobar.com
keystone user-create --name swift --pass swift --email swift@foobar.com
keystone user-create --name=neutron --pass=neutron --email=neutron@foobar.com
keystone user-create --name=cinder --pass=cinder --email=cinder@foobar.com

2.2.6.3 Creating Roles

Create the roles by executing the following commands. In this case, we are creating two roles - admin and Member.
keystone role-create --name admin
keystone role-create --name Member
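At this point the users and roles just created can be verified with:

keystone user-list
keystone role-list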

2.2.6.4 Adding Roles to Users in Tenants

Now we add roles to the users that have been created. A role to a specific user in a specific tenant can be assigned with the
following command:
To add a role of admin to the user admin of the tenant admin.
keystone user-role-add --user=admin --tenant=admin --role=admin

A few more examples are as follows:


keystone user-role-add --user nova --role admin --tenant service
keystone user-role-add --user glance --role admin --tenant service
keystone user-role-add --user swift --role admin --tenant service
keystone user-role-add --user neutron --role admin --tenant service
keystone user-role-add --user cinder --role admin --tenant service

2.2.6.5 Creating Services

Now we need to create the required services which the users can authenticate with. nova, glance, swift, keystone, neutron and
cinder are some of the services that we create.
keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'
keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
keystone service-create --name glance --type image --description 'OpenStack Image Service'
keystone service-create --name swift --type object-store --description 'OpenStack Storage Service'
keystone service-create --name keystone --type identity --description 'OpenStack Identity Service'
keystone service-create --name neutron --type network --description 'OpenStack Networking Service'

2.2.6.6 Creating Endpoints

Create endpoints for each of the services that have been created above.
For creating an endpoint for nova-compute, execute the following command
keystone endpoint-create --service=nova --publicurl=http://10.10.10.2:8774/v2/%\(tenant_id\)s --internalurl=http://10.10.10.2:8774/v2/%\(tenant_id\)s --adminurl=http://10.10.10.2:8774/v2/%\(tenant_id\)s

For creating endpoints for Cinder, execute the following commands:

keystone endpoint-create --service=cinder --publicurl=http://10.10.10.2:8776/v1/%\(tenant_id\)s --internalurl=http://10.10.10.2:8776/v1/%\(tenant_id\)s --adminurl=http://10.10.10.2:8776/v1/%\(tenant_id\)s
keystone endpoint-create --service=cinderv2 --publicurl=http://10.10.10.2:8776/v2/%\(tenant_id\)s --internalurl=http://10.10.10.2:8776/v2/%\(tenant_id\)s --adminurl=http://10.10.10.2:8776/v2/%\(tenant_id\)s

For creating an endpoint for glance, execute the following command


keystone endpoint-create --service=glance --publicurl=http://10.10.10.2:9292 --adminurl=http://10.10.10.2:9292 --internalurl=http://10.10.10.2:9292

For creating an endpoint for swift, execute the following command:


keystone endpoint-create --service=swift --publicurl='http://10.10.10.2:8080/v1/AUTH_%(tenant_id)s' --internalurl='http://10.10.10.2:8080/v1/AUTH_%(tenant_id)s' --adminurl='http://10.10.10.2:8080/v1'

For creating an endpoint for keystone, execute the following command


keystone endpoint-create --service=keystone --publicurl http://10.10.10.2:5000/v2.0 --internalurl http://10.10.10.2:5000/v2.0 --adminurl http://10.10.10.2:35357/v2.0

For creating an endpoint for neutron, execute the following command


keystone endpoint-create --service=neutron --publicurl http://10.10.10.2:9696 --internalurl http://10.10.10.2:9696 --adminurl http://10.10.10.2:9696
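Once all endpoints are created, they can be verified with:

keystone service-list
keystone endpoint-list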

2.2.7 Glance

Install glance using the following command


apt-get install glance

2.2.7.1 Glance Configuration

Glance uses SQLite by default. MySQL and PostgreSQL can also be configured to work with Glance.
Open /etc/glance/glance-api.conf and make the following changes.
Edit the line which contains the option "sqlite_db=" to this:
connection = mysql://glancedbadmin:glancesecret@10.10.10.2/glance

Edit the authentication lines: admin_tenant_name will be service, admin_user will be glance and admin_password is glance. After editing, the lines should be as follows:


admin_tenant_name = service
admin_user = glance
admin_password = glance

In order to tell Glance to use Keystone for authentication, set the following line in the "[paste_deploy]" section:
flavor = keystone

Open /etc/glance/glance-registry.conf and make the following changes.


Edit the line which contains the option "sqlite_db=" to this
connection = mysql://glancedbadmin:glancesecret@10.10.10.2/glance

Edit the authentication lines: admin_tenant_name will be service, admin_user will be glance and admin_password is glance.
After editing, the lines should be as follows:
admin_tenant_name = service
admin_user = glance
admin_password = glance

In order to tell Glance to use Keystone for authentication, set the following line in the "[paste_deploy]" section:
flavor = keystone

Create the Glance schema in the MySQL database:


glance-manage db_sync

Restart glance-api and glance-registry after making the above changes.


restart glance-api
restart glance-registry

Export the following environment variables.


export SERVICE_TOKEN=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL="http://localhost:5000/v2.0/"
export SERVICE_ENDPOINT=http://localhost:35357/v2.0

Alternatively, you can add these variables to ~/.bashrc.


To test if Glance is set up correctly, execute the following commands. The first command downloads a pre-built image from the web and uploads it to Glance:
glance image-create --name Cirros --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance index

We can also build our own images and upload them to Glance. This is explained in detail in the "Image Management" chapter.

2.2.8 Nova

Install nova using the following commands:


apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient nova-compute nova-console

2.2.8.1 Nova Configuration

Edit the /etc/nova/nova.conf file to look like this.


[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 10.10.10.2
my_ip = 10.10.10.2
vncserver_listen = 10.10.10.2
vncserver_proxyclient_address = 10.10.10.2
novncproxy_base_url=http://10.10.10.2:6080/vnc_auto.html
glance_host = 10.10.10.2
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.10.10.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_metadata_proxy_shared_secret=openstack
neutron_admin_auth_url=http://10.10.10.2:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
[database]
connection = mysql://novadbadmin:novasecret@10.10.10.2/nova
[keystone_authtoken]
auth_uri = http://10.10.10.2:5000
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova

Create a schema for nova in the database


nova-manage db sync


Restart nova services


service nova-api restart; service nova-cert restart; service nova-consoleauth restart; service nova-scheduler restart; service nova-conductor restart; service nova-novncproxy restart; service nova-compute restart; service nova-console restart

To test if nova is setup correctly, execute the following command.


nova-manage service list

The output should be like


Binary             Host      Zone      Status    State   Updated_At
nova-consoleauth   server1   internal  enabled   :-)     2014-04-19 08:55:13
nova-conductor     server1   internal  enabled   :-)     2014-04-19 08:55:14
nova-cert          server1   internal  enabled   :-)     2014-04-19 08:55:13
nova-scheduler     server1   internal  enabled   :-)     2014-04-19 08:55:13
nova-compute       server1   nova      enabled   :-)     2014-04-19 08:55:14
nova-console       server1   internal  enabled   :-)     2014-04-19 08:55:14

If you see something like the above with all components happy, it means that the set up is ready to be used.

2.2.9 Neutron

Neutron is the networking service of OpenStack. Let's see the steps involved in installing Neutron alongside the other components.
Install the Neutron services using the following command:
apt-get install -y neutron-server neutron-plugin-openvswitch neutron-plugin-openvswitch-agent neutron-common neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent openvswitch-switch

2.2.9.1 Neutron Configuration

Remove all the lines from /etc/neutron/neutron.conf and add the following lines
[DEFAULT]
core_plugin = ml2
notification_driver=neutron.openstack.common.notifier.rpc_notifier
verbose=True
rabbit_host=10.10.10.2
rpc_backend=neutron.openstack.common.rpc.impl_kombu
service_plugins=router
allow_overlapping_ips=True
auth_strategy=keystone
neutron_metadata_proxy_shared_secret=openstack
service_neutron_metadata_proxy=True
notify_nova_on_port_data_changes=True
notify_nova_on_port_status_changes=True
nova_admin_auth_url=http://10.10.10.2:35357/v2.0
nova_admin_tenant_id=service
nova_url=http://10.10.10.2:8774/v2
nova_admin_username=nova
nova_admin_password=nova

[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357

auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
signing_dir = $state_path/keystone-signing
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.10.2
rabbit_port = 5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.10.10.2:8774
nova_admin_username = nova
nova_admin_tenant_id =
nova_admin_password = nova
nova_admin_auth_url = http://10.10.10.2:35357/v2.0
[database]
connection = mysql://neutrondbadmin:neutronsecret@10.10.10.2/neutron
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Open /etc/neutron/plugins/ml2/ml2_conf.ini and make the following changes


[ml2]
type_drivers=flat,vlan
tenant_network_types=vlan,flat
mechanism_drivers=openvswitch
[ml2_type_flat]
flat_networks=External
[ml2_type_vlan]
network_vlan_ranges=Intnet1:100:200
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True
[ovs]
bridge_mappings=External:br-ex,Intnet1:br-eth1

We have created two physical networks, one as a flat network and the other as a VLAN network with VLAN IDs ranging from 100 to 200. We have mapped the External network to br-ex and Intnet1 to br-eth1. Now create the bridges:

ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl add-port br-ex eth2

Edit /etc/neutron/metadata_agent.ini to look like this


[DEFAULT]
auth_url = http://10.10.10.2:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
metadata_proxy_shared_secret = openstack

Edit /etc/neutron/dhcp_agent.ini to look like this


[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

Edit /etc/neutron/l3_agent.ini to look like this


[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

Restart all Neutron services


service neutron-server restart; service neutron-plugin-openvswitch-agent restart; service neutron-metadata-agent restart; service neutron-dhcp-agent restart; service neutron-l3-agent restart

Check if the services are running. Run the following command


neutron agent-list

The output should be like this


+--------------------------------------+--------------------+--------+-------+----------------+
| id                                   | agent_type         | host   | alive | admin_state_up |
+--------------------------------------+--------------------+--------+-------+----------------+
| 01a5e70c-324a-4183-9652-6cc0e5c98499 | Metadata agent     | ubuntu | :-)   | True           |
| 17b9440b-50eb-48b7-80a8-a5bbabc47805 | DHCP agent         | ubuntu | :-)   | True           |
| c30869f2-aaca-4118-829d-a28c63a27aa4 | L3 agent           | ubuntu | :-)   | True           |
| f846440e-4ca6-4120-abe1-ffddaf1ab555 | Open vSwitch agent | ubuntu | :-)   | True           |
+--------------------------------------+--------------------+--------+-------+----------------+

To know more about managing Neutron, refer to the Network Management chapter.

2.2.10 Cinder

Cinder is the volume service of OpenStack. In this section, let's see how to install Cinder alongside the other OpenStack components.
Install the Cinder services using the following command:
apt-get install cinder-api cinder-scheduler cinder-volume lvm2

2.2.10.1 Cinder Configuration

Remove all the lines from the file /etc/cinder/cinder.conf and add the following lines
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.10.2

rabbit_port = 5672
rabbit_userid = guest
glance_host = 10.10.10.2
[database]
connection = mysql://cinderdbadmin:cindersecret@10.10.10.2/cinder
[keystone_authtoken]
auth_uri = http://10.10.10.2:5000
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = cinder

Create the Cinder schema in the MySQL database:


cinder-manage db sync

Cinder needs an LVM volume group named "cinder-volumes". To create it, use the following commands.
Create a physical volume on the dedicated partition /dev/sdb:
pvcreate /dev/sdb

Create volume group named cinder-volumes


vgcreate cinder-volumes /dev/sdb
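To confirm that the volume group was created, you can list it with vgs (part of the lvm2 package installed above):

vgs cinder-volumes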

Restart all the Cinder services


service cinder-scheduler restart; service cinder-api restart; service cinder-volume restart; service tgt restart

Now to check the Cinder setup, use the following command


cinder-manage service list

The output should be like


Binary            Host    Zone  Status   State  Updated At
cinder-scheduler  ubuntu  nova  enabled  :-)    2014-10-11 11:48:47
cinder-volume     ubuntu  nova  enabled  :-)    2014-10-11 11:48:38

Try creating a new volume


cinder create --display-name myVolume 1

List the volume you created now


cinder list

The output should be like


+---------------+-----------+--------------+------+-------------+----------+-------------+
|       ID      |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+---------------+-----------+--------------+------+-------------+----------+-------------+
|xxx-xxx-xxx-xxx| available |   myVolume   |  1   |     None    |  false   |             |
+---------------+-----------+--------------+------+-------------+----------+-------------+

To know more about managing volumes, refer to the Storage Management chapter.

2.2.11 Swift

2.2.11.1 Swift Installation

The primary components are the proxy, account, container and object servers.
apt-get install swift swift-proxy swift-account swift-container swift-object memcached

Other components that might be needed are xfsprogs (for dealing with the XFS filesystem), python-pastedeploy (for keystone access) and curl (to test swift).
apt-get install xfsprogs curl python-pastedeploy

2.2.11.2 Swift Storage Backends

There are two methods one can use to create/prepare the storage backend. One is to use an existing partition/volume as the storage device; the other is to create a loopback file and use it as the storage device. Use the appropriate method as per your setup.
2.2.11.2.1 Partition as a storage device

If you have a physical partition (e.g. /dev/sdb3), all you have to do is format it as an XFS filesystem and use it as the backend; if necessary, first create the partition using fdisk or parted. You need to specify the mount point in /etc/fstab.
CAUTION: Replace /dev/sdb with your appropriate device. We assume that there is an unused/unformatted partition available on /dev/sdb.
fdisk /dev/sdb
Type n for new partition
Type e for extended partition
Choose appropriate partition number ( or go with the default )
Choose first and last sectors to set the hard disk size (or go with defaults)
Note that 83 is the partition type number for Linux
Type w to write changes to the disk

This would have created a partition (something like /dev/sdb3) that we can now format as XFS. Run fdisk -l in the terminal to view and verify the partition table, and make sure that the partition you want to use is listed there. Formatting works only if you have xfsprogs installed.
mkfs.xfs -i size=1024 /dev/sdb3

mkfs.xfs prints the filesystem geometry, including the inode size (isize=1024), so you can verify the settings directly from its output.

Create a directory /mnt/swift_backend that can be used as a mount point for the partition that we created.
mkdir /mnt/swift_backend

Now edit /etc/fstab and append the following line to mount the partition automatically every time the system restarts.
/dev/sdb3 /mnt/swift_backend xfs noatime,nodiratime,nobarrier,logbufs=8 0 0

2.2.11.2.2 Loopback File as a storage device

We create a zero-filled file for use as a loopback device for the Swift storage backend. Here we use the disk copy command to create a file named swift-disk and allocate a million 1 KiB blocks (976.56 MiB) to it, so we have a loopback disk of approximately 1 GiB. We can increase this size by modifying the seek value. The disk is then formatted as XFS. The file command can be used to verify if it worked.

dd if=/dev/zero of=/srv/swift-disk bs=1024 count=0 seek=1000000
mkfs.xfs -i size=1024 /srv/swift-disk
file /srv/swift-disk
/srv/swift-disk: SGI XFS filesystem data (blksz 4096, inosz 1024, v2 dirs)

Create a directory /mnt/swift_backend that can be used as a mount point for the loopback file that we created.
mkdir /mnt/swift_backend

Make it mount on boot by appending this to /etc/fstab.


/srv/swift-disk /mnt/swift_backend xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0
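With the fstab entry in place, the loopback file can be mounted immediately without a reboot; mount looks up the options from /etc/fstab when given only the mount point:

mount /mnt/swift_backend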

2.2.11.2.3 Using the backend

Now mount the backend that will be used, create some nodes to be used as storage devices and set their ownership to the swift user and group.
mount /dev/sdb3 /mnt/swift_backend
pushd /mnt/swift_backend
mkdir node1 node2 node3 node4
popd
chown swift:swift /mnt/swift_backend/*
for i in {1..4}; do sudo ln -s /mnt/swift_backend/node$i /srv/node$i; done;
mkdir -p /etc/swift/account-server /etc/swift/container-server /etc/swift/object-server /srv/node1/device /srv/node2/device /srv/node3/device /srv/node4/device
chown -R swift:swift /etc/swift /srv/node[1-4]/

Append the following lines in /etc/rc.local, just before "exit 0". They will be run every time the system starts.
mkdir /run/swift
chown swift:swift /run/swift

2.2.11.3 Configure Rsync

Rsync is responsible for maintaining object replicas. It is used by various Swift services to maintain consistency of objects and perform update operations. It is configured for all the storage nodes.
Set RSYNC_ENABLE=true in /etc/default/rsync.
Modify /etc/rsyncd.conf as follows:
# General stuff
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1
# Account Server replication settings
[account6012]
max connections = 25
path = /srv/node1/
read only = false
lock file = /var/lock/account6012.lock
[account6022]
max connections = 25


path = /srv/node2/
read only = false
lock file = /var/lock/account6022.lock
[account6032]
max connections = 25
path = /srv/node3/
read only = false
lock file = /var/lock/account6032.lock
[account6042]
max connections = 25
path = /srv/node4/
read only = false
lock file = /var/lock/account6042.lock
# Container server replication settings
[container6011]
max connections = 25
path = /srv/node1/
read only = false
lock file = /var/lock/container6011.lock
[container6021]
max connections = 25
path = /srv/node2/
read only = false
lock file = /var/lock/container6021.lock
[container6031]
max connections = 25
path = /srv/node3/
read only = false
lock file = /var/lock/container6031.lock
[container6041]
max connections = 25
path = /srv/node4/
read only = false
lock file = /var/lock/container6041.lock
# Object Server replication settings
[object6010]
max connections = 25
path = /srv/node1/
read only = false
lock file = /var/lock/object6010.lock
[object6020]
max connections = 25
path = /srv/node2/
read only = false
lock file = /var/lock/object6020.lock
[object6030]
max connections = 25
path = /srv/node3/
read only = false
lock file = /var/lock/object6030.lock


[object6040]
max connections = 25
path = /srv/node4/
read only = false
lock file = /var/lock/object6040.lock

Restart rsync.
service rsync restart

2.2.11.4 Configure Swift Components

General server configuration options can be found at http://swift.openstack.org/deployment_guide.html. If the swift-doc package is installed, the documentation can also be viewed in the /usr/share/doc/swift-doc/html directory. Swift uses Python's paste.deploy to manage configuration. Default configuration options are set in the [DEFAULT] section, and any options specified there can be overridden in any of the other sections, but only by using the syntax set option_name = value.
Here is a sample paste.deploy configuration for reference:
[DEFAULT]
name1 = globalvalue
name2 = globalvalue
name3 = globalvalue
set name4 = globalvalue
[pipeline:main]
pipeline = myapp
[app:myapp]
use = egg:mypkg#myapp
name2 = localvalue
set name3 = localvalue
set name5 = localvalue
name6 = localvalue

Create and edit /etc/swift/swift.conf and add the following lines to it:
[swift-hash]
# random unique string that can never change (DO NOT LOSE). We are using 03c9f48da2229770.
# The following command can be used to generate a random string:
# od -t x8 -N 8 -A n < /dev/random
swift_hash_path_suffix = 03c9f48da2229770

You will need this random string when you add more nodes to the setup, so never lose it.
You can generate a random string by running the following command:
od -t x8 -N 8 -A n < /dev/random

2.2.11.4.1 Configure Swift Proxy Server

The proxy server acts as the gatekeeper to Swift. It takes the responsibility of authenticating the user. Authentication verifies that a request actually comes from who it says it does. Authorization verifies who has access to the resource(s) the request wants; it is done by identity services like Keystone. Create and edit /etc/swift/proxy-server.conf and add the following lines.
[DEFAULT]
bind_port = 8080
user = swift
[pipeline:main]


pipeline = healthcheck cache authtoken keystoneauth proxy-server


[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true
# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = 10.10.10.2
auth_port = 35357
# the service tenant and swift username and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = swift
[filter:cache]
use = egg:swift#memcache
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck

Note: You can find sample configuration files in the "etc" directory of the source tree. Some documentation can be found under /usr/share/doc/swift-doc/html if you installed the swift-doc package using apt-get.
2.2.11.4.2 Configure Swift Account Server

The default swift account server configuration is /etc/swift/account-server.conf.


[DEFAULT]
bind_ip = 0.0.0.0
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]

Additional account server configuration files are looked up under /etc/swift/account-server/. Here we can create several account server configuration files, each of which corresponds to a device under /srv. The files can be named 1.conf, 2.conf and so on. Here are the contents of /etc/swift/account-server/1.conf:
[DEFAULT]
devices = /srv/node1
mount_check = false
bind_port = 6012
user = swift
log_facility = LOG_LOCAL2

[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
vm_test_mode = no
[account-auditor]
[account-reaper]

For the other devices, (/srv/node2, /srv/node3, /srv/node4), we create 2.conf, 3.conf and 4.conf. So we make three more copies of
1.conf and set unique bind ports for the rest of the nodes (6022, 6032 and 6042) and different local log values (LOG_LOCAL3,
LOG_LOCAL4, LOG_LOCAL5).
2.2.11.4.3 Configure Swift Container Server

The default swift container server configuration is /etc/swift/container-server.conf.


[DEFAULT]
bind_ip = 0.0.0.0
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]

Additional container server configuration files are looked up under /etc/swift/container-server/. Here we can create several container server configuration files, each of which corresponds to a device under /srv. The files can be named 1.conf, 2.conf and so on. Here are the contents of /etc/swift/container-server/1.conf:
[DEFAULT]
devices = /srv/node1
mount_check = false
bind_port = 6011
user = swift
log_facility = LOG_LOCAL2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
vm_test_mode = no
[container-updater]
[container-auditor]


[container-sync]

For the other devices, (/srv/node2, /srv/node3, /srv/node4), we create 2.conf, 3.conf and 4.conf. So we make three more copies of
1.conf and set unique bind ports for the rest of the nodes (6021, 6031 and 6041) and different local log values (LOG_LOCAL3,
LOG_LOCAL4, LOG_LOCAL5).
2.2.11.4.4 Configure Swift Object Server

The default swift object server configuration is /etc/swift/object-server.conf.


[DEFAULT]
bind_ip = 0.0.0.0
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]

Additional object server configuration files are looked up under /etc/swift/object-server/. Here we can create several object server configuration files, each of which corresponds to a device under /srv. The files can be named 1.conf, 2.conf and so on. Here are the contents of /etc/swift/object-server/1.conf:
[DEFAULT]
devices = /srv/node1
mount_check = false
bind_port = 6010
user = swift
log_facility = LOG_LOCAL2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
vm_test_mode = no
[object-updater]
[object-auditor]

For the other devices, (/srv/node2, /srv/node3, /srv/node4), we create 2.conf, 3.conf and 4.conf. So we make three more copies of
1.conf and set unique bind ports for the rest of the nodes (6020, 6030 and 6040) and different local log values (LOG_LOCAL3,
LOG_LOCAL4, LOG_LOCAL5).
2.2.11.4.5 Configure Swift Rings

The ring is an important component of Swift. It maintains the information about the physical location of objects, their replicas and devices. We now create the ring builder files corresponding to the object, container and account services.
NOTE: We need to be in the /etc/swift directory when executing the following commands.

pushd /etc/swift
swift-ring-builder object.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder account.builder create 18 3 1

The numbers indicate the desired partition power, the number of replicas and the time in hours to restrict moving a partition more than once. Here, "create 18 3 1" means the ring will have 2^18 partitions, each object has 3 replicas, and a given partition can be moved at most once an hour. See the man page for swift-ring-builder for more information.
Now we add zones and balance the rings. The syntax is as follows:
swift-ring-builder <builder_file> add <zone>-<ip_address>:<port>/<device> <weight>

Execute the following commands to add the zones and rebalance the ring.
swift-ring-builder object.builder add z1-127.0.0.1:6010/device 1
swift-ring-builder object.builder add z2-127.0.0.1:6020/device 1
swift-ring-builder object.builder add z3-127.0.0.1:6030/device 1
swift-ring-builder object.builder add z4-127.0.0.1:6040/device 1
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder add z1-127.0.0.1:6011/device 1
swift-ring-builder container.builder add z2-127.0.0.1:6021/device 1
swift-ring-builder container.builder add z3-127.0.0.1:6031/device 1
swift-ring-builder container.builder add z4-127.0.0.1:6041/device 1
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder add z1-127.0.0.1:6012/device 1
swift-ring-builder account.builder add z2-127.0.0.1:6022/device 1
swift-ring-builder account.builder add z3-127.0.0.1:6032/device 1
swift-ring-builder account.builder add z4-127.0.0.1:6042/device 1
swift-ring-builder account.builder rebalance
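You can sanity-check a ring at this point; running swift-ring-builder with just a builder file name prints the ring's parameters and the devices that have been added to it:

swift-ring-builder object.builder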
2.2.11.5 Starting Swift services

To start swift and the REST API, run the following commands.
swift-init main start
swift-init rest start
service swift-proxy start

2.2.11.6 Testing Swift

Swift can be tested using the swift command line client or the dashboard web interface (Horizon). First, make sure that the ownership of the /etc/swift directory is set to swift:swift.
chown -R swift:swift /etc/swift

Then run the following command and verify if you get the appropriate account information. The number of containers and
objects stored within are displayed as well.
swift stat
Account: AUTH_8df4664c24bc40a4889fab4517e8e599
Containers: 0
Objects: 0
Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1411522928.10863
X-Trans-Id: txcb5b324224544f8892252-0054222170
X-Put-Timestamp: 1411522928.10863
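You can also push a small test object through the proxy to verify the store end to end; the container and file names below are illustrative:

echo "Hello Swift" > testfile
swift upload mycontainer testfile
swift list mycontainer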

2.2.12 OpenStack Dashboard

Install OpenStack Dashboard by executing the following command


apt-get install openstack-dashboard

Restart apache with the following command:


service apache2 restart

In a browser, enter the URL http://10.10.10.2/horizon (the server on which the dashboard is installed) and you should see the OpenStack Dashboard login prompt. Log in with the username admin and password admin. From the dashboard, you can create keypairs, create/edit security groups, launch new instances, attach volumes etc., as explained in the "OpenStack Dashboard" chapter.

2.3 Server2

This server runs the nova-compute service along with the Neutron Open vSwitch agent.

2.3.1 Base OS

Install the 64-bit version of Ubuntu Server 14.04, keeping the following configuration in mind:
Create the first user with the name localadmin.
The installation lets you set up the IP address for the first interface, i.e. eth0. Set the IP address details.
During installation, select only OpenSSH server in the packages menu.

2.3.2 Network Configuration

Install vlan and bridge-utils:


apt-get install vlan bridge-utils

Edit the following lines in the file /etc/sysctl.conf


net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Edit the /etc/network/interfaces file so that it looks like this:


auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.10.10.3
netmask 255.255.255.0
broadcast 10.10.10.255
gateway 10.10.10.1
dns-nameservers 10.10.10.3
auto eth1
iface eth1 inet static
address 192.168.3.3
netmask 255.255.255.0
network 192.168.3.0
broadcast 192.168.3.255

Restart the network now


/etc/init.d/networking restart

2.3.3 NTP Client

Install NTP package.


apt-get install ntp

Open the file /etc/ntp.conf and add the following line to sync to server1.
server 10.10.10.2

Restart NTP service to make the changes effective


/etc/init.d/ntp restart
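To verify that Server2 is syncing against server1, you can query the NTP peers (ntpq ships with the ntp package):

ntpq -p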

2.3.4 Nova Components (nova-compute alone)

Install the nova-components and dependencies.


apt-get install -y nova-common python-nova nova-compute vlan

Edit the /etc/nova/nova.conf file to look like this. This file is essentially the same as the configuration file (/etc/nova/nova.conf) on Server1, except for the VNC settings.
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 10.10.10.2
my_ip = 10.10.10.3
vncserver_listen = 10.10.10.3
vncserver_proxyclient_address = 10.10.10.3
novncproxy_base_url=http://10.10.10.2:6080/vnc_auto.html
glance_host = 10.10.10.2
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.10.10.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_metadata_proxy_shared_secret=openstack
neutron_admin_auth_url=http://10.10.10.2:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
[database]
connection = mysql://nova:nova_dbpass@10.10.10.2/nova
[keystone_authtoken]
auth_uri = http://10.10.10.2:5000
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass

Restart nova-compute on Server2.


service nova-compute restart

On Server1, check if the second compute node (Server2) is detected by running


nova-manage service list

If you see output similar to the following, it means that the setup is ready to be used.
localadmin@server1:~$ nova-manage service list
Binary           Host    Zone     Status  State Updated_At
nova-consoleauth server1 internal enabled :-)   2014-04-19 08:55:13
nova-conductor   server1 internal enabled :-)   2014-04-19 08:55:14
nova-cert        server1 internal enabled :-)   2014-04-19 08:55:13
nova-scheduler   server1 internal enabled :-)   2014-04-19 08:55:13
nova-compute     server1 nova     enabled :-)   2014-04-19 08:55:14
nova-console     server1 internal enabled :-)   2014-04-19 08:55:14
nova-compute     server2 nova     enabled :-)   2014-04-19 10:45:19

2.3.5 Neutron Components (Neutron Open vSwitch agent alone)

Neutron is the networking service of OpenStack. Let's see the steps involved in installing the Neutron Open vSwitch agent on this node.
Install the Neutron services using the following command
apt-get install -y neutron-plugin-openvswitch neutron-plugin-openvswitch-agent openvswitch-switch

2.3.5.1 Neutron Configuration

Remove all the lines from /etc/neutron/neutron.conf and add the following lines
[DEFAULT]
core_plugin = ml2
notification_driver=neutron.openstack.common.notifier.rpc_notifier
verbose=True
rabbit_host=10.10.10.2
rpc_backend=neutron.openstack.common.rpc.impl_kombu
service_plugins=router
allow_overlapping_ips=True
auth_strategy=keystone
neutron_metadata_proxy_shared_secret=openstack
service_neutron_metadata_proxy=True

nova_admin_password=nova_pass
notify_nova_on_port_data_changes=True
notify_nova_on_port_status_changes=True
nova_admin_auth_url=http://10.10.10.2:35357/v2.0
nova_admin_tenant_id=service
nova_url=http://10.10.10.2:8774/v2
nova_admin_username=nova

[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
signing_dir = $state_path/keystone-signing
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.10.2
rabbit_port = 5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.10.10.2:8774
nova_admin_username = nova
nova_admin_tenant_id =
nova_admin_password = nova
nova_admin_auth_url = http://10.10.10.2:35357/v2.0
[database]
connection = mysql://neutrondbadmin:neutronsecret@10.10.10.2/neutron
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Open /etc/neutron/plugins/ml2/ml2_conf.ini and make the following changes


[ml2]
type_drivers=flat,vlan
tenant_network_types=vlan,flat
mechanism_drivers=openvswitch
[ml2_type_flat]
flat_networks=External
[ml2_type_vlan]
network_vlan_ranges=Intnet1:100:200
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True
[ovs]
bridge_mappings=External:br-ex,Intnet1:br-eth1

We have created two physical networks: one as a flat network and the other as a VLAN network with VLAN IDs ranging from 100 to 200. We have mapped the External network to br-ex and Intnet1 to br-eth1. Now create the bridges:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1

Restart the Neutron Open vSwitch agent


service neutron-plugin-openvswitch-agent restart

2.4 Client1

2.4.1 Base OS

Install the 64-bit version of Ubuntu 14.04 Desktop.

2.4.2 Networking Configuration

Edit the /etc/network/interfaces file so that it looks like this:


auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.10.10.4
netmask 255.255.255.0
broadcast 10.10.10.255
gateway 10.10.10.1
dns-nameservers 10.10.8.3

2.4.3 NTP Client

Install NTP package.


apt-get install -y ntp

Open the file /etc/ntp.conf and add the following line to sync to server1.
server 10.10.10.2

Restart NTP service to make the changes effective


service ntp restart

2.4.4 Client Tools

As mentioned above, this is a desktop installation of Ubuntu 14.04 to be used for tasks such as bundling of images. It will also be used for managing the cloud infrastructure using the nova, glance and swift command line tools.
Install the required command line tools with the following command:
apt-get install python-novaclient python-glanceclient python-swiftclient python-cinderclient python-neutronclient

Install qemu-kvm
apt-get install qemu-kvm

Export the following environment variables or add them to your ~/.bashrc.

export SERVICE_TOKEN=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL="http://10.10.10.2:5000/v2.0/"
export SERVICE_ENDPOINT=http://10.10.10.2:35357/v2.0

Execute nova and glance commands to check the connectivity to the OpenStack setup.
nova list
+--------------------------------------+------------+--------+----------------------+
|                  ID                  |    Name    | Status |       Networks       |
+--------------------------------------+------------+--------+----------------------+
| 25ee9230-6bb5-4eca-8808-e6b4e0348362 | myinstance | ACTIVE | private=192.168.4.35 |
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1| ACTIVE | private=192.168.4.36 |
+--------------------------------------+------------+--------+----------------------+
glance index
ID                                   Name        Disk Format Container Format Size
------------------------------------ ----------- ----------- ---------------- ----------
65b9f8e1-cde8-40e7-93e3-0866becfb9d4 windows     qcow2       ovf              7580745728
f147e666-990c-47e2-9caa-a5a21470cc4e debian      qcow2       ovf              932904960
f3a8e689-02ed-460f-a587-dc868576228f opensuse    qcow2       ovf              1072300032
aa362fd9-7c28-480b-845c-85a5c38ccd86 centoscli   qcow2       ovf              1611530240
49f0ec2b-26dd-4644-adcc-2ce047e281c5 ubuntuimage qcow2       ovf              1471807488

2.5 OpenStack Dashboard

Start a browser and enter the URL http://10.10.10.2/horizon. You should see the dashboard login screen. Log in with the credentials username admin and password admin to manage the OpenStack setup.

Chapter 3

Image Management

3.1 Introduction

There are several pre-built images for OpenStack available from various sources. You can download such images and use them
to get familiar with OpenStack.
For any production deployment, you may like to have the ability to bundle custom images, with a custom set of applications or
configuration. This chapter will guide you through the process of creating Linux images of popular distributions from scratch.
We have also covered an approach to bundling Windows images.
There are some minor differences in the way you would bundle a Linux image, based on the distribution. Ubuntu makes it very easy by providing the cloud-init package, which can be used to take care of the instance configuration at the time of launch. cloud-init handles importing ssh keys for password-less login, setting the hostname etc. The instance acquires its instance-specific configuration from nova-compute by connecting to a metadata service running on 169.254.169.254.
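For example, from a shell inside a running instance you can query the metadata service directly; the same endpoint is used by the rc.local snippets later in this chapter:

curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key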
While creating the image of a distro that does not have cloud-init or an equivalent package, you may need to take care of importing
the keys etc. by running a set of commands at boot time from rc.local.
The process used for creating the Linux images of different distributions is largely the same with a few minor differences, which
are explained below.
In all the cases, the documentation below assumes that you have a working KVM installation to use for creating the images. We
are using the machine called client1 as explained in the chapter on "Installation and Configuration" for this purpose.
The approach explained below will give you disk images that represent a disk without any partitions.

3.2 Creating a Linux Image

The first step would be to create a raw image on Client1. This will represent the main HDD of the virtual machine, so make sure
to give it as much space as you will need.
kvm-img create -f raw server.img 5G

3.2.1 OS Installation

Download the ISO file of the Linux distribution you want to install in the image. For Ubuntu, you can download the ISO from http://releases.ubuntu.com using wget or with the help of a browser.
Boot a KVM instance with the OS installer ISO in the virtual CD-ROM. This will start the installation process. The command below also sets up a VNC display at port 0.

sudo kvm -m 256 -cdrom ubuntu-14.04-server-amd64.iso -drive file=server.img,if=virtio,index=0 -boot d -net nic -net user -nographic -vnc :0

Connect to the VM through VNC (use display number :0) and finish the installation.
For example, where 10.10.10.4 is the IP address of client1:
vncviewer 10.10.10.4:0

During the creation of Linux images, create a single ext4 partition mounted on / and a swap partition.
After finishing the installation, relaunch the VM by executing the following command.
sudo kvm -m 256 -drive file=server.img,if=virtio,index=0 -boot c -net nic -net user -nographic -vnc :0

At this point, you can add all the packages you want to have installed, update the installation, add users and make any configuration changes you want in your image.
3.2.1.1 Ubuntu

sudo apt-get update


sudo apt-get upgrade
sudo apt-get install openssh-server cloud-init

Remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance
coming up as an interface other than eth0.
sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules

3.2.1.2 Fedora

yum update
yum install openssh-server
chkconfig sshd on

Edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 to look like this


DEVICE="eth0"
BOOTPROTO=dhcp
NM_CONTROLLED="yes"
ONBOOT="yes"

Remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance
coming up as an interface other than eth0.
sudo rm -rf /etc/udev/rules.d/70-persistent-net.rules

Shut down the virtual machine.

Since Fedora does not ship with cloud-init or an equivalent, you will need to take a few steps to have the instance fetch metadata like ssh keys at boot.
Edit the /etc/rc.local file and add the following lines before the line "touch /var/lock/subsys/local"


depmod -a
modprobe acpiphp
# simple attempt to get the user ssh key using the meta-data service
mkdir -p /root/.ssh
echo >> /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"

3.2.1.3 OpenSUSE

Select ssh server, curl and other packages needed.


Install ssh server.
zypper install openssh

Install curl.
zypper install curl

For ssh key injection into the instance use the following steps:
Create a file /etc/init.d/sshkey and add the following lines
echo >> /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"

Change the permissions for the file.


chmod 755 /etc/init.d/sshkey

Configure the service to start automatically while booting.


chkconfig sshkey on

Configure the firewall (not iptables) using the following command and allow ssh service
yast2

Also remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the
instance coming up as an interface other than eth0.
rm -rf /etc/udev/rules.d/70-persistent-net.rules

3.2.1.4 Debian

Select SSH server, Curl and other packages needed.


Do the necessary changes needed for the image. For key injection add the following lines in the file /etc/rc.local.

echo >> /root/.ssh/authorized_keys


curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"

Also remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the
instance coming up as an interface other than eth0.
rm -rf /etc/udev/rules.d/70-persistent-net.rules

3.2.1.5 CentOS 6 and RHEL 6

Select SSH server, Curl and other packages needed.


Do the necessary changes needed for the image. For key injection add the following lines in the file /etc/rc.local.
echo >> /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"

Edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 to look like this


DEVICE="eth0"
BOOTPROTO=dhcp
NM_CONTROLLED="yes"
ONBOOT="yes"

Remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance
coming up as an interface other than eth0.
rm -rf /etc/udev/rules.d/70-persistent-net.rules

3.2.2 Uploading the Linux image

Upload the image


glance add name="<Image name>" is_public=true container_format=ovf disk_format=qcow2 < <filename>.img
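As a concrete sketch, to upload the server.img built earlier in this chapter (the image name below is illustrative; the kvm-img convert step is only needed because server.img was created raw while we register the image as qcow2):

kvm-img convert -f raw -O qcow2 server.img server.qcow2
glance add name="Ubuntu 14.04 Server" is_public=true container_format=ovf disk_format=qcow2 < server.qcow2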

Chapter 4

Instance Management

4.1 Introduction

An instance is a virtual machine provisioned by OpenStack on one of the nova-compute servers. When you launch an instance, a series of steps are triggered on various components of OpenStack. During its life cycle, an instance moves through various stages, from scheduling and building through active operation to eventual termination.

The following interfaces can be used for managing instances in nova.


Nova commands
Custom applications developed using Nova APIs

4.2 OpenStack Command Line Tools

Nova has a bunch of command line tools to manage the OpenStack setup. These commands help you manage images, instances,
storage, networking etc. A few commands related to managing the instances are given below.
For a complete list of commands, see Commands chapter.

4.2.1 Creation of Key Pairs

OpenStack expects the client tools to use two kinds of credentials. One set, called the Access Key and Secret Key, is needed by all clients to make any requests to the Cloud Controller. Each user registered on the web interface has this set created for them. You can download it from the web interface as mentioned in the chapter on "Web Interface".
You will also need to generate a keypair consisting of private key/public key to be able to launch instances on OpenStack. These
keys are injected into the instances to make password-less SSH access to the instance. This depends on the way the necessary
tools are bundled into the images. Please refer to the chapter on "Image Management" for more details.
Keypairs can also be generated using the following commands.
ssh-keygen
cd .ssh
nova keypair-add --pub_key id_rsa.pub mykey

This creates a new keypair called mykey. The private key id_rsa is saved locally in ~/.ssh which can be used to connect to an
instance launched using mykey as the keypair. You can see the available keypairs with nova keypair-list command.
nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| mykey  | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
| mykey2 | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
+--------+-------------------------------------------------+

Also, while executing ssh-keygen you can specify a custom location and custom file names for the keypair that you want to create, as in the sketch below.
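A minimal sketch, assuming ~/.ssh/mykey2_rsa as the (illustrative) custom file name:

ssh-keygen -t rsa -f ~/.ssh/mykey2_rsa
nova keypair-add --pub_key ~/.ssh/mykey2_rsa.pub mykey2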
To delete an existing keypair:
nova keypair-delete mykey2

4.2.2 Launch and manage instances

There are several commands that help in managing the instances. Here are a few examples:
localadmin@server1$ nova boot --flavor 1 --image 4b1dceb5-6c9a-4268-9bcc-4ff33e209bd2 myinstance --key_name mykey
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-SRV-ATTR:host                 |                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  |                                               |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               |                                               |
| OS-SRV-USG:terminated_at             |                                               |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | WrGtHqBCNA64                                  |
| config_drive                         |                                               |
| created                              | 2014-09-27T10:09:11Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 4017d7f2-44e6-4858-a175-b4dbb178156d          |
| image                                | Cirros (4b1dceb5-6c9a-4268-9bcc-4ff33e209bd2) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | myinstance                                    |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | fec44bd5871a41068791756610090158              |
| updated                              | 2014-09-27T10:09:11Z                          |
| user_id                              | f50fe85ae90b4381871f4018257414ed              |
+--------------------------------------+-----------------------------------------------+
$ nova list
+--------------------------------------+------------+--------+----------------------+
|                  ID                  |    Name    | Status |       Networks       |
+--------------------------------------+------------+--------+----------------------+
| 25ee9230-6bb5-4eca-8808-e6b4e0348362 | myinstance | ACTIVE | private=192.168.4.35 |
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1| ACTIVE | private=192.168.4.36 |
+--------------------------------------+------------+--------+----------------------+
$ nova reboot 25ee9230-6bb5-4eca-8808-e6b4e0348362
$ nova list
+--------------------------------------+------------+--------+----------------------+
|                  ID                  |    Name    | Status |       Networks       |
+--------------------------------------+------------+--------+----------------------+
| 25ee9230-6bb5-4eca-8808-e6b4e0348362 | myinstance | REBOOT | private=192.168.4.35 |
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1| ACTIVE | private=192.168.4.34 |
+--------------------------------------+------------+--------+----------------------+
$ nova delete 25ee9230-6bb5-4eca-8808-e6b4e0348362
$ nova list
+--------------------------------------+------------+--------+----------------------+
|                  ID                  |    Name    | Status |       Networks       |
+--------------------------------------+------------+--------+----------------------+
| c939cb2c-e662-46e5-bc31-453007442cf9 | myinstance1| ACTIVE | private=192.168.4.34 |
+--------------------------------------+------------+--------+----------------------+

$ nova console-log myinstance

For passwordless ssh access to the instance:


ssh -i <private_key> username@<ip_address>

The VM type (flavor) determines the hard disk size, the amount of RAM and the number of CPUs allocated to the instance. Check the VM types available:
nova flavor-list

New flavors can be created with the command.


nova-manage flavor create <args> [options]

A flavor can be deleted with the command.


nova-manage flavor delete <args> [options]
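Alternatively, the nova client can manage flavors directly; a minimal sketch (the flavor name and sizes below are illustrative):

nova flavor-create m1.custom auto 512 5 1    # name, id (auto), RAM in MB, disk in GB, vCPUs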

Chapter 5

OpenStack Dashboard (Horizon)

5.1 Login

Using the OpenStack Dashboard, one can manage various OpenStack services. It may be used to manage instances and images, create keypairs, attach volumes to instances, manipulate Swift containers etc. The OpenStack Dashboard is accessible via http://10.10.10.2/horizon.
Log in to the dashboard with username "admin" and password "admin".

5.2 User overview

After logging in, depending on the access privileges, the user is allowed access to specific projects. Below is the overview page for a project belonging to the admin user. One can view and download some basic usage metric reports here.

5.3 Instances

This page lists the currently running instances belonging to the user admin. From this page, one can terminate, pause or reboot any running instance, connect to the VNC console of an instance etc.

5.4 Services

The list of services defined can be viewed on this page.

5.5 Images

The list of images defined can be viewed on this page.

5.6 Projects

This page lists the available projects (tenants) that have been created. One can also create new projects, assign users to the
projects etc.

5.7 Users

This page lists the users that have been created. One can also create new users, disable/delete existing users.

Chapter 6

Storage Management

6.1 Cinder-volume

Cinder-volume provides persistent block storage for use with instances. The storage on the instances is non-persistent, so any data that you generate and store on the file system on the first disk of the instance is lost when the instance is terminated. You will need to use persistent volumes provided by cinder-volume if you want any data generated during the life of the instance to persist after the instance is terminated.
Commands from cinder client package can be used to manage these volumes.
Here are a few examples:

6.1.1 Interacting with Storage Controller

Make sure that you have sourced the creds file before running any of the following commands.
Create a 10 GB volume
cinder create 10 --display-name myvolume1

You should see an output like:


+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-10-04T10:35:44.750783      |
| display_description |                 None                 |
|     display_name    |              myvolume1               |
|      encrypted      |                False                 |
|          id         | 2c56e6ed-80de-409c-8f37-6edaf8b224fd |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

List the volumes


cinder list

You should see an output like this:


+-------------------------+-----------+--------------+------+-------------+----------+-------------+
|            ID           |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+-------------------------+-----------+--------------+------+-------------+----------+-------------+
|2c6ed-8de-49c-8f7-6eda4fd| available |  myvolume1   |  10  |     None    |  false   |             |
+-------------------------+-----------+--------------+------+-------------+----------+-------------+

Attach a volume to a running instance


nova volume-attach b4aa8dd3-6d2c-42fa-882f-1ab3160ee6e7 2c6ed-8de-49c-8f7-6eda4fd /dev/vdb

A volume can only be attached to one instance at a time. When "cinder list" shows the status of a volume as "available", it means it is not attached to any instance and is ready to be used. The status changes from "available" to "in-use" when it is attached to an instance successfully.
When a volume is attached to an instance, it shows up as an additional disk on the instance. You can log in to the instance and mount the disk, format it and use it.
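A minimal sketch from inside the instance, assuming the volume appeared as /dev/vdb:

sudo mkfs.ext4 /dev/vdb          # format the volume (first use only; this erases it)
sudo mkdir -p /mnt/myvolume
sudo mount /dev/vdb /mnt/myvolume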
Detach a volume from an instance.
nova volume-detach b4aa8dd3-6d2c-42fa-882f-1ab3160ee6e7 2c6ed-8de-49c-8f7-6eda4fd

The data on the volume persists even after the volume is detached from an instance. You can see the data on reattaching the
volume to another instance.
Even though you have indicated /dev/vdb as the device on the instance, the actual device name created by the OS running inside
the instance may differ. You can find the name of the device by looking at the device nodes in /dev or by watching the syslog
when the volume is being attached.

6.1.2 Swift

Swift is a storage service that can be used for storage and archival of objects. Swift provides a REST interface. You can use the swift command line client, which is an interface for the OpenStack object store service.
To get information about the swift account, containers and objects:
swift stat

Output:
Account: AUTH_8df4664c24bc40a4889fab4517e8e599
Containers: 1
Objects: 1
Bytes: 1790
Accept-Ranges: bytes
X-Timestamp: 1413260773.11084
X-Trans-Id: txa04183fe1f9943fa94672-00543ca5f6
Content-Type: text/plain; charset=utf-8

To get information about a particular container (mycontainer):


swift stat mycontainer

To get information about an object (abc123.txt) within container (mycontainer):


swift stat mycontainer abc123.txt

To list available containers in account:


swift list


To list all objects within container mycontainer:


$ swift list mycontainer

To upload files abc.txt and xyz.txt to mycontainer:


$ swift upload mycontainer /path/abc.txt /path/xyz.txt

To download all objects from container mycontainer:


$ swift download mycontainer

To download abc.txt and xyz.txt from container mycontainer:


$ swift download mycontainer abc.txt xyz.txt

To delete all objects in all containers:


$ swift --all delete

To delete all objects in container mycontainer:


$ swift delete mycontainer

To delete files abc.txt and xyz.txt from container mycontainer:


$ swift delete mycontainer abc.txt xyz.txt


Chapter 7

Network Management

7.1 Introduction

In OpenStack, networking is managed by a component called Neutron. It interacts with nova-compute to ensure that the instances have the right kind of networking setup for them to communicate among themselves as well as with the outside world. Each OpenStack instance can have two IP addresses attached to it: one is the private IP address and the other is the public IP address. The private IP address is typically used for communication between instances, and the public IP is used for communication of instances with the outside world. The so-called public IP address need not be routable on the Internet; it can even be an address on the corporate LAN.
The network configuration inside the instance is done with the private IP address in view. The association between the private IP and the public IP, and the necessary routing, are handled by neutron-l3-agent; the instances need not be aware of them.
Neutron provides 4 different network types:
Flat
VLAN
GRE
VxLAN
A flat network should be chosen if you intend to create only one OpenStack private network per physical network. A VLAN network is ideal for beginners to easily grasp the idea of OpenStack networking. The other modes can be chosen when you do not have VLAN-enabled switches or a more specialized setup is needed.
The network type is chosen by using the following configuration option in /etc/neutron/plugins/ml2/ml2_conf.ini; this line can also contain more than one network type.
tenant_network_types =
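For example, to match the ml2_conf.ini used in the installation chapter (VLAN tenant networks alongside a flat external network):

tenant_network_types = vlan,flat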

In each of these cases, run the following commands to set up private and public IP addresses for use by the instances:
You can create a private network by using the following command
neutron net-create N1

The output of the command will be like this


+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | beb5d2ca-46b0-45d7-a469-98d0575b7530 |
| name                      | N1                                   |
| provider:network_type     | vlan                                 |
| provider:physical_network | Intnet1                              |
| provider:segmentation_id  | 100                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 8df4664c24bc40a4889fab4517e8e599     |
+---------------------------+--------------------------------------+

Create a subnet for the network created above


neutron subnet-create --name subnetone N1 192.168.10.0/24 --dns-nameserver 8.8.8.8

The output would be like


+------------------+----------------------------------------------------+
| Field            | Value                                              |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.10.2", "end": "192.168.10.254"} |
| cidr             | 192.168.10.0/24                                    |
| dns_nameservers  | 8.8.8.8                                            |
| enable_dhcp      | True                                               |
| gateway_ip       | 192.168.10.1                                       |
| host_routes      |                                                    |
| id               | ccc80588-2b0d-459b-82e9-972ff4291b79               |
| ip_version       | 4                                                  |
| name             | subnetone                                          |
| network_id       | beb5d2ca-46b0-45d7-a469-98d0575b7530               |
| tenant_id        | 8df4664c24bc40a4889fab4517e8e599                   |
+------------------+----------------------------------------------------+

Now you can launch instances in network N1 and they will have IP addresses in the range defined by subnetone.
The public IP network or the External network can be created using the following command
neutron net-create Extnet --provider:network_type flat --provider:physical_network External --router:external True --shared

The output would be like


+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 9c9436d4-2b7c-4787-8535-9835e6d9ac8e |
| name                      | Extnet                               |
| provider:network_type     | flat                                 |
| provider:physical_network | External                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | be9e9fbc-9140-4048-9636-eab2dd10d2e8 |
| tenant_id                 | 8df4664c24bc40a4889fab4517e8e599     |
+---------------------------+--------------------------------------+

Create a subnet for the External network


neutron subnet-create --name Externalsubnet --gateway 10.10.10.1 Extnet 10.10.10.0/24 --enable_dhcp False --allocation-pool start=10.10.10.10,end=10.10.10.49


The output would be like


+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.10.10.10", "end": "10.10.10.49"} |
| cidr             | 10.10.10.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | False                                          |
| gateway_ip       | 10.10.10.1                                     |
| host_routes      |                                                |
| id               | be9e9fbc-9140-4048-9636-eab2dd10d2e8           |
| ip_version       | 4                                              |
| name             | ExternalSubnet                                 |
| network_id       | 9c9436d4-2b7c-4787-8535-9835e6d9ac8e           |
| tenant_id        | 8df4664c24bc40a4889fab4517e8e599               |
+------------------+------------------------------------------------+

We need a router to connect the private network and the public network. We can create a router using the following command
neutron router-create EXTRouter

The output would be like


+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | a8b94496-87eb-4ecc-add3-7e4236780d46 |
| name                  | EXTRouter                            |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 8df4664c24bc40a4889fab4517e8e599     |
+-----------------------+--------------------------------------+

We have to add the private and public networks to the router's interfaces.
Attach the router to the private network by specifying the corresponding subnet's name:
neutron router-interface-add EXTRouter subnetone

The router is then attached to the external network


neutron router-gateway-set EXTRouter Extnet

You can then create a floating IP from the public IP pool created earlier (10.10.10.10 to 10.10.10.49) in the network "Extnet":
neutron floatingip-create Extnet

The output of the command would be like


Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.10.10.11                          |
| floating_network_id | 9c9436d4-2b7c-4787-8535-9835e6d9ac8e |
| id                  | 7b4cee72-ffcd-4484-a5d8-371b23bb3cc3 |
| port_id             |                                      |
| router_id           |                                      |
| status              | ACTIVE                               |
| tenant_id           | 8df4664c24bc40a4889fab4517e8e599     |
+---------------------+--------------------------------------+

Associate the floating IP with a running instance using the following command
nova floating-ip-associate instance-ID Floating-IP-Address
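For example, using the instance and the floating IP created earlier in this guide:

nova floating-ip-associate myinstance 10.10.10.11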

Chapter 8

Security

8.1 Security Overview

OpenStack provides ingress filtering for the instances based on the concept of security groups. OpenStack accomplishes ingress filtering by creating suitable iptables rules. A security group is a named set of rules that get applied to the incoming packets for the instances. You can specify a security group while launching an instance. Each security group can have multiple rules associated with it. Each rule specifies the source IP/network, protocol type, destination ports etc. Any packet matching the parameters specified in a rule is allowed in; the rest of the packets are blocked.
A security group that does not have any rules associated with it blocks all incoming traffic. The mechanism only provides ingress filtering and does not provide any egress filtering; as a result, all outbound traffic is allowed. If you need to implement egress filtering, you will need to implement it inside the instance using a firewall, as sketched below.
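A minimal iptables sketch for egress filtering inside an instance (the rules below are illustrative; adapt ports to your needs):

# Allow replies to established connections, plus outbound HTTP and DNS; drop everything else
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -P OUTPUT DROP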
Horizon lets you manage security groups and also lets you specify a security group while launching an instance. You can also use the nova client for this purpose.
Here are a few nova commands to manage security groups.
Create a security group named "myserver".
nova secgroup-create myserver "My default"
+--------------------------------------+----------+-------------+
| Id                                   | Name     | Description |
+--------------------------------------+----------+-------------+
| 0d524280-2e8a-4b43-9445-50e6d185daf8 | myserver | My default  |
+--------------------------------------+----------+-------------+

Add a rule to the security group "myserver" allowing icmp and tcp traffic from 192.168.1.1.
nova secgroup-add-rule myserver tcp 22 22 192.168.1.1/32
+-------------+-----------+---------+----------------+--------------+
| IP Protocol | From Port | To Port | IP Range       | Source Group |
+-------------+-----------+---------+----------------+--------------+
| tcp         | 22        | 22      | 192.168.1.1/32 |              |
+-------------+-----------+---------+----------------+--------------+
localadmin@server1$ nova secgroup-add-rule myserver icmp -1 -1 192.168.1.1/32
+-------------+-----------+---------+----------------+--------------+
| IP Protocol | From Port | To Port | IP Range       | Source Group |
+-------------+-----------+---------+----------------+--------------+
| icmp        | -1        | -1      | 192.168.1.1/32 |              |
+-------------+-----------+---------+----------------+--------------+

Rules can be viewed with nova secgroup-list.


nova secgroup-list
+--------------------------------------+----------+-------------+
| Id                                   | Name     | Description |
+--------------------------------------+----------+-------------+
| 9fedb29d-c12c-49c0-bdd4-e5265488f53d | myserver | My default  |
+--------------------------------------+----------+-------------+

Remove the rule for ssh traffic from the source ip 192.168.1.1 from the security group "myserver"
nova secgroup-delete-rule myserver tcp 22 22 192.168.1.1/32
+-------------+-----------+---------+----------------+--------------+
| IP Protocol | From Port | To Port | IP Range       | Source Group |
+-------------+-----------+---------+----------------+--------------+
| tcp         | 22        | 22      | 192.168.1.1/32 |              |
+-------------+-----------+---------+----------------+--------------+

Delete the security group "myserver"


nova secgroup-delete myserver
+--------------------------------------+----------+-------------+
| Id                                   | Name     | Description |
+--------------------------------------+----------+-------------+
| 0d524280-2e8a-4b43-9445-50e6d185daf8 | myserver | My default  |
+--------------------------------------+----------+-------------+

Launch an instance associated with the security group "myserver".

nova boot --flavor 1 --image 4b1dceb5-6c9a-4268-9bcc-4ff33e209bd2 --security-groups myserver --key_name mykey myinstance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-SRV-ATTR:host                 |                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  |                                               |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               |                                               |
| OS-SRV-USG:terminated_at             |                                               |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | WrGtHqBCNA64                                  |
| config_drive                         |                                               |
| created                              | 2014-09-27T10:09:11Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 4017d7f2-44e6-4858-a175-b4dbb178156d          |
| image                                | Cirros (4b1dceb5-6c9a-4268-9bcc-4ff33e209bd2) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | myinstance                                    |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | myserver                                      |
| status                               | BUILD                                         |
| tenant_id                            | fec44bd5871a41068791756610090158              |
| updated                              | 2014-09-27T10:09:11Z                          |
| user_id                              | f50fe85ae90b4381871f4018257414ed              |
+--------------------------------------+-----------------------------------------------+

When you do not specify a security group, the instance gets associated with a built-in security group called "default".
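
Because the "default" group starts without any inbound rules, instances placed in it are typically unreachable until rules are added to it in the same way. A sketch that opens SSH and ping from anywhere; the 0.0.0.0/0 range is a deliberately broad assumption and should be narrowed for production use:

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0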


Chapter 9

OpenStack Commands

9.1 Nova Commands

nova is the command line interface to the OpenStack Compute API.
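
Before the full reference below, here is a minimal sketch of a typical workflow assembled from these subcommands; the image ID and all names are placeholders:

nova image-list
nova flavor-list
nova keypair-add mykey > mykey.pem
nova boot --flavor 1 --image <image-id> --key_name mykey myinstance
nova list
nova console-log myinstance
nova delete myinstance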


usage: nova
Positional arguments:
absolute-limits             Print a list of absolute limits for a user
add-fixed-ip                Add new IP address on a network to server.
add-floating-ip             DEPRECATED, use floating-ip-associate instead.
add-secgroup                Add a Security Group to a server.
agent-create                Create new agent build.
agent-delete                Delete existing agent build.
agent-list                  List all builds.
agent-modify                Modify existing agent build.
aggregate-add-host          Add the host to the specified aggregate.
aggregate-create            Create a new aggregate with the specified details.
aggregate-delete            Delete the aggregate.
aggregate-details           Show details of the specified aggregate.
aggregate-list              Print a list of all aggregates.
aggregate-remove-host       Remove the specified host from the specified aggregate.
aggregate-set-metadata      Update the metadata associated with the aggregate.
aggregate-update            Update the aggregate's name and optionally availability zone.
availability-zone-list      List all the availability zones.
backup                      Backup a server by creating a 'backup' type snapshot.
boot                        Boot a new server.
clear-password              Clear password for a server.
cloudpipe-configure         Update the VPN IP/port of a cloudpipe instance.
cloudpipe-create            Create a cloudpipe instance for the given project.
cloudpipe-list              Print a list of all cloudpipe instances.
console-log                 Get console log output of a server.
credentials                 Show user credentials returned from auth.
delete                      Immediately shut down and delete specified server(s).
diagnostics                 Retrieve server diagnostics.
dns-create                  Create a DNS entry for domain, name and ip.
dns-create-private-domain   Create the specified DNS domain.
dns-create-public-domain    Create the specified DNS domain.
dns-delete                  Delete the specified DNS entry.
dns-delete-domain           Delete the specified DNS domain.
dns-domains                 Print a list of available dns domains.
dns-list                    List current DNS entries for domain and ip or domain and name.
endpoints                   Discover endpoints that get returned from the authenticate services.
evacuate                    Evacuate server from failed host to specified one.
fixed-ip-get                Retrieve info on a fixed ip.
fixed-ip-reserve            Reserve a fixed IP.
fixed-ip-unreserve          Unreserve a fixed IP.
flavor-access-add           Add flavor access for the given tenant.
flavor-access-list          Print access information about the given flavor.
flavor-access-remove        Remove flavor access for the given tenant.
flavor-create               Create a new flavor
flavor-delete               Delete a specific flavor
flavor-key                  Set or unset extra_spec for a flavor.
flavor-list                 Print a list of available 'flavors' (sizes of servers).
flavor-show                 Show details about the given flavor.
floating-ip-associate       Associate a floating IP address to a server.
floating-ip-bulk-create     Bulk create floating ips by range.
floating-ip-bulk-delete     Bulk delete floating ips by range.
floating-ip-bulk-list       List all floating ips.
floating-ip-create          Allocate a floating IP for the current tenant.
floating-ip-delete          De-allocate a floating IP.
floating-ip-disassociate    Disassociate a floating IP address from a server.
floating-ip-list            List floating ips for this tenant.
floating-ip-pool-list       List all floating ip pools.
get-password                Get password for a server.
get-rdp-console             Get a rdp console to a server.
get-spice-console           Get a spice console to a server.
get-vnc-console             Get a vnc console to a server.
host-action                 Perform a power action on a host.
host-describe               Describe a specific host.
host-list                   List all hosts by service.
host-update                 Update host settings.
hypervisor-list             List hypervisors.
hypervisor-servers          List servers belonging to specific hypervisors.
hypervisor-show             Display the details of the specified hypervisor.
hypervisor-stats            Get hypervisor statistics over all compute nodes.
hypervisor-uptime           Display the uptime of the specified hypervisor.
image-create                Create a new image by taking a snapshot of a running server.
image-delete                Delete specified image(s).
image-list                  Print a list of available images to boot from.
image-meta                  Set or Delete metadata on an image.
image-show                  Show details about the given image.
interface-attach            Attach a network interface to a server.
interface-detach            Detach a network interface from a server.
interface-list              List interfaces attached to a server.
keypair-add                 Create a new key pair for use with servers.
keypair-delete              Delete keypair given by its name.
keypair-list                Print a list of keypairs for a user
keypair-show                Show details about the given keypair.
list                        List active servers.
list-secgroup               List Security Group(s) of a server.
live-migration              Migrate running server to a new machine.
lock                        Lock a server.
meta                        Set or Delete metadata on a server.
migrate                     Migrate a server. The new host will be selected by the scheduler.
network-associate-host      Associate host with network.
network-associate-project   Associate project with network.
network-create              Create a network.
network-disassociate        Disassociate host and/or project from the given network.
network-list                Print a list of available networks.
network-show                Show details about the given network.
pause                       Pause a server.
quota-class-show            List the quotas for a quota class.
quota-class-update          Update the quotas for a quota class.
quota-defaults              List the default quotas for a tenant.
quota-delete                Delete quota for a tenant/user so their quota will revert back to default.
quota-show                  List the quotas for a tenant/user.
quota-update                Update the quotas for a tenant/user.
rate-limits                 Print a list of rate limits for a user
reboot                      Reboot a server.
rebuild                     Shutdown, re-image, and re-boot a server.
refresh-network             Refresh server network information.
remove-fixed-ip             Remove an IP address from a server.
remove-floating-ip          DEPRECATED, use floating-ip-disassociate instead.
remove-secgroup             Remove a Security Group from a server.
rename                      Rename a server.
rescue                      Rescue a server.
reset-network               Reset network of a server.
reset-state                 Reset the state of a server.
resize                      Resize a server.
resize-confirm              Confirm a previous resize.
resize-revert               Revert a previous resize (and return to the previous VM).
resume                      Resume a server.
root-password               Change the root password for a server.
scrub                       Delete data associated with the project.
secgroup-add-group-rule     Add a source group rule to a security group.
secgroup-add-rule           Add a rule to a security group.
secgroup-create             Create a security group.
secgroup-delete             Delete a security group.
secgroup-delete-group-rule  Delete a source group rule from a security group.
secgroup-delete-rule        Delete a rule from a security group.
secgroup-list               List security groups for the current tenant.
secgroup-list-rules         List rules for a security group.
secgroup-update             Update a security group.
service-disable             Disable the service.
service-enable              Enable the service.
service-list                Show a list of all running services. Filter by host and binary.
shelve                      Shelve a server.
shelve-offload              Remove a shelved server from the compute node.
show                        Show details about the given server.
ssh                         SSH into a server.
start                       Start a server.
stop                        Stop a server.
suspend                     Suspend a server.
unlock                      Unlock a server.
unpause                     Unpause a server.
unrescue                    Unrescue a server.
unshelve                    Unshelve a server.
usage                       Show usage data for a single tenant.
usage-list                  List usage data for all tenants.
volume-attach               Attach a volume to a server.
volume-create               Add a new volume.
volume-delete               Remove volume(s).
volume-detach               Detach a volume from a server.
volume-list                 List all the volumes.
volume-show                 Show details about a volume.
volume-snapshot-create      Add a new snapshot.
volume-snapshot-delete      Remove a snapshot.
volume-snapshot-list        List all the snapshots.
volume-snapshot-show        Show details about a snapshot.
volume-type-create          Create a new volume type.
volume-type-delete          Delete a specific volume type.
volume-type-list            Print a list of available 'volume types'.
volume-update               Update volume attachment.
x509-create-cert            Create x509 cert for a user in tenant.
x509-get-root-cert          Fetch the x509 root cert.
bash-completion             Prints all of the commands and options to stdout so that the nova.bash_completion script doesn't have to hard code them.
help                        Display help about this program or one of its subcommands.
host-servers-migrate        Migrate all instances of the specified host to other available hosts.
list-extensions             List all the os-api extensions that are available.
force-delete                Force delete a server.
restore                     Restore a soft-deleted server.
migration-list              Print a list of migrations.
host-evacuate               Evacuate all instances from failed host to specified one.
instance-action             Show an action.
instance-action-list        List actions on a server.
cell-capacities             Get cell capacities for all cells or a given cell.
cell-show                   Show details of a given cell.
baremetal-interface-add     Add a network interface to a baremetal node.
baremetal-interface-list    List network interfaces associated with a baremetal node.
baremetal-interface-remove  Remove a network interface from a baremetal node.
baremetal-node-create       Create a baremetal node.
baremetal-node-delete       Remove a baremetal node and any associated interfaces.
baremetal-node-list         Print list of available baremetal nodes.
baremetal-node-show         Show information about a baremetal node.
host-meta                   Set or Delete metadata on all instances of a host.
net                         Show a network
net-create                  Create a network
net-delete                  Delete a network
net-list                    List networks

9.2 Glance Commands

Glance is the command line interface to the OpenStack Image service.


usage: glance
Positional arguments:
add              DEPRECATED! Use image-create instead.
clear            DEPRECATED!
delete           DEPRECATED! Use image-delete instead.
details          DEPRECATED! Use image-list instead.
image-create     Create a new image.
image-delete     Delete specified image(s).
image-download   Download a specific image.
image-list       List images you can access.
image-members    DEPRECATED! Use member-list instead.
image-show       Describe a specific image.
image-update     Update a specific image.
index            DEPRECATED! Use image-list instead.
member-add       DEPRECATED! Use member-create instead.
member-create    Share a specific image with a tenant.
member-delete    Remove a shared image from a tenant.
member-images    DEPRECATED! Use member-list instead.
member-list      Describe sharing permissions by image or tenant.
members-replace  DEPRECATED!
show             DEPRECATED! Use image-show instead.
update           DEPRECATED! Use image-update instead.
help             Display help about this program or one of its subcommands.
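
A common task built from these subcommands is registering a new image; a sketch, where the file name, image name and formats are assumptions about the image being uploaded:

glance image-create --name cirros-0.3.2 --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.2-x86_64-disk.img
glance image-list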

9.3 Swift Commands

Swift is the command line interface to the OpenStack Object Storage service.


Usage: swift
delete        Delete a container or objects within a container
download      Download objects from containers
list          Lists the containers for the account or the objects for a container
post          Updates meta information for the account, container, or object; creates containers if not present
stat          Displays information for the account, container, or object
upload        Uploads files or directories to the given container
capabilities  List cluster capabilities
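
A minimal round trip through these subcommands; the container and file names are placeholders:

swift upload mycontainer myfile.txt
swift list mycontainer
swift stat mycontainer myfile.txt
swift download mycontainer myfile.txt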


9.4 Keystone Commands

Keystone is the command line interface to the OpenStack Identity service.


usage: keystone
catalog                 List service catalog, possibly filtered by service.
ec2-credentials-create  Create EC2-compatible credentials for user per tenant.
ec2-credentials-delete  Delete EC2-compatible credentials.
ec2-credentials-get     Display EC2-compatible credentials.
ec2-credentials-list    List EC2-compatible credentials for a user.
endpoint-create         Create a new endpoint associated with a service.
endpoint-delete         Delete a service endpoint.
endpoint-get            Find endpoint filtered by a specific attribute or service type.
endpoint-list           List configured service endpoints.
password-update         Update own password.
role-create             Create new role.
role-delete             Delete role.
role-get                Display role details.
role-list               List all roles.
service-create          Add service to Service Catalog.
service-delete          Delete service from Service Catalog.
service-get             Display service from Service Catalog.
service-list            List all services in Service Catalog.
tenant-create           Create new tenant.
tenant-delete           Delete tenant.
tenant-get              Display tenant details.
tenant-list             List all tenants.
tenant-update           Update tenant name, description, enabled status.
token-get               Display the current user token.
user-create             Create new user
user-delete             Delete user.
user-get                Display user details.
user-list               List users.
user-password-update    Update user password.
user-role-add           Add role to user.
user-role-list          List roles granted to a user.
user-role-remove        Remove role from user.
user-update             Update user's name, email, and enabled status.
discover                Discover Keystone servers, supported API versions and extensions.
bootstrap               Grants a new role to a new user on a new tenant, after creating each.
bash-completion         Prints all of the commands and options to stdout.
help                    Display help about this program or one of its subcommands.
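
A sketch of creating a tenant and a user with these subcommands; the names, password and the _member_ role are assumptions matching common setups:

keystone tenant-create --name proj2 --description "Another project"
keystone user-create --name user2 --pass secret --email user2@example.com
keystone user-role-add --user user2 --role _member_ --tenant proj2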

9.5 Neutron Commands

Neutron is the command line interface to the OpenStack Networking service.


usage: neutron
agent-delete                   Delete a given agent.
agent-list                     List agents.
agent-show                     Show information of a given agent.
agent-update                   Update a given agent.
cisco-credential-create        Creates a credential.
cisco-credential-delete        Delete a given credential.
cisco-credential-list          List credentials that belong to a given tenant.
cisco-credential-show          Show information of a given credential.
cisco-network-profile-create   Creates a network profile.
cisco-network-profile-delete   Delete a given network profile.
cisco-network-profile-list     List network profiles that belong to a given tenant.
cisco-network-profile-show     Show information of a given network profile.
cisco-network-profile-update   Update network profile's information.
cisco-policy-profile-list      List policy profiles that belong to a given tenant.
cisco-policy-profile-show      Show information of a given policy profile.
cisco-policy-profile-update    Update policy profile's information.
dhcp-agent-list-hosting-net    List DHCP agents hosting a network.
dhcp-agent-network-add         Add a network to a DHCP agent.
dhcp-agent-network-remove      Remove a network from a DHCP agent.
ext-list                       List all extensions.
ext-show                       Show information of a given resource.
firewall-create                Create a firewall.
firewall-delete                Delete a given firewall.
firewall-list                  List firewalls that belong to a given tenant.
firewall-policy-create         Create a firewall policy.
firewall-policy-delete         Delete a given firewall policy.
firewall-policy-insert-rule    Insert a rule into a given firewall policy.
firewall-policy-list           List firewall policies that belong to a given tenant.
firewall-policy-remove-rule    Remove a rule from a given firewall policy.
firewall-policy-show           Show information of a given firewall policy.
firewall-policy-update         Update a given firewall policy.
firewall-rule-create           Create a firewall rule.
firewall-rule-delete           Delete a given firewall rule.
firewall-rule-list             List firewall rules that belong to a given tenant.
firewall-rule-show             Show information of a given firewall rule.
firewall-rule-update           Update a given firewall rule.
firewall-show                  Show information of a given firewall.
firewall-update                Update a given firewall.
floatingip-associate           Create a mapping between a floating ip and a fixed ip.
floatingip-create              Create a floating ip for a given tenant.
floatingip-delete              Delete a given floating ip.
floatingip-disassociate        Remove a mapping from a floating ip to a fixed ip.
floatingip-list                List floating ips that belong to a given tenant.
floatingip-show                Show information of a given floating ip.
help                           Print detailed help for another command.
ipsec-site-connection-create   Create an IPsecSiteConnection.
ipsec-site-connection-delete   Delete a given IPsecSiteConnection.
ipsec-site-connection-list     List IPsecSiteConnections that belong to a given tenant.
ipsec-site-connection-show     Show information of a given IPsecSiteConnection.
ipsec-site-connection-update   Update a given IPsecSiteConnection.
l3-agent-list-hosting-router   List L3 agents hosting a router.
l3-agent-router-add            Add a router to a L3 agent.
l3-agent-router-remove         Remove a router from a L3 agent.
lb-agent-hosting-pool          Get loadbalancer agent hosting a pool.
lb-healthmonitor-associate     Create a mapping between a health monitor and a pool.
lb-healthmonitor-create        Create a healthmonitor.
lb-healthmonitor-delete        Delete a given healthmonitor.
lb-healthmonitor-disassociate  Remove a mapping from a health monitor to a pool.
lb-healthmonitor-list          List healthmonitors that belong to a given tenant.
lb-healthmonitor-show          Show information of a given healthmonitor.
lb-healthmonitor-update        Update a given healthmonitor.
lb-member-create               Create a member.

lb-member-delete               Delete a given member.
lb-member-list                 List members that belong to a given tenant.
lb-member-show                 Show information of a given member.
lb-member-update               Update a given member.
lb-pool-create                 Create a pool.
lb-pool-delete                 Delete a given pool.
lb-pool-list                   List pools that belong to a given tenant.
lb-pool-list-on-agent          List the pools on a loadbalancer agent.
lb-pool-show                   Show information of a given pool.
lb-pool-stats                  Retrieve stats for a given pool.
lb-pool-update                 Update a given pool.
lb-vip-create                  Create a vip.
lb-vip-delete                  Delete a given vip.
lb-vip-list                    List vips that belong to a given tenant.
lb-vip-show                    Show information of a given vip.
lb-vip-update                  Update a given vip.
meter-label-create             Create a metering label for a given tenant.
meter-label-delete             Delete a given metering label.
meter-label-list               List metering labels that belong to a given tenant.
meter-label-rule-create        Create a metering label rule for a given label.
meter-label-rule-delete        Delete a given metering label rule.
meter-label-rule-list          List metering label rules that belong to a given label.
meter-label-rule-show          Show information of a given metering label rule.
meter-label-show               Show information of a given metering label.
net-create                     Create a network for a given tenant.
net-delete                     Delete a given network.
net-external-list              List external networks that belong to a given tenant.
net-gateway-connect            Add an internal network interface to a router.
net-gateway-create             Create a network gateway.
net-gateway-delete             Delete a given network gateway.
net-gateway-disconnect         Remove a network from a network gateway.
net-gateway-list               List network gateways for a given tenant.
net-gateway-show               Show information of a given network gateway.
net-gateway-update             Update the name for a network gateway.
net-list                       List networks that belong to a given tenant.
net-list-on-dhcp-agent         List the networks on a DHCP agent.
net-show                       Show information of a given network.
net-update                     Update network's information.
port-create                    Create a port for a given tenant.
port-delete                    Delete a given port.
port-list                      List ports that belong to a given tenant.
port-show                      Show information of a given port.
port-update                    Update port's information.
queue-create                   Create a queue.
queue-delete                   Delete a given queue.
queue-list                     List queues that belong to a given tenant.
queue-show                     Show information of a given queue.
quota-delete                   Delete defined quotas of a given tenant.
quota-list                     List quotas of all tenants who have non-default quota values.
quota-show                     Show quotas of a given tenant.
quota-update                   Define tenant's quotas not to use defaults.
router-create                  Create a router for a given tenant.
router-delete                  Delete a given router.
router-gateway-clear           Remove an external network gateway from a router.
router-gateway-set             Set the external network gateway for a router.
router-interface-add           Add an internal network interface to a router.
router-interface-delete        Remove an internal network interface from a router.
router-list                    List routers that belong to a given tenant.
router-list-on-l3-agent        List the routers on a L3 agent.
router-port-list               List ports that belong to a given tenant, with specified router.
router-show                    Show information of a given router.
router-update                  Update router's information.
security-group-create          Create a security group.
security-group-delete          Delete a given security group.
security-group-list            List security groups that belong to a given tenant.
security-group-rule-create     Create a security group rule.
security-group-rule-delete     Delete a given security group rule.
security-group-rule-list       List security group rules that belong to a given tenant.
security-group-rule-show       Show information of a given security group rule.
security-group-show            Show information of a given security group.
security-group-update          Update a given security group.
service-provider-list          List service providers.
subnet-create                  Create a subnet for a given tenant.
subnet-delete                  Delete a given subnet.
subnet-list                    List subnets that belong to a given tenant.
subnet-show                    Show information of a given subnet.
subnet-update                  Update subnet's information.
vpn-ikepolicy-create           Create an IKEPolicy.
vpn-ikepolicy-delete           Delete a given IKE Policy.
vpn-ikepolicy-list             List IKEPolicies that belong to a tenant.
vpn-ikepolicy-show             Show information of a given IKEPolicy.
vpn-ikepolicy-update           Update a given IKE Policy.
vpn-ipsecpolicy-create         Create an ipsecpolicy.
vpn-ipsecpolicy-delete         Delete a given ipsecpolicy.
vpn-ipsecpolicy-list           List ipsecpolicies that belong to a given tenant.
vpn-ipsecpolicy-show           Show information of a given ipsecpolicy.
vpn-ipsecpolicy-update         Update a given ipsec policy.
vpn-service-create             Create a VPNService.
vpn-service-delete             Delete a given VPNService.
vpn-service-list               List VPNService configurations that belong to a given tenant.
vpn-service-show               Show information of a given VPNService.
vpn-service-update             Update a given VPNService.
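
A sketch of building a simple tenant network with these subcommands; all names and the CIDR are placeholders, and the external network ID must come from neutron net-external-list:

neutron net-create private-net
neutron subnet-create private-net 10.0.0.0/24 --name private-subnet
neutron router-create router1
neutron router-interface-add router1 private-subnet
neutron router-gateway-set router1 <external-net-id>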

9.6 Cinder Commands

Cinder is the command line interface to the OpenStack Volume service.


usage: cinder
absolute-limits               Print a list of absolute limits for a user
availability-zone-list        List all the availability zones.
backup-create                 Creates a backup.
backup-delete                 Remove a backup.
backup-list                   List all the backups.
backup-restore                Restore a backup.
backup-show                   Show details about a backup.
create                        Add a new volume.
credentials                   Show user credentials returned from auth.
delete                        Remove volume(s).
encryption-type-create        Create a new encryption type for a volume type (Admin Only).
encryption-type-delete        Delete the encryption type for a volume type (Admin Only).
encryption-type-list          List encryption type information for all volume types (Admin Only).
encryption-type-show          Show the encryption type information for a volume type (Admin Only).
endpoints                     Discover endpoints that get returned from the authenticate services.
extend                        Attempt to extend the size of an existing volume.
extra-specs-list              Print a list of current 'volume types and extra specs' (Admin Only).
force-delete                  Attempt forced removal of volume(s), regardless of the state(s).
list                          List all the volumes.
metadata                      Set or Delete metadata on a volume.
metadata-show                 Show metadata of given volume.
metadata-update-all           Update all metadata of a volume.


migrate                       Migrate the volume to the new host.
qos-associate                 Associate qos specs with specific volume type.
qos-create                    Create a new qos specs.
qos-delete                    Delete a specific qos specs.
qos-disassociate              Disassociate qos specs from specific volume type.
qos-disassociate-all          Disassociate qos specs from all of its associations.
qos-get-association           Get all associations of specific qos specs.
qos-key                       Set or unset specifications for a qos spec.
qos-list                      Get full list of qos specs.
qos-show                      Get a specific qos specs.
quota-class-show              List the quotas for a quota class.
quota-class-update            Update the quotas for a quota class.
quota-defaults                List the default quotas for a tenant.
quota-show                    List the quotas for a tenant.
quota-update                  Update the quotas for a tenant.
quota-usage                   List the quota usage for a tenant.
rate-limits                   Print a list of rate limits for a user
readonly-mode-update          Update volume read-only access mode read_only.
rename                        Rename a volume.
reset-state                   Explicitly update the state of a volume.
service-disable               Disable the service.
service-enable                Enable the service.
service-list                  List all the services. Filter by host and binary.
show                          Show details about a volume.
snapshot-create               Add a new snapshot.
snapshot-delete               Remove a snapshot.
snapshot-list                 List all the snapshots.
snapshot-metadata             Set or Delete metadata of a snapshot.
snapshot-metadata-show        Show metadata of given snapshot.
snapshot-metadata-update-all  Update all metadata of a snapshot.
snapshot-rename               Rename a snapshot.
snapshot-reset-state          Explicitly update the state of a snapshot.
snapshot-show                 Show details about a snapshot.
transfer-accept               Accepts a volume transfer.
transfer-create               Creates a volume transfer.
transfer-delete               Undo a transfer.
transfer-list                 List all the transfers.

transfer-show                 Show details about a transfer.
type-create                   Create a new volume type.
type-delete                   Delete a specific volume type.
type-key                      Set or unset extra_spec for a volume type.
type-list                     Print a list of available 'volume types'.
upload-to-image               Upload volume to image service as image.
bash-completion               Print arguments for bash_completion.
help                          Display help about this program or one of its subcommands.
list-extensions               List all the os-api extensions that are available.
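
A sketch of the volume lifecycle using these subcommands together with nova volume-attach; the names, size and device path are assumptions:

cinder create --display-name myvolume 1
cinder list
nova volume-attach myinstance <volume-id> /dev/vdc
cinder snapshot-create --display-name mysnap <volume-id>
cinder delete <volume-id>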
