Openstack single-host installation guide

This is intended to be a copy-paste guide to installing Openstack “Kilo” on a single machine, with the ability to add additional compute nodes afterwards.

Following this guide, you will install the full Openstack distribution, not DevStack or other all-in-one packages that need to be reconfigured after a server reboot.

Kilo is the current Openstack release and, so far, the best one (in my opinion).

The only things you need to change are the IP addresses of your local and external network interfaces, and the passwords for the databases and services.

Some of my friends keep asking me for help, so here it is: everything you need to do, in the correct order and with all the necessary “.conf” files.

First, a little bit about the machine I’m going to work on.

It’s an HP Compaq Elite 8000 with a quad-core Q8300 CPU, 16 GB of RAM (the maximum this machine takes), two disk drives (a 500 GB drive and a 320 GB drive), and two network interfaces (the on-board interface and an additional PCI-e network card).

Ubuntu is installed on the 500 GB drive, which will serve as the OS root drive, VM storage and Glance storage; the 320 GB drive will serve as the Cinder storage.

The Openstack architecture I’m going to use has all the services listening on one of those interfaces, so we can easily add a second compute node later without modifying configuration files or databases (see picture).

[Diagram: Basic Network (my home network)]

To help with the installation and simplify the setup, create some entries in your hosts file:

127.0.0.1 localhost
127.0.0.1 cloud.local
10.0.0.1  cloud.local

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

 

Most of this tutorial is based on the official Openstack documentation, so you can also use that as a guide in case you want a full-blown cloud, with services spread across separate machines.

We will set up a public flat network, so there are no routers or private networks in this setup.

In my “Work Projects” section I will describe a full Openstack installation, featuring routers, private/public networks and floating IPs. But for our home test cloud, this way is better.

The instances are connected directly to our home network and reachable from our local network, so with a local DNS they are easy to use.

The security groups and metadata network are fully functional, so no shortcuts there 🙂.

There will be a section describing the setup of a local caching DNS on my MikroTik router.

This tutorial DOES NOT cover Swift object storage (Swift needs at least 3 physical machines, and I only have one… so far).

To address a question I received at some point: yes, I could have written bash scripts to automate many of the steps in this article, but I believe that doing them “by hand” will help you understand the relations between the services and their dependencies.

It’s better to have a thorough understanding of a complex system than to just “make it work as fast as possible”, because when you face a problem (and YOU WILL!!) you can take a logical approach to solving it. The knowledge gained executing each step will at least help you debug this complex installation in the future, and give you a better understanding of how Openstack works.

The first thing we need to do is install a fresh copy of Ubuntu Server 14.04 LTS (this is what I’m going to be using for all of my projects).

Installing Ubuntu is outside the scope of this article, so I’m assuming you know how to do that 😉.

If you do plan to add another compute node, you need to give that node internet access to download packages and updates. My solution was to add the following lines to rc.local on the first server:

iptables -A FORWARD -i eth1 -o eth0 -s 10.0.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A POSTROUTING -s 10.0.0.0/24 -t nat -j MASQUERADE

# where eth0 is your connection to the router and eth1 is the local cloud network
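
Note that these rules only forward traffic if IP forwarding is enabled in the kernel. We set net.ipv4.ip_forward=1 in /etc/sysctl.conf later on, in the neutron section, but if you bring the second node up before that point you can enable it by hand (a quick sketch, not part of the original rc.local):

# enable IPv4 forwarding right away; it becomes persistent later via /etc/sysctl.conf
sysctl -w net.ipv4.ip_forward=1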

1.1 – First steps

After installation, let’s SSH into our “server” and run an update of everything so far (just to be sure):

apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade

Install NTP (it’s important in case you decide to add other nodes; they really have to be in sync):

apt-get install ntp

And set your timezone.
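
On Ubuntu 14.04 the usual way is dpkg-reconfigure; the timezone below is only an example, use your own:

dpkg-reconfigure tzdata

# or non-interactively (example timezone):
echo "Europe/Bucharest" > /etc/timezone
dpkg-reconfigure -f noninteractive tzdata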

Add the official Openstack repositories:

apt-get install ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
  "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list

And run another update (some packages are newer in this repo):

apt-get update && apt-get dist-upgrade

Install MariaDB, and choose a good root password when prompted:

apt-get install mariadb-server python-mysqldb

Edit /etc/mysql/my.cnf and modify/add the following lines:

[mysqld]

bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Finish the MySQL setup with:

service mysql restart

and run :

mysql_secure_installation

following the instructions to secure your mysql server.

Now, install RabbitMQ:

apt-get install rabbitmq-server

and configure it after the installation finishes (replace ADMIN_PASS with a password of your choice):

rabbitmqctl add_user admin ADMIN_PASS 

rabbitmqctl set_permissions admin ".*" ".*" ".*"
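
Before moving on, you can optionally check that RabbitMQ is running and that the new user exists:

rabbitmqctl status
rabbitmqctl list_users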

1.2 – Openstack Identity service (Keystone)

To avoid entering the root user and password for each database, let’s make a file to help us:

touch .my.cnf

and add the MySQL root user and password we chose earlier, like this:

[client]
user=root
password=PASSWORD
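
Since this file stores the MySQL root password in plain text, it’s a good idea to restrict its permissions:

chmod 600 .my.cnf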

Now, create the keystone database :

CREATE DATABASE keystone;

and grant the keystone user access to it:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
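
With the .my.cnf from above in place, you can run these statements straight from the shell without being prompted for a password, for example with a here-document. The same pattern works for the glance, nova, neutron and cinder databases later on:

mysql <<'EOF'
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
EOF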

Generate a random value to use as the administration token (referenced below and in keystone.conf as ADMIN_TOKEN):

openssl rand -hex 10
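
You can keep the value in a shell variable, so it is easy to paste consistently into keystone.conf and the OS_TOKEN export later on:

ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN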

Prevent keystone from starting automatically after the package installation (in Kilo, keystone is served by Apache with mod_wsgi instead of running as a standalone service):

echo "manual" > /etc/init/keystone.override

Then install keystone and its dependencies:

apt-get install keystone python-openstackclient apache2 \
        libapache2-mod-wsgi memcached python-memcache

Edit /etc/keystone/keystone.conf and add in the [DEFAULT] section :

[DEFAULT]
...
admin_token = ADMIN_TOKEN

and in the [database] section :

[database]
...
connection = mysql://keystone:KEYSTONE_DBPASS@cloud.local/keystone

In the [memcache] section, point servers at your memcached host (mine is localhost):

[memcache]
...
servers = localhost:11211

In the [token] section , set :

[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token

In the [revoke] section set :

[revoke]
...
driver = keystone.contrib.revoke.backends.sql.Revoke

Optionally, you can enable verbose mode to aid debugging:

[DEFAULT]
...
verbose = True

Populate the identity service database :

su -s /bin/sh -c "keystone-manage db_sync" keystone

Configure the apache2 keystone vhost. First, set the “ServerName” directive globally in /etc/apache2/apache2.conf:

ServerName cloud.local

Create /etc/apache2/sites-available/wsgi-keystone.conf with the following content :

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    LogLevel info
    ErrorLog /var/log/apache2/keystone-error.log
    CustomLog /var/log/apache2/keystone-access.log combined
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    LogLevel info
    ErrorLog /var/log/apache2/keystone-error.log
    CustomLog /var/log/apache2/keystone-access.log combined
</VirtualHost>

Enable identity service vhost :

a2ensite wsgi-keystone.conf

Create the identity service directory structure :

mkdir -p /var/www/cgi-bin/keystone

Copy the WSGI components from the Openstack repo:

curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo \
  | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin

Adjust ownership and permissions :

chown -R keystone:keystone /var/www/cgi-bin/keystone
chmod 755 /var/www/cgi-bin/keystone/*

To finalize the installation, restart apache2 webserver :

service apache2 restart

and remove keystone sqlite DB :

rm -f /var/lib/keystone/keystone.db

For reference, this is my keystone.conf (without quotations). You can make a backup of the default, copy-paste this conf and change the variables according to your setup.

[DEFAULT]
verbose = False
admin_token = ADMIN_TOKEN
log_dir = /var/log/keystone
[assignment]
[auth]
[cache]
[catalog]
[credential]
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@cloud.local/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers = localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
[role]
[saml]
[signing]
[ssl]
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token
[trust]
[extra_headers]
Distribution = Ubuntu

Now, let’s create the service entity and API endpoints.

First, export the admin token created previously:

export OS_TOKEN=ADMIN_TOKEN

And configure the endpoint URL:

export OS_URL=http://cloud.local:35357/v2.0

Create the service entity for the identity service:

openstack service create \
  --name keystone --description "OpenStack Identity" identity


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | 4ddaae90388b4ebc9d252ec2252d8d10 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+

Create the identity service API endpoint:

openstack endpoint create \
  --publicurl http://cloud.local:5000/v2.0 \
  --internalurl http://cloud.local:5000/v2.0 \
  --adminurl http://cloud.local:35357/v2.0 \
  --region RegionOne \
  identity


+--------------+-----------------------------------+
| Field        | Value                             |
+--------------+-----------------------------------+
| adminurl     | http://cloud.local:35357/v2.0     |
| id           | 4a9ffc04b8eb4848a49625a3df0170e5  |
| internalurl  | http://cloud.local:5000/v2.0      |
| publicurl    | http://cloud.local:5000/v2.0      |
| region       | RegionOne                         |
| service_id   | 4ddaae90388b4ebc9d252ec2252d8d10  |
| service_name | keystone                          |
| service_type | identity                          |
+--------------+-----------------------------------+

Create projects, users and roles.

Create the admin project :

openstack project create --description "Admin Project" admin


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Admin Project                    |
| enabled     | True                             |
| id          | cf12a15c5ea84b019aec3dc45580896b |
| name        | admin                            |
+-------------+----------------------------------+

Create the admin user :

openstack user create --password-prompt admin
User Password:ADMIN_PASS
Repeat User Password:ADMIN_PASS


+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| email      | None                             |
| enabled    | True                             |
| id         | 4d411f2291f34941b30eef9bd797505a |
| name       | admin                            |
| username   | admin                            |
+------------+----------------------------------+

Create the admin role :

openstack role create admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Add the admin role to the admin project and user :

openstack role add --project admin --user admin admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Create the service project :

openstack project create --description "Service Project" service


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| enabled     | True                             |
| id          | 55cbd79c0c014c8a95534ebd16213ca1 |
| name        | service                          |
+-------------+----------------------------------+

Create the user role :

openstack role create user


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 9fe2ff9ee4384b1894a90878d3e92bab |
| name  | user                             |
+-------+----------------------------------+

Create the Openstack admin user environment script, admin-openrc.sh:

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_AUTH_TYPE=password
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://cloud.local:35357/v3
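
This file contains the admin password, so you may want to restrict its permissions as well:

chmod 600 admin-openrc.sh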

Load admin-openrc.sh :

source admin-openrc.sh

Request an authentication token to verify that everything works:

openstack token issue


+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2015-03-25T01:45:49.950092Z      |
| id         | cd4110152ac24bdeaa82e1443c910c36 |
| project_id | cf12a15c5ea84b019aec3dc45580896b |
| user_id    | 4d411f2291f34941b30eef9bd797505a |
+------------+----------------------------------+

2.1 – Install the image service (Glance)

Create the database for glance :

CREATE DATABASE glance;

Add credentials for glance :

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';

Source admin credentials to gain access to CLI commands :

source admin-openrc.sh

Create the glance user:

openstack user create --password-prompt glance
User Password:GLANCE_PASS
Repeat User Password:GLANCE_PASS


+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 1dc206e084334db2bee88363745da014 |
| name     | glance                           |
| username | glance                           |
+----------+----------------------------------+

Add the admin role to the glance user in the service project:

openstack role add --project service --user glance admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Create glance service entity :

openstack service create --name glance \
  --description "OpenStack Image service" image


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | 178124d6081c441b80d79972614149c6 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

Create the image service API endpoint :

openstack endpoint create \
  --publicurl http://cloud.local:9292 \
  --internalurl http://cloud.local:9292 \
  --adminurl http://cloud.local:9292 \
  --region RegionOne \
  image


+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://cloud.local:9292          |
| id           | 805b1dbc90ab47479111102bc6423313 |
| internalurl  | http://cloud.local:9292          |
| publicurl    | http://cloud.local:9292          |
| region       | RegionOne                        |
| service_id   | 178124d6081c441b80d79972614149c6 |
| service_name | glance                           |
| service_type | image                            |
+--------------+----------------------------------+

Install glance packages :

apt-get install glance python-glanceclient

Edit /etc/glance/glance-api.conf and add in the [database] section :

[database]
...
connection = mysql://glance:GLANCE_DBPASS@cloud.local/glance

In the [keystone_authtoken] and [paste_deploy] sections configure identity service access :

[keystone_authtoken]
...
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS
 
[paste_deploy]
...
flavor = keystone

In the [glance_store] section configure the local filesystem store of image files :

[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

In the [DEFAULT] section disable the notification driver ( until we install telemetry service ) :

[DEFAULT]
...
notification_driver = noop

and enable verbose mode in case we need debugging :

[DEFAULT]
...
verbose = True

Edit /etc/glance/glance-registry.conf and configure in the [database] section :

[database]
...
connection = mysql://glance:GLANCE_DBPASS@cloud.local/glance

In the [keystone_authtoken] and [paste_deploy] sections configure identity service access :

[keystone_authtoken]
...
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS
 
[paste_deploy]
...
flavor = keystone

In the [DEFAULT] section disable the notification driver ( until we install telemetry service ) :

[DEFAULT]
...
notification_driver = noop

and enable verbose mode in case we need debugging :

[DEFAULT]
...
verbose = True

Populate glance database :

su -s /bin/sh -c "glance-manage db_sync" glance

And restart glance services :

service glance-registry restart
service glance-api restart

Remove the default glance sqlite db :

rm -f /var/lib/glance/glance.sqlite

For reference, these are my glance-registry.conf and glance-api.conf (in that order, without quotes).

You can use them, as long as you make a backup of the default files and set your variables accordingly.

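# /etc/glance/glance-registry.conf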
[DEFAULT]
verbose = False
notification_driver = messagingv2
[oslo_messaging_rabbit]
rpc_backend = rabbit
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS
bind_host = 0.0.0.0
bind_port = 9191
log_file = /var/log/glance/registry.log
backlog = 4096
api_limit_max = 1000
limit_param_default = 25
[oslo_policy]
[database]
connection = mysql://glance:GLANCE_DBPASS@cloud.local/glance
backend = sqlalchemy
[keystone_authtoken]
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[profiler]
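
# /etc/glance/glance-api.conf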
[DEFAULT]
verbose = False
notification_driver = messagingv2
[oslo_messaging_rabbit]
rpc_backend = rabbit
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS
bind_host = 0.0.0.0
bind_port = 9292
log_file = /var/log/glance/api.log
backlog = 4096
registry_host = 0.0.0.0
registry_port = 9191
registry_client_protocol = http
delayed_delete = False
scrub_time = 43200
scrubber_datadir = /var/lib/glance/scrubber
image_cache_dir = /var/lib/glance/image-cache/
[oslo_policy]
[database]
connection = mysql://glance:GLANCE_DBPASS@cloud.local/glance
backend = sqlalchemy
[oslo_concurrency]
[keystone_authtoken]
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[profiler]
[task]
[taskflow_executor]
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
swift_store_auth_version = 2
swift_store_auth_address = 127.0.0.1:5000/v2.0/
swift_store_user = jdoe:jdoe
swift_store_key = a86850deb2742ec3cb41518e26aa2d89
swift_store_container = glance
swift_store_create_container_on_put = False
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200
s3_store_host = s3.amazonaws.com
s3_store_access_key = <20-char AWS access key>
s3_store_secret_key = <40-char AWS secret key>
s3_store_bucket = <lowercased 20-char aws access key>glance
s3_store_create_bucket_on_put = False
sheepdog_store_address = localhost
sheepdog_store_port = 7000
sheepdog_store_chunk_size = 64

Note: these glance configuration files DO contain the notification driver and rabbit connection for metering. I said previously that we would only use those once we install Ceilometer; it’s your choice whether you install it or not.

Now, let’s verify that glance is functioning properly.

Add the glance API version to your admin-openrc.sh:

echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh

and :

source admin-openrc.sh

Create a local folder to store images when adding them from the CLI:

mkdir /home/images

And download our first image (a small CirrOS test image):

wget -P /home/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Now, upload the image to glance :

glance image-create --name "cirros-0.3.4-x86_64" --file /home/images/cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --visibility public --progress

[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619     |
| container_format | bare                                 |
| created_at       | 2015-03-26T16:52:10Z                 |
| disk_format      | qcow2                                |
| id               | 38047887-61a7-41ea-9b49-27987d5e8bb9 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.4-x86_64                  |
| owner            | ae7a98326b9c455588edd2656d723b9d     |
| protected        | False                                |
| size             | 13200896                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2015-03-26T16:52:10Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+

Confirm that the image is indeed in the glance image store:

glance image-list

+--------------------------------------+---------------------+
| ID                                   | Name                |
+--------------------------------------+---------------------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros-0.3.4-x86_64 |
+--------------------------------------+---------------------+
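
The same two steps work for any other cloud image. For example, an official Ubuntu 14.04 cloud image can be added like this (the download URL is the usual cloud-images.ubuntu.com location and may change over time):

wget -P /home/images http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

glance image-create --name "ubuntu-14.04-x86_64" --file /home/images/trusty-server-cloudimg-amd64-disk1.img \
  --disk-format qcow2 --container-format bare --visibility public --progress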

3.1 – Nova and Neutron services 

So far we have pretty much followed the official Openstack installation guide.

Here is where things get different, mainly because we are running all the services on the same machine.

All the nova services now share the same config file, and each of them needs a little something from it.

To avoid confusion, I’m not going to treat every little piece separately; I’ll just give you my conf file after installing all the necessary packages, and you set your variables accordingly.

Let’s begin with the nova database:

CREATE DATABASE nova;

Grant access to the database :

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

Source the admin credentials :

source admin-openrc.sh

Create the service credentials :

openstack user create --password-prompt nova
User Password:NOVA_PASS
Repeat User Password:NOVA_PASS


+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 8e0b71d732db4bfba04943a96230c8c0 |
| name     | nova                             |
| username | nova                             |
+----------+----------------------------------+

Add the admin role to nova user :

openstack role add --project service --user nova admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Create the nova service entity :

openstack service create --name nova \
  --description "OpenStack Compute" compute


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 060d59eac51b4594815603d75a00aba2 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

And the nova API endpoint :

openstack endpoint create \
  --publicurl http://cloud.local:8774/v2/%\(tenant_id\)s \
  --internalurl http://cloud.local:8774/v2/%\(tenant_id\)s \
  --adminurl http://cloud.local:8774/v2/%\(tenant_id\)s \
  --region RegionOne \
  compute
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| adminurl     | http://cloud.local:8774/v2/%(tenant_id)s  |
| id           | 4e885d4ad43f4c4fbf2287734bc58d6b          |
| internalurl  | http://cloud.local:8774/v2/%(tenant_id)s  |
| publicurl    | http://cloud.local:8774/v2/%(tenant_id)s  |
| region       | RegionOne                                 |
| service_id   | 060d59eac51b4594815603d75a00aba2          |
| service_name | nova                                      |
| service_type | compute                                   |
+--------------+-------------------------------------------+

Now comes the fun part. Install ALL the nova packages in one go:

apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
  nova-novncproxy nova-scheduler python-novaclient \
  nova-compute sysfsutils

Now, make a backup of /etc/nova/nova.conf, and use this config:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.1
vncserver_proxyclient_address = 10.0.0.1
vnc_enabled = True
vncserver_listen = 0.0.0.0
novncproxy_base_url = http://192.168.200.2:6080/vnc_auto.html
verbose = False
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
allow_resize_to_same_host=True
scheduler_default_filters=AllHostsFilter
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
[database]
connection = mysql://nova:NOVA_DBPASS@cloud.local/nova
[oslo_messaging_rabbit]
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS
[keystone_authtoken]
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS
[glance]
host = cloud.local
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = SECRET
url = http://cloud.local:9696
auth_strategy = keystone
admin_auth_url = http://cloud.local:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS

And populate the nova database :

su -s /bin/sh -c "nova-manage db sync" nova

As you have probably noticed, there are a lot of config options in there that we haven’t gotten to yet (neutron).

That’s OK; after we finish the whole cloud configuration we will restart all the nova and neutron services, and all of our configuration will apply.

Having set up the nova service, let’s move on to neutron (networking).

First , let’s create the database :

CREATE DATABASE neutron;

And add credentials for access :

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';

Source the admin credentials :

source admin-openrc.sh

Create the neutron user :

openstack user create --password-prompt neutron
User Password:NEUTRON_PASS
Repeat User Password:NEUTRON_PASS


+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | ab67f043d9304017aaa73d692eeb4945 |
| name     | neutron                          |
| username | neutron                          |
+----------+----------------------------------+

Add the admin role for neutron user :

openstack role add --project service --user neutron admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Create the neutron service entity :

openstack service create --name neutron \
  --description "OpenStack Networking" network


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | f71529314dab4a4d8eca427e701d209e |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

Create the networking API endpoint :

openstack endpoint create \
  --publicurl http://cloud.local:9696 \
  --adminurl http://cloud.local:9696 \
  --internalurl http://cloud.local:9696 \
  --region RegionOne \
  network


+--------------+-----------------------------------+
| Field        | Value                             |
+--------------+-----------------------------------+
| adminurl     | http://cloud.local:9696           |
| id           | 04a7d3c1de784099aaba83a8a74100b3  |
| internalurl  | http://cloud.local:9696           |
| publicurl    | http://cloud.local:9696           |
| region       | RegionOne                         |
| service_id   | f71529314dab4a4d8eca427e701d209e  |
| service_name | neutron                           |
| service_type | network                           |
+--------------+-----------------------------------+

Let’s install ALL the neutron components :

apt-get install neutron-server python-neutronclient \
  neutron-plugin-ml2 neutron-plugin-openvswitch-agent  \
  neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent \
  neutron-lbaas-agent

Edit /etc/sysctl.conf and add the following at the end of the file:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

and run the following to activate the configuration:

sysctl -p
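
You can confirm the new values took effect:

sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter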

Make a backup of /etc/neutron/neutron.conf, then edit it and paste in the following configuration:

[DEFAULT]
verbose = False
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router,lbaas
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://cloud.local:8774/v2
dhcp_agents_per_network = 2
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
[database]
connection = mysql://neutron:NEUTRON_DBPASS@cloud.local/neutron
[nova]
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = $state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS

As you can see, I’ve enabled LBaaS (Load Balancer as a Service), which we will configure and test later in this article; for now we just have the relevant settings in place.

Next, let’s configure the Modular Layer 2 (ML2) plugin.

Make a backup of /etc/neutron/plugins/ml2/ml2_conf.ini, and replace its contents with the following:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.0.1
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre

Now, let’s populate the database :

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

And remove the sqlite.db :

rm -f /var/lib/neutron/neutron.sqlite

The next step is to configure the network. Be extra careful, as you might lose network connectivity and have to resort to logging in directly on the server console if anything goes wrong.

A good idea is to also have access to the “local cloud network”.

Edit /etc/network/interfaces :

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface

auto eth0
iface eth0 inet manual
	up ip address add 0/0 dev $IFACE
	up ip link set $IFACE up
	down ip link set $IFACE down

auto br-ex
iface br-ex inet static
	address 192.168.200.2
	netmask 255.255.255.0
	gateway 192.168.200.1
	dns-nameservers 192.168.200.1


auto p3p1
iface p3p1 inet static
	address 10.0.0.1
	netmask 255.255.255.224

Add br-ex :

ovs-vsctl add-br br-ex

and tie br-ex to the eth0 interface, followed by a reboot (my recommendation):

ovs-vsctl add-port br-ex eth0 && reboot
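
After the reboot, it’s worth checking that the bridge came up correctly and still has external connectivity before going any further (a quick sanity check, not part of the official guide):

# br-ex should list eth0 as one of its ports
ovs-vsctl show

# br-ex should hold the 192.168.200.2 address, and the gateway should answer
ip addr show br-ex
ping -c 3 192.168.200.1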

Depending on your network interface driver, you may need to disable generic receive offload (GRO) to achieve suitable throughput between your instances and the external network.

To temporarily disable GRO on the external network interface while testing your environment:

ethtool -K eth0 gro off

If this works, add the line to /etc/rc.local

Source the admin credentials :

source admin-openrc.sh

To configure the neutron L3 agent, make a backup of /etc/neutron/l3_agent.ini and replace its contents with:

[DEFAULT]
verbose = False
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True

To configure the neutron DHCP agent, make a backup of /etc/neutron/dhcp_agent.ini and replace its contents with:

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True
enable_isolated_metadata = True
enable_metadata_network = True
dhcp_domain = cloud.local
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

and create /etc/neutron/dnsmasq-neutron.conf with the following contents :

dhcp-option-force=26,1454

Kill all existing dnsmasq processes :

pkill dnsmasq

To configure the neutron metadata agent, make a backup of /etc/neutron/metadata_agent.ini and replace its contents with:

[DEFAULT]
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:5000/v2.0
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
nova_metadata_ip = 10.0.0.1
metadata_proxy_shared_secret = SECRET

The line “metadata_proxy_shared_secret = SECRET” needs to match the “SECRET” set in /etc/nova/nova.conf.
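
As with the admin token earlier, a random value works well for this shared secret, for example:

openssl rand -hex 10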

Restart all neutron services :

cd /etc/init/; for i in $(ls neutron-* | cut -d \. -f 1 | xargs); do sudo service $i restart; done

Now, let’s verify that all the neutron agents are working OK:

neutron agent-list
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host        | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| 2d6feac4-f1b2-43f0-9238-898bb924fe15 | Loadbalancer agent | cloud.local | :-)   | True           | neutron-lbaas-agent       |
| 74720eca-611f-4dc4-a11c-73b787683cfa | Metadata agent     | cloud.local | :-)   | True           | neutron-metadata-agent    |
| 92d168d7-a780-466d-b358-6b31e56212df | Open vSwitch agent | cloud.local | :-)   | True           | neutron-openvswitch-agent |
| a566a30f-33bb-4b49-8c83-14983aa8f0ec | L3 agent           | cloud.local | :-)   | True           | neutron-l3-agent          |
| ad398c46-5c4f-4ec6-8801-0a2d41a23290 | DHCP agent         | cloud.local | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

Source the admin credentials :

source admin-openrc.sh

And let’s list the loaded extensions just to make sure everything is going smoothly:

neutron ext-list


+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| security-group        | security-group                                |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| provider              | Provider Network                              |
| agent                 | agent                                         |
| quotas                | Quota management support                      |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| l3-ha                 | HA Router extension                           |
| multi-provider        | Multi Provider Network                        |
| external-net          | Neutron external network                      |
| router                | Neutron L3 Router                             |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

Remember, earlier in this article we modified nova.conf without restarting and testing nova.

That’s because nova and neutron work together, and without neutron it would not work properly.

Now it’s time to restart and test the nova services:

cd /etc/init/; for i in $(ls nova-* | cut -d \. -f 1 | xargs); do sudo service $i restart; done

And check if all the services work properly :

nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | cloud.local | internal | enabled | up    | 2015-11-29T19:43:56.000000 | -               |
| 2  | nova-scheduler   | cloud.local | internal | enabled | up    | 2015-11-29T19:44:03.000000 | -               |
| 3  | nova-cert        | cloud.local | internal | enabled | up    | 2015-11-29T19:43:55.000000 | -               |
| 4  | nova-consoleauth | cloud.local | internal | enabled | up    | 2015-11-29T19:44:03.000000 | -               |
| 5  | nova-compute     | cloud.local | nova     | enabled | up    | 2015-11-29T19:44:02.000000 | -               |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

OK, now before we do anything else, let’s install the Openstack dashboard, Horizon:

apt-get install openstack-dashboard

Reload apache2 to activate the changes :

service apache2 reload

Access your new dashboard to test it:

http://cloud.local/horizon

Now, if you do not like the Ubuntu theme, you can uninstall it:

apt-get remove --auto-remove openstack-dashboard-ubuntu-theme

And restart apache2 webserver :

service apache2 restart

If everything went well so far, you should have a pretty much functional cloud by now. Just a few steps from completion, so keep going 🙂.

Let’s create the initial external network .. or not ??

The situation with our home network is a bit special, because we have a router with a DHCP server, and that could interfere with neutron-dhcp-agent.

That’s why I created a pool on the router for all DHCP hosts, starting at 192.168.200.10 and ending at 192.168.200.200.

The IPs above that range will be allocated to our cloud instances by neutron-dhcp-agent. So far I have had no conflicts or problems, but if you do, leave a comment and I will investigate.

So, let’s create our flat network.

Source the admin credentials :

source admin-openrc.sh

And add the network :

neutron net-create ext-net --router:external \
  --provider:physical_network external --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 54cd044c64d5408b83f843d63624e0d8     |
+---------------------------+--------------------------------------+

And the subnet :

neutron subnet-create ext-net 192.168.200.0/24 --name ext-subnet \
  --allocation-pool start=192.168.200.201,end=192.168.200.250 \
  --gateway 192.168.200.1

Created a new subnet:
+-------------------+----------------------------------------------------------+
| Field             | Value                                                    |
+-------------------+----------------------------------------------------------+
| allocation_pools  | {"start": "192.168.200.201", "end": "192.168.200.250"}   |
| cidr              | 192.168.200.0/24                                         |
| dns_nameservers   |                                                          |
| enable_dhcp       | False                                                    |
| gateway_ip        | 192.168.200.1                                            |
| host_routes       |                                                          |
| id                | 9159f0dc-2b63-41cf-bd7a-289309da1391                     |
| ip_version        | 4                                                        |
| ipv6_address_mode |                                                          |
| ipv6_ra_mode      |                                                          |
| name              | ext-subnet                                               |
| network_id        | 893aebb9-1c1e-48be-8908-6b947f3237b3                     |
| tenant_id         | 54cd044c64d5408b83f843d63624e0d8                         |
+-------------------+----------------------------------------------------------+
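
You can double-check the result with:

neutron net-list
neutron subnet-list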

Log in to Horizon, and add a public key to be used with your virtual machines:

[Screenshot: Openstack security groups]

Now we can launch our first instance!! YEY 😀.

[Screenshot: Openstack instances]

4.1 – Block storage, metering, orchestration and load balancers

The next services we will install are Cinder (block storage), Ceilometer (metering) and Heat (orchestration).

We will start with Cinder.

Create the initial database :

CREATE DATABASE cinder;

Grant the cinder user access to the database:

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';

Source your admin credentials :

source admin-openrc.sh

Create the cinder user :

openstack user create --password-prompt cinder
User Password:CINDER_PASS
Repeat User Password:CINDER_PASS


+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 881ab2de4f7941e79504a759a83308be |
| name     | cinder                           |
| username | cinder                           |
+----------+----------------------------------+

Add the admin role to cinder user :

openstack role add --project service --user cinder admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Create the cinder service entities :

openstack service create --name cinder \
  --description "OpenStack Block Storage" volume


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 1e494c3e22a24baaafcaf777d4d467eb |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 16e038e449c94b40868277f1d801edb5 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

Create block storage API endpoints :

openstack endpoint create \
  --publicurl http://cloud.local:8776/v2/%\(tenant_id\)s \
  --internalurl http://cloud.local:8776/v2/%\(tenant_id\)s \
  --adminurl http://cloud.local:8776/v2/%\(tenant_id\)s \
  --region RegionOne \
  volume


+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| adminurl     | http://cloud.local:8776/v2/%(tenant_id)s |
| id           | d1b7291a2d794e26963b322c7f2a55a4         |
| internalurl  | http://cloud.local:8776/v2/%(tenant_id)s |
| publicurl    | http://cloud.local:8776/v2/%(tenant_id)s |
| region       | RegionOne                                |
| service_id   | 1e494c3e22a24baaafcaf777d4d467eb         |
| service_name | cinder                                   |
| service_type | volume                                   |
+--------------+------------------------------------------+
openstack endpoint create \
  --publicurl http://cloud.local:8776/v2/%\(tenant_id\)s \
  --internalurl http://cloud.local:8776/v2/%\(tenant_id\)s \
  --adminurl http://cloud.local:8776/v2/%\(tenant_id\)s \
  --region RegionOne \
  volumev2
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| adminurl     | http://cloud.local:8776/v2/%(tenant_id)s |
| id           | 097b4a6fc8ba44b4b10d4822d2d9e076         |
| internalurl  | http://cloud.local:8776/v2/%(tenant_id)s |
| publicurl    | http://cloud.local:8776/v2/%(tenant_id)s |
| region       | RegionOne                                |
| service_id   | 16e038e449c94b40868277f1d801edb5         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
+--------------+------------------------------------------+

Install the packages :

apt-get install cinder-api cinder-scheduler python-cinderclient qemu lvm2

Make a backup of /etc/cinder/cinder.conf, and replace the contents with:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
my_ip = 10.0.0.1
enabled_backends = lvm
glance_host = cloud.local
control_exchange = cinder
notification_driver = messagingv2
[oslo_messaging_rabbit]
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS
[database]
connection = mysql://cinder:CINDER_DBPASS@cloud.local/cinder
[keystone_authtoken]
auth_uri = http://cloud.local:5000
auth_url = http://cloud.local:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lock/cinder
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

Populate the cinder database :

su -s /bin/sh -c "cinder-manage db sync" cinder

After installing all these packages and configuring everything, we still have the 320 GB HDD, in my case /dev/sdb.

We will use that as the cinder storage volume. Let’s proceed.

Create a single partition spanning that drive; there is no need to put a filesystem on it, since LVM will use the raw partition directly.
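
If /dev/sdb is empty, one quick way to create that partition is with parted. This is just a sketch, and it assumes /dev/sdb really is the disk you want to wipe:

# new partition table and one partition spanning the whole disk
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 100%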

Create the LVM physical volume /dev/sdb1 :

pvcreate /dev/sdb1

  Physical volume "/dev/sdb1" successfully created

Create the LVM volume group cinder-volumes:

vgcreate cinder-volumes /dev/sdb1

  Volume group "cinder-volumes" successfully created
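
You can verify both with the usual LVM tools:

pvs
vgs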

Remove initial cinder sqlite db :

rm -f /var/lib/cinder/cinder.sqlite

And restart all cinder services :

cd /etc/init/; for i in $(ls cinder-* | cut -d \. -f 1 | xargs); do sudo service $i restart; done

Let’s verify the operation ..

Export the block storage API version:

echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh

Source your admin credentials :

source admin-openrc.sh

List cinder services to verify operations :

cinder service-list
+------------------+------------+------+---------+-------+----------------------------+------------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason  |
+------------------+------------+------+---------+-------+----------------------------+------------------+
| cinder-scheduler | cloud.local | nova | enabled |   up  | 2014-10-18T01:30:54.000000 |       None      |
| cinder-volume    | cloud.local | nova | enabled |   up  | 2014-10-18T01:30:57.000000 |       None      |
+------------------+------------+------+---------+-------+----------------------------+------------------+

Create a 1 GB volume:

cinder create --name demo-volume1 1


+---------------------------------------+--------------------------------------+
|                Property               |                Value                 |
+---------------------------------------+--------------------------------------+
|              attachments              |                  []                  |
|           availability_zone           |                 nova                 |
|                bootable               |                false                 |
|          consistencygroup_id          |                 None                 |
|               created_at              |      2015-04-21T23:46:08.000000      |
|              description              |                 None                 |
|               encrypted               |                False                 |
|                   id                  | 6c7a3d28-e1ef-42a0-b1f7-8d6ce9218412 |
|                metadata               |                  {}                  |
|              multiattach              |                False                 |
|                  name                 |             demo-volume1             |
|      os-vol-tenant-attr:tenant_id     |   ab8ea576c0574b6092bb99150449b2d3   |
|   os-volume-replication:driver_data   |                 None                 |
| os-volume-replication:extended_status |                 None                 |
|           replication_status          |               disabled               |
|                  size                 |                  1                   |
|              snapshot_id              |                 None                 |
|              source_volid             |                 None                 |
|                 status                |               creating               |
|                user_id                |   3a81e6c8103b46709ef8d141308d4c72   |
|              volume_type              |                 None                 |
+---------------------------------------+--------------------------------------+

Verify that the volume has been created and is available :

cinder list


+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |     Name     | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 6c7a3d28-e1ef-42a0-b1f7-8d6ce9218412 | available | demo-volume1 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

 

By now, you should have all the services for your private cloud enabled and working . All we need to do now is install telemetry and orchestration , and enable LBaaS .

Let's enable LBaaS first, since most of the configuration was already done when we installed neutron .

All we need to do now is install haproxy :

apt-get install haproxy
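
If you want to keep the original lbaas agent configuration around , copy it somewhere first ( the .orig suffix is just an example ) :

cp /etc/neutron/lbaas_agent.ini /etc/neutron/lbaas_agent.ini.orig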

Then replace the contents of /etc/neutron/lbaas_agent.ini with :

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
[haproxy]
user_group = haproxy
send_gratuitous_arp = 3

Restart neutron-lbaas-agent :

service neutron-lbaas-agent restart
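
Before looking in Horizon , you can check that the agent registered itself with neutron ( with the admin credentials sourced ) :

neutron agent-list

A "Loadbalancer agent" entry should appear in the list , with ":-)" in the alive column .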

And you should also have access to the Load Balancer service in Horizon .

Openstack load balancer

Neutron load balancer is installed, let’s proceed further .

Let’s install ceilometer and dependencies :

apt-get install mongodb-server mongodb-clients python-pymongo \
    ceilometer-agent-compute

Edit /etc/mongodb.conf , and set the following :

bind_ip = 10.0.0.1

By default, MongoDB creates several 1 GB journal files in the /var/lib/mongodb/journal directory. If you want to reduce the size of each journal file to 128 MB and limit total journal space consumption to 512 MB, set the smallfiles key :

smallfiles = true

Restart mongodb and do a little cleanup :

service mongodb stop
rm -Rf /var/lib/mongodb/journal/prealloc.*
service mongodb start
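
A quick way to confirm mongodb came back up and is listening on the address we just set ( 10.0.0.1 in my case ) :

netstat -plnt | grep 27017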

Create the ceilometer database :

mongo --host cloud.local --eval '
  db = db.getSiblingDB("ceilometer");
  db.addUser({user: "ceilometer",
  pwd: "CEILOMETER_DBPASS",
  roles: [ "readWrite", "dbAdmin" ]})'

MongoDB shell version: 2.4.x
connecting to: cloud.local:27017/test
{
 "user" : "ceilometer",
 "pwd" : "72f25aeee7ad4be52437d7cd3fc60f6f",
 "roles" : [
  "readWrite",
  "dbAdmin"
 ],
 "_id" : ObjectId("5489c22270d7fad1ba631dc3")
}

Don’t forget to set “CEILOMETER_DBPASS” to something secure.

Source the admin credentials :

source admin-openrc.sh

And create the ceilometer service user :

openstack user create --password-prompt ceilometer
User Password:CEILOMETER_PASS
Repeat User Password:CEILOMETER_PASS


+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | b7657c9ea07a4556aef5d34cf70713a3 |
| name     | ceilometer                       |
| username | ceilometer                       |
+----------+----------------------------------+

Add the admin role to ceilometer user :

openstack role add --project service --user ceilometer admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Create the ceilometer service entity:

openstack service create --name ceilometer \
  --description "Telemetry" metering


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Telemetry                        |
| enabled     | True                             |
| id          | 3405453b14da441ebb258edfeba96d83 |
| name        | ceilometer                       |
| type        | metering                         |
+-------------+----------------------------------+

And create the Telemetry module API endpoint :

openstack endpoint create \
  --publicurl http://cloud.local:8777 \
  --internalurl http://cloud.local:8777 \
  --adminurl http://cloud.local:8777 \
  --region RegionOne \
  metering


+--------------+------------------------------------+
| Field        | Value                              |
+--------------+------------------------------------+
| adminurl     | http://cloud.local:8777            |
| id           | d3716d85b10d4e60a67a52c6af0068cd   |
| internalurl  | http://cloud.local:8777            |
| publicurl    | http://cloud.local:8777            |
| region       | RegionOne                          |
| service_id   | 3405453b14da441ebb258edfeba96d83   |
| service_name | ceilometer                         |
| service_type | metering                           |
+--------------+------------------------------------+

Install the rest of ceilometer packages :

apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \
  ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier \
  python-ceilometerclient

Generate a random value , to use as the metering secret :

openssl rand -hex 10
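
If you don't want to copy the value by hand , you can keep it in a shell variable for the next steps ( the variable name is arbitrary ) :

METERING_SECRET=$(openssl rand -hex 10)
echo $METERING_SECRET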

Edit /etc/ceilometer/ceilometer.conf , and set in the [database] section :

[database]
...
connection = mongodb://ceilometer:CEILOMETER_DBPASS@cloud.local:27017/ceilometer

In the [DEFAULT] and [oslo_messaging_rabbit] sections :

[DEFAULT]
...
rpc_backend = rabbit
 
[oslo_messaging_rabbit]
...
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS

In the [DEFAULT] and [keystone_authtoken] sections :

[DEFAULT]
...
auth_strategy = keystone
 
[keystone_authtoken]
...
auth_uri = http://cloud.local:5000/v2.0
identity_uri = http://cloud.local:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS

In the [service_credentials] section, configure the service credentials :

[service_credentials]
...
os_auth_url = http://cloud.local:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = internalURL
os_region_name = RegionOne

In the [publisher] section, configure the metering secret we generated earlier ( replace SECRET with the value returned by the openssl command ) :

[publisher]
...
telemetry_secret = SECRET
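
If you kept the secret in the METERING_SECRET variable from earlier , sed can fill it in for you ( this assumes the file contains the SECRET placeholder , like my reference conf below ) :

sed -i "s|telemetry_secret = SECRET|telemetry_secret = $METERING_SECRET|" /etc/ceilometer/ceilometer.conf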

Optionally, if you need it for debugging purposes , enable verbose mode in the [DEFAULT] section :

[DEFAULT]
...
verbose = True

This is my ceilometer.conf , for reference :

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
verbose = True
[database]
connection = mongodb://ceilometer:CEILOMETER_DBPASS@cloud.local:27017/ceilometer
[keystone_authtoken]
auth_uri = http://cloud.local:5000/v2.0
identity_uri = http://cloud.local:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS
[publisher]
telemetry_secret = SECRET
[service_credentials]
os_auth_url = http://cloud.local:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = internalURL
os_region_name = RegionOne

Finally, restart all ceilometer services to activate the configuration :

cd /etc/init/; for i in $(ls ceilometer-* | cut -d \. -f 1 | xargs); do sudo service $i restart; done

Now Telemetry should be enabled for all your Openstack services , as long as you used the .conf files provided in this article .

All the services should generate graphs and stats , as you use them .
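
For a quick sanity check from the CLI ( with the admin credentials sourced ) , list the available meters ; an empty list at this point usually just means no metered operation has happened yet :

ceilometer meter-list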

If you used your own .conf files , here is what you need to look out for in each one :

In nova.conf :

[DEFAULT]
...
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2

In glance-api.conf and glance-registry.conf :

[DEFAULT]
...
notification_driver = messagingv2
rpc_backend = rabbit
 
[oslo_messaging_rabbit]
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS

In cinder.conf :

[DEFAULT]
...
control_exchange = cinder
notification_driver = messagingv2
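
If you had to change any of these files now , restart the affected services so the notification settings are picked up , for example :

cd /etc/init/; for i in $(ls nova-* glance-* cinder-* | cut -d \. -f 1 | xargs); do sudo service $i restart; done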

If all is working , you can proceed to the next and last step .

4.2 – Orchestration ( Heat ) install and configure

Let's start by creating the database for heat ( run these statements from the MySQL client , e.g. mysql -u root -p ) :

CREATE DATABASE heat;

Grant access to the heat database :

GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
  IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \
  IDENTIFIED BY 'HEAT_DBPASS';

Source your admin credentials :

source admin-openrc.sh

Create the service credentials :

openstack user create --password-prompt heat
User Password:HEAT_PASS
Repeat User Password:HEAT_PASS


+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 7fd67878dcd04d0393469ef825a7e005 |
| name     | heat                             |
| username | heat                             |
+----------+----------------------------------+

Add admin role to heat user :

openstack role add --project service --user heat admin


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | cd2cb9a39e874ea69e5d4b896eb16128 |
| name  | admin                            |
+-------+----------------------------------+

Create the heat_stack_owner role :

openstack role create heat_stack_owner


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | c0a1cbee7261446abc873392f616de87 |
| name  | heat_stack_owner                 |
+-------+----------------------------------+
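
The official guide also adds this role to the tenant and user that will be creating stacks ; assuming you created the demo project and user earlier in this guide :

openstack role add --project demo --user demo heat_stack_owner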

Create the heat_stack_user role :

openstack role create heat_stack_user


+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | e01546b1a81c4e32a6d14a9259e60154 |
| name  | heat_stack_user                  |
+-------+----------------------------------+

Create the heat and heat-cfn service entities :

openstack service create --name heat \
  --description "Orchestration" orchestration


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Orchestration                    |
| enabled     | True                             |
| id          | 031112165cad4c2bb23e84603957de29 |
| name        | heat                             |
| type        | orchestration                    |
+-------------+----------------------------------+


openstack service create --name heat-cfn \
  --description "Orchestration"  cloudformation


+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Orchestration                    |
| enabled     | True                             |
| id          | 297740d74c0a446bbff867acdccb33fa |
| name        | heat-cfn                         |
| type        | cloudformation                   |
+-------------+----------------------------------+

Create the orchestration service API endpoints :

openstack endpoint create \
  --publicurl http://cloud.local:8004/v1/%\(tenant_id\)s \
  --internalurl http://cloud.local:8004/v1/%\(tenant_id\)s \
  --adminurl http://cloud.local:8004/v1/%\(tenant_id\)s \
  --region RegionOne \
  orchestration


+--------------+------------------------------------------+
|        Field | Value                                    |
+--------------+------------------------------------------+
| adminurl     | http://cloud.local:8004/v1/%(tenant_id)s |
| id           | f41225f665694b95a46448e8676b0dc2         |
| internalurl  | http://cloud.local:8004/v1/%(tenant_id)s |
| publicurl    | http://cloud.local:8004/v1/%(tenant_id)s |
| region       | RegionOne                                |
| service_id   | 031112165cad4c2bb23e84603957de29         |
| service_name | heat                                     |
| service_type | orchestration                            |
+--------------+------------------------------------------+


openstack endpoint create \
  --publicurl http://cloud.local:8000/v1 \
  --internalurl http://cloud.local:8000/v1 \
  --adminurl http://cloud.local:8000/v1 \
  --region RegionOne \
  cloudformation


+--------------+-----------------------------------+
| Field        | Value                             |
+--------------+-----------------------------------+
| adminurl     | http://cloud.local:8000/v1        |
| id           | f41225f665694b95a46448e8676b0dc2  |
| internalurl  | http://cloud.local:8000/v1        |
| publicurl    | http://cloud.local:8000/v1        |
| region       | RegionOne                         |
| service_id   | 297740d74c0a446bbff867acdccb33fa  |
| service_name | heat-cfn                          |
| service_type | cloudformation                    |
+--------------+-----------------------------------+

Install heat packages :

apt-get install heat-api heat-api-cfn heat-engine python-heatclient

Edit /etc/heat/heat.conf , and add / modify the [database] section :

[database]
...
connection = mysql://heat:HEAT_DBPASS@cloud.local/heat

In the [DEFAULT] and [oslo_messaging_rabbit] sections configure RabbitMQ access :

[DEFAULT]
...
rpc_backend = rabbit
 
[oslo_messaging_rabbit]
...
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS

In the [keystone_authtoken] and [ec2authtoken] sections, configure Identity service access :

[keystone_authtoken]
...
auth_uri = http://cloud.local:5000/v2.0
identity_uri = http://cloud.local:35357
admin_tenant_name = service
admin_user = heat
admin_password = HEAT_PASS
 
[ec2authtoken]
...
auth_uri = http://cloud.local:5000/v2.0

In the [DEFAULT] section, configure the metadata and wait condition URLs :

[DEFAULT]
...
heat_metadata_server_url = http://cloud.local:8000
heat_waitcondition_server_url = http://cloud.local:8000/v1/waitcondition

In the [DEFAULT] section, configure information about the heat Identity service domain :

[DEFAULT]
...
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat_user_domain

Replace HEAT_DOMAIN_PASS with the password you chose for the admin user of the heat user domain in the Identity service.

Optionally, to help with debugging , enable verbose mode in the [DEFAULT] section :

[DEFAULT]
...
verbose = True

This is my heat.conf , you can use it as long as you change the variables :

[DEFAULT]
rpc_backend = rabbit
heat_metadata_server_url = http://cloud.local:8000
heat_waitcondition_server_url = http://cloud.local:8000/v1/waitcondition
stack_user_domain_id = DOMAIN_ID
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
[database]
connection = mysql://heat:HEAT_DBPASS@cloud.local/heat
[keystone_authtoken]
auth_uri = http://cloud.local:5000/v2.0
identity_uri = http://cloud.local:35357
admin_tenant_name = service
admin_user = heat
admin_password = HEAT_PASS
[ec2authtoken]
auth_uri = http://cloud.local:5000/v2.0
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = cloud.local
rabbit_userid = admin
rabbit_password = ADMIN_PASS

 

Source your admin credentials :

source admin-openrc.sh

Create the heat domain in the Identity service ( the command should also print the ID of the new domain , which is what goes into stack_user_domain_id if you use my heat.conf above ) :

heat-keystone-setup-domain \
  --stack-user-domain-name heat_user_domain \
  --stack-domain-admin heat_domain_admin \
  --stack-domain-admin-password HEAT_DOMAIN_PASS

Populate the heat database :

su -s /bin/sh -c "heat-manage db_sync" heat

Restart all the heat services :

cd /etc/init/; for i in $(ls heat-* | cut -d \. -f 1 | xargs); do sudo service $i restart; done
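
You can also do a quick check from the CLI ( with the admin credentials sourced ) ; at this point it should just return an empty stack list :

heat stack-list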

Now you should see the Orchestration panel in Horizon , and you can start creating stacks .

Openstack Orchestration

Stacks

That’s it !! I hope this guide is useful , and if anyone finds an error, please leave a comment .

Reference link :

http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_preface.html
