OpenStack single-host installation guide

This is intended to be a copy-paste guide to installing OpenStack “Kilo” on a single machine, with the ability to add more compute nodes afterwards.

Following this guide, you will install the full OpenStack distribution, not DevStack or other all-in-one packages that need to be reconfigured after a server reboot.

Kilo is the current OpenStack version, and so far the best (in my opinion).

The only things you need to set are the IP addresses of your local and external network interfaces and the passwords for the databases and services.

Some of my friends keep asking me for help, so here it is: everything you need to do, in the correct order and with all the necessary “.conf” files.

First, a little bit about the machine I'm going to work on.

It's an HP Compaq Elite 8000 with a quad-core Q8300 CPU, 16 GB of RAM (the maximum amount this machine takes), two disk drives (a 500 GB drive and a 320 GB drive), and two network interfaces (the on-board interface and an additional PCIe network card).

Ubuntu is installed on the 500 GB drive, which will serve as the OS root drive, VM storage and Glance storage; the 320 GB drive will serve as the Cinder storage.

The OpenStack architecture I'm going to use has all the services listening on one of those interfaces, so we can easily add a second compute node later without modifying configuration files or databases (see picture).

[Figure: basic network layout (my home network)]

To ease the installation and simplify the setup, create some entries in your hosts file:
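As an example, assuming the single host sits at 192.168.200.1 and a future compute node at 192.168.200.2 (hostnames and addresses here are my assumptions; use your own), the entries would look like this:

```
# /etc/hosts -- adjust addresses and hostnames to your network
192.168.200.1   controller
192.168.200.2   compute1
```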

 

Most of this tutorial is based on the official OpenStack documentation, so you can also use that as a guide in case you want a full-blown cloud, with services spread across separate machines.

We will set up a public flat network, so no routers and no private networks for this setup.

In my “Work Projects” section I will describe a full OpenStack installation featuring routers, private/public networks and floating IPs. But for our home test cloud, it's better this way.

The instances are directly connected to our home network and accessible from our local network, so with a local DNS we can reach them easily.

The security groups and the metadata network are fully functional, so no shortcuts there 🙂.

There will be a section describing the setup of a local caching DNS on my MikroTik router.

This tutorial DOES NOT cover Swift object storage (Swift needs at least 3 physical machines, and I only have one … so far).

To address a question I received at some point: yes, I could have written bash scripts to automate many of the steps in this article, but I believe doing them “by hand” will help you understand the relations between the services and their dependencies.

It's better to have a deep understanding of a complex system than to just “make it work as fast as possible”, because when you face a problem (and YOU WILL!!), you can take a logical approach to solving it. The knowledge gained executing each step will at least help you debug this complex installation in the future and give you a better understanding of how OpenStack works.

The first thing we need to do is install a fresh copy of Ubuntu Server 14.04 LTS (this is what I'm going to be using for all of my projects).

Installing Ubuntu is outside the scope of this article, so I'm assuming you know how to do that 😉.

If you plan to add another compute node, you need to give that node internet access to download packages and updates. My solution was to add the following lines to rc.local on the first server:
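A sketch of what those rc.local lines could look like: simple NAT from the internal cloud network out through the external interface. The interface names eth0 (external) and eth1 (internal) are assumptions; substitute your own.

```shell
# Lines added to /etc/rc.local, before "exit 0".
# eth0 = external (internet-facing) NIC, eth1 = internal NIC -- assumptions!
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```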

1.1 – First steps

After installation, let's SSH into our “server” and update everything so far (just to be sure):

Install NTP (it's important in case you decide to add other nodes; they really have to be in sync):

And set your timezone.
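The three steps above boil down to something like this (all commands in this guide are run as root, or prefix them with sudo):

```shell
# Bring the fresh install up to date
apt-get update && apt-get -y dist-upgrade

# NTP keeps this node (and any future compute nodes) in sync
apt-get install -y ntp

# Pick your timezone interactively
dpkg-reconfigure tzdata
```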

Add the official OpenStack repositories:

And run another update (some packages are newer in this repo):
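On Ubuntu 14.04 the Kilo packages come from the Ubuntu Cloud Archive; a sketch of both steps:

```shell
# Enable the Kilo Ubuntu Cloud Archive repository
apt-get install -y software-properties-common ubuntu-cloud-keyring
add-apt-repository cloud-archive:kilo

# Pull in the newer packages from the cloud archive
apt-get update && apt-get -y dist-upgrade
```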

Install MariaDB, and choose a good root password:
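The installer prompts for the root password during this step:

```shell
apt-get install -y mariadb-server python-mysqldb
```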

Edit /etc/mysql/my.cnf and modify/add the following lines:
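A sketch of the relevant [mysqld] settings, following the official Kilo guide; the bind-address value is an assumption (use the IP your services will connect to, or 127.0.0.1 if everything stays on this host):

```ini
[mysqld]
bind-address = 192.168.200.1
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
```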

Finish the MySQL installation with:

and run:

following the instructions to secure your MySQL server.
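The two commands in question:

```shell
service mysql restart
mysql_secure_installation
```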

Now, install RabbitMQ:

and configure it after the installation finishes:
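A minimal sketch: install the broker and change the default guest password (RABBIT_PASS is a placeholder; pick your own and use the same value in the service configs later):

```shell
apt-get install -y rabbitmq-server
rabbitmqctl change_password guest RABBIT_PASS
```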

1.2 – Openstack Identity service ( Keystone )

To avoid entering the root user and password for each database, let's create a helper file,

and add the MySQL root user and the password we chose earlier, like this:
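A sketch of that helper file, ~/.my.cnf, which the mysql client reads automatically (MYSQL_ROOT_PASS is a placeholder):

```ini
[client]
user = root
password = MYSQL_ROOT_PASS
```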

Now, create the keystone database :

and give access to it to the keystone user :
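With the helper file in place, both steps can be run straight from the shell (KEYSTONE_DBPASS is a placeholder):

```shell
mysql -e "CREATE DATABASE keystone;"
mysql -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
          IDENTIFIED BY 'KEYSTONE_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
          IDENTIFIED BY 'KEYSTONE_DBPASS';"
```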

Generate a random value to use as administrator token ( in keystone.conf as ADMIN_TOKEN ) :
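The usual way to generate the token; save the output, you will need it in keystone.conf:

```shell
openssl rand -hex 10
```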

Prevent keystone from starting automatically after installation (in Kilo it runs under Apache instead):

And install keystone and dependencies :
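Following the official Kilo guide, both steps look like this:

```shell
# The override stops the eventlet keystone service from auto-starting;
# Apache with mod_wsgi will serve keystone instead
echo "manual" > /etc/init/keystone.override

apt-get install -y keystone python-openstackclient apache2 \
    libapache2-mod-wsgi memcached python-memcache
```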

Edit /etc/keystone/keystone.conf and add in the [DEFAULT] section :

and in the [database] section :

In the [memcache] section, replace the memcache server with your own (mine is localhost):

In the [token] section , set :

In the [revoke] section set :

Optional, you can enable verbose mode, to aid debugging :
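Putting the sections above together, a sketch of the keystone.conf fragments per the Kilo guide (ADMIN_TOKEN and KEYSTONE_DBPASS are placeholders for your own values):

```ini
[DEFAULT]
admin_token = ADMIN_TOKEN
verbose = True

[database]
connection = mysql://keystone:KEYSTONE_DBPASS@localhost/keystone

[memcache]
servers = localhost:11211

[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token

[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
```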

Populate the identity service database :
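This is done as the keystone user:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```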

Configure the apache2 keystone vhost, and set the “ServerName” directive globally in /etc/apache2/apache2.conf.

Create /etc/apache2/sites-available/wsgi-keystone.conf with the following content :
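This is the vhost from the official Kilo guide, which I used essentially unchanged:

```apache
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LogLevel info
    ErrorLog /var/log/apache2/keystone-error.log
    CustomLog /var/log/apache2/keystone-access.log combined
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LogLevel info
    ErrorLog /var/log/apache2/keystone-error.log
    CustomLog /var/log/apache2/keystone-access.log combined
</VirtualHost>
```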

Enable identity service vhost :

Create the identity service directory structure :

Copy the WSGI components from openstack repo :

Adjust ownership and permissions :

To finalize the installation, restart apache2 webserver :

and remove keystone sqlite DB :
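The whole finalization sequence, from enabling the vhost to removing the SQLite database, per the Kilo guide:

```shell
# Enable the identity service virtual host
ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled

# Directory structure and WSGI components from the stable/kilo branch
mkdir -p /var/www/cgi-bin/keystone
curl "http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo" \
    | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin

# Ownership and permissions
chown -R keystone:keystone /var/www/cgi-bin/keystone
chmod 755 /var/www/cgi-bin/keystone/*

service apache2 restart
rm -f /var/lib/keystone/keystone.db
```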

For reference, this is my keystone.conf (without quotations). You can make a backup of the default, paste this conf and change the variables according to your setup.

Now , let’s create the service entity and api endpoints.

First , export the admin token created previously :

And configure the endpoint URL:

Create the service entity for the identity service:

Create the service api endpoint :
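A sketch of the three steps; ADMIN_TOKEN is the value you put in keystone.conf, and I use localhost since everything runs on one host (substitute the hostname from your hosts file if you prefer):

```shell
export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://localhost:35357/v2.0

openstack service create --name keystone \
    --description "OpenStack Identity" identity

openstack endpoint create \
    --publicurl http://localhost:5000/v2.0 \
    --internalurl http://localhost:5000/v2.0 \
    --adminurl http://localhost:35357/v2.0 \
    --region RegionOne identity
```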

Create projects, users and roles.

Create the admin project :

Create the admin user :

Create the admin role :

Add the admin role to the admin project and user :

Create the service project :

Create the user role :
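The whole sequence of project/user/role commands above, as in the Kilo guide (you will be prompted for the admin password):

```shell
openstack project create --description "Admin Project" admin
openstack user create --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin

openstack project create --description "Service Project" service

openstack role create user
```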

Create Openstack admin user environment script admin-openrc.sh :

Load admin-openrc.sh :
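A sketch of admin-openrc.sh (ADMIN_PASS is the password you chose for the admin user; localhost is my single-host assumption), followed by loading it:

```shell
# admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://localhost:35357/v3
```

Load it with `source admin-openrc.sh` (and unset OS_TOKEN / OS_URL first, so the token-based auth no longer interferes).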

Request an authentication token:

2.1 – Install the image service ( Glance )

Create the database for glance :

Add credentials for glance :

Source admin credentials to gain access to CLI commands :

Create glance admin user :

Add the admin role to glance user and service :

Create glance service entity :

Create the image service API endpoint :
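The Glance identity-side setup above, condensed into one sketch (GLANCE_DBPASS is a placeholder; localhost is my single-host assumption):

```shell
mysql -e "CREATE DATABASE glance;"
mysql -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';"

source admin-openrc.sh

openstack user create --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance \
    --description "OpenStack Image service" image
openstack endpoint create \
    --publicurl http://localhost:9292 \
    --internalurl http://localhost:9292 \
    --adminurl http://localhost:9292 \
    --region RegionOne image
```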

Install glance packages :

Edit /etc/glance/glance-api.conf and add in the [database] section :

In the [keystone_authtoken] and [paste_deploy] sections configure identity service access :

In the [glance_store] section configure the local filesystem store of image files :

In the [DEFAULT] section disable the notification driver ( until we install telemetry service ) :

and enable verbose mode in case we need debugging :

Edit /etc/glance/glance-registry.conf and configure in the [database] section :

In the [keystone_authtoken] and [paste_deploy] sections configure identity service access :

In the [DEFAULT] section disable the notification driver ( until we install telemetry service ) :

and enable verbose mode in case we need debugging :

Populate glance database :

And restart glance services :

Remove the default glance sqlite db :

For reference, these are my glance-api.conf and glance-registry.conf, without quotes.

You can use them, as long as you make a backup of the default files, and set your variables accordingly.

Note: these Glance configuration files DO contain the notification driver and RabbitMQ connection for metering. I said previously that we will use that when we install Ceilometer. It's your choice whether you install it or not.

Now, let's verify that Glance is functioning properly.

Add glance to your admin-openrc.sh:

and :

Create a local folder to store images when adding from CLI :

And download our first image ( a small test image from cirros ) :

Now, upload the image to glance :

Confirm that the image is indeed in glance image store :
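The verification steps, sketched with the CirrOS 0.3.4 image the Kilo guide uses:

```shell
mkdir /tmp/images
wget -P /tmp/images \
    http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

glance image-create --name "cirros-0.3.4-x86_64" \
    --file /tmp/images/cirros-0.3.4-x86_64-disk.img \
    --disk-format qcow2 --container-format bare \
    --visibility public --progress

# The new image should show up with status "active"
glance image-list
```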

3.1 – Nova and Neutron services 

So far we have pretty much followed the basic OpenStack installation guide.

Here is where things get different , mainly because we are trying to use all the services on the same machine .

So now we run all the nova services with the same config file, and each needs a little something from it.

To avoid confusion, I'm not going to treat every little piece separately; I'll just give you my conf file after installing all the necessary packages, and you set your variables accordingly.

Let’s begin with nova database :

Grant access to the database :

Source the admin credentials :

Create the service credentials :

Add the admin role to nova user :

Create the nova service entity :

And the nova API endpoint :
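The database and identity steps above, condensed into one sketch (NOVA_DBPASS is a placeholder; localhost is my single-host assumption):

```shell
mysql -e "CREATE DATABASE nova;"
mysql -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
          IDENTIFIED BY 'NOVA_DBPASS';"
mysql -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
          IDENTIFIED BY 'NOVA_DBPASS';"

source admin-openrc.sh

openstack user create --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova \
    --description "OpenStack Compute" compute
openstack endpoint create \
    --publicurl http://localhost:8774/v2/%\(tenant_id\)s \
    --internalurl http://localhost:8774/v2/%\(tenant_id\)s \
    --adminurl http://localhost:8774/v2/%\(tenant_id\)s \
    --region RegionOne compute
```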

Now comes the fun part. Install ALL the nova packages in one go:
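Since controller and compute live on the same machine, that means both the control-plane and compute packages at once; a sketch:

```shell
apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth \
    nova-novncproxy nova-scheduler python-novaclient \
    nova-compute sysfsutils
```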

Now , make a backup of /etc/nova/nova.conf , and use this config :

And populate the nova database :

As you probably noticed, there are a lot of config options we haven't got to yet (neutron).

That's OK; after we finish all our cloud configuration, we will restart all the nova and neutron services, and all the configuration will apply.

Having set up the nova service, let's move on to neutron (networking).

First , let’s create the database :

And add credentials for access :

Source the admin credentials :

Create the neutron user :

Add the admin role for neutron user :

Create the neutron service entity :

Create the networking API endpoint :

Let’s install ALL the neutron components :

Edit /etc/sysctl.conf and add at the end of the file :

and run the following to activate the configuration:
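The kernel settings in question, written as one step:

```shell
# Appended to /etc/sysctl.conf: enable forwarding, relax reverse-path filtering
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF

# Activate without rebooting
sysctl -p
```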

Make a backup of /etc/neutron/neutron.conf , edit and paste the following configuration :

As you can see, I've enabled LBaaS (Load Balancer as a Service), which we will configure and test later in this article; for now we just have the relevant settings enabled.

Next, let's configure the Modular Layer 2 plug-in (ML2).

Make a backup of /etc/neutron/plugins/ml2/ml2_conf.ini , and replace contents with the following :

Now, let’s populate the database :

And remove the sqlite.db :

The next step is to configure the network. Be extra careful, as you might lose network connectivity and have to resort to logging in directly on the server if anything goes wrong.

A good idea would be to have access to the “local cloud network”.

Edit /etc/network/interfaces :

Add br-ex :

and tie br-ex to the eth0 interface, followed by a reboot (my recommendation):
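A sketch of the whole bridge setup. The interface name eth0 and the 192.168.200.0/24 addressing are assumptions; adjust both to your external NIC and home network:

```shell
# Create the OVS external bridge
ovs-vsctl add-br br-ex

# /etc/network/interfaces: eth0 carries no IP, br-ex takes over the host address
cat > /etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback

# external interface, enslaved to br-ex
auto eth0
iface eth0 inet manual
    up ip link set dev $IFACE up
    up ip link set dev $IFACE promisc on
    down ip link set dev $IFACE down

# the bridge holds the host IP
auto br-ex
iface br-ex inet static
    address 192.168.200.1
    netmask 255.255.255.0
    gateway 192.168.200.254
    dns-nameservers 8.8.8.8
EOF

# Attach the physical interface to the bridge, then reboot
ovs-vsctl add-port br-ex eth0
reboot
```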

Depending on your network interface driver, you may need to disable generic receive offload (GRO) to achieve suitable throughput between your instances and the external network.

To temporarily disable GRO on the external network interface while testing your environment:

If this works, add the line to /etc/rc.local
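The temporary GRO toggle looks like this (eth0 is again my assumption for the external interface):

```shell
ethtool -K eth0 gro off
```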

Source the admin credentials :

To configure neutron l3 agent, make a backup of /etc/neutron/l3_agent.ini, and replace contents with :

To configure neutron dhcp agent , make a backup of /etc/neutron/dhcp_agent.ini, and replace contents with :

and create /etc/neutron/dnsmasq-neutron.conf with the following contents :

Kill all existing dnsmasq processes :
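So the agents start their own instances cleanly:

```shell
pkill dnsmasq
```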

To configure neutron metadata agent , make a backup of /etc/neutron/metadata_agent.ini , and replace contents with :

The line “metadata_proxy_shared_secret = SECRET” needs to match the “SECRET” set in /etc/nova/nova.conf.

Restart all neutron services :

Now, let’s verify that all the neutron agents are working ok

Source the admin credentials :

And let’s list the loaded extensions just to make sure that everything is going smooth :

Remember, earlier in this article we modified nova.conf without restarting and testing nova.

That's because nova and neutron work together, and without neutron, nova will not work properly.

Now it's time to restart and test the nova services:

And check if all the services work properly :

Ok, now before we do anything else, let’s install the Openstack dashboard, horizon :

Reload apache2 to activate the changes :

Access your new dashboard to test it:

Now, if you do not like the Ubuntu theme, you can uninstall it:

And restart apache2 webserver :

If everything went well, you should have a pretty much functional cloud by now. Just a few steps from completion, so keep going 🙂.

Let's create the initial external network … or not?

The situation with our home network is a bit special, because we have a router with a DHCP server, and that could interfere with neutron-dhcp-agent.

That's why I created a pool for all DHCP hosts on the router, starting at 192.168.200.10 and ending at 192.168.200.200.

The next IPs will be allocated to our cloud instances by neutron-dhcp-agent. So far I have had no conflicts or problems, but if you do, leave a comment and I will investigate.

So, let's create our flat network.

Source the admin credentials :

And add the network :

And the subnet :
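A sketch of the network and subnet commands. The names, the 192.168.200.0/24 range, the allocation pool (placed above the router's DHCP pool) and the gateway address are my assumptions; adapt them to your home network:

```shell
source admin-openrc.sh

neutron net-create public --shared \
    --provider:physical_network external \
    --provider:network_type flat \
    --router:external

neutron subnet-create public 192.168.200.0/24 --name public-subnet \
    --gateway 192.168.200.254 \
    --allocation-pool start=192.168.200.201,end=192.168.200.250 \
    --dns-nameserver 8.8.8.8
```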

Log in to horizon, and add a public key to be used with your virtual machines:

[Figure: OpenStack security groups]

Now we can launch our first instance !! YEY 😀 .

[Figure: OpenStack instances]

4.1 – Block storage , metering , orchestration and load-balancers

The next services we will install are Cinder (block storage), Ceilometer (metering) and Heat (orchestration).

We will start with cinder .

Create the initial database :

Grant access to the database to cinder user :

Source your admin credentials :

Create the cinder user :

Add the admin role to cinder user :

Create the cinder service entities :

Create block storage API endpoints :

Install the packages :

Make a backup of /etc/cinder/cinder.conf , and replace the contents with :

Populate the cinder database :

After installing all these packages and configuring everything, we still have a 320 GB HDD, in my case /dev/sdb.

We will use that as the Cinder storage volume. Let's proceed.

Create a partition on that drive (no filesystem is needed; LVM will manage the raw partition).

Create the LVM physical volume /dev/sdb1 :

Create the LVM volume group cinder-volumes:
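The two LVM steps, assuming the new partition is /dev/sdb1:

```shell
# Mark the partition as an LVM physical volume
pvcreate /dev/sdb1

# Create the volume group cinder expects by default
vgcreate cinder-volumes /dev/sdb1
```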

Remove initial cinder sqlite db :

And restart all cinder services :

Let's verify the operation.

Export block storage api:

Source your admin credentials :

List cinder services to verify operations :

Create a 1 GB volume:

Verify that the volume has been created and is available:
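The verification steps sketched together (the volume name is just an example):

```shell
echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh
source admin-openrc.sh

# Both cinder-scheduler and cinder-volume should report state "up"
cinder service-list

cinder create --name demo-volume1 1

# The new volume should show status "available"
cinder list
```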

 

By now you should have all the services for your private cloud enabled and working; all we need to do now is install telemetry and orchestration, and enable LBaaS.

Let's enable LBaaS first, since most of the configuration was already done when we installed neutron.

All we need to do now is install HAProxy:

and make a backup of /etc/neutron/lbaas_agent.ini, replacing the contents with:

Restart neutron-lbaas-agent :

And you should have access to the load-balancer service in horizon.

[Figure: OpenStack load balancer]

The neutron load balancer is installed; let's proceed further.

Let's install Ceilometer and its dependencies:

Edit /etc/mongodb.conf , and set the following :

By default, MongoDB creates several 1 GB journal files in the /var/lib/mongodb/journal directory. If you want to reduce the size of each journal file to 128 MB and limit total journal space consumption to 512 MB, assert the smallfiles key:
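A sketch of the relevant /etc/mongodb.conf settings; binding to 127.0.0.1 is my assumption for a single-host setup (the official guide binds to the controller's management IP):

```ini
bind_ip = 127.0.0.1
smallfiles = true
```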

Restart MongoDB and do a little cleanup:

Create the ceilometer database :

Don’t forget to set “CEILOMETER_DBPASS” to something secure.

Source the admin credentials :