============================
== Keep Calm and Route On ==
============================
infra ramblings. all views are my own

Openstack in the Homelab, Part 2: Deploy

Kolla openstack virtualization kvm

Deploy Openstack on homelab equipment

In PART 1 we reviewed the prerequisites for getting Kolla-Ansible set up, installed, and configured, along with the associated networking.

In this part (part 2) we will go over editing our configuration files and inventory, and performing the initial deployment steps.

Configure our Inventory

Since Kolla-Ansible uses Ansible to perform the Kolla-Ansible Openstack (KAO) deployment, we first need to configure our Ansible inventory to tell Kolla which hosts to deploy to and what roles they will play. We are again following the steps in the Kolla-Ansible quickstart, with a few modifications.

If you are deploying to a single host (all-in-one) you can use the all-in-one inventory that was copied over in part 1 to complete the setup.

If you are doing a multinode deployment (like we are here) please take a look at the points made in the quickstart guide here. Once reviewed, we will edit the beginning of our multinode inventory to look similar to the below:

### /home/deploy-user/multinode
os01     ansible_connection=local    network_interface=br0 neutron_external_interface=veth2 ansible_python_interpreter=/usr/bin/python3
os02     ansible_ssh_host=192.168.15.45 network_interface=br0 neutron_external_interface=veth2 ansible_python_interpreter=/usr/bin/python3
os03     ansible_ssh_host=192.168.15.47 network_interface=br0 neutron_external_interface=veth2 ansible_python_interpreter=/usr/bin/python3

# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostnames must be resolvable from your deployment host
os01

# The above can also be specified as follows:
#control[01:03]     ansible_user=kolla

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
os[01:03]

[compute]
os[01:03]

[monitoring]
os01

# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1

[storage]
os[01:03]

[deployment]
localhost       ansible_connection=local

Let’s break down the above configuration. Strictly speaking, only the hostnames of our nodes and their inclusion in the control, network, compute, and monitoring sections are required.

At the top, we are defining some additional variables. To start, I am explicitly defining which IP to use to reach each node. This is done with ansible_ssh_host on os02/os03 and with ansible_connection=local on os01 (since we are running Kolla-Ansible from os01).

network_interface defines the interface that has an IP on the management network; it is what the nodes (and the services on those nodes) use to talk to each other.

neutron_external_interface defines the interface Neutron will use as its ingress/egress port. Note that in part 1, we added veth1 to our bridge interface. Here we connect the “other end” of that veth pair by handing veth2 to Neutron.

On the note of network interfaces, it is possible to split “internal” and “external” communication for the nodes onto two subnets, and it is possible to define these interfaces globally in the globals.yml file (which we will get to). I am defining them in the inventory so that, if the interfaces differ node-to-node, we can handle that per host. For example, if your hosts each have multiple physical NICs but with different interface names, you could have a set of interface configs that looks like:

os01     network_interface=ens3 neutron_external_interface=ens9
os02     network_interface=br0 neutron_external_interface=veth2
os03     network_interface=eth2 neutron_external_interface=eth0

It’s worth noting that if you are deploying to a single node (all-in-one), it may still make sense to include these variables in your inventory, even though you will only have one host to configure.
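A minimal sketch of what that could look like in the all-in-one inventory (the interface names below are the ones used throughout this series and are an assumption; adjust them to your own host):

### /home/deploy-user/all-in-one
localhost       ansible_connection=local network_interface=br0 neutron_external_interface=veth2 ansible_python_interpreter=/usr/bin/python3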

Finally, we define ansible_python_interpreter. This tells Ansible to use python3 instead of python2; if python2 is used, the deployment will fail. While you can change the default python interpreter in Ubuntu on each host, setting the variable here lets us skip those steps and tell Ansible explicitly which interpreter to use.
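If you would rather not repeat that variable on every host line, Ansible also lets you set it once for the whole inventory. A small sketch, assuming the same INI-style inventory as above:

### /home/deploy-user/multinode (alternative: set the interpreter once for all hosts)
[all:vars]
ansible_python_interpreter=/usr/bin/python3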

In the subsequent sections (designated by the [ ] brackets), we define which services we want to deploy to which hosts. I personally choose to deploy everything to all hosts, except monitoring, which I am only deploying to os01. Storage in our case needs to match our compute group, otherwise our NFS mounts will fail.

Once these items are defined, we can save and close our inventory. You do not need to edit the other group definitions; Kolla-Ansible uses them to map individual services onto the top-level groups above, and they generally do not need to change.
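For the curious, those untouched sections mostly look like the sketch below, pointing a service group at one of the groups we just filled in (the exact contents vary by Kolla-Ansible release, so treat this as illustrative):

# Further down the multinode inventory (left as-is)
[glance:children]
control

[nova:children]
control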

Test connectivity to our inventory nodes using Ansible

Now that our inventory is configured, we can perform a quick test to check and make sure that Ansible can communicate properly with our nodes:

deploy-user@os01:~$ ansible -i multinode all -m ping
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
os03 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
os02 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
localhost | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
os01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

As we can see above, we are able to properly communicate with our nodes.

Editing globals.yml

Now that our inventory is configured and tested, it’s time to edit our globals.yml.

Again, following the quickstart guide here, our globals.yml has the following variables set (or uncommented):

### /etc/kolla/globals.yml
# Not all lines are shown; this is a huge file. Changed lines are shown below

###############
# Kolla options
###############
kolla_base_distro: "ubuntu"
kolla_install_type: "source"

kolla_internal_vip_address: "192.168.15.50"
kolla_external_vip_address: "192.168.15.2"
kolla_external_fqdn: "openstack.kolla.local"

#############
# TLS options
#############
# To provide encryption and authentication on the kolla_external_vip_interface,
# TLS can be enabled.  When TLS is enabled, certificates must be provided to
# allow clients to perform authentication.
openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"
kolla_enable_tls_internal: "yes"
#kolla_enable_tls_external: "{{ kolla_enable_tls_internal if kolla_same_external_internal_vip | bool else 'no' }}"
kolla_enable_tls_external: "yes"
#kolla_certificates_dir: "{{ node_config }}/certificates"
#kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem"
#kolla_internal_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy-internal.pem"
kolla_admin_openrc_cacert: "/etc/ssl/certs/ca-certificates.crt"
kolla_copy_ca_into_containers: "yes"
kolla_verify_tls_backend: "yes"
#haproxy_backend_cacert: "{{ 'ca-certificates.crt' if kolla_base_distro in ['debian', 'ubuntu'] else 'ca-bundle.trust.crt' }}"
#haproxy_backend_cacert_dir: "/etc/ssl/certs"
kolla_enable_tls_backend: "yes"

################
# Region options
################
openstack_region_name: "us-west"

###################
# OpenStack options
###################
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"

A couple of notes:

You only need to define kolla_internal_vip_address. If you don’t define the kolla_external_vip_address or kolla_external_fqdn, Kolla will use the internal_vip_address for internal & external communications.

A bunch of TLS settings are enabled here. We will use a command to have Kolla auto-generate a CA and certificates for TLS communication. You can also disable TLS if you don’t want this setup (though everything will then pass in plain text, which is insecure), or you can bring your own certificates (we will talk about this too) if you already have an internal PKI you wish to use.
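If you do decide to skip TLS for a lab run, the relevant toggles are the same ones enabled above; a minimal sketch:

### /etc/kolla/globals.yml (only if you choose to skip TLS - not recommended)
kolla_enable_tls_internal: "no"
kolla_enable_tls_external: "no"
kolla_enable_tls_backend: "no"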

Generate Kolla Passwords

There will be a handful of passwords for services and users created by Kolla throughout the installation. These passwords can be easily generated like below:

deploy-user@os01:~$ kolla-genpwd

That’s it. The generated passwords will be stored in /etc/kolla/passwords.yml.
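If you ever need to look one of these up later (for example the Keystone admin password you will use to log in to Horizon), a quick grep does the trick:

deploy-user@os01:~$ grep keystone_admin_password /etc/kolla/passwords.yml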

Configure the NFS mounts for Cinder

Since we have enabled Cinder, and the Cinder NFS backend, we need to add our configuration for where our NFS exports exist.

To do this, we just need to edit /etc/kolla/config/nfs_shares (creating the /etc/kolla/config directory first if it does not already exist):

### /etc/kolla/config/nfs_shares
192.168.15.10:/srv/cinder

Obviously the contents will vary depending on the mount configuration for your NAS or NFS device.

For my Synology NAS, the following settings were required to get the exports to work:

### /etc/exports
/srv/cinder	192.168.15.45(rw,sync,no_wdelay,no_root_squash,sec=sys,anonuid=65534,anongid=65534)
/srv/cinder	192.168.15.47(rw,sync,no_wdelay,no_root_squash,sec=sys,anonuid=65534,anongid=65534)
/srv/cinder	192.168.15.49(rw,sync,no_wdelay,no_root_squash,sec=sys,anonuid=65534,anongid=65534)

Though YMMV depending on your NAS.
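Before moving on, a quick sanity check from the deployment host confirms the export is actually visible (this assumes the NFS client utilities are installed; the IP is the NAS from the nfs_shares file above):

deploy-user@os01:~$ sudo apt install -y nfs-common
deploy-user@os01:~$ showmount -e 192.168.15.10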

Generate (or Install) your Certificates

At this point, we have two directions we can go in (this is a choose your own adventure moment). We can have Kolla-Ansible generate a CA, and then sign the certificates used in our deployment automagically, or we can generate and install certificates created with some other form of internal PKI.

Personally, for the first test deployment, I just used the Kolla-generated CA and certificates to test the whole deployment. That said, since I have a SmallStep CA configured and deployed (separate blog post on that later), for my final deployment I decided to generate the certificates with SmallStep and install them myself.

Generate the Certificates the Kolla Way (easiest)

deploy-user@os01:~$ kolla-ansible -i multinode certificates

That’s it. The certificates get generated and put into /etc/kolla/certificates/. The only final step we have is to copy the Root CA certificate (/etc/kolla/certificates/ca/root.crt) and import it into our trusted CA store on os01. This step is important, or the deployment will fail.

To do this, we run a set of simple commands to copy the certificate, and then import it:

deploy-user@os01:~$ sudo cp /etc/kolla/certificates/ca/root.crt /usr/local/share/ca-certificates/kolla-root.crt
deploy-user@os01:~$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

Seeing 1 added tells us that our certificate has been imported successfully. Since we told Kolla-Ansible to use the CA bundle from our Ubuntu host (openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"), Kolla will copy this file to our other hosts & containers so that the new CA generated by Kolla is trusted everywhere.
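As an optional sanity check, we can ask OpenSSL to verify the new Kolla root CA against the trust store we just updated; it should simply report OK (the path below is the default location used by the certificates command above):

deploy-user@os01:~$ openssl verify /etc/kolla/certificates/ca/root.crt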

Generate the Certificates Yourself (slightly harder)

Since I am running SmallStep CA already, I will use that CA (and intermediate CA) to create certificates for Kolla-Ansible to use.

To start, I will need to copy my SmallStep Root CA certificate to os01:

user@workstation:~$ scp .step/certs/root_ca.crt deploy-user@os01:/home/deploy-user

Once copied, we can create our directory structure for the Kolla CA files, and install the SmallStep Root CA into the trusted certificate store for os01:

deploy-user@os01:~$ sudo cp root_ca.crt /usr/local/share/ca-certificates/ssca_root.crt
deploy-user@os01:~$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
deploy-user@os01:~$ mkdir -p /etc/kolla/certificates/ca    # create folders for the certificates and CA file
deploy-user@os01:~$ cp root_ca.crt /etc/kolla/certificates/ca/root.crt

At this point, our CA is now installed in the trusted CA bundle on os01 and is also installed in the ca folder for Kolla-Ansible.

The next step is to generate a certificate we can use for TLS. It is possible to generate separate certificates for internal and external communications, with different IPs and FQDNs for each. I am doing this in a slightly easier way, and generating a single certificate that will work for both.

user@workstation:~$ step certificate create openstack.kolla.local openstack.kolla.local.crt openstack.kolla.local.key --profile leaf --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --san openstack --san openstack.kolla.local --san 192.168.15.50 --san 192.168.15.2 --not-after 17520h --no-password --insecure

Please feel free to reference SmallStep’s documentation if you need additional arguments for this command, but basically we are generating a certificate with the following attributes:

  • CN = openstack.kolla.local
  • SAN = openstack.kolla.local, openstack
  • IP = 192.168.15.50, 192.168.15.2
  • expires in 2 years
  • no password for private key

Once generated, we will have the certificate and private key for use in our Openstack deployment. But before we can use them, we will need to build the full “chain” certificate, which includes the generated cert, the intermediate CA, and the root CA:

user@workstation:~$ cat openstack.kolla.local.crt > openstack.kolla.local.chain.crt
user@workstation:~$ cat .step/certs/intermediate_ca.crt >> openstack.kolla.local.chain.crt
user@workstation:~$ cat .step/certs/root_ca.crt >> openstack.kolla.local.chain.crt

Our openstack.kolla.local.chain.crt is the certificate we will use as the public certificate, while openstack.kolla.local.key is our private key.
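Before shipping anything to os01, it’s worth confirming that the leaf certificate actually validates against the SmallStep chain and carries the SANs we asked for (the -ext flag needs a reasonably recent OpenSSL; the paths are from my setup):

user@workstation:~$ openssl verify -CAfile .step/certs/root_ca.crt -untrusted .step/certs/intermediate_ca.crt openstack.kolla.local.crt
user@workstation:~$ openssl x509 -in openstack.kolla.local.crt -noout -subject -ext subjectAltName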

The next step is to copy these over to our os01 node:

user@workstation:~$ scp openstack.kolla.local.chain.crt deploy-user@os01:/home/deploy-user
user@workstation:~$ scp openstack.kolla.local.key deploy-user@os01:/home/deploy-user

Now that these files are on our control node (os01) we can copy them into the correct places.

But wait! Before we do that, there’s one more step: we need to concatenate the chain certificate and the private key into a single file for HAProxy to use:

deploy-user@os01:~$ cat openstack.kolla.local.chain.crt > /etc/kolla/certificates/haproxy.pem
deploy-user@os01:~$ cat openstack.kolla.local.key >> /etc/kolla/certificates/haproxy.pem

Now that that is complete, we can finish moving our certificates into place:

deploy-user@os01:~$ mv openstack.kolla.local.chain.crt /etc/kolla/certificates/backend-cert.pem
deploy-user@os01:~$ mv openstack.kolla.local.key /etc/kolla/certificates/backend-key.pem
deploy-user@os01:~$ cp /etc/kolla/certificates/haproxy.pem /etc/kolla/certificates/haproxy-internal.pem
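As a final sanity check, the HAProxy bundle should contain our three certificates (leaf, intermediate, root) followed by the single private key; a quick count confirms the concatenation worked:

deploy-user@os01:~$ grep -c "BEGIN CERTIFICATE" /etc/kolla/certificates/haproxy.pem   # expect 3
deploy-user@os01:~$ grep -c "BEGIN.*PRIVATE KEY" /etc/kolla/certificates/haproxy.pem  # expect 1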

Engines are a go! (Deploy)

At this point we have all of our configuration complete and are ready to start deploying KAO.

To begin, we will bootstrap the nodes to install all the necessary prerequisites:

deploy-user@os01:~$ kolla-ansible -i multinode bootstrap-servers

Depending on the speed of your nodes, this may take a little while to complete. Once completed, if no errors are observed, the nodes are ready for deployment. But before we deploy, we run the prechecks to catch anything else that might fail:

deploy-user@os01:~$ kolla-ansible -i multinode prechecks

Prechecks (at least for me) took longer than the bootstrapping. If you are deploying onto machines that previously had Docker installed, you may run into an issue that requires you to fully purge Docker and then pip3 install docker.
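If you do hit that, the fix looks roughly like the sketch below; the package names are an assumption, so check what is actually installed first and adjust accordingly:

deploy-user@os01:~$ dpkg -l | grep -i docker                  # see which Docker packages are present
deploy-user@os01:~$ sudo apt purge -y docker.io containerd    # adjust to the packages listed above
deploy-user@os01:~$ sudo pip3 install docker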

Assuming prechecks complete, it’s time to actually deploy KAO:

deploy-user@os01:~$ kolla-ansible -i multinode deploy

Be patient, this step takes the longest, since it is actually deploying and configuring Openstack. If you have not installed your certificates correctly, this step will fail, since our config sets up the containers and nodes to communicate over TLS.

Once completed (without warnings), you should have a fully running version of Openstack. Finally, we can run the post-deployment tool to generate our admin-openrc file for connecting to Openstack:

deploy-user@os01:~$ kolla-ansible -i multinode post-deploy

Once generated, we will need to install the Openstack client tools to test our connectivity:

deploy-user@os01:~$ sudo pip3 install python-openstackclient

Finally, we can test our connectivity to Openstack by listing all endpoints in the deployment:

deploy-user@os01:~$ source /etc/kolla/admin-openrc.sh
deploy-user@os01:~$ openstack endpoint list

If this is successful, a table of endpoints should be displayed.
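A couple of further optional checks confirm the core services registered correctly:

deploy-user@os01:~$ openstack service list
deploy-user@os01:~$ openstack compute service list
deploy-user@os01:~$ openstack network agent list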

At this point, we are deployed! Head over to PART 3 to complete a couple of post-deployment tasks to start using Openstack!