============================
== Keep Calm and Route On ==
============================
infra ramblings. all views are my own

OpenStack in the Homelab, Part 1: Setup

Kolla openstack virtualization kvm

Deploy OpenStack on homelab equipment

With three KVM/libvirt hosts, I recently found myself wanting to migrate to something a little more feature-rich, and a little easier to manage than SSHing into each host to work with each VM.

Having just worked on a deployment of OpenStack (and Ceph) at work, I decided that deploying OpenStack was what I wanted to do. (oVirt is another great option for something like this, and one I definitely will write about at some point in the future.)

Unfortunately, unlike my work environment, I did not have six hypervisors, two orchestration hosts, and a slew of nodes for a Ceph cluster, all manageable via IPMI. If you do have all this hardware, definitely look at deploying OpenStack with MAAS and Juju. It’s awesome, and it makes deploying the host OS and all of the OpenStack/Ceph components super easy.

I, however, have two Optiplex 9020s and one Intel NUC. None of them has IPMI or vPro. I also don’t have the infrastructure to deploy Ceph. Instead, I will be using Kolla-Ansible, my three “hypervisors”, and a two-drive Synology NAS with an NFS export.

What is Kolla-Ansible?

To summarize, Kolla-Ansible deploys OpenStack using Ansible and the Kolla container images, with all of the OpenStack components running in containers (save a few pieces that have to run on the bare-metal host).

See the quickstart guide here.

Getting Started (and Assumptions)

For this guide, we will be using Ubuntu 18.04, the latest version of Ubuntu supported by Kolla-Ansible for OpenStack Ussuri as of writing. You can check the support matrix here to see which systems are supported by the latest version of Kolla-Ansible, and here for the Ussuri-specific matrix.

For brevity, the remainder of this guide will refer to Kolla-Ansible OpenStack as KAO.

In this tutorial, we have three hosts: os01, os02, and os03. We will run our KAO orchestration from os01.

For this to work, we will need a user on all of our hosts that has passwordless sudo and is able to SSH into all hosts using a public/private SSH key.

We will use the user deploy-user for this example, but feel free to use any username that you want.

Create the user, give it the powers of passwordless sudo

user@os01:~$ sudo useradd deploy-user -m -d /home/deploy-user -s /bin/bash
user@os01:~$ echo "deploy-user  ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/deploy-user

Become the new deployment user, create SSH keys

user@os01:~$ sudo su - deploy-user
deploy-user@os01:~$ ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/deploy-user/.ssh/id_ed25519):
Created directory '/home/deploy-user/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/deploy-user/.ssh/id_ed25519.
Your public key has been saved in /home/deploy-user/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:zXlod3kl1YB/DBZW7LCzR7lTt3uxH/VmMittgOQDFPY deploy-user@os01
The key's randomart image is:
+--[ED25519 256]--+
|        o.   .+=+|
|       ...  ..+.o|
|       .  E  o.*o|
|        .o.o  ++O|
|        S+*.o o=B|
|         .+o...*+|
|           . oooO|
|            . oOo|
|             o. +|
+----[SHA256]-----+

Once generated, you will need to create this user on each of your other hosts, and add the key from /home/deploy-user/.ssh/id_ed25519.pub to the /home/deploy-user/.ssh/authorized_keys file on the other hosts.

I would strongly recommend doing this with something like Ansible, which can make this much easier.

If you don’t want to go down the Ansible rabbit-hole (it’s really a good one to go down) some quick copypasta will also suffice.
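If you do reach for Ansible, the whole user-and-key step can be sketched as a short playbook. Treat this as a hedged sketch for this post’s setup (the deploy-user name, hostnames, and key path above are assumptions from this example), not a polished role:

```yaml
# distribute-deploy-user.yml -- a sketch using this post's names; adjust to taste.
- hosts: all
  become: true
  tasks:
    - name: Create the deployment user
      user:
        name: deploy-user
        shell: /bin/bash
        create_home: yes

    - name: Grant passwordless sudo via a sudoers drop-in
      copy:
        dest: /etc/sudoers.d/deploy-user
        content: "deploy-user  ALL=(ALL) NOPASSWD: ALL\n"
        mode: "0440"
        validate: visudo -cf %s

    - name: Authorize the deployment key
      authorized_key:
        user: deploy-user
        key: "{{ lookup('file', '/home/deploy-user/.ssh/id_ed25519.pub') }}"
```

Run it from os01 as a user that can already log in to the other hosts, e.g. ansible-playbook -i "os02,os03," distribute-deploy-user.yml -u user -k -K (the trailing comma makes Ansible treat the string as an inline inventory list).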

If everything is working, you should be able to SSH from os01 to os02 and os03 (in this example) as deploy-user without being prompted for a password.

Configure Networking

This section is as simple or as complicated as it needs to be. For KAO to deploy correctly, we need one network interface (NIC) with a static IP on our management network, and one with no IP configuration, which Neutron will use as the ingress/egress port for VMs running in OpenStack.

Just one problem: none of my hosts has two NICs. You’re probably thinking that I could just go out and buy a couple of cheap PCIe NICs (or, for the NUC, a USB 3.0 NIC adapter) and have NICs coming out of my ears, and you would be correct. Those are perfectly good solutions, but for such a simple deployment I felt I shouldn’t have to.

And that’s how we get to our veth & bridge configs. If you have two NICs, you can skip this section, since it pertains to getting our hosts working with a single NIC.

What is a veth?

Virtual Ethernet devices (“veths”) come in pairs and are used for lots of different things. In our case, we can think of the pair as the two ends of the same Ethernet cable: connect one end to one thing and the other end to another, and those two items are linked together.

In our case, we are going to build a bridge interface (which lets us plug lots of interfaces together, much like a switch), then connect our physical NIC to the bridge along with one end of our veth pair. The other end we will give to Neutron, which can use it as its ingress/egress interface.
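If you want to see a veth pair and a bridge in action before committing anything to config files, you can experiment in a throwaway network namespace. This is purely illustrative (the interface names mirror the ones we build below, and it assumes iproute2 and unprivileged user namespaces are available, which is the Ubuntu default); nothing here touches your real NICs or survives the command exiting:

```shell
# Create a scratch user+network namespace, build a veth pair and a bridge,
# plug one veth end into the bridge, and inspect the result. The namespace
# (and everything in it) vanishes when the shell exits.
unshare --user --map-root-user --net sh -c '
  ip link add veth1 type veth peer name veth2   # the two ends of the "cable"
  ip link add br0 type bridge                   # a software switch
  ip link set veth1 master br0                  # plug one end into the bridge
  ip link set veth1 up
  ip link set veth2 up                          # the free end, for Neutron later
  ip link show veth1                            # should report "master br0"
'
```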

Currently, if you’ve just done a clean installation of Ubuntu and set static IPs for your hosts, you probably have a netplan file (/etc/netplan/01-netcfg.yaml) that looks similar to this:

### /etc/netplan/01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      addresses:
        - 192.168.15.49/24
      gateway4: 192.168.15.1
      nameservers:
        addresses: [192.168.15.65]

If you didn’t set static IPs, feel free to checkout netplan.io or the above example, and get static IPs configured on your hosts.

Build a Bridge (and get over it)

To start, we are going to get a bridge interface built and configured. To do this, we simply update our netplan file to look similar to this:

### /etc/netplan/01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
  bridges:
    br0:
      addresses:
        - 192.168.15.49/24
      gateway4: 192.168.15.1
      nameservers:
        addresses: [192.168.15.65]
      interfaces:
        - eno1

Once configured, we just need to apply our configuration:

deploy-user@os01:~$ sudo netplan apply

Note: you can use netplan try to test your configuration and then hit ENTER to confirm it; if you make a mistake and lock yourself out, the config rolls back automatically after the timeout (120 seconds by default).

Once applied, we can check our interfaces with ip:

deploy-user@os01:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether c0:3f:d5:6e:61:b8 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 42:67:67:ad:82:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.15.49/24 brd 192.168.15.255 scope global br0
       valid_lft forever preferred_lft forever

Here we can see our IP has successfully moved from eno1 to our new bridge interface, br0.

Make some veth

This next step requires us to create a pair of veth interfaces. As of writing, Netplan does not offer a way to create these interfaces, but because Netplan’s backend (at least on Ubuntu Server) is systemd-networkd, we can use that to our advantage and create them another way.

Create a file /lib/systemd/network/25-veth-b1.netdev. This will serve as the file that defines our veth interfaces. From there, edit the file to contain the following:

[NetDev]
Name=veth1
Kind=veth
[Peer]
Name=veth2

Once added, we will need to restart systemd-networkd to find our new interfaces:

deploy-user@os01:~$ sudo systemctl restart systemd-networkd

Once this service has restarted, our final step is to define our veth interfaces in our Netplan config. Our final configuration will look something like this:

### /etc/netplan/01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
    veth1: {}
    veth2: {}
  bridges:
    br0:
      addresses:
        - 192.168.15.49/24
      gateway4: 192.168.15.1
      nameservers:
        addresses: [192.168.15.65]
      interfaces:
        - eno1
        - veth1

Finally, we reapply our Netplan config:

deploy-user@os01:~$ sudo netplan apply

And then check to make sure our interfaces show up as expected:

deploy-user@os01:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether c0:3f:d5:6e:61:b8 brd ff:ff:ff:ff:ff:ff
3: veth2@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether fe:cf:c9:f1:c4:dd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fccf:c9ff:fef1:c4dd/64 scope link
       valid_lft forever preferred_lft forever
4: veth1@veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether ae:80:0a:0b:34:c2 brd ff:ff:ff:ff:ff:ff
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 42:67:67:ad:82:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.15.49/24 brd 192.168.15.255 scope global br0
       valid_lft forever preferred_lft forever

All of our interfaces show up with a status of UP, which means we are ready to go.

Install Kolla-Ansible (and prerequisites)

Again, we are just following the initial setup guide from the KAO documentation, found here, so these steps are shown below with less explanation, since the installation documentation is already so good.

# Install Prereqs, Ansible & Kolla-Ansible
deploy-user@os01:~$ sudo apt-get install python3-dev libffi-dev gcc libssl-dev -y
deploy-user@os01:~$ sudo apt-get install python3-pip -y
deploy-user@os01:~$ sudo pip3 install -U pip #Update pip
deploy-user@os01:~$ sudo pip3 install 'ansible<2.10'
deploy-user@os01:~$ sudo pip3 install kolla-ansible
# Create Directories
deploy-user@os01:~$ sudo mkdir -p /etc/kolla
deploy-user@os01:~$ sudo chown $USER:$USER /etc/kolla
# Copy Kolla-Ansible Template Files
deploy-user@os01:~$ cp -r /usr/local/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
deploy-user@os01:~$ cp /usr/local/share/kolla-ansible/ansible/inventory/* .
# Initial Config for Ansible (if you are cutting and pasting, copy from the beginning of the cat command through EOF)
deploy-user@os01:~$ sudo mkdir -p /etc/ansible
deploy-user@os01:~$ cat <<EOF | sudo tee /etc/ansible/ansible.cfg
[defaults]
host_key_checking=False
pipelining=True
forks=100
deprecation_warnings=False
EOF

Great! If you have made it this far, you now have Kolla-Ansible fully installed, the initial config files and inventories copied into place, Ansible configured to do our bidding, and networking set up so that we can properly use it. At this point, it’s time to fill out our configuration files and inventories, which is coming in PART 2.