Explore this guide detailing the installation of OpenStack on an Ubuntu 24.04 server using a private network.


Kolla Ansible provides production-ready containers (here, Docker) and deployment tools for operating OpenStack clouds. This guide explains how to install a single-host (all-in-one) OpenStack cloud on an Ubuntu 24.04 server using a private network. We specify values and variables that can easily be adapted to other networks. We do not address encryption for the individual OpenStack services; we will use an HTTPS reverse proxy to access the dashboard and SPICE to connect to desktop VMs. This setup requires two physical NICs in the computer you will use.

Preamble

  • We will install OpenStack on a private network on a single host using a clean Ubuntu Linux 24.04 server installation. We will use Kolla Ansible to add floating IPs directly to the private network’s IP range.
  • We will not address HTTPS termination on our private network. We will use another host to configure an HTTPS reverse proxy in front of the HTTP-configured OpenStack install, and investigate separately how to integrate it, particularly regarding the SECURE_PROXY_SSL_HEADER, as detailed at https://docs.openstack.org/security-guide/dashboard/https-hsts-xss-ssrf.html.
  • Our installation uses a /openstack folder for creating the disk images and volumes.
    • If you have a NAS where you want to store your VM disk images, use this method: Cinder can have more than one destination location in its /etc/kolla/config/nfs_shares (see the example after this list).
  • OpenStack will use your host’s cores and memory as needed, so it is recommended that you dedicate the host to OpenStack only.
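
For example, a /etc/kolla/config/nfs_shares with more than one destination lists one share per line; the nas.example.com entry below is a hypothetical NAS export, shown only for illustration:

# one NFS share per line, in host:path form
10.30.0.20:/openstack/nfs
nas.example.com:/volume1/openstack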

How to Use this Guide

Some of the files listed below are available here.

Once you have obtained the source markdown file, open it in an editor and perform a find and replace for the different values you will need to customize for your setup. This will allow you to copy/paste directly from the source file.

Values to adjust (in no particular order):

  • eno1 is the host’s primary NIC.
    • 10.30.0.20 is the DHCP (or manual) IP of that primary NIC.
  • enp1s0 is the secondary NIC of the host that should not have an IP and will be used for neutron.
  • kaosu is the user we are using for installation.
  • /openstack is the location where we prepare the installation (in a kaos directory) and store Cinder’s NFS disks.
  • 10.30.0.1 is your network’s gateway.
  • 10.30.0.100 is the start IP for the OpenStack Floating IPs range.
  • 10.30.0.199 is the end IP for the OpenStack Floating IPs range.
  • 10.30.0.254 is the OpenStack internal VIP address.
  • os.example.com is the URL for OpenStack behind our HTTPS upgrading reverse proxy.

We do not cover user choices such as Cinder backends or the disk size/memory/core count/quota values in the my-init-runonce.sh script or later command lines.

Most steps in the “Post-installation” section require you to select your preferred user/project/IPs; adapt as needed in those steps.

Requirements

  • Hardware:
    • Make sure virtualization is enabled in your host’s BIOS.
    • Enough cores on the host to run your VMs (OpenStack is a cloud operating system; learn more about it and its different services at https://www.openstack.org/).
    • At least 8GB RAM to run OpenStack, recommended a lot more to run the VMs themselves.
    • We recommend at least 40GB of disk storage on the disk where the containers will be installed and a lot of extra storage for the VMs and disk images. We will use Cinder and NFS to store VM images.
    • 2x physical NICs are needed. Their configuration is in /etc/netplan/50-cloud-init.yaml (a sample netplan sketch is shown after this list). Here:
      • eno1 is the primary NIC, with IP 10.30.0.20
        • Make sure to have dhcp6: false in the netplan for that section.
      • enp1s0 is the secondary NIC, which should not have an IP assigned.
        • Disable its DHCP, set dhcp4: false and dhcp6: false for enp1s0
      • To apply changes to the configuration file: sudo netplan apply
  • A Linux host, here, an Ubuntu 24.04 server.
    • With ssh set up.
    • With system upgrades done, as needed.
    • With a sudo-capable kaosu user for our OpenStack Kolla Ansible installation: sudo adduser kaosu; sudo usermod -aG sudo kaosu
    • a /openstack directory for installing the different components: sudo mkdir /openstack
  • Networking with routing capabilities (i.e., a home router connecting to the internet). For our private network:
    • The router’s gateway is 10.30.0.1.
    • We will use a static IP for the primary NIC (here eno1 on 10.30.0.20).
    • We reserved a range of IPs on the subnet that are unused, consecutive, and not assigned to the router’s DHCP range. We will use an IP range of 100 IPs: 10.30.0.100–10.30.0.199.
    • We reserved one unused IP for the OpenStack connection; here 10.30.0.254.
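
A minimal /etc/netplan/50-cloud-init.yaml sketch matching the requirements above, assuming a static IP for eno1 on a /24 subnet (adapt interface names and addresses to your network):

network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      addresses: [10.30.0.20/24]
      routes:
        - to: default
          via: 10.30.0.1
      nameservers:
        addresses: [10.30.0.1]
    enp1s0:
      dhcp4: false
      dhcp6: false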

Pre-installation steps

Hardware Enablement (HWE, optional)

To enable the later 6.x kernel:

sudo apt-get install -y linux-generic-hwe-24.04

sudo reboot

Docker installation

As the kaosu user (latest instructions from https://docs.docker.com/engine/install/ubuntu/):

# Remove potential older versions
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

sudo usermod -aG docker $USER

# logout from ssh and log back in, test that a sudo-less docker is available to your user
docker run hello-world

Passwordless sudo

To make our kaosu user use the sudo command without being prompted for a password:

sudo visudo -f /etc/sudoers.d/kaosu-Overrides

# Add and adapt kaosu as needed
kaosu ALL=(ALL) NOPASSWD:ALL

# save the file and test in a new terminal or login
sudo echo works

NFS for Cinder

Additional details available here and here.

We want to use NFS on /openstack/nfs to store Cinder-created volumes:

 
# Install nfs server
sudo apt-get install -y nfs-kernel-server

# Create the destination directory and make it nfs-permissions ready
sudo mkdir -p /openstack/nfs
sudo chown nobody:nogroup /openstack/nfs

# edit the `exports` configuration file
sudo nano /etc/exports

# Within this file: add the directory and the access host (ourselves, i.e., our 10. IP) to the authorized list
/openstack/nfs       10.30.0.20(rw,sync,no_subtree_check)

# After saving, restart the nfs server
sudo systemctl restart nfs-kernel-server

# Prepare the cinder configuration to enable the NFS mount
sudo mkdir -p /etc/kolla/config
sudo nano /etc/kolla/config/nfs_shares

# Add the "remote" to mount in the file and save
10.30.0.20:/openstack/nfs
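
Before moving on, you can verify the export is visible (showmount ships with the NFS server tooling installed above):

# Confirm our directory is exported to our host
showmount -e 10.30.0.20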

Kolla Ansible OpenStack (KAOS)

Latest instructions available here.

We will work from /openstack/kaos for this install as the kaosu user (we recommend running inside tmux).

Preparation

 
cd /openstack
sudo mkdir kaos
sudo chown $USER:$USER kaos
cd kaos

# Install a few things that might otherwise fail during ansible prechecks
sudo apt-get install -y git python3-dev libffi-dev gcc \
  libssl-dev build-essential libdbus-glib-1-dev libpython3-dev \
  cmake libglib2.0-dev python3-venv python3-pip

# Activate a venv
python3 -m venv venv
source venv/bin/activate
pip install -U pip

# Install extra python packages
pip install docker pkgconfig dbus-python

# Install Kolla Ansible from git
pip install git+https://opendev.org/openstack/kolla-ansible@master

# Create the /etc/kolla directory, and populate it
sudo mkdir -p /etc/kolla
sudo chown $USER:$USER /etc/kolla
cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
# we are going to do an all-in-one (single host) install, copy it in the current folder for easy edits
cp venv/share/kolla-ansible/ansible/inventory/all-in-one .

# Install Ansible Galaxy requirements
kolla-ansible install-deps

# generate random passwords (stored into /etc/kolla/passwords.yml)
kolla-genpwd

Edit and adapt the /etc/kolla/globals.yml file (sudo nano /etc/kolla/globals.yml) as follows (search for the matching keys):

  • kolla_base_distro: "ubuntu"
  • kolla_internal_vip_address: "10.30.0.254"
  • network_interface: "eno1"
  • neutron_external_interface: "enp1s0"
  • enable_cinder: "yes"
  • enable_cinder_backend_nfs: "yes"

Before we try the deployment, let’s ensure the Python interpreter is the venv one: at the top of the /openstack/kaos/all-in-one file, add:

 
localhost ansible_python_interpreter=/openstack/kaos/venv/bin/python

The proposed files are available here and here.

Deployment

As the kaosu user in /openstack/kaos with the venv activated:

    • Bootstrap the host:
kolla-ansible bootstrap-servers -i ./all-in-one
    • Do pre-deployment checks for the host:
kolla-ansible prechecks -i ./all-in-one
    • Perform the OpenStack deployment:
kolla-ansible deploy -i ./all-in-one

If all goes well, you will have a PLAY RECAP at the end of a successful install, which might look similar to the following:

 
PLAY RECAP ****...
localhost                  : ok=425  changed=280  unreachable=0    failed=0    skipped=249  rescued=0    ignored=1

The Dashboard will be on our host’s port 80 at http://10.30.0.20/. The admin user password can be found using:

 fgrep keystone_admin_password /etc/kolla/passwords.yml

 

CLI

(still using the venv)

OpenStack command line

Install the python openstack command:

pip install python-openstackclient -c https://releases.openstack.org/constraints/upper/master

OpenStack configuration file

Create multiple post-deployment scripts, including the admin-openrc.sh and clouds.yaml files:

kolla-ansible post-deploy -i ./all-in-one

That clouds.yaml file (generated in /etc/kolla) should be added to your default OpenStack client configuration:

mkdir -p ~/.config/openstack
cp /etc/kolla/clouds.yaml ~/.config/openstack/clouds.yaml

Cloud Init: Run once

(requires the venv, the openstack command line, the clouds.yaml file, and the generated /etc/kolla/admin-openrc.sh script)

In /openstack/kaos, there is a venv/share/kolla-ansible/init-runonce script to create some of the basic configurations for your cloud. Most end users will modify their EXT_NET_CIDR, EXT_NET_RANGE, and EXT_NET_GATEWAY variables.

The proposed my-init-runonce.sh script (make it executable: chmod +x my-init-runonce.sh) uses larger tiny flavors (5GB disks, since an Ubuntu server image is over 2GB), while the other flavors use a 20GB base disk (you can specify your preferred disk size during instance creation). Its flavor names follow the m<number_of_cores> naming convention, and it adds xxlarge and xxxlarge memory flavors.

Adapt the USER CONF section based on your system and preferences.
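
For example, with this guide's network values and assuming a /24 private network, the relevant USER CONF variables could be set to:

EXT_NET_CIDR='10.30.0.0/24'
EXT_NET_RANGE='start=10.30.0.100,end=10.30.0.199'
EXT_NET_GATEWAY='10.30.0.1'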

% ./my-init-runonce.sh
[...]
-- Attempt to add external-net (if not already present)
[...]
-- Attempt to configure Security Groups: ssh and ICMP (ping)
[...]
-- Attempt to create and add a default id_ecdsa key to nova (if not already present)
[...]
-- Setting quota defaults following user values
[...]
-- Creating defaults flavors (instance type) (if not already present)
[...]
Done

Once run, we should have:

    • An external-net: the pool from which your floating IPs will be obtained.
    • Added ssh and ICMP to the admin project’s default security group.
    • Created a default ssh key (mykey) and added it to the admin user.
    • Set the admin project’s default quotas (these will not propagate to other projects, but the same CLI logic can be applied to another project with the right project_id).
    • Created a list of default flavors, such as:
% source /etc/kolla/admin-openrc.sh
% openstack flavor list
+----+--------------+-------+------+-----------+-------+-----------+
| ID | Name         |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+--------------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny      |   512 |    5 |         0 |     1 | True      |
| 2  | m2.tiny      |   512 |    5 |         0 |     2 | True      |
| 3  | m2.small     |  2048 |   20 |         0 |     2 | True      |
| 4  | m2.medium    |  4096 |   20 |         0 |     2 | True      |
| 5  | m4.large     |  8192 |   20 |         0 |     4 | True      |
| 6  | m8.xlarge    | 16384 |   20 |         0 |     8 | True      |
| 7  | m16.xxlarge  | 32768 |   20 |         0 |    16 | True      |
| 8  | m32.xxxlarge | 65536 |   20 |         0 |    32 | True      |
+----+--------------+-------+------+-----------+-------+-----------+

FYSA: From the UI, it is possible to add new flavors via Admin -> Compute -> Flavors.

Post-Installation

Note: the kolla-ansible and openstack commands require the venv to be activated and source /etc/kolla/admin-openrc.sh to have been run for the commands to have the correct configuration information. As kaosu:

cd /openstack/kaos
source /etc/kolla/admin-openrc.sh
source venv/bin/activate

New admin user (UI)

Log in to your OpenStack instance by going to the web dashboard (Horizon, available on port 80) at http://10.30.0.20.

The default admin user’s password can be obtained using:

fgrep keystone_admin_password /etc/kolla/passwords.yml

Using Project -> Compute -> Overview gives you a list of used and available resources.

Create a new project and another admin user for your account. As the admin user:

  • In the Identity -> Projects (left column), Create Project and choose a name. For this example, we will use newprojectname. That new project does not inherit the existing one’s default values. We will update the quotas in the next section.
  • In the Identity -> Users (left column), Create User. Provide its User Name and Password (Confirm), assign that user the Primary Project created above, and give it the Admin Role. Enable the account. For this example, we will use newadminuser.

New User + Project: ssh + security groups + quotas (CLI)

The following steps use the CLI to re-add our ssh key, security groups, and quotas to the new user and its project.

Add a public ssh key (here id_ecdsa.pub) to your new user (adapting newadminuser):
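
A hedged example using the OpenStack CLI (keypair names are per-user, so reusing mykey is fine; the --user option may require a recent compute API microversion):

# Register the public key under the new user
openstack keypair create --user newadminuser --public-key ~/.ssh/id_ecdsa.pub mykey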

Add the security groups and quotas to your new project (adapting newprojectname):

# Adapt newprojectname
MY_PROJECT_ID=$(openstack project list | awk '/ newprojectname / {print $2}')
MY_SEC_GROUP=$(openstack security group list --project ${MY_PROJECT_ID} | awk '/ default / {print $2}')
# check values are assigned
echo $MY_PROJECT_ID
echo $MY_SEC_GROUP

openstack security group rule create --ingress --ethertype IPv4 --protocol icmp ${MY_SEC_GROUP}
openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 22 ${MY_SEC_GROUP}

openstack quota set --force --instances 10 ${MY_PROJECT_ID}
openstack quota set --force --cores 32 ${MY_PROJECT_ID}
openstack quota set --force --ram 96000 ${MY_PROJECT_ID}
openstack quota set --force --floating-ips 10 ${MY_PROJECT_ID}
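
To confirm the new values took effect, a quick check:

openstack quota show ${MY_PROJECT_ID}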

A slightly modified version of this newproject.sh file is available.

 

Add an Ubuntu image to Glance

Go to https://cloud-images.ubuntu.com/ and select the distro you want (here, we will use Noble Numbat/Ubuntu 24.04’s most current image). Copy the URL of the QCow2 UEFI/GPT Bootable disk image of your choice.

cd /openstack
sudo mkdir cloudimg
sudo chown $USER:$USER cloudimg
cd cloudimg

# Name it with the OS information and the date shown in the "Last modified" column
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img -O ubuntu2404-20250403.img

Use the openstack command line to add the image to the list of available images for all users of our cloud OS, giving it a name that indicates its content:

openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ubuntu2404-20250403.img ubuntu2404server-20250403

Once completed, a table with details of the new image added to our OpenStack installation will appear. From our new admin user’s UI, select Project -> Compute -> Images, and we will see the added image listed.

Network and Router setup

From our new admin user’s UI (which should start in our recently added project), select Project -> Network -> Network Topology. This should show a graph with only the external-net.

We need to add a network and a router for VMs to communicate.

Network

Select Create Network:

  • Network tab:
    • Name it: we recommend project-net with project reflecting our project’s name.
    • Check Enable Admin State to make sure it is active.
    • Uncheck Shared; this network is only for this project.
    • Check Create Subnet; we need to configure the IP details for this subnet.
    • There is no need to modify Availability Zone Hints or MTU.
    • Click Next.
  • Subnet tab:
    • Name it: a similar project-subnet.
    • For the Network Address, use a private IP range not currently used in our network, such as 10.56.78.0/24; subnets must be independent and not currently in use.
    • Select IPv4.
    • Use 10.56.78.1 for the Gateway IP; it must be in the same IP range as your subnet.
    • Uncheck Disable Gateway.
    • Click Next.
  • Subnet Details tab:
    • Check Enable DHCP. We want our VM instances to get IPs automatically when they start.
    • For Allocation Pool use something unused within the subnet range, for example, 10.56.78.100,10.56.78.200.
    • For DNS Name Servers (one entry per line), use Google (8.8.8.8, 8.8.4.4) or Cloudflare (1.1.1.1).
    • No need to add any Host Routes.
    • Click Create.

You now have a new network ready to be used with VMs. We still need a router.

Router

Select Create Router:

  • Name it: project-router.
  • Check Enable Admin State to make sure it will be active.
  • Select the external-net External Network.
  • Check Enable SNAT since we do have an external network.
  • Leave Availability Zone Hints as is.

We now have a router connected to the external network. The IP for the router on the external network is automatically selected from the pool.

The router has yet to be connected to the “project network.” Hover over the “router” and select Add Interface. Select the project-subnet Subnet and leave the IP Address unspecified; it will use the configured gateway.

When we return to the Network Topology page, we will see an external-net connected to our project-net by our project-router.
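
For reference, a hedged CLI equivalent of the UI steps above, using the same names and values as this guide:

openstack network create project-net
openstack subnet create --network project-net --subnet-range 10.56.78.0/24 \
  --gateway 10.56.78.1 --dhcp \
  --allocation-pool start=10.56.78.100,end=10.56.78.200 \
  --dns-nameserver 8.8.8.8 project-subnet
openstack router create project-router
openstack router set --external-gateway external-net project-router
openstack router add subnet project-router project-subnet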

Starting and accessing our first VM

Launch Instance

From our new admin user’s UI, select Project -> Compute -> Instances and choose Launch Instance.

  • On the Details tab:
    • Give your instance a name, for example, u24test.
    • Click Next.
  • On the Source tab:
    • Use an Image as the Select Boot Source.
    • Select Yes for Create New Volume; this will force the creation of the VM disk image onto the Cinder location.
    • If the Volume Size is less than the flavor’s disk size, the larger of the two will be selected.
    • Delete Volume on Instance Delete is a user choice. We often select Yes.
    • Click on the up arrow next to our ubuntu server image to have it become the Allocated image.
    • Click Next.
  • On the Flavor tab:
    • Click on the up arrow next to our m2.tiny flavor (2x VCPUs, 512MB RAM, 5GB disk).
    • If you kept the Volume Size at 1GB, looking back at the Source tab, you will see it now shows 5GB, the size of our flavor’s disk.
    • Click Next.
  • On the Networks tab:
    • The project-net and project-subnet should be automatically allocated.
    • Click Next.
  • We are not adding any Network Ports, so click Next.
  • The Allocated Security Groups will show default with ssh and icmp listed (feel free to verify by clicking the toggle arrow), so click Next.
  • Our mykey will show in Key Pair.

Feel free to investigate the other available tabs. We will Launch Instance.

After a few seconds, the instance should appear to be Running.

ls -alh /openstack/nfs will show the file for our newly created disk volume.
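
For reference, a hedged CLI equivalent of this launch, booting from a new 5GB volume with this guide's image, flavor, network, and key names:

openstack server create --flavor m2.tiny \
  --image ubuntu2404server-20250403 --boot-from-volume 5 \
  --network project-net --key-name mykey u24test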

From the instance list, our running instance has an Actions submenu (right); we can View Logs or open the interactive Console. We cannot log in using the Console terminal: the ubuntu user has no password set, as the instance is designed to be accessed remotely using SSH. We need to assign our instance a “Floating IP”: a public IP address that can be dynamically associated with a private instance, allowing it to be accessible from outside the private cloud.

Floating IPs

With our instance Running, its IP Address is within our project’s subnet range.

We need to obtain a Floating IP to access the instance via SSH.

In the Actions (right) submenu for our instance row, select Associate Floating IP:

  • None are listed; click the + and Allocate IP from our external-net pool.
  • An IP will now show in the IP Address dropdown. Make sure the Port to be associated matches our u24test instance and Associate them.

The IP Address column will now show two IPs: one from the project-subnet DHCP range and one from the external-net pool.
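
The same association can be done from the CLI; a short sketch using this guide's names:

# Allocate a floating IP from the external-net pool and attach it to our instance
FIP=$(openstack floating ip create external-net -f value -c floating_ip_address)
openstack server add floating ip u24test ${FIP}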

From your kaosu user, we can ssh to the instance’s floating IP using the authorized ssh key and the cloud image’s default ubuntu user.

For example:

 
# Adapt the 10. IP to match your floating IP
ssh -i ~/.ssh/id_ecdsa ubuntu@10.30.0.100
[...]
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-57-generic x86_64)
[...]
ubuntu@u24test:~$

From there, you can confirm that your instance can connect to the Internet by running sudo apt update && sudo apt -y upgrade.

Securely accessing Horizon using a reverse proxy

If you have a reverse proxy setup on another host and want to benefit from https on horizon (the dashboard):

  • In your reverse proxy, configure the Proxy Host as you would typically; here, we will use os.example.com (a sample Nginx sketch follows this list).
  • Run sudo nano /etc/kolla/horizon/_9999-custom-settings.py and add to it:
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
CSRF_TRUSTED_ORIGINS = [ 'https://os.example.com' ]
  • Restart horizon using docker kill horizon. Wait a few seconds, and your access via https://os.example.com should be functional,
    • i.e., it should not present us with a csrf_failure=Origin checking failed - https%3A//os.example.com does not match any trusted origin error (in the address bar).
    • FYSA, your installer has named all the containers after the service they provide, so horizon is one of them.
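
If your reverse proxy is Nginx, a minimal sketch of the proxy host could look like the following (assuming TLS certificates are already handled on that host; the certificate paths are hypothetical, so adapt to your proxy of choice):

server {
    listen 443 ssl;
    server_name os.example.com;
    # Hypothetical certificate paths; use your own certificate setup
    # ssl_certificate     /etc/nginx/certs/os.example.com.fullchain.pem;
    # ssl_certificate_key /etc/nginx/certs/os.example.com.key.pem;

    location / {
        proxy_pass http://10.30.0.20:80;
        proxy_set_header Host $host;
        # Must match the SECURE_PROXY_SSL_HEADER configured above
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}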

Troubleshooting

“reconfigure” if you need to modify globals.yml

If you modify a globals.yml configuration option, re-run the reconfigure step:

cd /openstack/kaos
source venv/bin/activate
kolla-ansible reconfigure -i ./all-in-one

More kolla-ansible CLI options at https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html.

Broken after a Reboot?

I experienced this in a previous installation. Luckily, it is just a matter of re-running the reconfigure step to make it functional again.

Log in as the kaosu user:

cd /openstack/kaos
source venv/bin/activate

pip3 install -U pip

kolla-ansible -i ./all-in-one --yes-i-really-really-mean-it stop
kolla-ansible -i ./all-in-one install-deps
kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one reconfigure

sudo docker ps -a

Additional FAQ

Please refer to the Ubuntu 22.04 post for additional content, and to the original version of this tutorial on Martial Michel’s blog.