Quickstart guide¶
In this guide, we will deploy OpenStack using the YAOOK/Operator on your Kubernetes cluster. The following OpenStack services will be deployed:
Keystone (identity service)
Glance (image service)
Neutron (networking)
Nova (compute service)
Horizon (dashboard)
This guide is only meant as a demo setup and is NOT FOR PRODUCTION USE.
To install and run Yaook, you need a Kubernetes cluster. Make sure to review the cluster requirements, because several specific Kubernetes features are required. If you don't have a Kubernetes cluster, you can create one using Tarook by following its Quick Start Guide, but other Kubernetes installations can be used as well. The Yaook-in-a-Box project also provides a simple way to install a single-node Kubernetes cluster locally via Multipass on arm64 or amd64.
Requirements:
Please ensure that the following general requirements are met:
Select namespace for OpenStack services¶
First, choose the namespace for the OpenStack services and export it as the YAOOK_OP_NAMESPACE environment variable:
export YAOOK_OP_NAMESPACE=yaook
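If the namespace does not exist yet, you can create it now. This is a minimal sketch; some of the helper scripts used later may create it as well, and creating it explicitly is harmless:
# Create the namespace if it is missing (idempotent)
kubectl get namespace "$YAOOK_OP_NAMESPACE" >/dev/null 2>&1 || kubectl create namespace "$YAOOK_OP_NAMESPACE"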
Prerequisites¶
Storage: Install a storage backend in your cluster
If you don’t have a suitable backend, you can deploy the local-path provisioner which uses local node storage:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml
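Optionally, you can mark the provisioner's StorageClass as the cluster default so that PersistentVolumeClaims without an explicit storage class are served by it. The class name local-path is what the upstream manifest ships; skip this if your cluster already has a default StorageClass:
# Mark the local-path StorageClass as the default
kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'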
If you don’t have it installed via Tarook already, install Rook Ceph and make sure that your cluster has an attached storage device:
curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/install-ceph.sh | bash
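As a quick sanity check, you can watch the Rook Ceph deployment come up. This assumes Rook is deployed into the rook-ceph namespace, which is also the namespace used for the Glance pool further below:
# Wait until the CephCluster reports a healthy state and the Ceph pods are running
kubectl -n rook-ceph get cephcluster
kubectl -n rook-ceph get pods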
Install the dependencies in your Kubernetes cluster. If you use Tarook, they should already be installed.
curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/getting_started/install_cert_manager.sh | bash
curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/getting_started/install_prometheus.sh | bash
curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/getting_started/install_ingress_controller.sh | bash
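You can verify that these dependencies came up before continuing. Since the exact namespaces depend on the install scripts, a namespace-independent check is the safest:
# List the cert-manager, ingress controller and Prometheus pods across all namespaces
kubectl get pods -A | grep -Ei 'cert-manager|ingress|prometheus'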
Install certificates, issuers, and secrets, since all internal communication is encrypted:
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt -subj "/CN=YAOOK-CA"
kubectl -n "$YAOOK_OP_NAMESPACE" create secret tls root-ca --key ca.key --cert ca.crt
kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/cert-manager.yaml
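To confirm that the CA secret and the cert-manager objects were created, you can run the following. The exact issuer kinds and names depend on what cert-manager.yaml creates, so this is only a sketch:
# The root CA secret created above
kubectl -n "$YAOOK_OP_NAMESPACE" get secret root-ca
# Issuers created by cert-manager.yaml (exact kinds and names depend on the manifest)
kubectl -n "$YAOOK_OP_NAMESPACE" get issuers,clusterissuers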
Install the real-time scheduling hack according to the Yaook operating system requirements:
kubectl -n $YAOOK_OP_NAMESPACE apply -f https://gitlab.com/yaook/operator/-/raw/devel/ci/devel_integration_tests/deploy/realtime-hack.yaml
Install helm repos¶
Add the Helm repositories locally so you can install Helm releases from them later.
helm repo add stable https://charts.helm.sh/stable
helm repo add yaook.cloud https://charts.yaook.cloud/operator/stable/
helm repo update
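You can list the charts that are now available from the Yaook repository:
# Show all charts provided by the yaook.cloud repository
helm search repo yaook.cloud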
Select the latest Yaook version¶
Get the latest Yaook version from the Yaook Helm repository and export it as an environment variable:
export YAOOK_VERSION=$(helm search repo yaook.cloud/crds -o json | jq -r '.[0].version')
Install Custom Resource Definitions (CRDs)¶
We use CRDs in order to describe all relevant services needed to run OpenStack. We install the CRDs using the Kubernetes package manager Helm.
helm upgrade --install -n "$YAOOK_OP_NAMESPACE" --version "$YAOOK_VERSION" crds yaook.cloud/crds
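To check that the CRDs were registered, you can filter the cluster's CRD list; the Yaook CRD names contain "yaook":
# All Yaook custom resource definitions should show up here
kubectl get crds | grep yaook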
Deploy chosen Yaook operators¶
Now, Helm is used again to deploy all the operators needed for the quickstart:
for operator in "infra-operator" "keystone" "keystone-resources" "glance" "nova" "nova-compute" "neutron" "neutron-ovn" "horizon"; do
echo "Installing yaook.cloud/$operator-operator via helm:"
helm upgrade --install -n "$YAOOK_OP_NAMESPACE" --version "$YAOOK_VERSION" "$operator-operator" "yaook.cloud/$operator-operator"
done
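Afterwards, each operator should show up as a Helm release and as a running pod in the namespace:
# List the installed operator releases and their pods
helm list -n "$YAOOK_OP_NAMESPACE"
kubectl -n "$YAOOK_OP_NAMESPACE" get pods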
Label your nodes¶
In order to automatically deploy the necessary services according to the node type, we need to label the nodes.
To get an overview over your nodes, run
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
In that list you should see nodes that will have the role “control plane” and nodes that will have the role “compute” (hypervisor).
Label the control plane nodes¶
export ctl_plane_labels="node-role.kubernetes.io/control-plane=true
any.yaook.cloud/api=true
infra.yaook.cloud/any=true
operator.yaook.cloud/any=true
key-manager.yaook.cloud/barbican-any-service=true
block-storage.yaook.cloud/cinder-any-service=true
compute.yaook.cloud/nova-any-service=true
ceilometer.yaook.cloud/ceilometer-any-service=true
key-manager.yaook.cloud/barbican-keystone-listener=true
gnocchi.yaook.cloud/metricd=true
infra.yaook.cloud/caching=true
network.yaook.cloud/neutron-northd=true
network.yaook.cloud/neutron-ovn-agent=true"
export control_plane_nodes="<your control plane nodes>"
# example:
# export control_plane_nodes="ctl-01 ctl-02 ctl-03"
for node in $control_plane_nodes; do
kubectl label node "$node" $ctl_plane_labels
done
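You can verify the labelling by selecting nodes on one of the labels set above, for example:
# All control plane nodes should be listed here
kubectl get nodes -l operator.yaook.cloud/any=true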
Label the compute nodes¶
If you are not running on bare metal, you may need to additionally label your workers / compute hosts to prevent kvm-in-kvm failures in nova-compute:
kubectl label node $WORKER compute.yaook.cloud/hypervisor-type=qemu
export hypervisor_nodes="<your hypervisor nodes>"
# example:
# export hypervisor_nodes="cmp01 cmp02 cmp03"
for node in $hypervisor_nodes; do
kubectl label node "$node" "compute.yaook.cloud/hypervisor=true"
done
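Analogously, verify the hypervisor label:
# All compute / hypervisor nodes should be listed here
kubectl get nodes -l compute.yaook.cloud/hypervisor=true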
Configure Glance for Ceph¶
Let’s create a Ceph client which has access to a Ceph pool for glance:
kubectl apply -n rook-ceph -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/rook-ceph-glance.yaml; sleep 5
Warning
Depending on the attached storage device, you might have to change the deviceClass in rook-ceph-glance.yaml for the CephBlockPool from hdd to ssd.
Copy the secret (which was generated in the rook-ceph namespace) to $YAOOK_OP_NAMESPACE.
kubectl get secret rook-ceph-client-glance -n rook-ceph -o yaml \
| yq 'del(.metadata.uid)' \
| yq 'del(.metadata.creationTimestamp)' \
| yq 'del(.metadata.resourceVersion)' \
| yq 'del(.metadata.ownerReferences)' \
| yq ".metadata.namespace = \"$YAOOK_OP_NAMESPACE\"" \
| kubectl apply -n "$YAOOK_OP_NAMESPACE" -f -
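Check that the secret now also exists in the Yaook namespace:
# The copied Ceph client secret for Glance
kubectl -n "$YAOOK_OP_NAMESPACE" get secret rook-ceph-client-glance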
Create OpenStack custom resource objects¶
In the next and final step we deploy custom resource objects (deployments). These describe the shape of the OpenStack deployment. We provide such deployments for many OpenStack services, but for the quickstart we only install Keystone, Glance, Nova, Neutron, and Horizon:
Install keystone, the authentication (users, projects, groups) service
kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/keystone.yaml
Install glance, a service where users can upload and discover data assets that are meant to be used with other services (images, metadata)
kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/glance.yaml
Install nova, which provides a way to provision compute instances (aka virtual servers)
kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/nova.yaml
Install neutron, to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., nova)
kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/neutron.yaml
Install horizon, which provides a web based user interface to OpenStack services including Nova, Swift, Keystone, etc.
kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/horizon.yaml
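The operators will now start creating the actual OpenStack services, which can take a while. You can follow the progress with kubectl; since the exact custom resource kinds are not listed here, this is a generic sketch that discovers them from the API:
# Discover the Yaook custom resource kinds registered in the cluster
kubectl api-resources | grep yaook
# Watch the pods coming up in the Yaook namespace
kubectl -n "$YAOOK_OP_NAMESPACE" get pods -w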
If you feel confident about what the individual steps in this document do, you can also run the install-yaook.sh script, which bundles all the actions described here:
curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/install-yaook.sh
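The command above only prints the script. One possible way to actually use it is to save it to a file, review it, and then execute it; this is a sketch and assumes the environment variables set earlier are still exported:
# Download the script, review it, then run it
curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/install-yaook.sh -o install-yaook.sh
less install-yaook.sh
bash install-yaook.sh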
You can also give Yaook-in-a-Box 📦 a try. It helps you set up a Kubernetes cluster with Ceph included and performs all of the steps mentioned above.
Test installation¶
If everything is shown as running in k9s or kubectl, you can try to create an image with Glance. To do so, you need to shell into the cluster using yaookctl.
yaookctl openstack shell
Now execute the following commands:
apt update && apt install wget qemu-utils jq -y && \
wget -O cirros.qcow2 http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img && \
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw && \
openstack image create --file cirros.raw cirros
If everything works as it should, you will see something like this:
+------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| checksum | 01e7d1515ee776be3228673441d449e6 |
| container_format | bare |
| created_at | 2025-07-28T15:28:05Z |
| disk_format | raw |
| file | /v2/images/577e40d3-f712-49ff-b03d-4fbd184fe5ac/file |
| id | 577e40d3-f712-49ff-b03d-4fbd184fe5ac |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | e578c9a042994b21b8ed64359f7b16cf |
| properties | direct_url='rbd://f8..81/glance-pool/57..ac/snap', |
| | os_hash_algo='sha512', os_hash_value='d6...36', os_hidden='False', owner_specified.openstack.md5='', |
| | owner_specified.openstack.object='images/cirros', owner_specified.openstack.sha256='' |
| protected | False |
| schema | /v2/schemas/image |
| size | 117440512 |
| status | active |
| tags | |
| updated_at | 2025-07-28T15:28:08Z |
| virtual_size | 117440512 |
| visibility | shared |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------+
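You can additionally verify the upload with the OpenStack CLI from the same shell:
# The image should be listed with status "active"
openstack image list
openstack image show cirros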
In case you encounter problems, you can join our community via Webchat, XMPP, or Matrix.