Quickstart guide
================

In this guide, we will deploy OpenStack using the YAOOK/Operator on your Kubernetes cluster. The following `OpenStack services `__ will be deployed:

- `Keystone `__ (identity service)
- `Glance `__ (image service)
- `Neutron `__ (networking service)
- `Nova `__ (compute service)
- `Horizon `__ (dashboard)

**This guide is only meant as a demo setup and is NOT FOR PRODUCTIVE USE.**

To install and run Yaook, you need a Kubernetes cluster. Make sure to review the cluster requirements below, because several specific Kubernetes features are required. If you don't have a Kubernetes cluster, you can create one using Tarook by following this `Quick Start Guide `__; other Kubernetes installations can be used as well. The `Yaook-in-a-Box project `__ also provides a simple way to install a single-node Kubernetes cluster locally via Multipass on ``arm64`` or ``amd64``.

Requirements:

- Please ensure that the following general requirements are met:

  - :ref:`OS requirements`
  - :ref:`Kubernetes API requirements`
  - :ref:`Kubernetes cluster requirements`

Select namespace for OpenStack services
---------------------------------------

First, set the ``YAOOK_OP_NAMESPACE`` variable and export it as an environment variable:

.. code:: bash

   export YAOOK_OP_NAMESPACE=yaook

Prerequisites
-------------

- Storage: Install a storage backend in your cluster. If you don't have a suitable backend, you can deploy the local-path provisioner, which uses local node storage:

  .. code:: bash

     kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml

- If you don't have it installed via Tarook already, install `Rook Ceph `__ and make sure that your cluster has an attached storage device:

  .. code:: bash

     curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/install-ceph.sh | bash

- Install the dependencies in your Kubernetes cluster. If you use Tarook, they should already be installed.
  .. code:: bash

     curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/getting_started/install_cert_manager.sh | bash
     curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/getting_started/install_prometheus.sh | bash
     curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/getting_started/install_ingress_controller.sh | bash

- Install certificates, issuers, and secrets, since all internal communication is encrypted:

  .. code:: bash

     openssl genrsa -out ca.key 2048
     openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt -subj "/CN=YAOOK-CA"
     kubectl -n "$YAOOK_OP_NAMESPACE" create secret tls root-ca --key ca.key --cert ca.crt
     kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/cert-manager.yaml

- Install the real-time scheduling hack according to the :ref:`Yaook operating system requirements `:

  .. code:: bash

     kubectl -n "$YAOOK_OP_NAMESPACE" apply -f https://gitlab.com/yaook/operator/-/raw/devel/ci/devel_integration_tests/deploy/realtime-hack.yaml

Install helm repos
------------------

Add the helm repos locally so you can install helm releases from them later:

.. code:: bash

   helm repo add stable https://charts.helm.sh/stable
   helm repo add yaook.cloud https://charts.yaook.cloud/operator/stable/
   helm repo update

Select the latest Yaook version
-------------------------------

Get the latest Yaook version from the Yaook Helm repository and export it as an environment variable:

.. code:: bash

   export YAOOK_VERSION=$(helm search repo yaook.cloud/crds -o json | jq -r '.[0].version')

Install Custom Resource Definitions (CRDs)
------------------------------------------

We use `CRD `__\ s to describe all relevant services needed to run OpenStack. We install the CRDs using the Kubernetes package manager `Helm `__.
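The version lookup in the previous step assumes that ``helm search repo`` returns a JSON array with the newest chart first and uses the ``jq`` filter ``.[0].version`` to pick it. A minimal offline sketch of that filter (the chart versions in the sample data are made up):

```shell
# Hypothetical sample of what `helm search repo yaook.cloud/crds -o json`
# returns; the real command lists matching charts, newest version first.
sample='[{"name":"yaook.cloud/crds","version":"7.0.12"},{"name":"yaook.cloud/crds","version":"7.0.11"}]'

# `.[0].version` selects the version field of the first (newest) entry;
# `-r` prints it as a raw string without quotes.
echo "$sample" | jq -r '.[0].version'   # prints 7.0.12
```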
.. code:: bash

   helm upgrade --install -n "$YAOOK_OP_NAMESPACE" --version "$YAOOK_VERSION" crds yaook.cloud/crds

Deploy chosen Yaook operators
-----------------------------

Now, Helm is used again to deploy all operators needed for the quickstart:

.. code:: bash

   for operator in "infra-operator" "keystone" "keystone-resources" "glance" "nova" "nova-compute" "neutron" "neutron-ovn" "horizon"; do
       echo "Installing yaook.cloud/$operator-operator via helm:"
       helm upgrade --install -n "$YAOOK_OP_NAMESPACE" --version "$YAOOK_VERSION" "$operator-operator" "yaook.cloud/$operator-operator"
   done

Label your nodes
----------------

In order to automatically deploy the necessary services according to the node type, we need to label the nodes. To get an overview of your nodes, run:

.. code:: bash

   kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

In that list you should see nodes that will have the role "control plane" and nodes that will have the role "compute" (hypervisor).

Label the control plane nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
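The labeling loop below iterates over a space-separated list of node names, so the list variable must be expanded unquoted to let the shell split it into individual names. A minimal illustration of that expansion, using ``echo`` in place of ``kubectl`` and made-up node names:

```shell
# Made-up example node names; replace with your real control plane nodes.
control_plane_nodes="ctl-01 ctl-02 ctl-03"

# Unquoted expansion splits the string on whitespace, yielding one
# loop iteration per node name. Quoting it ("$control_plane_nodes")
# would instead run the loop once with the whole string.
for node in $control_plane_nodes; do
    echo "would label node: $node"
done
# prints:
#   would label node: ctl-01
#   would label node: ctl-02
#   would label node: ctl-03
```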
.. code:: bash

   export ctl_plane_labels="node-role.kubernetes.io/control-plane=true any.yaook.cloud/api=true infra.yaook.cloud/any=true operator.yaook.cloud/any=true key-manager.yaook.cloud/barbican-any-service=true block-storage.yaook.cloud/cinder-any-service=true compute.yaook.cloud/nova-any-service=true ceilometer.yaook.cloud/ceilometer-any-service=true key-manager.yaook.cloud/barbican-keystone-listener=true gnocchi.yaook.cloud/metricd=true infra.yaook.cloud/caching=true network.yaook.cloud/neutron-northd=true network.yaook.cloud/neutron-ovn-agent=true"

   export control_plane_nodes=""
   # example:
   # export control_plane_nodes="ctl-01 ctl-02 ctl-03"

   # Note: $control_plane_nodes is expanded unquoted on purpose, so that
   # the shell splits it into the individual node names.
   for node in $control_plane_nodes; do
       kubectl label node "$node" $ctl_plane_labels
   done

Label the compute nodes
^^^^^^^^^^^^^^^^^^^^^^^

If you are not running on bare metal, you may need to additionally label your workers / compute hosts to prevent KVM-in-KVM failures in nova-compute:

.. code:: bash

   kubectl label node "$WORKER" compute.yaook.cloud/hypervisor-type=qemu

.. code:: bash

   export hypervisor_nodes=""
   # example:
   # export hypervisor_nodes="cmp01 cmp02 cmp03"

   for node in $hypervisor_nodes; do
       kubectl label node "$node" "compute.yaook.cloud/hypervisor=true"
   done

Configure Glance for Ceph
-------------------------

Let's create a Ceph client which has access to a Ceph pool for Glance:

.. code:: bash

   kubectl apply -n rook-ceph -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/rook-ceph-glance.yaml; sleep 5

.. warning::

   Depending on the attached storage device, you might have to change the ``deviceClass`` in ``rook-ceph-glance.yaml`` for the ``CephBlockPool`` from ``hdd`` to ``ssd``.

Copy the secret (which was generated in the ``rook-ceph`` namespace) to ``$YAOOK_OP_NAMESPACE``.
.. code:: bash

   kubectl get secret rook-ceph-client-glance -n rook-ceph -o yaml \
       | yq 'del(.metadata.uid)' \
       | yq 'del(.metadata.creationTimestamp)' \
       | yq 'del(.metadata.resourceVersion)' \
       | yq 'del(.metadata.ownerReferences)' \
       | yq ".metadata.namespace = \"$YAOOK_OP_NAMESPACE\"" \
       | kubectl apply -n "$YAOOK_OP_NAMESPACE" -f -

Create OpenStack custom resource objects
----------------------------------------

In the next and final step, we deploy custom resource objects (deployments). These describe the shape of the OpenStack deployment. We have such deployments for many OpenStack services, but for the quickstart we only install Keystone, Glance, Nova, Neutron, and Horizon:

- Install Keystone, the authentication service (users, projects, groups):

  .. code:: bash

     kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/keystone.yaml

- Install Glance, a service where users can upload and discover data assets that are meant to be used with other services (images, metadata):

  .. code:: bash

     kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/glance.yaml

- Install Nova, which provides a way to provision compute instances (aka virtual servers):

  .. code:: bash

     kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/nova.yaml

- Install Neutron, which provides "network connectivity as a service" between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova):

  .. code:: bash

     kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/neutron.yaml

- Install Horizon, which provides a web-based user interface to OpenStack services including Nova, Swift, Keystone, etc.:
  .. code:: bash

     kubectl apply -n "$YAOOK_OP_NAMESPACE" -f https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/horizon.yaml

If you feel confident about what the individual steps in this document do, you can also run the ``install-yaook.sh`` script, which bundles all actions we describe here:

.. code:: bash

   curl -sSL https://gitlab.com/yaook/operator/-/raw/devel/docs/user/guides/quickstart-guide/install-yaook.sh | bash

You can also give `Yaook-in-a-box 📦 `__ a try. It helps you set up a Kubernetes cluster with Ceph included and performs all of the steps mentioned above.

Test installation
-----------------

If everything is shown as running in `k9s `__ or ``kubectl``, you can try to create an image with Glance. To do so, shell into the cluster using `yaookctl `__:

.. code:: bash

   yaookctl openstack shell

Now execute the following commands:

.. code:: bash

   apt update && apt install wget qemu-utils jq -y && \
   wget -O cirros.qcow2 http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img && \
   qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw && \
   openstack image create --file cirros.raw cirros

If everything works as it should, you will see something like this:
.. code:: text

   +------------------+------------------------------------------------------------------------------------------------------+
   | Field            | Value                                                                                                |
   +------------------+------------------------------------------------------------------------------------------------------+
   | checksum         | 01e7d1515ee776be3228673441d449e6                                                                     |
   | container_format | bare                                                                                                 |
   | created_at       | 2025-07-28T15:28:05Z                                                                                 |
   | disk_format      | raw                                                                                                  |
   | file             | /v2/images/577e40d3-f712-49ff-b03d-4fbd184fe5ac/file                                                 |
   | id               | 577e40d3-f712-49ff-b03d-4fbd184fe5ac                                                                 |
   | min_disk         | 0                                                                                                    |
   | min_ram          | 0                                                                                                    |
   | name             | cirros                                                                                               |
   | owner            | e578c9a042994b21b8ed64359f7b16cf                                                                     |
   | properties       | direct_url='rbd://f8..81/glance-pool/57..ac/snap',                                                   |
   |                  | os_hash_algo='sha512', os_hash_value='d6...36', os_hidden='False', owner_specified.openstack.md5='', |
   |                  | owner_specified.openstack.object='images/cirros', owner_specified.openstack.sha256=''                |
   | protected        | False                                                                                                |
   | schema           | /v2/schemas/image                                                                                    |
   | size             | 117440512                                                                                            |
   | status           | active                                                                                               |
   | tags             |                                                                                                      |
   | updated_at       | 2025-07-28T15:28:08Z                                                                                 |
   | virtual_size     | 117440512                                                                                            |
   | visibility       | shared                                                                                               |
   +------------------+------------------------------------------------------------------------------------------------------+

In case you encounter problems, you can join our community via `Webchat, XMPP or Matrix `__.