Development Setup

The following page will guide you through the setup of a development environment for Yaook.

Warning

This documentation is for a development setup only!

Do not use this to deploy a production or even a demo setup without extra thought and a clear understanding of what you are doing!

If you run into problems, please refer to our Common Problems page.

Requirements

Please ensure that the following general requirements are met:

Note

We provide scripts for installing prometheus, cert-manager, the nginx ingress controller and rook/ceph, although we recommend using yaook/k8s for that. This means you can follow these instructions even if some required features are not yet present in your Kubernetes cluster.

Requirements for the Development Setup

Please ensure that the following requirements are met when setting up your system for development:

  • The operator repository has been cloned.

  • A virtual environment has been set up with Python v3.11. (We do not guarantee this to work with any Python version other than the one used in ./Dockerfile in the repository!)

  • You have access to a Kubernetes cluster

    • either via the default ~/.kube/config, or alternatively

    • using the KUBECONFIG environment variable.

  • The kubectl and helm binaries are in your path.

  • CUE is installed and in your path. You can install it via:

    # Install cue (adjust the version as needed)
    go install cuelang.org/go/cmd/cue@v0.4.3
    # For Go versions older than 1.16, use go get instead of go install
    GO111MODULE=on go get cuelang.org/go/cmd/cue@v0.4.3
    
  • GNU make is installed.

  • The prerequisites of the mysqlclient pip package are installed.
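
    On Debian/Ubuntu, for example, these prerequisites can be installed roughly as follows (package names may differ on other distributions; consult the mysqlclient documentation):

    sudo apt install python3-dev default-libmysqlclient-dev build-essential pkg-config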

Preparation of Environment and Kubernetes Resources

  1. Set the required environment variables:

    # Used to determine which namespaces are relevant for the operator
    export YAOOK_OP_NAMESPACE="yaook"
    # Allows the operator to use the latest versions of dependencies (alpha releases)
    export YAOOK_OP_VERSIONS_USE_ALPHA=true
    # Allows the operator to use the latest versions of dependencies (rolling releases)
    export YAOOK_OP_VERSIONS_USE_ROLLING=true
    # If you are coming from managed-k8s, you need to set this too
    export YAOOK_OP_CLUSTER_DOMAIN="cluster.local"
    
  2. Execute the following script to create the required Kubernetes resources (the bash script is available at docs/getting_started/dev_setup.sh):

    ##
    ## Copyright (c) 2021 The Yaook Authors.
    ##
    ## This file is part of Yaook.
    ## See https://yaook.cloud for further info.
    ##
    ## Licensed under the Apache License, Version 2.0 (the "License");
    ## you may not use this file except in compliance with the License.
    ## You may obtain a copy of the License at
    ##
    ##     http://www.apache.org/licenses/LICENSE-2.0
    ##
    ## Unless required by applicable law or agreed to in writing, software
    ## distributed under the License is distributed on an "AS IS" BASIS,
    ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    ## See the License for the specific language governing permissions and
    ## limitations under the License.
    ##
    set -e -x
    
    # Set global variables. Feel free to adjust them to your needs.
    export NAMESPACE=$YAOOK_OP_NAMESPACE
    export MYPYPATH="$(pwd)/.local/mypy-stubs"
    
    # Install requirements
    pip install -r requirements-build.txt
    pip install -e .
    
    stubgen -p kubernetes_asyncio -o "$MYPYPATH"
    
    # Generate derived artifacts such as CRDs, operator deployments and the CUE schema
    make all
    
    # Create namespace if not exists
    kubectl get namespace $NAMESPACE 2>/dev/null || kubectl create namespace $NAMESPACE
    
    # Deploy helm charts for central services
    helm repo add stable https://charts.helm.sh/stable
    helm repo update
    
    # Deploy custom resources, roles for operators, openstack service deployments
    HELMFLAGS="--namespace $NAMESPACE --set operator.pythonOptimize=false --set operator.image.tag=devel"
    helm upgrade --install --namespace $NAMESPACE yaook-crds ./yaook/helm_builder/Charts/crds
    
  3. Optional: Depending on which features you still need to deploy in your Kubernetes cluster, execute the appropriate scripts:

    ./docs/getting_started/install_prometheus.sh
    ./docs/getting_started/install_ingress_controller.sh
    ./docs/getting_started/install_cert_manager.sh
    

    See the Kubernetes cluster requirements for more information.

  4. Either
    • disable ceph by setting spec:backends:ceph:enabled to False in docs/examples/{nova,cinder,glance}.yaml and spec:glanceConfig:glance_store:default_store to file in docs/examples/glance.yaml, or

    • if you want to use ceph as storage (recommended):

      Note

      In the following, we refer to the namespace in which you installed (or will install) ceph as the ROOK_CEPH_NAMESPACE namespace.

      • If you have not yet installed ceph, you can install it in the ROOK_CEPH_NAMESPACE = $YAOOK_OP_NAMESPACE namespace by executing the following script:

        ./docs/getting_started/install_rook.sh
        
      • If you already installed ceph, follow these instructions.

      You can inspect your ceph setup with the commands shown below. The first should yield the clients cinder, glance and gnocchi; the second should yield secrets for all three clients in both namespaces, although it might take a while for them to be created.
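
        kubectl -n $ROOK_CEPH_NAMESPACE get cephclients
        kubectl get --all-namespaces secrets | grep client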

  5. Set up domain-name-to-IP translation. The ports for all OpenStack services are currently bound via node ports on each k8s node. With $WORKER_IP being the IP address of one of your worker nodes, add the following line to your /etc/hosts file:

    $WORKER_IP    keystone.yaook.cloud nova.yaook.cloud
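
    If you are unsure about a worker node's IP address, you can look it up via kubectl; a minimal sketch (the jsonpath assumes the node reports an InternalIP):

    # List all nodes together with their IP addresses
    kubectl get nodes -o wide
    # Or extract the InternalIP of a specific worker node
    kubectl get node $WORKER -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'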
    
  6. Create a secret for the cert-manager:
    1. If you do not already have a key and certificate that you want to use, you can create them like so:

      openssl genrsa -out ca.key 2048
      openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=YAOOK-CA"
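
      You can sanity-check the generated CA before using it, for example:

      # Show the subject and validity period of the CA certificate
      openssl x509 -in ca.crt -noout -subject -dates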
      
    2. Create the kubernetes secret:

      kubectl -n $YAOOK_OP_NAMESPACE create secret tls root-ca --key ca.key --cert ca.crt
      
  7. Optional: If you want to interact with OpenStack from your local system, add ca.crt to the list of system-wide trusted CAs. More information on how to access the OpenStack deployment is given in Accessing the OpenStack Deployment.

  8. Create the issuers:

    kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/getting_started/ca-issuer.yaml
    kubectl -n $YAOOK_OP_NAMESPACE apply -f deploy/selfsigned-issuer.yaml
    
  9. Label the nodes to allow scheduling of the OpenStack services to k8s nodes by executing the following command once per worker:

      kubectl label node $WORKER network.yaook.cloud/neutron-ovn-agent=true network.yaook.cloud/neutron-northd=true operator.yaook.cloud/any=true compute.yaook.cloud/hypervisor=true compute.yaook.cloud/nova-any-service=true block-storage.yaook.cloud/cinder-any-service=true any.yaook.cloud/api=true infra.yaook.cloud/any=true infra.yaook.cloud/caching=true ceilometer.yaook.cloud/ceilometer-any-service=true gnocchi.yaook.cloud/metricd=true key-manager.yaook.cloud/barbican-any-service=true key-manager.yaook.cloud/barbican-keystone-listener=true heat.yaook.cloud/engine=true
    
    For neutron releases < yoga, set these additional labels: network.yaook.cloud/neutron-dhcp-agent=true, network.yaook.cloud/neutron-l3-agent=true, network.yaook.cloud/neutron-bgp-dragent=true.
    

    If you are not running on bare metal, you may need to additionally label your workers / compute hosts to prevent kvm-in-kvm failures in nova-compute:

    kubectl label node $WORKER compute.yaook.cloud/hypervisor-type=qemu
    
  10. Ensure that you have a default storage class configured by running kubectl get storageclass. We recommend choosing csi-sc-cinderplugin as the default. You can make it the default by patching it as follows:

    kubectl patch storageclass csi-sc-cinderplugin -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
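
    Afterwards, the output of kubectl get storageclass should mark csi-sc-cinderplugin with (default):

    kubectl get storageclass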
    

Deploy OpenStack

The Yaook Operators will deploy OpenStack and keep your deployment up-to-date with your configuration automatically.

Creating the Infrastructure Operator

  1. Check whether your OS fulfills the requirement regarding real-time scheduling.

  2. Start the infrastructure operator, which manages the MySQL and RabbitMQ services:

    helm upgrade --install --namespace $YAOOK_OP_NAMESPACE --set operator.pythonOptimize=false --set operator.image.tag=devel --set operator.image.pullPolicy=Always infra-operator ./yaook/helm_builder/Charts/infra-operator/
    

The infrastructure operator should now be running in a normal pod inside your cluster.
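
You can check that it came up by listing the pods; a rough sketch (the grep pattern is an assumption, the actual pod name may differ):

kubectl -n $YAOOK_OP_NAMESPACE get pods | grep infra-operator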

Initialize the service-specific Operators

Warning

These examples are for development purposes only. Do not use them for production or demonstration setups without extensive modification; they are rough collections of arbitrary settings intended to give an idea of which options exist.

Before starting the service-specific operators, you have to create their deployments. You can do so via:

kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/keystone.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/barbican.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/nova.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/neutron-ovn.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/glance.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/cinder.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/gnocchi.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/ceilometer.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/heat.yaml

Note

The neutron deployment must match the OpenStack release being installed: neutron-ovn.yaml must be used for releases >= yoga. For neutron releases < yoga, please use OVS (docs/examples/neutron-ovs.yaml).

Warning

Gateway nodes need a second network interface, which is set within the neutron deployment:

- nodeSelectors:
  - matchLabels:
      "network.yaook.cloud/neutron-ovn-agent": "true"
  bridgeConfig:
    - bridgeName: br-ex
      uplinkDevice: eth1
      openstackPhysicalNetwork: "physnet1"

You can create a dummy interface on the gateway node with the exact interface name (eth1), but beware that the OpenStack physical network won’t work as expected with a dummy interface!
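
A minimal sketch for creating such a dummy interface on the gateway node (the name must match the configured uplinkDevice, eth1 in the example above):

# Create a dummy interface named eth1 and bring it up
ip link add eth1 type dummy
ip link set eth1 up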

Now, start the other Operators locally by executing each of the following commands in a separate terminal window:

Note

Make sure you do not have an OpenRC file sourced when starting an Operator.

python3 -m yaook.op -vvv keystone run
YAOOK_KEYSTONE_OP_INTERFACE=public python3 -m yaook.op -vvv keystone_resources run
python3 -m yaook.op -vvv barbican run
python3 -m yaook.op -vvv nova run
YAOOK_NOVA_COMPUTE_OP_INTERFACE=public YAOOK_NOVA_COMPUTE_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator:devel" python3 -m yaook.op -vvv nova_compute run
python3 -m yaook.op -vvv neutron run
YAOOK_NEUTRON_OVN_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" YAOOK_NEUTRON_OVN_AGENT_OP_INTERFACE=public python3 -m yaook.op -vvv neutron_ovn run
YAOOK_NEUTRON_OVN_BGP_AGENT_OP_INTERFACE=public YAOOK_NEUTRON_OVN_BGP_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_ovn_bgp run
python3 -m yaook.op -vvv glance run
python3 -m yaook.op -vvv cinder run
python3 -m yaook.op -vvv cds run
python3 -m yaook.op -vvv gnocchi run
python3 -m yaook.op -vvv ceilometer run
python3 -m yaook.op -vvv heat run

For neutron releases < yoga, run the following operators:

YAOOK_NEUTRON_DHCP_AGENT_OP_INTERFACE=public YAOOK_NEUTRON_DHCP_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_dhcp run
YAOOK_NEUTRON_L2_AGENT_OP_INTERFACE=public python3 -m yaook.op -vvv neutron_l2 run
YAOOK_NEUTRON_L3_AGENT_OP_INTERFACE=public YAOOK_NEUTRON_L3_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_l3 run
YAOOK_NEUTRON_BGP_DRAGENT_OP_INTERFACE=public YAOOK_NEUTRON_BGP_DRAGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_bgp run

The convergence may take a while. Take the chance, pause for a moment, and watch in amazement how the operators do their job and create the OpenStack deployment for you.

Note

If you do not want to run all operators locally, you can also spawn them via helm, for instance using (for the cds operator):

helm upgrade --install --namespace $YAOOK_OP_NAMESPACE --set operator.pythonOptimize=false --set operator.image.tag=devel --set operator.image.pullPolicy=Always cds-operator ./yaook/helm_builder/Charts/cds-operator/

If you then temporarily want to run an operator locally, you must scale its deployment inside the cluster down to 0 to avoid conflicts.
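
For example, to temporarily take over the cds operator locally (assuming the deployment is named after the helm release above):

kubectl -n $YAOOK_OP_NAMESPACE scale deployment cds-operator --replicas=0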

To deploy all operators via helm you may use the following loop:

for OP_NAME in keystone keystone-resources barbican nova nova-compute neutron neutron-ovn neutron-ovn-bgp glance cinder cds gnocchi ceilometer heat; do
     helm upgrade --install --namespace $YAOOK_OP_NAMESPACE --set operator.pythonOptimize=false --set operator.image.tag=devel --set operator.image.pullPolicy=Always "$OP_NAME-operator" ./yaook/helm_builder/Charts/$OP_NAME-operator/
done

For neutron releases < yoga, please use the following operators:

neutron-bgp neutron-dhcp neutron-l2 neutron-l3

Warning: Only a single instance of each operator may run against a cluster at any time. You must not run an operator both locally and via helm at the same time.

Accessing the OpenStack Deployment

You can access the OpenStack deployment either via a pod running inside the k8s cluster or directly from your local machine.

Using a Pod running inside the cluster

This method is most reliable as it does not require you to import any CA certificate or similar.

The ConfigMap keystone-ca-certificates gets a random suffix. Before creating the Pod, you have to adjust the manifest located at tools/openstackclient.yaml and set spec.template.spec.volumes[0].configMap.name to the name of your corresponding ConfigMap. You can determine the precise name via:

kubectl get -n $YAOOK_OP_NAMESPACE ConfigMap -l "state.yaook.cloud/component=ca_certs,state.yaook.cloud/parent-name=keystone,state.yaook.cloud/parent-plural=keystonedeployments,state.yaook.cloud/parent-group=yaook.cloud"
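
A sketch for capturing the name in a shell variable (assuming exactly one matching ConfigMap):

CA_CONFIGMAP="$(kubectl get -n $YAOOK_OP_NAMESPACE configmap -l "state.yaook.cloud/component=ca_certs,state.yaook.cloud/parent-name=keystone,state.yaook.cloud/parent-plural=keystonedeployments,state.yaook.cloud/parent-group=yaook.cloud" -o jsonpath='{.items[0].metadata.name}')"
echo "$CA_CONFIGMAP"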

You can now create the pod using:

kubectl -n $YAOOK_OP_NAMESPACE apply -f tools/openstackclient.yaml

Note

This assumes that your KeystoneDeployment is called keystone. If you gave it a different name, you need to adapt the openstackclient Deployment to use a different credentials secret (whatevernameyougaveyourkeystone-admin).

To use the Pod, run:

kubectl -n $YAOOK_OP_NAMESPACE exec -it "$(kubectl -n $YAOOK_OP_NAMESPACE get pod -l app=openstackclient -o jsonpath='{ .items[0].metadata.name }')" -- bash

This will provide you with a shell. openstack is already installed and configured there.

From your local machine

This requires that you have set up /etc/hosts entries for all services, not just keystone and nova. In addition, it requires that you have the CA certificate imported in such a way that it is usable with openstack (probably by pointing REQUESTS_CA_BUNDLE at it, but make sure not to have REQUESTS_CA_BUNDLE set when running Operators, since it will break them).
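
For example, to point REQUESTS_CA_BUNDLE at the CA certificate created earlier (do this only in the shell you use for openstack, never in the one running the Operators):

export REQUESTS_CA_BUNDLE="$(pwd)/ca.crt"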

Create an openrc file and load it:

./tools/download-os-env.sh public -n $YAOOK_OP_NAMESPACE > openrc
. ./openrc

If everything is set up correctly, you should now be able to use the OpenStack cluster from your local machine.

If it does not work, please try the approach described in Using a Pod running inside the cluster before resorting to more drastic measures.

Verify basic functionality

Once you have access to the OpenStack deployment, you can run a quick smoke test, for example:

Note

The openstack volume service list command below will currently list some cinder backup services. These are expected to be down, since they do not have a storage driver configured.

openstack endpoint list
openstack volume service list
openstack compute service list
openstack network agent list

To create your first glance image, run the following:

apt install wget qemu-utils jq  # only in the container
wget -O cirros.qcow2 http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
openstack image create --file cirros.raw cirros
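
You can verify the upload afterwards:

openstack image list
openstack image show cirros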

If you encounter problems, please refer to our Common Problems page for a solution.

Using different Docker images

The Operator uses the yaook/assets/pinned_version.yml file to find the appropriate Docker image tag for the OpenStack services. To use a different image tag, update the values in the file and restart the Operator.
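
For operators running inside the cluster, such a restart can be triggered via a rollout restart; a sketch (the deployment name is an example):

kubectl -n $YAOOK_OP_NAMESPACE rollout restart deployment nova-operator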

Using an already existing rook-based ceph cluster

You can use an already existing rook-based ceph cluster in your Kubernetes cluster if you have set one up in advance. This may be the case if you created your Kubernetes cluster via managed-k8s.

Note

In the following sections, we refer to the namespace in which ceph was installed as the ROOK_CEPH_NAMESPACE namespace.

Create CephClient Secrets

Make sure you add the ceph authorization keys to ROOK_CEPH_NAMESPACE, i.e., the namespace of your rook operator. If you installed ceph with yaook/k8s this will probably be the rook-ceph namespace.

kubectl -n $ROOK_CEPH_NAMESPACE apply -f docs/examples/rook-resources.yaml

Wait for the three ceph client secrets rook-ceph-client-gnocchi, rook-ceph-client-glance and rook-ceph-client-cinder to be created.
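
A small sketch for waiting until all three secrets exist:

for CLIENT in gnocchi glance cinder; do
    until kubectl -n "$ROOK_CEPH_NAMESPACE" get secret "rook-ceph-client-$CLIENT" >/dev/null 2>&1; do
        sleep 5
    done
done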

Copy CephClient Secrets

The ceph client secrets must also be present in the $YAOOK_OP_NAMESPACE namespace. You can copy them there by repeating the following command with each of the three ceph clients gnocchi, glance and cinder as the value of CEPH_CLIENT_NAME:

CEPH_CLIENT_NAME=gnocchi sh -c 'kubectl get secret rook-ceph-client-$CEPH_CLIENT_NAME --namespace=$ROOK_CEPH_NAMESPACE -o yaml | sed -e "s/namespace: .*/namespace: $YAOOK_OP_NAMESPACE/" | kubectl apply -f -'
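
Alternatively, as a loop over all three clients (assuming ROOK_CEPH_NAMESPACE and YAOOK_OP_NAMESPACE are set in your shell):

for CEPH_CLIENT_NAME in gnocchi glance cinder; do
    kubectl get secret "rook-ceph-client-$CEPH_CLIENT_NAME" --namespace="$ROOK_CEPH_NAMESPACE" -o yaml \
        | sed -e "s/namespace: .*/namespace: $YAOOK_OP_NAMESPACE/" \
        | kubectl apply -f -
done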

Adjust Service Deployments

Additionally, you have to tell the ceph clients cinder, glance and gnocchi how to reach the mons by adjusting docs/examples/{cinder,glance,gnocchi}.yaml as follows when you create the service-specific deployments:

# For glance.yaml and gnocchi.yaml
[...]
ceph:
  keyringReference: rook-ceph-client-glance # adjust for gnocchi
  keyringUsername: glance # adjust for gnocchi
  keyringPoolname: glance-pool # only for glance
  cephConfig:
    global:
      "mon_host": "rook-ceph-mon-a.rook-ceph:6789,rook-ceph-mon-b.rook-ceph:6789,rook-ceph-mon-c.rook-ceph:6789"
[...]

# For cinder.yaml
[...]
backends:
  - name: ceph
    rbd:
      keyringReference: rook-ceph-client-cinder
      keyringUsername: cinder
      cephConfig:
        "mon_host": "rook-ceph-mon-a.rook-ceph:6789,rook-ceph-mon-b.rook-ceph:6789,rook-ceph-mon-c.rook-ceph:6789"
[...]

Untaint Storage Nodes

If you tainted your storage nodes, you may want to untaint them before starting the operators:

# Depending on the used label, this command may vary
kubectl taint nodes $STORAGE_WORKER node-restriction.kubernetes.io/cah-managed-k8s-role=storage:NoSchedule-

Updating the Development Setup

  1. When updating your development setup, first do a git pull.

  2. Update the CRDs and roles via make k8s_apply_crds and make k8s_apply_roles.

  3. Update the appropriate operators running inside your k8s cluster. You can do so by restarting them (if they point to the devel image tag); see the sketch after this list.

  4. Update the deployed custom resources, either by running make k8s_apply_examples or (if you changed the resources in the cluster) by manually examining the individual diffs.

  5. In rare cases, additional external dependencies might have been introduced. Check the dev_setup.sh and install_*.sh scripts from above.

  6. Restart all of your operators running outside the cluster.
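
A sketch for restarting the in-cluster operators mentioned in step 3 (assuming their deployments follow the <name>-operator naming scheme):

kubectl -n $YAOOK_OP_NAMESPACE get deployments -o name \
    | grep -- '-operator' \
    | xargs -n 1 kubectl -n $YAOOK_OP_NAMESPACE rollout restart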

Removing the Development Setup

Warning

This action cannot be undone. Following these instructions will remove the development setup, i.e., all OpenStack resources you might have created in the OpenStack deployment, the OpenStack deployment itself, and all Kubernetes resources in the $YAOOK_OP_NAMESPACE namespace of your Kubernetes cluster.

To remove the development setup, first stop the operators, then execute the following command:

make k8s_clean

If you installed a feature using any of the docs/getting_started/install_*.sh scripts, you can use the corresponding script to uninstall it:

./docs/getting_started/uninstall_prometheus.sh
./docs/getting_started/uninstall_ingress_controller.sh
./docs/getting_started/uninstall_cert_manager.sh
./docs/getting_started/uninstall_rook.sh

Note

If you deployed the Rook Ceph cluster based on docs/examples/rook-cluster.yaml, you might need to perform additional cleanup steps on your nodes:

# on the nodes formerly carrying the OSDs
lvchange -an /dev/ceph-...
lvremove /dev/ceph-...
dd if=/dev/zero of=/dev/sdX bs=1M count=100

# on all nodes
rm -rf /var/lib/rook

Where /dev/ceph-... and /dev/sdX are the LVM device and disk partition of the Ceph OSD drive respectively.

As soon as all resources are deleted from the $YAOOK_OP_NAMESPACE namespace, you can also go ahead and delete the namespace itself:

kubectl delete namespace $YAOOK_OP_NAMESPACE

If the previous command does not complete, some resources are still present in the namespace. You can check which ones by running:

kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl -n $YAOOK_OP_NAMESPACE get --show-kind --ignore-not-found