Development Setup

This page guides you through the setup of a development environment for Yaook.

General Requirements

Please ensure that the following requirements are met:

  • The operator repository has been cloned.

  • A virtual environment has been set up with Python >= v3.8.0 (a minimal setup sketch follows this list).

  • You have access to a Kubernetes cluster

    • either via the default ~/.kube/config, or alternatively

    • using the KUBECONFIG environment variable.

  • The kubectl and helm binaries are in your path.

  • CUE is installed. You can install it via:

    # Install cue (version may be adjusted)
    GO111MODULE=on go get cuelang.org/go/cmd/cue@v0.4.0
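    # Note: with Go 1.17 and newer, `go get` no longer installs binaries;
    # the equivalent command is: go install cuelang.org/go/cmd/cue@v0.4.0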
    
  • GNU make is installed.

  • The prerequisites of the mysqlclient pip package are met.

  • Optional: If ceph OSDs should be deployed on a k8s host, lvm2 needs to be installed on it.
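
A minimal sketch covering the virtual environment and the mysqlclient build prerequisites could look as follows on a Debian/Ubuntu host (the package names are assumptions and differ per distribution):

# mysqlclient build prerequisites (Debian/Ubuntu package names, assumed)
sudo apt-get install -y python3-dev python3-venv build-essential pkg-config default-libmysqlclient-dev

# Create and activate the virtual environment
python3 -m venv .venv
. .venv/bin/activate
python3 --version   # should report 3.8.0 or newer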

Preparation of Environment and Kubernetes Resources

Warning

Pod Security Policies (PSPs) are not supported (they have been deprecated as of Kubernetes v1.21). Their enforcement needs to be disabled in your Kubernetes cluster (please also refer to the Kubernetes API Requirements).

  1. Optional: Disable ceph by setting spec:backends:ceph:enabled to False in docs/examples/{nova,cinder,glance}.yaml and spec:glanceConfig:glance_store:default_store to file in docs/examples/glance.yaml.

  2. Optional: If you have already setup rook on your Kubernetes cluster and want to use the existing ceph cluster, please refer to section: Using an already existing rook-based ceph cluster.

  3. Set the Environment Variables:

    # Used to determine which namespaces are relevant for the operator
    export YAOOK_OP_NAMESPACE="yaook"
    # Allows the operator to use the latest versions of dependencies (alpha releases)
    export YAOOK_OP_VERSIONS_USE_ALPHA=true
    # Allows the operator to use the latest versions of dependencies (rolling releases)
    export YAOOK_OP_VERSIONS_USE_ROLLING=true
    # If you are coming from managed-k8s, you need to set this too
    export YAOOK_OP_CLUSTER_DOMAIN="cluster.local"
    
  4. Set up domain name to IP translation. The ports for all the OpenStack services are currently bound via node ports on each k8s node. With $WORKER_IP being the IP address of one of your worker nodes, add the following line to your /etc/hosts file:

    $WORKER_IP    keystone.yaook.cloud nova.yaook.cloud
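
    If you are unsure which address to use as $WORKER_IP, you can look it up via kubectl, for example (this assumes the node's InternalIP is reachable from your machine):

    kubectl get nodes -o wide
    # or, for a single node:
    kubectl get node $WORKER -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'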
    
  5. Execute the following steps to create the Kubernetes resources required (bash script available at docs/getting_started/dev_setup.sh):

    ##
    ## Copyright (c) 2021 The Yaook Authors.
    ##
    ## This file is part of Yaook.
    ## See https://yaook.cloud for further info.
    ##
    ## Licensed under the Apache License, Version 2.0 (the "License");
    ## you may not use this file except in compliance with the License.
    ## You may obtain a copy of the License at
    ##
    ##     http://www.apache.org/licenses/LICENSE-2.0
    ##
    ## Unless required by applicable law or agreed to in writing, software
    ## distributed under the License is distributed on an "AS IS" BASIS,
    ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    ## See the License for the specific language governing permissions and
    ## limitations under the License.
    ##
    set -e -x
    
    # Set global variables. Feel free to adjust them to your needs.
    export NAMESPACE=$YAOOK_OP_NAMESPACE
    export MYPYPATH="$(pwd)/.local/mypy-stubs"
    
    # Install requirements
    pip install -e .
    pip install -r requirements-build.txt
    
    stubgen -p kubernetes_asyncio -o "$MYPYPATH"
    
    # Generate default policy files for the openstack components.
    # TODO: For now, fixed default policy files for queens are used
    # under yaook/op/$os_component/static/default_policies.yaml.
    # As soon as we support deploying other releases of openstack,
    # the next line should be uncommented and
    # the location of the default policy files will be
    # yaook/op/$os_component/generated/default_policies.yaml.
    # ./generate_default_policies.py
    
    # Generate all derived artifacts such as the CRDs, operator deployments and the cuelang schema
    make all
    
    # Create namespace
    kubectl create namespace $NAMESPACE
    
    # Deploy helm charts for central services
    helm repo add stable https://charts.helm.sh/stable
    helm repo add rook-release https://charts.rook.io/release
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    
    helm upgrade --install --namespace $NAMESPACE \
      --version v1.4.9 \
      rook-ceph rook-release/rook-ceph
    helm upgrade --install --namespace $NAMESPACE ingress-nginx \
      --set "controller.service.type=NodePort,controller.service.nodePorts.http=32080,controller.service.nodePorts.https=32443,controller.extraArgs.enable-ssl-passthrough=true" \
      --version 3.35.0 \
      ingress-nginx/ingress-nginx
    helm upgrade --install --namespace $NAMESPACE cert-manager \
      --set "installCRDs=true" \
      --version 1.4.2 \
      jetstack/cert-manager
    
    # Deploy custom resources, roles for operators, openstack service deployments and ceph cluster
    make k8s_deploy
    
    # Deploy the ceph toolbox
    kubectl -n $NAMESPACE apply -f docs/getting_started/ceph-toolbox.yaml
    
    # Create a CA for the cert-manager
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=YAOOK-CA"
    kubectl -n $NAMESPACE create secret tls root-ca --key ca.key --cert ca.crt
    kubectl -n $NAMESPACE apply -f - <<'EOF'
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: ca-issuer
    spec:
      ca:
        secretName: root-ca
    EOF
    
  6. If you want to interact with OpenStack from your local system: Add the freshly created ca.crt to the list of system-wide trusted CAs, for example as shown below. More information on how to access the OpenStack deployment is given in Accessing the OpenStack Deployment.
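
    On a Debian/Ubuntu-based system, this could for example look like the following (paths and tooling differ per distribution):

    sudo cp ca.crt /usr/local/share/ca-certificates/yaook-ca.crt
    sudo update-ca-certificates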

  7. Label the nodes to allow scheduling of the OpenStack services to k8s nodes by executing the following command - once per worker:

    kubectl label node $WORKER \
      operator.yaook.cloud/any=true \
      network.yaook.cloud/neutron-dhcp-agent=true \
      network.yaook.cloud/neutron-l3-agent=true \
      network.yaook.cloud/neutron-bgp-dragent=true \
      compute.yaook.cloud/hypervisor=true \
      compute.yaook.cloud/nova-any-service=true \
      block-storage.yaook.cloud/cinder-any-service=true \
      any.yaook.cloud/api=true \
      infra.yaook.cloud/any=true \
      ceilometer.yaook.cloud/ceilometer-any-service=true \
      gnocchi.yaook.cloud/metricd=true \
      key-manager.yaook.cloud/barbican-any-service=true \
      key-manager.yaook.cloud/barbican-keystone-listener=true
    

    If you are not running on bare metal, you may need to additionally label your workers / compute hosts to prevent kvm-in-kvm failures in nova-compute:

    kubectl label node $WORKER compute.yaook.cloud/hypervisor-type=qemu
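
    To verify the result, you can inspect the labels of a node, e.g.:

    kubectl get node $WORKER --show-labels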
    
  8. Ensure that you have a default storage class configured. If you created your cluster with managed-k8s, then you can do:

    kubectl patch storageclass csi-sc-cinderplugin -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
    

    Otherwise, you’ll have to check which storage classes are available in your cluster via kubectl get storageclass and find out which of them qualifies as the default storage class.

  9. Add a GitLab private token to Kubernetes to allow pulling the images. Create a GitLab token with at least read_registry scope at GitLab - Personal Access Tokens, and replace $TOKEN with it in the following command. Which email address you use as $EMAIL_ADDRESS is not important, but $USERNAME must be your GitLab username.

    kubectl -n $YAOOK_OP_NAMESPACE create secret docker-registry regcred --docker-server=registry.gitlab.com --docker-username=$USERNAME --docker-password=$TOKEN --docker-email=$EMAIL_ADDRESS
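
    If you want to double-check the created secret, you can decode it again, e.g.:

    kubectl -n $YAOOK_OP_NAMESPACE get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d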
    
  10. If you did not disable ceph above, add ceph authorization keys for glance and cinder:

    kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/rook-resources.yaml
    

    You can verify this with kubectl -n $YAOOK_OP_NAMESPACE get cephclients which should yield the clients cinder and glance.

Deploy OpenStack

The Yaook Operators will deploy OpenStack and keep your deployment up-to-date with your configuration automatically.

Setting up GitLab registry access

To be able to determine the newest available version of the Docker images from the GitLab registry, the Operators rely on the environment variable YAOOK_OP_DOCKER_CONFIG pointing to a Docker config file containing your credentials for the registry. If you need to, you can create the config file by saving the output of the following command into a file:

echo "{\"auths\": {\"registry.gitlab.com\": {\"auth\": \"$(printf '%s:%s' "$USERNAME" "$TOKEN" | base64)\"}}}"

where $USERNAME is your GitLab username and $TOKEN is the GitLab registry token mentioned above. Alternatively, the information is stored automatically in ~/.docker/config.json when you run docker login registry.gitlab.com -u "$USERNAME" -p "$TOKEN".
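
For example, to store the output of the echo command above in a file (the file name docker-config.json is only an example):

echo "{\"auths\": {\"registry.gitlab.com\": {\"auth\": \"$(printf '%s:%s' "$USERNAME" "$TOKEN" | base64)\"}}}" > docker-config.json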

Assuming your credentials are now located in the file $DOCKER_CONFIG_FILE_PATH, you can set the environment variable as follows:

export YAOOK_OP_DOCKER_CONFIG=$DOCKER_CONFIG_FILE_PATH

Creating the infrastructure Operator

First, check whether you are running a kernel built with CONFIG_RT_GROUP_SCHED enabled:

grep CONFIG_RT_GROUP_SCHED /boot/config-$(uname -r)

If the option is enabled (the output contains CONFIG_RT_GROUP_SCHED=y), deploy the following workaround for this MySQL-related issue. Otherwise, you can skip this step.

kubectl -n $YAOOK_OP_NAMESPACE apply -f ci/devel_integration_tests/deploy/realtime-hack.yaml

To start the infrastructure Operator, which manages the MySQL and RabbitMQ services, execute the following:

kubectl apply -n "$YAOOK_OP_NAMESPACE" -f deploy/00-infra-operator.yaml

The infrastructure Operator should then come up as a regular Pod in your cluster.
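
You can verify that it is up, for example via the following command (the exact Pod name depends on the Deployment defined in deploy/00-infra-operator.yaml, so the grep pattern below is only a guess):

kubectl -n "$YAOOK_OP_NAMESPACE" get pods | grep -i infra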

Initialize the service-specific Operators

Before starting the service-specific Operators, you have to create the corresponding Deployment resources. You can create them via:

kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/keystone.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/barbican.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/nova.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/neutron.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/glance.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/cinder.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/gnocchi.yaml
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/ceilometer.yaml

Now, start the other Operators by executing each of the following commands in a separate terminal window:

Note

Make sure you do not have an OpenRC file sourced when starting an Operator.

python3 -m yaook.op -vvv keystone run
YAOOK_KEYSTONE_OP_INTERFACE=public python3 -m yaook.op -vvv keystone_resources run
python3 -m yaook.op -vvv barbican run
python3 -m yaook.op -vvv nova run
YAOOK_NOVA_COMPUTE_OP_INTERFACE=public YAOOK_NOVA_COMPUTE_OP_JOB_IMAGE="registry.gitlab.com/yaook/operator/operator:devel" python3 -m yaook.op -vvv nova_compute run
python3 -m yaook.op -vvv neutron run
python3 -m yaook.op -vvv glance run
python3 -m yaook.op -vvv cinder run
python3 -m yaook.op -vvv cds run
python3 -m yaook.op -vvv gnocchi run
python3 -m yaook.op -vvv ceilometer run

The convergence may take a while. Take the chance, pause for a moment, and watch in amazement how the Operators do their job and create the OpenStack deployment for you.
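
If you want to follow the progress, you can for example watch the Pods and the KeystoneDeployment resource (the other *Deployment resources can be inspected analogously):

kubectl -n $YAOOK_OP_NAMESPACE get keystonedeployments
kubectl -n $YAOOK_OP_NAMESPACE get pods -w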

Accessing the OpenStack Deployment

You can access the OpenStack deployment either via a Pod running inside the k8s cluster or directly from your local machine.

Using a Pod running inside the cluster

This method is most reliable as it does not require you to import any CA certificate or similar.

The ConfigMap keystone-ca-certificates gets a random suffix. You have to adjust the manifest located at tools/openstackclient.yaml and set your corresponding ConfigMap name before creating the Pod. You can determine the precise name via:

kubectl get -n $YAOOK_OP_NAMESPACE ConfigMap -l "state.yaook.cloud/component=ca_certs,state.yaook.cloud/parent-name=keystone,state.yaook.cloud/parent-plural=keystonedeployments,state.yaook.cloud/parent-group=yaook.cloud"
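
If you only need the name itself, a jsonpath variant of the same query can be handy, e.g.:

kubectl get -n $YAOOK_OP_NAMESPACE ConfigMap -l "state.yaook.cloud/component=ca_certs,state.yaook.cloud/parent-name=keystone,state.yaook.cloud/parent-plural=keystonedeployments,state.yaook.cloud/parent-group=yaook.cloud" -o jsonpath='{.items[0].metadata.name}'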

You can now create the Pod using:

kubectl -n $YAOOK_OP_NAMESPACE apply -f tools/openstackclient.yaml

Note

This assumes that your KeystoneDeployment is called keystone. If you gave it a different name, you need to adapt the openstackclient Deployment to use a different credentials secret (whatevernameyougaveyourkeystone-admin).

To use the Pod, run:

kubectl -n $YAOOK_OP_NAMESPACE exec -it "$(kubectl -n $YAOOK_OP_NAMESPACE get pod -l app=openstackclient -o jsonpath='{ .items[0].metadata.name }')" -- bash

This will provide you with a shell. openstack is already installed and configured there.

From your local machine

This requires that you have set up /etc/hosts entries for all services, not just keystone and nova. In addition, it requires that you have the CA certificate imported in such a way that it is usable with openstack (probably by pointing REQUESTS_CA_BUNDLE at it, but make sure not to have REQUESTS_CA_BUNDLE set when running Operators, since it will break them).
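
A minimal sketch, assuming the additional services follow the same host name pattern as keystone and nova in step 4 (the exact list depends on which services you deployed):

# /etc/hosts (host names besides keystone and nova are assumptions following the same pattern)
$WORKER_IP    keystone.yaook.cloud nova.yaook.cloud glance.yaook.cloud cinder.yaook.cloud neutron.yaook.cloud barbican.yaook.cloud gnocchi.yaook.cloud

# Point openstack at the CA created earlier; unset this again before running Operators
export REQUESTS_CA_BUNDLE="$PWD/ca.crt"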

Create an openrc file and load it:

./tools/download-os-env.sh public -n $YAOOK_OP_NAMESPACE > openrc
. ./openrc

If everything is set up correctly, you should now be able to use the OpenStack cluster from your local machine.

If it does not work, please try the approach described in Using a Pod running inside the cluster before resorting to more drastic measures.

Verify basic functionality

Once you have access to the OpenStack deployment, you can, for example, run a quick smoke test:

openstack endpoint list
openstack volume service list
openstack compute service list
openstack network agent list

or create your first glance image:

dnf install wget qemu-img jq  # only in the container
wget -O cirros.qcow2 http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
openstack image create --file cirros.raw cirros

Using different Docker images

The Operator uses the yaook/assets/pinned_version.yml file to find the appropriate Docker image tag for the OpenStack services. To use a different image tag, update the values in the file and restart the Operator.

Using an already existing rook-based ceph cluster

You can also use an already existing rook-based ceph cluster in your Kubernetes cluster, if you have set one up in advance. This may be the case if you created your Kubernetes cluster via managed-k8s.

If you want to use an existing ceph cluster, ensure that you have skipped the installation of the rook-ceph Operator via the helm chart as well as the creation of the ceph cluster and the ceph toolbox in the Development Setup Script.

Make sure you add the ceph authorization keys to the namespace of your rook Operator (probably rook-ceph) when you apply the rook resources. Copy the created secrets to the Yaook namespace via the following commands (you may have to adjust them):

kubectl get secret rook-ceph-client-gnocchi --namespace=rook-ceph -o yaml | sed -e 's/namespace: .*/namespace: yaook/' -e 's/name: rook-ceph-client-gnocchi/name: gnocchi-client-key/' | kubectl apply -f -
kubectl get secret rook-ceph-client-glance --namespace=rook-ceph -o yaml | sed -e 's/namespace: .*/namespace: yaook/' -e 's/name: rook-ceph-client-glance/name: glance-client-key/' | kubectl apply -f -
kubectl get secret rook-ceph-client-cinder --namespace=rook-ceph -o yaml | sed -e 's/namespace: .*/namespace: yaook/' -e 's/name: rook-ceph-client-cinder/name: cinder-client-key/' | kubectl apply -f -

Additionally, you have to tell cinder, glance and gnocchi how to reach the mons by adjusting docs/examples/{cinder,glance,gnocchi}.yaml as shown below when you create the service-specific Deployments:

# For glance.yaml and gnocchi.yaml
[...]
ceph:
  keyringReference: glance-client-key #adjust for gnocchi
  keyringUsername: glance # adjust for gnocchi
  cephConfig:
    global:
      "mon host": "rook-ceph-mon-a.rook-ceph:6789,rook-ceph-mon-b.rook-ceph:6789,rook-ceph-mon-c.rook-ceph:6789"
[...]

# For cinder.yaml
[...]
backends:
  - name: ceph
    rbd:
      keyringReference: cinder-client-key
      keyringUsername: cinder
      cephConfig:
        "mon host": "rook-ceph-mon-a.rook-ceph:6789,rook-ceph-mon-b.rook-ceph:6789,rook-ceph-mon-c.rook-ceph:6789"
[...]

If you tainted your storage nodes, you may want to untaint them before starting the operators:

# Depending on the used label, this command may vary
kubectl taint nodes $STORAGE_WORKER node-restriction.kubernetes.io/cah-managed-k8s-role=storage:NoSchedule-

Removing the Development Setup

Warning

This action cannot be undone. Following these instructions will remove the development setup: all OpenStack resources you might have created in the OpenStack deployment, the OpenStack deployment itself (which you deployed on your Kubernetes cluster using Yaook), as well as all Kubernetes resources in the $YAOOK_OP_NAMESPACE namespace of your Kubernetes cluster.

To remove the development setup, execute the following commands:

make k8s_clean
./tools/strip-finalizers.sh cephclusters -n $YAOOK_OP_NAMESPACE
kubectl -n $YAOOK_OP_NAMESPACE delete cephclusters --all
./tools/strip-finalizers.sh cephblockpools -n $YAOOK_OP_NAMESPACE
kubectl -n $YAOOK_OP_NAMESPACE delete cephblockpools --all
kubectl delete namespace $YAOOK_OP_NAMESPACE

If needed, you can verify that all resources in the $YAOOK_OP_NAMESPACE namespace have been deleted by executing the following command. It lists all existing resources in the $YAOOK_OP_NAMESPACE namespace.

kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl -n $YAOOK_OP_NAMESPACE get --show-kind --ignore-not-found