Development Setup

This page guides you through the setup of a development environment for yaook.

General Requirements

We assume the following requirements are already met:

  1. a clone of the repository

  2. a virtualenv with Python 3.8

  3. access to a Kubernetes cluster via the default ~/.kube/config, or alternatively via the KUBECONFIG environment variable

    • If Ceph OSDs are to be deployed on the Kubernetes hosts, lvm2 needs to be installed on them.

  4. the kubectl and helm3 binaries in your PATH

  5. cue. Install with GO111MODULE=on go get cuelang.org/go/cmd/cue@v0.4.0 (or adjust to a later version).

  6. GNU make installed
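
To quickly check requirements 3 to 6, you can run the following commands; each should succeed and print a version or a node list:

kubectl version --client
helm version
cue version
make --version
kubectl get nodes   # verifies access to the cluster via your kubeconfig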

Preparation of Environment and Kubernetes Resources

  1. Optional: Disable Ceph by setting spec:backends:ceph:enabled to False in docs/examples/{nova,cinder,glance}.yaml and spec:glanceConfig:glance_store:default_store to file in docs/examples/glance.yaml, as sketched below.
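
    For illustration, the relevant part of docs/examples/glance.yaml would then look roughly like this (a sketch; the rest of the file stays unchanged):

    spec:
      backends:
        ceph:
          enabled: False
      glanceConfig:
        glance_store:
          default_store: file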

  2. Set the environment variables:

    export YAOOK_OP_NAMESPACE="yaook" # used to determine which namespaces are relevant for the operator
    export YAOOK_OP_VERSIONS_USE_ALPHA=true # allows the operator to use the latest versions of dependencies (alpha releases)
    export YAOOK_OP_VERSIONS_USE_ROLLING=true # allows the operator to use the latest versions of dependencies (rolling releases)
    
  3. Set up domain-name-to-IP translation. The ports for all OpenStack services are currently bound via node ports on each Kubernetes node. With $WORKER_IP being the IP address of one of your worker nodes, add the following line to your /etc/hosts file:

    $WORKER_IP    keystone.yaook.cloud nova.yaook.cloud
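
    If you prefer to script this, one way (a sketch, assuming a single worker node and sudo rights) is to read the address of the first node and append the line:

    WORKER_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
    echo "$WORKER_IP    keystone.yaook.cloud nova.yaook.cloud" | sudo tee -a /etc/hosts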
    
  4. Execute the following script (located in docs/getting_started/dev_setup.sh) to create the Kubernetes resources:

    set -e -x
    
    # Set global variables. Feel free to adjust them to your needs.
    export NAMESPACE=$YAOOK_OP_NAMESPACE
    export MYPYPATH="$(pwd)/.local/mypy-stubs"
    
    # Install requirements
    pip install -e .
    pip install -r requirements-build.txt
    
    stubgen -p kubernetes_asyncio -o "$MYPYPATH"
    
    # Generate default policy files for the openstack components.
    # TODO: For now, fixed default policy files for Queens are used
    # from yaook/op/$os_component/static/default_policies.yaml.
    # As soon as we support deploying other OpenStack releases,
    # the next line should be uncommented and the default policy
    # files will be located at
    # yaook/op/$os_component/generated/default_policies.yaml.
    # ./generate_default_policies.py
    
    # Generate all derived artifacts such as CRDs, operator deployments and the cuelang schema
    make all
    
    # Create namespace
    kubectl create namespace $NAMESPACE
    
    # Deploy helm charts for central services
    helm repo add stable https://charts.helm.sh/stable
    helm repo add rook-release https://charts.rook.io/release
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    
    helm upgrade --install --namespace $NAMESPACE \
      --version v1.4.9 \
      rook-ceph rook-release/rook-ceph
    helm upgrade --install --namespace $NAMESPACE ingress-nginx \
      --set "controller.service.type=NodePort,controller.service.nodePorts.http=32080,controller.service.nodePorts.https=32443,controller.extraArgs.enable-ssl-passthrough=true" \
      ingress-nginx/ingress-nginx
    helm upgrade --install --namespace $NAMESPACE cert-manager \
      --set "installCRDs=true" \
      jetstack/cert-manager
    
    # Deploy custom resources
    make k8s_apply_crds
    
    for i in deploy/*-issuer.yaml; do kubectl -n $NAMESPACE apply -f $i; done
    
    # Deploy roles for operators
    make k8s_apply_roles
    
    # Deploy a ceph cluster
    kubectl -n $NAMESPACE apply -f docs/examples/rook-cluster.yaml
    
    # Deploy the ceph toolbox
    kubectl -n $NAMESPACE apply -f docs/getting_started/ceph-toolbox.yaml
    
    # Create a CA for the cert-manager
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=YAOOK-CA"
    kubectl -n $NAMESPACE create secret tls root-ca --key ca.key --cert ca.crt
    kubectl -n $NAMESPACE apply -f - <<'EOF'
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: ca-issuer
    spec:
      ca:
        secretName: root-ca
    EOF
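
    Afterwards you can watch the Ceph cluster converge, for example via the toolbox (a sketch; it assumes the Deployment created from ceph-toolbox.yaml is named rook-ceph-tools, so check kubectl -n $YAOOK_OP_NAMESPACE get deploy if the name differs):

    kubectl -n $YAOOK_OP_NAMESPACE exec -it deploy/rook-ceph-tools -- ceph status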
    
  5. If you want to interact with OpenStack from your local system: Add ca.crt to the list of system-wide trusted CAs.
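
    For example, on Debian/Ubuntu-based systems this can be done as follows (a sketch; other distributions use different paths and tools):

    sudo cp ca.crt /usr/local/share/ca-certificates/yaook-ca.crt
    sudo update-ca-certificates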

  6. Label the Kubernetes nodes to allow scheduling of the OpenStack services onto them by executing the following command once per worker node:

    kubectl label node $WORKER operator.yaook.cloud/any=true network.yaook.cloud/neutron-dhcp-agent=true network.yaook.cloud/neutron-l3-agent=true compute.yaook.cloud/hypervisor=true compute.yaook.cloud/nova-any-service=true block-storage.yaook.cloud/cinder-any-service=true any.yaook.cloud/api=true infra.yaook.cloud/any=true ceilometer.yaook.cloud/ceilometer-any-service=true gnocchi.yaook.cloud/metricd=true
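
    If you want to label all nodes at once, a loop like the following can help (a sketch; narrow the node selection if some nodes should not run OpenStack services):

    for WORKER in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
      kubectl label node "$WORKER" operator.yaook.cloud/any=true network.yaook.cloud/neutron-dhcp-agent=true network.yaook.cloud/neutron-l3-agent=true compute.yaook.cloud/hypervisor=true compute.yaook.cloud/nova-any-service=true block-storage.yaook.cloud/cinder-any-service=true any.yaook.cloud/api=true infra.yaook.cloud/any=true ceilometer.yaook.cloud/ceilometer-any-service=true gnocchi.yaook.cloud/metricd=true
    done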
    
  7. Ensure that you have a default storage class configured. If you created your cluster with managed-k8s, then you can do:

    kubectl patch storageclass csi-sc-cinderplugin -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
    

    Otherwise, you’ll have to check which storage classes are available in your cluster with kubectl get storageclass and find out which of them qualifies as the default storage class.
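
    kubectl marks the current default with "(default)" next to the class name in that output. Once you have picked a class, the same patch as above works with its name; $STORAGE_CLASS_NAME below is a placeholder:

    kubectl patch storageclass $STORAGE_CLASS_NAME -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'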

  8. Add a GitLab private token to Kubernetes to allow pulling of the images. Create a GitLab token at https://gitlab.com/-/profile/personal_access_tokens and replace $TOKEN with it in the following command. Which email address you use as $EMAIL_ADDRESS does not matter, but $USERNAME must be your GitLab username.

    kubectl -n $YAOOK_OP_NAMESPACE create secret docker-registry regcred --docker-server=registry.gitlab.com --docker-username=$USERNAME --docker-password=$TOKEN --docker-email=$EMAIL_ADDRESS
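
    You can verify that the secret was created with:

    kubectl -n $YAOOK_OP_NAMESPACE get secret regcred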
    
  9. If you did not disable Ceph above, add Ceph authorization keys for glance and cinder:

    kubectl -n $YAOOK_OP_NAMESPACE apply -f ci/devel_integration_tests/deploy/rook-resources.yaml
    

    Verify with kubectl -n $YAOOK_OP_NAMESPACE get cephclients, which should list the clients cinder and glance.

Deploy OpenStack

The yaook operators will deploy OpenStack and keep your deployment up-to-date with your configuration. To determine the newest available version of the Docker images in the GitLab registry, they rely on the environment variable YAOOK_OP_DOCKER_CONFIG, which must point to a Docker config file containing your credentials for the registry. If needed, you can create the config file by saving the output of the following command into a file:

echo "{\"auths\": {\"registry.gitlab.com\": {\"auth\": \"$(printf '%s:%s' "$USERNAME" "$TOKEN" | base64)\"}}}"

where $USERNAME is your GitLab username and $TOKEN is the GitLab registry token mentioned above. (Alternatively, this information is stored in ~/.docker/config.json automatically when you run docker login registry.gitlab.com -u "$USERNAME" -p "$TOKEN".)

Assuming your credentials are now located in the file $DOCKER_CONFIG_FILE_PATH, you can set the environment variable as follows:

export YAOOK_OP_DOCKER_CONFIG=$DOCKER_CONFIG_FILE_PATH
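
Putting both steps together, a minimal sketch (the file name gitlab-docker-config.json is an arbitrary choice):

printf '{"auths": {"registry.gitlab.com": {"auth": "%s"}}}' "$(printf '%s:%s' "$USERNAME" "$TOKEN" | base64)" > gitlab-docker-config.json
export YAOOK_OP_DOCKER_CONFIG="$(pwd)/gitlab-docker-config.json"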

To start the infrastructure operator which manages the MySQL and RabbitMQ services, execute the following:

kubectl apply -n "$YAOOK_OP_NAMESPACE" -f deploy/00-infra-operator.yaml

The operator should then be running in a normal pod in your cluster.
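
One way to check that it came up (a sketch, assuming the pods created from 00-infra-operator.yaml carry "infra" in their name):

kubectl -n "$YAOOK_OP_NAMESPACE" get pods | grep -i infra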

Now, start the other operators by executing each of the following commands in a separate terminal window:

python3 -m yaook.op -vvv keystone run
YAOOK_KEYSTONE_OP_INTERFACE=public python3 -m yaook.op -vvv keystone_resources run
python3 -m yaook.op -vvv barbican run
python3 -m yaook.op -vvv nova run
YAOOK_NOVA_COMPUTE_OP_INTERFACE=public YAOOK_NOVA_COMPUTE_OP_JOB_IMAGE="registry.gitlab.com/yaook/operator/operator:devel" python3 -m yaook.op -vvv nova_compute run
python3 -m yaook.op -vvv neutron run
python3 -m yaook.op -vvv glance run
python3 -m yaook.op -vvv cinder run
python3 -m yaook.op -vvv cds run
python3 -m yaook.op -vvv gnocchi run
python3 -m yaook.op -vvv ceilometer run

Accessing the OpenStack Deployment

There are two ways to access the OpenStack deployment:

  1. Using a Pod running inside the cluster.

    This method is most reliable as it does not require you to import any CA certificate or similar.

    You can create the pod using:

    kubectl -n $YAOOK_OP_NAMESPACE apply -f tools/openstackclient.yaml
    

    Note

    This assumes that your KeystoneDeployment is called keystone. If you gave it a different name, you need to adapt the openstackclient Deployment to use a different credentials secret (whatevernameyougaveyourkeystone-admin).

    To use the pod, run:

    kubectl -n $YAOOK_OP_NAMESPACE exec -it "$(kubectl -n $YAOOK_OP_NAMESPACE get pod -l app=openstackclient -o jsonpath='{ .items[0].metadata.name }')" -- bash
    

    This will provide you with a shell. openstack is already installed and configured there.

  2. From your local machine

    This requires that you have set up /etc/hosts entries for all services, not just keystone and nova. In addition, it requires that you have the CA certificate imported in such a way that it is usable with openstack (probably by pointing REQUESTS_CA_BUNDLE at it, but make sure not to have REQUESTS_CA_BUNDLE set when running operators, since it will break them).
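
    For example, assuming the ca.crt created during the setup is still in your working directory, a minimal sketch:

    export REQUESTS_CA_BUNDLE="$(pwd)/ca.crt"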

    Create an openrc file and load it:

    ./tools/download-os-env.sh public -n $YAOOK_OP_NAMESPACE > openrc
    . ./openrc
    

    If everything is set up correctly, you should now be able to use the OpenStack cluster from your local machine.

    If it does not work, please try the Pod approach above before resorting to more drastic measures.

Once you have access to the cluster, you can, for example, run a quick smoke test:

openstack endpoint list
openstack volume service list
openstack compute service list
openstack network agent list

or create your first glance image:

dnf install wget qemu-img jq  # only in the container
wget -O cirros.qcow2 http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
openstack image create --file cirros.raw cirros

Using different docker images

The operator uses the yaook/assets/pinned_version.yml file to find the appropriate Docker image tag for the OpenStack services. To use a different image tag, update the value in the file and restart the operator.

Removing the Development Setup

CAUTION: This action cannot be undone. Following these instructions will remove the development setup, i.e., all OpenStack resources you might have created in the OpenStack deployment, the OpenStack deployment itself, and all Kubernetes resources in the $YAOOK_OP_NAMESPACE namespace on your Kubernetes cluster.

To remove the development setup, execute the following commands:

make k8s_clean
./tools/strip-finalizers.sh cephclusters -n $YAOOK_OP_NAMESPACE
kubectl -n $YAOOK_OP_NAMESPACE delete cephclusters --all
./tools/strip-finalizers.sh cephblockpools -n $YAOOK_OP_NAMESPACE
kubectl -n $YAOOK_OP_NAMESPACE delete cephblockpools --all
kubectl delete namespace $YAOOK_OP_NAMESPACE

If needed, you can verify that all resources in the $YAOOK_OP_NAMESPACE namespace have been deleted by executing the following command, which lists all remaining resources in that namespace.

kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl -n $YAOOK_OP_NAMESPACE get --show-kind --ignore-not-found