####################
Development Setup
####################

The following page will guide you through the setup of a development environment for Yaook.

.. warning::

   This documentation is **for a development setup only!**
   Do **not** use this, without extra thought and knowing what you're doing, to deploy a production or even a demo setup!

If you run into problems, please refer to our :ref:`common-problems` page.

********************
Requirements
********************

Please ensure that the following general requirements are met:

- :ref:`OS requirements`
- :ref:`Kubernetes API requirements`
- :ref:`Kubernetes cluster requirements`

.. note::

   Although we recommend using yaook/k8s to fulfill these requirements, we provide scripts for installing prometheus, cert-manager, the nginx ingress controller and rook/ceph.
   This means that even if some required features are not yet present in your kubernetes cluster, you can go ahead and follow these instructions.

Requirements for the Development Setup
========================================

Please ensure that the following requirements are met when setting up your system for development:

- The `operator `_ repository has been cloned.
- A `virtual environment `_ has been set up with Python ``v3.11``.
  (We do not guarantee that this works with any Python version other than the one used in ``./Dockerfile`` in the repo!)
- You have access to a Kubernetes cluster, either via the default ``~/.kube/config`` or via the ``KUBECONFIG`` environment variable.
- The `kubectl `_ and `helm `_ binaries are in your path.
- `CUE `_ is installed and in your path. You can install it via:

  .. code-block:: bash

     # Install cue (version may be adjusted)
     GO111MODULE=on go get cuelang.org/go/cmd/cue@v0.4.3
     # For golang 1.16 or higher use go install instead of go get
     go install cuelang.org/go/cmd/cue@v0.4.3

- `GNU make `_ is installed.
- The `prerequisites `_ of the ``mysqlclient`` pip package are installed.
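If you still need the virtual environment, setting it up could look like the following sketch.
It assumes that a ``python3.11`` binary is available on your system and that installing the cloned repository via ``pip install -e .`` matches your workflow; adapt the install step as needed.

.. code-block:: bash

   # Inside the cloned operator repository:
   # create and activate a Python 3.11 virtual environment ...
   python3.11 -m venv .venv
   . .venv/bin/activate
   # ... and install the operator package and its dependencies into it
   pip install -e .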
************************************************************
Preparation of Environment and Kubernetes Resources
************************************************************

#. Set the :ref:`uref-operator-env-vars`:

   .. code-block:: bash

      # Used to determine which namespaces are relevant for the operator
      export YAOOK_OP_NAMESPACE="yaook"
      # Allows the operator to use the latest versions of dependencies (alpha releases)
      export YAOOK_OP_VERSIONS_USE_ALPHA=true
      # Allows the operator to use the latest versions of dependencies (rolling releases)
      export YAOOK_OP_VERSIONS_USE_ROLLING=true
      # If you are coming from managed-k8s, you need to set this too
      export YAOOK_OP_CLUSTER_DOMAIN="cluster.local"

#. Execute the following script to create the required kubernetes resources (bash script available at ``docs/getting_started/dev_setup.sh``):

   .. _prerequisites_script:

   .. literalinclude:: dev_setup.sh
      :language: bash

#. **Optional:** Depending on which features you still need to deploy in your kubernetes cluster, execute the following scripts:

   .. code-block:: bash

      ./docs/getting_started/install_prometheus.sh
      ./docs/getting_started/install_ingress_controller.sh
      ./docs/getting_started/install_cert_manager.sh

   See :ref:`the kubernetes cluster requirements` for more information.

#. Either

   - disable ceph by setting ``spec:backends:ceph:enabled`` to ``False`` in ``docs/examples/{nova,cinder,glance}.yaml`` and ``spec:glanceConfig:glance_store:default_store`` to ``file`` in ``docs/examples/glance.yaml``, or
   - if you want to use ceph as storage (recommended):

     .. note::

        In the following we will call the namespace in which you did / will install ceph the ``ROOK_CEPH_NAMESPACE`` namespace.

     - If you have not yet installed ceph, you can install it in the ``ROOK_CEPH_NAMESPACE = $YAOOK_OP_NAMESPACE`` namespace by executing the following script:

       .. code-block:: bash

          ./docs/getting_started/install_rook.sh

     - If you already installed ceph, follow these :ref:`instructions`.

     You can inspect your ceph setup with ``kubectl -n $ROOK_CEPH_NAMESPACE get cephclients``, which should yield the clients ``cinder``, ``glance`` and ``gnocchi``.
     ``kubectl get --all-namespaces secrets | grep client`` should yield secrets for all three clients in both namespaces, although it might take a while for them to be created.

#. Set up domain name to IP translation.
   The ports for all OpenStack services are currently bound via node ports on each k8s node.
   With ``$WORKER_IP`` being the IP address of one of your worker nodes, add the following line to your ``/etc/hosts`` file:

   .. code-block:: bash

      $WORKER_IP keystone.yaook.cloud nova.yaook.cloud

#. Create a secret for the cert-manager:

   1. If you do not already have a key and certificate that you want to use, you can create them like so:

      .. code-block:: bash

         openssl genrsa -out ca.key 2048
         openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=YAOOK-CA"

   #. Create the kubernetes secret:

      .. code-block:: bash

         kubectl -n $YAOOK_OP_NAMESPACE create secret tls root-ca --key ca.key --cert ca.crt

   #. **Optional:** If you want to interact with OpenStack from your local system, add ``ca.crt`` to the list of system-wide trusted CAs.
      More information on how to access the OpenStack deployment is given in :ref:`access_openstack_deployment`.

#. Create the issuers:

   .. code-block:: bash

      kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/getting_started/ca-issuer.yaml
      kubectl -n $YAOOK_OP_NAMESPACE apply -f deploy/selfsigned-issuer.yaml

#. Label the nodes to allow scheduling of the OpenStack services to k8s nodes by executing the following command - once per worker:

   .. code-block:: bash

      kubectl label node $WORKER network.yaook.cloud/neutron-ovn-agent=true network.yaook.cloud/neutron-northd=true operator.yaook.cloud/any=true compute.yaook.cloud/hypervisor=true compute.yaook.cloud/nova-any-service=true block-storage.yaook.cloud/cinder-any-service=true any.yaook.cloud/api=true infra.yaook.cloud/any=true infra.yaook.cloud/caching=true ceilometer.yaook.cloud/ceilometer-any-service=true gnocchi.yaook.cloud/metricd=true key-manager.yaook.cloud/barbican-any-service=true key-manager.yaook.cloud/barbican-keystone-listener=true heat.yaook.cloud/engine=true

   For neutron releases < yoga, set these additional labels: ``network.yaook.cloud/neutron-dhcp-agent=true``, ``network.yaook.cloud/neutron-l3-agent=true``, ``network.yaook.cloud/neutron-bgp-dragent=true``.

   If you are not running on bare metal, you may need to additionally label your workers / compute hosts to prevent kvm-in-kvm failures in nova-compute:

   .. code-block:: bash

      kubectl label node $WORKER compute.yaook.cloud/hypervisor-type=qemu
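   To double-check that the labels were applied, a quick query like the following can help (a sketch; shown for the hypervisor label, the other labels can be queried analogously):

   .. code-block:: bash

      # List all nodes that are labelled as hypervisors
      kubectl get nodes -l compute.yaook.cloud/hypervisor=true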
#. Ensure that you have a default `storage class `_ configured by running ``kubectl get storageclass``.
   We recommend choosing ``csi-sc-cinderplugin`` as the default.
   You can set it to be the default by patching it as follows:

   .. code-block:: bash

      kubectl patch storageclass csi-sc-cinderplugin -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

********************
Deploy OpenStack
********************

The Yaook Operators will deploy OpenStack and keep your deployment up-to-date with your configuration automatically.

Creating the Infrastructure Operator
========================================

1. :ref:`Check` whether your OS fulfills the :ref:`requirement regarding real-time scheduling`.

   - If it does not, perform the :ref:`workaround`.

#. Start the infrastructure operator, which manages the MySQL and RabbitMQ services:

   .. code-block:: bash

      helm upgrade --install --namespace $YAOOK_OP_NAMESPACE --set operator.pythonOptimize=false --set operator.image.tag=devel --set operator.image.pullPolicy=Always infra-operator ./yaook/helm_builder/Charts/infra-operator/

   The infrastructure operator should now spawn as a normal pod in your cluster.

Initialize the service-specific Operators
=============================================

.. _create_service_deployments:

.. warning::

   These examples are for **development purposes only**.
   Do not use them, without extensive modification, for production or demonstration setups, as they are rough collections of arbitrary settings to give onlookers an idea of which options exist.

Before starting the service-specific operators, you have to create their deployments.
You can create the deployments via:

.. code-block:: bash

   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/keystone.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/barbican.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/nova.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/neutron-ovn.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/glance.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/cinder.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/gnocchi.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/ceilometer.yaml
   kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/heat.yaml

.. note::

   The neutron deployment manifest depends on the OpenStack release that should be installed: ``neutron-ovn.yaml`` must be used for releases >= yoga.
   For neutron releases < yoga, please use OVS (``docs/examples/neutron-ovs.yaml``).

.. warning::

   Gateway nodes need a second network interface, which is set within the neutron deployment:

   .. code-block:: yaml

      - nodeSelectors:
        - matchLabels:
            "network.yaook.cloud/neutron-ovn-agent": "true"
        bridgeConfig:
        - bridgeName: br-ex
          uplinkDevice: eth1
          openstackPhysicalNetwork: "physnet1"

   You can create a dummy interface on the gateway node with the exact interface name (eth1), but beware that the OpenStack physical network won't work as expected with a dummy interface!
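Once applied, the custom resources should show up in the cluster.
A quick check could look like this (a sketch; ``keystonedeployments`` is taken from the labels used later in this guide, and it is an assumption that the other resources follow the same ``<service>deployments`` naming pattern):

.. code-block:: bash

   # List the custom resource types provided by the Yaook operators
   kubectl api-resources | grep yaook.cloud

   # Inspect one of the freshly created deployments, e.g. keystone
   kubectl -n $YAOOK_OP_NAMESPACE get keystonedeployments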
.. _starting_operators:

Now, start the other Operators locally by executing each of the following commands in a separate terminal window:

.. note::

   Make sure you do not have an OpenRC file sourced when starting an Operator.

.. code-block:: bash

   python3 -m yaook.op -vvv keystone run
   YAOOK_KEYSTONE_OP_INTERFACE=public python3 -m yaook.op -vvv keystone_resources run
   python3 -m yaook.op -vvv barbican run
   python3 -m yaook.op -vvv nova run
   YAOOK_NOVA_COMPUTE_OP_INTERFACE=public YAOOK_NOVA_COMPUTE_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator:devel" python3 -m yaook.op -vvv nova_compute run
   python3 -m yaook.op -vvv neutron run
   YAOOK_NEUTRON_OVN_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" YAOOK_NEUTRON_OVN_AGENT_OP_INTERFACE=public python3 -m yaook.op -vvv neutron_ovn run
   YAOOK_NEUTRON_OVN_BGP_AGENT_OP_INTERFACE=public YAOOK_NEUTRON_OVN_BGP_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_ovn_bgp run
   python3 -m yaook.op -vvv glance run
   python3 -m yaook.op -vvv cinder run
   python3 -m yaook.op -vvv cds run
   python3 -m yaook.op -vvv gnocchi run
   python3 -m yaook.op -vvv ceilometer run
   python3 -m yaook.op -vvv heat run

For neutron releases < yoga, run the following operators:

.. code-block:: bash

   YAOOK_NEUTRON_DHCP_AGENT_OP_INTERFACE=public YAOOK_NEUTRON_DHCP_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_dhcp run
   YAOOK_NEUTRON_L2_AGENT_OP_INTERFACE=public python3 -m yaook.op -vvv neutron_l2 run
   YAOOK_NEUTRON_L3_AGENT_OP_INTERFACE=public YAOOK_NEUTRON_L3_AGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_l3 run
   YAOOK_NEUTRON_BGP_DRAGENT_OP_INTERFACE=public YAOOK_NEUTRON_BGP_DRAGENT_OP_JOB_IMAGE="registry.yaook.cloud/yaook/operator/operator:devel" python3 -m yaook.op -vvv neutron_bgp run

The convergence may take a while.
Take the chance, pause for a moment, and watch in amazement how the operators do their job and create the OpenStack deployment for you.

.. note::

   If you do not want to run all operators locally, you can also spawn them via helm, for instance (for the cds operator):

   .. code-block:: bash

      helm upgrade --install --namespace $YAOOK_OP_NAMESPACE --set operator.pythonOptimize=false --set operator.image.tag=devel --set operator.image.pullPolicy=Always cds-operator ./yaook/helm_builder/Charts/cds-operator/

   If you then temporarily want to run an operator locally, you must scale down its deployment inside the cluster to 0 to avoid conflicts.

   To deploy all operators via helm you may use the following loop:

   .. code-block:: bash

      for OP_NAME in keystone keystone-resources barbican nova nova-compute neutron neutron-ovn neutron-ovn-bgp glance cinder cds gnocchi ceilometer heat; do
         helm upgrade --install --namespace $YAOOK_OP_NAMESPACE --set operator.pythonOptimize=false --set operator.image.tag=devel --set operator.image.pullPolicy=Always "$OP_NAME-operator" ./yaook/helm_builder/Charts/$OP_NAME-operator/
      done

   For neutron releases < yoga, please use the following operators:

   .. code-block:: bash

      neutron-bgp neutron-dhcp neutron-l2 neutron-l3

.. warning::

   Only a single instance of each operator may run against a cluster at any time.
   You *must not* run an operator both locally and via helm at the same time.
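If an operator is currently deployed via helm and you temporarily want to run it locally, you can scale its in-cluster deployment down first, for example (a sketch; the deployment name is assumed to match the helm release name, here the cds operator):

.. code-block:: bash

   # Stop the in-cluster operator before starting it locally ...
   kubectl -n $YAOOK_OP_NAMESPACE scale deployment cds-operator --replicas=0

   # ... and scale it back up once the local instance has been stopped
   kubectl -n $YAOOK_OP_NAMESPACE scale deployment cds-operator --replicas=1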
.. _access_openstack_deployment:

****************************************
Accessing the OpenStack Deployment
****************************************

You can access the OpenStack deployment either via a pod running inside the k8s cluster or directly from your local machine.

.. _access_via_pod:

Using a Pod running inside the cluster
========================================

This method is the most reliable, as it does not require you to import any CA certificate or similar.

The ConfigMap ``keystone-ca-certificates`` gets a random suffix.
You have to adjust the manifest located at ``tools/openstackclient.yaml`` and set ``spec.template.spec.volumes[0].configMap.name`` to your corresponding ConfigMap name before creating the Pod.
You can determine the precise name via:

.. code-block:: bash

   kubectl get -n $YAOOK_OP_NAMESPACE ConfigMap -l "state.yaook.cloud/component=ca_certs,state.yaook.cloud/parent-name=keystone,state.yaook.cloud/parent-plural=keystonedeployments,state.yaook.cloud/parent-group=yaook.cloud"

You can now create the pod using:

.. code-block:: bash

   kubectl -n $YAOOK_OP_NAMESPACE apply -f tools/openstackclient.yaml

.. note::

   This assumes that your ``KeystoneDeployment`` is called ``keystone``.
   If you gave it a different name, you need to adapt the ``openstackclient`` ``Deployment`` to use a different credentials secret (``whatevernameyougaveyourkeystone-admin``).

To use the Pod, run:

.. code-block:: bash

   kubectl -n $YAOOK_OP_NAMESPACE exec -it "$(kubectl -n $YAOOK_OP_NAMESPACE get pod -l app=openstackclient -o jsonpath='{ .items[0].metadata.name }')" -- bash

This will provide you with a shell. ``openstack`` is already installed and configured there.

From your local machine
=========================

This requires that you have set up ``/etc/hosts`` entries for all services, not just keystone and nova.
In addition, it requires that you have the CA certificate imported in such a way that it is usable with ``openstack`` (probably by pointing ``REQUESTS_CA_BUNDLE`` at it, but make sure not to have ``REQUESTS_CA_BUNDLE`` set when running Operators, since it will break them).

Create an openrc file and load it:

.. code-block:: bash

   ./tools/download-os-env.sh public -n $YAOOK_OP_NAMESPACE > openrc
   . ./openrc
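For the CA certificate requirement mentioned above, pointing the OpenStack client at the ``ca.crt`` created in the cert-manager step could look like this (a sketch that assumes ``ca.crt`` is located in your current working directory):

.. code-block:: bash

   # Make the OpenStack client trust the development CA
   export REQUESTS_CA_BUNDLE="$PWD/ca.crt"

Remember to unset ``REQUESTS_CA_BUNDLE`` again in any shell from which you start an Operator, as it will break them otherwise.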
If everything is set up correctly, you should now be able to use the OpenStack cluster from your local machine.
If it does not work, please try the approach described in :ref:`access_via_pod` before resorting to more drastic measures.

******************************
Verify basic functionality
******************************

Once you have access to the OpenStack deployment, you can, for example, run a quick smoke test:

.. note::

   The ``openstack volume service list`` command below will currently list some cinder backup services.
   These are expected to be down, since they do not have a storage driver configured.

.. code-block:: bash

   openstack endpoint list
   openstack volume service list
   openstack compute service list
   openstack network agent list

To create your first glance image, run the following:

.. code-block:: bash

   apt install wget qemu-utils jq  # only in the container
   wget -O cirros.qcow2 http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
   qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
   openstack image create --file cirros.raw cirros

If you are encountering problems, please look :ref:`here` for a solution.

******************************
Using different Docker images
******************************

The Operator uses the ``yaook/assets/pinned_version.yml`` file to find the appropriate Docker image tag for the OpenStack services.
To use a different image tag, update the values in the file and restart the Operator.

.. _use_existing_rook:

**************************************************
Using an already existing rook-based ceph cluster
**************************************************

You can use an already existing rook-based ceph cluster in your Kubernetes cluster if you have set one up in advance.
This may be the case if you created your Kubernetes cluster via managed-k8s.

.. note::

   In the following sections, we will call the namespace in which ceph was installed the ``ROOK_CEPH_NAMESPACE`` namespace.

.. _create_cephclient_secrets:

Create CephClient Secrets
==============================

Make sure you add the ceph authorization keys to ``ROOK_CEPH_NAMESPACE``, i.e., the namespace of your rook operator.
If you installed ceph with yaook/k8s this will probably be the ``rook-ceph`` namespace.

.. code-block:: bash

   kubectl -n $ROOK_CEPH_NAMESPACE apply -f docs/examples/rook-resources.yaml

Wait for the three ceph client secrets ``rook-ceph-client-gnocchi``, ``rook-ceph-client-glance`` and ``rook-ceph-client-cinder`` to be created.

.. _copy_cephclient_secrets:

Copy CephClient Secrets
=========================

The ceph client secrets also have to be present in the ``$YAOOK_OP_NAMESPACE`` namespace.
You can copy them there by repeating the following command for all three ceph clients ``gnocchi``, ``glance`` and ``cinder`` as the value of ``CEPH_CLIENT_NAME``:

.. code-block:: bash

   CEPH_CLIENT_NAME=gnocchi sh -c 'kubectl get secret rook-ceph-client-$CEPH_CLIENT_NAME --namespace=$ROOK_CEPH_NAMESPACE -o yaml | sed -e "s/namespace: .*/namespace: $YAOOK_OP_NAMESPACE/" | kubectl apply -f -'

Adjust Service Deployments
==============================

Additionally, you have to tell the ceph clients ``cinder``, ``glance`` and ``gnocchi`` how to reach the mons by adjusting ``docs/examples/{cinder,glance,gnocchi}.yaml`` as follows when you :ref:`create the service-specific deployments`:

.. code-block:: yaml

   # For glance.yaml and gnocchi.yaml
   [...]
   ceph:
     keyringReference: rook-ceph-client-glance  # adjust for gnocchi
     keyringUsername: glance                    # adjust for gnocchi
     keyringPoolname: glance-pool               # only for glance
     cephConfig:
       global:
         "mon_host": "rook-ceph-mon-a.rook-ceph:6789,rook-ceph-mon-b.rook-ceph:6789,rook-ceph-mon-c.rook-ceph:6789"
   [...]

   # For cinder.yaml
   [...]
   backends:
     - name: ceph
       rbd:
         keyringReference: rook-ceph-client-cinder
         keyringUsername: cinder
         cephConfig:
           "mon_host": "rook-ceph-mon-a.rook-ceph:6789,rook-ceph-mon-b.rook-ceph:6789,rook-ceph-mon-c.rook-ceph:6789"
   [...]

Untaint Storage Nodes
=========================

If you tainted your storage nodes, you may want to untaint them before :ref:`starting the operators`:

.. code-block:: bash

   # Depending on the used label, this command may vary
   kubectl taint nodes $STORAGE_WORKER node-restriction.kubernetes.io/cah-managed-k8s-role=storage:NoSchedule-

****************************************
Updating the Development Setup
****************************************

1. When updating your development setup, do a ``git pull``.
2. Update the CRDs and roles via ``make k8s_apply_crds`` and ``make k8s_apply_roles``.
3. Update the appropriate operators running inside your k8s cluster.
   You can do so by restarting them (if they point to the ``devel`` image tag).
4. Update the deployed custom resources, either by running ``make k8s_apply_examples`` or (if you changed the resources in the cluster) by manually examining the individual diffs.
5. In rare cases, additional external dependencies might have been introduced.
   Check the ``dev_setup.sh`` and ``install_*.sh`` scripts from above.
6. Restart all of your operators running outside the cluster.
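Put together, a typical update round could look like the following sketch (it only combines the make targets listed above and assumes you did not modify the example resources in the cluster):

.. code-block:: bash

   # Fetch the latest state of the repository
   git pull
   # Update CRDs and roles
   make k8s_apply_crds
   make k8s_apply_roles
   # Re-apply the example custom resources
   make k8s_apply_examples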
****************************************
Removing the Development Setup
****************************************

.. warning::

   This action cannot be undone.
   Following these instructions will remove the development setup: all OpenStack resources you might have created in the OpenStack deployment, the OpenStack deployment itself (which you deployed on your Kubernetes cluster using Yaook), as well as all Kubernetes resources in the ``$YAOOK_OP_NAMESPACE`` namespace on your Kubernetes cluster **will be removed**.

To remove the development setup, **first stop the operators**, then execute the following command:

.. code-block:: bash

   make k8s_clean

If you installed a feature using any of the ``docs/getting_started/install_*.sh`` scripts, you can use the corresponding scripts to uninstall those features:

.. code-block:: bash

   ./docs/getting_started/uninstall_prometheus.sh
   ./docs/getting_started/uninstall_ingress_controller.sh
   ./docs/getting_started/uninstall_cert_manager.sh
   ./docs/getting_started/uninstall_rook.sh

.. note::

   If you deployed the Rook Ceph cluster based on ``docs/examples/rook-cluster.yaml``, you might need to perform additional cleanup steps on your nodes:

   .. code-block:: bash

      # on the nodes formerly carrying the OSDs
      lvchange -an /dev/ceph-...
      lvremove /dev/ceph-...
      dd if=/dev/zero of=/dev/sdX bs=1M count=100

      # on all nodes
      rm -rf /var/lib/rook

   Where ``/dev/ceph-...`` and ``/dev/sdX`` are the LVM device and the disk partition of the Ceph OSD drive, respectively.

As soon as all resources are deleted from the ``$YAOOK_OP_NAMESPACE`` namespace, you can also go ahead and delete the namespace itself:

.. code-block:: bash

   kubectl delete namespace $YAOOK_OP_NAMESPACE

If the previous command does not complete, some resources are still present in the namespace.
You can check which ones by running:

.. code-block:: bash

   kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl -n $YAOOK_OP_NAMESPACE get --show-kind --ignore-not-found