Octavia operator
Octavia in general
control plane:
Octavia API: API service to interact with Octavia
Octavia worker: runs the logic to create, update, and remove load balancers
health manager: monitors the amphorae
housekeeping: performs cleanup activities and can manage a pool of spare amphorae to reduce the time it takes to spin up a new load balancer
Deploy
Prerequisites
A second provider network is required for Octavia. This load-balancer provider network must only be accessible from within the deployment and must not be reachable from the internet.
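A minimal sketch of how such a network could be created, assuming a VLAN provider network (the physical network name, segment ID, network name, and subnet range are placeholders that depend on the deployment):
openstack network create \
--provider-network-type vlan \
--provider-physical-network physnet1 \
--provider-segment 100 \
lb-mgmt-net
openstack subnet create \
--subnet-range 172.31.0.0/24 \
--network lb-mgmt-net \
lb-mgmt-subnet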
Create general OpenStack resources
The following OpenStack resources have to be created before deploying Octavia, because their UUIDs and names are required by the YAOOK Octavia CR.
flavor
openstack flavor create \
--vcpus 1 \
--ram 1024 \
--disk 2 \
--private \
amphora
image
Prebuilt amphora images can be found, for example, at https://github.com/osism/openstack-octavia-amphora-image
openstack image create \
--disk-format qcow2 \
--container-format bare \
--private \
--tag amphora \
--file $PATH_TO_AMPHORA_IMAGE \
amphora-x64-haproxy
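Octavia identifies the amphora image by the amphora tag set above. As a quick, optional check that the image was registered with this tag:
openstack image list --tag amphora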
Create certificates
Octavia requires a number of certificates.
Documentation: https://docs.openstack.org/octavia/latest/admin/guides/certificates.html
These steps are also automated in the script tools/create_octavia_certs.sh -n yaook. The script has to be executed on a node that has access to the Kubernetes cluster, because it automatically pushes the certificates into Kubernetes as secrets at the end. Before the script is executed, the file tools/octavia_openssl.cnf has to be filled with the correct information.
In case the certificates are not created with the script, they have to be added to Kubernetes manually:
kubectl create secret -n $YAOOK_OP_NAMESPACE generic octavia-server-ca-key --from-file=server_ca.key.pem
kubectl create secret -n $YAOOK_OP_NAMESPACE generic octavia-server-ca-cert --from-file=server_ca.cert.pem
kubectl create secret -n $YAOOK_OP_NAMESPACE generic octavia-client-ca-cert --from-file=client_ca.cert.pem
kubectl create secret -n $YAOOK_OP_NAMESPACE generic octavia-client-cert-and-key --from-file=client.cert-and-key.pem
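Either way, all four secrets should afterwards exist in the namespace, which can be verified with:
kubectl get secrets -n $YAOOK_OP_NAMESPACE | grep octavia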
Apply Octavia to YAOOK
label worker
kubectl label node $WORKER octavia.yaook.cloud/octavia-any-service=true
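To verify that the label has been applied:
kubectl get nodes -l octavia.yaook.cloud/octavia-any-service=true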
create CRD
helm upgrade --install --namespace $NAMESPACE yaook-crds yaook.cloud/crds
deploy octavia-operator
helm upgrade \
--install \
--namespace $YAOOK_OP_NAMESPACE \
--set operator.pythonOptimize=false \
--set operator.image.repository=registry.yaook.cloud/yaook/operator-test \
--set operator.image.tag=0.0.0-dev-feature-add-octavia-operator-amd64 \
"octavia-operator" \
./yaook/helm_builder/Charts/octavia-operator/
Create custom resource
Use the example as a template: docs/examples/octavia.yaml
controller_worker
amp_image_owner_id
: OpenStack ID of the user that owns the amphora image
amp_ssh_key_name
: Name of the OpenStack key pair of the Octavia user
amp_secgroup_list
: UUIDs of the security groups as a list (leave empty for now, because they have to be created as the Octavia user below)
amp_boot_network_list
: UUID of the provider network over which the amphora VMs reach the health manager and vice versa
amp_flavor_id
: ID of the flavor created above
health_manager
controller_ip_port_list
: Address to reach the health manager. The health manager listens on an external service. Use kubectl get services | grep octavia-health-manager to get the IP. The service always listens on port 5555. Example: '[192.168.144.101:5555]'.
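These options correspond to the [controller_worker] and [health_manager] sections of the rendered octavia.conf. As a sketch with placeholder values (the real values come from the resources created above and from the health-manager service):
[controller_worker]
amp_image_owner_id = <UUID of the image owner>
amp_ssh_key_name = <name of the key pair>
amp_secgroup_list =
amp_boot_network_list = <UUID of the load-balancer provider network>
amp_flavor_id = <ID of the amphora flavor>
[health_manager]
controller_ip_port_list = 192.168.144.101:5555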
kubectl -n $YAOOK_OP_NAMESPACE apply -f docs/examples/octavia.yaml
The following resources have to be created as the Octavia user. This user is created by the Octavia operator. The credentials can be found in the config file (/etc/octavia/octavia.conf) of one of the deployed Octavia pods.
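For example, to dump the configuration of one of the Octavia pods (the pod selection is only a sketch; any running Octavia pod will do):
OCTAVIA_POD=$(kubectl -n $YAOOK_OP_NAMESPACE get pods -o name | grep octavia | head -n 1)
kubectl -n $YAOOK_OP_NAMESPACE exec $OCTAVIA_POD -- cat /etc/octavia/octavia.conf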
key-pair
openstack keypair create --public-key $PUBLIC_KEY_FILE $KEYPAIR_NAME
$KEYPAIR_NAME must be the same name as used for amp_ssh_key_name in docs/examples/octavia.yaml.
The key pair is used for the amphora VMs created by Octavia, so that the admin can log into these VMs if necessary. Octavia itself does not need the private key of the key pair, so it should be stored safely where only the deployment admin has access to it.
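If no key pair exists yet, one can for example be generated locally with ssh-keygen (the file name is arbitrary); only the public key is uploaded, the private key stays with the deployment admin:
ssh-keygen -t ed25519 -f amphora_key -N ''
openstack keypair create --public-key amphora_key.pub $KEYPAIR_NAME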
security-groups
openstack security group create lb-mgmt-sec-grp
openstack security group rule create \
--protocol icmp \
lb-mgmt-sec-grp
openstack security group rule create \
--protocol tcp \
--dst-port 22 \
lb-mgmt-sec-grp
openstack security group rule create \
--protocol tcp \
--dst-port 9443 \
lb-mgmt-sec-grp
Update the UUID in the amp_secgroup_list in docs/examples/octavia.yaml and apply the file again.
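The required UUID can be obtained with:
openstack security group show lb-mgmt-sec-grp -f value -c id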
Using Octavia as a normal user
To run Octavia commands, a new role load-balancer_member has to be created
openstack role create load-balancer_member
which then has to be assigned to the user
openstack role add --project $PROJECT_ID --user $USER_ID load-balancer_member
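To verify the assignment:
openstack role assignment list --project $PROJECT_ID --user $USER_ID --names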
Delete again
delete CRD
kubectl delete crd octaviadeployments.yaook.cloud
delete remaining persistent volume claims
If these are not deleted, a newly deployed Octavia operator can no longer log in to the database and the message queue, because the persistent volumes still contain the old data with the old login credentials.
kubectl delete persistentvolumeclaims data-octavia-octavia-mq-mq-0 data-octavia-octavia-db-0
delete operator
helm uninstall octavia-operator
delete certificates added by the script
kubectl delete secret octavia-server-ca-key octavia-server-ca-cert octavia-client-cert-and-key octavia-client-ca-cert
Example Workflow in Octavia
Create a network and a subnet with a router in OpenStack, to which the load balancer will be attached
openstack network create l2-network
openstack subnet create --subnet-range 192.168.4.0/24 --network l2-network --dhcp l3-network
openstack router create --external-gateway provider-network-ext1 test-router
openstack router add subnet test-router l3-network
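The subnet UUID used in the following commands (5dc7a67d-... in this example) can be looked up with:
openstack subnet show l3-network -f value -c id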
Create a load balancer with a listener and a pool, in this case for SSH port 22
openstack loadbalancer create --name lb1 --vip-subnet-id 5dc7a67d-5f43-46e7-b2a5-b8ef5cde8f7e
openstack loadbalancer listener create --name my_listener --protocol TCP --protocol-port 22 lb1
openstack loadbalancer pool create --name my_pool --lb-algorithm ROUND_ROBIN --listener my_listener --protocol TCP
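Note that the listener can only be created once the load balancer has reached provisioning_status ACTIVE, and the pool once the listener is ready. The state can be checked with:
openstack loadbalancer show lb1 -f value -c provisioning_status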
Create the VMs that should be behind the load balancer
openstack server create --image cirros --network l2-network --flavor S test-cirros1
openstack server create --image cirros --network l2-network --flavor S test-cirros2
openstack server list --all
openstack subnet list
Add the VMs to the pool of the load balancer
openstack loadbalancer member create --subnet-id 5dc7a67d-5f43-46e7-b2a5-b8ef5cde8f7e --address 192.168.4.191 --protocol-port 22 my_pool
openstack loadbalancer member create --subnet-id 5dc7a67d-5f43-46e7-b2a5-b8ef5cde8f7e --address 192.168.4.225 --protocol-port 22 my_pool
Add a health monitor to the load balancer
openstack loadbalancer healthmonitor create --delay 5 --timeout 3 --max-retries 3 --type TCP my_pool
Create a floating IP and get the port ID of the load balancer to attach the floating IP to it
openstack floating ip create $PROVIDER_NETWORK_ID
PORT_ID=$(openstack loadbalancer show lb1 -f value -c vip_port_id)
openstack floating ip set --port $PORT_ID $FLOATING_IP
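Once the floating IP is attached, TCP connections to port 22 of the floating IP are distributed round-robin across the pool members and can be tested, for example, with:
ssh cirros@$FLOATING_IP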