Deploying to your own Kubernetes Service
Note: The commands referenced in this document are specific to the Kubernetes CLI. The same steps can be used on OpenShift by replacing the kubectl command with oc from the OpenShift CLI.
This document is split into 2 sections:
New deployment will take you through the steps to deploy K for the first time.
Upgrades will take you through the steps to update your K installation.
Before you start
You should have
Received a configuration package from KADA.
Familiarity with the network and load balancer settings for exposing services on your Kubernetes instance.
Requested a DNS alias and certificate for the KADA Platform from your network team.
Access to a Kubernetes cluster.
In your local environment
Install kubectl
Install your cloud provider CLI, e.g. Azure CLI or AWS CLI.
For Windows environments, install Git Bash.
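As a quick sanity check that the local tooling is installed (optional; assumes the tools are on your PATH):
kubectl version --client
# one of the following, depending on your cloud provider
az --version
aws --version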
1. New Deployments
A new deployment takes approximately 1 hour to complete.
The commands in the steps below assume you are running in a Unix bash environment.
We strongly recommend a Linux-based environment over Windows for compatibility with the deployment scripts. If you are deploying from a Windows environment, use Git Bash.
Step 1) Create a new Kubernetes cluster.
Supported Kubernetes service providers: Amazon's Elastic Kubernetes Service (EKS) and Microsoft Azure's Kubernetes Service (AKS). Our customers have also deployed onto OpenShift. Reach out for assistance with other Kubernetes options that are not listed.
For cluster requirements see How to deploy on your cloud | Minimum-infrastructure-requirements
Step 2) Setting up access to KADA Image repository
KADA will provide a KADA_CLIENT_ID and KADA_CLIENT_SECRET to access the KADA Image repository. The following sets up your Kubernetes service to access the repository.
Create a secret
kubectl create secret docker-registry kada-image-credentials \
--docker-server=kadaexternal.azurecr.io \
--docker-username=$KADA_CLIENT_ID \
--docker-password=$KADA_CLIENT_SECRET
Patch the service account with the above secret
kubectl patch serviceaccount <the service_account or "default"> \
-p "{\"imagePullSecrets\": [{\"name\": \"kada-image-credentials\"}]}"
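To confirm the secret is attached (an optional check; assumes the default service account was patched):
kubectl get secret kada-image-credentials
kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets[*].name}'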
If your Kubernetes cluster does not have internet access to pull images then contact KADA Support for assistance to download images into your internal image repository.
Step 3) Creating a certificate
Create a certificate and key
Raise a certificate request for the domain hosting the K Platform.
Generate a full chain certificate. The cert file should contain the root CA and all intermediary certificates.
The certificate should be provided as a *.crt and *.key file.
The cert should be signed by a certificate authority that is trusted by your organisation's browsers. This is so there are no cert issues when users access K from a browser.
Load the cert / key into Kubernetes
kubectl create secret tls kada-ssl-cert --cert /path/to/fullchain.cer --key /path/to/certificate.key
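To check that the certificate and key actually match (an optional sanity check; file paths are the same as in the command above), the two digests below should be identical:
openssl x509 -in /path/to/fullchain.cer -noout -pubkey | openssl sha256
openssl pkey -in /path/to/certificate.key -pubout | openssl sha256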
Step 4) Kubernetes ingress
Your organisation will most likely have a standard pattern for routing network traffic to a Kubernetes cluster via a Load Balancer / HA Proxy / Ingress routes.
Using your organisation's Load Balancer and ingress service
The KADA deployment can make use of your organisation's pattern with a few additional configuration steps.
Note the domain of the Load Balancer URL. We will refer to this as DOMAIN_URL from here on.
[OPENSHIFT ONLY] OpenShift Load Balancer (Route) definition
cert: <GENERATED BY YOUR ORGANISATION>
key: <GENERATED BY YOUR ORGANISATION>
caCertificate: <GENERATED BY YOUR ORGANISATION>
host: <the DOMAIN_URL>
to: {"kind": "Service", "name": "<name of ingress service>"}
port: {"targetPort": "<port defined in ingress, default is 8080>"} (maps to the ingress network port defined in the mappings below)
tls: termination: edge
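The same route can alternatively be created with the OpenShift CLI. This is only a sketch; the route name kada-route and the certificate file paths are placeholders to substitute with your own values.
oc create route edge kada-route \
  --service=<name of ingress service> \
  --hostname=<the DOMAIN_URL> \
  --port=8080 \
  --cert=/path/to/certificate.crt \
  --key=/path/to/certificate.key \
  --ca-cert=/path/to/ca.crt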
Add the following mappings to the config of your organisation's ingress (a sketch of these mappings expressed as a Kubernetes Ingress resource follows the list below)
Listen on *:8080
/keycloak -> keycloak-cluster-ip-service.<REPLACE WITH PROJECT NAMESPACE>.svc.cluster.local:8080
/api -> cerebrum-cluster-ip-service.<REPLACE WITH PROJECT NAMESPACE>.svc.cluster.local:5002
/solr -> solr-gatekeeper-cluster-ip-service.<REPLACE WITH PROJECT NAMESPACE>.svc.cluster.local:8888
/ -> cortex-cluster-ip-service.<REPLACE WITH PROJECT NAMESPACE>.svc.cluster.local:9002
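For illustration only, the mappings above could be expressed as a standard Kubernetes Ingress resource roughly as follows. This is a sketch, not the packaged KADA ingress: the resource name kada-ingress is a placeholder and the ingressClassName assumes an NGINX-class ingress controller is already installed in the cluster.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kada-ingress
  namespace: <REPLACE WITH PROJECT NAMESPACE>
spec:
  ingressClassName: nginx
  rules:
  - host: <the DOMAIN_URL>
    http:
      paths:
      - path: /keycloak
        pathType: Prefix
        backend:
          service:
            name: keycloak-cluster-ip-service
            port:
              number: 8080
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: cerebrum-cluster-ip-service
            port:
              number: 5002
      - path: /solr
        pathType: Prefix
        backend:
          service:
            name: solr-gatekeeper-cluster-ip-service
            port:
              number: 8888
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cortex-cluster-ip-service
            port:
              number: 9002
EOF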
Deploy KADA’s generic Load Balancer and ingress service
If you are not using your own load balancer / ingress service you can use the one packaged with the K Platform.
kubectl apply -f cortex/k8s-ingress-nginx
Step 5) Deploying KADA Services into Kubernetes
Download the distribution package and unzip it
unzip kada_x.x.x.zip
Navigate to the kube_setup directory
cd d_pkg/kube_setup
Populate the k8s_env.sh with your desired values
export HOST=
export KADA_ADMIN_PASSWORD=
export POSTGRES_PASSWORD=
export KEYCLOAK_PASSWORD=
export KEYCLOAK_POSTGRES_PASSWORD=
export CEREBRUM_SECRET=
export SOLR_SECRET=
export FERNET_KEYS=
NOTES
HOST is the alias name or canonical host name. It must be lowercase, e.g. if you intend to access K via https://prod.kada.ai, then the HOST value is prod.kada.ai
FERNET_KEYS should be generated using one of these methods
Python
from cryptography.fernet import Fernet
print(Fernet.generate_key().decode())
Unix shell
echo $(dd if=/dev/urandom bs=32 count=1 2>/dev/null | openssl base64)
SSL_SECRET_NAME is the name of the Kubernetes secret the SSL certificate was installed as (kada-ssl-cert in Step 3)
Avoid special characters in the values above if possible. Any value that contains the following special characters needs to be escaped with a backslash:
\ → \\
` → \`
$ → \$
Save k8s_env.sh in a secure location so that it can be used when upgrading the K Platform.
Run the following to create a generated-k8s-common folder containing 2 yaml files.
./kada_gen.sh
Make sure kubectl is configured and pointing to a Kubernetes cluster. Deploy the generated config
kubectl apply -f generated-k8s-common
Deploy the K platform. Note the y arg will deploy an ingress which terminates SSL.
./kada_deploy.sh y
[OPENSHIFT ONLY] Update the user that Kubernetes runs the containers as
# Find the uid-range that can run in the project (numerator part)
oc describe project <project_name> | grep sa.scc.uid-range
# e.g. openshift.io/sa.scc.uid-range=1002060000/10000
# For each config file replace the values with the uid for the following properties:
#   runAsUser
#   fsGroup
#   pv.beta.kubernetes.io/guid
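A quick way to locate every occurrence that needs updating (a minimal sketch; assumes the generated yaml is in the generated-k8s-common folder created by kada_gen.sh):
grep -rnE "runAsUser|fsGroup|pv.beta.kubernetes.io/guid" generated-k8s-common/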
Modify PV Policy
Modify all Persistent Volumes associated to the deployment to ensure that the Reclaim Policy is set to Retain. This is important to prevent data loss in the event of prolonged node outage.
kubectl get pv
For these 2 claims, set to Retain
default/postgres-storage-postgres-statefulset-0
default/keycloak-postgres-storage-keycloak-postgres-statefulset-0
kubectl patch pv <REPLACE WITH pv name eg pvc-xxxxxxxxxxxx> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
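If you prefer to resolve the PV name from the claim instead of reading it from kubectl get pv, the following sketch does the lookup and patch in one step (shown for the postgres claim; repeat for the keycloak-postgres claim):
PV_NAME=$(kubectl get pv -o jsonpath='{.items[?(@.spec.claimRef.name=="postgres-storage-postgres-statefulset-0")].metadata.name}')
kubectl patch pv "$PV_NAME" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'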
Then run the following to validate the Reclaim Policy has been updated to Retain
kubectl get pv
Step 6) Post deployment verification
Check all Kubernetes services are running and not in error
setup-solr and setup-postgres may have multiple pod instances where some have a failed status. This is normal provided there is an instance of each with STATUS: Completed.
kubectl get pods
Example of expected output
NAME READY STATUS RESTARTS AGE
cerebrum-celery-batch-worker-deployment-6879558f88-btnd9 1/1 Running 0 3h54m
cerebrum-celery-extract-worker-deployment-5dd688875f-gg7q8 1/1 Running 0 3h54m
cerebrum-celery-impact-worker-deployment-84cf999969-mscgt 1/1 Running 0 3h54m
cerebrum-celery-interactive-worker-deployment-6c5846cf48-hxts5 1/1 Running 0 3h54m
cerebrum-celery-scheduler-deployment-7df5848b95-r8wnw 1/1 Running 0 3h54m
cerebrum-celery-usage-worker-deployment-6d7d445654-j6fzk 1/1 Running 0 3h54m
cerebrum-celery-watcher-deployment-7bbbc4797f-mr87m 1/1 Running 0 3h54m
cerebrum-celery-worker-deployment-86d5fb9586-knhx6 1/1 Running 0 3h54m
cerebrum-deployment-67b4c54b57-jbfr7 1/1 Running 0 3h54m
cortex-deployment-7d4985fd46-whm6b 1/1 Running 0 3h54m
keycloak-deployment-7f7d447656-hwkcj 1/1 Running 0 3h54m
keycloak-postgres-statefulset-0 1/1 Running 0 3h54m
postgres-statefulset-0 1/1 Running 0 3h54m
redis-statefulset-0 1/1 Running 0 3h54m
reset-postgres-vrs9p 0/1 Completed 0 3h54m
solr-gatekeeper-deployment-646c8bbfdc-jb5z5 1/1 Running 0 3h54m
solr-statefulset-0 1/1 Running 0 3h54m
Check the status API. It should return 200 if successful
https://<YOUR DOMAIN>/api/status
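For example, using curl (assumes curl is available in your environment; expect 200 as the output):
curl -s -o /dev/null -w "%{http_code}\n" https://<YOUR DOMAIN>/api/status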
Step 7) Setup Users
KADA uses Keycloak to manage users in the platform.
The Keycloak portal is accessible at the following link
http://<YOUR DOMAIN>/keycloak/auth/admin/master/console/#/realms/kada
Users can be set up locally (see Managing local users (Add, Edit, Delete, Reset Password)) or configured for SSO (see Configuring SSO with Azure Active Directory / Entra ID).
Step 8) Setup Landing Storage
KADA uses an object store as a landing zone to process metadata and log files.
We currently support AWS S3, Azure Blob or locally attached Kubernetes PVs.
To set up, log into the KADA Platform at http://<YOUR DOMAIN> and navigate to Platform Settings > Settings
AWS s3 setup
storage_type = s3
storage_root_folder = <s3 bucket name>
storage_aws_region = <Your AWS region>
storage_aws_access_key_id = <Your AWS IAM user access key>
storage_aws_secret_access_key = <Your AWS IAM user secret>
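To confirm the bucket and IAM credentials are valid before saving the settings (an optional check; assumes the AWS CLI is configured with the same access key and secret):
aws s3 ls s3://<s3 bucket name> --region <Your AWS region>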
Azure Blob setup
Configure the following in Admin > Platform Settings > Settings
storage_type = azure
storage_root_folder = <Azure container name>
storage_azure_storage_account = <Your azure storage account>
storage_azure_access_key = <Your azure storage account access key>
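Similarly, the Azure container and access key can be checked up front (an optional check; assumes the Azure CLI is installed):
az storage container show --name <Azure container name> --account-name <Your azure storage account> --account-key <Your azure storage account access key>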
2. Upgrades: Performing an upgrade to your K installation
Step 1) Pre-checks
Check that no jobs are currently running under Admin > Monitor.
Step 2) Deploying KADA updates
Download the distribution package and unzip it
unzip kada_x.x.x.zip
Navigate to the kube_setup directory
cd d_pkg/kube_setup
Populate the k8s_env.sh, or overwrite it with the k8s_env.sh saved from a prior deployment
Deploy the K platform.
./kada_deploy.sh y
Step 3) Post deployment verification
Follow the same verification as per New Deployments (Step 6, Post deployment verification).