This guide shows you how to run Omni on Kubernetes. It assumes Omni will be deployed on a Kubernetes cluster with one control plane node and one worker node; expect small differences when using different components.
For the SAML integration sections, this guide assumes Azure AD is the SAML identity provider.
Omni is available under a Business Source License, which allows free installations in non-production environments. If you would like to deploy Omni for production use, please contact Sidero sales. If you would like to subscribe to the hosted version of Omni, please see the SaaS pricing.
1 Prerequisites
There are several prerequisites for deploying Omni on Kubernetes. We assume you have a Kubernetes cluster available with a CNI and a CSI plugin installed.
You should not run Omni on a Talos cluster that Omni itself manages. Omni can be deployed on a standalone Talos cluster or on another managed Kubernetes offering.
1.1 Certificates
We need a component that manages TLS certificates for Omni. In this example, we are using cert-manager, which is best installed with Helm.
helm install \
cert-manager oci://quay.io/jetstack/charts/cert-manager \
--version v1.19.2 \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true \
--set global.leaderElection.namespace=cert-manager \
--set enableCertificateOwnerRef=true \
--set "extraArgs[0]=--dns01-recursive-nameservers-only" \
--set-string "extraArgs[1]=--dns01-recursive-nameservers=1.1.1.1:53\,8.8.8.8:53"
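Before creating issuers, it can be useful to confirm that cert-manager came up cleanly. A quick sanity check (the label selector assumes the Helm release is named cert-manager, as above):

```shell
# Wait for all cert-manager deployments (controller, webhook, cainjector)
# to report Available before creating Issuer/ClusterIssuer resources.
kubectl wait --namespace cert-manager \
  --for=condition=Available deployment \
  --selector=app.kubernetes.io/instance=cert-manager \
  --timeout=120s
```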
1.2 Issuers
We also need issuers in order to request certificates. These can be either an Issuer or a ClusterIssuer. In this example, we are using a DNS-01 challenge with Cloudflare and a ClusterIssuer.
export CLOUDFLARE_API_TOKEN=<Cloudflare API token>
export YOUR_EMAIL=<Your Email>
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token
  namespace: cert-manager
type: Opaque
stringData:
  api-token: ${CLOUDFLARE_API_TOKEN}
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: ${YOUR_EMAIL}
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cluster-issuer-account-key
    solvers:
    - dns01:
        cloudflare:
          email: ${YOUR_EMAIL}
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
EOF
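Once applied, the ClusterIssuer registers an ACME account with Let's Encrypt and should then report Ready. A quick way to check:

```shell
# Inspect the Ready condition of the ClusterIssuer; it should eventually
# print "True" once the ACME account has been registered.
kubectl get clusterissuer letsencrypt \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```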
1.3 LoadBalancer (optional)
If you want to expose Omni with a LoadBalancer service, you will need to have a LoadBalancer available in your cluster. If you do not have a LoadBalancer available, you can still expose Omni with an Ingress and a NodePort service.
In this example, we are using MetalLB as the load balancer, which is common for on-premises Kubernetes clusters. You can install MetalLB with either kubectl apply or helm install.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
helm repo add metallb https://metallb.github.io/metallb --force-update
helm install \
metallb metallb/metallb \
--version v0.15.3 \
--namespace metallb-system \
--create-namespace
We also need to configure an address pool and an L2 advertisement for MetalLB to use. This can be done with the following manifest:
export METALLB_IP_RANGE=192.168.1.5-192.168.1.10
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-ip-pool
  namespace: metallb-system
spec:
  addresses:
  - ${METALLB_IP_RANGE}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
EOF
If you try to create the IPAddressPool and L2Advertisement resources immediately after installing MetalLB, you might encounter an error. If that happens, please wait until the MetalLB pods are running and try creating the resources again.
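Rather than retrying manually, you can block until the MetalLB pods are ready; the app=metallb label selector below is the one MetalLB's own documentation uses:

```shell
# Block until the MetalLB controller and speaker pods are Ready
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s
```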
If you used Helm to install MetalLB and are running a Kubernetes version that enforces Pod Security Admission policies, the namespace for MetalLB must be labeled to allow privileged containers. See the MetalLB installation documentation for more information.
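A sketch of that labeling, following the Pod Security Admission labels recommended by the MetalLB documentation (adjust the namespace if you installed MetalLB elsewhere):

```shell
# Allow privileged pods in the MetalLB namespace (required by the speaker)
kubectl label namespace metallb-system \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/warn=privileged
```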
1.4 Ingress Controller
You will also need to have an ingress controller installed in your cluster. In this example, we are using Traefik, but any ingress controller should work.
helm repo add traefik https://helm.traefik.io/traefik --force-update
helm install \
traefik traefik/traefik \
--version 39.0.1 \
--namespace traefik \
--create-namespace \
--set ingressClass.enabled=true
2 Authentication
Because Omni does not handle user authentication itself, we need to add an additional component for it. Omni supports OIDC and SAML providers, so a variety of components can fill this role.
The three options covered below are Auth0, SAML identity providers, and local authentication with Dex.
Auth0
Create an Auth0 account. On the account level, configure “Authentication - Social” to allow GitHub and Google login. Create an Auth0 application of the type “single page web application”. Configure the Auth0 application with the following:
- Allowed callback URLs:
https://<domain name for omni on k8s>
- Allowed web origins:
https://<domain name for omni on k8s>
- Allowed logout URLs:
https://<domain name for omni on k8s>
Disable username/password auth on the “Authentication - Database - Applications” tab. Enable GitHub and Google login on the “Authentication - Social” tab. Enable email access in the GitHub settings. Take note of the domain and client ID of the Auth0 application; they are used later as AUTH0_DOMAIN and AUTH0_CLIENT_ID.
SAML Identity Providers
Other identity providers also work with Omni. Configuring these should be similar to Auth0.
Local Authentication
In this example, we are using Dex. Dex will be configured so we can log in to Omni with a static user.
First we need to export some environment variables that will be used in the Helm chart values. The most important one is OMNI_USER_PASSWORD, which will be used to create a password hash for the user we want to create. Make sure you remember this password for logging in later.
export DOMAIN_SUFFIX=example.com
export OMNI_CLIENT_SECRET=$(echo -n "internal-sidero-stack" | base64)
export OMNI_USER_NAME=admin
export OMNI_USER_PASSWORD=$(htpasswd -BnC 15 admin | cut --delimiter=: --fields=1 --complement)
Now we can install Dex with Helm.
helm repo add dex https://charts.dexidp.io --force-update
helm install dex dex/dex \
--version 0.24.0 \
--create-namespace \
--namespace dex \
--set ingress.enabled=false \
--set https.enabled=false \
--set config.issuer="https://dex.${DOMAIN_SUFFIX}" \
--set config.storage.type=kubernetes \
--set config.storage.config.inCluster=true \
--set config.oauth2.skipApprovalScreen=false \
--set config.web.http=0.0.0.0:5556 \
--set config.enablePasswordDB=true \
--set "config.staticClients[0].name=Omni" \
--set "config.staticClients[0].id=omni" \
--set "config.staticClients[0].secret=${OMNI_CLIENT_SECRET}" \
--set "config.staticClients[0].redirectURIs[0]=https://omni.${DOMAIN_SUFFIX}/oidc/consume" \
--set "config.staticPasswords[0].email=${OMNI_USER_NAME}@${DOMAIN_SUFFIX}" \
--set "config.staticPasswords[0].username=${OMNI_USER_NAME}" \
--set "config.staticPasswords[0].preferredUsername=${OMNI_USER_NAME}" \
--set "config.staticPasswords[0].hash=${OMNI_USER_PASSWORD}"
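Before wiring up the Ingress, you can wait for Dex to come up (this assumes the chart created a deployment named dex for the dex release, which is the chart's default naming):

```shell
# Wait until the Dex deployment reports Available
kubectl wait --namespace dex \
  --for=condition=Available deployment/dex \
  --timeout=120s
```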
We also need to create an Ingress resource to expose Dex.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  name: dex
  namespace: dex
spec:
  ingressClassName: traefik
  rules:
  - host: dex.${DOMAIN_SUFFIX}
    http:
      paths:
      - backend:
          service:
            name: dex
            port:
              number: 5556
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - dex.${DOMAIN_SUFFIX}
    secretName: dex-tls
EOF
Make sure that the DNS records for Dex are correctly configured and pointing to the LoadBalancer IP of the Ingress controller.
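One way to verify this, assuming the Traefik Helm install above created a LoadBalancer service named traefik in the traefik namespace (the chart's default):

```shell
# IP assigned to the ingress controller by the load balancer (e.g. MetalLB)
kubectl get service traefik --namespace traefik \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# The DNS record for Dex should resolve to the same address
dig +short dex.${DOMAIN_SUFFIX}
```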
Dex can also be configured with other connectors such as LDAP, SAML, and OIDC. For more information on configuring these connectors, please see the Dex documentation.
3 Run Omni on Kubernetes
3.1 Create etcd encryption key
Generate a GPG key:
gpg --batch --passphrase '' \
--quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" \
rsa4096 cert never
FINGERPRINT=$(gpg --with-colons --list-keys "how-to-guide@siderolabs.com" \
| awk -F: '$1 == "fpr" {print $10; exit}')
gpg --batch --passphrase '' \
--quick-add-key ${FINGERPRINT} rsa4096 encr never
gpg --export-secret-key --armor how-to-guide@siderolabs.com > omni.asc
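To confirm the key was created as expected, list it; you should see an rsa4096 primary key with certify (C) usage and an rsa4096 subkey with encrypt (E) usage:

```shell
# List the generated key; expect a primary key with usage C (certify)
# and a subkey with usage E (encrypt)
gpg --list-secret-keys --keyid-format long how-to-guide@siderolabs.com
```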
Do not add passphrases to keys during creation.
3.2 Deploy Omni
First we need to create a namespace for Omni to run in.
kubectl create namespace omni
If you are running a Kubernetes version that enforces Pod Security Admission policies, the namespace for Omni must be labeled to allow privileged containers.
kubectl label namespace omni pod-security.kubernetes.io/enforce=privileged
Now that we have our namespace created, we need to create a secret containing the GPG key we just generated. Omni will use it to encrypt data in etcd.
kubectl create secret generic omni-gpg-key \
--namespace omni \
--from-file=omni.asc
You can install Omni with Helm, but before we can do that we need to create a values file with the necessary configuration. You can use the following template as a starting point. For the full list of configuration options, please see the Helm chart documentation.
export AUTH0_CLIENT_ID=<your-auth0-client-id>
export AUTH0_DOMAIN=<your-auth0-domain>
export INITIAL_USER=user@example.com
export DOMAIN_SUFFIX=example.com
export WIREGUARD_ADVERTISED_ENDPOINT=192.168.1.5
cat <<EOF > omni-values.yaml
config:
  account:
    id: $(uuidgen)
    name: my-omni
  auth:
    auth0:
      enabled: true
      clientID: ${AUTH0_CLIENT_ID}
      domain: ${AUTH0_DOMAIN}
    initialUsers: # Initial users are created on the first startup of Omni. They are created with the "admin" role.
    - ${INITIAL_USER}
  services:
    api:
      advertisedURL: https://omni.${DOMAIN_SUFFIX}
    kubernetesProxy:
      advertisedURL: https://kubernetes.${DOMAIN_SUFFIX}
    machineAPI:
      advertisedURL: https://siderolink.${DOMAIN_SUFFIX}
    siderolink:
      wireGuard:
        # The externally accessible IP:port for WireGuard connections.
        # If using an externally accessible node IP, the port should match
        # service.wireguard.nodePort (default: 30180).
        advertisedEndpoint: "${WIREGUARD_ADVERTISED_ENDPOINT}:30180"
etcdEncryptionKey:
  existingSecret: "omni-gpg-key"
ingress:
  main:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: omni.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-tls
      hosts:
      - omni.${DOMAIN_SUFFIX}
  kubernetesProxy:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: kubernetes.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-k8s-tls
      hosts:
      - kubernetes.${DOMAIN_SUFFIX}
  siderolinkApi:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: siderolink.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-siderolink-tls
      hosts:
      - siderolink.${DOMAIN_SUFFIX}
EOF
export DOMAIN_SUFFIX=example.com
export INITIAL_USER=user@example.com
export WIREGUARD_ADVERTISED_ENDPOINT=192.168.1.5
export TENANT_ID=<your-azure-ad-tenant-id>
export SAML_METADATA_URL=https://login.microsoftonline.com/${TENANT_ID}/federationmetadata/2007-06/federationmetadata.xml
cat <<EOF > omni-values.yaml
config:
  account:
    id: $(uuidgen)
    name: my-omni
  auth:
    auth0:
      enabled: false
    saml:
      enabled: true
      url: ${SAML_METADATA_URL}
    initialUsers: # Initial users are created on the first startup of Omni. They are created with the "admin" role.
    - ${INITIAL_USER}
  services:
    api:
      advertisedURL: https://omni.${DOMAIN_SUFFIX}
    kubernetesProxy:
      advertisedURL: https://kubernetes.${DOMAIN_SUFFIX}
    machineAPI:
      advertisedURL: https://siderolink.${DOMAIN_SUFFIX}
    siderolink:
      wireGuard:
        # The externally accessible IP:port for WireGuard connections.
        # If using an externally accessible node IP, the port should match
        # service.wireguard.nodePort (default: 30180).
        advertisedEndpoint: "${WIREGUARD_ADVERTISED_ENDPOINT}:30180"
etcdEncryptionKey:
  existingSecret: "omni-gpg-key"
ingress:
  main:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: omni.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-tls
      hosts:
      - omni.${DOMAIN_SUFFIX}
  kubernetesProxy:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: kubernetes.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-k8s-tls
      hosts:
      - kubernetes.${DOMAIN_SUFFIX}
  siderolinkApi:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: siderolink.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-siderolink-tls
      hosts:
      - siderolink.${DOMAIN_SUFFIX}
EOF
export INITIAL_USER=user@example.com
export DOMAIN_SUFFIX=example.com
export WIREGUARD_ADVERTISED_ENDPOINT=192.168.1.5
cat <<EOF > omni-values.yaml
config:
  account:
    id: $(uuidgen)
    name: my-omni
  auth:
    auth0:
      enabled: false
    oidc:
      enabled: true
      clientID: omni
      clientSecret: ${OMNI_CLIENT_SECRET}
      providerURL: https://dex.${DOMAIN_SUFFIX}
      scopes:
      - openid
      - profile
      - email
    initialUsers: # Initial users are created on the first startup of Omni. They are created with the "admin" role.
    - ${INITIAL_USER}
  services:
    api:
      advertisedURL: https://omni.${DOMAIN_SUFFIX}
    kubernetesProxy:
      advertisedURL: https://kubernetes.${DOMAIN_SUFFIX}
    machineAPI:
      advertisedURL: https://siderolink.${DOMAIN_SUFFIX}
    siderolink:
      wireGuard:
        # The externally accessible IP:port for WireGuard connections.
        # If using an externally accessible node IP, the port should match
        # service.wireguard.nodePort (default: 30180).
        advertisedEndpoint: "${WIREGUARD_ADVERTISED_ENDPOINT}:30180"
etcdEncryptionKey:
  existingSecret: "omni-gpg-key"
ingress:
  main:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: omni.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-tls
      hosts:
      - omni.${DOMAIN_SUFFIX}
  kubernetesProxy:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: kubernetes.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-k8s-tls
      hosts:
      - kubernetes.${DOMAIN_SUFFIX}
  siderolinkApi:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    className: traefik
    host: siderolink.${DOMAIN_SUFFIX}
    tls:
    - secretName: omni-siderolink-tls
      hosts:
      - siderolink.${DOMAIN_SUFFIX}
EOF
With the values file created, we can now install the Helm chart.
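The chart reference is not pinned in this guide, so the sketch below uses an <omni-chart-ref> placeholder in the style of the other placeholders here; substitute the chart location you are deploying from, and point --values at the values file you created above (values.yaml or omni-values.yaml, depending on the tab you followed):

```shell
# Install (or upgrade) Omni into the namespace created earlier,
# using the values file generated in the previous step.
helm upgrade --install omni <omni-chart-ref> \
  --namespace omni \
  --values omni-values.yaml
```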
Congratulations! You have now deployed Omni on Kubernetes.
3.3 Workload Proxy (Optional)
Workload Proxy allows you to expose HTTP services running in your managed clusters through Omni. Once configured, you can annotate Kubernetes Services to make them accessible, protected by Omni’s authentication.
For details on exposing services, see Expose an HTTP Service from a Cluster.
The workload proxy domain is not a subdomain of the Omni domain; it exists alongside it. For example:
- Omni: omni.example.com
- Workload Proxy: *.omni-workload.example.com
To enable the workload proxy, you need to enable the workloadProxy component in the Helm chart values and create an additional Ingress resource with the help of the extraObjects section.
export DOMAIN_SUFFIX=example.com
export WORKLOAD_PROXY_SUBDOMAIN=omni-workload
cat <<EOF > omni-workload-values.yaml
config:
  services:
    workloadProxy:
      enabled: true
      subdomain: ${WORKLOAD_PROXY_SUBDOMAIN}
extraObjects:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: omni-workload-proxy
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
      traefik.ingress.kubernetes.io/service.passhostheader: "true"
      traefik.ingress.kubernetes.io/service.serversscheme: h2c
  spec:
    tls:
    - hosts:
      - ${WORKLOAD_PROXY_SUBDOMAIN}.${DOMAIN_SUFFIX}
      - "*.${WORKLOAD_PROXY_SUBDOMAIN}.${DOMAIN_SUFFIX}"
      secretName: omni-workload-proxy-wildcard-tls
    rules:
    - host: "*.${WORKLOAD_PROXY_SUBDOMAIN}.${DOMAIN_SUFFIX}"
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: omni
              port:
                name: omni
EOF
You can then apply the updated configuration by upgrading the Helm release with both values files.
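A sketch of the upgrade, with <omni-chart-ref> again standing in for the chart location you deployed from; later --values files override earlier ones, so the workload proxy values are layered on top of the base values:

```shell
# Re-apply the release with the workload proxy values layered on top
helm upgrade omni <omni-chart-ref> \
  --namespace omni \
  --values omni-values.yaml \
  --values omni-workload-values.yaml
```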