- Collect CA certificates and a bootstrap token from a control plane node.
- Create a Talos machine config with the CA certificates you collected.
- Update control plane endpoint in the machine config to point to the existing control plane (i.e. your load balancer address).
- Boot a new Talos machine and apply the machine config.
- Verify that the new control plane node is ready.
- Remove one of the old control plane nodes.
- Repeat the same steps for all control plane nodes.
- Verify that all control plane nodes are ready.
- Repeat the same steps for all worker nodes, using the machine config generated for the workers.
Remarks on kube-apiserver load balancer
While migrating to Talos, you need to make sure that your kube-apiserver load balancer stays in place and keeps pointing to the correct set of control plane nodes. How you do this depends on your load balancer setup. If you are using an LB that is external to the control plane nodes (e.g. a cloud provider LB, F5 BIG-IP, etc.), make sure you update the backend IPs of the load balancer to point to the control plane nodes as you add Talos nodes and remove kubeadm-based ones. If your load balancing is done on the control plane nodes themselves (e.g. keepalived + haproxy on the control plane nodes), you can do the following:

- Add Talos nodes and remove kubeadm-based ones while updating the haproxy backends to point to the newly added nodes, except for the last kubeadm-based control plane node.
- Turn off keepalived to drop the virtual IP used by the kubeadm-based nodes (introduces kube-apiserver downtime).
- Set up a new virtual-IP-based load balancer on the new set of Talos control plane nodes. Use the previous LB IP as the virtual IP (a sketch of one way to do this with Talos follows this list).
- Verify apiserver connectivity over the Talos-managed virtual IP.
- Migrate the last control-plane node.
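One option for the Talos side is the built-in shared virtual IP support. A minimal sketch of the relevant machine config fragment, assuming the control plane interface is `eth0` and the previous load balancer address was `192.168.1.100` (both are placeholders for your environment):

```yaml
# excerpt from controlplane.yaml: float the old LB address as a shared VIP
machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        vip:
          ip: 192.168.1.100 # the IP previously served by keepalived
```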
Prerequisites
- Admin access to the kubeadm-based cluster
- Access to the `/etc/kubernetes/pki` directory (e.g. SSH & root permissions) on the control plane nodes of the kubeadm-based cluster
- Access to the kube-apiserver load-balancer configuration
Step-by-step guide
- Download the `/etc/kubernetes/pki` directory from a control plane node of the kubeadm-based cluster.
- Create a new join token for the new control plane nodes:
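  For example, the following creates a non-expiring bootstrap token (drop `--ttl 0` if you prefer a token that expires):

  ```bash
  # run on a control plane node of the kubeadm-based cluster
  kubeadm token create --ttl 0
  ```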
- Create Talos secrets from the PKI directory you downloaded in step 1 and the token you generated in step 2:
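  A sketch of the command, assuming a `talosctl` version that supports generating secrets from an existing Kubernetes PKI; the token and path are placeholders:

  ```bash
  talosctl gen secrets \
    --kubernetes-bootstrap-token <token from step 2> \
    --from-kubernetes-pki <absolute path to the downloaded pki directory>
  ```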
- Create a new Talos config from the secrets:
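  For example (the cluster name is arbitrary, and the endpoint should be your kube-apiserver load balancer address):

  ```bash
  talosctl gen config --with-secrets secrets.yaml <cluster name> https://<load balancer IP or DNS>:<port>
  ```

  This produces `controlplane.yaml`, `worker.yaml`, and `talosconfig` in the current directory.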
- Collect the information about the kubeadm-based cluster from the kubeadm configmap:
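  kubeadm stores its `ClusterConfiguration` in the `kubeadm-config` ConfigMap in the `kube-system` namespace, so for example:

  ```bash
  kubectl get configmap -n kube-system kubeadm-config -o yaml
  ```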
  Take note of the following information in the `ClusterConfiguration`:

  - `.controlPlaneEndpoint`
  - `.networking.dnsDomain`
  - `.networking.podSubnet`
  - `.networking.serviceSubnet`
- Replace the following information in the generated `controlplane.yaml`:

  - `.cluster.network.cni.name` with `none`
  - `.cluster.network.podSubnets[0]` with the value of `networking.podSubnet` from the previous step
  - `.cluster.network.serviceSubnets[0]` with the value of `networking.serviceSubnet` from the previous step
  - `.cluster.network.dnsDomain` with the value of `networking.dnsDomain` from the previous step
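  A sketch of the resulting section of `controlplane.yaml`; the subnet and domain values below are common kubeadm defaults and purely illustrative, so substitute the values you collected:

  ```yaml
  cluster:
    network:
      cni:
        name: none           # Talos will not manage the CNI; keep your existing one
      dnsDomain: cluster.local
      podSubnets:
        - 10.244.0.0/16
      serviceSubnets:
        - 10.96.0.0/12
  ```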
- Go through the rest of `controlplane.yaml` and `worker.yaml` to customize them according to your needs, especially:

  - `.cluster.secretboxEncryptionSecret` should either be removed, if you don't currently use an `EncryptionConfig` on your `kube-apiserver`, or set to the correct value
- Make sure that, on your current kubeadm cluster, the first `--service-account-issuer=` parameter in `/etc/kubernetes/manifests/kube-apiserver.yaml` is equal to the value of `.cluster.controlPlane.endpoint` in `controlplane.yaml`. If it's not, add a new `--service-account-issuer=` parameter with the correct value before your current one in `/etc/kubernetes/manifests/kube-apiserver.yaml` on all of your control plane nodes, and restart the kube-apiserver containers.
- Bring up a Talos node to be the initial Talos control plane node.
- Apply the generated `controlplane.yaml` to the Talos control plane node:
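  A sketch of the command, assuming the node is still in maintenance mode (hence `--insecure`) and reachable at `<node IP>`:

  ```bash
  talosctl apply-config --insecure --nodes <node IP> --file controlplane.yaml
  ```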
- Wait until the new control plane node joins the cluster and is ready.
- Update your load balancer to point to the new control plane node.
- Drain the old control plane node you are replacing:
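  For example (the exact flags depend on what runs on the node; these are the common ones):

  ```bash
  kubectl drain <node name> --ignore-daemonsets --delete-emptydir-data
  ```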
- Remove the old control plane node from the cluster:
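  For example:

  ```bash
  kubectl delete node <node name>
  ```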
- Destroy the old node:
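  A sketch, assuming you want to wipe the kubeadm state from the machine before decommissioning or repurposing it:

  ```bash
  # run on the old node itself
  kubeadm reset --force
  ```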
- Repeat the same steps, starting from step 7, for all control plane nodes.
- Repeat the same steps, starting from step 7, for all worker nodes, this time applying `worker.yaml` instead and skipping the LB step:
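  For example:

  ```bash
  talosctl apply-config --insecure --nodes <worker node IP> --file worker.yaml
  ```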
Your kubeadm
kube-proxyconfiguration may not be compatible with the one generated by Talos, which will make the Talos Kubernetes upgrades impossible (labels may not be the same, andselector.matchLabelsis an immutable field). To be sure, export your current kube-proxy daemonset manifest, check the labels, they have to be:If the are not, modify all the labels fields, save the file, delete your current kube-proxy daemonset, and apply the one you modified.
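  A hedged sketch of the label set a Talos-generated kube-proxy DaemonSet typically uses; verify it against the manifest your Talos version actually renders before editing anything:

  ```yaml
  # both metadata.labels and spec.template.metadata.labels should carry this set,
  # and spec.selector.matchLabels (immutable) must match it
  labels:
    k8s-app: kube-proxy
    tier: node
  ```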
Limitations on Custom PKI
Talos always uses a per-cluster PKI model. During bootstrap, Talos expects a single root CA to issue all other certificates, including those for etcd, the Kubernetes API server, and the front-proxy. Talos does not support kubeadm PKIs that rely on intermediate CAs (for example, a root CA with separate intermediates for different components). By design, both `--cluster-signing-cert-file` and `--root-ca-file` point to the same CA certificate, and these values cannot be overridden.
If your kubeadm cluster uses an intermediate CA hierarchy, you cannot directly reuse that PKI with Talos.
Instead, you must regenerate certificates using the Talos per-cluster CA model.