`/etc/kubernetes/pki` directory (e.g. SSH & root permissions) on the control plane nodes of the kubeadm-based cluster.
- Download the `/etc/kubernetes/pki` directory from a control plane node of the kubeadm-based cluster.
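One way to copy the directory, assuming you have root SSH access to a control plane node (the node address and the local target path `./kubeadm-pki` are placeholders):

```shell
# Recursively copy the PKI directory from a kubeadm control plane node.
# <control-plane-ip> is a placeholder for your node's address.
scp -r root@<control-plane-ip>:/etc/kubernetes/pki ./kubeadm-pki
```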
- In the kubeadm `ClusterConfiguration`, note the values of:
  - `.controlPlaneEndpoint`
  - `.networking.dnsDomain`
  - `.networking.podSubnet`
  - `.networking.serviceSubnet`
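kubeadm stores the `ClusterConfiguration` in the `kubeadm-config` ConfigMap in the `kube-system` namespace, so one way to read these values is:

```shell
# Print the ClusterConfiguration stored by kubeadm.
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}'
```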
- In `controlplane.yaml`, replace:
  - `.cluster.network.cni.name` with `none`
  - `.cluster.network.podSubnets[0]` with the value of `networking.podSubnet` from the previous step
  - `.cluster.network.serviceSubnets[0]` with the value of `networking.serviceSubnet` from the previous step
  - `.cluster.network.dnsDomain` with the value of `networking.dnsDomain` from the previous step
- Review `controlplane.yaml` and `worker.yaml` to customize them according to your needs, especially:
  
  - `.cluster.secretboxEncryptionSecret` should either be removed, if you don't currently use an `EncryptionConfig` on your kube-apiserver, or set to the correct value
  - make sure the `--service-account-issuer=` parameter in `/etc/kubernetes/manifests/kube-apiserver.yaml` is equal to the value of `.cluster.controlPlane.endpoint` in `controlplane.yaml`.
    If it's not, add a new `--service-account-issuer=` parameter with the correct value before your current one in `/etc/kubernetes/manifests/kube-apiserver.yaml` on all of your control plane nodes, and restart the kube-apiserver containers.
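The `controlplane.yaml` field replacements described above can be scripted, for instance with `yq` (mikefarah v4 assumed; the subnet and domain values below are placeholders for the ones collected from your kubeadm `ClusterConfiguration`):

```shell
# Placeholder values -- substitute the ones from your ClusterConfiguration.
POD_SUBNET="10.244.0.0/16"
SERVICE_SUBNET="10.96.0.0/12"
DNS_DOMAIN="cluster.local"

yq -i '.cluster.network.cni.name = "none"' controlplane.yaml
yq -i ".cluster.network.podSubnets[0] = \"$POD_SUBNET\"" controlplane.yaml
yq -i ".cluster.network.serviceSubnets[0] = \"$SERVICE_SUBNET\"" controlplane.yaml
yq -i ".cluster.network.dnsDomain = \"$DNS_DOMAIN\"" controlplane.yaml
```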
- Apply `controlplane.yaml` to the Talos control plane node:
- Proceed the same way for the worker nodes, applying `worker.yaml` instead and skipping the LB step:
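Applying the generated configs typically uses `talosctl apply-config`; the node addresses below are placeholders for your environment:

```shell
# Apply the machine config to a Talos node in maintenance mode.
# --insecure is needed before the node has its own credentials.
talosctl apply-config --insecure --nodes <control-plane-ip> --file controlplane.yaml
talosctl apply-config --insecure --nodes <worker-ip> --file worker.yaml
```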
- Your current `kube-proxy` configuration may not be compatible with the one generated by Talos, which would make Talos Kubernetes upgrades impossible (the labels may not be the same, and `selector.matchLabels` is an immutable field).
  To be sure, export your current kube-proxy DaemonSet manifest and check the labels; they have to be:
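The export and label check can be done with `kubectl` (the output file name is arbitrary):

```shell
# Export the current kube-proxy DaemonSet manifest for inspection.
kubectl -n kube-system get daemonset kube-proxy -o yaml > kube-proxy-ds.yaml

# Print just the (immutable) selector labels to compare against Talos's.
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.selector.matchLabels}'
```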