Creating a Cluster using the Vultr CLI
This guide demonstrates how to create a highly available Kubernetes cluster with one worker node using the Vultr cloud provider. Vultr has a well-documented REST API and an open-source CLI tool for interacting with it, which will be used throughout this guide. Make sure to follow the installation and authentication instructions for the `vultr-cli` tool.
Boot Options
Upload an ISO Image
The first step is to make the Talos ISO available to Vultr by uploading the latest release of the ISO to the Vultr ISO server. Make a note of the `ID` in the output; it will be needed later when creating the instances.
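As a sketch, the upload can be done with `vultr-cli iso create`; the release URL and version below are illustrative, so substitute the desired Talos release:

```shell
# Upload the Talos ISO to the Vultr ISO server from a public release URL
# (the version shown is a placeholder).
vultr-cli iso create \
  --url "https://github.com/siderolabs/talos/releases/download/v1.7.6/metal-amd64.iso"

# Confirm the upload and note the ID of the new ISO.
vultr-cli iso list
```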
PXE Booting via Image Factory
Talos Linux can be PXE-booted on Vultr using Image Factory with the `vultr` platform. The PXE URL encodes the schematic and the architecture; the default schematic and `amd64` architecture cover the simplest case.
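For reference, an Image Factory PXE URL for the `vultr` platform follows this general shape; the schematic ID and Talos version are placeholders to be substituted:

```shell
# Illustrative only: the path encodes schematic ID, Talos version,
# platform (vultr), and architecture (amd64).
PXE_URL="https://pxe.factory.talos.dev/pxe/<SCHEMATIC_ID>/<TALOS_VERSION>/metal-vultr-amd64"
```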
Make a note of the `ID` in the output; it will be needed later when creating the instances.
Create a Load Balancer
A load balancer is needed to serve as the Kubernetes endpoint for the cluster. Make a note of the `ID` of the load balancer from the output of the creation command; it will be needed after the control plane instances are created. Also note the load balancer's `IP` address; it will be needed later when generating the configuration.
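A minimal sketch using `vultr-cli load-balancer`; the region and label are placeholders, and the flag names should be checked against the installed `vultr-cli` version:

```shell
# Create a TCP load balancer forwarding to the Kubernetes API port.
vultr-cli load-balancer create \
  --region ewr \
  --label "Talos Kubernetes Endpoint" \
  --forwarding-rules "frontend_protocol:tcp,frontend_port:6443,backend_protocol:tcp,backend_port:6443"

# Inspect the load balancer to note its ID and IP address.
vultr-cli load-balancer list
```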
Create the Machine Configuration
Generate Base Configuration
Using the IP address (or DNS name, if one was created) of the load balancer created above, generate the machine configuration files for the new cluster.
Validate the Configuration Files
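A sketch of the generation and validation steps, assuming the load balancer IP is already known; the IP and cluster name below are placeholders:

```shell
# Hypothetical load balancer IP; substitute the address noted earlier.
export LB_IP=192.0.2.10

# Generate controlplane.yaml, worker.yaml, and talosconfig for the new cluster.
talosctl gen config talos-kubernetes-vultr "https://${LB_IP}:6443"

# Validate the generated configuration files for a cloud platform.
talosctl validate --config controlplane.yaml --mode cloud
talosctl validate --config worker.yaml --mode cloud
```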
Create the Nodes
Create the Control Plane Nodes
First the control plane needs to be created, with the example below creating three instances in a loop. The instance type (set by the `--plan vc2-2c-4gb` argument) in the example is the minimum spec for a control plane node, and should be updated to suit the cluster being created.
Make a note of the instance `ID`s, as they are needed to attach the instances to the load balancer created earlier.
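The loop below is a sketch; `$REGION`, `$ISO_ID`, `$LB_ID`, and the IP/ID variables are placeholders for values noted earlier, and the `vultr-cli` flags should be verified against the installed version:

```shell
# Create three control plane instances booting from the uploaded Talos ISO.
for i in 1 2 3; do
  vultr-cli instance create \
    --plan vc2-2c-4gb \
    --region "$REGION" \
    --iso "$ISO_ID" \
    --host "talos-k8s-cp${i}" \
    --label "Talos Kubernetes Control Plane"
done

# Once each instance has an IP, apply the control plane machine configuration.
talosctl apply-config --insecure --nodes "$CP_IP" --file controlplane.yaml

# Attach the instance IDs noted above to the load balancer created earlier.
vultr-cli load-balancer update "$LB_ID" --instances "$CP1_ID,$CP2_ID,$CP3_ID"
```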
Create the Worker Nodes
Now worker nodes can be created and configured in a similar way to the control plane nodes; the difference lies mainly in the machine configuration file. Note that, as with the control plane nodes, the instance type (here set by `--plan vc2-1c-1gb`) should be changed to suit the actual cluster requirements.
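A sketch of creating a single worker, reusing the placeholder variables from the control plane step (`$REGION`, `$ISO_ID`, and the hypothetical `$WORKER_IP`):

```shell
# Create one worker instance booting from the uploaded Talos ISO.
vultr-cli instance create \
  --plan vc2-1c-1gb \
  --region "$REGION" \
  --iso "$ISO_ID" \
  --host "talos-k8s-worker-1" \
  --label "Talos Kubernetes Worker"

# Apply the worker machine configuration once the instance has an IP.
talosctl apply-config --insecure --nodes "$WORKER_IP" --file worker.yaml
```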
Bootstrap etcd
Once all the cluster nodes are correctly configured, the cluster can be bootstrapped to become functional. It is important that the `talosctl bootstrap` command be executed only once, and against only a single control plane node.
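A sketch of the bootstrap call, where `$CP1_IP` is a placeholder for the IP address of one control plane node:

```shell
# Run exactly once, against a single control plane node.
talosctl --talosconfig talosconfig bootstrap \
  --endpoints "$CP1_IP" --nodes "$CP1_IP"
```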
Configure Endpoints and Nodes
While the cluster goes through the bootstrapping process and begins to self-manage, the `talosconfig` can be updated with the endpoints and nodes.
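A sketch of the update, with the control plane IPs as placeholders:

```shell
# Point the client at all control plane endpoints and set a default node.
talosctl --talosconfig talosconfig config endpoint "$CP1_IP" "$CP2_IP" "$CP3_IP"
talosctl --talosconfig talosconfig config node "$CP1_IP"
```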
Retrieve the kubeconfig
Finally, with the cluster fully running, the administrative `kubeconfig` can be retrieved from the Talos API and saved locally.
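A sketch of the retrieval, saving the file into the current directory:

```shell
# Save the administrative kubeconfig to the current directory as ./kubeconfig.
talosctl --talosconfig talosconfig kubeconfig .

# Verify access to the cluster with kubectl.
kubectl --kubeconfig kubeconfig get nodes
```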
The retrieved `kubeconfig` can be used by any of the usual Kubernetes tools to interact with the Talos-based Kubernetes cluster as normal.