In this tutorial we will create a Kubernetes cluster on Civo using Terraform, and use the local provider to save the kubeconfig to a local file.

Once we have our infrastructure up and running, we will deploy a Go web application to the cluster.

Prerequisites

You will need the following to follow along with this tutorial:

- A Civo account
- Terraform installed locally
- kubectl installed locally

Authentication

Terraform needs a Civo API key to authenticate with Civo; you can find yours in the Security section of your Civo account.

I will save the API key to my environment; the TF_VAR_ prefix makes Terraform pick it up as the input variable civo_api_key:

$ export TF_VAR_civo_api_key="x"
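
Since the key never appears in our .tf files, here is a quick sanity check that the variable is exported, without printing the key itself:

$ test -n "$TF_VAR_civo_api_key" && echo "API key is set"
API key is set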

Configuration

First we create our providers.tf, which declares the providers we will be using: civo to create the Kubernetes cluster, and local to write the kubeconfig to a local file:

terraform {
  required_providers {
    civo = {
      source = "civo/civo"
      version = "0.10.9"
    }
    local = {
      source = "hashicorp/local"
      version = "2.1.0"
    }
  }
}

provider "civo" {
  token = var.civo_api_key
}

provider "local" {}

Next, our variables.tf, where we define all the user-configurable values for our cluster:

variable "civo_api_key" {
  type        = string
  description = "civo api key to authenticate"
}

variable "civo_cluster_name" {
  type        = string
  description = "civo kubernetes cluster name"
  default     = "ruan"
}

variable "civo_region" {
  type        = string
  description = "civo region to use"
  default     = "LON1"
}

variable "civo_cluster_applications" {
  type        = string
  description = "applications to deploy on kubernetes cluster"
  default     = "Traefik"
}

variable "civo_cluster_nodes" {
  type        = number
  description = "kubernetes nodes count"
  default     = 1
}

variable "civo_nodes_size" {
  type        = string
  description = "instance size to use for nodes"
  default     = "g3.k3s.xsmall"
}
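
Only the API key has no default, so the rest can be left as-is. To override a default without editing the file, you can pass values at plan or apply time (the values below are just examples):

$ terraform plan -var="civo_cluster_name=demo" -var="civo_cluster_nodes=2"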

Our main.tf defines the Civo Kubernetes cluster and a local file, written to our current working directory as $PWD/kubeconfig.civo, containing the kubeconfig we get from the civo_kubernetes_cluster resource:

resource "civo_kubernetes_cluster" "ruan" {
    name              = var.civo_cluster_name
    region            = var.civo_region
    applications      = var.civo_cluster_applications
    num_target_nodes  = var.civo_cluster_nodes
    target_nodes_size = var.civo_nodes_size
}

resource "local_file" "kubeconfig" {
    content   = civo_kubernetes_cluster.ruan.kubeconfig
    filename  = "${path.module}/kubeconfig.civo"
}
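
Before moving on, terraform fmt will normalize the formatting and terraform validate will catch syntax errors (validate needs the providers, so run it after the terraform init in the next section):

$ terraform fmt
$ terraform validate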

And lastly, our outputs.tf, which outputs information about our Kubernetes cluster:

output "cluster_name" {
  value = civo_kubernetes_cluster.ruan.name
}

output "applications" {
  value = civo_kubernetes_cluster.ruan.applications
}

output "kubernetes_version" {
  value = civo_kubernetes_cluster.ruan.kubernetes_version
}

output "api_endpoint" {
  value = civo_kubernetes_cluster.ruan.api_endpoint
}

output "dns_entry" {
  value = civo_kubernetes_cluster.ruan.dns_entry
}
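
After the apply, each of these outputs can also be read individually, which is handy in scripts; the -raw flag (Terraform 0.14+) strips the surrounding quotes:

$ terraform output -raw dns_entry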

Infrastructure Deployment

First we need to initialize Terraform, which downloads the providers:

$ terraform init

Initializing the backend...
...
Terraform has been successfully initialized!

To see what Terraform will be creating, run a plan:

$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # civo_kubernetes_cluster.ruan will be created
  + resource "civo_kubernetes_cluster" "ruan" {
      + api_endpoint           = (known after apply)
      + applications           = "Traefik"
      + created_at             = (known after apply)
      + dns_entry              = (known after apply)
      + id                     = (known after apply)
      + installed_applications = (known after apply)
      + instances              = (known after apply)
      + kubeconfig             = (known after apply)
      + kubernetes_version     = (known after apply)
      + master_ip              = (known after apply)
      + name                   = "ruan"
      + network_id             = (known after apply)
      + num_target_nodes       = 1
      + pools                  = (known after apply)
      + ready                  = (known after apply)
      + region                 = "LON1"
      + status                 = (known after apply)
      + target_nodes_size      = "g3.k3s.xsmall"
    }

  # local_file.kubeconfig will be created
  + resource "local_file" "kubeconfig" {
      + content              = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./kubeconfig.civo"
      + id                   = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + api_endpoint       = (known after apply)
  + applications       = "Traefik"
  + cluster_name       = "ruan"
  + dns_entry          = (known after apply)
  + kubernetes_version = (known after apply)

Once you are happy with what Terraform will be creating, run apply:

$ terraform apply -auto-approve
...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

api_endpoint = "https://x.x.x.x:6443"
applications = "Traefik"
cluster_name = "ruan"
dns_entry = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.k8s.civo.com"
kubernetes_version = "1.20.0-k3s1"

Back in our current directory, you will notice that the kubeconfig has been saved to kubeconfig.civo:

$ ls | grep kubeconfig
kubeconfig.civo

Point the KUBECONFIG environment variable at the file:

$ export KUBECONFIG=$PWD/kubeconfig.civo
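
To verify that kubectl can now reach the cluster, check that the reported endpoint matches the api_endpoint output from Terraform:

$ kubectl cluster-info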

Interact with the Kubernetes Cluster

List the nodes:

$ kubectl get nodes --output wide
NAME                               STATUS   ROLES    AGE   VERSION        INTERNAL-IP   EXTERNAL-IP    OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k3s-ruan-471f4933-node-pool-4696   Ready    <none>   16m   v1.20.2+k3s1   192.168.1.2   xx.xxx.xx.xx   Ubuntu 20.04.2 LTS   5.4.0-80-generic   containerd://1.4.3-k3s1
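
Since we passed Traefik in the applications list, the ingress controller should already be running; on k3s it typically lives in the kube-system namespace (the exact pod names depend on how the marketplace app installs it):

$ kubectl get pods -n kube-system | grep -i traefik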

Software Deployment

Let's deploy a basic web application to our cluster: a Go web app that returns the hostname of the pod it is running on.

If you are following along step by step, just make sure to replace the cluster id in the ingress host with your own (or use the substitution sketch after the manifest):

---
apiVersion: v1
kind: Service
metadata:
  name: hostname-service
spec:
  selector:
    app: hostname
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
      name: web
---
# for k3s >= v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hostname-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: hostname.your-cluster-id.k8s.civo.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hostname-service
            port:
              number: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hostname
  name: hostname
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hostname
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - name: hostname
        image: ruanbekker/hostname:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          name: http
          protocol: TCP

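Rather than editing the host by hand, one option is to substitute the dns_entry output straight into the manifest; a minimal sketch, assuming the manifest is saved as app.yml, GNU sed, and Terraform 0.14+ for the -raw flag:

$ sed -i "s/your-cluster-id.k8s.civo.com/$(terraform output -raw dns_entry)/" app.yml
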
Deploy the application to the cluster:

$ kubectl apply -f app.yml
service/hostname-service created
ingress.networking.k8s.io/hostname-ingress created
deployment.apps/hostname created
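
You can wait for the rollout to finish before checking anything else:

$ kubectl rollout status deployment/hostname
deployment "hostname" successfully rolled out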

After a couple of seconds, we can check the status of our deployment and inspect our ingress:

$ kubectl get ingress,deployment,pods --output wide
NAME                                         CLASS    HOSTS                                                        ADDRESS        PORTS   AGE
ingress.networking.k8s.io/hostname-ingress   <none>   hostname.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.k8s.civo.com   74.220.21.50   80      6m14s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                       SELECTOR
deployment.apps/hostname   3/3     3            3           6m14s   hostname     ruanbekker/hostname:latest   app=hostname

NAME                           READY   STATUS    RESTARTS   AGE     IP          NODE                               NOMINATED NODE   READINESS GATES
pod/hostname-5c5978694-7kpg4   1/1     Running   0          6m14s   10.42.1.3   k3s-ruan-471f4933-node-pool-f6b1   <none>           <none>
pod/hostname-5c5978694-t7rql   1/1     Running   0          6m14s   10.42.1.4   k3s-ruan-471f4933-node-pool-f6b1   <none>           <none>
pod/hostname-5c5978694-tbvwl   1/1     Running   0          6m14s   10.42.1.5   k3s-ruan-471f4933-node-pool-f6b1   <none>           <none>

Testing the Application

As we can see, our deployment is in its desired state, so we can test our application by making an HTTP request to it:

$ curl http://hostname.your-cluster-id.k8s.civo.com
Hostname: hostname-5c5978694-7kpg4
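
Since the deployment runs three replicas behind the service, repeating the request should cycle through the pod hostnames (the exact order depends on how the requests are balanced):

$ for i in 1 2 3; do curl -s http://hostname.your-cluster-id.k8s.civo.com; done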

Tear Down

To destroy the cluster, use terraform destroy:

$ terraform destroy -auto-approve
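
Since the kubeconfig is managed by the local_file resource, the destroy also removes kubeconfig.civo from disk; just stop pointing kubectl at it:

$ unset KUBECONFIG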

And remove the Civo API Key from your environment:

$ unset TF_VAR_civo_api_key

Resources

The container image: ruanbekker/hostname

The Civo provider: civo/civo on the Terraform Registry

Source code for this demonstration:

Video Walkthrough of using the Civo Provider by Saiyam Pathak

Thanks

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.