Persisting Jenkins Data on Kubernetes with Longhorn on Civo
Provision a Kubernetes Cluster
Create a Civo Kubernetes cluster:
$ civo kubernetes create --size=g2.small --nodes 3 --version 0.9.1 --applications Longhorn --wait
Building new Kubernetes cluster rebel-comrade: Done
Created Kubernetes cluster rebel-comrade in 02 min 03 sec
Merge the Kubernetes config into your kubeconfig file:
$ civo kubernetes config rebel-comrade --save
Merged config into ~/.kube/config
Switch the context to the new cluster:
$ kubectx rebel-comrade
Switched to context "rebel-comrade".
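As a quick sanity check that kubectl is now pointing at the new cluster, you can list the nodes; you should see the three nodes that Civo provisioned (output omitted here):
$ kubectl get nodes --output wide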
Check that Longhorn is Deployed
Ensure that Longhorn is deployed and running:
$ kubectl get pods --namespace longhorn-system
NAME                                        READY   STATUS    RESTARTS   AGE
longhorn-manager-4jlhs                      1/1     Running   0          113s
engine-image-ei-9bea8a9c-6dtfw              1/1     Running   0          67s
longhorn-driver-deployer-5d94b4b959-kmdnh   1/1     Running   0          113s
longhorn-ui-6bddf979f4-jgwlb                1/1     Running   0          113s
csi-provisioner-5d785688b7-jjzxx            1/1     Running   0          25s
csi-attacher-774b869c8d-4g2jq               1/1     Running   0          25s
csi-provisioner-5d785688b7-dbf69            1/1     Running   0          25s
csi-attacher-774b869c8d-7bz4w               1/1     Running   0          25s
longhorn-csi-plugin-jk8ws                   2/2     Running   0          25s
csi-provisioner-5d785688b7-c5ddk            1/1     Running   0          25s
csi-attacher-774b869c8d-5vlhk               1/1     Running   0          25s
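The manifests later in this post reference a StorageClass named longhorn, which should have been created as part of the Longhorn installation; you can confirm that it exists with:
$ kubectl get storageclass longhorn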
Get the Cluster Details
Get the DNS name for your cluster so that we can populate the ingress address that we will use for Jenkins:
$ civo kubernetes show rebel-comrade
ID : 06d83d77-4a70-4647-9c46-94cf1b5b57ec
Name : rebel-comrade
# Nodes : 3
Size : g2.small
Status : ACTIVE
Version : 0.9.1
API Endpoint : https://185.136.233.204:6443
DNS A record : 06d83d77-4a70-4647-9c46-94cf1b5b57ec.k8s.civo.com
Nodes:
+------------------+-----------------+--------+
| Name | IP | Status |
+------------------+-----------------+--------+
| kube-master-50ab | 185.136.233.204 | ACTIVE |
| kube-node-9595 | 185.136.235.148 | ACTIVE |
| kube-node-dc13 | 185.136.233.122 | ACTIVE |
+------------------+-----------------+--------+
Installed marketplace applications:
+----------+-----------+-----------+--------------+
| Name | Version | Installed | Category |
+----------+-----------+-----------+--------------+
| Longhorn | 0.5.0 | Yes | storage |
| Traefik | (default) | Yes | architecture |
+----------+-----------+-----------+--------------+
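We will need that DNS A record again when we set the ingress host for Jenkins, so it can be handy to keep it in a shell variable. CLUSTER_DNS is just an example name, and the value below is the DNS A record from the output above; substitute your own:
$ export CLUSTER_DNS=06d83d77-4a70-4647-9c46-94cf1b5b57ec.k8s.civo.com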
Jenkins Manifests
Below are the files that I used to deploy Jenkins. First, create the jenkins directory:
$ mkdir jenkins
The persistent volume:
$ cat > jenkins/pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  namespace: apps
  labels:
    name: jenkins-data
    type: longhorn
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  csi:
    driver: io.rancher.longhorn
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: '2'
      staleReplicaTimeout: '20'
    volumeHandle: jenkins-data
EOF
The persistent volume claim:
$ cat > jenkins/pv-claim.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  labels:
    type: longhorn
    app: jenkins
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
The deployment:
$ cat > jenkins/deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  labels:
    app: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
      tier: jenkins
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jenkins
        tier: jenkins
    spec:
      containers:
      - image: bitnami/jenkins:2.190.1-debian-9-r14
        name: jenkins
        env:
        - name: JENKINS_USERNAME
          value: jenkinsuser
        - name: JENKINS_PASSWORD
          value: password12345
        ports:
        - containerPort: 8080
          name: jenkins
        - containerPort: 50000
          name: jenkins-agent
        volumeMounts:
        - name: jenkins-persistent-storage
          mountPath: /bitnami
      volumes:
      - name: jenkins-persistent-storage
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
EOF
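A side note on the credentials: the Jenkins password is passed as a plain environment variable above. If you would rather keep it out of the manifest, you could store it in a Secret and reference it with valueFrom.secretKeyRef. A minimal sketch, where the Secret name jenkins-credentials is just an example and not part of this walkthrough:
$ cat > jenkins/secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-credentials
type: Opaque
stringData:
  jenkins-password: password12345
EOF
The JENKINS_PASSWORD entry in the deployment would then use valueFrom with a secretKeyRef pointing at the jenkins-credentials Secret and the jenkins-password key, instead of a literal value.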
The Jenkins Service:
$ cat > jenkins/jenkins-service.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: jenkins-frontend
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  ports:
    - port: 80
      targetPort: 8080
  clusterIP: None
EOF
The ingress; remember to replace the host with your own cluster ID / DNS endpoint:
$ cat > jenkins/jenkins-ingress.yaml << EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
spec:
  rules:
  - host: jenkins.your-cluster-id.k8s.civo.com
    http:
      paths:
      - backend:
          serviceName: jenkins-frontend
          servicePort: 80
EOF
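If you would rather not edit the file by hand, a sed one-liner can substitute the host in place, using the CLUSTER_DNS variable set earlier (note that on macOS, sed -i expects an empty-string argument, e.g. sed -i ''):
$ sed -i "s/your-cluster-id.k8s.civo.com/${CLUSTER_DNS}/" jenkins/jenkins-ingress.yaml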
Deploy Jenkins
Deploy Jenkins to the Kubernetes cluster:
$ kubectl apply -f ./jenkins/
ingress.extensions/jenkins-ingress created
service/jenkins-frontend created
deployment.apps/jenkins created
persistentvolumeclaim/jenkins-pv-claim created
persistentvolume/jenkins-pv created
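Before checking on the pod, it is worth confirming that the claim bound successfully; the PVC should report a Bound status:
$ kubectl get pv,pvc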
Ensure that Jenkins is running:
$ kubectl get pods --selector app=jenkins
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7f7f68b597-s97ps   1/1     Running   0          3m
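Jenkins can take a minute or two to initialise. If you want to block until the pod reports ready, something like the following should work:
$ kubectl wait --for=condition=Ready pod --selector app=jenkins --timeout=300s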
Access Jenkins
Get the Jenkins ingress address:
$ kubectl get ingress
NAME              HOSTS                                                       ADDRESS        PORTS   AGE
jenkins-ingress   jenkins.06d83d77-4a70-4647-9c46-94cf1b5b57ec.k8s.civo.com   172.31.0.185   80      21m
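You can also do a quick check from the command line before opening the browser; the exact HTTP status may vary, but you should get a response from Jenkins:
$ curl -I http://jenkins.06d83d77-4a70-4647-9c46-94cf1b5b57ec.k8s.civo.com/login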
Access Jenkins in your browser and you should see a login screen like this:
Testing Persistence
Now we want to test persistence with Longhorn volumes. We will create a basic job in Jenkins, delete the pod so that it gets rescheduled onto another node, then refresh the UI and check whether our job is still visible.
After creating a basic job:
View the currently running pod with the wide output to see which node it's running on:
$ kubectl get pods --selector app=jenkins --output wide
NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
jenkins-7f7f68b597-s97ps   1/1     Running   0          44m   192.168.0.25   kube-master-50ab   <none>           <none>
Then delete the pod:
$ kubectl delete pod/jenkins-7f7f68b597-s97ps
pod "jenkins-7f7f68b597-s97ps" deleted
View the pod again:
$ kubectl get pods --selector app=jenkins --output wide
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
jenkins-7f7f68b597-wdr7n   1/1     Running   0          2m7s   192.168.1.17   kube-node-dc13   <none>           <none>
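Before refreshing the UI, you can also confirm that the replacement pod re-attached the same claim; jenkins-pv-claim should still report a Bound status:
$ kubectl get pvc jenkins-pv-claim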
Now that we can see it's running on another node, refresh the UI and access Jenkins again. You should see the test job we created earlier, confirming that data is being persisted by Longhorn:
Thank You
Thanks for reading. If you would like to see more of my content, have a look at my website at ruan.dev or follow me on Twitter @ruanbekker.