Automated Application Deployment with ArgoCD and Jenkins Pipeline

Create Helm Repository

First, create a basic Helm template repository:

helm create template

For actual deployments, you’ll need to customize the Helm template according to your business requirements. Here, we directly use an internal custom generic template for rapid deployment. Alternatively, you can refer to Bitnami’s maintained Helm charts: https://github.com/bitnami/charts/tree/master/bitnami
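Before wiring the chart into a pipeline, it is worth validating it locally. A minimal sketch, assuming the chart directory created above is named `template` (the `image.tag` value is illustrative):

```shell
# check the chart for structural and templating problems
helm lint ./template

# render the manifests locally without touching the cluster,
# overriding values the same way a deployment would
helm template ./template --set image.tag=v1.0.0
```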

Jenkins Credential Configuration: ArgoCD Token

(Figure: ArgoCD and Jenkins integration)

Configure Jenkins Pipeline

We’ll use the gotest project (https://code.test.cn/hqliang/gotest) as an example.
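A sketch of the sync step such a pipeline stage might execute, assuming the ArgoCD token from the credential above is exposed to the job as `ARGOCD_AUTH_TOKEN` and the application is registered in Argo CD as `gotest`:

```shell
# the argocd CLI reads these environment variables for non-interactive use
export ARGOCD_SERVER=qacd.test.cn   # Argo CD API endpoint
export ARGOCD_AUTH_TOKEN=xxxxxx     # token stored as a Jenkins credential

# trigger a sync and wait for the application to become healthy
argocd app sync gotest --insecure
argocd app wait gotest --health --timeout 300
```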

Troubleshooting an Argo CD Deployment: `no space left on device`

Failure Phenomenon

This morning, I deployed several business applications via ArgoCD. After successfully deploying two applications, subsequent deployments consistently failed, despite using identical configurations; only the target cluster differed. Why would this happen?

I checked the logs and found the following:

  Warning  Failed     1m                kubelet, 172.16.25.13  Error: Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/ba37165607862efb350093e5e287207e2547759fd81dc4e5e356a86ac5e28324-init/merged: no space left on device
  Warning  Failed     1m                kubelet, 172.16.25.13  Error: Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/f69b62f360fc2a94487aca041b08d0929810beab0602e0ec8b90c94b2e893337-init/merged: no space left on device
  Warning  Failed     48s               kubelet, 172.16.25.13  Error: Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/a8d20a44183b39ae989eee8a442960124ff23844482f726ea7ab39a292aecbb3-init/merged: no space left on device

Solution

  1. Check disk space; no issues found:

     root@gpu613:~# df -Th /
     Filesystem     Type  Size  Used Avail Use% Mounted on
     /dev/sda2      ext4  1.8T  359G  1.3T  22% /

  2. After Googling, I discovered this might be due to exhausted inotify watches.

Check current limit:
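A typical way to inspect the limit and raise it (a sketch; the value 524288 is a commonly used ceiling, not from the original post):

```shell
# current per-user inotify watch limit
cat /proc/sys/fs/inotify/max_user_watches

# rough count of watches currently in use across all processes
find /proc/*/fd -lname anon_inode:inotify 2>/dev/null | wc -l

# raise the limit persistently
echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf
sysctl -p
```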

Add multiple clusters to ArgoCD

Generate Argo CD Management User Token

Log in to the dashboard, go to Settings → Accounts → admin → Generate New.
After generation, record the token information, which looks like this:

fyJhbGciOiJ3UzI1NiIsInR5cCI6IkpXVCJ9.eyJqdGkiOiI2OWI0M2M0Mi01MmZiLTRlZmItODIxOC0yOWU3NGM5MWI0NDIiLCJpYXQiOjE1OTUzMTEx3zQsImlzcyI6ImFyZ29jZCIsIm5iZiI6MTU5NTMxMTE3NCwic3ViIjoib3duZXIifQ.9u4XzArEeaz7G2Q2TWusnTkakEmq9BYDAUHr3dC6wG5

Configure Argo CD Config

For Argo CD with HTTPS enabled, adding clusters becomes cumbersome—it requires logging into the server pod for configuration. Follow these steps:

# cat ~/.argocd/config
contexts:
- name: argocd-server.argocd
  server: qacd.test.cn
  user: argocd-server.argocd
current-context: argocd-server.argocd
servers:
- grpc-web-root-path: ""
  insecure: true
  server: qacd.test.cn
users:
- auth-token: xxxxxx # This is the token generated in step 1
  name: argocd-server.argocd

Configure kubeconfig

Skip detailed configuration here—refer to previous documentation. Ensure you can access the cluster and have cluster administrator privileges. Set the CONTEXT to idc-bj-k8s.
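With the Argo CD config and kubeconfig in place, the cluster can then be registered from the CLI:

```shell
# register the cluster behind the idc-bj-k8s kubeconfig context with Argo CD
argocd cluster add idc-bj-k8s

# confirm it shows up
argocd cluster list
```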

ArgoCD Installation and Deployment

Installation and Deployment

Deploying ArgoCD is straightforward. Use the official high-availability (HA) deployment method:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.5.2/manifests/ha/install.yaml

You can customize the deployment file as needed. After the pods are successfully started:

# kubectl -n argocd get pod
NAME                                             READY   STATUS    RESTARTS   AGE
argocd-application-controller-66fbf66657-ghf2c   1/1     Running   0          6d17h
argocd-application-controller-66fbf66657-gpm7d   1/1     Running   0          6d17h
argocd-application-controller-66fbf66657-tr5kd   1/1     Running   0          6d17h
argocd-dex-server-5c5f986596-c8ftv               1/1     Running   0          9d
argocd-redis-ha-haproxy-69c6df79c6-2fxd6         1/1     Running   0          9d
argocd-redis-ha-haproxy-69c6df79c6-mksg2         1/1     Running   0          9d
argocd-redis-ha-haproxy-69c6df79c6-wq57f         1/1     Running   0          9d
argocd-redis-ha-server-0                         2/2     Running   0          9d
argocd-redis-ha-server-1                         2/2     Running   0          9d
argocd-redis-ha-server-2                         2/2     Running   0          9d
argocd-repo-server-76bbb56cc7-d8fp5              1/1     Running   0          7d
argocd-repo-server-76bbb56cc7-qvl5z              1/1     Running   0          7d
argocd-repo-server-76bbb56cc7-xqrfn              1/1     Running   0          7d
argocd-server-6464c7bcd-fgktr                    1/1     Running   0          6d19h
argocd-server-6464c7bcd-jkqdb                    1/1     Running   0          6d19h
argocd-server-6464c7bcd-nfdwn                    1/1     Running   0          6d19h

Configure Ingress for ArgoCD Access

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
    - host: cd.test.cn
      http:
        paths:
        - backend:
            serviceName: argocd-server
            servicePort: https
          path: /

Access ArgoCD via https://cd.test.cn/. The default username is admin, and the initial password is the name of the argocd-server pod. Retrieve the password using:
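For Argo CD v1.x, the documented way to read the initial admin password is to print the argocd-server pod name:

```shell
# the initial admin password is the name of the argocd-server pod
kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server \
  -o name | cut -d'/' -f 2
```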

Build a Personal Blog with Hugo + GitHub

Introduction to Hugo

Previously, I used Hexo to build my blog. As I’ve been using Go more and more, I’ve wanted to migrate my blog to Hugo. Hugo is a static site generator written in Go—simple, easy to use, efficient, extensible, and fast to deploy.

Installing Hugo

Here’s how to install Hugo on macOS:

brew install hugo
hugo new site wanzi
cd wanzi
git clone https://github.com/xianmin/hugo-theme-jane.git --depth=1 themes/jane
cp -r themes/jane/exampleSite/content ./
cp themes/jane/exampleSite/config.toml ./

Update config.toml with your own blog information.
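With the config in place, a first post can be created and previewed locally (the post path is illustrative):

```shell
# create a new post under content/post/
hugo new post/hello-world.md

# preview with drafts enabled at http://localhost:1313
hugo server -D

# build the static site into public/ for publishing to GitHub
hugo
```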

Add a user to a Kubernetes cluster

Previously, a Kubernetes cluster environment was set up using Ansible. The current requirement is to add a user for daily management, restricted to a specific namespace. Below are the steps:

Kubernetes Users

In Kubernetes, there are two types of users: ServiceAccounts and regular users (User). ServiceAccounts are managed by Kubernetes, while regular users are typically managed externally. Kubernetes does not store user lists—meaning user creation, modification, or deletion must be handled externally, without interacting with the Kubernetes API. Although Kubernetes does not manage users directly, it can recognize the identity of users making API requests. In fact, every API request to Kubernetes must be associated with an identity (either a User or a ServiceAccount), allowing us to assign permissions within the cluster to specific users.
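For the namespace-scoped management requirement above, the ServiceAccount route can be sketched with kubectl alone; the namespace `dev` and the account name `dev-admin` are illustrative:

```shell
# create a ServiceAccount in the target namespace
kubectl -n dev create serviceaccount dev-admin

# bind the built-in "admin" ClusterRole to it via a RoleBinding,
# which scopes the permissions to the dev namespace only
kubectl -n dev create rolebinding dev-admin-binding \
  --clusterrole=admin --serviceaccount=dev:dev-admin
```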

Deploy traefik2.1 in kubernetes cluster

Architecture & Concepts

(Figure: Traefik v2.1 router architecture)

Traefik 2.x is a significant architectural change from 1.7.x. As the architecture diagram above shows, the headline additions are TCP protocol support and the new Router concept.

Here we deploy Traefik 2.1 in the Kubernetes cluster; business traffic reaches the Traefik Ingress through HAProxy. The following are some of the concepts involved in the setup:

  • EntryPoints: Traefik’s network entry, defining the port where the request is accepted (regardless of http or tcp)
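As an illustration, entryPoints in the Traefik v2 static configuration look like this; the names `web` and `websecure` are conventional, not from the original:

```yaml
entryPoints:
  web:
    address: ":80"       # accepts HTTP (or plain TCP) on port 80
  websecure:
    address: ":443"      # accepts HTTPS/TCP on port 443
```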

Deploying a K8s cluster with kubeasz

Environment Preparation

  • Master nodes
172.16.244.14
172.16.244.16
172.16.244.18
  • Worker nodes
172.16.244.25
172.16.244.27
  • Master node VIP: 172.16.243.13

  • Deployment tool: Ansible/kubeasz

Initialize Environment

Install Ansible

apt update
apt-get install ansible expect
git clone https://github.com/easzlab/kubeasz
cd kubeasz
cp -r * /etc/ansible/

Configure Ansible SSH Keyless Login

ssh-keygen -t rsa -b 2048 # Generate key pair
./tools/yc-ssh-key-copy.sh hosts root 'rootpassword'

Prepare Binary Files

cd tools
./easzup -D # Downloads binaries to /etc/ansible/bin/ by default

Configure hosts file as follows:

[kube-master]
172.16.244.14
172.16.244.16
172.16.244.18

[etcd]
172.16.244.14 NODE_NAME=etcd1
172.16.244.16 NODE_NAME=etcd2
172.16.244.18 NODE_NAME=etcd3

# haproxy-keepalived
[haproxy]
172.16.244.14
172.16.244.16
172.16.244.18

[kube-node]
172.16.244.25
172.16.244.27

# [optional] load balance for accessing k8s from outside
[ex-lb]
172.16.244.14 LB_ROLE=backup EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443
172.16.244.16 LB_ROLE=backup EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443
172.16.244.18 LB_ROLE=master EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
172.16.244.18

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
#CLUSTER_NETWORK="flannel"
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.101.0.0/16"

# NodePort Range
NODE_PORT_RANGE="20000-40000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"

Deploy K8S Cluster

Initialize Configuration

cd /etc/ansible
ansible-playbook 01.prepare.yml

This step performs three main tasks: