Recently revisiting Golang, I built a web application and successfully ran it locally. Now, I wanted to test it in a Kubernetes cluster, but due to hardware limitations, setting up a full-fledged K8s cluster was challenging. I remembered a friend mentioning that Kubernetes can also run inside Docker—so I decided to give it a try.

Today’s focus is kind. What is kind? And what can it do?


1. Introduction to kind

kind stands for Kubernetes In Docker — a tool that uses Docker container nodes to run local Kubernetes clusters. It’s primarily designed for testing Kubernetes itself and is ideal for local development or CI pipelines.

Currently, kind supports almost all official Kubernetes releases. At present it only supports Docker as the host runtime, though support for other common CRI runtimes such as containerd is expected in future versions.

Under the hood, kind leverages open-source tools such as kubeadm and kustomize to manage cluster creation and configuration.

Enough talk — let’s get started with installing and using kind.


2. Installing kind

Since I’m on macOS, I’ll use brew:

# brew install kind
==> Downloading https://mirrors.ustc.edu.cn/homebrew-bottles/kind-0.17.0.arm64_monterey.bottle.tar.gz
Already downloaded: /Users/wanzi/Library/Caches/Homebrew/downloads/bcd419997297730492f5cebc36be86fb51f21061a6ddb2e066e1e4d8ad33ddf3--kind-0.17.0.arm64_monterey.bottle.tar.gz
==> Pouring kind-0.17.0.arm64_monterey.bottle.tar.gz
==> Caveats
zsh completions have been installed to:
  /opt/homebrew/share/zsh/site-functions
==> Summary
🍺  /opt/homebrew/Cellar/kind/0.17.0: 8 files, 8.7MB
==> Running `brew cleanup kind`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
# kind version
kind v0.17.0 go1.19.2 darwin/arm64

For Linux:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

If you’re on Windows, you can use Chocolatey or install via binary — both are officially supported.
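For example, the Chocolatey route is a single command (assuming the package is published under the name kind):

choco install kind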


3. Creating Your First Cluster

Before proceeding, ensure Docker and kubectl are installed (installation steps omitted here).

# kind help
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
      --loglevel string   DEPRECATED: see -v instead
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity, higher value produces more output

Use "kind [command] --help" for more information about a command.

The help output shows that kind supports commands such as build, create, delete, get, and load; add -h to any subcommand for detailed usage.

Now, create a cluster:

Ensure ~/.kube directory exists — kind writes cluster API address, certificates, and other config into kubeconfig during setup.
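If the directory does not exist yet, creating it ahead of time is a one-liner:

mkdir -p ~/.kube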

# kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋

# kubectl get node
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   9m21s   v1.25.3

# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-565d847f94-kp6l7                     1/1     Running   0          9m15s
kube-system          coredns-565d847f94-ml28b                     1/1     Running   0          9m15s
kube-system          etcd-kind-control-plane                      1/1     Running   0          9m32s
kube-system          kindnet-hfsc8                                1/1     Running   0          9m15s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          9m31s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          9m33s
kube-system          kube-proxy-m9zp5                             1/1     Running   0          9m15s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          9m33s
local-path-storage   local-path-provisioner-684f458cdd-d7f29      1/1     Running   0          9m15s

Success! Your first kind cluster is up and running.


4. Cluster Operations

1. Custom Cluster Name

# kind create cluster --name ci-cluster
Creating cluster "ci-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.25.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-ci-cluster"
You can now use your cluster with:

2. Switch Context

# kubectl cluster-info --context kind-ci-cluster
Kubernetes control plane is running at https://127.0.0.1:56527
CoreDNS is running at https://127.0.0.1:56527/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Have a nice day! 👋
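
Passing --context works for one-off commands; to make kind-ci-cluster the default for everything that follows, switch the current context with plain kubectl (nothing kind-specific here):

# list the contexts kind has written into the kubeconfig
kubectl config get-contexts
# make kind-ci-cluster the default context
kubectl config use-context kind-ci-cluster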

3. List All Clusters

# kind get clusters
ci-cluster
kind

4. Load Images into Nodes

By default, images in the host's local Docker cache are not visible to kind nodes; they must be loaded into the nodes manually.

# kind load docker-image --name ci-cluster --nodes ci-cluster-control-plane traefik:v2.9.5
# docker exec -it ci-cluster-control-plane bash
root@ci-cluster-control-plane:/# crictl images | grep traefik
docker.io/library/traefik                  v2.9.5               a1252ce6bfaaa       132MB
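
The same flow works for a locally built application image, which is exactly the case for the Go web app from the introduction; a rough sketch (myapp:dev is a hypothetical local tag):

# build the image with the host's Docker daemon
docker build -t myapp:dev .
# copy it into the nodes of the ci-cluster cluster
kind load docker-image myapp:dev --name ci-cluster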

5. Advanced Cluster Configuration

1. Multi-node Cluster

By default, kind creates a single-node cluster. Here’s how to define multiple nodes:

Create config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
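
A cluster with the default name "kind" is still running from earlier, so delete it first to free the name:

kind delete cluster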

Recreate the cluster:

# kind create cluster --config config.yaml
# kubectl get node
NAME                 STATUS   ROLES           AGE    VERSION
kind-control-plane   Ready    control-plane   110s   v1.25.3
kind-worker          Ready    <none>          74s    v1.25.3
kind-worker2         Ready    <none>          88s    v1.25.3

Note: kind also supports HA setups with multiple control-plane nodes, though they are not covered here for brevity.

2. Custom Kubernetes Version

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  image: kindest/node:v1.24.7@sha256:577c630ce8e509131eab1aea12c022190978dd2f745aac5eb1fe65c0807eb315

3. Mount Host Directory to Node Containers (Persistent Storage)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/wanzi/tools/kind/files
    containerPath: /files
  - hostPath: /Users/wanzi/tools/kind/other-files/
    containerPath: /other-files
    readOnly: true
    selinuxRelabel: false
    propagation: None
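
After recreating the cluster with this config (default cluster name assumed), the mount can be verified from inside the node container:

# the host directory should now be visible inside the node
docker exec -it kind-control-plane ls /files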

4. Port Mapping

Map host port 80 to NodePort 30950. Note that containerPort here refers to a port on the kind node container itself, so it must match the Service's nodePort.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30950
    hostPort: 80

And apply this service/pod example:

kind: Pod
apiVersion: v1
metadata:
  name: foo
  labels:
    app: foo
spec:
  containers:
  - name: foo
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=foo"
    ports:
    - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  ports:
  - name: http
    nodePort: 30950
    port: 5678
  selector:
    app: foo
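
Assuming the manifest above is saved as foo.yaml and the cluster was created with the port mapping config, the echo server should be reachable from the host on port 80:

kubectl apply -f foo.yaml
# request path: host:80 -> node:30950 (extraPortMappings) -> service:5678 -> pod:5678
curl localhost
# expected response: foo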

5. Add Custom Labels

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30950
    hostPort: 80
  labels:
    tier: frontend
- role: worker
  labels:
    tier: backend
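
These labels become ordinary Kubernetes node labels, so they can be used for filtering and for scheduling via nodeSelector, for example:

# list only the nodes labelled as frontend
kubectl get nodes -l tier=frontend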

6. Customize kubeadm Configurations

Since kind uses kubeadm under the hood, you can customize the init process on the first control plane node:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "my-label=true"    
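
After (re)creating the cluster with this patch, the label should show up on the control-plane node:

# print the custom label as an extra column
kubectl get nodes -L my-label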

For deeper customization, four configuration types are available:

  • InitConfiguration
  • ClusterConfiguration
  • KubeProxyConfiguration
  • KubeletConfiguration

Example: Override API server flags using ClusterConfiguration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        enable-admission-plugins: NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook    

For additional nodes joining the cluster, use JoinConfiguration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "my-label2=true"    
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "my-label3=true"    

7. Configure Ingress

Expose ports 80/443 on the host so local requests can reach the ingress controller, and label the node so the controller can be scheduled onto it:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

Apply ingress-nginx:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
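
The controller takes a moment to start; kind's ingress guide waits for the controller pod to become ready before creating Ingress resources, roughly like this:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s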

6. Complete Cluster Configuration Example

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.24.7@sha256:577c630ce8e509131eab1aea12c022190978dd2f745aac5eb1fe65c0807eb315
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
  image: kindest/node:v1.24.7@sha256:577c630ce8e509131eab1aea12c022190978dd2f745aac5eb1fe65c0807eb315
  labels:
    app: front
  extraMounts:
  - hostPath: /Users/wanzi/tools/kind/wwwroot
    containerPath: /wwwroot
- role: worker
  image: kindest/node:v1.24.7@sha256:577c630ce8e509131eab1aea12c022190978dd2f745aac5eb1fe65c0807eb315
  labels:
    app: backend
  extraMounts:
  - hostPath: /Users/wanzi/tools/kind/wwwroot
    containerPath: /wwwroot

networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
  # disableDefaultCNI: true # Default CNI is kindnetd; can disable to use others like Calico
  kubeProxyMode: "ipvs" # Set kube-proxy mode to ipvs; use "none" to disable
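
With everything in one file, bringing up the whole environment is a single command (the cluster name and file name below are just examples):

kind create cluster --name dev --config kind-config.yaml
kubectl get nodes -o wide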

That concludes our journey with kind. You now have a fast, lightweight way to spin up Kubernetes clusters for testing and feature validation.
