[kube-master]
172.16.244.14
172.16.244.16
172.16.244.18

[etcd]
172.16.244.14 NODE_NAME=etcd1
172.16.244.16 NODE_NAME=etcd2
172.16.244.18 NODE_NAME=etcd3

# haproxy-keepalived
[haproxy]
172.16.244.14
172.16.244.16
172.16.244.18

[kube-node]
172.16.244.25
172.16.244.27

# [optional] load balance for accessing k8s from outside
[ex-lb]
172.16.244.14 LB_ROLE=backup EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443
172.16.244.16 LB_ROLE=backup EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443
172.16.244.18 LB_ROLE=master EX_APISERVER_VIP=172.16.243.13 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
172.16.244.18

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
#CLUSTER_NETWORK="flannel"
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, must not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), must not overlap with node(host) networking
CLUSTER_CIDR="10.101.0.0/16"

# NodePort Range
NODE_PORT_RANGE="20000-40000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
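With the inventory in place, the deployment is driven from the kubeasz workspace (the `base_dir` above). A hedged sketch, assuming the all-in-one playbook name `90.setup.yml` from the kubeasz repository:

```shell
# Run from the kubeasz workspace (base_dir in the inventory above).
cd /etc/ansible
# All-in-one setup; alternatively, run the individual numbered
# playbooks (01.prepare.yml, 02.etcd.yml, ...) step by step.
ansible-playbook 90.setup.yml
```

Running the playbooks step by step makes it easier to spot where a failure happens on a first deployment.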
chrony role: synchronize time across cluster nodes [optional]
deploy role: generate CA certificates, kubeconfig, kube-proxy.kubeconfig
prepare role: distribute CA certificates, install the kubectl client, configure the environment
cd /opt/soft
wget https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz
tar xf helm-v3.0.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
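With Helm installed, Rancher itself can be installed from its chart repository. A sketch following the Rancher documentation (the hostname is the one used for the Ingress below; `ingress.tls.source=secret` tells the chart to use the certificate Secret we create next):

```shell
# Add the Rancher stable chart repository
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
# Rancher is always installed into the cattle-system namespace
kubectl create namespace cattle-system
# Install the chart; hostname must match the certificate created below
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher-cicd.test.cn \
  --set ingress.tls.source=secret
```

These commands require a reachable cluster and a configured kubeconfig; run them from the node where kubectl is set up.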
Create Certificate
Since we use our own domain certificate, we load it into the cluster as a Kubernetes Secret. Alternatively, you can use cert-manager to issue a certificate for Rancher, or use Let's Encrypt.
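The certificate and key need to exist as files before they can be loaded into a Secret. A minimal sketch using a self-signed certificate (assuming the hostname `rancher-cicd.test.cn` used below; substitute your real domain certificate and key files):

```shell
# Generate a self-signed certificate and key for the Rancher hostname.
# Replace tls.crt/tls.key with your real domain certificate if you have one.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -days 365 \
  -subj "/CN=rancher-cicd.test.cn"
```

The files can then be loaded into the Secret the Rancher chart expects: `kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key` (the name `tls-rancher-ingress` is what the chart looks for when `ingress.tls.source=secret`).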
# kubectl get ingress --all-namespaces
NAMESPACE       NAME      HOSTS                  ADDRESS   PORTS     AGE
cattle-system   rancher   rancher-cicd.test.cn             80, 443   20h
# kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 2 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
# kubectl -n cattle-system get deploy rancher
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
rancher   3/3     3            3           5m5s
At this point, the entire Kubernetes cluster, together with Rancher, has been successfully deployed. If everything goes smoothly, the whole process takes around 10 minutes; the key is proper planning in advance.