Installing a Kubernetes Cluster with kubeadm

This article walks through two kubeadm-based deployments: a single-master setup and a multi-node (HA) cluster setup.

Node initialization (all nodes)

The steps in this section must be performed on every node.

Host preparation

# load br_netfilter first so the net.bridge.* keys below are available
modprobe br_netfilter
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system
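
The modprobe above only loads br_netfilter for the current boot, and kubeadm also expects swap to be disabled (vm.swappiness=0 alone is not enough). A small sketch covering both, using the standard systemd modules-load.d and fstab locations:

# load br_netfilter automatically on every boot
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF

# turn swap off now and comment out any swap entries so it stays off after a reboot
swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab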

Installing containerd

Kubernetes 1.24 removed the built-in dockershim, so Docker Engine is no longer supported as a runtime out of the box; containerd is used here instead.

Manual installation

Install containerd. Available versions can be found at https://github.com/containerd/containerd/releases

wget https://github.com/containerd/containerd/releases/download/v1.6.11/containerd-1.6.11-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.6.11-linux-amd64.tar.gz
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -P /usr/local/lib/systemd/system/
systemctl daemon-reload
systemctl enable --now containerd
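
A quick sanity check that the binaries landed in /usr/local/bin and the service came up (ctr is the CLI that ships with containerd):

systemctl is-active containerd
ctr version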

Installing runc

wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
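
With the manual route, the standard CNI plugin binaries (bridge, host-local, loopback, and so on) are not installed yet, but they are needed by any CNI configuration, including the bridge example later in this article. A sketch based on the containernetworking/plugins releases; the version below is only an example, check https://github.com/containernetworking/plugins/releases for the current one:

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz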

Installation from the yum repository

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install containerd.io -y
systemctl daemon-reload
systemctl enable --now containerd

containerd installed from yum ships with CRI disabled: its configuration file /etc/containerd/config.toml contains disabled_plugins = ["cri"]. Comment that line out and restart containerd:

sed -i 's/disabled_plugins/#disabled_plugins/'  /etc/containerd/config.toml
systemctl restart containerd
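
The kubelet configuration used later in this article sets the cgroup driver to systemd, so containerd should use the systemd cgroup driver as well, regardless of how it was installed. One way to do that is to regenerate a full default configuration and flip SystemdCgroup to true (the sed below is a rough sketch; the setting lives under the runc runtime options of the CRI plugin):

containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd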

Installing kubeadm/kubelet/kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
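
At this point the kubelet will restart in a loop every few seconds; that is expected, since it is waiting for kubeadm to tell it what to do. A quick check that the tools are in place:

kubeadm version
kubectl version --client
systemctl status kubelet --no-pager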

Single-master mode

Node              Role
172.21.115.190    master node

Initializing with kubeadm

Create a file named kubeadm-config.yaml with the following content:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.4
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
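
Optionally, the control-plane images can be pulled ahead of time with the same configuration file, which makes the actual init faster and surfaces registry problems early:

kubeadm config images pull --config kubeadm-config.yaml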

Then run:

kubeadm init --config kubeadm-config.yaml

# alternatively, dump the full default configuration (including the
# KubeletConfiguration), adjust it, and initialize from that instead:
kubeadm config print init-defaults --component-configs KubeletConfiguration > cluster.yaml
kubeadm init --config cluster.yaml

Next, set up the kubeconfig file so the cluster can be accessed with kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
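
kubectl should now be able to reach the API server; a quick check:

kubectl cluster-info
kubectl get nodes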

Installing a network plugin

A freshly initialized node stays in the NotReady state because no network plugin has been installed yet.

Cilium network plugin

Cilium is a popular choice; the installation steps are as follows:

# install the cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

# install Cilium into the cluster
cilium install

Once the network plugin is installed, the node switches to the Ready state.
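
The cilium CLI can also confirm that the agent and operator are healthy (--wait blocks until the deployment is ready):

cilium status --wait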

The environment should now contain pods similar to the following:

$ kubectl get pod -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   cilium-7zj7t                     1/1     Running   0          82s
kube-system   cilium-operator-bc4d5b54-kvqqx   1/1     Running   0          82s
kube-system   coredns-565d847f94-hrm9b         1/1     Running   0          14m
kube-system   coredns-565d847f94-z5kwr         1/1     Running   0          14m
kube-system   etcd-k8s002                      1/1     Running   0          14m
kube-system   kube-apiserver-k8s002            1/1     Running   0          14m
kube-system   kube-controller-manager-k8s002   1/1     Running   0          14m
kube-system   kube-proxy-bhpqr                 1/1     Running   0          14m
kube-system   kube-scheduler-k8s002            1/1     Running   0          14m

The built-in bridge plugin

In a single-node setup, pods never need to communicate across nodes, so the standard bridge CNI plugin is enough for pod-to-pod traffic within the node, much like Docker's bridge network mode.

mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-mynet.conf <<EOF
{
  "cniVersion": "0.2.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.19.0.0/24",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF

If the node has already been deployed, restart the kubelet for this configuration to take effect.
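
To verify the bridge configuration, start a throwaway pod and check that it gets an address from the 172.19.0.0/24 range. The image and pod name below are only examples; on a single-node cluster the control-plane taint also has to be removed first so the pod can be scheduled:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl run bridge-test --image=nginx --restart=Never
kubectl get pod bridge-test -o wide
kubectl delete pod bridge-test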

Adding more nodes

kubeadm join 172.21.115.189:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:457fba2c4181a5b02d2a4f202dfe20f9ce5b9f2274bf40b6d25a8a8d4a7ce440
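
The token and CA cert hash above are printed at the end of kubeadm init; if that output is no longer at hand, an equivalent join command can be regenerated on the control-plane node:

kubeadm token create --print-join-command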

After this, the node appears in the cluster:

$ kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
k8s002   Ready    control-plane   79m   v1.25.4
k8s003   Ready    <none>          35s   v1.25.4

Cleaning up nodes

Cleaning up a worker node

# run where kubectl has cluster access
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
# run on the node being removed
kubeadm reset
# flush leftover iptables rules on that node
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# finally remove the node object from the cluster
kubectl delete node <node name>

Cleaning up a control-plane node

kubeadm reset
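
kubeadm reset does not clean up CNI configuration, kubeconfig files, or iptables rules, so these can be removed manually if the machine is to be reused (paths are the defaults used earlier in this article):

rm -rf /etc/cni/net.d
rm -rf $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X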

Cluster (HA) mode deployment

To be completed. Reference: Creating Highly Available Clusters with kubeadm | Kubernetes
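
As a rough placeholder until this section is written: the main differences from the single-master setup are a shared control-plane endpoint (typically a load balancer in front of the API servers) and joining the remaining control-plane nodes with a certificate key. A minimal sketch, with the load balancer address and keys as placeholders:

# on the first control-plane node
kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
# on each additional control-plane node, run the "kubeadm join ... --control-plane
# --certificate-key ..." command printed by the init above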

References