A Highly Available Kubernetes v1.19.8 Cluster on a QingCloud VPC Network (Complete)

Today we walk through building a highly available Kubernetes cluster on a QingCloud VPC network. To be clear up front, no automation tooling is involved here; as in our earlier deployment guides, everything is installed from binaries. A few things differ from before: the network plugin is Calico, and the Kubernetes version has been bumped to v1.19.8. The container runtime is still Docker; we have not adapted the setup for the containerd runtime. Interested readers can switch to containerd on their own (containerd implements the Kubernetes Container Runtime Interface (CRI) and provides the core runtime features such as image and container management; compared with dockerd it is simpler, more robust, and more portable). Let's look at the concrete steps.

First, register an account on QingCloud and purchase the required resources: cloud servers, a public IP, a VPC network, and so on. Once that is done, create a project named Kubernetes and move all of the resources into it for unified management. We define the VPC network as the 172.20.0.0/24 subnet and configure IP addresses and DNS for the cloud servers; we then connect to the Kubernetes project through QingCloud's built-in VPN service, after which all resources can be managed (see the official QingCloud documentation for the platform-specific operations, which are not covered in detail here; one reminder: after configuring the VPN, remember to open the corresponding ports in the security group).

1. Component Versions (for Reference)

No.  Component                Version    Notes
1    kubernetes               v1.19.8
2    etcd                     v3.4.15
3    docker                   v19.03.8
4    calico                   v3.18.1
5    coredns                  v1.8.3
6    dashboard                v2.2.2
7    k8s-prometheus-adapter   v0.5.0
8    prometheus-operator      v0.38.0
9    prometheus               v2.15.2
10   elasticsearch, kibana    v7.11.2
11   cni-plugins              v3.18.1
12   metrics-server           v0.4.2
13   weave                    v2.8.1
14   kubeapps                 v2.3.1
15   helm                     v3.1.0
16   grafana                  v1.13.0
17   traefik                  v2.1

Main Configuration Strategy

kube-apiserver:

  • highly available via a node-local nginx layer-4 transparent proxy (a configuration sketch follows this list);
  • the insecure port 8080 and anonymous access are disabled;
  • HTTPS requests are served on the secure port 6443;
  • strict authentication and authorization policies (x509, token, RBAC);
  • bootstrap token authentication is enabled to support kubelet TLS bootstrapping;
  • kubelet and etcd are accessed over HTTPS, so the traffic is encrypted;
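
For reference, the node-local layer-4 proxy is typically a small nginx built with the stream module, listening on 127.0.0.1:8443 (the KUBE_APISERVER address defined later in environment.sh) and balancing across the three apiservers. Below is a minimal sketch of such a configuration; the /opt/k8s/kube-nginx path is an assumption for illustration, not a path mandated by this guide:

# Hypothetical install prefix; adjust to your own kube-nginx layout.
mkdir -p /opt/k8s/kube-nginx/conf
# Quoted heredoc so the shell does not expand $remote_addr.
cat > /opt/k8s/kube-nginx/conf/kube-nginx.conf <<'NGINX'
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.20.0.20:6443 max_fails=3 fail_timeout=30s;
        server 172.20.0.21:6443 max_fails=3 fail_timeout=30s;
        server 172.20.0.22:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
NGINX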

kube-controller-manager:

  • 3-node high availability;
  • the insecure port is disabled, and HTTPS requests are served on the secure port 10257;
  • accesses the apiserver's secure port using a kubeconfig file;
  • kubelet certificate signing requests (CSRs) are approved automatically, and certificates are rotated automatically on expiry;
  • each controller uses its own ServiceAccount to access the apiserver;

kube-scheduler:

  • 3-node high availability;
  • accesses the apiserver's secure port using a kubeconfig file;

kubelet:

  • bootstrap tokens are created dynamically with kubeadm rather than configured statically in the apiserver;
  • the TLS bootstrap mechanism automatically generates client and server certificates and rotates them on expiry;
  • the main parameters are configured in a KubeletConfiguration-type JSON file;
  • the read-only port is disabled; HTTPS requests are served on the secure port 10250 with authentication and authorization, and anonymous or unauthorized access is rejected;
  • accesses the apiserver's secure port using a kubeconfig file;

kube-proxy:

  • accesses the apiserver's secure port using a kubeconfig file;
  • the main parameters are configured in a KubeProxyConfiguration-type JSON file;
  • runs in ipvs proxy mode;

Cluster add-ons:

  • DNS: coredns, which offers better functionality and performance;
  • Dashboard: supports login authentication;
  • Metrics: metrics-server, accessing the kubelet secure port over HTTPS;
  • Logging: Elasticsearch, Fluentd, Kibana;
  • Registry: docker-registry, harbor;

2. System Initialization

Cluster layout

  • Kubernetes-01: 172.20.0.20
  • Kubernetes-02: 172.20.0.21
  • Kubernetes-03: 172.20.0.22

Note: unless stated otherwise, the initialization steps in this section must be performed on every node.

Configure hostnames

# Replace the node names below with your own hostnames
hostnamectl set-hostname kubernetes-01
hostnamectl set-hostname kubernetes-02
hostnamectl set-hostname kubernetes-03

If your DNS cannot resolve the hostnames, also add the hostname-to-IP mappings to the /etc/hosts file on every machine:

cat >> /etc/hosts <<EOF
172.20.0.20  kubernetes-01
172.20.0.21  kubernetes-02
172.20.0.22  kubernetes-03
EOF

Then log out and log back in as root; the new hostname takes effect.

Set up SSH trust between nodes

This step only needs to be done on the Kubernetes-01 node; it allows the root account to log in to all nodes without a password:

ssh-keygen -t rsa 
ssh-copy-id root@kubernetes-01
ssh-copy-id root@kubernetes-02
ssh-copy-id root@kubernetes-03
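
Before moving on, a quick non-interactive check (not part of the original steps) confirms that key-based login works; BatchMode makes ssh fail instead of prompting for a password:

# Each command should print the remote hostname without asking for a password
for h in kubernetes-01 kubernetes-02 kubernetes-03
  do
    ssh -o BatchMode=yes root@${h} hostname
  done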

Update the PATH variable

echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc && source /root/.bashrc

Note: the /opt/k8s/bin directory holds the programs downloaded and installed in this guide.

Install dependencies

yum install -y epel-release
yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git

Note: this guide runs kube-proxy in ipvs mode, and ipvsadm is the management tool for ipvs; the etcd cluster machines need synchronized clocks, and chrony takes care of system time synchronization.

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

Note: stop the firewall, clear its rules, and set the default FORWARD policy to ACCEPT.

# Disable the swap partition
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Note: disable swap, otherwise kubelet will fail to start (alternatively, set the kubelet flag --fail-swap-on=false to skip the swap check). Disable SELinux, otherwise kubelet may report Permission denied when mounting directories.
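
A quick way to confirm both changes took effect (setenforce 0 only switches the running system to permissive mode; SELINUX=disabled fully applies after the reboot at the end of this section):

swapon -s              # no output means swap is off
free -h | grep -i swap # the Swap line should show 0B
getenforce             # Permissive now, Disabled after the reboot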

Tune kernel parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

Note: tcp_tw_recycle is disabled because it conflicts with NAT and can make services unreachable.
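
If sysctl -p reports "No such file or directory" for the net.bridge.* keys, the br_netfilter kernel module is most likely not loaded yet. A common fix (not part of the original text) is to load it now and on every boot:

modprobe br_netfilter
# systemd loads everything listed under /etc/modules-load.d/ at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/kubernetes.conf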

Configure the time zone and clock synchronization

# Set the system time zone
[root@kubernetes-01 work]# timedatectl set-timezone Asia/Shanghai

# Check the synchronization status
[root@kubernetes-01 work]# timedatectl status
      Local time: Tue 2021-03-30 14:17:11 CST
  Universal time: Tue 2021-03-30 06:17:11 UTC
        RTC time: Tue 2021-03-30 06:17:12
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
[root@kubernetes-01 work]#

# Write the current UTC time to the hardware clock
[root@kubernetes-01 work]# timedatectl set-local-rtc 0

# Restart services that depend on the system time
[root@kubernetes-01 work]# systemctl restart rsyslog 
[root@kubernetes-01 work]# systemctl restart crond

# Stop unneeded services
[root@kubernetes-01 work]# systemctl stop postfix && systemctl disable postfix

# Create the installation directories used in the following steps
[root@kubernetes-01 work]# mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert

Note: NTP synchronized: yes means the clock is in sync; NTP enabled: yes means the time synchronization service is enabled.

Distribute the cluster configuration parameters

All environment variables used later are defined in the file environment.sh; adjust them for your own machines and network:

#!/usr/bin/bash

# Encryption key used to generate the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Array of cluster machine IPs
export NODE_IPS=(172.20.0.20 172.20.0.21 172.20.0.22)

# Array of hostnames corresponding to the IPs
export NODE_NAMES=(kubernetes-01 kubernetes-02 kubernetes-03)

# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://172.20.0.20:2379,https://172.20.0.21:2379,https://172.20.0.22:2379"

# IPs and ports used for communication between etcd cluster members
export ETCD_NODES="kubernetes-01=https://172.20.0.20:2380,kubernetes-02=https://172.20.0.21:2380,kubernetes-03=https://172.20.0.22:2380"

# Address and port of the kube-apiserver reverse proxy (kube-nginx)
export KUBE_APISERVER="https://127.0.0.1:8443"

# Name of the network interface used for inter-node communication
export IFACE="eth0"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; ideally an SSD partition, or at least a different partition from ETCD_DATA_DIR
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Data directory for the Kubernetes components
export K8S_DIR="/data/k8s/k8s"

## Choose either DOCKER_DIR or CONTAINERD_DIR
# docker data directory
export DOCKER_DIR="/data/k8s/docker"

# flannel
export FLANNEL_ETCD_PREFIX="/kubernetes/network"

# containerd data directory
# export CONTAINERD_DIR="/data/k8s/containerd"

## The parameters below normally do not need to be changed

# Token used for TLS bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# Prefer currently unused subnets for the service and Pod networks

# Service network; unroutable before deployment, routable inside the cluster afterwards (ensured by kube-proxy)
SERVICE_CIDR="10.254.0.0/16"

# Pod network; a /16 is recommended; unroutable before deployment, routable inside the cluster afterwards (ensured by the network plugin)
CLUSTER_CIDR="172.30.0.0/16"

# Service port range (NodePort Range)
export NODE_PORT_RANGE="30000-32767"

# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"

# Cluster DNS domain (no trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH

After editing the environment.sh file above, save it to the /opt/k8s/bin/ directory and copy it to all nodes (run the following only after modifying the file):

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp environment.sh root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

[root@kubernetes-01 bin]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 bin]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp environment.sh root@${node_ip}:/opt/k8s/bin/
>     ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
environment.sh                                                                                   100% 2275     6.0MB/s   00:00    
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
environment.sh                                                                                   100% 2275     2.4MB/s   00:00    
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
environment.sh                                                                                   100% 2275     2.6MB/s   00:00    
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
[root@kubernetes-01 bin]#

Upgrade the kernel

The stock 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker and Kubernetes unstable, for example:

  • Docker 1.13 and later enable the kernel memory accounting feature, which the 3.10 kernel only supports experimentally (and it cannot be turned off); under pressure, e.g. when containers are frequently started and stopped, this causes cgroup memory leaks;
  • a network device reference-count leak, which produces errors like: "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1";

The available fixes are:

  • upgrade the kernel to 4.4.x or newer;
  • or compile the kernel manually with the CONFIG_MEMCG_KMEM feature disabled;
  • or install Docker 18.09.1 or newer, which fixes the problem. However, kubelet also sets kmem (it vendors runc), so kubelet must be recompiled with GOFLAGS="-tags=nokmem":
git clone --branch v1.14.1 --single-branch --depth 1 https://github.com/kubernetes/kubernetes
cd kubernetes
KUBE_GIT_VERSION=v1.14.1 ./build/run.sh make kubelet GOFLAGS="-tags=nokmem"

Here we take the kernel upgrade route:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default 0

# Once all of the steps above are done, reboot every host
sync
reboot
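
After the reboot it is worth confirming that the machines actually came up on the new kernel; a quick check on each node (grubby ships with stock CentOS 7):

uname -r                  # should now report the kernel-lt (4.4.x or newer) version
grubby --default-kernel   # should point at the newly installed kernel image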

References for this section:

  • kernel parameters: https://docs.openshift.com/enterprise/3.2/admin_guide/overcommit.html
  • discussions and fixes for the 3.10.x kernel kmem bugs:
    • https://github.com/kubernetes/kubernetes/issues/61937
    • https://support.mesosphere.com/s/article/Critical-Issue-KMEM-MSPH-2018-0006
    • https://pingcap.com/image/try-to-fix-two-linux-kernel-bugs-while-testing-tidb-operator-in-k8s/

3. Create the CA Certificate and Key

To keep things secure, the Kubernetes components encrypt and authenticate their communication with x509 certificates. The CA (Certificate Authority) is a self-signed root certificate used to sign all certificates created afterwards. The CA certificate is shared by every node in the cluster and only needs to be created once; it then signs all other certificates. This section uses CloudFlare's PKI toolkit cfssl to create all of the certificates.

Note: unless stated otherwise, all operations in this section are performed on the Kubernetes-01 node.

Install the cfssl toolkit

cfssl GitHub project: https://github.com/cloudflare/cfssl

sudo mkdir -p /opt/k8s/cert && cd /opt/k8s/work

wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64

mv cfssl_1.5.0_linux_amd64 /opt/k8s/bin/cfssl
mv cfssljson_1.5.0_linux_amd64 /opt/k8s/bin/cfssljson
mv cfssl-certinfo_1.5.0_linux_amd64 /opt/k8s/bin/cfssl-certinfo

chmod +x /opt/k8s/bin/* && export PATH=/opt/k8s/bin:$PATH
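
A quick sanity check that the tools are on PATH and runnable:

which cfssl cfssljson cfssl-certinfo
cfssl version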

Create the configuration file

The CA configuration file defines the root certificate's usage scenarios (profiles) and concrete parameters (usages, expiry, server auth, client auth, encryption, and so on):

cd /opt/k8s/work

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
  • signing: the certificate can be used to sign other certificates (the generated ca.pem contains CA=TRUE);
  • server auth: a client can use this certificate to verify the certificate presented by a server;
  • client auth: a server can use this certificate to verify the certificate presented by a client;
  • "expiry": "876000h": the certificate is valid for 100 years;

Create the certificate signing request file

cd /opt/k8s/work
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "z0ukun"
    }
  ],
  "ca": {
    "expiry": "876000h"
 }
}
EOF
  • CN (Common Name): kube-apiserver extracts this field from a certificate as the requesting user name (User Name); browsers use it to check whether a website is legitimate;
  • O (Organization): kube-apiserver extracts this field as the group (Group) the requesting user belongs to;
  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

Note: the CN, C, ST, L, O, OU combination in each certificate's CSR file must be unique, otherwise you may hit the error PEER'S CERTIFICATE HAS AN INVALID SIGNATURE; the CSR files created later all use distinct CN values (with identical C, ST, L, O, OU) to keep them apart.

Generate the CA certificate and private key

[root@kubernetes-01 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2021/03/30 14:07:56 [INFO] generating a new CA key and certificate from CSR
2021/03/30 14:07:56 [INFO] generate received request
2021/03/30 14:07:56 [INFO] received CSR
2021/03/30 14:07:56 [INFO] generating key: rsa-2048
2021/03/30 14:07:57 [INFO] encoded CSR
2021/03/30 14:07:57 [INFO] signed certificate with serial number 11321818047568877525981512325340936770890720652
[root@kubernetes-01 work]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
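
The cfssl-certinfo tool downloaded above can be used to inspect the result, for example to confirm the CN, the 100-year validity, and that the certificate is marked as a CA:

cd /opt/k8s/work
cfssl-certinfo -cert ca.pem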

Distribute the certificate files

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
    scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
>     scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
ca-key.pem                                                                                                      100% 1675   930.7KB/s   00:00    
ca.pem                                                                                                          100% 1322     2.4MB/s   00:00    
ca-config.json                                                                                                  100%  293   556.0KB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
ca-key.pem                                                                                                      100% 1675   998.5KB/s   00:00    
ca.pem                                                                                                          100% 1322     1.7MB/s   00:00    
ca-config.json                                                                                                  100%  293   433.7KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
ca-key.pem                                                                                                      100% 1675   870.4KB/s   00:00    
ca.pem                                                                                                          100% 1322     1.7MB/s   00:00    
ca-config.json                                                                                                  100%  293   305.5KB/s   00:00    
[root@kubernetes-01 work]#

4. Deploy the kubectl Command-Line Tool

Note: this section covers installing and configuring kubectl, the Kubernetes command-line management tool. It only needs to be done once; the generated kubeconfig file is universal and can be copied to ~/.kube/config on any machine that needs to run kubectl commands.

Note: unless stated otherwise, all operations in this section are performed on the Kubernetes-01 node.

Download the kubectl binary

cd /opt/k8s/work

# Work around the network restrictions yourself to download this
wget https://dl.k8s.io/v1.19.8/kubernetes-client-linux-amd64.tar.gz 
tar -xzvf kubernetes-client-linux-amd64.tar.gz

[root@kubernetes-01 work]# wget https://dl.k8s.io/v1.19.8/kubernetes-client-linux-amd64.tar.gz 
--2021-03-30 14:08:13--  https://dl.k8s.io/v1.19.8/kubernetes-client-linux-amd64.tar.gz
Resolving dl.k8s.io (dl.k8s.io)... 34.107.204.206, 2600:1901:0:26f3::
Connecting to dl.k8s.io (dl.k8s.io)|34.107.204.206|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://storage.googleapis.com/kubernetes-release/release/v1.19.8/kubernetes-client-linux-amd64.tar.gz [following]
--2021-03-30 14:08:16--  https://storage.googleapis.com/kubernetes-release/release/v1.19.8/kubernetes-client-linux-amd64.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.27.144, 172.217.160.80, 216.58.200.48, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.27.144|:443... connected.
^C
[root@kubernetes-01 work]# tar -zxvf kubernetes-client-linux-amd64.tar.gz 
kubernetes/
kubernetes/client/
kubernetes/client/bin/
kubernetes/client/bin/kubectl
[root@kubernetes-01 work]#
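
The apiserver is not up yet, so only a client-side sanity check is possible at this point:

./kubernetes/client/bin/kubectl version --client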

Distribute it to all nodes that will use kubectl:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kubernetes/client/bin/kubectl root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kubernetes/client/bin/kubectl root@${node_ip}:/opt/k8s/bin/
>     ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kubectl                                                                                                         100%   41MB 122.7MB/s   00:00    
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kubectl                                                                                                         100%   41MB  93.2MB/s   00:00    
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kubectl                                                                                                         100%   41MB  94.6MB/s   00:00    
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.

Create the admin certificate and private key

kubectl communicates with kube-apiserver securely over HTTPS, and kube-apiserver authenticates and authorizes the certificate carried in kubectl's requests. Since kubectl is used for cluster administration, we create an admin certificate with the highest privileges here.

Create the certificate signing request:

cd /opt/k8s/work
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "z0ukun"
    }
  ]
}
EOF
  • O: system:masters: when kube-apiserver receives a request using this certificate, it attaches the group (Group) identity system:masters to the request;
  • the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants the highest privileges needed to operate the cluster;
  • the certificate is only used by kubectl as a client certificate, so its hosts field is empty.

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*

[root@kubernetes-01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
>   -ca-key=/opt/k8s/work/ca-key.pem \
>   -config=/opt/k8s/work/ca-config.json \
>   -profile=kubernetes admin-csr.json | cfssljson -bare admin
2021/03/30 14:10:04 [INFO] generate received request
2021/03/30 14:10:04 [INFO] received CSR
2021/03/30 14:10:04 [INFO] generating key: rsa-2048
2021/03/30 14:10:04 [INFO] encoded CSR
2021/03/30 14:10:04 [INFO] signed certificate with serial number 263807722639968682310482022034641272658670741851
2021/03/30 14:10:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@kubernetes-01 work]# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
[root@kubernetes-01 work]#

Note: ignore the warning message [WARNING] This certificate lacks a "hosts" field.

Create the kubeconfig file

kubectl uses a kubeconfig file to access the apiserver; the file contains the kube-apiserver address and the authentication material (the CA certificate and a client certificate):

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=https://${NODE_IPS[0]}:6443 \
  --kubeconfig=kubectl.kubeconfig

# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set the context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
  • --client-certificate, --client-key: the admin certificate and private key just generated, used for HTTPS communication with kube-apiserver;
  • --embed-certs=true: embeds the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (otherwise only the certificate file paths are written, and the certificate files would have to be copied separately whenever the kubeconfig is copied to another machine, which is inconvenient);
  • --server: the kube-apiserver address; here it points at the service on the first node. A quick way to inspect the generated file is shown after this list.
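
Before distributing the file, you can inspect it as referenced above; kubectl redacts the embedded certificates and keys in the output:

kubectl config view --kubeconfig=kubectl.kubeconfig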

Distribute the kubeconfig file

Distribute it to all nodes that will run kubectl commands:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p ~/.kube"
>     scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kubectl.kubeconfig                                                                                              100% 6223     6.6MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kubectl.kubeconfig                                                                                              100% 6223     2.2MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kubectl.kubeconfig                                                                                              100% 6223     2.3MB/s   00:00    
[root@kubernetes-01 work]#

5. Deploy the etcd Cluster

etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and concurrency control (leader election, distributed locks, and so on); Kubernetes uses an etcd cluster to persist all API objects and runtime data.

This section walks through deploying a highly available three-node etcd cluster:

  • download and distribute the etcd binaries;
  • create x509 certificates for the etcd cluster nodes, used to encrypt communication between clients (such as etcdctl) and the cluster, and among cluster members;
  • create the etcd systemd unit file and configure the service parameters;
  • check that the cluster is healthy;

The etcd cluster node names and IPs are:

  • Kubernetes-01: 172.20.0.20
  • Kubernetes-02: 172.20.0.21
  • Kubernetes-03: 172.20.0.22

Note: unless stated otherwise, all operations in this section are performed on the Kubernetes-01 node.

Download the etcd binaries

etcd downloads: https://github.com/etcd-io/etcd/releases

cd /opt/k8s/work
wget https://github.com/coreos/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz

Distribute the binaries to all cluster nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp /opt/k8s/work/etcd-v3.4.15-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp /opt/k8s/work/etcd-v3.4.15-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
>     ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
etcd                                                                                                                                                                        100%   23MB 133.4MB/s   00:00    
etcdctl                                                                                                                                                                     100%   17MB 144.4MB/s   00:00    
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
etcd                                                                                                                                                                        100%   23MB 120.3MB/s   00:00    
etcdctl                                                                                                                                                                     100%   17MB 134.3MB/s   00:00    
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
etcd                                                                                                                                                                        100%   23MB 134.1MB/s   00:00    
etcdctl                                                                                                                                                                     100%   17MB 133.9MB/s   00:00    
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
[root@kubernetes-01 work]# 

Create the etcd certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.20.0.20",
    "172.20.0.21",
    "172.20.0.22"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "z0ukun"
    }
  ]
}
EOF
  • hosts: the list of etcd node IPs authorized to use this certificate; every node IP in the etcd cluster must be listed.

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem

[root@kubernetes-01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
>     -ca-key=/opt/k8s/work/ca-key.pem \
>     -config=/opt/k8s/work/ca-config.json \
>     -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2021/03/31 13:23:06 [INFO] generate received request
2021/03/31 13:23:06 [INFO] received CSR
2021/03/31 13:23:06 [INFO] generating key: rsa-2048
2021/03/31 13:23:07 [INFO] encoded CSR
2021/03/31 13:23:07 [INFO] signed certificate with serial number 702521290657600647093264045094244876594460352842
[root@kubernetes-01 work]# ls etcd*pem
etcd-key.pem  etcd.pem
[root@kubernetes-01 work]# 

Distribute the generated certificate and private key to each etcd node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
    scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
>     scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
etcd-key.pem                                                                                                                                                                100% 1675     1.9MB/s   00:00    
etcd.pem                                                                                                                                                                    100% 1440     1.8MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
etcd-key.pem                                                                                                                                                                100% 1675     1.5MB/s   00:00    
etcd.pem                                                                                                                                                                    100% 1440   137.6KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
etcd-key.pem                                                                                                                                                                100% 1675     1.7MB/s   00:00    
etcd.pem                                                                                                                                                                    100% 1440     1.6MB/s   00:00    
[root@kubernetes-01 work]# 

Create the etcd systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • WorkingDirectory, --data-dir: the working and data directories are set to ${ETCD_DATA_DIR}, which must be created before starting the service;
  • --wal-dir: the WAL directory; for better performance, use an SSD or a different disk from --data-dir;
  • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file, --key-file: the certificate and private key used for communication between the etcd server and its clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file, --peer-key-file: the certificate and private key used for communication between etcd peers;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them.

Create and distribute per-node etcd systemd unit files

Substitute the variables in the template to create a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service 
  done

ls *.service

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for (( i=0; i < 3; i++ ))
>   do
>     sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service 
>   done
[root@kubernetes-01 work]# ls *.service
etcd-172.20.0.20.service  etcd-172.20.0.21.service  etcd-172.20.0.22.service
[root@kubernetes-01 work]# 
  • NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and their corresponding IPs.

Distribute the generated systemd unit files:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
etcd-172.20.0.20.service                                                                                                                                                    100% 1392     2.1MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
etcd-172.20.0.21.service                                                                                                                                                    100% 1392     1.5MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
etcd-172.20.0.22.service                                                                                                                                                    100% 1392   919.0KB/s   00:00    
[root@kubernetes-01 work]# 

Start the etcd service

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
>     ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
[3] 1671
>>> 172.20.0.21
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
[4] 1680
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
[5] 1708
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
[root@kubernetes-01 work]# 
[3]   Done                    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd "
[4]   Done                    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd "
[5]-  Done                    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd "
[root@kubernetes-01 work]# 
  • remember: the etcd data and working directories must be created before starting the service;
  • on first start, the etcd process waits for the other nodes' etcd instances to join the cluster, so systemctl start etcd hangs for a while; this is normal.

Check the startup result

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd|grep Active"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl status etcd|grep Active"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 13:36:34 CST; 33s ago
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 13:36:34 CST; 33s ago
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 13:36:34 CST; 34s ago
[root@kubernetes-01 work]# 

Note: make sure the state is active (running); otherwise inspect the logs with journalctl -u etcd to find the cause.

Verify the service status

After the etcd cluster is deployed, run the following on any etcd node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done
  • etcd/etcdctl 3.4.x enable the v3 API by default, so the ETCDCTL_API=3 environment variable is no longer needed when running etcdctl;
  • starting with Kubernetes 1.13, etcd v2 is no longer supported.

Expected output:

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     /opt/k8s/bin/etcdctl \
>     --endpoints=https://${node_ip}:2379 \
>     --cacert=/etc/kubernetes/cert/ca.pem \
>     --cert=/etc/etcd/cert/etcd.pem \
>     --key=/etc/etcd/cert/etcd-key.pem endpoint health
>   done
>>> 172.20.0.20
https://172.20.0.20:2379 is healthy: successfully committed proposal: took = 11.520721ms
>>> 172.20.0.21
https://172.20.0.21:2379 is healthy: successfully committed proposal: took = 12.447712ms
>>> 172.20.0.22
https://172.20.0.22:2379 is healthy: successfully committed proposal: took = 12.52316ms
[root@kubernetes-01 work]# 

When every endpoint reports healthy, the cluster is working properly.
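
As an additional check, the cluster membership can be listed with the same certificates:

source /opt/k8s/bin/environment.sh
/opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  -w table member list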

Check the current leader

source /opt/k8s/bin/environment.sh
/opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status

The output looks like this:

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# /opt/k8s/bin/etcdctl \
>   -w table --cacert=/etc/kubernetes/cert/ca.pem \
>   --cert=/etc/etcd/cert/etcd.pem \
>   --key=/etc/etcd/cert/etcd-key.pem \
>   --endpoints=${ETCD_ENDPOINTS} endpoint status
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://172.20.0.20:2379 |  96ff53158e80760 |  3.4.15 |   20 kB |      true |      false |       148 |         18 |                 18 |        |
| https://172.20.0.21:2379 | a27a01e650530e07 |  3.4.15 |   20 kB |     false |      false |       148 |         18 |                 18 |        |
| https://172.20.0.22:2379 | f868e309e507a67b |  3.4.15 |   20 kB |     false |      false |       148 |         18 |                 18 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@kubernetes-01 work]#

As shown, the current leader is 172.20.0.20.

6. Deploy the Master Nodes

The Kubernetes master nodes run the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

kube-apiserver, kube-scheduler, and kube-controller-manager all run as multiple instances:

  • kube-scheduler and kube-controller-manager automatically elect a leader instance while the other instances block; when the leader dies, a new one is elected, keeping the service available;
  • kube-apiserver is stateless and is accessed through the kube-nginx proxy, keeping the service available;

Note: unless stated otherwise, all operations in this section are performed on the Kubernetes-01 node.

Download the latest binaries

Kubernetes v1.19.8 downloads: https://github.com/kubernetes/kubernetes/tree/v1.19.8

cd /opt/k8s/work

# Work around the network restrictions yourself
wget https://dl.k8s.io/v1.19.8/kubernetes-server-linux-amd64.tar.gz  
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes && tar -xzvf  kubernetes-src.tar.gz

Copy the binaries to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done


[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_ip}:/opt/k8s/bin/
>     ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
apiextensions-apiserver                                                  100%   45MB 110.4MB/s   00:00    
kube-apiserver                                                           100%  110MB 124.0MB/s   00:00    
kube-controller-manager                                                  100%  102MB 122.9MB/s   00:00    
kube-proxy                                                               100%   37MB 111.7MB/s   00:00    
kube-scheduler                                                           100%   41MB 112.5MB/s   00:00    
kubeadm                                                                  100%   37MB 113.0MB/s   00:00    
kubectl                                                                  100%   41MB 141.4MB/s   00:00    
kubelet                                                                  100%  105MB 144.7MB/s   00:00    
mounter                                                                  100% 1596KB 121.0MB/s   00:00    
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
apiextensions-apiserver                                                  100%   45MB 102.8MB/s   00:00    
kube-apiserver                                                           100%  110MB 109.9MB/s   00:01    
kube-controller-manager                                                  100%  102MB 102.3MB/s   00:01    
kube-proxy                                                               100%   37MB 105.0MB/s   00:00    
kube-scheduler                                                           100%   41MB  96.5MB/s   00:00    
kubeadm                                                                  100%   37MB 114.6MB/s   00:00    
kubectl                                                                  100%   41MB 129.1MB/s   00:00    
kubelet                                                                  100%  105MB 133.4MB/s   00:00    
mounter                                                                  100% 1596KB 102.6MB/s   00:00    
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
apiextensions-apiserver                                                  100%   45MB 103.2MB/s   00:00    
kube-apiserver                                                           100%  110MB 109.9MB/s   00:01    
kube-controller-manager                                                  100%  102MB 109.2MB/s   00:00    
kube-proxy                                                               100%   37MB 102.9MB/s   00:00    
kube-scheduler                                                           100%   41MB  41.0MB/s   00:01    
kubeadm                                                                  100%   37MB 108.8MB/s   00:00    
kubectl                                                                  100%   41MB 119.0MB/s   00:00    
kubelet                                                                  100%  105MB 111.5MB/s   00:00    
mounter                                                                  100% 1596KB  86.1MB/s   00:00    
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
[root@kubernetes-01 work]#

6.1 The kube-apiserver Cluster

Note: this section covers deploying a three-instance kube-apiserver cluster. Unless stated otherwise, all operations in this section are performed on the Kubernetes-01 node.

Create the kubernetes-master certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes-master",
  "hosts": [
    "127.0.0.1",
    "172.20.0.20",
    "172.20.0.21",
    "172.20.0.22",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local.",
    "kubernetes.default.svc.${CLUSTER_DNS_DOMAIN}."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "z0ukun"
    }
  ]
}
EOF
  • the hosts field lists the IPs and domain names authorized to use this certificate; here it includes the master node IPs and the IP and domain names of the kubernetes service;

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*pem

[root@kubernetes-01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
>   -ca-key=/opt/k8s/work/ca-key.pem \
>   -config=/opt/k8s/work/ca-config.json \
>   -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2021/03/31 14:01:12 [INFO] generate received request
2021/03/31 14:01:12 [INFO] received CSR
2021/03/31 14:01:12 [INFO] generating key: rsa-2048
2021/03/31 14:01:13 [INFO] encoded CSR
2021/03/31 14:01:13 [INFO] signed certificate with serial number 161942050792348668140609684865799859487804413770
[root@kubernetes-01 work]# ls kubernetes*pem
kubernetes-key.pem  kubernetes.pem
[root@kubernetes-01 work]# 

Copy the generated certificate and private key files to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
    scp kubernetes*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
>     scp kubernetes*.pem root@${node_ip}:/etc/kubernetes/cert/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kubernetes-key.pem                                                                                                                                                          100% 1679     2.2MB/s   00:00    
kubernetes.pem                                                                                                                                                              100% 1700     3.6MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kubernetes-key.pem                                                                                                                                                          100% 1679   902.7KB/s   00:00    
kubernetes.pem                                                                                                                                                              100% 1700     2.1MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kubernetes-key.pem                                                                                                                                                          100% 1679   795.6KB/s   00:00    
kubernetes.pem                                                                                                                                                              100% 1700     2.2MB/s   00:00    
[root@kubernetes-01 work]#

Create the encryption configuration file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
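
Once the whole cluster is up, you can verify that the aescbc provider actually takes effect by creating a Secret and reading its raw value from etcd: the stored value should begin with a k8s:enc:aescbc:v1: prefix rather than plain text. A sketch to run later, after kube-apiserver is deployed (the Secret name enc-test is made up for illustration):

kubectl create secret generic enc-test -n default --from-literal=foo=bar
source /opt/k8s/bin/environment.sh
/opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  get /registry/secrets/default/enc-test | hexdump -C | head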

Copy the encryption configuration file to the /etc/kubernetes directory on all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
encryption-config.yaml                                                                                                                                                      100%  240   273.9KB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
encryption-config.yaml                                                                                                                                                      100%  240    25.7KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
encryption-config.yaml                                                                                                                                                      100%  240   137.7KB/s   00:00    
[root@kubernetes-01 work]#

Create the audit policy file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses
  # for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io

  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

Distribute the audit policy file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp audit-policy.yaml root@${node_ip}:/etc/kubernetes/audit-policy.yaml
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp audit-policy.yaml root@${node_ip}:/etc/kubernetes/audit-policy.yaml
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
audit-policy.yaml                                                                                                                                                           100% 4322     4.9MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
audit-policy.yaml                                                                                                                                                           100% 4322     3.5MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
audit-policy.yaml                                                                                                                                                           100% 4322     2.8MB/s   00:00    
[root@kubernetes-01 work]#

Create the certificate used later to access metrics-server or kube-prometheus

Create the certificate signing request:

cd /opt/k8s/work
cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "z0ukun"
    }
  ]
}
EOF
  • the CN must appear in kube-apiserver's --requestheader-allowed-names parameter, otherwise later requests for metrics will be rejected as unauthorized.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem  \
  -config=/etc/kubernetes/cert/ca-config.json  \
  -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
ls proxy-client*.pem

[root@kubernetes-01 work]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
>   -ca-key=/etc/kubernetes/cert/ca-key.pem  \
>   -config=/etc/kubernetes/cert/ca-config.json  \
>   -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
2021/03/31 14:03:13 [INFO] generate received request
2021/03/31 14:03:13 [INFO] received CSR
2021/03/31 14:03:13 [INFO] generating key: rsa-2048
2021/03/31 14:03:13 [INFO] encoded CSR
2021/03/31 14:03:13 [INFO] signed certificate with serial number 519453823699954399345164730015346247955623868178
2021/03/31 14:03:13 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@kubernetes-01 work]# ls proxy-client*.pem
proxy-client-key.pem  proxy-client.pem
[root@kubernetes-01 work]#

Copy the generated certificate and private key files to all master nodes:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp proxy-client*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp proxy-client*.pem root@${node_ip}:/etc/kubernetes/cert/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
proxy-client-key.pem                                                                                                                                                        100% 1675     2.8MB/s   00:00    
proxy-client.pem                                                                                                                                                            100% 1399     3.7MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
proxy-client-key.pem                                                                                                                                                        100% 1675     1.7MB/s   00:00    
proxy-client.pem                                                                                                                                                            100% 1399     1.9MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
proxy-client-key.pem                                                                                                                                                        100% 1675     1.6MB/s   00:00    
proxy-client.pem                                                                                                                                                            100% 1399     1.5MB/s   00:00    
[root@kubernetes-01 work]# 

Create the kube-apiserver systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --advertise-address=##NODE_IP## \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --bind-address=##NODE_IP## \\
  --secure-port=6443 \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --insecure-port=0 \\
  --audit-log-maxage=15 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-truncate-enabled \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --enable-bootstrap-token-auth \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-https=true \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --advertise-address: the IP kube-apiserver advertises to the cluster (the backend endpoint IP of the kubernetes service);
  • --default-*-toleration-seconds: thresholds for tolerating node problems;
  • --max-*-requests-inflight: limits on in-flight requests;
  • --etcd-*: certificates for accessing etcd and the etcd server endpoints;
  • --bind-address: the IP the https endpoint listens on; must not be 127.0.0.1, or the secure port 6443 would be unreachable from outside;
  • --secure-port: the https listening port;
  • --insecure-port=0: disable the insecure http port (8080);
  • --tls-*-file: the certificate, private key, and CA file used by the apiserver;
  • --audit-*: audit policy and audit log parameters;
  • --client-ca-file: CA used to verify certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --enable-bootstrap-token-auth: enable token authentication for kubelet bootstrapping;
  • --requestheader-*: parameters for kube-apiserver's aggregation layer, required by proxy-client and HPA;
  • --requestheader-client-ca-file: CA that signed the certificates given by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
  • --requestheader-allowed-names: must not be empty; a comma-separated list of CNs of the --proxy-client-cert-file certificate, set to "aggregator" here;
  • --service-account-key-file: public key used to verify ServiceAccount tokens; it pairs with the private key given to kube-controller-manager via --service-account-private-key-file;
  • --runtime-config=api/all=true: enable all API versions, such as autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC, --anonymous-auth=false: enable Node and RBAC authorization and reject unauthorized requests;
  • --enable-admission-plugins: enable plugins that are off by default;
  • --allow-privileged: allow running privileged containers;
  • --apiserver-count=3: the number of apiserver instances;
  • --event-ttl: how long events are retained;
  • --kubelet-*: if set, use https to access the kubelet APIs; RBAC rules must be defined for the user of the certificate (the kubernetes.pem certificate above maps to user kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
  • --proxy-client-*: the certificate the apiserver uses to access metrics-server;
  • --service-cluster-ip-range: the Service cluster IP range;
  • --service-node-port-range: the NodePort port range;

Note: if the kube-apiserver machines do not run kube-proxy, the --enable-aggregator-routing=true parameter must also be added;

For the --requestheader-* parameters, see:

  • https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/auth.md
  • https://docs.bitnami.com/kubernetes/how-to/configure-autoscaling-custom-metrics/

Note: the CA certificate given by --requestheader-client-ca-file must allow both client auth and server auth; if --requestheader-allowed-names is not empty and the CN of the --proxy-client-cert-file certificate is not in the allowed names, later queries for node or pod metrics will fail with a permission error.
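
To verify both usages locally (a minimal sketch, assuming the kubernetes profile in ca-config.json lists both server auth and client auth usages for issued certificates), inspect the Extended Key Usage of the signed proxy-client certificate:

# Expect both "TLS Web Server Authentication" and "TLS Web Client Authentication"
openssl x509 -in /opt/k8s/work/proxy-client.pem -noout -text | grep -A1 'Extended Key Usage'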

Create and distribute the kube-apiserver systemd unit file for each node

Replace the variables in the template to generate a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service 
  done
ls kube-apiserver*.service

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for (( i=0; i < 3; i++ ))
>   do
>     sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service 
>   done
[root@kubernetes-01 work]# ls kube-apiserver*.service
kube-apiserver-172.20.0.20.service  kube-apiserver-172.20.0.21.service  kube-apiserver-172.20.0.22.service
[root@kubernetes-01 work]#
  • NODE_NAMES and NODE_IPS are bash arrays of the same length holding the node names and their corresponding IPs; a hypothetical excerpt is shown below;
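
For reference, the relevant part of /opt/k8s/bin/environment.sh (created earlier in this document) looks roughly like the hypothetical excerpt below; the two arrays must stay index-aligned:

# Hypothetical excerpt: index-aligned node arrays consumed by the loops above
export NODE_NAMES=(kubernetes-01 kubernetes-02 kubernetes-03)
export NODE_IPS=(172.20.0.20 172.20.0.21 172.20.0.22)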

Distribute the generated systemd unit files:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-apiserver-172.20.0.20.service                                                                                                                                          100% 2506     3.6MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-apiserver-172.20.0.21.service                                                                                                                                          100% 2506     2.8MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-apiserver-172.20.0.22.service                                                                                                                                          100% 2506     2.5MB/s   00:00    
[root@kubernetes-01 work]#

Start the kube-apiserver service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
>     ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /etc/systemd/system/kube-apiserver.service.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /etc/systemd/system/kube-apiserver.service.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /etc/systemd/system/kube-apiserver.service.
[root@kubernetes-01 work]#

Check the kube-apiserver status

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:04:30 CST; 15s ago
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:04:35 CST; 10s ago
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:04:40 CST; 5s ago
[root@kubernetes-01 work]#

Make sure the status is active (running); otherwise inspect the logs with journalctl -u kube-apiserver to find the cause.
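
Beyond the unit status, you can probe the secure port directly (a minimal sketch; it assumes the admin client certificate generated earlier in this document still sits under /opt/k8s/work):

# A healthy apiserver answers "ok" on /healthz
curl -s --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://172.20.0.20:6443/healthz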

Check the cluster status

# View cluster info
[root@kubernetes-01 work]# kubectl cluster-info
Kubernetes master is running at https://172.20.0.20:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kubernetes-01 work]# 

# List resources in all namespaces
[root@kubernetes-01 work]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   2m6s
[root@kubernetes-01 work]# 

# Check component status
[root@kubernetes-01 work]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                                                                             
etcd-1               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}                                                                             
[root@kubernetes-01 work]#

The Unhealthy entries for controller-manager and scheduler are expected at this point: neither has been deployed yet, and the deprecated kubectl get componentstatuses check probes fixed local ports. For background, see:

Reference article: https://zhongpan.tech/2019/10/30/018-output-change-kubectl-get-cs/

GitHub issue: https://github.com/kubernetes/kubernetes/issues/83024

Check the ports kube-apiserver listens on

[root@kubernetes-01 work]# sudo netstat -lnpt|grep kube
tcp        0      0 172.20.0.20:6443        0.0.0.0:*               LISTEN      3679/kube-apiserver 
[root@kubernetes-01 work]#
  • 6443: the secure port that accepts https requests; all requests are authenticated and authorized;
  • since the insecure port is disabled, nothing listens on 8080;

6.2、Controller-Manager Cluster

This subsection describes deploying a highly available kube-controller-manager cluster. The cluster contains 3 nodes; after startup a leader is elected through competition and the other nodes block. When the leader becomes unavailable, the blocked nodes elect a new leader, ensuring service availability.

For secure communication, this document first generates an x509 certificate and private key, which kube-controller-manager uses in two cases:

  1. communicating with kube-apiserver's secure port;
  2. serving Prometheus-format metrics on its own secure port (https, 10252);

Note: unless otherwise specified, all operations in this subsection are performed on the Kubernetes-01 node.

Create the kube-controller-manager certificate and private key

Create a certificate signing request:

cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.20.0.20",
      "172.20.0.21",
      "172.20.0.22"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "z0ukun"
      }
    ]
}
EOF
  • the hosts list contains all kube-controller-manager node IPs;
  • CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*pem

[root@kubernetes-01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
>   -ca-key=/opt/k8s/work/ca-key.pem \
>   -config=/opt/k8s/work/ca-config.json \
>   -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2021/03/31 14:08:34 [INFO] generate received request
2021/03/31 14:08:34 [INFO] received CSR
2021/03/31 14:08:34 [INFO] generating key: rsa-2048
2021/03/31 14:08:35 [INFO] encoded CSR
2021/03/31 14:08:35 [INFO] signed certificate with serial number 167734461365167531618854597958116803518757833256
[root@kubernetes-01 work]# ls kube-controller-manager*pem
kube-controller-manager-key.pem  kube-controller-manager.pem
[root@kubernetes-01 work]#

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-controller-manager-key.pem                                                                                                                                             100% 1675     1.9MB/s   00:00    
kube-controller-manager.pem                                                                                                                                                 100% 1513     2.6MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-controller-manager-key.pem                                                                                                                                             100% 1675     1.8MB/s   00:00    
kube-controller-manager.pem                                                                                                                                                 100% 1513     2.0MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-controller-manager-key.pem                                                                                                                                             100% 1675     1.4MB/s   00:00    
kube-controller-manager.pem                                                                                                                                                 100% 1513     1.9MB/s   00:00    
[root@kubernetes-01 work]#

Create and distribute the kubeconfig file

kube-controller-manager uses a kubeconfig file to access the apiserver; it embeds the apiserver address, the CA certificate, and the kube-controller-manager client certificate:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
  • kube-controller-manager is co-located with kube-apiserver, so it reaches kube-apiserver directly via the node IP; a quick check of the embedded server field is sketched below.
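
Before the per-node sed substitution, you can sanity-check the generated kubeconfig (a minimal sketch; at this stage the server field still contains the ##NODE_IP## placeholder):

# Show the embedded cluster server URL
kubectl config view --kubeconfig=kube-controller-manager.kubeconfig \
  -o jsonpath='{.clusters[0].cluster.server}'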

Distribute the kubeconfig to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kube-controller-manager.kubeconfig > kube-controller-manager-${node_ip}.kubeconfig
    scp kube-controller-manager-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-controller-manager.kubeconfig
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     sed -e "s/##NODE_IP##/${node_ip}/" kube-controller-manager.kubeconfig > kube-controller-manager-${node_ip}.kubeconfig
>     scp kube-controller-manager-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-controller-manager.kubeconfig
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-controller-manager-172.20.0.20.kubeconfig                                                                                                                              100% 6453     8.1MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-controller-manager-172.20.0.21.kubeconfig                                                                                                                              100% 6453     5.1MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-controller-manager-172.20.0.22.kubeconfig                                                                                                                              100% 6453     5.2MB/s   00:00    
[root@kubernetes-01 work]# 

Create the kube-controller-manager systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials\\
  --concurrent-service-syncs=2 \\
  --bind-address=##NODE_IP## \\
  --secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --port=0 \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=876000h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --port=0: disable the insecure http port; this also makes --address ineffective while --bind-address takes effect;
  • --secure-port=10252, --bind-address=##NODE_IP##: serve https /metrics requests on port 10252 of the node IP;
  • --kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate with kube-apiserver;
  • --authentication-kubeconfig and --authorization-kubeconfig: used to connect to the apiserver to authenticate and authorize clients of its own https endpoint. kube-controller-manager no longer uses --tls-ca-file to verify client certificates on https metrics requests. Without these two kubeconfig parameters, client connections to the kube-controller-manager https port are rejected (with a permission error);
  • --cluster-signing-*-file: the CA pair used to sign certificates created through TLS bootstrap;
  • --experimental-cluster-signing-duration: validity period of TLS bootstrap certificates;
  • --root-ca-file: the CA certificate placed into ServiceAccount secrets, used by pods to verify kube-apiserver's certificate;
  • --service-account-private-key-file: private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
  • --service-cluster-ip-range: the Service cluster IP range; must match the same parameter on kube-apiserver;
  • --leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;
  • --controllers=*,bootstrapsigner,tokencleaner: the controllers to enable; tokencleaner automatically removes expired bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics parameters, supporting autoscaling/v2alpha1;
  • --tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true: each controller inside kube-controller-manager uses its own ServiceAccount to access kube-apiserver;

Create and distribute the kube-controller-manager systemd unit file for each node

Replace the variables in the template to create a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${NODE_IPS[i]}.service 
  done

ls kube-controller-manager*.service

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for (( i=0; i < 3; i++ ))
>   do
>     sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${NODE_IPS[i]}.service 
>   done
[root@kubernetes-01 work]# ls kube-controller-manager*.service
kube-controller-manager-172.20.0.20.service  kube-controller-manager-172.20.0.21.service  kube-controller-manager-172.20.0.22.service
[root@kubernetes-01 work]#

Distribute to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-controller-manager-172.20.0.20.service                                                                                                                                 100% 1882     1.9MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-controller-manager-172.20.0.21.service                                                                                                                                 100% 1882     1.8MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-controller-manager-172.20.0.22.service                                                                                                                                 100% 1882     1.6MB/s   00:00    
[root@kubernetes-01 work]# 

Start the kube-controller-manager service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
>     ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /etc/systemd/system/kube-controller-manager.service.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /etc/systemd/system/kube-controller-manager.service.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /etc/systemd/system/kube-controller-manager.service.
[root@kubernetes-01 work]#

Check the service status

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:10:48 CST; 18s ago
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:10:48 CST; 18s ago
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:10:48 CST; 17s ago
[root@kubernetes-01 work]#

Make sure the status is active (running); otherwise inspect the logs with journalctl -u kube-controller-manager to find the cause.

kube-controller-manager listens on port 10252 for https requests:

[root@kubernetes-01 work]# sudo netstat -nltp | grep kube-cont
tcp        0      0 172.20.0.20:10252       0.0.0.0:*               LISTEN      4328/kube-controlle 
[root@kubernetes-01 work]#

View the exported metrics

Run the following command on a kube-controller-manager node:

[root@kubernetes-01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.20.0.20:10252/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@kubernetes-01 work]#

View the current leader

[root@kubernetes-01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kubernetes-01_bebce482-0a5f-41c3-b1dd-d7d19ca128c2","leaseDurationSeconds":15,"acquireTime":"2021-03-31T06:10:48Z","renewTime":"2021-03-31T06:12:28Z","leaderTransitions":0}'
  creationTimestamp: "2021-03-31T06:10:48Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-31T06:10:48Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "534"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: a34fb645-ced8-4f53-84de-c6070e4ae7b0
[root@kubernetes-01 work]# 

As shown, the current leader is the Kubernetes-01 node; checking from the other nodes is left to the reader. A one-liner that extracts just the leader identity is sketched below.
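
To read only the leader identity instead of the full object (a minimal sketch; dots inside the annotation key are escaped as \. in jsonpath):

kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'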

Test kube-controller-manager cluster high availability

Stop the kube-controller-manager service on one or two nodes and watch the other nodes' logs to see whether one of them acquires the leader lease.

[root@kubernetes-01 work]# systemctl stop kube-controller-manager

[root@kubernetes-02 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kubernetes-01_bebce482-0a5f-41c3-b1dd-d7d19ca128c2","leaseDurationSeconds":15,"acquireTime":"2021-03-31T06:10:48Z","renewTime":"2021-03-31T06:13:37Z","leaderTransitions":0}'
  creationTimestamp: "2021-03-31T06:10:48Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-31T06:10:48Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "623"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: a34fb645-ced8-4f53-84de-c6070e4ae7b0
[root@kubernetes-02 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kubernetes-03_b71fe527-2fde-4f36-b079-9dfa64175ccd","leaseDurationSeconds":15,"acquireTime":"2021-03-31T06:13:56Z","renewTime":"2021-03-31T06:13:56Z","leaderTransitions":1}'
  creationTimestamp: "2021-03-31T06:10:48Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-31T06:10:48Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "631"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: a34fb645-ced8-4f53-84de-c6070e4ae7b0
[root@kubernetes-02 ~]#
  • on controller permissions and the use-service-account-credentials parameter: https://github.com/kubernetes/kubernetes/issues/48208
  • kubelet authentication and authorization: https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization

6.3、Scheduler Cluster

This subsection describes deploying a highly available kube-scheduler cluster. The cluster contains 3 nodes; after startup a leader is elected through competition and the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, ensuring service availability. For secure communication, this document first generates an x509 certificate and private key, which kube-scheduler uses in two cases:

  • communicating with kube-apiserver's secure port;
  • serving Prometheus-format metrics on its own secure port (https, 10259);

Note: unless otherwise specified, all operations in this subsection are performed on the Kubernetes-01 node.

Create the kube-scheduler certificate and private key

Create a certificate signing request:

cd /opt/k8s/work
cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.20.0.20",
      "172.20.0.21",
      "172.20.0.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "z0ukun"
      }
    ]
}
EOF
  • the hosts list contains all kube-scheduler node IPs;
  • CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs;

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

ls kube-scheduler*pem

[root@kubernetes-01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
>   -ca-key=/opt/k8s/work/ca-key.pem \
>   -config=/opt/k8s/work/ca-config.json \
>   -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2021/03/31 14:15:06 [INFO] generate received request
2021/03/31 14:15:06 [INFO] received CSR
2021/03/31 14:15:06 [INFO] generating key: rsa-2048
2021/03/31 14:15:06 [INFO] encoded CSR
2021/03/31 14:15:06 [INFO] signed certificate with serial number 216169409049839048403577298545882720794791501568
[root@kubernetes-01 work]# ls kube-scheduler*pem
kube-scheduler-key.pem  kube-scheduler.pem
[root@kubernetes-01 work]#

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler*.pem root@${node_ip}:/etc/kubernetes/cert/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-scheduler*.pem root@${node_ip}:/etc/kubernetes/cert/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-scheduler-key.pem                                                                                                                                                      100% 1675     2.6MB/s   00:00    
kube-scheduler.pem                                                                                                                                                          100% 1489     3.1MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-scheduler-key.pem                                                                                                                                                      100% 1675     1.5MB/s   00:00    
kube-scheduler.pem                                                                                                                                                          100% 1489     1.6MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-scheduler-key.pem                                                                                                                                                      100% 1675     1.4MB/s   00:00    
kube-scheduler.pem                                                                                                                                                          100% 1489     1.3MB/s   00:00    
[root@kubernetes-01 work]#

Create and distribute the kubeconfig file

kube-scheduler uses a kubeconfig file to access the apiserver; it embeds the apiserver address, the CA certificate, and the kube-scheduler client certificate:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kube-scheduler.kubeconfig > kube-scheduler-${node_ip}.kubeconfig
    scp kube-scheduler-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-scheduler.kubeconfig
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     sed -e "s/##NODE_IP##/${node_ip}/" kube-scheduler.kubeconfig > kube-scheduler-${node_ip}.kubeconfig
>     scp kube-scheduler-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-scheduler.kubeconfig
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.20.kubeconfig                                                                                                                                       100% 6385     5.7MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.21.kubeconfig                                                                                                                                       100% 6385     5.9MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.22.kubeconfig                                                                                                                                       100% 6385     5.7MB/s   00:00    
[root@kubernetes-01 work]# 

Create the kube-scheduler configuration file

cd /opt/k8s/work
cat >kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
enableContentionProfiling: false
enableProfiling: true
healthzBindAddress: ##NODE_IP##:10251
leaderElection:
  leaderElect: true
metricsBindAddress: ##NODE_IP##:10251
EOF
  • clientConnection.kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate with kube-apiserver;
  • leaderElection.leaderElect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;

Replace the variables in the template:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-scheduler.yaml.template > kube-scheduler-${NODE_IPS[i]}.yaml
  done

ls kube-scheduler*.yaml

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for (( i=0; i < 3; i++ ))
>   do
>     sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-scheduler.yaml.template > kube-scheduler-${NODE_IPS[i]}.yaml
>   done
[root@kubernetes-01 work]# ls kube-scheduler*.yaml
kube-scheduler-172.20.0.20.yaml  kube-scheduler-172.20.0.21.yaml  kube-scheduler-172.20.0.22.yaml
[root@kubernetes-01 work]#
  • NODE_NAMES and NODE_IPS are bash arrays of the same length holding the node names and their corresponding IPs;

Distribute the kube-scheduler configuration file to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler-${node_ip}.yaml root@${node_ip}:/etc/kubernetes/kube-scheduler.yaml
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-scheduler-${node_ip}.yaml root@${node_ip}:/etc/kubernetes/kube-scheduler.yaml
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.20.yaml                                                                                                                                             100%  324   525.8KB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.21.yaml                                                                                                                                             100%  324   262.3KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.22.yaml                                                                                                                                             100%  324   353.5KB/s   00:00    
[root@kubernetes-01 work]#
  • the file is renamed to kube-scheduler.yaml on each node;

Create the kube-scheduler systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --config=/etc/kubernetes/kube-scheduler.yaml \\
  --bind-address=##NODE_IP## \\
  --secure-port=10259 \\
  --port=0 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

Create and distribute the kube-scheduler systemd unit file for each node

Replace the variables in the template to create a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-scheduler.service.template > kube-scheduler-${NODE_IPS[i]}.service 
  done

ls kube-scheduler*.service

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for (( i=0; i < 3; i++ ))
>   do
>     sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-scheduler.service.template > kube-scheduler-${NODE_IPS[i]}.service 
>   done
[root@kubernetes-01 work]# ls kube-scheduler*.service
kube-scheduler-172.20.0.20.service  kube-scheduler-172.20.0.21.service  kube-scheduler-172.20.0.22.service
[root@kubernetes-01 work]# 

Distribute the systemd unit files to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-scheduler-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-scheduler.service
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-scheduler-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-scheduler.service
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.20.service                                                                                                                                          100%  985     1.4MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.21.service                                                                                                                                          100%  985   824.8KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-scheduler-172.20.0.22.service                                                                                                                                          100%  985   770.5KB/s   00:00    
[root@kubernetes-01 work]#

Start the kube-scheduler service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
>     ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /etc/systemd/system/kube-scheduler.service.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /etc/systemd/system/kube-scheduler.service.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /etc/systemd/system/kube-scheduler.service.
[root@kubernetes-01 work]#

Check the service status

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-scheduler|grep Active"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl status kube-scheduler|grep Active"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:17:36 CST; 14s ago
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:17:36 CST; 13s ago
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:17:37 CST; 13s ago
[root@kubernetes-01 work]# 

Make sure the status is active (running); otherwise inspect the logs with journalctl -u kube-scheduler to find the cause.

View the exported metrics

Note: run the following commands on a kube-scheduler node.

kube-scheduler listens on ports 10251 and 10259:

  • 10251: the insecure http port; no authentication or authorization required;
  • 10259: the secure https port; requests must be authenticated and authorized;

Both endpoints serve /metrics and /healthz; a quick healthz probe is sketched below.
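
As a liveness check to complement the metrics queries below (a minimal sketch; the secure probe reuses the admin client certificate from earlier in this document):

# Insecure healthz endpoint; expect "ok"
curl -s http://172.20.0.20:10251/healthz

# Secure healthz endpoint; expect "ok"
curl -s --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://172.20.0.20:10259/healthz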

[root@kubernetes-01 work]# sudo netstat -nltp | grep kube-sch
tcp        0      0 172.20.0.20:10251       0.0.0.0:*               LISTEN      5019/kube-scheduler 
tcp        0      0 172.20.0.20:10259       0.0.0.0:*               LISTEN      5019/kube-scheduler 
[root@kubernetes-01 work]# 

[root@kubernetes-01 work]# curl -s http://172.20.0.20:10251/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@kubernetes-01 work]#

[root@kubernetes-01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.20.0.20:10259/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@kubernetes-01 work]# 

View the current leader

[root@kubernetes-01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kubernetes-01_feaae043-cf6a-4d22-9227-4c8a3e6fc49b","leaseDurationSeconds":15,"acquireTime":"2021-03-31T06:17:37Z","renewTime":"2021-03-31T06:19:35Z","leaderTransitions":0}'
  creationTimestamp: "2021-03-31T06:17:37Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-scheduler
    operation: Update
    time: "2021-03-31T06:17:37Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "1190"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 40dc57ef-af3a-4e83-98c7-cdcd79d10704
[root@kubernetes-01 work]# 

As shown, the current leader is the Kubernetes-01 node.

Test kube-scheduler cluster high availability

Pick one or two master nodes, stop their kube-scheduler service, and check whether another node acquires the leader lease.

[root@kubernetes-01 work]# systemctl stop kube-scheduler

[root@kubernetes-02 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kubernetes-01_feaae043-cf6a-4d22-9227-4c8a3e6fc49b","leaseDurationSeconds":15,"acquireTime":"2021-03-31T06:17:37Z","renewTime":"2021-03-31T06:20:26Z","leaderTransitions":0}'
  creationTimestamp: "2021-03-31T06:17:37Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-scheduler
    operation: Update
    time: "2021-03-31T06:17:37Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "1305"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 40dc57ef-af3a-4e83-98c7-cdcd79d10704
[root@kubernetes-02 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kubernetes-03_67d9c4d1-fb75-495a-b628-17ffe90d068b","leaseDurationSeconds":15,"acquireTime":"2021-03-31T06:20:45Z","renewTime":"2021-03-31T06:20:45Z","leaderTransitions":1}'
  creationTimestamp: "2021-03-31T06:17:37Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-scheduler
    operation: Update
    time: "2021-03-31T06:17:37Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "1331"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 40dc57ef-af3a-4e83-98c7-cdcd79d10704
[root@kubernetes-02 ~]#

7、Deploying Worker Nodes

The kubernetes worker nodes run the following components:

  • docker
  • kubelet
  • kube-proxy
  • kube-nginx

Note: unless otherwise specified, all operations in this subsection are performed on the Kubernetes-01 node. Remember that the Calico network and Docker must be deployed before starting on the worker components.

Install dependencies

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "yum install -y epel-release" &
    ssh root@${node_ip} "yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git" &
  done

7.1、APIServer High Availability

This subsection walks through using nginx's layer-4 transparent proxying so that Kubernetes worker-node components have highly available access to the kube-apiserver cluster.

Note: unless otherwise specified, all operations in this subsection are performed on the Kubernetes-01 node.

The nginx-proxy-based kube-apiserver HA scheme

  • the control-plane kube-controller-manager and kube-scheduler run as multiple instances, each connecting to its local kube-apiserver, so as long as one instance is healthy they remain available;
  • pods inside the cluster reach kube-apiserver via the kubernetes service domain; kube-dns resolves it to the IPs of multiple kube-apiserver nodes, which is also highly available;
  • every node runs an nginx process whose backend is the set of apiserver instances; nginx performs health checking and load balancing across them;
  • kubelet and kube-proxy access kube-apiserver through the local nginx (listening on 127.0.0.1), which makes kube-apiserver highly available to them.

Download and build Nginx

Nginx download page: http://nginx.org/

Download the source (we install Nginx 1.18.0 here):

cd /opt/k8s/work && wget http://nginx.org/download/nginx-1.18.0.tar.gz && tar -xzvf nginx-1.18.0.tar.gz

Configure the build:

cd /opt/k8s/work/nginx-1.18.0 && mkdir nginx-prefix && yum install -y gcc make
./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
  • --with-stream: enable layer-4 transparent forwarding (TCP proxy);
  • --without-xxx: disable all other modules, minimizing the dynamic-link dependencies of the resulting binary;

Build and install:

cd /opt/k8s/work/nginx-1.18.0 && make && make install

Verify the built nginx:

[root@kubernetes-01 nginx-1.18.0]# ./nginx-prefix/sbin/nginx -v
nginx version: nginx/1.18.0
[root@kubernetes-01 nginx-1.18.0]# 
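
To confirm the minimal-dependency claim from the configure step (a minimal sketch; ldd ships with glibc on CentOS):

# The stripped-down build should link against only a handful of base libraries
ldd /opt/k8s/work/nginx-1.18.0/nginx-prefix/sbin/nginx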

Install and deploy Nginx

Create the directory structure:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
[root@kubernetes-01 work]#

Copy the binary:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
    scp /opt/k8s/work/nginx-1.18.0/nginx-prefix/sbin/nginx  root@${node_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
    ssh root@${node_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
>     scp /opt/k8s/work/nginx-1.18.0/nginx-prefix/sbin/nginx  root@${node_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
>     ssh root@${node_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
nginx                                                                                                                                                                       100% 1740KB 132.2MB/s   00:00    
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
nginx                                                                                                                                                                       100% 1740KB 105.4MB/s   00:00    
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
nginx                                                                                                                                                                       100% 1740KB 106.5MB/s   00:00    
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
[root@kubernetes-01 work]#
  • Note that the binary is renamed to kube-nginx here;

Configure nginx, enabling layer-4 transparent forwarding:

cd /opt/k8s/work
cat > kube-nginx.conf << \EOF
worker_processes 1;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.20.0.20:6443        max_fails=3 fail_timeout=30s;
        server 172.20.0.21:6443        max_fails=3 fail_timeout=30s;
        server 172.20.0.22:6443        max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
  • The server list under upstream backend holds the kube-apiserver node IPs and must be adjusted to your environment; see the sketch below for generating it from environment.sh;
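
If you would rather not hardcode the apiserver list, the server lines can be generated from the NODE_IPS array in environment.sh (a minimal sketch, assuming the environment.sh used throughout this document):

source /opt/k8s/bin/environment.sh
# Print one upstream "server" line per kube-apiserver node
for node_ip in ${NODE_IPS[@]}
  do
    echo "        server ${node_ip}:6443        max_fails=3 fail_timeout=30s;"
  done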

Distribute the configuration file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-nginx.conf  root@${node_ip}:/opt/k8s/kube-nginx/conf/kube-nginx.conf
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-nginx.conf  root@${node_ip}:/opt/k8s/kube-nginx/conf/kube-nginx.conf
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-nginx.conf                                                                                                                                                             100%  461   548.3KB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-nginx.conf                                                                                                                                                             100%  461   453.8KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-nginx.conf                                                                                                                                                             100%  461   230.5KB/s   00:00    
[root@kubernetes-01 work]#

Create the systemd unit file and start the service

Create the kube-nginx systemd unit file:

cd /opt/k8s/work
cat > kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Distribute the systemd unit file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-nginx.service  root@${node_ip}:/etc/systemd/system/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp kube-nginx.service  root@${node_ip}:/etc/systemd/system/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kube-nginx.service                                                                                                                                                          100%  624   870.6KB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kube-nginx.service                                                                                                                                                          100%  624   601.9KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kube-nginx.service                                                                                                                                                          100%  624   518.2KB/s   00:00    
[root@kubernetes-01 work]#

Start the kube-nginx service:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-nginx.service to /etc/systemd/system/kube-nginx.service.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-nginx.service to /etc/systemd/system/kube-nginx.service.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-nginx.service to /etc/systemd/system/kube-nginx.service.
[root@kubernetes-01 work]#

Check that the kube-nginx service is running:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-nginx |grep 'Active:'"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl status kube-nginx |grep 'Active:'"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:50:37 CST; 17s ago
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:50:37 CST; 17s ago
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:50:37 CST; 16s ago
[root@kubernetes-01 work]#

Make sure the status is active (running); if not, run journalctl -u kube-nginx to find out why.
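
As an additional sanity check, you can hit the local proxy port on each node. Since anonymous access to kube-apiserver is disabled in this deployment, an HTTP 401 response body (rather than a connection error) confirms that nginx is forwarding to a live apiserver. A sketch, following the loop pattern used above:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # Expect a 401 Unauthorized body, which proves the proxy path works
    ssh root@${node_ip} "curl -s -k https://127.0.0.1:8443/healthz; echo"
  done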

7.2、Deploying Docker

Docker runs and manages the containers; kubelet talks to it through its built-in dockershim, which implements the Container Runtime Interface (CRI).

Note: unless otherwise specified, all operations in this section are performed on the Kubernetes-01 node, with files and commands then distributed remotely. If any Docker version was installed previously, it must be removed completely first. (As noted in section 7, this document deploys Calico in section 8, so the original guide's requirement to deploy Flannel before Docker does not apply here.)

Commands to remove an existing Docker installation:

# List the installed Docker packages
rpm -qa | grep docker

# Remove the packages reported above (replace *** with their names)
yum remove -y *** 

# Reboot the host
reboot

Install dependency packages

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "yum install -y epel-release" &
    ssh root@${node_ip} "yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git" &
  done

Download the Docker binaries:

Docker download page: https://download.docker.com/linux/static/stable/x86_64/

cd /opt/k8s/work && wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.8.tgz
tar -xvf docker-19.03.8.tgz

Distribute the Docker binaries to all worker nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp docker/*  root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp docker/*  root@${node_ip}:/opt/k8s/bin/
>     ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
containerd                                                                    100%   33MB 141.3MB/s   00:00    
containerd-shim                                                               100% 5977KB 135.4MB/s   00:00    
ctr                                                                           100%   18MB 141.5MB/s   00:00    
docker                                                                        100%   63MB 141.4MB/s   00:00    
dockerd                                                                       100%   69MB 143.9MB/s   00:00    
docker-init                                                                   100%  746KB 125.5MB/s   00:00    
docker-proxy                                                                  100% 2810KB 137.9MB/s   00:00    
runc                                                                          100% 9403KB 144.1MB/s   00:00    
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
containerd                                                                    100%   33MB 131.6MB/s   00:00    
containerd-shim                                                               100% 5977KB 129.5MB/s   00:00    
ctr                                                                           100%   18MB 109.1MB/s   00:00    
docker                                                                        100%   63MB 110.3MB/s   00:00    
dockerd                                                                       100%   69MB 124.5MB/s   00:00    
docker-init                                                                   100%  746KB 107.5MB/s   00:00    
docker-proxy                                                                  100% 2810KB 124.3MB/s   00:00    
runc                                                                          100% 9403KB 105.8MB/s   00:00    
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
containerd                                                                    100%   33MB 123.5MB/s   00:00    
containerd-shim                                                               100% 5977KB 128.2MB/s   00:00    
ctr                                                                           100%   18MB 129.6MB/s   00:00    
docker                                                                        100%   63MB 134.3MB/s   00:00    
dockerd                                                                       100%   69MB 127.4MB/s   00:00    
docker-init                                                                   100%  746KB 105.5MB/s   00:00    
docker-proxy                                                                  100% 2810KB 119.4MB/s   00:00    
runc                                                                          100% 9403KB 123.9MB/s   00:00    
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
[root@kubernetes-01 work]# 

Create and distribute the systemd unit file

cd /opt/k8s/work
cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=##DOCKER_DIR##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
  • The here-doc delimiter is quoted (<<"EOF"), so bash does not expand variables such as $DOCKER_NETWORK_OPTIONS inside the document (systemd substitutes them at runtime);
  • dockerd invokes other docker binaries at runtime, such as docker-proxy, so the directory containing them must be added to the PATH environment variable;
  • When flanneld is used, it writes network configuration to /run/flannel/docker at startup, and dockerd reads the DOCKER_NETWORK_OPTIONS variable from that file to set the docker0 bridge subnet; the leading - in EnvironmentFile=- makes the file optional, so with Calico (as in this document) dockerd starts fine without it. If several EnvironmentFile options are given, /run/flannel/docker must come last so docker0 uses the bip generated by flanneld;
  • docker must run as the root user;
  • Since version 1.13, Docker may set the default policy of the iptables FORWARD chain to DROP, which breaks pinging Pod IPs on other nodes. If you hit this, set the policy back to ACCEPT manually (sudo iptables -P FORWARD ACCEPT) and add /sbin/iptables -P FORWARD ACCEPT to /etc/rc.local so a reboot does not reset the policy to DROP; a sketch follows this list.
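
The FORWARD policy can be checked and persisted on all nodes in one pass; a sketch (the grep guard avoids appending the line twice, and on CentOS 7 /etc/rc.d/rc.local must be executable for it to run at boot):

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # Show the current default policy of the FORWARD chain
    ssh root@${node_ip} "iptables -S FORWARD | head -1"
    # Force ACCEPT now and persist it across reboots
    ssh root@${node_ip} "iptables -P FORWARD ACCEPT; grep -q 'FORWARD ACCEPT' /etc/rc.local || echo '/sbin/iptables -P FORWARD ACCEPT' >> /etc/rc.local; chmod +x /etc/rc.d/rc.local"
  done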

Distribute the systemd unit file to all worker machines:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp docker.service root@${node_ip}:/etc/systemd/system/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     scp docker.service root@${node_ip}:/etc/systemd/system/
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
docker.service                                                                                                                                                              100%  487   675.4KB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
docker.service                                                                                                                                                              100%  487   512.2KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
docker.service                                                                                                                                                              100%  487   485.1KB/s   00:00    
[root@kubernetes-01 work]#

Create and distribute the docker configuration file

Use domestic registry mirrors to speed up image pulls, and raise the download concurrency (dockerd must be restarted for this to take effect):

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > docker-daemon.json <<EOF
{
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
    "insecure-registries": ["docker02:35000"],
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "max-concurrent-uploads": 10,
    "debug": true,
    "data-root": "${DOCKER_DIR}/data",
    "exec-root": "${DOCKER_DIR}/exec",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
}
EOF
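
Before distributing the file, a quick syntax check is cheap insurance; a sketch using jq, which was installed with the dependency packages earlier:

cd /opt/k8s/work
# jq exits non-zero on malformed JSON
jq . docker-daemon.json >/dev/null && echo "docker-daemon.json: valid JSON"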

Distribute the docker configuration file to all worker nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p  /etc/docker/ ${DOCKER_DIR}/{data,exec}"
    scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p  /etc/docker/ ${DOCKER_DIR}/{data,exec}"
>     scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
docker-daemon.json                                                                                                                                                          100%  417   272.7KB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
docker-daemon.json                                                                                                                                                          100%  417   305.8KB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
docker-daemon.json                                                                                                                                                          100%  417   226.0KB/s   00:00    

Start the docker service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /etc/systemd/system/docker.service.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /etc/systemd/system/docker.service.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /etc/systemd/system/docker.service.

Check the service status

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status docker|grep Active"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl status docker|grep Active"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:29:30 CST; 4s ago
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:29:30 CST; 4s ago
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 14:29:30 CST; 4s ago
[root@kubernetes-01 work]#

Make sure the status is active (running); if not, run journalctl -u docker to find out why.

Inspect docker's status information:

[root@kubernetes-01 work]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.108-1.el7.elrepo.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 62.82GiB
 Name: kubernetes-01
 ID: 4FKH:ERXJ:64V7:WMHA:PBUP:G27H:L3BI:ADXT:2RAZ:2TC7:UEAX:ZX4N
 Docker Root Dir: /data/k8s/docker/data
 Debug Mode: true
  File Descriptors: 22
  Goroutines: 40
  System Time: 2021-03-31T14:31:05.770114236+08:00
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  docker02:35000
  127.0.0.0/8
 Registry Mirrors:
  https://docker.mirrors.ustc.edu.cn/
  https://hub-mirror.c.163.com/
 Live Restore Enabled: true
 Product License: Community Engine

[root@kubernetes-01 work]# 

7.3、Kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver to manage Pod containers and to execute interactive commands such as exec, run, and logs. On startup, kubelet automatically registers its node with kube-apiserver, and the built-in cadvisor collects and monitors the node's resource usage. For security, this deployment closes kubelet's insecure http port and authenticates and authorizes every request, rejecting anonymous and unauthorized access (requests from apiserver, heapster, etc. included).

Note: unless otherwise specified, all operations in this section are performed on the Kubernetes-01 node.

Create the kubelet bootstrap kubeconfig files

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"

    # Create a bootstrap token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)

    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done
  • Only the token is written into the kubeconfig; once bootstrapping finishes, kube-controller-manager issues the client and server certificates for kubelet;

List the tokens kubeadm created for the nodes:

[root@kubernetes-01 work]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
5bgx89.2yhcvzr6eh443q85   23h         2021-04-01T14:55:30+08:00   authentication,signing   kubelet-bootstrap-token                                    system:bootstrappers:kubernetes-02
qkrmag.qhzu0kfzevmyaz44   23h         2021-04-01T14:55:29+08:00   authentication,signing   kubelet-bootstrap-token                                    system:bootstrappers:kubernetes-01
sgjlb2.vm7s1pv7mdzho6ze   23h         2021-04-01T14:55:30+08:00   authentication,signing   kubelet-bootstrap-token                                    system:bootstrappers:kubernetes-03
[root@kubernetes-01 work]# 
  • Tokens are valid for one day; after they expire they can no longer be used to bootstrap a kubelet, and kube-controller-manager's tokencleaner removes them;
  • When kube-apiserver accepts a kubelet bootstrap token, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers; a ClusterRoleBinding for this group is created later. To double-check a generated kubeconfig, see the sketch below;
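
To inspect what actually landed in one of the generated kubeconfig files, you can print its embedded token (a sketch using kubernetes-01's file as an example; --raw is needed because kubectl config view redacts tokens by default):

cd /opt/k8s/work
kubectl config view --raw \
  --kubeconfig=kubelet-bootstrap-kubernetes-01.kubeconfig \
  -o jsonpath='{.users[0].user.token}'; echo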

Distribute the bootstrap kubeconfig files to all worker nodes

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_name in ${NODE_NAMES[@]}
>   do
>     echo ">>> ${node_name}"
>     scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
>   done
>>> kubernetes-01
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
kubelet-bootstrap-kubernetes-01.kubeconfig                                                                                                                                  100% 2106     2.1MB/s   00:00    
>>> kubernetes-02
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
kubelet-bootstrap-kubernetes-02.kubeconfig                                                                                                                                  100% 2106     1.6MB/s   00:00    
>>> kubernetes-03
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
kubelet-bootstrap-kubernetes-03.kubeconfig                                                                                                                                  100% 2106   865.2KB/s   00:00    
[root@kubernetes-01 work]#

Create and distribute the kubelet configuration file

Since v1.10, some kubelet parameters must be set in a configuration file; kubelet --help flags them:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag

Create the kubelet configuration file template (for the full list of options, see the comments in the KubeletConfiguration source code):

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
  • address: the address kubelet's secure port (https, 10250) listens on; it must not be 127.0.0.1, or kube-apiserver, heapster, etc. cannot call the kubelet API;
  • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
  • authentication.anonymous.enabled: set to false, forbidding anonymous access to port 10250;
  • authentication.x509.clientCAFile: the CA that signed the client certificates, enabling HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication; requests that pass neither x509 certificate nor webhook authentication (whether from kube-apiserver or any other client) are rejected with Unauthorized;
  • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group may operate on a resource (RBAC);
  • rotateCertificates / serverTLSBootstrap: rotate the client and server certificates automatically; the certificate lifetime is governed by kube-controller-manager's --experimental-cluster-signing-duration flag;
  • cgroupDriver must match Docker's cgroup driver (both cgroupfs here, as docker info above confirms);
  • kubelet must run under the root account;

Create and distribute a kubelet configuration file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do 
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do 
>     echo ">>> ${node_ip}"
>     sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
>     scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
kubelet-config-172.20.0.20.yaml.template                                                                                                                                    100% 1534     2.2MB/s   00:00    
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
kubelet-config-172.20.0.21.yaml.template                                                                                                                                    100% 1534     1.9MB/s   00:00    
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
kubelet-config-172.20.0.22.yaml.template                                                                                                                                    100% 1534     1.3MB/s   00:00    
[root@kubernetes-01 work]# 

Create the kubelet systemd unit file template:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --fail-swap-on=false \\
  --cgroup-driver=cgroupfs \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --network-plugin=cni \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
  • If --hostname-override is set, kube-proxy must be set to the same value, or kube-proxy will fail to find its Node;
  • --bootstrap-kubeconfig: points at the bootstrap kubeconfig file; kubelet uses the username and token in it to send its TLS Bootstrapping request to kube-apiserver;
  • After K8S approves kubelet's CSR, the certificate and private key are created in the --cert-dir directory and then referenced from the --kubeconfig file;
  • --pod-infra-container-image (not set in the unit above) can override the pause image; avoid Red Hat's pod-infrastructure:latest image, which cannot reap zombie processes in containers;

Create and distribute a kubelet systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_name in ${NODE_NAMES[@]}
>   do 
>     echo ">>> ${node_name}"
>     sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
>     scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
>   done
>>> kubernetes-01
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
kubelet-kubernetes-01.service                                                                                                                                               100%  825   987.6KB/s   00:00    
>>> kubernetes-02
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
kubelet-kubernetes-02.service                                                                                                                                               100%  825   747.1KB/s   00:00    
>>> kubernetes-03
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
kubelet-kubernetes-03.service                                                                                                                                               100%  825   669.0KB/s   00:00    
[root@kubernetes-01 work]# 

Grant kube-apiserver access to the kubelet API

When kubectl exec, run, logs, and similar commands are executed, apiserver forwards the request to kubelet's https port. The RBAC rule below authorizes the username in apiserver's certificate (kubernetes.pem, CN: kubernetes-master) to access the kubelet API:

[root@kubernetes-01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
[root@kubernetes-01 work]# 

Bootstrap Token Auth and granting permissions

On startup, kubelet checks whether the file given by --kubeconfig exists; if it does not, kubelet uses the kubeconfig given by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

When kube-apiserver receives the CSR, it validates the embedded token; on success it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.

The original guide notes that, by default, this user and group lack permission to create CSRs, so kubelet fails to start with errors such as:

sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'

I did not run into this problem here, but I still recommend running the command below. The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

[root@kubernetes-01 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@kubernetes-01 work]# 

Auto-approving CSRs and generating the kubelet client certificate

After kubelet creates a CSR, the request must be approved; there are two ways:

  • kube-controller-manager approves it automatically;
  • approve it manually with kubectl certificate approve;

Once the CSR is approved, kubelet asks kube-controller-manager to issue the client certificate; kube-controller-manager's csrapproving controller uses the SubjectAccessReview API to check whether the requesting kubelet (group system:bootstrappers) has the corresponding permission.

Create three ClusterRoleBindings that grant the groups system:bootstrappers and system:nodes the permissions to approve client, renew client, and renew server certificates respectively (server CSRs are approved manually, see below):

cd /opt/k8s/work
cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF

[root@kubernetes-01 work]# kubectl apply -f csr-crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/auto-approve-csrs-for-group created
clusterrolebinding.rbac.authorization.k8s.io/node-client-cert-renewal created
clusterrole.rbac.authorization.k8s.io/approve-node-server-renewal-csr created
clusterrolebinding.rbac.authorization.k8s.io/node-server-cert-renewal created
[root@kubernetes-01 work]# 
  • auto-approve-csrs-for-group: auto-approves a node's first CSR; note that the first CSR is requested with group system:bootstrappers;
  • node-client-cert-renewal: auto-approves renewal of a node's expiring client certificates; the auto-generated certificates carry group system:nodes;
  • node-server-cert-renewal: auto-approves renewal of a node's expiring server certificates; the auto-generated certificates carry group system:nodes;
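
To watch the bootstrap flow live while the kubelets come up in the next step, you can leave a watch running in a second terminal (a sketch):

# Stream CSR events as they are created and approved
kubectl get csr -w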

Start the kubelet service

source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    ssh root@${node_name} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${node_name} "/usr/sbin/swapoff -a"
    ssh root@${node_name} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_name in ${NODE_NAMES[@]}
>   do
>     echo ">>> ${node_name}"
>     ssh root@${node_name} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
>     ssh root@${node_name} "/usr/sbin/swapoff -a"
>     ssh root@${node_name} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
>   done
>>> kubernetes-01
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
>>> kubernetes-02
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
>>> kubernetes-03
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@kubernetes-01 work]# 
  • The working directory must be created before starting the service;
  • swap must be turned off, or kubelet will fail to start;

After startup, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates kubelet's TLS client certificate and private key, and the file referenced by --kubeconfig is written.

Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file flags, or it will not issue certificates and keys for TLS Bootstrap.

Check the kubelet state

After a short wait, the client CSRs from all three nodes have been approved automatically:

[root@kubernetes-01 work]# kubectl get csr 
NAME        AGE   SIGNERNAME                                    REQUESTOR                   CONDITION
csr-29nmm   25s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:sgjlb2     Approved,Issued
csr-2xgp7   25s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:5bgx89     Approved,Issued
csr-99shf   18s   kubernetes.io/kubelet-serving                 system:node:kubernetes-02   Pending
csr-fxvmw   17s   kubernetes.io/kubelet-serving                 system:node:kubernetes-03   Pending
csr-ntjdd   25s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:qkrmag     Approved,Issued
csr-v42ns   12s   kubernetes.io/kubelet-serving                 system:node:kubernetes-01   Pending
[root@kubernetes-01 work]#
  • The Pending CSRs are for kubelet server certificates, which must be approved manually; see below.

All nodes have registered (the NotReady status is expected and clears once the network plugin is installed later):

[root@kubernetes-01 work]# kubectl get node
NAME            STATUS     ROLES    AGE   VERSION
kubernetes-01   NotReady   <none>   12s   v1.19.8
kubernetes-02   NotReady   <none>   17s   v1.19.8
kubernetes-03   NotReady   <none>   17s   v1.19.8
[root@kubernetes-01 work]#

kube-controller-manager generated a kubeconfig file and key pair for each node:

[root@kubernetes-01 work]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2246 Mar 31 14:59 /etc/kubernetes/kubelet.kubeconfig
[root@kubernetes-01 work]#
  • No kubelet server certificate was generated automatically;

Manually approve the server cert CSRs

For security reasons, the CSR approving controllers do not auto-approve kubelet server certificate signing requests; they must be approved manually:

kubectl get csr

# Approve manually
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

# The server certificate has now been generated
[root@kubernetes-01 work]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1232 Mar 31 14:59 /etc/kubernetes/cert/kubelet-client-2021-03-31-14-59-43.pem
lrwxrwxrwx 1 root root   59 Mar 31 14:59 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2021-03-31-14-59-43.pem
-rw------- 1 root root 1277 Mar 31 15:00 /etc/kubernetes/cert/kubelet-server-2021-03-31-15-00-41.pem
lrwxrwxrwx 1 root root   59 Mar 31 15:00 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2021-03-31-15-00-41.pem
[root@kubernetes-01 work]# 
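
Optionally, confirm that kubelet is now serving with the freshly signed certificate; a sketch using openssl against the node IP used above:

# Print the subject and validity window of the serving certificate
echo | openssl s_client -connect 172.20.0.20:10250 2>/dev/null \
  | openssl x509 -noout -subject -dates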

Kubelet API authentication and authorization

kubelet is configured with the following authentication parameters:

  • authentication.anonymous.enabled: set to false, forbidding anonymous access to port 10250;
  • authentication.x509.clientCAFile: the CA that signed the client certificates, enabling HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;

and with the following authorization parameter:

  • authorization.mode=Webhook: enables RBAC authorization;

When kubelet receives a request, it authenticates the client certificate against clientCAFile or checks whether the bearer token is valid. If neither succeeds, the request is rejected with Unauthorized, as both of the following demonstrate:

curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.20.0.20:10250/metrics

curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.20.0.20:10250/metrics

After authentication succeeds, kubelet sends a SubjectAccessReview to kube-apiserver to check whether the user/group behind the certificate or token has permission (RBAC) to operate on the resource.

Certificate authentication and authorization

# A certificate with insufficient permissions;
[root@kubernetes-01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.20.0.20:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

# The fully privileged admin certificate created when the kubectl CLI was deployed;
[root@kubernetes-01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.20.0.20:10250/metrics|head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
[root@kubernetes-01 work]#
  • The --cacert, --cert, and --key values must be file paths; a relative path such as ./admin.pem must keep the ./ prefix, otherwise the request returns 401 Unauthorized;

Bearer token authentication and authorization

Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it can call the kubelet API:

kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
# Extract the ServiceAccount's bearer token from its auto-created secret
TOKEN=$(kubectl describe secret $(kubectl get secrets | grep kubelet-api-test | awk '{print $1}') | grep -E '^token' | awk '{print $2}')
curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.20.0.20:10250/metrics | head

cadvisor and metrics

cadvisor is embedded in the kubelet binary; it collects the resource usage (CPU, memory, disk, network) of every container on its node.

In a browser, https://172.20.0.20:10250/metrics and https://172.20.0.20:10250/metrics/cadvisor return the kubelet and cadvisor metrics respectively; a command-line sketch follows.
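
From the command line, the cadvisor endpoint can be queried the same way as /metrics, reusing the admin certificate from the earlier example (a sketch):

curl -s --cacert /etc/kubernetes/cert/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://172.20.0.20:10250/metrics/cadvisor | head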

Note:

  • kubelet-config.yaml sets authentication.anonymous.enabled to false, so the 10250 https service rejects anonymous (certificate-less) access;
  • To use a browser, first create and import client certificates (as when accessing the kube-apiserver secure port from a browser), then open the 10250 URLs above;

References

  1. Kubelet authentication and authorization: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/

7.4、Kube-Proxy

kube-proxy runs on all worker nodes. It watches kube-apiserver for changes to services and endpoints and creates routing rules that provide service IPs and load balancing. This section walks through deploying kube-proxy in ipvs mode.

Note: unless otherwise specified, all operations in this section are performed on the Kubernetes-01 node, with files and commands then distributed remotely.

Create the kube-proxy certificate

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "z0ukun"
    }
  ]
}
EOF
  • CN: sets the certificate's User to system:kube-proxy;
  • The predefined ClusterRoleBinding system:node-proxier binds user system:kube-proxy to ClusterRole system:node-proxier, which grants the kube-apiserver proxy-related API permissions kube-proxy needs; the certificate is used by kube-proxy only as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*

[root@kubernetes-01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
>   -ca-key=/opt/k8s/work/ca-key.pem \
>   -config=/opt/k8s/work/ca-config.json \
>   -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
2021/03/31 15:04:33 [INFO] generate received request
2021/03/31 15:04:33 [INFO] received CSR
2021/03/31 15:04:33 [INFO] generating key: rsa-2048
2021/03/31 15:04:33 [INFO] encoded CSR
2021/03/31 15:04:33 [INFO] signed certificate with serial number 616010196370462377273421189169206860799678306037
2021/03/31 15:04:33 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@kubernetes-01 work]# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
[root@kubernetes-01 work]# 

Create and distribute the kubeconfig file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_name in ${NODE_NAMES[@]}
>   do
>     echo ">>> ${node_name}"
>     scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
>   done
>>> kubernetes-01
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
kube-proxy.kubeconfig                                                                                                                                                       100% 6225     7.3MB/s   00:00    
>>> kubernetes-02
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
kube-proxy.kubeconfig                                                                                                                                                       100% 6225     4.7MB/s   00:00    
>>> kubernetes-03
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
kube-proxy.kubeconfig                                                                                                                                                       100% 6225     4.8MB/s   00:00    
[root@kubernetes-01 work]#

Create the kube-proxy configuration file

Since v1.10, some kube-proxy parameters can be set in a configuration file; you can generate one with the --write-config-to option, or consult the comments in the KubeProxyConfiguration source code.

Create the kube-proxy config file template:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
  • bindAddress: the listen address; clientConnection.kubeconfig: the kubeconfig used to connect to apiserver;
  • clusterCIDR: kube-proxy uses this to distinguish traffic inside the cluster from traffic outside it; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
  • hostnameOverride: must match kubelet's value, or kube-proxy will not find its Node after starting and will create no ipvs rules;
  • mode: use ipvs mode (the required kernel modules must be available; see the sketch below);
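
ipvs mode depends on a handful of kernel modules; the startup loop later in this section only loads ip_vs_rr, so it does no harm to load and verify the full set beforehand. A sketch (on the 5.4 kernel used here, nf_conntrack has replaced the older nf_conntrack_ipv4):

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # Load the ipvs modules and confirm they are present
    ssh root@${node_ip} "for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe \$m; done; lsmod | grep -E 'ip_vs|nf_conntrack' | head -5"
  done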

Create and distribute a kube-proxy configuration file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do 
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
    scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for (( i=0; i < 3; i++ ))
>   do 
>     echo ">>> ${NODE_NAMES[i]}"
>     sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
>     scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
>   done
>>> kubernetes-01
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
kube-proxy-config-kubernetes-01.yaml.template                                                                                                                               100%  453   561.0KB/s   00:00    
>>> kubernetes-02
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
kube-proxy-config-kubernetes-02.yaml.template                                                                                                                               100%  453   420.7KB/s   00:00    
>>> kubernetes-03
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
kube-proxy-config-kubernetes-03.yaml.template                                                                                                                               100%  453   494.7KB/s   00:00    
[root@kubernetes-01 work]# 

Create and distribute the kube-proxy systemd unit file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Distribute the kube-proxy systemd unit file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_name in ${NODE_NAMES[@]}
>   do 
>     echo ">>> ${node_name}"
>     scp kube-proxy.service root@${node_name}:/etc/systemd/system/
>   done
>>> kubernetes-01
Warning: Permanently added 'kubernetes-01,172.20.0.20' (ECDSA) to the list of known hosts.
kube-proxy.service                                                                                                                                                          100%  393   368.6KB/s   00:00    
>>> kubernetes-02
Warning: Permanently added 'kubernetes-02,172.20.0.21' (ECDSA) to the list of known hosts.
kube-proxy.service                                                                                                                                                          100%  393   426.3KB/s   00:00    
>>> kubernetes-03
Warning: Permanently added 'kubernetes-03,172.20.0.22' (ECDSA) to the list of known hosts.
kube-proxy.service                                                                                                                                                          100%  393   322.4KB/s   00:00    
[root@kubernetes-01 work]#

Start the kube-proxy service

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "modprobe ip_vs_rr"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
>     ssh root@${node_ip} "modprobe ip_vs_rr"
>     ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /etc/systemd/system/kube-proxy.service.
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /etc/systemd/system/kube-proxy.service.
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /etc/systemd/system/kube-proxy.service.
[root@kubernetes-01 work]#

Check the startup result

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 15:06:00 CST; 13s ago
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 15:06:01 CST; 12s ago
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
   Active: active (running) since Wed 2021-03-31 15:06:01 CST; 12s ago
[root@kubernetes-01 work]# 

Make sure the status is active (running); if not, run journalctl -u kube-proxy to find out why.

Check the listening ports

[root@kubernetes-01 work]# sudo netstat -lnpt|grep kube-prox
tcp        0      0 172.20.0.20:10256       0.0.0.0:*               LISTEN      9366/kube-proxy     
tcp        0      0 172.20.0.20:10249       0.0.0.0:*               LISTEN      9366/kube-proxy     
[root@kubernetes-01 work]# 
  • 10249: the http prometheus metrics port;
  • 10256: the http healthz port; both can be probed directly, see the sketch below;
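
Both ports speak plain http and can be probed directly; a sketch against kubernetes-01:

# Prometheus metrics
curl -s http://172.20.0.20:10249/metrics | head
# Health check endpoint
curl -s http://172.20.0.20:10256/healthz; echo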

Inspect the ipvs routing rules

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 172.20.0.20:6443             Masq    1      0          0         
  -> 172.20.0.21:6443             Masq    1      0          0         
  -> 172.20.0.22:6443             Masq    1      0          0         
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 172.20.0.20:6443             Masq    1      0          0         
  -> 172.20.0.21:6443             Masq    1      0          0         
  -> 172.20.0.22:6443             Masq    1      0          0         
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 172.20.0.20:6443             Masq    1      0          0         
  -> 172.20.0.21:6443             Masq    1      0          0         
  -> 172.20.0.22:6443             Masq    1      0          0         
[root@kubernetes-01 work]#

As shown, all HTTPS requests to the K8S kubernetes Service are forwarded to port 6443 on the kube-apiserver nodes.
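
To inspect just that virtual server and exercise the VIP path directly, a small sketch (10.254.0.1:443 is the kubernetes Service VIP shown above):

# query a single ipvs virtual server instead of the full table
/usr/sbin/ipvsadm -ln -t 10.254.0.1:443
# -k skips certificate verification; even a 401 response proves the VIP
# reaches an apiserver, since anonymous access is disabled in this cluster
curl -sk https://10.254.0.1/healthz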

8. Calico Network Configuration

Here we deploy Calico from a YAML manifest. First download the manifest from https://docs.projectcalico.org/manifests/calico.yaml, then edit it as follows:

[root@kubernetes-01 work]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  184k  100  184k    0     0   132k      0  0:00:01  0:00:01 --:--:--  133k
[root@kubernetes-01 work]# cp calico.yaml calico.yaml.orig
[root@kubernetes-01 work]# vi calico.yaml
[root@kubernetes-01 work]# diff calico.yaml.orig calico.yaml
3673a3674,3677
>             - name: CALICO_IPV4POOL_CIDR
                # define the Calico CIDR network
>               value: "172.30.0.0/16"
>             - name: IP_AUTODETECTION_METHOD
                # define the local network interface
>               value: "interface=eth.*"
[root@kubernetes-01 work]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@kubernetes-01 work]# mkdir -p /opt/cni/bin && cp /opt/k8s/bin/* /opt/cni/bin/
  • Change the Pod network CIDR to 172.30.0.0/16;
  • Calico auto-detects the NIC used for node interconnection; if the host has multiple NICs, configure a regular expression for the interface name, such as eth.* above (adjust it to your server's interface naming; see the check sketch below);
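
To check in advance which local interfaces the interface= regex will match, a quick sketch you can run on each node:

# list interface names and keep only those matching the autodetection regex
ip -o link show | awk -F': ' '{print $2}' | grep -E '^eth.*'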

After the manifest is applied, Calico runs as a DaemonSet on all K8S nodes:

[root@kubernetes-01 work]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
calico-kube-controllers-69496d8b75-2mvdw   1/1     Running   0          21m   172.30.31.65   kubernetes-01   <none>           <none>
calico-node-mbtf8                          1/1     Running   0          21m   172.20.0.20    kubernetes-01   <none>           <none>
calico-node-r442b                          1/1     Running   0          21m   172.20.0.22    kubernetes-03   <none>           <none>
calico-node-zv6ms                          1/1     Running   0          21m   172.20.0.21    kubernetes-02   <none>           <none>
[root@kubernetes-01 work]# 
[root@kubernetes-01 work]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:dc:33:23:15 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.20/24 brd 172.20.0.255 scope global noprefixroute dynamic eth0
       valid_lft 85351sec preferred_lft 85351sec
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:60:98:a9:d4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 16:3a:4e:b9:26:1c brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 2a:23:9d:48:03:e5 brd ff:ff:ff:ff:ff:ff
    inet 10.254.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6: cali4daa57f953d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 172.30.31.64/32 scope global tunl0
       valid_lft forever preferred_lft forever
[root@kubernetes-01 work]#

Note: the document we referenced earlier, https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/06-6.calico.md, contains a bug: the CNI binary directory must not be changed. The kubelet periodically reports node status to the APIServer for use in scheduling, but Calico still installs its CNI plugins to the default /opt/cni/bin; if you change the directory, calico-node fails to start and the kubelet cannot report a healthy node status.

The error looks like this: Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate

If you have already changed the directory, you can manually create /opt/cni/bin with mkdir and copy the files from /opt/k8s/bin into it; after a short wait, Calico will start automatically.
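
If you hit the FailedScheduling error above, a quick way to confirm the cause is to look at the node taints; a sketch:

# a node whose CNI is broken is NotReady and typically carries the
# node.kubernetes.io/not-ready taint until Calico starts successfully
kubectl describe node kubernetes-01 | grep -i taints
kubectl get nodes -o wide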

9. Cluster Verification

This section verifies that the K8S cluster is working properly.

Note: unless otherwise specified, all operations in this section are performed on the Kubernetes-01 node, which then distributes files and runs commands remotely.

Check node status

[root@kubernetes-01 work]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
kubernetes-01   Ready    <none>   32m   v1.19.8
kubernetes-02   Ready    <none>   32m   v1.19.8
kubernetes-03   Ready    <none>   32m   v1.19.8
[root@kubernetes-01 work]# 

All nodes being Ready with version v1.19.8 indicates a healthy cluster.

Create a test manifest

cd /opt/k8s/work
cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

Run the test

[root@kubernetes-01 work]# kubectl create -f nginx-ds.yml
service/nginx-ds created
daemonset.apps/nginx-ds created
[root@kubernetes-01 work]#

Check Pod IP connectivity from each node

[root@kubernetes-01 work]# kubectl get pods  -o wide -l app=nginx-ds
NAME             READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
nginx-ds-hhkgm   1/1     Running   0          4m27s   172.30.31.66     kubernetes-01   <none>           <none>
nginx-ds-knnwf   1/1     Running   0          4m27s   172.30.218.129   kubernetes-02   <none>           <none>
nginx-ds-v7nqb   1/1     Running   0          4m27s   172.30.11.1      kubernetes-03   <none>           <none>
[root@kubernetes-01 work]# 

Ping each of the three Pod IPs above from every Node to verify connectivity:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 172.30.31.66"
    ssh ${node_ip} "ping -c 1 172.30.218.129"
    ssh ${node_ip} "ping -c 1 172.30.11.1"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh ${node_ip} "ping -c 1 172.30.31.66"
>     ssh ${node_ip} "ping -c 1 172.30.218.129"
>     ssh ${node_ip} "ping -c 1 172.30.11.1"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
PING 172.30.31.66 (172.30.31.66) 56(84) bytes of data.
64 bytes from 172.30.31.66: icmp_seq=1 ttl=64 time=0.124 ms

--- 172.30.31.66 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
PING 172.30.218.129 (172.30.218.129) 56(84) bytes of data.
64 bytes from 172.30.218.129: icmp_seq=1 ttl=63 time=0.544 ms

--- 172.30.218.129 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
PING 172.30.11.1 (172.30.11.1) 56(84) bytes of data.
64 bytes from 172.30.11.1: icmp_seq=1 ttl=63 time=0.520 ms

--- 172.30.11.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
PING 172.30.31.66 (172.30.31.66) 56(84) bytes of data.
64 bytes from 172.30.31.66: icmp_seq=1 ttl=63 time=0.338 ms

--- 172.30.31.66 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
PING 172.30.218.129 (172.30.218.129) 56(84) bytes of data.
64 bytes from 172.30.218.129: icmp_seq=1 ttl=64 time=0.071 ms

--- 172.30.218.129 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
PING 172.30.11.1 (172.30.11.1) 56(84) bytes of data.
64 bytes from 172.30.11.1: icmp_seq=1 ttl=63 time=0.451 ms

--- 172.30.11.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
PING 172.30.31.66 (172.30.31.66) 56(84) bytes of data.
64 bytes from 172.30.31.66: icmp_seq=1 ttl=63 time=0.500 ms

--- 172.30.31.66 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
PING 172.30.218.129 (172.30.218.129) 56(84) bytes of data.
64 bytes from 172.30.218.129: icmp_seq=1 ttl=63 time=0.415 ms

--- 172.30.218.129 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
PING 172.30.11.1 (172.30.11.1) 56(84) bytes of data.
64 bytes from 172.30.11.1: icmp_seq=1 ttl=64 time=0.060 ms

--- 172.30.11.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
[root@kubernetes-01 work]# 

Check Service IP and port reachability

[root@kubernetes-01 work]# kubectl get svc -l app=nginx-ds
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-ds   NodePort   10.254.35.136   <none>        80:32311/TCP   5m31s
[root@kubernetes-01 work]#

As shown:

  • Service Cluster IP: 10.254.35.136
  • Service port: 80
  • NodePort: 32311 (see the ipvs cross-check sketch below)
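
As a cross-check, you can also confirm that kube-proxy programmed this NodePort into ipvs; a sketch using the port 32311 allocated above:

# show the ipvs virtual servers for the NodePort and their nginx backends
sudo ipvsadm -ln | grep -B 1 -A 4 32311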

curl the Service IP from every Node:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s 10.254.35.136"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh ${node_ip} "curl -s 10.254.35.136"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@kubernetes-01 work]# 

As expected, each node returns the nginx welcome page.

Check the Service's NodePort reachability

Run on every Node:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s ${node_ip}:32311"
  done

[root@kubernetes-01 work]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 work]# for node_ip in ${NODE_IPS[@]}
>   do
>     echo ">>> ${node_ip}"
>     ssh ${node_ip} "curl -s ${node_ip}:32311"
>   done
>>> 172.20.0.20
Warning: Permanently added '172.20.0.20' (ECDSA) to the list of known hosts.
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
>>> 172.20.0.21
Warning: Permanently added '172.20.0.21' (ECDSA) to the list of known hosts.
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
>>> 172.20.0.22
Warning: Permanently added '172.20.0.22' (ECDSA) to the list of known hosts.
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@kubernetes-01 work]#

As expected, each node returns the nginx welcome page.

10. Cluster Add-on Deployment: CoreDNS

Note: unless otherwise specified, all operations in this section are performed on the Kubernetes-01 node.

Download CoreDNS

CoreDNS deployment repository: https://github.com/coredns/deployment

CoreDNS official website: https://coredns.io/

cd /opt/k8s/work
git clone https://github.com/coredns/deployment.git
mv deployment coredns-deployment

[root@kubernetes-01 work]# git clone https://github.com/coredns/deployment.git
Cloning into 'deployment'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 870 (delta 2), reused 1 (delta 0), pack-reused 859
Receiving objects: 100% (870/870), 244.72 KiB | 101.00 KiB/s, done.
Resolving deltas: 100% (470/470), done.
[root@kubernetes-01 work]# mv deployment coredns-deployment

Deploy CoreDNS

cd /opt/k8s/work/coredns-deployment/kubernetes
source /opt/k8s/bin/environment.sh
./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} | kubectl apply -f -

[root@kubernetes-01 work]# cd /opt/k8s/work/coredns-deployment/kubernetes
[root@kubernetes-01 kubernetes]# source /opt/k8s/bin/environment.sh
[root@kubernetes-01 kubernetes]# ./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@kubernetes-01 kubernetes]#
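
deploy.sh renders the CoreDNS manifest from those two variables. For reference, a minimal sketch of what they resolve to in this cluster (values inferred from the kube-dns Service IP and the search domain seen later in this section; your environment.sh is the source of truth):

export CLUSTER_DNS_SVC_IP="10.254.0.2"
export CLUSTER_DNS_DOMAIN="cluster.local"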

Verify that CoreDNS works

[root@kubernetes-01 kubernetes]# kubectl get all -n kube-system -l k8s-app=kube-dns
NAME                           READY   STATUS              RESTARTS   AGE
pod/coredns-867bfd96bd-fscvd   0/1     ContainerCreating   0          50s

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.254.0.2   <none>        53/UDP,53/TCP,9153/TCP   50s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   0/1     1            0           50s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-867bfd96bd   1         1         0       50s
[root@kubernetes-01 kubernetes]#
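
The pod above is still in ContainerCreating; you can block until it passes its readiness probe before continuing, for example:

# wait up to two minutes for the CoreDNS pod to become Ready
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s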

Create a Deployment:

cd /opt/k8s/work
cat > my-nginx.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

[root@kubernetes-01 kubernetes]# kubectl create -f my-nginx.yaml
deployment.apps/my-nginx created
[root@kubernetes-01 kubernetes]# 

Expose the Deployment to create the my-nginx Service:

[root@kubernetes-01 kubernetes]# kubectl expose deploy my-nginx
service/my-nginx exposed

[root@kubernetes-01 kubernetes]# kubectl get services my-nginx -o wide
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
my-nginx   ClusterIP   10.254.149.187   <none>        80/TCP    30s   run=my-nginx
[root@kubernetes-01 kubernetes]# 

Create another Pod and check whether its /etc/resolv.conf contains the --cluster-dns and --cluster-domain values configured for the kubelet, and whether it can resolve the my-nginx Service to the Cluster IP 10.254.149.187 shown above:

cd /opt/k8s/work
cat > dnsutils-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dnsutils-ds
  labels:
    app: dnsutils-ds
spec:
  type: NodePort
  selector:
    app: dnsutils-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dnsutils-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: dnsutils-ds
  template:
    metadata:
      labels:
        app: dnsutils-ds
    spec:
      containers:
      - name: my-dnsutils
        image: tutum/dnsutils:latest
        command:
          - sleep
          - "3600"
        ports:
        - containerPort: 80
EOF

[root@kubernetes-01 kubernetes]# kubectl create -f dnsutils-ds.yml
service/dnsutils-ds created
daemonset.apps/dnsutils-ds created
[root@kubernetes-01 kubernetes]# kubectl get pods -lapp=dnsutils-ds -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
dnsutils-ds-994bl   1/1     Running   0          3m45s   172.30.218.131   kubernetes-02   <none>           <none>
dnsutils-ds-drjdb   1/1     Running   0          3m45s   172.30.11.4      kubernetes-03   <none>           <none>
dnsutils-ds-hw7ff   1/1     Running   0          3m45s   172.30.31.67     kubernetes-01   <none>           <none>
[root@kubernetes-01 kubernetes]# kubectl -it exec dnsutils-ds-994bl cat /etc/resolv.conf
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
nameserver 10.254.0.2
search default.svc.cluster.local svc.cluster.local cluster.local pekdemo.pekdemo.com
options ndots:5
[root@kubernetes-01 kubernetes]# kubectl -it exec dnsutils-ds-994bl nslookup www.baidu.com
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server:         10.254.0.2
Address:        10.254.0.2#53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 180.101.49.11
Name:   www.a.shifen.com
Address: 180.101.49.12

[root@kubernetes-01 kubernetes]#


[root@kubernetes-01 kubernetes]# kubectl -it exec dnsutils-ds-drjdb nslookup www.baidu.com
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server:         10.254.0.2
Address:        10.254.0.2#53

www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 180.101.49.12
Name:   www.a.shifen.com
Address: 180.101.49.11

[root@kubernetes-01 kubernetes]# 


[root@kubernetes-01 kubernetes]# kubectl -it exec dnsutils-ds-hw7ff nslookup www.baidu.com
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server:         10.254.0.2
Address:        10.254.0.2#53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 180.101.49.11
Name:   www.a.shifen.com
Address: 180.101.49.12

[root@kubernetes-01 kubernetes]#
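
The lookups above only cover an external name; to complete the check described earlier, also resolve the my-nginx Service from the same pod. It should return the Cluster IP 10.254.149.187, for example:

kubectl exec dnsutils-ds-994bl -- nslookup my-nginx
# the fully qualified name should resolve to the same address
kubectl exec dnsutils-ds-994bl -- nslookup my-nginx.default.svc.cluster.local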

11. Accessing the Dashboard

Since 1.7, the dashboard only allows access over https. When using kube proxy, it must listen on localhost or 127.0.0.1. NodePort access has no such restriction, but it is only recommended for development environments. If these conditions are not met, the browser does not redirect after a successful login and stays on the login page.

Access the dashboard via port forwarding; start the forwarder:

[root@kubernetes-01 ~]# kubectl port-forward -n kubernetes-dashboard  svc/kubernetes-dashboard 4443:443 --address 0.0.0.0
Forwarding from 0.0.0.0:4443 -> 8443
Handling connection for 4443
Handling connection for 4443
Handling connection for 4443
Handling connection for 4443
Handling connection for 4443

The dashboard only supports token authentication by default (client certificate authentication is not supported), so when using a kubeconfig file, the token must be written into it.

Note: here we use port forwarding for now; later you can switch to NodePort or Ingress access (a NodePort sketch follows below).
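
As a hedged sketch of the NodePort alternative (assuming the kubernetes-dashboard Service in the kubernetes-dashboard namespace, as used above):

# switch the Service to NodePort and read back the allocated port
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard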

Create a login token

kubectl create sa dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
echo ${DASHBOARD_LOGIN_TOKEN}

Log in to the Dashboard with the token printed above:

image-20210401190254370

Create a token-based kubeconfig file

source /opt/k8s/bin/environment.sh
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=dashboard.kubeconfig

# Set client credentials using the token created above
kubectl config set-credentials dashboard_user \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=dashboard.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=dashboard_user \
  --kubeconfig=dashboard.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=dashboard.kubeconfig
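
Before loading the file into the Dashboard, you can sanity-check it from the command line; since dashboard-admin is bound to cluster-admin, a simple list should succeed, for example:

kubectl --kubeconfig=dashboard.kubeconfig get pods -A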

Log in to the Dashboard with the generated dashboard.kubeconfig:

image-20210401190159206
