Deploying a multi-master, highly available Kubernetes cluster on Alibaba Cloud

Alibaba Cloud K8s HA deployment (we need six Alibaba Cloud ECS instances, each with a public IP)

Building a multi-master k8s cluster from binaries, part 1: cluster environment and what each component does

The plan is as follows:

master01: 172.24.150.85 (internal)    kube-apiserver  kube-controller-manager  etcd  flannel  docker
master02: 172.24.150.86 (internal)    kube-apiserver  kube-controller-manager  etcd  flannel  docker
master03: 172.24.150.87 (internal)    kube-apiserver  kube-controller-manager  etcd  flannel  docker
node01:   172.24.150.88 (internal)    kube-proxy  kubelet  flannel  docker
node02:   172.24.150.89 (internal)    kube-proxy  kubelet  flannel  docker
haproxyharbor: 172.24.150.90 (internal)    haproxy  harbor

I Component versions

  • kubernetes 1.12.3
  • Docker 18.06.1-ce
  • Etcd 3.3.10
  • Flanneld 0.10.0
  • Add-ons
    • CoreDNS
    • Dashboard
    • Heapster(influxdb,grafana)
    • Metrics-Server
    • EFK(elasticsearch,fluentd,kibana)
    • Image registry
    • docker registry
    • harbor

II Main configuration strategy

  • kube-apiserver:
    • 3-node high availability implemented with keepalived and haproxy;
    • the insecure port 8080 and anonymous access are disabled;
    • https requests are served on the secure port 6443;
    • strict authentication and authorization policies (x509, token, RBAC);
    • bootstrap token authentication is enabled to support kubelet TLS bootstrapping;
    • kubelet and etcd are accessed over https, so traffic is encrypted;
  • kube-scheduler:
    • 3-node high availability;
    • the apiserver secure port is accessed using a kubeconfig;

kubelet:

  • bootstrap tokens are created dynamically with kubeadm rather than being statically configured in the apiserver;

  • client and server certificates are generated automatically through the TLS bootstrap mechanism and rotated automatically before they expire;

  • the main parameters are configured in a KubeletConfiguration-type JSON file;

  • the read-only port is disabled; https requests are served on the secure port 10250 with authentication and authorization, and anonymous or unauthorized access is rejected;

  • the apiserver secure port is accessed using a kubeconfig;

kube-proxy:

  • the apiserver secure port is accessed using a kubeconfig;

  • the main parameters are configured in a KubeProxyConfiguration-type JSON file;

  • IPVS proxy mode;

  • Cluster add-ons:

    • DNS: CoreDNS, which offers better functionality and performance;
    • Dashboard: with login authentication;
    • Metrics: heapster and metrics-server, accessing the kubelet secure port over https;
    • Logging: Elasticsearch, Fluentd, Kibana;
    • Registry: docker-registry and harbor;

III System initialization

1 Set the hostname (run on all 6 nodes)

For example, on master01:
hostnamectl set-hostname master01

2 Local hosts resolution (run on all 6 nodes). The extra etcd01/etcd02/etcd03 entries are simply the names used for the members of the three-node etcd cluster.

[root@master01 ~] cat >>/etc/hosts <<EOF
172.24.150.85  master01 etcd01
172.24.150.86  master02 etcd02
172.24.150.87  master03 etcd03
172.24.150.88  node01
172.24.150.89  node02
172.24.150.90  haproxyharbor
EOF

3 Passwordless SSH from master01 to the other nodes (most of the following steps are run only on master01, 172.24.150.85, so set up key-based login from master01 to the other nodes)

Configuring passwordless SSH from master01 to the other nodes makes it easy to distribute files and run commands remotely.

[root@master01 ~]# ssh-keygen -t rsa            # generate the key pair; press Enter through the prompts
[root@master01 ~]# ssh-copy-id root@master01    # copy the public key to master01; enter master01's password once
[root@master01 ~]# ssh-copy-id root@master02    # copy the public key to master02; enter master02's password once
[root@master01 ~]# ssh-copy-id root@master03    # copy the public key to master03; enter master03's password once
[root@master01 ~]# ssh-copy-id root@node01      # copy the public key to node01; enter node01's password once
[root@master01 ~]# ssh-copy-id root@node02      # copy the public key to node02; enter node02's password once

4 Disable the firewalld service (run on all 6 nodes)

    systemctl stop firewalld          (run as root)
    systemctl disable firewalld       (do not start it at boot)
    iptables -P FORWARD ACCEPT        (set the FORWARD chain's default policy to ACCEPT)

5 Disable swap

swapoff -a 
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # comments out the swap entry in /etc/fstab so swap stays disabled after a reboot

6 Disable SELinux

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persist: /etc/selinux/config must contain SELINUX=disabled

7 Load kernel modules

modprobe br_netfilter
modprobe ip_vs
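
A minimal sketch to make these modules load again after a reboot (systemd-based CentOS 7 assumed); the extra ip_vs_* and nf_conntrack_ipv4 modules are an assumption based on the IPVS proxy mode planned for kube-proxy:

cat > /etc/modules-load.d/kubernetes.conf <<EOF
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# load them immediately as well
for m in $(cat /etc/modules-load.d/kubernetes.conf); do modprobe "$m"; done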

8 Set kernel parameters

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
Apply the settings:
sysctl -p /etc/sysctl.d/kubernetes.conf

tcp_tw_recycle conflicts with the NAT used by Kubernetes and must be disabled;
the unused IPv6 stack is disabled to avoid triggering a docker bug.
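
To confirm the values took effect (they should match the file above):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness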

9 Check whether the kernel and its modules are suitable for running docker (CentOS)

curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
bash ./check-config.sh

IV Environment overview

Building a multi-master k8s cluster from binaries, part 2: building a highly available three-node etcd cluster with TLS certificates
The etcd cluster below is built on three CentOS 7.5 hosts:
etcd01(master01): 172.24.150.85
etcd02(master02): 172.24.150.86
etcd03(master03): 172.24.150.87

I Create the CA certificate and key

The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key. The CA certificate is self-signed and is used to sign every other TLS certificate created later. Because part 1 set up passwordless SSH from master01 (etcd01) to master02 (etcd02) and master03 (etcd03), the steps below only need to be run on master01; the certificates and binaries are then copied to the other nodes. Steps that must be run on the other nodes are called out explicitly.

1 Install cfssl (on master01)

curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssl*

2 Create the CA config file

First create the directory that will hold the certificates (run this on master01, master02 and master03):
mkdir -p /etc/kubernetes/cert && cd /etc/kubernetes/cert

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

Brief notes:
    ca-config.json: can define multiple profiles with different expiry times, usage scenarios, and so on; a specific profile is selected later when signing certificates;
    signing: the certificate can be used to sign other certificates; CA=TRUE is set in the generated ca.pem;
    server auth: clients can use this CA to verify certificates presented by servers;
    client auth: servers can use this CA to verify certificates presented by clients.

3 Create the CA certificate signing request

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "yunwei"
    }
  ]
}
EOF

Brief notes:
    CN: Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to check whether a site is legitimate;
    O: Organization; kube-apiserver extracts this field as the Group the requesting user belongs to;
    kube-apiserver uses the extracted User and Group as the identity for RBAC authorization.

Generate the CA certificate and private key:

[root@master01 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/07/07 10:53:30 [INFO] generating a new CA key and certificate from CSR
2019/07/07 10:53:30 [INFO] generate received request
2019/07/07 10:53:30 [INFO] received CSR
2019/07/07 10:53:30 [INFO] generating key: rsa-2048
2019/07/07 10:53:30 [INFO] encoded CSR
2019/07/07 10:53:30 [INFO] signed certificate with serial number 605272635170936057386255196971681816888287295153
[root@master01 cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
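
Optionally inspect the generated CA certificate, e.g. to confirm the subject and the 10-year validity configured above (a quick openssl check, not required):

[root@master01 cert]# openssl x509 -in ca.pem -noout -subject -issuer -dates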

4 Create the etcd certificate signing request

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.24.150.85",
    "172.24.150.86",
    "172.24.150.87"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "yunwei"
    }
  ]
}
EOF

Brief notes:
    the hosts field lists the etcd node IPs or domain names authorized to use this certificate; the IPs of all three etcd nodes are listed here.

Generate the etcd certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
    -ca-key=/etc/kubernetes/cert/ca-key.pem \
    -config=/etc/kubernetes/cert/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
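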

5 Distribute the *.pem certificates to the other two etcd hosts, etcd02 (master02) and etcd03 (master03)

The CA certificate is also distributed to node01 and node02, because it is needed later when setting up the flanneld network.

[root@master01 cert]# scp *.pem master02:/etc/kubernetes/cert
[root@master01 cert]# scp *.pem master03:/etc/kubernetes/cert               
[root@master01 cert]# scp ca.pem root@node01:/etc/kubernetes/cert 
[root@master01 cert]# scp ca.pem root@node02:/etc/kubernetes/cert 

The certificate files used by etcd are:
  * ca.pem
  * etcd-key.pem
  * etcd.pem

II Deploy the three-node etcd cluster

The etcd package is installed on all three etcd nodes (master01, master02, master03).

1 Download the etcd package on all three nodes

wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar zxf etcd-v3.3.10-linux-amd64.tar.gz
cp  etcd-v3.3.10-linux-amd64/etcd* /usr/local/bin

2 Create the etcd data directory on all three nodes

mkdir -p /var/lib/etcd

3 Create the systemd unit file on each of the three nodes

The systemd unit file for master01:
############################### 172.24.150.85 ################################

cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --name etcd01 \
  --cert-file=/etc/kubernetes/cert/etcd.pem \
  --key-file=/etc/kubernetes/cert/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/cert/etcd.pem \
  --peer-key-file=/etc/kubernetes/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --initial-advertise-peer-urls https://172.24.150.85:2380 \
  --listen-peer-urls https://172.24.150.85:2380 \
  --listen-client-urls https://172.24.150.85:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://172.24.150.85:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd01=https://172.24.150.85:2380,etcd02=https://172.24.150.86:2380,etcd03=https://172.24.150.87:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

############################### 172.24.150.86 ################################
The systemd unit file for master02:

cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --name etcd02 \
  --cert-file=/etc/kubernetes/cert/etcd.pem \
  --key-file=/etc/kubernetes/cert/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/cert/etcd.pem \
  --peer-key-file=/etc/kubernetes/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --initial-advertise-peer-urls https://172.24.150.86:2380 \
  --listen-peer-urls https://172.24.150.86:2380 \
  --listen-client-urls https://172.24.150.86:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://172.24.150.86:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd01=https://172.24.150.85:2380,etcd02=https://172.24.150.86:2380,etcd03=https://172.24.150.87:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

############################### 172.24.150.87 ################################
The systemd unit file for master03:

cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --name etcd03 \
  --cert-file=/etc/kubernetes/cert/etcd.pem \
  --key-file=/etc/kubernetes/cert/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/cert/etcd.pem \
  --peer-key-file=/etc/kubernetes/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --initial-advertise-peer-urls https://172.24.150.87:2380 \
  --listen-peer-urls https://172.24.150.87:2380 \
  --listen-client-urls https://172.24.150.87:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://172.24.150.87:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster etcd01=https://172.24.150.85:2380,etcd02=https://172.24.150.86:2380,etcd03=https://172.24.150.87:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

####################################################################################################
To secure communication, etcd is given its own certificate and key (cert-file and key-file), the certificate, key and CA for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file).

The hosts field of the etcd-csr.json used to create etcd.pem must contain the IPs of all etcd nodes, otherwise certificate validation will fail.
When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.

4 Preferably start etcd on all three nodes at the same time and enable it at boot (e.g. with Xshell sending one window's input to all three hosts)

systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd.service

5 Check the status of the etcd service

systemctl status etcd.service

If the nodes are not started together, the first etcd process will appear to hang for a while as it waits for the other members to join the cluster; this is normal.
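
While waiting, the log and service state of a member can be watched with the standard systemd tools (optional):

journalctl -u etcd -f
systemctl is-active etcd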

6 Verify the etcd cluster health and see which member is the leader. The commands below can be run on any etcd node (running them on all of them does no harm).

[root@master01 system]#  etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem cluster-health
member 2b7694970ad266e9 is healthy: got healthy result from https://172.24.150.86:2379
member 2de7ad1771e372b4 is healthy: got healthy result from https://172.24.150.87:2379
member cf6dea03cf608ee3 is healthy: got healthy result from https://172.24.150.85:2379
cluster is healthy
[root@master01 system]#  etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem member list
2b7694970ad266e9: name=etcd02 peerURLs=https://172.24.150.86:2380 clientURLs=https://172.24.150.86:2379 isLeader=false
2de7ad1771e372b4: name=etcd03 peerURLs=https://172.24.150.87:2380 clientURLs=https://172.24.150.87:2379 isLeader=false
cf6dea03cf608ee3: name=etcd01 peerURLs=https://172.24.150.85:2380 clientURLs=https://172.24.150.85:2379 isLeader=true
[root@master02 system]#  etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem cluster-health
member 2b7694970ad266e9 is healthy: got healthy result from https://172.24.150.86:2379
member 2de7ad1771e372b4 is healthy: got healthy result from https://172.24.150.87:2379
member cf6dea03cf608ee3 is healthy: got healthy result from https://172.24.150.85:2379
cluster is healthy
[root@master02 system]#  etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem member list
2b7694970ad266e9: name=etcd02 peerURLs=https://172.24.150.86:2380 clientURLs=https://172.24.150.86:2379 isLeader=false
2de7ad1771e372b4: name=etcd03 peerURLs=https://172.24.150.87:2380 clientURLs=https://172.24.150.87:2379 isLeader=false
cf6dea03cf608ee3: name=etcd01 peerURLs=https://172.24.150.85:2380 clientURLs=https://172.24.150.85:2379 isLeader=true
[root@master02 system]# 
[root@master03 system]#  etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem cluster-health
member 2b7694970ad266e9 is healthy: got healthy result from https://172.24.150.86:2379
member 2de7ad1771e372b4 is healthy: got healthy result from https://172.24.150.87:2379
member cf6dea03cf608ee3 is healthy: got healthy result from https://172.24.150.85:2379
cluster is healthy
[root@master03 system]#  etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem member list
2b7694970ad266e9: name=etcd02 peerURLs=https://172.24.150.86:2380 clientURLs=https://172.24.150.86:2379 isLeader=false
2de7ad1771e372b4: name=etcd03 peerURLs=https://172.24.150.87:2380 clientURLs=https://172.24.150.87:2379 isLeader=false
cf6dea03cf608ee3: name=etcd01 peerURLs=https://172.24.150.85:2380 clientURLs=https://172.24.150.85:2379 isLeader=true

This completes the three-node etcd cluster with TLS. The next part, part 3, covers configuring the flannel network.

The previous part built the highly available etcd cluster. This part deploys Flannel so that docker containers on different hosts can communicate, which is the network foundation of the Kubernetes cluster. Configuration starts below.

I Generate TLS certificates for the Flannel network

Flannel is installed on every cluster node. The steps below are shown on master01 only; repeat them on the other nodes. (The certificate is generated once, on master01, and then distributed.)

1 Create the certificate signing request

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "yunwei"
    }
  ]
}
EOF

This certificate is only used by flanneld as a client certificate (to talk to etcd), so the hosts field is empty.

Generate the certificate and private key:


[root@master01 cert]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

2 Distribute the certificates to the /etc/kubernetes/cert/ directory on all cluster nodes

[root@master01 cert] scp  flanneld*.pem   root@master02:/etc/kubernetes/cert/
[root@master01 cert] scp  flanneld*.pem   root@master03:/etc/kubernetes/cert/
[root@master01 cert] scp  flanneld*.pem   root@node01:/etc/kubernetes/cert/
[root@master01 cert] scp  flanneld*.pem   root@node02:/etc/kubernetes/cert/

II Deploy Flannel

1 Download Flannel (on all nodes)

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz
cp {flanneld,mk-docker-opts.sh} /usr/local/bin

2 Write the network range into etcd

The following two commands are run once, on any one node of the etcd cluster; they create the flannel network range that docker subnets will be allocated from.

[root@master01 cert]# etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem mkdir /kubernetes/network
[root@master01 cert]# etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

Note that the second command, despite its length, is a single command.

3 Create the systemd unit file

Create it on master01 and copy it, unchanged, to master02, master03, node01 and node02.

cat >/etc/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/kubernetes/cert/flanneld.pem \
  -etcd-keyfile=/etc/kubernetes/cert/flanneld-key.pem \
  -etcd-endpoints=https://172.24.150.85:2379,https://172.24.150.86:2379,https://172.24.150.87:2379 \
  -etcd-prefix=/kubernetes/network
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

Brief notes:
    The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later it uses the values in that file to configure the docker0 bridge.

    flanneld communicates with other nodes through the interface of the system's default route; on machines with multiple interfaces (e.g. internal and public), the interface can be selected with the -iface=enpxx option.
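
For example, on a host with both a public and a private interface, flanneld could be pinned to the internal network by appending one more flag to the ExecStart block above; eth0 is an assumption here, use whichever interface carries the 172.24.150.x internal addresses:

  -etcd-prefix=/kubernetes/network \
  -iface=eth0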

4 Start flanneld on all nodes and enable it at boot

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld

5 Check the subnet information flannel allocated

[root@master01 system]# cat /run/flannel/docker 
DOCKER_OPT_BIP="--bip=172.30.60.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.60.1/24 --ip-masq=true --mtu=1450"


[root@master01 system]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.60.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false

/run/flannel/docker holds the subnet options flannel passes to docker; /run/flannel/subnet.env holds flannel's overall network range and this node's subnet.

6 Check that the flannel network is up

[root@master01 system]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:dc:05:69:5c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.150.85  netmask 255.255.240.0  broadcast 172.24.159.255
        ether 00:16:3e:01:36:6e  txqueuelen 1000  (Ethernet)
        RX packets 698491  bytes 207475857 (197.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 631869  bytes 77810204 (74.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.30.60.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 32:1c:4c:05:4a:22  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2246  bytes 161117 (157.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2246  bytes 161117 (157.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The flannel.1 interface and its address are clearly visible, so the flannel network is working.
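
Another quick check is the routing table: flanneld installs routes for the other nodes' subnets via the flannel.1 device (the exact subnets depend on what each node was allocated):

ip route | grep 172.30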

III Configure docker to use the flannel network

1 Install docker on all five nodes (master01-03, node01-02); installing a specific docker version is not covered here

2 Configure docker to use the flannel network

On all five docker nodes, edit the default docker systemd unit file:

vi     /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

## The only two lines changed from the stock unit file are:
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

3 Restart docker so the configuration takes effect

First, for reference, the output of ifconfig before the docker unit file is modified and docker restarted:

[root@master01 system]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:dc:05:69:5c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.150.85  netmask 255.255.240.0  broadcast 172.24.159.255
        ether 00:16:3e:01:36:6e  txqueuelen 1000  (Ethernet)
        RX packets 806758  bytes 292970377 (279.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 729328  bytes 88585438 (84.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.30.60.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 32:1c:4c:05:4a:22  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2914  bytes 195853 (191.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2914  bytes 195853 (191.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# After modifying the docker unit file, restart the docker service and check again with ifconfig

[root@master01 system]# systemctl daemon-reload
[root@master01 system]# systemctl restart docker
[root@master01 system]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.30.60.1  netmask 255.255.255.0  broadcast 172.30.60.255
        ether 02:42:dc:05:69:5c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.150.85  netmask 255.255.240.0  broadcast 172.24.159.255
        ether 00:16:3e:01:36:6e  txqueuelen 1000  (Ethernet)
        RX packets 814217  bytes 293897650 (280.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 736551  bytes 89555535 (85.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.30.60.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 32:1c:4c:05:4a:22  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2993  bytes 200086 (195.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2993  bytes 200086 (195.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

4 Check that the docker network works

# Start a container and check whether the IP it is given falls inside the subnet flannel allocated
[root@master01 system]# docker run -itd --name ceshi centos
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
8ba884070f61: Pull complete 
Digest: sha256:a799dd8a2ded4a83484bbae769d97655392b3f86533ceb7dd96bbac929809f3c
Status: Downloaded newer image for centos:latest
efbb88d013137b8014f3ca4c6a1f55b706fed2d4575c838b1a4b307c1d1e2508
[root@master01 system]# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' efbb88
172.30.60.2

5 Check the subnets of all cluster hosts

[root@master01 system]# etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.30.21.0-24
/kubernetes/network/subnets/172.30.60.0-24
/kubernetes/network/subnets/172.30.81.0-24
/kubernetes/network/subnets/172.30.80.0-24
/kubernetes/network/subnets/172.30.14.0-24

The output shows that containers on this host use the 172.30.60.0/24 subnet, which lies inside the 172.30.0.0/16 range managed by flannel. The cluster network configuration is complete.
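
As a final check of cross-host connectivity, ping another node's docker0 gateway over the overlay from this host; 172.30.21.1 is assumed to be the gateway of the 172.30.21.0/24 subnet listed above, and that node must already have restarted docker with the flannel options:

ping -c 3 172.30.21.1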

The next part deploys and configures the k8s masters and their high availability.

Building a three-master highly available k8s cluster from binaries: configuring the k8s masters and high availability
The k8s master cluster is laid out as follows:
master01: 172.24.150.85
master02: 172.24.150.86
master03: 172.24.150.87
haproxyharbor: 172.24.150.90 (the load balancer in front of kube-apiserver)

I Configure the Kubernetes master cluster (3 master nodes)

The Kubernetes master nodes run the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    These three components are deployed together on every master node here. Strictly speaking they do not have to be co-located (each of them only needs network access to kube-apiserver), but they are tightly coupled and lightweight, so running the full set on each master is the usual practice.
  • kube-scheduler, kube-controller-manager and kube-apiserver are closely related in function;
  • only one kube-scheduler and one kube-controller-manager process can be active at a time; when several copies run, a leader is chosen by election.

II Deploy the kubectl command-line tool

kubectl is the command-line management tool for the Kubernetes cluster; this section describes how to install and configure it.
kubectl reads the kube-apiserver address, certificates and user information from ~/.kube/config by default; without that file, kubectl commands may fail.
~/.kube/config only needs to be generated once; it is then copied to the other masters.

1 Download kubectl

wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /usr/local/bin

2 Create the admin certificate signing request

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "yunwei"
    }
  ]
}
EOF

O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters;
the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to all APIs;
this certificate is only used by kubectl as a client certificate, so the hosts field is empty.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

3 Create the ~/.kube/config file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://172.24.150.90:6443 \
  --kubeconfig=kubectl.kubeconfig

# Set client credentials
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
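
Before distributing the file it can be reviewed with kubectl itself (certificate data is shown redacted); this is just an optional sanity check:

kubectl config view --kubeconfig=kubectl.kubeconfig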

4 Distribute the ~/.kube/config file

[root@master01 cert]# mkdir -p ~/.kube && cp kubectl.kubeconfig ~/.kube/config
[root@master01 cert]# ssh root@master02 "mkdir -p ~/.kube" && scp kubectl.kubeconfig root@master02:~/.kube/config
[root@master01 cert]# ssh root@master03 "mkdir -p ~/.kube" && scp kubectl.kubeconfig root@master03:~/.kube/config

III Deploy kube-apiserver

1 Create the kube-apiserver certificate signing request

[root@master01 cert]# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.24.150.85",
    "172.24.150.86",
    "172.24.150.87",
    "172.24.150.90",
    "47.108.21.49",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "yunwei"
    }
  ]
}
EOF

The hosts field lists the IPs and domain names authorized to use this certificate: the VIP (load balancer address), the apiserver node IPs, and the kubernetes service IP and names.
The last character of a domain name must not be a dot (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.".
If a cluster domain other than cluster.local is used, e.g. bqding.com, the last two names must be changed to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
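
Optionally verify that all the intended IPs and service names made it into the certificate's SAN list (a quick openssl check, not required):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'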

2 Copy the generated certificate and private key to the other master nodes:

[root@master01 cert]# scp kubernetes*.pem root@master02:/etc/kubernetes/cert/
[root@master01 cert]# scp kubernetes*.pem root@master03:/etc/kubernetes/cert/

3 Create the encryption config file

[root@master01 cert]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF
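
Once kube-apiserver is running (step 8 below), you can optionally verify that Secrets really are encrypted at rest: create a throw-away secret and read its raw value from etcd over the v3 API; the stored value should begin with the k8s:enc:aescbc:v1: prefix instead of plain text. The secret name test-secret is arbitrary, and etcdctl 3.3 (installed with etcd earlier) is assumed:

kubectl create secret generic test-secret -n default --from-literal=foo=bar
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/etcd.pem \
  --key=/etc/kubernetes/cert/etcd-key.pem \
  --endpoints=https://172.24.150.85:2379 \
  get /registry/secrets/default/test-secret | hexdump -C | head
kubectl delete secret test-secret -n default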

4 Distribute the encryption config file to the other masters

[root@master01 cert]# scp encryption-config.yaml  root@master02:/etc/kubernetes/cert/
[root@master01 cert]# scp encryption-config.yaml  root@master03:/etc/kubernetes/cert/

5 Create the kube-apiserver systemd unit file

cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/cert/encryption-config.yaml \
  --advertise-address=172.24.150.85 \
  --bind-address=172.24.150.85 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-38700 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://172.24.150.85:2379,https://172.24.150.86:2379,https://172.24.150.87:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

--experimental-encryption-provider-config: enables encryption of resources at rest;
--authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests;
--enable-admission-plugins: enables the listed admission plugins, including ServiceAccount and NodeRestriction;
--service-account-key-file: the public key used to verify ServiceAccount tokens; it must pair with the private key given to kube-controller-manager via --service-account-private-key-file;
--tls-*-file: the certificate, private key and CA used by the apiserver. --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--kubelet-client-certificate, --kubelet-client-key: if set, the apiserver uses https to access the kubelet APIs; RBAC rules must be defined for the user in the certificate (the user of the kubernetes*.pem certificate above is kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
--bind-address: must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
--insecure-port=0: disables the insecure port (8080);
--service-cluster-ip-range: the Service cluster IP range;
--service-node-port-range: the NodePort range;
--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
--enable-bootstrap-token-auth: enables bootstrap token authentication for kubelet TLS bootstrapping;
--apiserver-count=3: the number of kube-apiserver instances running in the cluster (all apiserver instances serve traffic; there is no leader election among them).

6 Distribute kube-apiserver.service to the other masters

[root@master01 cert]# scp /etc/systemd/system/kube-apiserver.service root@master02:/etc/systemd/system/kube-apiserver.service
[root@master01 cert]# scp /etc/systemd/system/kube-apiserver.service root@master03:/etc/systemd/system/kube-apiserver.service

7 Create the log directory

mkdir -p /var/log/kubernetes/

8 Start the kube-apiserver service

[root@master01 cert] systemctl daemon-reload
[root@master01 cert] systemctl enable kube-apiserver
[root@master01 cert] systemctl start kube-apiserver

9 Check kube-apiserver and the cluster status

[root@master01 cert]# netstat -ptln | grep kube-apiserve
tcp        0      0 172.24.150.85:6443      0.0.0.0:*               LISTEN      22348/kube-apiserve

[root@master01 cert]# kubectl cluster-info
Kubernetes master is running at https://172.24.150.90:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

10 Grant the kubernetes certificate access to the kubelet API

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

IV Deploy kube-controller-manager

The cluster runs 3 kube-controller-manager instances; after startup a leader is elected and the other instances block. If the leader becomes unavailable, the remaining instances elect a new one, keeping the service available.
For secure communication, this document first generates an x509 certificate and private key. kube-controller-manager uses the certificate in two cases:
when communicating with the kube-apiserver secure port;
when serving prometheus-format metrics over https on the secure port (10257; the plain-http port 10252 stays bound to 127.0.0.1).

1 Create the kube-controller-manager certificate signing request

[root@master01 cert]# cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.24.150.85",
      "172.24.150.86",
      "172.24.150.87"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "yunwei"
      }
    ]
}
EOF
The hosts list contains the IPs of all kube-controller-manager nodes.
CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

2 Distribute the generated certificate and private key to all master nodes

[root@master01 cert]# scp kube-controller-manager*.pem root@master02:/etc/kubernetes/cert/
[root@master01 cert]# scp kube-controller-manager*.pem root@master03:/etc/kubernetes/cert/

3 Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://172.24.150.90:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig


kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute kube-controller-manager.kubeconfig to master02 and master03:

[root@master01 cert]# scp kube-controller-manager.kubeconfig root@master02:/etc/kubernetes/cert/
[root@master01 cert]# scp kube-controller-manager.kubeconfig root@master03:/etc/kubernetes/cert/

4 Create and distribute the kube-controller-manager systemd unit file

[root@master01 cert]# cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


--address: listen on 127.0.0.1;
--kubeconfig: path to the kubeconfig kube-controller-manager uses to connect to and authenticate with kube-apiserver;
--cluster-signing-*-file: used to sign the certificates created via TLS bootstrap;
--experimental-cluster-signing-duration: the validity period of TLS bootstrap certificates;
--root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
--service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
--service-cluster-ip-range: the Service cluster IP range; must match the same flag on kube-apiserver;
--leader-elect=true: enables leader election; the elected leader does the work while the other instances block;
--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
--controllers=*,bootstrapsigner,tokencleaner: the controllers to enable; tokencleaner automatically removes expired bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics related flags, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller uses its own ServiceAccount token to talk to the apiserver, so permissions can be granted per controller via RBAC.

Distribute the kube-controller-manager systemd unit file:

[root@master01 cert]# scp /etc/systemd/system/kube-controller-manager.service root@master02:/etc/systemd/system/kube-controller-manager.service
[root@master01 cert]# scp /etc/systemd/system/kube-controller-manager.service root@master03:/etc/systemd/system/kube-controller-manager.service

5 Start the kube-controller-manager service

[root@master01 cert]# systemctl daemon-reload
[root@master01 cert]# systemctl enable kube-controller-manager && systemctl start kube-controller-manager

6 Check the kube-controller-manager service

[root@master01 cert]# netstat -lnpt|grep kube-controll
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      17906/kube-controll 
tcp6       0      0 :::10257                :::*                    LISTEN      17906/kube-controll
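
Port 10252 is the plain-http endpoint bound to 127.0.0.1, so a simple local liveness check is possible (optional; the secure metrics port is 10257):

curl http://127.0.0.1:10252/healthz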

7 Check the current kube-controller-manager leader

[root@master01 ~]#  kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_318c0152-a096-11e9-a701-00163e0134cf","leaseDurationSeconds":15,"acquireTime":"2019-07-08T02:25:14Z","renewTime":"2019-07-09T14:40:23Z","leaderTransitions":1}'
  creationTimestamp: 2019-07-07T09:04:06Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "248855"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 2caad7f6-a096-11e9-9fca-00163e0132fb

The current leader is the master03 node.

V Deploy kube-scheduler

The cluster runs 3 kube-scheduler instances; after startup a leader is elected and the other instances block. If the leader becomes unavailable, the remaining instances elect a new one, keeping the service available.
For secure communication, this document first generates an x509 certificate and private key. kube-scheduler uses the certificate in two cases:
1 when communicating with the kube-apiserver secure port;
2 when serving prometheus-format metrics (in this version the metrics port 10251 is plain http; see the note below).

1 Create the kube-scheduler certificate signing request


[root@master01 cert]# cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.24.150.85",
      "172.24.150.86",
      "172.24.150.87"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "yunwei"
      }
    ]
}
EOF
The hosts list contains the IPs of all kube-scheduler nodes.
CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

2 Create and distribute the kube-scheduler.kubeconfig file

[root@master01] kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=https://172.24.150.90:6443 \
      --kubeconfig=kube-scheduler.kubeconfig

[root@master01] kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master01] kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

[root@master01] kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

The certificate, private key and kube-apiserver address created above are written into the kubeconfig file.

Distribute the kubeconfig to master02 and master03:

[root@master01 cert]# scp kube-scheduler.kubeconfig root@master02:/etc/kubernetes/cert/
[root@master01 cert]# scp kube-scheduler.kubeconfig root@master03:/etc/kubernetes/cert/

3 Create and distribute the kube-scheduler systemd unit file

cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


--address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet accept https requests;
--kubeconfig: path to the kubeconfig kube-scheduler uses to connect to and authenticate with kube-apiserver;
--leader-elect=true: enables leader election; the elected leader does the work while the other instances block.

Distribute the systemd unit file to the other master nodes:

[root@master01 cert]# scp /etc/systemd/system/kube-scheduler.service master02:/etc/systemd/system/kube-scheduler.service
[root@master01 cert]# scp /etc/systemd/system/kube-scheduler.service master03:/etc/systemd/system/kube-scheduler.service

4 Start the kube-scheduler service

[root@master01 cert] systemctl daemon-reload 
[root@master01 cert] systemctl enable kube-scheduler && systemctl start kube-scheduler

5 Check the port kube-scheduler is listening on

[root@master01 cert]# netstat -lnpt|grep kube-scheduler
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      17921/kube-schedule
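
Since 10251 is a plain-http port bound to 127.0.0.1, the health and metrics endpoints can be checked locally (an optional sanity check):

curl -s http://127.0.0.1:10251/healthz
curl -s http://127.0.0.1:10251/metrics | head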

6 Check the current kube-scheduler leader

[root@master01 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_3e7bc3fc-a097-11e9-ba09-00163e0134cf","leaseDurationSeconds":15,"acquireTime":"2019-07-08T02:24:52Z","renewTime":"2019-07-09T14:51:37Z","leaderTransitions":1}'
  creationTimestamp: 2019-07-07T09:11:31Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "249869"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 35b657c6-a097-11e9-8e39-00163e01366e

As shown, the current leader is the master03 node.

VI Verify on master01, master02 and master03 that everything works

[root@master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   


[root@master02 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   


[root@master03 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   

VII Use haproxy to provide high availability for the 3 master nodes (keepalived heartbeat is not set up here)

haproxy listens on port 6443 of the haproxyharbor host and forwards to all kube-apiserver instances behind it, providing health checks and load balancing.

1 Install haproxy on the haproxyharbor host (e.g. yum install -y haproxy) and write its configuration

[root@haproxyharbor ~]# cd /etc/haproxy/ && mv haproxy.cfg haproxy.cfg.bak && cat >/etc/haproxy/haproxy.cfg <<EOF
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log     global
    timeout connect 5000
    timeout client  10m
    timeout server  10m

listen  admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:6443
    mode tcp
    option tcplog
    balance roundrobin
    server master01 172.24.150.85:6443 check inter 2000 fall 2 rise 2 weight 1
    server master02 172.24.150.86:6443 check inter 2000 fall 2 rise 2 weight 1
    server master03 172.24.150.87:6443 check inter 2000 fall 2 rise 2 weight 1
EOF

haproxy exposes its status page on port 10080;
haproxy listens on port 6443 on all interfaces of this host; this port must match the apiserver address written into the kubeconfig files (${KUBE_APISERVER}, i.e. https://172.24.150.90:6443);
the server lines list the IPs and ports of all kube-apiserver instances.

2 Start the haproxy service

[root@haproxyharbor ~] systemctl enable haproxy && systemctl start haproxy

3 Check the haproxy service status:

[root@haproxyharbor ~] systemctl status haproxy|grep Active
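
A couple of optional checks: confirm that haproxy is listening on 6443 and 10080, and fetch the stats page with the credentials defined in the config above (admin:123456):

[root@haproxyharbor ~]# ss -tlnp | grep haproxy
[root@haproxyharbor ~]# curl -s -u admin:123456 http://127.0.0.1:10080/status | head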

###################################################################

Deploy the k8s node services on node01 and node02

A Kubernetes worker node runs the following components: docker, kubelet, kube-proxy and flannel.

I Install dependency packages (on both node01 and node02)

[root@node01 ~]yum install -y epel-release wget conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs
[root@node02 ~]yum install -y epel-release wget conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs

II Deploy the kubelet component

kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs.
kubelet registers the node with kube-apiserver automatically at startup, and its built-in cAdvisor collects and reports the node's resource usage.
For security, this document only opens the secure https port; requests are authenticated and authorized, and unauthorized access (e.g. from the apiserver or heapster without credentials) is rejected.

1 Download and distribute the kubelet binaries

[root@master01 src] wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
[root@master01 src] tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@master01 src] cd kubernetes/server/bin/
[root@master01 src] cp kubelet kube-proxy /usr/local/bin
[root@master01 src] scp  kubelet kube-proxy root@node01:/usr/local/bin
[root@master01 src] scp  kubelet kube-proxy root@node02:/usr/local/bin

2 Create the kubelet bootstrap kubeconfig files (run on master01)

################### With two worker nodes (node01, node02), only two tokens/kubeconfigs are needed ##################################

# Create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:master01 \
  --kubeconfig ~/.kube/config)

[root@master01 cert]# export BOOTSTRAP_TOKEN=$(kubeadm token create \
>   --description kubelet-bootstrap-token \
>   --groups system:bootstrappers:master01 \
>   --kubeconfig ~/.kube/config)
I0709 20:02:07.166974   29695 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0709 20:02:07.167046   29695 version.go:94] falling back to the local client version: v1.12.3
## This warning can be ignored; continue with the next steps

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://172.24.150.90:6443 \
  --kubeconfig=kubelet-bootstrap-master01.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-master01.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-master01.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-master01.kubeconfig
##########################################################################

# Create a token (for the second kubeconfig)
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:master02 \
  --kubeconfig ~/.kube/config)

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://172.24.150.90:6443 \
  --kubeconfig=kubelet-bootstrap-master02.kubeconfig
# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-master02.kubeconfig
# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-master02.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-master02.kubeconfig


The kubelet-bootstrap-master0x.kubeconfig file is created twice, named kubelet-bootstrap-master01.kubeconfig and kubelet-bootstrap-master02.kubeconfig respectively.
The kubeconfig contains the bootstrap token rather than a certificate; the client certificate is issued later by kube-controller-manager.
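
Since the same four kubectl commands are repeated for every token, a small loop keeps the files consistent; this is just a sketch following the exact pattern above (group names and file names as used in this document):

for host in master01 master02; do
  BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${host} \
    --kubeconfig ~/.kube/config)

  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=https://172.24.150.90:6443 \
    --kubeconfig=kubelet-bootstrap-${host}.kubeconfig

  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${host}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${host}.kubeconfig

  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${host}.kubeconfig
done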


Additional note:

kubeadm needs the images below; they can be listed with:
[root@master01 bin]# kubeadm config images list
I0707 19:23:47.877237   13023 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0707 19:23:47.877324   13023 version.go:94] falling back to the local client version: v1.12.3
k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-controller-manager:v1.12.3
k8s.gcr.io/kube-scheduler:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# First pull the required images on a server outside the firewall, tag them and push them to your own Docker Hub account; then pull them on the machines inside the firewall and tag them back to the original k8s.gcr.io names
[root@li1891-184 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
cntsp/kube-proxy                     v1.12.3             ab97fa69b926        7 months ago        96.5MB
k8s.gcr.io/kube-proxy                v1.12.3             ab97fa69b926        7 months ago        96.5MB
cntsp/kube-apiserver                 v1.12.3             6b54f7bebd72        7 months ago        194MB
k8s.gcr.io/kube-apiserver            v1.12.3             6b54f7bebd72        7 months ago        194MB
cntsp/kube-controller-manager        v1.12.3             c79022eb8bc9        7 months ago        164MB
k8s.gcr.io/kube-controller-manager   v1.12.3             c79022eb8bc9        7 months ago        164MB
cntsp/kube-scheduler                 v1.12.3             5e75513787b1        7 months ago        58.3MB
k8s.gcr.io/kube-scheduler            v1.12.3             5e75513787b1        7 months ago        58.3MB
cntsp/etcd                           3.2.24              3cab8e1b9802        9 months ago        220MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        9 months ago        220MB
cntsp/coredns                        1.2.2               367cdc8433a4        10 months ago       39.2MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        10 months ago       39.2MB
cntsp/pause                          3.1                 da86e6ba6ca1        18 months ago       742kB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        18 months ago       742kB

docker tag cntsp/kube-proxy:v1.12.3  k8s.gcr.io/kube-proxy:v1.12.3
docker tag cntsp/kube-apiserver:v1.12.3 k8s.gcr.io/kube-apiserver:v1.12.3
docker tag cntsp/kube-controller-manager:v1.12.3 k8s.gcr.io/kube-controller-manager:v1.12.3
docker tag cntsp/kube-scheduler:v1.12.3  k8s.gcr.io/kube-scheduler:v1.12.3
docker tag cntsp/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
docker tag cntsp/coredns:1.2.2  k8s.gcr.io/coredns:1.2.2
docker tag cntsp/pause:3.1  k8s.gcr.io/pause:3.1

3 List the tokens kubeadm created for the nodes:

[root@master01 ~]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
5hdpcv.oov1vb6p2pdsk9cj   19h         2019-07-10T20:02:07+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master01    
5la9kt.rg86oqup9lmh8mc3   20h         2019-07-10T20:54:04+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master03        ##
7pmf04.7exbjzt4e0mqb1k6   <invalid>   2019-07-08T19:14:17+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master01
8thsbu.kczv9cg4p18qc1uh   <invalid>   2019-07-08T23:53:33+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master01
a2480e.fcpm6md4y3auhlbj   <invalid>   2019-07-08T18:01:24+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master01
broru2.ntzhoupwbokulsgy   20h         2019-07-10T20:50:56+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master01        ##
ovf435.wx9o306i9sof1ztm   20h         2019-07-10T20:53:12+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master02        ##
The tokens are valid for 1 day; once expired they can no longer be used and are cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled).
When kube-apiserver accepts a kubelet bootstrap token, it sets the request's user to system:bootstrap:<token id> and the group to system:bootstrappers.

View the Secrets associated with each token:

[root@master01 ~]# kubectl get secrets  -n kube-system
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-pmwks              kubernetes.io/service-account-token   3      2d7h
bootstrap-signer-token-fz6jp                     kubernetes.io/service-account-token   3      2d7h
bootstrap-token-0a7w0g                           bootstrap.kubernetes.io/token         7      2d4h
bootstrap-token-2w1i45                           bootstrap.kubernetes.io/token         7      2d6h
bootstrap-token-2xbl5i                           bootstrap.kubernetes.io/token         7      6h28m
bootstrap-token-5hdpcv                           bootstrap.kubernetes.io/token         7      4h36m
bootstrap-token-5la9kt                           bootstrap.kubernetes.io/token         7      3h44m
bootstrap-token-7pmf04                           bootstrap.kubernetes.io/token         7      2d5h
bootstrap-token-8thsbu                           bootstrap.kubernetes.io/token         7      2d
bootstrap-token-a2480e                           bootstrap.kubernetes.io/token         7      2d6h
bootstrap-token-broru2                           bootstrap.kubernetes.io/token         7      3h47m
bootstrap-token-bz7en0                           bootstrap.kubernetes.io/token         7      2d4h
bootstrap-token-cssc84                           bootstrap.kubernetes.io/token         7      2d7h
bootstrap-token-iau28s                           bootstrap.kubernetes.io/token         7      2d7h
bootstrap-token-iufp5a                           bootstrap.kubernetes.io/token         7      2d4h
bootstrap-token-ovf435                           bootstrap.kubernetes.io/token         7      3h45m
bootstrap-token-tv8p15                           bootstrap.kubernetes.io/token         7      37h
bootstrap-token-wj6vmg                           bootstrap.kubernetes.io/token         7      2d4h
certificate-controller-token-gh8qq               kubernetes.io/service-account-token   3      2d7h
clusterrole-aggregation-controller-token-4nsb7   kubernetes.io/service-account-token   3      2d7h
cronjob-controller-token-l95gw                   kubernetes.io/service-account-token   3      2d7h
daemon-set-controller-token-4d5wk                kubernetes.io/service-account-token   3      2d7h
default-token-p5gvt                              kubernetes.io/service-account-token   3      2d7h
deployment-controller-token-jlnhh                kubernetes.io/service-account-token   3      2d7h
disruption-controller-token-xgt7n                kubernetes.io/service-account-token   3      2d7h
endpoint-controller-token-jgr6r                  kubernetes.io/service-account-token   3      2d7h
expand-controller-token-lhbpc                    kubernetes.io/service-account-token   3      2d7h
generic-garbage-collector-token-6t9mt            kubernetes.io/service-account-token   3      2d7h
horizontal-pod-autoscaler-token-f7zvp            kubernetes.io/service-account-token   3      2d7h
job-controller-token-5bq5b                       kubernetes.io/service-account-token   3      2d7h
namespace-controller-token-vcp7v                 kubernetes.io/service-account-token   3      2d7h
node-controller-token-kzjgc                      kubernetes.io/service-account-token   3      2d7h
persistent-volume-binder-token-2sz49             kubernetes.io/service-account-token   3      2d7h
pod-garbage-collector-token-nw6ck                kubernetes.io/service-account-token   3      2d7h
pv-protection-controller-token-jg9rq             kubernetes.io/service-account-token   3      2d7h
pvc-protection-controller-token-bj5c7            kubernetes.io/service-account-token   3      2d7h
replicaset-controller-token-jv5r5                kubernetes.io/service-account-token   3      2d7h
replication-controller-token-5nfzh               kubernetes.io/service-account-token   3      2d7h
resourcequota-controller-token-dpzk9             kubernetes.io/service-account-token   3      2d7h
service-account-controller-token-qxflv           kubernetes.io/service-account-token   3      2d7h
service-controller-token-8bhkb                   kubernetes.io/service-account-token   3      2d7h
statefulset-controller-token-sms2q               kubernetes.io/service-account-token   3      2d7h
token-cleaner-token-85hbw                        kubernetes.io/service-account-token   3      2d7h
ttl-controller-token-zdwb4                       kubernetes.io/service-account-token   3      2d7h

4 分发bootstrap kubeconfig文件

[root@master01 ~]# scp kubelet-bootstrap-master01.kubeconfig root@node01:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig
[root@master01 ~]# scp kubelet-bootstrap-master02.kubeconfig root@node02:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig

5 创建和分发kubelet参数配置文件

创建 kubelet 参数配置模板文件:

[root@master01 cert]# cat > kubelet.config.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.24.150.88",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}
EOF

address: the kubelet API listen address; it must not be 127.0.0.1, otherwise kube-apiserver, heapster and other components cannot call the kubelet API;
readOnlyPort=0: disables the read-only port (default 10255);
authentication.anonymous.enabled: set to false so anonymous access to port 10250 is not allowed;
authentication.x509.clientCAFile: the CA certificate that signed the client certificates, enabling HTTPS client-certificate authentication;
authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;
requests that pass neither x509 certificate nor webhook authentication (whether from kube-apiserver or any other client) are rejected with Unauthorized;
authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group has permission to operate on the requested resource (RBAC);
featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the certificate lifetime is controlled by kube-controller-manager's --experimental-cluster-signing-duration flag;
the kubelet must be run as root.

为node01、node02分发kubelet配置文件

[root@master01 cert]# scp kubelet.config.json root@node01:/etc/kubernetes/cert/kubelet.config.json
[root@master01 cert]# scp kubelet.config.json root@node02:/etc/kubernetes/cert/kubelet.config.json
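
Note that the template above hard-codes node01's address (172.24.150.88); node02 must use its own IP in the address field. A sketch that substitutes the address per node before copying (node names and IPs taken from the cluster plan):

for pair in node01:172.24.150.88 node02:172.24.150.89; do
  node=${pair%%:*}; ip=${pair##*:}
  # write a per-node copy with that node's own IP, then ship it to the node
  sed "s/172.24.150.88/${ip}/" kubelet.config.json > kubelet.config-${node}.json
  scp kubelet.config-${node}.json root@${node}:/etc/kubernetes/cert/kubelet.config.json
done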

6 创建和分发kubelet systemd unit文件


cat >/etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/cert \
  --kubeconfig=/etc/kubernetes/cert/kubelet.kubeconfig \
  --config=/etc/kubernetes/cert/kubelet.config.json \
  --hostname-override=172.24.150.89 \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \
  --allow-privileged=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
If --hostname-override is set, kube-proxy on the same node must be set to the same value, otherwise that Node will not be found; the value above is node02's IP, so use each node's own IP when creating the unit file on each node;
--bootstrap-kubeconfig: points to the bootstrap kubeconfig file; the kubelet uses the username and token in this file to send the TLS Bootstrapping request to kube-apiserver;
after K8S approves the kubelet's CSR, the certificate and private key are created in the --cert-dir directory and then referenced from the --kubeconfig file;

7 Bootstrap Token Auth和授予权限

On startup the kubelet checks whether the file given by --kubeconfig exists; if it does not, it uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR it authenticates the embedded token (the one created earlier with kubeadm); on success it sets the request's user to system:bootstrap:<token id> and its group to system:bootstrappers. This step is called Bootstrap Token Auth.
By default this user and group are not allowed to create CSRs, so the kubelet fails to start with errors like the following:

sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
Jul 06 06:42:36 node01 kubelet[26986]: F0706 06:42:36.314378   26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
Jul 06 06:42:36 node01 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jul 06 06:42:36 node01 systemd[1]: kubelet.service: Failed with result 'exit-code'.

解决办法是:创建一个 clusterrolebinding,将 group system:bootstrappers 和 clusterrole system:node-bootstrapper 绑定:

[root@master01 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

8 启动kubelet服务

mkdir -p /var/log/kubernetes && mkdir -p /var/lib/kubelet
systemctl daemon-reload 
systemctl enable kubelet 
systemctl restart kubelet

The swap partition must be off, otherwise the kubelet fails to start;
the working and log directories must be created beforehand;
after starting, the kubelet sends a CSR request to kube-apiserver via --bootstrap-kubeconfig; once that CSR is approved, kube-controller-manager issues the TLS client certificate and private key for the kubelet, which are then referenced from the --kubeconfig file.
Note: kube-controller-manager must be started with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise no certificate and key are issued for TLS Bootstrap.
At this point the CSRs of both worker nodes are in Pending state;
**the kubelet process is running, but its listening port is not up yet; the steps below are still required!**

9 approve kubelet csr请求

CSR requests can be approved manually or automatically; **the automatic way is recommended**, because from v1.8 onwards the certificates generated from approved CSRs can be rotated automatically.
i. Manually approve the CSR requests
List the CSRs:

[root@master01 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-P7XcQAc2yNlXn1pUmQFxXNCdGyyt8ccVuW3bmoUZiK4   30m   system:bootstrap:e7n0o5   Pending
node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM   79m   system:bootstrap:ydbwyk   Pending
node-csr-u2sVzVkFYnMxPIYWjXHbqRJROtTZBYzA1s2vATPLzyo   30m   system:bootstrap:8w6j3n   Pending
Approve the CSR:
[root@master01 ~]# kubectl certificate approve node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
certificatesigningrequest.certificates.k8s.io "node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM" approved

查看 Approve 结果:

[root@master01 ~]# kubectl describe csr node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
Name:               node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Thu, 20 Dec 2018 19:55:39 +0800
Requesting User:    system:bootstrap:ydbwyk
Status:             Approved,Issued
Subject:
         Common Name:    system:node: 172.24.150.88
         Serial Number:  
         Organization:   system:nodes
Events:  <none>
Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it;
Subject: the information of the certificate being requested;
the certificate's CN is system:node:172.24.150.88 and its Organization is system:nodes; kube-apiserver's Node authorization mode grants the corresponding permissions to this certificate;

ii、自动approve csr请求
创建三个 ClusterRoleBinding,分别用于自动 approve client、renew client、renew server 证书:

[root@master01 ~]# cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF
auto-approve-csrs-for-group:自动 approve node 的第一次 CSR; 注意第一次 CSR 时,请求的 Group 为 system:bootstrappers;
node-client-cert-renewal:自动 approve node 后续过期的 client 证书,自动生成的证书 Group 为 system:nodes;
node-server-cert-renewal:自动 approve node 后续过期的 server 证书,自动生成的证书 Group 为 system:nodes;

生效配置:

[root@master01 ~]# kubectl apply -f csr-crb.yaml

10 查看kubelet情况

等待一段时间(1-10 分钟),两个节点的 CSR 都被自动 approve

[root@master01 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-P7XcQAc2yNlXn1pUmQFxXNCdGyyt8ccVuW3bmoUZiK4   35m   system:bootstrap:e7n0o5   Approved,Issued
node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM   84m   system:bootstrap:ydbwyk   Approved,Issued

节点ready:

[root@master01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
172.24.150.88   Ready    <none>   3h40m   v1.12.3
172.24.150.89   Ready    <none>   3h39m   v1.12.3
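
With both nodes Ready, the kubelet secure port can be spot-checked: anonymous access was disabled in kubelet.config.json, so an unauthenticated request should come back Unauthorized rather than returning data. For example, against node01:

curl -sk https://172.24.150.88:10250/metrics      # expect an Unauthorized response, not metrics output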

kube-controller-manager 为各 node 生成了 kubeconfig 文件和公私钥:

[root@node01 cert]# ll
total 52
-rw------- 1 root root 1679 Jul  7 14:18 ca-key.pem
-rw-r--r-- 1 root root 1359 Jul  7 14:18 ca.pem
-rw------- 1 root root 1675 Jul  7 13:30 flanneld-key.pem
-rw-r--r-- 1 root root 1391 Jul  7 13:30 flanneld.pem
-rw------- 1 root root 2158 Jul  9 20:57 kubelet-bootstrap.kubeconfig
-rw------- 1 root root 1273 Jul  9 21:17 kubelet-client-2019-07-09-21-17-43.pem
lrwxrwxrwx 1 root root   59 Jul  9 21:17 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-07-09-21-17-43.pem
-rw-r--r-- 1 root root  800 Jul  9 21:01 kubelet.config.json
-rw-r--r-- 1 root root 2185 Jul  9 21:07 kubelet.crt
-rw------- 1 root root 1679 Jul  9 21:07 kubelet.key
-rw------- 1 root root 2298 Jul  9 21:17 kubelet.kubeconfig
-rw-r--r-- 1 root root  321 Jul  9 21:36 kube-proxy.config.yaml
-rw------- 1 root root 6273 Jul  9 21:28 kube-proxy.kubeconfig
kubelet-server 证书会周期轮转;
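
To check when the issued client certificate expires (and, later, to confirm that rotation has replaced it), openssl can be run against the current client certificate on a node:

[root@node01 cert]# openssl x509 -in /etc/kubernetes/cert/kubelet-client-current.pem -noout -subject -dates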

三 部署kube-proxy组件

kube-proxy runs on every worker node; it watches the apiserver for changes to Services and Endpoints and creates the forwarding rules that load-balance service traffic.
This section deploys kube-proxy in ipvs mode.
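
ipvs mode needs the ip_vs kernel modules available on every worker node; the system initialization earlier only loaded ip_vs itself, so it is worth loading and confirming the full set kube-proxy checks for (a hedged check, module names as on CentOS 7 kernels):

for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe ${mod}; done
lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'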

1 创建证书

[root@master01 cert]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "yunwei"
    }
  ]
}
EOF
CN: sets the certificate's User to system:kube-proxy;
the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants the permissions needed to call kube-apiserver's proxy-related APIs;
the certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;
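
The certificate and key are then generated with cfssl in the same way as the earlier certificates; a sketch, assuming the ca.pem / ca-key.pem / ca-config.json files and the kubernetes profile created earlier in this series (the paths are assumptions):

[root@master01 cert]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

This produces kube-proxy.pem and kube-proxy-key.pem, which the kubeconfig commands below reference.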

2 创建和分发kubeconfig文件

[root@master01 cert]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://172.24.150.90:6443 \
  --kubeconfig=kube-proxy.kubeconfig

[root@master01 cert]#kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

[root@master01 cert]#kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

[root@master01 cert]#kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

--embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without this flag, only the certificate file paths are written);

分发kubeconfig文件

[root@master01 cert]# scp kube-proxy.kubeconfig root@node01:/etc/kubernetes/cert/
[root@master01 cert]# scp kube-proxy.kubeconfig root@node02:/etc/kubernetes/cert/

3 创建kube-proxy配置文件

From v1.10 onwards, part of kube-proxy's parameters can be set in a configuration file, which can be generated with the --write-config-to option or written by referring to the kubeproxyconfig type definitions: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go
Create the kube-proxy config template:

[root@master01 cert]# cat >kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.24.150.88
clientConnection:
  kubeconfig: /etc/kubernetes/cert/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 172.24.150.88:10256
hostnameOverride: 172.24.150.88
kind: KubeProxyConfiguration
metricsBindAddress: 172.24.150.88:10249
mode: "ipvs"
EOF
bindAddress: listen address;
clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
clusterCIDR: kube-proxy uses this to tell cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
hostnameOverride: must be identical to the kubelet's value on the same node, otherwise kube-proxy cannot find the Node after starting and will not create any ipvs rules;
mode: use ipvs mode;
clusterCIDR here is the flannel network range.

Watch the file format; a correctly formatted reference is shown below:
[root@k8s-master1 kubernetes]# cat /etc/kubernetes/kube-proxy.config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.24.150.88
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 172.24.150.88:10256
hostnameOverride: k8s-master1
kind: KubeProxyConfiguration
metricsBindAddress: 172.24.150.88:10249
mode: "ipvs"
[root@k8s-master1 kubernetes]#
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig   ## note the two leading spaces before this line; without them kube-proxy fails with the error below:

Jul 06 00:35:13 node01 kube-proxy[25540]: I0706 00:35:13.307740 25540 server.go:412] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
Jul 06 00:35:13 node01 kube-proxy[25540]: F0706 00:35:13.307780 25540 server.go:360] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

为各节点创建和分发 kube-proxy 配置文件:
[root@master01 cert]# scp kube-proxy.config.yaml root@node01:/etc/kubernetes/cert/
[root@master01 cert]# scp kube-proxy.config.yaml root@node02:/etc/kubernetes/cert/
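
As with kubelet.config.json, the addresses above are node01's; node02 needs its own IP in bindAddress, healthzBindAddress, metricsBindAddress and hostnameOverride. For example, a substituted copy for node02 (instead of the plain scp above):

# replace node01's IP with node02's, then copy the result into place on node02
sed "s/172.24.150.88/172.24.150.89/g" kube-proxy.config.yaml > kube-proxy.config-node02.yaml
scp kube-proxy.config-node02.yaml root@node02:/etc/kubernetes/cert/kube-proxy.config.yaml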

4 在node01和node02上分别创建 kube-proxy systemd unit文件

[root@node01 cert]# cat >/etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/cert/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/lib/kube-proxy/log \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

[root@node02 cert]# cat >/etc/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/cert/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/lib/kube-proxy/log \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5 启动kube-proxy服务

[root@node01 cert]# mkdir -p /var/lib/kube-proxy/log
[root@node01 cert]# systemctl daemon-reload
[root@node01 cert]# systemctl enable kube-proxy
[root@node01 cert]# systemctl restart kube-proxy

[root@node02 cert]# mkdir -p /var/lib/kube-proxy/log
[root@node02 cert]# systemctl daemon-reload
[root@node02 cert]# systemctl enable kube-proxy
[root@node02 cert]# systemctl restart kube-proxy

必须先创建工作和日志目录;

6 检查启动结果(node01和node02上)

systemctl status kube-proxy|grep Active

确保状态为 active (running),否则查看日志,确认原因:

journalctl -u kube-proxy

查看监听端口状态

[root@node1 cert]# netstat -lnpt|grep kube-proxy
tcp        0      0  172.24.150.88:10256     0.0.0.0:*               LISTEN      9617/kube-proxy     
tcp        0      0  172.24.150.88:10249     0.0.0.0:*               LISTEN      9617/kube-proxy
10249:http prometheus metrics port;
10256:http healthz port;
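
Both ports serve plain HTTP, so they can be probed directly on the node; expect Prometheus metrics text from 10249 and an HTTP 200 from the healthz endpoint:

[root@node01 cert]# curl -s http://172.24.150.88:10249/metrics | head -n 5
[root@node01 cert]# curl -s -o /dev/null -w '%{http_code}\n' http://172.24.150.88:10256/healthz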

7 查看ipvs路由规则

[root@node01 cert]# yum install ipvsadm
[root@node01 cert]#ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  ->  172.24.150.85:6443            Masq    1      0          0         
  ->  172.24.150.86:6443            Masq    1      0          0         
  ->  172.24.150.87:6443            Masq    1      0          0 

可见将所有到 kubernetes cluster ip 443 端口的请求都转发到 kube-apiserver 的 6443 端口。
至此node节点部署完成。

四 验证集群功能

1 查看节点状况

[root@master01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
172.24.150.88   Ready    <none>   13h   v1.12.3
172.24.150.89   Ready    <none>   13h   v1.12.3

2 创建nginx web测试文件

[root@master01 ~]# cat >nginx-web.yml<<EOF 
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    tier: frontend
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-con
  labels:
    tier: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx-pod
        image: nginx
        ports:
        - containerPort: 80
EOF

执行nginx-web.yml文件:

[root@master01 ~]# kubectl create -f nginx-web.yml

3 查看各个Node上Pod IP的连通性

[root@master01 ~]#  kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-con-594b8d6b48-46gf7   1/1     Running   0          12h   172.30.21.2   172.24.150.88   <none>
nginx-con-594b8d6b48-9t2xt   1/1     Running   0          12h   172.30.14.2   172.24.150.89   <none>
nginx-con-594b8d6b48-vt589   1/1     Running   1          12h   172.30.21.3   172.24.150.88   <none>
As shown above, the nginx Pod IPs are 172.30.21.2, 172.30.14.2 and 172.30.21.3; ping these three IPs from node01 and node02 to check connectivity:
[root@node01 ~]# ping  172.30.21.3
PING 172.30.21.3 (172.30.21.3) 56(84) bytes of data.
64 bytes from 172.30.21.3: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from 172.30.21.3: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 172.30.21.3: icmp_seq=3 ttl=64 time=0.026 ms
64 bytes from 172.30.21.3: icmp_seq=4 ttl=64 time=0.022 ms
64 bytes from 172.30.21.3: icmp_seq=5 ttl=64 time=0.022 ms
64 bytes from 172.30.21.3: icmp_seq=6 ttl=64 time=0.024 ms
64 bytes from 172.30.21.3: icmp_seq=7 ttl=64 time=0.021 ms
64 bytes from 172.30.21.3: icmp_seq=8 ttl=64 time=0.020 ms
64 bytes from 172.30.21.3: icmp_seq=9 ttl=64 time=0.022 ms
^C
--- 172.30.21.3 ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 7999ms
rtt min/avg/max/mdev = 0.020/0.029/0.074/0.017 ms
[root@node02 ~]# ping  172.30.21.3
PING 172.30.21.3 (172.30.21.3) 56(84) bytes of data.
64 bytes from 172.30.21.3: icmp_seq=1 ttl=63 time=0.415 ms
64 bytes from 172.30.21.3: icmp_seq=2 ttl=63 time=0.268 ms
64 bytes from 172.30.21.3: icmp_seq=3 ttl=63 time=0.215 ms
64 bytes from 172.30.21.3: icmp_seq=4 ttl=63 time=0.205 ms
64 bytes from 172.30.21.3: icmp_seq=5 ttl=63 time=0.208 ms
64 bytes from 172.30.21.3: icmp_seq=6 ttl=63 time=0.211 ms
64 bytes from 172.30.21.3: icmp_seq=7 ttl=63 time=0.216 ms
64 bytes from 172.30.21.3: icmp_seq=8 ttl=63 time=0.216 ms
64 bytes from 172.30.21.3: icmp_seq=9 ttl=63 time=0.203 ms
64 bytes from 172.30.21.3: icmp_seq=10 ttl=63 time=0.216 ms
64 bytes from 172.30.21.3: icmp_seq=11 ttl=63 time=0.216 ms
64 bytes from 172.30.21.3: icmp_seq=12 ttl=63 time=0.224 ms
64 bytes from 172.30.21.3: icmp_seq=13 ttl=63 time=0.217 ms

4 查看service的集群IP

[root@master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1       <none>        443/TCP        2d17h
nginx-web    NodePort    10.254.189.106   <none>        80:36545/TCP   12h

10.254.189.106 is the cluster IP of the nginx service, which load-balances across the three pods above.
Port 80 is the cluster-IP port and 36545 is the NodePort opened on each node, so the service can also be reached as nodeIP:nodePort.
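Before testing through the public IP, the cluster IP itself can be checked from any node running kube-proxy (ipvs binds the cluster IP locally); it should return the nginx welcome page:

[root@node01 ~]# curl -s http://10.254.189.106/ | head -n 4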
node01's public IP is 47.108.67.44; accessing it from a host completely unrelated to this k8s cluster:
[root@harbor1 ~]#  curl http://47.108.67.44:36545/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

文章作者: 阿培
版权声明: 本博客所有文章除特別声明外,均采用 CC BY 4.0 许可协议。转载请注明来源 阿培 !