    k8s single-master deployment


    Table of Contents

    - k8s single-master deployment
      - Etcd cluster deployment
        - Certificate creation
        - Etcd binary package
        - Config, binary, and certificate directories
        - Configure etcd on node01 and node02
      - Docker engine deployment
      - Flannel network configuration
      - Master node: the three control-plane components
        - Certificate creation
        - Configure Kubernetes
        - Token authentication
      - Node deployment
        - node02 deployment

    k8s single-master deployment

    k8s cluster environment plan

    Role         IP               Components
    k8s-master   192.168.136.88   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
    k8s-node01   192.168.136.40   kubelet, kube-proxy, docker, flannel, etcd
    k8s-node02   192.168.136.30   kubelet, kube-proxy, docker, flannel, etcd

    Self-signed SSL certificates

    Component        Certificates used
    etcd             ca.pem, server.pem, server-key.pem
    flannel          ca.pem, server.pem, server-key.pem
    kube-apiserver   ca.pem, server.pem, server-key.pem
    kubelet          ca.pem, ca-key.pem
    kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
    kubectl          ca.pem, admin.pem, admin-key.pem

    Etcd cluster deployment

    Certificate creation

    On the master

    [root@localhost ~]# mkdir k8s
    [root@localhost ~]# cd k8s/

    Download the certificate tools

    [root@localhost k8s]# mkdir etcd-cert
    [root@localhost k8s]# cd etcd-cert/
    [root@localhost etcd-cert]# ls    # copied in from the host machine
    cfssl  cfssl-certinfo  cfssljson

    Make the tools executable and put them on the PATH

    [root@localhost etcd-cert]# chmod +x cfssl*
    [root@localhost etcd-cert]# mv cfssl* /usr/local/bin/

    Define the CA configuration

    [root@promote k8s]# cd etcd-cert/
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
            "expiry": "87600h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          }
        }
      }
    }
    EOF

    Create the CA certificate signing request

    # create the CA signing request
    cat > ca-csr.json <<EOF
    {
      "CN": "etcd CA",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "Beijing",
          "ST": "Beijing"
        }
      ]
    }
    EOF

    Generate the CA certificate; this produces ca-key.pem and ca.pem

    [root@localhost etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

    Check that the files were generated

    [root@localhost etcd-cert]# ls
    ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

    Specify the three etcd node addresses so the peers can verify each other

    cat > server-csr.json <<EOF
    {
      "CN": "etcd",
      "hosts": [
        "192.168.136.88",
        "192.168.136.40",
        "192.168.136.30"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing"
        }
      ]
    }
    EOF

    Generate the etcd server certificate; this produces server-key.pem and server.pem

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

    All four certificates must be present

    [root@localhost etcd-cert]# ls
    ca-key.pem  ca.pem  server-key.pem  server.pem

    Etcd binary package

    Set up on the master node

    [root@localhost k8s]# cd /root/k8s/
    # copied in from the host machine:
    etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz

    Unpack etcd

    [root@localhost k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz

    Config, binary, and certificate directories

    [root@localhost k8s]# cd etcd-v3.3.10-linux-amd64/
    [root@promote etcd-v3.3.10-linux-amd64]# mkdir -p /opt/etcd/{cfg,bin,ssl}

    Move in the binaries

    [root@localhost etcd-v3.3.10-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/

    Copy the certificates

    [root@localhost ~]# cd /root/k8s/etcd-cert/
    [root@promote etcd-cert]# cp *.pem /opt/etcd/ssl/

    Create the etcd deployment script

    [root@localhost k8s]# vim etcd.sh
    #!/bin/bash
    # example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

    ETCD_NAME=$1
    ETCD_IP=$2
    ETCD_CLUSTER=$3

    WORK_DIR=/opt/etcd

    cat <<EOF >$WORK_DIR/cfg/etcd
    #[Member]
    ETCD_NAME="${ETCD_NAME}"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
    ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    EOF

    cat <<EOF >/usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=notify
    EnvironmentFile=${WORK_DIR}/cfg/etcd
    ExecStart=${WORK_DIR}/bin/etcd \
    --name=\${ETCD_NAME} \
    --data-dir=\${ETCD_DATA_DIR} \
    --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
    --advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=\${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=new \
    --cert-file=${WORK_DIR}/ssl/server.pem \
    --key-file=${WORK_DIR}/ssl/server-key.pem \
    --peer-cert-file=${WORK_DIR}/ssl/server.pem \
    --peer-key-file=${WORK_DIR}/ssl/server-key.pem \
    --trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
    --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    # implied by the behavior described below: start etcd (this blocks until the other members join)
    systemctl daemon-reload
    systemctl enable etcd
    systemctl restart etcd

    The script above generates two files: /usr/lib/systemd/system/etcd.service (the startup unit) and /opt/etcd/cfg/etcd (the configuration file).

    The command blocks, waiting for the other cluster members to join

    [root@localhost k8s]# bash etcd.sh etcd01 192.168.136.88 etcd02=https://192.168.136.40:2380,etcd03=https://192.168.136.30:2380

    Open another session; the etcd process is already running

    [root@localhost ~]# ps -ef | grep etcd
    or
    [root@localhost ~]# systemctl status etcd.service
    ● etcd.service - Etcd Server
       Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
       Active: activating (start) since Mon 2020-09-28 23:07:39 CST; 6s ago

    Copy the certificates (the whole /opt/etcd directory) to the other nodes

    [root@localhost k8s]# scp -r /opt/etcd/ root@192.168.136.40:/opt/
    [root@localhost k8s]# scp -r /opt/etcd/ root@192.168.136.30:/opt/

    Copy the systemd unit to the other nodes

    [root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.136.40:/usr/lib/systemd/system/
    [root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.136.30:/usr/lib/systemd/system/

    Configure etcd on node01 and node02

    Apply the following configuration on both node machines, changing the addresses to each node's own IP (the example below is node01; on node02 use ETCD_NAME="etcd03" and 192.168.136.30)

    [root@localhost ~]# vim /opt/etcd/cfg/etcd
    #[Member]
    ETCD_NAME="etcd02"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.136.40:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.136.40:2379"

    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.136.40:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.136.40:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.136.88:2380,etcd02=https://192.168.136.40:2380,etcd03=https://192.168.136.30:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"

    Start the service

    [root@localhost ssl]# systemctl start etcd
    [root@localhost ssl]# systemctl enable etcd

    Check the status

    [root@localhost ssl]# systemctl status etcd
    ● etcd.service - Etcd Server
       Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
       Active: active (running) since Mon 2020-09-28 23:17:22 CST; 10s ago

    On the master, check the cluster state (healthy members report "healthy")

    [root@localhost k8s]# cd /root/k8s/etcd-cert/
    [root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
    --endpoints="https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379" cluster-health

    Docker engine deployment

    Deploy the Docker engine on all node machines, i.e. install Docker on both nodes (see the Docker installation script; a sketch follows).
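    The installation script itself is not included in this article; as a minimal sketch (assuming CentOS 7 and the commonly used Aliyun docker-ce mirror), installing Docker on each node looks roughly like this:

    # a sketch, not the article's exact script; assumes CentOS 7
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum install -y docker-ce
    systemctl start docker
    systemctl enable docker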

    Flannel network configuration

    Flannel, developed by CoreOS, is a tool for multi-host Docker networking: it gives every container created on any node in the cluster a cluster-wide unique virtual IP address. Flannel allocates a subnet to each host; containers draw their IPs from that subnet, and those IPs are routable between hosts, so containers can communicate across hosts without NAT or port mapping.

    On the master

    Write the network range to be allocated into etcd

    [root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
    --endpoints="https://192.168.136.88:2379,https://192.168.136.30:2379,https://192.168.136.40:2379" \
    set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

    Output:
    { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

    vxlan: flannel's VXLAN backend tunnels container traffic between hosts over UDP, forming a logical overlay network.

    Verify that the configuration was written

    [root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
    --endpoints="https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379" \
    get /coreos.com/network/config

    Output:
    { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

    Flannel configuration steps

    Apply the following configuration on both node machines

    Unpack (on both nodes)

    [root@localhost ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
    flanneld
    mk-docker-opts.sh
    README.md

    Create the k8s working directory (on both nodes)

    [root@localhost ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
    [root@localhost ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

    Note: cfg holds configuration files, bin holds binaries, ssl holds certificates.

    Create the flanneld deployment script (on both nodes)

    [root@localhost ~]# vim flannel.sh
    #!/bin/bash

    ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

    cat <<EOF >/opt/kubernetes/cfg/flanneld
    FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
    -etcd-cafile=/opt/etcd/ssl/ca.pem \
    -etcd-certfile=/opt/etcd/ssl/server.pem \
    -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    EOF

    cat <<EOF >/usr/lib/systemd/system/flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service

    [Service]
    Type=notify
    EnvironmentFile=/opt/kubernetes/cfg/flanneld
    ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
    ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable flanneld
    systemctl restart flanneld

    Note: flanneld talks to etcd on its client (service) port, 2379.

    Enable the flannel network (on both nodes)

    [root@localhost ~]# bash flannel.sh https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379

    Configure Docker to use flannel (on both nodes)

    [root@localhost ~]# vim /usr/lib/systemd/system/docker.service
    # around line 14, declare the environment file:
    EnvironmentFile=/run/flannel/subnet.env
    # around line 15, add $DOCKER_NETWORK_OPTIONS to the ExecStart line
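    After the edit, the [Service] section reads roughly as follows (a sketch; the surrounding ExecStart options vary with the Docker version):

    [Service]
    Type=notify
    EnvironmentFile=/run/flannel/subnet.env
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock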

    Restart the Docker service (on both nodes)

    [root@localhost ~]# systemctl daemon-reload
    [root@localhost ~]# systemctl restart docker

    Check whether docker0 picked up the flannel subnet

    Note: containers communicate with each other using addresses from the per-host flannel subnet (here, for example, 172.17.80.0).
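    The check itself can be run as follows (the subnet value is this deployment's example; yours will differ):

    [root@localhost ~]# cat /run/flannel/subnet.env    # e.g. FLANNEL_SUBNET=172.17.80.1/24
    [root@localhost ~]# ifconfig docker0               # should now hold an address from that subnet
    [root@localhost ~]# ifconfig flannel.1             # the VXLAN tunnel interface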

    Ping the other node's docker0 address to prove that flannel is routing between hosts; see the example below.
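    For example (172.17.93.1 is a hypothetical docker0 address on the other node; substitute the real one):

    [root@localhost ~]# ping -c 3 172.17.93.1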

    Create a container

    [root@localhost ~]# docker run -it centos:7 /bin/bash
    # inside the container:
    [root@<container-id> /]# yum install net-tools -y

    The container's IP from the node's flannel subnet is now visible
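    For example, inside the container (the address is illustrative):

    [root@<container-id> /]# ifconfig eth0    # e.g. inet 172.17.80.2, from this node's flannel subnet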

    Containers on different nodes can reach each other
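    For example, from the container on node01 (172.17.93.2 is a hypothetical container IP on node02):

    [root@<container-id> /]# ping -c 3 172.17.93.2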

    Master node: configure the three control-plane components

    Three components must be brought up on the master: first the apiserver, second the Scheduler, third the Controller Manager.

    Master configuration

    Certificate creation

    On the master, unpack the scripts (apiserver.sh and friends)

    [root@localhost k8s]# mkdir master
    [root@localhost k8s]# cd master/
    [root@localhost master]# unzip master.zip
    [root@localhost master]# ls
    apiserver.sh  controller-manager.sh  scheduler.sh
    [root@localhost master]# chmod +x controller-manager.sh

    Create the working directory (cfg: configuration files; bin: binaries; ssl: certificates)

    [root@localhost master]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p

    Create the certificate directory and CA configuration

    [root@localhost k8s]# cd /root/k8s/
    [root@localhost k8s]# mkdir k8s-cert
    [root@localhost k8s]# cd k8s-cert/
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
            "expiry": "87600h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          }
        }
      }
    }
    EOF

    # ----- CA certificate signing request -----
    cat > ca-csr.json <<EOF
    {
      "CN": "kubernetes",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "Beijing",
          "ST": "Beijing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF

    Generate the CA certificate

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

    Create the server certificate signing request

    cat > server-csr.json <<EOF
    {
      "CN": "kubernetes",
      "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.136.88",
        "192.168.136.60",
        "192.168.136.100",
        "192.168.136.10",
        "192.168.136.20",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF

    (The addresses beyond this cluster's plan appear to reserve SANs for an additional master and load balancers in a later multi-master setup.)

    Generate the server certificate

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

    Create the admin certificate signing request

    cat > admin-csr.json <<EOF
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF

    Generate the admin certificate

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

    If this errors out, re-check the JSON configuration files.

    Create the kube-proxy certificate signing request

    cat > kube-proxy-csr.json <<EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF

    Generate the kube-proxy certificate

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

    Verify that none of the eight certificate files are missing

    [root@localhost k8s-cert]# ls *.pem
    admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

    Copy the CA and server certificates into the ssl directory

    cp ca*pem server*pem /opt/kubernetes/ssl/

    Configure Kubernetes

    Unpack the Kubernetes server tarball

    [root@localhost k8s]# cd /root/k8s/
    [root@localhost k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
    [root@localhost k8s]# cd /root/k8s/kubernetes/server/bin

    Copy the key binaries

    [root@localhost k8s]# cd /root/k8s/kubernetes/server/bin/
    [root@localhost bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/

    Token authentication

    Generate a random token

    [root@localhost bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    41b1afc1eff1d13042da195f37460bf5
    (each run produces a different random serial)

    Configure the token file

    [root@localhost bin]# vim /opt/kubernetes/cfg/token.csv
    41b1afc1eff1d13042da195f37460bf5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    (format: token,user,uid,"group")

    Start the apiserver service

    [root@localhost bin]# cd /root/k8s/master/
    [root@localhost master]# bash apiserver.sh 192.168.136.88 https://192.168.136.88:2379,https://192.168.136.30:2379,https://192.168.136.40:2379
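    master.zip itself is not reproduced in this article. As a rough sketch of what apiserver.sh generates in this style of deployment (the flag values are representative assumptions, not the verbatim script), it writes a config file and a systemd unit, then starts the service:

    #!/bin/bash
    # sketch of apiserver.sh -- usage: ./apiserver.sh <MASTER_IP> <ETCD_SERVERS>
    MASTER_ADDRESS=$1
    ETCD_SERVERS=$2

    cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
    KUBE_APISERVER_OPTS="--logtostderr=true \\
    --v=4 \\
    --etcd-servers=${ETCD_SERVERS} \\
    --bind-address=${MASTER_ADDRESS} \\
    --secure-port=6443 \\
    --advertise-address=${MASTER_ADDRESS} \\
    --allow-privileged=true \\
    --service-cluster-ip-range=10.0.0.0/24 \\
    --authorization-mode=RBAC,Node \\
    --enable-bootstrap-token-auth \\
    --token-auth-file=/opt/kubernetes/cfg/token.csv \\
    --service-node-port-range=30000-50000 \\
    --tls-cert-file=/opt/kubernetes/ssl/server.pem \\
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
    --etcd-cafile=/opt/etcd/ssl/ca.pem \\
    --etcd-certfile=/opt/etcd/ssl/server.pem \\
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    EOF

    cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server

    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
    ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl restart kube-apiserver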

    Check that the ports are open (both the HTTPS port 6443 and the local HTTP port 8080 should appear)

    [root@localhost cfg]# netstat -ntap | grep 6443
    tcp   0   0 192.168.136.88:6443   0.0.0.0:*   LISTEN   18333/kube-apiserve
    [root@localhost cfg]# netstat -ntap | grep 8080
    tcp   0   0 127.0.0.1:8080        0.0.0.0:*   LISTEN   18333/kube-apiserve

    Start the scheduler service

    [root@localhost master]# ./scheduler.sh 127.0.0.1

    Start the controller-manager

    [root@localhost master]# ./controller-manager.sh 127.0.0.1

    Check the master component status

    [root@localhost master]# /opt/kubernetes/bin/kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health":"true"}
    etcd-2               Healthy   {"health":"true"}
    etcd-1               Healthy   {"health":"true"}

    Node deployment

    Copy kubelet and kube-proxy to the node machines

    [root@localhost bin]# cd /root/k8s/kubernetes/server/bin/
    [root@localhost bin]# scp kubelet kube-proxy root@192.168.136.40:/opt/kubernetes/bin/
    [root@localhost bin]# scp kubelet kube-proxy root@192.168.136.30:/opt/kubernetes/bin/

    On node01 (copy node.zip into /root, then unpack)

    [root@localhost ~]# unzip node.zip

    On the master

    Copy in the kubeconfig script

    [root@localhost k8s]# mkdir kubeconfig
    [root@localhost k8s]# cd kubeconfig/
    # kubeconfig.sh copied in from the host machine; rename it:
    [root@localhost kubeconfig]# mv kubeconfig.sh kubeconfig

    Configure kubeconfig

    Look up the token

    [root@localhost kubeconfig]# cat /opt/kubernetes/cfg/token.csv
    41b1afc1eff1d13042da195f37460bf5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

    Edit the kubeconfig script

    [root@localhost kubeconfig]# vim kubeconfig

    Delete the token-creation block at the top of the script:

    # create the TLS bootstrapping token
    #BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

    cat > token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF

    Then read the token actually in use (the value shown here comes from a different capture than the one above; use whatever is in your own token.csv):

    [root@localhost ~]# cat /opt/kubernetes/cfg/token.csv
    6351d652249951f79c33acdab329e4c4,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

    and set that token in the client-authentication section of the script:

    [root@localhost kubeconfig]# vim kubeconfig
    --token=6351d652249951f79c33acdab329e4c4 \
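    For reference, the rest of the script builds the two kubeconfig files with kubectl config commands; roughly (a sketch reconstructed from the output shown below, not the verbatim script):

    APISERVER=$1
    SSL_DIR=$2
    export KUBE_APISERVER="https://$APISERVER:6443"

    # bootstrap.kubeconfig (used by kubelet for TLS bootstrapping)
    kubectl config set-cluster kubernetes \
      --certificate-authority=$SSL_DIR/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config set-credentials kubelet-bootstrap \
      --token=6351d652249951f79c33acdab329e4c4 \
      --kubeconfig=bootstrap.kubeconfig        # token edited above to match token.csv
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

    # kube-proxy.kubeconfig (authenticates with the kube-proxy client certificate)
    kubectl config set-cluster kubernetes \
      --certificate-authority=$SSL_DIR/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-credentials kube-proxy \
      --client-certificate=$SSL_DIR/kube-proxy.pem \
      --client-key=$SSL_DIR/kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig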

    Set the environment variable

    [root@localhost kubeconfig]# vim /etc/profile
    # append at the end:
    export PATH=$PATH:/opt/kubernetes/bin/
    [root@localhost kubeconfig]# source /etc/profile

    Generate the configuration files

    bash kubeconfig 192.168.136.88 /root/k8s/k8s-cert/

    On success:

    Cluster "kubernetes" set.
    User "kubelet-bootstrap" set.
    Context "default" created.
    Switched to context "default".
    Cluster "kubernetes" set.
    User "kube-proxy" set.
    Context "default" created.
    Switched to context "default".

    Check that the files were generated

    [root@localhost kubeconfig]# ls
    bootstrap.kubeconfig  kube-proxy.kubeconfig

    Copy the configuration files to the node machines

    scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.136.40:/opt/kubernetes/cfg/
    scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.136.30:/opt/kubernetes/cfg/

    Create the bootstrap role binding that grants kubelet-bootstrap permission to connect to the apiserver and request certificate signing (critical)

    kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

    On node01

    Start the kubelet service

    [root@localhost ~]# bash kubelet.sh 192.168.136.40
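    node.zip itself is not reproduced in this article. As a rough sketch of kubelet.sh (the kubelet flags below match the process listing shown next; the kubelet.config values are representative assumptions):

    #!/bin/bash
    # sketch of kubelet.sh -- usage: ./kubelet.sh <NODE_IP>
    NODE_ADDRESS=$1

    cat <<EOF >/opt/kubernetes/cfg/kubelet
    KUBELET_OPTS="--logtostderr=true \\
    --v=4 \\
    --hostname-override=${NODE_ADDRESS} \\
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
    --config=/opt/kubernetes/cfg/kubelet.config \\
    --cert-dir=/opt/kubernetes/ssl \\
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
    EOF

    cat <<EOF >/opt/kubernetes/cfg/kubelet.config
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: ${NODE_ADDRESS}
    port: 10250
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.0.0.2
    clusterDomain: cluster.local.
    failSwapOn: false
    authentication:
      anonymous:
        enabled: true
    EOF

    cat <<EOF >/usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service

    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    KillMode=process

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet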

    Verify that kubelet started

    [root@localhost ~]# ps aux | grep kube
    root      82438  0.0  0.8 300552 16352 ?      Ssl  14:18   0:10 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
    root     109093 10.7  2.3 371788 44076 ?      Ssl  19:38   0:01 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.136.40 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
    root     109121  0.0  0.0 112724   988 pts/1  R+   19:38   0:00 grep --color=auto kube

    On the master

    Check the certificate request from node01 (currently pending approval):

    [root@localhost kubeconfig]# kubectl get csr
    NAME                                                   AGE   REQUESTOR           CONDITION
    node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ   93s   kubelet-bootstrap   Pending (waiting for the cluster to issue a certificate to this node)

    Approve the certificate for the node

    [root@localhost ~]# kubectl certificate approve node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ

    Check the certificate status again

    [root@localhost ~]# kubectl get csr
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ   4m34s   kubelet-bootstrap   Approved,Issued (the node has been admitted to the cluster)

    List the cluster nodes; node01 has joined successfully

    [root@localhost ~]# kubectl get node
    NAME             STATUS   ROLES    AGE     VERSION
    192.168.136.40   Ready    <none>   3m14s   v1.12.3

    On node01, start the kube-proxy service

    [root@localhost ~]# bash proxy.sh 192.168.136.40
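    proxy.sh is likewise not reproduced; here is a sketch consistent with the kube-proxy flags visible in the status output below (the cluster-cidr value is an assumption):

    #!/bin/bash
    # sketch of proxy.sh -- usage: ./proxy.sh <NODE_IP>
    NODE_ADDRESS=$1

    cat <<EOF >/opt/kubernetes/cfg/kube-proxy
    KUBE_PROXY_OPTS="--logtostderr=true \\
    --v=4 \\
    --hostname-override=${NODE_ADDRESS} \\
    --cluster-cidr=10.0.0.0/24 \\
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
    EOF

    cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target

    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kube-proxy
    ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy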

    Check that the service is running

    [root@localhost ~]# systemctl status kube-proxy.service
    ● kube-proxy.service - Kubernetes Proxy
       Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
       Active: active (running) since Sun 2020-10-04 19:55:28 CST; 17s ago
     Main PID: 112611 (kube-proxy)
        Tasks: 0
       Memory: 7.5M
       CGroup: /system.slice/kube-proxy.service
               ‣ 112611 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.136.40 --...

    node02 deployment

    On node01

    # copy the ready-made /opt/kubernetes directory to the other node, then modify it there
    [root@localhost ~]# scp -r /opt/kubernetes/ root@192.168.136.30:/opt/

    Let's see what is in it

    [root@localhost ~]# tree /opt/kubernetes/
    /opt/kubernetes/
    ├── bin
    │   ├── flanneld
    │   ├── kubelet
    │   ├── kube-proxy
    │   └── mk-docker-opts.sh
    ├── cfg
    │   ├── bootstrap.kubeconfig
    │   ├── flanneld
    │   ├── kubelet
    │   ├── kubelet.config
    │   ├── kubelet.kubeconfig
    │   ├── kube-proxy
    │   └── kube-proxy.kubeconfig
    └── ssl
        ├── kubelet-client-2020-10-04-19-43-17.pem
        ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2020-10-04-19-43-17.pem
        ├── kubelet.crt
        └── kubelet.key

    Copy the kubelet and kube-proxy unit files to node02

    scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.136.30:/usr/lib/systemd/system/

    On node02, make the changes. First delete the copied certificates; node02 will request its own shortly

    [root@localhost ~]# cd /opt/kubernetes/ssl/
    [root@localhost ssl]# rm -rf *

    Modify the three configuration files kubelet, kubelet.config, and kube-proxy, changing node01's IP to node02's (a sed shortcut is sketched after the vim steps below)

    [root@localhost cfg]# cd /opt/kubernetes/cfg/
    [root@localhost cfg]# vim kubelet

    [root@localhost cfg]# vim kubelet.config

    [root@localhost cfg]# vim kube-proxy
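    Since only the node IP changes, the three edits can also be done in one shot (a shortcut, assuming nothing else differs between the nodes):

    [root@localhost cfg]# sed -i 's/192.168.136.40/192.168.136.30/g' kubelet kubelet.config kube-proxy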

    Start the services

    [root@localhost cfg]# systemctl start kubelet.service
    [root@localhost cfg]# systemctl enable kubelet.service
    [root@localhost cfg]# systemctl start kube-proxy.service
    [root@localhost cfg]# systemctl enable kube-proxy.service

    On the master, check the new request

    [root@localhost ~]# kubectl get csr
    NAME                                                   AGE   REQUESTOR           CONDITION
    node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ   37m   kubelet-bootstrap   Approved,Issued
    node-csr-l0pxa_bwNlGKIv1LM3zaeZr62kSXTYpnloFgJ9kEHqk   87s   kubelet-bootstrap   Pending

    Approve the request so the node can join the cluster

    [root@localhost ~]# kubectl certificate approve node-csr-l0pxa_bwNlGKIv1LM3zaeZr62kSXTYpnloFgJ9kEHqk

    Check that both nodes have joined the k8s cluster

    [root@localhost k8s]# kubectl get node
    NAME             STATUS   ROLES    AGE   VERSION
    192.168.136.30   Ready    <none>   57s   v1.12.3
    192.168.136.40   Ready    <none>   34m   v1.12.3