Table of Contents
Preface
1. Lab Environment
2. Environment Deployment
3. K8S Single-Master Deployment
    3.1 Operations on master01
    3.2 Deploying master02
    3.3 Load balancing: Nginx01 and Nginx02 configuration
4. Troubleshooting
Preface
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It is a portable, extensible platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large, rapidly growing ecosystem, and its services, support, and tools are widely available. The name Kubernetes comes from Greek, meaning "helmsman" or "pilot". Google open-sourced the Kubernetes project in 2014; it builds on more than a decade of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.
Kubernetes provides you with:
(1) Service discovery and load balancing
Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.
(2) Storage orchestration
Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.
(3) Automated rollouts and rollbacks
You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
(4) Automatic bin packing
Kubernetes lets you specify how much CPU and memory (RAM) each container needs. When containers declare resource requests, Kubernetes can make better decisions about how to manage container resources.
(5) Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to a user-defined health check, and does not advertise them to clients until they are ready to serve.
(6) Secret and configuration management
Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.
(7) Batch execution
Kubernetes can run one-off and scheduled jobs, covering batch data processing and analysis scenarios.
1. Lab Environment
Kubernetes official site: https://kubernetes.io/
2. Environment Deployment
3. K8S Single-Master Deployment
3.1 Operations on master01
(1) Create the certificates
[root@localhost ~] mkdir k8s
[root@localhost ~] cd k8s/
[root@localhost k8s] ls    //dragged in from the host machine
etcd-cert.sh etcd.sh
[root@localhost k8s] mkdir etcd-cert
[root@localhost k8s] mv etcd-cert.sh etcd-cert
Official documentation on certificates: https://kubernetes.io/zh/docs/concepts/cluster-administration/certificates/
//Download the certificate tooling
[root@localhost k8s] vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
//Download the official cfssl binaries
[root@localhost k8s] bash cfssl.sh
[root@localhost k8s] ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
//Execute permission was already added by the script above
//Start making the certificates
//cfssl generates certificates; cfssljson produces certificate files from cfssl's JSON output;
//cfssl-certinfo displays certificate information
//Define the CA certificate
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
//Define the CA's certificate signing request (CSR)
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
//Generate the CA certificate, producing ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/01/13 16:32:56 [INFO] generating a new CA key and certificate from CSR
2020/01/13 16:32:56 [INFO] generate received request
2020/01/13 16:32:56 [INFO] received CSR
2020/01/13 16:32:56 [INFO] generating key: rsa-2048
2020/01/13 16:32:56 [INFO] encoded CSR
2020/01/13 16:32:56 [INFO] signed certificate with serial number 595395605361409801445623232629543954602649157326
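//Optional sanity check: with the CA generated, cfssl-certinfo (downloaded above) can confirm what was issued
[root@localhost k8s] cfssl-certinfo -cert ca.pem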
//Specify the communication hosts for the three etcd nodes (server certificate)
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"192.168.75.200",
"192.168.75.201",
"192.168.75.144"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
//Generate the etcd server certificate, producing server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/01/13 17:01:30 [INFO] generate received request
2020/01/13 17:01:30 [INFO] received CSR
2020/01/13 17:01:30 [INFO] generating key: rsa-2048
2020/01/13 17:01:30 [INFO] encoded CSR
2020/01/13 17:01:30 [INFO] signed certificate with serial number 202782620910318985225034109831178600652439985681
2020/01/13 17:01:30 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
etcd binary releases:
https://github.com/etcd-io/etcd/releases
Copy the tarball to the CentOS 7 host.
[root@localhost etcd-cert] ls
ca-config.json etcd-cert.sh server-csr.json
ca.csr etcd-v3.3.10-linux-amd64.tar.gz server-key.pem
ca-csr.json flannel-v0.10.0-linux-amd64.tar.gz server.pem
ca-key.pem kubernetes-server-linux-amd64.tar.gz
ca.pem server.csr
[root@localhost etcd-cert] mv *.tar.gz ../
[root@localhost k8s] ls
cfssl.sh etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@localhost k8s] tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s] ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
[root@localhost k8s] mkdir /opt/etcd/{cfg,bin,ssl} -p    //config files, binaries, certificates
[root@localhost k8s] mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
//Copy the certificates
[root@localhost k8s] cp etcd-cert/*.pem /opt/etcd/ssl/
//This command will appear to hang, waiting for the other nodes to join
[root@localhost k8s] bash etcd.sh etcd01 192.168.75.200 etcd02=https://192.168.75.201:2380,etcd03=https://192.168.75.144:2380
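//etcd.sh itself is not reproduced in this article; judging from the config files edited on the nodes below, a minimal sketch of what it is expected to generate (argument names are assumptions):
#!/bin/bash
# Sketch only -- the real etcd.sh was dragged in from the host.
# Usage: etcd.sh <name> <ip> <peer list>
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

cat <<EOF >/opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# The real script also writes /usr/lib/systemd/system/etcd.service
# (with the TLS flags pointing at /opt/etcd/ssl) and then starts the service:
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd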
//Open another session and you will see that the etcd process is already running
[root@localhost ~] ps -ef | grep etcd
//Copy the certificates to the other nodes
[root@localhost k8s] scp -r /opt/etcd/ root@192.168.75.201:/opt/
[root@localhost k8s] scp -r /opt/etcd/ root@192.168.75.144:/opt
//Copy the startup script (systemd unit) to the other nodes
[root@localhost k8s] scp /usr/lib/systemd/system/etcd.service root@192.168.75.201:/usr/lib/systemd/system/
[root@localhost k8s] scp /usr/lib/systemd/system/etcd.service root@192.168.75.144:/usr/lib/systemd/system/
(2) Modify the config on the node01 node
[root@localhost ~] vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.75.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.75.201:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.75.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.75.201:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.75.200:2380,etcd02=https://192.168.75.201:2380,etcd03=https://192.168.75.144:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
//Start the service
[root@localhost ssl] systemctl start etcd
[root@localhost ssl] systemctl status etcd
(3) Modify the config on the node02 node
[root@localhost ~] vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.75.144:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.75.144:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.75.144:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.75.144:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.75.200:2380,etcd02=https://192.168.75.201:2380,etcd03=https://192.168.75.144:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
//Start the service
[root@localhost ssl] systemctl start etcd
[root@localhost ssl] systemctl status etcd
//Check cluster health
[root@localhost etcd-cert] /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379" cluster-health
member 3eae9a550e2e3ec is healthy: got healthy result from https://192.168.75.144:2379
member 26cd4dcf17bc5cbd is healthy: got healthy result from https://192.168.75.201:2379
member 2fcd2df8a9411750 is healthy: got healthy result from https://192.168.75.200:2379
cluster is healthy
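//etcd 3.3 also ships the v3 API; an equivalent health check for comparison:
[root@localhost etcd-cert] ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=ca.pem --cert=server.pem --key=server-key.pem --endpoints="https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379" endpoint health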
(4) Deploy the Docker engine
Deploy the Docker engine on all node nodes (see the Docker installation script).

flannel network configuration
//Write the allocated subnet range into etcd for flannel to use
[root@localhost etcd-cert] /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
//View the value that was written
[root@localhost etcd-cert] /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
//Copy flannel to all node nodes (it only needs to be deployed on the nodes)
[root@localhost k8s] scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.75.201:/root
[root@localhost k8s] scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.75.144:/root

//On all node nodes, extract the tarball
[root@localhost ~] tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
//k8s working directory
[root@localhost ~] mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost ~] mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@localhost ~] vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
//Enable the flannel network
[root@localhost ~] bash flannel.sh https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
//Configure Docker to use the flannel network
[root@localhost ~] vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@localhost ~] cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.64.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
//Note: bip specifies the subnet Docker uses at startup
DOCKER_NETWORK_OPTIONS=" --bip=172.17.64.1/24 --ip-masq=false --mtu=1450"
//Restart the Docker service
[root@localhost ~] systemctl daemon-reload
[root@localhost ~] systemctl restart docker
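//Quick check: after the restart, docker0 should sit inside the flannel-assigned subnet
[root@localhost ~] ip -4 addr show docker0 | grep inet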
//Check the flannel network
[root@localhost ~] ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.64.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::fc7c:e1ff:fe1d:224  prefixlen 64  scopeid 0x20<link>
        ether fe:7c:e1:1d:02:24  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26  overruns 0  carrier 0  collisions 0
//Ping the other node's docker0 interface to prove that flannel is routing between hosts
[root@localhost ssl] ping 172.17.20.1
PING 172.17.20.1 (172.17.20.1) 56(84) bytes of data.
64 bytes from 172.17.20.1: icmp_seq=1 ttl=64 time=3.04 ms
64 bytes from 172.17.20.1: icmp_seq=2 ttl=64 time=0.481 ms
^C
--- 172.17.20.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.481/1.763/3.046/1.283 ms
[root@localhost ssl]#
[root@localhost ~] docker run -it centos:7 /bin/bash
[root@5f9a65565b53 /] yum install net-tools -y
[root@5f9a65565b53 /] ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.64.2  netmask 255.255.255.0  broadcast 172.17.84.255
        ether 02:42:ac:11:54:02  txqueuelen 0  (Ethernet)
        RX packets 18192  bytes 13930229 (13.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6179  bytes 337037 (329.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

//Test again that the centos:7 containers on the two nodes can ping each other
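//For example, from the node01 container above (172.17.64.2), ping the address the node02 container received; 172.17.20.2 here is a placeholder:
[root@5f9a65565b53 /] ping -c 3 172.17.20.2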
(5) Deploy the master components
//On the master: generate the api-server certificates
[root@localhost k8s] unzip master.zip
[root@localhost k8s] mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost k8s] mkdir k8s-cert
[root@localhost k8s] cd k8s-cert/
[root@localhost k8s-cert]# ls
k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.75.200", //master1
"192.168.75.122", //master2
"192.168.75.166", //vip
"192.168.75.155", //lb (master)
"192.168.75.177", //lb (backup)
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
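//Caveat: the //master1-style annotations in the hosts list above are notes for the reader; JSON has no comment syntax, so the server-csr.json that cfssl reads must not contain them. If they slip into the generated file, one way to strip them (safe here only because no value in this file contains //):
[root@localhost k8s-cert] sed -i 's# *//.*##' server-csr.json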
//Generate the k8s certificates
[root@localhost k8s-cert] bash k8s-cert.sh
2020/01/15 23:31:03 [INFO] generating a new CA key and certificate from CSR
2020/01/15 23:31:03 [INFO] generate received request
2020/01/15 23:31:03 [INFO] received CSR
2020/01/15 23:31:03 [INFO] generating key: rsa-2048
2020/01/15 23:31:03 [INFO] encoded CSR
2020/01/15 23:31:03 [INFO] signed certificate with serial number 200957285008634365032949076461783766565292979186
2020/01/15 23:31:03 [INFO] generate received request
2020/01/15 23:31:03 [INFO] received CSR
2020/01/15 23:31:03 [INFO] generating key: rsa-2048
2020/01/15 23:31:03 [INFO] encoded CSR
2020/01/15 23:31:03 [INFO] signed certificate with serial number 531833477097469967316212525772159687029821034128
2020/01/15 23:31:03 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/15 23:31:04 [INFO] generate received request
2020/01/15 23:31:04 [INFO] received CSR
2020/01/15 23:31:04 [INFO] generating key: rsa-2048
2020/01/15 23:31:04 [INFO] encoded CSR
2020/01/15 23:31:04 [INFO] signed certificate with serial number 684040931566157342098288079791465097738732990534
2020/01/15 23:31:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/15 23:31:04 [INFO] generate received request
2020/01/15 23:31:04 [INFO] received CSR
2020/01/15 23:31:04 [INFO] generating key: rsa-2048
2020/01/15 23:31:04 [INFO] encoded CSR
2020/01/15 23:31:04 [INFO] signed certificate with serial number 681469506930419424853732902538890426797365900103
2020/01/15 23:31:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@localhost k8s-cert] ls *pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@localhost k8s-cert] cp ca*pem server*pem /opt/kubernetes/ssl/
[root@localhost k8s-cert] cd ..
//Extract the kubernetes tarball
[root@localhost k8s] tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@localhost k8s] cd /root/k8s/kubernetes/server/bin
//Copy the key command binaries
[root@localhost bin] cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@localhost k8s] cd /root/k8s
[root@localhost k8s] vim /opt/kubernetes/cfg/token.csv
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
(fields: token, user name, UID, role)
//A random token can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
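//For example, to mint a fresh token and write token.csv in one step:
[root@localhost k8s] BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@localhost k8s] echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/cfg/token.csv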
//With the binaries, token, and certificates all in place, start the apiserver
[root@localhost k8s] bash apiserver.sh 192.168.75.200 https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
//Check that the process started successfully
[root@localhost k8s] ps aux | grep kube
//View the generated config file
[root@localhost k8s] cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379 \
--bind-address=192.168.75.200 \
--secure-port=6443 \
--advertise-address=192.168.75.200 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
//The HTTPS port the apiserver listens on
[root@localhost k8s] netstat -ntap | grep 6443
tcp 0 0 192.168.75.200:6443 0.0.0.0:* LISTEN 46459/kube-apiserve
tcp 0 0 192.168.75.200:6443 192.168.75.200:36806 ESTABLISHED 46459/kube-apiserve
tcp 0 0 192.168.75.200:36806 192.168.75.200:6443 ESTABLISHED 46459/kube-apiserve
[root@localhost k8s] netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 46459/kube-apiserve
//Start the scheduler service
[root@localhost k8s] ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@localhost k8s] ps aux | grep ku
[root@localhost k8s] chmod +x controller-manager.sh
//Start the controller-manager
[root@localhost k8s] ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
//Check the status of the master components
[root@localhost k8s] /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
--------------------------------------------------Node deployment------------------------------------------
//On the master
//Copy kubelet and kube-proxy to the node nodes
[root@localhost bin] scp kubelet kube-proxy root@192.168.75.201:/opt/kubernetes/bin/
root@192.168.75.201's password:
kubelet 100% 168MB 27.9MB/s 00:06
kube-proxy 100% 48MB 31.5MB/s 00:01
[root@localhost bin]# scp kubelet kube-proxy root@192.168.75.144:/opt/kubernetes/bin/
root@192.168.75.144's password:
kubelet 100% 168MB 56.1MB/s 00:03
kube-proxy 100% 48MB 37.3MB/s 00:01
//On node01 (copy node.zip to /root, then extract)
[root@localhost ~] ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip 公共 视频 文档 音乐
flannel.sh initial-setup-ks.cfg README.md 模板 图片 下载 桌面
//Extracting node.zip yields kubelet.sh and proxy.sh
[root@localhost ~] unzip node.zip
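//kubelet.sh itself is not printed in this article; from the kubelet command line visible in the process list further down, a sketch of the config it is expected to write (single argument: the node IP; proxy.sh follows the same pattern for kube-proxy):
#!/bin/bash
# Sketch only -- the real kubelet.sh came from node.zip.
NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
# (the real script also writes the kubelet.config file and a systemd unit, then starts the service)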
//On the master
[root@localhost k8s] mkdir kubeconfig
[root@localhost k8s] cd kubeconfig/
//Copy in the kubeconfig.sh file and rename it
[root@localhost kubeconfig] mv kubeconfig.sh kubeconfig
[root@localhost kubeconfig] vim kubeconfig
----------------Delete the following section----------------------------------------------------------------------
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
//Get the token value from token.csv
[root@localhost ~] cat /opt/kubernetes/cfg/token.csv
6351d652249951f79c33acdab329e4c4,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
//In the kubeconfig script, change the token to this value
# 设置客户端认证参数
kubectl config set-credentials kubelet-bootstrap \
--token=6351d652249951f79c33acdab329e4c4 \
--kubeconfig=bootstrap.kubeconfig
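//For context, the part of the kubeconfig script that remains after the deletion typically consists of kubectl config calls like these (a sketch; APISERVER and SSL_DIR stand for the script's two positional arguments, and the token is the hard-coded value above); this is what produces the "Cluster ... set / User ... set" output below:
APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://$APISERVER:6443"

# bootstrap.kubeconfig -- used by kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig -- same pattern, with the kube-proxy client certs
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig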
//Set the PATH environment variable (can also be written into /etc/profile)
[root@localhost kubeconfig] export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost kubeconfig] kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
//Generate the kubeconfig files
[root@localhost kubeconfig] bash kubeconfig 192.168.75.200 /opt/k8s-cert
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@localhost kubeconfig] ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
//Copy the config files to the node nodes
[root@localhost kubeconfig] scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.75.201:/opt/kubernetes/cfg/
root@192.168.75.201's password:
bootstrap.kubeconfig                     100% 2169     2.1MB/s   00:00
kube-proxy.kubeconfig                    100% 6275     4.2MB/s   00:00
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.75.144:/opt/kubernetes/cfg/
root@192.168.75.144's password:
bootstrap.kubeconfig                     100% 2169     1.8MB/s   00:00
kube-proxy.kubeconfig                    100% 6275     5.3MB/s   00:00
//Create the bootstrap role binding so kubelets can request certificate signing from the apiserver (key step)
[root@localhost kubeconfig] kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
(6) On the node01 node
[root@localhost ~] bash kubelet.sh 192.168.75.201
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
//Verify that the kubelet service started
[root@localhost ~] ps aux | grep kube
root     106845  1.4  1.1 371744 44780 ?   Ssl  00:34   0:01 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.75.201 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root     106876  0.0  0.0 112676   984 pts/0   S+  00:35   0:00 grep --color=auto kube
//On the master
//Check the certificate request from node01
[root@localhost kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A   4m27s   kubelet-bootstrap   Pending (waiting for the cluster to issue this node a certificate)
[root@localhost kubeconfig] kubectl certificate approve node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A
certificatesigningrequest.certificates.k8s.io/node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A approved
//Check the certificate status again
[root@localhost kubeconfig] kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A   8m56s   kubelet-bootstrap   Approved,Issued (the node has been allowed to join the cluster)
//List the cluster nodes; node01 has joined successfully
[root@localhost kubeconfig] kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.75.201 Ready <none> 118s v1.12.3
//On node01, start the proxy service
[root@localhost ~] bash proxy.sh 192.168.75.201
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@localhost ~] systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2020-02-02 00:47:29 CST; 11s ago
 Main PID: 108006 (kube-proxy)
   Memory: 7.5M
   CGroup: /system.slice/kube-proxy.service
           ‣ 108006 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=1...

2月 02 00:47:32 localhost.localdomain kube-proxy[108006]: I0202 00:47:32.040427  108006 config...te
2月 02 00:47:32 localhost.localdomain kube-proxy[108006]: I0202 00:47:32.057419  108006 config...te
2月 02 00:47:34 localhost.localdomain kube-proxy[108006]: I0202 00:47:34.059627  108006 config...te
2月 02 00:47:34 localhost.localdomain kube-proxy[108006]: I0202 00:47:34.076914  108006 config...te
2月 02 00:47:36 localhost.localdomain kube-proxy[108006]: I0202 00:47:36.091570  108006 config...te
2月 02 00:47:36 localhost.localdomain kube-proxy[108006]: I0202 00:47:36.105162  108006 config...te
2月 02 00:47:38 localhost.localdomain kube-proxy[108006]: I0202 00:47:38.103518  108006 config...te
2月 02 00:47:38 localhost.localdomain kube-proxy[108006]: I0202 00:47:38.115902  108006 config...te
2月 02 00:47:40 localhost.localdomain kube-proxy[108006]: I0202 00:47:40.113628  108006 config...te
2月 02 00:47:40 localhost.localdomain kube-proxy[108006]: I0202 00:47:40.125818  108006 config...te
Hint: Some lines were ellipsized, use -l to show in full.
This gives you the single-master deployment environment; everything below builds on it.

3.2 Deploying master02
//First disable the firewall and SELinux
//On master01
//Copy the kubernetes directory to master02
[root@localhost k8s] scp -r /opt/kubernetes/ root@192.168.75.121:/opt
The authenticity of host '192.168.75.121 (192.168.75.121)' can't be established.
ECDSA key fingerprint is SHA256:IJ43xXlBWD7qPaL/uFG+4qW4qd7C8xBqUttHiYME8YE.
ECDSA key fingerprint is MD5:cf:3e:dc:e5:89:86:e9:43:38:ee:31:9d:8c:d4:75:9f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.75.121' (ECDSA) to the list of known hosts.
root@192.168.75.121's password:
token.csv                        100%   84    86.1KB/s   00:00
kube-apiserver                   100%  939     1.2MB/s   00:00
kube-scheduler                   100%   94    52.0KB/s   00:00
kube-controller-manager          100%  483   446.5KB/s   00:00
kube-apiserver                   100%  184MB  30.6MB/s   00:06
kubectl                          100%   55MB  32.1MB/s   00:01
kube-controller-manager          100%  155MB  31.1MB/s   00:05
kube-scheduler                   100%   55MB  30.7MB/s   00:01
ca-key.pem                       100% 1679   741.3KB/s   00:00
ca.pem                           100% 1359     1.5MB/s   00:00
server-key.pem                   100% 1675     1.3MB/s   00:00
server.pem                       100% 1643     1.6MB/s   00:00
//Copy the three master component unit files: kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service
[root@localhost k8s] scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.75.121:/usr/lib/systemd/system/
root@192.168.75.121's password:
kube-apiserver.service           100%  282   268.1KB/s   00:00
kube-controller-manager.service  100%  317   294.2KB/s   00:00
kube-scheduler.service           100%  281   257.5KB/s   00:00
//On master02
//Modify the IPs in the kube-apiserver config file
[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.75.200:2379,https://192.168.75.201:2379,https://192.168.75.144:2379 \
--bind-address=192.168.75.121 \
--secure-port=6443 \
--advertise-address=192.168.75.121 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
//Important: master02 must also have the etcd certificates
//Copy the existing etcd certificates from master01 to master02
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.75.121:/opt/
root@192.168.75.121's password:
etcd                             100%  523   415.0KB/s   00:00
etcd                             100%   18MB  42.7MB/s   00:00
etcdctl                          100%   15MB  35.2MB/s   00:00
ca-key.pem                       100% 1675   612.1KB/s   00:00
ca.pem                           100% 1265     1.0MB/s   00:00
server-key.pem                   100% 1679     1.7MB/s   00:00
server.pem                       100% 1338     1.7MB/s   00:00
//Start the three component services on master02
[root@localhost cfg] systemctl start kube-apiserver.service
[root@localhost cfg] systemctl start kube-controller-manager.service
[root@localhost cfg] systemctl start kube-scheduler.service
//Add the binaries to PATH
[root@localhost cfg] vim /etc/profile
#append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost cfg] source /etc/profile
[root@localhost cfg] kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.75.150 Ready <none> 2d12h v1.12.3
192.168.75.151 Ready <none> 38h v1.12.3
3.3 Load balancing: Nginx01 and Nginx02 configuration
//On both Nginx01 and Nginx02
//Install the nginx service; copy the nginx.sh and keepalived.conf scripts to the home directory
[root@localhost ~] systemctl stop firewalld.service
[root@localhost ~] setenforce 0
[root@localhost ~] vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
[root@localhost ~] yum install nginx -y
//Add layer-4 forwarding
[root@localhost ~] vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.75.200:6443;    #master01 apiserver
        server 192.168.75.122:6443;    #master02 apiserver (these backends match the access log shown later)
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    ... (the existing http block continues unchanged)
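//Validate the config before starting; note the stream block sits at top level, alongside (not inside) the existing http block, and the nginx.org package is built with stream support
[root@localhost ~] nginx -t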
[root@localhost ~] systemctl start nginx
//Deploy the keepalived service
[root@localhost ~] yum install keepalived -y
//Modify the config file
[root@localhost ~] cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes
//Note: lb01 is the MASTER; its configuration is as follows:
! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP route ID; must be unique per instance
    priority 166            # priority; set the backup server to 90
    advert_int 1            # VRRP advertisement interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.166/24
    }
    track_script {
        check_nginx
    }
}

//Note: Nginx02 is the BACKUP; its configuration is as follows:
! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # VRRP route ID; must be unique per instance
    priority 90             # priority; the backup server is set to 90
    advert_int 1            # VRRP advertisement interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.166/24
    }
    track_script {
        check_nginx
    }
}
[root@localhost ~] vim /usr/local/nginx/sbin/check_nginx.sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@localhost ~] chmod +x /usr/local/nginx/sbin/check_nginx.sh
[root@localhost ~] systemctl start keepalived
//Check lb01's address information
[root@localhost ~] ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1660
link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.75.155/24 brd 192.168.75.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.75.166/24 scope global secondary ens33    //the floating VIP currently sits on lb01
valid_lft forever preferred_lft forever
inet6 fe80::53ba:daab:3e22:e711/64 scope link
valid_lft forever preferred_lft forever
//Check Nginx02's address information
[root@localhost nginx] ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1660
link/ether 00:0c:29:c9:9d:88 brd ff:ff:ff:ff:ff:ff
inet 192.168.75.177/24 brd 192.168.75.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::55c0:6788:9feb:550d/64 scope link
valid_lft forever preferred_lft forever
//Verify VIP failover (run pkill nginx on lb01, then check with ip a on lb02)
//Recovery (on lb01, start the nginx service first, then start keepalived)
//nginx document root: /usr/share/nginx/html

//Now point the node config files at the unified VIP (bootstrap.kubeconfig, kubelet.kubeconfig)
[root@localhost cfg] vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@localhost cfg] vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@localhost cfg] vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
//In each file, change the server line to the VIP
server: https://192.168.75.166:6443
[root@localhost cfg] systemctl restart kubelet.service
[root@localhost cfg] systemctl restart kube-proxy.service
//After the replacement, self-check directly
[root@localhost cfg] grep 166 *
bootstrap.kubeconfig: server: https://192.168.75.166:6443
kubelet.kubeconfig: server: https://192.168.75.166:6443
kube-proxy.kubeconfig: server: https://192.168.75.166:6443
//On lb01, check nginx's k8s access log
[root@localhost ~] tail /var/log/nginx/k8s-access.log
192.168.75.201 192.168.75.200:6443 - [05/Feb/2020:12:43:50 +0800] 200 1121
192.168.75.201 192.168.75.122:6443 - [05/Feb/2020:12:43:50 +0800] 200 1120
192.168.75.144 192.168.75.200:6443 - [05/Feb/2020:12:45:38 +0800] 200 1121
192.168.75.144 192.168.75.122:6443 - [05/Feb/2020:12:45:38 +0800] 200 1121
//On master01
//Test creating a pod
[root@localhost ~] kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
//Check the status
[root@localhost ~] kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-nf9sk   0/1     ContainerCreating   0          33s    //still being created
[root@localhost ~] kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-nf9sk   1/1     Running   0          80s    //created and running
//Note the log-access issue below
[root@localhost ~] kubectl logs nginx-dbddb74b8-nf9sk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-nf9sk)
[root@localhost ~] kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
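//Binding cluster-admin to system:anonymous unblocks kubectl logs in a lab, but it is wide open; a narrower sketch covering only what the log path needs (role/binding names here are illustrative):
[root@localhost ~] kubectl create clusterrole node-proxy-reader --verb=get --resource=nodes/proxy
[root@localhost ~] kubectl create clusterrolebinding anonymous-node-proxy --clusterrole=node-proxy-reader --user=system:anonymous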
//Check the pod's network
[root@localhost ~] kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-dbddb74b8-nf9sk 1/1 Running 0 11m 172.17.31.3 192.168.75.150 <none>
//From the node on the matching subnet, the pod can be reached directly
[root@localhost cfg] curl 172.17.31.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
//Each access produces a log entry
//Back on master01
[root@localhost ~] kubectl logs nginx-dbddb74b8-nf9sk
172.17.31.1 - - [05/Feb/2020:05:08:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
4. Troubleshooting
Problem 1:
Symptom: node1 requested a certificate from the master, but the master never issued one.
Cause: when generating bootstrap.kubeconfig and kube-proxy.kubeconfig, the master pointed at the wrong certificate path, so broken files were generated and distributed to node1 and node2; the subsequent certificate request and issuance between the nodes and the master therefore failed.
Fix: regenerate correct bootstrap.kubeconfig and kube-proxy.kubeconfig files and distribute them to node1 and node2.

Problem 2:
Symptom: the docker0 interfaces on node1 and node2 cannot reach each other.
Cause: node1 and node2 cannot communicate, and the docker0 address does not match the subnet flanneld assigned.
Fix: check the docker and flanneld config files and confirm the addresses match, then restart docker; check the interfaces again and verify the nodes can reach each other.

Problem 3:
Symptom: kubectl get node shows node1 and node2 intermittently flipping to unhealthy.
Cause: after the VMs are suspended, some services may not come back up properly.
Fix: check network connectivity between the flanneld hosts and restart docker if a problem is found; check whether etcd is in the running state and try restarting etcd if it is not.

Summary: after every shutdown or suspend, the flanneld and etcd services may be left stopped; start them again by hand, or enable them to start at boot.

Service startup commands:
systemctl start etcd
systemctl status etcd
systemctl enable etcd
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
systemctl status kube-proxy.service
kubectl get cs     //check cluster component health
kubectl get csr    //check certificate signing requests
kubectl get node   //list the nodes
vim /usr/lib/systemd/system/docker.service     //path of the docker unit file

Configuration directory layout on the master node (Master: 192.168.75.200):
[root@master kubernetes]# tree
.
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kube-scheduler
│   └── token.csv
└── ssl
    ├── admin-key.pem
    ├── admin.pem
    ├── ca-key.pem
    ├── ca.pem
    ├── kube-proxy-key.pem
    ├── kube-proxy.pem
    ├── server-key.pem
    └── server.pem
Node2: 192.168.75.144
[root@localhost opt]# tree kubernetes/
kubernetes/
├── bin
│   ├── flanneld
│   ├── kubelet
│   ├── kube-proxy
│   └── mk-docker-opts.sh
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── flanneld
│   ├── kubelet
│   ├── kubelet.config
│   ├── kubelet.kubeconfig
│   ├── kube-proxy
│   └── kube-proxy.kubeconfig
└── ssl
    ├── kubelet-client-2020-09-30-08-48-04.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2020-09-30-08-48-04.pem
    ├── kubelet.crt
    └── kubelet.key
Node1: 192.168.75.201
[root@localhost kubernetes]# tree
.
├── bin
│   ├── flanneld
│   ├── kubelet
│   ├── kube-proxy
│   └── mk-docker-opts.sh
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── flanneld
│   ├── kubelet
│   ├── kubelet.config
│   ├── kubelet.kubeconfig
│   ├── kube-proxy
│   └── kube-proxy.kubeconfig
└── ssl
    ├── kubelet-client-2020-09-29-19-11-00.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2020-09-29-19-11-00.pem
    ├── kubelet.crt
    └── kubelet.key