Kubernetes Binary Deployment (Single Master)
Table of Contents
1. Environment Setup: load balancers, master nodes, worker nodes, Harbor private registry
2. K8s Component Layout
3. Deploying etcd
   3.1 Operations on the master: generating the etcd certificates (etcd-cert.sh), deploying the etcd binaries
   3.2 Operations on node01 and node02: deploying the Docker engine, configuring the Flannel network (flannel.sh)
4. Deploying the Master: generating the apiserver certificates on the master (k8s-cert.sh)
5. Deploying the Nodes: operations on the master (kubeconfig.sh), operations on node01, deploying node02
1. Environment Setup
Load balancers
Load Balance(Master): 20.0.0.150/24
Load Balance(Backup): 20.0.0.160/24
Master nodes
Master01: 20.0.0.110/24
Master02:20.0.0.120/24
Worker (Node) nodes
Node01: 20.0.0.130/24
Node02: 20.0.0.140/24
Harbor private registry
Registry: 20.0.0.166/24
Release page: https://github.com/kubernetes/kubernetes/releases?after=v1.13.1
Certificate usage per component:
etcd: ca.pem, server-key.pem, server.pem
kube-apiserver: ca.pem, server-key.pem, server.pem
kubelet: ca.pem
kube-proxy: ca.pem, kube-proxy-key.pem, kube-proxy.pem
kubectl: ca.pem, admin-key.pem, admin.pem
kube-controller-manager: ca-key.pem, ca.pem
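Once the certificates exist, a quick way to confirm what a given component is actually presenting is to inspect the files directly. A sketch; the paths assume the /opt/etcd and /opt/kubernetes layout used later in this walkthrough:
# print the subject and validity window of the etcd server certificate
openssl x509 -in /opt/etcd/ssl/server.pem -noout -subject -dates
# list the IP SANs baked into the apiserver certificate
openssl x509 -in /opt/kubernetes/ssl/server.pem -noout -text | grep -A1 "Subject Alternative Name"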
2. K8s Component Layout
Master:20.0.0.110/24 kube-apiserver、kube-controller-manager、kube-scheduler、etcd
Node01:20.0.0.130/24 kubelet、kube-proxy、docker、flannel、etcd
Node02:20.0.0.140/24 kubelet、kube-proxy、docker、flannel、etcd
3. Deploying etcd
3.1 Operations on the master
Generating the etcd certificates
[root@localhost ~]# hostnamectl set-hostname master01
[root@localhost ~]# su
[root@master01 ~]#
[root@localhost ~]# hostnamectl set-hostname node01
[root@localhost ~]# su
[root@node01 ~]#
[root@localhost ~]# hostnamectl set-hostname node02
[root@localhost ~]# su
[root@node02 ~]#
[root@master01 ~]# mkdir k8s
[root@master01 ~]# cd k8s/
[root@master01 k8s]# ls
etcd-cert.sh etcd.sh
[root@master01 k8s]# cat cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master01 k8s]# bash cfssl.sh
[root@master01 k8s]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
cfssl: the certificate-generation tool
cfssljson: turns cfssl's JSON output into certificate files
cfssl-certinfo: displays certificate information
[root@master01 k8s]# mkdir etcd-cert && mv etcd-cert.sh etcd-cert/ && cd etcd-cert/
[root@master01 etcd-cert]# vim etcd-cert.sh    (adjust the three node IPs in server-csr.json; the full script is listed below)
[root@master01 etcd-cert]# bash etcd-cert.sh
2020/09/30 09:40:57 [INFO] generating a new CA key and certificate from CSR
2020/09/30 09:40:57 [INFO] generate received request
2020/09/30 09:40:57 [INFO] received CSR
2020/09/30 09:40:57 [INFO] generating key: rsa-2048
2020/09/30 09:40:57 [INFO] encoded CSR
2020/09/30 09:40:57 [INFO] signed certificate with serial number 410796204422550690335906262042191550817128924636
2020/09/30 09:40:57 [INFO] generate received request
2020/09/30 09:40:57 [INFO] received CSR
2020/09/30 09:40:57 [INFO] generating key: rsa-2048
2020/09/30 09:40:57 [INFO] encoded CSR
2020/09/30 09:40:57 [INFO] signed certificate with serial number 274902014881662587964729306950776951928266845738
2020/09/30 09:40:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@master01 etcd-cert]# ls *.pem
ca-key.pem ca.pem server-key.pem server.pem
The etcd-cert.sh script
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"20.0.0.110",
"20.0.0.130",
"20.0.0.140"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
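After the script runs, the issued certificates can be sanity-checked with the cfssl-certinfo tool installed earlier; the three etcd IPs should appear in the server certificate's hosts list:
cfssl-certinfo -cert ca.pem        # inspect the CA
cfssl-certinfo -cert server.pem    # the hosts field should list 20.0.0.110, 20.0.0.130, 20.0.0.140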
Deploying the etcd binaries
https://github.com/etcd-io/etcd/releases
[root@master01 etcd-cert]# cd ..
[root@master01 k8s]# ls
cfssl.sh etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@master01 k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 k8s]# mkdir -p /opt/etcd/{cfg,bin,ssl}
[root@master01 k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
[root@master01 k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
[root@master01 k8s]# bash etcd.sh etcd01 20.0.0.110 etcd02=https://20.0.0.130:2380,etcd03=https://20.0.0.140:2380
The script blocks while it waits for the other cluster members to join, so check the process from a second terminal:
[root@master01 ~]# ps -ef | grep etcd
root 22372 21700 0 09:47 pts/1 00:00:00 bash etcd.sh etcd01 20.0.0.110 etcd02=https://20.0.0.130:2380,etcd03=https://20.0.0.140:2380
root 22420 22372 0 09:47 pts/1 00:00:00 systemctl restart etcd
root 22426 1 1 09:47 ? 00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://20.0.0.110:2380 --listen-client-urls=https://20.0.0.110:2379,http://127.0.0.1:2379 --advertise-client-urls=https://20.0.0.110:2379 --initial-advertise-peer-urls=https://20.0.0.110:2380 --initial-cluster=etcd01=https://20.0.0.110:2380,etcd02=https://20.0.0.130:2380,etcd03=https://20.0.0.140:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 22484 22441 0 09:47 pts/2 00:00:00 grep --color=auto etcd
Copy the etcd binaries, certificates, and unit file to both nodes:
[root@master01 k8s]# scp -r /opt/etcd/ root@20.0.0.130:/opt/
[root@master01 k8s]# scp -r /opt/etcd/ root@20.0.0.140:/opt/
[root@master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.130:/usr/lib/systemd/system/
[root@master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.140:/usr/lib/systemd/system/
[root@node01 ~]# vim /opt/etcd/cfg/etcd
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.130:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.130:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.110:2380,etcd02=https://20.0.0.130:2380,etcd03=https://20.0.0.140:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node01 ~]# systemctl start etcd
[root@node01 ~]# systemctl enable etcd
[root@node02 ~]# vim /opt/etcd/cfg/etcd
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.140:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.140:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.110:2380,etcd02=https://20.0.0.130:2380,etcd03=https://20.0.0.140:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node02 ~]# systemctl start etcd
[root@node02 ~]# systemctl enable etcd
Check cluster health from the master:
[root@master01 ~]# cd /opt/etcd/ssl/
[root@master01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379" cluster-health
member 13e88e37f3d86d3e is healthy: got healthy result from https://20.0.0.110:2379
member 1fd0474b2d772f8e is healthy: got healthy result from https://20.0.0.130:2379
member cc1bbfffdd5a9e7a is healthy: got healthy result from https://20.0.0.140:2379
cluster is healthy
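etcdctl in etcd 3.3 defaults to the v2 API, which is what the cluster-health output above comes from. The same check can be done through the v3 API; a sketch, run from the same /opt/etcd/ssl/ directory:
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=ca.pem --cert=server.pem --key=server-key.pem \
  --endpoints="https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379" \
  endpoint health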
3.2 Operations on node01 and node02
Deploying the Docker engine
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "/^SELINUX=/s/enforcing/disabled/" /etc/selinux/config
systemctl start docker.service
systemctl enable docker.service
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://oxjoh3ip.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker.service
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
service network restart
systemctl restart docker.service
Configuring the Flannel network (node01 and node02)
Write the Pod network allocation into etcd (run from the certificate directory so the .pem paths resolve):
[root@node01 ~]# cd /opt/etcd/ssl/
[root@node01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@node02 ~]# cd /opt/etcd/ssl/
[root@node02 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@master01 ssl]# scp /root/k8s/flannel-v0.10.0-linux-amd64.tar.gz root@20.0.0.130:/root
root@20.0.0.130's password:
flannel-v0.10.0-linux-amd64.tar.gz 100% 9479KB 81.1MB/s 00:00
[root@master01 ssl]# scp /root/k8s/flannel-v0.10.0-linux-amd64.tar.gz root@20.0.0.140:/root
root@20.0.0.140's password:
flannel-v0.10.0-linux-amd64.tar.gz 100% 9479KB 73.1MB/s 00:00
[root@node01 ssl]# cd ~
[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
[root@node01 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@node01 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
[root@node01 ~]# bash flannel.sh https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Configure Docker to pick up the flannel subnet (add the EnvironmentFile line and insert $DOCKER_NETWORK_OPTIONS into ExecStart):
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
(--bip sets the subnet Docker's bridge uses at startup)
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
[root@node01 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
        inet 172.17.67.0 netmask 255.255.255.255 broadcast 0.0.0.0
        inet6 fe80::7c1e:b7ff:fec9:f38 prefixlen 64 scopeid 0x20<link>
        ether 7e:1e:b7:c9:0f:38 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 26 overruns 0 carrier 0 collisions 0
Start a test container and confirm it receives an address from the flannel subnet:
[root@node01 ~]# docker run -it centos:7 /bin/bash
[root@<container-id> /]# yum install -y net-tools
[root@<container-id> /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
        inet 172.17.67.2 netmask 255.255.255.0 broadcast 172.17.67.255
        ether 02:42:ac:11:43:02 txqueuelen 0 (Ethernet)
        RX packets 15977 bytes 12475359 (11.8 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 7402 bytes 404638 (395.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
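Repeating the same container test on node02 and pinging between the two containers verifies the VXLAN overlay end to end; a sketch, where the target address is hypothetical and should be the eth0 address reported by the container on the other node:
ping -c 3 172.17.88.2    # hypothetical: substitute the other container's eth0 address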
The flannel.sh script
#!/bin/bash
# In production, pass the etcd endpoints (IP:port list) as the first argument.
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
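Invocation matches what node01 ran above; the only argument is the comma-separated etcd endpoint list:
bash flannel.sh https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379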
4. Deploying the Master
On the master: generate the apiserver certificates
[root@master01 k8s]# ls
cfssl.sh etcd.sh etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64 flannel-v0.10.0-linux-amd64.tar.gz master.zip
[root@master01 k8s]# mkdir master
[root@master01 k8s]# unzip master.zip -d /root/k8s/master/
Archive: master.zip
 inflating: /root/k8s/master/apiserver.sh
 inflating: /root/k8s/master/controller-manager.sh
 inflating: /root/k8s/master/scheduler.sh
[root@master01 k8s]# cd master/
[root@master01 master]# ls
apiserver.sh controller-manager.sh scheduler.sh
[root@master01 master]# chmod +x *.sh
[root@master01 master]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@master01 master]# cd ..
[root@master01 k8s]# mkdir k8s-cert
[root@master01 k8s]# cd k8s-cert/
[root@master01 k8s-cert]# ls
k8s-cert.sh
[root@master01 k8s-cert]# bash k8s-cert.sh
[root@master01 k8s-cert]# ls *.pem
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
[root@master01 k8s-cert]# cp ca*.pem server*.pem /opt/kubernetes/ssl/ && cd ..
[root@master01 k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz && cd kubernetes/server/bin/
[root@master01 bin]# ls
apiextensions-apiserver kube-apiserver kubectl kube-scheduler.docker_tag
cloud-controller-manager kube-apiserver.docker_tag kubelet kube-scheduler.tar
cloud-controller-manager.docker_tag kube-apiserver.tar kube-proxy mounter
cloud-controller-manager.tar kube-controller-manager kube-proxy.docker_tag
hyperkube kube-controller-manager.docker_tag kube-proxy.tar
kubeadm kube-controller-manager.tar kube-scheduler
[root@master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master01 bin]# cat /opt/kubernetes/cfg/token.csv
75a331cb63be986d8fa074543ecbf29c,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
(fields: token, user name, UID, role)
A random token can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
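Putting those two pieces together, a minimal sketch for producing the token file (the path follows the layout above; the generated value will differ on every run):
# generate a random 32-hex-character bootstrap token
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# write it in the token,user,uid,"group" format the apiserver expects
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF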
[root@master01 bin]# cd /root/k8s/master/
[root@master01 master]# bash apiserver.sh 20.0.0.110 https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master01 k8s]# ps aux | grep kube
root 23639 104 8.1 399804 313804 ? Ssl 10:36 0:06 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379 --bind-address=20.0.0.110 --secure-port=6443 --advertise-address=20.0.0.110 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 23655 0.0 0.0 112724 988 pts/1 S+ 10:36 0:00 grep --color=auto kube
[root@master01 master]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379 \
--bind-address=20.0.0.110 \
--secure-port=6443 \
--advertise-address=20.0.0.110 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
[root@master01 k8s]# netstat -ntap | grep 6443
tcp 0 0 20.0.0.110:6443 0.0.0.0:* LISTEN 46459/kube-apiserve
tcp 0 0 20.0.0.110:6443 20.0.0.110:36806 ESTABLISHED 46459/kube-apiserve
tcp 0 0 20.0.0.110:36806 20.0.0.110:6443 ESTABLISHED 46459/kube-apiserve
[root@master01 k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 46459/kube-apiserve
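Port 8080 is the apiserver's local insecure port in this version, so it can also be probed directly without any credentials; a quick sketch:
curl http://127.0.0.1:8080/version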
[root@master01 master]# chmod +x scheduler.sh
[root@master01 master]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master01 master]# ps aux | grep kube
root 23639 6.1 8.1 399804 314000 ? Ssl 10:36 0:08 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379 --bind-address=20.0.0.110 --secure-port=6443 --advertise-address=20.0.0.110 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 23733 1.4 0.5 46128 19692 ? Ssl 10:37 0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root 23756 0.0 0.0 112724 984 pts/1 S+ 10:38 0:00 grep --color=auto kube
[root@master01 master]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master01 k8s]# /opt/kubernetes/bin/kubectl get cs
The k8s-cert.sh script
(Note: the parenthesized labels in the hosts list below are annotations for the reader; strip them before running, since JSON does not allow comments.)
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"20.0.0.110", (master1)
"20.0.0.120", (master2)
"20.0.0.100", (vip)
"20.0.0.150", (lb (master))
"20.0.0.160", (lb (backup))
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
5. Deploying the Nodes
On the master
Copy the node binaries to both nodes:
[root@master01 master]# cd /root/k8s/kubernetes/server/bin/
[root@master01 bin]# scp kubelet kube-proxy root@20.0.0.130:/opt/kubernetes/bin/
[root@master01 bin]# scp kubelet kube-proxy root@20.0.0.140:/opt/kubernetes/bin/
On the master
[root@master01 bin]# cd /root/k8s/
[root@master01 k8s]# mkdir kubeconfig
[root@master01 k8s]# cd kubeconfig/
[root@master01 kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master01 ~]# cat /opt/kubernetes/cfg/token.csv
75a331cb63be986d8fa074543ecbf29c,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master01 kubeconfig]# vim kubeconfig
...
kubectl config set-credentials kubelet-bootstrap \
  --token=75a331cb63be986d8fa074543ecbf29c \    (replace the token on this line with the value from token.csv)
  --kubeconfig=bootstrap.kubeconfig
[root@master01 kubeconfig]# vim /etc/profile
...
export PATH=$PATH:/opt/kubernetes/bin/
[root@master01 kubeconfig]# source /etc/profile
[root@master01 kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
[root@master01 kubeconfig]# bash kubeconfig 20.0.0.110 /opt/kubernetes/ssl/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
error: error reading client-certificate data from /opt/kubernetes/ssl//kube-proxy.pem: open /opt/kubernetes/ssl//kube-proxy.pem: no such file or directory
Context "default" created.
Switched to context "default".
(The error means kube-proxy's client certificates are missing from /opt/kubernetes/ssl/; copy kube-proxy*.pem there from /root/k8s/k8s-cert/ and rerun the script so that kube-proxy.kubeconfig gets valid credentials.)
[root@master01 kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
Copy the two generated kubeconfig files to both nodes:
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.130:/opt/kubernetes/cfg/
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.140:/opt/kubernetes/cfg/
[root@master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
The kubeconfig.sh script
(It expects BOOTSTRAP_TOKEN to be set, or the --token line edited in place as shown above.)
APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://$APISERVER:6443"
# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Generate the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
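A usage sketch, assuming the script was saved as kubeconfig and the token was not hard-coded (the walkthrough above edited it in place instead):
export BOOTSTRAP_TOKEN=75a331cb63be986d8fa074543ecbf29c   # must match /opt/kubernetes/cfg/token.csv
bash kubeconfig 20.0.0.110 /opt/kubernetes/ssl/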
Operations on node01
[root@node01 ~]# ls
node.zip
[root@node01 ~]# unzip node.zip
[root@node01 ~]# bash kubelet.sh 20.0.0.130
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# ps aux | grep kube
root 24092 0.0 0.4 325908 18680 ? Ssl 10:06 0:01 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://20.0.0.110:2379,https://20.0.0.130:2379,https://20.0.0.140:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 84898 3.3 1.1 476704 44624 ? Ssl 10:55 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=20.0.0.130 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 84929 0.0 0.0 112724 984 pts/1 S+ 10:55 0:00 grep --color=auto kube
On the master
[root@master01 kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-0K7kO9QDqs1DGi-YBGyLii_p1WO7TN_JQw6Kd8Dd6Ws 76s kubelet-bootstrap Pending   (waiting for the cluster to issue this node a certificate)
[root@master01 kubeconfig]# kubectl certificate approve node-csr-0K7kO9QDqs1DGi-YBGyLii_p1WO7TN_JQw6Kd8Dd6Ws
certificatesigningrequest.certificates.k8s.io/node-csr-0K7kO9QDqs1DGi-YBGyLii_p1WO7TN_JQw6Kd8Dd6Ws approved
[root@master01 kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-0K7kO9QDqs1DGi-YBGyLii_p1WO7TN_JQw6Kd8Dd6Ws 2m31s kubelet-bootstrap Approved,Issued   (the node has been admitted to the cluster)
[root@master01 kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
20.0.0.130 Ready <none> 2m7s v1.12.3
[root@node01 ~]# bash proxy.sh 20.0.0.130
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart kube-proxy.service
[root@node01 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-09-30 11:00:43 CST; 35s ago
 Main PID: 86355 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           ‣ 86355 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=20.0.0.130 --cluster-cidr=...
Sep 30 11:01:16 node01 kube-proxy[86355]: I0930 11:01:16.700913 86355 reflector.go:169] Listing and watching *v1...go:131
Sep 30 11:01:16 node01 kube-proxy[86355]: E0930 11:01:16.702155 86355 reflector.go:134] k8s.io/client-go/informe... scope
Sep 30 11:01:17 node01 kube-proxy[86355]: I0930 11:01:17.701195 86355 reflector.go:169] Listing and watching *v1...go:131
Sep 30 11:01:17 node01 kube-proxy[86355]: I0930 11:01:17.702266 86355 reflector.go:169] Listing and watching *v1...go:131
Sep 30 11:01:17 node01 kube-proxy[86355]: E0930 11:01:17.702370 86355 reflector.go:134] k8s.io/client-go/informe... scope
Sep 30 11:01:17 node01 kube-proxy[86355]: E0930 11:01:17.703219 86355 reflector.go:134] k8s.io/client-go/informe... scope
Sep 30 11:01:18 node01 kube-proxy[86355]: I0930 11:01:18.703592 86355 reflector.go:169] Listing and watching *v1...go:131
Sep 30 11:01:18 node01 kube-proxy[86355]: I0930 11:01:18.703590 86355 reflector.go:169] Listing and watching *v1...go:131
Sep 30 11:01:18 node01 kube-proxy[86355]: E0930 11:01:18.704671 86355 reflector.go:134] k8s.io/client-go/informe... scope
Sep 30 11:01:18 node01 kube-proxy[86355]: E0930 11:01:18.704671 86355 reflector.go:134] k8s.io/client-go/informe... scope
Hint: Some lines were ellipsized, use -l to show in full.
Deploying node02
[root@node01 ~]# scp -r /opt/kubernetes/ root@20.0.0.140:/opt/
The authenticity of host '20.0.0.140 (20.0.0.140)' can't be established.
ECDSA key fingerprint is SHA256:+3jHeqm5xkqqaUAmCEWoAuf2aHw8Y8GV40GD8uGZ6m8.
ECDSA key fingerprint is MD5:a3:39:bb:8a:09:ac:8f:53:01:8a:84:53:14:85:d5:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '20.0.0.140' (ECDSA) to the list of known hosts.
root@20.0.0.140's password:
flanneld 100% 227 373.3KB/s 00:00
bootstrap.kubeconfig 100% 2164 3.4MB/s 00:00
kube-proxy.kubeconfig 100% 2082 4.4MB/s 00:00
kubelet 100% 374 784.6KB/s 00:00
kubelet.config 100% 264 604.0KB/s 00:00
kubelet.kubeconfig 100% 2293 4.1MB/s 00:00
kube-proxy 100% 186 341.3KB/s 00:00
mk-docker-opts.sh 100% 2139 4.6MB/s 00:00
scp: /opt//kubernetes/bin/flanneld: Text
file busy
kubelet 100% 168MB 158.4MB/s 00:01
kube-proxy 100% 48MB 163.9MB/s 00:00
kubelet.crt 100% 2169 3.8MB/s 00:00
kubelet.key 100% 1679 3.3MB/s 00:00
kubelet-client-2020-09-30-10-57-23.pem 100% 1269 329.7KB/s 00:00
kubelet-client-current.pem 100% 1269 421.4KB/s 00:00
[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@20.0.0.140:/usr/lib/systemd/system/
root@20.0.0.140's password:
kubelet.service 100% 264 422.8KB/s 00:00
kube-proxy.service 100% 231 373.7KB/s 00:00
[root@node02 ~]# cd /opt/kubernetes/ssl/
[root@node02 ssl]# rm -rf *    (remove the certificates copied over from node01; node02 will be issued its own)
[root@node02 ssl]# ls
[root@node02 ssl]# cd ../cfg/
[root@node02 cfg]# ls
bootstrap.kubeconfig flanneld kubelet kubelet.config kubelet.kubeconfig kube-proxy kube-proxy.kubeconfig
Update the node-specific settings (change every 20.0.0.130 to 20.0.0.140):
[root@node02 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.140 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node02 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 20.0.0.140
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@node02 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.140 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
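Because kube-proxy runs with --proxy-mode=ipvs, the virtual-server table it programs can be inspected once the service is up; a sketch, assuming the ipvsadm package is available from the configured yum repositories:
yum -y install ipvsadm
ipvsadm -Ln    # lists the IPVS virtual servers and their real-server backends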
[root@node02 cfg]# systemctl start kubelet.service
[root@node02 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 cfg]# systemctl start kube-proxy.service
[root@node02 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Back on the master, approve node02's certificate request:
[root@master01 kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-0K7kO9QDqs1DGi-YBGyLii_p1WO7TN_JQw6Kd8Dd6Ws 14m kubelet-bootstrap Approved,Issued
node-csr-H78re3C9fGbFuBVqWxkr1GcFlPI8zUAFhgBrSR_LYrA 89s kubelet-bootstrap Pending
[root@master01 kubeconfig]# kubectl certificate approve node-csr-H78re3C9fGbFuBVqWxkr1GcFlPI8zUAFhgBrSR_LYrA
certificatesigningrequest.certificates.k8s.io/node-csr-H78re3C9fGbFuBVqWxkr1GcFlPI8zUAFhgBrSR_LYrA approved
[root@master01 kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-0K7kO9QDqs1DGi-YBGyLii_p1WO7TN_JQw6Kd8Dd6Ws 15m kubelet-bootstrap Approved,Issued
node-csr-H78re3C9fGbFuBVqWxkr1GcFlPI8zUAFhgBrSR_LYrA 2m12s kubelet-bootstrap Approved,Issued
[root@master01 kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
20.0.0.130 Ready <none> 14m v1.12.3
20.0.0.140 Ready <none> 77s v1.12.3
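With both nodes Ready, a quick end-to-end smoke test confirms scheduling and the Pod network together (a sketch; the deployment name is arbitrary and the nodes must be able to pull the nginx image):
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide    # the Pod should land on one of the two nodes with a 172.17.x.x address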