Deploying Kubernetes on CentOS 7 with kubeadm

    Tech · 2024-10-10

    Preparing the base environment

    The two machines:

    192.168.1.107  k8s-master

    192.168.1.111   k8s-node

    # How to set the hostname
    hostnamectl set-hostname k8s-master   # run on 192.168.1.107
    hostnamectl set-hostname k8s-node     # run on 192.168.1.111
    hostnamectl --static                  # verify the result

    Unless otherwise noted, every step must be run on all nodes (k8s-master and k8s-node).

    Disable the firewall :: If you would rather keep the firewall enabled, open the ports Kubernetes requires; see https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports

    systemctl stop firewalld.service
    systemctl disable firewalld.service
    yum upgrade

    Disable swap :: Since Kubernetes 1.8, the kubelet will not start unless swap is disabled.

    swapoff -a
    # Remove this line from /etc/fstab:
    #   /dev/mapper/centos-swap swap swap defaults 0 0
    cp /etc/fstab /etc/fstab_bak
    cat /etc/fstab_bak | grep -v swap > /etc/fstab
    cat /etc/fstab
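Note that `grep -v swap` drops any line containing the string "swap" anywhere, including comments. A gentler alternative (a sketch, not the command the article uses) is to comment out only real swap entries so they can be restored later; here it is demonstrated on a throwaway copy rather than the real /etc/fstab:

```shell
# Sketch: comment out (rather than delete) swap entries in an fstab-style
# file. Demonstrated on a temp file; point FSTAB at /etc/fstab to do it
# for real (as root).
FSTAB=$(mktemp)
printf '%s\n' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' \
  'UUID=1234-abcd /boot xfs defaults 0 0' > "$FSTAB"
cp "$FSTAB" "$FSTAB.bak"        # always keep a backup first
# Prefix non-comment lines that mount a "swap" filesystem with '#'
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|#\1|' "$FSTAB"
cat "$FSTAB"
```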

    Adjust iptables-related kernel parameters :: Some users on RHEL/CentOS 7 have reported traffic being routed incorrectly because iptables was bypassed. Create the file /etc/sysctl.d/k8s.conf with the following content:

    cat <<EOF > /etc/sysctl.d/k8s.conf
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    # Apply the settings
    modprobe br_netfilter
    sysctl -p /etc/sysctl.d/k8s.conf
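Before loading a sysctl file with `sysctl -p`, it can help to sanity-check that every non-comment line is a well-formed "key = value" pair. A small illustrative sketch (run against a temp copy here; in practice CONF would be /etc/sysctl.d/k8s.conf):

```shell
# Sketch: count malformed lines in a sysctl-style conf file.
CONF=$(mktemp)
cat <<'EOF' > "$CONF"
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Skip blank/comment lines, then count lines NOT matching "key = value"
bad=$(grep -Ev '^[[:space:]]*(#|$)' "$CONF" \
      | grep -Evc '^[a-z0-9._-]+[[:space:]]*=[[:space:]]*[^[:space:]]+') || true
echo "malformed lines: $bad"
```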

    Load the ipvs kernel modules

    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    # The next command is a long one-liner
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

    Install Docker :: Mind the Docker version; at the time of writing, 18.06 is the newest release validated with Kubernetes.

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum makecache fast
    yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7
    systemctl start docker
    systemctl enable docker
    # Check the Docker version
    docker -v
    Docker version 18.06.1-ce, build e68fc7a

    Deploying Kubernetes with kubeadm

    Install kubeadm and kubelet. Note :: when running yum install, take note of the Kubernetes version number; you will need it later for kubeadm init.

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    exclude=kube*
    EOF
    # Install. Note :: check the version number here -- the version passed to
    # kubeadm init later must not be lower than the installed Kubernetes version.
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    # To pin a specific version, use kubelet-<version> etc., for example:
    yum install kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2 --disableexcludes=kubernetes
    # Start kubelet
    systemctl enable kubelet.service && systemctl start kubelet.service

    After starting kubelet.service, checking its status shows it has not actually started. The logs reveal the cause: the file "/var/lib/kubelet/config.yaml" does not exist. This can be ignored for now; kubeadm init will create the file.

    # Check kubelet status
    [root@centos2 ~]# systemctl status kubelet.service
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /usr/lib/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: activating (auto-restart) (Result: exit-code) since Sun 2019-03-31 16:18:55 CST; 7s ago
         Docs: https://kubernetes.io/docs/
      Process: 4564 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
     Main PID: 4564 (code=exited, status=255)

    Mar 31 16:18:55 k8s-node systemd[1]: Unit kubelet.service entered failed state.
    Mar 31 16:18:55 k8s-node systemd[1]: kubelet.service failed.
    [root@centos2 ~]#

    # Inspect the error
    [root@centos2 ~]# journalctl -xefu kubelet
    Mar 31 16:19:46 k8s-node systemd[1]: kubelet.service holdoff time over, scheduling restart.
    Mar 31 16:19:46 k8s-node systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
    -- Subject: Unit kubelet.service has finished shutting down
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    --
    -- Unit kubelet.service has finished shutting down.
    Mar 31 16:19:46 k8s-node systemd[1]: Started kubelet: The Kubernetes Node Agent.
    -- Subject: Unit kubelet.service has finished start-up
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    --
    -- Unit kubelet.service has finished starting up.
    --
    -- The start-up result is done.
    Mar 31 16:19:46 k8s-node kubelet[4611]: F0331 16:19:46.989588    4611 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
    Mar 31 16:19:46 k8s-node systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Mar 31 16:19:46 k8s-node systemd[1]: Unit kubelet.service entered failed state.
    Mar 31 16:19:46 k8s-node systemd[1]: kubelet.service failed.

    On k8s-master, initialize Kubernetes with kubeadm init :: Note :: the kubernetes-version here must match the version installed above, otherwise initialization fails.

    # Run on k8s-master only; do not run this on the node
    kubeadm init \
      --apiserver-advertise-address=192.168.1.107 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.19.2 \
      --pod-network-cidr=10.244.0.0/16

    Note: I later installed k8s again with these versions:

    kubeadm.x86_64 0:1.20.4-0  kubectl.x86_64 0:1.20.4-0  kubelet.x86_64 0:1.20.4-0

    In that case the command above becomes:

    kubeadm init \
      --apiserver-advertise-address=192.168.1.107 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.20.4 \
      --pod-network-cidr=10.244.0.0/16

    --apiserver-advertise-address :: the IP of k8s-master

    --image-repository :: the registry to pull control-plane images from

    --kubernetes-version :: disables version auto-detection. The default value is stable-1, which fetches the latest version number from https://storage.googleapis.com/kubernetes-release/release/stable-1.txt; specifying the version explicitly skips that network request. Again, it must match the installed Kubernetes version.

     

    kubeadm init output. Note in the output that "/var/lib/kubelet/config.yaml" is created automatically. (Since node machines do not run kubeadm init, copy this file to each node at /var/lib/kubelet/config.yaml.)

    [init] Using Kubernetes version: v1.13.1
    [preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [centos kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.6]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [centos localhost] and IPs [10.211.55.6 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [centos localhost] and IPs [10.211.55.6 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 19.507714 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "centos" as an annotation
    [mark-control-plane] Marking the node centos as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node centos as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: sfaff2.iet15233unw5jzql
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes master has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:
    # ====== steps to run before using the cluster ------qingfeng
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of machines by running the following on each node as root:
    # ===== this is how to add a node (the token can expire)
      kubeadm join 10.211.55.6:6443 --token sfaff2.iet15233unw5jzql --discovery-token-ca-cert-hash sha256:f798c5be53416ca3b5c7475ee0a4199eb26f9e31ee7106699729c0660a70f8d7
    [root@centos ~]#

    After a successful init, the output explains the follow-up configuration required before using the cluster, and also provides a temporary token along with the command for joining nodes.

    # A regular user must run the following to use k8s:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # As root you can instead run:
    export KUBECONFIG=/etc/kubernetes/admin.conf
    # Pick one of the two. I am running as root, so:
    export KUBECONFIG=/etc/kubernetes/admin.conf
    nano /etc/profile
    ##################### append at the end ############################
    export KUBECONFIG=/etc/kubernetes/admin.conf
    ####################################################################
    # Make the change to /etc/profile take effect immediately
    source /etc/profile

    Check kubelet again: it is now in the running state; startup succeeded.

    [root@k8s-master ~]# systemctl status kubelet.service
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /usr/lib/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: active (running) since Sun 2019-03-31 16:11:57 CST; 26min ago
         Docs: https://kubernetes.io/docs/
     Main PID: 32083 (kubelet)
        Tasks: 16
       Memory: 39.6M
       CGroup: /system.slice/kubelet.service
               └─32083 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-...

    Mar 31 16:38:28 k8s-master kubelet[32083]: W0331 16:38:28.028997   32083 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
    Mar 31 16:38:28 k8s-master kubelet[32083]: E0331 16:38:28.752039   32083 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not read...fig uninitialized
    Mar 31 16:38:33 k8s-master kubelet[32083]: W0331 16:38:33.029684   32083 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
    Mar 31 16:38:33 k8s-master kubelet[32083]: E0331 16:38:33.754045   32083 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not read...fig uninitialized
    Mar 31 16:38:38 k8s-master kubelet[32083]: W0331 16:38:38.030077   32083 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
    Mar 31 16:38:38 k8s-master kubelet[32083]: E0331 16:38:38.756061   32083 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not read...fig uninitialized
    Mar 31 16:38:43 k8s-master kubelet[32083]: W0331 16:38:43.030827   32083 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
    Mar 31 16:38:43 k8s-master kubelet[32083]: E0331 16:38:43.757292   32083 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not read...fig uninitialized
    Mar 31 16:38:48 k8s-master kubelet[32083]: W0331 16:38:48.031403   32083 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
    Mar 31 16:38:48 k8s-master kubelet[32083]: E0331 16:38:48.758876   32083 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not read...fig uninitialized
    Hint: Some lines were ellipsized, use -l to show in full.

    Check component status :: confirm that every component reports Healthy

    [root@centos ~]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health": "true"}

    Check node status

    [root@centos ~]# kubectl get node
    NAME     STATUS     ROLES    AGE   VERSION
    centos   NotReady   master   11m   v1.13.4

    Install a Pod network (flannel) :: a pod network is mandatory for the cluster to work; without one, pods cannot communicate with each other. Kubernetes supports several options; here we use flannel.

    [root@centos ~]# kubectl apply -f kube-flannel.yml
    podsecuritypolicy.extensions/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
    [root@centos ~]#

     

     

    Contents of the network config file kube-flannel.yml:

    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg

    (The remaining four DaemonSets in the file -- kube-flannel-ds-arm64, kube-flannel-ds-arm, kube-flannel-ds-ppc64le, and kube-flannel-ds-s390x -- are identical to the amd64 DaemonSet above, except that the kubernetes.io/arch value and the image tag suffix are arm64, arm, ppc64le, and s390x respectively.)

    Check pod status; make sure every pod is Running

    [root@centos ~]# kubectl get pod --all-namespaces -o wide
    NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
    kube-system   coredns-78d4cf999f-6b5wq         1/1     Running   0          5h1m    10.244.0.2    centos   <none>           <none>
    kube-system   coredns-78d4cf999f-clhkc         1/1     Running   0          5h1m    10.244.0.3    centos   <none>           <none>
    kube-system   etcd-centos                      1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
    kube-system   kube-apiserver-centos            1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
    kube-system   kube-controller-manager-centos   1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
    kube-system   kube-flannel-ds-amd64-lnp55      1/1     Running   0          3m41s   10.211.55.6   centos   <none>           <none>
    kube-system   kube-proxy-xsnr8                 1/1     Running   0          5h1m    10.211.55.6   centos   <none>           <none>
    kube-system   kube-scheduler-centos            1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
    [root@centos ~]#

    Check node status again; the node has become Ready

    [root@centos ~]# kubectl get nodes
    NAME     STATUS   ROLES    AGE    VERSION
    centos   Ready    master   5h2m   v1.13.4
    [root@centos ~]#

    Adding a node

    To use kubectl on the node, copy the file /etc/kubernetes/admin.conf from k8s-master to the node and run:

    export KUBECONFIG=/etc/kubernetes/admin.conf
    # kubectl must run as kubernetes-admin. Copy /etc/kubernetes/admin.conf from
    # the master to the same path on the node, then set the environment variable:
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
    # Take effect immediately
    source ~/.bash_profile
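Appending with `>>` adds a duplicate line every time it is run. A small sketch of an idempotent variant (demonstrated on a temp file standing in for ~/.bash_profile, which is an illustration, not the article's exact command):

```shell
# Sketch: add the KUBECONFIG export to a profile file only if it is not
# already present, so repeated runs do not pile up duplicates.
PROFILE=$(mktemp)        # stand-in for ~/.bash_profile
LINE='export KUBECONFIG=/etc/kubernetes/admin.conf'
for i in 1 2 3; do       # run three times to show idempotence
  grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
done
grep -c 'KUBECONFIG' "$PROFILE"
```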

    Copy /var/lib/kubelet/config.yaml from the master to the same path on the node: /var/lib/kubelet/config.yaml

    Then run:

    kubeadm join 192.168.1.107:6443 --token 4v3eex.g6scy1z0n0uq6c74 \
        --discovery-token-ca-cert-hash sha256:8e9b24e80469ffa3661822227e573b29b18d8f5489edf8b6cbaadaa20edab8b8

    Expired token/hash: if they have expired, regenerate them on the master and substitute the new value after "sha256:".

    # Error:
    # error execution phase preflight: couldn't validate the identity of the API Server: abort
    # connecting to API servers after timeout of 5m0s
    # Cause: the master's token has expired. Create a new token:
    kubeadm token create
    # Get the discovery-token-ca-cert-hash:
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
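The hash pipeline above can be tried without a cluster. This sketch generates a throwaway self-signed certificate (standing in for /etc/kubernetes/pki/ca.crt, purely for illustration) and runs the same pipeline; the result is the 64-character hex SHA-256 digest of the certificate's public key:

```shell
# Sketch: run the discovery-hash pipeline against a throwaway cert instead
# of the real cluster CA. Requires openssl.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$DIR/ca.key" -out "$DIR/ca.crt" 2>/dev/null
HASH=$(openssl x509 -pubkey -in "$DIR/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```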

    If the token has expired when adding a node, here is how to regenerate it -- straight to the commands:

    [root@k8s-master testnginx]# kubeadm token list
    TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    uf2c4g.n7ibf1g8gxbkqz2z   23h   2019-04-03T15:28:40+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    [root@k8s-master testnginx]# kubeadm token create
    w0r09e.e5olwz1rlhwvgo9p
    [root@k8s-master testnginx]# kubeadm token list
    TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    uf2c4g.n7ibf1g8gxbkqz2z   23h   2019-04-03T15:28:40+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    w0r09e.e5olwz1rlhwvgo9p   23h   2019-04-03T16:19:56+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
    [root@k8s-master testnginx]#

    Check the cluster: run the following command on the node machine

    [root@localhost ~]# kubectl get node
    NAME         STATUS   ROLES    AGE    VERSION
    k8s-master   Ready    master   106m   v1.19.2
    k8s-node     Ready    <none>   29m    v1.19.2
    [root@localhost ~]#

    Deleting a node:

    After a node has been deleted, it must first run kubeadm reset before it can rejoin the cluster with kubeadm join.

    [root@k8s-master testnginx]# kubectl delete node k8s-node   # k8s-node is the node name; this is not the only way to do it, just the one shown here
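In practice it is safer to evict a node's workloads before deleting it. A minimal sketch of this graceful variant (kubectl drain is a standard command, but "k8s-node" is just this article's example node name, and this sequence requires a live cluster):

```shell
# Sketch: gracefully remove a node. Drain evicts its pods first, then delete.
# Run on the master; "k8s-node" is the example node name from this article.
kubectl drain k8s-node --ignore-daemonsets --delete-local-data
kubectl delete node k8s-node
# On the removed node itself, clean up local state before any future rejoin:
# kubeadm reset
```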

    References:

    https://www.cnblogs.com/qingfeng2010/p/10540832.html
