Kubernetes Notes -- Introduction & Installation



Introduction

Concepts
- Kubernetes (k8s) is a container-cluster management system.
- Use k8s to deploy containerized applications.
- k8s makes it easy to scale applications.
- The goal of k8s is to make deploying containerized applications simpler and more efficient.

Features (1): automatic deployment, self-healing, horizontal scaling, service discovery, rolling updates
Features (2): version rollback, secret and configuration management, storage orchestration, batch processing

Master Node
- apiserver: the unified entry point to the cluster; exposes a RESTful API and persists cluster state to etcd
- scheduler: node scheduling; selects the node on which an application is deployed
- controller-manager: handles routine background tasks in the cluster; one controller per resource type
- etcd: the storage system that holds the cluster's data

Worker Node
- kubelet: the master's agent on the worker node; manages the containers on that machine
- kube-proxy: provides network proxying and load balancing
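Once the cluster built in the Installation section below is running, this architecture is directly visible: the control-plane components themselves run as pods in the kube-system namespace. A quick sanity check (pod names vary with the node's hostname):

[root@master01 ~]# kubectl get pods -n kube-system
# expect kube-apiserver-master01, kube-scheduler-master01,
# kube-controller-manager-master01, etcd-master01, one kube-proxy
# pod per node, and the coredns pods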

The apiserver is the cluster's unified access point: requests enter through it, and the relevant Controller then creates the Pods that run the containers.

Pod
- the smallest deployable unit
- a group of one or more containers
- containers in a Pod share the network
- its lifecycle is short-lived

Controller
- ensures the expected number of Pod replicas
- can ensure that every node runs the same Pod
- stateless and stateful application deployments
- one-off jobs and scheduled jobs

Service
- defines the access rules for a group of Pods

A minimal manifest tying these three concepts together is sketched after this list.
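To make Pod, Controller, and Service concrete, here is a minimal sketch (the name nginx-demo and its labels are illustrative, not from the cluster built below): a Deployment controller keeps two replicas of an nginx Pod running, and a Service defines how that group of Pods is accessed.

apiVersion: apps/v1
kind: Deployment            # a Controller that maintains the desired replica count
metadata:
  name: nginx-demo
spec:
  replicas: 2               # expected number of Pod replicas
  selector:
    matchLabels:
      app: nginx-demo
  template:                 # the Pod template: the smallest deployable unit
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service               # defines the access rules for this group of Pods
metadata:
  name: nginx-demo
spec:
  selector:
    app: nginx-demo         # selects the Pods created by the Deployment above
  ports:
  - port: 80
    targetPort: 80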

Installation

Role            IPv4              Hostname
Master Node     13.13.13.1/16     master01
Worker Node-1   13.13.13.31/16    node01
Worker Node-2   13.13.13.32/16    node02

System initialization (all)

Disable security features

Firewall: instead of disabling it outright, you can also open only the required ports, following the official guide:

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
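If you prefer to keep firewalld running, a hedged sketch of opening the control-plane ports listed in that guide (worker nodes need 10250 and the 30000-32767 NodePort range instead) might look like:

[root@master01 ~]# firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
[root@master01 ~]# firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
[root@master01 ~]# firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
[root@master01 ~]# firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
[root@master01 ~]# firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
[root@master01 ~]# firewall-cmd --reload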

[root@master01 ~]# systemctl stop firewalld
[root@master01 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master01 ~]# setenforce 0
[root@master01 ~]# cp /etc/selinux/config{,.bak}
[root@master01 ~]# sed -i 's/enforcing$/disabled/' /etc/selinux/config
[root@master01 ~]#

Disable the swap partition

Swap is disabled mainly for performance reasons. Alternatively, you can pass the kubelet flag --fail-swap-on=false to keep the kubelet from failing at startup while swap is enabled.
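If you take the flag route instead, a minimal sketch, assuming the kubeadm RPM packages (which read KUBELET_EXTRA_ARGS from /etc/sysconfig/kubelet):

[root@master01 ~]# cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF

Note that kubeadm init and kubeadm join would then also need --ignore-preflight-errors=Swap, because kubeadm's own preflight check rejects enabled swap.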

[root@master01 ~]# cp /etc/fstab{,.bak}
[root@master01 ~]# sed '/swap/ s/\(.*\)/#\1/' /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Sep 26 16:29:04 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root                       /     xfs  defaults 0 0
UUID=90204c1b-0e93-4936-9b55-ee8f539c6c6a /boot ext4 defaults 1 2
#/dev/mapper/cl-swap                      swap  swap defaults 0 0
[root@master01 ~]# sed -i '/swap/ s/\(.*\)/#\1/' /etc/fstab
[root@master01 ~]# swapoff -a
[root@master01 ~]#

Configure /etc/hosts

[root@master01 ~]# vi /etc/hosts
[root@master01 ~]# tail -3 /etc/hosts
13.13.13.1    master01
13.13.13.31   node01
13.13.13.32   node02
[root@master01 ~]#

Pass bridged IPv4 traffic to the iptables chains

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#letting-iptables-see-bridged-traffic

    Make sure that the br_netfilter module is loaded. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
    As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

[root@master01 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@master01 ~]# sysctl --system
[root@master01 ~]#

Time synchronization

[root@master01 ~]# dnf install chrony
[root@master01 ~]# systemctl enable chronyd.service
[root@master01 ~]# systemctl start chronyd.service
[root@master01 ~]# timedatectl
               Local time: Thu 2020-10-08 10:18:49 CST
           Universal time: Thu 2020-10-08 02:18:49 UTC
                 RTC time: Thu 2020-10-08 02:18:49
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@master01 ~]#

Install and configure software

Install Docker (all)

[root@master01 ~]# yum install yum-utils
[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@master01 ~]# dnf install docker-ce
[root@master01 ~]# systemctl enable docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@master01 ~]# systemctl start docker.service

Install k8s (all)

[root@master01 ~]# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master01 ~]# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@master01 ~]# systemctl enable kubelet.service

Initialize k8s (master01)

[root@master01 ~]# kubeadm init \
> --apiserver-advertise-address 13.13.13.1 \
> --image-repository registry.aliyuncs.com/google_containers \
> --pod-network-cidr 10.244.0.0/16
W1008 11:11:12.314810   19635 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
....
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 13.13.13.1:6443 --token zve4wu.dhoi0a02j6nywz11 \
    --discovery-token-ca-cert-hash sha256:1b31f652576d21ae9311527f8d7f8cbb94049e4cd3cf208c968a6ef7a2aac014
[root@master01 ~]#
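The token in that join command expires after 24 hours by default. If a node needs to be joined later, a fresh join command can be printed on the master:

[root@master01 ~]# kubeadm token create --print-join-command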

Configure kubectl (master01)

[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master01 ~]#

Join the nodes (node01, node02)

[root@node01 ~]# kubeadm join 13.13.13.1:6443 --token zve4wu.dhoi0a02j6nywz11 \
> --discovery-token-ca-cert-hash sha256:1b31f652576d21ae9311527f8d7f8cbb94049e4cd3cf208c968a6ef7a2aac014
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node01 ~]#

Deploy the CNI network plugin

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

    Caution: This section contains important information about networking setup and deployment order. Read all of this advice carefully before proceeding.
    You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
    Take care that your Pod network must not overlap with any of the host networks: you are likely to see problems if there is any overlap. (If you find a collision between your network plugin's preferred Pod network and some of your host networks, you should think of a suitable CIDR block to use instead, then use that during kubeadm init with --pod-network-cidr and as a replacement in your network plugin's YAML).
    By default, kubeadm sets up your cluster to use and enforce use of RBAC (role based access control). Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
    If you want to use IPv6--either dual-stack, or single-stack IPv6 only networking--for your cluster, make sure that your Pod network plugin supports IPv6. IPv6 support was added to CNI in v0.6.0.
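About the IsDockerSystemdCheck warning in the join output above: a common remedy, sketched here on the assumption that Docker remains the container runtime, is to switch Docker's cgroup driver to systemd on every node and restart it:

[root@node01 ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@node01 ~]# systemctl restart docker.service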
[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   17m   v1.19.2
node01     NotReady   <none>   12m   v1.19.2
node02     NotReady   <none>   12m   v1.19.2
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   19m   v1.19.2
node01     Ready    <none>   14m   v1.19.2
node02     Ready    <none>   14m   v1.19.2
[root@master01 ~]#

Test the k8s cluster

[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master01 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-lzhr9   1/1     Running   0          12m
[root@master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        36m
nginx        NodePort    10.107.200.235   <none>        80:32685/TCP   16s
[root@master01 ~]#
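Since the Service is of type NodePort, the nginx welcome page should now be reachable on any node's IP at the allocated port (32685 in the output above):

[root@master01 ~]# curl http://13.13.13.31:32685    # should return the "Welcome to nginx!" page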
