2023年6月30日发(作者:)
Centos7.6安装k8s(kubadmin⾼可⽤)⼀、环境准备1,主机准备主机Ip172.22.204.106172.22.204.107172.22.204.108172.22.204.111172.22.204.112172.22.204.110主机名T-k8sMaster01T-k8sMaster02T-k8sMaster03T-k8sworker01T-k8sworker03VIP主功能etcd,apiserver,controller-manager,scheduler,docker,proxyetcd,apiserver,controller-manager,scheduler,docker,proxyetcd,apiserver,controller-manager,scheduler,docker,proxykubelet,docker,proxykubelet,docker,proxyVIP2,⾼可⽤架构图⽚.png⼆、安装准备1,配置主机1.1修改主机名对5台机器分别修改,同时修改hosts⽂件[***********.204.106-Template01:~]$hostnamectlset-hostnameT-k8sMaster01[***********.204.106-t-k8smaster01:~]$cat/etc/hosts127.0.0.1 localhost omain localhost4 omain4::1 localhost omain localhost6 omain6172.22.204.106 T-k8sMaster01172.22.204.107 T-k8sMaster02172.22.204.108 T-k8sMaster03172.22.204.111 T-k8sWorker01172.22.204.112 T-k8sWorker021.1.2修改其它的主机名[***********.204.106-t-k8smaster01:~]#cat>>/etc/hosts< 1.2.2关闭防⽕墙及selinuxsystemctl stop firewalld && systemctl disable firewalldsed -i 's/=enforcing/=disabled/g' /etc/selinux/config1.3内核参数本⽂的k8s⽹络使⽤flannel,该⽹络需要设置内核参数bridge-nf-call-iptables=1,修改这个参数需要系统有br_netfilter模块。1.3.1br_netfilter模块加载查看br_netfilter模块:[***********.204.106-t-k8smaster01:~]$lsmod|grepbr_netfilter如果系统没有br_netfilter模块则执⾏下⾯的新增命令,如有则忽略。临时新增br_netfilter模块:[***********.204.106-t-k8smaster01:~]#modprobebr_netfilter该⽅式重启后会失效永久新增br_netfilter模块:[***********.204.106-t-k8smaster01:~]#cat>/etc/t< interface ens192 virtual_router_id 50 priority 100 advert_int 1 authentication { auth_type PASS auth_pass 1111 } virtual_ipaddress { 172.22.204.110 }}3.2.2 master02上keepalived配置[***********.204.115-t-i-k8s-master02:~]$more/etc/keepalived/! Configuration File for keepalivedglobal_defs { router_id t-k8smaster02}vrrp_instance VI_1 { state BACKUP interface ens192 virtual_router_id 50 priority 90 advert_int 1 authentication { auth_type PASS auth_pass 1111 } virtual_ipaddress { 172.22.204.110 }}3.2.3 master03上keepalived配置[***********.204.116-t-k8smaster03:~]$more/etc/keepalived/! Configuration File for keepalivedglobal_defs { router_id t-k8smaster03}vrrp_instance VI_1 { state BACKUP interface ens192 virtual_router_id 50 priority 80 advert_int 1 authentication { auth_type PASS auth_pass 1111 } virtual_ipaddress { 172.22.204.110 }}3.2.3 启动keepalivedservice keepalived startsystemctl enable keepalived[***********.204.106-t-k8smaster01:.d]$servicekeepalivedstartRedirecting to /bin/systemctl start e[***********.204.106-t-k8smaster01:.d]$systemctlenablekeepalived3.2.4验证四、安装k8s组件4.1. 安装kubelet、kubeadm和kubectl所有机器全部安装kubelet 运⾏在集群所有节点上,⽤于启动Pod和容器等对象的⼯具kubeadm ⽤于初始化集群,启动集群的命令⼯具kubectl ⽤于和集群通信的命令⾏,通过kubectl可以部署和管理应⽤,查看各种资源,创建、删除和更新各种组件安装版本为最新 1.22.2也可以根据⾃⼰所需要的版本来安装部署 yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2查看版本yum list kubelet --showduplicates | sort -r安装yum install -y kubelet kubeadm kubectlyum install -y kubelet kubeadm kubectl 4.22.3 启动kubelet启动kubelet并设置开机启动[***********.204.106-t-k8smaster01:~]$systemctlenablekubelet&&systemctlstartkubelet五、初始化k8s5.1在master01上执⾏1. iVersion: /v1beta2kind: ClusterConfigurationimageRepository: /google_containerskubernetesVersion: v1.22.2apiServer: certSANs: #填写所有kube-apiserver节点的hostname、IP、VIP - t-k8smaster01 - t-k8smaster02 - t-k8smaster03 - T-k8sWorker01 - T-k8sWorker02 - 172.22.204.106 - 172.22.204.107 - 172.22.204.108 - 172.22.204.111 - 172.22.204.112 - 172.22.204.110controlPlaneEndpoint: "172.22.204.110:6443"networking: podSubnet: "10.244.0.0/16"5.2 初始化masterkubeadm init --config=5.3 执⾏后如下[***********.204.106-t-k8smaster01:~]$kubeadminit--config=[init] Using Kubernetes version: v1.22.2[preflight] Running pre-flight checks[preflight] Pulling images required for setting up a Kubernetes cluster[preflight] This might take a minute or two, depending on the speed of your internet connection[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'[certs] Using certificateDir folder "/etc/kubernetes/pki"[certs] Generating "ca" certificate and key[certs] Generating "apiserver" certificate and key[certs] apiserver serving cert is signed for DNS names [kubernetes t t-k8smaster01 t-k8smaster0[certs] Generating "apiserver-kubelet-client" certificate and key[certs] Generating "front-proxy-ca" certificate and key[certs] Generating "front-proxy-client" certificate and key[certs] Generating "etcd/ca" certificate and key[certs] Generating "etcd/server" certificate and key[certs] etcd/server serving cert is signed for DNS names [localhost t-k8smaster01] and IPs [172.22.204.106 127.0.0.1 ::1][certs] Generating "etcd/peer" certificate and key[certs] etcd/peer serving cert is signed for DNS names [localhost t-k8smaster01] and IPs [172.22.204.106 127.0.0.1 ::1][certs] Generating "etcd/healthcheck-client" certificate and key[certs] Generating "apiserver-etcd-client" certificate and key[certs] Generating "sa" key and public key[kubeconfig] Using kubeconfig folder "/etc/kubernetes"[kubeconfig] Writing "" kubeconfig file[kubeconfig] Writing "" kubeconfig file[kubeconfig] Writing "" kubeconfig file[kubeconfig] Writing "" kubeconfig file[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/"[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/"[kubelet-start] Starting the kubelet[control-plane] Using manifest folder "/etc/kubernetes/manifests"[control-plane] Creating static Pod manifest for "kube-apiserver"[control-plane] Creating static Pod manifest for "kube-controller-manager"[control-plane] Creating static Pod manifest for "kube-scheduler"[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s[apiclient] All control plane components are healthy after 9.037482 seconds[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster[upload-certs] Skipping phase. Please see --upload-certs[mark-control-plane] Marking the node t-k8smaster01 as control-plane by adding the labels: [/master(deprecated) /control-pla[mark-control-plane] Marking the node t-k8smaster01 as control-plane by adding the taints [/master:NoSchedule][bootstrap-token] Using token: 3sxvnr1igi0xxm[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace[kubelet-finalize] Updating "/etc/kubernetes/" to point to a rotatable kubelet client certificate and key[addons] Applied essential addon: CoreDNS[addons] Applied essential addon: kube-proxyYour Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/ $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/configAlternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/u should now deploy a pod network to the "kubectl apply -f [podnetwork].yaml" with one of the options listed at: /docs/concepts/cluster-administration/addons/You can now join any number of control-plane nodes by copying certificate authoritiesand service account keys on each node and then running the following as root:and service account keys on each node and then running the following as root: kubeadm join 172.22.204.110:6443 --token 3sxvnr1igi0xxm --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 --control-plane Then you can join any number of worker nodes by running the following on each as root:kubeadm join 172.22.204.110:6443 --token 3sxvnr1igi0xxm --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 5.4验证[***********.204.106-t-k8smaster01:~]$mkdir-p$HOME/.kube[***********.204.106-t-k8smaster01:~]$sudocp-i/etc/kubernetes/$HOME/.kube/config[***********.204.106-t-k8smaster01:~]$sudochown$(id-u):$(id-g)$HOME/.kube/config[***********.204.106-t-k8smaster01:~]$exportKUBECONFIG=/etc/kubernetes/[***********.204.106-t-k8smaster01:~]$kubectlgetnodes;NAME STATUS ROLES AGE VERSIONt-k8smaster01 NotReady control-plane,master 2m4s v1.22.2初始化失败,或出现以下错误,可以重新初始化accepts at most 1 arg(s), received 3To see the stack trace of this error execute with --v=5 or higher如果初始化失败,可执⾏kubeadm reset后重新初始化[***********.204.106-t-k8smaster01:~]#kubeadmreset[***********.204.106-t-k8smaster01:~]#rm-rf$HOME/.kube/config5.5添加其它机器5.5.1在其它的master添加公钥[***********.204.106-t-k8smaster01:~]$ssh-keygen-trsa[***********.204.106-t-k8smaster01:~]$ssh-copy-id-it-k8smaster02[***********.204.106-t-k8smaster01:~]$ssh-copy-id-it-k8smaster03记录kubeadm join的输出,后⾯需要这个命令将work节点和其他master节点加⼊集群中。master01分发证书:在master01上运⾏脚本,将证书分发⾄master02和master03USER=root CONTROL_PLANE_IPS="172.22.204.107 172.22.204.108" for host in ${CONTROL_PLANE_IPS}; do ssh ${USER}@${host} "mkdir -p /etc/kubernetes/pki/" ssh ${USER}@${host} "mkdir -p /etc/kubernetes/pki/etcd" scp /etc/kubernetes/pki/ "${USER}"@$host:/etc/kubernetes/pki/ scp /etc/kubernetes/pki/ "${USER}"@$host:/etc/kubernetes/pki/ scp /etc/kubernetes/pki/ "${USER}"@$host:/etc/kubernetes/pki/ scp /etc/kubernetes/pki/ "${USER}"@$host:/etc/kubernetes/pki/ scp /etc/kubernetes/pki/ "${USER}"@$host:/etc/kubernetes/pki/ scp /etc/kubernetes/pki/ "${USER}"@$host:/etc/kubernetes/pki/ scp /etc/kubernetes/pki/etcd/ "${USER}"@$host:/etc/kubernetes/pki/etcd/ # Quote this line if you are using external etcd scp /etc/kubernetes/pki/etcd/ "${USER}"@$host:/etc/kubernetes/pki/etcd/ done[***********.204.106-t-k8smaster01:~]$5.5.2 master02加⼊集群kubeadm join 172.22.204.110:6443 --token 3sxvnr1igi0xxm --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 --control-plane 同时执⾏ mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/ $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config5.5.3 master03加⼊集群kubeadm join 172.22.204.110:6443 --token 3sxvnr1igi0xxm --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 --control-plane 同时执⾏ mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/ $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config5.5.4 work01及work02加⼊集群kubeadm join 172.22.204.110:6443 --token 3sxvnr1igi0xxm --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 work01加⼊集群[***********.204.111-t-k8sworker01:.d]$kubeadmjoin172.22.204.110:3sxvnr1igi0xxm> --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 [preflight] Running pre-flight checks [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable e'[preflight] Reading configuration from [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/"[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/"[kubelet-start] Starting the kubelet[kubelet-start] Waiting for the kubelet to perform the This node has joined the cluster:* Certificate signing request was sent to apiserver and a response was received.* The Kubelet was informed of the new secure connection 'kubectl get nodes' on the control-plane to see this node join the 02加⼊集群[***********.204.112-t-k8sworker01:.d]$kubeadmjoin172.22.204.110:3sxvnr1igi0xxm> --discovery-token-ca-cert-hash sha256:93cd64a9104cf799e48f5521c957c1a7f4925c8891fb28443efc519c887e8db1 [preflight] Running pre-flight checks [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable e'[preflight] Reading configuration from [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/"[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/"[kubelet-start] Starting the kubelet[kubelet-start] Waiting for the kubelet to perform the This node has joined the cluster:* Certificate signing request was sent to apiserver and a response was received.* The Kubelet was informed of the new secure connection 'kubectl get nodes' on the control-plane to see this node join the cluster.六、验证k8s集群[***********.204.106-t-k8smaster01:~]$kubectlgetnodes;NAME STATUS ROLES AGE VERSIONt-k8smaster01 NotReady control-plane,master 48m v1.22.2t-k8smaster02 NotReady control-plane,master 18m v1.22.2t-k8smaster03 NotReady control-plane,master 13m v1.22.2t-k8sworker01 NotReady valid_lft forever preferred_lft forever2: ens192: valid_lft forever preferred_lft forever3: docker0: link/ether 02:42:7a:8a:7f:36 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever4: flannel.1: link/ether 02:f0:d4:fe:9c:b7 brd ff:ff:ff:ff:ff:ff inet 10.244.0.0/32 brd 10.244.0.0 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::f0:d4ff:fefe:9cb7/64 scope link valid_lft forever preferred_lft forever查看master02[***********.204.107-t-k8smaster02:pki]$ipa1: lo: valid_lft forever preferred_lft forever2: ens192: valid_lft forever preferred_lft forever3: docker0: link/ether 02:42:4c:13:ce:33 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever4: flannel.1: link/ether 02:ba:d4:f9:46:65 brd ff:ff:ff:ff:ff:ff inet 10.244.1.0/32 brd 10.244.1.0 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::ba:d4ff:fef9:4665/64 scope link valid_lft forever preferred_lft forever正常转移。k8s⾼可⽤部署完毕七、主要参加⽂档/
发布者:admin,转转请注明出处:http://www.yc00.com/web/1688057733a72548.html
评论列表(0条)