Published 2023-06-30.
# Installing and Deploying k8s (Kubernetes) on Linux, and the Pitfalls Along the Way

## Install Docker first (CentOS 7, offline install)

### Set the hostname

```shell
# Check the Linux kernel version (or use `uname -a`)
uname -r
# …el7.x86_64

# Set the hostname to k8s-master; it takes effect when you reconnect
hostnamectl --static set-hostname k8s-master
# Verify the hostname
hostname
```

### Disable SELinux

```shell
# Permanently disable SELinux
vim /etc/sysconfig/selinux
#   SELINUX=disabled

# Temporarily disable SELinux, so containers can read the host filesystem
setenforce 0
```

### Turn off swap

```shell
# Turning swap off is optional: kubeadm also accepts --ignore-preflight-errors=swap
# Turn swap off temporarily
swapoff -a
# Turn it off permanently: comment out the swap line in /etc/fstab
vi /etc/fstab
#   #/dev/mapper/centos-swap swap ...
```

### Configure a Docker registry mirror (for mainland China)

```shell
# Edit the file below; create it if it does not exist
vim /etc/docker/daemon.json
```
```json
{
  "registry-mirrors": ["<mirror-address-1>", "<mirror-address-2>"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.1.5"]
}
```

The mirror addresses were stripped from the source; fill in your own accelerator URLs. `192.168.1.5` is the private registry address. Docker defaults to the cgroupfs cgroup driver; `native.cgroupdriver=systemd` switches it to systemd.

```shell
# Reload the configuration
systemctl daemon-reload
# Restart Docker
systemctl restart docker
```

### Configure a k8s yum source (x86_64)

The mirror host was also stripped from the source; the Aliyun mirror shown below matches the repo layout.

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Clear the yum cache
yum clean all
# Download the repo metadata to the local machine; makecache builds a cache from it
yum makecache
# List the available kubectl versions
yum list kubectl --showduplicates | sort -r
```

Output:

```
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Loaded plugins: fastestmirror, langpacks
Installed Packages
Available Packages
kubectl.x86_64    1.9.9-0     kubernetes
kubectl.x86_64    1.9.8-0     kubernetes
kubectl.x86_64    1.9.7-0     kubernetes
kubectl.x86_64    1.9.6-0     kubernetes
kubectl.x86_64    1.9.5-0     kubernetes
kubectl.x86_64    1.9.4-0     kubernetes
kubectl.x86_64    1.9.3-0     kubernetes
kubectl.x86_64    1.9.2-0     kubernetes
kubectl.x86_64    1.9.11-0    kubernetes
kubectl.x86_64    1.9.1-0     kubernetes
kubectl.x86_64    1.9.10-0    kubernetes
kubectl.x86_64    1.9.0-0     kubernetes
kubectl.x86_64    1.8.9-0     kubernetes
kubectl.x86_64    1.8.8-0     kubernetes
kubectl.x86_64    1.8.7-0     kubernetes
kubectl.x86_64    1.8.6-0     kubernetes
kubectl.x86_64    1.8.5-0     kubernetes
kubectl.x86_64    1.8.4-0     kubernetes
kubectl.x86_64    1.8.3-0     kubernetes
kubectl.x86_64    1.8.2-0     kubernetes
kubectl.x86_64    1.8.15-0    kubernetes
kubectl.x86_64    1.8.14-0    kubernetes
kubectl.x86_64    1.8.13-0    kubernetes
kubectl.x86_64    1.8.12-0    kubernetes
kubectl.x86_64    1.8.11-0    kubernetes
kubectl.x86_64    1.8.1-0     kubernetes
kubectl.x86_64    1.8.10-0    kubernetes
kubectl.x86_64    1.8.0-0     kubernetes
kubectl.x86_64    1.7.9-0     kubernetes
kubectl.x86_64    1.7.8-1     kubernetes
kubectl.x86_64    1.7.7-1     kubernetes
kubectl.x86_64    1.7.6-1     kubernetes
kubectl.x86_64    1.7.5-0     kubernetes
kubectl.x86_64    1.7.4-0     kubernetes
kubectl.x86_64    1.7.3-1     kubernetes
kubectl.x86_64    1.7.2-0     kubernetes
kubectl.x86_64    1.7.16-0    kubernetes
kubectl.x86_64    1.7.15-0    kubernetes
kubectl.x86_64    1.7.14-0    kubernetes
kubectl.x86_64    1.7.11-0    kubernetes
kubectl.x86_64    1.7.1-0     kubernetes
kubectl.x86_64    1.7.10-0    kubernetes
kubectl.x86_64    1.7.0-0     kubernetes
kubectl.x86_64    1.6.9-0     kubernetes
kubectl.x86_64    1.6.8-0     kubernetes
kubectl.x86_64    1.6.7-0     kubernetes
kubectl.x86_64    1.6.6-0     kubernetes
kubectl.x86_64    1.6.5-0     kubernetes
kubectl.x86_64    1.6.4-0     kubernetes
kubectl.x86_64    1.6.3-0     kubernetes
kubectl.x86_64    1.6.2-0     kubernetes
kubectl.x86_64    1.6.13-0    kubernetes
```
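When picking the newest build in a series from output like the list above, note that a plain lexical sort misorders versions (`1.9.9-0` sorts after `1.9.11-0`). A small sketch using `sort -V`; the sample lines below are stand-ins for real `yum list` output:

```shell
# Pick the newest build from `yum list kubectl --showduplicates`-style lines,
# using sort -V (version sort) rather than a plain lexical sort.
latest=$(printf '%s\n' \
    'kubectl.x86_64  1.9.2-0   kubernetes' \
    'kubectl.x86_64  1.9.9-0   kubernetes' \
    'kubectl.x86_64  1.9.11-0  kubernetes' \
  | awk '{print $2}' \
  | sort -V \
  | tail -n 1)
echo "$latest"   # 1.9.11-0 — a lexical sort would have picked 1.9.9-0
```

In practice, pipe `yum list kubectl --showduplicates` straight into the `awk | sort -V | tail` part instead of the inlined sample.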
### Configure iptables bridge settings

```shell
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
# Apply the settings above
sysctl --system

# Or set them directly:
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
echo "1" > /proc/sys/net/bridge/bridge-nf-call-ip6tables
# Both of these must print 1
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
```

### Install kubelet, kubeadm and kubectl

```shell
# Install the latest version...
yum install -y kubelet kubeadm kubectl
# ...or pin a specific version
yum install -y kubelet-1.19.3-0 kubeadm-1.19.3-0 kubectl-1.19.3-0

# Check the kubelet version
kubelet --version
# Kubernetes v1.19.3

# Check the kubeadm version
kubeadm version
# kubeadm version: &{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"20…
```

### Start kubelet and enable it at boot

```shell
# Reload unit files
systemctl daemon-reload
# Start kubelet
systemctl start kubelet
# Check its status — if it failed to start, ignore the error for now;
# kubeadm init will bring it up later
systemctl status kubelet
# Enable it at boot
systemctl enable kubelet
# Check the boot setting: enabled = on, disabled = off
systemctl is-enabled kubelet
# View the logs
journalctl -xefu kubelet
```

### Initialize the k8s cluster Master

- `--apiserver-advertise-address=192.168.0.5` — the Master's IP (use the internal IP; see the note further down).
- `--image-repository registry.aliyuncs.com/google_containers` — the registry to pull images from. The default is k8s.gcr.io, which cannot be reached from mainland China without a proxy.

```shell
# Run the init command (the pod CIDR was truncated in the source to "10.24…";
# 10.244.0.0/16 is flannel's default, matching the network plugin installed below)
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --apiserver-advertise-address=192.168.0.5 \
  --kubernetes-version=v1.19.3 \
  --pod-network-cidr=10.244.0.0/16
```

#### Error 1: `[ERROR Swap]: running with swap on is not supported. Please disable swap`

If swap was not turned off, either disable it or add `--ignore-preflight-errors=swap`. The failure looks like this:

```
W0525 15:17:52.768575   19864 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```

#### Error 2: the kubelet health check at `localhost:10248/healthz` is refused

```
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
```

The fix is the kubelet drop-in file below — the key setting is `--cgroup-driver=systemd`:

```shell
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

```
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
#Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
#Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause-amd64:3.1"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
```

#### Success — output like the following means init worked:

```
W0511 11:11:24.998096   15272 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k8s-master" could not be reached
	[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.125.1.250:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes …] and IPs [10.1.0.…]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501683 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rt0fpo.4axz6cd6eqpm1ihf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.5:6443 --token rt0fpo.4axz6cd6eqpm1ihf \
    --discovery-token-ca-cert-hash sha256:…516a41305c1c1fd5c7
```
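If the `kubeadm join …` line above gets lost, it does not have to be reconstructed by hand: `kubeadm token create --print-join-command` regenerates it on the master, and the `--discovery-token-ca-cert-hash` value can be derived from the cluster CA certificate. A sketch; the helper is generic, and on a real master you would point it at `/etc/kubernetes/pki/ca.crt`:

```shell
# Print the sha256 hash kubeadm expects for --discovery-token-ca-cert-hash,
# i.e. the digest of the DER-encoded public key of the given CA cert (PEM).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}

# On a real master:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
#   kubeadm token create --print-join-command   # regenerates the whole join line
```

Tokens expire (24h by default), so regenerating is also the right move when a saved join command stops working.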
**Be sure to save the last command in the output** — you will need it later when adding nodes:

```shell
kubeadm join 192.168.0.5:6443 --token …
```

Then run the commands the output asks for:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

### Check the k8s cluster nodes

```shell
kubectl get node
# NAME         STATUS     ROLES    AGE     VERSION
# k8s-master   NotReady   master   4m13s   v1.19.3
```

The status is NotReady because no network plugin is installed yet. The kubelet logs say as much — no CNI plugin is configured:

```shell
journalctl -xef -u kubelet -n 20
# May 11 11:15:26 k8s-master kubelet[16678]: W0511 11:15:26.356793   16678 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
# May 11 11:15:28 k8s-master kubelet[16678]: E0511 11:15:28.237122   16678 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkP…
```

### Install the flannel network plugin (CNI)

```shell
# Create a working directory
mkdir flannel && cd flannel
# Download the manifest
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# The manifest pulls an image; I pull it ahead of time
docker pull quay.io/coreos/flannel:v0.14.0-rc1
# Create the flannel network plugin
kubectl apply -f kube-flannel.yml

# After a short while the node becomes Ready
kubectl get nodes
# NAME         STATUS   ROLES    AGE     VERSION
# k8s-master   Ready    master   9m39s   v1.19.3
```

### Add nodes to the k8s cluster

On each node, install docker, kubelet, kubectl and kubeadm as above, then run the join command printed at the end of `kubeadm init`:

```shell
kubeadm join 192.168.0.5:6443 --token …

# After a node joins successfully, check from the Master:
kubectl get nodes
# k8s-master   Ready   master   147d   v1.19.3
# Node-1       Ready   <none>   146d   v1.19.3
```

### Images k8s needs to download

```shell
kubeadm config images list
# I0511 09:36:15.377901    9508 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.19
# W0511 09:36:17.124062    9508 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
# k8s.gcr.io/kube-apiserver:…
# k8s.gcr.io/kube-controller-manager:…
# k8s.gcr.io/kube-scheduler:…
# k8s.gcr.io/kube-proxy:…
# k8s.gcr.io/pause:…
# k8s.gcr.io/etcd:…
# k8s.gcr.io/coredns:1.7.0
```

If init was not given `--image-repository registry.aliyuncs.com/google_containers`, these images come from the default registry, which requires a proxy from mainland China — or you pull them from another mirror and re-tag them.

### Note: use the internal IP for --apiserver-advertise-address

`--apiserver-advertise-address=192.168.0.5` must be an internal IP. Using a public IP fails like this (abridged; the certs are now signed for the public IP):

```
W0511 09:58:49.950542   20273 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k8s-master" could not be reached
	[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.125.1.250:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
...
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [116.65.37.123 127.0.0.1 ::1]
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [116.65.37.123 127.0.0.1 ::1]
...
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
```

As the message says, adding `--v=5` prints more detail. Running init again with the public IP:

```shell
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --apiserver-advertise-address=116.73.117.123 \
  --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16
```

now fails in preflight, because the previous attempt left ports and manifest files behind:

```
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10259]: Port 10259 is in use
	[ERROR Port-10257]: Port 10257 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```

### Reset k8s

```shell
kubeadm reset
# or force it:
kubeadm reset -f
```

Re-initializing after the reset, it still hung at the same place — the root cause was the public IP, a hole I dug for myself:

```
[kubelet-check] Initial timeout of 40s passed.
```
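When a failed init leaves ports and manifests behind as in the errors above, a fuller cleanup than a bare `kubeadm reset` is often needed (CNI config, the stale kubeconfig, iptables rules). A hedged sketch — the extra steps beyond `kubeadm reset` are my own additions, and the function defaults to executing, so pass `echo` first for a dry run:

```shell
# Sketch of a fuller cleanup after a failed `kubeadm init`.
# Pass "echo" as $1 to dry-run (print commands only); pass nothing to execute.
k8s_cleanup() {
  run=${1:-}
  $run kubeadm reset -f
  $run systemctl stop kubelet
  $run rm -rf /etc/cni/net.d            # leftover CNI config (e.g. flannel)
  $run rm -rf "$HOME/.kube/config"      # stale kubeconfig from the old cluster
  $run iptables -F                      # flush rules kube-proxy left behind
  $run systemctl start kubelet
}

# Dry run: print what would be executed
k8s_cleanup echo
```

After the real cleanup, re-run `kubeadm init` with the corrected (internal) `--apiserver-advertise-address`.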
Publisher: admin. Please credit the source when reposting: http://www.yc00.com/news/1688056224a72220.html