Published 2023-06-30.
Notes on Deploying Fabric on a K8s Cluster

1. Install Docker

Prerequisites: 64-bit CentOS 7 with a kernel newer than 3.10 (check with uname -r).

Docker CE can be installed in two ways: set up Docker's yum repository and install from it, or download the RPM package and install it by hand (for example on an isolated machine with no internet access).

Install from the repository:

Make sure the yum packages are up to date:
sudo yum update

Install the required system tools (yum-utils provides the yum-config-manager utility used to configure yum repositories; the devicemapper storage driver needs device-mapper-persistent-data and lvm2):
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Configure the repository. Aliyun mirror (recommended):
sudo yum-config-manager --add-repo /docker-ce/linux/centos/docker-
Official repository:
sudo yum-config-manager --add-repo /linux/centos/

Refresh the yum package index:
sudo yum makecache fast

List every Docker version available in the repository:
sudo yum list docker-ce --showduplicates | sort -r

Install the latest Docker CE:
sudo yum install docker-ce

Or install a specific version:
sudo yum install docker-ce-<version>

Manage the Docker service:
sudo systemctl start docker     # start
sudo systemctl restart docker   # restart
sudo systemctl enable docker    # start on boot

Show the Docker version:
sudo docker version

Run hello-world as a smoke test (Docker pulls the hello-world image, starts it, and it prints "Hello from Docker!"):
sudo docker run hello-world

List the installed Docker packages:
yum list installed | grep docker

Uninstall Docker CE:
sudo yum remove docker-ce

Delete all images, containers, and storage volumes:
sudo rm -rf /var/lib/docker

2. Install Go

Unpack the tarball:
$ sudo tar -C /usr/local -xzf

Add the environment variables to /etc/profile, $HOME/.profile, or a file under /etc/profile.d/:
$ sudo vi /etc/profile.d/        # add the following
export GOROOT=/usr/local/go              # set GOROOT
export PATH=$PATH:/usr/local/go/bin      # add go/bin to PATH
export GOPATH=/home/gopath               # set GOPATH

Check the Go version:
$ go version
go version go1.12.5 linux/amd64

Check GOROOT:
$ go env GOROOT
/usr/local/go

Check GOPATH:
$ echo $GOPATH
/home/gopath

3. Install Node.js

Unpack the tarball:
tar -C /usr/local -xzf

Move and rename the extracted directory (renaming is optional):
cd /usr/local/
mv node-v12.18.2-linux-x64/ nodejs

Make npm and node available everywhere. Symlinks (recommended):
ln -s /usr/local/nodejs/bin/npm /usr/local/bin/
ln -s /usr/local/nodejs/bin/node /usr/local/bin/

Verify the installation:
node -v
npm -v

4. Install K8s

1 System preparation

Check the OS version:
[root@localhost]# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)

Configure the network:
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=039303a5-c70d-4973-8c91-97eaa071c23d
DEVICE=enp0s3
ONBOOT=yes
IPADDR=192.168.122.21
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS1=223.5.5.5

Add the Aliyun yum source:
[root@localhost ~]# rm -rfv /etc/.d/*
[root@localhost ~]# curl -o /etc/.d/ /repo/

Configure the hostname:
[root@master01 ~]# cat /etc/hosts
127.0.0.1 localhost omain localhost4 omain4
::1 localhost omain localhost6 omain6
192.168.122.21 master01

Turn off swap and comment out the swap entry in fstab:
[root@master01 ~]# swapoff -a
[root@master01 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Mar 31 22:44:34 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root                       /      xfs  defaults 0 0
UUID=5fecb240-379b-4331-ba04-f41338e81a6e /boot  ext4 defaults 1 2
/dev/mapper/cl-home                       /home  xfs  defaults 0 0
#/dev/mapper/cl-swap                      swap   swap defaults 0 0

Configure the kernel parameters so that bridged IPv4 traffic is passed to the iptables chains:
[root@master01 ~]# cat > /etc/sysctl.d/ <<EOF
-nf-call-ip6tables = 1
-nf-call-iptables = 1
EOF
sysctl --system

2 Install common packages
[root@master01 ~]# yum install vim bash-completion net-tools gcc -y

4 Install kubectl, kubelet, and kubeadm

Add the Aliyun Kubernetes source:
[root@master01 ~]# cat <<EOF > /etc/.d/
[kubernetes]
name=Kubernetes
baseurl=/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=/kubernetes/yum/doc/ /kubernetes/yum/doc/
EOF

Install:
[root@master01 ~]# yum install kubectl kubelet kubeadm
[root@master01 ~]# systemctl enable kubelet

5 Initialize the K8s cluster

[root@master01 ~]# kubeadm init \
  --kubernetes-version=1.18.0 \
  --apiserver-advertise-address=192.168.122.21 \
  --image-repository /google_containers \
  --service-cidr=10.10.0.0/16 \
  --pod-network-cidr=10.122.0.0/16

The pod network is 10.122.0.0/16, and the API server address is the master's own IP. This step is critical: by default kubeadm pulls the required images from the official registry, which is unreachable from mainland China, so --image-repository points it at the Aliyun mirror instead.

On success, the initialization returns output like this:

W0408 09:36:36.121603   14098 :202] WARNING: kubeadm cannot validate component configs for API groups [ .k8s
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ kubernetes t ] and IPs
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ localhost] and IPs [192.168.122.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ localhost] and IPs [192.168.122.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "" kubeconfig file
[kubeconfig] Writing "" kubeconfig file
[kubeconfig] Writing "" kubeconfig file
[kubeconfig] Writing "" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0408 09:36:43.343191   14098 :225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0408 09:36:43.344303   14098 :225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.002541 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node as control-plane by adding the label "/master=''"
[mark-control-plane] Marking the node as control-plane by adding the taints [/master:NoSchedule]
[bootstrap-token] Using token: 2xhzetpktfz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/ $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.21:6443 --token 2xhzetpktfz \
    --discovery-token-ca-cert-hash sha256:daded8514c8350f7c238204979039ff9884d5b595ca950ba8bbce80724fd65d4
[root@master01 ~]#

Save the kubeadm join command at the end of this output; it is what every other node must run to join the Kubernetes cluster.

Create the kubectl configuration as the output suggests:
[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/ $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable kubectl auto-completion:
[root@master01 ~]# source <(kubectl completion bash)

Check the nodes:
[root@master01 ~]# kubectl get node
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   2m29s   v1.18.0
[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-7ff77c879f-fsj9l   0/1     Pending   0          2m12s
kube-system   coredns-7ff77c879f-q5ll2   0/1     Pending   0          2m12s
kube-system                              1/1     Running   0          2m22s
kube-system                              1/1     Running   0          2m22s
kube-system                              1/1     Running   0          2m22s
kube-system   kube-proxy-th472           1/1     Running   0          2m12s
kube-system                              1/1     Running   0          2m22s
[root@master01 ~]#

The node is NotReady because the CoreDNS pods have not started: the network pods are still missing.

6 Install the Calico network

[root@master01 ~]# kubectl apply -f /manifests/
configmap/calico-config created
/calico-kube-controllers created
/calico-kube-controllers created
/calico-node created
/calico-node created
/calico-node created
serviceaccount/calico-node created
/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the pods and node again:
[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-555fc8cc5c-k8rbk   1/1     Running   0          36s
kube-system   calico-node-5km27                          1/1     Running   0          36s
kube-system   coredns-7ff77c879f-fsj9l                   1/1     Running   0          5m22s
kube-system   coredns-7ff77c879f-q5ll2                   1/1     Running   0          5m22s
kube-system                                              1/1     Running   0          5m32s
kube-system                                              1/1     Running   0          5m32s
kube-system                                              1/1     Running   0          5m32s
kube-system   kube-proxy-th472                           1/1     Running   0          5m22s
kube-system                                              1/1     Running   0          5m32s
[root@master01 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   5m47s   v1.18.0
[root@master01 ~]#

The cluster is now healthy.

7 Install kubernetes-dashboard

The official dashboard manifest does not expose the service through a NodePort, so download the yaml file locally and add one in the Service section:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     /licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kube-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-dashboard
---
kind: Role
apiVersion: /v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: /v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: [""]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: /v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-dashboard
roleRef:
  apiGroup:
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-dashboard
---
apiVersion: /v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup:
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc7
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kube-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: /master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        /pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: /master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

The modified Service, for reference:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

Apply it:
[root@master01 ~]# kubectl create -f
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
/kubernetes-dashboard created
/kubernetes-dashboard created
/kubernetes-dashboard created
/kubernetes-dashboard created
/kubernetes-dashboard created
service/dashboard-metrics-scraper created
/dashboard-metrics-scraper created

Check the pod and service:
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-dc6947fbf-869kf   1/1     Running   0          37s
kubernetes-dashboard-5d4dc8b976-sdxxt       1/1     Running   0          37s
[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.58.93                  8000/TCP        44s
kubernetes-dashboard        NodePort    10.10.132.66                 443:30000/TCP   44s
[root@master01 ~]#

Open the dashboard in a browser (Firefox is recommended) and log in with a token, obtained with the command below:
[root@master01 ~]# kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard-token-t4hxz | grep token | awk 'NR==3{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50

If, after logging in, there is no namespace to choose and the dashboard reports that resources cannot be found, it is a permissions problem. The dashboard logs show the following:

[root@master01 ~]# kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-5d4dc8b976-sdxxt
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: namespaces is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashbo
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/cronjob/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.16
2020/04/08 01:54:31 Getting list of all cron jobs in the cluster
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashb
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 192.168.122.21:7788: { contents hidden }
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 19
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-da
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" can
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: events is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" c
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 192.168.122.21:7788:
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 Getting list of all deployments in the cluster
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-da
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" can
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: events is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" c
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-das
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/job/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.12
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.1
2020/04/08 01:54:31 Getting list of all jobs in the cluster
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboar
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" can
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: events is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" c
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 Getting list of all pods in the cluster
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" can
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: events is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" c
2020/04/08 01:54:31 Getting pod metrics
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/replicaset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.
2020/04/08 01:54:31 Getting list of all replica sets in the cluster
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Incoming HTTP/2.0 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-das
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" can
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: events is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" c
2020/04/08 01:54:31 [2020-04-08T01:54:31Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 01:54:31 Getting list of all replication controllers in the cluster
2020/04/08 01:54:31 Non-critical error occurred during resource retrieval: replicationcontrollers is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes

The fix:
[root@master01 ~]# kubectl create clusterrolebinding serviceaccount-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccount
/serviceaccount-cluster-admin created

Check the dashboard logs again:
[root@master01 ~]# kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-5d4dc8b976-sdxx
2020/04/08 02:07:03 Getting list of namespaces
2020/04/08 02:07:03 [2020-04-08T02:07:03Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:08 [2020-04-08T02:07:08Z] Incoming HTTP/2.0 GET /api/v1/node?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.122.21:
2020/04/08 02:07:08 [2020-04-08T02:07:08Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:08 [2020-04-08T02:07:08Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 192.168.122.21:7788:
2020/04/08 02:07:08 Getting list of namespaces
2020/04/08 02:07:08 [2020-04-08T02:07:08Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:13 [2020-04-08T02:07:13Z] Incoming HTTP/2.0 GET /api/v1/node?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.122.21:
2020/04/08 02:07:13 [2020-04-08T02:07:13Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:13 [2020-04-08T02:07:13Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 192.168.122.21:7788:
2020/04/08 02:07:13 Getting list of namespaces
2020/04/08 02:07:13 [2020-04-08T02:07:13Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:18 [2020-04-08T02:07:18Z] Incoming HTTP/2.0 GET /api/v1/node?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.122.21:
2020/04/08 02:07:18 [2020-04-08T02:07:18Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:18 [2020-04-08T02:07:18Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 192.168.122.21:7788:
2020/04/08 02:07:18 Getting list of namespaces
2020/04/08 02:07:18 [2020-04-08T02:07:18Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:23 [2020-04-08T02:07:23Z] Incoming HTTP/2.0 GET /api/v1/node?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.122.21:
2020/04/08 02:07:23 [2020-04-08T02:07:23Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:23 [2020-04-08T02:07:23Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 192.168.122.21:7788:
2020/04/08 02:07:23 Getting list of namespaces
2020/04/08 02:07:23 [2020-04-08T02:07:23Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:28 [2020-04-08T02:07:28Z] Incoming HTTP/2.0 GET /api/v1/node?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.122.21:
2020/04/08 02:07:28 [2020-04-08T02:07:28Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:28 [2020-04-08T02:07:28Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 192.168.122.21:7788:
2020/04/08 02:07:28 Getting list of namespaces
2020/04/08 02:07:28 [2020-04-08T02:07:28Z] Outcoming response to 192.168.122.21:7788 with 200 status code
2020/04/08 02:07:33 [2020-04-08T02:07:33Z] Incoming HTTP/2.0 GET /api/v1/node?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.122.21:
2020/04/08 02:07:33 [2020-04-08T02:07:33Z] Outcoming response to 192.168.122.21:7788 with 200 status code

Reload the dashboard, and the cluster resources are now shown.

5. Deploy the blockchain network

Clone IBM's sample repository and run its setup script:
**********************:IBM/
chmod +x setup_
./setup_

Done :) All that is left to do is wait.

TIPS: this process pulls images from Docker Hub, so you may need a little of the well-known network capability.

When everything has finished:

$ kubectl get pods
NAME                                    READY   STATUS      RESTARTS   AGE
blockchain-ca-77459f9b84-zd8qt          1/1     Running     0          2m
blockchain-orderer-5c88f8cf95-25862     1/1     Running     0          2m
blockchain-org1peer1-7d95cbfd64-wcvz6   1/1     Running     0          2m
blockchain-org2peer1-d85dfcfc7-4kpmz    1/1     Running     0          2m
blockchain-org3peer1-6cffb6cbc7-stnj5   1/1     Running     0          2m
blockchain-org4peer1-84486f557c-pmvq8   1/1     Running     0          2m
chaincodeinstall-skd26                  0/4     Completed   0          1m
chaincodeinstantiate-jm85m              0/1     Completed   0          1m
copyartifacts-kmcp9                     0/1     Completed   0          3m
createchannel-p6ggq                     0/2     Completed   0          2m
joinchannel-lhqnx                       0/4     Completed   0          2m
utils-vr8fr                             0/2     Completed   0          2m

Congratulations, you have deployed the Fabric network.
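The "wait until all pods settle" check above can be scripted. Below is a minimal sketch; the not_ready helper is my own illustration and not part of the IBM setup script. It only parses the tabular output of kubectl, so it can be tested without a cluster.

```shell
#!/bin/sh
# not_ready: read `kubectl get pods --no-headers` output on stdin and
# print the name of every pod whose STATUS column is neither "Running"
# nor "Completed". Exit status is 1 if any such pod exists, else 0.
not_ready() {
    awk '$3 != "Running" && $3 != "Completed" { print $1; bad = 1 }
         END { exit bad }'
}
```

Against a live cluster you would use it as `kubectl get pods --no-headers | not_ready && echo "all pods settled"`, looping with `sleep` until it succeeds.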
Publisher: admin. Please credit the source when reposting: http://www.yc00.com/news/1688056508a72287.html