Kubernetes Full-Stack Architect (Resource Scheduling, Part 2) -- Study Notes

Published June 30, 2023

Contents: StatefulSet scaling · StatefulSet update strategies · StatefulSet canary releases · StatefulSet cascading and non-cascading deletion · DaemonSet · Using a DaemonSet · DaemonSet updates and rollbacks · Label & Selector · What is HPA? · HPA hands-on

## Scaling a StatefulSet

Check the nginx replicas:

```shell
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS       AGE
web-0   1/1     Running   1 (7h1m ago)   22h
web-1   1/1     Running   1 (7h1m ago)   22h
web-2   1/1     Running   1 (7h1m ago)   22h
```

StatefulSet replicas start in ordinal order 0, 1, 2: web-1 is created only after web-0 is fully started, and web-2 only after web-1. Deletion runs in the reverse order, starting from the highest ordinal: 2, 1, 0. If web-0 crashes while web-2 is being deleted, web-1 will not be deleted until web-0 is back in the Ready state.

Open another window and watch the StatefulSet:

```shell
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME    READY   STATUS    RESTARTS        AGE
web-0   1/1     Running   1 (7h14m ago)   22h
web-1   1/1     Running   1 (7h14m ago)   22h
web-2   1/1     Running   1 (7h14m ago)   22h
```

Scale up to 5 replicas:

```shell
[root@k8s-master01 ~]# kubectl scale --replicas=5 sts web
statefulset.apps/web scaled
```

The watch output shows the new Pods starting in order:

```shell
web-3   0/1   Pending             0   0s
web-3   0/1   ContainerCreating   0   0s
web-3   1/1   Running             0   1s
web-4   0/1   Pending             0   0s
web-4   0/1   ContainerCreating   0   0s
web-4   1/1   Running             0   1s
```

Scale back down to 2 replicas:

```shell
[root@k8s-master01 ~]# kubectl scale --replicas=2 sts web
statefulset.apps/web scaled
```

The watch output shows deletion in the reverse of the start order:

```shell
web-4   1/1   Terminating   0               14m
web-4   0/1   Terminating   0               14m
web-3   1/1   Terminating   0               14m
web-3   0/1   Terminating   0               14m
web-2   1/1   Terminating   1 (7h29m ago)   22h
web-2   0/1   Terminating   1 (7h29m ago)   22h
```

During a rolling update a StatefulSet deletes the old replica first and then creates the new one, so with only a single replica the service becomes unavailable during the update. Choose between StatefulSet and Deployment based on your actual needs; if you need fixed hostnames or Pod names, use a StatefulSet.

Check the hostname:

```shell
[root@k8s-master01 ~]# kubectl exec -ti web-0 -- sh
# hostname
web-0
# exit
```

## StatefulSet update strategies

Like a Deployment, a StatefulSet supports several update strategies: RollingUpdate and OnDelete.

### RollingUpdate

Check the current strategy:

```shell
[root@k8s-master01 ~]# kubectl get sts -o yaml
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
```
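For reference, the `web` StatefulSet used throughout these examples can be sketched as follows. The course's actual manifest was not preserved in this copy, so the field values below are assumptions reconstructed from the outputs (Pod names `web-0`..`web-2`, image `nginx:1.15.2`, Service `nginx`):

```yaml
# Headless Service: required so each Pod gets a stable DNS identity
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None        # headless: no virtual IP, one DNS record per Pod
  selector:
    app: nginx
  ports:
  - name: web
    port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx     # must reference the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        ports:
        - containerPort: 80
```

Pods of a StatefulSet are named `<name>-<ordinal>` (web-0, web-1, web-2), which is why the listings in this section always show that fixed naming.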
The default is a rolling update, which replaces Pods from the highest ordinal down.

Scale to 3 replicas:

```shell
[root@k8s-master01 ~]# kubectl scale --replicas=3 sts web
statefulset.apps/web scaled
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS     AGE
web-0   1/1     Running   0            53m
web-1   1/1     Running   1 (8h ago)   23h
web-2   1/1     Running   0            15s
```

The rolling update order is web-2, web-1, web-0 — highest ordinal first. If web-0 crashes during the update, the rollout waits until web-0 is Ready again before continuing from the highest remaining ordinal.

Open another window to watch:

```shell
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          13s
web-1   1/1     Running   0          23s
web-2   1/1     Running   0          33s
```

Change the image to trigger an update:

```shell
[root@k8s-master01 ~]# kubectl edit sts web
# type /image and press Enter to find the field, then change it:
  - image: nginx:1.15.3
```

Watch the update in progress:

```shell
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS        RESTARTS     AGE
web-0   1/1     Running       0            58m
web-1   0/1     Terminating   1 (8h ago)   23h
web-2   1/1     Running       0            4s
```

The watch window shows each Pod replaced in turn, from the highest ordinal down:

```shell
web-2   1/1   Terminating         0   101s
web-2   0/1   Terminating         0   110s
web-2   0/1   Pending             0   0s
web-2   0/1   ContainerCreating   0   0s
web-2   1/1   Running             0   2s
web-1   1/1   Terminating         0   102s
web-1   0/1   Terminating         0   110s
web-1   0/1   Pending             0   0s
web-1   0/1   ContainerCreating   0   0s
web-1   1/1   Running             0   1s
web-0   1/1   Terminating         0   101s
web-0   0/1   Terminating         0   110s
web-0   0/1   Pending             0   0s
web-0   0/1   ContainerCreating   0   0s
web-0   1/1   Running             0   1s
```

### OnDelete

Change the update strategy to OnDelete:

```shell
[root@k8s-master01 ~]# kubectl edit sts web
# change the following:
  updateStrategy:
    type: OnDelete
```

Change the image again:

```shell
[root@k8s-master01 ~]# kubectl edit sts web
# type /image and press Enter, then change:
  - image: nginx:1.15.2
```

Check the Pods — nothing was updated:

```shell
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m26s
web-1   1/1     Running   0          3m36s
web-2   1/1     Running   0          3m49s
```

Delete a Pod manually to trigger the update:

```shell
[root@k8s-master01 ~]# kubectl delete po web-2
pod "web-2" deleted
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          5m6s
web-1   1/1     Running   0          5m16s
web-2   1/1     Running   0          9s
```

Check web-2's image — it was updated:

```shell
[root@k8s-master01 ~]# kubectl get po web-2 -oyaml | grep image
  - image: nginx:1.15.2
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.2
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
```

Check web-1's image — it was not updated, so the image changes only when a Pod is deleted:

```shell
[root@k8s-master01 ~]# kubectl get po web-1 -oyaml | grep image
  - image: nginx:1.15.3
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.3
    imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7cce7d3d3
```

Delete the remaining two Pods:

```shell
[root@k8s-master01 ~]# kubectl delete po web-0 web-1
pod "web-0" deleted
pod "web-1" deleted
```

The watch window shows them re-created in the order they were deleted:

```shell
web-0   0/1   Pending             0   0s
web-0   0/1   ContainerCreating   0   0s
web-0   1/1   Running             0   1s
web-1   0/1   Pending             0   0s
web-1   0/1   ContainerCreating   0   0s
web-1   1/1   Running             0   1s
```

Check all Pod images — all three now run the new image:

```shell
[root@k8s-master01 ~]# kubectl get po -oyaml | grep image
  - image: nginx:1.15.2
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.2
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
  - image: nginx:1.15.2
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.2
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
  - image: nginx:1.15.2
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.2
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
```
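The two strategies demonstrated above differ only in one field of the StatefulSet spec. As a quick summary, the relevant fragment looks like this (a sketch; `partition` is covered in the next section):

```yaml
spec:
  # Alternative A: the default — replace Pods automatically, highest ordinal first
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0       # 0 = update all Pods
  # Alternative B: re-create a Pod with the new template only when it is deleted
  # updateStrategy:
  #   type: OnDelete
```

OnDelete is useful when you want to decide exactly when each replica picks up a change, as in the delete-one-Pod-at-a-time walkthrough above.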

## StatefulSet canary releases

Change the configuration:

```shell
[root@k8s-master01 ~]# kubectl edit sts web
# change the following:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # Pods with an ordinal lower than 2 will not be updated
```

Open another window to watch:

```shell
[root@k8s-master01 ~]# kubectl get po -l app=nginx -w
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          44h
web-1   1/1     Running   0          44h
web-2   1/1     Running   0          44h
```

Change the image (nginx:1.15.2 -> nginx:1.15.3):

```shell
[root@k8s-master01 ~]# kubectl edit sts web
# change the following:
  spec:
    containers:
    - image: nginx:1.15.3
```

The watch output shows that only Pods with an ordinal of 2 or higher are updated:

```shell
web-2   1/1   Terminating         0   44h
web-2   0/1   Terminating         0   44h
web-2   0/1   Pending             0   0s
web-2   0/1   ContainerCreating   0   0s
web-2   1/1   Running             0   3s
```

Check the images — web-2 runs nginx:1.15.3 while the other two still run nginx:1.15.2:

```shell
[root@k8s-master01 ~]# kubectl get po -oyaml | grep image
  - image: nginx:1.15.2
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.2
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
  - image: nginx:1.15.2
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.2
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
  - image: nginx:1.15.3
    imagePullPolicy: IfNotPresent
    image: nginx:1.15.3
    imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7cce7d3d3
```

This mechanism enables canary releases: publish one or two instances first, confirm there are no problems, then release all the rest. This is the StatefulSet's partitioned (staged) update, which amounts to a canary mechanism; other approaches, such as a service mesh, can achieve the same goal.

## StatefulSet cascading and non-cascading deletion

- Cascading deletion: deleting the sts also deletes its Pods.
- Non-cascading deletion: deleting the sts leaves its Pods running.

Get the sts:

```shell
[root@k8s-master01 ~]# kubectl get sts
NAME   READY   AGE
web    3/3     2d20h
```

Cascading deletion:

```shell
[root@k8s-master01 ~]# kubectl delete sts web
statefulset.apps "web" deleted
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS        RESTARTS   AGE
web-0   0/1     Terminating   0          45h
web-1   0/1     Terminating   0          45h
web-2   0/1     Terminating   0          11m
```

Re-create it from the original manifest (the file name was lost in this copy); the Service defined in the same file already exists, so only the StatefulSet is created:

```shell
[root@k8s-master01 ~]# kubectl create -f <file>
statefulset.apps/web created
Error from server (AlreadyExists): error when creating "<file>": services "nginx" already exists
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          7s
web-1   1/1     Running   0          5s
```

Non-cascading deletion:

```shell
[root@k8s-master01 ~]# kubectl delete sts web --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan
statefulset.apps "web" deleted
```

The sts is gone:

```shell
[root@k8s-master01 ~]# kubectl get sts
No resources found in default namespace.
```

But the Pods still exist — they are simply no longer managed by an sts, so deleting them will not re-create them:

```shell
[root@k8s-master01 ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m37s
web-1   1/1     Running   0          3m35s
[root@k8s-master01 ~]# kubectl delete po web-1 web-0
pod "web-1" deleted
pod "web-0" deleted
[root@k8s-master01 ~]# kubectl get po
NAME   READY   STATUS   RESTARTS   AGE
```

## DaemonSet

A DaemonSet (daemon set, abbreviated ds) deploys one Pod on every node, or on every node matching a selector.

Typical DaemonSet use cases:

- cluster storage daemons, such as ceph or glusterd
- the per-node CNI network plugin, such as calico
- per-node log collection: fluentd or filebeat
- per-node monitoring: node exporter
- service exposure: deploying an ingress nginx
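As an illustration of the log-collection use case, a DaemonSet for a fluentd-style agent might look like the following. This is a sketch, not from the course: the image tag, namespace, and paths are assumptions.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd           # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.14          # illustrative tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true              # the agent only reads node logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log              # node's log directory, mounted into every Pod
```

Because it is a DaemonSet, one copy of the agent lands on every node, which is exactly what per-node log collection needs.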

## Using a DaemonSet

Create a DaemonSet manifest. The course copies an existing Deployment manifest and edits it in vim (the file names were lost in this copy) so that it reads:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
```

Create the ds; since no nodeSelector is configured, it starts one Pod on every node:

```shell
[root@k8s-master01 ~]# kubectl create -f <file>
daemonset.apps/nginx created
[root@k8s-master01 ~]# kubectl get po -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE
nginx-2xtms              1/1     Running   0          90s     172.25.244.196   k8s-master01
nginx-66bbc9fdc5-4xqcw   1/1     Running   0          5m43s   172.25.244.195   k8s-master01
nginx-ct4xh              1/1     Running   0          90s     172.17.125.2     k8s-node01
nginx-hx9ws              1/1     Running   0          90s     172.27.14.195    k8s-node02
nginx-mjph9              1/1     Running   0          90s     172.18.195.2     k8s-master03
nginx-p64rf              1/1     Running   0          90s     172.25.92.67     k8s-master02
```

(The hash-named nginx-66bbc9fdc5-4xqcw belongs to an earlier Deployment, not the DaemonSet.)

Label the nodes that should run the Pod:

```shell
[root@k8s-master01 ~]# kubectl label node k8s-node01 k8s-node02 ds=true
node/k8s-node01 labeled
node/k8s-node02 labeled
[root@k8s-master01 ~]# kubectl get node --show-labels
NAME           STATUS   AGE   VERSION   LABELS
k8s-master01   Ready    3d    v1.20.9   kubernetes.io/arch=amd64,kubernetes.io/os=linux,...
k8s-master02   Ready    3d    v1.20.9   kubernetes.io/arch=amd64,kubernetes.io/os=linux,...
k8s-master03   Ready    3d    v1.20.9   kubernetes.io/arch=amd64,kubernetes.io/os=linux,...
k8s-node01     Ready    3d    v1.20.9   kubernetes.io/arch=amd64,kubernetes.io/os=linux,ds=true,...
k8s-node02     Ready    3d    v1.20.9   kubernetes.io/arch=amd64,kubernetes.io/os=linux,ds=true,...
```

Edit the manifest and add a nodeSelector (under spec.template.spec):

```yaml
    spec:
      nodeSelector:
        ds: "true"
```

Apply the change:

```shell
[root@k8s-master01 ~]# kubectl replace -f <file>
```
Check the Pods — those on nodes that do not match the label were removed:

```shell
[root@k8s-master01 ~]# kubectl get po -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE
nginx-66bbc9fdc5-4xqcw   1/1     Running   0          15m   172.25.244.195   k8s-master01
nginx-gd6sp              1/1     Running   0          44s   172.27.14.196    k8s-node02
nginx-pl4dz              1/1     Running   0          47s   172.17.125.3     k8s-node01
```

## DaemonSet updates and rollbacks

StatefulSet and DaemonSet updates and rollbacks work the same way as a Deployment's. For a DaemonSet the recommended update strategy is OnDelete:

```yaml
updateStrategy:
  type: OnDelete
```

Because a DaemonSet may be deployed on many nodes of the cluster, you can test on a few nodes first: deleting their Pods triggers the update there without affecting the other nodes.

View the update history:

```shell
kubectl rollout history ds nginx
```

## Label & Selector

- Label: classifies and groups the various resources in Kubernetes by attaching a tag with a particular attribute.
- Selector: a filter syntax for finding the resources that carry a given label.

When Kubernetes "groups" any API objects, such as Pods and nodes, it attaches Labels (key=value "key-value pairs") to them so that the corresponding objects can be selected precisely; a Selector (label selector) is the query method for matching objects.

For example, the common label tier distinguishes a container's role, such as frontend or backend, and a release_track label distinguishes its environment, such as canary or production.

### Label

Define a Label:

```shell
[root@k8s-master01 ~]# kubectl label node k8s-node02 region=subnet7
node/k8s-node02 labeled
```

Filter by it with a Selector:

```shell
[root@k8s-master01 ~]# kubectl get no -l region=subnet7
NAME         STATUS   AGE     VERSION
k8s-node02   Ready    3d17h   v1.17.3
```

In a Deployment or another controller, schedule Pods onto that node:

```yaml
containers:
  ......
dnsPolicy: ClusterFirst
nodeSelector:
  region: subnet7
restartPolicy: ...
```

Label a Service:

```shell
[root@k8s-master01 ~]# kubectl label svc canary-v1 -n canary-production env=canary version=v1
service/canary-v1 labeled
[root@k8s-master01 ~]# kubectl get svc -n canary-production --show-labels
NAME        TYPE        CLUSTER-IP      PORT(S)    AGE   LABELS
canary-v1   ClusterIP   10.110.253.62   8080/TCP   24h   env=canary,version=v1
```

List all svcs with version=v1 across namespaces:

```shell
[root@k8s-master01 canary]# kubectl get svc --all-namespaces -l version=v1
NAMESPACE           NAME        TYPE        CLUSTER-IP      PORT(S)    AGE
canary-production   canary-v1   ClusterIP   10.110.253.62   8080/TCP   25h
```

### Selector

A Selector is used mainly for matching resources: only resources that meet its conditions are called or used, and this mechanism can be used to allocate the various resources in a cluster.

Suppose we want to match conditions against the Labels that currently exist:

```shell
[root@k8s-master01 ~]# kubectl get svc --show-labels
NAME          TYPE        CLUSTER-IP       PORT(S)    AGE     LABELS
details       ClusterIP   10.99.9.178      9080/TCP   45h     app=details
kubernetes    ClusterIP   10.96.0.1        443/TCP    3d19h   component=apiserver,provider=kubernetes
nginx         ClusterIP   10.106.194.137   80/TCP     2d21h   app=productpage,version=v1
nginx-v2      ClusterIP   10.108.176.132   80/TCP     2d20h
productpage   ClusterIP   10.105.229.52    9080/TCP   45h     app=productpage,tier=frontend
ratings       ClusterIP   10.96.104.95     9080/TCP   45h     app=ratings
reviews       ClusterIP   10.102.188.143   9080/TCP   45h     app=reviews
```

Select svcs whose app is details or productpage:

```shell
[root@k8s-master01 ~]# kubectl get svc -l 'app in (details, productpage)' --show-labels
NAME          TYPE        CLUSTER-IP       PORT(S)    AGE     LABELS
details       ClusterIP   10.99.9.178      9080/TCP   45h     app=details
nginx         ClusterIP   10.106.194.137   80/TCP     2d21h   app=productpage,version=v1
productpage   ClusterIP   10.105.229.52    9080/TCP   45h     app=productpage,tier=frontend
```

Select svcs whose app is details or productpage, but excluding version=v1:

```shell
[root@k8s-master01 ~]# kubectl get svc -l version!=v1,'app in (details, productpage)' --show-labels
NAME          TYPE        CLUSTER-IP      PORT(S)    AGE   LABELS
details       ClusterIP   10.99.9.178     9080/TCP   45h   app=details
productpage   ClusterIP   10.105.229.52   9080/TCP   45h   app=productpage,tier=frontend
```

Select svcs that have a label with key app:

```shell
[root@k8s-master01 ~]# kubectl get svc -l app --show-labels
NAME          TYPE        CLUSTER-IP       PORT(S)    AGE     LABELS
details       ClusterIP   10.99.9.178      9080/TCP   45h     app=details
nginx         ClusterIP   10.106.194.137   80/TCP     2d21h   app=productpage,version=v1
productpage   ClusterIP   10.105.229.52    9080/TCP   45h     app=productpage,tier=frontend
ratings       ClusterIP   10.96.104.95     9080/TCP   45h     app=ratings
reviews       ClusterIP   10.102.188.143   9080/TCP   45h     app=reviews
```
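The same set-based operators used on the command line (`in`, `notin`, key-exists) can also be written declaratively in a controller's selector via `matchExpressions`. A sketch — the Deployment name and labels below are illustrative, not from the course:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
    matchExpressions:    # set-based requirements, like 'app in (...)' on the CLI
    - key: version
      operator: NotIn
      values: ["v1"]
  template:
    metadata:
      labels:            # must satisfy the selector above
        app: productpage
        version: v2
    spec:
      containers:
      - name: app
        image: nginx:1.15.2
```

`matchLabels` entries are ANDed with every `matchExpressions` requirement, mirroring the comma-separated conditions in the `kubectl get -l` examples above.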
In practice Labels change frequently; use the --overwrite flag to modify one, for example changing version=v1 to version=v2:

```shell
[root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
NAME        TYPE        CLUSTER-IP      PORT(S)    AGE   LABELS
canary-v1   ClusterIP   10.110.253.62   8080/TCP   26h   env=canary,version=v1
[root@k8s-master01 canary]# kubectl label svc canary-v1 -n canary-production version=v2 --overwrite
service/canary-v1 labeled
[root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
NAME        TYPE        CLUSTER-IP      PORT(S)    AGE   LABELS
canary-v1   ClusterIP   10.110.253.62   8080/TCP   26h   env=canary,version=v2
```

Delete a label, for example version, by appending a minus sign to the key:

```shell
[root@k8s-master01 canary]# kubectl label svc canary-v1 -n canary-production version-
service/canary-v1 labeled
[root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
NAME        TYPE        CLUSTER-IP      PORT(S)    AGE   LABELS
canary-v1   ClusterIP   10.110.253.62   8080/TCP   26h   env=canary
```

## What is HPA?

HPA is the Horizontal Pod Autoscaler. Kubernetes does not recommend VPA (vertical scaling): since a cluster has many nodes, it is better to spread traffic across different nodes than to concentrate it on the same one.

- HPA v1 is the stable version of horizontal autoscaling and supports only the CPU metric.
- v2 is in beta: v2beta1 supports CPU, memory, and custom metrics; v2beta2 supports CPU, memory, custom metrics, and external metrics (ExternalMetrics).

## HPA hands-on

Prerequisites:

- metrics-server (or another custom metrics server) must be installed
- the Pods must have the requests parameter configured
- objects that cannot be scaled, such as a DaemonSet, cannot be autoscaled

Export a yaml file with dry-run so it can be edited (the target file name was lost in this copy):

```shell
kubectl create deployment hpa-nginx --image=/dotbalo/nginx --dry-run=client -oyaml > <file>
```

Edit the file and add a requests parameter to the containers:

```yaml
containers:
- image: /dotbalo/nginx
  name: nginx
  resources:
    requests:
      cpu: 10m
```

Create it, expose a service, and configure autoscaling:

```shell
kubectl create -f <file>
kubectl expose deployment hpa-nginx --port=80
kubectl autoscale deployment hpa-nginx --cpu-percent=10 --min=1 --max=10
```

Drive CPU up in a loop; when the loop is stopped, CPU drops and the Deployment scales back down:

```shell
while true; do wget -q -O- 192.168.42.44 > /dev/null; done
```
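The `kubectl autoscale` command above is equivalent to creating an HPA object. A sketch in the stable autoscaling/v1 API (CPU only, matching the v1 capabilities described above):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-nginx
spec:
  scaleTargetRef:            # the object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-nginx
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10   # scale out when avg CPU exceeds 10% of requests
```

The target percentage is measured against the Pods' CPU requests, which is why the `requests: cpu: 10m` setting above is a hard prerequisite.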

Publisher: admin. Please credit the source when reprinting: http://www.yc00.com/news/1688056745a72330.html
