Collecting Kubernetes Platform Logs with ELK + Filebeat



Should logs live inside the container, on the host through a volume mount, or on remote storage? Take the first case: nothing is changed, the container starts as-is and the logs stay in their original directory inside it. But containers are deleted, destroyed, and recreated constantly, so a persistent way of keeping logs is needed.

Why the traditional approach fails:

- Logs kept inside a container are deleted together with the container.
- With a large number of containers, inspecting logs the traditional way is no longer realistic.

Characteristics of containerized workloads:

- Dense deployment, many collection targets: for logs written to the console, Docker itself already provides a collection mechanism (its logging driver); for logs written to files inside the container there is still no good built-in way to collect them.
- Elastic scaling: the attributes of newly scaled-out Pods (log file path, log source) may change.

Which logs to collect: Kubernetes component logs and application logs. Component logs are written to fixed files on the host and are collected like traditional logs; application logs split further into standard output and log files.

Three ELK collection architectures (pipeline: collect → store → analyze → visualize):

1. A log collector on every node: deploy the collector as a DaemonSet and harvest the logs under /var/log/pods/ or /var/lib/docker/containers/ on that node.
2. A dedicated log-collection container attached to each Pod: add a collector container to every application Pod and share the log directory through an emptyDir volume so the collector can read the application's files (a sketch of this pattern follows this list).
3. Direct push from the application: the application ships its logs straight to remote storage, bypassing Docker and Kubernetes management entirely.
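Option 2 is not built out later in this walkthrough, so here is a minimal sketch of what such a Pod could look like; the names, the busybox log generator, and the image tag are illustrative assumptions, not part of the original setup. A real sidecar would additionally need a filebeat.yml (e.g. from a ConfigMap) pointing at /var/log/app/*.log and at Elasticsearch:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar    # hypothetical example name
spec:
  containers:
  - name: app
    image: busybox
    # Simulate an application that writes a log file
    command: ["sh", "-c", "while true; do echo $(date) hello >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: filebeat-sidecar
    image: elastic/filebeat:7.5.0
    # The sidecar reads the same directory through the shared emptyDir
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}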

Cluster plan (installed with kubeadm):

IP address        Node name     Installed services
192.168.10.171    K8s-master    NFS, core Kubernetes components
192.168.10.172    node-1        NFS, core Kubernetes components
192.168.10.173    node-2        NFS-server, core Kubernetes components

Deploying NFS

NFS is deployed here to provide dynamically provisioned storage. Set up the NFS server; the firewall and SELinux must be disabled first.

yum install -y nfs-utils    # required on every node

Configure the shared directory. no_root_squash keeps root privileges on the mount instead of squashing root to the anonymous user (normally root would be mapped to nobody):

[root@node-2 ~]# echo "/ifs/kubernetes 192.168.10.0/24(rw,no_root_squash)" > /etc/exports

# For multiple export entries, do not overwrite the file with >; append with >> instead
[root@node-2 ~]# mkdir -p /ifs/kubernetes    # create the shared directory if it does not exist

Start NFS and enable it at boot:

[root@node-2 ~]# systemctl enable nfs && systemctl start nfs

List the exported directories (nodes where the NFS service is not running cannot query this):

[root@node-2 ~]# showmount -e
Export list for node-2:
/ifs/kubernetes 192.168.10.0/24
[root@node-2 ~]#

Deploy the NFS plugin that creates PVs automatically:

yum install -y git
git clone https://github.com/kubernetes-incubator/external-storage
cd external-storage/nfs-client/deploy/

# Apply in order
kubectl apply -f rbac.yaml        # grant access to the apiserver
kubectl apply -f deployment.yaml  # deploy the provisioner; edit the NFS server address and shared directory inside it
kubectl apply -f class.yaml       # create the storage class; decide whether to enable archiving
kubectl get sc                    # list storage classes
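To confirm that dynamic provisioning works before building on it, a minimal test PVC can be applied. This sketch assumes the storage class created by class.yaml is named managed-nfs-storage (the name the Elasticsearch manifest below relies on); test-claim is just an illustrative name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim    # hypothetical name, used only for this check
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

After applying it, kubectl get pvc should show test-claim as Bound within a few seconds; the claim can then be deleted.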

Deploying ELK on Kubernetes

Deploying Elasticsearch

[root@k8s-maste ~]# cat elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.5.0
        name: elasticsearch
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: "discovery.type"
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch

Apply the manifest:

[root@k8s-maste ~]# kubectl apply -f elasticsearch.yaml
statefulset.apps/elasticsearch created
service/elasticsearch created
[root@k8s-maste ~]# kubectl get pods -n kube-system elasticsearch-0

NAME              READY   STATUS    RESTARTS   AGE
elasticsearch-0   1/1     Running   0          50s
[root@k8s-maste ~]#
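At this point it is worth confirming that the NFS provisioner bound the PVC created by the volumeClaimTemplates, and that Elasticsearch answers on port 9200. A quick check, assuming nothing beyond the manifests above (the throwaway pod name and busybox image are arbitrary choices):

[root@k8s-maste ~]# kubectl get pvc -n kube-system
[root@k8s-maste ~]# kubectl run es-check --rm -i --image=busybox --restart=Never -n kube-system \
    -- wget -qO- http://elasticsearch:9200/_cluster/health

A healthy single-node deployment reports "status":"green" or "yellow".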

Deploying Kibana

[root@k8s-maste ~]# cat kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-system
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.5.0
        resources:
          limits:
            cpu: 1
            memory: 500Mi
          requests:
            cpu: 0.5
            memory: 200Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch.kube-system:9200
        - name: I18N_LOCALE
          value: zh-CN
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30601
  selector:
    k8s-app: kibana

Apply the manifest:

[root@k8s-maste ~]# kubectl apply -f kibana.yaml

deployment.apps/kibana created
service/kibana created
[root@k8s-maste ~]# kubectl get pods -n kube-system |grep kibana

NAME                      READY   STATUS    RESTARTS   AGE
kibana-6cd7b9d48b-jrx79   1/1     Running   0          3m3s
[root@k8s-maste ~]# kubectl get svc -n kube-system kibana

NAME     TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kibana   NodePort   10.98.15.252   <none>        5601:30601/TCP   105s
[root@k8s-maste ~]#
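Kibana should now be reachable through the NodePort on any node of the cluster plan, e.g. http://192.168.10.171:30601. A quick reachability check (the IP is the master from the table above; any node IP works):

[root@k8s-maste ~]# curl -I http://192.168.10.171:30601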

Collecting container stdout logs with Filebeat

Filebeat can discover container logs dynamically.

[root@k8s-maste ~]# cat filebeat-kubernetes.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.5.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch.kube-system
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

The Elasticsearch address is specified here:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']

This input defines a processor that automatically enriches each log event with Kubernetes attributes. With a traditional log collector, the important parameters are the log path, the log source, regular expressions, and log formatting; here the add_kubernetes_metadata processor attaches the Kubernetes metadata for us:

data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

And here Docker's working directory is mounted into the Pod via hostPath:

      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Apply the manifest:

[root@k8s-maste ~]# kubectl apply -f filebeat-kubernetes.yaml

configmap/filebeat-config created
configmap/filebeat-inputs created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
[root@k8s-maste ~]# kubectl get pods -n kube-system | egrep "kibana|elasticsearch|filebeat"

NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-0           1/1     Running   0          12m
filebeat-bf5tt            1/1     Running   0          3m15s
filebeat-vjbf5            1/1     Running   0          3m15s
filebeat-sw1zt            1/1     Running   0          3m15s
kibana-6cd7b9d48b-jrx79   1/1     Running   0          35m

Once the DaemonSet is running, log collection starts automatically.
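To illustrate what add_kubernetes_metadata contributes, here is a trimmed, hypothetical example of a collected event as it lands in Elasticsearch; the values are invented, but the kubernetes.* field layout is what the processor produces:

{
  "message": "GET /healthz 200",
  "kubernetes": {
    "namespace": "default",
    "pod": { "name": "web-5d7b8c6f4-x2x9k" },
    "container": { "name": "web" },
    "labels": { "app": "web" }
  }
}

These fields are what make it possible to filter logs by namespace, pod, or label in Kibana.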

Collecting log files written to disk

To collect /var/log/messages, deploy a Filebeat on every node, i.e. use a DaemonSet, mount the host's messages file into the container, and point the Filebeat configuration at that file. The ConfigMap and DaemonSet are written into a single YAML file below. Note: to schedule onto the master as well, a toleration for its taint must be configured.

[root@k8s-maste ~]# cat k8s-logs.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/messages
      fields:
        app: k8s
        type: module
      fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "k8s-module"
    setup.template.pattern: "k8s-module-*"

    output.elasticsearch:
      hosts: ['elasticsearch.kube-system:9200']
      index: "k8s-module-%{+yyyy.MM.dd}"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: elastic/filebeat:7.5.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /var/log/messages
      volumes:
      - name: k8s-logs
        hostPath:
          path: /var/log/messages
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config

The key part is mounting the host's file into the container so Filebeat can read it directly:

        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /var/log/messages
      volumes:
      - name: k8s-logs
        hostPath:
          path: /var/log/messages
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config

Apply the manifest:

[root@k8s-maste ~]# kubectl apply -f k8s-logs.yaml

configmap/k8s-logs-filebeat-config created
daemonset.apps/k8s-logs created
[root@k8s-maste ~]# kubectl get pods -n kube-system|grep k8s-logs
NAME             READY   STATUS    RESTARTS   AGE
k8s-logs-6q6f6   1/1     Running   0          37s
k8s-logs-7qkrt   1/1     Running   0          37s
k8s-logs-y58hj   1/1     Running   0          37s
[root@k8s-maste ~]#
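Before creating the index pattern, the index can be verified directly in Elasticsearch; the throwaway pod name and busybox image here are arbitrary choices:

[root@k8s-maste ~]# kubectl run index-check --rm -i --image=busybox --restart=Never -n kube-system \
    -- sh -c 'wget -qO- http://elasticsearch:9200/_cat/indices | grep k8s-module'

An entry such as k8s-module-2023.07.17 indicates Filebeat is shipping /var/log/messages successfully.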

Finally, go to Kibana and add an index pattern for k8s-module-*.
