SpringBoot + Kafka + ELK for Large-Scale Log Collection (Step by Step)


The overall flow is roughly: the Spring Boot application writes structured log files with Log4j2, Filebeat tails those files and ships them to Kafka, Logstash consumes the Kafka topics, parses each line and indexes it into ElasticSearch, and Kibana is used to query and visualize the result.

Server preparation

The server nodes are listed up front so you can refer back to them throughout the walkthrough:

| Node | Role | Notes |
| --- | --- | --- |
| 192.168.11.31 | SpringBoot deployment | |
| 192.168.11.35 | ElasticSearch node | Kibana is deployed on this node |
| 192.168.11.36 | ElasticSearch node | |
| 192.168.11.37 | ElasticSearch node | |
| 192.168.11.111 | Zookeeper node | Kafka registry / config center |
| 192.168.11.112 | Zookeeper node | Kafka registry / config center |
| 192.168.11.113 | Zookeeper node | Kafka registry / config center |
| 192.168.11.51 | Kafka node | This node is the Kafka broker |

SpringBoot project preparation

Bring in Log4j2 to replace Spring Boot's default logging. The relevant pom dependencies of the demo project: exclude spring-boot-starter-logging from spring-boot-starter-web, add spring-boot-starter-log4j2, and add disruptor 3.3.4 (used by Log4j2's async loggers):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <!-- drop the default Logback binding so Log4j2 can take over -->
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.4</version>
</dependency>
```

The Log4j2 layout for the collector log files wraps every field in square brackets so that Logstash can grok it later:

```
[%d{yyyy-MM-dd'T'HH:mm:ss.SSS}] [%level{length=5}] [%thread-%tid] [%logger] [%X{hostName}] [%X{ip}] [%X{applicationName}] [%F,%L,%C,%M] [%m] ## '%ex'%n
```
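The %X{...} slots in that pattern are resolved from the SLF4J MDC. As a minimal illustration (not part of the original project), anything put into the MDC under those keys is what ends up in the rendered log line; in the demo this job is done by the InputMDC class shown further below:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Sketch only: shows how the %X{hostName}/%X{ip}/%X{applicationName} conversion
// words pick up whatever the current thread has stored in the MDC.
public class MdcPatternDemo {
    private static final Logger log = LoggerFactory.getLogger(MdcPatternDemo.class);

    public static void main(String[] args) {
        MDC.put("hostName", "demo-host");        // normally filled by InputMDC
        MDC.put("ip", "127.0.0.1");
        MDC.put("applicationName", "collector");
        log.info("hello pattern");               // the [%X{...}] slots are now populated
    }
}
```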

IndexController — a test controller used to print log lines for debugging:

```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@Slf4j
@RestController
public class IndexController {

    @RequestMapping(value = "/index")
    public String index() {
        // push hostName / ip / applicationName into the MDC before logging
        InputMDC.putMDC();

        log.info("我是一条info日志");
        log.warn("我是一条warn日志");
        log.error("我是一条error日志");

        return "idx";
    }

    @RequestMapping(value = "/err")
    public String err() {
        InputMDC.putMDC();
        try {
            int a = 1 / 0;
        } catch (Exception e) {
            log.error("算术异常", e);
        }
        return "err";
    }
}
```

InputMDC fills the [%X{hostName}], [%X{ip}] and [%X{applicationName}] fields referenced by the log pattern:

```java
import org.slf4j.MDC;
import org.springframework.context.EnvironmentAware;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

@Component
public class InputMDC implements EnvironmentAware {

    private static Environment environment;

    @Override
    public void setEnvironment(Environment environment) {
        InputMDC.environment = environment;
    }

    public static void putMDC() {
        MDC.put("hostName", NetUtil.getLocalHostName());
        MDC.put("ip", NetUtil.getLocalIp());
        // spring.application.name is assumed here; the original property key was not preserved
        MDC.put("applicationName", environment.getProperty("spring.application.name"));
    }
}
```
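Calling InputMDC.putMDC() by hand in every handler is easy to forget. One possible refinement — a sketch, not part of the original demo — is a Spring MVC interceptor that populates the MDC once per request, so the controllers stay free of MDC plumbing:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical helper, not from the original article: fills the MDC for every
// incoming request instead of calling InputMDC.putMDC() in each controller.
@Configuration
public class MdcWebConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new HandlerInterceptor() {
            @Override
            public boolean preHandle(HttpServletRequest request,
                                     HttpServletResponse response,
                                     Object handler) {
                InputMDC.putMDC();
                return true;
            }
        });
    }
}
```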

NetUtil — the small network helper that InputMDC relies on for the host name and local IP:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketAddress;
import java.net.UnknownHostException;
import java.nio.channels.SocketChannel;
import java.util.Enumeration;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NetUtil {

    /** Normalize "host[:port]" to "host:port", defaulting the port to 80. */
    public static String normalizeAddress(String address) {
        String[] blocks = address.split("[:]");
        if (blocks.length > 2) {
            throw new IllegalArgumentException(address + " is invalid");
        }
        String host = blocks[0];
        int port = 80;
        if (blocks.length > 1) {
            port = Integer.valueOf(blocks[1]);
        } else {
            address += ":" + port; // use default 80
        }
        return String.format("%s:%d", host, port);
    }

    /** Replace a 0.0.0.0 bind address with the real local IP. */
    public static String getLocalAddress(String address) {
        String[] blocks = address.split("[:]");
        if (blocks.length != 2) {
            throw new IllegalArgumentException(address + " is invalid address");
        }
        String host = blocks[0];
        int port = Integer.valueOf(blocks[1]);
        if ("0.0.0.0".equals(host)) {
            return String.format("%s:%d", NetUtil.getLocalIp(), port);
        }
        return address;
    }

    /** Index of the first preference prefix this IP matches, or -1. */
    private static int matchedIndex(String ip, String[] prefix) {
        for (int i = 0; i < prefix.length; i++) {
            String p = prefix[i];
            if ("*".equals(p)) { // "*" means any public (non-private, non-loopback) IP
                if (ip.startsWith("127.") || ip.startsWith("10.")
                        || ip.startsWith("172.") || ip.startsWith("192.")) {
                    continue;
                }
                return i;
            } else {
                if (ip.startsWith(p)) {
                    return i;
                }
            }
        }
        return -1;
    }

    /** Pick the local IP that best matches the preference order, e.g. "*>10>172>192>127". */
    public static String getLocalIp(String ipPreference) {
        if (ipPreference == null) {
            ipPreference = "*>10>172>192>127";
        }
        String[] prefix = ipPreference.split("[> ]+");
        try {
            Pattern pattern = Pattern.compile("[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+");
            Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
            String matchedIp = null;
            int matchedIdx = -1;
            while (interfaces.hasMoreElements()) {
                NetworkInterface ni = interfaces.nextElement();
                Enumeration<InetAddress> en = ni.getInetAddresses();
                while (en.hasMoreElements()) {
                    InetAddress addr = en.nextElement();
                    String ip = addr.getHostAddress();
                    Matcher matcher = pattern.matcher(ip);
                    if (matcher.matches()) {
                        int idx = matchedIndex(ip, prefix);
                        if (idx == -1) continue;
                        if (matchedIdx == -1) {
                            matchedIdx = idx;
                            matchedIp = ip;
                        } else if (matchedIdx > idx) {
                            matchedIdx = idx;
                            matchedIp = ip;
                        }
                    }
                }
            }
            if (matchedIp != null) return matchedIp;
            return "127.0.0.1";
        } catch (Exception e) {
            return "127.0.0.1";
        }
    }

    public static String getLocalIp() {
        return getLocalIp("*>10>172>192>127");
    }

    public static String remoteAddress(SocketChannel channel) {
        SocketAddress addr = channel.socket().getRemoteSocketAddress();
        return String.format("%s", addr);
    }

    public static String localAddress(SocketChannel channel) {
        SocketAddress addr = channel.socket().getLocalSocketAddress();
        String res = String.format("%s", addr);
        return addr == null ? res : res.substring(1); // drop the leading "/"
    }

    /** Current JVM pid, taken from the RuntimeMXBean name ("pid@hostname"). */
    public static String getPid() {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        String name = runtime.getName();
        int index = name.indexOf("@");
        if (index != -1) {
            return name.substring(0, index);
        }
        return null;
    }

    public static String getLocalHostName() {
        try {
            return (InetAddress.getLocalHost()).getHostName();
        } catch (UnknownHostException uhe) {
            // fall back to parsing the host out of the exception message
            String host = uhe.getMessage();
            if (host != null) {
                int colon = host.indexOf(':');
                if (colon > 0) {
                    return host.substring(0, colon);
                }
            }
            return "UnknownHost";
        }
    }
}
```

Start the project and hit the /index and /err endpoints: the application log file and the error log file both appear in the project's log directory (these are the files Filebeat will tail below). The Spring Boot service is deployed on 192.168.11.31.

Kafka installation and startup

Kafka installation steps: Kafka depends on Zookeeper, so first have a Zookeeper environment ready (three nodes are enough), then build the Kafka broker.

```bash
## extract (this is the 2.12-2.1.0 build that gets renamed below)
tar -zxvf kafka_2.12-2.1.0.tgz -C /usr/local/
## rename
mv kafka_2.12-2.1.0/ kafka_2.12
## go into the extracted directory and edit server.properties
vim /usr/local/kafka_2.12/config/server.properties
```

The key settings in server.properties point the broker at its own address (192.168.11.51:9092, the address Filebeat and Logstash use later), at a local log directory, and at the three Zookeeper nodes:

```properties
broker.id=0
port=9092
host.name=192.168.11.51
log.dirs=/usr/local/kafka_2.12/kafka-logs
# the original value was not preserved; the topics below set their partition count explicitly
num.partitions=1
zookeeper.connect=192.168.11.111:2181,192.168.11.112:2181,192.168.11.113:2181
```

```bash
## create the log directory
mkdir /usr/local/kafka_2.12/kafka-logs
## start kafka
/usr/local/kafka_2.12/bin/kafka-server-start.sh /usr/local/kafka_2.12/config/server.properties &
```

Create the two topics:

```bash
/usr/local/kafka_2.12/bin/kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic app-log-collector --partitions 1 --replication-factor 1
/usr/local/kafka_2.12/bin/kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic error-log-collector --partitions 1 --replication-factor 1
```
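If you also want to sanity-check the broker from the Java side (an extra sketch, not part of the original setup), a bare-bones consumer can tail the app-log-collector topic and print whatever Filebeat later publishes there:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Assumed verification snippet: prints every message that lands on the
// app-log-collector topic of the 192.168.11.51 broker.
public class AppLogTopicPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.11.51:9092");
        props.put("group.id", "app-log-peek");   // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("app-log-collector"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // raw JSON event shipped by Filebeat
                }
            }
        }
    }
}
```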

We can check the topic status:

```bash
/usr/local/kafka_2.12/bin/kafka-topics.sh --zookeeper 192.168.11.111:2181 --topic app-log-collector --describe
```

You can see that the app-log-collector and error-log-collector topics are both up.

Filebeat installation and startup

Download Filebeat, extract and rename it:

```bash
cd /usr/local/software
tar -zxvf filebeat-6.6.0-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local
mv filebeat-6.6.0-linux-x86_64/ filebeat-6.6.0
```

Configure Filebeat; the yml below can be used as a reference:

```bash
vim /usr/local/filebeat-6.6.0/filebeat.yml
```

```yaml
###################### Filebeat Configuration Example #########################
filebeat.prospectors:

- input_type: log
  paths:
    ## the file name (app-<service-name>.log) is hard-coded on purpose so that
    ## rotated history files are not picked up again
    - /usr/local/logs/app-collector.log
  ## _type written to ES
  document_type: "app-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'   # match lines starting with a 2017-11-15 08:04:23:889 style timestamp
    pattern: '^\['                 # match lines starting with "["
    negate: true                   # lines NOT matching the pattern are continuations
    match: after                   # append them to the end of the previous line
    max_lines: 2000                # maximum number of lines to merge
    timeout: 2s                    # flush if no new log event arrives within this time
  fields:
    logbiz: collector
    logtopic: app-log-collector    ## per-service value, used as the kafka topic
    evn: dev

- input_type: log
  paths:
    - /usr/local/logs/error-collector.log
  document_type: "error-log"
  multiline:
    #pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})'
    pattern: '^\['
    negate: true
    match: after
    max_lines: 2000
    timeout: 2s
  fields:
    logbiz: collector
    logtopic: error-log-collector  ## per-service value, used as the kafka topic
    evn: dev

output.kafka:
  enabled: true
  hosts: ["192.168.11.51:9092"]
  topic: '%{[fields.logtopic]}'
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
logging.to_files: true
```

Starting Filebeat — first check that the configuration parses:

```bash
cd /usr/local/filebeat-6.6.0
./filebeat -c filebeat.yml -configtest
## Config OK
```

Then start it and confirm it is running:

```bash
/usr/local/filebeat-6.6.0/filebeat &
ps -ef | grep filebeat
```

With Filebeat running, hit 192.168.11.31:8001/index and 192.168.11.31:8001/err again and look at Kafka's kafka-logs directory: the app-log-collector-0 and error-log-collector-0 partition directories have appeared, which means Filebeat has collected the data and pushed it into Kafka.

Logstash installation

The Logstash installation itself is not covered here; there is plenty of material online. In the Logstash install directory, create a script folder and a pipeline config file inside it (the file name was not preserved in the source; logstash-script.conf is used here and in the start command below — any name works as long as the -f path matches):

```bash
mkdir /usr/local/logstash-6.6.0/script
cd /usr/local/logstash-6.6.0/script
vim logstash-script.conf
```

The pipeline consumes both topic families from Kafka, parses each line with grok against the Log4j2 pattern, and writes the result into ElasticSearch:

```conf
## the multiline handling also works for other stack-style output, e.g. linux kernel logs
input {
  kafka {
    ## app-log-<service-name>
    topics_pattern => "app-log-.*"
    bootstrap_servers => "192.168.11.51:9092"
    codec => json
    consumer_threads => 1          ## number of parallel consumer threads
    decorate_events => true
    #auto_offset_reset => "latest"
    group_id => "app-log-group"
  }

  kafka {
    ## error-log-<service-name>
    topics_pattern => "error-log-.*"
    bootstrap_servers => "192.168.11.51:9092"
    codec => json
    consumer_threads => 1
    decorate_events => true
    #auto_offset_reset => "latest"
    group_id => "error-log-group"
  }
}

filter {

  ## timezone conversion, used later as the index name suffix
  ruby {
    code => "event.set('index_time',event.timestamp.time.localtime.strftime('%Y.%m.%d'))"
  }

  if "app-log" in [fields][logtopic] {
    grok {
      ## matches the Log4j2 pattern the Spring Boot app writes
      match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (''|%{QUOTEDSTRING:throwable})"]
    }
  }

  if "error-log" in [fields][logtopic] {
    grok {
      match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (''|%{QUOTEDSTRING:throwable})"]
    }
  }
}

## test output to the console:
output {
  stdout { codec => rubydebug }
}

## elasticsearch output:
output {

  if "app-log" in [fields][logtopic] {
    ## es plugin
    elasticsearch {
      # es address
      hosts => ["192.168.11.35:9200"]
      # credentials
      user => "elastic"
      password => "123456"
      ## index name; anything after a leading + is treated as a date format,
      ## e.g. javalog-app-service-2019.01.23
      index => "app-log-%{[fields][logbiz]}-%{index_time}"
      # sniff the cluster node list (see 192.168.11.35:9200/_nodes/http?pretty)
      # so log traffic is load-balanced across the es cluster; usually true
      sniffing => true
      # logstash ships a default mapping template; overwrite it
      template_overwrite => true
    }
  }

  if "error-log" in [fields][logtopic] {
    elasticsearch {
      hosts => ["192.168.11.35:9200"]
      user => "elastic"
      password => "123456"
      index => "error-log-%{[fields][logbiz]}-%{index_time}"
      sniffing => true
      template_overwrite => true
    }
  }
}
```

Start Logstash:

```bash
/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/script/logstash-script.conf &
```

Once it is up, hit 192.168.11.31:8001/err again and the parsed events start printing on the console.

ElasticSearch and Kibana

| Node | Role | Notes |
| --- | --- | --- |
| 192.168.11.35 | ElasticSearch node | Kibana is deployed on this node |
| 192.168.11.36 | ElasticSearch node | |
| 192.168.11.37 | ElasticSearch node | |

Setting up the ElasticSearch cluster and Kibana is not covered in this post; there is plenty of material online you can follow. Once they are running, open the Kibana management page at 192.168.11.35:5601, choose Management -> Kibana -> Index Patterns, then click Create index pattern:

1. For index pattern, enter app-log-*
2. For Time Filter field name, choose currentDateTime

The index pattern is now created. Hit 192.168.11.31:8001/err once more, and Kibana shows a matching log document with the full set of parsed fields.

With that, the complete log collection and visualization pipeline is in place!
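As a final check outside Kibana (again just a sketch, not part of the original walkthrough), you can query ElasticSearch directly for the newest document in the app-log-* indices, using Java 11's built-in HttpClient and the elastic/123456 credentials configured in the Logstash output:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Assumed verification snippet: fetches one document from the app-log-* indices
// on the 192.168.11.35 node and prints the raw JSON response.
public class EsIndexCheck {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("elastic:123456".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://192.168.11.35:9200/app-log-*/_search?size=1&pretty"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // the hit should contain currentDateTime, hostName, ip, applicationName, messageInfo ...
        System.out.println(response.body());
    }
}
```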
