【ELK】【docker】【elasticsearch】2. Using Elasticsearch…
1. Pull the images

docker pull elasticsearch:6.5.4
docker pull kibana:6.5.4
2. Start the containers

docker run -d --name es1 -p 9200:9200 -p 9300:9300 --restart=always -e "discovery.type=single-node" elasticsearch:6.5.4
docker run -d -p 5601:5601 --name kibana --restart=always --link es1:elasticsearch kibana:6.5.4

If ES is only for testing, a single node is enough. If ES is going to serve production workloads, you need to start an ES cluster instead.

3. Access address

192.168.92.130:5601/status
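Before relying on the Kibana UI, you can confirm that ES itself answers on port 9200. A quick sanity check, using the same host IP as the rest of this post:

curl http://192.168.92.130:9200

A healthy node replies with a JSON body containing name, cluster_name, and a version block whose number field should read 6.5.4.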
4. Install the ik analyzer

Enter the es container:

sudo docker exec -it es1 /bin/bash
Enter the plugins directory:

cd plugins/
At this point, listing the plugins directory shows two plugin directories already present.
Download the ik archive matching your ES version [the plugin version must match the ES version; the URL below is the GitHub release download for v6.5.4]:

wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.5.4/elasticsearch-analysis-ik-6.5.4.zip
Create an ik directory to hold the files unpacked from the ik archive:

mkdir elasticsearch-analysis-ik
Unzip the ik archive into that directory:

unzip elasticsearch-analysis-ik-6.5.4.zip -d elasticsearch-analysis-ik
Delete the original archive:

rm -f elasticsearch-analysis-ik-6.5.4.zip
Exit the container, restart the es container, and watch the startup log for the plugin-loading message:

exit
docker restart es1
docker logs -f es1
Verify that the ik analyzer is installed [analyzer value: ik_max_word; if the install failed, this request returns an error!]. The two granularities are ik_max_word (fine-grained) and ik_smart (coarse-grained).

POST 192.168.92.130:9200/_analyze

Request body:

{
    "analyzer": "ik_max_word",
    "text": "德玛西亚之力在北韩打倒了变形金刚"
}

Result:

{
    "tokens": [
        { "token": "德", "start_offset": 0, "end_offset": 1, "type": "CN_CHAR", "position": 0 },
        { "token": "玛", "start_offset": 1, "end_offset": 2, "type": "CN_CHAR", "position": 1 },
        { "token": "西亚", "start_offset": 2, "end_offset": 4, "type": "CN_WORD", "position": 2 },
        { "token": "之力", "start_offset": 4, "end_offset": 6, "type": "CN_WORD", "position": 3 },
        { "token": "在", "start_offset": 6, "end_offset": 7, "type": "CN_CHAR", "position": 4 },
        { "token": "北韩", "start_offset": 7, "end_offset": 9, "type": "CN_WORD", "position": 5 },
        { "token": "打倒", "start_offset": 9, "end_offset": 11, "type": "CN_WORD", "position": 6 },
        { "token": "倒了", "start_offset": 10, "end_offset": 12, "type": "CN_WORD", "position": 7 },
        { "token": "变形金刚", "start_offset": 12, "end_offset": 16, "type": "CN_WORD", "position": 8 },
        { "token": "变形", "start_offset": 12, "end_offset": 14, "type": "CN_WORD", "position": 9 },
        { "token": "金刚", "start_offset": 14, "end_offset": 16, "type": "CN_WORD", "position": 10 }
    ]
}

The ik analyzer is installed successfully.
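The _analyze call above only exercises the analyzer ad hoc; to apply ik to real documents you set it on a field in the index mapping. A minimal sketch, using ES 6.x mapping syntax and assuming the same index/type/field names as the termvectors example in the next step:

PUT 192.168.92.130:9200/swapping

{
    "mappings": {
        "builder": {
            "properties": {
                "buildName": {
                    "type": "text",
                    "analyzer": "ik_max_word",
                    "search_analyzer": "ik_smart"
                }
            }
        }
    }
}

With this mapping, buildName values are tokenized with ik_max_word at index time, while queries against the field are tokenized with the coarser ik_smart.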
A bonus: to see how one field value of one document in a given index and type was tokenized, use the termvectors API. The format is:

{your index}/{your type}/{document id}/_termvectors?fields=${fieldName}

192.168.92.130:9200/swapping/builder/6/_termvectors?fields=buildName

[Note: the fields parameter takes an array of field names.]
5. Install the pinyin analyzer

Enter the container:

sudo docker exec -it es1 /bin/bash

Enter the plugins directory:

cd plugins/

Create the directory elasticsearch-analysis-pinyin:

mkdir elasticsearch-analysis-pinyin

Enter elasticsearch-analysis-pinyin and download the pinyin analyzer archive [again, the plugin version must match the ES version]:

cd elasticsearch-analysis-pinyin/
wget https://github.com/medcl/elasticsearch-analysis-pinyin/releases/download/v6.5.4/elasticsearch-analysis-pinyin-6.5.4.zip
Unzip the archive, and delete it once extraction finishes:

unzip elasticsearch-analysis-pinyin-6.5.4.zip
rm -f elasticsearch-analysis-pinyin-6.5.4.zip
Exit the container, restart es, and check the log:

exit
docker restart es1
docker logs -f es1

Verify that the pinyin analyzer is installed [the same _analyze request as before, with the analyzer set to pinyin]:

POST 192.168.92.130:9200/_analyze

Request body:

{
    "analyzer": "pinyin",
    "text": "德玛西亚之力在北韩打倒了变形金刚"
}

Result:

{
    "tokens": [
        { "token": "de", "start_offset": 0, "end_offset": 0, "type": "word", "position": 0 },
        { "token": "dmxyzlzbhddlbxjg", "start_offset": 0, "end_offset": 0, "type": "word", "position": 0 },
        { "token": "ma", "start_offset": 0, "end_offset": 0, "type": "word", "position": 1 },
        { "token": "xi", "start_offset": 0, "end_offset": 0, "type": "word", "position": 2 },
        { "token": "ya", "start_offset": 0, "end_offset": 0, "type": "word", "position": 3 },
        { "token": "zhi", "start_offset": 0, "end_offset": 0, "type": "word", "position": 4 },
        { "token": "li", "start_offset": 0, "end_offset": 0, "type": "word", "position": 5 },
        { "token": "zai", "start_offset": 0, "end_offset": 0, "type": "word", "position": 6 },
        { "token": "bei", "start_offset": 0, "end_offset": 0, "type": "word", "position": 7 },
        { "token": "han", "start_offset": 0, "end_offset": 0, "type": "word", "position": 8 },
        { "token": "da", "start_offset": 0, "end_offset": 0, "type": "word", "position": 9 },
        { "token": "dao", "start_offset": 0, "end_offset": 0, "type": "word", "position": 10 },
        { "token": "le", "start_offset": 0, "end_offset": 0, "type": "word", "position": 11 },
        { "token": "bian", "start_offset": 0, "end_offset": 0, "type": "word", "position": 12 },
        { "token": "xing", "start_offset": 0, "end_offset": 0, "type": "word", "position": 13 },
        { "token": "jin", "start_offset": 0, "end_offset": 0, "type": "word", "position": 14 },
        { "token": "gang", "start_offset": 0, "end_offset": 0, "type": "word", "position": 15 }
    ]
}

This proves the pinyin plugin is installed successfully.
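The pinyin analyzer's behavior is tunable. As a sketch of how you might register a custom pinyin tokenizer in the index settings (the option names are taken from the plugin's README, and the index name my_index is just an example):

PUT 192.168.92.130:9200/my_index

{
    "settings": {
        "analysis": {
            "analyzer": {
                "pinyin_analyzer": {
                    "tokenizer": "my_pinyin"
                }
            },
            "tokenizer": {
                "my_pinyin": {
                    "type": "pinyin",
                    "keep_first_letter": true,
                    "keep_full_pinyin": true,
                    "keep_original": false,
                    "lowercase": true
                }
            }
        }
    }
}

Fields mapped with pinyin_analyzer can then be matched by full pinyin or by first-letter abbreviations such as the dmxyzlzbhddlbxjg token seen above.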
6. Install the traditional/simplified conversion analyzer (stconvert)

Enter the es container:

sudo docker exec -it es1 /bin/bash

Enter the plugins directory:

cd plugins/

Create the conversion plugin's directory:

mkdir elasticsearch-analysis-stconvert

Enter the directory:

cd elasticsearch-analysis-stconvert/

Download the plugin archive:

wget https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v6.5.4/elasticsearch-analysis-stconvert-6.5.4.zip

Unzip the archive:

unzip elasticsearch-analysis-stconvert-6.5.4.zip

When extraction finishes, remove the original archive:

rm -f elasticsearch-analysis-stconvert-6.5.4.zip

Exit the container:

exit

Restart es:

docker restart es1

Check the log, then verify that the conversion plugin is installed.

URL:

POST 192.168.92.130:9200/_analyze

Request body:

{
    "analyzer": "stconvert",
    "text": "国际电视台"
}

Response:
The response comes back with the converted tokens: the traditional/simplified conversion plugin is installed successfully.
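The conversion direction is configurable rather than fixed. A minimal sketch of registering a custom analyzer with an explicit traditional-to-simplified direction (the type and parameter names follow the plugin's README; treat them as assumptions and check the README for your version):

PUT 192.168.92.130:9200/stconvert_demo

{
    "settings": {
        "analysis": {
            "char_filter": {
                "t2s_char_convert": {
                    "type": "stconvert",
                    "convert_type": "t2s"
                }
            },
            "analyzer": {
                "t2s_analyzer": {
                    "tokenizer": "keyword",
                    "char_filter": ["t2s_char_convert"]
                }
            }
        }
    }
}

With this analyzer, traditional input such as 國際電視臺 is normalized to its simplified form 国际电视台 before indexing.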
7. Install and start logstash

Pull logstash with docker:

docker pull logstash:6.5.4

Start logstash:

docker run -d -p 5044:5044 -p 9600:9600 --restart=always --name logstash logstash:6.5.4

Check the log:

docker logs -f logstash

The log shows that although logstash started successfully, it has not connected to es,
so the settings that point logstash at es must be changed.

Enter the logstash container:

docker exec -it logstash /bin/bash

Enter the config directory:

cd /usr/share/logstash/config/

Edit logstash.yml in this directory and change the es url to your own es IP:port.

Exit the container and restart logstash:

exit
docker restart logstash

The log now shows a successful start, and the address just configured in the es connection pool connects successfully.
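For reference, a sketch of what the relevant lines of logstash.yml might look like after the change. The monitoring URL key is the one the 6.x docker image ships with (its stock value points at http://elasticsearch:9200):

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://192.168.92.130:9200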
Back in kibana, check the status and health of the ELK stack. OK, the ELK setup is complete.
================================================= Appendix =================================================
I. What ELK means

If you've made it this far with things still a bit fuzzy, here is the short version: ELK is a complete solution for a distributed log analysis platform.
In ELK [all open-source software]:
E stands for es, which stores the log data [an open-source, persistent, distributed full-text search engine];
L stands for logstash, which collects the log data [an open-source data collection engine];
K stands for kibana, which displays the log data [an open-source analytics and visualization platform].
II. About logstash plugins

This calls for a little logstash background. Logstash's collection capability is actually implemented by individual plugins, and the three main plugin sections of a configuration are input ---> filter ---> output, as pictured in the original post; a complete sample pipeline follows this list.

input and output are required, while filter is optional.

The input plugin section specifies the data source, i.e. where the data to be collected comes from. A single pipeline may declare multiple input plugins. input can be stdin, file, or kafka.

The filter plugin section performs type conversion, field deletion, and data formatting on the raw data. It is not required. filter can be date, grok, dissect, mutate, json, geoip, or ruby.

The output plugin section writes the data to a specified destination. output can be stdout, file, or elasticsearch.
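As a concrete sketch of that input ---> filter ---> output shape, here is a minimal pipeline file. The file path is illustrative, and the grok pattern assumes Apache-style access logs; adapt both to your setup:

# /usr/share/logstash/pipeline/sample.conf — a minimal example pipeline
input {
  # receive events from filebeat on the port the container publishes
  beats {
    port => 5044
  }
}

filter {
  # parse each raw log line into structured fields; this section is optional
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  # write the parsed events to es, one index per day
  elasticsearch {
    hosts => ["192.168.92.130:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}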
====================================================================================================