How to Handle Server Logs

When things go south with our applications — as they sometimes do, whether we like it or not — our log files are normally among the first places where we go when we start the troubleshooting process. The big “but” here is that despite the fact that log files contain a wealth of helpful information about events, they are usually extremely difficult to decipher.

A modern web application environment consists of multiple log sources, which collectively output thousands of log lines written in unintelligible machine language. If you, for example, have a LAMP stack set up, then you have PHP, Apache, and MySQL logs to go through. Add system and environment logs into the fray — together with framework-specific logs such as Laravel logs — and you end up with an endless pile of machine data.

Talk about a needle in a haystack.

The ELK Stack (Elasticsearch, Logstash, and Kibana) is quickly becoming the most popular way to handle this challenge. Already the most popular open-source log analysis platform — with 500,000 downloads a month, according to Elastic — ELK is a great way to centralize logs from multiple sources, identify correlations, and perform deep-data analysis.

Elasticsearch is a search-and-analytics engine based on Apache Lucene that allows users to search and analyze large amounts of data in almost real time. Logstash can ingest and forward logs from anywhere to anywhere. Kibana is the stack’s pretty face — a user interface that allows you to query, visualize, and explore Elasticsearch data easily.

This article will describe how to set up the ELK Stack on a local development environment, ship web server logs (Apache logs in this case) into Elasticsearch using Logstash, and then analyze the data in Kibana.

Installing Java

The ELK Stack requires Java 7 or higher (only Oracle’s Java and the OpenJDK are supported), so as an initial step, update your system and run the following:

sudo apt-get install default-jre
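If you want to confirm that a suitable runtime is now available before moving on, a quick version check is usually enough. Nothing here is specific to ELK; it simply assumes the default-jre package above installed cleanly:

# Print the installed Java version; ELK needs Java 7 or higher.
java -version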
Installing ELK

There are numerous ways of installing the ELK Stack — you can use Docker, Ansible, Vagrant, Microsoft Azure, AWS, or a hosted ELK solution — just take your pick. There is a vast number of tutorials and guides that will help you along the way, one being the guide that we put together.

Installing Elasticsearch

We’re going to start the installation process by installing Elasticsearch. There are various ways of setting up Elasticsearch, but we will use Apt.

First, download and install Elastic’s public signing key:

wget -qO - /GPG-KEY-elasticsearch | sudo apt-key add -

Next, save the repository definition to /etc/apt/sources.list.d/:

echo "deb /elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list

Last but not least, update the repository cache and install Elasticsearch:

sudo apt-get update && sudo apt-get install elasticsearch

Elasticsearch is now installed. Before we continue to the next components, we’re going to tweak the configuration file a bit:

sudo nano /etc/elasticsearch/elasticsearch.yml

Some common configurations involve the restriction of external access to Elasticsearch, so data cannot be hacked or deleted via the HTTP API:

network.host: localhost

You can now restart Elasticsearch:

sudo service elasticsearch restart

To verify that Elasticsearch is running properly, query the following URL using the cURL command:

sudo curl 'localhost:9200'

You should see the following output in your terminal:

{
  "name" : "Jebediah Guthrie",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.1",
    "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
    "build_timestamp" : "2016-04-04T12:25:05Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

To make the service start on boot, run:

sudo update-rc.d elasticsearch defaults 95 10

Installing Logstash

Logstash, the “L” in the “ELK Stack”, is used at the beginning of the log pipeline, ingesting and collecting data before sending it on to Elasticsearch.

To install Logstash, add the repository definition to your /etc/apt/sources.list file:

echo "deb /logstash/2.2/debian stable main" | sudo tee -a /etc/apt/sources.list

Update your system so that the repository will be ready for use and then install Logstash:

sudo apt-get update && sudo apt-get install logstash

We’ll be returning to Logstash later to configure log shipping into Elasticsearch.

Installing Kibana

The final piece of the puzzle is Kibana – the ELK Stack’s pretty face. First, create the Kibana source list:

echo "deb /kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list

Then, update and install Kibana:

sudo apt-get update && sudo apt-get install kibana

Configure Kibana by editing its configuration file at /opt/kibana/config/kibana.yml:

sudo vi /opt/kibana/config/kibana.yml

Uncomment the following lines:

server.port: 5601
server.host: "0.0.0.0"

Last but not least, start Kibana:

sudo service kibana start

To start analyzing logs in Kibana, at least one index pattern needs to be defined. An index is how Elasticsearch organizes data, and it can be compared to a database in the world of RDBMS, with mappings defining multiple types.
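If you want to peek at what Elasticsearch is actually storing while you work through the next steps, the cat APIs are a handy sanity check. This assumes Elasticsearch is still listening on localhost:9200 as configured above; the list will be empty for now and should typically show daily logstash-* indices once Logstash starts shipping data:

# List all indices with a header row; empty until logs are shipped.
curl 'localhost:9200/_cat/indices?v'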
You will notice that since we have not yet shipped any logs, Kibana is unable to fetch the mapping (as indicated by the grey button at the bottom of the page). We will take care of this in the next few steps.

Tip: By default, Kibana connects to the Elasticsearch instance running on localhost, but you can connect to a different Elasticsearch instance. Simply modify the Elasticsearch URL in the Kibana configuration file that you edited earlier and then restart Kibana.

Shipping Logs

Our next step is to set up a log pipeline into Elasticsearch for indexing and analysis using Kibana. There are various ways of forwarding data into Elasticsearch, but we’re going to use Logstash.

Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three plugin sections: input, filter, and output.

Create a configuration file (the name is arbitrary; here it is called apache.conf):

sudo vi /etc/logstash/conf.d/apache.conf

Our first task is to configure the input section, which defines where data is being pulled from.

In this case, we’re going to define the path to our Apache access log, but you could enter a path to any other set of log files (e.g. the path to your PHP error logs).

Before doing so, however, I recommend doing some research into supported input plugins and how to define them. In some cases, other dedicated log forwarders are recommended.

The input configuration:

input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
  }
}
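As mentioned above, other log files can be picked up the same way. Here is a minimal sketch of what that might look like for PHP errors — the path and the type label are assumptions for illustration only and depend on your own PHP configuration:

input {
  file {
    # Hypothetical PHP error log location; match it to your php.ini error_log setting.
    path => "/var/log/php_errors.log"
    # A free-form type tag so these events can be told apart from the Apache ones later.
    type => "php-error"
  }
}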
Our next task is to configure a filter.

Filter plugins allow us to take our raw data and try to make sense of it. One of these plugins is grok — a plugin used to derive structure out of unstructured data. Using grok, you can define a search and extract part of your log lines into structured fields.

filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}

The last section of the Logstash configuration file is the output section, which defines the location to where the logs are sent. In our case, it is our local Elasticsearch instance on our localhost:

output {
  elasticsearch {}
}
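An empty elasticsearch {} block simply uses the plugin’s defaults: the local instance and daily logstash-* indices. If you ever need to point Logstash at a different cluster or name the index yourself, the hosts and index options of the elasticsearch output cover that. The values below are placeholders for illustration, not part of this setup:

output {
  elasticsearch {
    # Hypothetical remote node; by default the plugin talks to the local instance.
    hosts => ["elasticsearch.example.com:9200"]
    # Custom index name pattern; the default is logstash-%{+YYYY.MM.dd}.
    index => "apache-logs-%{+YYYY.MM.dd}"
  }
}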
That’s it. Once you’re done, start Logstash with the new configuration:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/apache.conf

You should see the following JSON output from Logstash indicating that all is in order:

{
        "message" => "127.0.0.1 - - [24/Apr/2016:11:41:59 +0000] \"GET / HTTP/1.1\" 200 11764 \"-\" \"curl/7.35.0\"",
       "@version" => "1",
     "@timestamp" => "2016-04-24T11:43:34.245Z",
           "path" => "/var/log/apache2/access.log",
           "host" => "ip-172-31-46-40",
           "type" => "apache-access",
       "clientip" => "127.0.0.1",
          "ident" => "-",
           "auth" => "-",
      "timestamp" => "24/Apr/2016:11:41:59 +0000",
           "verb" => "GET",
        "request" => "/",
    "httpversion" => "1.1",
       "response" => "200",
          "bytes" => "11764",
       "referrer" => "\"-\"",
          "agent" => "\"curl/7.35.0\""
}

Refresh Kibana in your browser, and you’ll notice that the index pattern for our Apache logs has been identified.

Click the Create button, and then select the Discover tab.

From this point onwards, Logstash is tailing the Apache access log for messages so that any new entries will be forwarded into Elasticsearch.

Analyzing Logs

Now that our pipeline is up and running, it’s time to have some fun.

To make things a bit more interesting, let’s simulate some noise on our web server. To do this I’m going to download some sample log entries and insert them into the Apache access log. Logstash is already tailing this log, so these messages will be indexed into Elasticsearch and displayed in Kibana:

wget /sample-data
sudo -i
cat /home/ubuntu/sample-data >> /var/log/apache2/access.log

Searching

Searching is the bread and butter of the ELK Stack, and it’s an art unto itself. There is a large amount of documentation available online, but I thought I’d cover the essentials so that you will have a solid base from which to start your exploration work.

Let’s start with some simple searches.

The most basic search is the “free text” search that is performed against all indexed fields. For example, if you’re analyzing web server logs, you could search for a specific browser type (searching is performed using the wide search box at the top of the page):

Chrome

It’s important to note that free text searches are NOT case-sensitive unless you use double quotes, in which case the search results show exact matches to your query.

“Chrome”

Next up are the field-level searches.

To search for a value in a specific field, you need to add the name of the field as a prefix to the value:

type:apache-access

Say, for example, that you’re looking for a specific web server response. Enter response:200 to limit results to those containing that response.

You can also search for a range within a field. If you use brackets [], the results will be inclusive. If you use curly braces {}, the results will exclude the specified values in the query.
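For example, two illustrative range queries against the response field that grok extracted above (the bounds themselves are arbitrary examples, not values from this walkthrough):

response:[400 TO 499]
response:{399 TO 500}

The first matches every response from 400 to 499, inclusive of both endpoints; the second excludes the stated endpoints (399 and 500), so in practice it returns the same 4xx range.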
Now, it’s time to take it up a notch.

The next types of searches involve using logical statements. These are quite intuitive but require some finesse because they are extremely syntax-sensitive.

These statements include the use of the Boolean operators AND, OR, and NOT:

type:apache-access AND (response:400 OR response:500)

In the above search, I’m looking for Apache access logs with only a 400 or 500 response. Note the use of parentheses as an example of how more complex queries can be constructed.
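The NOT operator works the same way. For instance, to see everything except successful requests, you could negate the response field parsed earlier (an illustrative query, not one from the original walkthrough):

type:apache-access AND NOT response:200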
There are many more search options available (I recommend referring to the official documentation for more information), such as regular expressions, fuzzy searches, and proximity searches, but once you’ve pinpointed the required data, you can save the search for future reference and as the basis for creating Kibana visualizations.

Visualizing

One of the most prominent features of the ELK Stack in general, and Kibana in particular, is the ability to create beautiful visualizations with the ingested data. These visualizations can then be aggregated into a dashboard that you can use to get a comprehensive view of all the various log files coming into Elasticsearch.

To create a visualization, select the Visualize tab in Kibana.

There are a number of visualization types that you can select, and which type you choose will greatly depend on the purpose and end result you are trying to achieve. In this case, I’m going to select the good ol’ pie chart.

We then have another choice — we can create the visualization from either a saved search or a new search. In this case, we’re going with the latter.

Our next step is to configure the various metrics and aggregations for the graph’s X and Y axes. In this case, we’re going to use the entire index as our search base (by not entering a search query in the search box) and then cross-reference the data with browser type: Chrome, Firefox, Internet Explorer, and Safari.

Once you are finished, save the visualization. You can then add it to a custom dashboard in the Dashboard tab in Kibana.

Visualizations are incredibly rich tools to have, and they are the best way to understand the trends within your data.

Conclusion

The ELK Stack is becoming THE way to analyze and manage logs. The fact that the stack is open source and that it’s backed by a strong community and a fast-growing ecosystem is driving its popularity.

DevOps is not the sole realm of log analysis, and ELK is being used by developers, sysadmins, SEO experts, and marketers as well. Log-driven development — the development process in which code is monitored using metrics, alerts, and logs — is gaining traction within more and more R&D teams, and it would not be a stretch of the imagination to tie this to the growing popularity of ELK.

Of course, no system is perfect, and there are pitfalls that users need to avoid, especially when handling big production operations. But this should not deter you from trying it out, especially because there are numerous sources of information that will guide you through the process.

Good luck, and happy indexing!

Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!