
Environment

Component        Version
Linux            Ubuntu 4.4.0-187-generic
Elasticsearch    6.3.2
Logstash         6.2.4
Kibana           6.3.2

Note: the Kibana version must not be higher than the Elasticsearch version.

Setting up the ELK stack with Docker containers

Installing Elasticsearch

# Pull the image
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.2
# Run the container as a single node, expose port 9200, and name it elasticsearch
docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.3.2
# Once the container is running, enter it
docker exec -it elasticsearch /bin/bash
cd config
# Edit the configuration file
vi elasticsearch.yml
#========================================== Key part ==================================
# Add the cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# After saving, exit the container and restart it
docker restart elasticsearch

Then visit http://<server-ip>:9200. If you see output like the following, Elasticsearch is up:

{
  "name" : "e8FlK8L",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "ks8-UMmOT-u4uIbTJPK3cA",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
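If you prefer to script this check, the following sketch fetches the root endpoint and verifies the version and tagline. It assumes 192.168.2.153 is your Docker host, as in the rest of this walkthrough; adjust the URL to your environment.

```python
import json
from urllib.request import urlopen

def looks_healthy(info):
    """Heuristic check that the root-endpoint JSON describes the expected ES node."""
    version = info.get("version", {})
    return (version.get("number") == "6.3.2"
            and info.get("tagline") == "You Know, for Search")

if __name__ == "__main__":
    try:
        # Assumption: 192.168.2.153 is the host running the elasticsearch container.
        with urlopen("http://192.168.2.153:9200", timeout=3) as resp:
            print(looks_healthy(json.load(resp)))
    except OSError as exc:
        print("Elasticsearch not reachable:", exc)
```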

Installing Logstash

# Pull the image and run the container (the container name must come right after --name)
docker run -d --name es_logstash -p 4560:4560 docker.elastic.co/logstash/logstash:6.2.4
# Enter the container
docker exec -it es_logstash /bin/bash
# Edit the configuration
cd config
vi logstash.yml
#========================================== Key part ==================================
# xpack.monitoring.elasticsearch.url should point to your host machine's IP address
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://192.168.2.153:9200
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
#========================================== Key part ==================================
# Edit the logstash.conf file in the pipeline directory
cd pipeline
vi logstash.conf

input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "192.168.2.153:9200"
    index => "springboot-logstash-%{+YYYY.MM.dd}"
  }
}
#========================================== Key part ==================================
# Go to the Logstash bin directory (in the official image this is /usr/share/logstash/bin)
cd /usr/share/logstash/bin
# Install the json_lines codec plugin
logstash-plugin install logstash-codec-json_lines
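To confirm the tcp input and json_lines codec work before wiring up the application, you can hand-craft a single event. A minimal sketch, assuming the es_logstash container is reachable at 192.168.2.153:4560 as above; the app_name field is purely illustrative, not part of any fixed schema:

```python
import datetime
import json
import socket

def make_event(message, level="INFO", app="mall-admin"):
    """Build one json_lines event, roughly what logstash-logback-encoder sends."""
    event = {
        "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "message": message,
        "level": level,
        "app_name": app,  # illustrative field name only
    }
    return json.dumps(event) + "\n"  # json_lines: one JSON object per line

def send_event(host, port, line):
    """Ship one newline-terminated JSON event to the Logstash tcp input."""
    with socket.create_connection((host, port), timeout=3) as conn:
        conn.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    try:
        # Assumption: 192.168.2.153 is the host running the es_logstash container.
        send_event("192.168.2.153", 4560, make_event("hello from python"))
    except OSError as exc:
        print("Logstash not reachable:", exc)
```

If the event arrives, it is indexed under springboot-logstash-<date> per the output section above.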

Installing Kibana

# Pull the image and run the container
docker pull kibana:6.3.2
docker run --name es_kibana -p 5601:5601 -d -e ELASTICSEARCH_URL=http://192.168.2.153:9200 kibana:6.3.2

Restarting the containers

Finally, restart the containers in order: elasticsearch first, then kibana, then logstash.

Using the ELK stack

Log in to the Kibana web UI, go to Management, and create an index pattern named springboot-logstash-*, selecting @timestamp as the time filter field. You can then browse the logs on the Discover page.

Spring Boot + Logstash

First add the required dependencies, then create a logback-spring.xml file under the resources directory so that Logback ships its output to Logstash.

Gradle build file
plugins {
    id 'org.springframework.boot' version '2.5.3'
    id 'io.spring.dependency-management' version '1.0.11.RELEASE'
    id 'java'
}

group = 'pers.czj'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'net.logstash.logback:logstash-logback-encoder:6.6'
    implementation 'io.micrometer:micrometer-registry-prometheus:1.5.1'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

test {
    useJUnitPlatform()
}
logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- Application name -->
    <property name="APP_NAME" value="mall-admin"/>
    <!-- Directory where log files are saved -->
    <property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
    <contextName>${APP_NAME}</contextName>
    <!-- Appender that rolls the log file daily -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE_PATH}/${APP_NAME}-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <!-- Appender that ships logs to Logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- The Logstash TCP input port; note: use your own server's IP address here -->
        <destination>192.168.3.101:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>

Then call any endpoint of the application so that it logs something; the entries should show up on Kibana's Discover page shortly afterwards.
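You can also verify the ingested logs directly through the Elasticsearch search API rather than Kibana. A minimal sketch, assuming the host IP from this walkthrough; the match query on the message field is just one way to filter:

```python
import json
from urllib.request import Request, urlopen

def search_body(text):
    """A minimal Elasticsearch 6.x search body matching log lines by message text."""
    return {"query": {"match": {"message": text}}, "size": 5}

if __name__ == "__main__":
    # Assumption: 192.168.2.153 is the Elasticsearch host from this walkthrough.
    url = "http://192.168.2.153:9200/springboot-logstash-*/_search"
    req = Request(url,
                  data=json.dumps(search_body("hello")).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    try:
        with urlopen(req, timeout=3) as resp:
            print(json.load(resp)["hits"]["total"])
    except OSError as exc:
        print("Elasticsearch not reachable:", exc)
```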
