E(Elasticsearch)+L(Logstash)+K(Kibana)
E(Elasticsearch)+F(Filebeat)+K(Kibana)
Redis/MQ/Kafka is optional; it adds buffering and elasticity when the data volume is high.
```mermaid
graph LR
B(Beats data collection)-.->G([redis/mq/kafka])-.->L[Logstash pipeline/processing]-->E[Elasticsearch storage/indexing]-->K[Kibana analysis/visualization]
B-->L
```
Logstash inputs: tcp, http, file, beats, kafka, rabbitmq, redis, log4j, elasticsearch, jdbc, websocket
Supported filters: grok, ruby, mutate, json
Supported outputs: elasticsearch, file, email, http, kafka, redis, mongodb, rabbitmq, syslog, tcp, websocket, zabbix, stdout, csv
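As a concrete illustration, here is a minimal sketch of a pipeline chaining one plugin from each category; the log path is an assumption, and `%{COMBINEDAPACHELOG}` is a grok pattern that ships with Logstash:

```conf
# Hypothetical pipeline: tail an nginx access log, parse each line
# with grok, and index the parsed events into Elasticsearch.
input {
  file {
    path => "/var/log/nginx/access.log"   # assumed log location
    start_position => "beginning"
  }
}

filter {
  grok {
    # built-in pattern for combined-format access logs
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
```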
Filebeat vs Fluentd: Filebeat is mainly a data collector; it is lightweight and consumes few resources on the application server. Logstash can also collect data, but it is much heavier on the application server than Filebeat.
Fluentd can be viewed as a more capable alternative to Filebeat, supporting many more log input modes.
Spring Boot logging framework packages
Architecture selection

Option 1: EFK (docker log mode). Use Filebeat to collect docker logs, monitoring all (or selected) services running on docker; this covers Spring Cloud log collection.
Pros: no intrusion into existing services; nothing has to be changed
Cons: strongly tied to docker; it can only watch docker containers
```mermaid
graph LR
B(Filebeat collects docker logs)-->E[Elasticsearch]-->K[Kibana]
```
Option 2: plain Logstash. Pros: simple and quick to set up
Cons: no buffering, so Logstash can become a bottleneck (a sketch follows the diagram)
```mermaid
graph LR
B(Logstash tails local log files)-->E[Elasticsearch]-->K[Kibana]
```
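A minimal sketch of what this pipeline could look like, assuming Spring Boot writes plain-text logs under /app/logs (the path and timestamp pattern are assumptions); the multiline codec folds Java stack-trace lines into the preceding event:

```conf
input {
  file {
    path => "/app/logs/*.log"              # assumed log directory
    start_position => "beginning"
    codec => multiline {
      # lines that do not start with a timestamp belong to the previous event
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
```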
Option 3: Logstash + redis + Logstash (not verified). Reference: 搭建 ELK 实时日志平台并在 Spring Boot 和 Nginx 项目中使用
Pros:
- Reads log files directly, so there is no intrusion into the existing system
- Works with any service that writes log files, e.g. nginx, Spring Boot, etc.

Cons:
- Logstash (shipper role) must be installed on every server whose log files are read (see the sketch after the diagram)
- When deploying with docker, the Spring Boot containers must map their log directories out
```mermaid
graph LR
B(Logstash tails local log files / Shipper)--write-->G([redis])--read-->L[Logstash / Indexer]-->E[Elasticsearch]-->K[Kibana]
```
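A hedged sketch of the two Logstash roles, assuming a Redis list keyed "logstash" (host names, paths, and the key are assumptions):

```conf
# --- shipper.conf (runs on each application server): tail files, push into Redis ---
input  { file { path => "/app/logs/*.log" } }   # assumed path
output { redis { host => "redis" data_type => "list" key => "logstash" } }

# --- indexer.conf (runs centrally): pop from Redis, index into Elasticsearch ---
input  { redis { host => "redis" data_type => "list" key => "logstash" } }
output { elasticsearch { hosts => "elasticsearch:9200" } }
```

Redis acts purely as a buffer here, which is what removes the bottleneck noted in Option 2.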
Option 4: kafka + logstash (not verified). Reference: Spring Cloud集成ELK完成日志收集实战 (elasticsearch, logstash, kibana)
Pros:
- No extra agent has to be installed on the application servers
- Works with docker deployment; no extra directory mapping is needed

Cons:
- The Spring Boot applications themselves must be modified (a sketch follows the diagram)
- Does not cover nginx, databases, or other non-JVM services
```mermaid
graph LR
B(springboot)--write-->G([kafka])--read-->L[Logstash]-->E[Elasticsearch]-->K[Kibana]
```
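"Modifying Spring Boot" here typically means adding a Kafka appender to logback. A sketch using the third-party logback-kafka-appender library (this library is not named in the original references; the topic name and broker address are assumptions):

```xml
<!-- Hypothetical logback.xml fragment: ship log events straight to Kafka -->
<appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <!-- assumed topic; Logstash's kafka input would subscribe to it -->
    <topic>app-logs</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
    <!-- assumed broker address -->
    <producerConfig>bootstrap.servers=kafka:9092</producerConfig>
</appender>
```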
Option 5: fluentd + logback-more-appenders. Reference: sndyuk/logback-more-appenders
Pros: integrates with the logback framework directly through a jar; clean
Cons: only suitable for Spring Boot (logback-based) applications
```mermaid
graph LR
A(springboot with logback-more-appenders)-->B(fluentd)-->E[Elasticsearch]-->K[Kibana]
```
Steps:

Install fluentd with Docker. The fluentd image has to be built yourself as exxk/fluent-elasticsearch:latest; see fluentd/container-deployment/docker-compose:
```dockerfile
FROM fluent/fluentd:v1.6-debian-1

USER root

RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "3.5.2"]

USER fluent
```
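Build and tag the image so the stack file below can use it (the tag mirrors the one referenced there):

```bash
docker build -t exxk/fluent-elasticsearch:latest .
```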
Add the dependencies to the Spring Boot project (Gradle coordinates shown; the Maven equivalents are on mvnrepository):
```groovy
// https://mvnrepository.com/artifact/com.sndyuk/logback-more-appenders
compile group: 'com.sndyuk', name: 'logback-more-appenders', version: '1.4.2'
compile group: 'org.fluentd', name: 'fluent-logger', version: '0.3.4'
```
In logback.xml, add a fluentd appender (see the logback configuration for details):
```xml
<springProperty scope="context" name="fluentHost" source="logback.fluent.host"/>
<springProperty scope="context" name="fluentPort" source="logback.fluent.port"/>

<appender name="FLUENT" class="ch.qos.logback.more.appenders.FluentLogbackAppender">
    <tag>${APP_NAME}</tag>
    <label>logback</label>
    <remoteHost>${fluentHost}</remoteHost>
    <port>${fluentPort}</port>
    <layout>
        <pattern>${LOG_FORMAT}</pattern>
    </layout>
</appender>
```
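The appender also has to be attached to a logger before anything is shipped; a minimal example (the level is an assumption):

```xml
<root level="INFO">
    <appender-ref ref="FLUENT"/>
</root>
```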
Configure the application properties dynamically:
```properties
spring.cloud.config.logback-profile=FLUENT
logback.fluent.host=${LOGBACK_FLUENT_HOST:xxx.cn}
logback.fluent.port=${LOGBACK_FLUENT_PORT:14021}
```
Then run the Spring Boot application to produce some logs.
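Any slf4j call will do; a minimal, hypothetical smoke test:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

// Hypothetical component that emits one log line at startup,
// which should then show up in Kibana via fluentd.
@Component
public class LogSmokeTest implements CommandLineRunner {
    private static final Logger log = LoggerFactory.getLogger(LogSmokeTest.class);

    @Override
    public void run(String... args) {
        log.info("EFK smoke test: hello from spring boot");
    }
}
```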
Finally, finish the setup in the Kibana UI; see the Elasticsearch configuration notes in the EFK deployment section below.
EFK/ELK deployment. Reference: deviantony/docker-elk
The docker-stack.yml contents are as follows:
```yaml
version: '3.3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
      TZ: Asia/Shanghai
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == me]

  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.0
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
      TZ: Asia/Shanghai
    deploy:
      mode: replicated
      replicas: 0
      placement:
        constraints: [node.hostname == me]

  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.0
    environment:
      TZ: Asia/Shanghai
    ports:
      - "14020:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == me]

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.9.0
    user: root
    command: filebeat -e -strict.perms=false
    environment:
      TZ: Asia/Shanghai
    configs:
      - source: filebeat_config
        target: /usr/share/filebeat/filebeat.yml
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: replicated
      replicas: 0
      placement:
        constraints: [node.hostname == me]

  fluent:
    image: exxk/fluent-elasticsearch:latest
    environment:
      TZ: Asia/Shanghai
    ports:
      - "14021:24224"
      - "14021:24224/udp"
    configs:
      - source: fluent_config
        target: /fluentd/etc/fluent.conf
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == me]

configs:
  elastic_config:
    external: true
  logstash_config:
    external: true
  logstash_pipeline:
    external: true
  kibana_config:
    external: true
  filebeat_config:
    external: true
  fluent_config:
    external: true
```
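All configs are declared external, so they must be created on the swarm manager before deploying; a sketch, assuming the config files sit in the current directory under these names:

```bash
docker config create elastic_config elasticsearch.yml
docker config create logstash_config logstash.yml
docker config create logstash_pipeline logstash.conf
docker config create kibana_config kibana.yml
docker config create filebeat_config filebeat.yml
docker config create fluent_config fluent.conf

docker stack deploy -c docker-stack.yml elk
```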
The individual config files are as follows:
elastic_config
```yaml
---
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack settings
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
```
kibana_config
```yaml
---
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true

elasticsearch.username: elastic
elasticsearch.password: changeme
```
logstash_config
```yaml
---
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
```
logstash_pipeline
```conf
input {
    tcp {
        port => 5000
    }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
    }
}
```
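Note that the stack above deploys logstash with replicas: 0 and publishes no port; to exercise this pipeline you would scale the service up, publish port 5000, and push a test event, for example:

```bash
echo "hello logstash" | nc <logstash-host> 5000   # <logstash-host> is a placeholder
```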
filebeat_config
```yaml
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    # collect logs from all docker containers, driven by co.elastic.logs hints
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: 'changeme'
```
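With hints.enabled: true, Filebeat inspects each container's co.elastic.logs/* labels to decide how (and whether) to collect its logs; for example, a compose service could opt out like this (a sketch; the service name is hypothetical):

```yaml
services:
  noisy-service:                         # hypothetical service
    labels:
      co.elastic.logs/enabled: "false"   # skip this container's logs
```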
fluent_config
```conf
# fluentd/conf/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy

  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
    user elastic
    password changeme
  </store>

  <store>
    @type stdout
  </store>
</match>
```
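Because logstash_format is enabled with logstash_prefix fluentd, events land in daily indices named fluentd-YYYYMMDD, so an index pattern like fluentd-* in Kibana will match them; the stdout store simply mirrors every event to the container log for debugging.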