Preface:
Our SSH-framework project already has logback integrated, and the department has already built a Kafka + ELK platform (hereafter "the log platform"). Because the project has to be deployed in containers, logs can no longer be written to local log files as before; instead, log messages must be collected via Kafka + Logstash and stored on the log platform. Unlike Spring Boot projects, which ship with logback and are easy to adapt, an SSH project has to retrofit its logging tooling and integrate Kafka on its own, and there are quite a few differences along the way. This article mainly covers the retrofit process and the problems that came up.
Prerequisite knowledge: a brief introduction to the terms involved.
More detailed introductions:
- On Kafka + ELK: Building a Massive-Scale Log Platform with Kafka + ELK
- On Kafka and message queues: A Concise Kafka Tutorial
- On integrating logback into an SSH (non-Spring-Boot) project: Upgrading log4j2 to logback; logback + slf4j + commons-logging + Jboss-logging (SSH project)
An appender decides where log output goes. Common appenders that ship with logback include RollingFileAppender, which writes to files, and ConsoleAppender, which writes to the console; a minimal sketch follows.
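As a quick illustration, here is a minimal logback.xml sketch wiring up those two stock appenders (the pattern, file path, and rollover policy are placeholder choices, not taken from the original project):

```xml
<configuration>
    <!-- ConsoleAppender: write log lines to stdout -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- RollingFileAppender: write to a file, rolling over daily -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
```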
Unlike in Spring Boot projects, spring-kafka is a further wrapper built on top of kafka-clients and does not appear to be usable in non-Spring-Boot projects, so we use kafka-clients directly; so far I have not run into any notable usage problems with it.
Dependency summary:
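The original dependency list did not survive here, but the artifacts can be inferred from the class names used in the configuration below. A plausible Maven sketch (versions are illustrative assumptions; pick ones compatible with your JDK and Kafka cluster):

```xml
<!-- logback itself; version is an assumption -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.11</version>
</dependency>
<!-- provides com.github.danielwegener.logback.kafka.KafkaAppender -->
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
<!-- provides net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>
<!-- plain Kafka producer client, used instead of spring-kafka -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.8.2</version>
</dependency>
```

With the dependencies in place, the KafkaAppender is configured in logback.xml as follows: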
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender"> <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder"> <providers> <timestamp> <timeZone>UTC</timeZone> </timestamp> <pattern> <pattern> { "log_level": "%-5level", "log_topic": "cw-log", "division": "cw", "appName": "test_grams_gramsapi_feature", "dmAppName": "xxx", "message": "${LOG_PATTERN}" } </pattern> </pattern> </providers> </encoder> <topic>cw-log</topic> <!-- we don't care how the log messages will be partitioned --> <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/> <!-- use async delivery. the application threads are not blocked by logging --> <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/> <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) --> <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs --> <!-- bootstrap.servers is the only mandatory producerConfig --> <producerConfig>bootstrap.servers=11.4.66.217:9094,11.4.66.220:9094,11.4.66.219:9094</producerConfig> <!-- don't wait for a broker to ack the reception of a batch. --> <!-- <producerConfig>acks=0</producerConfig>--> <!-- wait up to 1000ms and collect log messages before sending them as a batch --> <producerConfig>linger.ms=100</producerConfig> <!-- even if the producer buffer runs full, do not block the application but start to drop messages --> <!-- <producerConfig>max.block.ms=0</producerConfig>--> <!-- define a client-id that you use to identify yourself against the kafka broker --> <!-- <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>--> </appender>
Finally, go to the corresponding Kibana instance and apply the matching filter conditions (e.g. on the appName field set above) to find the logs we shipped.