
Kafka log level settings: can Kafka parse logs?


Kafka's default log4j configuration contains the following settings:

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH

What is the problem here? Although DailyRollingFileAppender sounds like it rolls the log once a day, the rolling frequency is actually determined by the DatePattern. Because the configured DatePattern is at hour granularity, any hour in which log output is produced gets its own rolled file; in short, one file per hour.
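For example, a broker running through the morning would leave behind one rolled file per hour (the dates below are purely illustrative; the appender simply appends the DatePattern to the base file name each time it rolls):

server.log
server.log.2023-05-04-05
server.log.2023-05-04-06
server.log.2023-05-04-07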

This is inconvenient for both log keeping and troubleshooting, so we change the configuration to roll once per day.

Change the configuration above to:

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd

In other words, remove the trailing -HH from the DatePattern.
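With the daily pattern the appender produces at most one rolled file per day, for example (again, an illustrative date):

server.log
server.log.2023-05-04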

The full log4j.properties file is attached below for reference:

#
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
#
kafka.logs.dir=logs

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d{ISO8601}] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d{ISO8601}] %p %m (%c)%n
log4j.appender.kafkaAppender.MaxFileSize = {{kafka_log_maxfilesize}}MB
log4j.appender.kafkaAppender.MaxBackupIndex = {{kafka_log_maxbackupindex}}

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d{ISO8601}] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d{ISO8601}] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d{ISO8601}] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d{ISO8601}] %p %m (%c)%n
log4j.appender.controllerAppender.MaxFileSize = {{controller_log_maxfilesize}}MB
log4j.appender.controllerAppender.MaxBackupIndex = {{controller_log_maxbackupindex}}

# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG
log4j.logger.kafka=INFO, kafkaAppender

log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false
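One practical note: the broker only reads log4j.properties at startup, so the change takes effect after a restart. A minimal sketch of the steps, assuming a plain tarball install under /opt/kafka (adjust paths and service-management commands to your environment):

cd /opt/kafka
vi config/log4j.properties                                  # set DatePattern='.'yyyy-MM-dd for each appender
bin/kafka-server-stop.sh                                    # stop the broker
bin/kafka-server-start.sh -daemon config/server.properties  # start it again with the new log settings

If Kafka is managed by Ambari or a similar tool (the {{...}} placeholders in the file above suggest a templated deployment), edit the log4j template in that tool instead, otherwise the change may be overwritten on the next configuration push.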
