I want to implement data replay for some use cases we have, and for that I need to use the Kafka retention policy (I am using joins and I need exact window times). P.S. I am using Kafka version 0.10.1.1.
I am sending data into the topic like this:
kafkaProducer.send(
        new ProducerRecord<>(kafkaTopic, 0, (long) r.get("date_time"),
                r.get(keyFieldName).toString(), r)
);
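For completeness, here is a minimal self-contained sketch of my producer setup. I am assuming the values are Avro GenericRecords; the serializer configuration, schema registry URL, and broker address below are illustrative, not my exact code:

import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ReplayProducer {
    private final KafkaProducer<String, GenericRecord> kafkaProducer;

    public ReplayProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Assumption: Avro values via Confluent's serializer and schema registry
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");
        this.kafkaProducer = new KafkaProducer<>(props);
    }

    public void send(String kafkaTopic, String keyFieldName, GenericRecord r) {
        // Partition 0, record timestamp taken from the payload's date_time field
        kafkaProducer.send(new ProducerRecord<>(kafkaTopic, 0, (long) r.get("date_time"),
                r.get(keyFieldName).toString(), r));
    }
}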
And I create my topic like this:
kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic myTopic
kafka-topics --zookeeper localhost:2181 --alter --topic myTopic --config retention.ms=172800000
kafka-topics --zookeeper localhost:2181 --alter --topic myTopic --config segment.ms=172800000
So with the settings above, the retention time for my topic should be set to 48 hours.
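To double-check that the overrides actually took effect, describing the topic should list both settings under Configs:

kafka-topics --zookeeper localhost:2181 --describe --topic myTopic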
I implemented a TimestampExtractor so that I can log the actual timestamp of every message:
import java.util.Date;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ConsumerRecordOrWallclockTimestampExtractor implements TimestampExtractor {
    private static final Logger LOG = LoggerFactory.getLogger(ConsumerRecordOrWallclockTimestampExtractor.class);

    @Override
    public long extract(ConsumerRecord<Object, Object> consumerRecord) {
        LOG.info("TIMESTAMP : " + consumerRecord.timestamp() + " - Human readable : " + new Date(consumerRecord.timestamp()));
        // Brokers use -1 when a record carries no timestamp, so fall back to wall-clock time
        return consumerRecord.timestamp() >= 0 ? consumerRecord.timestamp() : System.currentTimeMillis();
    }
}
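For reference, the extractor is wired into my Streams configuration roughly like this (the application id and broker address below are placeholders, not my real values):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties streamsProps = new Properties();
streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "replay-app");         // placeholder
streamsProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
streamsProps.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        ConsumerRecordOrWallclockTimestampExtractor.class.getName());
// streamsProps is then passed to the KafkaStreams constructor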
To test this, I sent 4 messages to my topic and got these 4 log lines:
2017-02-28 10:23:39 INFO ConsumerRecordOrWallclockTimestampExtractor:21 - TIMESTAMP : 1488295086292 - Human readable : Tue Feb 28 10:18:06 EST 2017
2017-02-28 10:24:01 INFO ConsumerRecordOrWallclockTimestampExtractor:21 - TIMESTAMP : 1483272000000 - Human readable : Sun Jan 01 07:00:00 EST 2017
2017-02-28 10:26:11 INFO ConsumerRecordOrWallclockTimestampExtractor:21 - TIMESTAMP : 1485820800000 - Human readable : Mon Jan 30 19:00:00 EST 2017
2017-02-28 10:27:22 INFO ConsumerRecordOrWallclockTimestampExtractor:21 - TIMESTAMP : 1488295604411 - Human readable : Tue Feb 28 10:26:44 EST 2017
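For scale: the second message's timestamp of 1483272000000 is (1488295086292 - 1483272000000) / 3600000 ≈ 1395 hours (about 58 days) behind the wall clock at test time, far beyond the 48-hour retention.ms I configured.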
So based on Kafka's retention policy, I expected to see two of my messages purged/deleted after 5 minutes (the second and third ones, since their timestamps are from January 1st and January 30th). But I kept consuming my topic for an hour, and every time I consumed it I got all 4 messages back, using:
kafka-avro-console-consumer --zookeeper localhost:2181 --from-beginning --topic myTopic
My Kafka config is like this:
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
Am I doing something wrong, or am I missing something?