I am using Logstash to move data from Kafka to Elasticsearch, but I keep getting the following warning:

WARN org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Auto offset commit failed for group kafka-es-sink: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

Following the message's suggestion, I have already tried raising the session timeout (session.timeout.ms, to 30000) and lowering the maximum poll records (max.poll.records, to 250); see the pipeline sketch below.

The topic receives about 1,000 events per second in Avro format. It has 10 partitions (across 2 servers), and I run two Logstash instances with 5 consumer threads each.
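Roughly, the Logstash pipeline corresponds to the following sketch (the broker list, topic name, schema location, and Elasticsearch host are placeholders, and option types differ slightly between logstash-input-kafka plugin versions):

    input {
      kafka {
        bootstrap_servers  => "kafka1:9092,kafka2:9092"  # placeholder broker list
        topics             => ["events-avro"]            # placeholder topic name
        group_id           => "kafka-es-sink"            # group name from the warning above
        consumer_threads   => 5          # x2 Logstash instances = 10 consumers for 10 partitions
        session_timeout_ms => "30000"    # raised from the default
        max_poll_records   => "250"      # lowered to shrink each poll() batch
        codec => avro { schema_uri => "/path/to/schema.avsc" }  # placeholder schema location
      }
    }

    output {
      elasticsearch {
        hosts => ["es1:9200"]            # placeholder Elasticsearch host
        index => "events-%{+YYYY.MM.dd}"
      }
    }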

Other topics at roughly 100-300 events per second give me no trouble at all.

I suspect this is a configuration problem, because I also run a second connector between Kafka and Elasticsearch on the same data, and it works fine (Confluent's kafka-connect-elasticsearch).
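For reference, the working Kafka Connect side is set up along these lines (a sketch with placeholder names and URLs; note that on the Connect side the same consumer settings can be tuned through the consumer.* prefix in the worker config):

    # elasticsearch-sink.properties -- connector config (name, topic, and URL are placeholders)
    name=kafka-es-connect-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    tasks.max=10                     # assumption: one task per partition
    topics=events-avro
    connection.url=http://es1:9200
    type.name=kafka-connect
    key.ignore=true

    # worker config -- the same consumer knobs exist here, under the consumer.* prefix
    consumer.session.timeout.ms=30000
    consumer.max.poll.records=250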

The main goal is to compare Kafka Connect and Logstash as connectors. Does anyone have general experience with this comparison?
