
I have the following requirement: Kafka needs to listen on multiple interfaces, one external and one internal. All other components in the system connect to Kafka on the internal interface. At install time, the internal IPs of the other hosts are not reachable; some configuration, which we do not control, is needed to make them reachable. So assume that when Kafka starts, the internal IPs on the different nodes cannot reach each other.

Scenario: I have two nodes in the cluster: node1 (external IP: 10.10.10.4, internal IP: 5.5.5.4) and node2 (external IP: 10.10.10.5, internal IP: 5.5.5.5).

At install time, 10.10.10.4 can ping 10.10.10.5 and vice versa, but 5.5.5.4 cannot reach 5.5.5.5. This is the situation once Kafka is installed; only afterwards will someone perform the configuration that makes the internal IPs reachable, so we cannot make them reachable before the Kafka installation.

The requirement is that the Kafka brokers exchange messages over the 10.10.10.x interfaces, so that the cluster can form, while clients send messages over the 5.5.5.x interfaces.

I tried the following:

listeners=USERS://0.0.0.0:9092,REPLICATION://0.0.0.0:9093
advertised.listeners=USERS://5.5.5.5:9092,REPLICATION://5.5.5.5:9093

where 5.5.5.5 is the internal IP address. But with this, when restarting Kafka, I see the logs below:

{"log":"[2020-06-23 19:05:34,923] INFO Creating /brokers/ids/2 (is it secure? false) (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.923403973Z"}
{"log":"[2020-06-23 19:05:34,925] INFO Result of znode creation at /brokers/ids/2 is: OK (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.925237419Z"}
{"log":"[2020-06-23 19:05:34,926] INFO Registered broker 2 at path /brokers/ids/2 with addresses: ArrayBuffer(EndPoint(5.5.5.5,9092,ListenerName(USERS),PLAINTEXT), EndPoint(5.5.5.5,9093,ListenerName(REPLICATION),PLAINTEXT)) (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.926127438Z"}

......

{"log":"[2020-06-23 19:05:35,078] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)\n","stream":"stdout","time":"2020-06-23T19:05:35.078444509Z"}
{"log":"[2020-06-23 19:05:35,078] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)\n","stream":"stdout","time":"2020-06-23T19:05:35.078471358Z"}
{"log":"[2020-06-23 19:05:35,079] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)\n","stream":"stdout","time":"2020-06-23T19:05:35.079436798Z"}
{"log":"[2020-06-23 19:05:35,136] ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '2' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)\n","stream":"stdout","time":"2020-06-23T19:05:35.136792119Z"}

After that, this message keeps appearing:

{"log":"[2020-06-23 19:05:35,166] ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '2' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)\n","stream":"stdout","time":"2020-06-23T19:05:35.166895344Z"}

Is there any way to achieve this?
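For context, what I am aiming for might look like the following `server.properties` sketch for node2. This is just my understanding of the relevant settings (`advertised.listeners` per listener, `listener.security.protocol.map` for the custom listener names, and `inter.broker.listener.name` for broker-to-broker traffic), not a verified configuration:

```properties
# node2 (external IP: 10.10.10.5, internal IP: 5.5.5.5) -- a sketch, not verified

# Bind both listeners on all interfaces
listeners=USERS://0.0.0.0:9092,REPLICATION://0.0.0.0:9093

# Advertise REPLICATION on the external IP so brokers can reach each
# other at install time; clients are pointed at the internal IP
advertised.listeners=USERS://5.5.5.5:9092,REPLICATION://10.10.10.5:9093

# Custom listener names require an explicit protocol mapping
listener.security.protocol.map=USERS:PLAINTEXT,REPLICATION:PLAINTEXT

# Brokers talk to each other over the REPLICATION listener
inter.broker.listener.name=REPLICATION
```

On node1 the advertised addresses would be 5.5.5.4 and 10.10.10.4 respectively.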

Regards, -M-
