Please credit the source when reposting: Kafka client, Java edition <http://www.525.life/article?id=1510739742417>

Downloading the jar

If you use Maven, just add the corresponding POM dependency:
<https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients/2.0.0>
We use version 2.0.0 here, as follows:
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.0</version>
</dependency>
Writing the message producer class MsgProducer

Note: do not reuse a class name from kafka-clients; for example, do not name your own class KafkaProducer.
package com.biologic.util;

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MsgProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.11.90:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        producer.send(new ProducerRecord<String, String>("test", "title", "value"));
        producer.close();
    }
}
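The producer above is fire-and-forget. As a hedged variant (the class name and the `baseProps` helper are illustrative, not from the original article), `send` also accepts a `Callback` that the client invokes once the broker acknowledges or rejects the record, so delivery can be verified in code instead of only via the console consumer:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MsgProducerWithCallback {

    // Same settings as MsgProducer, factored out so they can be reused.
    static Properties baseProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.11.90:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        Producer<String, String> producer = new KafkaProducer<String, String>(baseProps());
        // The Callback runs after the broker responds: metadata carries the
        // partition and offset on success, exception is non-null on failure.
        producer.send(new ProducerRecord<String, String>("test", "title", "value"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("delivered to partition %d, offset %d%n",
                                metadata.partition(), metadata.offset());
                    }
                });
        producer.close();
    }
}
```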
In the Kafka installation directory on the server, start a console consumer to listen:
bin/kafka-console-consumer.sh --bootstrap-server 192.168.11.90:9092 --topic test --from-beginning
After running the main method, the console consumer receives the message.
Writing the message consumer class MsgConsumer

Note: do not reuse a class name from kafka-clients; for example, do not name your own class KafkaConsumer.
package com.biologic.util;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MsgConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.11.90:9092");
        props.put("group.id", "test-group");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("test"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
        }
    }
}
After running the main method, the consumer starts listening for messages.

In the Kafka installation directory on the server, start a console producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Type a message into the console producer; the Java console then prints the record it received.
Parameter notes (for the full list of Kafka parameters, see the official consumer configs documentation):
bootstrap.servers
Used to establish the initial connection to the Kafka cluster, in host:port form; separate multiple servers with commas: host1:port1,host2:port2.

group.id
The consumer group id.

Kafka uses the concept of consumer groups to let multiple consumers jointly consume and process the messages of a single topic. Group membership is maintained dynamically: if a consumer fails, the partitions previously assigned to it are reassigned to the other consumers in the group; likewise, when a new consumer joins the group, a reassignment of all partitions is triggered. Each consumer ends up with as close to the same number of partitions as possible, reaching a new balanced state.
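This rebalancing can also be observed from client code. As a sketch (the class name and counter field are illustrative), `subscribe` accepts a `ConsumerRebalanceListener` whose callbacks fire inside `poll()` whenever partitions are revoked or assigned:

```java
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

public class LoggingRebalanceListener implements ConsumerRebalanceListener {

    // Simple counter so callers can see how many assignments have happened.
    int assignedCount = 0;

    // Called before a rebalance takes partitions away; a real application
    // would commit its current offsets here.
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        System.out.println("revoked: " + partitions);
    }

    // Called after the new assignment has settled.
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        assignedCount++;
        System.out.println("assigned: " + partitions);
    }
}
```

It is passed when subscribing: `consumer.subscribe(Arrays.asList("test"), new LoggingRebalanceListener());`.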

enable.auto.commit
Whether consumed offsets are committed automatically.

auto.commit.interval.ms
The interval at which consumed offsets are committed automatically.
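When more control over progress is needed, auto-commit can be turned off and offsets committed by hand. A minimal sketch (the class name and group id are illustrative), committing synchronously only after the records from the last poll have been processed:

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

    static Properties manualCommitProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.11.90:9092");
        props.put("group.id", "test-group");
        // Turn off auto-commit so offsets advance only after we have
        // actually processed the records.
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(manualCommitProps());
        consumer.subscribe(Arrays.asList("test"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
            // Synchronously commit the offsets returned by the last poll().
            consumer.commitSync();
        }
    }
}
```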

session.timeout.ms

The session timeout. The client must periodically send heartbeats to the broker so the broker can judge the consumer's liveness. If a consumer sends no heartbeat within the session timeout, it is considered dead, and the partitions it was consuming are reassigned to the surviving consumers.

key.serializer, value.serializer
Specify the serializers used to turn the user-supplied key and value into bytes.
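Both clients also ship `ProducerConfig` and `ConsumerConfig` classes that define all these keys as constants, which guards against typos in the raw strings. A sketch for the producer side (the `ProducerProps` helper is illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerProps {

    // Same configuration as MsgProducer, but using ProducerConfig constants
    // instead of raw strings; a typo in a constant name fails at compile time.
    static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.11.90:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return props;
    }
}
```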


Reference links

Consumer example
<https://www.tutorialspoint.com/apache_kafka/apache_kafka_consumer_group_example.htm>

Producer example
<https://www.tutorialspoint.com/apache_kafka/apache_kafka_simple_producer_example.htm>

Kafka official documentation
<https://kafka.apache.org/documentation/#producerapi>

Parameter configuration (Broker Configs)
<https://kafka.apache.org/documentation/#brokerconfigs>

Kafka 2.0.0 API producer documentation
<https://kafka.apache.org/20/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html>

Kafka 2.0.0 API consumer documentation
<https://kafka.apache.org/20/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html>

Kafka 1.0.1 API documentation
<https://kafka.apache.org/10/javadoc/?org/apache/kafka/clients/producer/KafkaProducer.html>

Using Alibaba Cloud <https://help.aliyun.com/document_detail/68325.html>

Confluent example
<https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/>
