
Kafka Protocol Parsing and Traffic Construction

First, install Java:

yum install java-1.8.0-openjdk.x86_64
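To confirm that Java 8+ is available (the quickstart below requires it):

$ java -version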

Reference: https://kafka.apache.org/quickstart


APACHE KAFKA QUICKSTART

Interested in getting started with Kafka? Follow the instructions in this quickstart.

STEP 1: GET KAFKA

Download the latest Kafka release and extract it:

$ tar -xzf kafka_2.13-3.0.0.tgz

$ cd kafka_2.13-3.0.0

STEP 2: START THE KAFKA ENVIRONMENT

NOTE: Your local environment must have Java 8+ installed.

Run the following commands in order to start all services in the correct order:

# Start the ZooKeeper service

# Note: Soon, ZooKeeper will no longer be required by Apache Kafka.

$ bin/zookeeper-server-start.sh config/zookeeper.properties

Open another terminal session and run:

# Start the Kafka broker service

$ bin/kafka-server-start.sh config/server.properties

To change the Kafka port:

$ vi config/server.properties

Update the port in both the listeners and advertised.listeners entries.
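For example, to move the broker to port 9093 (9093 is just an illustration; the rest of this quickstart assumes the default 9092):

listeners=PLAINTEXT://:9093
advertised.listeners=PLAINTEXT://localhost:9093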

Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.

STEP 3: CREATE A TOPIC TO STORE YOUR EVENTS

Kafka is a distributed event streaming platform that lets you read, write, store, and process events (also called records or messages in the documentation) across many machines.

Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements from IoT devices or medical equipment, and much more. These events are organized and stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.

So before you can write your first events, you must create a topic. Open another terminal session and run the following to create a topic:

$ bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic quickstart-events --bootstrap-server localhost:9092

--topic specifies the name of the topic to create.

--bootstrap-server specifies the broker's IP and port; if you did not change the server port, the default is 9092.

All of Kafka's command line tools have additional options: run the kafka-topics.sh command without any arguments to display usage information. For example, it can also show you details such as the partition count of the new topic. To describe the topic:

$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092

Topic:quickstart-events  PartitionCount:1    ReplicationFactor:1 Configs:

    Topic: quickstart-events Partition: 0    Leader: 0   Replicas: 0 Isr: 0



STEP 4: WRITE SOME EVENTS INTO THE TOPIC

A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events. Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you need—even forever.

Run the console producer client to write a few events into your topic. By default, each line you enter will result in a separate event being written to the topic. After you run the command below, the prompt drops to a new line; each line you type there is stored in the topic:

$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092

This is my first event

This is my second event

You can stop the producer client with Ctrl-C at any time.
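The console producer is only one way to write events; the same write can be done programmatically. Below is a minimal sketch (not part of the quickstart) using the Java producer API; the class name is arbitrary, and the topic and broker address match the quickstart defaults:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class QuickstartProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each send() becomes one event in the topic, like one line typed into the console producer.
            producer.send(new ProducerRecord<>("quickstart-events", "This is my first event"));
            producer.send(new ProducerRecord<>("quickstart-events", "This is my second event"));
        } // close() flushes any pending records
    }
}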

STEP 5: READ THE EVENTS

Open another terminal session and run the console consumer client to read the events you just created:

$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092

This is my first event

This is my second event

--from-beginning reads every message in the topic from the start; without this flag, the consumer shows only messages produced after the command is started.

You can stop the consumer client with Ctrl-C at any time.

Feel free to experiment: for example, switch back to your producer terminal (previous step) to write additional events, and see how the events immediately show up in your consumer terminal.

Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want. You can easily verify this by opening yet another terminal session and re-running the previous command again.
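Reading can likewise be done from code. Here is a minimal sketch with the Java consumer API (again not part of the quickstart; the class name and the group.id "quickstart-group" are made up). Setting auto.offset.reset to "earliest" gives a brand-new consumer group roughly the same effect as --from-beginning:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class QuickstartConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "quickstart-group");  // hypothetical group id
        props.put("auto.offset.reset", "earliest"); // read from the start for a new group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("quickstart-events"));
            while (true) {
                // Poll the broker and print whatever arrived in this batch.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}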

STEP 6: IMPORT/EXPORT YOUR DATA AS STREAMS OF EVENTS WITH KAFKA CONNECT

You probably have lots of data in existing systems like relational databases or traditional messaging systems, along with many applications that already use these systems. Kafka Connect allows you to continuously ingest data from external systems into Kafka, and vice versa. It is thus very easy to integrate existing systems with Kafka. To make this process even easier, there are hundreds of such connectors readily available.

Take a look at the Kafka Connect section to learn more about how to continuously import/export your data into and out of Kafka.

STEP 7: PROCESS YOUR EVENTS WITH KAFKA STREAMS

Once your data is stored in Kafka as events, you can process the data with the Kafka Streams client library for Java/Scala. It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful operations and aggregations, windowing, joins, processing based on event-time, and much more.

To give you a first taste, here's how one would implement the popular WordCount algorithm:

// Split each event line into lowercase words, group by word, and keep a running count.
KStream<String, String> textLines = builder.stream("quickstart-events");

KTable<String, Long> wordCounts = textLines
    .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
    .groupBy((keyIgnored, word) -> word)
    .count();

// Write the running counts to another topic, serializing keys as strings and counts as longs.
wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));

The Kafka Streams demo and the app development tutorial demonstrate how to code and run such a streaming application from start to finish.

STEP 8: TERMINATE THE KAFKA ENVIRONMENT

Now that you reached the end of the quickstart, feel free to tear down the Kafka environment—or continue playing around.

  1. Stop the producer and consumer clients with Ctrl-C, if you haven't done so already.
  2. Stop the Kafka broker with Ctrl-C.
  3. Lastly, stop the ZooKeeper server with Ctrl-C.

If you also want to delete any data of your local Kafka environment including any events you have created along the way, run the command:

$ rm -rf /tmp/kafka-logs /tmp/zookeeper

CONGRATULATIONS!

You have successfully finished the Apache Kafka quickstart.

To learn more, we suggest the following next steps:

  • Read through the brief Introduction to learn how Kafka works at a high level, its main concepts, and how it compares to other technologies. To understand Kafka in more detail, head over to the Documentation.
  • Browse through the Use Cases to learn how other users in our world-wide community are getting value out of Kafka.
  • Join a local Kafka meetup group and watch talks from Kafka Summit, the main conference of the Kafka community.


Tool usage:

https://cloud.tencent.com/developer/article/1927921

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test-topic --partitions 3 --replication-factor 1

$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic test-topic

Official documentation:

https://kafka.apache.org/protocol#The_Messages_Fetch

https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol

Chinese translation:

A Guide To The Kafka Protocol, translated by watchword on Jan 31, 2016 (revised Jun 17 against the May 5, 2016 version of the original, v.106) and later revised by smallnest against the Jan 20, 2017 version. Required reading if you want to dig into Kafka's wire protocol: https://colobu.com/2017/01/26/A-Guide-To-The-Kafka-Protocol/

Protocol field walkthrough

The Fetch API section of the same translation covers the message-fetch fields in detail: https://colobu.com/2017/01/26/A-Guide-To-The-Kafka-Protocol/#%E8%8E%B7%E5%8F%96%E6%B6%88%E6%81%AF%E6%8E%A5%E5%8F%A3%EF%BC%88Fetch_API%EF%BC%89

Request (header fields common to all APIs):

size (4 bytes; the length of everything after this field), api_key (2 bytes), api_version (2 bytes), correlation_id (4 bytes), client_id (a length-prefixed string: a 2-byte length followed by that many bytes).

Response (the leading fields are common; the remaining fields depend on the version used in the request):

length (4 bytes), correlation_id (4 bytes)
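To see this header on the wire, here is a minimal sketch (not from the original article) that hand-crafts an ApiVersions request (api_key 18, version 0, whose body is empty beyond the header) and reads back the response size and correlation_id. The class name and client id are arbitrary; the broker address matches the quickstart default:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class KafkaRawRequest {
    public static void main(String[] args) throws Exception {
        byte[] clientId = "raw-client".getBytes(StandardCharsets.UTF_8);
        ByteBuffer body = ByteBuffer.allocate(2 + 2 + 4 + 2 + clientId.length);
        body.putShort((short) 18);               // api_key: ApiVersions
        body.putShort((short) 0);                // api_version: 0
        body.putInt(1);                          // correlation_id: echoed back by the broker
        body.putShort((short) clientId.length);  // client_id length prefix (2 bytes)
        body.put(clientId);                      // client_id bytes
        try (Socket sock = new Socket("localhost", 9092)) {
            DataOutputStream out = new DataOutputStream(sock.getOutputStream());
            out.writeInt(body.position());       // size: length of everything after this field
            out.write(body.array(), 0, body.position());
            out.flush();
            DataInputStream in = new DataInputStream(sock.getInputStream());
            int respSize = in.readInt();         // response length (excluding this field)
            int correlationId = in.readInt();    // must match the request's correlation_id
            System.out.println("response size=" + respSize + ", correlation_id=" + correlationId);
        }
    }
}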

The per-version API formats are documented at:

https://kafka.apache.org/protocol#The_Messages_Fetch

Api Keys

The following are the numeric codes that the ApiKey in the request can take for each of the below request types.
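For reference, a few of the codes from that table:

Produce: 0
Fetch: 1
ListOffsets: 2
Metadata: 3
ApiVersions: 18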

Protocol introduction:

Kafka protocol guides (Chinese):

https://www.cnblogs.com/frankdeng/p/9310684.html

https://www.jianshu.com/p/7c52a819495f
