
Installing Single-Node Kafka on Alibaba Cloud CentOS 7

Prerequisites

ZooKeeper is already installed on the Alibaba Cloud CentOS 7 instance; see the companion guide on installing ZooKeeper.

Change to the directory holding the installation packages (optional)

[hadoop@node1 ~]$ cd softinstall

Download

[hadoop@node1 softinstall]$ wget https://archive.apache.org/dist/kafka/3.3.1/kafka_2.12-3.3.1.tgz

Check the downloaded file

[hadoop@node1 softinstall]$ ls
apache-zookeeper-3.7.1-bin.tar.gz  hadoop-3.3.4.tar.gz  jdk-8u271-linux-x64.tar.gz  kafka_2.12-3.3.1.tgz

Extract the archive

[hadoop@node1 softinstall]$ tar -zxvf kafka_2.12-3.3.1.tgz -C ~/soft

Check the extracted files

[hadoop@node1 softinstall]$ cd ~/soft
[hadoop@node1 soft]$ ls
apache-zookeeper-3.7.1-bin  hadoop-3.3.4  jdk1.8.0_271  kafka_2.12-3.3.1

Set environment variables

[hadoop@node1 soft]$ sudo nano /etc/profile.d/my_env.sh

Add the following lines:

export KAFKA_HOME=/home/hadoop/soft/kafka_2.12-3.3.1
export PATH=$PATH:$KAFKA_HOME/bin

Make the variables take effect immediately (/etc/profile sources the files under /etc/profile.d, so this picks up my_env.sh):

[hadoop@node1 soft]$ source /etc/profile
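The two exported lines can be sanity-checked anywhere by writing them to a temporary file and sourcing it; this is only a sketch of what the profile snippet does, using the same path as this install:

```shell
# Write the same two lines to a temp file, source it, and confirm that
# KAFKA_HOME is set and that its bin directory landed on PATH.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
export KAFKA_HOME=/home/hadoop/soft/kafka_2.12-3.3.1
export PATH=$PATH:$KAFKA_HOME/bin
EOF
. "$env_file"
echo "KAFKA_HOME=$KAFKA_HOME"
case ":$PATH:" in
  *":$KAFKA_HOME/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing kafka bin" ;;
esac
rm -f "$env_file"
```

After a real `source /etc/profile`, any `kafka-*.sh` script should resolve without its full path.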

Configure Kafka

[hadoop@node1 soft]$ cd $KAFKA_HOME/config

[hadoop@node1 config]$ vim server.properties

Find the following settings and change them to:

log.dirs=/home/hadoop/soft/kafka_2.12-3.3.1/datas
zookeeper.connect=node1:2181/kafka
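If you prefer a non-interactive edit over vim, the same two changes can be sketched with sed. This demo runs against a throwaway file seeded with the stock defaults for these two keys, so it can be tried safely anywhere:

```shell
# Throwaway copy with the stock defaults for the two keys being changed.
props=$(mktemp)
cat > "$props" <<'EOF'
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
EOF

# Rewrite both settings in place, as the vim edit above would.
sed -i 's|^log.dirs=.*|log.dirs=/home/hadoop/soft/kafka_2.12-3.3.1/datas|' "$props"
sed -i 's|^zookeeper.connect=.*|zookeeper.connect=node1:2181/kafka|' "$props"
cat "$props"
```

The `/kafka` suffix on zookeeper.connect is a chroot: it keeps all of Kafka's znodes under a single path in ZooKeeper instead of scattering them at the root.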

Browse the Kafka command scripts

[hadoop@node1 config]$ cd $KAFKA_HOME
[hadoop@node1 kafka_2.12-3.3.1]$ ls
bin  config  libs  LICENSE  licenses  NOTICE  site-docs
[hadoop@node1 kafka_2.12-3.3.1]$ ls bin/
connect-distributed.sh        kafka-dump-log.sh              kafka-server-stop.sh
connect-mirror-maker.sh       kafka-features.sh              kafka-storage.sh
connect-standalone.sh         kafka-get-offsets.sh           kafka-streams-application-reset.sh
kafka-acls.sh                 kafka-leader-election.sh       kafka-topics.sh
kafka-broker-api-versions.sh  kafka-log-dirs.sh              kafka-transactions.sh
kafka-cluster.sh              kafka-metadata-quorum.sh       kafka-verifiable-consumer.sh
kafka-configs.sh              kafka-metadata-shell.sh        kafka-verifiable-producer.sh
kafka-console-consumer.sh     kafka-mirror-maker.sh          trogdor.sh
kafka-console-producer.sh     kafka-producer-perf-test.sh    windows
kafka-consumer-groups.sh      kafka-reassign-partitions.sh   zookeeper-security-migration.sh
kafka-consumer-perf-test.sh   kafka-replica-verification.sh  zookeeper-server-start.sh
kafka-delegation-tokens.sh    kafka-run-class.sh             zookeeper-server-stop.sh
kafka-delete-records.sh       kafka-server-start.sh          zookeeper-shell.sh

Start the services

Before starting Kafka, the ZooKeeper service must already be running.

Start ZooKeeper

[hadoop@node1 kafka_2.12-3.3.1]$ zkServer.sh start

Start Kafka

[hadoop@node1 kafka_2.12-3.3.1]$ kafka-server-start.sh -daemon config/server.properties 

Check the processes

[hadoop@node1 kafka_2.12-3.3.1]$ jps
24024 QuorumPeerMain
26969 Kafka
26990 Jps

Basic usage

Create a topic named test1
[hadoop@node1 kafka_2.12-3.3.1]$ kafka-topics.sh --bootstrap-server node1:9092 --create --partitions 1 --replication-factor 1 --topic test1
List topics
[hadoop@node1 kafka_2.12-3.3.1]$ kafka-topics.sh --bootstrap-server node1:9092 --list

Start a producer
[hadoop@node1 kafka_2.12-3.3.1]$ kafka-console-producer.sh --bootstrap-server node1:9092 --topic test1

The terminal now waits for input: each line you type is sent as a message. Note: do not close this terminal.

Start a consumer

Open a new terminal and run the consumer command:

[hadoop@node1 kafka_2.12-3.3.1]$ kafka-console-consumer.sh --bootstrap-server node1:9092 --topic test1

Test

Type messages in the producer terminal and watch them arrive in the consumer.

If the consumer receives every message the producer sends, Kafka is working correctly.

The steps above use Kafka from the local console. To connect remotely, open port 9092 in the instance's Alibaba Cloud security group.

Press Ctrl+C to exit the producer/consumer and return to the Linux shell. (Ctrl+Z would only suspend the process, leaving it running in the background.)

Stop the services
[hadoop@node1 kafka_2.12-3.3.1]$ kafka-server-stop.sh 
[hadoop@node1 kafka_2.12-3.3.1]$ jps
24024 QuorumPeerMain
31739 Jps
[hadoop@node1 kafka_2.12-3.3.1]$ zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /home/hadoop/soft/apache-zookeeper-3.7.1-bin/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@node1 kafka_2.12-3.3.1]$ jps
31801 Jps
[hadoop@node1 kafka_2.12-3.3.1]$ 

Kafka start/stop script

To make Kafka easier to operate, create a start/stop script.

Create the script kf.sh

[hadoop@node1 kafka_2.12-3.3.1]$ mkdir ~/bin
[hadoop@node1 kafka_2.12-3.3.1]$ cd ~/bin/
[hadoop@node1 bin]$ vim kf.sh

Script contents:

#!/bin/bash
case $1 in
"start")
    for i in node1
    do
        echo " -------- starting Kafka on $i --------"
        ssh $i "zkServer.sh start"
        ssh $i "/home/hadoop/soft/kafka_2.12-3.3.1/bin/kafka-server-start.sh -daemon /home/hadoop/soft/kafka_2.12-3.3.1/config/server.properties"
    done
    ;;
"stop")
    for i in node1
    do
        echo " -------- stopping Kafka on $i --------"
        # Stop Kafka before ZooKeeper, matching the manual shutdown order above.
        ssh $i "/home/hadoop/soft/kafka_2.12-3.3.1/bin/kafka-server-stop.sh"
        ssh $i "zkServer.sh stop"
    done
    ;;
esac
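The dispatch logic can be exercised anywhere by stubbing `ssh` with a function, no cluster required. This sketch also adds a usage message for unrecognized arguments, which is an addition of mine, not part of the original script:

```shell
# Same case/for structure as kf.sh, with ssh stubbed out by a function
# that prints instead of connecting, so the control flow is testable.
ssh() { echo "ssh $*"; }   # stub: echo the would-be remote command

kf() {
  case "$1" in
  "start")
    for i in node1; do
      echo " -------- starting Kafka on $i --------"
      ssh "$i" "zkServer.sh start"
      ssh "$i" "kafka-server-start.sh -daemon server.properties"
    done ;;
  "stop")
    for i in node1; do
      echo " -------- stopping Kafka on $i --------"
      ssh "$i" "kafka-server-stop.sh"
      ssh "$i" "zkServer.sh stop"
    done ;;
  *) echo "usage: kf.sh {start|stop}" ;;
  esac
}

kf start
kf bogus
```

Guarding the `*` branch this way makes a typo like `kf.sh restart` fail loudly instead of doing nothing.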

Make the script executable

[hadoop@node1 bin]$ chmod +x kf.sh 

Test

[hadoop@node1 bin]$ kf.sh stop
 -------- stopping Kafka on node1 --------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/soft/apache-zookeeper-3.7.1-bin/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@node1 bin]$ jps
8498 Jps
[hadoop@node1 bin]$ kf.sh start
 -------- starting Kafka on node1 --------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/soft/apache-zookeeper-3.7.1-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@node1 bin]$ jps
8539 QuorumPeerMain
8940 Jps
8911 Kafka

A problem encountered

[hadoop@node1 kafka_2.12-3.3.1]$ kafka-server-start.sh config/server.properties 
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/hadoop/soft/kafka_2.12-3.3.1/hs_err_pid24921.log

The cloud server is short of memory. Stopping some running processes to free resources, for example the Hadoop processes started earlier, resolves the error.
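If stopping other processes is not an option, another common workaround is to shrink the broker's JVM heap via the KAFKA_HEAP_OPTS environment variable, which kafka-server-start.sh honors. The sizes below are examples, not recommendations; pick values your instance can actually commit:

```shell
# Cap the Kafka JVM heap before starting the broker on a small instance.
# 128M/256M are illustrative; tune to the memory your instance has free.
export KAFKA_HEAP_OPTS="-Xms128M -Xmx256M"
echo "$KAFKA_HEAP_OPTS"
```

Then start the broker as before with `kafka-server-start.sh -daemon config/server.properties` from the same shell.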

Done. Enjoy!
