
Connecting a Spring Boot project to Kafka with Kerberos authentication


Before connecting to a Kerberos-authenticated Kafka cluster, it helps to understand the Kerberos protocol.

What is the Kerberos protocol?

Kerberos is a computer-network authentication protocol. Its design goal is to use a key-based system to provide strict identity verification for client/server applications communicating over a network, so that both parties can trust each other's identity. Unlike many other network services, a Kerberos-protected service does not let a client simply open a connection and start encrypted communication; after issuing a service request, the client must first complete a series of authentication steps, including mutual authentication of both client and server. Only after each side has verified the other's identity can the connection be established and communication proceed. In other words, Kerberos focuses on authenticating both ends: the client needs to confirm that the service it is about to use is the genuine one rather than a forged server, and the server needs to confirm that the client is authentic and trustworthy rather than a malicious attacker.

Roles in the Kerberos protocol

The Kerberos protocol involves three roles:

Client: the party that sends requests
Server: the party that receives requests
Key Distribution Center (KDC): the trusted third party that issues the keys and tickets used during authentication

Step 1: Prepare three files

(user.keytab, krb5.conf, jaas.conf)

user.keytab and krb5.conf are the two authentication files and must be provided by the vendor: whoever operates the Kafka cluster you are connecting to supplies them.
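For orientation only, a krb5.conf usually has the following shape. Every value below is an illustrative placeholder (88 is just the standard Kerberos port); the vendor-supplied file is authoritative and should be used as-is:

[libdefaults]
    default_realm = HADOOP.COM

[realms]
    HADOOP.COM = {
        kdc = kdc-host:88
        admin_server = kdc-host:749
    }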

The jaas.conf file you create yourself on the machine where the application runs.
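jaas.conf tells the JVM which keytab and principal the Kafka client should log in with. A minimal sketch, assuming the files sit under /home/yxxt/ as in the startup class below; the principal is a placeholder that must be replaced with the account the vendor issued to you:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    // placeholders: use the keytab path and principal from your vendor
    keyTab="/home/yxxt/user.keytab"
    principal="yourUser@HADOOP.COM"
    useTicketCache=false
    storeKey=true
    debug=false;
};

The Spring Boot application.yml then carries the connection settings that the startup class reads via @Value; replace the broker addresses and the domain name with your own: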

debug: true
fusioninsight:
  kafka:
    bootstrap-servers: 10.80.10.3:21007,10.80.10.181:21007,10.80.10.52:21007
    security:
      protocol: SASL_PLAINTEXT
    kerberos:
      domain:
        name: hadoop.798687_97_4a2b_9510_00359f31c5ec.com
    sasl:
      kerberos:
        service:
          name: kafka

Here kerberos.domain.name (hadoop.798687_97_4a2b_9510_00359f31c5ec.com above) must be set to the domain name provided for your environment.

Step 2: Place the files where the project can read them

Once prepared, the three configuration files can live inside the project itself or in some directory on the server; the only requirement is that the application can read them after it starts. In my project they sit under src/main/resources.
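If you keep them inside the project, the following sketch resolves the resource directory at startup. It assumes the three files sit under src/main/resources/kerberos/ (a hypothetical folder name) and that the application runs from an exploded directory; ClassPathResource#getFile cannot open files packed inside a fat jar, so on a server prefer an absolute path as in the main class further down.

import org.springframework.core.io.ClassPathResource;

import java.io.File;
import java.io.IOException;

public final class KerberosBootstrap {

    // Point the JVM at jaas.conf and krb5.conf found on the classpath.
    // Assumption: the files live under src/main/resources/kerberos/ and the
    // app runs from an exploded directory, not from inside a fat jar.
    public static void configure() throws IOException {
        String dir = new ClassPathResource("kerberos/").getFile().getAbsolutePath() + File.separator;
        System.setProperty("java.security.auth.login.config", dir + "jaas.conf");
        System.setProperty("java.security.krb5.conf", dir + "krb5.conf");
    }
}

Call KerberosBootstrap.configure() at the top of main, before SpringApplication.run, so the properties are set before any Kafka client is created.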

pom dependencies: I use Huawei's build of the Kafka client (kafka-clients 2.4.0-hw-ei-302002), which Maven resolves from the huaweicloudsdk repository declared at the bottom of the pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>kafka-sample-01</artifactId>
    <version>2.3.1.RELEASE</version>
    <packaging>jar</packaging>
    <name>kafka-sample-01</name>
    <description>Kafka Sample 1</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.0.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <!-- Exclude the stock kafka-clients so the Huawei build below is used instead. -->
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.kafka</groupId>
                    <artifactId>kafka-clients</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.4.0-hw-ei-302002</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- Alternative: reference the Huawei Kafka client as a local system-scope jar. -->
        <!--
        <dependency>
            <groupId>com.huawei</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.4.0</version>
            <scope>system</scope>
            <systemPath>${project.basedir}/lib/kafka-clients-2.4.0-hw-ei-302002.jar</systemPath>
        </dependency>
        -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <repositories>
        <repository>
            <id>huaweicloudsdk</id>
            <url>https://mirrors.huaweicloud.com/repository/maven/huaweicloudsdk/</url>
            <releases><enabled>true</enabled></releases>
            <snapshots><enabled>true</enabled></snapshots>
        </repository>
        <repository>
            <id>central</id>
            <name>Maven Central</name>
            <url>https://repo1.maven.org/maven2/</url>
        </repository>
    </repositories>
</project>

The Spring Boot application's startup class then looks like this:

package com.example;

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.kafka.support.converter.RecordMessageConverter;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;
import org.springframework.util.backoff.FixedBackOff;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

@SpringBootApplication
public class Application {

    private final Logger logger = LoggerFactory.getLogger(Application.class);

    @Value("${fusioninsight.kafka.bootstrap-servers}")
    public String bootstrapServers;

    @Value("${fusioninsight.kafka.security.protocol}")
    public String securityProtocol;

    @Value("${fusioninsight.kafka.kerberos.domain.name}")
    public String kerberosDomainName;

    @Value("${fusioninsight.kafka.sasl.kerberos.service.name}")
    public String kerberosServiceName;

    public static void main(String[] args) {
        // Point the JVM at the Kerberos files BEFORE any Kafka client is created.
        // The files can also be resolved from inside the project, for example:
        // String filePath = System.getProperty("user.dir") + File.separator + "src" + File.separator + "main" ...
        String filePath = "/home/yxxt/";
        System.setProperty("java.security.auth.login.config", filePath + "jaas.conf");
        System.setProperty("java.security.krb5.conf", filePath + "krb5.conf");
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory, KafkaTemplate<String, String> template) {
        System.out.println(bootstrapServers); // debug: confirm the injected broker list
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        // After three failed deliveries, publish the record to the <topic>.DLT dead-letter topic.
        factory.setErrorHandler(new SeekToCurrentErrorHandler(
                new DeadLetterPublishingRecoverer(template), new FixedBackOff(0L, 2)));
        return factory;
    }

    @Bean
    public RecordMessageConverter converter() {
        return new StringJsonMessageConverter();
    }

    // Listener: consumes messages from the topic as soon as they arrive.
    @KafkaListener(id = "fooGroup1", topics = "topic_ypgk")
    public void listen(ConsumerRecord<String, String> record) {
        logger.info("Received: {}", record);
        // Throwing here would trigger the error handler configured above and
        // route the failed record to the dead-letter topic:
        // throw new RuntimeException("failed");
    }

    // Create a topic with a given partition and replica count, if needed:
    // @Bean
    // public NewTopic topic() {
    //     return new NewTopic("topic1", 1, (short) 1);
    // }

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configs.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, securityProtocol);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("kerberos.domain.name", kerberosDomainName);
        return new KafkaAdmin(configs);
    }

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory() {
        Map<String, Object> configs = new HashMap<>();
        configs.put("bootstrap.servers", bootstrapServers);
        configs.put("security.protocol", securityProtocol);
        configs.put("kerberos.domain.name", kerberosDomainName);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        configs.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new DefaultKafkaConsumerFactory<>(configs);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        Map<String, Object> configs = new HashMap<>();
        configs.put("bootstrap.servers", bootstrapServers);
        configs.put("security.protocol", securityProtocol);
        configs.put("kerberos.domain.name", kerberosDomainName);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        configs.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        ProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(configs);
        return new KafkaTemplate<>(producerFactory);
    }
}

Producer: a REST endpoint that sends a message to the topic on each request:

package com.example;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

import com.common.Foo1;

/**
 * @author haosuwei
 */
@RestController
public class Controller {

    @Autowired
    private KafkaTemplate<String, String> template;

    // POST /send/foo/{what} publishes the payload to topic1.
    @PostMapping(path = "/send/foo/{what}")
    public void sendFoo(@PathVariable String what) {
        Foo1 foo1 = new Foo1(what);
        this.template.send("topic1", foo1.toString());
    }
}
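The controller references a small payload class com.common.Foo1 that is not shown in this post; a minimal assumed version, just enough for the sample to compile, looks like this:

package com.common;

// Assumed minimal payload class; field and accessor names are illustrative.
public class Foo1 {

    private String foo;

    public Foo1(String foo) {
        this.foo = foo;
    }

    public String getFoo() {
        return foo;
    }

    public void setFoo(String foo) {
        this.foo = foo;
    }

    @Override
    public String toString() {
        return "Foo1 [foo=" + foo + "]";
    }
}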

Once the application starts successfully, the listener begins receiving messages from the topic. Note that the controller above sends to topic1 while the listener consumes topic_ypgk; point both at the same topic if you want a POST to /send/foo/hello to come back through the listener.
