Kerberos is a computer network authentication protocol. It is designed to provide strong authentication for client/server applications using secret-key cryptography, guaranteeing the authenticity of both parties to a conversation. Unlike many network services, Kerberos does not let a client simply connect to a service and start communicating on request. After the client requests a service, both sides must first complete a series of authentication steps, including mutual authentication of client and server; only when each side has verified the other's identity is the connection established and communication allowed. In other words, Kerberos is focused on authenticating both ends of a conversation: the client needs assurance that the service it is about to use is the genuine service rather than an impostor, and the server needs assurance that the client is a legitimate, trustworthy user rather than an attacker mounting a malicious request.
Client: the party that sends requests
Server: the party that receives requests
Key Distribution Center (KDC): the trusted third party that issues tickets and session keys
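The exchange described above can be made concrete with a toy model. This is not the real protocol (real Kerberos uses separate AS/TGS/AP exchanges, timestamps, and AES-CTS encryption; the principal name `alice`, the single KDC step, and the cipher choice here are illustrative assumptions only), but it shows the core trick: the KDC hands the client a ticket it can neither read nor forge, because the ticket is sealed under a long-term key shared only by the KDC and the target service.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Toy sketch of the Kerberos idea: the client never sees the service's key,
// yet both ends end up sharing a session key and can prove who they are.
class KerberosToy {

    static byte[] crypt(int mode, SecretKey k, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding"); // toy cipher choice, not Kerberos' real one
        c.init(mode, k);
        return c.doFinal(data);
    }

    static String run() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey clientKey = kg.generateKey();  // long-term key derived from the user's password/keytab
            SecretKey serviceKey = kg.generateKey(); // long-term key shared only by the KDC and the service

            // 1. KDC: invent a session key; send it to the client sealed under the client's
            //    key, plus a ticket for the service sealed under the service's key.
            SecretKey sessionKey = kg.generateKey();
            byte[] forClient = crypt(Cipher.ENCRYPT_MODE, clientKey, sessionKey.getEncoded());
            byte[] ticket = crypt(Cipher.ENCRYPT_MODE, serviceKey,
                    ("client=alice;key=" + Base64.getEncoder().encodeToString(sessionKey.getEncoded()))
                            .getBytes(StandardCharsets.UTF_8));

            // 2. Client: recover the session key (only possible if it really owns clientKey),
            //    then build an authenticator encrypted under that session key.
            SecretKey atClient = new SecretKeySpec(crypt(Cipher.DECRYPT_MODE, clientKey, forClient), "AES");
            byte[] authenticator = crypt(Cipher.ENCRYPT_MODE, atClient,
                    "alice".getBytes(StandardCharsets.UTF_8));

            // 3. Service: open the ticket with its own key, extract the same session key,
            //    and check the authenticator names the client carried in the ticket.
            String body = new String(crypt(Cipher.DECRYPT_MODE, serviceKey, ticket), StandardCharsets.UTF_8);
            SecretKey atService = new SecretKeySpec(
                    Base64.getDecoder().decode(body.substring(body.indexOf("key=") + 4)), "AES");
            return new String(crypt(Cipher.DECRYPT_MODE, atService, authenticator), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // the service ends up with the authenticated client name
    }
}
```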
The user.keytab and krb5.conf files are authentication files that must be supplied by the vendor: whoever operates the Kafka cluster you are connecting to provides them.
The jaas.conf file you create locally yourself.
The Spring Boot application.yml configuration is as follows (replace the paths and domain name with your own):
```yaml
debug: true

fusioninsight:
  kafka:
    bootstrap-servers: 10.80.10.3:21007,10.80.10.181:21007,10.80.10.52:21007
    security:
      protocol: SASL_PLAINTEXT
    kerberos:
      domain:
        name: hadoop.798687_97_4a2b_9510_00359f31c5ec.com
    sasl:
      kerberos:
        service:
          name: kafka
```
Here kerberos.domain.name is hadoop.798687_97_4a2b_9510_00359f31c5ec.com; replace it with the domain name provided for your site.
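The contents of the locally created jaas.conf are not shown in this post. A minimal sketch of the standard `Krb5LoginModule` client entry it typically contains, assuming the keytab sits next to the other files in /home/yxxt/ and using a hypothetical principal `developuser@HADOOP.COM` (use the principal your vendor issued):

```conf
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/yxxt/user.keytab"
    principal="developuser@HADOOP.COM"
    useTicketCache=false
    storeKey=true
    debug=true;
};
```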
pom.xml dependencies (I use the Huawei Cloud build of the Kafka client):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>kafka-sample-01</artifactId>
    <version>2.3.1.RELEASE</version>
    <packaging>jar</packaging>

    <name>kafka-sample-01</name>
    <description>Kafka Sample 1</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.0.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>

        <!-- spring-kafka, with the stock kafka-clients excluded so the
             Huawei build below is the only client on the classpath -->
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.kafka</groupId>
                    <artifactId>kafka-clients</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <!-- Huawei build of kafka-clients, resolved from the Huawei Cloud mirror -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.4.0-hw-ei-302002</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <!-- Alternative: Huawei Kafka client as a local system-scope jar -->
        <!-- <dependency>-->
        <!--     <groupId>com.huawei</groupId>-->
        <!--     <artifactId>kafka-clients</artifactId>-->
        <!--     <version>2.4.0</version>-->
        <!--     <scope>system</scope>-->
        <!--     <systemPath>${project.basedir}/lib/kafka-clients-2.4.0-hw-ei-302002.jar</systemPath>-->
        <!-- </dependency>-->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <repositories>

        <repository>
            <id>huaweicloudsdk</id>
            <url>https://mirrors.huaweicloud.com/repository/maven/huaweicloudsdk/</url>
            <releases><enabled>true</enabled></releases>
            <snapshots><enabled>true</enabled></snapshots>
        </repository>

        <repository>
            <id>central</id>
            <name>Maven Central</name>
            <url>https://repo1.maven.org/maven2/</url>
        </repository>

    </repositories>
</project>
```

The Spring Boot application's main class is as follows:
```java
package com.example;

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.kafka.support.converter.RecordMessageConverter;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;
import org.springframework.util.backoff.FixedBackOff;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

/**
 * Spring Boot entry point: wires up the Kerberos-secured Kafka consumer and producer.
 */
@SpringBootApplication
public class Application {

    private final Logger logger = LoggerFactory.getLogger(Application.class);

    @Value("${fusioninsight.kafka.bootstrap-servers}")
    public String bootstrapServers;

    @Value("${fusioninsight.kafka.security.protocol}")
    public String securityProtocol;

    @Value("${fusioninsight.kafka.kerberos.domain.name}")
    public String kerberosDomainName;

    @Value("${fusioninsight.kafka.sasl.kerberos.service.name}")
    public String kerberosServiceName;

    public static void main(String[] args) {
        // The JAAS and krb5 files must be registered before any Kafka connection is opened.
        // Adjust filePath to wherever you placed jaas.conf and krb5.conf, e.g.:
        // String filePath = System.getProperty("user.dir") + File.separator + "src" + File.separator + "main"
        // String filePath = "D:\\Java\\workspace\\20231123MOSPT4eB\\sample-01\\src\\main\\resources\\";
        String filePath = "/home/yxxt/";
        System.setProperty("java.security.auth.login.config", filePath + "jaas.conf");
        System.setProperty("java.security.krb5.conf", filePath + "krb5.conf");
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory, KafkaTemplate<String, String> template) {
        System.out.println(bootstrapServers);
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory
                = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        // Retry twice with no back-off, then publish the failed record to "<topic>.DLT"
        factory.setErrorHandler(new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template),
                new FixedBackOff(0L, 2))); // dead-letter after 3 delivery attempts
        return factory;
    }

    @Bean
    public RecordMessageConverter converter() {
        return new StringJsonMessageConverter();
    }

    // Consumer listener: invoked as soon as the topic receives a message
    @KafkaListener(id = "fooGroup1", topics = "topic_ypgk")
    public void listen(ConsumerRecord<String, String> record) {
        logger.info("Received message successfully!");
        System.out.println(record);
        // if (record.value().startsWith("fail")) {
        //     // trigger the error handler above, which writes the record
        //     // to a new topic named "<topic>.DLT"
        //     throw new RuntimeException("failed");
        // }
    }

    // Create a topic with the given number of partitions and replicas
    // @Bean
    // public NewTopic topic() {
    //     return new NewTopic("topic1", 1, (short) 1);
    // }

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configs.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, securityProtocol);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("kerberos.domain.name", kerberosDomainName);
        return new KafkaAdmin(configs);
    }

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory() {
        Map<String, Object> configs = new HashMap<>();
        configs.put("security.protocol", securityProtocol);
        configs.put("kerberos.domain.name", kerberosDomainName);
        configs.put("bootstrap.servers", bootstrapServers);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        configs.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new DefaultKafkaConsumerFactory<>(configs);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        Map<String, Object> configs = new HashMap<>();
        configs.put("security.protocol", securityProtocol);
        configs.put("kerberos.domain.name", kerberosDomainName);
        configs.put("bootstrap.servers", bootstrapServers);
        configs.put("sasl.kerberos.service.name", kerberosServiceName);
        configs.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        configs.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        ProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(configs);
        return new KafkaTemplate<>(producerFactory);
    }
}
```
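The `FixedBackOff(0L, 2)` passed to `SeekToCurrentErrorHandler` above is worth unpacking: the interval is 0 ms and the maximum number of retries after the first failure is 2, so a poison record is delivered three times in total before the recoverer publishes it to the dead-letter topic. A plain-Java sketch of that accounting (my own mimic of the semantics, not Spring's class):

```java
// Mimics the retry accounting of org.springframework.util.backoff.FixedBackOff
// as used by SeekToCurrentErrorHandler: after each failed delivery the handler
// asks for another back-off, until maxAttempts retries are exhausted and the
// record goes to "<topic>.DLT".
class BackOffMath {
    static final long STOP = -1L;

    // Grants another retry (with the given interval) while retries remain.
    static long nextBackOff(int retriesSoFar, long interval, long maxAttempts) {
        return retriesSoFar < maxAttempts ? interval : STOP;
    }

    // Counts how many times the listener sees a record that always throws.
    static int totalDeliveries(long maxAttempts) {
        int deliveries = 0;
        int retries = 0;
        while (true) {
            deliveries++;                                     // record delivered; listener throws
            long backOff = nextBackOff(retries, 0L, maxAttempts);
            if (backOff == STOP) {
                return deliveries;                            // recoverer publishes to the DLT
            }
            retries++;                                        // seek back and redeliver
        }
    }

    public static void main(String[] args) {
        System.out.println(totalDeliveries(2)); // FixedBackOff(0L, 2): 1 + 2 = 3 deliveries
    }
}
```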

Producer: sends a message to the topic in response to an HTTP request.
```java
package com.example;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

import com.common.Foo1;

/**
 * @author haosuwei
 */
@RestController
public class Controller {

    @Autowired
    private KafkaTemplate<String, String> template;

    @PostMapping(path = "/send/foo/{what}")
    public void sendFoo(@PathVariable String what) {
        Foo1 foo1 = new Foo1(what);
        this.template.send("topic1", foo1.toString());
    }

}
```
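The `com.common.Foo1` class imported by both files is never shown in the post. A minimal stand-in consistent with how it is used here (constructed from a string, a `getFoo()` accessor matching the commented-out failure check, `toString()` as the message payload) might look like the following; the field name `foo` and the exact `toString()` format are assumptions:

```java
// Hypothetical reconstruction of the com.common.Foo1 payload class used above.
class Foo1 {

    private String foo;

    public Foo1() {
        // default constructor, needed if the JSON message converter deserializes this type
    }

    public Foo1(String foo) {
        this.foo = foo;
    }

    public String getFoo() {
        return foo;
    }

    public void setFoo(String foo) {
        this.foo = foo;
    }

    @Override
    public String toString() {
        return "Foo1 [foo=" + foo + "]";
    }
}
```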

Once the application starts successfully, the listener begins receiving messages from the topic. You can trigger the producer with, for example, `curl -X POST http://localhost:8080/send/foo/hello` (assuming the default port). Note that as written the controller publishes to topic1 while the listener consumes topic_ypgk; point them at the same topic if you want the listener to see the messages you send.