
A Hands-on Guide to a Complete and Efficient Offline Kubernetes Deployment

Author: 郝建伟

Background

As more and more projects are delivered on customer premises, we occasionally run into environments without public internet access, where everything has to be deployed on an internal network. This calls for a complete and efficient offline deployment solution.

System resources

| No. | Hostname | IP | Type | CPU | Memory | Disk |
| --- | --- | --- | --- | --- | --- | --- |
| 01 | k8s-master1 | 10.132.10.91 | CentOS-7 | 4c | 8g | 40g |
| 02 | k8s-master2 | 10.132.10.92 | CentOS-7 | 4c | 8g | 40g |
| 03 | k8s-master3 | 10.132.10.93 | CentOS-7 | 4c | 8g | 40g |
| 04 | k8s-worker1 | 10.132.10.94 | CentOS-7 | 8c | 16g | 200g |
| 05 | k8s-worker2 | 10.132.10.95 | CentOS-7 | 8c | 16g | 200g |
| 06 | k8s-worker3 | 10.132.10.96 | CentOS-7 | 8c | 16g | 200g |
| 07 | k8s-worker4 | 10.132.10.97 | CentOS-7 | 8c | 16g | 200g |
| 08 | k8s-worker5 | 10.132.10.98 | CentOS-7 | 8c | 16g | 200g |
| 09 | k8s-worker6 | 10.132.10.99 | CentOS-7 | 8c | 16g | 200g |
| 10 | k8s-harbor & deploy | 10.132.10.100 | CentOS-7 | 4c | 8g | 500g |
| 11 | k8s-nfs | 10.132.10.101 | CentOS-7 | 2c | 4g | 2000g |
| 12 | k8s-lb | 10.132.10.120 | internal LB | 2c | 4g | 40g |

Parameter configuration

Note: perform the following operations on all nodes.

Basic system settings

Create the working, log, and data directories

$ mkdir -p /export/servers
$ mkdir -p /export/logs
$ mkdir -p /export/data
$ mkdir -p /export/upload
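These directories are needed on every node. Once ansible is set up (see the ansible section below), the same layout can be created on all nodes in one pass; a minimal sketch, assuming the `k8s` host group defined later in this guide:

$ for dir in servers logs data upload; do
>   ansible k8s -m file -a "path=/export/$dir state=directory"
> done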

Kernel and network parameter tuning

$ vim /etc/sysctl.conf
# add the following settings
fs.file-max = 1048576
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 5
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
vm.max_map_count = 262144
# apply immediately
sysctl -w vm.max_map_count=262144
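The single `sysctl -w` above only applies `vm.max_map_count`. To load every value written to `/etc/sysctl.conf` and spot-check the result, something like the following can be used (a minimal sketch):

$ sysctl -p /etc/sysctl.conf            # reload all settings from the file
$ sysctl fs.file-max vm.max_map_count   # verify a couple of the values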

ulimit tuning

$ vim /etc/security/limits.conf
# add the following settings
* soft memlock unlimited
* hard memlock unlimited
* soft nproc 102400
* hard nproc 102400
* soft nofile 1048576
* hard nofile 1048576
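Changes to limits.conf only apply to new login sessions. A quick check after logging in again (a minimal sketch; the expected values match the settings above):

$ ulimit -n   # open files, expect 1048576
$ ulimit -u   # max user processes, expect 102400
$ ulimit -l   # locked memory, expect unlimited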

Base environment preparation

Installing ansible

1. Environment

| Item | Value |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| ansible | 2.9.27 |
| Node | deploy |

2. Deployment notes

The IoT management platform spans a large number of machines, so ansible is used to run operations on them in batches and save time. This requires passwordless root SSH from the deploy node to every other node.

# Note: if the root password cannot be used directly, passwordless login can be set up manually as follows:
# Generate a key pair on the deploy node
$ ssh-keygen -t rsa
# Copy the content of ~/.ssh/id_rsa.pub and append it to ~/.ssh/authorized_keys on every other node
# If authorized_keys does not exist, create it first and then paste the key
$ touch ~/.ssh/authorized_keys
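When the root password is known, the manual copy-and-paste above can be replaced with `ssh-copy-id`; a minimal sketch that loops over the node IPs used in this guide (adjust the range to match your environment):

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # skip if a key pair already exists
$ for ip in 10.132.10.{91..99}; do
>   ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
> done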

3. Deployment steps

1) Online installation

$ yum -y install https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.9.27-1.el7.ans.noarch.rpm

2) Offline installation

# Upload ansible and all of its dependency rpm packages in advance, then change into the rpm directory
$ yum -y install ./*.rpm

3) Check the version

$ ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

4) Define the managed host inventory

$ vim /etc/ansible/hosts
[master]
10.132.10.91 node_name=k8s-master1
10.132.10.92 node_name=k8s-master2
10.132.10.93 node_name=k8s-master3
[worker]
10.132.10.94 node_name=k8s-worker1
10.132.10.95 node_name=k8s-worker2
10.132.10.96 node_name=k8s-worker3
10.132.10.97 node_name=k8s-worker4
10.132.10.98 node_name=k8s-worker5
10.132.10.99 node_name=k8s-worker6
[etcd]
10.132.10.91 etcd_name=etcd1
10.132.10.92 etcd_name=etcd2
10.132.10.93 etcd_name=etcd3
[k8s:children]
master
worker
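Before moving on, it is worth confirming that the deploy node can actually reach every host in the inventory; a minimal sketch (each host should reply with `pong`):

$ ansible k8s -m ping
$ ansible k8s --list-hosts   # confirm group membership resolves as expected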

5) Disable SSH host key checking

$ vi /etc/ansible/ansible.cfg
# change the following setting
# uncomment this to disable SSH key host checking
host_key_checking = False

6) Disable SELinux, relax the firewall, and turn off swap

$ ansible k8s -m command -a "setenforce 0"
$ ansible k8s -m command -a "sed --follow-symlinks -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"
$ ansible k8s -m command -a "firewall-cmd --set-default-zone=trusted"
$ ansible k8s -m command -a "firewall-cmd --complete-reload"
$ ansible k8s -m command -a "swapoff -a"

7) Configure /etc/hosts

$ cd /export/upload && vim hosts_set.sh
# script content
#!/bin/bash
cat > /etc/hosts << EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.132.10.100 deploy harbor
10.132.10.91 master01
10.132.10.92 master02
10.132.10.93 master03
10.132.10.94 worker01
10.132.10.95 worker02
10.132.10.96 worker03
10.132.10.97 worker04
10.132.10.98 worker05
10.132.10.99 worker06
EOF
$ ansible k8s -m copy -a 'src=/export/upload/hosts_set.sh dest=/export/upload'
$ ansible k8s -m command -a 'sh /export/upload/hosts_set.sh'

Installing docker

1. Environment

| Item | Value |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| docker | docker-ce-20.10.17 |
| Node | deploy |

2. Deployment notes

Deploy docker as the container runtime for k8s.

3. Deployment steps

1) Online installation

$ yum -y install docker-ce-20.10.17

2) Offline installation

# Upload docker and all of its dependency rpm packages in advance, then change into the rpm directory
$ yum -y install ./*.rpm

3) Start docker and check its status

$ systemctl start docker
$ systemctl status docker

4) Enable start on boot

$ systemctl enable docker 

5) Check the version

$ docker version
Client: Docker Engine - Community
 Version: 20.10.17
 API version: 1.41
 Go version: go1.17.11
 Git commit: 100c701
 Built: Mon Jun 6 23:05:12 2022
 OS/Arch: linux/amd64
 Context: default
 Experimental: true
Server: Docker Engine - Community
 Engine:
  Version: 20.10.17
  API version: 1.41 (minimum version 1.12)
  Go version: go1.17.11
  Git commit: a89b842
  Built: Mon Jun 6 23:03:33 2022
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.6.8
  GitCommit: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version: 1.1.4
  GitCommit: v1.1.4-0-g5fd4c4d
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0

Installing docker-compose

1. Environment

| Item | Value |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| docker-compose | docker-compose-linux-x86_64 |
| Node | deploy |

2. Deployment notes

Required as a dependency of the harbor private image registry.

3. Deployment steps

1) Download docker-compose and upload it to the server

$ curl -L https://github.com/docker/compose/releases/download/v2.9.0/docker-compose-linux-x86_64 -o docker-compose

2) Install the binary and make it executable

$ mv docker-compose /usr/local/bin/
$ chmod +x /usr/local/bin/docker-compose
$ docker-compose version

3) Check the version

$ docker-compose version
Docker Compose version v2.9.0

Installing harbor

1. Environment

| Item | Value |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| harbor | harbor-offline-installer-v2.4.3 |
| Node | harbor |

2. Deployment notes

Private image registry.

3. Download the harbor offline installer and upload it to the server

$ wget https://github.com/goharbor/harbor/releases/download/v2.4.3/harbor-offline-installer-v2.4.3.tgz

4. Extract the installer

$ tar -xzvf harbor-offline-installer-v2.4.3.tgz -C /export/servers/
$ cd /export/servers/harbor

5. Edit the configuration file

$ mv harbor.yml.tmpl harbor.yml
$ vim harbor.yml

6. Set the following values

hostname: 10.132.10.100
http.port: 8090
data_volume: /export/data/harbor
log.location: /export/logs/harbor

7. Load the harbor images

$ docker load -i harbor.v2.4.3.tar.gz
# wait for the harbor images to finish loading
$ docker images
REPOSITORY                      TAG      IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.4.3   776ac6ee91f4   4 weeks ago   81.5MB
goharbor/chartmuseum-photon     v2.4.3   f39a9694988d   4 weeks ago   172MB
goharbor/redis-photon           v2.4.3   b168e9750dc8   4 weeks ago   154MB
goharbor/trivy-adapter-photon   v2.4.3   a406a715461c   4 weeks ago   251MB
goharbor/notary-server-photon   v2.4.3   da89404c7cf9   4 weeks ago   109MB
goharbor/notary-signer-photon   v2.4.3   38468ac13836   4 weeks ago   107MB
goharbor/harbor-registryctl    v2.4.3   61243a84642b   4 weeks ago   135MB
goharbor/registry-photon        v2.4.3   9855479dd6fa   4 weeks ago   77.9MB
goharbor/nginx-photon           v2.4.3   0165c71ef734   4 weeks ago   44.4MB
goharbor/harbor-log             v2.4.3   57ceb170dac4   4 weeks ago   161MB
goharbor/harbor-jobservice      v2.4.3   7fea87c4b884   4 weeks ago   219MB
goharbor/harbor-core            v2.4.3   d864774a3b8f   4 weeks ago   197MB
goharbor/harbor-portal          v2.4.3   85f00db66862   4 weeks ago   53.4MB
goharbor/harbor-db              v2.4.3   7693d44a2ad6   4 weeks ago   225MB
goharbor/prepare                v2.4.3   c882d74725ee   4 weeks ago   268MB

8. Start harbor

$ ./prepare                        # run again whenever harbor.yml is modified, to regenerate the configuration
$ ./install.sh --help              # list the available startup options
$ ./install.sh --with-chartmuseum
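Because harbor is exposed over plain HTTP on 10.132.10.100:8090, docker will refuse to push to it until the address is declared as an insecure registry. A minimal sketch for the deploy node (the k8s nodes receive the same setting in the docker section below); note that the `community` project used later has to be created in the harbor web UI first, and the default credentials below only apply if `harbor_admin_password` was not changed in harbor.yml:

$ cat > /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["10.132.10.100:8090"]
}
EOF
$ systemctl restart docker
$ cd /export/servers/harbor && docker-compose up -d   # bring harbor back up after the docker restart
$ docker login 10.132.10.100:8090 -u admin            # default password: Harbor12345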

Runtime environment setup

Installing docker

1. Environment

| Item | Value |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| docker | docker-ce-20.10.17 |
| Node | all k8s cluster nodes |

2. Deployment notes

Deploy docker as the container runtime on every k8s cluster node.

3. Deployment steps

1) Upload docker and its dependency rpm packages

$ ls /export/upload/docker-rpm.tgz 

2) Distribute the package

$ ansible k8s -m copy -a "src=/export/upload/docker-rpm.tgz dest=/export/upload/"
# every node returns output like the following
CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "acd3897edb624cd18a197bcd026e6769797f4f05",
    "dest": "/export/upload/docker-rpm.tgz",
    "gid": 0,
    "group": "root",
    "md5sum": "3ba6d9fe6b2ac70860b6638b88d3c89d",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:usr_t:s0",
    "size": 103234394,
    "src": "/root/.ansible/tmp/ansible-tmp-1661836788.82-13591-17885284311930/source",
    "state": "file",
    "uid": 0
}

3) Extract and install

$ ansible k8s -m shell -a "tar xzvf /export/upload/docker-rpm.tgz -C /export/upload/ && yum -y install /export/upload/docker-rpm/*"

4) Enable on boot and start

$ ansible k8s -m shell -a "systemctl enable docker && systemctl start docker"

5) Check the version

$ ansible k8s -m shell -a "docker version"
# every node returns output like the following
CHANGED | rc=0 >>
Client: Docker Engine - Community
 Version: 20.10.17
 API version: 1.41
 Go version: go1.17.11
 Git commit: 100c701
 Built: Mon Jun 6 23:05:12 2022
 OS/Arch: linux/amd64
 Context: default
 Experimental: true
Server: Docker Engine - Community
 Engine:
  Version: 20.10.17
  API version: 1.41 (minimum version 1.12)
  Go version: go1.17.11
  Git commit: a89b842
  Built: Mon Jun 6 23:03:33 2022
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.6.8
  GitCommit: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version: 1.1.4
  GitCommit: v1.1.4-0-g5fd4c4d
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0
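The cluster nodes will pull every image from the HTTP harbor registry, so they need the same insecure-registry setting as the deploy node. A minimal sketch distributed with ansible (the `native.cgroupdriver=systemd` line matches what kubelet expects by default with kubeadm 1.22; drop it if docker is deliberately configured otherwise, and merge rather than overwrite if /etc/docker/daemon.json already exists on the nodes):

$ cat > /export/upload/daemon.json << EOF
{
  "insecure-registries": ["10.132.10.100:8090"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ ansible k8s -m copy -a "src=/export/upload/daemon.json dest=/etc/docker/daemon.json"
$ ansible k8s -m shell -a "systemctl daemon-reload && systemctl restart docker"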

Installing kubernetes

With internet access

# Add the Aliyun Kubernetes yum repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Download the offline packages

# Create a directory for the rpm packages:
mkdir -p /export/download/kubeadm-rpm
# Download the packages without installing them:
yum install -y kubelet-1.22.4 kubeadm-1.22.4 kubectl-1.22.4 --downloadonly --downloaddir /export/download/kubeadm-rpm

Without internet access

1) Upload kubeadm and its dependency rpm packages

$ ls /export/upload/
kubeadm-rpm.tgz

2) Distribute the package

$ ansible k8s -m copy -a "src=/export/upload/kubeadm-rpm.tgz dest=/export/upload/"
# every node returns output like the following
CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "3fe96fe1aa7f4a09d86722f79f36fb8fde69facb",
    "dest": "/export/upload/kubeadm-rpm.tgz",
    "gid": 0,
    "group": "root",
    "md5sum": "80d5bda420db6ea23ad75dcf0f76e858",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:usr_t:s0",
    "size": 67423355,
    "src": "/root/.ansible/tmp/ansible-tmp-1661840257.4-33361-139823848282879/source",
    "state": "file",
    "uid": 0
}

3) Extract and install

$ ansible k8s -m shell -a "tar xzvf /export/upload/kubeadm-rpm.tgz -C /export/upload/ && yum -y install /export/upload/kubeadm-rpm/*"

4) Enable on boot and start

$ ansible k8s -m shell -a "systemctl enable kubelet && systemctl start kubelet"
# Note: kubelet fails to start at this point and keeps restarting; this is expected and resolves itself once
# `kubeadm init` / `kubeadm join` has been run, as the official documentation also states, so kubelet.service can
# be ignored for now. Its status can be inspected with:
$ journalctl -xefu kubelet

5) Distribute the required images to the cluster nodes

# The required images can be pulled in advance in an environment with internet access
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
$ docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker pull rancher/mirrored-flannelcni-flannel:v0.19.1
# Export the image files (see the docker save sketch after this block), upload them to the deploy node, and load them
$ ls /export/upload
$ docker load -i google_containers-coredns-v1.8.4.tar
$ docker load -i google_containers-etcd-3.5.0-0.tar
$ docker load -i google_containers-kube-apiserver-v1.22.4.tar
$ docker load -i google_containers-kube-controller-manager-v1.22.4.tar
$ docker load -i google_containers-kube-proxy-v1.22.4.tar
$ docker load -i google_containers-kube-scheduler-v1.22.4.tar
$ docker load -i google_containers-pause-3.5.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-cni-plugin-v1.1.0.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-v0.19.1.tar
# Tag the images for the harbor registry
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 10.132.10.100:8090/community/coredns:v1.8.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 10.132.10.100:8090/community/pause:3.5
$ docker tag rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker tag rancher/mirrored-flannelcni-flannel:v0.19.1 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
# Push the images to the harbor registry (the address must match the one used when tagging above)
$ docker push 10.132.10.100:8090/community/coredns:v1.8.4
$ docker push 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker push 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker push 10.132.10.100:8090/community/pause:3.5
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
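The "export the image files" step above is not spelled out; a minimal sketch of saving each pulled image to a tar file on the internet-connected host, using file names that match the `docker load` commands above:

$ cd /export/upload
$ docker save registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 \
    -o google_containers-coredns-v1.8.4.tar
$ docker save rancher/mirrored-flannelcni-flannel:v0.19.1 \
    -o rancher-mirrored-flannelcni-flannel-v0.19.1.tar
# ...repeat for the remaining images in the pull list above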

6) Deploy the first master

$ kubeadm init \
  --control-plane-endpoint "10.132.10.91:6443" \
  --image-repository 10.132.10.100:8090/community \
  --kubernetes-version v1.22.4 \
  --service-cidr=172.16.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --token "abcdef.0123456789abcdef" \
  --token-ttl "0" \
  --upload-certs
# the following output is displayed
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [172.16.0.1 10.132.10.91]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.008638 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
    --control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2

7) Set up the admin kubeconfig

# Run the following commands
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
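A quick sanity check that the kubeconfig works and the control plane responds (a minimal sketch; the node will stay `NotReady` until the flannel plugin in the next step is installed):

$ kubectl cluster-info
$ kubectl get nodes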

8) Configure the flannel network plugin

# Create the flannel.yml file
$ touch /export/servers/kubernetes/flannel.yml
$ vim /export/servers/kubernetes/flannel.yml
# Set the following content; note the image addresses that must be switched depending on whether internet access is available
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        # with internet access, the upstream image can be used instead:
        # image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        # in the offline environment, use the private harbor address:
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        # with internet access, the upstream image can be used instead:
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # in the offline environment, use the private harbor address:
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        # with internet access, the upstream image can be used instead:
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # in the offline environment, use the private harbor address:
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

9) Install the flannel network plugin

# Apply the yml manifest created above
$ kubectl apply -f /export/servers/kubernetes/flannel.yml
# Check pod status
$ kubectl get pods -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-kjmt4              1/1     Running   0          148m
kube-system    coredns-7f84d7b4b5-7qr8g           1/1     Running   0          4h18m
kube-system    coredns-7f84d7b4b5-fljws           1/1     Running   0          4h18m
kube-system    etcd-master01                      1/1     Running   0          4h19m
kube-system    kube-apiserver-master01            1/1     Running   0          4h19m
kube-system    kube-controller-manager-master01   1/1     Running   0          4h19m
kube-system    kube-proxy-wzq2t                   1/1     Running   0          4h18m
kube-system    kube-scheduler-master01            1/1     Running   0          4h19m
10) Join the remaining master nodes

# Run the following on master01
# List the tokens
$ kubeadm token list
# The join command generated by the init step on master01 is:
$ kubeadm join 10.132.10.91:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
  --control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
# Run the following on each of the other master nodes
# Execute the join command above to add the node to the cluster as a control-plane member
$ kubeadm join 10.132.10.91:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
  --control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
# If this fails, the certificate key has usually expired; regenerate it on master01 with:
$ kubeadm init phase upload-certs --upload-certs
3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f
# Replace the certificate-key value with the one generated above and run the join again on the other master nodes
$ kubeadm join 10.132.10.91:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
  --control-plane \
  --certificate-key 3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f
# Set up the admin kubeconfig
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check node status from any master node
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   5h58m   v1.22.4
master02   Ready    control-plane,master   45m     v1.22.4
master03   Ready    control-plane,master   44m     v1.22.4

11) Join the worker nodes

# On each worker node, run the join command generated by the init step on master01
# to add the node to the cluster as a worker
$ kubeadm join 10.132.10.91:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2
# If this fails, the token has usually expired; regenerate the join command on master01 with:
$ kubeadm token create --print-join-command
kubeadm join 10.132.10.91:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:cf30ddd3df1c6215b886df1ea378a68ad5a9faad7933d53ca9891ebbdf9a1c3f
# Run the newly generated join command on the remaining worker nodes
# Check cluster status
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   6h12m   v1.22.4
master02   Ready    control-plane,master   58m     v1.22.4
master03   Ready    control-plane,master   57m     v1.22.4
worker01   Ready    <none>                 5m12s   v1.22.4
worker02   Ready    <none>                 4m10s   v1.22.4
worker03   Ready    <none>                 3m42s   v1.22.4

12) Configure the kubernetes dashboard

Create /export/servers/kubernetes/dashboard.yml (applied in a later step) with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

13) Generate a self-signed certificate for the dashboard

$ mkdir -p /export/servers/kubernetes/certs && cd /export/servers/kubernetes/certs/
$ openssl genrsa -out dashboard.key 2048
$ openssl req -days 3650 -new -key dashboard.key -out dashboard.csr -subj /C=CN/ST=BEIJING/L=BEIJING/O=JD/OU=JD/CN=172.16.16.42
$ openssl x509 -req -days 3650 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

14) Run the following commands

# Remove the taint from the master nodes
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# Create the namespace
$ kubectl create namespace kubernetes-dashboard
# Create the TLS secret
$ kubectl create secret tls kubernetes-dashboard-certs -n kubernetes-dashboard --key dashboard.key \
  --cert dashboard.crt

15) Apply the dashboard yml manifest

$ kubectl apply -f /export/servers/kubernetes/dashboard.yml
# Check pod status
$ kubectl get pods -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-rbdt4   1/1   Running   0   15m
kubernetes-dashboard   kubernetes-dashboard-764b4dd7-rt66t         1/1   Running   0   15m

16) Access the dashboard

# In a web browser, open https://<IP>:31001/#/login, where <IP> is any cluster node (or the LB address), e.g. https://10.132.10.120:31001/#/login

17) Create an access token

# Create the configuration file dashboard-adminuser.yaml
$ touch /export/servers/kubernetes/dashboard-adminuser.yaml && vim /export/servers/kubernetes/dashboard-adminuser.yaml
# Enter the following content
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# Apply the yaml file
$ kubectl create -f /export/servers/kubernetes/dashboard-adminuser.yaml
# Expected output
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Explanation: the above creates a service account named admin-user in the kubernetes-dashboard namespace and binds
# the cluster-admin ClusterRole to it, which gives the admin-user account administrator privileges. The cluster-admin
# role is already created when kubeadm bootstraps the cluster, so it only needs to be bound.
# Retrieve the token of the admin-user account
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# Expected output
Name:         admin-user-token-9fpps
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 72c1aa28-6385-4d1a-b22c-42427b74b4c7
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjFEckU0NXB5Yno5UV9MUFkxSUpPenJhcTFuektHazM1c2QzTGFmRzNES0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTlmcHBzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MmMxYWEyOC02Mzg1LTRkMWEtYjIyYy00MjQyN2I3NGI0YzciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.oA3NLhhTaXd2qvWrPDXat2w9ywdWi_77SINk4vWkfIIzMmxBEHnqvDIBvhRC3frIokNSvT71y6mXN0KHu32hBa1YWi0MuzF165ZNFtM_rSQiq9OnPxeFvLaKS-0Vzr2nWuBx_-fTt7gESReSMLEJStbPb1wOnR6kqtY66ajKK5ILeIQ77I0KXYIi7GlPEyc6q4bIjweZ0HSXDPR4JSnEAhrP8Qslrv3Oft4QZVNj47x7xKC4dyyZOMHUIj9QhkpI2gMbiZ8XDUmNok070yDc0TCxeTZKDuvdsigxCMQx6AesD-8dca5Hb8Sm4mEPkGJekvMzkLkM97y_pOBPkfTAIA
# Copy the token printed above into the Token field of the login page to log in to the dashboard

18) Log in to the dashboard

(Screenshots of the dashboard login and overview pages omitted.)





Installing kubectl

1. Environment

| Item | Value |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| kubectl | kubectl-1.22.4-0.x86_64 |
| Node | deploy |

2. Deployment notes

The kubectl command-line client for Kubernetes.

3. Extract the kubeadm-rpm package uploaded earlier

$ tar xzvf kubeadm-rpm.tgz 

4. Install

$ rpm -ivh bc7a9f8e7c6844cfeab2066a84b8fecf8cf608581e56f6f96f80211250f9a5e7-kubectl-1.22.4-0.x86_64.rpm

5. Set up the kubeconfig

# Create the kubeconfig file
$ mkdir -p $HOME/.kube
$ sudo touch $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Copy the kubeconfig content from any master node into the file above, as sketched below
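The copy can be done with scp instead of pasting by hand; a minimal sketch, assuming passwordless SSH to master01 has been set up as described earlier:

$ scp root@master01:/etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config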

6. Check the version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

Installing helm

1. Environment

| Item | Value |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| helm | helm-v3.9.3-linux-amd64.tar.gz |
| Node | deploy |

2. Deployment notes

A package and configuration management tool for Kubernetes resources.

3. Download the helm offline package and upload it to the server

$ wget https://get.helm.sh/helm-v3.9.3-linux-amd64.tar.gz

4. Extract the package

$ tar -zxvf helm-v3.9.3-linux-amd64.tar.gz -C /export/servers/
$ cd /export/servers/

5. Install the binary and make it executable

$ cp linux-amd64/helm /usr/local/bin/
$ chmod +x /usr/local/bin/helm

6. Check the version

$ helm version
version.BuildInfo{Version:"v3.9.3", GitCommit:"414ff28d4029ae8c8b05d62aa06c7fe3dee2bc58", GitTreeState:"clean", GoVersion:"go1.17.13"}
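In the offline environment helm is mostly used with chart archives that were downloaded elsewhere and uploaded to the deploy node. A minimal sketch (the release name, chart path, namespace, and values file below are placeholders; image addresses in the values file should point at the private harbor registry):

$ helm install my-release /export/upload/charts/my-chart-1.0.0.tgz \
    --namespace my-namespace --create-namespace \
    -f my-values.yaml
$ helm list -n my-namespace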

Set up local-path storage on the NAS mount

$ mkdir -p /export/servers/helm_chart/local-path-storage && cd /export/servers/helm_chart/local-path-storage
$ vim local-path-storage.yaml
# Set the following content; point "paths" at the NAS directory (create the directory if it does not exist)
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
- apiGroups: [ "" ]
  resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
  verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
  resources: [ "endpoints", "persistentvolumes", "pods" ]
  verbs: [ "*" ]
- apiGroups: [ "" ]
  resources: [ "events" ]
  verbs: [ "create", "patch" ]
- apiGroups: [ "storage.k8s.io" ]
  resources: [ "storageclasses" ]
  verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.21
        imagePullPolicy: IfNotPresent
        command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: config-volume
        configMap:
          name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/nas_data/jdiot/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p)
          absolutePath=$OPTARG
          ;;
        s)
          sizeInBytes=$OPTARG
          ;;
        m)
          volMode=$OPTARG
          ;;
      esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p)
          absolutePath=$OPTARG
          ;;
        s)
          sizeInBytes=$OPTARG
          ;;
        m)
          volMode=$OPTARG
          ;;
      esac
    done
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox

Note: the images referenced above must be downloaded in an environment with internet access and imported into the private registry, and the image addresses above must then be changed so that they are pulled from the private registry.

Apply the local storage yaml

$ kubectl apply -f local-path-storage.yaml -n local-path-storage

Set the default storage class for k8s

$ kubectl patch storageclass local-path  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
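To confirm that local-path provisioning works end to end, a throwaway PVC can be created and checked; a minimal sketch (because of `WaitForFirstConsumer`, the PVC stays Pending until a pod actually uses it):

$ kubectl get storageclass          # local-path should be marked (default)
$ kubectl create -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
$ kubectl get pvc local-path-test
$ kubectl delete pvc local-path-test   # clean up after checking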

Note: the middleware and services deployed later must have their storage set to this local storage class: "storageClass": "local-path".

 
