
Kubernetes Persistent Storage: PV and PVC

I. PV and PVC

1. PersistentVolume (PV)
        A PersistentVolume (PV) is a piece of storage in an external storage system, created and maintained by an administrator. Like a Volume, a PV is persistent: its lifecycle is independent of any Pod.
2. PersistentVolumeClaim (PVC)
        A PersistentVolumeClaim (PVC) is a request (a claim) for a PV, usually created and maintained by an ordinary user. When a Pod needs storage, the user creates a PVC that specifies the required capacity, the access mode (for example, read-only), and other attributes; Kubernetes then finds a PV that satisfies the claim and binds it.

II. Persistent Storage via NFS

1. Configure NFS
Role assignment:
nfs-server: k8s-master
nfs-client: k8s-node1, k8s-node2
1) Install the NFS service — on all nodes
[root@k8s-master ~]# yum install -y nfs-utils rpcbind
2) Create the shared directory and set its permissions — on the nfs-server
[root@k8s-master ~]# mkdir /nfsdata
[root@k8s-master ~]# chmod 777 /nfsdata    # a directory needs the execute bit to be traversable; 777 lets any client user write
3) Edit the exports file — on the nfs-server
[root@k8s-master ~]# vim /etc/exports
[root@k8s-master ~]# cat /etc/exports
/nfsdata *(rw,no_root_squash,no_all_squash,sync)
Here rw grants read-write access, no_root_squash lets a remote root user keep root privileges on the export, and sync flushes each write to disk before replying.
4) Start rpcbind and nfs — on the nfs-server
[root@k8s-master ~]# systemctl start rpcbind
[root@k8s-master ~]# systemctl start nfs
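Optionally (a step not in the original transcript), enable both services so they survive a reboot; on CentOS 7 the NFS server unit is nfs-server:
[root@k8s-master ~]# systemctl enable rpcbind nfs-server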
5) Test that the NFS mount works
On the nfs-client:
[root@k8s-node2 ~]# mkdir /test
[root@k8s-node2 ~]# mount -t nfs 192.168.22.139:/nfsdata /test/    # the nfs-server's IP
[root@k8s-node2 ~]# df -Th | grep "/test"
192.168.22.139:/nfsdata nfs4 19G 9.9G 9.0G 53% /test
[root@k8s-node2 ~]# touch /test/ip.txt
[root@k8s-node2 ~]# ls /test/
ip.txt
On the nfs-server:
[root@k8s-master ~]# ls /nfsdata/
ip.txt
[root@k8s-node2 ~]# umount /test    # unmount once the test succeeds
2. Create a PV
1) Write the YAML configuration file
[root@k8s-master ~]# vim nfs-pv1.yaml
[root@k8s-master ~]# cat nfs-pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:                                 # the capacity of the PV
    storage: 1Gi
  accessModes:                              # the access modes
  - ReadWriteOnce                           # the PV can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Recycle    # reclaim policy: Recycle wipes the data in the PV
  storageClassName: nfs                     # the PV's class; effectively a label that groups PVs
  nfs:
    path: /nfsdata
    server: 192.168.22.139                  # the address of the machine hosting the NFS export

        PS:

        1) accessModes specifies the access mode, here ReadWriteOnce. The supported access modes are:

                ReadWriteOnce – the PV can be mounted read-write by a single node.
                ReadOnlyMany – the PV can be mounted read-only by many nodes.
                ReadWriteMany – the PV can be mounted read-write by many nodes.
        2) persistentVolumeReclaimPolicy specifies the PV's reclaim policy. The supported policies are:
                Retain – the administrator reclaims the volume manually.
                Recycle – wipes the data in the PV, equivalent to running rm -rf /nfsdata/*. (Recycle is deprecated in current Kubernetes releases in favor of dynamic provisioning.)
                Delete – deletes the corresponding storage resource on the storage provider, e.g. an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume.
2) Apply the file to create mypv1
[root@k8s-master ~]# kubectl apply -f nfs-pv1.yaml
persistentvolume/mypv1 created
[root@k8s-master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv1   1Gi        RWO            Recycle          Available           nfs                     8s
# STATUS is Available: mypv1 is ready and can be claimed by a PVC
3. Create a PVC
1) Write the YAML file
[root@k8s-master ~]# vim nfs-pvc1.yaml
[root@k8s-master ~]# cat nfs-pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc1
spec:
  accessModes:             # the access mode to request
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # the capacity to request from a PV
  storageClassName: nfs    # the class of PV to request
2) Apply and inspect
[root@k8s-master ~]# kubectl apply -f nfs-pvc1.yaml
persistentvolumeclaim/mypvc1 created
[root@k8s-master ~]# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc1   Bound    mypv1    1Gi        RWO            nfs            6s
[root@k8s-master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
mypv1   1Gi        RWO            Recycle          Bound    default/mypvc1   nfs                     12m
# Both queries show mypvc1 Bound to mypv1: the claim succeeded
4. Create a Pod
1) Write the pod's YAML file and apply it
[root@k8s-master ~]# vim pod1.yaml
[root@k8s-master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod-nginx
  labels:
    app: nginx
spec:
  containers:
  - name: mypod1
    image: daocloud.io/library/nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mydata
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: mypvc1
[root@k8s-master ~]# kubectl apply -f pod1.yaml
pod/nfs-pod-nginx created
5. Verify
[root@k8s-master ~]# kubectl exec -it nfs-pod-nginx /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nfs-pod-nginx:/# ls /usr/share/nginx/html/
ip.txt    # the file created earlier in /nfsdata
root@nfs-pod-nginx:/# echo "hello!" > /usr/share/nginx/html/index.html
root@nfs-pod-nginx:/# exit
exit
command terminated with exit code 130
[root@k8s-master ~]# ls /nfsdata/    # the file also shows up in the NFS share: the volume is shared successfully
index.html ip.txt
[root@k8s-master ~]# cat /nfsdata/index.html
hello!
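Since the pod serves the mounted directory over HTTP, the result can also be checked from outside the container; a minimal sketch (the pod IP placeholder below is whatever kubectl reports):
[root@k8s-master ~]# kubectl get pod nfs-pod-nginx -o wide    # note the pod IP
[root@k8s-master ~]# curl <pod-ip>                            # should print: hello!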

III. Reclaiming a PV

1. The Retain reclaim policy
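The PV above was created with the Recycle policy, while the transcript below shows it as Retain, so the reclaim policy has to be switched first. A minimal sketch of one way to change it in place:
[root@k8s-master ~]# kubectl patch pv mypv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'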

2. Delete the pod, PVC, and PV
[root@k8s-master ~]# kubectl delete pod nfs-pod-nginx
pod "nfs-pod-nginx" deleted
[root@k8s-master ~]# kubectl delete pvc mypvc1
persistentvolumeclaim "mypvc1" deleted
[root@k8s-master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM            STORAGECLASS   REASON   AGE
mypv1   1Gi        RWO            Retain           Released   default/mypvc1   nfs                     98m
# Although the data in mypv1 is preserved, the PV stays in the Released state and cannot be claimed by another PVC.
# To reuse the storage, delete and recreate mypv1; deleting only removes the PV object, the data in the backing storage is untouched.
[root@k8s-master ~]# kubectl delete pv mypv1
persistentvolume "mypv1" deleted
[root@k8s-master ~]# ls /nfsdata/index.html
/nfsdata/index.html
[root@k8s-master ~]# cat /nfsdata/index.html
hello!
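A sketch of putting the storage back into service by reusing the original manifest (the data in /nfsdata is left in place):
[root@k8s-master ~]# kubectl apply -f nfs-pv1.yaml    # mypv1 shows as Available again in kubectl get pv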

IV. PV & PVC for MySQL Persistent Storage

1. Create the PV and PVC
1) Create the PV: the mysql-pv.yaml file
[root@k8s-master mysqlpv]# vim mysql-pv.yaml
[root@k8s-master mysqlpv]# cat mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv    # remember to create this directory on the NFS server
    server: 192.168.22.139
[root@k8s-master mysqlpv]# kubectl apply -f mysql-pv.yaml
persistentvolume/mysql-pv created
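As the comment in the manifest says, the backing directory has to exist on the NFS server before the volume can be mounted:
[root@k8s-master mysqlpv]# mkdir -p /nfsdata/mysql-pv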
2) Create the PVC: the mysql-pvc.yaml file
[root@k8s-master mysqlpv]# vim mysql-pvc.yaml
[root@k8s-master mysqlpv]# cat mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@k8s-master mysqlpv]# kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pvc created
2. Deploy MySQL

Write the mysql-pod.yaml file:

[root@k8s-master mysqlpv]# vim mysql-pod.yaml
[root@k8s-master mysqlpv]# cat mysql-pod.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: daocloud.io/library/mysql:5.7.5-m15
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: qinxue@123
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
[root@k8s-master mysqlpv]# kubectl apply -f mysql-pod.yaml
service/mysql created
deployment.apps/mysql created
3. Add data to MySQL
1) Check which node MySQL is running on (here, the new pod landed on k8s-node2)
[root@k8s-master mysqlpv]# kubectl get pod -o wide | grep mysql
mysql                   1/1   Running   6 (3h27m ago)   13d   10.244.1.45   k8s-node1   <none>   <none>
mysql-55c4f546d-4nkt9   1/1   Running   0               43s   10.244.2.52   k8s-node2   <none>   <none>
2) Enter the container and log in to mysql
[root@k8s-master mysqlpv]# kubectl exec -it mysql-bd87b4f8f-l6tdx /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@mysql-bd87b4f8f-l6tdx:/# mysql -uroot -p'qinxue@123'
3) Add data
mysql> create database db1;
Query OK, 1 row affected (0.00 sec)
4. Verify data consistency

1) Delete the Deployment, PVC, and PV, then recreate the PV, PVC, and Deployment: the data written to MySQL is still there, and the volume mounts successfully.
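A sketch of how that check might look once the objects are recreated (the pod name placeholder is whatever the new Deployment produces):
[root@k8s-master mysqlpv]# kubectl exec -it <mysql-pod> -- mysql -uroot -p'qinxue@123' -e 'show databases;'
# db1 should still appear in the list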

V. Hands-On: Dynamic PV/PVC Provisioning

The core of the Dynamic Provisioning mechanism is the StorageClass API object.
A StorageClass declares a storage plugin (a provisioner), which is used to create PVs automatically.
The storage plugins Kubernetes supports for dynamic provisioning are listed at:
https://kubernetes.io/docs/concepts/storage/storage-classes/
Because NFS has no built-in dynamic provisioner, we need to borrow an external storage plugin.
The nfs-client deployment files are at:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy
(That repository has since been archived; its successor lives in the kubernetes-sigs/nfs-subdir-external-provisioner project.)

1. Define a StorageClass
[root@k8s-master pv-pvc]# vim storageclass-nfs.yaml
[root@k8s-master pv-pvc]# cat storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs    # must match the PROVISIONER_NAME env var of the provisioner deployment below
[root@k8s-master pv-pvc]# kubectl apply -f storageclass-nfs.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
[root@k8s-master pv-pvc]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  20s
2. Deploy the RBAC authorization

        1) The StorageClass creates PVs automatically through kube-apiserver, so the provisioner must be authorized.

        2) Create a ServiceAccount; create a ClusterRole granting the permissions it needs (get/list/watch/create/delete and so on for the relevant API resources); create a ClusterRoleBinding that binds the ServiceAccount to the ClusterRole. The ServiceAccount then carries those permissions, and when the provisioner pod runs under it, the pod is able to create PVs automatically. The full manifest follows.

[root@k8s-master pv-pvc]# vim rbac.yaml
[root@k8s-master pv-pvc]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master pv-pvc]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
[root@k8s-master pv-pvc]# kubectl get sa
NAME                     SECRETS   AGE
default                  0         14d
nfs-client-provisioner   0         14s
[root@k8s-master pv-pvc]# kubectl get cr | grep nfs
error: the server doesn't have a resource type "cr"
[root@k8s-master pv-pvc]# kubectl get clusterrole | grep nfs
nfs-client-provisioner-runner   2024-08-06T04:52:23Z
[root@k8s-master pv-pvc]# kubectl get clusterrolebinding | grep nfs
run-nfs-client-provisioner   ClusterRole/nfs-client-provisioner-runner   3m58s
3. Deploy the pod service that creates PVs automatically
        Automatic PV creation here is handled by nfs-client-provisioner.
[root@k8s-master pv-pvc]# vim deployment-nfs.yaml
[root@k8s-master pv-pvc]# cat deployment-nfs.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      nodeName: k8s-node2
      serviceAccount: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.22.139
        - name: NFS_PATH
          value: /opt/container_data
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.22.139
          path: /opt/container_data
[root@k8s-master pv-pvc]# kubectl apply -f deployment-nfs.yaml
deployment.apps/nfs-client-provisioner created
# nfs-client-provisioner runs as a pod in the cluster
[root@k8s-master pv-pvc]# kubectl get pod | grep nfs
nfs-client-provisioner-6c745f9d9-msrtp   1/1   Running   0   6s
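Note that the NFS path handed to the provisioner (/opt/container_data) must exist on the NFS server and be exported, or the provisioner pod will fail to mount it. A minimal sketch, assuming the same export options as /nfsdata:
[root@k8s-master pv-pvc]# mkdir -p /opt/container_data
[root@k8s-master pv-pvc]# echo '/opt/container_data *(rw,no_root_squash,no_all_squash,sync)' >> /etc/exports
[root@k8s-master pv-pvc]# exportfs -r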
4. Deploy a stateful service to test automatic PV creation

The deployment YAML follows the reference at: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/

Here we deploy an nginx service.

[root@k8s-master pv-pvc]# cat nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: daocloud.io/library/nginx:1.13.0-alpine
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master pv-pvc]# kubectl apply -f nginx.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-master pv-pvc]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS      AGE
configmap-pod                            1/1     Running   5 (52m ago)   12d
configmap-test-pod                       1/1     Running   5 (52m ago)   12d
mypod                                    1/1     Running   7 (52m ago)   13d
mysql                                    1/1     Running   7 (52m ago)   13d
nfs-client-provisioner-6c745f9d9-msrtp   1/1     Running   0             19m
tomcat                                   1/1     Running   7 (52m ago)   14d
web-0                                    1/1     Running   0             42s
web-1                                    1/1     Running   0             16s
# web-1 is created only after web-0 has started successfully
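To confirm that the claims and volumes were created automatically, list them; each StatefulSet pod gets a PVC named after its volumeClaimTemplate and pod (www-web-0, www-web-1), each bound to a dynamically provisioned PV:
[root@k8s-master pv-pvc]# kubectl get pvc
[root@k8s-master pv-pvc]# kubectl get pv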

2) To verify, exec into a container and create a file under /usr/share/nginx/html; after a pod is deleted and recreated, the data is still there and is not lost. A sketch of this check follows.
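A minimal verification sketch (the file content is illustrative):
[root@k8s-master pv-pvc]# kubectl exec web-0 -- sh -c 'echo web-0 > /usr/share/nginx/html/index.html'
[root@k8s-master pv-pvc]# kubectl delete pod web-0
# wait for the StatefulSet controller to recreate web-0, then:
[root@k8s-master pv-pvc]# kubectl exec web-0 -- cat /usr/share/nginx/html/index.html    # still prints: web-0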
