10 k8s Container Persistent Storage


Volume & PersistentVolume

Official documentation
https://kubernetes.io/docs/concepts/storage/volumes/
  1. A Volume in Kubernetes provides the ability to mount external storage into containers.
  2. A Pod must define both the volume source (spec.volumes) and the mount point (spec.containers.volumeMounts) before it can use the corresponding Volume.
  3. Local data volumes
    • hostPath
    • emptyDir

Volume Concepts

  1. Files in a container are stored on disk only temporarily, which causes problems for certain applications running inside it.
  2. First, when a container crashes, the kubelet restarts it and the files in the container are lost, because the container is rebuilt in a clean state.
  3. Second, when multiple containers run in the same Pod, they often need to share files with one another.
  4. Kubernetes introduces the Volume abstraction to solve both problems.

Volume Types Supported

  1. Local: emptyDir, hostPath
  2. Network: self-hosted storage such as NFS or Ceph

emptyDir

  1. Creates an empty volume and mounts it into the containers of a Pod.
  2. When the Pod is deleted, the volume is deleted with it; it lives only as long as the Pod.
  3. Use case: data sharing between containers in a Pod. A Pod can run multiple containers that need to exchange data; without a volume their filesystems are isolated and each container only sees its own files.
  4. Using a data volume lets the containers share a common directory.
[root@k8s-master demo2]# vim emptydir.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: write
    image: centos:7
    command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"]
    volumeMounts:
    - name: data
      mountPath: /data

  - name: read
    image: centos:7
    command: ["bash","-c","tail -f /data/hello"]
    volumeMounts:
    - name: data
      mountPath: /data

  volumes:
  - name: data
    emptyDir: {}
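
As an aside, a hedged variant not used in this lesson: emptyDir can also be backed by memory (tmpfs) and capped with a size limit, which suits scratch data that should never touch the node's disk.

# illustrative snippet only (assumed, not part of the demo above)
  volumes:
  - name: data
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi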
[root@k8s-master demo2]# kubectl apply -f emptydir.yaml 
pod/my-pod created

[root@k8s-master demo2]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-pod 2/2 Running 0 4s

# two containers
# write does the writing
# read reads the data from the shared volume
# -f streams the output in real time
[root@k8s-master demo2]# kubectl logs my-pod -c read -f

hostPath

  1. Mounts a file or directory from the node's filesystem into the containers of a Pod.
  2. Use case: a container in the Pod needs to access files on the host machine.
  3. hostPath is similar to a Docker bind mount.
  4. emptyDir is similar to a Docker volume.
[root@k8s-master demo2]# vim hostPath.yaml

apiVersion: v1
kind: Pod
metadata:
  name: my-pod2
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 36000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /tmp
      type: Directory
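
A hedged variant of the volume definition (not used in this demo): with type: DirectoryOrCreate the kubelet creates the host directory if it is missing, so the Pod does not fail on nodes that lack the path.

# illustrative snippet only; /tmp/demo is a hypothetical path
  volumes:
  - name: data
    hostPath:
      path: /tmp/demo
      type: DirectoryOrCreate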
[root@k8s-master demo2]# kubectl apply -f hostPath.yaml 
pod/my-pod2 created
[root@k8s-master demo2]# kubectl get pod -o wide
my-pod2 1/1 Running 10.244.0.33 k8s-node1
[root@k8s-master demo2]# kubectl exec -it my-pod2 sh
/ # cd /data/
/data # ls
Aegis-<Guid(5A2C30A2-A87D-490A-9281-6765EDAD7CBA)> systemd-private-4528dd471fc14018952a13c04541edd8-chronyd.service-GCklT8


# on the node (k8s-node1), create a file under /tmp so it shows up inside the container
[root@k8s-node1 tmp]# touch pod2
/data # ls -l
total 4
srwxr-xr-x 1 root root 0 Nov 9 05:47 Aegis-<Guid(5A2C30A2-A87D-490A-9281-6765EDAD7CBA)>
-rw-r--r-- 1 root root 0 Nov 9 07:39 pod2
drwx------ 3 root root 4096 Nov 9 05:47 systemd-private-4528dd471fc14018952a13c04541edd8-chronyd.service-GCklT8

NFS (Network Storage)

  1. A local data volume is tied to a specific node. If that node fails, its Pods are rescheduled onto other nodes and the data can no longer be reached.
  2. With a network volume mounted, even a freshly recreated Pod can still access the data.

  3. Install NFS

# use master2 as the NFS server
[root@k8s-master2 ~]# yum install -y nfs-utils
# the clients, i.e. the nodes, must install it as well
  1. Configure the export path on the server and start the NFS server daemon
[root@k8s-master2 ~]# vim /etc/exports
/data/nfs *(rw,no_root_squash)

[root@k8s-master2 ~]# systemctl start nfs
[root@k8s-master2 ~]# ps -ef|grep nfs
# the directory must exist before it can be mounted
[root@k8s-master2 ~]# mkdir -p /data/nfs
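
A quick hedged sanity check before using the export from Kubernetes (the server address matches the one used in nfs.yaml below; adjust for your environment):

# run on any node that has nfs-utils installed
[root@k8s-node1 ~]# showmount -e 172.17.70.245
# optional manual mount test, then unmount
[root@k8s-node1 ~]# mount -t nfs 172.17.70.245:/data/nfs /mnt && umount /mnt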
  1. Configure and launch an application that mounts the NFS volume
# Kubernetes performs the mount for us; we just declare it in the manifest, and the nodes only need the NFS client installed
[root@k8s-master demo2]# vim nfs.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nfs-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 172.17.70.245
          path: /data/nfs
[root@k8s-master demo2]# kubectl apply -f nfs.yaml 
deployment.apps/nginx-nfs-deployment created
[root@k8s-master demo2]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-nfs-deployment-5b8f7c4d57-9tbgq 1/1 Running 0 3s
nginx-nfs-deployment-5b8f7c4d57-mgvzh 1/1 Running 0 3s
nginx-nfs-deployment-5b8f7c4d57-wqwtg 1/1 Running 0 3s
[root@k8s-master demo2]# kubectl exec -it nginx-nfs-deployment-5b8f7c4d57-9tbgq bash
root@nginx-nfs-deployment-5b8f7c4d57-9tbgq:/# cd /usr/share/nginx/html/

# on the NFS server, add something to the export directory, then check again
[root@k8s-master2 nfs]# echo "hello nginx" >> index.html
# the container in the Pod now has the same file, so the mount succeeded
root@nginx-nfs-deployment-5b8f7c4d57-9tbgq:/usr/share/nginx/html# ls
index.html


# create a Service (svc) for this group of Pods

[root@k8s-master1 demo]# vim nfs-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nfs-nginx
  name: nfs-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31000
  selector:
    app: nginx

[root@k8s-master1 demo]# kubectl apply -f nfs-svc.yaml
service/nfs-nginx created


[root@k8s-master demo2]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 172.17.70.246:6443 2d7h
my-service 10.244.0.35:80,10.244.1.35:80,10.244.1.36:80 24h

# access it from a web browser
http://123.56.14.192:31000/

# even if the Pods are destroyed, the data is not lost
# destroy
[root@k8s-master demo2]# kubectl delete -f nfs.yaml
deployment.apps "nginx-nfs-deployment" deleted
# recreate
[root@k8s-master demo2]# kubectl apply -f nfs.yaml
deployment.apps/nginx-nfs-deployment created

[root@k8s-master demo2]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-nfs-deployment-5b8f7c4d57-2pg97 1/1 Running 0 28s
nginx-nfs-deployment-5b8f7c4d57-84c6d 1/1 Running 0 28s
nginx-nfs-deployment-5b8f7c4d57-9rh6k 1/1 Running 0 28s

[root@k8s-master demo2]# kubectl exec -it nginx-nfs-deployment-5b8f7c4d57-9rh6k bash
root@nginx-nfs-deployment-5b8f7c4d57-9rh6k:/# cd /usr/share/nginx/html/
root@nginx-nfs-deployment-5b8f7c4d57-9rh6k:/usr/share/nginx/html# ls
index.html

PersistentVolume: Persistent Storage Volumes

  1. PersistentVolume (PV): an abstraction over creating and consuming storage resources, so that storage can be managed as a cluster resource (typically handled by dedicated storage administrators).
    • Static: PVs are created manually
    • Dynamic: PVs are created automatically
  2. PersistentVolumeClaim (PVC): lets users ignore the details of the underlying Volume implementation and only declare how much capacity they need.

  3. Purpose: storage resources are managed as part of the cluster; developers do not need to care how the storage is created, nor where it is exposed.

  4. A Pod that uses a persistent volume can access its data from anywhere; even if the Pod is destroyed and recreated, the data remains usable.

  5. A PV supplies storage capacity and a PVC consumes it. A PV and a PVC are bound to each other; once bound, the PV cannot be used by anyone else.

PV Static Provisioning

  1. The container (Pod) references a PVC
  2. Create the PVC that declares the requirements
  3. Create the PV

Create a PV

  1. PVs can be defined by storage administrators, who pre-create many PVs and wait for PVCs to bind to them
[root@k8s-master1 demo]# cat pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/nfs
    server: 172.31.228.53

[root@k8s-master1 demo]# vim pv-nfs.yaml

[root@k8s-master1 demo]# kubectl apply -f pv-nfs.yaml
persistentvolume/my-pv created

[root@k8s-master1 demo]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Available 4s

# Available: the PV is free and has not been claimed yet

Create a Deployment That Uses a PVC

[root@k8s-master1 demo]# vim nfs-pvc.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nfs-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          # PVC name
          claimName: my-pvc

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # must match claimName above
  name: my-pvc
spec:
  # access mode: RWX (ReadWriteMany) can be mounted on multiple nodes and used by different Pods at the same time
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      # requested storage size: 5Gi
      storage: 5Gi
[root@k8s-master1 demo]# kubectl apply -f nfs-pvc.yaml 

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-nfs-deployment-7ccdb4f76f-7m6cp 1/1 Running 0 12s
nginx-nfs-deployment-7ccdb4f76f-fmmqt 1/1 Running 0 12s
nginx-nfs-deployment-7ccdb4f76f-x2fbj 1/1 Running 0 12s

[root@k8s-master1 demo]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/my-pv 5Gi RWX Retain Bound default/my-pvc 2m55s

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/my-pvc Bound my-pv 5Gi RWX 39s

# my-pv status: Bound
# my-pvc status: Bound, VOLUME = my-pv
# the PV and the PVC are bound successfully
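
If a claim ever stays Pending instead of Bound, a hedged troubleshooting step is to inspect its events:

[root@k8s-master1 demo]# kubectl describe pvc my-pvc
# the Events section explains why binding failed, e.g. no PV with a matching capacity or access mode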
[root@k8s-master1 demo]# kubectl exec -it nginx-nfs-deployment-7ccdb4f76f-7m6cp bash
root@nginx-nfs-deployment-7ccdb4f76f-7m6cp:/# cd /usr/share/nginx/html/
root@nginx-nfs-deployment-7ccdb4f76f-7m6cp:/usr/share/nginx/html# ls
index.html

Delete the PVC and PV

  1. By default, once a PVC is deleted, the PV that was bound to it cannot be reused; the PV has to be cleaned up manually, although the data is still there.
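
A hedged aside, not covered in the original lesson: this behaviour follows from the PV's persistentVolumeReclaimPolicy, and a Released PV can also be reset instead of deleted by clearing its stale claim reference.

# illustrative commands (assumed; my-pv is the PV from this demo)
# keep the backing data when the claim goes away
kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# drop the stale claimRef so the Released PV becomes Available again
kubectl patch pv my-pv --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'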
[root@k8s-master1 demo]# kubectl delete -f nfs-pvc.yaml 
deployment.apps "nginx-nfs-deployment" deleted
persistentvolumeclaim "my-pvc" deleted

[root@k8s-master1 demo]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/my-pv 5Gi RWX Retain Released default/my-pvc 9m7s

# delete the PV manually
[root@k8s-master1 demo]# kubectl delete -f pv-nfs.yaml
# redeploy the application and check the data
[root@k8s-master1 demo]# kubectl apply -f pv-nfs.yaml
[root@k8s-master1 demo]# kubectl apply -f nfs-pvc.yaml


[root@k8s-master1 demo]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/my-pv 5Gi RWX Retain Bound default/my-pvc 11s

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/my-pvc Bound my-pv 5Gi RWX 3s

[root@k8s-master1 demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-nfs-deployment-7ccdb4f76f-9p94w 1/1 Running 0 49s
nginx-nfs-deployment-7ccdb4f76f-jtfsm 1/1 Running 0 49s
nginx-nfs-deployment-7ccdb4f76f-rllm5 1/1 Running 0 49s

[root@k8s-master1 demo]# kubectl exec -it nginx-nfs-deployment-7ccdb4f76f-rllm5 bash
root@nginx-nfs-deployment-7ccdb4f76f-rllm5:/# cd /usr/share/nginx/html/
root@nginx-nfs-deployment-7ccdb4f76f-rllm5:/usr/share/nginx/html# cat index.html
123
root@nginx-nfs-deployment-7ccdb4f76f-rllm5:/usr/share/nginx/html# echo 666 > index.html

[root@k8s-node2 wwwroot]# cd /data/nfs/wwwroot/
[root@k8s-node2 wwwroot]# cat index.html
666

Summary: PV Static Provisioning

  1. With static provisioning, the Pod has to request a PVC; the PVC can be defined in the same YAML as the Deployment.
  2. The storage side provides the volume definition by creating the PV.
  3. The PVC is matched to a PV according to the binding criteria, in particular the storage capacity and the access mode.
  4. In this mode the PVs have to be created by hand; the next question is how to provision and bind them automatically.

PV Dynamic Provisioning

  1. It mainly addresses the capacity problem: carving out PVs by hand is tedious, and if no PV matches the capacity requested by a PVC, the claim cannot bind.
  2. Dynamic provisioning in Kubernetes allocates capacity on demand.
  3. The core of the Dynamic Provisioning mechanism is the StorageClass API object.
  4. A StorageClass declares a storage plugin and is used to create PVs automatically.

Storage Plugins Supported by Kubernetes for Persistent Volumes

https://kubernetes.io/docs/concepts/storage/persistent-volumes/

StorageClass

  1. A StorageClass can drive the backend storage automatically and create PVs on the fly.
  2. A StorageClass declares which storage plugin to use; the plugin talks to the storage backend.
  3. Storage plugins that support dynamic provisioning in Kubernetes:
https://kubernetes.io/docs/concepts/storage/storage-classes/

# upload the storage-class manifests

[root@k8s-master1 storage-class]# pwd
/opt/storage-class

[root@k8s-master1 storage-class]# ls -l
total 20
-rw-r--r-- 1 root root 886 Dec 31 16:45 deployment-nfs.yaml
-rw-r--r-- 1 root root 842 Dec 31 16:45 nginx-demo.yaml
-rw-r--r-- 1 root root 703 Dec 31 16:45 pvc.yaml
-rw-r--r-- 1 root root 949 Dec 31 16:45 rbac.yaml
-rw-r--r-- 1 root root 120 Dec 31 16:45 storageclass-nfs.yaml

StorageClass Definition

[root@k8s-master1 storage-class]# cat storageclass-nfs.yaml 
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  # StorageClass name
  name: managed-nfs-storage
# provisioner identifier, handled by the external plugin
provisioner: fuseim.pri/ifs
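
A hedged addition (not part of the original file): the class can also be marked as the cluster default, so PVCs may omit storageClassName.

# illustrative snippet only
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"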

Provisioner: Automatically Creates PVs

# this service creates PVs for us automatically

[root@k8s-master1 storage-class]# vim deployment-nfs.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      # authorization (ServiceAccount defined in rbac.yaml)
      serviceAccount: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: lizhenliang/nfs-client-provisioner:v2.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        # NFS server address
        - name: NFS_SERVER
          value: 172.31.228.53
        - name: NFS_PATH
          value: /data/nfs
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.31.228.53
          path: /data/nfs
References:
https://github.com/kubernetes-incubator/external-storage
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

Authorization

  1. The plugin that dynamically creates PVs needs to connect to the apiserver, so it must be granted RBAC permissions.
[root@k8s-master1 storage-class]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Create the Dynamic PV Provisioning Setup

[root@k8s-master1 storage-class]# kubectl apply -f storageclass-nfs.yaml
[root@k8s-master1 storage-class]# kubectl apply -f rbac.yaml
[root@k8s-master1 storage-class]# kubectl apply -f deployment-nfs.yaml

[root@k8s-master1 storage-class]# kubectl get sc
NAME PROVISIONER AGE
managed-nfs-storage fuseim.pri/ifs 9m27s

# check the Pod that auto-creates PVs; when storage is requested, this Pod creates the PV automatically
[root@k8s-master1 storage-class]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-57998f486c-8nqsp 1/1 Running 0 86s

Automatic Provisioning Flow

  1. Deploying a stateful application gives each replica a unique network identity (hostname = DNS name) plus persistent storage.
  2. In a MySQL primary/replica setup each instance holds different data, so the storage must keep separate data per instance.
  3. A StatefulSet also manages the storage: both the network identity and the storage are indexed 0, 1, 2, ...
  4. When an application is deployed, its storage request goes to the StorageClass -> the nfs-client-provisioner Pod -> which asks NFS to create the PV.
  5. The application only needs to specify which StorageClass to use, as sketched below.
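
A minimal hedged sketch of what "specify the StorageClass" looks like for an ordinary PVC (illustrative manifest; it is not the pvc.yaml listed in the upload above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # hypothetical name
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi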

Dynamic PV Provisioning Example: StatefulSet + MySQL

[root@k8s-master1 storage-class]# cat mysql-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql-public

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: "mysql"
  selector:
    matchLabels:
      app: mysql-public
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql-public
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        - name: MYSQL_DATABASE
          value: test
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-data
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes: ["ReadWriteMany"]
      # name of the StorageClass used for dynamic PV provisioning
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 2Gi
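
Each replica gets its own PVC named <claimTemplate>-<statefulSetName>-<ordinal>, e.g. mysql-data-db-0. A hedged check after applying the manifest:

[root@k8s-master1 storage-class]# kubectl apply -f mysql-demo.yaml
[root@k8s-master1 storage-class]# kubectl get pvc mysql-data-db-0 mysql-data-db-1 mysql-data-db-2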
[root@k8s-master1 storage-class]# kubectl get pod,pv,pvc
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 32m
pod/db-1 1/1 Running 0 105s
pod/db-2 1/1 Running 0 28s
pod/nfs-client-provisioner-57998f486c-8nqsp 1/1 Running 0 91m

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845 2Gi RWX Delete Bound default/mysql-data-db-0 managed-nfs-storage 32m
persistentvolume/default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07 2Gi RWX Delete Bound default/mysql-data-db-1 managed-nfs-storage 105s
persistentvolume/default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e 2Gi RWX Delete Bound default/mysql-data-db-2 managed-nfs-storage 28s

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-data-db-0 Bound default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845 2Gi RWX managed-nfs-storage 32m
persistentvolumeclaim/mysql-data-db-1 Bound default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07 2Gi RWX managed-nfs-storage 105s
persistentvolumeclaim/mysql-data-db-2 Bound default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e 2Gi RWX managed-nfs-storage 29s


[root@k8s-node2 nfs]# ls -l
total 16
drwxrwxrwx 6 polkitd root 4096 Dec 31 18:07 default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845
drwxrwxrwx 6 polkitd root 4096 Dec 31 18:37 default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07
drwxrwxrwx 6 polkitd root 4096 Dec 31 18:37 default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e
drwxr-xr-x 2 root root 4096 Dec 31 16:04 wwwroot

Create a MySQL Client Pod to Test the Database Connection

# MySQL is accessed via DNS
# DNS name = <pod name>.<service name>.<namespace>

[root@k8s-master1 storage-class]# kubectl run -it --image mysql:5.7 mysql-client --restart=Never --rm /bin/bash
root@mysql-client:/# mysql -h db-0.mysql.default -uroot -p

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
5 rows in set (0.00 sec)

mysql> create database leo;
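
As an aside, a hedged way to confirm the headless-service DNS records resolve as expected (the busybox image and tag are assumptions):

[root@k8s-master1 storage-class]# kubectl run -it --rm --image=busybox:1.28 dns-test -- nslookup db-0.mysql.default.svc.cluster.local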

Test: Delete a Pod and Verify the Storage Is Re-mounted Automatically

[root@k8s-master1 storage-class]# kubectl delete pod db-0 

# after deletion, the Pod is automatically recreated
[root@k8s-master1 storage-class]# kubectl get pod,pv,pvc
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 5s
pod/db-1 1/1 Running 0 12m
pod/db-2 1/1 Running 0 11m
pod/nfs-client-provisioner-57998f486c-8nqsp 1/1 Running 0 101m

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845 2Gi RWX Delete Bound default/mysql-data-db-0 managed-nfs-storage 42m
persistentvolume/default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07 2Gi RWX Delete Bound default/mysql-data-db-1 managed-nfs-storage 12m
persistentvolume/default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e 2Gi RWX Delete Bound default/mysql-data-db-2 managed-nfs-storage 11m

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-data-db-0 Bound default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845 2Gi RWX managed-nfs-storage 42m
persistentvolumeclaim/mysql-data-db-1 Bound default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07 2Gi RWX managed-nfs-storage 12m
persistentvolumeclaim/mysql-data-db-2 Bound default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e 2Gi RWX managed-nfs-storage 11m

# check whether the data is still there
[root@k8s-master1 storage-class]# kubectl run -it --image mysql:5.7 mysql-client --restart=Never --rm /bin/bash
root@mysql-client:/# mysql -h db-0.mysql.default -uroot -p

# the database created earlier still exists
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| leo |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
6 rows in set (0.01 sec)

Delete the PVs

1. First delete the Pods and the PVCs associated with them
[root@k8s-master1 storage-class]# kubectl delete -f .

2. If PVs and PVCs still remain:

[root@k8s-master1 storage-class]# kubectl get pod,pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845 2Gi RWX Delete Terminating default/mysql-data-db-0 managed-nfs-storage 50m
persistentvolume/default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07 2Gi RWX Delete Bound default/mysql-data-db-1 managed-nfs-storage 20m
persistentvolume/default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e 2Gi RWX Delete Bound default/mysql-data-db-2 managed-nfs-storage 19m

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-data-db-0 Bound default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845 2Gi RWX managed-nfs-storage 50m
persistentvolumeclaim/mysql-data-db-1 Bound default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07 2Gi RWX managed-nfs-storage 20m
persistentvolumeclaim/mysql-data-db-2 Bound default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e 2Gi RWX managed-nfs-storage 19m

# delete them manually
[root@k8s-master1 storage-class]# kubectl delete persistentvolumeclaim/mysql-data-db-0
[root@k8s-master1 storage-class]# kubectl delete persistentvolumeclaim/mysql-data-db-1
[root@k8s-master1 storage-class]# kubectl delete persistentvolumeclaim/mysql-data-db-2

[root@k8s-master1 storage-class]# kubectl delete persistentvolume/default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845
[root@k8s-master1 storage-class]# kubectl delete persistentvolume/default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07
[root@k8s-master1 storage-class]# kubectl delete persistentvolume/default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e

# the data is still on the storage backend
[root@k8s-node2 nfs]# ls -l

drwxrwxrwx 7 polkitd root 4096 Dec 31 18:50 default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845
drwxrwxrwx 6 polkitd root 4096 Dec 31 18:50 default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07
drwxrwxrwx 6 polkitd root 4096 Dec 31 18:50 default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e
drwxr-xr-x 2 root root 4096 Dec 31 16:04 wwwroot

Recreate the PVs, PVCs, and the Stateful MySQL Deployment

# if the PVs and PVCs were deleted, recreating the Pods allocates new storage
[root@k8s-master1 storage-class]# kubectl apply -f storageclass-nfs.yaml
[root@k8s-master1 storage-class]# kubectl apply -f rbac.yaml
[root@k8s-master1 storage-class]# kubectl apply -f deployment-nfs.yaml
[root@k8s-master1 storage-class]# kubectl apply -f mysql-demo.yaml

[root@k8s-master1 storage-class]# kubectl get pods,pvc,pv
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 2m35s
pod/db-1 1/1 Running 0 103s
pod/db-2 1/1 Running 0 101s
pod/nfs-client-provisioner-57998f486c-m5x25 1/1 Running 0 116s

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-data-db-0 Bound default-mysql-data-db-0-pvc-55657d89-f615-420c-a481-a1d4422c86c5 2Gi RWX managed-nfs-storage 4m7s
persistentvolumeclaim/mysql-data-db-1 Bound default-mysql-data-db-1-pvc-e94b491f-380a-48a2-b152-a7c76af2c603 2Gi RWX managed-nfs-storage 103s
persistentvolumeclaim/mysql-data-db-2 Bound default-mysql-data-db-2-pvc-a8c0a2d8-af2d-422f-910b-01f2f2a11563 2Gi RWX managed-nfs-storage 101s

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-mysql-data-db-0-pvc-55657d89-f615-420c-a481-a1d4422c86c5 2Gi RWX Delete Bound default/mysql-data-db-0 managed-nfs-storage 105s
persistentvolume/default-mysql-data-db-1-pvc-e94b491f-380a-48a2-b152-a7c76af2c603 2Gi RWX Delete Bound default/mysql-data-db-1 managed-nfs-storage 103s
persistentvolume/default-mysql-data-db-2-pvc-a8c0a2d8-af2d-422f-910b-01f2f2a11563 2Gi RWX Delete Bound default/mysql-data-db-2 managed-nfs-storage 100s

# new storage directories were created
[root@k8s-node2 nfs]# ls -l
total 28
drwxrwxrwx 6 polkitd root 4096 Dec 31 19:02 default-mysql-data-db-0-pvc-55657d89-f615-420c-a481-a1d4422c86c5
drwxrwxrwx 7 polkitd root 4096 Dec 31 18:50 default-mysql-data-db-0-pvc-ebc9a67d-1558-4cf6-b35d-004ae6393845
drwxrwxrwx 6 polkitd root 4096 Dec 31 18:50 default-mysql-data-db-1-pvc-da2b75e9-7d93-40c6-b47e-9b3df2ebab07
drwxrwxrwx 5 polkitd root 4096 Dec 31 19:02 default-mysql-data-db-1-pvc-e94b491f-380a-48a2-b152-a7c76af2c603
drwxrwxrwx 6 polkitd root 4096 Dec 31 18:50 default-mysql-data-db-2-pvc-298c380a-8d61-4372-9503-8d886723089e
drwxrwxrwx 6 polkitd root 4096 Dec 31 19:02 default-mysql-data-db-2-pvc-a8c0a2d8-af2d-422f-910b-01f2f2a11563


[root@k8s-master1 storage-class]# kubectl run -it --image mysql:5.7 mysql-client --restart=Never --rm /bin/bash
root@mysql-client:/# mysql -h db-0.mysql.default -uroot -p

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
5 rows in set (0.00 sec)