k8s创建NFSStorageClass.md

Using NFS as an example, this guide walks through building a complete storage solution for Nginx and MongoDB.

Environment Preparation

Assume the following NFS server details:

  • NFS server IP: 192.168.1.100
  • Export path: /data/nfs
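
For reference, an export like this is typically declared in `/etc/exports` on the NFS server (a sketch, assuming a Linux NFS server and a 192.168.1.0/24 cluster subnet; tighten the options to your security requirements):

```
# /etc/exports — share /data/nfs with the cluster nodes
/data/nfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, reload the exports with `exportfs -ra`. Every worker node also needs an NFS client installed (`nfs-common` on Debian/Ubuntu, `nfs-utils` on RHEL-family distributions), or the volume mounts will fail.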

1. Create the NFS StorageClass

First, create a general-purpose NFS StorageClass:

```yaml
# nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-storage  # must match the provisioner's PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Apply the configuration:

```shell
kubectl apply -f nfs-storageclass.yaml
```

2. Deploy the NFS Provisioner

Next, deploy the NFS client provisioner (the image used below is the legacy external-storage provisioner; it still works, but newer setups typically use its successor, nfs-subdir-external-provisioner):

```yaml
# nfs-provisioner-china-fixed.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage  # must match the StorageClass provisioner field
            - name: NFS_SERVER
              value: 192.168.1.100  # replace with your NFS server IP
            - name: NFS_PATH
              value: /data/nfs  # replace with your NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.100  # replace with your NFS server IP
            path: /data/nfs  # replace with your NFS export path
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-storage  # must match PROVISIONER_NAME in the Deployment above
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Apply the configuration:

```shell
kubectl apply -f nfs-provisioner-china-fixed.yaml
```

3. Create the PVC and Deployment for Nginx

Nginx stores its HTML files on the share and uses the ReadWriteMany access mode:

```yaml
# nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-html-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany  # multiple Pods can read and write simultaneously
  resources:
    requests:
      storage: 5Gi
```
```yaml
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: html-storage
              mountPath: /usr/share/nginx/html
          # Initialize the HTML content, then start nginx in the foreground
          command: ['/bin/sh', '-c']
          args:
            - |
              echo "<h1>Welcome to Nginx on Kubernetes!</h1>" > /usr/share/nginx/html/index.html &&
              echo "<p>This content is stored on NFS persistent storage.</p>" >> /usr/share/nginx/html/index.html &&
              nginx -g 'daemon off;'
      volumes:
        - name: html-storage
          persistentVolumeClaim:
            claimName: nginx-html-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```
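
The Service above assumes the cluster can provision an external load balancer; on bare metal without one, it will stay in `Pending`. A NodePort variant is a common fallback (a sketch; the `nodePort` value is illustrative and must fall within the cluster's NodePort range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080  # illustrative; any free port in the NodePort range works
```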

4. Create the PVC and Deployment for MongoDB

MongoDB stores its database files on the share and uses the ReadWriteOnce access mode:

```yaml
# mongodb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteOnce  # read-write for a single node
  resources:
    requests:
      storage: 10Gi
```
```yaml
# mongodb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1  # databases typically run as a single instance or a StatefulSet
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "admin"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "password123"
          volumeMounts:
            - name: mongodb-storage
              mountPath: /data/db
            - name: mongodb-config
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mongodb-storage
          persistentVolumeClaim:
            claimName: mongodb-data-pvc
        - name: mongodb-config
          configMap:
            name: mongodb-init-script
---
# ConfigMap with the initialization script
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-init-script
data:
  init.js: |
    db = db.getSiblingDB('mydatabase');
    db.createCollection('users');
    db.users.insertOne({ name: 'John Doe', email: 'john@example.com' });
    print('Database initialized successfully');
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
```
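
Putting the root password directly in the Deployment is fine for a demo, but in practice it belongs in a Secret (a sketch; the Secret name and key are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-credentials
type: Opaque
stringData:
  root-password: "password123"
```

The container would then reference it via `valueFrom: { secretKeyRef: { name: mongodb-credentials, key: root-password } }` instead of a literal `value`.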

5. Deploy All Resources

```shell
# Create the Nginx resources
kubectl apply -f nginx-pvc.yaml
kubectl apply -f nginx-deployment.yaml

# Create the MongoDB resources
kubectl apply -f mongodb-pvc.yaml
kubectl apply -f mongodb-deployment.yaml
```

6. Verify the Deployment

```shell
# Check the status of all resources
kubectl get pvc
kubectl get pv
kubectl get pods
kubectl get services

# Check the StorageClass
kubectl get storageclass

# Check the directory structure (on the NFS server)
ls -la /data/nfs/
```

Expected output for the PVCs:

```
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-data-pvc   Bound    pvc-12345678-1234-1234-1234-123456789012   10Gi       RWO            nfs-storage    5m
nginx-html-pvc     Bound    pvc-87654321-4321-4321-4321-210987654321   5Gi        RWX            nfs-storage    5m
```

7. Test the Applications

Test Nginx:

```shell
# Get the external IP of the Nginx Service
kubectl get service nginx-service

# Access Nginx
curl http://<EXTERNAL-IP>
```

Test MongoDB:

```shell
# Open a shell into the MongoDB Pod
kubectl exec -it $(kubectl get pod -l app=mongodb -o name) -- mongosh -u admin -p password123

# Verify the data inside MongoDB
> use mydatabase
> db.users.find()
```

Directory Structure

On the NFS server, the provisioner automatically creates a directory layout like this:

```
/data/nfs/
├── default-nginx-html-pvc-pvc-[hash]/
│   └── index.html
└── default-mongodb-data-pvc-pvc-[hash]/
    └── mongodb-data-files/
```
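
The directory names above follow the provisioner's `${namespace}-${pvcName}-${pvName}` pattern, which can be sketched as (the PV name below is illustrative):

```shell
# How nfs-client-provisioner derives the per-PV directory name on the share:
# ${namespace}-${pvcName}-${pvName}, where the PV name is pvc-<uid>.
namespace=default
pvc_name=nginx-html-pvc
pv_name=pvc-87654321-4321-4321-4321-210987654321   # illustrative UID
dir="${namespace}-${pvc_name}-${pv_name}"
echo "$dir"   # → default-nginx-html-pvc-pvc-87654321-4321-4321-4321-210987654321
```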

Key Takeaways

  1. StorageClass: defines how NFS storage is provisioned
  2. PVC: the application's claim for a given storage size and access mode
  3. PV: the actual storage resource, created automatically by the provisioner
  4. Access modes:
    • Nginx: ReadWriteMany (shared by multiple Pods)
    • MongoDB: ReadWriteOnce (exclusive to a single node)

This architecture provides:

  • Data persistence: Pod restarts do not lose data
  • Data isolation: each application gets its own directory on the share
  • Automation: PVs are created and bound automatically
  • Scalability: easy to add more applications
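
The `archiveOnDelete` StorageClass parameter decides what happens to those per-PV directories when a PVC is deleted: with `"false"` (as configured above) the provisioner removes the directory, while with `"true"` it keeps the data under an `archived-` prefix. The effect can be simulated locally (a sketch using a temp directory in place of the real share):

```shell
# Simulate archiveOnDelete: "true" — the PV directory is kept under an
# archived- prefix instead of being deleted when the PVC goes away.
share=$(mktemp -d)                       # stands in for /data/nfs
dir="default-nginx-html-pvc-pvc-1234"    # illustrative PV directory name
mkdir "$share/$dir"
echo "hello" > "$share/$dir/index.html"
mv "$share/$dir" "$share/archived-$dir"  # what the provisioner does on delete
ls "$share"                              # → archived-default-nginx-html-pvc-pvc-1234
```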