
How to mount a persistent volume on a Deployment


When you create a PVC without specifying a PV or a StorageClass in a GKE cluster, it falls back to the default option:

StorageClass: standard

Provisioner: kubernetes.io/gce-pd

Type: pd-standard

Please take a look at the official documentation: Cloud.google.com: Kubernetes engine persistent volumes
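
If you want to verify which StorageClass your cluster treats as the default, you can inspect it with kubectl (the commands below are standard kubectl calls; the exact output depends on your cluster):

$ kubectl get storageclass

$ kubectl describe storageclass standard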

There are many circumstances that could produce the error message you encountered.

As it's unknown how many replicas are in your Deployment, how many nodes you have, and how the Pods were scheduled onto those nodes, I tried to reproduce your issue and encountered the same error with the following steps (the GKE cluster was freshly created to rule out any other dependencies that might affect the behavior).

Steps:

Create a PVC

Create a Deployment with replicas > 1

Check the state of pods

Additional links

Create a PVC

Below is an example YAML definition of a PVC like yours:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
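
Assuming the above definition is saved in a file called pvc.yaml (the filename is just an example), you can apply it with:

$ kubectl apply -f pvc.yaml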

After applying the above definition, please check if it was created successfully. You can do it with the commands below:

$ kubectl get pvc volume-claim

$ kubectl get pv

$ kubectl describe pvc volume-claim

$ kubectl get pvc volume-claim -o yaml
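
For reference only: a successfully bound claim should show its STATUS as Bound, similar to the illustrative output below (the volume name, capacity and age will differ in your cluster):

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
volume-claim   Bound    pvc-7d756147-6434-11ea-a666-42010a9c0058   2Gi        RWO            standard       1m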

Create a Deployment with replicas > 1

Below is an example YAML definition of a Deployment with volumeMounts and replicas > 1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 10 # amount of pods must be > 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
        volumeMounts:
        - mountPath: /app/folder
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: volume-claim

Apply it and wait for a while.

Check the state of pods

You can check the state of the Pods with the command below:

$ kubectl get pods -o wide

Output of the above command:

NAME                      READY   STATUS              RESTARTS   AGE     IP           NODE
ubuntu-deployment-2q64z   0/1     ContainerCreating   0          4m27s   <none>       gke-node-1
ubuntu-deployment-4tjp2   1/1     Running             0          4m27s   10.56.1.14   gke-node-2
ubuntu-deployment-5tn8x   0/1     ContainerCreating   0          4m27s   <none>       gke-node-1
ubuntu-deployment-5tn9m   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3
ubuntu-deployment-6vkwf   0/1     ContainerCreating   0          4m27s   <none>       gke-node-1
ubuntu-deployment-9p45q   1/1     Running             0          4m27s   10.56.1.12   gke-node-2
ubuntu-deployment-lfh7g   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3
ubuntu-deployment-qxwmq   1/1     Running             0          4m27s   10.56.1.13   gke-node-2
ubuntu-deployment-r7k2k   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3
ubuntu-deployment-rnr72   0/1     ContainerCreating   0          4m27s   <none>       gke-node-3

Take a look at the above output:

3 Pods are in the Running state

7 Pods are in the ContainerCreating state

All of the Running Pods are located on the same node: gke-node-2

You can get more detailed information on why the Pods are stuck in the ContainerCreating state with:

$ kubectl describe pod NAME_OF_POD_WITH_CC_STATE

The Events part of the above command's output shows:

Events:
  Type     Reason              Age                From                     Message
  ----     ------              ----               ----                     -------
  Normal   Scheduled           14m                default-scheduler        Successfully assigned default/ubuntu-deployment-2q64z to gke-node-1
  Warning  FailedAttachVolume  14m                attachdetach-controller  Multi-Attach error for volume "pvc-7d756147-6434-11ea-a666-42010a9c0058" Volume is already used by pod(s) ubuntu-deployment-qxwmq, ubuntu-deployment-9p45q, ubuntu-deployment-4tjp2
  Warning  FailedMount         92s (x6 over 12m)  kubelet, gke-node-1      Unable to mount volumes for pod "ubuntu-deployment-2q64z_default(9dc28e95-6434-11ea-a666-42010a9c0058)": timeout expired waiting for volumes to attach or mount for pod "default"/"ubuntu-deployment-2q64z". list of unmounted volumes=[volume]. list of unattached volumes=[volume default-token-dnvnj]

The Pod cannot get past the ContainerCreating state because mounting the volume failed: the volume is already in use by Pods running on a different node. This is the expected behavior for the ReadWriteOnce access mode:

ReadWriteOnce: The Volume can be mounted as read-write by a single node.

Additional links

Please take a look at: .

There is a detailed answer on the topic of access modes: Stackoverflow.com: Why can you set multiple accessmodes on a persistent volume

As it's unknown what you are trying to achieve, please take a look at the comparison between Deployments and StatefulSets: . A minimal StatefulSet sketch is included below.
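
If what you actually need is a separate disk for every replica, a StatefulSet with volumeClaimTemplates is the usual approach. Below is a minimal sketch only; the names, the replica count, the storage size and the headless Service are assumptions for illustration, not taken from your setup:

apiVersion: v1
kind: Service
metadata:
  name: ubuntu              # headless Service required by the StatefulSet
spec:
  clusterIP: None
  selector:
    app: ubuntu
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu-statefulset
spec:
  serviceName: ubuntu
  replicas: 3
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
        volumeMounts:
        - mountPath: /app/folder
          name: volume
  volumeClaimTemplates:      # each replica gets its own PVC (volume-ubuntu-statefulset-0, -1, ...)
  - metadata:
      name: volume
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

With this setup each Pod mounts its own PersistentVolume, so the Multi-Attach error shown above does not occur.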

