Preface: I'm very new to K8s...
I'm doing a little POC in my home lab with K8s and an application called Minio. I've successfully deployed the application, but every time I reboot one of my worker nodes to patch it, the Minio pod goes into CrashLoopBackOff and this is the most descriptive error I can find:
Code:
Warning FailedCreatePodSandBox pod/minio-3 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "minio-3": Error response from daemon: Conflict. The container name "/k8s_POD_minio-3_default_a1ab13b9-4540-470d-b754-05ffce747e81_6" is already in use by container "b4ca1cf5b889058de4ea797f4e9dccd581918ead9e44047f7cd3640a1274d4d2". You have to remove (or rename) that container to be able to reuse that name.
One thing that strikes me as a potential issue is that I'm presenting Minio with a PV backed by NFS to store its data (Minio is an object storage server). When I describe the PV, the type is NFS, and it has a note that says "(an NFS mount that lasts the lifetime of a pod)." Could that be the issue? When I reboot a node, does the pod's lifetime end, so the pod has to be deleted and recreated? If so... that kinda sucks... doesn't sound very "persistent."
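For reference, the PV is set up roughly like this (the name, server IP, export path, and size below are placeholders, not my exact config):

Code:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv               # placeholder name
spec:
  capacity:
    storage: 50Gi              # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.50       # placeholder NFS server address
    path: /exports/minio       # placeholder export path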
If that's the case, could I maybe deal with this using a liveness check? Can a liveness check delete a pod if its Status is "CrashLoopBackOff?"
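Something like this is what I had in mind (just a sketch, I haven't tried it; the path and port are my guess at MinIO's health endpoint):

Code:
livenessProbe:
  httpGet:
    path: /minio/health/live   # MinIO's liveness endpoint, as I understand it
    port: 9000
  initialDelaySeconds: 30
  periodSeconds: 15
  failureThreshold: 3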