Hi,
Some information on my setup: I have a cluster of Intel NUCs with SSDs, and most of the SSD space is a cluster-wide GlusterFS filesystem where all VM and also LXC container images are stored. Because containers cannot be placed on GlusterFS by default, I found the idea here in the forum to create a "directory storage on the GlusterFS mountpoint" and use that for containers, so that is what I did.
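For context, the directory-on-GlusterFS setup looks roughly like this in /etc/pve/storage.cfg (storage names and the mountpoint path are just examples from my setup, yours will differ):

```ini
# GlusterFS storage for VM images (handled natively by PVE)
glusterfs: gluster-vm
    server nuc1
    volume gv0
    content images

# Directory storage pointed at the GlusterFS mountpoint, for containers
dir: gluster-ct
    path /mnt/pve/gluster-vm
    content rootdir
    shared 1
```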
I use the most current Proxmox PVE "community" edition (just updated yesterday).
Now to my problem:
VMs work just fine: they shut down on server shutdown/reboot and start correctly. No problem here!
But containers sometimes get problems on shutdown and mostly get problems on boot-up.
Shutdown Problem:
If I read the syslog correctly, the unmount is done "too early", so the directory is no longer available ... the container then ends up not being stopped correctly because of I/O errors. To be honest, I did not wait to see how long it takes to get killed, but a kill -9 of the lxc process (so the very hard way) solved it, though not in a nice way.
It cannot be the GlusterFS itself that is gone, because one of the VMs (101), also located on GlusterFS, stopped successfully ... only the LXC container has problems (it should be that loop0 thingy).
Log: see attachment
On boot-up it is sometimes the other way around: the system tries to start the LXC container, but the directory is not ready yet ... I still need to find a log for that. I was able to work around it by increasing the number of restarts in HA mode, so after it gives up it tries again.
It feels to me that I "just" need to add some systemd dependencies to the correct PVE/mount unit to make sure the mount is unmounted after the guests stop and mounted before they start (so a Wants=/Before= relation) ... which would be the correct PVE service to attach this to?
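In case it helps the discussion, here is a minimal sketch of what I imagine, assuming the guests are stopped by pve-guests.service and the mount unit name matches my mountpoint (the unit name mnt-pve-gluster\x2dvm.mount is hypothetical; the real name follows from the mountpoint via systemd-escape):

```ini
# /etc/systemd/system/mnt-pve-gluster\x2dvm.mount.d/order.conf
[Unit]
# Before= on a mount means: mount before pve-guests starts,
# and (since systemd reverses ordering on shutdown) unmount
# only after pve-guests has stopped all guests.
Before=pve-guests.service
```

Is something like this the right direction, or is there a PVE-blessed way to do it?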
Could anyone advise what to add where?
Thank you!
Ingo