As a test, I added a PBS storage to my 7-host PVE cluster ... the result was that on the PBS I had 2 processes at 130% CPU and a syslog flooded with authentication requests ... roughly 1 request per host every second ... It also feels like the hosts (especially the one with the worst hardware) were...
I think I have basically the same problem ... but with a Synology. It creates the directory, but then it seems it wants to execute "chown" to set the owner to backup.backup (which is unneeded because the permissions are drwxrwxrwx), and I think this fails ...
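A quick way to check whether the chown itself is the failing step would be to try it by hand on the mounted share (the mount path here is just a placeholder for wherever the Synology export ends up):

touch /mnt/pve/synology/chown-test
chown backup:backup /mnt/pve/synology/chown-test
# if the NFS export squashes ownership changes, this should fail with
# "Operation not permitted", which would match the behaviour above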
I did not find a proper way to set that up in...
Is anyone who is doing this also running HA guests? Do you also experience issues like https://forum.proxmox.com/threads/problems-on-shutdown-boots-with-nfs-glusterfs-filesystems-and-ha-containers-systemd-order-of-services.71962/#post-324432 ?
@Dominic The deeper I think about and look into it, the more sure I am that the topic in fact has nothing to do with GlusterFS itself but with when the storage mounts are executed. When you check my chart in...
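To see where that mount actually sits in the boot and shutdown ordering, the systemd tooling can help; the unit name below is the one from my log further down and may differ on other setups:

systemd-analyze critical-chain mnt-pve-glusterfs.mount
systemctl list-dependencies --after mnt-pve-glusterfs.mount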
Thank you. I see your points ... never hit them ;-) Yes, having the option to configure more than 2 Gluster servers in the config would be cool (I have a 3-replica system over 7 NUCs and a second one, 3-way replicated, over 3 NUCs; in fact a scenario like in the ticket is dangerous because it does not provide...
@Dominic this thread is kind of a follow-up to https://forum.proxmox.com/threads/pve-with-glusterfs-as-shored-storage-and-filesystem-gets-read-only-on-boot.58243/#post-277424
Do you have any idea?
if the "Mounts" are done by pve-guests service then having pve-ha-lrm starting (and ending)...
I dug deeper and I have a new idea ... The problem is that all HA guests on the system are not stopped by the "stop all guests" call ... so they continue to run, and then the filesystem gets killed (or GlusterFS gets stopped) ... When you look at the log above you see
Jun 25 14:28:11 pm7...
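To double-check that ordering on the previous boot, one can pull the relevant units out of the journal and compare the timestamps of the stop messages (this assumes persistent journalling is enabled and that glusterd runs on the same node):

journalctl -b -1 -u pve-guests -u pve-ha-lrm -u mnt-pve-glusterfs.mount -u glusterd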
Maybe it is also really something like https://bugzilla.redhat.com/show_bug.cgi?id=1701234, where blk-availability.service just runs "too early"
or https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=946882 ... but no real solutions there either, if I see it correctly :-(
Because of performance issues I had read about, I did not want to go the NFS route, and going back to 5.x is not an option either because it is EOL.
In general, when looking through my syslog, I find:
> Jun 25 14:28:15 pm7 systemd[1]: mnt-pve-glusterfs.mount: Succeeded.
> Jun 25 14:28:15 pm7 systemd[1]: Unmounted /mnt/pve/glusterfs...
Hi,
Some information on my setup: I have a cluster of Intel NUCs with SSDs, and most of the SSD space is a cluster-wide GlusterFS filesystem where all VM images and also LXC container images are stored. Because containers cannot be placed on GlusterFS by default, I found ideas here...
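For context, the workaround essentially exposes the mounted GlusterFS path a second time as a plain directory storage for container root disks. A rough sketch of what that can look like in /etc/pve/storage.cfg (storage names, server addresses and volume name are made-up placeholders, not my actual config; is_mountpoint is there so the directory storage is only used while the Gluster mount is actually present):

glusterfs: gluster-vm
        server 192.168.2.11
        server2 192.168.2.12
        volume gv0
        content images

dir: gluster-ct
        path /mnt/pve/gluster-vm
        content rootdir
        shared 1
        is_mountpoint 1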
I also had the feeling that it did not work in the first place ... but wasn't sure how to verify that correctly. But with this info I will also place it in both :-)
I have a one-liner with eno1 in my file and added it to that ... no idea if that is correct :-(
Maybe someone can advise; this is what I have now:
iface eno1 inet manual
        post-up /sbin/ethtool -K eno1 tso off gso off
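If it helps, what I think the full stanza in /etc/network/interfaces would look like on a typical PVE box is roughly the following, but the addresses and bridge name here are just made-up placeholders and I am not sure this is right:

auto lo
iface lo inet loopback

iface eno1 inet manual
        post-up /sbin/ethtool -K eno1 tso off gso off

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.10/24
        gateway 192.168.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0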
In fact I use a Meross 4-socket plug thingy (https://www.meross.com/product/19/article/) so I can control that via its API ... I "just" need a trigger plus the info that a node got fenced :-) Yes, external monitoring or something like that ... Does the Proxmox API return a "fenced" status for a node?
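What I was planning to poke at (untested, and I am not sure about the exact field names in the output) is the HA status, dumped as JSON to see whether a fenced node shows up there:

ha-manager status
pvesh get /cluster/ha/status/current --output-format json-pretty

If one of those reports a per-node fence state, a small cron script could watch it and call the Meross API.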
So the "Only" risk is that if kernel hangs that soft-watchdog do not triggers a reboot and the mchine hangs and needs manual power off ... but fencing is done by the other hosts ... so ok ...
One aside question that you might know the answer to: when a node gets fenced, an email is sent out ... is there...