Instead of necroing a post from January of this year, I thought I'd start a new thread.
In my configuration there are two hosts in a Proxmox cluster, with two JBODs connected to both hosts. Each JBOD holds a local ZFS pool which I have configured in storage.cfg. What I'm seeing is that each host imports and mounts the same ZFS pools locally, which causes immediate file system corruption.
I decided to isolate the JBODs and opted to physically move the cable should one of the servers go down. This seemingly worked fine until I exported the ZFS pool and unplugged it. What I didn't know was that the pool had been re-imported rather quickly. When I then unplugged the JBOD, I inadvertently took the devices offline under an active pool, which sent Linux's extremely poor IO subsystem into an infinite deadlock; the remaining server had to be rebooted to clear it.
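For anyone following along, here is the order of operations I now think is needed before moving a cable, so Proxmox can't silently re-import the pool behind my back. The storage ID "jbod1" and pool name "tank" are placeholders for my actual names:

```shell
# Placeholders: storage ID "jbod1", pool "tank" -- substitute your own.
# 1. Tell Proxmox to stop activating this storage so nothing re-imports it:
pvesm set jbod1 --disable 1
# 2. Cleanly export the pool:
zpool export tank
# 3. Confirm the pool is really gone BEFORE touching any cables
#    (this should fail with "no such pool" if the export held):
zpool list tank
```

Step 1 is the part I was missing: without disabling the storage entry, the export in step 2 gets undone almost immediately.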
I read a post explaining that the ZFS plugin, for what I assume is the pve-ha-lrm daemon, attempts to import ZFS pools with "zpool import -d /dev/disk/by-id/ -a" whenever not all of the resources configured in storage.cfg are imported and/or mounted.
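From what I've read, ZFS's multihost (MMP) feature exists for exactly this shared-disk situation: it makes a pool that is actively open on one host refuse to import on another, even if something runs a blanket "zpool import -a". A sketch, again assuming a pool named "tank":

```shell
# Each host needs a unique, persistent hostid for MMP to work;
# zgenhostid writes /etc/hostid if it does not already exist:
zgenhostid
# With multihost=on, ZFS writes heartbeat (MMP) blocks to the pool and
# refuses to import it while another host is actively using it:
zpool set multihost=on tank
# Verify:
zpool get multihost tank
```

This wouldn't fix the dual entries in storage.cfg, but it should turn "both hosts import the pool and corrupt it" into a refused import on the second host, which is at least recoverable.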
I'm hoping to get some advice. Please let me know if there's anything I can elaborate on.