Prevent zpools from auto-importing 2.0?

dosmage

Instead of necroing a post from January of this year, I thought I'd start a new thread.

In my configuration there are two hosts in a Proxmox cluster and two JBODs connected to each host. On each JBOD there is a local ZFS pool, which I have configured in storage.cfg. What I'm seeing is that each host imports and mounts the ZFS pools locally, which causes immediate file system corruption.

I decided to isolate the JBODs and opt to physically move the cable should one of the servers go down. This seemingly worked fine until I exported the ZFS pool and unplugged its JBOD. I didn't realize the pool had been reimported rather quickly, so pulling the JBOD inadvertently took its devices offline, which sent Linux's extremely poor IO subsystem into an infinite deadlock; the remaining server had to be rebooted to clear it.

I read a post explaining that the ZFS plugin, in what I assume is the pve-ha-lrm daemon, attempts to import ZFS pools using the command "zpool import -d /dev/disk/by-id/ -a" whenever not all of the resources configured in storage.cfg are imported and/or mounted.
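For illustration, this is the difference between that plugin-style import of everything visible and importing a single named pool (using one of my pool names):
Code:
# imports every exportable pool found under /dev/disk/by-id/, whether or not it is in storage.cfg
zpool import -d /dev/disk/by-id/ -a

# imports only the one named pool
zpool import -d /dev/disk/by-id/ zvm1data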

I'm hoping to get some advice. Please let me know if there is any information I could elaborate on.
 
Hi,

do you mean, when you write
two JBODs
that you have a RAID controller and set two disks to JBOD mode?
If yes, then this is your problem.
ZFS and a RAID controller don't work properly together.

pve-ha-lrm daemon
has nothing to do with ZFS. It is the High Availability local resource manager and manages HA resources.

If you want to disable ZFS auto-mounting, /etc/default/zfs is the file where you can do that.
 
Thank you for your reply!

that you have a RAID controller and set two disks to JBOD mode?
If yes, then this is your problem.
ZFS and a RAID controller don't work properly together.

No, the JBOD is just a JBOD; it exposes 24 disks to the system. The ZFS pool is a stripe of 11 mirrored vdevs.
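For illustration, a layout like that is built with something along these lines (the device names are placeholders, not my actual disks, and the mirror arguments repeat for all 11 pairs):
Code:
# stripe of two-way mirrors; the "mirror diskX diskY" group is repeated
# until all 11 pairs are listed
zpool create zvm1data \
  mirror /dev/disk/by-id/scsi-DISK01 /dev/disk/by-id/scsi-DISK02 \
  mirror /dev/disk/by-id/scsi-DISK03 /dev/disk/by-id/scsi-DISK04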


pve-ha-lrm daemon
has nothing to do with ZFS. It is the High Availability local resource manager and manages HA resources.

If you want to disable ZFS auto-mounting, /etc/default/zfs is the file where you can do that.

I have already adjusted /etc/default/zfs to no avail. I don't mean to disagree with you, but stopping the pve-ha-lrm service does stop the automatic importing of the ZFS pools. Additionally, switching from zfspool to directory-type storage for the ZFS filesystem also resolves the issue by bypassing ZFS support in Proxmox. However, doing so means we lose Proxmox's support for automatically creating ZFS file systems per virtual machine.
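For reference, this is how I confirm that behaviour; it's obviously not a real fix, since it takes the HA local resource manager out of the picture on that node:
Code:
# stop the HA local resource manager; the pools stop being re-imported
systemctl stop pve-ha-lrm
# start it again afterwards to restore normal HA operation
systemctl start pve-ha-lrm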

Here's my /etc/default/zfs
Code:
grep -v -e ^# -e ^$ /etc/default/zfs
ZFS_MOUNT='no'
ZFS_UNMOUNT='yes'
ZFS_SHARE='yes'
ZFS_UNSHARE='yes'
ZPOOL_IMPORT_ALL_VISIBLE='no'
ZFS_POOL_EXCEPTIONS="zvm1data zvm2data"
VERBOSE_MOUNT='no'
DO_OVERLAY_MOUNTS='no'
ZPOOL_IMPORT_OPTS=""
MOUNT_EXTRA_OPTIONS=""
ZFS_DKMS_ENABLE_DEBUG='no'
ZFS_DKMS_ENABLE_DEBUG_DMU_TX='no'
ZFS_DKMS_DISABLE_STRIP='no'
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='0'
ZFS_INITRD_POST_MODPROBE_SLEEP='0'

This is the "working" storage.cfg example
Code:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: zvm1data-dir
        path /zvm1data/

dir: zvm2data-dir
        path /zvm2data/

Using this storage.cfg results in automatic import of ZFS pools
Code:
# cat /etc/pve/storage.cfg.JB20161129
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: zvm1data
        pool zvm1data
        content rootdir,images
        sparse

zfspool: zvm2data
        pool zvm2data
        content rootdir,images
        sparse
 
Could you try again with libpve-storage-perl >= 4.0-69 once that hits the repository? There are some improvements to the activation of zpool storages in that version (one of them is importing only the configured pool, not all found pools).
 
Could you try again with libpve-storage-perl >= 4.0-69 once that hits the repository? There are some improvements to the activation of zpool storages in that version (one of them is importing only the configured pool, not all found pools).

Thank you, I most certainly can, but I believe the issue will still occur because each configured ZFS pool exists exclusively on one of the two hosts; each host will keep trying to import the other server's ZFS pool, which won't exist locally, e.g. zvm1data exists exclusively on zvm1 and zvm2data exists exclusively on zvm2. There are no ZFS pools that aren't configured in storage.cfg.
 
Thank you, I most certainly can, but I believe the issue will still occur because each configured ZFS pool exists exclusively on one of the two hosts; each host will keep trying to import the other server's ZFS pool, which won't exist locally, e.g. zvm1data exists exclusively on zvm1 and zvm2data exists exclusively on zvm2. There are no ZFS pools that aren't configured in storage.cfg.

Okay, I misunderstood that. In this case you need to tell PVE that the pool/storage exists only on one node (in the GUI you can select the nodes where a storage is available, or in the storage.cfg file you can set the "nodes" property). Otherwise it assumes that the given storage is available on every node (and tries to activate it).
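For example, based on the node names you mentioned (zvm1 and zvm2), the zfspool entries would look something like this:
Code:
zfspool: zvm1data
        pool zvm1data
        content rootdir,images
        sparse
        nodes zvm1

zfspool: zvm2data
        pool zvm2data
        content rootdir,images
        sparse
        nodes zvm2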
 
