Prevent zpools from auto-importing?

thenickdude
Member — Oct 13, 2016
Hey everyone,

I ran "zpool export" on one of my single-disk zpools, which succeeded, and then I hot-unplugged that drive, only to find that Proxmox had re-imported the pool while I was busy opening the drive enclosure, and was now very unhappy about the missing drive.

It looks like Proxmox is automatically importing pools every 10 seconds. If I keep running export, you can see the moment the pool reattaches itself and "export" succeeds again:

# zpool export crashplan-pair-1
# zpool export crashplan-pair-1
cannot open 'crashplan-pair-1': no such pool
# zpool export crashplan-pair-1
cannot open 'crashplan-pair-1': no such pool
# zpool export crashplan-pair-1
cannot open 'crashplan-pair-1': no such pool
# zpool export crashplan-pair-1
#
This zpool doesn't store VMs and is not managed by Proxmox's "storage" feature. How can I disable this automatic re-import behaviour?
 
Do you have some other ZFS pool configured as storage where the pool is not imported/available? The ZFS plugin currently does a "zpool import -d /dev/disk/by-id/ -a" when a configured pool is not yet imported, which could lead to the behaviour you describe if that other pool is not importable. I'll see what I can do about importing only the pool that should be activated there ;)
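To illustrate why the "-a" flag causes the behaviour described above, here is a sketch of the two import styles (the pool name 'tank' is a placeholder, not from the thread):

```shell
# Import *every* pool found on the scanned devices. This is what the
# storage plugin ran, so any pool you had just exported came straight
# back the next time a configured pool was found missing:
zpool import -d /dev/disk/by-id/ -a

# Import only one named pool, leaving other exported pools alone
# ('tank' is an illustrative name):
zpool import -d /dev/disk/by-id/ tank
```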
 
Configured as storage in Proxmox? Hm, there may have been one such pool, I'm not certain as I was juggling around a bunch of devices. If you mean a pool that might have been in the zfs cache file, yeah there almost certainly would have been pools there that were missing from the system.

After starting up my host this morning, I wasn't able to replicate the auto-mount behaviour.
 
I meant in the PVE storage configuration. Anyway, I posted a patch to pve-devel to improve our pool handling there, but it's not yet applied ;)
 
Sorry for replying to an old thread, but has there been any progress on this? Specifically, is there now a knob to disable the auto-importing of ZFS pools? I'm asking because the auto-importing conflicts with my working procedure.

My ZFS pool is built on top of several LUKS encrypted devices. After a system reboot, I have to enter passphrases to open the LUKS devices one by one. When I'm lucky and the passphrases are entered quickly enough, the pool is imported successfully. But more often than not, the auto-import triggers in the middle, with the result that some devices in the pool are imported while others are unavailable, leaving the pool in a degraded state.
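The unlock-then-import sequence described here can be sketched as follows (the device paths, mapper names, and pool name are illustrative placeholders, not taken from the thread):

```shell
# Unlock each LUKS member one by one; each call prompts for a passphrase.
# Device and mapper names below are assumptions for illustration.
cryptsetup open /dev/disk/by-id/ata-disk1-part1 luks-disk1
cryptsetup open /dev/disk/by-id/ata-disk2-part1 luks-disk2

# Only once *all* members are open can the pool be imported cleanly.
# If an auto-import fires before this point, it picks up a partial set
# of devices and the pool comes up degraded.
zpool import -d /dev/mapper tank
```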
 

The patches have already been applied, but I think your problem is different.

If I understand you correctly, the main issue is that storage activation happens in the middle of your "unlock disks one by one" procedure? You can just disable the storage before you start your procedure, and enable it again afterwards.
 

You are right. I just did a quick test and can confirm that ZFS auto-importing does not happen when the storage is disabled. I think this is a neat solution. Thank you for your help!
 
@hanru Sorry for replying to this old thread, but can you tell me what you mean by "disable the storage"? Do you mean the pool? It looks like I have the same configuration: my ZFS pool is also built on top of several LUKS encrypted devices, and I have the same problem as you.
 

You can enable/disable a storage from the web UI at Datacenter -> Storage. A storage can also be enabled/disabled from the command line (assuming the storage ID is 'zfs'):

Code:
# pvesm set zfs --disable 0|1

You could first disable the storage, then unlock the LUKS encrypted devices and import the ZFS pool, and then enable the storage again.
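Put together, the whole procedure might look like this (the storage ID 'zfs' matches the example above; device, mapper, and pool names are assumptions for illustration):

```shell
# 1. Stop Proxmox from auto-importing while we work
#    ('zfs' is the storage ID as configured in PVE)
pvesm set zfs --disable 1

# 2. Unlock all LUKS members, then import the pool by hand
#    (device paths, mapper names, and 'tank' are placeholders)
cryptsetup open /dev/disk/by-id/ata-disk1-part1 luks-disk1
cryptsetup open /dev/disk/by-id/ata-disk2-part1 luks-disk2
zpool import -d /dev/mapper tank

# 3. Re-enable the storage so Proxmox can activate it
pvesm set zfs --disable 0
```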

Hope this helps!
 
