[SOLVED] After Upgrade ZFS pool gone

nasenmann72

Active Member
Dec 9, 2008
Germany, Saarland
Hello,

yesterday I ran an apt-get update/upgrade, which installed a new version of zfsutils and a new pve kernel. After the reboot, the ZFS pool where my VMs are stored is gone. If I try to import it:

Code:
root@proxmc:/var/lib# zpool import tank 
cannot import 'tank': no such pool available
Nothing happens, as you can see.

Code:
proxmox-ve-2.6.32: 3.4-159 (running kernel: 2.6.32-40-pve)
pve-manager: 3.4-8 (running version: 3.4-8/5f8f4e78)
pve-kernel-2.6.32-40-pve: 2.6.32-159
pve-kernel-2.6.32-39-pve: 2.6.32-157
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1                                                                                                                                       
openais-pve: 1.1.4-3                                                                                                                                        
libqb0: 0.11.1-2                                                                                                                                            
redhat-cluster-pve: 3.2.0-2                                                                                                                                 
resource-agents-pve: 3.9.2-4                                                                                                                                
fence-agents-pve: 4.0.10-3                                                                                                                                  
pve-cluster: 3.0-18                                                                                                                                         
qemu-server: 3.4-6                                                                                                                                          
pve-firmware: 1.1-4                                                                                                                                         
libpve-common-perl: 3.0-24                                                                                                                                  
libpve-access-control: 3.0-16                                                                                                                               
libpve-storage-perl: 3.0-33                                                                                                                                 
pve-libspice-server1: 0.12.4-3                                                                                                                              
vncterm: 1.1-8                                                                                                                                              
vzctl: 4.0-1pve6                                                                                                                                            
vzprocps: 2.0.11-2                                                                                                                                          
vzquota: 3.1-2                                                                                                                                              
pve-qemu-kvm: 2.2-11                                                                                                                                        
ksm-control-daemon: 1.1-1                                                                                                                                   
glusterfs-client: 3.5.2-1 

Code:
dpkg -l | grep zfs
ii  libzfs2                          0.6.4-4~wheezy                amd64        Native ZFS filesystem library for Linux                                     
ii  zfs-doc                          0.6.3-3~wheezy                amd64        Native OpenZFS filesystem documentation and examples.                       
ii  zfs-initramfs                    0.6.4-4~wheezy                amd64        Native ZFS root filesystem capabilities for Linux                           
ii  zfsutils                         0.6.4-4~wheezy                amd64        command-line tools to manage ZFS filesystems
Can anybody give me a hint on how to get the pool back, or how to downgrade to the former zfsutils version? Thank you.
 

nasenmann72

OK, I've got a quick & dirty solution for the moment: I downgraded the ZFS packages to their previous versions.

Code:
apt-get install libzfs2=0.6.4-3~wheezy
apt-get install zfs-initramfs=0.6.4-3~wheezy
apt-get install zfsutils=0.6.4-3~wheezy
During the downgrade of the zfsutils package I got some errors:

Code:
dpkg: warning: downgrading zfsutils from 0.6.4-4~wheezy to 0.6.4-3~wheezy
(Reading database ... 34920 files and directories currently installed.)
Preparing to replace zfsutils 0.6.4-4~wheezy (using .../zfsutils_0.6.4-3~wheezy_amd64.deb) ...
Unpacking replacement zfsutils ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-40-pve
Processing triggers for man-db ...
Setting up zfsutils (0.6.4-3~wheezy) ...
Installing new version of config file /etc/default/zfs ...
Installing new version of config file /etc/init.d/zfs-share ...
Installing new version of config file /etc/init.d/zfs-mount ...
Installing new version of config file /etc/bash_completion.d/zfs ...
Installing new version of config file /etc/zfs/zed.d/zed.rc ...
insserv: There is a loop between service umountfs and zfs-zed if stopped
insserv:  loop involving service zfs-zed at depth 6
insserv:  loop involving service zfs-import at depth 5
insserv: There is a loop between service umountfs and zfs-zed if stopped
insserv:  loop involving service umountfs at depth 3
insserv:  loop involving service umountnfs at depth 2
insserv:  loop involving service zvol at depth 5
insserv:  loop involving service networking at depth 3
insserv:  loop involving service umountroot at depth 6
insserv: There is a loop between service zfs-zed and zfs-import if stopped
insserv: There is a loop between service zfs-import and zvol if stopped
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: error processing zfsutils (--configure):
 subprocess installed post-installation script returned error exit status 1
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-40-pve
Errors were encountered while processing:
 zfsutils
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@proxmc:~# reboot
But I closed my eyes and rebooted, and after that my pool "tank" was up and running again. PHEW!!!

I understand that this is not really a Proxmox issue, but nevertheless such things should not happen. What is the right way to handle such a ZFS upgrade?
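Until a fixed package arrives, one way to keep apt from pulling the broken version back in on the next upgrade is an apt pin. This is only a sketch of the idea (the file name is hypothetical; the package names and pinned version are the ones from the posts above):

```
# /etc/apt/preferences.d/zfs-hold  (hypothetical file name)
Package: zfsutils libzfs2 zfs-initramfs
Pin: version 0.6.4-3~wheezy
Pin-Priority: 1001
```

Remove the file again once a fixed zfsutils is available; apt-cache policy zfsutils shows whether the pin is in effect.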

Best regards
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi,
do you have a special ZFS configuration or partitioning?
 

nasenmann72

Hi Wolfgang,

yes, there are two ZFS pools on the machine: the Proxmox "rpool" on 2 x 2.5" HDDs and the pool "tank" (where the VMs are stored) on 2 x 3.5" HDDs. As I said, at the moment, after the downgrade of the ZFS packages, the pool "tank" is back. When I find a time window, I will try the following:

- shut down the VMs and Proxmox services
- export the ZFS pool "tank"
- run apt-get update/upgrade again
- reboot
- try to import "tank" and, if necessary, upgrade the zpools
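The steps above could be sketched as a small shell function. This is untested and only an outline: the pool name "tank" comes from this thread, the function name is made up here, and it assumes all VMs on the pool are already stopped.

```shell
#!/bin/sh
# Sketch of the maintenance-window plan above; pool name "tank" assumed.
upgrade_zfs_packages() {
    zpool export tank                       # detach the data pool first
    apt-get update && apt-get dist-upgrade  # pull the new zfsutils + kernel
    # ... reboot here; then, once the box is back up:
    zpool import -d /dev/disk/by-id tank    # re-import using stable ids
}
echo "review upgrade_zfs_packages before running it in a maintenance window"
```

Exporting first means the pool metadata is cleanly closed before the kernel module changes underneath it.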

Hope this will work.

Best regards
 

cilurnum

New Member
Jul 27, 2015
I've just been bitten by this, having thought an upgrade on a new system would be a good idea. No special pool setup, just the rpool. Creating zpools no longer works, and I've noticed that zpool status lists devices as sdX rather than by-id.
 

cilurnum

The pool consists of two drives in a RAIDZ-1 mirrored setup, with six other unused drives in the system to add later. This is how it was set up at installation.

Initially, I noticed that zpool status listed the drives by ID, but after an update and reboot they were listed as sda and sdb. I didn't think anything of it until my zpool create command failed when adding devices for further pools. I've now rebooted again, and the boot process has failed and dropped to failsafe.
 

nasenmann72

Hi,

I've just installed a fresh system from proxmox-ve_3.4-102d4547-6.iso. Without any further updates I get:

Code:
root@pve1:~# dpkg -l | grep zfs
ii  libzfs2                          0.6.4-3~wheezy                amd64        Native ZFS filesystem library for Linux
ii  zfs-initramfs                    0.6.4-3~wheezy                amd64        Native ZFS root filesystem capabilities for Linux
ii  zfsutils                         0.6.4-3~wheezy                amd64        command-line tools to manage ZFS filesystems
Code:
root@pve1:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            ata-WDC_WD5003ABYZ-011FA0_WD-WMAYP0F8219P-part2  ONLINE       0     0     0
            ata-WDC_WD5003ABYZ-011FA0_WD-WMAYP0FA7S4K-part2  ONLINE       0     0     0

errors: No known data errors
After doing an apt-get update, upgrade, dist-upgrade and a reboot, I get the following output. The ZFS package versions are now 0.6.4-4~wheezy.

Code:
root@pve1:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
Code:
zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.

POOL  FEATURE
---------------
rpool
      filesystem_limits
      large_blocks
Code:
zpool upgrade -a
This system supports ZFS pool feature flags.

cannot set property for 'rpool': invalid argument for this pool operation
Then, after downgrading the ZFS packages to 0.6.4-3~wheezy again and rebooting, it looks OK again:

Code:
root@pve1:~# zpool status 
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            ata-WDC_WD5003ABYZ-011FA0_WD-WMAYP0F8219P-part2  ONLINE       0     0     0
            ata-WDC_WD5003ABYZ-011FA0_WD-WMAYP0FA7S4K-part2  ONLINE       0     0     0

errors: No known data errors
I think there is something wrong in either libzfs2 0.6.4-4 or zfsutils 0.6.4-4. The pool is not detected correctly and cannot be upgraded.
 

tycoonbob

Member
Aug 25, 2014
Did that fix ever come? I am running into this issue with a new build: fully patched, can't create new pools.

Never mind. I did an 'apt-get dist-upgrade' to update the kernel, then a reboot, and it's working. I fought with this for hours today. :(
 

kriss35

New Member
Aug 5, 2015
We have a version mismatch in the zfs kernel module - we will upload a fix tomorrow.
Hi,

I guess there is the same problem with PVE 4.0 beta. To reproduce the issue:

- install PVE 4.0 beta from the ISO
- choose a ZFS filesystem for the Proxmox installation (I chose a ZFS mirror, but I guess you will have the problem with a single ZFS disk too)
- finish the install
- reboot

At this stage we have the 0.6.4-pve1~jessie version of the ZFS packages. If we do an apt-get update and apt-get upgrade, the system wants to install the 0.6.4-pve2~jessie version. If we do the upgrade and reboot, the boot stops at an initramfs prompt and says it can't find the pool. zpool list shows nothing and zpool import doesn't want to import the pool.

Is there a fix under development for that?

PS: I can write a separate thread if needed, but as the problem is similar I posted it here.
 

kriss35

We have a version mismatch in the zfs kernel module - we will upload a fix tomorrow.
Hi,

I wrote a long message and submitted it, but it seems it was not recorded, so I will write another, shorter one. :)

I guess there is a similar problem on PVE 4.0 beta (tell me if I need to create a separate thread, but the problem looks similar):

- install Proxmox 4.0 beta from the ISO
- choose a ZFS disk in the installer (I chose a ZFS mirror, RAID 1)
- finish the install

At this stage we have the 0.6.4-pve1~jessie ZFS packages. If we do apt-get update and upgrade, the system will install the 0.6.4-pve2~jessie packages. Once done, if you reboot, the system stops at the initramfs prompt; zpool list doesn't see anything and zpool import doesn't want to import anything. (I did this test ten times; it happens each time.)

Is there a fix in development for that?

Thank you for this wonderful product :)
 

nasenmann72

After installing the latest updates, the ZFS pools seem to be working without any issues again. I was able to import my second pool with:

Code:
zpool import -d /dev/disk/by-id POOLNAME
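If the init scripts honor ZPOOL_IMPORT_PATH (the variable mentioned later in this thread), the by-id naming can also be made the default at boot instead of passing -d by hand each time. A sketch only, assuming the stock /etc/default/zfs layout:

```
# /etc/default/zfs  (excerpt, sketch only)
# Search the stable by-id names first when importing pools at boot:
ZPOOL_IMPORT_PATH="/dev/disk/by-id"
```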
Regards
 

ckx3009

New Member
Feb 18, 2014
Hello,

I am facing the same problem: I ran apt-get upgrade and this is the output:
Code:
Setting up zfsutils (0.6.4-4~wheezy) ...
insserv: There is a loop between service zfs-mount and zfs-zed if stopped
insserv:  loop involving service zfs-zed at depth 5
insserv:  loop involving service zfs-import at depth 4
insserv:  loop involving service umountfs at depth 7
insserv:  loop involving service zfs-mount at depth 15
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: error processing zfsutils (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of zfs-initramfs:
 zfs-initramfs depends on zfsutils; however:
  Package zfsutils is not configured yet.

dpkg: error processing zfs-initramfs (--configure):
 dependency problems - leaving unconfigured
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-40-pve
Errors were encountered while processing:
 zfsutils
 zfs-initramfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

I have a single RAIDZ-3 pool with 9 disks, shared between data and VM disks.
I still have to reboot, but I won't until the problem is solved, since I cannot afford hours of downtime.

I shall provide any additional detail that is needed to solve this problem.

Thank you very much
Regards
 

cilurnum

Yes, this now works. Looks like a mismatch between kernel module and userspace tools.
 

mozp

New Member
Aug 17, 2015
Yes, this now works. Looks like a mismatch between kernel module and userspace tools.
Speaking of version mismatches, could there be a mismatch between the SPL and ZFS versions in the current PVE version (kernel 2.6.32-40-pve)?
Code:
dmesg | grep -E 'SPL:|ZFS:'
SPL: Loaded module v0.6.4-358_gaaf6ad2
ZFS: Loaded module v0.6.4.1-1099_g7939064, ZFS pool version 5000, ZFS filesystem version 5
As opposed to the output on kernel 2.6.32-39-pve:
Code:
SPL: Loaded module v0.6.4.1-1
ZFS: Loaded module v0.6.4.1-1, ZFS pool version 5000, ZFS filesystem version 5
thanks!
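The base-version comparison on the dmesg lines above can be scripted. This is just a sketch over the sample lines quoted in this post; on a live system you would feed it dmesg | grep -E 'SPL:|ZFS:' instead of the hard-coded strings:

```shell
#!/bin/sh
# Sketch: compare the SPL and ZFS kernel module base versions.
# Sample lines hard-coded from the post above; on a live box use:
#   dmesg | grep -E 'SPL:|ZFS:'
spl_line='SPL: Loaded module v0.6.4-358_gaaf6ad2'
zfs_line='ZFS: Loaded module v0.6.4.1-1099_g7939064, ZFS pool version 5000, ZFS filesystem version 5'

# Strip each line down to the leading version number (e.g. 0.6.4).
spl_ver=$(echo "$spl_line" | sed -n 's/^SPL: Loaded module v\([0-9.]*\)[-_].*/\1/p')
zfs_ver=$(echo "$zfs_line" | sed -n 's/^ZFS: Loaded module v\([0-9.]*\)[-_].*/\1/p')

if [ "$spl_ver" = "$zfs_ver" ]; then
    echo "SPL/ZFS versions match: $spl_ver"
else
    # For the sample lines this branch fires: 0.6.4 vs 0.6.4.1
    echo "SPL/ZFS version mismatch: SPL=$spl_ver ZFS=$zfs_ver"
fi
```

For the two sample lines it reports a mismatch (0.6.4 vs 0.6.4.1), which matches the suspicion above.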
 

mozp

Hi!

I decided to post this here because serious problems started for me too after upgrading from 2.6.32-39-pve to the currently latest 2.6.32-40-pve.

After the upgrade, the bi-weekly scrub of a ZFS pool, which used to take approx. 3 h, started to take horribly long and eventually ended in a crash of the whole system (a reset was needed).
The only messages from ZFS indicating errors were these, with their stack traces, in kern.log:
Code:
kernel: INFO: task txg_sync:2750 blocked for more than 120 seconds.
For whatever reason it has been using ata-* disk names since 2.6.32-40-pve (after activating ZPOOL_IMPORT_PATH).
Code:
zpool status bankup 
  pool: bankup
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub in progress since Mon Aug 17 17:26:05 2015
    2.43T scanned out of 2.44T at 45.6M/s, 0h4m to go
    0 repaired, 99.51% done
config:

	NAME                                                   STATE     READ WRITE CKSUM
	bankup                                                 ONLINE       0     0     0
	  raidz2-0                                             ONLINE       0     0     0
	    ata-WDC_WD10JFCX-68N6GN0_WD-WX91AA46F9F8           ONLINE       0     0     0
	    ata-WDC_WD10JFCX-68N6GN0_WD-WX91AA46FE1Y           ONLINE       0     0     0
	    ata-WDC_WD10JFCX-68N6GN0_WD-WX91AA46FXY8           ONLINE       0     0     0
	    ata-WDC_WD10JFCX-68N6GN0_WD-WXC1E8459089           ONLINE       0     0     0
	    ata-WDC_WD10JFCX-68N6GN0_WD-WXD1E84E9JJR           ONLINE       0     0     0
	    ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E842LF6N           ONLINE       0     0     0
	logs
	  ata-Samsung_SSD_850_PRO_128GB_S1SMNSAG309255K-part3  ONLINE       0     0     0
On https://github.com/zfsonlinux/zfs/issues there seem to be quite a few new reports of systems hanging with the latest ZFS version. With 2.6.32-39-pve it worked flawlessly for months, so it may be best to wait before upgrading to the current kernel 2.6.32-40-pve. Furthermore, the upgrade itself was painful too, one reason being that sdX was used for the default device names instead of the ones from disk/by-id.

best regards
 

ckx3009

Have you issued an apt-get update BEFORE the apt-get upgrade? You first have to download the updated repository indices, then try to upgrade.
Hello,

well, yes, first of all I did, as usual, apt-get update.

As of now, before the reboot, I have no version mismatch:
Code:
 dmesg | grep -E 'SPL:|ZFS:'
SPL: Loaded module v0.6.4.1-1
ZFS: Loaded module v0.6.4.1-1, ZFS pool version 5000, ZFS filesystem version 5
SPL: using hostid 0xa8c00802
I am currently running kernel 2.6.32-39-pve, which should switch to 2.6.32-40-pve after the reboot.

Thank you
Best regards
 

mpond

New Member
Aug 28, 2015
Just had a similar issue myself with the startup dependencies, and apparently fixed it. It was related to the ZFS mounting workaround at
pve.proxmox.com/wiki/Storage:_ZFS
and specifically to adding zfs-mount to the $local_fs target. The new zfsutils introduced extra init scripts (zfs-import, zfs-zed), and that's where the loop comes from: zfs-mount now depends on zfs-import, zfs-import depends on zfs-zed, and zfs-zed depends on the $local_fs target, which includes zfs-mount, for we told it so.

What I've done on my system to solve this is: a) remove zfs-mount from the $local_fs definition in /etc/insserv.conf; b) update the new zfs-mount init script with "Default-Start: S" and "chkconfig: S 06 99", after which everything ran smoothly.
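For anyone wanting to try the same, the relevant pieces might look like the fragment below. This is a sketch reconstructed from the description above, not the exact files shipped by the packages, so compare against your own system before editing:

```
# /etc/insserv.conf: drop zfs-mount from the $local_fs line again, e.g.
$local_fs       +mountall +mountall-bootclean +mountoverflowtmp +umountfs

# /etc/init.d/zfs-mount: header additions described above
# chkconfig:      S 06 99
### BEGIN INIT INFO
# Provides:       zfs-mount
# Default-Start:  S
### END INIT INFO
```

The idea is to break the cycle: zfs-mount is started directly in runlevel S instead of being pulled in through $local_fs, so insserv no longer sees a loop through zfs-zed and zfs-import.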
 
