Proxmox 3.4 ZFS root install not using by-id?

c0mputerking

Hello all, I am using the latest install CD (Proxmox 3.4) and have set up a ZFS raid1, but my zpool status shows /dev/sde and /dev/sdf, which is not the recommended way to set up a pool. Why does the Proxmox installer do this?

Anyway, I would like to switch these to /dev/disk/by-id style entries, as each time I add or remove a hard drive from this system I get problems.

On a normal (non-root) pool this is easy:

zpool export tank
zpool import -d /dev/disk/by-vdev tank

However, on the root pool this is not so easy:

zpool export rpool
umount: /: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
cannot unmount '/': umount failed

ZoL is fairly new, and there are some posts on Google about what to do, but they all seem like a bit of a moving target depending on the distro, GRUB version, ZoL version, etc.

So I guess I am asking how I can fix this the Proxmox way, with the Proxmox versions of ZFS, GRUB, and Debian. I am hoping somebody has done this already, but I do not see it posted anywhere on the Proxmox forums.

I have no problem taking this system offline if need be.
 
It's not a critical situation and not really a problem at all. ZFS will still work, won't it? Maybe Proxmox will update their install script in the future. If you still want to change it (not that it needs fixing), try booting an Ubuntu live CD, load the ZFS module, and then you can change it.
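
For what it's worth, a rough sketch of what that live-CD session might look like, assuming the pool is named rpool and the ZFS module is loaded (whether the by-id names survive the next boot may also depend on the cachefile in the installed system's initramfs):

zpool import -f -N rpool                   # import with whatever names it finds, without mounting anything
zpool export rpool                         # export it cleanly
zpool import -d /dev/disk/by-id -N rpool   # re-import so the pool records by-id device paths
zpool status rpool                         # should now show by-id names instead of /dev/sdX
zpool export rpool                         # export again before rebooting into Proxmox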
 
As always, thanks so very much for your quick reply. I now understand that it is not usually a problem, so it is not a priority for me either. I have already run into a few problems because of this, but they are probably caused by me trying to do something with Proxmox that I shouldn't.

However, I do have a problem right now that may or may not be related, so I am creating a separate post for it.

Should I mark this thread as solved then, as this is part of the Proxmox design for now?

Thanks again for your reply
 
Just a flashing cursor ... sorry, not very helpful information. So I did a reinstall using the Proxmox CD and am trying to restore rpool from a snapshot backup right now. I am pretty sure it has to do with /dev names moving around; I really wish I could get /dev/disk/by-id working. It is not so trivial after install: you have to boot from a live CD, chroot, and probably mess with GRUB and all that, and I am not sure where to begin. I am going to play around a bit more and will post my findings to the list.
 
OK, I did a fresh reinstall of Proxmox 3.4 using a ZFS raid1 mirror and tested how resilient things were by removing a bunch of drives; I even tried booting on another system, as I want this setup to be portable, and it was. Let me repeat: I removed drives and Proxmox booted fine. zpool status sometimes showed different /dev/sdX names, but it always booted perfectly, even on a different test system.

However, I then did an upgrade (apt-get update && apt-get dist-upgrade), and now when I remove drives I get errors; see the attachment (small.jpg) for the exact errors.
 
I have the same problem on my USB pen drive. If you want to continue booting, run `zpool export POOLNAME && zpool import POOLNAME -R /root`, then type exit and it will continue to boot.
 
# zpool export rpool
: now export all other pools too
# zpool import -d /dev/disk/by-id -f -N rpool
: (the by-id import was NOT working for me, but it does work if I import by the numeric pool ID instead)
: now import all other pools too
# mount -t zfs rpool/ROOT/debian-1 /root
: do not mount any other filesystem
# cp /etc/zfs/zpool.cache /root/etc/zfs/zpool.cache
# exit
The system seems to boot OK after typing exit, but on a reboot I am back at the error I posted above.
 
Oops, Nemesiz, I missed your post. I like your one-line solution better than the one I found; however, this is still not a permanent fix, as on reboot I end up with the same error. Not ideal for a remote system; what if someone reboots it?
 
Can you describe how you ran into this problem? I hit it after upgrading ZFS from 0.6.2 to 0.6.3, and there was a problem with GRUB; I had to download it from somewhere to make the OS boot. Now I'm using 2.01-22debian1+zfs3-0.6.3.2-whezy on Ubuntu and always do the export/import (I wrote a script so there is less typing). On the server I use an SSD to hold Proxmox, and the ZFS pools are on other HDDs, so no problem with that.
 
Hi there, I'm glad to find this thread and just wanted to throw in some points. (I'm using PVE 3.4 at the moment with a manual ZFS data pool. It works great, but I'm very interested in running Proxmox as a ZFS root install; the fact that Proxmox does not use by-id is holding me back at the moment.)

- from the proxinstall, it seems that by-id import was already implemented, but then dropped because of a strange import error (see line 849 here: https://github.com/proxmox/pve-installer/blob/master/proxinstall )
=> as a consequence, "fixing" this by-id issue should be straightforward once that error has been understood/fixed

- Regarding this strange "cannot create 'rpool': no such pool or dataset" error:
I could reproduce this error when installing Proxmox in a virtual environment: whenever one forces ZFS to import the rpool using by-id (for instance with `zpool import -d /dev/disk/by-id rpool`), one gets this error and the pool will not import. Using just `zpool import rpool` works right away. (A quick way to check which device names a pool ended up with is sketched right after this list.)

- The very same rpool could be imported using by-id in a live Debian with ZFS packages (http://wiki.complete.org/ZFSRescueDisc).
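
In case it helps with debugging, this is how to see which device names a pool ended up with and how they map to the by-id links (the sde/sdf names here are just the ones from the first post):

zpool status rpool                          # lists the device names the pool was imported with
ls -l /dev/disk/by-id/ | grep -E 'sd[ef]'   # maps the by-id symlinks back to the /dev/sdX nodes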

Without digging too much into this issue, I guess the best thing is to try the brand new ZoL release 0.6.4 ( http://list.zfsonlinux.org/pipermail/zfs-announce/2015-April/000001.html ). As usual, there were tons of bug fixes, and some could very well be related to the above import issue.

Since I'm not that deep into Proxmox, my question: has anybody tried ZoL 0.6.4 with Proxmox?
 
Just for full disclosure, I am running Proxmox with 3 ZFS pools for a couple of small setups:

rpool, which is a ZFS raid1 on two 32 GB USB sticks. I know that is not the best idea, but I moved swap to another pool and will do some other tweaks, like maybe even moving /var/lib/vz to another pool (a rough sketch of that idea is below the pool list), to make sure they are not going to self-destruct. Besides, they are mirrored; if one fails I can theoretically just pop in another.

apool, which is a raidz (RAID5-like) vdev of 3 SAS drives, for main data storage

bpool, which is a single ATA drive holding nightly backups of apool and rpool
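
For the /var/lib/vz idea, a very rough sketch of what I have in mind (the dataset name apool/vz and the temporary mountpoint are just placeholders, and VMs, containers, and vzdump jobs should be stopped first):

zfs create -o mountpoint=/mnt/vz-new apool/vz   # placeholder dataset name and temporary mountpoint
cp -a /var/lib/vz/. /mnt/vz-new/                # copy the existing data across
mv /var/lib/vz /var/lib/vz.old                  # keep the old copy until everything checks out
zfs set mountpoint=/var/lib/vz apool/vz         # remount the dataset at the real location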

The way to reproduce this error: shut down, pull out a couple of drives, and boot back up, and you get the error seen in my screenshot. However, thanks to Nemesiz and some other posts I found lying around the interweb, this can be fairly easily fixed. It is not ideal, but it gives me hope. Just run these commands at the error prompt:

zpool export rpool && zpool import -d /dev/disk/by-id rpool -R /root
exit # which will boot proxmox

Once Proxmox is booted:
zpool set cachefile=/etc/zfs/zpool.cache rpool
update-initramfs -u

Now rpool is imported with by-id device names.

NOTE: even with by-id device names being used in the pool, when I plug all my drives back in (changing back), I get the same error again and have to run the same commands to get Proxmox to boot. So after any change in devices, adding or removing drives, you need to run the commands above before things boot without intervention again.
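
I have not tested this on the Proxmox packages, but the upstream ZoL init/initramfs scripts are supposed to honour a ZPOOL_IMPORT_PATH setting in /etc/default/zfs, so something like the following might make the by-id import stick across device changes (treat it as an experiment):

echo 'ZPOOL_IMPORT_PATH="/dev/disk/by-id"' >> /etc/default/zfs   # assumption: the initramfs scripts read this
update-initramfs -u                                              # rebuild the initramfs so it takes effect at boot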

PS: just to repeat, this did not happen before updating with dist-upgrade; Proxmox as installed from the installer did not have this issue, which I found weird, as the pool was using /dev/sdX names that could change, yet the system would still boot fine.
 
Good to hear about your fix. Today I updated ZoL to 0.6.4 on my USB setup. After a reboot I couldn't get it to boot fully; it got stuck at some point and I couldn't do anything.
My ZFS pool was always using by-id paths, so instead of `zpool export rpool && zpool import -d /dev/disk/by-id rpool -R /root`, `zpool export rpool && zpool import rpool -R /root` was enough for me. But it didn't get the system to boot completely; I ran into the problem "An error occurred while mounting rpool/ROOT/ubuntu-1."

I had to play with an Ubuntu live CD. I changed GRUB to 2.02~beta2-9ubuntu1, but still no successful boot. Maybe I missed some needed step or didn't completely update GRUB, but then I found information about plymouth and made some changes to "disable" it (Proxmox's Debian base does not have plymouth installed; this part only applies to Ubuntu). Now the problem with importing the pool is gone.

I'm posting some info on what I did. Maybe it will be useful for someone.

File: /etc/default/grub
Comment: trying to disable plymouth splash
Code:
#GRUB_HIDDEN_TIMEOUT=0
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_GFXPAYLOAD_LINUX=auto

File: /etc/initramfs-tools/conf.d/splash
Comment: plymouth runs before the root FS is mounted; when ZFS allocates memory, a problem can occur with plymouth's video memory allocation.
Code:
FRAMEBUFFER=y
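
If anyone repeats this: after editing those two files, I assume the usual rebuild steps are still needed, roughly:

update-grub          # regenerate grub.cfg from the changed /etc/default/grub
update-initramfs -u  # rebuild the initramfs so the FRAMEBUFFER setting is included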


Looking at this page ( https://github.com/zfsonlinux/pkg-z...4.04-or-Later-to-a-Native-ZFS-Root-Filesystem ):
the ZFS repositories have been upgraded to 0.6.4, which has known incompatibilities with upstream GRUB, and you should NOT run zpool upgrade rpool, as it could leave the system unbootable.
So it recommends not upgrading the pool version.
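
A harmless way to check where a pool stands without actually upgrading anything (these commands only read state and change nothing):

zpool upgrade                         # with no pool name this only lists pools that could be upgraded
zpool get all rpool | grep feature@   # shows which feature flags are enabled/active on rpool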
 
Super excited about that and patiently awaiting the upload ... thanks in advance!

new ZFS packages are now in the pvetest repo. make sure that you also update the kernel from pvetest; if you are using 3.10, you need to install 3.10.0-9 manually (apt-get install pve-kernel-3.10.0-9-pve).
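
For reference, the pvetest repository for 3.x (wheezy) can be enabled roughly like this; adapt it to your setup and double-check the wiki for the exact line:

echo "deb http://download.proxmox.com/debian wheezy pvetest" > /etc/apt/sources.list.d/pvetest.list
apt-get update && apt-get dist-upgrade
apt-get install pve-kernel-3.10.0-9-pve   # only if you are on the 3.10 kernel, as noted above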

we also plan to release a new ISO installer as soon as all tests are done.
 
I have tried multiple installs with the 3.4 ISO, and it usually doesn't work for me with ZFS, mostly after updating the system with apt-get update; I'm sure this has more to do with my strange setup than it does with Proxmox. Things seem to work perfectly with ext3/4, but now that I have been spoiled by ZFS on root I would really like to use it.

Basically, rather than making things work by trial and error, I think I am waiting for the new version of the Proxmox ISO, hopefully with a more stable working implementation of ZFS.

I am curious about an approximate time frame for the new (or even a test) ISO, as I have a couple of systems on hold until then and am not quite ready to go with ext3/4.

PS: I also hope it has lz4 compression on the rpool. Here is some info I found about lz4 (commands for enabling it by hand are below the list):

A rough comparative analysis of LZ4 performance vs. LZJB:

  • Approximately 50% faster compression when operating on compressible data.
  • Approximately 80% faster on decompression.
  • Over three times faster on compression of incompressible data.
  • Higher compression ratio (up to 10% on the larger block sizes).
  • Performance on modern CPUs often exceeds 500 MB/s on compression and over 1.5 GB/s on decompression and incompressible data (per single CPU core).
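
In case the new installer does not enable it by default, I believe turning on lz4 by hand is just the following, assuming the pool already has the lz4_compress feature enabled (it only affects data written after the change):

zpool get feature@lz4_compress rpool   # should report "enabled" or "active" before lz4 can be used
zfs set compression=lz4 rpool          # child datasets inherit the setting unless they override it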
 
I am curious about an approximate time frame for the new (or even a test) ISO, as I have a couple of systems on hold until then and am not quite ready to go with ext3/4.

next week (I am currently working on it).
 
That's great news, thanks for all your hard work, I'm really looking forward to it.

I am getting sick of hacking around and trying to make things work on my own; ZoL is still a bit tricky on Linux, and the Proxmox ISO and xorriso are a bit complex. So thanks for putting all the pieces together for us.

Proxmox is seriously the best/fastest/easiest hypervisor around, and with ZFS under the hood now, nobody even comes close.

Thanks for such a great package and a great community too.
 
