Proxmox ZFS RAID-1 setup using UUIDs, not /dev/sdX

jim.bond.9862

Renowned Member
Apr 17, 2015
As the subject says: is it possible to install Proxmox with a ZFS RAID-1 setup
so that it uses disk UUIDs rather than /dev/sdX names?

I am playing with different setups right now so I can reconfigure my current server with Proxmox and ZFS.
All the setups use the Proxmox ISO, and they all look fine, but every one of them ends up using /dev/sdX names for my rpool mirror. Most of the time I wouldn't mind much: in my current setup the SSDs are connected to the motherboard's SATA ports and all of my data disks hang off an HBA, so /dev/sda and /dev/sdb are usually assigned to the SSDs first, since the onboard ports come up ahead of anything else on the PCI bus. But I am also toying with a setup that boots the OS from USB flash drives and uses the SSDs for cache and SLOG, and that may be a problem, because the USB devices may enumerate after the onboard SATA. I also like UUIDs because they make it easier to identify a drive when needed.

So, is there a way to install with UUIDs rather than the old name designation?

thanks.
 
ZFS doesn't care about the drive naming, as it reconstructs the raid1 by identifiers on its partitions. You can remove and add a disk with UUID after install.
https://pve.proxmox.com/wiki/ZFS_on_Linux#_bootloader
even on the main pool "rpool"? didn't know that.
I know I can create pools using UUIDs, and I could also break a pool and reconstruct it with UUIDs on a new, empty pool. But I never saw any guide on how to do it if your pool already has data, other than backup / recreate the pool / restore.

but again, that was a data pool, not a bootable mirror pool.

also, your link has nothing on how to remove and add a disk on an active OS pool.
in fact it has no info on how to add/remove disks on any pool, so no, not helpful at all.
 
that helps :) thanks.

I also tried changing the config on my test setup using
zpool detach/attach without bringing the device offline.
It seems that works too. You need to let it resilver between commands,
and the second attach command needs to use the UUID for the device already in the pool, but it works.
I rebooted the server twice, once a plain reboot, once a shutdown and restart. Works just fine.
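The detach/attach sequence described above can be sketched roughly like this. All device names and the ata-... ids are placeholders for illustration; on a real system, pick the matching entries from /dev/disk/by-id/ yourself:

```shell
# Online conversion of one mirror member from /dev/sdX to by-id naming.
# WARNING: the pool runs without redundancy between detach and resilver.

# 1. Detach one side of the mirror (the pool stays online):
zpool detach rpool /dev/sdb2

# 2. Re-attach the same partition by its stable id. The first device
#    argument is the one already in the pool, given by its id:
zpool attach rpool /dev/disk/by-id/ata-DISK_A-part2 /dev/disk/by-id/ata-DISK_B-part2

# 3. Wait for the resilver to finish before converting the other disk:
zpool status rpool
```

Repeating the same detach/attach pair for the second disk completes the conversion without ever exporting the pool.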

nice to know. now, if I read the wiki correctly, if I have a failed device I will have to take it offline and then replace it.
will try it out and see if it works as seamlessly as the conversion did.

[EDIT 1] yes, it worked OK.
shut down the VM, removed one of the disks (/dev/sda), restarted the node.
zpool status showed the pool degraded with the disk (partition 2) missing.
went through the disk replacement steps fine,
and the pool resilvered just fine. again, this is an empty test VM setup with 127GB disks; it took about 5 min.
booted back up with 2 disks OK, with only one error that recovered by itself. will definitely try it on real hardware.
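For reference, the replacement steps tested above look roughly like this (all device names and the ata-... id are placeholders; note that a boot pool also needs the partition layout copied and the bootloader reinstalled on the new disk, as covered by the wiki link earlier in the thread):

```shell
# Replacing a failed mirror member, addressing the new disk by id.
zpool offline rpool /dev/sda2        # only if the old disk is still visible
zpool replace rpool /dev/sda2 /dev/disk/by-id/ata-NEW_DISK-part2
zpool status rpool                   # watch the resilver progress
```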
 
Could the installer be changed so that it uses disk-by-id instead of /dev/sda-type identifiers? I'm seeing a lot of FUD on the net about drive replacements down the road that come up under different drive letters. Rebooting is not an option in drive maintenance. It would be best to have drives ordered by disk-by-id. One test would be to shut down a test server, rearrange the drives into a different sequence, and see if it boots up.

Here's a layout example:
sda - Drive 1
sdb - Drive 2
sdc - Drive 3
sdd - Drive 4


If you remove Drive 3 and insert a new Drive 5, I think it would be recognized as sde. When adding it to a pool, it wouldn't be ideal to use the sde designation, because on the next reboot sde may not exist. Am I not correct about this?

Also, it's odd to me that when creating additional pools one seems to have to specify all the attached devices. Does this mean that when a new drive comes in, all the zpool create commands need to be readdressed to the new device names? I could see this as a downside to the device-by-id approach.
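As a side note, you can check at any time which stable identifiers map to which kernel names. A quick sketch, assuming a typical Linux system with udev:

```shell
# Whole-disk ids (filter out the per-partition symlinks):
ls -l /dev/disk/by-id/ | grep -v -- -part

# Or list every symlink udev created for one device:
udevadm info -q symlink -n /dev/sda
```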

I've been doing Linux sysadmin work for over 20 years, but am just now getting into ZFS thanks to Proxmox. I commend Proxmox for using ZFS and for opening my eyes to the vital importance of ZFS on Linux.

Thanks!
 
I do not think the drive designation is important for ZFS most of the time. There is enough metadata stored on each drive to rebuild the array and pool no matter what the drive designation is.
My problem is mostly that the drive designation is important for the boot drive/pool, as these drives have to be properly defined for the system to boot.

I am really confused about why the Linux kernel hasn't been adjusted yet to use UUIDs by default everywhere, and to ask for a short description to store on the drive when it is connected, even creating one for you if needed. Windows has been doing this for years and never gets confused about drives. At least in my 20+ years I have never seen a problem: no matter where or how a disk is connected, it is always accessible, and if 2 bootable disks are connected, the one on the first PCI/SATA bus boots by default unless you use the BIOS boot option to override it. Why Linux cannot do that is a mystery to me.
 
Strangely, every other recent distribution that I have used with the Linux Kernel uses UUID numbers for referencing drives. This is really odd that Proxmox with zfsonlinux doesn't do the same. I'm sure people can get around the problem by rebooting or rescanning the drives somehow.

I'm sure there are others with much more experience with ZFS than I have (mainly because I'm just now learning it.) So I don't have failed drive experience on ZFS yet.

On the other hand, I have over 20 years experience in IT and when I see pages on the internet suggesting to exercise caution to not build your zpools using designators like /dev/sda /dev/sdb, etc... I take note.

I think booting a live CD, unmounting the ZFS partitions, exporting the pools and re-importing them with UUID identifiers might do the trick, in addition to a rebuild of the initramfs.

I hope my tone isn't coming across in a bad way. I'm really delighted that I'm able to use ZFS on Linux! I'm building a Samba file server for a past employer of mine using Proxmox VE. Funny, though, that we're not using the virtualization part at all.

The zfs-auto-snapshot package from the debian repository really wows my customer to have automatic snapshots into the past. Also we're making great gains with the compression. Next I'll have to get some experience with the zfs send/receive tools for remote backups.

Thanks for your involvement with Proxmox!
 
why are you using Proxmox if you are not using virtualization, then?
wouldn't a straight Debian install with Webmin be better?
Debian supports ZFS, and Webmin gives you a web UI, and a pretty good one too.

also, FYI, check out the TurnKey file server container for Proxmox. it is good.
it gives you everything: WebDAV and Samba ready-configured, all managed through a custom Webmin UI.
 

I want Samba to be connected as closely as possible to the ZFS filesystem. I tried running an Ubuntu container and adding Zentyal into it, and almost got that to work. It was suggested to mount the filesystems in through the containers, but I didn't see a way to modify the ZFS permissions from the containers. Then again, maybe the ACLs for NT access controls are all I really need to modify. I want to be able to do snapshots; not sure if that all works with the TurnKey Linux file server. I've got CUPS working already, and a Samba AD is running. Next I need to add OpenVPN (yes, I know there's a container for that also; does it work?).

I guess I'm running Proxmox because of the ability to run virtual machines and containers on the hardware. The "samba-tool domain provision" command seems to get a lot of the heavy work going pretty easily. The customer looked over the CLI commands and wasn't frightened by creating users and groups that way. Thanks for the suggestion of using Webmin; I might just add that in.

I'm wondering if Proxmox would support the box if the customer paid for support. I've never seen a Debian installer show ZFS options other than the one I've used here on Proxmox, so I guess that's largely why Proxmox was chosen. I like the community forums, and I've had my eye on using Proxmox for about 4 years now.

One of my other projects I'd like to work on is integrating HAProxy with a web application firewall to fence off attacks against my TurnKey Linux WordPress containers.

It's all working so nicely!
 
zfs does not care about /dev/sda vs /dev/disk/by-XXX/YYY - it uses the labels on disk anyway to find out which vdevs are available and belong to which pool.

it is just for the admin's convenience, e.g., when a vdev fails it is maybe more helpful for switching the disk if you know that the disk with your custom label "FOO" has died, or that the disk in enclosure 2 slot 1 has died, or ... instead of "sdb" has died.

that the PVE installer uses /dev/sdX when creating the pool is a result of a technical limitation of the installer environment, and not easy to change (if it were, we would already have switched to by-id ;))
 
I found this thread because of a strange problem: I have 2 ZFS pools. Now I want to add 2 new HDDs for a third pool. When I attach the new HDDs, my ZFS pools don't work any more; I get an error that required drives are missing. I found out that the first HDD from pool 1 was mounted as sda, and after I attached the new HDD (without any pool on it yet), the new one took over sda.

How can this behaviour be possible when ZFS doesn't use the device name internally?
And how do I solve this?
 
Contrary to fabian's post, I am pretty sure ZFS does use the disk designation internally, otherwise you wouldn't have the problem you are having importing/mounting the pools.
Hence this thread asking how to use UUIDs instead of disk names, to prevent exactly this.
So for your problem, I would convert the pool setup to use UUIDs first, before adding any disks to the machine. If your pool is a 'root' pool where the OS is installed, like "rpool" in a Proxmox install, you can follow the suggestion to do the conversion at boot time, or one of mine, an online conversion on the live system. If this is a secondary pool, i.e. a data pool, just export it and import it using UUIDs.
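For a data pool, the export/import conversion mentioned above is just a pair of commands; "tank" here is a placeholder pool name:

```shell
# Re-import a data pool so its vdevs are recorded under by-id paths:
zpool export tank
zpool import -d /dev/disk/by-id tank
zpool status tank   # the vdev names should now show the by-id paths
```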
 
I've a separate SSD for the OS; my ZFS pools are only for data. As I see it, it should be possible to export the pool and then re-import it using the -d switch, which accepts a path to the devices, where I could use by-uuid. According to a post on superusers.com, which I can't link because of restrictions in this forum, it may only work until the next reboot, but I want to try it anyway.

Strangely, I cannot export the pool, even though I'm using the -f switch to force it. I always get the error "cannot export 'homes': pool is busy".
 
if you moved your home folder to the pool, it will be locked (busy) on a live system.
or something else is connected to the pool; check your clients to see if anything is using it.
but "zpool import -d /dev/disk/by-id <myPoolName>" should work permanently.

if that fails, you can try changing the pool online using the detach/attach options, but that would be slower, as you will have to wait for the pool to resilver on each device. here is a link with some ZFS command references: https://www.csparks.com/ZFS Without Tears.html
 

Yes, ZFS doesn't care, but using /dev/sd? definitely leads to issues if you have to remove and then re-insert a disk.
What was, for example, sdc can become sde, and ZFS doesn't allow you to replace "sdc" with "sde", because they are the same physical disk.

I had this issue yesterday. That's why the way suggested by the ZFS creators to reference disks is something immutable like by-id, by-path, and so on.
If you remove and re-insert the same disk, ZFS will reuse it, because the name doesn't change and no conflicts arise.
 
Hi guys,
I have a weird situation; thought you could help me.
I have a cluster of three nodes running Proxmox + Ceph.
I've installed the OS (+ Ceph) on 2 x USB drives as ZFS RAID-1, and now I have high I/O wait on the CPU because the USB drives are slow.
I added 2 x 15K SAS drives and I'm wondering if it's possible to replace the USB drives with the SAS drives without having to rebuild the node.
PS:
I've already tried to install Proxmox the same way from the beginning, but my P420 didn't boot when it was in HBA mode and I used ZFS RAID-1.
It booted with Windows, or with Proxmox without RAID.
 
Proxmox has subscription plans you can buy for updates and support, and a strong community.
What kind of WAF do you want to use with HAProxy?
I had a terrible experience when I tried to do the same with pfSense and Snort with HA. That's off topic, I just wanted to give you a heads-up.
 
Happy Christmas ;)

The best is to use by-id, because if one disk fails, you can stop the server and be sure you remove the faulty drive and not a good one. The by-id (the disk's ID) is printed on the label of the physical disk, so there is no room for mistake; with by-id you are 100% sure that you remove the right one.
 
