Proxmox ZFS Inconsistent Disk Labels

socegov
New Member
Sep 13, 2019
Hello everyone, I have installed a fresh copy of Proxmox VE 6 with the ZFS file system. My server has 10 drives in a RAID10-style layout (five striped mirrors). My issue is that the first two drives are labelled as scsi-x while the other 8 drives are labelled as sdx. I would like to have all my drives labelled in the sdx format. Below is the layout:

Code:
NAME                                              STATE     READ WRITE CKSUM
rpool                                             ONLINE       0     0     0
  mirror-0                                        ONLINE       0     0     0
    scsi-364cd98f06a11110024d42f1e5d9525c9-part3  ONLINE       0     0     0
    scsi-364cd98f06a11110024d430ebb1af8034-part3  ONLINE       0     0     0
  mirror-1                                        ONLINE       0     0     0
    sdc                                           ONLINE       0     0     0
    sdd                                           ONLINE       0     0     0
  mirror-2                                        ONLINE       0     0     0
    sde                                           ONLINE       0     0     0
    sdf                                           ONLINE       0     0     0
  mirror-3                                        ONLINE       0     0     0
    sdg                                           ONLINE       0     0     0
    sdh                                           ONLINE       0     0     0
  mirror-4                                        ONLINE       0     0     0
    sdi                                           ONLINE       0     0     0
    sdj                                           ONLINE       0     0     0
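
The scsi-* names appear to be the persistent /dev/disk/by-id udev symlinks for those two drives; listing them shows which sdX node each one points to (the IDs will differ per system):

Code:
# every by-id entry is a symlink to its ../../sdX node
ls -l /dev/disk/by-id/ | grep -v part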

Could you please assist? Thanks :D
 
Hello, I have tried following that tutorial previously, and I get the error

umount: /: target is busy.

I had installed Proxmox with ZFS during the installation process.
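
For context, the step that fails there is exporting the root pool, something like:

Code:
# this has to unmount the pool's datasets, and / is mounted from rpool
zpool export rpool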
 
Sorry, I should've thought of that - the export does not work if you've booted off ZFS on root...

What should work is to temporarily switch away from using the zpool.cache file for importing during boot. Then zfs-import-scan.service should import the pool via the /dev/disk/by-id special files, and you can recreate the zpool.cache with that configuration:
Code:
# drop the stale cache file so boot no longer imports from it
rm /etc/zfs/zpool.cache
# rebuild the initramfs for all kernels, now without the cache file
update-initramfs -k all -u
# have the pool imported at boot by scanning /dev/disk/by-id
systemctl enable zfs-import-scan.service
reboot

and after the reboot:
Code:
# write a fresh cache file from the now by-id based pool config
zpool set cachefile=/etc/zfs/zpool.cache rpool
# rebuild the initramfs so it picks up the new cache file
update-initramfs -k all -u
# switch back to cache-based import
systemctl disable zfs-import-scan.service
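
To double-check afterwards:

Code:
# the pool should now list its devices by stable by-id names
zpool status rpool
# and the cachefile property should be /etc/zfs/zpool.cache again
zpool get cachefile rpool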

(not directly tested, but those should be the steps)
 
Great, it worked - mostly. I went with the /dev/disk/by-vdev method. I also had to set ZPOOL_IMPORT_PATH="/dev/disk/by-vdev" in the /etc/default/zfs file. My only gripe is that the first two drives have the suffix part3 appended at the end.

Code:
    NAME              STATE     READ WRITE CKSUM
    rpool             ONLINE       0     0     0
      mirror-0        ONLINE       0     0     0
        disk-1-part3  ONLINE       0     0     0
        disk-2-part3  ONLINE       0     0     0
      mirror-1        ONLINE       0     0     0
        disk-3        ONLINE       0     0     0
        disk-4        ONLINE       0     0     0
      mirror-2        ONLINE       0     0     0
        disk-5        ONLINE       0     0     0
        disk-6        ONLINE       0     0     0
      mirror-3        ONLINE       0     0     0
        disk-7        ONLINE       0     0     0
        disk-8        ONLINE       0     0     0
      mirror-4        ONLINE       0     0     0
        disk-9        ONLINE       0     0     0
        disk-10       ONLINE       0     0     0
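
For reference, the disk-N names come from alias entries in /etc/zfs/vdev_id.conf; a trimmed sketch of mine (the first two IDs are the real ones from the first post, the rest follow the same pattern):

Code:
# /etc/zfs/vdev_id.conf - map friendly vdev names to stable by-id links
alias disk-1  /dev/disk/by-id/scsi-364cd98f06a11110024d42f1e5d9525c9
alias disk-2  /dev/disk/by-id/scsi-364cd98f06a11110024d430ebb1af8034
# ... one alias line per remaining drive ...

After editing the file, udevadm trigger regenerates the /dev/disk/by-vdev symlinks.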

It's not too big of an issue, but I would be glad to have consistency.

Many thanks
 
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev"
This would need to be configured AFAIK - see https://github.com/zfsonlinux/zfs/wiki/FAQ
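
Roughly like this, assuming the Debian-style defaults file mentioned above:

Code:
# /etc/default/zfs
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev"

You may also need update-initramfs -k all -u afterwards so the setting is picked up during early boot.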

My only gripe is that the first two drives have the suffix part3 appended at the end.

The reason for this is that the first mirror vdev is the one the system actually boots from - those drives get partitioned instead of being handed to ZFS as whole disks:
* sdX1: a small 1M partition for BIOS boot compatibility
* sdX2: a 512M partition for the ESP (UEFI boot)
* sdX3: the rest, for the ZFS pool
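
You can check the layout directly (a sketch - /dev/sda assumed to be one of the first two drives, and sgdisk comes from the gdisk package):

Code:
# partition sizes and types on one of the boot-mirror drives
lsblk -o NAME,SIZE,TYPE /dev/sda
# or print the full GPT
sgdisk -p /dev/sda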

I hope this explains it.
 
