Proxmox ZFS Management

That's odd; whether by-id or by-partlabel, those are symlinks after all. I don't remember this behaviour with symlinks from before: either the device is there or it is not. I will admit I never use /dev/sd* because those can change; in fact you sometimes see someone asking here how to "fix" that after the fact, once they find out.

Yeah. I do feel the Proxmox pros could elaborate on the ZFS wiki, which just says <device>.
And the default ZFS install also builds rpool using /dev/sda, /dev/sdb,
so a noob like me would go for /dev/sd... as well.

Yeah, if you get normal behaviour with by-partlabel, then it must be something with the local ZFS on Proxmox?
Do you use TrueNAS for ZFS + iSCSI or NFS shared storage for Proxmox?

Upon re-attaching the disk4 partition, it did immediately show up in /dev/disk/by-partlabel/zfs-disk4.

Got it back via: zpool online zfs-raid10 /dev/disk/by-partlabel/zfs-disk4
(Even when the pool was built via /dev/sd{letter}, I observed ZFS would NOT auto-online, at least not by default.)

But a disk disappearing while ZFS still reports the pool as healthy doesn't give me confidence that local Proxmox is happy using by-partlabel.
Tried it again, this time removing both the disk4 and disk5 partitions. You can see they are gone here, yet the pool reports a false "healthy":

Code:
root@LAB-SMPM-GRUB:/dev/disk/by-partlabel# lsblk -o +PARTLABEL
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS PARTLABEL
sda      8:0    0   20G  0 disk            
├─sda1   8:1    0 1007K  0 part            
├─sda2   8:2    0  512M  0 part            
└─sda3   8:3    0 19.5G  0 part            
sdb      8:16   0   20G  0 disk            
├─sdb1   8:17   0 1007K  0 part            
├─sdb2   8:18   0  512M  0 part            
└─sdb3   8:19   0 19.5G  0 part            
sdc      8:32   0   10G  0 disk            
├─sdc1   8:33   0   10G  0 part             zfs-disk1
└─sdc9   8:41   0    8M  0 part            
sdd      8:48   0   10G  0 disk            
├─sdd1   8:49   0   10G  0 part             zfs-disk2
└─sdd9   8:57   0    8M  0 part            
sde      8:64   0   10G  0 disk            
├─sde1   8:65   0   10G  0 part             zfs-disk3
└─sde9   8:73   0    8M  0 part            
sr0     11:0    1 1024M  0 rom


root@LAB-SMPM-GRUB:/dev/disk/by-partlabel# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 6.18M in 00:00:00 with 0 errors on Sun Jul 21 01:26:19 2024
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0

errors: No known data errors

  pool: zfs-raid10
 state: ONLINE
  scan: resilvered 120K in 00:00:00 with 0 errors on Tue Sep 24 13:09:18 2024
remove: Removal of vdev 3 copied 76K in 0h0m, completed on Mon Sep 23 15:50:37 2024
        792 memory used for removed device mappings
config:

        NAME           STATE     READ WRITE CKSUM
        zfs-raid10     ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            zfs-disk1  ONLINE       0     0     0
            zfs-disk2  ONLINE       0     0     0
          mirror-4     ONLINE       0     0     0
            zfs-disk3  ONLINE       0     0     0
            zfs-disk4  ONLINE       0     0     0
        spares
          zfs-disk5    AVAIL  

errors: No known data errors
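
Something I may still try to force ZFS to actually touch the devices and notice (my assumption being it simply needs I/O against them, or a vdev reopen):

Code:
# reopen all vdevs and generate I/O against the pool, then re-check
zpool reopen zfs-raid10
zpool scrub zfs-raid10
zpool status zfs-raid10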
 
Yeah. I do feel the Proxmox pros could elaborate on the ZFS wiki, which just says <device>.
And the default ZFS install also builds rpool using /dev/sda, /dev/sdb,
so a noob like me would go for /dev/sd... as well.

I would not take Proxmox's docs on ZFS specifically as an authoritative source, though. Speaking of which, I just found ...

Yeah, if you get normal behaviour with by-partlabel, then it must be something with the local ZFS on Proxmox?

https://github.com/openzfs/zfs/issues/14559

What I find interesting is that it's quite "recent", but maybe that's just what I happened to find now.

I would just add that I personally find the by-path approach even more ridiculous (paths can change; I have even seen them change after a BIOS/EFI update).

Do you use TrueNAS for ZFS + iSCSI or NFS shared storage for Proxmox?

This will be a shock, but neither. If you ask about ZFS, my experience mostly comes from BSD, and later from OpenZFS (called ZFS on Linux back in the day) when it was what Ubuntu shipped (and then silently pulled back from). It might be that I observed some "old" behaviour; OpenZFS is not exactly what was once shipped for Solaris.

Just to be clear, earlier today I was posting this (note the OP's title):
https://forum.proxmox.com/threads/lost-all-data-on-zfs-raid10.154843/page-3#post-705651

As you can see, I am not a fan, and I put my reasons there too (or so I believe), but dissenting voices are somehow treated as vigilantes nowadays (sarcasm).

As a result, I am also not the most qualified to recommend the best setup for ZFS with PVE, as I do not run them together. I do run some ZFS, but not with PVE. I do like it for send / receive and versioning; that is what I still use it for, but mostly on BSD.

Upon re-attaching the disk4 partition, it did immediately show up in /dev/disk/by-partlabel/zfs-disk4.

Got it back via: zpool online zfs-raid10 /dev/disk/by-partlabel/zfs-disk4
(Even when the pool was built via /dev/sd{letter}, I observed ZFS would NOT auto-online, at least not by default.)

But a disk disappearing while ZFS still reports the pool as healthy doesn't give me confidence that local Proxmox is happy using by-partlabel.
Tried it again, this time removing both the disk4 and disk5 partitions. You can see they are gone here, yet the pool reports a false "healthy".

With that out of the way and the linked GH issue, I do not know what else to say, so I'll let anyone experienced enough come and chip in now. ;)
 

Thank you. I will review the links shared above.

On the flip side, I found the fix for the VMware/Proxmox udev issue.
(VM guest OS type set to Debian 11 x64)

VMX edit: disk.EnableUUID = "TRUE"
https://kb.vmware.com/s/article/52815
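
For reference, this is roughly what the change looks like (a sketch; the VM has to be powered off, and the verification command is just one way to confirm the serials now show up):

Code:
# add to the VM's .vmx file (or via Advanced VM options) while the VM is off:
disk.EnableUUID = "TRUE"

# after boot, the by-id / wwn symlinks should be populated:
ls -l /dev/disk/by-id/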

Gonna destroy this pool and re-create it with by-id, then see how Proxmox behaves with SCSI path changes.

Code:
root@LAB-SMPM-GRUB:/dev/disk/by-id# ls -lA /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root  9 Sep 24 13:43 ata-VMware_Virtual_IDE_CDROM_Drive_00000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root  9 Sep 24 13:43 scsi-36000c290d4253290d19782652c8f3973 -> ../../sde
lrwxrwxrwx 1 root root 10 Sep 24 13:44 scsi-36000c290d4253290d19782652c8f3973-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c290d4253290d19782652c8f3973-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 scsi-36000c29274a30276127d92d2466d49e5 -> ../../sda
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c29274a30276127d92d2466d49e5-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c29274a30276127d92d2466d49e5-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c29274a30276127d92d2466d49e5-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Sep 24 14:09 scsi-36000c2966a28d1950274c2a04886197e -> ../../sdf
lrwxrwxrwx 1 root root 10 Sep 24 14:09 scsi-36000c2966a28d1950274c2a04886197e-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Sep 24 14:09 scsi-36000c2966a28d1950274c2a04886197e-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 scsi-36000c296d15138ab7851ef6153b53581 -> ../../sdd
lrwxrwxrwx 1 root root 10 Sep 24 13:44 scsi-36000c296d15138ab7851ef6153b53581-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c296d15138ab7851ef6153b53581-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 scsi-36000c29a27441442868f6b82240c7840 -> ../../sdc
lrwxrwxrwx 1 root root 10 Sep 24 13:44 scsi-36000c29a27441442868f6b82240c7840-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c29a27441442868f6b82240c7840-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 scsi-36000c29a340f7a50fd87e898375ab3f1 -> ../../sdb
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c29a340f7a50fd87e898375ab3f1-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c29a340f7a50fd87e898375ab3f1-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Sep 24 13:43 scsi-36000c29a340f7a50fd87e898375ab3f1-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Sep 24 14:09 scsi-36000c29b44c78fa08efe88439e8e6bfb -> ../../sdg
lrwxrwxrwx 1 root root 10 Sep 24 14:09 scsi-36000c29b44c78fa08efe88439e8e6bfb-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Sep 24 14:09 scsi-36000c29b44c78fa08efe88439e8e6bfb-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 wwn-0x6000c290d4253290d19782652c8f3973 -> ../../sde
lrwxrwxrwx 1 root root 10 Sep 24 13:44 wwn-0x6000c290d4253290d19782652c8f3973-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c290d4253290d19782652c8f3973-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 wwn-0x6000c29274a30276127d92d2466d49e5 -> ../../sda
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c29274a30276127d92d2466d49e5-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c29274a30276127d92d2466d49e5-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c29274a30276127d92d2466d49e5-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Sep 24 14:09 wwn-0x6000c2966a28d1950274c2a04886197e -> ../../sdf
lrwxrwxrwx 1 root root 10 Sep 24 14:09 wwn-0x6000c2966a28d1950274c2a04886197e-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Sep 24 14:09 wwn-0x6000c2966a28d1950274c2a04886197e-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 wwn-0x6000c296d15138ab7851ef6153b53581 -> ../../sdd
lrwxrwxrwx 1 root root 10 Sep 24 13:44 wwn-0x6000c296d15138ab7851ef6153b53581-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c296d15138ab7851ef6153b53581-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 wwn-0x6000c29a27441442868f6b82240c7840 -> ../../sdc
lrwxrwxrwx 1 root root 10 Sep 24 13:44 wwn-0x6000c29a27441442868f6b82240c7840-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c29a27441442868f6b82240c7840-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Sep 24 13:43 wwn-0x6000c29a340f7a50fd87e898375ab3f1 -> ../../sdb
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c29a340f7a50fd87e898375ab3f1-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c29a340f7a50fd87e898375ab3f1-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Sep 24 13:43 wwn-0x6000c29a340f7a50fd87e898375ab3f1-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Sep 24 14:09 wwn-0x6000c29b44c78fa08efe88439e8e6bfb -> ../../sdg
lrwxrwxrwx 1 root root 10 Sep 24 14:09 wwn-0x6000c29b44c78fa08efe88439e8e6bfb-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Sep 24 14:09 wwn-0x6000c29b44c78fa08efe88439e8e6bfb-part9 -> ../../sdg9
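
For anyone following along, re-creating the pool against by-id names would look roughly like this (a sketch only; the wwn-0x... names are placeholders, substitute the actual IDs from the listing above):

Code:
zpool destroy zfs-raid10
zpool create zfs-raid10 \
  mirror /dev/disk/by-id/wwn-0xAAAA /dev/disk/by-id/wwn-0xBBBB \
  mirror /dev/disk/by-id/wwn-0xCCCC /dev/disk/by-id/wwn-0xDDDD \
  spare  /dev/disk/by-id/wwn-0xEEEE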
 
For anyone interested:

Creating the ZFS pool with by-id is working. It immediately detects an offline disk and the spare resilvers. Additionally, when attaching the disk back, it now automatically brings the original disk back online without needing zpool online.

Using by-id also keeps the pool intact when moving the disks around.

Before, the pairs were
sdc + sdd
sde + sdf

After the switcheroo they're
sdd + sde
sdf + sdc

But it doesn't matter!

Code:
root@LAB-SMPM-GRUB:~# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 6.18M in 00:00:00 with 0 errors on Sun Jul 21 01:26:19 2024
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0

errors: No known data errors

  pool: zfs-raid10
 state: ONLINE
  scan: resilvered 348K in 00:00:01 with 0 errors on Tue Sep 24 14:38:46 2024
config:

        NAME                                        STATE     READ WRITE CKSUM
        zfs-raid10                                  ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            wwn-0x6000c2928abc90bad21d94682c8d22f5  ONLINE       0     0     0
            wwn-0x6000c299ee1d8f2c761e95fc7fc80a96  ONLINE       0     0     0
          mirror-1                                  ONLINE       0     0     0
            wwn-0x6000c295750124ac87147f544b376bd3  ONLINE       0     0     0
            wwn-0x6000c29eb758af7f491cb3153a003ec7  ONLINE       0     0     0
        spares
          wwn-0x6000c292877b7c2748d74302da67b2ac    AVAIL  

errors: No known data errors
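
To double-check which /dev/sd{letter} each wwn currently resolves to after the shuffle, the symlinks can be resolved right in the status output:

Code:
# -L resolves symlinks to the underlying devices, -P prints full paths
zpool status -LP zfs-raid10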


Found the one-liner command I was looking for in the OP. It has the SCSI ID, /dev/sd.., and disk ID all in one output:


Code:
lsscsi --scsi_id --wwn

[0:0:0:0]    cd/dvd  NECVMWar VMware IDE CDR00 1.00                                  /dev/sr0   -
[2:0:0:0]    disk    VMware   Virtual disk     2.0   0                                   /dev/sda   36000c29274a30276127d92d2466d49e5
[2:0:1:0]    disk    VMware   Virtual disk     2.0   0                                   /dev/sdb   36000c29a340f7a50fd87e898375ab3f1
[2:0:2:0]    disk    VMware   Virtual disk     2.0   0                                   /dev/sdc   36000c29eb758af7f491cb3153a003ec7
[2:0:3:0]    disk    VMware   Virtual disk     2.0   0                                   /dev/sdd   36000c2928abc90bad21d94682c8d22f5
[2:0:4:0]    disk    VMware   Virtual disk     2.0   0                                   /dev/sde   36000c299ee1d8f2c761e95fc7fc80a96
[2:0:5:0]    disk    VMware   Virtual disk     2.0   0                                   /dev/sdf   36000c295750124ac87147f544b376bd3
[2:0:6:0]    disk    VMware   Virtual disk     2.0   0                                   /dev/sdg   36000c292877b7c2748d74302da67b2ac
 
Thanks for sharing! I can only apologise for suggesting the partlabels originally. In fact I am now convinced that any symlink which does not refer to an entire disk suffers from this (e.g. wwn....-part1 as well).

However, if you examine the answers in the GH issue [1] and the docs [2], even the by-id names are not as great as I would have expected, at least not for autoreplace.

I figured, at least going by the docs [3], the "most resilient" way is then to use /etc/zfs/vdev_id.conf.

[1] https://github.com/openzfs/zfs/issues/14559
[2] https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#selecting-dev-names-when-creating-a-pool-linux
[3] https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#setting-up-the-etc-zfs-vdev-id-conf-file
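
On the autoreplace point specifically: as far as I understand it, whether ZFS automatically replaces a device that shows up in the same physical location is governed by a pool property that defaults to off, so it is worth checking regardless of the label question:

Code:
zpool get autoreplace zfs-raid10
zpool set autoreplace=on zfs-raid10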
 
Thanks for sharing! I can only apologise for suggesting the partlabels originally. In fact I am now convinced that any symlink which does not refer to an entire disk suffers from this (e.g. wwn....-part1 as well).

However, if you examine the answers in the GH issue [1] and the docs [2], even the by-id names are not as great as I would have expected, at least not for autoreplace.

I figured, at least going by the docs [3], the "most resilient" way is then to use /etc/zfs/vdev_id.conf.

[1] https://github.com/openzfs/zfs/issues/14559
[2] https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#selecting-dev-names-when-creating-a-pool-linux
[3] https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#setting-up-the-etc-zfs-vdev-id-conf-file

Thank you. Eager to learn more ZFS; I will be reading up on all your links.

No worries about by-partlabel: it's all good learning with throw-away zpools :)

Hmm, wwn-part1 issues too, eh?

I destroyed the by-partlabel pool and its partitioned disks, set up new vdisks, and created the pool using just the wwn device names, no partitions; zpool create made partitions 1 + 9 on its own.

The behaviour I observed with the wwn pool was:
* Disks can be moved around with no conflict when they get detected as a different /dev/sd{letter}.
* On detaching a disk, the pool immediately went degraded and the spare immediately started resilvering (same as with the /dev/sd{letter} pool, but not with by-partlabel).
* Unlike the /dev/sd{letter} pool, upon re-attaching the disk the wwn-defined pool immediately brought the disk back online and the spare went back to being a spare. I did not have any new data though, so if it resilvered on re-attach, it was so fast that by the time I ran zpool status it was online already; I didn't see any resilvering. I'm not sure why the behaviour differs here, i.e. why the disk auto re-joined the pool online without requiring a manual zpool online like the /dev/sd{letter} pool did in my tests yesterday.
 
Thank you. Eager to learn more ZFS; I will be reading up on all your links.

No worries about by-partlabel: it's all good learning with throw-away zpools :)

Hmm, wwn-part1 issues too, eh?

I meant anything that refers to a partition only, i.e. it's not the type of label but what it refers to that determines the behaviour, except ...

* Unlike the /dev/sd{letter} pool, upon re-attaching the disk the wwn-defined pool immediately brought the disk back online and the spare went back to being a spare.

I can only speculate this is some sort of safety feature: if there is suddenly a new sdX device on the system, there's literally nothing one can assume about it. It could be the same disk in the same place as before, a different device on a different path, or the same device that previously showed up under a different sdX. It's completely undefined what it is, so it would be prudent not to try to do anything with it automatically.

After all, the OpenZFS docs openly say /dev/sdX references are good for development/testing only.
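
For an existing pool that was created with /dev/sdX references, switching it over does not need a re-creation as far as I know; it is just an export followed by an import that is pointed at the right directory:

Code:
zpool export zfs-raid10
zpool import -d /dev/disk/by-id zfs-raid10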
 
Thanks for sharing! I can only apologise for suggesting the partlabels originally. In fact I am now convinced that any symlink which does not refer to an entire disk suffers from this (e.g. wwn....-part1 as well).

However, if you examine the answers in the GH issue [1] and the docs [2], even the by-id names are not as great as I would have expected, at least not for autoreplace.

I figured, at least going by the docs [3], the "most resilient" way is then to use /etc/zfs/vdev_id.conf.

[1] https://github.com/openzfs/zfs/issues/14559
[2] https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#selecting-dev-names-when-creating-a-pool-linux
[3] https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#setting-up-the-etc-zfs-vdev-id-conf-file

YES! I like it! I will now always be using /etc/zfs/vdev_id.conf.
Made up a rough template.
I also like that on import only /dev/disk/by-vdev has to be specified and ZFS does the rest.
Code:
# by-vdev
# name fully qualified or base name of device link


# *** SPARES CONFIG ***
# S0 Model = *** , Org Path = pci-0000:03:00.0-scsi-0:0:6:0
alias S0 /dev/disk/by-id/scsi-36000c292877b7c2748d74302da67b2ac
#alias S0 /dev/disk/by-id/wwn-0x6000c292877b7c2748d74302da67b2ac


# *** MIRROR CONFIG ***

# Pool = zfs-raid10
# Mirror0

# A0 Model = *** , Org Path = pci-0000:03:00.0-scsi-0:0:3:0
alias A0 /dev/disk/by-id/scsi-36000c2928abc90bad21d94682c8d22f5
#alias A0 /dev/disk/by-id/wwn-0x6000c2928abc90bad21d94682c8d22f5

# B0 Model = *** , Org Path = pci-0000:03:00.0-scsi-0:0:4:0
alias B0 /dev/disk/by-id/scsi-36000c299ee1d8f2c761e95fc7fc80a96
#alias B0 /dev/disk/by-id/wwn-0x6000c299ee1d8f2c761e95fc7fc80a96

# Pool = zfs-raid10
# Mirror1

# A1 Model = *** , Org Path = pci-0000:03:00.0-scsi-0:0:5:0
alias A1 /dev/disk/by-id/scsi-36000c295750124ac87147f544b376bd3
#alias A1 /dev/disk/by-id/wwn-0x6000c295750124ac87147f544b376bd3

# B1 Model = *** , Org Path = pci-0000:03:00.0-scsi-0:0:2:0
alias B1 /dev/disk/by-id/scsi-36000c29eb758af7f491cb3153a003ec7
#alias B1 /dev/disk/by-id/wwn-0x6000c29eb758af7f491cb3153a003ec7

# END Pool = zfs-raid10

# *** RAIDZ2 CONFIG ***

# Pool = zfs-raidz2
# vdev0

# RZ2-A0 Model = *** . Org Path =
#alias RZ2-A0

# RZ2-A1 Model = *** . Org Path =
#alias RZ2-A1

# RZ2-A2 Model = *** . Org Path =
#alias RZ2-A2

# RZ2-A3 Model = *** . Org Path =
#alias RZ2-A3

# RZ2-A4 Model = *** . Org Path =
#alias RZ2-A4

# RZ2-A5 Model = *** . Org Path =
#alias RZ2-A5

# Pool = zfs-raidz2
# vdev1

# RZ2-B0 Model = *** . Org Path =
#alias RZ2-B0

# RZ2-B1 Model = *** . Org Path =
#alias RZ2-B1

# RZ2-B2 Model = *** . Org Path =
#alias RZ2-B2

# RZ2-B3 Model = *** . Org Path =
#alias RZ2-B3

# RZ2-B4 Model = *** . Org Path =
#alias RZ2-B4

# RZ2-B5 Model = *** . Org Path =
#alias RZ2-B5

# END Pool = zfs-raidz2
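
For completeness, the sequence after editing /etc/zfs/vdev_id.conf is roughly this, going by the FAQ linked above (a sketch; exact steps may differ per setup):

Code:
# regenerate the /dev/disk/by-vdev symlinks from vdev_id.conf
udevadm trigger
ls -l /dev/disk/by-vdev/

# import by pointing ZFS at the by-vdev directory; it resolves the rest
zpool import -d /dev/disk/by-vdev zfs-raid10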
Code:
  pool: zfs-raid10
 state: ONLINE
  scan: resilvered 528K in 00:00:00 with 0 errors on Tue Sep 24 16:35:10 2024
config:

        NAME          STATE     READ WRITE CKSUM
        zfs-raid10    ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            A0        ONLINE       0     0     0
            B0        ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            A1        ONLINE       0     0     0
            B1        ONLINE       0     0     0
        spares
          S0          AVAIL

Yes, and I understood the issue you shared is specific to defining a partition rather than a device.
Now my curiosity is with the Proxmox default ZFS rpool, which does use partitions sda3 + sdb3, and GRUB isn't redundantly synced.
I did play with breaking rpool a few months ago and used proxmox-boot-tool to repair GRUB after swapping in a new disk.
Need to revisit this again.
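
From memory, the boot-side repair after swapping a boot disk goes roughly like this with proxmox-boot-tool (a sketch only; the new disk's partition layout and the rpool mirror attach have to be restored first, and the 512M ESP is the sdX2 partition in the default layout):

Code:
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2
proxmox-boot-tool status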

I'm not clear yet on Proxmox with active mounts. I had 3 datasets added via the Proxmox UI.
After defining the new vdevs I attempted to export the pool. The zpool export command went through without error, but Proxmox immediately re-imported / re-mounted it. I had 1 VM-associated disk on a zvol dataset, and the VM wasn't running, so I tried removing just that 1 dataset. Still the same. I had to remove all 3 datasets, VMs, containers, and ISOs via the Proxmox UI. Only then did the exported pool stay exported, and I could re-import it with the new vdev names.
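
What might have avoided the dataset/VM removal dance (an assumption on my part, I have not re-tested it) is disabling the corresponding PVE storage entries first, so Proxmox stops re-activating the pool, then exporting:

Code:
# storage IDs as defined under Datacenter -> Storage (names here are hypothetical)
pvesm set zfs-raid10-store --disable 1
zpool export zfs-raid10
# ... re-import under the new vdev names, then re-enable:
zpool import -d /dev/disk/by-vdev zfs-raid10
pvesm set zfs-raid10-store --disable 0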

This is all a bit weird to me. vSphere won't let you even think of removing a datastore until every resource has been unassociated.
It's also strange to me that a VM acts like all is well even though its disk is missing, unlike vSphere's (disconnected) / (inaccessible) states.

Bahh, ZFS + Proxmox + Ceph + ... there are just so many items I want to address for myself before saying goodbye to VMware for good. Gonna be a moment... I was using VMware Server 2 back in ~2008, and every ESXi version since has run in my home and in the businesses I've supported.

The virtualization concepts remain, but it feels like Proxmox just does everything a bit differently, and more difficult / cumbersome.
Example: detach a disk from a VM, then attach that existing disk to a different VM.
In Proxmox: set the existing disk as unused on the old VM, leave the GUI for the CLI, zfs rename the volume to the target VM ID, run qm rescan, add the now-unused disk on the new VM, be 100% certain all is well, then delete the remaining orphaned unused-disk entry on the old VM.
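
For anyone searching later, the CLI part of that dance looks roughly like this (VM IDs and disk names are made up for the example):

Code:
# volume currently belongs to VM 101 on the zfs-raid10 storage, already detached/unused
zfs rename zfs-raid10/vm-101-disk-0 zfs-raid10/vm-102-disk-0
qm rescan --vmid 102
# the volume now appears as an unused disk on VM 102 and can be attached in the UI;
# finally remove the stale unused-disk entry left on VM 101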

With vSphere: remove the disk without deleting it, then edit the other VM and add the existing disk. No CLI... 2 intuitive steps as opposed to something like 6 unintuitive ones. (IMO)

As much as I might disagree with how some things are done in Proxmox, I do believe it and open source are my future, so I need to learn it.
I don't trust MS Hyper-V not to pull some Broadcom-equivalent BS like a SaaS subscription and/or Azure-cloud-only in the not-too-distant future.
 