ZFS reconfiguration

Elleni

Active Member
Jul 6, 2020
After having had problems with a faulty NVMe disk, which resulted in an unbootable Proxmox, I got some help from a ZFS-experienced user to recover. We are up and running again, but I would like to ask for some guidance to optimize our current setup. Initially we had installed Proxmox on two 2 TB NVMe disks as a mirror. Then the system became unbootable because the faulty disk produced checksum errors until the system took the pool offline.

We decided to install Proxmox on a separate SSD, so we now have a separate boot pool, rpool, and a data pool, rpool-old.

zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool       236G   947M   235G        -         -     0%     0%  1.00x  ONLINE  -
rpool-old  1.86T  1.06T   822G        -         -    10%    56%  1.00x  ONLINE  -

Today we got the replacement NVMe and brought the mirror containing rpool-old back online, but probably not in the optimal way. The system is in production now, so I must not make any mistakes; I thought I should ask for help here first, before doing something wrong.

zpool status rpool-old reads:

  pool: rpool-old
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:27:12 with 0 errors on Wed Jul 15 23:05:43 2020
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool-old                            ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.0025388201008d4e-part3  ONLINE       0     0     0
            nvme0n1                          ONLINE       0     0     0

errors: No known data errors

I would like to change the following:

- Change the pool name from rpool-old to something more reasonable like rpool-data or simply datapool, though I fear breaking the Proxmox installation.
Is it correct that I would do this with zpool export rpool-old followed by an import under the new pool name, and is this action safe?
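
If I understand the zpool man page correctly, the rename would look roughly like this; a sketch, assuming the new name datapool and that all guests using the pool are stopped first:

# stop all guests using the pool, then export it
zpool export rpool-old
# re-importing with a second name argument renames the pool
zpool import rpool-old datapool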

I probably made an error when replacing the defective disk, because the newly added disk shows up as nvme0n1. How can I change that so it is referenced via /dev/disk/by-id?
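
From what I have read, one way that avoids a resilver would be an export followed by a re-import that scans the by-id directory; a sketch, again assuming the pool can be taken offline briefly:

zpool export rpool-old
# look up the vdevs in /dev/disk/by-id instead of /dev
zpool import -d /dev/disk/by-id rpool-old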

On the disk that was not replaced I also still see the now obsolete partitions, since it is only a data pool now and no longer a boot pool. But I am afraid of deleting something I still need, so I would be glad if someone could guide me through this.

Disk: /dev/nvme1n1
Size: 1.9 TiB, 2048408248320 bytes, 4000797360 sectors
Label: gpt, identifier: C1C6E9FE-E19C-4A56-8A98-33B9F108346E

Device           Start        End    Sectors  Size Type
/dev/nvme1n1p1      34       2047       2014 1007K BIOS boot
/dev/nvme1n1p2    2048    1050623    1048576  512M EFI System
/dev/nvme1n1p3 1050624 4000797326 3999746703  1.9T Solaris /usr & Apple ZFS

Is my assumption correct that I won't need /dev/nvme1n1p1 and /dev/nvme1n1p2, so they can be deleted? Can I do this in cfdisk, or do you recommend another way? And is it worth booting Proxmox from a sysrescue CD to enlarge the partition and regain the space freed by the deleted partitions?
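
If they really are safe to delete, I imagine something like this with sgdisk, assuming nothing boots from this disk anymore:

# delete the BIOS boot (1) and EFI (2) partitions on the old disk
sgdisk --delete=1 --delete=2 /dev/nvme1n1

Since that only frees about 513M, I suspect enlarging p3 afterwards (recreating it at the same start sector and then running zpool online -e on it) is hardly worth the risk.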

The newly installed NVMe disk looks like this:

Disk: /dev/nvme0n1
Size: 1.9 TiB, 2048408248320 bytes, 4000797360 sectors
Label: gpt, identifier: 757A5D97-77F6-6745-B8C5-B066E27C4747

Device              Start        End    Sectors Size Type
/dev/nvme0n1p1       2048 4000780287 4000778240 1.9T Solaris /usr & Apple ZFS
/dev/nvme0n1p9 4000780288 4000796671      16384   8M Solaris reserved 1

zfs list
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
rpool                                      945M   228G   104K  /rpool
rpool-old                                 1.08T   741G  69.2G  /rpool-old
rpool-old/data                             716G   741G    96K  /rpool-old/data
rpool-old/data/vm-100-disk-0              4.18G   741G  3.74G  -
rpool-old/data/vm-100-state-preUpdate     4.63G   745G   859M  -
rpool-old/data/vm-101-disk-0              83.6G   741G  64.0G  -
rpool-old/data/vm-102-disk-0              34.0G   741G  28.5G  -
rpool-old/data/vm-103-disk-0               416G   741G   391G  -
rpool-old/data/vm-103-state-preUpdate     17.0G   752G  6.64G  -
rpool-old/data/vm-104-disk-0              8.30G   741G  6.23G  -
rpool-old/data/vm-104-state-preUpdate     8.76G   748G  2.32G  -
rpool-old/data/vm-105-disk-0              86.1G   741G  17.8G  -
rpool-old/data/vm-106-disk-0              4.30G   741G  1.92G  -
rpool-old/data/vm-121-disk-0              33.5G   741G  21.5G  -
rpool-old/data/vm-121-state-pre_sysprep   3.07G   741G  3.07G  -
rpool-old/data/vm-121-state-pre_sysprep2  2.63G   741G  2.63G  -
rpool-old/data/vm-121-state-pre_sysprep3  2.39G   741G  2.39G  -
rpool-old/data/vm-121-state-pre_sysprep4  3.90G   741G  3.90G  -
rpool-old/data/vm-121-state-pre_sysprep5  2.75G   741G  2.75G  -
rpool-old/dateien                          318G   741G   318G  /rpool-old/dateien
rpool/ROOT                                 942M   228G    96K  /rpool/ROOT
rpool/ROOT/pve-1                           942M   228G   942M  /
rpool/data                                  96K   228G    96K  /rpool/data

ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Jul 15 22:29 ata-hp_PLDS_DVDRW_DU8AESH_XHDFPDYCPY72 -> ../../sr0
lrwxrwxrwx 1 root root 9 Jul 15 22:29 ata-INTEL_SSDSC2KW256G8_BTLA7385062G256CGN -> ../../sda
lrwxrwxrwx 1 root root 10 Jul 15 22:29 ata-INTEL_SSDSC2KW256G8_BTLA7385062G256CGN-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 15 22:29 ata-INTEL_SSDSC2KW256G8_BTLA7385062G256CGN-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 15 22:29 ata-INTEL_SSDSC2KW256G8_BTLA7385062G256CGN-part3 -> ../../sda3
lrwxrwxrwx 1 root root 13 Jul 15 22:29 nvme-eui.0025388201008d4e -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-eui.0025388201008d4e-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-eui.0025388201008d4e-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-eui.0025388201008d4e-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 13 Jul 15 22:29 nvme-eui.002538899106c2bc -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-eui.002538899106c2bc-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-eui.002538899106c2bc-part9 -> ../../nvme0n1p9
lrwxrwxrwx 1 root root 13 Jul 15 22:29 nvme-SAMSUNG_MZVLB2T0HALB-000H1_S4J0NE0N200230 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-SAMSUNG_MZVLB2T0HALB-000H1_S4J0NE0N200230-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-SAMSUNG_MZVLB2T0HALB-000H1_S4J0NE0N200230-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-SAMSUNG_MZVLB2T0HALB-000H1_S4J0NE0N200230-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 13 Jul 15 22:29 nvme-SAMSUNG_MZVLB2T0HMLB-000H1_S424NY0M902359 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-SAMSUNG_MZVLB2T0HMLB-000H1_S424NY0M902359-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jul 15 22:29 nvme-SAMSUNG_MZVLB2T0HMLB-000H1_S424NY0M902359-part9 -> ../../nvme0n1p9
lrwxrwxrwx 1 root root 9 Jul 15 22:29 wwn-0x55cd2e414dfbc5ab -> ../../sda
lrwxrwxrwx 1 root root 10 Jul 15 22:29 wwn-0x55cd2e414dfbc5ab-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 15 22:29 wwn-0x55cd2e414dfbc5ab-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 15 22:29 wwn-0x55cd2e414dfbc5ab-part3 -> ../../sda3

Apparently two partitions, nvme0n1p1 and nvme0n1p9, were created on the new disk?

I think it would be recommended to use whole disks instead of partitions for my data pool, but how can this be changed without data loss? And what would be the correct steps to fix my setup? I would be very glad for some help with my first steps with ZFS, as I cannot afford to lose data on our production system.

Sorry if these questions sound trivial, but after the shock of the last few days, when I was afraid the data was gone, I am careful not to do something stupid, since I am still relatively new to Proxmox and to ZFS in particular.

After renaming rpool-old, will it be enough to correct the data-zfs and the directory dateien entries under Datacenter/Storage?
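
I assume those entries live in /etc/pve/storage.cfg and, after a rename to datapool, might look roughly like this (the content lines are just my guess; the actual entries depend on the current configuration, and with default mountpoints /rpool-old/dateien would become /datapool/dateien after the import):

zfspool: data-zfs
        pool datapool/data
        content images,rootdir

dir: dateien
        path /datapool/dateien
        content iso,backup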

Thanks in advance for any help; I would really appreciate it.
 
Anyone?

Would it be possible to remove one disk from the mirror and re-add it by-id, and then do the same with the other, partition-based member? What would be the correct steps for this? And does all of that work without wiping the disks?

I would like to try the following, but would need confirmation in order not to lose data:

- zpool remove rpool-old nvme0n1 && zpool add rpool-old nvme-eui.002538899106c2bc
- wait for the resilver to complete, then
- zpool remove rpool-old nvme-eui.0025388201008d4e-part3 && zpool add rpool-old nvme-eui.0025388201008d4e
- zpool export rpool-old && zpool import rpool-old new-poolname

Finally, re-enter the entries in the Proxmox UI under Datacenter/Storage.
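
Re-reading the zpool man page, I wonder whether the first two steps should use detach/attach rather than remove/add, since add would create a new top-level vdev (a stripe) instead of a mirror member. A sketch of what I think the mirror-safe variant looks like (unverified, corrections welcome; I realize each detach temporarily leaves the mirror with a single member):

# swap the short device name for its by-id path (same physical disk)
zpool detach rpool-old nvme0n1
zpool attach rpool-old nvme-eui.0025388201008d4e-part3 /dev/disk/by-id/nvme-eui.002538899106c2bc
# after the resilver finishes: swap the partition-based member for the whole disk
zpool detach rpool-old nvme-eui.0025388201008d4e-part3
zpool attach rpool-old nvme-eui.002538899106c2bc /dev/disk/by-id/nvme-eui.0025388201008d4e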

Thanks in advance for any thoughts/confirmation.
 
