> For the Proxmox installation I would like to use only 40GB from each disk, and this should be set up as a mirrored zpool. The rest of the disk space from the first drive I want to use for logs, and from the second for swap.

Then you can set the "hdsize" parameter during the installation. See https://pve.proxmox.com/pve-docs/pve-admin-guide.html#advanced_zfs_options

Why an extra partition for logs? Using non-redundant swap does not sound like a good idea. What happens if that disk fails?

Also, may I ask what kind of disks (model, manufacturer, ...) you plan to use?
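For example: entering 40 as "hdsize" in the installer's advanced ZFS options should limit the installation to 40GB on each disk and leave the remainder of both disks unpartitioned for later use (see the linked section of the admin guide for the exact dialog).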
> 3. Partition the 4TB to 100GB and leave the rest untouched.

Doesn't matter, as you are wiping the partition table in step 4 anyway to make the second disk bootable. You would have to extend the partitions after doing step 4.
> And just to confirm: I can use the rest of the storage on both NVMEs to build an LVM VG, or for any other usage, right?

Yes.
> I've been reading and reading and can't get this to work.

One way would be to install it in ZFS RAID 0 mode on the smaller disk first. Then, once the system is up and running, follow the "Changing a failed bootable device" procedure in the Proxmox VE Admin Guide.
The only difference: instead of zpool replace, you use zpool attach rpool <already installed disk> <new disk>. You can get the list of the currently used disks with zpool status. When you attach the disk, choose it via its /dev/disk/by-id path. There it is also exposed via manufacturer, model and serial number; having those in the ZFS status output can speed up figuring out which disk has an issue a lot.
In the end you should have a mirrored rpool and the bootloader on both disks, so the system will boot if one fails.
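As a sketch with placeholder names (on disks partitioned by the Proxmox VE installer, the ZFS data lives on partition 3; substitute the IDs that zpool status and /dev/disk/by-id actually show on your system):

# zpool status rpool
# zpool attach rpool <existing-disk-id>-part3 /dev/disk/by-id/<new-disk-id>-part3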
  pool: rpool
 state: ONLINE
config:

        NAME                                               STATE     READ WRITE CKSUM
        rpool                                              ONLINE       0     0     0
          nvme-eui.000000000000000100a0752446a4a08c-part3  ONLINE       0     0     0

errors: No known data errors
nvme-eui.000000000000000100a0752446a4a08c
nvme-eui.000000000000000100a0752446a4a08c-part1
nvme-eui.000000000000000100a0752446a4a08c-part2
nvme-eui.000000000000000100a0752446a4a08c-part3
nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN
nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN_1
nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN_1-part1
nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN_1-part9
nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN-part1
nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN-part9
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C_1
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C_1-part1
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C_1-part2
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C_1-part3
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C-part1
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C-part2
nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C-part3
nvme-nvme.8086-42544c4a393038353038584a31503046474e-494e54454c205353445045324b583031305438-00000001
nvme-nvme.8086-42544c4a393038353038584a31503046474e-494e54454c205353445045324b583031305438-00000001-part1
nvme-nvme.8086-42544c4a393038353038584a31503046474e-494e54454c205353445045324b583031305438-00000001-part9
usb-Generic_Flash_Disk_A38EC09C-0:0
usb-Generic_Flash_Disk_A38EC09C-0:0-part1
Changing a failed bootable device
Depending on how Proxmox VE was installed it is either using systemd-boot or GRUB through proxmox-boot-tool [2] or plain GRUB as bootloader (see Host Bootloader). You can check by running:
# proxmox-boot-tool status
The first steps of copying the partition table, reissuing GUIDs and replacing the ZFS partition are the same. To make the system bootable from the new disk, different steps are needed which depend on the bootloader in use.
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool attach -f <pool> <part3 of existing disk > <part3 of new disk>
Use the zpool status -v command to monitor how far the resilvering process of the new disk has progressed.

With proxmox-boot-tool:

# proxmox-boot-tool format <new disk's ESP (part2)>
# proxmox-boot-tool init <new disk's ESP (part2)> [grub]

ESP stands for EFI System Partition, which is set up as partition #2 on bootable disks set up by the Proxmox VE installer since version 5.4. For details, see Setting up a new partition for use as synced ESP.

Make sure to pass grub as mode to proxmox-boot-tool init if proxmox-boot-tool status indicates your current disks are using GRUB, especially if Secure Boot is enabled!

With plain GRUB:

# grub-install <new disk>

Plain GRUB is only used on systems installed with Proxmox VE 6.3 or earlier, which have not been manually migrated to using proxmox-boot-tool yet.
:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
77EB-8D67 is configured with: uefi (versions: 6.8.4-2-pve)
> I'm assuming this is systemd-boot?

Correct.
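(In the proxmox-boot-tool status output above, the ESP being configured with "uefi" - rather than "grub" - is what indicates systemd-boot.)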
> Do you know why 4 nvme devices are listed when I only have 2 physical disks installed?

Use "ls -la /dev/disk/by-id" instead of your "cd /dev/disk/by-id; ls". An SSD can have multiple IDs pointing to the same device.
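For illustration, hypothetical ls -la output (assuming the Micron is nvme0n1; timestamps and link sizes are made up) showing several by-id aliases resolving to the same kernel device:

# ls -la /dev/disk/by-id/ | grep 'nvme0n1$'
lrwxrwxrwx 1 root root 13 May 21 10:00 nvme-eui.000000000000000100a0752446a4a08c -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 May 21 10:00 nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 May 21 10:00 nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C_1 -> ../../nvme0n1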
> I don't see a part3 on the Intel disk yet. Will that be created with sgdisk?

Yes. It will clone the partition table and both disks will have the same partitions. So you might want to install PVE to the smaller one, so cloning won't fail because of missing space.
root@x:~# sgdisk nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C -R nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN
Problem opening nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C for reading! Error is 2.
The specified file does not exist!
root@x:~# sgdisk nvme0n0 -R nvme1n1
Problem opening nvme0n0 for reading! Error is 2.
The specified file does not exist!
> # sgdisk <healthy bootable device> -R <new device>

Point it to the actual device: sgdisk /dev/disk/by-id/nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C ...
root@rx:~# sgdisk /dev/disk/by-id/nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C -R /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN
The operation has completed successfully.
root@x:~# sgdisk -G /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN
The operation has completed successfully.
root@x:~# zpool attach -f rpool /dev/disk/by-id/nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C-part3 /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN-part3
cannot attach /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN-part3 to /dev/disk/by-id/nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C-part3: no such device in pool
root@x:~# zpool attach -f rpool nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C-part3 nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN-part3
cannot attach nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN-part3 to nvme-Micron_7450_MTFDKCB800TFS_240446A4A08C-part3: no such device in pool
root@x:~# zpool attach -f rpool nvme0n1p3 nvme1n1p3
cannot attach nvme1n1p3 to nvme0n1p3: no such device in pool
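The likely cause: zpool attach needs the existing vdev named exactly as zpool status prints it, and in the status output earlier in this thread that is the nvme-eui.* alias, not the Micron model alias. Something along these lines, built from that output, should work:

# zpool attach -f rpool nvme-eui.000000000000000100a0752446a4a08c-part3 /dev/disk/by-id/nvme-INTEL_SSDPE2KX010T8_BTLJ908508XJ1P0FGN-part3

Afterwards zpool status should show both part3 devices under a mirror-0 vdev while the resilver runs.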
> The lack of documentation on syntax and referencing disks was difficult.

Just read the correct documentation, e.g. familiarize yourself with the man pages (just type man zpool), or read the PVE docs instead of some stuff you read in the forums, Reddit or elsewhere on the internet. Most people writing stuff - as always - don't know what they're saying or writing. They're no experts and "just want to share what they've learned" without understanding the underlying concepts, and just cause trouble.