How to add a new disk to a ZFS pool?

It is possible, but can you please post some info about your current pools and how you would like to add new disks? This way we can avoid any potentially disastrous misunderstanding :)
 
I have installed Proxmox on a ZFS RAID mirror of 2 x 250 GB NVMe => first pool
I have created a second ZFS pool as a RAID mirror of 2 x 1.6 TB SSD => second pool
I have created a third ZFS pool as a raidz1 of 4 x 2.4 TB HDD => third pool
 
So which pool would you like to add new disks to and in what manner?

The output of zpool status would also be nice :)
 
You can add new drives to your pools, but how to do it and what the limitations are depend on the actual pool setup.
For example, adding disks to your first pool is complicated because ZFS is only used on one partition, and things like the bootloader partitions need to be copied manually.
For the second pool you can only add drives in pairs, so you would need to add an additional mirror to get a 4-disk striped mirror (see the example below).
For the third pool you could add single disks, but that will only increase the capacity and not the performance. If you want the full performance and capacity, you would have to destroy and recreate that raidz1 pool.
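
As a rough sketch for the second pool (the pool name and the /dev/disk/by-id paths are placeholders here, use your actual pool name and the IDs of the two new SSDs):

# "zpool add" creates a new vdev; here a second mirror, giving a 4-disk striped mirror
zpool add <second-pool> mirror /dev/disk/by-id/<new-ssd-1> /dev/disk/by-id/<new-ssd-2>

Note that this is different from "zpool attach", which adds a disk to an existing mirror vdev instead of creating a new one.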
 
Understood.
One more question, please: when I reinstall Proxmox on the first pool's disks, the new installation will not detect the second and third pools.
How do I get all pools detected again after the new installation?
 
You need to import your pools first (zpool import) and then add these pools as ZFS storage to your PVE (in the webUI under: Datacenter -> Storage -> Add -> ZFS).
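
Roughly like this on the CLI (the pool names are just placeholders for your second and third pool):

zpool import                    # list pools that are available for import
zpool import -f <second-pool>   # -f may be needed if the pool was not exported before the reinstall
zpool import -f <third-pool>

Instead of the webUI you could also add the storage with something like "pvesm add zfspool <storage-id> --pool <poolname>", but the webUI way described above does the same thing.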
 
Don't forget to back up your /etc folder. Your VM/LXC disks on the other ZFS pools are useless without the VM/LXC config files in "/etc/pve/qemu-server" and "/etc/pve/lxc".
 
Yes, and I would also back it up on a regular basis in case your system disk suddenly dies.
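
For example with a simple daily cron job; a minimal sketch, where /mnt/backup-target is just a placeholder for wherever you keep your backups:

# archive /etc (including /etc/pve with the guest configs) with a date stamp
tar -czf /mnt/backup-target/pve-etc-$(date +%F).tar.gz /etc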
 
So, a follow-up to this question. This is for my homelab, not a production site.
My Proxmox is a 1 TB single-disk ZFS install. Can I add a second 1 TB HDD in order to make this a "real" RAID-1 system? Performance is not an issue here.

root@proxmox:~# zpool status

pool: rpool
state: ONLINE
config:

        NAME                                               STATE     READ WRITE CKSUM
        rpool                                              ONLINE       0     0     0
          nvme-eui.00000000000000000026b778550a4b05-part3  ONLINE       0     0     0

errors: No known data errors

root@proxmox:~# df -h

Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.5M 3.2G 1% /run
rpool/ROOT/pve-1 848G 1.9G 846G 1% /
tmpfs 16G 49M 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
rpool 846G 128K 846G 1% /rpool
rpool/data 846G 128K 846G 1% /rpool/data
rpool/ROOT 846G 128K 846G 1% /rpool/ROOT
rpool/data/subvol-112-disk-0 8.0G 557M 7.5G 7% /rpool/data/subvol-112-disk-0
rpool/data/subvol-102-disk-0 8.0G 476M 7.6G 6% /rpool/data/subvol-102-disk-0
rpool/data/subvol-111-disk-0 5.0G 450M 4.6G 9% /rpool/data/subvol-111-disk-0
/dev/fuse 128M 32K 128M 1% /etc/pve
nasbox:/proxmox 1.8T 732G 1015G 42% /nfs/nasbackup
nasbox:/proxmox 1.8T 732G 1015G 42% /mnt/pve/NAS2
tmpfs 3.2G 0 3.2G 0% /run/user/0

root@proxmox:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 53.8G 845G 96K /rpool
rpool/ROOT 1.90G 845G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.90G 845G 1.90G /
rpool/data 51.8G 845G 112K /rpool/data
rpool/data/subvol-102-disk-0 475M 7.54G 475M /rpool/data/subvol-102-disk-0
rpool/data/subvol-111-disk-0 450M 4.56G 450M /rpool/data/subvol-111-disk-0
rpool/data/subvol-112-disk-0 557M 7.46G 557M /rpool/data/subvol-112-disk-0
rpool/data/vm-100-disk-0 252K 845G 112K -
rpool/data/vm-100-disk-1 2.76G 845G 2.47G -
rpool/data/vm-100-state-snapshot-01 173M 845G 173M -
rpool/data/vm-100-state-snapshot-02 346M 845G 346M -
rpool/data/vm-100-state-snapshot-03 468M 845G 468M -
rpool/data/vm-100-state-snapshot-04 487M 845G 487M -
rpool/data/vm-101-disk-0 176K 845G 108K -
rpool/data/vm-101-disk-1 2.60G 845G 2.55G -
rpool/data/vm-101-state-snapshot-01 485M 845G 485M -
rpool/data/vm-101-state-snapshot-02 501M 845G 501M -
rpool/data/vm-103-disk-0 376K 845G 160K -
rpool/data/vm-103-disk-1 26.9G 845G 20.6G -
rpool/data/vm-103-disk-2 408K 845G 84K -
rpool/data/vm-103-state-snapshot-01 3.40G 845G 3.40G -
rpool/data/vm-103-state-snapshot-02 2.79G 845G 2.79G -
rpool/data/vm-103-state-snapshot-03 2.02G 845G 2.02G -
rpool/data/vm-103-state-snapshot-04 2.56G 845G 2.56G -
rpool/data/vm-900-disk-0 88K 845G 88K -
rpool/data/vm-900-disk-1 2.42G 845G 2.42G -
rpool/data/vm-901-disk-0 96K 845G 96K -
rpool/data/vm-901-disk-1 2.45G 845G 2.45G -
 
So now I have the second disk installed, the partition table cloned and the UUIDs randomised according to the guide.
This is how it looks now (sda is the newly added disk):

root@proxmox:~# fdisk -l /dev/nvme0n1 /dev/sda
Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: KINGSTON SNV2S1000G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8EF1489A-8587-4274-8477-867F75F2A9C6

Device Start End Sectors Size Type
/dev/nvme0n1p1 34 2047 2014 1007K BIOS boot
/dev/nvme0n1p2 2048 2099199 2097152 1G EFI System
/dev/nvme0n1p3 2099200 1953525134 1951425935 930.5G Solaris /usr & Apple ZFS


Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10JPVT-22A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 11986398-55BE-40E3-B9AE-9CBB0B31242B

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 2099199 2097152 1G EFI System
/dev/sda3 2099200 1953525134 1951425935 930.5G Solaris /usr & Apple ZFS

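(For reference, the cloning and GUID randomisation were roughly these two commands from the guide, with /dev/nvme0n1 as the existing disk and /dev/sda as the new one; treat this as a sketch from memory:)

sgdisk /dev/nvme0n1 -R /dev/sda   # replicate the partition table onto the new disk
sgdisk -G /dev/sda                # randomise all GUIDs on the new disk
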
Here is the final command (and error message). What am I missing?

root@proxmox:~# zpool attach -f rpool /dev/nvme0n1p3 /dev/sda3
cannot attach /dev/sda3 to /dev/nvme0n1p3: no such device in pool
 
Edit: Found out that one has to use the GUID for the existing pool device and the physical device name for the new disk.
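
So, as a sketch of what worked for me (the GUID has to be looked up first; alternatively, the nvme-eui... name exactly as printed by zpool status should also work):

zpool status -g rpool                        # shows vdev GUIDs instead of device names
zpool attach -f rpool <guid-of-nvme-part3> /dev/sda3

Per the guide, the new disk also still needs the bootloader set up afterwards, roughly "proxmox-boot-tool format /dev/sda2" followed by "proxmox-boot-tool init /dev/sda2" (check the admin guide for your boot mode).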

So now it seems to be on the way ...

root@proxmox:~# zpool status rpool
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sat Sep 2 16:39:00 2023
53.8G scanned at 0B/s, 455M issued at 75.6M/s, 53.8G total
461M resilvered, 0.82% done, 00:12:03 to go
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.00000000000000000026b778550a4b05-part3  ONLINE       0     0     0
            sda                                              ONLINE       0     0     0  (resilvering)
 
