Migrate host from mirror disk to mirror bigger disk

magickarle (New Member), Mar 13, 2023
First: thanks for taking the time.
The support forum is great help!

I have been scratching my head on how to do this.

I have 2 NVMe drives (2 TB each) in a ZFS RAID 1 (mirror).
They contain the host and the VMs.

I'm at 90% full (yes, I did a cleanup, but with the snapshots it fills up quickly).
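For anyone in the same spot, a quick way to see where the space went (a sketch, assuming the default Proxmox pool name rpool):

```shell
# List snapshots sorted by the space each one holds exclusively:
zfs list -t snapshot -o name,used -s used

# Overall pool usage; the CAP column is the fill percentage:
zpool list rpool
```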

My motherboard doesn't have 4 NVMe slots, so I bought a 4x NVMe PCIe adapter.

I've physically moved the original 2TB drives to the PCIe adapter and installed the 2x 4TB on the motherboard.

Then I used Clonezilla to copy all the disks and partitions to the 4TB drives.

Then rebooted, but obviously it doesn't work.

I think part of the issue is that the 4TB drives were not initially set up as a ZFS RAID 1 mirror.

And GRUB needs to be reconfigured to point to the new 4TB drives.

Am I on the right track here?
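Not the thread author's exact steps, but the usual ZFS route for this migration avoids Clonezilla entirely: copy the partition layout onto a new disk, swap it into the mirror, and reinstall the boot loader. A sketch, assuming /dev/nvme0n1 is an old 2TB mirror member and /dev/nvme2n1 is a new 4TB disk (adjust the device names to your system):

```shell
# Replicate the GPT layout of the old disk onto the new one,
# then randomize the new disk's GUIDs so the two tables don't collide:
sgdisk /dev/nvme0n1 -R /dev/nvme2n1
sgdisk -G /dev/nvme2n1

# Swap the new disk's ZFS partition into the mirror;
# ZFS resilvers the data onto it:
zpool replace rpool /dev/nvme0n1p3 /dev/nvme2n1p3

# Make the new disk bootable (Proxmox VE with proxmox-boot-tool):
proxmox-boot-tool format /dev/nvme2n1p2
proxmox-boot-tool init /dev/nvme2n1p2
```

Repeating this for the second disk, one at a time and waiting for each resilver to finish, keeps the pool redundant throughout.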
 
Hmm, after resilvering, not all partitions match the original left NVMe:
zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 229G in 00:09:34 with 0 errors on Sat Nov 11 21:55:50 2023
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b4e088f65-part3  ONLINE       0     0     0
            nvme-Samsung_SSD_990_PRO_4TB_S7KGNJ0W934515X     ONLINE       0     0     0

errors: No known data errors
root@pve:~# gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.9

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 3907029168 sectors, 1.8 TiB
Model: WD_BLACK SN850X 2000GB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 1FED9309-696B-4E61-A660-65315930C6EB
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 8-sector boundaries
Total free space is 3277883534 sectors (1.5 TiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1                34            2047  1007.0 KiB  EF02
   2              2048         1050623  512.0 MiB   EF00
   3           1050624       629145600  299.5 GiB   BF01
root@pve:~# gdisk -l /dev/nvme1n1
GPT fdisk (gdisk) version 1.0.9

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme1n1: 7814037168 sectors, 3.6 TiB
Model: Samsung SSD 990 PRO 4TB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 7FDADE0B-0D91-B141-93C2-D6EC3F0BA172
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3693 sectors (1.8 MiB)

Number  Start (sector)    End (sector)  Size     Code  Name
   1              2048      7814019071  3.6 TiB  BF01  zfs-2073f4dea8be8cd7
   9        7814019072      7814035455  8.0 MiB  BF07
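The output above shows the 4TB was added to the pool as a whole disk, so ZFS auto-created its own partitions (1 and 9) and the disk has no boot/ESP partitions. One way to redo it with the matching Proxmox layout, sketched with the device and vdev names from the status output above (double-check them against your own `zpool status` before running anything):

```shell
# Drop the whole-disk member from the mirror (the pool stays up on the old disk):
zpool detach rpool nvme-Samsung_SSD_990_PRO_4TB_S7KGNJ0W934515X

# Copy the Proxmox partition layout from the old disk, then randomize GUIDs:
sgdisk /dev/nvme0n1 -R /dev/nvme1n1
sgdisk -G /dev/nvme1n1

# Re-attach, using partition 3 this time:
zpool attach rpool nvme-eui.e8238fa6bf530001001b448b4e088f65-part3 /dev/nvme1n1p3

# And make the new disk bootable:
proxmox-boot-tool format /dev/nvme1n1p2
proxmox-boot-tool init /dev/nvme1n1p2
```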
 
OK, so I've managed to resync the partition sizes.
It fully resilvered, and I was able to do the same with the other 4TB drive.

So now I'm at a fully working Proxmox.

I'm trying to expand the 3rd partition (since my new drives are bigger).
I was able to do it in parted, and the partitions now show the new size.
zpool list shows 3.35T under EXPANDSZ.
I've activated autoexpand and tried to do a zpool online -e rpool,
but I need to supply the <device>.
I've tried nvme2n2 and nvme2n2p3 (and the same with nvme3n3 and nvme3n2p3).
I still get 3.35T under EXPANDSZ, and FREE is not increased.
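In case it helps others: `zpool online -e` expects the vdev name exactly as `zpool status` prints it (the by-id name in this pool), not a bare /dev node. A sketch using the names from this thread (assumed, so verify against your own status output):

```shell
# Grow partition 3 to the end of the disk first (per disk):
parted /dev/nvme1n1 resizepart 3 100%

# Then ask ZFS to claim the new space:
zpool set autoexpand=on rpool
zpool online -e rpool nvme-Samsung_SSD_990_PRO_4TB_S7KGNJ0W934515X

# EXPANDSZ should drop and SIZE/FREE grow:
zpool list rpool
```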
 
Ah, I had to scrub rpool!
It's re-resilvering, lol.
And... finally the full size is available to the pool.

Case closed
 
