[SOLVED] How to clone a 2x500gb drives ZFS mirror zpool to a larger 2x1tb drives mirror?

verulian

Well-Known Member
I have a situation where things got out of control on a standalone PVE system with a 500gb SSD as ZFS storage. I need to somehow pull the drive and clone it to a larger 1tb drive for the time being, as a stopgap measure in a time-critical situation. How can this be done so that the size is scaled up from 500gb to 1tb? I fear a sector-by-sector clone is not sufficient...
 
I'm still a bit worried/concerned about this situation and have been hoping someone who has previously encountered it might be able to chime in.
 
Hi,
You can use the "zfs send" and "zfs receive" commands. This will allow you to transfer the data from your current ZFS drive to the new larger drive, while also scaling the size up to 1tb.
Here are the steps to do this:
  1. Attach the new 1tb drive to your PVE system and make sure it's recognized by the system.
  2. Determine the name of your current ZFS pool and decide on a name for the new pool on the 1tb drive. You can use the "zpool list" command to see the names of your existing pools.
  3. Run the following command to create a snapshot of the current ZFS drive:
    Code:
    zfs snapshot -r <pool-name>@<snapshot-name>
  4. Create a new pool on the 1tb drive (with "zpool create"), then use the "zfs send" command to send the snapshot data to it:
    Code:
    zfs send -R <pool-name>@<snapshot-name> | zfs receive -F <larger-drive-pool-name>
  5. Once the data has been transferred and the size has been scaled up, you can use the "zpool list" command to verify that the new 1tb drive is being used as your ZFS storage.
  6. If everything looks good, you can then detach the old 500gb drive and use the larger 1tb drive as the new ZFS storage.
I hope this helps. Keep in mind that this process may take some time depending on the amount of data on the 500gb drive. It's also a good idea to back up any important data before proceeding with the cloning process. Let me know if you have any further questions.
 
In case that single ZFS disk is also your boot drive you would also need to clone the partition table first and write a new bootloader to it, similar to what is described in the paragraph "Changing a failed boot device": https://pve.proxmox.com/wiki/ZFS_on_Linux#_zfs_administration
When using zfs send | zfs recv to copy the datasets, make sure to do that from a bootable Live Linux ISO with ZFS support (like Ubuntu) and not while your PVE is running. You might also need to change the pool name and mountpoints, otherwise you might run into problems with identical pool names and identical mountpoints when working with the datasets.
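For example, a minimal sketch of that approach from the live ISO could look like this, assuming the partition table was already cloned as described (so the ZFS partition is -part3), and using placeholder names ("newpool", the by-id paths) and temporary altroots under /mnt to avoid mountpoint clashes:
Code:
# all names and paths are placeholders - adapt before running anything
zpool import -f -N -R /mnt/old rpool                                             # import the old pool without mounting it
zpool create -f -R /mnt/new newpool /dev/disk/by-id/ata-NEW_1TB_DISK_ID-part3    # new pool under a different name
zfs snapshot -r rpool@migrate                                                    # recursive snapshot of the old pool
zfs send -R rpool@migrate | zfs receive -u -F newpool                            # replicate everything; -u keeps it unmounted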

In case you can use both disks at the same time, there would also be another option. You could clone the partition table and write the bootloader to the new disk as linked above. Then you could delete the third partition of the new disk and create a new third partition that uses the full 1TB. And then you could add that third partition of the new 1TB disk to the existing pool to form a mirror. See the zpool attach command: https://openzfs.github.io/openzfs-docs/man/8/zpool-attach.8.html
After the resilvering has finished you get a mirror where both disks are bootable, but the capacity is still 500GB. Next you could remove the 500GB disk from the mirror with the zpool detach command: https://openzfs.github.io/openzfs-docs/man/8/zpool-detach.8.html. Then you get a single-disk 1TB pool. With the "autoexpand" pool option set to "on", the pool should then grow from 500GB to 1TB in size.
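A rough sketch of that attach/detach route with placeholder device paths (adapt the by-id names to your disks):
Code:
zpool set autoexpand=on rpool                                                                    # let the pool grow once all devices are bigger
zpool attach rpool /dev/disk/by-id/ata-OLD_500GB_ID-part3 /dev/disk/by-id/ata-NEW_1TB_ID-part3   # form the mirror
zpool status rpool                                                                               # wait until resilvering has finished
zpool detach rpool /dev/disk/by-id/ata-OLD_500GB_ID-part3                                        # then drop the old 500GB disk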

But it's best to clone that 500 GB disk with Clonezilla first, so you have a backup you can restore in case you screw something up.
 
Thanks guys, I guess I'm still a little unclear here, and maybe I can get some additional guidance by providing some more details:

This initial setup is a few years old and I forgot that I had set it up as a mirrored bootable root device.

I have two 500gb drives in my bootable zpool "rpool". I need to upgrade this to two 1tb drives that will replace these. Initially I was thinking about using `zpool attach` but then realized maybe I should use `zpool replace`, since I see that this is mentioned in @Dunuin's shared link. I also started having questions about why the rpool mirror-0 devices are using "-part3"...

So this led to more questions / worry that maybe I need to do something to prepare the drive for bootability.

The existing pool that I want to upgrade looks like this from `zpool status`:
Code:
 $ zpool status -v rpool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:31:11 with 0 errors on Sun Nov 13 00:55:12 2022
config:

    NAME                                       STATE     READ WRITE CKSUM
    rpool                                      ONLINE       0     0     0
      mirror-0                                 ONLINE       0     0     0
        ata-CT500MX500SSD1_123456789a-part3    ONLINE       0     0     0
        ata-CT500MX500SSD1_123456789b-part3    ONLINE       0     0     0

So if I pursue the section linked above, https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_change_failed_dev, on "Changing a failed bootable device", it recommends using sgdisk /dev/disk/by-id/ata-CT500MX500SSD1_123456789a --replicate=/dev/disk/by-id/ata-NEW_DISK_ID to clone the GPT info...

WHICH WOULD INCLUDE drive size, etc.

Following that, you then do -G to randomize the GUIDs, and then it goes on into the zpool replace step.

I suppose another option could be to use the next one with proxmox-boot-tool format, but I realized I don't really have a properly formatted drive coming in (one was used on Windows briefly).

As I thought about how to format it properly and create the right partition mapping to facilitate this second proxmox-boot-tool format option, I also realized that with the first suggestion of using sgdisk to clone it, I may end up in a scenario where the new larger drive will never actually be able to use the full drive space: when I did that, the new 1tb drive was only showing up as a 500gb drive since the GPT data was cloned.

What is the proper way to handle this? Should I format the drive first on another system such as Ubuntu, with a certain size scheme and number of volumes in GParted, or some other way? I'm just not clear on what Proxmox does to create volumes initially and how it structures them, so that I can then properly proceed with cloning the 3rd volume/partition in the zpool replace step with the appropriate -part3 postfix...
 
In addition to my last reply, things have become even more frustrating/confusing in that the documentation is not clear as to whether proxmox-boot-tool init should be used instead of, or in conjunction with and after, sgdisk. Again, the size differential and creating extra wear on the drive are my concerns, and I hate to do another clone if I fail to clone this properly and size it up.

For example:
Code:
sgdisk /dev/disk/by-id/ata-CT500MX500SSD1_123456789a --replicate=/dev/disk/by-id/ata-DESTINATION_DRIVE_ID
sgdisk -G /dev/disk/by-id/ata-DESTINATION_DRIVE_ID

# should I do proxmox-boot-tool too or instead of?
# or should I just proceed with this?:
zpool replace -f <pool> <old zfs partition> <new zfs partition>

# and/or when this completes or is working, should I do the noted proxmox-boot-tool or
#     is this unnecessary?
# and/or how and when will the drive show up as larger - will ZFS simply update the
#     size of the overall mirror volume after I get the second 1tb drive mirrored over
#     to replace /dev/disk/by-id/ata-CT500MX500SSD1_123456789a ?
 
Here's my $.02.

Install 'ncdu' or run the following to see if there is anything obvious taking up space that you can delete... This may buy you some time.

Code:
# du / | sort -nr | head -99 | numfmt --from-unit=1024 --to=iec

Then get the 'hrmpf' rescue boot iso...

https://github.com/leahneukirchen/hrmpf

Write it to a flash drive after formatting it, and boot off it...

https://lobotuerto.com/notes/format-a-usb-drive-as-fat32-in-linux

Add a single new ssd drive to the system and verify it's there with...
Code:
# lsblk -f -o +tran,model

Clone one of the existing SSDs to the new one with ddrescue (included on the hrmpf iso):

Code:
# ddrescue -d -r3 /dev/nvme0n1 /dev/nvme2n1 ~/nvme_clone_20221211.log

My example uses nvme device addresses but you get the idea.

Boot off the new ssd drive and expand the pool to use the whole disk.

https://www.kringles.org/linux/zfs/vmware/2015/02/10/linux-zfs-resize.html

(You may need to do the 'expand' step from the rescue iso, not sure.)

I'm not sure on the next steps re: adding a second drive to the mirror, but at least this gets you immediately out of the jam.
 
I have two 500gb drives in my bootable zpool "rpool". I need to upgrade this to two 1tb drives that will replace these. Initially I was thinking about using `zpool attach` but then realized maybe I should use `zpool replace` since I see that this is mentioned in @Dunuin's shared link.
Jup, then you basically follow that link and first replace one 500GB SSD with a 1TB SSD. Resilver it, and then do the same again for the second 500GB disk.
So if I pursue the section as linked above: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_change_failed_dev on the "Changing a failed bootable device" it recommends using sgdisk /dev/disk/by-id/ata-CT500MX500SSD1_123456789a --replicate=/dev/disk/by-id/ata-NEW_DISK_ID to clone the GPT info...

WHICH WOULD INCLUDE drive size, etc.

Following that you then do -G to randomize the UUID properly and then it goes on into the zpool replace
You can clone that partition table. The 3rd partition is then of course not using the full SSD. So you could remove that 3rd partition (using fdisk, sgdisk or parted) and create a new third partition that uses the whole unallocated disk space before running the zpool replace command.
is not clear as to whether proxmox-boot-tool init should be used instead of or in conjunction with and after sgdisk
Yes, in conjunction and after sgdisk:
1.) clone partition table with sgdisk
2.) destroy 3rd partition and create a new bigger 3rd partition
3.) use "zpool replace" on third partition
4a.) either use "proxmox-boot-tool" to sync the systemd bootloader in case your server is running UEFI with CSM disabled, or
4b.) use "grub-install" in case your server is using BIOS, or UEFI with CSM enabled, so it boots via GRUB
 
Thanks @Dunuin - so to confirm, you're saying the steps should be something like the following (unfortunately I will need you to help me with one of them because I'm not finding a good example):
Steps:

1. sgdisk /dev/disk/by-id/ata-CT500MX500SSD1_123456789a --replicate=/dev/disk/by-id/ata-DESTINATION_DRIVE_ID

2. sgdisk -G /dev/disk/by-id/ata-DESTINATION_DRIVE_ID

3. somehow destroy + recreate /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part3 - how do I do this? For example:
3.1 maybe: sgdisk -d 3 /dev/disk/by-id/ata-DESTINATION_DRIVE_ID (it doesn't seem to start from 0 as in enumerating them 0, 1, 2 - but instead 1, 2, 3)

3.2 CONFUSED - how do I create the properly sized partition at this point that starts and stops at the correct sectors?

4. zpool replace -f rpool ata-CT500MX500SSD1_123456789a-part3 /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part3

5. proxmox-boot-tool status reports:
Code:
        Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
        System currently booted with uefi
        mkdir: cannot create directory '/var/tmp/espmounts/E442-FB73': No space left on device
        creation of mountpoint /var/tmp/espmounts/E442-FB73 failed - skipping
        WARN: /dev/disk/by-uuid/E443-B706 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping

So I assume this is UEFI and not CSM with GRUB, and I would continue as:

6. proxmox-boot-tool format /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2

7. proxmox-boot-tool init /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2
 
1. sgdisk /dev/disk/by-id/ata-CT500MX500SSD1_123456789a --replicate=/dev/disk/by-id/ata-DESTINATION_DRIVE_ID

2. sgdisk -G /dev/disk/by-id/ata-DESTINATION_DRIVE_ID
jup
somehow destroy + recreate /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part3 - how do I do this? For example:
3.1 maybe: sgdisk -d 3 /dev/disk/by-id/ata-DESTINATION_DRIVE_ID (it doesn't seem to start from 0 as in enumerating them 0, 1, 2 - but instead 1, 2, 3)
3.2 CONFUSED - how do I create the properly sized partition at this point that starts and stops at the correct sectors?
I personally did it with fdisk, which has a UI (a non-interactive sgdisk alternative is sketched after these steps):
1.) run fdisk /dev/disk/by-id/ata-DESTINATION_DRIVE_ID
2.) use "d" to delete a partition. Select the 3rd one (should be default).
3.) use "n" to create a new partition. It will create a third partition, choose the right start and end sectors when you press enter. So it will make a new 3rd partition that will use the whole empty space.
4.) use "w" to write the changes to disk
4. zpool replace -f rpool ata-CT500MX500SSD1_123456789a-part3 /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part3
I think it should work, but better to use this: zpool replace -f rpool /dev/disk/by-id/ata-CT500MX500SSD1_123456789a-part3 /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part3
mkdir: cannot create directory '/var/tmp/espmounts/E442-FB73': No space left on device
Your root filesystem is already 100% full? You should fix that first by deleting unneeded data, otherwise it will be read-only and stuff might fail.
What does df -h report?

So I assume this is UEFI and not CSM with GRUB, and I would continue as:

6. proxmox-boot-tool format /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2

7. proxmox-boot-tool init /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2
jup
 
Thank you very much @Dunuin - so yes, df reports:
Code:
 $ df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                           48G     0   48G   0% /dev
tmpfs                         9.5G   62M  9.4G   1% /run
rpool/ROOT/pve-1               13G   13G     0 100% /
tmpfs                          48G     0   48G   0% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
rpool                         128K  128K     0 100% /rpool
rpool/ROOT                    128K  128K     0 100% /rpool/ROOT
rpool/data                    256K  256K     0 100% /rpool/data
rpool/data/subvol-211-disk-0   57G   57G     0 100% /rpool/data/subvol-211-disk-0
rpool/data/subvol-212-disk-0  636M  636M     0 100% /rpool/data/subvol-212-disk-0
rpool/data/subvol-214-disk-0   49G   49G     0 100% /rpool/data/subvol-214-disk-0
rpool/data/subvol-213-disk-0  9.0G  9.0G     0 100% /rpool/data/subvol-213-disk-0
rpool/data/subvol-215-disk-0  1.2G  1.2G     0 100% /rpool/data/subvol-215-disk-0
rpool/data/subvol-216-disk-0  8.6G  8.6G     0 100% /rpool/data/subvol-216-disk-0
rpool/data/subvol-217-disk-0  1.2G  1.2G     0 100% /rpool/data/subvol-217-disk-0
tmpfs                         9.5G     0  9.5G   0% /run/user/0

The really vexing problem is that I don't have any LXC or QEMU VM guests that I can remove here, and they are what consumes all this space. Someone else who uses the server apparently had to create one final, important LXC just a few weeks ago, which has grown past its limit and has simultaneously become a critical component of a current project.
 
Try to find out what is using all that space. You can for example run find / -type f -printf '%s %p\n'| sort -nr | head -30 to list the 30 biggest files.

A good place to start freeing up space would be to make sure there are no failed uploads in "/var/tmp". These should be called "pveupload-*" and can be deleted. Another option would be to delete some old logs (for example all logs ending with a ".1", ".2", ".3" and so on) in "/var/log".
And you can delete some unneeded packages with apt autoremove.
You also might want to delete some ISOs, container templates or backups that also consume space on your root filesystem.
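A rough sketch of those cleanup steps (the paths and patterns are only examples, double-check what you delete):
Code:
rm -v /var/tmp/pveupload-*                    # failed uploads
rm -v /var/log/*.[0-9] /var/log/*.[0-9].gz    # old rotated logs, keep the current ones
apt autoremove                                # unneeded packages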

And you are using ZFS, so all VMs and LXCs will share the same space with the root filesystem. So you could make space for the root filesystem by removing VMs/LXCs.
Your pool is 100% full and this is really bad! A ZFS pool shouldn't be filled more than 80%, as it is a copy-on-write filesystem that always needs a lot of free space to be able to operate optimally. So I would first try to delete snapshots, in case you created some. If not, you could delete a VM or LXC for which you have a recent backup, so losing it isn't a big problem.

For the future I would recommend setting some quotas. In case of a 1TB pool, for example, a pool-wide quota of 90% (so for example zfs set quota=900G rpool), so you can never completely fill up your pool by accident. Then a 32G quota for the root filesystem (so zfs set quota=32G rpool/ROOT) and a 90% - 32G quota for the VM/LXC storage (so for example zfs set quota=868G rpool/data).

And you should monitor your ZFS pools daily, so you can delete stuff or add more disks as soon as a pool gets close to 80% usage.
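For example, a minimal daily check could look like this sketch (the pool name, the 80% threshold and the mail command are just examples, adapt them to your monitoring setup):
Code:
#!/bin/sh
# e.g. saved as /etc/cron.daily/zpool-capacity-check
CAP=$(zpool list -H -o capacity rpool | tr -d '%')    # current pool usage in percent
if [ "$CAP" -ge 80 ]; then
    echo "rpool is ${CAP}% full" | mail -s "ZFS pool capacity warning" root
fi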
 
Wow, LOL, I guess that's not going to work out since it's even giving an error on that @Dunuin:
Code:
$ find / -type f -printf '%s %p\n'| sort -nr | head -30
sort: cannot create temporary file in '/tmp': No space left on device

But this seems to work well enough:
Code:
find / -type f -size +1G -exec ls -lh {} \; -printf "%p\n"

I have found a couple of ISOs that were left behind in /var/lib/vz/template/iso that I didn't expect to see, so I've gained about 20gb now, which at least gives some wiggle room...
 
Then I would first start to back up and remove some VMs/LXCs until you are below 80% pool usage.
 
Thanks, well I've gotten it as low as I can get it: 90%. I'm going to now have to proceed with the migration to the larger drives.
 
I ended up hitting two problems that I'll explain near the end of this message - one is especially important and I need some help on it, since it seems that the zpool is not resizing...

Initially I ran into a bit of a snag, in that I screwed up last night before understanding the step of using fdisk to obliterate and recreate partition #3 on the drive.

So I inserted the other 1tb drive I was going to use next and just repeated the first step of the process with it:
Code:
# get existing working mirror entry or entries:
zpool status

# do not yet plug in the drive and get a list of disks:
ls /dev/disk/by-id/ata-*

# then plug the drive in and get the list again so as to discover which drive you're adding:
ls /dev/disk/by-id/ata-*

# copy GPT structure of the existing bootable drive to new drive:
sgdisk /dev/disk/by-id/ata-SOURCE_DRIVE_ID --replicate=/dev/disk/by-id/ata-DESTINATION_DRIVE_ID

# randomize UID for new drive:
sgdisk -G /dev/disk/by-id/ata-DESTINATION_DRIVE_ID

# delete and create the 3rd partition:
fdisk /dev/disk/by-id/ata-DESTINATION_DRIVE_ID
# d 3               // delete partition 3
# n                 // make a new partition
# 3                 // should be default, but be sure to pick partition 3
# <default>         // select default start if it's at the right starting sector
# <default>         // select default end if it's at the end sector so as to use the entire space
# p                 // this step views the partition table as a sanity check - look closely to be sure you're good with it...
# w                 // write or save the partition table now and it will auto-exit.

# now replace the desired drive (either old and missing / failed or mistake or add new)
zpool replace -f rpool /dev/disk/by-id/ata-SCREWED_UP_DESTINATION_or_FAILING_or_SMALL_DRIVE_ID-part3 /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part3

# check status again to see it resilvering the new drive:
zpool status

# now since i had messed up and did not resize partition 3 before i had started the previous mirror replacement, i shall do that now and this simply lists out its partitions first:
ls -a /dev/disk/by-id/ata-SCREWED_UP_DESTINATION_DRIVE_ID*

# these steps are the same as earlier so i won't explain - i'm simply replacing the original 500gb working drive with the other 1tb drive that i'd initially messed up on the 3rd partition resizing to make sure i get 1tb:
sgdisk /dev/disk/by-id/ata-SOURCE_DRIVE_ID --replicate=/dev/disk/by-id/ata-SCREWED_UP_DESTINATION_DRIVE_ID
sgdisk -G /dev/disk/by-id/ata-SCREWED_UP_DESTINATION_DRIVE_ID
fdisk /dev/disk/by-id/ata-SCREWED_UP_DESTINATION_DRIVE_ID
zpool replace -f rpool /dev/disk/by-id/ata-SOURCE_DRIVE_ID-part3 /dev/disk/by-id/ata-SCREWED_UP_DESTINATION_DRIVE_ID-part3

This all seemed to work great. So I went on to the step of being sure that they're bootable, and so I executed:
Code:
proxmox-boot-tool format /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2
proxmox-boot-tool init /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2

The above gave reasonable output:
Code:
 $ proxmox-boot-tool format /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdh" MOUNTPOINT=""
Formatting '/dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.

 $ proxmox-boot-tool init /dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="E202-4877" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdh" MOUNTPOINT=""
Mounting '/dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2' on '/var/tmp/espmounts/E202-4877'.
Installing systemd-boot..
Created "/var/tmp/espmounts/E202-4877/EFI/systemd".
Created "/var/tmp/espmounts/E202-4877/EFI/BOOT".
Created "/var/tmp/espmounts/E202-4877/loader".
Created "/var/tmp/espmounts/E202-4877/loader/entries".
Created "/var/tmp/espmounts/E202-4877/EFI/Linux".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/E202-4877/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/E202-4877/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/E202-4877/loader/random-seed successfully written (512 bytes).
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2'.
Adding '/dev/disk/by-id/ata-DESTINATION_DRIVE_ID-part2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Copying and configuring kernels on /dev/disk/by-uuid/E202-4877
    Copying kernel and creating boot-entry for 5.11.22-7-pve
    Copying kernel and creating boot-entry for 5.15.64-1-pve
    Copying kernel and creating boot-entry for 5.15.74-1-pve
WARN: /dev/disk/by-uuid/E442-FB73 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
Copying and configuring kernels on /dev/disk/by-uuid/E443-B706
    Copying kernel and creating boot-entry for 5.11.22-7-pve
    Copying kernel and creating boot-entry for 5.15.64-1-pve
    Copying kernel and creating boot-entry for 5.15.74-1-pve

So then I thought I could proceed with the following:
Code:
proxmox-boot-tool format /dev/disk/by-id/ata-SCREWED_UP_DESTINATION_DRIVE_ID-part2
proxmox-boot-tool init /dev/disk/by-id/ata-SCREWED_UP_DESTINATION_DRIVE_ID-part2

However this did not result in the same kind of output:
Code:
 $ proxmox-boot-tool format /dev/disk/by-id/ata-SECOND_DESTINATION_DRIVE_IN_MIRROR_ID-part2
UUID="C48A-4C17" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdl" MOUNTPOINT="/media/sdl2"
E: '/dev/disk/by-id/ata-ata-SECOND_DESTINATION_DRIVE_IN_MIRROR_ID-part2' is mounted on '/media/sdl2' - exiting.

 $ proxmox-boot-tool init /dev/disk/by-id/ata-SECOND_DESTINATION_DRIVE_IN_MIRROR_ID-part2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="C48A-4C17" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdl" MOUNTPOINT="/media/sdl2"

I wasn't sure if this was right, so I rebooted and found that the first drive "ata-DESTINATION_DRIVE_ID" was able to boot, but the second "ata-SECOND_DESTINATION_DRIVE_IN_MIRROR_ID" did not boot. I ended up having to unplug the first drive, reboot yet again and then replug one of the old earlier 500gb drives into the system and then perform the operations again on the "ata-SECOND_DESTINATION_DRIVE_IN_MIRROR_ID".

Is this normal or should I do something different so that BOTH drives in the mirror boot zpool get updated regularly for kernel updates, etc???

Also, and perhaps even more important, after all of this effort I'm not seeing the drive size update for the zpool mirror. For example, when I do zpool list I get:
Code:
$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   464G   353G   111G        -      464G    52%    76%  1.00x    ONLINE  -

I checked and found autoexpand was off initially:
Code:
$ zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default

So I turned it on, along with autoreplace:
Code:
$ zpool set autoreplace=on rpool
$ zpool set autoexpand=on rpool
$ zpool get autoexpand
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local

But nothing is changing even after allowing it to sit for a while. I was very surprised to see my available drive space has somehow gone up to 111gb now and we're at 76% CAPacity, but even with that the SIZE is still reporting at 464gb versus being anywhere close to 1tb...

Any ideas?
 
E: '/dev/disk/by-id/ata-ata-SECOND_DESTINATION_DRIVE_IN_MIRROR_ID-part2' is mounted on '/media/sdl2' - exiting.
I think it complains that the ESP partition is mounted (so in use). I'm not sure how to solve this the best way. I could think of manually unmounting it, or booting a PVE ISO in rescue mode instead of booting from the ESP partition. But I didn't test it myself.

But nothing is changing even after allowing it to sit for a while. I was very surprised to see my available drive space has somehow gone up to 111gb now and we're at 76% CAPacity, but even with that the SIZE is still reporting at 464gb versus being anywhere close to 1tb...
Did you already replace both 500GB disks? If one of them is still 500GB the autoexpand won't work. Did you try to reboot the server so PVE has to import the pool again? If that doesn't work you can instruct ZFS to expand it with this command: zpool online -e YourPoolName /dev/disk/by-id/FirstDiskOfMirror-part3 /dev/disk/by-id/SecondDiskOfMirror-part3
 
@Dunuin, just booting from each drive with the other unplugged made the command work, and so each drive was bootable after that. Very peculiar.

Yes, BOTH drives are now 1tb drives and both drives' partitions were expanded with fdisk and yes, I have rebooted it (multiple times). I went ahead and executed your final suggestion and IT WORKED:
Code:
$ zpool online -e rpool ata-DRIVE_1_ID-part3 ata-DRIVE_2_ID-part3

$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   928G   353G   575G        -         -    26%    38%  1.00x    ONLINE  -

Very much appreciate your instructions and patience.
 
Good to hear. Then make sure not to fill it completely up again by setting some quotas and setting up monitoring (for example zfs-zed and postfix).
Code:
# example values following the earlier 90% / 32G suggestion, for this 928G pool
zfs set quota=835G rpool
zfs set quota=32G rpool/ROOT
zfs set quota=803G rpool/data
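For the zfs-zed mails, the relevant settings are in /etc/zfs/zed.d/zed.rc; a minimal example (the values are placeholders, and postfix or another MTA has to be configured for mail delivery):
Code:
# /etc/zfs/zed.d/zed.rc (excerpt)
ZED_EMAIL_ADDR="root"              # where ZED sends its notifications
ZED_NOTIFY_INTERVAL_SECS=3600      # rate-limit repeated notifications
ZED_NOTIFY_VERBOSE=1               # also report successful events like finished scrubs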
 
