I searched the forums already, and if someone else has had this issue, the search tool has failed me. I also Googled first thing, since someone else could have made a grievous error such as mine and resolved it. But alas, my search was fruitless.
As per the title, I needed to upgrade the rpool storage size of a secondary server. Normally I would run a ZFS mirror, but in this instance I felt I could get away with just one disk. (I am a foolish mortal, probably.) The old disk is a SATA SSD, the new one is NVMe, going from 128 GB to 512 GB.
Steps taken:
Loaded Ubuntu 24.04
Code:
dd if=/dev/disk/by-id/old_disk of=/dev/disk/by-id/new_disk status=progress
Ctrl+C'd the copy once I knew the first two partitions had been copied.
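In hindsight, instead of eyeballing the progress output and interrupting dd, I could have cloned just the partition table and the first two partitions explicitly. A rough sketch of that safer approach (the by-id paths are placeholders, not my actual disks):

```shell
# Placeholders for the actual /dev/disk/by-id/ paths.
SRC=/dev/disk/by-id/old_disk
DST=/dev/disk/by-id/new_disk

# Clone the GPT from the old disk onto the new one, then randomize
# the disk/partition GUIDs so the two disks don't collide.
sgdisk --replicate="$DST" "$SRC"
sgdisk -G "$DST"

# Copy only the first two partitions, each with a known endpoint,
# so there is nothing to interrupt by hand.
dd if="${SRC}-part1" of="${DST}-part1" bs=1M status=progress
dd if="${SRC}-part2" of="${DST}-part2" bs=1M status=progress
```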
Popped open gparted to delete the last partition and make an unformatted one. Appeared to be successful.
Went to import the rpool, and Ubuntu threw an error about something crashing, but it didn't seem relevant (again, I'm a foolish mortal...).
Set autoexpand on the pool:
Code:
zpool set autoexpand=on rpool
When I did the
Code:
zpool replace rpool original-vdev-by-disk-id-part3 /dev/disk/by-id/new-disk-part3
it appeared from the zpool status output that everything went as planned.
zpool export rpool
Reboot.
However, when I attempt to boot from the new disk, it says
Code:
WARNING: pool rpool has encountered an uncorrectable I/O error and has been suspended.
Notes: The new disk is second-hand, but I'm not seeing any reported errors on it from zpool status or other utilities. The original disk is showing SMART errors; as best as I can decipher them, they're reallocated-sector errors, so I don't know whether that impacted the transfer.
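For reference, this is roughly how I checked the SMART data on the old disk (the device path is a placeholder):

```shell
# /dev/sdX is a placeholder for the actual SATA SSD.
# Overall health self-assessment:
smartctl -H /dev/sdX
# Attribute table, filtered to the sector-health counters
# (Reallocated_Sector_Ct is the one showing errors for me):
smartctl -A /dev/sdX | grep -Ei 'reallocat|pending|uncorrect'
```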
Since I did a full replace, the original disk still has the data on it, but I obviously can't just import it, since the ZFS utilities know I moved the pool off of it. (Unless there is secret sauce I'm not aware of.)
I did make ZFS snapshots and did a full zfs send | zfs receive to an external disk attached to the server, and that also appears to be fine, so I may have that as an option.
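The backup was roughly the following (the backup pool name and snapshot name here are placeholders, not necessarily what I actually typed):

```shell
# Recursive snapshot of the root pool, then a full replicated stream
# to a pool on the external disk. 'backup/rpool-copy' and the
# '@pre-migration' snapshot name are placeholders.
zfs snapshot -r rpool@pre-migration
zfs send -R rpool@pre-migration | zfs receive -uF backup/rpool-copy
```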
So basically I'm hoping I can somehow learn a lesson and recover from this without doing a full rebuild of every VM on this server. I accept I may not have much of a choice, but I can dream, right?
My thoughts are as follows: install PVE to a third disk, completely separate from the two I have so far, another SATA disk (unless I should start with an NVMe, in which case I do have one I can borrow in a pinch). Then try to copy the configs off the borked pool (/etc/pve for sure; I'm not entirely sure what else to snag) and drop them into place on the new install. Then see if simply doing another zfs send/receive back into the rpool/data area would restore the right information, since the VM and LXC IDs shouldn't change.
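A sketch of that restore plan, assuming the external backup pool imports cleanly (the pool name, snapshot name, and the path where I stashed the configs are all placeholders):

```shell
# On the fresh PVE install: import the backup pool read-only first,
# so nothing on it can be damaged while poking around.
zpool import -o readonly=on backup

# Hypothetical path where a copy of /etc/pve was saved; the guest
# configs live under qemu-server/ and lxc/ inside it.
cp -a /backup/rpool-copy/etc-pve-backup/. /etc/pve/

# Stream the guest data back into the root pool; VM/LXC IDs are
# unchanged, so the dataset names should line back up.
zfs send -R backup/rpool-copy/data@pre-migration | zfs receive -F rpool/data
```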
Honestly, any suggestions would be great since I am aware I should have probably mirrored the disk to another one in the interim so I had at least a bootable copy at all times, but I was trusting it to behave... (My favorite phrase: I love computers! They are predictably unpredictable!)