Backup+upgrade - confirmation for novice

  1. I have a running PVE 7.4 on a Dell R820 which boots & runs from a mirrored pair of 10k SAS drives on an HBA (IT mode).
    1. rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
  2. I have a few LXCs & VMs which are on a PCIe+NVMe RaidZ1 (3+1) array. (/z1n/CTs /z1n/VMs /z1n/subvol-2001-disk-0 etc.)
    1. z1n on /z1n type zfs (rw,xattr,noacl)
    2. This is unfortunately commercial-quality equipment - fast enough, but not entirely safe - so I plan to replace it as described below.
  3. The machine has sufficient CPU/RAM to easily handle the load
  • I have snapshots of each/all VMs/LXCs
  • I also have .zst backups (/var/lib/vz/dump/*) of the VMs/LXCs and have made additional copies of those to a separate machine/location as extra safety.
  • I have backups of my environment via scripts from here (and have read the warnings about new installs - this is an extra safety measure)
I have managed to get some SSDs to upgrade the machine and I just want to confirm a few things.
I want to install a new PVE8.1 onto the SSDs - also in a mirrored rpool.

My questions...
  1. I will remove all SAS drives temporarily, and install a fresh 8.1 on the new SSD rpool (mirror)... (unless there is a better way?)
  2. Is it safe to assume that once I restore the /etc/pve data and import the /z1n pool, the VMs & LXCs will be back to their "normal" running state?
  3. If that fails for any reason, is it correct to say I could recover that particular LXC by restoring the backed-up zst file (e.g. /var/lib/vz/dump/vzdump-lxc-2001-2024_01_11-18_16_19.tar.zst)?
  4. Once the PVE8.1 is running, I will add additional ZFS SSD enterprise storage to house the VM's etc.
    1. I think a ZFS10 is correct here (2 striped vdevs : 2+2), but I would like to be sure - so any advice is welcome.
    2. Should I use a mirrored stripe or a striped mirror (if that makes any difference).
    3. Once this is up and running and tested, I will migrate the VMs to this ZFS... I assume I should use pvesm to export and import?
Any other tips will be great.

Thanks
 
I will remove all SAS drives temporarily, and install a fresh 8.1 on the new SSD rpool (mirror)... (unless there is a better way?)
You didn't say anything about disk sizes. If those SSDs are the same size as or bigger than your HDDs, you could clone everything at block level from the HDDs to the SSDs and then upgrade PVE from 7 to 8. If the SSDs are bigger, this would also require extending the partitions and pools if you want to make use of the additional space.
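For what it's worth, a rough sketch of what that clone-and-grow route could look like, assuming the SSDs are at least as large as the HDDs. This swaps the disks one at a time and lets ZFS resilver rather than doing a raw block copy; device names and partition numbers are placeholders based on the usual PVE disk layout, so check yours first.

  # copy the partition layout from a healthy rpool disk (sdX) to the new SSD (sdY), then randomize GUIDs
  sgdisk /dev/sdX -R /dev/sdY
  sgdisk -G /dev/sdY
  # swap the ZFS partition and wait for the resilver to finish before touching the next disk
  zpool replace -f rpool /dev/sdX3 /dev/sdY3
  # make the new disk bootable again (proxmox-boot-tool setups)
  proxmox-boot-tool format /dev/sdY2
  proxmox-boot-tool init /dev/sdY2
  # once every disk is swapped and the SSDs are bigger, let the pool grow into the extra space
  zpool set autoexpand=on rpool
  zpool online -e rpool /dev/sdY3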

But yes, installing a fresh PVE 8.1 and restoring guests is probably the easiest solution unless you heavily customized your PVE via CLI.

Is it safe to assume that once I restore the /etc/pve data and import the /z1n pool, the VMs & LXCs will be back to their "normal" running state?
Not to their running state, no. If you did the backup while the guests were running in snapshot mode, it's more like starting the guest after a power outage with a hard reset. If you used stop mode, or the guests were stopped while you backed them up, it would be like starting a guest after a proper shutdown. So I would shut down those guests, create the backups, restore them, and only then start the guests again.
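If you go that route, a stop-mode backup from the CLI could look roughly like this (the storage name "local" and the IDs are only examples):

  # back up container 2001 in stop mode with zstd compression
  vzdump 2001 --mode stop --compress zstd --storage local
  # the same command works for a VM, just with its VMID
  vzdump 101 --mode stop --compress zstd --storage local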

But yes, once the pool is imported and the new PVE is set up identically (VM/LXC config files, storage.cfg, ...) those VMs/LXCs should be able to start without restoring them from backup.
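As a hedged sketch of that "set up identically" step on the fresh install - the backup paths are placeholders for wherever you copied the old /etc/pve, and -f is only needed because the pool was last imported by the old installation:

  # import the data pool that holds the guest disks
  zpool import -f z1n
  # put back the storage definitions and guest configs from the old /etc/pve
  cp /path/to/backup/etc/pve/storage.cfg /etc/pve/storage.cfg
  cp /path/to/backup/etc/pve/lxc/*.conf /etc/pve/lxc/
  cp /path/to/backup/etc/pve/qemu-server/*.conf /etc/pve/qemu-server/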

If that fails for any reason, is it correct to say I could recover that particular LXC by restoring the backed-up zst file (e.g. /var/lib/vz/dump/vzdump-lxc-2001-2024_01_11-18_16_19.tar.zst)?
Yes.
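For reference, restoring that particular container from the dump could look like this - the target storage ID ("z1n") is an assumption, use whatever the pool is called in your storage.cfg:

  pct restore 2001 /var/lib/vz/dump/vzdump-lxc-2001-2024_01_11-18_16_19.tar.zst --storage z1n
  # VMs are restored the same way with: qmrestore <archive> <vmid> --storage <target>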

  1. I think a ZFS10 is correct here (2 striped vdevs : 2+2), but I would like to be sure - so any advice is welcome.
  2. Should I use a mirrored stripe or a striped mirror (if that makes any difference).
Not fully sure what you mean. There is only raid0 (stripe), raid1 (mirror), or raid10 (striped mirror). For VM storage you usually want raid10, i.e. a striped mirror: two 2-disk mirror vdevs that are then striped.
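Built by hand it would look roughly like this - pool name and disk paths are placeholders, and the PVE installer/webUI creates the same layout when you pick RAID10:

  zpool create -o ashift=12 tank \
      mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2 \
      mirror /dev/disk/by-id/ssd-3 /dev/disk/by-id/ssd-4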

Once this is up and running and tested, I will migrate the VMs to this ZFS... I assume I should use pvesm to export and import?
The webUI has a "Move disk" button you can use to move a virtual disk between storages.
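If you prefer the CLI, recent PVE versions have equivalents; roughly (IDs and the storage name are placeholders):

  qm move-disk 101 scsi0 new-ssd-zfs        # VM: move disk scsi0 to the storage called new-ssd-zfs
  pct move-volume 2001 rootfs new-ssd-zfs   # container: move its rootfs volume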
 
You didn't say anything about disk sizes. If those SSDs are the same size as or bigger than your HDDs, you could clone everything at block level from the HDDs to the SSDs and then upgrade PVE from 7 to 8. If the SSDs are bigger, this would also require extending the partitions and pools if you want to make use of the additional space.
The boot SSDs are unfortunately smaller (480 vs 900), but a fresh install feels like a good idea. I have done minimal PVE/CLI customization, and whatever I did I have scripted and can recreate.
But yes, installing a fresh PVE 8.1 and restoring guests is probably the easiest solution unless you heavily customized your PVE via CLI.
No problem to shut these down and then do backups/snapshots so they are fresh/current
Not to their running state, no. If you did the backup while the guests were running in snapshot mode, it's more like starting the guest after a power outage with a hard reset. If you used stop mode, or the guests were stopped while you backed them up, it would be like starting a guest after a proper shutdown. So I would shut down those guests, create the backups, restore them, and only then start the guests again.

But yes, once the pool is imported and the new PVE is set up identically (VM/LXC config files, storage.cfg, ...) those VMs/LXCs should be able to start without restoring them from backup.


Yes.
Thanks - it is just an extra layer of safety, but good to know it'll work if needed
Not fully sure what you mean. There is only raid0 (stripe), raid1 (mirror), or raid10 (striped mirror). For VM storage you usually want raid10, i.e. a striped mirror: two 2-disk mirror vdevs that are then striped.
I have dabbled a little with ZFS via CLI, and (I thought) you can create a vdev mirror and "extend" it with a stripe, or a vdev stripe and "extend" that with a mirror - I just wasn't sure which, if either, is the better option. I had considered a mirrored (3-disk stripe) - and didn't know if z10 would automatically handle that.
Will PVE 8.1 do the magic of creating the (two 2-disk mirror vdevs that are then striped) - it will be post-install - or would I need to do some CLI work to achieve that?

If the striped-mirror z10 is effectively the same, I'm happy to take the simpler route, and will keep it to 2+2 with 2 spares.
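If those two leftover disks do end up as hot spares, attaching them is a one-liner once the pool exists; a sketch with placeholder names (note a spare is only pulled in automatically if the ZFS event daemon is set up for it, otherwise you swap it in manually):

  zpool add tank spare /dev/disk/by-id/ssd-5 /dev/disk/by-id/ssd-6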
I'm assuming that keeping the O/S rpool separate from the VM/LXC pool is better.

Last question... is the raid1 a ZFS raid1, or a "standard" raid1? (I read that the ZFS option is superior, but am too ignorant to really know if that's true, or if such a thing even exists - or is it likely that they are referring to hardware raid1? I have an HBA, so this shouldn't be a factor in my case, but I am curious.)

The webUI has a "Move disk" button you can use to move a virtual disk between storages.
Even better (for us novice-dummies).

Thank you for the help
 
I have dabbled a little with ZFS via CLI, and (I thought) you can create a vdev mirror and "extend" it with a stripe, or a vdev stripe and "extend" that with a mirror - I just wasn't sure which, if either, is the better option.
You can only stripe vdevs. A stripe itself is not a vdev. So yes, you can create a mirror and later add another mirror and stripe them to turn your raid1 (mirror) into a raid10 (striped mirror). But you can't mirror a stripe or stripe a stripe.
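In commands, that grow path might look like this (placeholder pool/disk names):

  zpool create tank mirror diskA diskB    # start as a plain 2-disk mirror (raid1)
  zpool add tank mirror diskC diskD       # later add a second mirror vdev - now a striped mirror (raid10)
  zpool status tank                       # should list two mirror vdevs side by side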

Will PVE 8.1 do the magic of creating the (two 2-disk mirror vdevs that are then striped) - it will be post-install - or would I need to do some CLI work to achieve that?
WebUI can do that.

Last question... is the raid1 a ZFS raid1, or a "standard" raid1? (I read that the ZFS option is superior, but am too ignorant to really know if that's true, or if such a thing even exists - or is it likely that they are referring to hardware raid1? I have an HBA, so this shouldn't be a factor in my case, but I am curious.)
It's superior because it allows for bit rot protection, block-level compression, deduplication, and so on. Don't think of ZFS as just raid. It's raid + volume management (like LVM) + filesystem at the same time, so it is an all-in-one solution. But the raid1 itself isn't that much different from other software raids such as mdadm.
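A small illustration of the "more than raid" part, assuming a pool called tank:

  zfs set compression=lz4 tank   # transparent block-level compression
  zpool scrub tank               # read everything and verify checksums; damaged blocks are repaired from the mirror copy
  zpool status -v tank           # shows any checksum errors the scrub found and fixed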
 
You can only stripe top-level vdevs. A stripe is not a vdev - it's just striped single-disk vdevs. So yes, you can create a mirror and later add another mirror and stripe them to turn your raid1 (mirror) into a raid10 (striped mirror). But you can't mirror a stripe.
One day, when I am big, I will understand this properly, but for now I will just say... OK, thanks
WebUI can do that.
brilliant!
It's superior because it allows for bit rot protection, block-level compression, deduplication, and so on. Don't think of ZFS as just raid. It's raid + volume management (like LVM) + filesystem at the same time, so it is an all-in-one solution. But the raid1 itself isn't that much different from other software raids such as mdadm.
This last part is what I wanted to confirm - awesome and thank you
 
