Get more details of failed backup

The above is because you mentioned file corruption; we probably need to make the target non-sparse and then convert the ZFS volume out into a qcow2-formatted file. That will let you copy it to an ext4 or other filesystem on your USB rescue HDD.
 
But if I `cd /rpool/data/`, there are no files there.
Let me try the qemu-img convert. Thank you.
 
It should work, but be careful. The idea is that you create the target file on the USB backup drive first; it needs to be the right size. Then you convert the ZFS instance into it using `qemu-img convert` with `--salvage`, etc. I highly recommend reading up on those commands so you understand them a bit better.
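A minimal sketch of that workflow, not the exact commands — the zvol name (rpool/data/vm-100-disk-0), its 32G size, and the /mnt/usb mount point are placeholders; check `zfs list` for yours:

```sh
# Find the zvol backing the VM disk.
zfs list -t volume

# Pre-create a non-sparse target on the USB drive; its virtual size
# must be at least the zvol's volsize (32G here as an example).
qemu-img create -f qcow2 -o preallocation=full /mnt/usb/vm-100-disk-0.qcow2 32G

# Convert into the existing target (-n = don't recreate it);
# --salvage continues past read errors, writing zeros for bad sectors.
qemu-img convert -p -n --salvage -O qcow2 \
    /dev/zvol/rpool/data/vm-100-disk-0 \
    /mnt/usb/vm-100-disk-0.qcow2
```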
 
1. easy to recover, no extensive knowledge or training required
2. recoverable part by part, i.e. I get PVE re-installed first, then bring back all/selected VMs
Better would be to use mirrors. Then you don't need to recover in the first place when a disk fails, and you get bit-rot protection.
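For what it's worth, an existing single-disk pool can usually be converted to a mirror in place; a hedged sketch, with example device paths:

```sh
# Attach a second disk (same size or larger) to the existing vdev;
# once the resilver completes, the pool is a two-way mirror.
zpool attach rpool \
    /dev/disk/by-id/nvme-EXISTING_DISK \
    /dev/disk/by-id/nvme-NEW_DISK
zpool status rpool   # watch resilver progress
```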
There are no files there because those virtual disks are zvols, i.e. block devices, not files, so there is no file to copy. You would have to read those zvols at the block level and write them to an image file like a qcow2, or pipe the output of `zfs send` to a file. But with the latter you can only restore that virtual disk to another ZFS pool.
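The `zfs send` route in sketch form (the qemu-img route is sketched above); again, the zvol name and /mnt/usb are placeholders:

```sh
# zfs send works on snapshots, so take one first.
zfs snapshot rpool/data/vm-100-disk-0@rescue

# Stream the snapshot into a plain file on the USB drive.
zfs send rpool/data/vm-100-disk-0@rescue > /mnt/usb/vm-100-disk-0.zfs

# Later, it can only be restored into another ZFS pool:
# zfs receive otherpool/vm-100-disk-0 < /mnt/usb/vm-100-disk-0.zfs
```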
 
Yes, the config.db should have the settings for those VMs.
So you need to mount the old Proxmox installation and grab the config.db; on a standard install it lives at /var/lib/pve-cluster/config.db.
It's sort of covered in this thread: https://forum.proxmox.com/threads/how-to-mount-a-zfs-drive-from-promox.37104/
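A hedged sketch of that from a live/rescue environment, assuming the default PVE-on-ZFS layout (pool rpool, root dataset rpool/ROOT/pve-1); adjust names to your install:

```sh
# Import the old pool under an alternate root so its mounts don't
# collide with the rescue system.
zpool import -f -R /mnt/oldroot rpool

# Mount the root dataset if it didn't auto-mount under the altroot.
zfs mount rpool/ROOT/pve-1 || true

# Copy out the cluster configuration database.
cp /mnt/oldroot/var/lib/pve-cluster/config.db /mnt/usb/config.db

zpool export rpool
```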

Oh, regarding PVE at home, I keep the main installation on an NVMe (no ZFS there) and have ZFS set up as a 2-disk mirror, plus a ZIL/read-cache device.

In production, I have a hardware RAID 10 of 4 SSDs where LVM-Thin, local, and the Proxmox install live, and a 2-disk ZFS mirror for bulk data storage, which also has an overlay directory for backup storage.

Ok, so you separate your PVE installation to its own physical NVMe.

In my case of having everything on 1 physical NVMe, do you know if there is a way to reinstall PVE without affecting the VMs, assuming I'm using ZFS? I'm trying to work out a plan in case PVE doesn't boot.
 

Yeah, I need to decide which NVMe to buy for mirroring. My current 2TB 970 EVO Plus maxes out at about 500 fsync/s (poor), but fsync stats are hard to come by.
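(Numbers like that are presumably measured with something like `pveperf`, which reports an FSYNCS/SECOND figure for whatever path you point it at:)

```sh
# Measure the fsync rate of the storage backing /rpool/data.
pveperf /rpool/data
```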
 
The only way to reinstall PVE without affecting the VMs would be to ensure they are stored on an entirely separate array, and that you can restore a backup of the settings to the new PVE install. My general method at the moment is to add a disk with a fresh installation, restore the configuration backups, and then work on the original arrays to take extra backups as needed. The ZFS arrays are the backups for the VMs, so if I have to nuke the original array, I can restore the configuration backups and then restore the VM backups from ZFS.

It's probably not the best way of doing it, but I tend to plan for everything going nuclear on me, rather than a more graceful restore scenario.
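A hedged sketch of the "restore the configuration backups" step on a fresh install, assuming a saved config.db as discussed above (paths are examples):

```sh
# pmxcfs must be stopped before its database is replaced.
systemctl stop pve-cluster
cp /mnt/usb/config.db /var/lib/pve-cluster/config.db
systemctl start pve-cluster
# The VM/CT definitions should then reappear under /etc/pve.
```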
 
Actually I used to do this with Windows, because it is safe (physically separated) and simple (re-installing a fresh OS), making it less prone to mistakes.

I had everything on 1 NVMe for PVE, hoping for the greatest performance.

I'm going to have them separated now. Haha.
 
Regarding the fsync numbers: basically anything with power-loss protection should give you much higher fsync performance, but those are all U.2, or M.2 NVMe in the 22110 form factor. Have a look at the Samsung PM983/PM9A3 1.92TB or the Micron 7300/7400/7450 PRO 1.92TB. Keep in mind that with a mirror your write performance is defined by the slowest disk of that mirror, i.e. your 970 EVO, and to mirror it you need a disk of exactly the same size or bigger.
 
