ZFS Next Steps

Retrograde_i486

Feb 28, 2022
Long story short: the drive hosting my Proxmox install died, so I reinstalled Proxmox on a new drive and have that back up and running. The next step is to remount the ZFS pool that held my 3 VMs, which lives on two Western Digital drives that are still healthy. I've done that, but now I'm kind of stuck on what to do next.

zfs pool.png

Do I need more mountpoints for the other VMs? I'm new to ZFS, so I'm trying to figure this out. I'm pretty sure all my data is still safe; I just need to figure out how to get Proxmox to see the VMs again.

zfs2.png
 
First you need to add your imported pool as a ZFS storage so that PVE knows the pool exists and how to use it. In the web UI go to Datacenter -> Storage -> Add -> ZFS and add your pool.
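
For reference, the same thing can be done from the CLI; a minimal sketch, assuming the pool is called "tank" (substitute your actual pool name):

Code:
# see which pools are available for import, then import yours (pool name "tank" is a placeholder)
zpool import
zpool import tank
# register the imported pool as a ZFS storage in PVE for VM disk images
pvesm add zfspool tank --pool tank --content images,rootdir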

But the VMs' virtual disks on that pool are useless without the VMs' config files from your lost system disk (they were stored in "/etc/pve/qemu-server/"). So put your backed-up config files into that directory. If you don't have a backup of them, restore your VMs from their vzdump/PBS backups. If you don't have those either, you will have to recreate the config files from memory.
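
If you do have vzdump backups, restoring one brings back both the config file and the disk; roughly like this, where the archive path and storage name are only examples:

Code:
# restore VM 100 from a vzdump archive onto the newly added ZFS storage (names are examples)
qmrestore /mnt/backups/vzdump-qemu-100.vma.zst 100 --storage ZFS1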
 
Done, added my pool and can see some stuff:
zfs3.png

Afraid those config files are gone since my soldering skills and lack of equipment leave me no way to get data off this:
zfs4.png

Didn't know I had to make a backup of those configs; at least I do now.
I may be able to re-create those config files. I was really only using two of the VMs, and I remember/have documentation on what I built them with.

So now what? Do I create a new VM and tell it that its storage is something like vm-100-disk-0 from the ZFS pool?

Thank you so much, feels like I'm getting there.
 

Yup, make sure to create the new VMs with the same VMIDs so you don't need to rename the zvols. After you have created the VMs, remove the newly created virtual disks and run a qm rescan. Then you should see the old disks show up as unused disks, and you can attach them.
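
A rough CLI sketch of that flow, assuming VMID 100 and a storage named "ZFS1" (use whatever name you gave the pool when adding it as storage):

Code:
# detach the empty disk that came with the freshly created VM 100 (it becomes an unused disk)
qm set 100 --delete scsi0
# rescan storages so PVE picks up the old zvol and lists it as an unused disk
qm rescan --vmid 100
# attach the old disk and make it the boot device
qm set 100 --scsi0 ZFS1:vm-100-disk-0
qm set 100 --boot order=scsi0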

And always have backups. A RAID never replaces a backup. Next time you might lose the pool with the virtual disks on it.
 
Before doing that, I would actually be very cautious about giving the new VMs the same IDs as the existing VM disks, because there is a chance that you might overwrite those virtual disks when the new VMs create their own. Consider copying the old virtual disks to another location first.
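
One way to make such a safety copy with ZFS itself is a snapshot plus send/receive; a minimal sketch, assuming the zvol lives at ZFS1/vm-100-disk-0 (adjust to your real pool/dataset path; the "-copy" target name is made up):

Code:
# cheap, instant safety net before touching anything
zfs snapshot ZFS1/vm-100-disk-0@pre-recovery
# optionally duplicate the zvol to a new name on the same pool
zfs send ZFS1/vm-100-disk-0@pre-recovery | zfs recv ZFS1/vm-100-disk-0-copy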

You should have a backup of everything, not just the configuration files. Having a ZFS RAID mirror is not a backup; it's production data. RAID ≠ backup, ever!
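
For the configs specifically, even a simple copy off the system disk would have helped, and regular vzdump backups cover both disks and configs; a hedged example, where the destination path and backup storage name are placeholders:

Code:
# keep a copy of the VM configs somewhere other than the PVE system disk
cp /etc/pve/qemu-server/*.conf /mnt/backups/pve-configs/
# or take a proper VM backup, which stores the config alongside the disk data
vzdump 100 --storage backup-store --mode snapshot --compress zstd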

Cheers,


Tmanok
 
I made a new config file and pointed scsi0 to the ZFS volume with the disk, but it gets stuck forever at "Booting from Hard Disk", so that didn't work. Here's the conf file. I assume it has to be exactly like the original (cores, memory, etc.) or it won't boot right?

Code:
boot: order=scsi0;ide2;net0
cores: 4
ide2: local:iso/ubuntu-20.04.4-live-server-amd64.iso,media=cdrom
memory: 4096
meta: creation-qemu=6.1.0,ctime=1646022848
net0: virtio=6A:3A:54:56:42:9C,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: ZFS1:vm-100-disk-0,size=1.1T
scsihw: virtio-scsi-pci
smbios1: uuid=545f3abf-21bf-4c2a-a1fd-1ea9f1514b70
sockets: 2
vmgenid: fbd3a100-3ec3-4c23-8446-46fd64cc079e
 

Cores and memory shouldn't be the problem. But CPU type, CPU flags, BIOS type, BIOS version, machine type, disk controller type and disk protocol all matter; if those don't match, your VM won't boot.
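
A hedged sketch of how those settings could be inspected and adjusted; the OVMF/q35 values below are only an example of what the original VM might have used, not something known from this thread:

Code:
# dump the current VM 100 configuration to compare against what you remember
qm config 100
# example: switch to UEFI firmware and the q35 machine type if the original VM used them
qm set 100 --bios ovmf --machine q35
# example: change the SCSI controller type if the original differed
qm set 100 --scsihw virtio-scsi-single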
 
Dunuin is right, you don't need the same memory, NIC, or number of cores. Consider trying:
Code:
scsihw: virtio-scsi-single

Another thing to consider: two of those virtual disks appear to belong to the same VM, and one appears to be a snapshot. Did you try creating new VMs with the other disks? As for the snapshotted VM, I haven't had to import a VM with ZFS snapshots before; it should be fairly straightforward, but I haven't found any PVE documentation on it yet.
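
To check whether one of the listed volumes is really a snapshot rather than a separate disk, listing everything on the pool should make it obvious (the pool name is assumed here):

Code:
# show datasets, zvols and snapshots under the pool (replace ZFS1 with your pool name)
zfs list -t all -r ZFS1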

A PVE developer might be able to chime in about the snapshots and spot what might be missing from your VM 100 config; otherwise I don't see anything obviously incorrect about it (that would prevent it from booting).
Cheers,

Tmanok
 
Thanks for the help everyone. I ran out of time before I leave for vacation, so I ended up burning it down and rebuilding: a new ZFS pool (properly built this time) and an upgraded version of Proxmox (I was on 6). Probably for the best, since I really only needed one VM as everything runs in Docker, and the old one was running Ubuntu. I moved over to Rocky Linux since I work on RHEL machines all day at work anyhow. Thankfully I had a backup of my Docker Compose files, so spinning all that back up was super easy.

Apologies if this isn't the most helpful solution to anyone who may google upon this question in the future.
 
Well, I for one am glad that you reached an end that was not terribly stressful or disruptive. I would strongly recommend performing backups in the future.
Cheers,

Tmanok
 
