Unable to back up restored VM

mr-biz
New Member · Joined Sep 4, 2021
Hello,
My tale of woe begins with one of the six 240 GB SSDs used for my system disk failing. Since fragmentation in the zpool was at 77%, I decided to reinstall the system on the same disks to see if the problem recurred. I restored the backups of my VMs and had been happily working, thinking everything was fine, until I decided to back up a VM, and then things got weird. The backup error message says the storage for the VM's system volume does not exist. I think I have found the volume, but I don't know how to fix things so I can back up my VM.
Thanks in advance for your help!

Code:
INFO: starting new backup job: vzdump 104 --compress zstd --notes-template '{{guestname}}' --node pve001 --remove 0 --storage NASZ001 --mode snapshot
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2023-02-12 09:30:18
INFO: status = running
INFO: VM Name: UbuntuToStatic
INFO: include disk 'scsi0' 'local-zfs:vm-104-disk-0' 32G
INFO: exclude disk 'efidisk0' 'local-zfs:vm-104-disk-1' (efidisk but no OMVF BIOS)
ERROR: Backup of VM 104 failed - storage 'local-zfs' does not exist
INFO: Failed at 2023-02-12 09:30:18
INFO: Backup job finished with errors
TASK ERROR: job errors

1676199098862.png
1676199125515.png
1676199231890.png

Code:
root@pve001:~# find /dev | grep 104
/dev/zvol/rpool/data/vm-104-disk-0-part2
/dev/zvol/rpool/data/vm-104-disk-0-part1
/dev/zvol/rpool/data/vm-104-disk-0-part3
/dev/zvol/rpool/data/vm-104-disk-0
/dev/zvol/rpool/data/vm-104-disk-1
/dev/rpool/data/vm-104-disk-0-part2
/dev/rpool/data/vm-104-disk-0-part1
/dev/rpool/data/vm-104-disk-0-part3
/dev/rpool/data/vm-104-disk-0
/dev/rpool/data/vm-104-disk-1

root@pve001:~# ls /dev/zvol/rpool/data/ -lah | grep 104
lrwxrwxrwx 1 root root  13 Feb 11 12:24 vm-104-disk-0 -> ../../../zd80
lrwxrwxrwx 1 root root  15 Feb 11 12:24 vm-104-disk-0-part1 -> ../../../zd80p1
lrwxrwxrwx 1 root root  15 Feb 11 12:24 vm-104-disk-0-part2 -> ../../../zd80p2
lrwxrwxrwx 1 root root  15 Feb 11 12:24 vm-104-disk-0-part3 -> ../../../zd80p3
lrwxrwxrwx 1 root root  13 Feb 11 12:24 vm-104-disk-1 -> ../../../zd48

root@pve001:~# ls /dev/rpool/data/ -lah | grep 104
lrwxrwxrwx 1 root root  10 Feb 11 12:24 vm-104-disk-0 -> ../../zd80
lrwxrwxrwx 1 root root  12 Feb 11 12:24 vm-104-disk-0-part1 -> ../../zd80p1
lrwxrwxrwx 1 root root  12 Feb 11 12:24 vm-104-disk-0-part2 -> ../../zd80p2
lrwxrwxrwx 1 root root  12 Feb 11 12:24 vm-104-disk-0-part3 -> ../../zd80p3
lrwxrwxrwx 1 root root  10 Feb 11 12:24 vm-104-disk-1 -> ../../zd48

So,
 

Hi,
Is local-zfs listed when you go to Datacenter > Storage? If not, and if rpool/data is what was used for that storage previously (it is by default on Proxmox VE ZFS installations), you can simply re-add the storage with Add > ZFS, using the ID local-zfs and selecting rpool/data as the ZFS pool.
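If you prefer the command line, the equivalent should be roughly the following (an untested sketch; double-check the pool name with zfs list first):
Code:
# Re-add the storage entry pointing at the existing dataset.
# --sparse 1 enables thin provisioning, as a default installation does.
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --sparse 1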
 
Hi Fiona,

Thank you for your reply.

Your questions answered:
Is local-zfs listed when you go to Datacenter > Storage? No.
What was used for that storage previously? I cannot remember.

Here is Datacenter->Storage:
1676284025505.png
Here is PVE001->Disks->ZFS:
1676285275993.png
On the previous installation, the storage was called local-zfs.
When I restored the VM, I specified that it should be restored to local, like this:
1676284189704.png
After it was restored, /etc/pve/qemu-server/???.conf looked like this:
1676284641770.png
So now, when I want to back up my VM, I cannot, because:
1676285063141.png

It appears that there is a fault with the restore process because it does not update the VM config file to show where the system disk was restored to. Fixing that problem is up to you guys. I found a workaround for future restores, which is to restore to the NASZ001 storage and then move the system disk to local. That, however, does not help me with my other problem: not losing my data.
My problem is that, since the restore, the data on the VM's system disk has been updated, and I don't want to lose those changes.
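For reference, the restore-then-move workaround I mean would look roughly like this on the command line (the backup path is abbreviated and the exact flags are from memory, so treat this as a sketch only):
Code:
# Restore the backup onto the NASZ001 storage instead of the missing one:
qmrestore /NASZ001/dump/vzdump-qemu-104-....vma.zst 104 --storage NASZ001
# Then move the restored system disk onto the desired storage:
qm move_disk 104 scsi0 local --delete 1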

I know where the system disk is, but I do not know how to correct the /etc/pve/qemu-server/???.conf file to reflect that, because if I update the config file with scsi0: local:vm-104-disk-0,size=32G,ssd=1, Proxmox is unable to find the system disk (I tried this with another sacrificial VM).
1676286074356.png
 

It appears that there is a fault with the restore process because it does not update the VM config file to show where the system disk was restored to. Fixing that problem is up to you guys.
Updating the configuration should happen and it does happen for me. There is a small issue that for EFI disks, the target storage override doesn't happen, but if local-zfs didn't exist at the point of the restore, you would've gotten an error. I'll look into the EFI disk issue.

What is the output of pveversion -v? Can you share the configuration included in the backup (Show Configuration button when you select the backup in the UI)? The output of the restore task would also be interesting; you can see it by going to the Task History panel of your VM and double-clicking the task.

My guess is that the local-zfs storage still existed at that point and pointed to /rpool/data, which is why it got restored there. See my first post for how to add local-zfs back.
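For reference, on a default ZFS installation the corresponding entry in /etc/pve/storage.cfg looks roughly like this (from memory, so treat it as a sketch):
Code:
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse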

I know where the system disk is, but I do not know how to correct the /etc/pve/qemu-server/???.conf file to reflect that, because if I update the config file with scsi0: local:vm-104-disk-0,size=32G,ssd=1, Proxmox is unable to find the system disk (I tried this with another sacrificial VM).
Change it back to local-zfs:vm-104-disk-0 after adding the local-zfs storage back, to make it point to that disk.
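In other words, once the local-zfs storage exists again, the disk line in /etc/pve/qemu-server/104.conf should look like it did before the reinstall, e.g.:
Code:
scsi0: local-zfs:vm-104-disk-0,size=32G,ssd=1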
 
Code:
root@pve001:/var/lib/vz/images# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-4
pve-kernel-5.15: 7.3-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
 
Updating the configuration should happen and it does happen for me. There is a small issue that for EFI disks, the target storage override doesn't happen, but if local-zfs didn't exist at the point of the restore, you would've gotten an error. I'll look into the EFI disk issue.
Oh, the EFI disk is not even backed up when SeaBIOS is used. That's why its storage is not updated upon restoring: it is never restored for such a backup.
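A quick way to check which firmware a VM uses (a sketch, using the VMID from this thread): if qm config shows no bios line, the VM runs with the SeaBIOS default, which is why the efidisk0 is excluded from the backup.
Code:
# If this prints nothing, the VM uses the default SeaBIOS firmware:
qm config 104 | grep '^bios'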
 
I will try a new restore.

Can you share the configuration included in the backup (Show Configuration button)?
Code:
boot: order=ide2;scsi0;net0
cores: 1
ide2: none,media=cdrom
memory: 4096
meta: creation-qemu=6.2.0,ctime=1658489088
name: mailcow
net0: virtio=7A:36:B8:99:A8:3B,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-118-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=451919d7-27c1-471b-a7ae-2c6bcd6e4841
sockets: 1
vmgenid: 94d669a1-9031-41f0-ab2a-85bb1294c80a
#qmdump#map:scsi0:drive-scsi0:local-zfs::

Restore with storage selected as "local."

1676292571094.png

The output of the restore task:
Code:
restore vma archive: zstd -q -d -c /mnt/pve/WDCloud/dump/vzdump-qemu-118-2023_02_01-22_12_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp401746.fifo - /var/tmp/vzdumptmp401746
CFG: size: 429 name: qemu-server.conf
CFG: size: 200 name: qemu-server.fw
DEV: dev_id=1 size: 34359738368 devname: drive-scsi0
CTIME: Wed Feb  1 22:12:08 2023
Formatting '/var/lib/vz/images/108/vm-108-disk-0.raw', fmt=raw size=34359738368 preallocation=off
new volume ID is 'local:108/vm-108-disk-0.raw'
map 'drive-scsi0' to '/var/lib/vz/images/108/vm-108-disk-0.raw' (write zeros = 0)
progress 1% (read 343605248 bytes, duration 2 sec)
progress 2% (read 687210496 bytes, duration 5 sec)
progress 3% (read 1030815744 bytes, duration 8 sec)
progress 4% (read 1374420992 bytes, duration 9 sec)
progress 5% (read 1718026240 bytes, duration 9 sec)
progress 6% (read 2061631488 bytes, duration 9 sec)
progress 7% (read 2405236736 bytes, duration 10 sec)
progress 8% (read 2748841984 bytes, duration 12 sec)
progress 9% (read 3092381696 bytes, duration 14 sec)
progress 10% (read 3435986944 bytes, duration 15 sec)
progress 11% (read 3779592192 bytes, duration 17 sec)
progress 12% (read 4123197440 bytes, duration 18 sec)
progress 13% (read 4466802688 bytes, duration 19 sec)
progress 14% (read 4810407936 bytes, duration 20 sec)
progress 15% (read 5154013184 bytes, duration 21 sec)
progress 16% (read 5497618432 bytes, duration 22 sec)
progress 17% (read 5841158144 bytes, duration 23 sec)
progress 18% (read 6184763392 bytes, duration 25 sec)
progress 19% (read 6528368640 bytes, duration 26 sec)
progress 20% (read 6871973888 bytes, duration 27 sec)
progress 21% (read 7215579136 bytes, duration 28 sec)
progress 22% (read 7559184384 bytes, duration 30 sec)
progress 23% (read 7902789632 bytes, duration 31 sec)
progress 24% (read 8246394880 bytes, duration 32 sec)
progress 25% (read 8589934592 bytes, duration 34 sec)
progress 26% (read 8933539840 bytes, duration 34 sec)
progress 27% (read 9277145088 bytes, duration 36 sec)
progress 28% (read 9620750336 bytes, duration 37 sec)
progress 29% (read 9964355584 bytes, duration 38 sec)
progress 30% (read 10307960832 bytes, duration 39 sec)
progress 31% (read 10651566080 bytes, duration 40 sec)
progress 32% (read 10995171328 bytes, duration 42 sec)
progress 33% (read 11338776576 bytes, duration 43 sec)
progress 34% (read 11682316288 bytes, duration 45 sec)
progress 35% (read 12025921536 bytes, duration 46 sec)
progress 36% (read 12369526784 bytes, duration 47 sec)
progress 37% (read 12713132032 bytes, duration 48 sec)
progress 38% (read 13056737280 bytes, duration 49 sec)
progress 39% (read 13400342528 bytes, duration 50 sec)
progress 40% (read 13743947776 bytes, duration 51 sec)
progress 41% (read 14087553024 bytes, duration 52 sec)
progress 42% (read 14431092736 bytes, duration 53 sec)
progress 43% (read 14774697984 bytes, duration 54 sec)
progress 44% (read 15118303232 bytes, duration 55 sec)
progress 45% (read 15461908480 bytes, duration 57 sec)
progress 46% (read 15805513728 bytes, duration 58 sec)
progress 47% (read 16149118976 bytes, duration 59 sec)
progress 48% (read 16492724224 bytes, duration 61 sec)
progress 49% (read 16836329472 bytes, duration 62 sec)
progress 50% (read 17179869184 bytes, duration 63 sec)
progress 51% (read 17523474432 bytes, duration 64 sec)
progress 52% (read 17867079680 bytes, duration 66 sec)
progress 53% (read 18210684928 bytes, duration 67 sec)
progress 54% (read 18554290176 bytes, duration 67 sec)
progress 55% (read 18897895424 bytes, duration 67 sec)
progress 56% (read 19241500672 bytes, duration 67 sec)
progress 57% (read 19585105920 bytes, duration 67 sec)
progress 58% (read 19928711168 bytes, duration 67 sec)
progress 59% (read 20272250880 bytes, duration 67 sec)
progress 60% (read 20615856128 bytes, duration 68 sec)
progress 61% (read 20959461376 bytes, duration 68 sec)
progress 62% (read 21303066624 bytes, duration 68 sec)
progress 63% (read 21646671872 bytes, duration 68 sec)
progress 64% (read 21990277120 bytes, duration 68 sec)
progress 65% (read 22333882368 bytes, duration 68 sec)
progress 66% (read 22677487616 bytes, duration 68 sec)
progress 67% (read 23021027328 bytes, duration 68 sec)
progress 68% (read 23364632576 bytes, duration 68 sec)
progress 69% (read 23708237824 bytes, duration 68 sec)
progress 70% (read 24051843072 bytes, duration 68 sec)
progress 71% (read 24395448320 bytes, duration 68 sec)
progress 72% (read 24739053568 bytes, duration 68 sec)
progress 73% (read 25082658816 bytes, duration 68 sec)
progress 74% (read 25426264064 bytes, duration 68 sec)
progress 75% (read 25769803776 bytes, duration 68 sec)
progress 76% (read 26113409024 bytes, duration 68 sec)
progress 77% (read 26457014272 bytes, duration 69 sec)
progress 78% (read 26800619520 bytes, duration 69 sec)
progress 79% (read 27144224768 bytes, duration 69 sec)
progress 80% (read 27487830016 bytes, duration 69 sec)
progress 81% (read 27831435264 bytes, duration 69 sec)
progress 82% (read 28175040512 bytes, duration 69 sec)
progress 83% (read 28518645760 bytes, duration 69 sec)
progress 84% (read 28862185472 bytes, duration 69 sec)
progress 85% (read 29205790720 bytes, duration 69 sec)
progress 86% (read 29549395968 bytes, duration 69 sec)
progress 87% (read 29893001216 bytes, duration 69 sec)
progress 88% (read 30236606464 bytes, duration 69 sec)
progress 89% (read 30580211712 bytes, duration 69 sec)
progress 90% (read 30923816960 bytes, duration 69 sec)
progress 91% (read 31267422208 bytes, duration 69 sec)
progress 92% (read 31610961920 bytes, duration 69 sec)
progress 93% (read 31954567168 bytes, duration 69 sec)
progress 94% (read 32298172416 bytes, duration 69 sec)
progress 95% (read 32641777664 bytes, duration 69 sec)
progress 96% (read 32985382912 bytes, duration 69 sec)
progress 97% (read 33328988160 bytes, duration 69 sec)
progress 98% (read 33672593408 bytes, duration 69 sec)
progress 99% (read 34016198656 bytes, duration 69 sec)
progress 100% (read 34359738368 bytes, duration 69 sec)
total bytes read 34359738368, sparse bytes 17791414272 (51.8%)
space reduction due to 4K zero blocks 2.12%
rescan volumes...
TASK OK

/etc/pve/qemu-server/108.conf
Code:
boot: order=ide2;scsi0;net0
cores: 1
ide2: none,media=cdrom
memory: 4096
meta: creation-qemu=6.2.0,ctime=1658489088
name: mailcow
net0: virtio=7A:36:B8:99:A8:3B,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local:108/vm-108-disk-0.raw,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=451919d7-27c1-471b-a7ae-2c6bcd6e4841
sockets: 1
vmgenid: c124775c-dcd6-4e70-9511-74d33de7c643

And now everything works fine. Typical.
 
Try going to Datacenter > Storage and re-add the storage with Add > ZFS, with ID local-zfs and selecting rpool/data as the ZFS pool. Otherwise, Proxmox VE won't see the images in /rpool/data.
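Once it is re-added, you can quickly verify that the volumes are visible again, for example with (sketch):
Code:
pvesm status          # local-zfs should be listed as active
pvesm list local-zfs  # should include vm-104-disk-0 and vm-104-disk-1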
 
Hi Fiona,
I really appreciate your help.
Try going to Datacenter > Storage and re-add the storage with Add > ZFS, with ID local-zfs and selecting rpool/data as the ZFS pool. Otherwise, Proxmox VE won't see the images in /rpool/data.

There is only one VM that I am worried about: VMID 104.
Questions:
Where is the system disk file for VMID 104? Is the location shown below correct? If so, how can I get the VM working as it should so that I can back it up?
View attachment 46762
 
Thank you, Fiona! That was a success!

Code:
INFO: starting new backup job: vzdump 104 --storage NASZ001 --mode snapshot --node pve001 --notes-template '{{guestname}}' --compress zstd --remove 0
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2023-02-13 13:44:53
INFO: status = running
INFO: VM Name: UbuntuToStatic
INFO: include disk 'scsi0' 'local-zfs:vm-104-disk-0' 32G
INFO: exclude disk 'efidisk0' 'local-zfs:vm-104-disk-1' (efidisk but no OMVF BIOS)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/NASZ001/dump/vzdump-qemu-104-2023_02_13-13_44_53.vma.zst'
INFO: started backup task 'a06a10fe-c2c9-4295-8ed9-a870a2e92b46'
INFO: resuming VM again
INFO:   1% (589.0 MiB of 32.0 GiB) in 3s, read: 196.3 MiB/s, write: 150.6 MiB/s
INFO:   3% (1.1 GiB of 32.0 GiB) in 6s, read: 171.0 MiB/s, write: 160.7 MiB/s
INFO:   5% (1.7 GiB of 32.0 GiB) in 9s, read: 215.1 MiB/s, write: 106.3 MiB/s
INFO:   6% (2.0 GiB of 32.0 GiB) in 12s, read: 113.6 MiB/s, write: 112.0 MiB/s
INFO:   7% (2.4 GiB of 32.0 GiB) in 15s, read: 129.3 MiB/s, write: 127.8 MiB/s
INFO:   8% (2.7 GiB of 32.0 GiB) in 18s, read: 110.0 MiB/s, write: 108.1 MiB/s
INFO:   9% (3.1 GiB of 32.0 GiB) in 21s, read: 108.8 MiB/s, write: 106.0 MiB/s
INFO:  10% (3.3 GiB of 32.0 GiB) in 24s, read: 95.5 MiB/s, write: 95.1 MiB/s
INFO:  11% (3.8 GiB of 32.0 GiB) in 27s, read: 164.3 MiB/s, write: 131.9 MiB/s
INFO:  14% (4.5 GiB of 32.0 GiB) in 30s, read: 240.5 MiB/s, write: 126.1 MiB/s
INFO:  19% (6.4 GiB of 32.0 GiB) in 33s, read: 623.2 MiB/s, write: 100.0 MiB/s
INFO:  21% (6.8 GiB of 32.0 GiB) in 36s, read: 138.7 MiB/s, write: 110.8 MiB/s
INFO:  22% (7.1 GiB of 32.0 GiB) in 39s, read: 104.5 MiB/s, write: 102.6 MiB/s
INFO:  23% (7.4 GiB of 32.0 GiB) in 42s, read: 107.5 MiB/s, write: 106.0 MiB/s
INFO:  24% (7.8 GiB of 32.0 GiB) in 45s, read: 153.5 MiB/s, write: 118.7 MiB/s
INFO:  25% (8.2 GiB of 32.0 GiB) in 48s, read: 117.2 MiB/s, write: 114.2 MiB/s
INFO:  26% (8.5 GiB of 32.0 GiB) in 51s, read: 107.3 MiB/s, write: 104.3 MiB/s
INFO:  27% (8.8 GiB of 32.0 GiB) in 54s, read: 99.4 MiB/s, write: 96.9 MiB/s
INFO:  28% (9.1 GiB of 32.0 GiB) in 57s, read: 96.7 MiB/s, write: 94.2 MiB/s
INFO:  29% (9.4 GiB of 32.0 GiB) in 1m, read: 108.8 MiB/s, write: 106.5 MiB/s
INFO:  30% (9.8 GiB of 32.0 GiB) in 1m 3s, read: 151.0 MiB/s, write: 117.5 MiB/s
INFO:  31% (10.2 GiB of 32.0 GiB) in 1m 6s, read: 113.6 MiB/s, write: 112.8 MiB/s
INFO:  32% (10.4 GiB of 32.0 GiB) in 1m 9s, read: 89.4 MiB/s, write: 88.6 MiB/s
INFO:  33% (10.7 GiB of 32.0 GiB) in 1m 12s, read: 100.3 MiB/s, write: 97.7 MiB/s
INFO:  34% (11.1 GiB of 32.0 GiB) in 1m 15s, read: 122.0 MiB/s, write: 114.6 MiB/s
INFO:  35% (11.4 GiB of 32.0 GiB) in 1m 18s, read: 102.7 MiB/s, write: 98.6 MiB/s
INFO:  36% (11.8 GiB of 32.0 GiB) in 1m 21s, read: 132.9 MiB/s, write: 123.0 MiB/s
INFO:  37% (12.1 GiB of 32.0 GiB) in 1m 24s, read: 118.4 MiB/s, write: 111.5 MiB/s
INFO:  38% (12.4 GiB of 32.0 GiB) in 1m 27s, read: 110.0 MiB/s, write: 101.7 MiB/s
INFO:  39% (12.7 GiB of 32.0 GiB) in 1m 30s, read: 83.4 MiB/s, write: 71.0 MiB/s
INFO:  40% (13.1 GiB of 32.0 GiB) in 1m 33s, read: 130.5 MiB/s, write: 125.1 MiB/s
INFO:  41% (13.4 GiB of 32.0 GiB) in 1m 36s, read: 118.4 MiB/s, write: 114.4 MiB/s
INFO:  43% (13.8 GiB of 32.0 GiB) in 1m 39s, read: 135.3 MiB/s, write: 124.7 MiB/s
INFO:  44% (14.1 GiB of 32.0 GiB) in 1m 42s, read: 117.2 MiB/s, write: 112.4 MiB/s
INFO:  45% (14.5 GiB of 32.0 GiB) in 1m 45s, read: 123.2 MiB/s, write: 116.3 MiB/s
INFO:  46% (14.8 GiB of 32.0 GiB) in 1m 48s, read: 99.1 MiB/s, write: 97.2 MiB/s
INFO:  47% (15.1 GiB of 32.0 GiB) in 1m 51s, read: 107.5 MiB/s, write: 100.9 MiB/s
INFO:  48% (15.4 GiB of 32.0 GiB) in 1m 54s, read: 108.8 MiB/s, write: 102.9 MiB/s
INFO:  49% (15.8 GiB of 32.0 GiB) in 1m 57s, read: 128.1 MiB/s, write: 116.2 MiB/s
INFO:  50% (16.1 GiB of 32.0 GiB) in 2m, read: 103.9 MiB/s, write: 98.5 MiB/s
INFO:  51% (16.4 GiB of 32.0 GiB) in 2m 3s, read: 117.2 MiB/s, write: 112.6 MiB/s
INFO:  52% (16.8 GiB of 32.0 GiB) in 2m 6s, read: 117.2 MiB/s, write: 103.5 MiB/s
INFO:  56% (18.1 GiB of 32.0 GiB) in 2m 9s, read: 433.8 MiB/s, write: 168.1 MiB/s
INFO:  60% (19.3 GiB of 32.0 GiB) in 2m 12s, read: 432.8 MiB/s, write: 168.2 MiB/s
INFO:  67% (21.5 GiB of 32.0 GiB) in 2m 15s, read: 748.9 MiB/s, write: 162.9 MiB/s
INFO:  73% (23.6 GiB of 32.0 GiB) in 2m 18s, read: 693.6 MiB/s, write: 165.4 MiB/s
INFO:  79% (25.6 GiB of 32.0 GiB) in 2m 21s, read: 681.5 MiB/s, write: 169.0 MiB/s
INFO: 100% (32.0 GiB of 32.0 GiB) in 2m 24s, read: 2.1 GiB/s, write: 84.9 MiB/s
INFO: backup is sparse: 15.71 GiB (49%) total zero data
INFO: transferred 32.00 GiB in 144 seconds (227.6 MiB/s)
INFO: archive file size: 5.90GB
INFO: adding notes to backup
INFO: Finished Backup of VM 104 (00:02:24)
INFO: Backup finished at 2023-02-13 13:47:17
INFO: Backup job finished successfully
TASK OK
 
