Veeam error Failed to map disks (since upgrade v8 to v9)

pmt_cnq
Aug 4, 2025
We did the upgrade from v8 to v9 in our test lab. So far, everything is working fine, but the backup jobs with Veeam are now failing. (We use a Veeam worker VM in Proxmox, which is tied to an external Windows server that hosts the Veeam backup jobs.)

I get the following error: "Failed to perform backup: Failed to map disks"

The VM's disk is located on a SAN (iSCSI).
 
Hi
I have the same problem since today. After an upgrade to Proxmox 9, new VMs with QEMU machine version 10.0 are no longer backed up by Veeam, failing with the error "Failed to map disks".
My VMs are located on a SAN (FC) too.

But when I change the QEMU machine version back to 9.2+pve1, the backup works fine again.
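For reference, pinning the machine version can be done from the CLI. This is a sketch assuming a VM with ID 100 and the q35 machine type; the VM ID and machine type must be adjusted to your setup:

```shell
# Pin the VM to machine version 9.2 (pve1 revision); use
# pc-i440fx-9.2+pve1 instead if the VM uses the default i440fx type.
qm set 100 --machine pc-q35-9.2+pve1

# Verify the setting in the VM configuration.
qm config 100 | grep machine
```

The change takes effect the next time the VM is started (a running VM keeps its current machine version until it is fully stopped and started again).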
 
Hi,
But when I change the QEMU machine version back to 9.2+pve1, the backup works fine again.
thank you for the report! That confirms that it's related to the internal change to -blockdev, because that is in effect for VMs starting with machine version 10.0, but not earlier. We'll add a note to the known issues section of the upgrade guide to let people know that Veeam backup is not compatible with these changes yet.
 
OK, this works for backup operations.
But restoring an entire VM gives the same error. The new VM's machine type is "latest" and cannot be changed...
 
OK, this works for backup operations.
But restoring an entire VM gives the same error. The new VM's machine type is "latest" and cannot be changed...

Does this refer to the worker VM, rather than the VM that is to be restored? Or what exactly do you mean?
 
Hi
I have the same problem since today. After upgrading to Proxmox 9, new VMs with QEMU version 10.0 are no longer backed up by Veeam, failing with the error "Failed to map disks".
My VMs are also located on a SAN (FC).

But when I change the QEMU machine version back to 9.2+pve1, the backup works fine again.
Here is the documentation link for changing the VM's QEMU machine version: https://pve.proxmox.com/wiki/QEMU_Machine_Version_Upgrade

This worked for me: I changed the setting on just one of my PVE VMs, and its backup then completed successfully via VBR!

After opening a support case with Veeam, I was told that there is an incompatibility between the PVE and VBR versions and that they are working on ongoing improvements.
 
Moin, I'm not sure if my issue is related to this one, but since it is about QEMU (Windows 11) and the PVE upgrade from 8 to 9, and I don't want to create unnecessary threads, I'll post here. Please don't blame me if it doesn't fit; I haven't found much about this topic, and PVE 8 to 9 is, let's say, still quite fresh ;)

About a week ago I did the upgrade, and everything seemed fine until I noticed that my cyclic PBS backup (PBS also upgraded to 4, running as an LXC) did not finish for the Windows VM. After some testing, it seems unrelated to PBS, as a direct backup via PVE to a mounted storage got stuck too.

What's happening: if I try to back up this VM, the job simply gets stuck. It says it will transfer the backup but never starts it (0 B/s), and that's it.
The VM is locked and nothing happens for hours.
The VM itself runs fine. It only has to be unlocked manually via
Bash:
qm unlock 5000
and forcefully shut down via
Bash:
qm shutdown 5000 --forceStop --skiplock

The following screenshot was created on 03.09.2025 at 04:33, more than 24 hours after the backup started:
[screenshot attachment of the stuck backup task]

The log of the direct backup doesn't help me either:
Sep 03 09:07:08 MCP-Server pvedaemon[379258]: <root@pam> starting task UPID:MCP-Server:00083C5B:01FCCD04:68B7E91C:vzdump:5000:root@pam:
Sep 03 09:07:08 MCP-Server pvedaemon[539739]: INFO: starting new backup job: vzdump 5000 --storage MCP-Drive --node MCP-Server --notification-mode notification-system --notes-template '{{guestname}}' --compress zstd --remove 0 --mode snapshot
Sep 03 09:07:08 MCP-Server pvedaemon[539739]: INFO: Starting Backup of VM 5000 (qemu)
Sep 03 09:07:09 MCP-Server kernel: intel_vgpu_mdev 00000000-0000-0000-0000-000000005000: Adding to iommu group 15
Sep 03 09:07:09 MCP-Server systemd[1]: Started 5000.scope.
Sep 03 09:07:09 MCP-Server kernel: audit: type=1400 audit(1756883229.123:326): apparmor="DENIED" operation="capable" class="cap" profile="swtpm" pid=539768 comm="swtpm" capability=21 capname="sys_admin"
Sep 03 09:07:09 MCP-Server kernel: tap5000i0: entered promiscuous mode
Sep 03 09:07:09 MCP-Server kernel: vmbr0: port 24(fwpr5000p0) entered blocking state
Sep 03 09:07:09 MCP-Server kernel: vmbr0: port 24(fwpr5000p0) entered disabled state
Sep 03 09:07:09 MCP-Server kernel: fwpr5000p0: entered allmulticast mode
Sep 03 09:07:09 MCP-Server kernel: fwpr5000p0: entered promiscuous mode
Sep 03 09:07:09 MCP-Server kernel: vmbr0: port 24(fwpr5000p0) entered blocking state
Sep 03 09:07:09 MCP-Server kernel: vmbr0: port 24(fwpr5000p0) entered forwarding state
Sep 03 09:07:09 MCP-Server kernel: fwbr5000i0: port 1(fwln5000i0) entered blocking state
Sep 03 09:07:09 MCP-Server kernel: fwbr5000i0: port 1(fwln5000i0) entered disabled state
Sep 03 09:07:09 MCP-Server kernel: fwln5000i0: entered allmulticast mode
Sep 03 09:07:09 MCP-Server kernel: fwln5000i0: entered promiscuous mode
Sep 03 09:07:09 MCP-Server kernel: fwbr5000i0: port 1(fwln5000i0) entered blocking state
Sep 03 09:07:09 MCP-Server kernel: fwbr5000i0: port 1(fwln5000i0) entered forwarding state
Sep 03 09:07:10 MCP-Server kernel: fwbr5000i0: port 2(tap5000i0) entered blocking state
Sep 03 09:07:10 MCP-Server kernel: fwbr5000i0: port 2(tap5000i0) entered disabled state
Sep 03 09:07:10 MCP-Server kernel: tap5000i0: entered allmulticast mode
Sep 03 09:07:10 MCP-Server kernel: fwbr5000i0: port 2(tap5000i0) entered blocking state
Sep 03 09:07:10 MCP-Server kernel: fwbr5000i0: port 2(tap5000i0) entered forwarding state
Sep 03 09:07:10 MCP-Server pvedaemon[539739]: VM 5000 started with PID 539773.

What can I do here?
How can I maybe get more detailed information what is going wrong?
Does anybody have a solution for that?

Btw. my Linux VMs and LXCs back up fine (also the ones with passed-through internal CPU graphics; of course, my Windows VM has this too).
Right now this VM has no recent backups at all :(

Appreciate any help!
 
Hi,
Moin, I'm not sure if my issue is related to this one, but since it is about QEMU (Windows 11) and the PVE upgrade from 8 to 9, and I don't want to create unnecessary threads, I'll post here. Please don't blame me if it doesn't fit; I haven't found much about this topic, and PVE 8 to 9 is, let's say, still quite fresh ;)
tip for the future: it's better to create new threads if you are not sure it's the same issue. Otherwise, it'll just become more difficult to follow the discussions.
Btw. my Linux VMs and LXCs back up fine (also the ones with passed-through internal CPU graphics; of course, my Windows VM has this too).
Right now this VM has no recent backups at all :(
Please share the VM configuration, i.e. qm config 5000 and check if the backup works without the passthrough.
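A quick way to test without the passthrough (assuming the VM ID 5000 and the `hostpci0` entry from this thread; back up the config line first so it can be re-added afterwards) might be:

```shell
# Keep a copy of the current VM configuration for reference.
cp /etc/pve/qemu-server/5000.conf /root/5000.conf.bak

# Remove the PCI passthrough entry, then retry the backup.
qm set 5000 --delete hostpci0

# Afterwards, re-add the original hostpci0 line from the backed-up config.
```

The `--delete` option of `qm set` removes the named setting from the VM configuration; the VM must be restarted for the change to apply.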
 
Hi,

tip for the future: it's better to create new threads if you are not sure it's the same issue. Otherwise, it'll just become more difficult to follow the discussions.

Please share the VM configuration, i.e. qm config 5000 and check if the backup works without the passthrough.
Moin Fiona, thanks. Yeah, I figured that if it doesn't fit, a moderator will move it. Anyway, I'll keep it in mind :)

Config:
agent: 1
bios: ovmf
boot: order=scsi0;ide0;net0
cores: 4
cpu: host
description: Windows 11 Professional
efidisk0: local-btrfs:5000/vm-5000-disk-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4,pcie=1
ide0: none,media=cdrom
machine: pc-q35-10.0
memory: 8192
meta: creation-qemu=9.0.2,ctime=1735371428
name: WorkBench
net0: virtio=EE:0A:CE:28:09:F3,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-btrfs:5000/vm-5000-disk-1.raw,cache=writethrough,discard=on,iothread=1,size=150G
scsihw: virtio-scsi-single
smbios1: uuid=5b354a7e-915e-4364-91d3-fc6292c0941d
sockets: 1
tags: dev;prd
tpmstate0: local-btrfs:5000/vm-5000-disk-2.raw,size=4M,version=v2.0
vmgenid: 80203fee-ce58-48aa-bdd4-6f5c7cf9634b
A backup to the local storage just failed too (No transfer for about 10 min.).

VirtIO within the VM is updated to the latest virtio-win-0.1.271.iso.
 