PBS backup: attaching TPM drive failed

tuxis

Not sure if this should be in PVE or PBS, but I installed a new Windows server with all bells and whistles, and I am unable to back up the guest:

Code:
INFO: starting new backup job: vzdump 146 --remove 0 --notes-template '{{guestname}}' --node proxmox2-6 --mode snapshot --storage pbs_extra
INFO: Starting Backup of VM 146 (qemu)
INFO: Backup started at 2022-09-12 10:04:57
INFO: status = running
INFO: VM Name: XXX
INFO: include disk 'scsi0' 'ceph02:vm-146-disk-1' 60G
INFO: include disk 'efidisk0' 'ceph02:vm-146-disk-0' 528K
INFO: include disk 'tpmstate0' 'ceph02:vm-146-disk-2' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/146/2022-09-12T08:04:57Z'
INFO: attaching TPM drive to QEMU for backup
ERROR: attaching TPM drive failed
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 146 failed - attaching TPM drive failed
INFO: Failed at 2022-09-12 10:04:57
INFO: Backup job finished with errors
TASK ERROR: job errors

Is there something I should be changing? Or is this a known issue?

I upgraded the host and shut the guest down and started it again before retrying. Now at:
Code:
root@proxmox2-6:~# pveversion
pve-manager/7.2-7/d0dd0e85 (running kernel: 5.13.19-2-pve)
 
could you include the full output of pveversion -v and the VM config? thanks!
 
Sure:
Code:
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
efidisk0: ceph02:vm-146-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: none,media=cdrom
machine: pc-q35-6.2
memory: 8192
meta: creation-qemu=6.2.0,ctime=1657723669
name: XYZ
net0: virtio=CA:62:91:C6:DE:55,bridge=vmbr1,firewall=1
numa: 1
ostype: win11
scsi0: ceph02:vm-146-disk-1,discard=on,iothread=1,size=60G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=d6438a06-b151-4d4d-8f39-a7098b186168
sockets: 1
tpmstate0: ceph02:vm-146-disk-2,size=4M,version=v2.0
vga: virtio
vmgenid: 81ad85e5-8d2c-4e8c-b4ff-25abcf91070b

Code:
root@proxmox2-6:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-9
pve-kernel-helper: 7.2-9
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
 
works for me here. anything special about your ceph storage? is it reproducible if you cold reboot the VM?

could you try with the following patch applied to /usr/share/perl5/PVE/VZDump/QemuServer.pm? if you trigger the backup using the 'vzdump' CLI you don't need to reload any services, otherwise please reload pveproxy and pvedaemon on the node where the VM is currently running (see the sketch after the patch).

Code:
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 202e53dd..fef67ea7 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -443,7 +443,7 @@ my $attach_tpmstate_drive = sub {
 
     my $drive = "file=$task->{tpmpath},if=none,read-only=on,id=drive-tpmstate0-backup";
     my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"");
-    die "attaching TPM drive failed\n" if $ret !~ m/OK/s;
+    die "attaching TPM drive failed - $ret\n" if $ret !~ m/OK/s;
 };
 
 my $detach_tpmstate_drive = sub {
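
for reference, a rough sketch of re-testing after editing the file - the service names are the ones mentioned above and the vzdump flags are taken from the failing job's log, so adjust the VM ID and storage to your setup:

Code:
# only needed when the backup is triggered via the GUI/API
systemctl reload-or-restart pvedaemon pveproxy
# a CLI run loads the patched module directly, no reload required
vzdump 146 --mode snapshot --storage pbs_extra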
 
I get:
Code:
ERROR: attaching TPM drive failed - drive_add: string expected
ERROR: Try "help drive_add" for more information
INFO: aborting backup job
 
When I include $drive in the die line:
Code:
ERROR:  on (file=rbd:proxmox2/vm-146-disk-2:mon_host=[fd64\:c2a0\:dae5\:0\:10\:10\:10\:11];[fd64\:c2a0\:dae5\:0\:10\:10\:10\:12];[fd64\:c2a0\:dae5\:0\:10\:10\:10\:13]:auth_supported=cephx:id=proxmox2:keyring=/etc/pve/priv/ceph/ceph02.keyring,if=none,read-only=on,id=drive-tpmstate0-backup)
 
thanks! could you also post the output of qm showcmd 146 --pretty?
 
Code:
/usr/bin/kvm \
  -id 146 \
  -name 'XXX,debug-threads=on' \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/146.qmp,server=on,wait=off' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/146.pid \
  -daemonize \
  -smbios 'type=1,uuid=d6438a06-b151-4d4d-8f39-a7098b186168' \
  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' \
  -drive 'if=pflash,unit=1,cache=writeback,format=raw,id=drive-efidisk0,size=540672,file=rbd:proxmox2/vm-146-disk-0:mon_host=[fd64\:c2a0\:dae5\:0\:10\:10\:10\:11];[fd64\:c2a0\:dae5\:0\:10\:10\:10\:12];[fd64\:c2a0\:dae5\:0\:10\:10\:10\:13]:auth_supported=cephx:id=proxmox2:keyring=/etc/pve/priv/ceph/ceph02.keyring:rbd_cache_policy=writeback' \
  -smp '4,sockets=1,cores=4,maxcpus=4' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc 'unix:/var/run/qemu-server/146.vnc,password=on' \
  -no-hpet \
  -cpu 'kvm64,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep' \
  -m 8192 \
  -object 'memory-backend-ram,id=ram-node0,size=8192M' \
  -numa 'node,nodeid=0,cpus=0-3,memdev=ram-node0' \
  -object 'iothread,id=iothread-virtioscsi0' \
  -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
  -device 'vmgenid,guid=81ad85e5-8d2c-4e8c-b4ff-25abcf91070b' \
  -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \
  -chardev 'socket,id=tpmchar,path=/var/run/qemu-server/146.swtpm' \
  -tpmdev 'emulator,id=tpmdev,chardev=tpmchar' \
  -device 'tpm-tis,tpmdev=tpmdev' \
  -device 'virtio-vga,id=vga,bus=pcie.0,addr=0x1' \
  -chardev 'socket,path=/var/run/qemu-server/146.qga,server=on,wait=off,id=qga0' \
  -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
  -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
  -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' \
  -chardev 'spicevmc,id=vdagent,name=vdagent' \
  -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' \
  -spice 'tls-port=61003,addr=::1,tls-ciphers=HIGH,seamless-migration=on' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:31379c3cca91' \
  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' \
  -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' \
  -drive 'file=rbd:proxmox2/vm-146-disk-1:mon_host=[fd64\:c2a0\:dae5\:0\:10\:10\:10\:11];[fd64\:c2a0\:dae5\:0\:10\:10\:10\:12];[fd64\:c2a0\:dae5\:0\:10\:10\:10\:13]:auth_supported=cephx:id=proxmox2:keyring=/etc/pve/priv/ceph/ceph02.keyring,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
  -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap146i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=CA:62:91:C6:DE:55,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' \
  -rtc 'driftfix=slew,base=localtime' \
  -machine 'type=pc-q35-6.2+pve0' \
  -global 'kvm-pit.lost_tick_policy=discard'
 
thanks! I think I have a likely culprit - could you check whether the following patch works?

Code:
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 202e53dd..bf0d1c56 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -442,8 +442,9 @@ my $attach_tpmstate_drive = sub {
     $self->loginfo('attaching TPM drive to QEMU for backup');
 
     my $drive = "file=$task->{tpmpath},if=none,read-only=on,id=drive-tpmstate0-backup";
+    $drive =~ s/\\/\\\\/g;
     my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"");
-    die "attaching TPM drive failed\n" if $ret !~ m/OK/s;
+    die "attaching TPM drive failed - $ret\n" if $ret !~ m/OK/s;
 };
 
 my $detach_tpmstate_drive = sub {

it applies on top of the stock version of the file - if you still have the first patch applied, just adding the line with $drive =~ s/\\/\\\\/g; at that position should be fine as well.
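
if you'd rather apply the diff with patch(1) than edit by hand, something along these lines should work (tpm-escape.diff is just a placeholder name for the diff saved locally):

Code:
# -p1 strips the a/ and b/ prefixes; -d applies the diff relative to the installed module tree
patch --backup -p1 -d /usr/share/perl5 < tpm-escape.diff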
 
Code:
INFO:   5% (3.0 GiB of 60.0 GiB) in 20s, read: 160.0 MiB/s, write: 158.7 MiB/s

Tadaaaa.wav
 
great :) in case you are curious - it only affects TPM state volumes on storages where the path contains escaped characters (in this case "," or ":", since those separate the keys of the drive string and the fields within the file key, respectively), so in practice that probably means mostly, or even only, Ceph with a custom port or IPv6 addresses as monhost ;)
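
a rough standalone illustration of the escaping issue (the path is shortened from your showcmd output; the substitution is the same one the patch adds, which presumably compensates for the monitor command line consuming one level of backslash escaping):

Code:
# the rbd path already contains backslash-escaped ':' characters (IPv6 monitor addresses)
path='rbd:proxmox2/vm-146-disk-2:mon_host=[fd64\:c2a0\:dae5\:0\:10\:10\:10\:11]'
# double every backslash before handing the string to 'drive_add' - each '\:' becomes '\\:'
printf '%s\n' "$path" | sed 's/\\/\\\\/g'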

I'll finalize the patch for inclusion in qemu-server!

edit: submitted: https://lists.proxmox.com/pipermail/pve-devel/2022-September/053937.html
 