Issue with backing up a VM using PBS while the hard drive type in PVE is set to SATA

shahin352

New Member
Aug 21, 2023
I am looking for assistance with backing up a Linux VM whose hard drive type is set to SATA. I have observed that when I back up the VM with PBS, the partitions are gone after a successful backup once I stop/start or reboot the VM. This issue does not occur with the IDE or VirtIO hard drive types. Could you please provide guidance on how to resolve this problem?
 
Hi,
that sounds like a long-standing, elusive bug in QEMU's SATA emulation that we haven't been able to reproduce yet: https://bugzilla.proxmox.com/show_bug.cgi?id=2874 It seems to hit about a dozen people a year, but even with a lot of test VMs and backups, it never triggered for us.

You can use a tool like TestDisk to try and recover the partition table: https://www.cgsecurity.org/wiki/TestDisk
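For reference, TestDisk can be pointed directly at a raw disk image from the PVE host; a minimal sketch, assuming VM ID 100 and the default directory-storage path (adjust both to your setup):

```shell
# Hypothetical example: VM ID and image path are assumptions, not from this thread.
# Stop the VM first so the image is not in use while TestDisk scans it.
qm stop 100

# Run TestDisk against the raw image (interactive: pick the image,
# the partition table type, then "Analyse" to search for lost partitions).
testdisk /var/lib/vz/images/100/vm-100-disk-0.raw
```

Write changes only after reviewing what TestDisk found; ideally work on a copy of the image.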

Can you reproduce the issue with a test VM? If yes, I could come up with a GDB script for you to run, which hopefully would give us more information about what exactly is going on.

EDIT: Please also share the output of pveversion -v and qm config <ID> --current.
 
root@s1:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
pve-manager: 6.4-15 (running version: 6.4-15/af7986e6)
pve-kernel-5.4: 6.4-20
pve-kernel-helper: 6.4-20
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-5
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.14-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-2
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1


root@s1:~# qm config 100 --current
agent: 1
boot: order=sata0
cores: 8
memory: 8192
name: vm100
net0: e1000=56:C9:9F:80:EA:8B,bridge=vmbr0
onboot: 0
ostype: other
sata0: local:100/vm-100-disk-0.raw,size=200G
smbios1: uuid=22b4868f-a04f-47eb-b0d2-d4accddc5ef6
vmgenid: ee4edb9f-ddb4-4079-b4b8-2c79c200c666



Yes, I can reproduce the issue with a test VM.
 
EDIT: I completely forgot to mention that there is a fix now (but it is not yet applied): https://lists.proxmox.com/pipermail/pve-devel/2023-August/058873.html Thanks to your report, I gave it another try and was finally able to reproduce and diagnose the issue. Still, even once the fix lands, it's better to use VirtIO SCSI or VirtIO Block instead of SATA, both for performance reasons and because the emulation code is better maintained.
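Switching the existing disk from SATA to VirtIO SCSI can be done with a few `qm` commands; a hedged sketch for the VM 100 config shown above (back up the config first, and note that the guest OS needs VirtIO drivers — mainline Linux ships them):

```shell
# Sketch only: VM ID, storage name and volume match the config quoted in
# this thread; verify them against "qm config 100" on your own host.
qm stop 100
qm set 100 --delete sata0                           # detaches the disk (it shows up as "unused0")
qm set 100 --scsihw virtio-scsi-pci                 # select the VirtIO SCSI controller
qm set 100 --scsi0 local:100/vm-100-disk-0.raw      # reattach the same volume as scsi0
qm set 100 --boot order=scsi0                       # boot from the reattached disk
qm start 100
```

The disk contents are untouched; only the emulated bus changes.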

root@s1:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
I suggest upgrading to a current version. Proxmox VE 6 has been end-of-life for more than a year and won't receive any fixes. Since the issue is very rare, it's not clear whether the fix will be backported to Proxmox VE 7 either; it should land in Proxmox VE 8, though.

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
 
