Backups and snapshots corrupting filesystems on qcows

croxis

Member
Nov 16, 2020
I am new to Proxmox and virtualization, so thanks for your patience with me. I have spent my weekend reinstalling a Debian 10 VM countless times. The goal is to run Home Assistant in Docker on this VM. Whenever I create a backup or a snapshot, the filesystem inside the qcow2 gets damaged (specifically, the ext4 journal gets deleted, among other errors), regardless of whether the VM is online or offline. Restoring from the backup or snapshot does not help, as the restored image has the same corruption. Obviously I would like to be able to back up my VM; where should I start looking to solve this problem?
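For reference, the journal damage shows up when e2fsck is run from a rescue environment inside the guest, roughly like this (the device name is just an example, it may differ on other setups):

fsck.ext4 -fv /dev/sda1    # forced, verbose check; this is where the deleted journal shows up
tune2fs -j /dev/sda1       # recreates a missing ext4 journal, but the corruption comes back after the next backup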
 
pveversion
pve-manager/6.2-15/48bd51b6 (running kernel: 5.4.73-1-pve)

qm config 105
boot: order=scsi0
cores: 3
ide2: iso-images:iso/debian-10.6.0-amd64-netinst.iso,media=cdrom
memory: 4096
name: homeassistant
net0: virtio=56:F2:CC:91:3F:AB,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: vms:105/vm-105-disk-0.qcow2,size=64G
scsihw: virtio-scsi-pci
smbios1: uuid=cd842882-7d0b-466a-a312-b04d55c9cfcd
sockets: 1
usb0: host=0658:0200
vmgenid: ef1cbba1-5f81-4fa1-b2b4-4e0f1ebb0f6a
 
Try a VirtIO disk instead of a SCSI disk, install the QEMU guest agent inside the VM, and enable it in the Proxmox VM options; a sketch of the commands follows below.
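On the host that could look roughly like this (VMID 105 taken from your config; detaching the disk first leaves it as unused0 before it is re-attached on the VirtIO bus):

qm set 105 --delete scsi0
qm set 105 --virtio0 vms:105/vm-105-disk-0.qcow2
qm set 105 --boot order=virtio0
qm set 105 --agent enabled=1

And inside the guest the agent still has to be installed and started:

apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent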

What kind of pool do you have (HDD or SSD, any kind of software or hardware RAID)?
 
This happened with a virtio configuration too; I have not tried the guest agent yet.
The VMs are on a directory storage backed by a locally mounted btrfs RAID1 array with the mount options
defaults,compress=zstd,noatime,autodefrag,space_cache,subvol=vms,nodatacow
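The corresponding fstab entry looks roughly like this (UUID and mount point are placeholders):

UUID=<array-uuid>  /mnt/vms  btrfs  defaults,compress=zstd,noatime,autodefrag,space_cache,subvol=vms,nodatacow  0  0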
 
Hmm, I don't know for sure, but this might be an issue with combining a copy-on-write filesystem and qcow2. Your setup is not supported, and it is definitely a questionable choice performance-wise.

Is there any reason why you prefer btrfs + qcow2 over ZFS directly?
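One way to narrow it down would be to check whether the qcow2 container itself is consistent, independent of the guest filesystem. With the VM stopped, something like this (the directory-storage path is an assumption, adjust it to your mount point):

qemu-img check /mnt/vms/images/105/vm-105-disk-0.qcow2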

Edit: "scsi0" doesn't look like a virtio device?!?
 
Sorry for the delay. I've been doing Zoom parent-teacher conferences for two days!

The nodatacow flag disables the copy-on-write feature. However, I just read on the Arch wiki that it doesn't work the way I configured it in fstab: nodatacow applies to the whole filesystem, not per subvolume, so the option on my subvol=vms entry was silently ignored. Knowing that now, I wouldn't be surprised if CoW is clobbering things. I'll need to fix it and test things out.
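The per-directory workaround appears to be the C (No_COW) file attribute instead of the mount option; it only affects files created after it is set (the path below is my mount point):

chattr +C /mnt/vms    # new files under this directory are created without CoW
lsattr -d /mnt/vms    # verify: the C flag should be listed

Existing qcow2 images would have to be copied into the directory anew to pick up the attribute.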

The reason for btrfs is, to gloss over some details, that I have to move the hard drives from my old server to my new one a disk at a time (copy files, move a disk, copy more files, move a disk). I would have loved to use ZFS, but as far as I can tell from its documentation it can't add and remove arbitrary disks from an array the way btrfs can (roughly the workflow sketched below). Otherwise I would have preferred ZFS.
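The per-disk migration step looks roughly like this with btrfs (device names and mount point are examples):

btrfs device add /dev/sdX /mnt/vms      # grow the array with a freshly moved disk
btrfs balance start /mnt/vms            # spread existing data across all devices
btrfs device remove /dev/sdY /mnt/vms   # shrink the array again when a disk has to come out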

scsi0 is not a virtio device, but I used virtio in past configurations.
 
