[SOLVED] CT backs up fine but all VM's fail

naffhouse

hey experts,

I just installed my PBS server and I'm having an issue backing up VMs, but my sole CT backs up without any issue.

Here's the error that I get when trying to back up my VMs:

```

INFO: starting new backup job: vzdump 102 --mode snapshot --node pve --storage backup --remove 0 --notes-template '{{guestname}}' --notification-mode auto
INFO: Starting Backup of VM 102 (qemu)
INFO: Backup started at 2024-02-26 14:08:49
INFO: status = running
INFO: VM Name: Auth
INFO: include disk 'scsi0' 'bigboy:102/vm-102-disk-0.qcow2' 100G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/102/2024-02-26T21:08:49Z'
ERROR: VM 102 qmp command 'backup' failed - backup register image failed: command error: ENODEV: No such device
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 102 failed - VM 102 qmp command 'backup' failed - backup register image failed: command error: ENODEV: No such device
INFO: Failed at 2024-02-26 14:08:49
INFO: Backup job finished with errors
INFO: notified via target `mail-to-root`
TASK ERROR: job errors

```

Here's the output from my CT showing I'm able to back up using the same datastore.


```

INFO: starting new backup job: vzdump 105 --remove 0 --notification-mode auto --notes-template '{{guestname}}' --node pve --mode snapshot --storage backup
INFO: Starting Backup of VM 105 (lxc)
INFO: Backup started at 2024-02-26 13:58:39
INFO: status = running
INFO: CT Name: pihole
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
Logical volume "snap_vm-105-disk-0_vzdump" created.
INFO: creating Proxmox Backup Server archive 'ct/105/2024-02-26T20:58:39Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp52422_105/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 105 --backup-time 1708981119 --repository root@pam@192.168.1.230:backup
INFO: Starting backup: ct/105/2024-02-26T20:58:39Z
INFO: Client name: pve
INFO: Starting backup protocol: Mon Feb 26 13:58:40 2024
INFO: No previous manifest available.
INFO: Upload config file '/var/tmp/vzdumptmp52422_105/etc/vzdump/pct.conf' to 'root@pam@192.168.1.230:8007:backup' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.230:8007:backup' as root.pxar.didx
INFO: root.pxar: had to backup 1.08 GiB of 1.08 GiB (compressed 381.349 MiB) in 19.48s
INFO: root.pxar: average backup speed: 56.752 MiB/s
INFO: Uploaded backup catalog (525.202 KiB)
INFO: Duration: 19.62s
INFO: End Time: Mon Feb 26 13:59:00 2024
INFO: adding notes to backup
INFO: cleanup temporary 'vzdump' snapshot
Logical volume "snap_vm-105-disk-0_vzdump" successfully removed.
INFO: Finished Backup of VM 105 (00:00:22)
INFO: Backup finished at 2024-02-26 13:59:01
INFO: Backup job finished successfully
INFO: notified via target `mail-to-root`
TASK OK

```

I am currently using MergerFS on my bare-metal PBS server. I have tried creating datastores in directories that are not using MergerFS, and those fail as well, so I don't believe MergerFS is the issue.

Any help is appreciated, thank you!
 
Hi,
please share the output of pveversion -v and qm config 102. What kind of filesystem is used by the bigboy storage? Does backing up a VM without disks or with a disk on a different storage work?
 
I should have mentioned that this issue is not specific to any VM. All of my VMs fail, but my one CT backed up without any issue.

pveversion -v on my Proxmox node:

```
root@pve:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-9
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2
```
here's the qm config:

```
root@pve:~# qm config 102
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
ide2: local:iso/ubuntu-22.04.2-live-server-amd64.iso,media=cdrom,size=1929660K
memory: 8048
meta: creation-qemu=8.1.5,ctime=1708780820
name: Auth
net0: virtio=BC:24:11:CF:FC:84,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: bigboy:102/vm-102-disk-0.qcow2,discard=on,iothread=1,size=100G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=a6798833-9f8e-4464-b792-357680ce8480
sockets: 1
vmgenid: e5f6fe05-1359-4bad-a270-586a0e97693c
```
thanks for your help!!!
 
I tried creating different datastores on separate disks, and one of the backups got up to 35% and then failed with a similar error.

I thought the issue might be because I am using MergerFS on my PBS box, but the backup failed even with a datastore directory that is not on MergerFS.

How do I check what kind of filesystem is being used by the bigboy storage?
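
(For anyone else looking: the path behind a directory storage is listed in /etc/pve/storage.cfg on the PVE node, and findmnt or df shows which filesystem backs that path. A minimal sketch, assuming bigboy is a directory storage; the mount point below is only an example, not the real one:)

```
# Look up the storage definition on the PVE node
grep -A 3 'bigboy' /etc/pve/storage.cfg

# Check which filesystem backs the configured path
# (/mnt/bigboy is an example path, not the actual one)
findmnt --target /mnt/bigboy
df -T /mnt/bigboy
```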
 
I just tried backing up a different VM that I don't believe uses bigboy; here is the output:

```

INFO: starting new backup job: vzdump 101 --node pve --mode snapshot --storage backup --remove 0 --notification-mode auto --notes-template '{{guestname}}'
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2024-02-27 09:37:38
INFO: status = running
INFO: VM Name: haos10.1
INFO: include disk 'scsi0' 'local-lvm:vm-101-disk-1' 32G
INFO: include disk 'efidisk0' 'local-lvm:vm-101-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/101/2024-02-27T16:37:38Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
ERROR: VM 101 qmp command 'backup' failed - backup register image failed: command error: ENODEV: No such device
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 101 failed - VM 101 qmp command 'backup' failed - backup register image failed: command error: ENODEV: No such device
INFO: Failed at 2024-02-27 09:37:40
INFO: Backup job finished with errors
INFO: notified via target `mail-to-root`
TASK ERROR: job errors
```
 
Hi Fiona, any ideas?

Should I try reformatting both disks in my PBS system and reinstalling PBS, or do you think it's my PVE causing issues?

It's wild because there isn't a ton of documentation online about this specific error, and from what I've googled, there doesn't seem to be an obvious fix.
 
For anyone wondering: in the MergerFS mount options in /etc/fstab, I had to add the option cache.files=partial for the MergerFS mount. Now backups work successfully.
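
A minimal sketch of what such an fstab entry could look like; the branch paths, mount point, and the extra options are placeholders for illustration, not my exact line:

```
# /etc/fstab -- example mergerfs pool entry (paths and extra options are placeholders)
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  cache.files=partial,dropcacheonclose=true,allow_other  0  0
```

After editing /etc/fstab, the pool needs to be remounted (or the box rebooted) for the new option to take effect.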
 
Glad to hear you found a solution :)
Please use the Edit thread button at the top and select the [SOLVED] prefix. This helps other users find solutions more quickly.

I am currently using MergerFS on my bare-metal PBS server. I have tried creating datastores in directories that are not using MergerFS, and those fail as well, so I don't believe MergerFS is the issue.
Hmm, but what about the non-MergerFS directory datastore? Was the error exactly the same?
 
