backup error - no value given

linhu

I have other VMs that are backed up successfully without this error, but three VMs fail with it:

103: 2024-08-03 02:01:45 INFO: Starting Backup of VM 103 (qemu)
103: 2024-08-03 02:01:45 INFO: status = running
103: 2024-08-03 02:01:45 INFO: VM Name: h1
103: 2024-08-03 02:01:45 INFO: include disk 'scsi0' 'nfs-truenas:103/vm-103-disk-0.qcow2' 80G
103: 2024-08-03 02:01:45 INFO: include disk 'scsi1' 'nfs-truenas:103/vm-103-disk-1.qcow2' 2G
103: 2024-08-03 02:01:52 INFO: backup mode: snapshot
103: 2024-08-03 02:01:52 INFO: ionice priority: 7
103: 2024-08-03 02:01:52 INFO: creating Proxmox Backup Server archive 'vm/103/2024-08-03T00:01:45Z'
103: 2024-08-03 02:01:53 ERROR: no value given at /usr/share/perl5/PVE/Tools.pm line 1808.
103: 2024-08-03 02:01:53 INFO: aborting backup job
103: 2024-08-03 02:01:53 INFO: resuming VM again
103: 2024-08-03 02:01:53 ERROR: Backup of VM 103 failed - no value given at /usr/share/perl5/PVE/Tools.pm line 1808.
 
The messages indicate that the machine's disks are located on a shared NFS storage. Is that NFS storage visible and accessible from the node on which you are running the backup?
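If in doubt, something like the following could be run on the node that executes the backup to confirm the storage is active and the images are reachable (a quick sketch only, assuming the default mount point under /mnt/pve/):
Code:
# check that the NFS storage is active and mounted on this node
pvesm status --storage nfs-truenas
# check that the image files of VM 103 are visible through the mount point
ls -l /mnt/pve/nfs-truenas/images/103/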
 
Show us the configuration of the faulty machine and of a working machine. You can get it with the command
Code:
qm config 103
And additionally show how the storage is configured:
Code:
cat /etc/pve/storage.cfg
Maybe something can be deduced by comparing them.
 
Code:
# qm config 103
boot: order=scsi0;ide2
cores: 2
cpu: host
description: lan 192.168.1.32%0Awan 37.128.222.26
ide2: none,media=cdrom
memory: 8096
meta: creation-qemu=8.0.2,ctime=1694683180
name: h1
net0: virtio=C6:E8:F6:2A:A3:6C,bridge=vmbr0,firewall=1
net1: virtio=16:3B:DA:04:3A:EC,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: nfs-truenas:103/vm-103-disk-0.qcow2,iothread=1,size=80G
scsi1: nfs-truenas:103/vm-103-disk-1.qcow2,iothread=1,size=2G
scsihw: virtio-scsi-single
smbios1: uuid=d0122a45-ff8a-44d8-8537-ff8c5c3f5593
sockets: 2
startup: order=10
vmgenid: 234703bd-1726-469a-ac6a-ff83a83acf53

###working###
qm config 102
boot: order=scsi0;net0
cores: 2
cpu: host
description: lan 192.168.1.28
memory: 4096
meta: creation-qemu=8.0.2,ctime=1694518785
name: ns3
net0: virtio=02:95:2C:A6:0C:16,bridge=vmbr0,firewall=1
net1: virtio=DE:BC:65:75:1F:8D,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: nfs-truenas:102/vm-102-disk-0.qcow2,iothread=1,size=16G
scsihw: virtio-scsi-single
smbios1: uuid=ac01ca3b-187d-4bbe-a607-6350f5cb2d81
sockets: 1
startup: order=3
vmgenid: ce1899bc-d330-4f26-a391-59d3ddb70068
 
Code:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,images,rootdir,backup
        prune-backups keep-all=1
        shared 0

nfs: backup
        export /prox-backup
        path /mnt/pve/backup
        server 192.168.1.231
        content backup,snippets
        prune-backups keep-all=1

nfs: nfs-truenas
        export /mnt/pool2/dataset1
        path /mnt/pve/nfs-truenas
        server 192.168.1.234
        content vztmpl,backup,snippets,rootdir,iso,images
        prune-backups keep-all=1

zfspool: zpool1
        pool zpool1
        content rootdir,images
        mountpoint /zpool1
        nodes prox3,prox2,prox4,prox1
        sparse 0

pbs: proxbackup1
        datastore vmbackup1
        server 192.168.1.154
        content backup
        prune-backups keep-all=1
        username root@pam
 
Hi,
please share the output of pveversion -v. Are you using backup fleecing? If yes, to what kind of storage?

What is the output of the following?
Code:
qemu-img info $(pvesm path nfs-truenas:103/vm-103-disk-0.qcow2) --output json
qemu-img info $(pvesm path nfs-truenas:103/vm-103-disk-1.qcow2) --output json
 
Backup fleecing: yes, the fleecing storage is the local zpool1 (ZFS).
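For context, fleecing is enabled per backup job; a minimal sketch of what an equivalent manual run could look like, assuming the vzdump property-string syntax introduced with PVE 8.2 and using the storage names from this thread:
Code:
# illustrative manual backup with fleecing images placed on the local ZFS pool
vzdump 103 --storage proxbackup1 --fleecing enabled=1,storage=zpool1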
---
Code:
qemu-img info $(pvesm path nfs-truenas:103/vm-103-disk-0.qcow2) --output json
{
    "children": [
        {
            "name": "file",
            "info": {
                "children": [
                ],
                "virtual-size": 85589295104,
                "filename": "/mnt/pve/nfs-truenas/images/103/vm-103-disk-0.qcow2",
                "format": "file",
                "actual-size": 65485951488,
                "format-specific": {
                    "type": "file",
                    "data": {
                    }
                },
                "dirty-flag": false
            }
        }
    ],
    "virtual-size": 85899345920,
    "filename": "/mnt/pve/nfs-truenas/images/103/vm-103-disk-0.qcow2",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 65485951488,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "dirty-flag": false
}
---
pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-3-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.8-4
proxmox-kernel-6.8.8-4-pve-signed: 6.8.8-4
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5.13-3-pve-signed: 6.5.13-3
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.2-1
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.3
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
 
qemu-img info $(pvesm path nfs-truenas:103/vm-103-disk-0.qcow2) --output json
What about the other disk? AFAICT, the error happens when the size of the disk cannot be determined. The size is needed to allocate a fleecing image with the same size.
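As a side note, the size lookup can be double-checked by hand; a small sketch (assuming jq is installed) that prints the virtual size a fleecing image would need and complains if nothing comes back:
Code:
# read the disk's virtual size from the qemu-img JSON output and fail loudly if it is missing
size=$(qemu-img info "$(pvesm path nfs-truenas:103/vm-103-disk-1.qcow2)" --output json | jq -r '."virtual-size"')
if [ -z "$size" ] || [ "$size" = "null" ]; then
    echo "no size value returned for vm-103-disk-1"
else
    echo "virtual-size: $size bytes"
fi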
 
What about the other disk? AFAICT, the error happens when the size of the disk cannot be determined. The size is needed to allocate a fleecing image with the same size.
Code:
qemu-img info $(pvesm path nfs-truenas:103/vm-103-disk-1.qcow2) --output json
{
    "children": [
        {
            "name": "file",
            "info": {
                "children": [
                ],
                "virtual-size": 2148073472,
                "filename": "/mnt/pve/nfs-truenas/images/103/vm-103-disk-1.qcow2",
                "format": "file",
                "actual-size": 907293184,
                "format-specific": {
                    "type": "file",
                    "data": {
                    }
                },
                "dirty-flag": false
            }
        }
    ],
    "virtual-size": 2147483648,
    "filename": "/mnt/pve/nfs-truenas/images/103/vm-103-disk-1.qcow2",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 907293184,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
            "compression-type": "zlib",
            "lazy-refcounts": false,
            "refcount-bits": 16,
            "corrupt": false,
            "extended-l2": false
        }
    },
    "dirty-flag": false
}
 
The output looks okay. Does the error occur every time for this VM? Maybe there's an issue triggered by the increased load during backup. I wasn't able to reproduce the issue yet.

What is the output of
Code:
pvesh get /nodes/<your node name here>/storage/nfs-truenas/content/nfs-truenas:103/vm-103-disk-0.qcow2
pvesh get /nodes/<your node name here>/storage/nfs-truenas/content/nfs-truenas:103/vm-103-disk-1.qcow2
 
Code:
┌────────┬──────────────────────────────────────────────────────┐
│ key    │ value                                                │
╞════════╪══════════════════════════════════════════════════════╡
│ format │ qcow2                                                │
├────────┼──────────────────────────────────────────────────────┤
│ path   │ /mnt/pve/nfs-truenas/images/103/vm-103-disk-0.qcow2  │
├────────┼──────────────────────────────────────────────────────┤
│ size   │ 80.00 GiB                                            │
├────────┼──────────────────────────────────────────────────────┤
│ used   │ 61.02 GiB                                            │
└────────┴──────────────────────────────────────────────────────┘

Code:
┌────────┬──────────────────────────────────────────────────────┐
│ key    │ value                                                │
╞════════╪══════════════════════════════════════════════════════╡
│ format │ qcow2                                                │
├────────┼──────────────────────────────────────────────────────┤
│ path   │ /mnt/pve/nfs-truenas/images/103/vm-103-disk-1.qcow2  │
├────────┼──────────────────────────────────────────────────────┤
│ size   │ 2.00 GiB                                             │
├────────┼──────────────────────────────────────────────────────┤
│ used   │ 831.62 MiB                                           │
└────────┴──────────────────────────────────────────────────────┘
 
I have enabled fleecing for the backup again and will check the backups tomorrow.
 
With the last backups, it was not the same VMs that failed.
The backup runs without errors when fleecing is disabled, so I will deactivate fleecing again.
 
With the last backups, it was not the same VMs that failed.
So I'd guess there is an issue gathering the size from the storage during high load. You could try configuring a bandwidth limit or reducing the maximum workers (in the Advanced tab for the backup job in the UI).
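For a quick test outside the UI, roughly equivalent limits can also be passed to a manual run; a sketch with example values only, assuming the usual vzdump options (bwlimit is given in KiB/s):
Code:
# manual backup of VM 103 limited to ~100 MiB/s and 4 concurrent workers
vzdump 103 --storage proxbackup1 --bwlimit 102400 --performance max-workers=4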
 
It did not help. I will try to upgrade the storage server to SSDs and then give this another try.
 
