Err -5 - Input/output error

bujingai

Hi guys,

I keep getting this error when backing up a specific VM. It only started recently. The other VMs that I have are backed up fine.
Can anyone here advise?
Thanks
 
Hi,

Please post the full output of the backup process, the VM config (qm config <VMID>), and the output of pveversion -v as well.
 
Hi @Moayad,

Thanks for the reply, and I hope you can help. Below is the output:

VM Config
Code:
agent: 1
bios: ovmf
bootdisk: sata0
cores: 2
efidisk0: local-lvm:vm-100-disk-0,size=4M
memory: 4096
name: HassOS
net0: virtio=62:12:44:6A:5D:37,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: local-lvm:vm-100-disk-1,size=38G
scsihw: virtio-scsi-pci
smbios1: uuid=54deeb97-ffb4-40b7-ab7c-dd78d12fd9ae
sockets: 2
usb1: host=2-2,usb3=1
vmgenid: 51ac4675-f7e8-4953-b1d9-17133c51eda8

PVE version
Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-helper: 6.3-1
pve-kernel-5.4: 6.2-6
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-1
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.1-13
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
Hi @Moayad,

Sorry, I forgot the backup output.
Code:
INFO: starting new backup job: vzdump 100 --mode snapshot --storage storageprox --mailnotification always --all 0 --quiet 1 --node chenhi --compress zstd
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2020-11-30 12:48:41
INFO: status = running
INFO: VM Name: HassOS
INFO: include disk 'sata0' 'local-lvm:vm-100-disk-1' 38G
INFO: include disk 'efidisk0' 'local-lvm:vm-100-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/data/backup/dump/vzdump-qemu-100-2020_11_30-12_48_41.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'ee8903e8-cf98-4c70-bfea-53fa093c4cb1'
INFO: resuming VM again
INFO:   1% (611.8 MiB of 38.0 GiB) in  3s, read: 203.9 MiB/s, write: 77.0 MiB/s
INFO:   2% (930.2 MiB of 38.0 GiB) in  6s, read: 106.1 MiB/s, write: 56.6 MiB/s
INFO:   3% (1.2 GiB of 38.0 GiB) in 12s, read: 53.5 MiB/s, write: 53.2 MiB/s
INFO:   4% (1.5 GiB of 38.0 GiB) in 28s, read: 19.8 MiB/s, write: 19.8 MiB/s
INFO:   5% (1.9 GiB of 38.0 GiB) in 37s, read: 43.4 MiB/s, write: 42.7 MiB/s
INFO:   6% (2.3 GiB of 38.0 GiB) in 57s, read: 18.9 MiB/s, write: 18.2 MiB/s
INFO:   7% (2.7 GiB of 38.0 GiB) in  1m  4s, read: 56.4 MiB/s, write: 53.1 MiB/s
INFO:   8% (3.0 GiB of 38.0 GiB) in  1m 12s, read: 48.2 MiB/s, write: 39.2 MiB/s
INFO:   9% (3.4 GiB of 38.0 GiB) in  1m 20s, read: 50.7 MiB/s, write: 49.7 MiB/s
INFO:  10% (3.8 GiB of 38.0 GiB) in  1m 31s, read: 34.7 MiB/s, write: 34.0 MiB/s
INFO:  11% (4.2 GiB of 38.0 GiB) in  1m 41s, read: 40.0 MiB/s, write: 39.4 MiB/s
INFO:  12% (4.6 GiB of 38.0 GiB) in  1m 51s, read: 38.0 MiB/s, write: 37.2 MiB/s
INFO:  13% (5.0 GiB of 38.0 GiB) in  1m 59s, read: 48.2 MiB/s, write: 40.9 MiB/s
INFO:  14% (5.3 GiB of 38.0 GiB) in  2m  5s, read: 63.0 MiB/s, write: 59.7 MiB/s
INFO:  15% (5.7 GiB of 38.0 GiB) in  2m 13s, read: 52.4 MiB/s, write: 47.2 MiB/s
INFO:  16% (6.1 GiB of 38.0 GiB) in  2m 23s, read: 40.3 MiB/s, write: 39.4 MiB/s
INFO:  17% (6.5 GiB of 38.0 GiB) in  2m 31s, read: 45.3 MiB/s, write: 42.2 MiB/s
INFO:  18% (6.9 GiB of 38.0 GiB) in  2m 40s, read: 51.2 MiB/s, write: 40.3 MiB/s
INFO:  19% (7.2 GiB of 38.0 GiB) in  2m 46s, read: 53.3 MiB/s, write: 52.7 MiB/s
INFO:  20% (7.6 GiB of 38.0 GiB) in  2m 56s, read: 40.2 MiB/s, write: 39.0 MiB/s
INFO:  20% (7.7 GiB of 38.0 GiB) in  2m 59s, read: 34.1 MiB/s, write: 33.6 MiB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
ERROR: Backup of VM 100 failed - job failed with err -5 - Input/output error
INFO: Failed at 2020-11-30 12:51:40
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Just an update: I have upgraded to 6.3.2 now and it is still giving me the same error.

Code:
INFO: starting new backup job: vzdump 100 --storage storageprox --node chenhi --compress zstd --mailnotification always --all 0 --mode snapshot --quiet 1
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2020-11-30 16:13:24
INFO: status = running
INFO: VM Name: HassOS
INFO: include disk 'sata0' 'local-lvm:vm-100-disk-1' 38G
INFO: include disk 'efidisk0' 'local-lvm:vm-100-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/data/backup/dump/vzdump-qemu-100-2020_11_30-16_13_24.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '66193421-594f-462c-be99-9cbf74a9bac9'
INFO: resuming VM again
INFO:   1% (437.0 MiB of 38.0 GiB) in  3s, read: 145.7 MiB/s, write: 75.7 MiB/s
INFO:   2% (826.2 MiB of 38.0 GiB) in  6s, read: 129.8 MiB/s, write: 23.3 MiB/s
INFO:   3% (1.2 GiB of 38.0 GiB) in 16s, read: 37.7 MiB/s, write: 37.5 MiB/s
INFO:   4% (1.5 GiB of 38.0 GiB) in 22s, read: 59.8 MiB/s, write: 59.7 MiB/s
INFO:   5% (1.9 GiB of 38.0 GiB) in 38s, read: 26.7 MiB/s, write: 26.3 MiB/s
INFO:   6% (2.3 GiB of 38.0 GiB) in 52s, read: 27.4 MiB/s, write: 26.3 MiB/s
INFO:   7% (2.7 GiB of 38.0 GiB) in  1m  1s, read: 44.9 MiB/s, write: 40.5 MiB/s
INFO:   8% (3.1 GiB of 38.0 GiB) in  1m 12s, read: 32.0 MiB/s, write: 27.1 MiB/s
INFO:   9% (3.4 GiB of 38.0 GiB) in  1m 21s, read: 42.1 MiB/s, write: 41.3 MiB/s
INFO:  10% (3.8 GiB of 38.0 GiB) in  1m 38s, read: 24.5 MiB/s, write: 24.1 MiB/s
INFO:  11% (4.2 GiB of 38.0 GiB) in  1m 48s, read: 36.6 MiB/s, write: 36.1 MiB/s
INFO:  12% (4.6 GiB of 38.0 GiB) in  2m  2s, read: 28.6 MiB/s, write: 28.0 MiB/s
INFO:  13% (5.0 GiB of 38.0 GiB) in  2m  9s, read: 55.8 MiB/s, write: 47.4 MiB/s
INFO:  14% (5.4 GiB of 38.0 GiB) in  2m 20s, read: 39.9 MiB/s, write: 37.3 MiB/s
INFO:  15% (5.7 GiB of 38.0 GiB) in  2m 26s, read: 57.7 MiB/s, write: 52.3 MiB/s
INFO:  16% (6.1 GiB of 38.0 GiB) in  2m 36s, read: 39.5 MiB/s, write: 38.7 MiB/s
INFO:  17% (6.5 GiB of 38.0 GiB) in  2m 46s, read: 35.8 MiB/s, write: 33.3 MiB/s
INFO:  18% (6.9 GiB of 38.0 GiB) in  2m 56s, read: 44.6 MiB/s, write: 34.8 MiB/s
INFO:  19% (7.2 GiB of 38.0 GiB) in  3m  5s, read: 36.3 MiB/s, write: 36.0 MiB/s
INFO:  20% (7.6 GiB of 38.0 GiB) in  3m 16s, read: 35.7 MiB/s, write: 34.6 MiB/s
INFO:  20% (7.7 GiB of 38.0 GiB) in  3m 20s, read: 33.0 MiB/s, write: 32.7 MiB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
ERROR: Backup of VM 100 failed - job failed with err -5 - Input/output error
INFO: Failed at 2020-11-30 16:16:45
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Hi @Moayad,

I tried backing up to another storage and it still gives the same error. The weird thing is that it always gets stuck at 20% before the error.

Code:
INFO: starting new backup job: vzdump 100 --mode snapshot --quiet 1 --all 0 --compress zstd --mailnotification always --node chenhi --storage local
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2020-11-30 16:29:13
INFO: status = running
INFO: VM Name: HassOS
INFO: include disk 'sata0' 'local-lvm:vm-100-disk-1' 38G
INFO: include disk 'efidisk0' 'local-lvm:vm-100-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-qemu-100-2020_11_30-16_29_13.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '5ee557db-fe67-4a69-860e-cde8f38391bd'
INFO: resuming VM again
INFO:   0% (43.4 MiB of 38.0 GiB) in  3s, read: 14.5 MiB/s, write: 585.3 KiB/s
INFO:   1% (437.3 MiB of 38.0 GiB) in  6s, read: 131.3 MiB/s, write: 75.3 MiB/s
INFO:   2% (879.7 MiB of 38.0 GiB) in  9s, read: 147.5 MiB/s, write: 41.0 MiB/s
INFO:   3% (1.2 GiB of 38.0 GiB) in 15s, read: 60.1 MiB/s, write: 59.7 MiB/s
INFO:   4% (1.5 GiB of 38.0 GiB) in 22s, read: 45.7 MiB/s, write: 45.5 MiB/s
INFO:   5% (1.9 GiB of 38.0 GiB) in 29s, read: 61.8 MiB/s, write: 60.9 MiB/s
INFO:   6% (2.3 GiB of 38.0 GiB) in 36s, read: 49.1 MiB/s, write: 47.0 MiB/s
INFO:   7% (2.7 GiB of 38.0 GiB) in 43s, read: 56.4 MiB/s, write: 53.2 MiB/s
INFO:   8% (3.1 GiB of 38.0 GiB) in 49s, read: 67.5 MiB/s, write: 55.5 MiB/s
INFO:   9% (3.5 GiB of 38.0 GiB) in 58s, read: 45.5 MiB/s, write: 44.7 MiB/s
INFO:  10% (3.8 GiB of 38.0 GiB) in  1m  7s, read: 40.3 MiB/s, write: 39.5 MiB/s
INFO:  11% (4.2 GiB of 38.0 GiB) in  1m 17s, read: 37.5 MiB/s, write: 37.0 MiB/s
INFO:  12% (4.6 GiB of 38.0 GiB) in  1m 27s, read: 40.3 MiB/s, write: 39.4 MiB/s
INFO:  13% (4.9 GiB of 38.0 GiB) in  1m 34s, read: 54.6 MiB/s, write: 46.3 MiB/s
INFO:  14% (5.3 GiB of 38.0 GiB) in  1m 40s, read: 67.3 MiB/s, write: 64.0 MiB/s
INFO:  15% (5.7 GiB of 38.0 GiB) in  1m 46s, read: 67.0 MiB/s, write: 60.1 MiB/s
INFO:  16% (6.1 GiB of 38.0 GiB) in  1m 55s, read: 42.7 MiB/s, write: 41.7 MiB/s
INFO:  17% (6.5 GiB of 38.0 GiB) in  2m  3s, read: 50.1 MiB/s, write: 46.9 MiB/s
INFO:  18% (6.9 GiB of 38.0 GiB) in  2m  8s, read: 74.8 MiB/s, write: 55.3 MiB/s
INFO:  19% (7.2 GiB of 38.0 GiB) in  2m 15s, read: 54.0 MiB/s, write: 53.5 MiB/s
INFO:  20% (7.6 GiB of 38.0 GiB) in  2m 25s, read: 40.8 MiB/s, write: 39.6 MiB/s
INFO:  20% (7.7 GiB of 38.0 GiB) in  2m 28s, read: 33.1 MiB/s, write: 32.6 MiB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
ERROR: Backup of VM 100 failed - job failed with err -5 - Input/output error
INFO: Failed at 2020-11-30 16:31:41
INFO: Backup job finished with errors
TASK ERROR: job errors
 
job failed with err -5 - Input/output error
This error means that either the storage or the VM disk is corrupted. As you said,
The other VMs that I have are backed up fine.
Could you check whether your VM image "vm-100-disk-1" is OK?
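For instance, a rough check from the host could look something like this (just a sketch; it assumes the default local-lvm thin pool, and /dev/sdX is a placeholder for the physical disk behind the volume group):
Code:
# Print the actual device path of the VM disk volume
pvesm path local-lvm:vm-100-disk-1

# Read the whole volume and discard the data; an I/O error here points at the storage layer
dd if=$(pvesm path local-lvm:vm-100-disk-1) of=/dev/null bs=1M status=progress

# Check SMART health of the physical disk backing the LVM volume group (replace /dev/sdX)
smartctl -a /dev/sdX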
 
May I know how I can check? The VM is running fine at the moment and only has a problem when backing up.
 
I have the same problem.

I ran xfs_repair on my CentOS VM and it finished with no errors.

Only one VM is affected. Replication, backup, and migration (live and offline) are not working. Replication starts and then fails at a random percentage of the job.
Log:
==
2021-01-27 23:09:00 202-0: start replication job
2021-01-27 23:09:00 202-0: guest => VM 202, running => 18446
2021-01-27 23:09:00 202-0: volumes => local-zfs:vm-202-disk-0,local-zfs:vm-202-state-backup
2021-01-27 23:09:02 202-0: create snapshot '__replicate_202-0_1611788940__' on local-zfs:vm-202-disk-0
2021-01-27 23:09:02 202-0: create snapshot '__replicate_202-0_1611788940__' on local-zfs:vm-202-state-backup
2021-01-27 23:09:02 202-0: using secure transmission, rate limit: none
2021-01-27 23:09:02 202-0: full sync 'local-zfs:vm-202-disk-0' (__replicate_202-0_1611788940__)
2021-01-27 23:09:03 202-0: full send of rpool/data/vm-202-disk-0@backup estimated size is 17.7G
2021-01-27 23:09:03 202-0: send from @backup to rpool/data/vm-202-disk-0@__replicate_202-0_1611788940__ estimated size is 36.9M
2021-01-27 23:09:03 202-0: total estimated size is 17.8G
2021-01-27 23:09:03 202-0: TIME SENT SNAPSHOT rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:04 202-0: 23:09:04 90.1M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:05 202-0: 23:09:05 201M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:06 202-0: 23:09:06 311M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:07 202-0: 23:09:07 415M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:08 202-0: 23:09:08 526M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:09 202-0: 23:09:09 637M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:10 202-0: 23:09:10 745M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:11 202-0: 23:09:11 854M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:12 202-0: 23:09:12 965M rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:13 202-0: 23:09:13 1.04G rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:14 202-0: 23:09:14 1.15G rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:15 202-0: 23:09:15 1.26G rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:16 202-0: 23:09:16 1.35G rpool/data/vm-202-disk-0@backup
2021-01-27 23:09:16 202-0: warning: cannot send 'rpool/data/vm-202-disk-0@backup': Input/output error
2021-01-27 23:09:16 202-0: TIME SENT SNAPSHOT rpool/data/vm-202-disk-0@__replicate_202-0_1611788940__
2021-01-27 23:09:17 202-0: 23:09:17 3.05M rpool/data/vm-202-disk-0@__replicate_202-0_1611788940__
2021-01-27 23:09:18 202-0: 23:09:18 3.05M rpool/data/vm-202-disk-0@__replicate_202-0_1611788940__
2021-01-27 23:09:19 202-0: cannot receive new filesystem stream: invalid backup stream
2021-01-27 23:09:19 202-0: cannot open 'rpool/data/vm-202-disk-0': dataset does not exist
2021-01-27 23:09:19 202-0: command 'zfs recv -F -- rpool/data/vm-202-disk-0' failed: exit code 1
2021-01-27 23:09:19 202-0: warning: cannot send 'rpool/data/vm-202-disk-0@__replicate_202-0_1611788940__': signal received
2021-01-27 23:09:19 202-0: cannot send 'rpool/data/vm-202-disk-0': I/O error
2021-01-27 23:09:19 202-0: command 'zfs send -Rpv -- rpool/data/vm-202-disk-0@__replicate_202-0_1611788940__' failed: exit code 1
2021-01-27 23:09:19 202-0: delete previous replication snapshot '__replicate_202-0_1611788940__' on local-zfs:vm-202-disk-0
2021-01-27 23:09:19 202-0: delete previous replication snapshot '__replicate_202-0_1611788940__' on local-zfs:vm-202-state-backup
2021-01-27 23:09:19 202-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-202-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_202-0_1611788940__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pr2' root@192.168.0.253 -- pvesm import local-zfs:vm-202-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 1
==
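Since zfs send itself reports the Input/output error on the source side, a minimal check on the sending node might look like this (only a sketch, assuming the pool name rpool and the dataset names from the log above):
Code:
# Show pool health and list any datasets/files with known read errors
zpool status -v rpool

# Re-read and checksum every block in the pool (runs in the background; check status again later)
zpool scrub rpool

# Try reading the snapshot locally, independent of SSH and the receiving node
zfs send -Rpv -- rpool/data/vm-202-disk-0@backup > /dev/null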
VM config:
==
root@pr1:~# qm config 202
balloon: 0
boot: order=scsi0
cores: 2
cpu: kvm64
memory: 3072
name: www-email-stats
net0: virtio=BA:49:09:13:43:A9,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
parent: backup
scsi0: local-zfs:vm-202-disk-0,size=20G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=3ebc05f9-cda8-4daa-b8fa-bc0faebb88a4
sockets: 1
startup: order=1
vga: virtio
vmgenid: 00485549-7560-4a33-8529-0fae1d191c6a
==

Any help is much appreciated.
 