[SOLVED] VM io-error, yellow triangle

stprox

Dec 10, 2021
Hi everyone,

A few days ago some of my VMs started getting a yellow triangle, and hovering over it says "status: io-error". I have learnt that this can mean running out of space, but I haven't found any culprit. All VMs reside on a Synology, connected via NFS.
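
For reference, this is roughly how I looked for a full disk, both on the NFS mounts (PVE mounts NFS storages under /mnt/pve/<storage name>) and on the local thin pool:

Code:
# free space on the NFS-backed storages as the host sees it
df -h /mnt/pve/VMlinux /mnt/pve/VMwin
# fill level (Data%) of the local thin pool
lvs pve/data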

At the moment everything is green. But if I create a new VM, the new one soon gets an io-error, and some other VMs stop responding as well. The remaining VMs seem to run fine.

I am running PVE 7.1-8.

lvs:
Code:
  LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- <338.60g             0.00   0.50                           
  root pve -wi-ao----   96.00g                                                   
  swap pve -wi-ao----    8.00g

pvesm status:
Code:
Name             Type     Status           Total            Used       Available        %
Moria             pbs     active      3777797632       700177920      3077619712   18.53%
VMlinux           nfs     active     11238298112      2017643392      9220654720   17.95%
VMwin             nfs     active     11238298112      2017643392      9220654720   17.95%
local             dir     active        98559220        11831228        81678444   12.00%
local-lvm     lvmthin     active       355045376               0       355045376    0.00%

Regards,

Stefan
 
Have you checked the journal (journalctl) for any disk or I/O errors?
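
Something like this, adjusted to the time window in which the io-error shows up (the keyword filter is just a starting point):

Code:
# warnings and errors of the current boot
journalctl -b -p warning
# or narrow it down to the relevant time window
journalctl --since "2021-12-09 12:00" --until "2021-12-09 12:15" | grep -iE 'i/o error|nfs|blk|timeout'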
 
Thanks for your reply. I don't see any I/O errors in journalctl; this is what I get around the time the errors occur:

Code:
Dec 09 12:09:43 argo pvedaemon[1135]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:09:29 argo pvedaemon[2395699]: <root@pam> successful auth for user 'root@pam'
Dec 09 12:09:20 argo pvedaemon[2395699]: <root@pam> end task UPID:argo:002493AE:030CD61E:61B1E3DD:aptupdate::root@pam: OK
Dec 09 12:09:18 argo pvedaemon[2397102]: update new package list: /var/lib/pve-manager/pkgupdates
Dec 09 12:09:17 argo pvedaemon[2395699]: <root@pam> starting task UPID:argo:002493AE:030CD61E:61B1E3DD:aptupdate::root@pam:
Dec 09 12:08:59 argo pvedaemon[1135]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:08:53 argo pvedaemon[2395699]: VM 130 qmp command failed - VM 130 qmp command 'guest-ping' failed - got timeout
Dec 09 12:08:51 argo pvedaemon[1133]: <root@pam> end task UPID:argo:0024928C:030CC85F:61B1E3BA:vncproxy:130:root@pam: OK
Dec 09 12:08:42 argo pvedaemon[1133]: <root@pam> starting task UPID:argo:0024928C:030CC85F:61B1E3BA:vncproxy:130:root@pam:
Dec 09 12:08:42 argo pvedaemon[2396812]: starting vnc proxy UPID:argo:0024928C:030CC85F:61B1E3BA:vncproxy:130:root@pam:
Dec 09 12:08:34 argo pvedaemon[1135]: VM 130 qmp command failed - VM 130 qmp command 'guest-ping' failed - got timeout
Dec 09 12:08:28 argo pvedaemon[1133]: <root@pam> end task UPID:argo:00249233:030CC131:61B1E3A8:vncproxy:128:root@pam: OK
Dec 09 12:08:26 argo pveproxy[1142]: worker 2396726 started
Dec 09 12:08:26 argo pveproxy[1142]: starting 1 worker(s)
Dec 09 12:08:26 argo pveproxy[1142]: worker 2380211 finished
Dec 09 12:08:26 argo pveproxy[2380211]: worker exit
Dec 09 12:08:24 argo pvedaemon[2396723]: starting vnc proxy UPID:argo:00249233:030CC131:61B1E3A8:vncproxy:128:root@pam:
Dec 09 12:08:24 argo pvedaemon[1133]: <root@pam> starting task UPID:argo:00249233:030CC131:61B1E3A8:vncproxy:128:root@pam:
Dec 09 12:08:18 argo pvedaemon[1133]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:08:13 argo pveproxy[1142]: worker 2396678 started
Dec 09 12:08:13 argo pveproxy[1142]: starting 1 worker(s)
Dec 09 12:08:13 argo pveproxy[1142]: worker 2379903 finished
Dec 09 12:08:13 argo pveproxy[2379903]: worker exit
Dec 09 12:07:59 argo pvedaemon[1135]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:07:53 argo pvedaemon[1133]: VM 130 qmp command failed - VM 130 qmp command 'guest-ping' failed - got timeout
Dec 09 12:07:50 argo pvedaemon[2395699]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:07:44 argo pvedaemon[1135]: <root@pam> end task UPID:argo:00249172:030CB1DE:61B1E380:qmreset:128:root@pam: OK
Dec 09 12:07:44 argo pvedaemon[1135]: <root@pam> starting task UPID:argo:00249172:030CB1DE:61B1E380:qmreset:128:root@pam:
Dec 09 12:07:33 argo pvedaemon[1135]: <root@pam> end task UPID:argo:0024900E:030C957F:61B1E338:qmreboot:128:root@pam: VM quit/powerdown failed
Dec 09 12:07:33 argo pvedaemon[2396174]: VM quit/powerdown failed
Dec 09 12:07:33 argo pvedaemon[2396174]: VM 128 qmp command failed - received interrupt
Dec 09 12:07:31 argo pvedaemon[1135]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:07:25 argo pvedaemon[1133]: <root@pam> end task UPID:argo:00249119:030CAA63:61B1E36D:qmreset:130:root@pam: OK
Dec 09 12:07:25 argo pvedaemon[1133]: <root@pam> starting task UPID:argo:00249119:030CAA63:61B1E36D:qmreset:130:root@pam:
Dec 09 12:07:17 argo pvedaemon[2395699]: VM 130 qmp command failed - VM 130 qmp command 'guest-ping' failed - got timeout
Dec 09 12:07:10 argo pvedaemon[1135]: <root@pam> end task UPID:argo:0024909A:030C9FC7:61B1E352:vncproxy:130:root@pam: OK
Dec 09 12:06:58 argo pvedaemon[2396314]: starting vnc proxy UPID:argo:0024909A:030C9FC7:61B1E352:vncproxy:130:root@pam:
Dec 09 12:06:58 argo pvedaemon[1135]: <root@pam> starting task UPID:argo:0024909A:030C9FC7:61B1E352:vncproxy:130:root@pam:
Dec 09 12:06:58 argo pvedaemon[1133]: VM 130 qmp command failed - VM 130 qmp command 'guest-ping' failed - got timeout
Dec 09 12:06:43 argo pvedaemon[2395699]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:06:32 argo pvedaemon[1135]: <root@pam> starting task UPID:argo:0024900E:030C957F:61B1E338:qmreboot:128:root@pam:
Dec 09 12:06:32 argo pvedaemon[2396174]: requesting reboot of VM 128: UPID:argo:0024900E:030C957F:61B1E338:qmreboot:128:root@pam:
Dec 09 12:06:24 argo pvedaemon[1133]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:06:05 argo pvedaemon[2395699]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:06:01 argo pvedaemon[1135]: <root@pam> end task UPID:argo:00248F70:030C87C1:61B1E315:vncproxy:128:root@pam: OK
Dec 09 12:05:57 argo pvedaemon[1135]: <root@pam> starting task UPID:argo:00248F70:030C87C1:61B1E315:vncproxy:128:root@pam:
Dec 09 12:05:57 argo pvedaemon[2396016]: starting vnc proxy UPID:argo:00248F70:030C87C1:61B1E315:vncproxy:128:root@pam:
Dec 09 12:05:55 argo pvedaemon[2395699]: <root@pam> successful auth for user 'root@pam'
Dec 09 12:04:48 argo pvedaemon[1132]: worker 2395699 started
Dec 09 12:04:48 argo pvedaemon[1132]: starting 1 worker(s)
Dec 09 12:04:48 argo pvedaemon[1132]: worker 1134 finished
Dec 09 12:04:48 argo pvedaemon[1134]: worker exit
Dec 09 12:00:35 argo pmxcfs[999]: [dcdb] notice: data verification successful
Dec 09 12:00:33 argo smartd[670]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 62 to 69
Dec 09 11:50:54 argo pvedaemon[1135]: <root@pam> successful auth for user 'root@pam'
Dec 09 11:50:32 argo pveproxy[1142]: worker 2391725 started
Dec 09 11:50:32 argo pveproxy[1142]: starting 1 worker(s)
Dec 09 11:50:32 argo pveproxy[1142]: worker 2374478 finished
Dec 09 11:50:32 argo pveproxy[2374478]: worker exit
Dec 09 11:38:50 argo pveproxy[2379903]: Clearing outdated entries from certificate cache
Dec 09 11:35:55 argo pvedaemon[1134]: <root@pam> successful auth for user 'root@pam'
Dec 09 11:30:33 argo smartd[670]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 72 to 62
Dec 09 11:21:29 argo pveproxy[2374478]: Clearing outdated entries from certificate cache
Dec 09 11:20:53 argo pvedaemon[1133]: <root@pam> successful auth for user 'root@pam'
Dec 09 11:17:02 argo CRON[2382327]: pam_unix(cron:session): session closed for user root
Dec 09 11:17:02 argo CRON[2382328]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 09 11:17:02 argo CRON[2382327]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 09 11:09:59 argo pveproxy[2380211]: Clearing outdated entries from certificate cache
Dec 09 11:09:25 argo pveproxy[1142]: worker 2380211 started
Dec 09 11:09:25 argo pveproxy[1142]: starting 1 worker(s)
Dec 09 11:09:25 argo pveproxy[1142]: worker 2192325 finished
Dec 09 11:09:25 argo pveproxy[2192325]: worker exit
Dec 09 11:09:02 argo pveproxy[2192325]: Clearing outdated entries from certificate cache
Dec 09 11:08:42 argo pveproxy[2379903]: Clearing outdated entries from certificate cache
 
And this is journalctl around the creation of the new VM (115), I guess:

Code:
Dec 10 11:10:45 argo pvedaemon[1139]: VM 106 qmp command failed - VM 106 qmp command 'guest-ping' failed - got timeout
Dec 10 11:10:42 argo pvedaemon[1139]: <root@pam> end task UPID:argo:000520F5:006CAFE3:61B327A1:qmstart:106:root@pam: OK
Dec 10 11:10:42 argo kernel: fwbr106i0: port 2(tap106i0) entered forwarding state
Dec 10 11:10:42 argo kernel: fwbr106i0: port 2(tap106i0) entered blocking state
Dec 10 11:10:42 argo kernel: fwbr106i0: port 2(tap106i0) entered disabled state
Dec 10 11:10:42 argo kernel: fwbr106i0: port 2(tap106i0) entered blocking state
Dec 10 11:10:42 argo kernel: vmbr0: port 3(fwpr106p0) entered forwarding state
Dec 10 11:10:42 argo kernel: vmbr0: port 3(fwpr106p0) entered blocking state
Dec 10 11:10:42 argo kernel: device fwpr106p0 entered promiscuous mode
Dec 10 11:10:42 argo kernel: vmbr0: port 3(fwpr106p0) entered disabled state
Dec 10 11:10:42 argo kernel: vmbr0: port 3(fwpr106p0) entered blocking state
Dec 10 11:10:42 argo kernel: fwbr106i0: port 1(fwln106i0) entered forwarding state
Dec 10 11:10:42 argo kernel: fwbr106i0: port 1(fwln106i0) entered blocking state
Dec 10 11:10:42 argo kernel: device fwln106i0 entered promiscuous mode
Dec 10 11:10:42 argo kernel: fwbr106i0: port 1(fwln106i0) entered disabled state
Dec 10 11:10:42 argo kernel: fwbr106i0: port 1(fwln106i0) entered blocking state
Dec 10 11:10:42 argo systemd-udevd[336139]: Using default interface naming scheme 'v247'.
Dec 10 11:10:42 argo systemd-udevd[336139]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 11:10:42 argo systemd-udevd[336132]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 11:10:42 argo systemd-udevd[336132]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 11:10:42 argo kernel: device tap106i0 entered promiscuous mode
Dec 10 11:10:42 argo systemd-udevd[336132]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 11:10:42 argo systemd-udevd[336132]: Using default interface naming scheme 'v247'.
Dec 10 11:10:42 argo systemd[1]: Started 106.scope.
Dec 10 11:10:41 argo pvedaemon[1139]: <root@pam> starting task UPID:argo:000520F5:006CAFE3:61B327A1:qmstart:106:root@pam:
Dec 10 11:10:41 argo pvedaemon[336117]: start VM 106: UPID:argo:000520F5:006CAFE3:61B327A1:qmstart:106:root@pam:
Dec 10 11:09:29 argo pvedaemon[313009]: <root@pam> successful auth for user 'root@pam'
Dec 10 11:09:21 argo pmxcfs[996]: [status] notice: received log
Dec 10 11:09:20 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:58:08 argo pveproxy[323843]: Clearing outdated entries from certificate cache
Dec 10 10:56:56 argo pveproxy[323290]: Clearing outdated entries from certificate cache
Dec 10 10:54:29 argo pvedaemon[313377]: <root@pam> successful auth for user 'root@pam'
Dec 10 10:46:33 argo pveproxy[328843]: Clearing outdated entries from certificate cache
Dec 10 10:44:38 argo pveproxy[1146]: worker 328843 started
Dec 10 10:44:38 argo pveproxy[1146]: starting 1 worker(s)
Dec 10 10:44:38 argo pveproxy[1146]: worker 315744 finished
Dec 10 10:44:38 argo pveproxy[315744]: worker exit
Dec 10 10:39:28 argo pvedaemon[1139]: <root@pam> successful auth for user 'root@pam'
Dec 10 10:29:56 argo pveproxy[315744]: Clearing outdated entries from certificate cache
Dec 10 10:28:03 argo pveproxy[323843]: Clearing outdated entries from certificate cache
Dec 10 10:26:52 argo pveproxy[323290]: Clearing outdated entries from certificate cache
Dec 10 10:26:47 argo pveproxy[1146]: worker 323843 started
Dec 10 10:26:47 argo pveproxy[1146]: starting 1 worker(s)
Dec 10 10:26:47 argo pveproxy[1146]: worker 309756 finished
Dec 10 10:26:47 argo pveproxy[309756]: worker exit
Dec 10 10:26:21 argo pveproxy[309756]: Clearing outdated entries from certificate cache
Dec 10 10:24:53 argo pveproxy[1146]: worker 323290 started
Dec 10 10:24:53 argo pveproxy[1146]: starting 1 worker(s)
Dec 10 10:24:53 argo pveproxy[1146]: worker 310545 finished
Dec 10 10:24:53 argo pveproxy[310545]: worker exit
Dec 10 10:24:27 argo pvedaemon[313377]: <root@pam> successful auth for user 'root@pam'
Dec 10 10:23:40 argo pmxcfs[996]: [dcdb] notice: data verification successful
Dec 10 10:17:01 argo CRON[321096]: pam_unix(cron:session): session closed for user root
Dec 10 10:17:01 argo CRON[321097]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 10 10:17:01 argo CRON[321096]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 10 10:09:27 argo pvedaemon[313009]: <root@pam> successful auth for user 'root@pam'
Dec 10 10:08:30 argo pvedaemon[1139]: <root@pam> successful auth for user 'root@pam'
Dec 10 10:08:01 argo cron[1086]: (*system*vzdump) RELOAD (/etc/cron.d/vzdump)
Dec 10 10:07:41 argo pvedaemon[313377]: <root@pam> successful auth for user 'root@pam'
Dec 10 10:07:32 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:07:31 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:07:24 argo qmeventd[318406]: Finished cleanup for 106
Dec 10 10:07:24 argo qmeventd[318406]: Starting cleanup for 106
Dec 10 10:07:24 argo pvedaemon[1139]: <root@pam> end task UPID:argo:0004DBA5:0066E3FC:61B318CB:qmstop:106:root@pam: OK
Dec 10 10:07:23 argo systemd[1]: 106.scope: Consumed 10min 22.730s CPU time.
Dec 10 10:07:23 argo systemd[1]: 106.scope: Succeeded.
Dec 10 10:07:23 argo qmeventd[673]: read: Connection reset by peer
Dec 10 10:07:23 argo kernel: vmbr0: port 3(fwpr106p0) entered disabled state
Dec 10 10:07:23 argo kernel: device fwpr106p0 left promiscuous mode
Dec 10 10:07:23 argo kernel: fwbr106i0: port 1(fwln106i0) entered disabled state
Dec 10 10:07:23 argo kernel: device fwln106i0 left promiscuous mode
Dec 10 10:07:23 argo kernel: vmbr0: port 3(fwpr106p0) entered disabled state
Dec 10 10:07:23 argo kernel: fwbr106i0: port 1(fwln106i0) entered disabled state
Dec 10 10:07:23 argo kernel: fwbr106i0: port 2(tap106i0) entered disabled state
Dec 10 10:07:23 argo pvedaemon[1139]: <root@pam> starting task UPID:argo:0004DBA5:0066E3FC:61B318CB:qmstop:106:root@pam:
Dec 10 10:07:23 argo pvedaemon[318373]: stop VM 106: UPID:argo:0004DBA5:0066E3FC:61B318CB:qmstop:106:root@pam:
Dec 10 10:07:07 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:07:07 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:06:49 argo pvedaemon[313009]: <root@pam> end task UPID:argo:0004DB15:0066D672:61B318A8:vncproxy:115:root@pam: Failed to run vncproxy.
Dec 10 10:06:49 argo pvedaemon[318229]: Failed to run vncproxy.
Dec 10 10:06:48 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:06:48 argo pvedaemon[313009]: <root@pam> starting task UPID:argo:0004DB15:0066D672:61B318A8:vncproxy:115:root@pam:
Dec 10 10:06:48 argo pvedaemon[318229]: starting vnc proxy UPID:argo:0004DB15:0066D672:61B318A8:vncproxy:115:root@pam:
Dec 10 10:06:48 argo pvedaemon[1139]: <root@pam> end task UPID:argo:0004D944:0066AF97:61B31844:vncproxy:115:root@pam: OK
Dec 10 10:06:47 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:06:33 argo pvedaemon[1139]: <root@pam> end task UPID:argo:0004D7C5:00669201:61B317F9:vncshell::root@pam: OK
Dec 10 10:05:56 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:05:56 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:05:30 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:05:30 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:05:09 argo pvedaemon[317764]: starting vnc proxy UPID:argo:0004D944:0066AF97:61B31844:vncproxy:115:root@pam:
Dec 10 10:05:09 argo pvedaemon[1139]: <root@pam> starting task UPID:argo:0004D944:0066AF97:61B31844:vncproxy:115:root@pam:
Dec 10 10:04:37 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:04:37 argo pmxcfs[996]: [status] notice: received log
Dec 10 10:03:53 argo pvedaemon[313377]: <root@pam> successful auth for user 'root@pam'
Dec 10 10:03:53 argo pvedaemon[1139]: <root@pam> starting task UPID:argo:0004D7C5:00669201:61B317F9:vncshell::root@pam:
Dec 10 10:03:53 argo pvedaemon[317381]: starting termproxy UPID:argo:0004D7C5:00669201:61B317F9:vncshell::root@pam:
Dec 10 10:03:28 argo pvedaemon[313009]: <root@pam> end task UPID:argo:0004D2A9:00662438:61B316E0:vncproxy:115:root@pam: OK
Dec 10 09:59:12 argo pvedaemon[313009]: <root@pam> starting task UPID:argo:0004D2A9:00662438:61B316E0:vncproxy:115:root@pam:
Dec 10 09:59:12 argo pvedaemon[316073]: starting vnc proxy UPID:argo:0004D2A9:00662438:61B316E0:vncproxy:115:root@pam:
Dec 10 09:59:09 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:59:08 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:58:59 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:58:50 argo pveproxy[315744]: Clearing outdated entries from certificate cache
Dec 10 09:58:46 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:58:23 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:58:22 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:58:15 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:58:06 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:58:02 argo pveproxy[315743]: worker exit
Dec 10 09:58:01 argo pveproxy[1146]: worker 315744 started
Dec 10 09:58:01 argo pveproxy[1146]: starting 1 worker(s)
Dec 10 09:58:01 argo pveproxy[1146]: worker 147470 finished
Dec 10 09:57:54 argo pvedaemon[313377]: <root@pam> end task UPID:argo:0004D0CD:0065FBB9:61B31678:vncproxy:115:root@pam: OK
Dec 10 09:57:48 argo pvedaemon[1139]: <root@pam> successful auth for user 'root@pam'
Dec 10 09:57:28 argo pvedaemon[313377]: <root@pam> starting task UPID:argo:0004D0CD:0065FBB9:61B31678:vncproxy:115:root@pam:
Dec 10 09:57:28 argo pvedaemon[315597]: starting vnc proxy UPID:argo:0004D0CD:0065FBB9:61B31678:vncproxy:115:root@pam:
Dec 10 09:57:16 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:57:16 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:57:16 argo pmxcfs[996]: [status] notice: received log
Dec 10 09:56:31 argo pveproxy[310545]: Clearing outdated entries from certificate cache
Dec 10 09:56:21 argo pveproxy[147470]: Clearing outdated entries from certificate cache
Dec 10 09:56:21 argo pveproxy[309756]: Clearing outdated entries from certificate cache
Dec 10 09:54:22 argo kernel: fwbr130i0: port 2(tap130i0) entered forwarding state
Dec 10 09:54:22 argo kernel: fwbr130i0: port 2(tap130i0) entered blocking state
Dec 10 09:54:22 argo kernel: fwbr130i0: port 2(tap130i0) entered disabled state
Dec 10 09:54:22 argo kernel: fwbr130i0: port 2(tap130i0) entered blocking state
Dec 10 09:54:22 argo kernel: vmbr0: port 10(fwpr130p0) entered forwarding state
Dec 10 09:54:22 argo kernel: vmbr0: port 10(fwpr130p0) entered blocking state
Dec 10 09:54:22 argo kernel: device fwpr130p0 entered promiscuous mode
Dec 10 09:54:22 argo kernel: vmbr0: port 10(fwpr130p0) entered disabled state
Dec 10 09:54:22 argo kernel: vmbr0: port 10(fwpr130p0) entered blocking state
Dec 10 09:54:22 argo kernel: fwbr130i0: port 1(fwln130i0) entered forwarding state
Dec 10 09:54:22 argo kernel: fwbr130i0: port 1(fwln130i0) entered blocking state
Dec 10 09:54:22 argo kernel: device fwln130i0 entered promiscuous mode
Dec 10 09:54:22 argo kernel: fwbr130i0: port 1(fwln130i0) entered disabled state
Dec 10 09:54:22 argo kernel: fwbr130i0: port 1(fwln130i0) entered blocking state
Dec 10 09:54:22 argo systemd-udevd[314662]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 09:54:22 argo systemd-udevd[314661]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 09:54:22 argo systemd-udevd[314661]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 09:54:22 argo systemd-udevd[314661]: Using default interface naming scheme 'v247'.
Dec 10 09:54:22 argo kernel: device tap130i0 entered promiscuous mode
Dec 10 09:54:21 argo systemd-udevd[314662]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 10 09:54:21 argo systemd-udevd[314662]: Using default interface naming scheme 'v247'.
Dec 10 09:54:21 argo systemd[1]: Started 130.scope.
 
Oh, sorry, the two logs above are from another host. I have a cluster with three nodes; this is the one where I wanted to create the new VM:

Code:
Dec 10 08:58:55 fro pvestatd[1104]: status update time (5.240 seconds)
Dec 10 08:57:46 fro pmxcfs[991]: [status] notice: received log
Dec 10 08:56:33 fro pvestatd[1104]: closing with write buffer at /usr/share/perl5/IO/Multiplex.pm line 928.
Dec 10 08:54:15 fro pvestatd[1104]: status update time (5.202 seconds)
Dec 10 08:52:43 fro pvescheduler[257622]: INFO: Starting Backup of VM 122 (qemu)
Dec 10 08:52:43 fro pvescheduler[257622]: INFO: Finished Backup of VM 118 (00:02:32)
Dec 10 08:52:36 fro pvestatd[1104]: status update time (6.230 seconds)
Dec 10 08:52:36 fro pvestatd[1104]: VM 118 qmp command failed - VM 118 qmp command 'query-proxmox-support' failed - got timeout
Dec 10 08:52:07 fro pvestatd[1104]: status update time (6.550 seconds)
Dec 10 08:52:06 fro pvestatd[1104]: VM 118 qmp command failed - VM 118 qmp command 'query-proxmox-support' failed - got timeout
Dec 10 08:50:11 fro pvescheduler[257622]: INFO: Starting Backup of VM 118 (qemu)
Dec 10 08:50:10 fro pvescheduler[257622]: INFO: Finished Backup of VM 113 (01:21:08)
Dec 10 08:50:05 fro pvestatd[1104]: status update time (5.008 seconds)
Dec 10 08:48:46 fro pvestatd[1104]: status update time (5.907 seconds)
Dec 10 08:48:07 fro pvestatd[1104]: status update time (6.452 seconds)
Dec 10 08:48:07 fro pvestatd[1104]: VM 113 qmp command failed - VM 113 qmp command 'query-proxmox-support' failed - got timeout
Dec 10 08:43:05 fro pvestatd[1104]: status update time (5.047 seconds)
Dec 10 08:42:56 fro pmxcfs[991]: [status] notice: received log
Dec 10 08:41:55 fro pvestatd[1104]: status update time (5.085 seconds)
Dec 10 08:39:35 fro pvestatd[1104]: status update time (5.167 seconds)
Dec 10 08:37:45 fro pvestatd[1104]: status update time (5.246 seconds)
Dec 10 08:36:35 fro pvestatd[1104]: status update time (5.013 seconds)
Dec 10 08:35:53 fro pvestatd[1104]: closing with write buffer at /usr/share/perl5/IO/Multiplex.pm line 928.
Dec 10 08:29:12 fro pvescheduler[257622]: VM 113 qmp command failed - VM 113 qmp command 'guest-fsfreeze-thaw' failed - got timeout
Dec 10 08:29:02 fro pvescheduler[257622]: VM 113 qmp command failed - VM 113 qmp command 'guest-fsfreeze-freeze' failed - got timeout
Dec 10 08:23:40 fro pmxcfs[991]: [dcdb] notice: data verification successful
Dec 10 08:17:01 fro CRON[278839]: pam_unix(cron:session): session closed for user root
Dec 10 08:17:01 fro CRON[278840]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 10 08:17:01 fro CRON[278839]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
 
Today it happened again with one VM:

Code:
Dec 11 08:15:47 fro pvedaemon[1136]: <root@pam> end task UPID:fro:000A4386:00E06B8A:61B45023:qmstart:124:root@pam: >
Dec 11 08:15:47 fro kernel: fwbr124i0: port 2(tap124i0) entered forwarding state
Dec 11 08:15:47 fro kernel: fwbr124i0: port 2(tap124i0) entered blocking state
Dec 11 08:15:47 fro kernel: fwbr124i0: port 2(tap124i0) entered disabled state
Dec 11 08:15:47 fro kernel: fwbr124i0: port 2(tap124i0) entered blocking state
Dec 11 08:15:47 fro kernel: vmbr0: port 2(fwpr124p0) entered forwarding state
Dec 11 08:15:47 fro kernel: vmbr0: port 2(fwpr124p0) entered blocking state
Dec 11 08:15:47 fro kernel: device fwpr124p0 entered promiscuous mode
Dec 11 08:15:47 fro kernel: vmbr0: port 2(fwpr124p0) entered disabled state
Dec 11 08:15:47 fro kernel: vmbr0: port 2(fwpr124p0) entered blocking state
Dec 11 08:15:47 fro kernel: fwbr124i0: port 1(fwln124i0) entered forwarding state
Dec 11 08:15:47 fro kernel: fwbr124i0: port 1(fwln124i0) entered blocking state
Dec 11 08:15:47 fro kernel: device fwln124i0 entered promiscuous mode
Dec 11 08:15:47 fro kernel: fwbr124i0: port 1(fwln124i0) entered disabled state
Dec 11 08:15:47 fro kernel: fwbr124i0: port 1(fwln124i0) entered blocking state
Dec 11 08:15:47 fro systemd-udevd[672668]: Using default interface naming scheme 'v247'.
Dec 11 08:15:47 fro systemd-udevd[672668]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not>
Dec 11 08:15:47 fro systemd-udevd[672661]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not>
Dec 11 08:15:47 fro systemd-udevd[672661]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not>
Dec 11 08:15:47 fro kernel: device tap124i0 entered promiscuous mode
Dec 11 08:15:47 fro systemd-udevd[672661]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not>
Dec 11 08:15:47 fro systemd-udevd[672661]: Using default interface naming scheme 'v247'.
Dec 11 08:15:47 fro systemd[1]: Started 124.scope.
Dec 11 08:15:47 fro pvedaemon[1136]: <root@pam> starting task UPID:fro:000A4386:00E06B8A:61B45023:qmstart:124:root@>
Dec 11 08:15:47 fro pvedaemon[672646]: start VM 124: UPID:fro:000A4386:00E06B8A:61B45023:qmstart:124:root@pam:
Dec 11 08:15:22 fro pvedaemon[1136]: <root@pam> end task UPID:fro:000A4301:00E06160:61B45009:qmstop:124:root@pam: OK
Dec 11 08:15:22 fro qmeventd[672528]: Finished cleanup for 124
Dec 11 08:15:22 fro qmeventd[672528]:  OK
Dec 11 08:15:22 fro qmeventd[672528]: trying to acquire lock...
Dec 11 08:15:22 fro qmeventd[672528]: Starting cleanup for 124
Dec 11 08:15:21 fro systemd[1]: 124.scope: Consumed 22min 48.234s CPU time.
Dec 11 08:15:21 fro systemd[1]: 124.scope: Succeeded.
Dec 11 08:15:21 fro pvestatd[1104]: VM 124 qmp command failed - VM 124 qmp command 'query-proxmox-support' failed - c>
Dec 11 08:15:21 fro qmeventd[659]: read: Connection reset by peer
Dec 11 08:15:21 fro kernel: vmbr0: port 2(fwpr124p0) entered disabled state
Dec 11 08:15:21 fro kernel: device fwpr124p0 left promiscuous mode
Dec 11 08:15:21 fro kernel: fwbr124i0: port 1(fwln124i0) entered disabled state
Dec 11 08:15:21 fro kernel: device fwln124i0 left promiscuous mode
Dec 11 08:15:21 fro kernel: vmbr0: port 2(fwpr124p0) entered disabled state
Dec 11 08:15:21 fro kernel: fwbr124i0: port 1(fwln124i0) entered disabled state
Dec 11 08:15:21 fro kernel: fwbr124i0: port 2(tap124i0) entered disabled state
Dec 11 08:15:21 fro pvedaemon[1136]: <root@pam> starting task UPID:fro:000A4301:00E06160:61B45009:qmstop:124:root@p>
Dec 11 08:15:21 fro pvedaemon[672513]: stop VM 124: UPID:fro:000A4301:00E06160:61B45009:qmstop:124:root@pam:
 
I did a kernel update to 5.15.5-1 and rebooted, and again there were some io-errors. I have no idea what this could be, as everything ran smoothly until a few days ago. It may have started with the upgrade to 7.1-7 with kernel 5.13 (which I did a few days, maybe a week, ago), but I am not sure about that. Nothing changed in the cabling, and there was no update for the Synology.
 
I also did a downgrade of pve-qemu-kvm to 6.0.0-4, but that made it even worse. Is there a way to downgrade to, let's say, 7.1-5, with all the necessary parts like qemu-kvm and the kernel?
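
For the record, the single-package downgrade itself went like this (the exact version string depends on what your repository still offers):

Code:
# list the versions apt knows about
apt list -a pve-qemu-kvm
# install a specific older version
apt install pve-qemu-kvm=6.0.0-4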
 
OK... as I have tried several things, I now have some VMs which are broken. I wanted to bring them back via restore, but:

Code:
blk_pwrite failed at offset 75329699840 length 4194304 (-5) - Input/output error
pbs-restore: Failed to flush the L2 table cache: Disk quota exceeded
pbs-restore: Failed to flush the refcount block cache: Disk quota exceeded

What disk quota is meant by this? Maybe that is what is wrong here?
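
One thing worth checking from the client side: if the share has a quota set on the Synology, it may already show up as the size/free space of the NFS mount (mount path as on my setup):

Code:
# size and free space as the NFS client sees it
df -h /mnt/pve/VMlinux
# mount options in use for the NFS storages
mount -t nfs,nfs4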
 
Next step taken:
Restoring to local-lvm (thin) worked.
Maybe there is something wrong with the NFS shares?

Regarding the initial post:
I have set Async IO to "threads" and haven't had an io-error since then. Hopefully that was the solution.
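
For reference, the same Async IO setting can also be applied per disk on the CLI. VM ID, disk key and volume name below are just examples from my setup, and note that re-specifying the drive line resets any options you don't repeat:

Code:
# show the current disk line, e.g. scsi0: VMlinux:128/vm-128-disk-0.qcow2,size=32G
qm config 128 | grep scsi0
# re-apply it with aio=threads added
qm set 128 --scsi0 VMlinux:128/vm-128-disk-0.qcow2,size=32G,aio=threads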
 
Oh, dear... oh, dear...

Sorry for spamming this forum... yes, there was indeed something wrong with the NFS shares. Always remind yourself about quotas when creating NFS shares! :cool:
 
Any update on this, or a solution? I have a similar issue: my VMs no longer start (yellow triangle) since I updated my PVE from 7.0-xx to the most recent version, 7.2-11, today. These VMs also reside on NFS storage. Nothing was changed on the storage, and no quotas should be in place, AFAIK.

Update: I set the default kernel back to 5.11.22-7-pve (from 5.15.53-1-pve) as described here: https://forum.proxmox.com/threads/select-default-boot-kernel.79582/post-398029 and now the issue seems to be gone.
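
In case someone needs the steps: on a GRUB-booted system it is roughly the following (the menu entry string has to match what your own grub.cfg contains); on systems booted via proxmox-boot-tool, recent releases also offer "proxmox-boot-tool kernel pin <version>":

Code:
# find the exact menu entry name of the old kernel
grep "menuentry '" /boot/grub/grub.cfg | grep 5.11.22-7-pve
# set it as default in /etc/default/grub, e.g.:
#   GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.11.22-7-pve"
update-grub
reboot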
 
Well, at least that's what worked for me after the most recent update put my Proxmox cluster into a state where it wasn't able to launch any VMs due to this I/O error.
 
