Has anyone seen this before? The disk isn't out of space.
Code:
root@proxmox:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-8-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
pve-kernel-5.13: 7.1-9
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
proxmox-kernel-6.2: 6.2.16-8
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.4
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.7
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
Here's some stuff from the syslog.
Code:
2023-09-01T03:56:44.361173+10:00 proxmox pve-firewall[1754]: firewall update time (11.019 seconds)
2023-09-01T03:56:44.519520+10:00 proxmox pvestatd[1758]: status update time (11.188 seconds)
2023-09-01T03:56:44.574173+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/103: -1
2023-09-01T03:56:44.574808+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
2023-09-01T03:56:44.601931+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmox/local-zfs: -1
2023-09-01T03:56:44.602535+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmox/local: -1
2023-09-01T04:01:09.591368+10:00 proxmox pvestatd[1758]: status update time (15.293 seconds)
2023-09-01T04:01:09.754211+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/103: -1
2023-09-01T04:01:09.778434+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
2023-09-01T04:01:09.878180+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmox/local: -1
2023-09-01T04:01:09.878548+10:00 proxmox pmxcfs[1618]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmox/local-zfs: -1
2023-09-01T04:01:09.916901+10:00 proxmox pve-firewall[1754]: firewall update time (15.578 seconds)
2023-09-01T04:01:40.283266+10:00 proxmox pvestatd[1758]: status update time (10.838 seconds)
2023-09-01T04:01:40.284540+10:00 proxmox pve-firewall[1754]: firewall update time (11.018 seconds)
2023-09-01T04:03:07.712582+10:00 proxmox pvestatd[1758]: status update time (7.272 seconds)
2023-09-01T04:03:07.809692+10:00 proxmox pve-firewall[1754]: firewall update time (7.515 seconds)
2023-09-01T04:03:47.358090+10:00 proxmox pve-firewall[1754]: firewall update time (7.342 seconds)
2023-09-01T04:03:47.990995+10:00 proxmox pvestatd[1758]: status update time (7.049 seconds)
2023-09-01T04:04:56.866241+10:00 proxmox smartd[1276]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 61 to 60
2023-09-01T04:05:38.030596+10:00 proxmox pvestatd[1758]: status update time (17.180 seconds)
2023-09-01T04:05:38.069165+10:00 proxmox pve-firewall[1754]: firewall update time (7.290 seconds)
2023-09-01T04:06:15.407806+10:00 proxmox pvestatd[1758]: status update time (17.284 seconds)
2023-09-01T04:06:15.464802+10:00 proxmox pve-firewall[1754]: firewall update time (15.169 seconds)
2023-09-01T04:06:56.413818+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:01.414634+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:06.415211+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:11.416507+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:14.878861+10:00 proxmox pvescheduler[3352245]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
2023-09-01T04:07:14.879160+10:00 proxmox pvescheduler[3352214]: replication: cfs-lock 'file-replication_cfg' error: got lock request timeout
2023-09-01T04:07:16.417971+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:21.419449+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:26.420746+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:31.422229+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:36.423726+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:41.425171+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:46.426690+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:51.428221+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:07:56.429633+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:01.431113+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:06.432402+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:11.433900+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:14.895149+10:00 proxmox pvescheduler[3377383]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
2023-09-01T04:08:14.897000+10:00 proxmox pvescheduler[3377382]: replication: cfs-lock 'file-replication_cfg' error: got lock request timeout
2023-09-01T04:08:16.435202+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:21.436638+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:26.437885+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:31.439163+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
2023-09-01T04:08:36.440527+10:00 proxmox pve-ha-lrm[1825]: unable to write lrm status file - unable to delete old temp file: Input/output error
I've rebooted one of the VMs and it's back up and working fine, but this obviously isn't ideal.
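In case it helps, here's what I'm planning to run next to rule out failing storage, since the syslog shows smartd watching `/dev/sda` and the LRM errors point at I/O problems under `/etc/pve`. Device and pool names here are just my setup (`/dev/sda`, and `rpool` for the `local-zfs` storage) — adjust for yours:

```shell
# Full SMART report for the disk smartd already mentioned in the log
smartctl -a /dev/sda

# ZFS pool health: read/write/checksum error counters per device
zpool status -v rpool

# Kernel-level I/O, ATA/NVMe, or controller reset messages around the lockup
dmesg -T | grep -iE 'i/o error|ata|nvme|reset'

# Sanity-check the pmxcfs mount, since the lrm status file lives under /etc/pve
df -h /etc/pve
```

Happy to post the output of any of these if it's useful.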