Oh, dear...oh, dear...
Sorry for spamming this forum... yes, indeed there was something wrong with the NFS shares. Always remind yourself about quotas when creating NFS shares! :cool:
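For anyone landing here with the same problem: the quota I had set on the Synology shared folder caps what the NFS export reports as free space, which is easy to overlook. A quick way to check from the PVE host (storage name and mount point are from my setup, adjust to yours):

# What the NFS export reports as total/used/available;
# as far as I can tell, a DSM shared-folder quota shows up here as the filesystem size
df -h /mnt/pve/synology-nfs

# How Proxmox itself sees all configured storages
pvesm status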
Hi everyone,
in another post I had problems with constant "status: io-error" messages and unresponsive VMs. Through trial and error I almost destroyed the entire environment, but managed to rescue most of it by keeping a cool head, and perhaps fixed it via "Async IO = threads" in...
Next step taken:
Recovery to local-thin has worked.
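Roughly the command I used, in case it helps someone: the archive path is a placeholder, the VM ID is just an example, and "local-thin" is what I named my LVM-thin storage.

# Restore the backup onto local LVM-thin instead of the NFS storage
qmrestore <backup-archive> 124 --storage local-thin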
Maybe there is something wrong with NFS shares?
For the initial post:
I have set Async IO to "threads" and haven't gotten an io-error since. Hopefully that was the solution.
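For reference, the setting can be changed per disk in the GUI (Hard Disk -> Advanced -> Async IO) or on the CLI; the VM ID, bus/slot and volume below are just examples from my setup:

# Switch the disk's async I/O mode to threads (default on PVE 7 is io_uring);
# the full volume spec has to be repeated, and the VM needs a full stop/start
qm set 128 --scsi0 synology-nfs:vm-128-disk-0,aio=threads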
OK... since I have tried several things, some of my VMs are now broken... I wanted to bring them back via Restore, but:
blk_pwrite failed at offset 75329699840 length 4194304 (-5) - Input/output error
pbs-restore: Failed to flush the L2 table cache: Disk quota exceeded
pbs-restore: Failed to...
I also did a downgrade of pve-qemu-kvm to 6.0.0.4, but that made it even worse. Is there a way to downgrade to, let's say, 7.1.5 with all necessary parts like qemu-kvm and kernel?
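For the record, the downgrade itself was just installing a specific version with apt; the version string below is an example, check what your repository actually offers:

# List every pve-qemu-kvm version the configured repos provide
apt list -a pve-qemu-kvm

# Install a specific older version
apt install pve-qemu-kvm=6.0.0-4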
I did a kernel update to 5.15.5-1, rebooted, and again there were some io-errors. I have no idea what this could be, as everything was smooth until a few days ago. It could be that it started with the upgrade to 7.1.7 with kernel 5.13 (which I did some days ago, maybe a week), but I am not sure about that...
Today it happened again with one VM:
Dec 11 08:15:47 fro pvedaemon[1136]: <root@pam> end task UPID:fro:000A4386:00E06B8A:61B45023:qmstart:124:root@pam: >
Dec 11 08:15:47 fro kernel: fwbr124i0: port 2(tap124i0) entered forwarding state
Dec 11 08:15:47 fro kernel: fwbr124i0: port 2(tap124i0)...
Oh, sorry, the two above are from another host. I do have a cluster with three nodes; that's the one where I wanted to create a new VM:
Dec 10 08:58:55 fro pvestatd[1104]: status update time (5.240 seconds)
Dec 10 08:57:46 fro pmxcfs[991]: [status] notice: received log
Dec 10 08:56:33 fro...
And this is journalctl around the new VM creation (115), I guess:
Dec 10 11:10:45 argo pvedaemon[1139]: VM 106 qmp command failed - VM 106 qmp command 'guest-ping' failed - got timeout
Dec 10 11:10:42 argo pvedaemon[1139]: <root@pam> end task...
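(To pull these excerpts I just limited journalctl to a time window around the VM creation; the timestamps are of course specific to my case:)

# Everything logged around the time the new VM was created
journalctl --since "2021-12-10 11:00" --until "2021-12-10 11:20"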
Thanks for your reply. I don't see any I/O errors with journalctl; this is what I get when the errors occur:
Dec 09 12:09:43 argo pvedaemon[1135]: VM 128 qmp command failed - VM 128 qmp command 'guest-ping' failed - got timeout
Dec 09 12:09:29 argo pvedaemon[2395699]: <root@pam> successful auth...
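(For completeness, this is how I searched for them; if there were real block-layer I/O errors I would expect them in the kernel log, something like:)

# Kernel messages only, priority err and worse, since the day in question
journalctl -k -p err --since "2021-12-09"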
Hi everyone,
some days ago a few of my VMs started getting a yellow triangle, and hovering over it says "status: io-error". I have learnt that this could mean I have run out of space, but I haven't found any culprit. I am using an NFS connection to a Synology, where all VMs reside.
At the...
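(By the way, the status text from the tooltip is, as far as I can tell, the same run state qm reports on the CLI, so it can be checked per VM; the VM ID is just an example:)

# Shows the VM's run state, e.g. "status: io-error"
qm status 128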