I just hit this kind of issue with PVE 8.1.3 and Debian 11 genericcloud image 2024-01-04.
Precisely:
- I downloaded the Debian 11 genericcloud image in qcow2 format
- imported it into a VM using qm commands (see below for the script)
- I customized the cloud image using a yaml file to get the...
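For readers following along, the import flow looks roughly like this (a sketch only; the VM ID 9000, the storage name local-zfs, and the file/snippet names are illustrative assumptions, not the author's actual script):

```shell
# Illustrative only: VM ID, storage, and snippet names are assumptions.
wget https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-genericcloud-amd64.qcow2

# Create the VM shell and import the qcow2 as a disk
qm create 9000 --name debian11-cloud --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-11-genericcloud-amd64.qcow2 local-zfs

# Attach the imported disk and a cloud-init drive
qm set 9000 --scsihw virtio-scsi-single --scsi0 local-zfs:vm-9000-disk-0
qm set 9000 --ide2 local-zfs:cloudinit --boot order=scsi0

# Customize via a cloud-init user-data yaml snippet
qm set 9000 --cicustom "user=local:snippets/user-data.yaml"
```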
Update: with the uptime now at 5 days, 19:13 hours, I can say the problem was solved by removing iothread from the KVM config (virtio-scsi-single still in use).
My problem was solved by these settings.
Exactly.
This seems to prove the following:
- guest level discard has nothing to do with the issue
- cache probably has no effect (at least not alone)
- iothread enabled together with virtio-scsi-single seems problematic; so far the issue has only happened with this combination (note that the guest kernel is 4.x)
I...
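Removing iothread while keeping virtio-scsi-single is a per-disk change; a sketch (VM ID 100 and the volume name are examples, not taken from the original config):

```shell
# Re-set the disk without iothread; the controller stays virtio-scsi-single
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=0

# Verify the resulting config
qm config 100 | grep -E '^(scsihw|scsi0):'
```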
Update: after the server died in less than 12 hours, I restarted the test with new changes: I removed iothread from all disks.
Virtio-scsi-single is still present.
Qemu-guest-agent is installed, just to push the limits and increase the risk of a crash.
It has been working perfectly for 2 days and 22+...
Update:
After the upgrade, my guest server died again. It had been running for 15.5 hours.
Guest OS related data:
Load: 38.98 36.66 25.48 (not really interesting, since it is a consequence of other problems).
Guest OS kernel: 4.19.0-25-amd64
I restarted the guest OS, cache disabled on...
Just for the record:
The test guest server ran for 58 hours without any jbd2/IO lockups after I removed the discard mount option from the guest OS fstab.
As I use autosnap (qm snapshot) hourly, that means I had 58 occasions when the lockup did not happen, even before this particular...
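An hourly autosnap effectively boils down to something like this (VM ID and the snapshot naming scheme are illustrative; the author uses an autosnap tool rather than this exact command):

```shell
# Take a timestamped snapshot of VM 100 and list existing snapshots
qm snapshot 100 "autosnap_$(date +%Y%m%d_%H%M)"
qm listsnapshot 100
```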
Just an update: the KVM server is still up and running, 49 hours, without any jbd2 hangups or disk lockups. Hourly qm snapshots; previously the server died after 12-24 hours. I have not upgraded Proxmox yet (still PVE 8.1.3).
Reminder: I removed discard from the mount options inside the...
@roms2000
Thanks for the update!
After I removed the discard option from the guest's mount options, 25 snapshots have happened so far and the server is still up and running. I even issued several fstrim commands; they completed in almost zero time.
Maybe this only lowers the risk of stuck IO.
Anyway...
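Dropping discard from the guest fstab and trimming manually can be sketched like this (the fstab line below is a made-up example; only the commented-out commands would touch the real system):

```shell
# Example fstab line; the sed expression strips the ",discard" mount option
line='UUID=1234-abcd / ext4 defaults,discard,errors=remount-ro 0 1'
printf '%s\n' "$line" | sed 's/,discard//g'
# -> UUID=1234-abcd / ext4 defaults,errors=remount-ro 0 1

# Apply the same edit to the real file with a backup, then remount/reboot:
# sed -i.bak 's/,discard//g' /etc/fstab
# Afterwards trim on demand (or via fstrim.timer) instead of continuous discard:
# fstrim -av
```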
Hi,
may I join the club?
I think I hit the same issue as you.
Short version: I use local zfs and proxmox-autosnap to create snapshots every hour on a PVE 8.1.
My guest UCS installation (kernel 4.19 or similar) produces dead jbd2/sdc1-8 processes around this hourly snapshot.
The drive is...
Excluding a mount point is OK, but in the GUI I do not see any option to exclude/include.
Unfortunately you are right; Backup should be ticked in the settings of the mount point.
It was a PEBCAK :)
Thank you.
When I wrote "without warning", I meant before the task starts, not in the log. It is a situation similar to The Hitchhiker's Guide to the Galaxy :)))
Thank you again for your help and explanation.
Reading only for fun:
Thank you, I forgot the switch for du: --apparent-size
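A quick illustration of the difference (the file name is arbitrary): du reports allocated blocks by default, while --apparent-size reports the logical file size, and the two diverge for sparse files:

```shell
# A sparse file: 10 MiB logical size, (almost) no allocated blocks
truncate -s 10M sparse.img
du -h sparse.img                  # allocated size, typically 0
du -h --apparent-size sparse.img  # logical size: 10M
rm -f sparse.img
```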
Anyway, is it normal that the backup just excluded the LXC mount points without any warning?
What would be the proper way to back up an LXC with multiple volumes?
Thanks!
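For the multi-volume question, one common approach (the CT ID 101, storage, and volume names below are examples) is to make sure every mount point carries backup=1, after which a normal vzdump includes it:

```shell
# Flag the extra mount point for inclusion in backups (backup=1)
pct set 101 -mp0 local-zfs:subvol-101-disk-1,mp=/data,backup=1

# Verify the mount point config
pct config 101 | grep '^mp0:'

# A regular vzdump then covers rootfs plus every mount point with backup=1
vzdump 101 --storage local --mode snapshot
```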