VM backups are kind of broken (https://bugzilla.proxmox.com/show_bug.cgi?id=2874), so it is very possible that all your backups are broken. Of course this does not happen that frequently, but there are already A LOT of reports about it. I was already hit by that problem twice, so if you do not test your VM...
I am not sure if I overlooked it or it increased a lot due to the r/w tests, but I am currently at these SMART values, looks broken to me :)
I used a spare Samsung SSD 970 EVO 2TB for non-critical stuff; again, not a good idea to cheap out on storage :)
Not yet, the error is very sketchy and the NVMe works overall, just some parts are broken - that is rather strange if you ask me.
I need to put it into another host and test it there. The NVMe is also only ~6 months old, and SMART shows everything is fine.
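For reference, this is roughly how I read the values on the host (just a sketch; the device path /dev/nvme0 is an example and may differ on your system):

# print the NVMe health/SMART data (device path is an example)
smartctl -a /dev/nvme0
# or query the drive's smart-log directly with nvme-cli
nvme smart-log /dev/nvme0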
I have a VM running on NVMe/LVM. I can't move the disk to any other storage anymore or create a backup.
The VM has 2 disks, and the problem appeared on both. I tried ~10 times on the smaller disk and it finally moved.
The bigger disk always shows the same error at different percentages...
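For completeness, this is roughly how I trigger the move and the backup on the CLI (only a sketch; VMID 100, disk scsi0 and the storage names are placeholders, not my real setup):

# move the virtual disk to another storage (IDs and names are placeholders)
qm move_disk 100 scsi0 target-storage
# the backup fails the same way, e.g. when started like this
vzdump 100 --storage backup-storage --mode snapshot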
I updated the affected server to kernel 5.15; it has not crashed yet (~1 day running).
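For anyone wanting to try the same: installing the opt-in kernel looked roughly like this (a sketch; assumes the PVE 7.x opt-in 5.15 kernel package):

# install the opt-in 5.15 kernel series and reboot into it
apt update
apt install pve-kernel-5.15
reboot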
Meanwhile another server (still on 5.13) crashed with a slightly different error, also pointing to pve-root:
[Tue Dec 14 14:36:15 2021] INFO: task khugepaged:215 blocked for more than 120 seconds.
[Tue Dec 14 14:36:15...
Ceph is used for storage, plus an additional NVMe for some LXCs; this has been stable so far.
I can't easily disable Ceph, and I can't reproduce the problem on demand, so I need to wait 1-3 days.
We just see frequent restarts of the network bridges for some reason, but that didn't cause any network outages. Also...
Hi, we are now seeing a kernel panic under medium load on a node that results in a complete crash of the machine.
Sadly it is not reproducible; it happens 3-4 times a week at random times (full load, no load ...).
This never happened before the upgrade from 6.4 -> 7.1 (two weeks ago), so chances are good this...
Hi, we have seen this kind of problem since at least Proxmox 6.4.
We have a working isc-dhcp server in an unprivileged Ubuntu container. The isc-dhcp service is restarted frequently (several times a day) due to changes to the DHCP config.
Every time the server is restarted (due to changes to the config), or the LXC itself is...
Sorry for the late response (testing is difficult, as already stated).
After fixing the object map it looks like it's working again, TX!
Really wondering why those kinds of errors are not shown in the Ceph monitor or somewhere else where they are actually visible.
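In case someone else runs into this, the rebuild looked roughly like this (only a sketch; pool and image names are placeholders, check your own image first):

# inspect the image and its flags (pool/image names are placeholders)
rbd info mypool/vm-100-disk-0
# rebuild the invalid object map
rbd object-map rebuild mypool/vm-100-disk-0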
Just want to add that the same symptoms happen with CIFS too: https://forum.proxmox.com/threads/after-daily-backup-fails-node-is-not-usable.77679/
I also found similar-sounding threads here in the forum, all starting with 6.2 up until today - most of them have zero replies, so no solution yet.
That...
After switching to a different backup server with completely different config/hw... and forcing SMB3 only, it still happens.
Backups for several LXCs/VMs had already finished before it hung:
INFO: Starting Backup of VM 148 (lxc)
INFO: Backup started at 2020-11-04 01:19:54
INFO: status...
Hi,
we have a 4-node cluster with Ceph, configured and running for years.
Recently, starting around 6.2-6, we got problems with our SMB backups regularly failing (ZSTD/snapshot). Same situation with 6.2-12.
Likely it's due to a network failure, but no idea why this happens - nothing obvious was...
That problem affects every Proxmox installation that uses smb/samba on the Proxmox host.
Open bugs for it
https://bugzilla.proxmox.com/show_bug.cgi?id=2333
https://bugzilla.samba.org/show_bug.cgi?id=12435
A one-time cleanup is not a solution, you need to clean it frequently.
We already had the...