That is a kernel patch; you can apply it directly to the kernel tree. Check out the pve-kernel repository, put the patch file into patches/kernel, then run git submodule update --init --recursive && apt-get build-dep . && make deb. That should give you a patched kernel .deb file to install...
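Roughly like this (a sketch; the patch name is just an example, and the resulting package file name depends on the kernel version):

```bash
# Get the pve-kernel packaging repo and its submodules
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel
git submodule update --init --recursive

# Drop the patch into the kernel patch queue (name/number is just an example)
cp /path/to/your-fix.patch patches/kernel/0099-your-fix.patch

# Install build dependencies and build the packages
apt-get build-dep .
make deb

# Install the resulting kernel package (the exact file name will differ)
apt install ./pve-kernel-*.deb
```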
"Uninstalling" an OS is simply overwriting it with a new one. If you couldn't boot from your USB drive, maybe you didn't flash it correctly? Balena Etcher is a user-friendly (if slightly bloated) tool to do so correctly. How did you install PVE, that procedure should work again?
There's not a lot of information to go on then; try to narrow down where the issue occurs with iftop or similar performance-monitoring tools. Maybe also check the other load on the machine - CPU, memory, etc...
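Something along these lines (a sketch; vmbr0 is just the usual default bridge name, adjust to your setup):

```bash
# Watch live per-connection throughput on the bridge
iftop -i vmbr0

# Keep an eye on CPU, memory and IO-wait while the transfer is running
top
```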
I personally don't know of any real-world workloads affected too heavily by containers, but that doesn't mean there aren't any. I believe MySQL (and other SQL DBs) are widely enough in use that such regressions would be caught by upstream LXC quite quickly, but truthfully the only way to test is...
Which version are you on? (pveversion -v) Are you sure the filesystem on the backup is valid and readable? The error itself only says that mounting failed. You can also check the log of the last file-restore at /var/log/proxmox-backup/file-restore/qemu.log.
Hm, that seems to be a different error though. Did it work on repeat attempts, or is the error 100% reproducible? We've seen the "timeout on cont" error before too, but couldn't reproduce that one yet. The "got wrong command id" one at least *should* be fixed by the "pvetest" version.
Have you configured your boot disk as the first entry in the boot order in the VM's Options menu?
Also, could you post your VM config (qm config <vmid>) and pveversion -v, please.
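For reference, roughly like this (a sketch; <vmid> is your VM's ID, and scsi0 is just an example of the boot disk's name as shown in the config):

```bash
# Show the current VM configuration (including the 'boot' line)
qm config <vmid>

# Put the system disk first in the boot order (disk name is an example)
qm set <vmid> --boot order=scsi0

# Version information for the report
pveversion -v
```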
If you have two rate-limited VMs that communicate with each other, the rate-limiting is applied twice. This might lead to such effects as you are seeing.
When you say ratelimit, do you mean you are actively rate-limiting, or are you just referring to the fact that the connection is slower than it could be?
What kind of performance do you get between a VM and the PVE host?
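A quick way to measure that (a sketch, assuming iperf3 is installed on both ends and 192.0.2.10 stands in for the host's IP):

```bash
# On the PVE host: start the iperf3 server
iperf3 -s

# Inside the VM: run the client against the host for 30 seconds
iperf3 -c 192.0.2.10 -t 30
```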
Hm, memory performance should not be affected at all by containers. Potentially, 'sysbench' is doing a lot of syscalls in the chosen memory access mode (mmap/munmap maybe?), which could be slightly slower due to filtering and restrictions. I don't think this should have any impact on real-world...
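One way to check that theory (a sketch; the sysbench subcommand and options depend on what you ran for your benchmark):

```bash
# Count syscalls made by the benchmark; a huge number of mmap/munmap calls
# would support the "syscall overhead" explanation
strace -c -f sysbench memory run
```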
Do you see your guest's IPs in the status panel? If not, your guest agent may be misconfigured. You need to enable it in the VM options and also install and run/enable it within the guest. Also, not all filesystems in the guest are supported - which OS and FS are you using inside the VM?
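For completeness, enabling it on both sides looks roughly like this (a sketch for a Debian/Ubuntu guest; <vmid> is your VM's ID, other distros use their own package manager):

```bash
# On the PVE host: enable the guest agent option for the VM
qm set <vmid> --agent enabled=1

# Inside a Debian/Ubuntu guest: install and start the agent
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent
```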
"illegal instruction" is a weird error indeed. Can you post your VM config? (qm config <vmid>) Potentially try using a different CPU model for your VM, even going to 'host' may be interesting (if only for testing).
Since the task itself is still active (it has a UPID), and thus probably still shows up in the task list, it's safe to assume that either the VM lock or the vzdump lock is still held. In that case no new backup job can be started.
I would definitely suggest...
"D" steht für einen "dead" process, das heißt der task wartet auf einen syscall. Bei einem NFS share kann das recht einfach passieren wenn z.B. die Verbindung abbricht.
Aus einem steckengebliebenen "D" state kommt man nur durch einen reboot wieder heraus. Das ist leider ein feature des Linux...
What state are the corresponding processes in? (Column 'S' in 'top' or 'htop' - F5 in htop is very useful for finding them.) What kind of target is the backup written to? (local mount, NFS, SMB, ...)
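To quickly list only the processes stuck in "D" state, something like this also works (a sketch using plain ps):

```bash
# Show PID, state, the kernel function being waited on, and the command line
# for all processes currently in uninterruptible sleep ("D")
ps -eo pid,stat,wchan:32,cmd | awk 'NR==1 || $2 ~ /^D/'
```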
Those are hard-disk errors. The (SATA) hard disk/SSD in that machine appears to be defective. Most likely this has already led to data corruption, which is why PVE no longer runs stably.
Buy a new disk, recover the data of any important VMs/CTs if no current backup is available, and PVE...
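To confirm the suspicion, a SMART readout of the disk usually makes it obvious (a sketch; /dev/sda is just an example, and smartmontools needs to be installed):

```bash
# Show SMART health status and error counters for the suspect disk
smartctl -a /dev/sda
```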
This should work, but these kinds of dongles are often a bit weird about it... I would suggest trying PCI passthrough of a USB controller; that should appear the same as a native one to the VM.
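Roughly, passing a whole USB controller through would look like this (a sketch; the PCI address is just an example, find the real one with lspci, and IOMMU needs to be enabled on the host):

```bash
# Find the USB controller's PCI address
lspci | grep -i usb

# Pass that controller through to the VM (address is an example)
qm set <vmid> --hostpci0 0000:00:14.0
```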
It should automatically come back after a while; the kernel NFS client retries by default, if I'm not mistaken. If it doesn't, maybe the server is using an outdated NFS version, or something on that side requires a full reconnect of sorts?
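To see which NFS version and mount options were actually negotiated, you can check on the PVE host (a sketch):

```bash
# Show the negotiated protocol version and options for all NFS mounts
nfsstat -m
```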