It is not possible to have a VM on two nodes at once. You can use shared storage, for example Ceph, to allow fast failover via our HA stack though.
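If you want to try that route, adding a VM to the HA stack is a single command (a minimal sketch, assuming VMID 100):

    ha-manager add vm:100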
Please refer to the documentation I've linked above.
Also discussed here: https://forum.proxmox.com/threads/error-with-backup-when-backing-up-qmp-command-query-backup-failed-got-wrong-command-id.88017/post-413542
Does your VM hang after such a backup fails, or does it continue to run fine?
Yes, "local" is a reserved storage used to describe the root filesystem ('/') on which PVE resides.
It shouldn't be possible to add another storage with the same name, however.
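For reference, on a default install "local" is defined in /etc/pve/storage.cfg roughly like this (the content types may differ on your setup):

    dir: local
        path /var/lib/vz
        content iso,vztmpl,backup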
"Replication" in PVE does *not* mean the VM is configured on multiple nodes. It simply means that the storage will be synced across the network every couple of minutes, so if one node dies, the other can take over the VM via the HA stack. For this to work, *all* disks of a VM must be on a ZFS...
Migration between CPU vendors (Intel <-> AMD) is not supported in general. To exclude CPU issues, I would try migrating a VM using only the base 'kvm64' CPU model.
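That is, something like this (assuming VMID 100):

    qm set 100 -cpu kvm64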
Also, after a VM crashes, check the syslog (both on source and target PVE, journalctl -e) for any errors.
Please post your VM config (qm config <vmid>) and pveversion -v output. There are a number of reasons why a VM could be slow, not least the guest itself - have you tried with a Linux VM?
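For reference, that would be (assuming VMID 100):

    qm config 100
    pveversion -v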
WiFi isn't supported on PVE. It's possible to set it up, since it's a regular Debian under the hood, but bridges in particular work very poorly (WLAN in general doesn't like multiple MAC addresses per AP client). We have a wiki page with some assistance: https://pve.proxmox.com/wiki/WLAN
Hm, not saying it has to be related, but is there a reason you disable the hypervisor flag (args: -cpu 'host,hypervisor=off,svm=on,kvm=off')? This disables the entire Hyper-V enlightenment stack and can cause the VM to run a lot slower. I also wouldn't be surprised if it causes issues with...
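If you want to test without the override, you can drop the 'args' line entirely (assuming VMID 100):

    qm set 100 --delete args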
If the VM gets stuck completely again, could you try to capture a gdb trace as described here: https://forum.proxmox.com/threads/all-vms-locking-up-after-latest-pve-update.85397/post-377434 ?
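The gist of it is attaching gdb to the QEMU process and dumping all thread backtraces, roughly like this (a sketch, assuming VMID 100 and gdb installed on the host):

    gdb --batch -ex 'thread apply all backtrace' -p $(cat /var/run/qemu-server/100.pid)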
The logs you posted do not contain the "got wrong command id" error, which makes me think this is a different issue than the one discussed in this thread. It appears as though QEMU starts to hang at the guest agent interaction, does this error also happen with VMs that do not use the agent? How...
Exactly, the disk must not show up in /etc/fstab on the host.
It's not strictly true, as you can certainly reuse a disk from the host or in different VMs, but for the sake of the argument here, imagine that passing through a disk will wipe it. Take precautions as if that were the case, and you should...
Once again, you are not supposed to see the disk on the host at all. If you see the disk in the panel on the left of the web-GUI, you have configured something terribly wrong and are setting yourself up for permanent data loss. Please remove the disk from all configurations on the host, except...
Those are a few too many SMART errors, I'd say... Is it a hardware RAID? If so, it might be the controller. But something is definitely wrong with the disks or the connection, in my opinion.
If you pass through a disk as described in the wiki (i.e. with "qm set -scsiX /dev/disk/by-id/..."), you must not use it for anything else on the host, especially not as a PVE storage. Remove your storage from the datacenter screen.
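For illustration, the wiki-style passthrough looks like this (the disk ID below is made up, use your actual /dev/disk/by-id/ path and a free SCSI slot):

    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL

After that, the disk belongs to that VM and nothing else on the host should touch it.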
I'm a bit unsure what you meant by the emergency mode...
PVE does not use rEFInd... we use either GRUB or systemd-boot (the latter if you use ZFS on UEFI). I'd recommend running 'proxmox-boot-tool refresh' and 'grub-install' if you know your bootloader configuration.
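The commands would be along these lines (replace /dev/sdX with your actual boot disk, and only run grub-install if you boot via GRUB):

    proxmox-boot-tool refresh
    grub-install /dev/sdX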
That looks like data corruption. Are there any entries in the syslog ('journalctl -e')? In that case I would try a different disk, and possibly also run a 'memtest'...
No, that appears to be a different error. This thread is talking about live-restore, you're talking about backup. For backups to be made, we need to temporarily start the VM (in a paused mode, the guest doesn't boot), but this fails because only 2 CPU cores are detected on your system (which is...
I have posted a tentative fix for the issue on the upstream QEMU mailing list. I am not sure this is indeed the right fix for this issue, so we need to wait and see what upstream has to say about it. https://lists.nongnu.org/archive/html/qemu-devel/2021-08/msg03586.html
Thank you very much for...
There's no way of knowing if it is the same issue, a kernel panic can have a huge number of causes... In general, if you're not sure, open a new thread. In your case it seems to be a bad page fault, so... bad RAM? bad installation disk? corrupted disk? just guessing though...
This seems to be a separate issue from your first? ("got timeout" vs. "got wrong command id") Those logs don't tell us anything about the original problem, unfortunately...
In this case though, it could be a stuck or extremely slow storage target, so switching to NFS might be a short-term solution...