Yes, I experienced the exact same issue in my (two-node) homelab. My guess: it happens every time I reboot a guest while the node is low on RAM. Maybe you can try again with this in mind?
The option you listed here is, I guess, for seamless offline migrations. That's probably what I'll resort to, as it's the simplest. It's just too bad it means sidestepping live migrations.
Of course I have my NFS storage tagged as shared on...
Or use pve-zsync, but then you can't use the NFS share on your NAS, and you won't get migration or failover like with a cluster. Instead, you would replicate the VMs/LXCs from one node to the other and would be able to launch them in case one...
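For illustration, a minimal pve-zsync setup could look like the following sketch. The VMID, destination IP, and ZFS dataset name are placeholders, not values from this thread, and both nodes need ZFS-backed storage for this to work:

```shell
# Create a replication job for VM 100 to the second node's ZFS pool.
# 192.168.1.20 and rpool/backup are example values; adjust to your setup.
pve-zsync create --source 100 --dest 192.168.1.20:rpool/backup --verbose --maxsnap 7

# Show the configured sync jobs and their state.
pve-zsync list
```

By default the job is added to cron and runs every 15 minutes; `--maxsnap` bounds how many snapshots are kept on the destination.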
Troubleshooting is beyond what I can do at this point, as it's solely network related. I'd need access to your network and switch infrastructure, your VXLAN setup, and more to debug further. You may want to get in touch with your...
There's no answer for `nc 211.182.233.2 53 &> /dev/null; echo $?`
I think the return path is broken too. I checked the tcpdump, and the returning ports are random. The exit node is node107 (another node) and the VM is on node94.
vm -> node94 ->...
You need to either use 2 MB pages or 1 GB pages, not mix them like you're trying to do.
Huge pages will only be used by VMs you have configured with huge pages; everything else will use standard 4k pages.
Huge page allocation also needs to take NUMA into account...
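As a rough sketch of the per-NUMA-node allocation mentioned above (the node count, VMID, and page counts are assumptions about the setup, not values from this thread):

```shell
# Reserve 1024 x 2MB hugepages, split explicitly across two NUMA nodes.
# Requires root; the sysfs paths are standard, but node1 only exists on
# multi-socket / multi-node systems.
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 512 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# Tell Proxmox that VM 100 should use 2MB hugepages
# (the qm 'hugepages' option accepts 2, 1024, or 'any').
qm set 100 --hugepages 2

# Verify the reservation took effect.
grep -i huge /proc/meminfo
```

Mixing sizes is where this goes wrong: pick one page size, reserve it on every NUMA node the VM's memory can land on, and configure the VM for that same size.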
After a day of hacking around with ChatGPT, I've managed to get iGPU passthrough working on Proxmox VE 9. I asked ChatGPT to write a summary of what we did to get this going. Hope this works for you.
Note: massive credit goes to...
I think that's a pretty old bug, IMHO. I've seen it a couple of times already and encountered a similar problem with an e1000 driver on one of my mini PCs as well. I've looked up the old thread regarding this, and it's here.
TL;DR:
It's most likely...
Apparently not, or there are presumably administrative or tax-related reasons for it...
Let's close this thread now. Everything has been said on the matter.
Could you please check whether the port is reachable by executing this on all VMs:
`nc 211.182.233.2 53 &> /dev/null; echo $?`
That command just checks whether the port is reachable.
Do you have some selective routing going on? To me it looks like the...
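A slightly more robust variant of that check, as a sketch: `-z` scans without sending data and `-w` adds a timeout so the command doesn't hang on filtered ports (the host and port below are examples, not the address from this thread):

```shell
# Zero-I/O reachability probe with a 3-second timeout.
# 127.0.0.1:65000 is just an example target expected to be closed.
if nc -z -w 3 127.0.0.1 65000 2>/dev/null; then
    echo "reachable"
else
    echo "unreachable"
fi
```

An exit status of 0 (printing "reachable") means the TCP connection succeeded; anything else means the port is closed, filtered, or the host is unreachable.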
For VM disks not to be copied, the underlying storage needs to be officially tagged as "shared". Compare your settings with the table: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types
And it needs to be actually configured...
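For reference, this is roughly what a manually shared directory storage looks like in `/etc/pve/storage.cfg` (the storage name and path are made-up examples; only set `shared 1` if the path really is the same shared mount on every node, since NFS-type storages are treated as shared automatically):

```
dir: shared-nfs
    path /mnt/shared
    content images,rootdir
    shared 1
```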