But Roland, anyone who can afford the Community Subscription should by all means stay on it. I also recommend it for clubs because of the Enterprise repo.
This is only about the whining home users who don't know how a search engine works.
That's exactly the question. At most, I log in to my Proxmox GUI maybe once or twice a week.
In my eyes, the whole thing had grown into a classic popcorn thread by the end of the first thread page at the latest.
The option you listed here is, I guess, for seamless offline migrations. That's something I'll probably resort to, as it's the simplest. It's just too bad it means sidestepping live migrations.
Of course I have my NFS storage tagged as shared on...
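For context, a minimal sketch of the two migration modes (VM ID 100 and target node pve2 are placeholders here); the --online flag is what needs storage both nodes can reach:

    # offline migration: the VM is stopped, moved, then started again
    qm migrate 100 pve2

    # live migration: the VM keeps running throughout
    qm migrate 100 pve2 --online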
You need to use either 2 MB pages or 1 GB pages, not mix them like you're trying to do.
Huge pages will only be used by VMs you have explicitly configured to use huge pages; everything else will use standard 4 KB pages.
Huge page allocation also needs to take NUMA into account...
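As a rough sketch of what that looks like in practice (the numbers are illustrative, not recommendations): reserve the pages on the host kernel command line, then point the VM at them; qm's hugepages option takes 2 for 2 MB pages or 1024 for 1 GB pages:

    # host, /etc/default/grub: reserve sixteen 1 GB pages at boot
    # (then run update-grub and reboot)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet default_hugepagesz=1G hugepagesz=1G hugepages=16"

    # VM 100 (placeholder ID): use 1 GB pages, with NUMA enabled so the
    # pages are allocated on the right node
    qm set 100 --hugepages 1024 --numa 1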
Unfortunately I have no NFS storage configured, so I can't confirm this.
But my only "cifs" share does confirm your observation: no explicit "shared 1", yet it is treated as shared.
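That matches how /etc/pve/storage.cfg behaves: for network-backed storage types such as nfs and cifs the shared flag is implied, so an entry like the following (server and share names are made up) counts as shared without spelling it out:

    # /etc/pve/storage.cfg
    cifs: mynas
            server 192.168.1.10
            share vmdata
            content images,rootdir
            # no "shared 1" line needed: cifs/nfs storages are treated as shared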
Yes, I experienced the exact same issue in my (two-node) homelab. My guess is that it happens every time I reboot a guest while the node is low on RAM. Maybe you can try again with this in mind?
Or use pve-zsync, but then you can't use the NFS share on your NAS, and you won't have migration or failover like with a cluster. Instead, you would replicate the VMs/LXCs from one node to the other and would be able to launch them in case one...
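For anyone curious, a hedged sketch of such a pve-zsync job (VM ID, target address, and ZFS dataset are placeholders; check pve-zsync help on your version):

    # replicate VM 100 to a ZFS dataset on the second node, keeping 7 snapshots
    pve-zsync create --source 100 --dest 192.168.1.20:rpool/replica \
        --name homelab --maxsnap 7 --verbose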
Troubleshooting is beyond what I can do at this point, as it's solely network related. I'd need access to your network and switch infrastructure, your VXLAN setup, and more to debug further. You may want to get in touch with your...
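As a starting point for that kind of self-debugging, capturing the encapsulated VXLAN traffic (UDP port 4789 by default) on both ends usually helps; the interface names below are placeholders:

    # watch VXLAN frames on the underlay NIC
    tcpdump -ni eno1 udp port 4789

    # compare with what the guest-facing bridge sees
    tcpdump -ni vmbr0 icmp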
There's no answer for nc 211.182.233.2 53 &> /dev/null; echo $?
I think the return path is broken too. I checked the tcpdump, and the returning ports are random. The exit node is node107 (another node) and the VM is on node94.
vm -> node94 ->...
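One way to verify that (node names as above, the DNS server IP taken from the nc test) is to run the same capture on both nodes and compare what comes back:

    # on node94 (where the VM runs) and on node107 (the exit node):
    tcpdump -ni any host 211.182.233.2 and port 53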
After a day of hacking around with ChatGPT I've managed to get iGPU passthrough working on Proxmox VE 9. I've asked ChatGPT to write a summary of what we did to get this going. Hope this works for you.
Note: massive credit goes to...
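Since the actual summary isn't quoted here, the following is only the generic shape of Intel iGPU passthrough on Proxmox, with placeholder IDs; the poster's write-up may well differ:

    # host: enable the IOMMU (Intel example), then update-grub and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # keep the host driver off the iGPU so vfio-pci can claim it
    echo "blacklist i915" > /etc/modprobe.d/blacklist-igpu.conf

    # pass the device to VM 100; 00:02.0 is the usual iGPU address
    # (verify with lspci), and pcie=1 needs a q35 machine type
    qm set 100 --hostpci0 0000:00:02.0,pcie=1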
I think that's a pretty old bug, IMHO. I've seen it a couple of times already and encountered a similar problem with an e1000 driver on one of my mini PCs as well. I've looked up the old thread regarding this, and it's here.
TL;DR:
It's most likely...
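If it is the e1000e "Detected Hardware Unit Hang" issue, the workaround that usually comes up in those threads is disabling segmentation offload; a sketch, assuming the NIC is eno1:

    # one-off test
    ethtool -K eno1 tso off gso off

    # to make it persistent, add under the iface stanza in /etc/network/interfaces:
    #   post-up /sbin/ethtool -K eno1 tso off gso off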