Windows also tends to zero the memory under certain circumstances, which causes it to be allocated immediately as well. In general, the RAM you give to a VM should be considered gone.
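For what it's worth, a minimal sketch of how ballooning is configured, assuming a hypothetical VMID 100; even with this, a Windows guest zeroing its pages can keep host usage at the full allocation:

    # Give the VM 8 GiB max but allow the balloon driver to shrink it to 2 GiB
    # (VMID 100 is hypothetical; the guest needs the virtio balloon driver)
    qm set 100 --memory 8192 --balloon 2048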
First, please search the forum before posting. Somebody (always "new member") asks this question every few days.
First, if you have any PCIe devices passed through to the VM, then all VM memory must be pinned at startup to avoid issues with DMA...
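As a hedged illustration, this is roughly what such a passthrough entry looks like in the VM config (VMID and PCI address are examples, not from this thread); with a hostpci line present, all guest RAM gets locked on the host at startup:

    # /etc/pve/qemu-server/100.conf
    memory: 16384
    hostpci0: 0000:01:00.0,pcie=1
    # With hostpci present, the full 16 GiB is pinned for DMA at VM start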
First of all, thanks to everyone here for the super-fast support. Up to now I've had Proxmox and the VMs on one disk, but I'm going to split that up now. That should make things a bit faster still. Merry Christmas
Add to that: a flaky bus controller on the motherboard that starts working again once it cools down or the caps fully discharge. Also check the PSU (swap in an alternate one) and the RAM.
Corrosion on the electrical contacts, which gets removed by disconnecting and reconnecting?
Drive overheating and shutting down, until disconnected and cooled down?
Hi,
I'm seeing NULL pointer dereferences quite often these days. I'm running Proxmox test environments on KVM (RHEL 9.1), and the problem started appearing randomly a few weeks ago, both on PVE 9.1 and the latest available PVE 8 version. Example...
In PVE, of course, I had so far only connected to the virtual IP. From the systems I've used before, I'm used to the other paths being announced via that IP and connected automatically.
But even when I have multiple connections in the...
Multiple LXCs can use the same GPU simultaneously because they're all running on the host kernel.
VM PCI passthrough is exclusive - the GPU gets fully assigned to one VM, and the host (and therefore all LXCs) lose access to it completely.
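A minimal sketch of the sharing side, assuming an Intel/AMD GPU whose DRM nodes live under /dev/dri (device numbers may differ) and a hypothetical container ID 101; each container simply bind-mounts the host device nodes:

    # /etc/pve/lxc/101.conf
    # Allow the DRM character devices (major 226) and bind-mount /dev/dri
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    # The same two lines can go into any number of containers at once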
Same issue for me. Tried everything in this post and others as well. Going to completely kill this instance and start fresh on PVE 8.
NFS shares are unusable on PVE 9 for me.
Welcome, @Gabriele_Lvi.
I'm not stating that your issue was also present in PVE 8, but as far as I remember from the forum posts, the graphs in PVE 9 are more "spiky" than in PVE 8 because they are prepared a different way than they used to be in PVE...
You shouldn't use consumer SSDs like the Samsung EVO with ZFS. ZFS does synchronous writes, and consumer SSDs don't have a supercapacitor, so they can't safely acknowledge sync writes from their memory cache before committing them to the NAND cells. (It's really something like 200~400 IOPS on...
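If you want to see this yourself, a rough fio sketch (file path and runtime are arbitrary) that measures 4k sync-write IOPS, which is what ZFS sync traffic looks like to the SSD:

    # Each write is followed by fsync, defeating the SSD's volatile cache
    fio --name=sync-iops --filename=/tank/fio-test --size=1G \
        --rw=randwrite --bs=4k --ioengine=sync --fsync=1 \
        --runtime=30 --time_based
    # Consumer SSDs typically collapse to a few hundred IOPS here, while
    # enterprise drives with power-loss protection stay far higher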
hi cptwonton,
maybe it was a caching issue related to the node renaming. How long a ago die you performed the renaming?
Private browsertabs might also help with verifying webui issues and the browser console like Dominik already pointed out...
@LongQT-sea
Hello! Could you please tell me if this method of passing integrated graphics (Intel N100 + Intel UHD 630) to Ubuntu 22.04 (installed from Proxmox VE scripts) is suitable for subsequent transcoding of video files in Docker using a...
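Not an answer from the guide's author, but once /dev/dri shows up inside the Ubuntu guest, the usual pattern is to hand the render node to the container; a sketch with a hypothetical Jellyfin container as the transcoding workload:

    # Expose the host's iGPU render node for VA-API transcoding
    # (container name and image are just examples)
    docker run -d --name jellyfin \
      --device /dev/dri:/dev/dri \
      jellyfin/jellyfin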
Hello Swifty,
I'm affected by exactly the AppArmor problem. The solution turned out to be disabling the named service from AppArmor control, as sketched below.
Thanks for pointing that out.
Regards.
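For anyone searching later, the disable step looks roughly like this (profile path as shipped on Debian-based hosts; verify yours first):

    # Unload the named profile and prevent it from loading on boot
    aa-disable /etc/apparmor.d/usr.sbin.named
    # Equivalent manual form:
    ln -s /etc/apparmor.d/usr.sbin.named /etc/apparmor.d/disable/
    apparmor_parser -R /etc/apparmor.d/usr.sbin.named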
Yes sir, I even rebooted the entire server and tried to include custom.cf in the template file.
No luck; also, if I do a lint test, it clearly gives me the correct output.
I also don't see the actual headers in Proxmox Mail Gateway.
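A hedged sketch of the checks worth running here, assuming custom.cf sits in /etc/mail/spamassassin/ as on a stock PMG install:

    # Verify SpamAssassin parses the custom rules without errors
    spamassassin -D --lint 2>&1 | grep -i custom
    # Regenerate config from templates and restart the filter so changes apply
    pmgconfig sync --restart 1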
Was there anything else to this? I basically get a half-freeze of the host OS, and TrueNAS will never boot. I've tried several different firmware versions.
Just wondering if this is possible?
For my HBA, I want to pass some drives through to TrueNAS and leave the rest with Proxmox; Proxmox has multiple ZFS pools, and TrueNAS will have one ZFS pool.
Thanks
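Not the only approach, but since a single HBA can't be split across IOMMU groups, the usual workaround is per-disk passthrough to the TrueNAS VM while Proxmox keeps the rest; a sketch with hypothetical VMID and disk IDs:

    # Attach two whole disks to VM 100 by stable ID (paths are examples)
    qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL1
    qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL2
    # Trade-off: TrueNAS then sees virtual disks rather than getting raw
    # SMART access; full HBA passthrough remains all-or-nothing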