In PVE I have so far, of course, only connected to the virtual IP. From the systems I have used previously, I know it works such that the other paths are announced through it and connected automatically.
But even when I have multiple connections in the...
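In case it helps with cross-checking, this is roughly how I would verify what the portal announces and whether multipath picks up the additional paths (just a sketch; 192.0.2.10 is a placeholder for your virtual/portal IP):

# Ask the portal which targets and portal addresses it announces
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
# Log in to the discovered nodes
iscsiadm -m node --login
# Check the resulting multipath topology
multipath -ll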
Multiple LXCs can use the same GPU simultaneously because they're all running on the host kernel.
VM PCI passthrough is exclusive - the GPU gets fully assigned to one VM, and the host (and therefore all LXCs) lose access to it completely.
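For reference, a minimal sketch of what sharing a GPU with a container can look like on recent PVE (the container ID 101, the render node path, and the gid are placeholders; adjust them to your setup):

# /etc/pve/lxc/101.conf
# Pass the host render node into the container; gid should match the container's render group
dev0: /dev/dri/renderD128,gid=104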
Same issue for me. Tried everything in this post and others as well. Going to completely kill this instance and start fresh on PVE 8.
NFS shares are unusable on PVE 9 for me.
Welcome, @Gabriele_Lvi.
I'm not saying that your issue was also present in PVE 8, but as far as I remember from the forum posts, the graphs in PVE 9 are more "spiky" than in PVE 8 because they are prepared in a different way than they used to be in PVE...
You shouldn't use consumer SSDs like the Samsung EVO with ZFS. ZFS does synchronous writes, and consumer SSDs don't have a supercapacitor to safely hold sync writes in their memory cache before writing to the NAND cells (it's really something like 200~400 IOPS on...
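If you want to measure this yourself, a sync-write benchmark along these lines makes the difference visible (just a sketch; /tank/fio-test is a placeholder path and the test file can be deleted afterwards):

fio --name=sync-write-test --filename=/tank/fio-test \
    --rw=randwrite --bs=4k --size=1G \
    --ioengine=psync --fsync=1 \
    --runtime=60 --time_based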
hi cptwonton,
Maybe it was a caching issue related to the node renaming. How long ago did you perform the renaming?
Private browser tabs might also help with verifying web UI issues, as does the browser console, like Dominik already pointed out...
@LongQT-sea
Hello! Could you please tell me if this method of passing integrated graphics (Intel N100 + Intel UHD 630) to Ubuntu 22.04 (installed from Proxmox VE scripts) is suitable for subsequent transcoding of video files in Docker using a...
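For context, once the iGPU is visible inside the guest as /dev/dri, handing it to a container for transcoding usually looks something like this (a sketch; the Jellyfin image, media path, and render group ID are just examples, not from the original post):

docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  --group-add 109 \
  -v /path/to/media:/media \
  jellyfin/jellyfin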
Hello Swifty,
Exactly, I'm affected by the AppArmor problem. The solution turned out to be removing the named service from AppArmor control.
Thanks for pointing that out.
Regards.
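In case anyone else hits this, disabling the profile typically looks something like the following (a sketch, assuming the BIND profile lives at /etc/apparmor.d/usr.sbin.named; check aa-status first to confirm the profile name):

# List the currently loaded profiles
aa-status
# Mark the named profile as disabled and unload it from the kernel
ln -s /etc/apparmor.d/usr.sbin.named /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/usr.sbin.named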
Yes sir, I even rebooted the entire server and tried to include custom.cf in the template file.
No luck, even though a lint test clearly gives me the correct output.
I also don't see the actual headers in Proxmox Mail Gateway.
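For what it's worth, this is how I would double-check whether SpamAssassin actually loads the custom rules (a sketch, assuming the spamassassin CLI is available on the PMG host and your rule names contain "CUSTOM"):

spamassassin -D config --lint 2>&1 | grep -i custom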
Was there anything else to this? I get basically a half freeze of the host OS and TrueNAS will never boot. I've tried several different firmware versions.
Just wondering if this is possible?
For my HBA, I want to pass some drives through to TrueNAS and the rest to Proxmox. Proxmox has multiple ZFS pools; TrueNAS will have one ZFS pool.
Thanks
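One alternative worth mentioning: instead of passing the whole HBA through, individual disks can be attached to the VM by ID, which leaves the controller and the remaining drives with Proxmox. A rough sketch (VMID 100 and the disk ID are placeholders):

# Find the stable device ID of the drive intended for TrueNAS
ls -l /dev/disk/by-id/
# Attach it to the VM as an additional SCSI disk
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL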
https://forum.proxmox.com/threads/vm-reboot-issue-vm-stuck-on-proxmox-start-boot-option-screen.154925/#post-705799
Have a read; some of this thread may help you.
Disregard. I figured it out after reading through the docs again. You limit the size of the cache location.
I just applied (probably overkill): zfs set quota=2T backup/s3-cache
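For anyone following along, the setting can be verified afterwards with (dataset name taken from the command above):

zfs get quota backup/s3-cache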
FWIW I called this out in another thread recently too. The pinning tool https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_using_the_pve_network_interface_pinning_tool says it will use nic* but also ends with the paragraph, "It is recommended...
I use Dell Intel X550 rNDC in production without issues. Both the 2x1GbE-2x10GbE and 4x10GbE versions.
The 10GbE uses the ixgbe driver and the 1GbE uses the igb driver.
Use 'dmesg -t' to confirm. Obviously flash the rNDC to the latest firmware...
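For example, a quick way to see which driver bound to which port (the grep pattern just filters for the two drivers mentioned above):

dmesg -t | grep -Ei 'ixgbe|igb'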
Thanks for the suggestion, Dominik; I wish I had thought to do that yesterday.
I tried just now, with the console open, and Proxmox decided it wants to behave.
I've changed nothing about my cluster since posting yesterday, so it's really unclear why it's...