I have not tried "vfio-pci.disable_idle_d3=1".
But my theory about the vfio-pci driver being assigned vs. unassigned might have some meat to it after all...
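One way to test that theory is to compare the driver binding before and after, and to verify the module option actually took effect; a minimal sketch, where the PCI address 01:00.0 is an assumption (substitute your passthrough device's address from lspci):

```shell
# Hypothetical check; 01:00.0 is a placeholder for your device's address.
# Shows "Kernel driver in use: vfio-pci" (or whichever driver is bound):
lspci -nnk -s 01:00.0

# To try vfio-pci.disable_idle_d3=1, append it to the kernel command line,
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet vfio-pci.disable_idle_d3=1"
# then run update-grub and reboot. Verify after boot:
cat /proc/cmdline
cat /sys/module/vfio_pci/parameters/disable_idle_d3   # "Y" when active
```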
Tested on a different server, started that server up May 8th...
Hi, I am trying to track down an issue I am having. About a month or so ago, 2 of my 5 nodes started showing the NFS share as unknown. The NFS share is set up at the datacenter level, and all 5 nodes have access. It’s a...
From what I gather, these modules have to either be already in use, loadable through an unprivileged (non-root) trigger, or compiled into the kernel by default to be exploitable from inside containers.
Doesn't the default confined AppArmor profile for...
This may seem like a silly question, but I'm looking for some direction on logging the activity that is normally output to the command line when you run the 'Snapshot Create' option.
I can't find any reference in the documentation for...
Hello everyone,
The problem has been solved, and it was more twisted than one might have imagined.
The interfaces of our cluster nodes could in fact sustain stable Corosync communication only at 10 Gbps or 25 Gbps.
What was totally...
Thanks @Neobin, that was helpful. I was able to create the script below with the help of AI.
You can change "*/15" to adjust the schedule to your liking, and add --rate to prevent storage/network bottlenecks, e.g. --rate 20 (20 MB/s).
#!/bin/bash
#...
Those are on the cheaper and slower side of consumer SSDs. They will not perform well under sustained load and the primarily sync writes that Ceph does.
The recommendation for enterprise SSDs with power loss protection (PLP) is there for good...
Additionally, with just 3 nodes in a Ceph cluster, make sure you have at least 4 OSDs in each node. With only 2 per node, you will likely have issues if one of the OSDs fails, as Ceph will then recover the lost replicas to the only node it can...
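The failure scenario above is easy to put into rough numbers; a minimal back-of-envelope sketch, assuming 1 TB OSDs at 60% utilization (both figures are made up for illustration):

```python
# 3-node cluster, 2 OSDs per node, 3 replicas (one copy per node).
# OSD size and fill level are assumptions for illustration only.
osd_size_tb = 1.0
fill = 0.6                          # 60% used before the failure
used_per_osd = osd_size_tb * fill

# If one OSD dies, its replicas can only be recovered onto the single
# remaining OSD in the same node (replicas must stay on distinct nodes),
# so that OSD must hold its own data plus the failed OSD's data:
remaining_used = 2 * used_per_osd
remaining_ratio = remaining_used / osd_size_tb
print(f"surviving OSD would be at {remaining_ratio:.0%}")  # 120% -> hits the full ratio
```

With 4 OSDs per node the recovered data spreads over three surviving OSDs instead of one, so the same failure only adds a third as much load to each.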
That's an understatement.
The Crucial BX series is one of the worst-performing SSDs I have ever seen, even in client machines.
You may use it as cold storage, but anything warm or hot will perform terribly on it.
More so if it's used with ZFS/Ceph.
Even...
Hi, please enable backup fleecing (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_vm_backup_fleecing). Most likely your VM is I/O-starved due to a slow backup target.
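For reference, a minimal sketch of enabling fleecing from the CLI; the VM ID and storage names are placeholders for your own setup (the web UI equivalent is under the backup job's Advanced options):

```shell
# Hypothetical one-off backup with fleecing enabled; "100", "my-pbs" and
# "local-lvm" are assumptions, not your actual IDs/storages.
vzdump 100 --storage my-pbs --fleecing enabled=1,storage=local-lvm

# Or enable it globally for all backups in /etc/vzdump.conf:
#   fleecing: enabled=1,storage=local-lvm
```

The fleecing storage should be fast local storage, since it absorbs the VM's writes while the backup target is slow.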
Any time you build an environment with such nested dependencies, you're creating an unsupportable, or at least difficult-to-support, solution. As a matter of design, your storage layer and your compute layer should not be interdependent.
Since you've decided that...
There is a new QEMU 11.0 package available in the pve-test and pve-no-subscription repositories for Proxmox VE 9.
After testing QEMU 11.0 internally for over two weeks and having this version available in the pve-test repository for over a week...
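For anyone wanting to try it, a minimal sketch of enabling the pve-test repository, assuming Proxmox VE 9 on Debian 13 "trixie" and the standard repository layout (check the official docs for the exact suite name before relying on this):

```shell
# Hypothetical repo setup; verify against the Proxmox package repository docs.
echo "deb http://download.proxmox.com/debian/pve trixie pve-test" \
    > /etc/apt/sources.list.d/pve-test.list
apt update
apt install pve-qemu-kvm    # the package that ships the QEMU build
```

Running VMs keep their old QEMU binary until they are stopped and started again (or live-migrated), so the new version only takes effect per-VM after a fresh start.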
Exciting! We will put this into our CI/CD.
Thank you!
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Hi,
a tcpdump with the -XX option might help. If available, capture on both sides (switch and PVE), then compare the old and new kernel to spot differences in the packets that are (not) being sent and follow the breadcrumb trail...
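A minimal sketch of that capture-and-compare approach; the interface name vmbr0 and the peer address 192.0.2.10 are assumptions, substitute your own:

```shell
# Hypothetical diagnostic run; vmbr0 and 192.0.2.10 are placeholders.
# Capture on the new kernel, writing raw packets to a file:
tcpdump -ni vmbr0 -w kernel-new.pcap host 192.0.2.10

# Reboot into the old kernel and repeat into kernel-old.pcap. Then dump
# both captures with link-level headers in hex/ASCII (-XX) and diff them:
tcpdump -XX -r kernel-old.pcap > old.txt
tcpdump -XX -r kernel-new.pcap > new.txt
diff old.txt new.txt
```

Diffing the hex dumps makes missing or altered frames (e.g. dropped VLAN tags or changed offload behavior) stand out quickly.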
Typo (the 2 keys are not far apart ;)).
I agree that behavior seems odd: removing (I believe) only the thread "Are there any companies that extensively use PVE in daily production environments?"
I must also say, I found the initial query rather...
I only added the second GPU to show that I tried primary GPU as well.
Thank you so much! Changing the processor type to "host" did indeed work. I didn't have to install the older driver, but I will, to compare performance/stability over the next...