It would be very nice if you could post this on the devel list [1], or at the very least file a feature request in Bugzilla [2], to get this compiled by default: it is a nice feature, and this kind of modification will be overwritten on updates and...
The L40S seems to be supported by the current driver version 18, so it should work. You can use it without a license, e.g. to check whether it works for you, but performance will be limited by the driver unless an NVIDIA license is activated [1]
[1]...
XY problem... IMHO you should ask about / investigate why you are suffering from that load from pvestatd instead of asking how to disable it, which can't be done while keeping PVE working as expected.
First time I've heard about this issue, so...
Proxmox VE is the newest addition to the list of NVIDIA vGPU supported hypervisors, beginning with NVIDIA vGPU Software v18.0, released today.
NVIDIA vGPU software enables multiple virtual machines to share a single supported physical GPU. Learn more at...
Just in case anyone wants to rebuild that for Windows Domain Machines:
Create a WMI filter "ProxmoxVM" with the query
SELECT * FROM Win32_ComputerSystem WHERE Manufacturer LIKE '%QEMU%'
to target VMs only.
Create a new GPO "QEMUGuestAgentSettings"...
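If you want to double-check that the filter will actually match before linking the GPO, you can look at what a guest reports as manufacturer from inside the VM (plain wmic, nothing Proxmox-specific; on a PVE guest this normally comes back as QEMU):

    wmic computersystem get manufacturer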
The Ceph cluster network is used for replication traffic only. All traffic from clients, MON, MDS and OSD flows through the public network. I would bond both 2.5G NICs in an LACP LAG to two stacked/MLAG switches and set both public and cluster networks...
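For reference, a minimal sketch of what that could look like in /etc/network/interfaces (interface names and the 10.10.10.0/24 subnet are just placeholders, adapt them to your setup):

    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp2s0 enp3s0
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
    #Ceph public + cluster network

and in /etc/pve/ceph.conf you would point both networks at that same subnet:

    [global]
        public_network = 10.10.10.0/24
        cluster_network = 10.10.10.0/24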
What exactly happened on the network and the PVE hosts when the VM did not move to another host in the cluster?
Remember that PVE HA will only try to restart VMs configured for HA if the host loses quorum, and for that all corosync links have to...
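If it happens again, I'd check quorum and HA state on a surviving node with something like:

    pvecm status            # quorum and member list as corosync sees it
    corosync-cfgtool -s     # per-link status of each corosync link
    ha-manager status       # state of HA-managed resources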
Had no idea about that. I used good old BIOS for the PVE VM I used to install ESXi.
I strongly advise against installing anything on the PVE host itself. Use another VM to get vCenter running.
That depends on your corosync configuration and which interfaces are used for corosync and Ceph. Corosync networks are a completely different thing from Ceph networks. In fact, you may even have quorum in PVE and not in Ceph, and vice versa. Also...
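To illustrate: the corosync links are whatever you defined in /etc/pve/corosync.conf, e.g. something like this (addresses are just an example):

    nodelist {
      node {
        name pve1
        nodeid 1
        quorum_votes 1
        ring0_addr 10.0.0.1
        ring1_addr 10.0.1.1
      }
      ...
    }

while Ceph only cares about the public_network / cluster_network set in /etc/pve/ceph.conf, which can be entirely different subnets.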
Only the PCI device. Remember that any passed-through device can't be used by the host once the VM has started, so don't pass through the one that you used for your vmbr0 or vmbr1!
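As a quick sketch, passing the device would look something like this (VMID 100 and the PCI address are placeholders, check yours with lspci first):

    lspci -nn | grep -i ethernet
    qm set 100 -hostpci0 0000:03:00.0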
Use MLAG with both F5 switches and configure an LACP (802.3ad) LAG both in PVE and in the switches, and you will get both links in an active/active setup with failover/failback times typically under 400ms. Remember to add the LAG to the Ceph VLAN.
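If the Ceph VLAN is tagged on that LAG, a minimal /etc/network/interfaces sketch would be something like this (VLAN ID 120 and the address are just placeholders):

    auto bond0.120
    iface bond0.120 inet static
        address 10.10.120.11/24
    #Ceph VLAN on top of the LACP bond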
You...
This is surprising, given the added instructions of v3 and v4 over v2 or v2-AES. Here [1] is the definition of each x86-64 CPU type. Maybe you could try using a custom CPU type, manually add each flag, and bench your workload to try to find out...
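As a sketch of that approach: custom CPU models go in /etc/pve/virtual-guest/cpu-models.conf and are then referenced from the VM config with a custom- prefix (the model name and the flags below are just an example starting point, add/remove flags between benchmark runs):

    cpu-model: benchtest
        flags +aes;+avx;+avx2
        reported-model kvm64

and then, for the VM:

    qm set 100 --cpu custom-benchtest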
I think I fixed this.
The issue is: on certain motherboards with EMA support, the Intel chipset uses the same physical port as the OS. By default Proxmox uses the built-in NIC MAC address. This means 2 IP addresses (1x EMA and 1x Proxmox) on...
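In case anyone hits the same thing, one way to avoid having both IPs behind the same MAC would be to give vmbr0 its own locally administered MAC instead of inheriting the NIC one, something along these lines in /etc/network/interfaces (addresses and the MAC below are just an example):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        hwaddress 02:00:00:aa:bb:01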
IMHO something is off with that measurement: it's simply impossible that mechanical disks provide an 8-minute GC for a 2TB datastore, no matter the filesystem. Maybe it's using some cache? I do have some 2-4TB datastores using either single drives...
Thanks @VictorSTS. I have a test cluster, but it inadvertently got ahead of my production cluster and is already at 18.2.4. Downgrading doesn't seem to be an option. 18.2.2 to 18.2.4 seems like a straightforward upgrade, so I was going to do...
Yes, you need to disable CSO (checksum offload) inside ESXi. I don't really know why just yet; I found it after 2 days of trial and error and browsing online docs. The idea came from here [1] and the fact that tcpdump shows lots of CRC errors typically...