I tried googling about this, but ran into trouble filtering out SR-IOV info for Flex cards, NICs, and everything else Intel offers this feature on.
tl;dr I can't find information on how the system dynamically assigns iGPU VRAM to VF devices. I know from looking at the card in Windows bare metal with GPU-Z that in my 32 GB system, 16 GB is allocated as "shared VRAM," but I'm not sure what that means for using VFs in Proxmox. (If this all works, yes, I will get more RAM. I'm not putting any more cash into this box until I know this feature is working.)
I've set my system up like this:
Code:
root@andromeda2:~# batcat /etc/kernel/cmdline
───────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ File: /etc/kernel/cmdline
───────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
1 │ root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 console=ttyS0,115200n8 console=tty0
───────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
root@andromeda2:~# batcat /etc/sysfs.conf
───────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ File: /etc/sysfs.conf
───────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
1 │ devices/pci0000:00/0000:00:02.0/sriov_numvfs = 3
───────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
root@andromeda2:~#
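For reference, the settings can be sanity-checked after a reboot with the standard PCI SR-IOV sysfs attributes plus lspci, using the same device address as above:
Code:
# Ceiling advertised by the driver (the card supports 7) ...
cat /sys/devices/pci0000:00/0000:00:02.0/sriov_totalvfs
# ... versus the number of VFs currently enabled (should match sysfs.conf)
cat /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs
# The VFs should show up as additional display-class functions on 00:02.x
lspci -d 8086: | grep -iE 'vga|display'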
My intent was to have 3 usable VFs. The card supports 7, so I left max_vfs set to that, but realistically trying to use that many on such a weak iGPU would be problematic. My goal here was to have one VF for a Linux VM that will need it all the time, one VF for a Windows VM that will need it all the time, and one left over to goof around with while I'm testing everything.
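(Side note, in case it matters for the answers: as far as I understand it, the number of active VFs can also be changed at runtime through the same sysfs attribute, without touching max_vfs or rebooting. The kernel requires writing 0 before a new nonzero value, and presumably this only works while no VM is holding one of the VFs.)
Code:
# Tear down the existing VFs first; nonzero -> nonzero changes are rejected
echo 0 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs
# Then enable however many are wanted, up to the max_vfs ceiling
echo 4 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs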
A few questions:
- What is the total amount of shared system RAM the iGPU claims as VRAM, and how much stays available as ordinary system RAM?
- How does it slice that VRAM up between the VFs? I don't need it sliced 7 ways, and in retrospect 3 is an odd number and not a great choice (4 would be better, I realize as I write this), but I'd rather understand what it's doing before I try to experiment. That said ...
- Use case: how do I set the options so it gives me 4 VFs with 4 GB of VRAM each?
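To make that last question concrete: the 4-VF half is presumably just the existing sysfs.conf line with a different number, and the per-VF memory split is the part I can't find documented. Inspecting a VF's BARs with lspci should at least show how big an aperture each VF ends up with (assuming the first VF lands at 00:02.1, which is where I'd expect it; whether the SR-IOV i915 build exposes actual per-VF quota knobs, and where, is exactly what I'm asking):
Code:
# /etc/sysfs.conf: request 4 VFs at boot instead of 3
devices/pci0000:00/0000:00:02.0/sriov_numvfs = 4

# Once the VFs exist, the "Region" lines (BAR sizes) show how much
# aperture the driver carved out for that particular VF
lspci -vvv -s 0000:00:02.1 | grep -i region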