How to get Resizable BAR / ReBAR working

zenowl77
Feb 22, 2024
I just got hold of an Intel Arc A310 to use for encoding, and I need to get ReBAR working on it (I wouldn't mind getting it working on my Tesla P4 too, if possible, but I'm not sure its vBIOS supports it).

Code:
lspci -n -s b5:00.0 -v
b5:00.0 0300: 8086:56a6 (rev 05) (prog-if 00 [VGA controller])
        Subsystem: 1849:6007
        Flags: bus master, fast devsel, latency 0, IRQ 66, NUMA node 0, IOMMU group 4
        Memory at fa000000 (64-bit, non-prefetchable) [size=16M]
        Memory at 383fe0000000 (64-bit, prefetchable) [size=256M]
        Expansion ROM at fb000000 [disabled] [size=2M]
        Capabilities: [40] Vendor Specific Information: Len=0c <?>
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable+ 64bit+
        Capabilities: [d0] Power Management version 3
        Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [420] Physical Resizable BAR
        Capabilities: [400] Latency Tolerance Reporting
        Kernel driver in use: vfio-pci
        Kernel modules: i915

Code:
lspci -vvvs b5:00.0 | grep BAR
        Capabilities: [420 v1] Physical Resizable BAR
BAR 2: current size: 256MB, supported: 256MB 512MB 1GB 2GB 4GB
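The same information is exposed through sysfs: reading a resourceN_resize file returns a bitmask in which a set bit n means a BAR size of 2^n MB is supported. A sketch of what that read should look like for this card (the exact output value is an assumption inferred from the lspci output above):
Code:
cat /sys/bus/pci/devices/0000:b5:00.0/resource2_resize
0000000000001f00
The value 0x1f00 has bits 8-12 set, i.e. 256MB, 512MB, 1GB, 2GB, and 4GB, matching the supported sizes lspci reports.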

Found this about enabling it for an AMD GPU:
VFIO: How to enable Resizeable BAR (ReBAR) in your VFIO Virtual Machine

lspci output for the NVIDIA card too, in case there is any way to get it working (note that it lists no Physical Resizable BAR capability):
Code:
lspci -n -s 02:00.0 -v
02:00.0 0302: 10de:1bb3 (rev a1)
        Subsystem: 10de:11d8
        Flags: bus master, fast devsel, latency 0, IRQ 77, NUMA node 0, IOMMU group 34
        Memory at 91000000 (32-bit, non-prefetchable) [size=16M]
        Memory at 380fe0000000 (64-bit, prefetchable) [size=256M]
        Memory at 380ff0000000 (64-bit, prefetchable) [size=32M]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [420] Advanced Error Reporting
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] Secondary PCI Express
        Kernel driver in use: nvidia
        Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia_vgpu_vfio, nvidia

The motherboard is a Gigabyte X299 UD4 (CPU: i7-7820X) with Above 4G Decoding and ReBAR support enabled, and the full VRAM sizes are listed in the BIOS.

If I set the VM's display to none, or set the Arc as the primary GPU, it shows error 43; otherwise it works, just without ReBAR, and I seem to be getting encoding performance drops because of that. (I found QSVEnc issues on GitHub where other users solved the same encoding FPS drop by enabling ReBAR; the drop is massive, roughly 30-60 fps instead of 200-300+, because the clocks stay locked at idle.)

I would prefer the simplest or most permanent fix possible, if there is one. I have seen plenty of reports online of people getting this working, but I cannot seem to manage it.

GPU-Z in the VM lists ReBAR as supported and Above 4G Decoding as enabled, but it shows ReBAR as disabled in the BIOS despite it being enabled there.
 
Does running the commands from the guide you linked to (as root) before starting the VM not work?
echo 12 > /sys/bus/pci/devices/0000:b5:00.0/resource0_resize
echo 3 > /sys/bus/pci/devices/0000:b5:00.0/resource2_resize
I use the same (for an AMD GPU) in a hookscript on pre-start.
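For reference, a minimal pre-start hookscript sketch (the snippet path, the device address 0000:b5:00.0, and the 4GB target size are assumptions; attach it with qm set <vmid> --hookscript local:snippets/rebar.sh):
Code:
#!/bin/bash
# Proxmox calls the hookscript with the VM ID and the phase.
vmid="$1"
phase="$2"

if [ "$phase" = "pre-start" ]; then
    dev=/sys/bus/pci/devices/0000:b5:00.0
    # Unbind whatever driver currently owns the GPU (a bound driver
    # blocks the resize).
    [ -e "$dev/driver" ] && echo 0000:b5:00.0 > "$dev/driver/unbind"
    # The written value is log2 of the BAR size in MB: 8 = 256MB, 12 = 4GB.
    echo 12 > "$dev/resource2_resize"
fi
exit 0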
 
Does running the commands from the guide you linked to (as root) before starting the VM not work?
echo 12 > /sys/bus/pci/devices/0000:b5:00.0/resource0_resize
echo 3 > /sys/bus/pci/devices/0000:b5:00.0/resource2_resize
I use the same (for an AMD GPU) in a hookscript on pre-start.
It does not:
the first command gives "Permission denied",
the second gives "write error: Device or resource busy".

I am guessing I have to unbind the GPU, adjust it, and rebind it, as the guide did with the AMD GPU, but I am not sure what the exact commands are for the Intel GPU. It says it is loaded under i915, but unbinding from i915 gives "no such device". (The lspci output above shows the driver actually in use is vfio-pci; i915 is only listed as an available kernel module, which is why the i915 unbind fails.)

EDIT:
Got it unbound;
echo 0000:b5:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
works, but I am still getting "Permission denied" on
echo 12 > /sys/bus/pci/devices/0000:b5:00.0/resource0_resize
 
I am guessing I have to unbind the GPU, adjust it, and rebind it, as the guide did with the AMD GPU, but I am not sure what the exact commands are for the Intel GPU. It says it is loaded under i915, but unbinding from i915 gives "no such device".
Yes, you need to unbind it. This unbinds whatever driver is currently bound: echo '0000:b5:00.0' > '/sys/bus/pci/devices/0000:b5:00.0/driver/unbind'
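A quick way to check which driver actually owns the device before unbinding:
Code:
# Prints the bound driver's name, e.g. vfio-pci; fails if nothing is bound.
basename "$(readlink /sys/bus/pci/devices/0000:b5:00.0/driver)"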
 
I've got it unbound now, but the second command only accepts 8 (256MB); it will not allow the BAR to be reduced to 3 (8MB), and the first command still just says "Permission denied".
 
Maybe the BARs are wired/mapped differently on Arc. What is the output of ls '/sys/bus/pci/devices/0000:b5:00.0/'?

EDIT: When there is no driver bound?
 
the output is:
Code:
ls '/sys/bus/pci/devices/0000:b5:00.0/'
ari_enabled               current_link_speed  enable         local_cpus      power         resource          rom
boot_vga                  current_link_width  iommu          max_link_speed  power_state   resource0         subsystem
broken_parity_status      d3cold_allowed      iommu_group    max_link_width  remove        resource2         subsystem_device
class                     device              irq            modalias        rescan        resource2_resize  subsystem_vendor
config                    dma_mask_bits       link           msi_bus         reset         resource2_wc      uevent
consistent_dma_mask_bits  driver_override     local_cpulist  numa_node       reset_method  revision          vendor
 
the output is:
Code:
ls '/sys/bus/pci/devices/0000:b5:00.0/'
ari_enabled               current_link_speed  enable         local_cpus      power         resource          rom
boot_vga                  current_link_width  iommu          max_link_speed  power_state   resource0         subsystem
broken_parity_status      d3cold_allowed      iommu_group    max_link_width  remove        resource2         subsystem_device
class                     device              irq            modalias        rescan        resource2_resize  subsystem_vendor
config                    dma_mask_bits       link           msi_bus         reset         resource2_wc      uevent
consistent_dma_mask_bits  driver_override     local_cpulist  numa_node       reset_method  revision          vendor
There is only a resource2_resize, which is 256MB by default, and no other _resize files. I guess you only need to do echo 12 > /sys/bus/pci/devices/0000:b5:00.0/resource2_resize to set it to 4GB (with no driver bound).
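If the resize succeeds, the GPU then needs rebinding for passthrough. A sketch (the driver_override step is only needed if a plain bind fails because vfio-pci no longer has the device ID registered):
Code:
# Make sure vfio-pci claims the device, then bind it again.
echo vfio-pci > /sys/bus/pci/devices/0000:b5:00.0/driver_override
echo 0000:b5:00.0 > /sys/bus/pci/drivers/vfio-pci/bind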
 
output is:

Code:
echo 12 > /sys/bus/pci/devices/0000:b5:00.0/resource2_resize
-bash: echo: write error: No space left on device
Everything above 8 (256MB) results in "No space left on device".
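The "No space left on device" error (ENOSPC) means the kernel cannot fit the larger BAR into the upstream bridge's existing prefetchable window. A way to inspect that window (the bridge address is resolved from sysfs rather than assumed):
Code:
# Find the bridge directly above the GPU and show its prefetchable window.
bridge=$(basename "$(dirname "$(readlink -f /sys/bus/pci/devices/0000:b5:00.0)")")
lspci -vvs "$bridge" | grep -i prefetchable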
 
Found this in relation to the A380; I assume it may apply to the A310 as well:

[RFC] Resizable BARs vs bridges with BARs
Code:
It's a shame that the hardware designers didn't mark the upstream port
BAR as non-prefetchable to avoid it living in the same resource
aperture as the resizable BAR on the downstream device.  In any case,
it's my understanding that our bridge drivers don't generally make use
of bridge BARs.  I think we can test whether a driver has done a
pci_request_region() or equivalent by looking for the IORESOURCE_BUSY
flag, but I also suspect this is potentially racy.

The patch below works for me, allowing the new resourceN_resize sysfs
attribute to resize the root port window within the provided bus
window.  Is this the right answer?  How can we make it feel less
sketchy?  Thanks,

Alex

diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
index b4096598dbcb..8c332a08174d 100644
--- a/drivers/pci/setup-bus.c
+++ b/drivers/pci/setup-bus.c
@@ -2137,13 +2137,19 @@ int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type)
     next = bridge;
     do {
         bridge = next;
-        for (i = PCI_BRIDGE_RESOURCES; i < PCI_BRIDGE_RESOURCE_END;
+        for (i = PCI_STD_RESOURCES; i < PCI_BRIDGE_RESOURCE_END;
              i++) {
             struct resource *res = &bridge->resource[i];
 
             if ((res->flags ^ type) & PCI_RES_TYPE_MASK)
                 continue;
 
+            if (i < PCI_STD_NUM_BARS) {
+                if (!(res->flags & IORESOURCE_BUSY))
+                    pci_release_resource(bridge, i);
+                continue;
+            }
+
             /* Ignore BARs which are still in use */
             if (res->child)
                 continue;
 
Got it. I added the following to /etc/modprobe.d/i915.conf:
Code:
options i915 enable_guc=3
options i915 modeset=1
options i915 memtest=true
options i915 lmem_bar_size=4048
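One caveat worth adding here: if the i915 module is loaded from the initramfs, the initramfs has to be rebuilt for modprobe.d changes like this to take effect at boot. On a Proxmox/Debian install that would be:
Code:
# Rebuild the initramfs so the new i915 options are picked up at boot,
# then reboot.
update-initramfs -u -k all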

In /etc/default/grub I also added

i915.enable_guc=3

under GRUB_CMDLINE_LINUX_DEFAULT=

and I disabled CSM in the BIOS (because apparently ReBAR does not work at all if CSM is enabled),

and now ReBAR is working in the Windows 10 VM.
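To confirm on the host that the resize stuck after a reboot, the same lspci check from the first post works; assuming the 4GB size applied, it should now report:
Code:
lspci -vvvs b5:00.0 | grep BAR
        Capabilities: [420 v1] Physical Resizable BAR
BAR 2: current size: 4GB, supported: 256MB 512MB 1GB 2GB 4GB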
 
Got it. I added the following to /etc/modprobe.d/i915.conf:
Code:
options i915 enable_guc=3
options i915 modeset=1
options i915 memtest=true
options i915 lmem_bar_size=4048
You could condense this into one line: options i915 enable_guc=3 modeset=1 memtest=true lmem_bar_size=4048.
I'm surprised it's not 4096. Where did you find this information?
and in /etc/default/grub
i915.enable_guc=3
under GRUB_CMDLINE_LINUX_DEFAULT=
That's not necessary when it's already in /etc/modprobe.d/i915.conf. (And not all Proxmox installations use GRUB.)
and I disabled CSM in the BIOS (because apparently ReBAR does not work at all if CSM is enabled),
Maybe that change alone was enough? Or maybe just enabling Above 4G Decoding (which is required, and which gets turned on when CSM is disabled)?
and now ReBAR is working in the Windows 10 VM.
Please mark this thread as resolved by editing the first post and selecting Solved from the pull-down menu (so other people can find it more easily in the future).
 
