(Intel X520-DA2) - NIC interfaces not present, but PCIe card detected

pringlein

New Member
Aug 20, 2022
I have a new PVE deployment, PVE 7.2-3. I am trying to get an Intel X520-DA2 10GbE PCIe NIC to work; the system currently uses the motherboard NIC, which is 1GbE. I've looked through forum posts, most of which attribute NIC issues to the use of "unsupported transceivers". I am using a Twinax cable from 10Gtek designed for Intel (according to the company). The same Twinax cable and NIC are already in use on my network in a different system; in that case the X520-DA2 is connected via Twinax to the same switch I am trying to connect the new PVE system to. I booted my PVE system in its current state into Windows to verify that the X520-DA2 and Twinax work as expected. In the Windows environment it is fully functional:
[Attachment: Windows Functional X520-DA2 with GBIC.PNG]

I have added GRUB_CMDLINE_LINUX="ixgbe.allow_unsupported_sfp=1" to /etc/default/grub, but I don't know if I needed to; I never saw any messages about SFP incompatibility. I have not installed drivers, though I have been tempted to, since many forum posts indicate Proxmox should have native support for X520-DA2 cards.
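For reference, two common ways this option is set (sketches only, not verified on this box; note that /etc/default/grub only matters if the system actually boots via GRUB):
Code:
# kernel command line via GRUB (straight quotes; run update-grub afterwards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet ixgbe.allow_unsupported_sfp=1"

# or as a module option, e.g. in /etc/modprobe.d/ixgbe.conf
# (regenerate the initramfs afterwards: update-initramfs -u -k all)
options ixgbe allow_unsupported_sfp=1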
What I am experiencing is that the 'ip address' command shows no interfaces for the X520-DA2, so I cannot configure the ports at all. I intend to move the PVE host's network connection over to the 10GbE card. The card is in the first PCIe slot, so according to what I've been reading, enp1s0f0 and enp1s0f1 are its two ports. They appear in dmesg, and they are referenced in the network interfaces file, but they are not present in the 'ip address' output and cannot be bound in the GUI either. When I try to bridge enp1s0f0 or enp1s0f1, I get the message "No such device".
[Attachment: Adding enp1s0f1 to Linux Bridge.png]

With the bridge configured in the GUI, executing 'ifreload -a' gives the following error:
# ifreload -a
Code:
error: netlink: enp1s0f1: cannot enslave link enp1s0f1 to vmbr0: operation failed with 'No such device' (19)
error: vmbr0: bridge port enp1s0f1 does not exist
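For anyone following along, a quick kernel-level check that the ports really are not exposed (sketch):
Code:
ip -br link show | grep enp1s0
ls /sys/class/net/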

Any help on getting my setup to work would be appreciated.

Here are the configuration files and command outputs that most forum threads ask for.

# dmesg | grep ixgbe
Code:
[ 1.137988] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[ 1.137992] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 2.329529] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 20, Tx Queue count = 20 XDP Queue count = 0
[ 2.329819] ixgbe 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 2.330141] ixgbe 0000:01:00.0: MAC: 2, PHY: 1, PBA No: G73129-000
[ 2.330142] ixgbe 0000:01:00.0: a0:36:9f:37:43:e0
[ 2.332911] ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
[ 2.501610] ixgbe 0000:01:00.1: Multiqueue Enabled: Rx Queue count = 20, Tx Queue count = 20 XDP Queue count = 0
[ 2.501900] ixgbe 0000:01:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 2.502222] ixgbe 0000:01:00.1: MAC: 2, PHY: 14, SFP+: 4, PBA No: G73129-000
[ 2.502224] ixgbe 0000:01:00.1: a0:36:9f:37:43:e2
[ 2.505017] ixgbe 0000:01:00.1: Intel(R) 10 Gigabit Network Connection
[ 10.002508] ixgbe 0000:01:00.1 enp1s0f1: renamed from eth3
[ 10.049440] ixgbe 0000:01:00.0 enp1s0f0: renamed from eth2
[ 14.447287] ixgbe 0000:01:00.0: registered PHC device on enp1s0f0
[ 14.651123] ixgbe 0000:01:00.1: registered PHC device on enp1s0f1
[ 14.845468] ixgbe 0000:01:00.1 enp1s0f1: detected SFP+: 4
[ 15.073653] ixgbe 0000:01:00.1 enp1s0f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 31.285492] ixgbe 0000:01:00.0: removed PHC on enp1s0f0
[ 31.625309] ixgbe 0000:01:00.0: complete
[ 31.645536] ixgbe 0000:01:00.1: removed PHC on enp1s0f1
[ 31.753280] ixgbe 0000:01:00.1: complete

# lspci -nnk [Manually trimmed to the relevant information]
Code:
01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet 10G 2P X520 Adapter [8086:154d] (rev 01)
Subsystem: Intel Corporation 10GbE 2P X520 Adapter [8086:7b11]
Kernel driver in use: vfio-pci
Kernel modules: ixgbe
01:00.1 Ethernet controller [0200]: Intel Corporation Ethernet 10G 2P X520 Adapter [8086:154d] (rev 01)
Subsystem: Intel Corporation 10GbE 2P X520 Adapter [8086:7b11]
Kernel driver in use: vfio-pci
Kernel modules: ixgbe

# /etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface eno2 inet manual

iface usb0 inet manual

iface eno3 inet manual

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.0.100.252/24
        gateway 10.0.100.1
        bridge-ports eno2 enp1s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# ip a
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 3c:ec:ef:c0:38:10 brd ff:ff:ff:ff:ff:ff
altname enp4s0
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 3c:ec:ef:be:9e:12 brd ff:ff:ff:ff:ff:ff
altname enp0s31f6
6: usb0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 4e:41:1c:a6:1a:06 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3c:ec:ef:be:9e:12 brd ff:ff:ff:ff:ff:ff
inet 10.0.100.252/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::3eec:efff:febe:9e12/64 scope link
valid_lft forever preferred_lft forever
8: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
link/ether 82:0e:c6:30:2e:10 brd ff:ff:ff:ff:ff:ff
9: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
link/ether 26:3d:a7:e1:6b:ee brd ff:ff:ff:ff:ff:ff
10: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:b0:9c:fd:c3:2c brd ff:ff:ff:ff:ff:ff
11: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 56:b5:4f:ce:75:f8 brd ff:ff:ff:ff:ff:ff
12: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
link/ether 42:f3:b7:48:e9:36 brd ff:ff:ff:ff:ff:ff
13: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr102i0 state UNKNOWN group default qlen 1000
link/ether 02:98:cc:74:87:f2 brd ff:ff:ff:ff:ff:ff
14: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether aa:cc:55:c0:2b:8a brd ff:ff:ff:ff:ff:ff
15: fwpr102p0@fwln102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether de:06:36:5d:bc:9e brd ff:ff:ff:ff:ff:ff
16: fwln102i0@fwpr102p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
link/ether 8a:ab:b6:97:57:0a brd ff:ff:ff:ff:ff:ff
17: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr103i0 state UNKNOWN group default qlen 1000
link/ether b6:a0:2f:3a:04:c4 brd ff:ff:ff:ff:ff:ff
18: fwbr103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:8c:28:27:27:11 brd ff:ff:ff:ff:ff:ff
19: fwpr103p0@fwln103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether c2:2b:01:61:f6:7b brd ff:ff:ff:ff:ff:ff
20: fwln103i0@fwpr103p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
link/ether 92:1e:ad:4e:9f:8f brd ff:ff:ff:ff:ff:ff
 
No, that was the default driver the NIC came up with. I haven't had much chance to work on it this last week. I live in California and we're having a crazy 100ºF+ heatwave; that, plus in-laws visiting and my nearly 3-month-old baby wanting my attention, has monopolized my time.

I have tried the following without success:
Code:
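# Sketch: move both X520 functions off vfio-pci and hand them back to ixgbe
# (takes effect immediately, but does not survive a reboot)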
echo -n "0000:01:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
echo -n "0000:01:00.1" > /sys/bus/pci/drivers/vfio-pci/unbind
echo -n "0000:01:00.0" > /sys/bus/pci/drivers/ixgbe/bind
echo -n "0000:01:00.1" > /sys/bus/pci/drivers/ixgbe/bind

Running the same command again then produces:
# lspci -nnk [trimmed for relevant data]
Code:
01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet 10G 2P X520 Adapter [8086:154d] (rev 01)
        Subsystem: Intel Corporation 10GbE 2P X520 Adapter [8086:7b11]
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe
01:00.1 Ethernet controller [0200]: Intel Corporation Ethernet 10G 2P X520 Adapter [8086:154d] (rev 01)
        Subsystem: Intel Corporation 10GbE 2P X520 Adapter [8086:7b11]
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe

However, on reboot it reverts to the vfio-pci driver.

I haven't risked switching the PVE network configuration over to the new interfaces yet, since I can't get PVE to keep the ixgbe driver across reboots. Is there a command or configuration I am missing to make this binding persistent?
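One generic way to persist a driver override on Debian-based systems seems to be the driverctl package; a rough, untested sketch (and it presumably would not help if something actively rebinds the device to vfio-pci when a VM starts):
Code:
apt install driverctl
driverctl set-override 0000:01:00.0 ixgbe
driverctl set-override 0000:01:00.1 ixgbe
driverctl list-overrides   # show the persisted overrides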
 
I hadn't configured the NIC specifically for passthrough, unless it was done somehow unintentionally; I don't see how I could have stumbled into doing that. I am definitely a novice in this space. I am using the server to get outside my comfort zone with Linux, and it's my first foray into Proxmox. I've used VMware's ESXi in the past at work, but I can't break my work production environment and I wanted to learn more by doing, so I am setting up the Proxmox server at home. The server was built this last month and only has 4 VMs running on a fresh install of Proxmox. I do have an LSI HBA card being passed through to a VM, but for that I used the GUI: select VM > Hardware > Add > PCI Device.

Only after I had familiarized myself with the hypervisor and had the 4 initial VMs running did I try to switch my network connection over from the onboard 1GbE to the 10GbE PCIe card, and that was when I started running into these problems.

Could configuring one PCIe card for passthrough have configured my other PCIe card for passthrough? Is that what is being suggested?

In any case, what should I do to force the X520 card to use the ixgbe driver persistently (across reboots)? Do I need to explore disabling PCI passthrough for the network card?

None of the VMs are configured with this PCIe card as passthrough. The one VM that does have a PCIe passthrough device has the line hostpci0: 0000:02:00 in its config file under /etc/pve/qemu-server, but this is for the LSI card, not the NIC; the NIC is in the first PCIe slot (0000:01:00.0 and 0000:01:00.1).
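For completeness, a quick (hypothetical) one-liner to double-check which VM configs reference a passthrough device:
Code:
grep -H hostpci /etc/pve/qemu-server/*.conf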
 
Can you please provide the output from the PVE-host in code-tags of:
Bash:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
and: pveversion -v

I do have an LSI HBA card being passed through to a VM - but for that I used the GUI: select VM > Hardware > Add > PCI Device.

What system/CPU do you have? Intel or AMD?

Did you not do any of the steps mentioned here: [1] (or: [2])? If you did, please list which steps you followed and post the content of the relevant config files.

Which driver gets used for the NIC if you reboot the PVE host and do not start the VM with the HBA passed through to it?

[1] https://pve.proxmox.com/wiki/PCI(e)_Passthrough
[2] https://pve.proxmox.com/wiki/Pci_passthrough
 
Thank you for your help troubleshooting this.

System:
I have an Intel Xeon W-1290 CPU and a Supermicro X12SCA-5F-O motherboard.

Steps for LSI passthrough:
Prior to my post, and for the HBA passthrough, I had enabled VT-d in the BIOS. I also added GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" to the GRUB config file, but I now realize that was in error: I am using systemd-boot as my bootloader, so the setting should go in /etc/kernel/cmdline instead.

I had run dmesg | grep -e DMAR -e IOMMU and discovered that the line "DMAR: IOMMU enabled" was missing, despite my having added the GRUB option to enable IOMMU.

I added intel_iommu=on to the end of my /etc/kernel/cmdline file, ran proxmox-boot-tool refresh, and then rebooted the server. IOMMU now shows as enabled when I run dmesg | grep -e DMAR -e IOMMU.
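In other words, the sequence was roughly:
Code:
# append intel_iommu=on to the single line in /etc/kernel/cmdline, then:
proxmox-boot-tool refresh
reboot
# verify after the reboot:
dmesg | grep -e DMAR -e IOMMU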

The contents of the systemd-boot file /etc/kernel/cmdline are:
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

This bootloader correction was made today, after my previous post; the HBA passthrough was already working without it.


What I've discovered from the last post:
The LSI card and the NIC are in the same IOMMU group, and I think this is the problem. I hadn't realized that passing through one PCIe card could affect another through a shared IOMMU group; I was not familiar with this concept.

# find /sys/kernel/iommu_groups/ -type l [trimmed for relevant section]
Code:
/sys/kernel/iommu_groups/1/devices/0000:03:00.0 #LSI HBA
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0 #Intel X520 Port 0
/sys/kernel/iommu_groups/1/devices/0000:00:01.1
/sys/kernel/iommu_groups/1/devices/0000:01:00.1 #Intel X520 Port 1

Output of the commands you requested:
# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
Code:
IOMMU group 0 00:00.0 Host bridge [0600]: Intel Corporation Device [8086:9b33] (rev 05)
IOMMU group 10 00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:438f] (rev 11)
IOMMU group 10 00:1f.3 Audio device [0403]: Intel Corporation Device [8086:f0c8] (rev 11)
IOMMU group 10 00:1f.4 SMBus [0c05]: Intel Corporation Device [8086:43a3] (rev 11)
IOMMU group 10 00:1f.5 Serial bus controller [0c80]: Intel Corporation Device [8086:43a4] (rev 11)
IOMMU group 10 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (11) I219-LM [8086:0d4c] (rev 11)
IOMMU group 11 05:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-LM [8086:15f2] (rev 03)
IOMMU group 12 06:00.0 PCI bridge [0604]: Tundra Semiconductor Corp. Device [10e3:8113] (rev 01)
IOMMU group 13 08:00.0 PCI bridge [0604]: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge [1a03:1150] (rev 04)
IOMMU group 13 09:00.0 VGA compatible controller [0300]: ASPEED Technology, Inc. ASPEED Graphics Family [1a03:2000] (rev 41)
IOMMU group 1 00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
IOMMU group 1 00:01.1 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) [8086:1905] (rev 05)
IOMMU group 1 01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet 10G 2P X520 Adapter [8086:154d] (rev 01)
IOMMU group 1 01:00.1 Ethernet controller [0200]: Intel Corporation Ethernet 10G 2P X520 Adapter [8086:154d] (rev 01)
IOMMU group 1 03:00.0 RAID bus controller [0104]: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)
IOMMU group 2 00:14.0 USB controller [0c03]: Intel Corporation Device [8086:43ed] (rev 11)
IOMMU group 2 00:14.2 RAM memory [0500]: Intel Corporation Device [8086:43ef] (rev 11)
IOMMU group 3 00:15.0 Serial bus controller [0c80]: Intel Corporation Device [8086:43e8] (rev 11)
IOMMU group 3 00:15.1 Serial bus controller [0c80]: Intel Corporation Device [8086:43e9] (rev 11)
IOMMU group 3 00:15.2 Serial bus controller [0c80]: Intel Corporation Device [8086:43ea] (rev 11)
IOMMU group 4 00:16.0 Communication controller [0780]: Intel Corporation Device [8086:43e0] (rev 11)
IOMMU group 4 00:16.3 Serial controller [0700]: Intel Corporation Device [8086:43e3] (rev 11)
IOMMU group 5 00:17.0 SATA controller [0106]: Intel Corporation Device [8086:43d2] (rev 11)
IOMMU group 6 00:1c.0 PCI bridge [0604]: Intel Corporation Device [8086:43b8] (rev 11)
IOMMU group 7 00:1c.5 PCI bridge [0604]: Intel Corporation Device [8086:43bd] (rev 11)
IOMMU group 8 00:1c.6 PCI bridge [0604]: Intel Corporation Device [8086:43be] (rev 11)
IOMMU group 9 00:1c.7 PCI bridge [0604]: Intel Corporation Device [8086:43bf] (rev 11)

# pveversion -v
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-6
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.2-2
libqb0: not correctly installed
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.1.8-1
proxmox-backup-file-restore: 2.1.8-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-10
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-4
pve-i18n: 2.7-1
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

Without starting the VM that has passthrough enabled, the Intel NIC uses the ixgbe driver. Once that VM is started, it switches to the vfio-pci driver.
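A quick way to see which driver a port is currently bound to (sketch):
Code:
readlink /sys/bus/pci/devices/0000:01:00.0/driver
readlink /sys/bus/pci/devices/0000:01:00.1/driver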

Looking around, I've found mentions of "ACS" (Access Control Services) allowing for finer IOMMU group granularity. However, I cannot find such an option in my motherboard manual or in Intel docs for my CPU, so I am fairly confident my hardware does not support it.

I have also found mention of an ACS override kernel patch that can trick Linux into believing ACS is enabled, but it introduces security issues and can cause system instability.

Lastly, there is the option of overhauling how I have planned out my system and VMs so that I am not performing PCIe passthrough at all.

Are there other options available to me or does this cover it?
 
Excellent research you did there. Nothing really to add. :)

The LSI card and the NIC are in the same IOMMU group, and I think this is the problem. I hadn't realized that passing through one PCIe card could affect another through a shared IOMMU group; I was not familiar with this concept.

Exactly; you can only pass through a whole IOMMU group. (That is why I requested the output, but you already figured it out on your own. :))

I have also found mention of an ACS override kernel patch that can trick Linux into believing ACS is enabled, but it introduces security issues and can cause system instability.

Also fully correct. If you want to try it anyway, you might want to have a look at the bottom of this chapter: [1] and here: [2] for an additional parameter.

Are there other options available to me or does this cover it?

That is basically all you can do, yes. You could try different BIOS/UEFI versions and see if the IOMMU groups get split up any further, but the chances are generally very small. I would at least check whether you are already on the latest BIOS/UEFI version.
Since the mainboard has only those two physical x16 PCIe slots (electrically: 1x x16 or 2x x8), you cannot even try another slot for one of the cards...
So your only option would be the ACS override patch; but it is not guaranteed to work either. You would need to test it. (And it comes with the negatives you mentioned...)
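For illustration only (see [1] and [2] for the exact syntax and the caveats): with systemd-boot, the override parameter would end up on your /etc/kernel/cmdline line, roughly like this:
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on pcie_acs_override=downstream,multifunction
# then: proxmox-boot-tool refresh && reboot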

Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)

Looks like you never updated that host after the initial installation with the 7.2-ISO. Small how-to here: [3]. ;)

[1] https://pve.proxmox.com/wiki/Pci_passthrough#Verify_IOMMU_Isolation
[2] https://wiki.archlinux.org/title/PC...passing_the_IOMMU_groups_(ACS_override_patch)
[3] https://forum.proxmox.com/threads/im-unable-to-upload-files-to-my-proxmox-server.114541/#post-495356
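(Assuming the repositories are set up as described in [3], the update itself is just the usual:)
Code:
apt update
apt dist-upgrade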
 
