Proxmox user base seems rather thin?

mike the newb

Member
Aug 24, 2025
I'm unaware of the status of the Proxmox "team," but it seems to be quite "thin."
There are clearly some wizards here, and that's awesome, but so few seem to pay attention to posts. Maybe it's just because I'm a total noob?
Bottom line, for me: as a noob, I find myself doing hours-long searches to work through Proxmox-related issues.
It makes me sad, really, as I thought Proxmox would be an alternative with energetic, engaged individuals who want to support the "platform."

For me, this has not happened. There are a few awesome individuals who have responded to my posts, but, as I said, the response volume seems very "thin."

This gives me the impression that the Proxmox support and community is severely limited.

Feel free to tell me I'm FOS, as I hope to be proven wrong...seriously.
 
U = FOS

Lots of us here are in tech-support-related positions, so forum support starts to seem like "more work" after a while.

A) Watch proxmox-related youtube videos

B) Read the last 30 days of forum posts, here and on Reddit (free education)

C) Take notes

D) Contribute - answer other questions when you have experience

E) If forum/peer support is not solving your issues, buy a PVE license subscription and open a support ticket.
 

The forum is community-based, so it is a highlight that staff members are even present and answer questions patiently.

What do you expect (seriously, literally)?
 
Lots of us here are in tech-support-related positions, so forum support starts to seem like "more work" after a while.
Understood. I do everything you've listed, and will continue to do so. You'd probably laugh if you saw all the YT videos I have saved as well as all my notes.
I haven't answered any questions yet because I think everyone here is light years ahead of me, but I definitely will when I reach the point where I can.
 
What do you expect (seriously, literally)?

I honestly don't have a good answer to that question. I just thought there were tons and tons of PM users, but my seemingly endless scouring of the internet doesn't turn up many answers. I shouldn't have framed it in terms of just this forum.
 
Out of curiosity, what was the vexing question you asked that had no results on the internet?


I asked it here as well, but it's related to Intel i350 NICs, both onboard and a T4. Both were working in all of my previous installs, and now the T4 is not. GPT4 has probably the most concise response, but I don't want to downgrade the kernel, and I don't think that's the issue, mostly because it was working before.

I've been deliberately installing PM, tweaking it, creating VMs and different things, then simulating drive failures/PM crashes, then a simulated recovery. The reasoning is that if I'm going to change over to PM from Hyper-V, I need to know how to work through things and assess how difficult it is if I need help, which I'm fairly certain will be the case since the last week or so is my first experience with Linux. Sorry for the book here.

Here are a couple outputs:

Code:
root@PVE:/# dmesg | egrep -i --color 'igb'
[    2.068624] igb: Intel(R) Gigabit Ethernet Network Driver
[    2.068899] igb: Copyright (c) 2007-2014 Intel Corporation.
[    2.144732] igb 0000:04:00.0: DCA enabled
[    2.149795] igb 0000:04:00.0: added PHC on eth0
[    2.150062] igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
[    2.150542] igb 0000:04:00.0: eth0: (PCIe:5.0Gb/s:Width x4) 00:25:90:7c:2e:ea
[    2.150861] igb 0000:04:00.0: eth0: PBA No: 104900-000
[    2.151103] igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    2.205202] igb 0000:04:00.1: DCA enabled
[    2.207140] igb 0000:04:00.1: added PHC on eth1
[    2.207405] igb 0000:04:00.1: Intel(R) Gigabit Ethernet Network Connection
[    2.207648] igb 0000:04:00.1: eth1: (PCIe:5.0Gb/s:Width x4) 00:25:90:7c:2e:eb
[    2.207963] igb 0000:04:00.1: eth1: PBA No: 104900-000
[    2.208200] igb 0000:04:00.1: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[    2.209226] igb 0000:07:00.0: BAR 0: can't reserve [mem 0x80000000-0x800fffff]
[    2.209555] igb 0000:07:00.0: probe with driver igb failed with error -16
[    2.209868] igb 0000:07:00.1: BAR 0: can't reserve [mem 0x80100000-0x801fffff]
[    2.210162] igb 0000:07:00.1: probe with driver igb failed with error -16
[    2.210479] igb 0000:07:00.2: BAR 0: can't reserve [mem 0x80200000-0x802fffff]
[    2.210778] igb 0000:07:00.2: probe with driver igb failed with error -16
[    2.211096] igb 0000:07:00.3: BAR 0: can't reserve [mem 0x80300000-0x803fffff]
[    2.211415] igb 0000:07:00.3: probe with driver igb failed with error -16
[    4.389442] igb 0000:04:00.1 eno2: renamed from eth1
[    4.393593] igb 0000:04:00.0 eno1: renamed from eth0
[    9.378341] igb 0000:04:00.0 eno1: entered allmulticast mode
[    9.378767] igb 0000:04:00.0 eno1: entered promiscuous mode
[   13.311780] igb 0000:04:00.0 eno1: igb: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX

Code:
root@PVE:/# dmesg | grep -i ethernet
[    2.067890] igb: Intel(R) Gigabit Ethernet Network Driver
[    2.126004] igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
[    2.186677] igb 0000:04:00.1: Intel(R) Gigabit Ethernet Network Connection
root@PVE:/# lspci | grep -i ethernet
04:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
07:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
07:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
07:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
07:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
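In case it's useful context: from what I've read, error -16 is EBUSY, i.e. something else has already claimed that memory window before igb tries to. These are just generic places I plan to look, not a confirmed fix:

Code:
# What already occupies the contested window? (failing BAR starts at 0x80000000)
grep '8000' /proc/iomem
# Full details for one of the failing ports, including BAR assignments and driver status
lspci -vv -s 07:00.0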
 
it's related to Intel i350 NICs
So you have issues with an 11-year-old NIC, supported by an Intel driver of the same age, on a yet-to-be-named CPU and motherboard.

Not having found an exact answer, or people falling over themselves to help you in their free time, you decided to stir the pot "facebook style": doughnuts are the worst - prove me wrong.

The reasoning is that if I'm going to change over to PM from Hyper-V, I need to know how to work through things and assess how difficult it is if I need help,
If this is employment-related - ask for modern, non-ancient hardware for your lab. There is nothing wrong with running Hyper-V - stick with it if it works for you.
 

If anything is "facebook style," it is your response, and it's somewhat typical: "you're out of line saying what you said, and if something isn't working, stick with what you have. And, by the way, your hardware sucks and you should get some real stuff."
But I'm ok with that.
This is my own personal "lab," and I don't have the funds to upgrade it at the moment, though I'd love to do so.
Any answers or ideas otherwise?
 
Any answers or ideas otherwise?
GPT4 has probably the most concise response, but I don't want to downgrade the kernel
You already found the answer; the fact that you're moving the goalposts isn't helping you. I'd advise you to get rid of your "wants" - the newer kernel is probably providing you with no utility at all. Given that the issues with your NIC are known and easily remedied, a pinned earlier kernel should have had you happily cruising along (a quick pinning sketch follows at the end of this post).

This is my own personal "lab," and I don't have the funds to upgrade it
No one is telling you to. No one is going to support you either - so you either learn to do it on your own, or maybe choose a hypervisor that is more conventionally (read: commercially) supported. Oh, and you might not be surprised that that costs money too.

What you should understand is the following:
1. PVE is built as a collection of open-source packages. When you are experiencing a problem, it may or may not be with the code that the PVE developers wrote, or with packages that are specific to it (or the host OS, or the firmware of your specific computer, etc., etc.). In your particular case, the problem is with your hardware on a specific kernel version - you might get more results asking in Ubuntu (provider of the kernel used in PVE), Intel, or Linux-kernel resources and communities.
2. GPT4 may have given you an answer, but I wouldn't trust any generative-AI answer, as they tend not to weigh relevant sources in their responses. It could be helpful, but it should always be taken with a grain of salt.
3. Kernel updates account for a vast portfolio of use cases and edge cases; 90% of those don't apply to you. Newer isn't always better and, in your case, may be detrimental.
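For reference, pinning a kernel on PVE is quick to do and to undo; the version string below is only an example - use whatever proxmox-boot-tool kernel list shows as installed on your box:

Code:
# List installed kernels, then pin a known-good earlier one (example version)
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.8.12-4-pve
# Revert later with: proxmox-boot-tool kernel unpin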
 
I think you're still missing the point of my contention with the idea that downgrading the kernel is the solution.
The i350-T4 was working fine with the same kernel previously. For whatever reason (which I haven't found yet), the memory reservation (attempt) is different this time around. As you can see, it is trying and failing to reserve 0x80000000-0x800fffff, yet the other i350 NICs are working fine; they are being assigned different address spaces. And pinning to an older kernel, according to everything I've read so far, is not at all a guaranteed solution.

I intend to get to the bottom of it. On top of this, a straight Debian install works fine, as do Ubuntu and Ubuntu Server (current versions). So it DOES seem to be an issue related to PM.

In addition to this, and also from my reading, PM is widely used on older hardware, and this issue seems to be rather "new." Your contention that "no one is going to support me" as I do everything possible to learn on my own seems a bit counter-intuitive to the premise that PM is a great/viable alternative to the competition, with a vast community supporting the project.

Isn't it a much better idea to try to get to the bottom of it? Finding a solution, and maybe benefitting everyone that uses PM?
 
Isn't it a much better idea to try to get to the bottom of it?
For you - sure. For me - I don't have this hardware or this problem, so it's not useful to me, nor am I able to participate in the troubleshooting.

and maybe benefitting everyone that uses PM?
Please be sure to post any solution you uncover. That is, as you pointed out, the point and nature of the community :)
 
The problem probably isn't Proxmox-kernel specific. Maybe other people on the internet have this issue with the same or a similar motherboard? Maybe try the acpi_enforce_resources=lax kernel parameter (I'm just guessing here, but I needed it for some other device)? In my case it's probably a bug in the ACPI table that is still not fixed by the motherboard manufacturer (or a Windows-versus-Linux table-interpretation issue).
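If you do test it, how you add a kernel parameter on PVE depends on the bootloader (GRUB on most installs, systemd-boot on ZFS-on-root); roughly:

Code:
# GRUB: append acpi_enforce_resources=lax to GRUB_CMDLINE_LINUX_DEFAULT
nano /etc/default/grub
update-grub
# systemd-boot (ZFS root): append it to the single line in /etc/kernel/cmdline
nano /etc/kernel/cmdline
proxmox-boot-tool refresh
# then reboot and re-check dmesg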
 
As a newcomer to Linux and Linux-based virtualisation, your time might be better spent putting this network card issue aside for a while and sticking with PVE 8. I have a similar network card in my setup, so I was interested in your report. It's a common card, so it should surface again soon as more users upgrade and experienced users take an interest in it. But if it is associated with a specific and old hardware combination (motherboard?), you may be out of luck. I have winged it for a long time on old hardware, but PVE 9 may present some issues for me! I may be OK in this instance, as I have not enabled passthrough and your problem looks to be connected to IOMMU.
I have not come across another community forum where you'll find the same level of engagement as you get from the staff here. This forum is an amazing resource. My only problem is using Google these days to find the posts that help me.
 
Please be sure to post any solution you uncover.

I will definitely post my results/process as I go through it. I've already compiled extensive notes, so good there.
 
Proxmox VE is built on Debian; chances are that if there is a problem with the kernel and these network cards, it also appears on a stock Debian installation. Did you happen to check whether they work if you run Debian 13 (you could just try the live image)?

If not, you might be more successful in getting this fixed by reporting the issue to the Debian developers.
 
Proxmox VE is built on Debian; chances are that if there is a problem with the kernel and these network cards, it also appears on a stock Debian installation.
Proxmox's Linux kernel (6.14) is based on Ubuntu's rather than Debian's, and since drivers come with the kernel, maybe try an Ubuntu installer with the same kernel version (Ubuntu 25.04) without actually installing it.
EDIT: The user space is indeed based on Debian stable (13).
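A quick sanity check when comparing is to note the exact running kernel on each side:

Code:
uname -r   # run on both the Ubuntu live session and the PVE host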
 
Maybe try the acpi_enforce_resources=lax kernel parameter (I'm just guessing here, but I needed it for some other device)?

I may just try that. I think I'm going to reinstall Debian 13, which PM's user space is based on, run the same dmesg | egrep -i --color 'igb', and see what I get first.
BTW, did that kernel parameter cause you any issues at all?
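My plan is to capture the same outputs on both installs and compare them side by side (file names here are just examples):

Code:
dmesg | grep -i igb > igb-debian13.log
lspci -vv -s 07:00.0 > lspci-debian13.log
cat /proc/iomem > iomem-debian13.log
# then repeat on PVE and diff the pairs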