Can't Access Proxmox Web GUI (New Server Build)

area51x

New Member
Nov 30, 2023
Hi all. I'm completely new to server builds and just built my first server. It's based on the ASRock Rack ROMED8-2T with an EPYC 7302P and 128 GB RAM (4x 32 GB). Storage is a 2TB SSD, 2x 1TB M.2 SSDs, and 6x 12TB HDDs that will ultimately be in a raidz2 config. Noctua cooler, Corsair 750 W power supply, and a Fractal Design Define 7 case.

The good news is that the CPU posted correctly the first time, and all the drives and memory show up fine. I can connect to the admin web interface over the home LAN using the IPMI port, and I was able to install Proxmox. However, I am unable to reach the Proxmox web GUI over IPMI.

More importantly, I then tried to use the two 10Gb LAN ports to connect instead, and I'm not getting any DHCP-assigned IPs. Also, when I plug my ethernet cable into either of the 10Gb LAN ports, I don't see any status lights at all. On the other hand, plugging into the IPMI LAN port gives me status lights on the IPMI jack.

I have no idea what I'm doing wrong here. Am I even able to use those 10Gb ports within Proxmox? Is there a BIOS setting that I'm missing? Defective MB?

In summary: 1) how come I get a DHCP address via the IPMI and can reach the management web gui but can't reach the Proxmox gui and 2) how come I get no status lights or IP address when plugging into either of the 10Gb LAN ports?
 
However, I am unable to reach Proxmox web GUI over IPMI.
You are not supposed to. That's not what IPMI/BMC is for.
, I tried to instead use the two 10Gb LAN ports to connect and I'm not getting any DHCP assigned IPs. Also, when I plug in my ethernet cable into either of the 10Gb LAN ports, I don't see any status lights at all.
The two things are likely related - or you simply did not configure the interfaces properly and they are not being brought up.
The output of "ip a" would be helpful, along with "cat /etc/network/interfaces"
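For anyone following along, here is roughly the diagnostic sequence being asked for, as a sketch (eno1/eno2 are example interface names - substitute whatever "ip a" shows on your box; the ROMED8-2T's X550 ports use the ixgbe driver):

```shell
#!/bin/sh
# Survey all interfaces: name, operational state (UP/DOWN), MACs and addresses.
ip -br link
ip -br addr

# Per-port detail (negotiated speed, "Link detected"); ethtool may need
# installing first (apt install ethtool). Guarded so a missing tool or
# interface doesn't abort the script.
for nic in eno1 eno2; do
    command -v ethtool >/dev/null 2>&1 && ethtool "$nic" 2>/dev/null || true
done

# Any kernel messages from the Intel 10GbE driver, if it loaded at all.
dmesg 2>/dev/null | grep -i ixgbe || true
```

If the 10Gb ports don't appear in "ip -br link" at all, the ixgbe driver never bound to them, which points at hardware/BIOS rather than Proxmox configuration.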
On the other hand, plugging into the IPMI LAN port gives me status lights on the IPMI jack.
two completely different and unrelated things
Am I even able to use those 10Gb ports within Proxmox?
Generally yes, but there are no guarantees in life.
Is there a BIOS setting that I'm missing? Defective MB?
Maybe and maybe, but unlikely. Neither one has anything to do with Proxmox.
how come I get a DHCP address via the IPMI and can reach the management web gui
IPMI/BMC is a special OS that lives independently of your primary OS. It's embedded on a chip on your motherboard, or, in more advanced servers, on a dedicated card. This OS and the network port dedicated to it are their own separate world.
how come I get no status lights or IP address when plugging into either of the 10Gb LAN ports?
You gave a very detailed but mostly irrelevant description of your PC, yet said nothing about your NIC setup: built-in or add-on, vendor, OS outputs (ip a, lspci, etc.).

Boot to a live CD of a well-known OS (Ubuntu, Debian, CentOS, Windows, etc.) - is the network card shown there?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
From your description it would appear that you installed Proxmox while only the IPMI LAN was connected.
I would suggest starting again: reinstall Proxmox with one of the 10Gb LAN ports already plugged into your network/switch/router.
Then make sure during installation to choose the 10Gb NIC that received a DHCP address for the Proxmox installation.
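For what it's worth, a full reinstall isn't strictly required either: the management bridge can be repointed at a 10Gb port after the fact in /etc/network/interfaces. A sketch - the interface name eno1 and the addresses below are examples; check yours with "ip a":

```
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Then run "ifreload -a" (ifupdown2 ships with Proxmox) and keep the address in /etc/hosts in sync with the new management IP.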
 

Thanks for the advice. Hard to know what's relevant and what's not when you're a novice. The onboard NICs are:

- 2 x RJ45 10G base-T by Intel® X550-AT2
- 1 x RJ45 Dedicated IPMI LAN port by RTL8211E


Some updates:

I think the main problem was that I didn't seat the metal I/O faceplate properly. Again, being a total novice, I didn't realize the little prongs had to touch the ports a certain way. After adjusting the plate, I now get LED activity on all the LAN ports.

I was actually able to get Proxmox up and running on one of the 10Gb LAN ports getting an assigned IP from my router. One of the issues was that the IPMI and eno1 ports were bonded by default and I could only get Proxmox to work by unbonding the two ports in the BIOS. The third eno2 port doesn't appear in the BIOS (see below).

There are still some problems. I can't get the second 10Gb port to show up in the BIOS at all. It appears inactive. However, when I plug in an ethernet cable there is a blinking orange-ish light identical to the light on the other 10Gb LAN port that does show up in the BIOS. That second port also shows up as the eno2 interface in Proxmox but it's listed as "inactive."

The other problem is that I was getting a lot of disconnections in Proxmox. I think I have an idea what the problem might be: the ASRock IPMI/BMC has a web gui interface on port 443 for that same IP. I wonder if there is some conflict between the Proxmox management port and the IPMI management port (not sure why that would be as they're both https but one is port 443 and the other is 8006 by default).

Now that Proxmox is up and running, the IPMI management GUI which runs off a separate port but same IP is not reachable. I bet if I get the IPMI to work, Proxmox will stop working. Again, seems like some kind of network conflict between the two.

Some data dumps:

-----

cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.209/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

iface enx5*********** inet manual

iface eno2 inet manual

source /etc/network/interfaces.d/*

-----

ethtool eno1
Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 100baseT/Full
1000baseT/Full
10000baseT/Full
2500baseT/Full
5000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 100baseT/Full
1000baseT/Full
10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link

Link detected: yes


ethtool eno2
Settings for eno2:
Supported ports: [ TP ]
Supported link modes: 100baseT/Full
1000baseT/Full
10000baseT/Full
2500baseT/Full
5000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 100baseT/Full
1000baseT/Full
10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: no



ethtool enx5***********
Settings for enx5***********:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Half
Auto-negotiation: off
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Current message level: 0x00000007 (7)
drv probe link

Link detected: no
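Two checks that might narrow down the dead eno2 port - whether a kernel driver is bound to it at all, and whether carrier ever appears when the port is forced up. A sketch, to be run as root on the Proxmox host:

```shell
#!/bin/sh
# Which kernel driver is bound to each NIC? A port visible in "ip a" but
# permanently link-down can mean a firmware/driver problem rather than cabling.
for dev in /sys/class/net/*; do
    nic=$(basename "$dev")
    drv_path=$(readlink -f "$dev/device/driver" 2>/dev/null || true)
    if [ -n "$drv_path" ]; then
        printf '%-16s %s\n' "$nic" "$(basename "$drv_path")"
    else
        printf '%-16s %s\n' "$nic" "(no PCI driver - virtual or BMC-owned)"
    fi
done

# "Inactive" in the GUI usually just means there is no auto/bridge stanza for
# the interface; it can still be brought up by hand to watch for link:
if [ -e /sys/class/net/eno2 ]; then
    ip link set eno2 up
    sleep 3
    cat /sys/class/net/eno2/carrier   # 1 = link detected, 0 = no link
fi
```

If both X550 ports show the same driver but only one ever gets carrier, the problem is physical (cable, switch port) or a BIOS port-disable setting rather than Proxmox.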
 

Also, I'm happy to provide the output of ip a but just wanted to make sure whether it's safe to paste MAC addresses publicly or not.

Here's lspci:

00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.5 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 7
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
02:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
02:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
03:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
03:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
03:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
40:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
40:01.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
40:01.5 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
40:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
40:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
41:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
42:00.0 Ethernet controller: Intel Corporation Ethernet Controller X550 (rev 01)
42:00.1 Ethernet controller: Intel Corporation Ethernet Controller X550 (rev 01)
43:00.0 USB controller: ASMedia Technology Inc. ASM2142/ASM3142 USB 3.1 Host Controller
44:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
45:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
46:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
46:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
47:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
47:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
47:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
47:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
48:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
49:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
80:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
80:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
80:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
80:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
80:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
80:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
81:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
81:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
82:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
82:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
83:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
84:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
c0:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
c0:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
c0:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
c0:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
c1:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
c1:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
c2:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
c2:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PTDMA
 
From your description it would appear that you installed Proxmox while only the IPMI Lan was connected.
I would suggest, starting again, reinstall Proxmox, with one of the 10gb Lan already plugged in to network/switch/router.
Then make sure during installation to choose the allotted 10gb NIC (DHCP address) for the Proxmox installation.

Yes, you're correct. After unbonding eno1 and the IPMI LAN and plugging ethernet into eno1, I reinstalled Proxmox and selected the active eno1. That got things working, but as my reply above shows, there are still some issues between accessing the Proxmox vs. IPMI management GUIs. The IPMI management GUI is usually still accessible even though I'm only hardwired into eno1. However, as stated above, once I got a stable Proxmox install, the IPMI GUI is no longer accessible.
 
Just an update. I was just making some edits to my proxmox host via ssh when I got this error:

"client_loop: send disconnect: Broken pipe"

I headed over to the Proxmox gui which showed a connection issue. I then headed over to the IPMI management gui and sure enough it came back online.

A few seconds later without me adjusting anything, the IPMI gui went offline and the Proxmox gui is working again.
 
One of the issues was that the IPMI and eno1 ports were bonded by default and I could only get Proxmox to work by unbonding the two ports in the BIOS
So you had the vendor's proprietary BMC network redundancy enabled, which blocks the OS from seeing those ports...
You may find much more help on the hardware vendor's forum. Keep in mind that PVE is based on the Debian Linux OS; the network handling is basic Linux administration.
That second port also shows up as the eno2 interface in Proxmox but it's listed as "inactive."
Impossible to say without actual command output.
The other problem is that I was getting a lot of disconnections in Proxmox. I think I have an idea what the problem might be: the ASRock IPMI/BMC has a web gui interface on port 443 for that same IP. I wonder if there is some conflict between the Proxmox management port and the IPMI management port (not sure why that would be as they're both https but one is port 443 and the other is 8006 by default).

Now that Proxmox is up and running, the IPMI management GUI which runs off a separate port but same IP is not reachable. I bet if I get the IPMI to work, Proxmox will stop working. Again, seems like some kind of network conflict between the two.
Yes - it's called a duplicate IP conflict. Don't do it; assign a unique IP to each interface. With the BMC in a shared/failover mode, the BMC and the host can end up answering on the same address, which would explain the flip-flopping you describe.
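One way out of the shared-address trap is to give the BMC its own static address from the OS, assuming ipmitool is installed (apt install ipmitool). Channel 1 below is an assumption - some boards use a different IPMI LAN channel, so check your board's manual:

```shell
#!/bin/sh
# Show the BMC's current network config (IP source, address, netmask, MAC).
# Guarded so a machine without ipmitool/a BMC just reports and moves on.
if command -v ipmitool >/dev/null 2>&1; then
    ipmitool lan print 1 || true
    # To pin the BMC to its own static address, DIFFERENT from the host's
    # (addresses below are examples only):
    # ipmitool lan set 1 ipsrc static
    # ipmitool lan set 1 ipaddr 192.168.1.210
    # ipmitool lan set 1 netmask 255.255.255.0
    # ipmitool lan set 1 defgw ipaddr 192.168.1.1
else
    echo "ipmitool not installed"
fi
```

With the BMC on its own address (and ideally on the dedicated IPMI port rather than a shared one), both web GUIs can stay reachable at the same time.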

You also have a link-detection problem, which may be related to the NIC, cable, or switch.
Keep working on your networking - maybe watch a few videos; there is some value behind all the fluff out there.


 
