Ok, this seems really silly, but I am really out of ideas...
I have a six-port mini PC that runs OPNSense well on bare metal, but I wanted to run OPNSense inside Proxmox and replace my old router/switch.
What happens is: during configuration (while it's connected to my router), I can connect via web and SSH normally (and ping it). BUT when I try to connect my PC directly to the Proxmox host, I can't.
To summarize the setup first:
- My PC has a static address of 192.168.1.100/24
- My proxmox ve server is at 192.168.1.4/24 (vmbr0)
- I have 5 of my 6 interfaces in vmbr0; if the router cable is plugged into any of these ports, I can connect from my PC (through the router).
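For reference, this is roughly what I check on the Proxmox host when the direct connection fails (plain iproute2/tcpdump commands; nothing specific to my setup beyond the names above):

ip -br addr show vmbr0    # bridge should be UP with 192.168.1.4/24
bridge link show          # each physical port should show "state forwarding"
tcpdump -eni vmbr0 arp    # with the PC cabled directly: do its ARP requests arrive at all?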
This is my /etc/network/interfaces file:
auto lo
iface lo inet loopback
auto enp9s0
iface enp9s0 inet manual
#ETH6
auto enp1s0
iface enp1s0 inet manual
#ETH1 (WAN)
auto enp2s0
iface enp2s0 inet manual
#ETH3
auto eno1
iface eno1 inet manual
#ETH2
auto enp7s0
iface enp7s0 inet manual
#ETH4
auto enp8s0
iface enp8s0 inet manual
#ETH5
auto vmbr0
iface vmbr0 inet static
address 192.168.1.4/24
gateway 192.168.1.1
bridge-ports enp9s0 eno1 enp2s0 enp7s0 enp8s0
bridge-stp off
bridge-fd 0
#Management
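(For comparison, if I ever split things up: a second, address-less bridge dedicated to a WAN port would look like the sketch below. vmbr1 and the enp1s0 assignment are just an illustration, not what I'm running.)

auto vmbr1
iface vmbr1 inet manual
bridge-ports enp1s0
bridge-stp off
bridge-fd 0
#WAN bridge for the OPNSense VM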
And this is the dmesg output for my 'official' management port, enp9s0:
dmesg | grep -E "enp9s0|eth0"
[ 1.853173] igc 0000:01:00.0 eth0: MAC: 00:e2:69:12:24:ac
[ 2.467063] igc 0000:09:00.0 enp9s0: renamed from eth5
[ 2.481603] igc 0000:01:00.0 enp1s0: renamed from eth0
[ 5.935146] vmbr0: port 1(enp9s0) entered blocking state
[ 5.935155] vmbr0: port 1(enp9s0) entered disabled state
[ 5.935313] device enp9s0 entered promiscuous mode
[ 9.522794] igc 0000:09:00.0 enp9s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 9.523995] vmbr0: port 1(enp9s0) entered blocking state
[ 9.524025] vmbr0: port 1(enp9s0) entered forwarding state
[ 89.350105] igc 0000:09:00.0 enp9s0: NIC Link is Down
[ 89.350379] vmbr0: port 1(enp9s0) entered disabled state
[ 98.815072] igc 0000:09:00.0 enp9s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 98.924857] vmbr0: port 1(enp9s0) entered blocking state
[ 98.924894] vmbr0: port 1(enp9s0) entered forwarding state
[ 258.264974] igc 0000:09:00.0 enp9s0: NIC Link is Down
[ 258.265229] vmbr0: port 1(enp9s0) entered disabled state
...
*I remember that we used to worry about straight-through vs. crossover cables when connecting two devices of the same type, but I don't think that applies nowadays (modern NICs do auto MDI-X), and I CAN connect directly if I install OPNSense on bare metal instead of Proxmox. But MAYBE there is something in the OS I should watch out for?
Anyway, I am not using VLANs or anything fancy like that. The fanciest thing I tried was using a separate bridge for the VMs, but ultimately I wanted to:
- Be able to access the Proxmox management from any interface, since this mini PC will replace my router/switch (running a virtualized OPNSense);
- Launch an OPNSense VM and assign it to a bridge so I can connect directly from my PC regardless of which port I plug into physically;
- Launch a pi-hole CT and assign it to a bridge so I can connect directly from my PC regardless of which port I plug into physically;
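By "assign it to a bridge" I mean the guest network lines in the Proxmox config files, something like the sketch below (the VMIDs, MAC addresses, and the 192.168.1.5 address are made up for illustration):

# /etc/pve/qemu-server/100.conf (OPNSense VM)
net0: virtio=BC:24:11:00:00:01,bridge=vmbr0

# /etc/pve/lxc/101.conf (pi-hole CT)
net0: name=eth0,bridge=vmbr0,ip=192.168.1.5/24,gw=192.168.1.1,hwaddr=BC:24:11:00:00:03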
Thank you!