Another "don't have any network connections" thread... [SOLVED]

Oblique

New Member
Jun 7, 2023
I did it. Sort of on purpose, though, as there was an IP conflict breaking my web server. Anyway, I don't have any SSH access, only console. What is odd is that I have gone through the top 30 threads on this and have yet to stumble upon an answer or move forward. What is different from 90% of the posts here is that I do know what the problem "is": my two NIC ports are dead, and no active connections can be made.

I had a port conflict with another VM instance, and when I foolishly tried to change it in the GUI, it broke everything. I have no web interface, no SSH, nothing.

Please give me the standard routine; I will have to post the output as images from the console, since SSH is not available either.

console1.png

and...

console2.png


I know you need more information, but I am burned out; please ask. I lost both of the NIC ports on my card: they simply don't respond, lights flashing and all.

I believe I am missing:

10.188.20.3 (web server)
10.188.20.4 (Proxmox web console interface)
10.188.20.8 (experimental pfSense container)
*Actually, there are a couple of other ports bridged into CTs too, which obviously don't work either.
When I ran some of the commands, the response was that the port/IP was DOWN. Sorry for another one of these posts; I figured I could fix it by aligning the ports and bridges in /etc/network/interfaces, but I have completely failed. I've never had it happen where SSH was down too; that's a real bummer. Thank goodness for iDRAC.
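For reference, the console checks I ran boiled down to roughly the following (a rough reconstruction, not the exact session; interface names such as eno3 are from my box):

Code:
ip -br link                           # quick UP/DOWN overview of NICs and bridges
ip -br addr                           # addresses actually assigned right now
cat /etc/network/interfaces           # what ifupdown2 thinks the config should be
ethtool eno3 | grep 'Link detected'   # carrier check on one port (if ethtool is installed)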
 
Last edited:
OK, now through the magic of the internet I got the GUI back and usable, but none of the bridges are working right, and I don't know how to fix them. I know that when it was running fine before, the 10.188.20.4 port did NOT have a gateway, and this seemed to allow 10.188.20.3 to have access as well. And thank God I have SSH back, so I can make it easier for whoever helps me.
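The check I keep repeating to see which physical port is actually enslaved to which bridge (again only a sketch, names from my box):

Code:
bridge link              # lists each port and the bridge (master) it belongs to
ip -d link show vmbr0    # details of the bridge itself, including VLAN filtering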

gui1.png
 
Last edited:
Hello, so there are a few things that might be improved.

There is no config for vmbr0; there is one for vmbr03, though. The latter looks like a typo and should be renamed to vmbr0.

Both bridges vmbr0 and vmbr1 have addresses in the same network, 10.188.20.{4,8}/24 (they resolve to the same range, 10.188.20.0 - 10.188.20.255). You should set the second bridge to something outside the range of the first, e.g. 10.188.21.4/24.

You can only set one default gateway; I would set it only on vmbr0.

I would advise reading the network configuration documentation [1].

EDIT: I forgot to mention that you can reload the network config with ifreload -a.

[1]: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_network_configuration
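Putting those points together, the relevant part of /etc/network/interfaces would look roughly like this (only a sketch; I am assuming the physical ports stay on eno3 and eno1 and that any VLAN-aware options remain as you had them):

Code:
auto vmbr0
iface vmbr0 inet static
        address 10.188.20.4/24
        gateway 10.188.20.1        # the single default gateway
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.188.21.4/24     # moved outside vmbr0's /24, no gateway here
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# then apply the changes without rebooting:
#   ifreload -a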
 
Last edited:
Oh, well, thank you so very much for getting back to me! I must say I am somewhat at a loss about the details of getting these VMs and containers back online and on the internet.

What I have done so far: I removed the gateway line from the vmbr1 section of /etc/network/interfaces. I have no idea how that even got set, but I have tried about everything, so I likely did it myself. Now only vmbr0 has the 10.188.20.1 gateway.
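To double-check that only one default route is left after that edit, I looked at the routing table (sketch):

Code:
ip route show default    # should print exactly one line, via 10.188.20.1 on vmbr0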

Ahh, yes I caught the vmbr03 mislabeling and changed it to vmbr0.

What I am unsure about is setting the other bridges up for a different IP range. Will PVE handle the bridging? I was using my actual IPs in the virtual environment to reach each VM and CT, and PVE allowed me to do that. -> Addition: I changed vmbr1 to the new address and saw no difference.

I will try changing vmbr1 to a new IP range. BTW, while I have been all over the documentation, your link points to the documentation served from my own machine, which is just as broken as all the other VMs/CTs on the system, so I can't get to it. Allow me to post where I am today.

I am still having problems: almost everything seems to have no access to the network (and the Internet). Or, maybe better said, nothing can reach the VMs/CTs from the outside. I get an IMMEDIATE error when I try to visit my websites, either:

This site can’t be reached

ERR_CONNECTION_REFUSED

Or

This site can’t provide a secure connection

ERR_SSL_VERSION_OR_CIPHER_MISMATCH

Neither of which makes much sense. My 10.188.20.3 address has a hosting panel on it (CWP7), and the machine is not reachable, nor are any of the sites that reside on it.
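For completeness, the quick checks I ran from another machine on the LAN looked roughly like this (a sketch, not a transcript):

Code:
ping -c 3 10.188.20.3           # does the web server VM answer at all?
curl -vk https://10.188.20.3/   # distinguishes an immediate refusal from a TLS problem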

I've really been trying hard here, and it has to be something dumb at this point. I hate to take up your time with something like this. Updated outputs:

Here is all I can think to include after a reboot:

Code:
root@carbon5:~# journalctl -u pve* -b0
Jul 06 18:13:02 carbon5 systemd[1]: Starting pvebanner.service - Proxmox VE Login Banner...
Jul 06 18:13:02 carbon5 systemd[1]: Starting pvefw-logger.service - Proxmox VE firewall logger...
Jul 06 18:13:02 carbon5 systemd[1]: Starting pvenetcommit.service - Commit Proxmox VE network changes...
Jul 06 18:13:02 carbon5 systemd[1]: Finished pvenetcommit.service - Commit Proxmox VE network changes.
Jul 06 18:13:02 carbon5 pvefw-logger[5479]: starting pvefw logger
Jul 06 18:13:02 carbon5 systemd[1]: Started pvefw-logger.service - Proxmox VE firewall logger.
Jul 06 18:13:03 carbon5 systemd[1]: Starting pve-lxc-syscalld.service - Proxmox VE LXC Syscall Daemon...
Jul 06 18:13:03 carbon5 systemd[1]: Started pve-lxc-syscalld.service - Proxmox VE LXC Syscall Daemon.
Jul 06 18:13:04 carbon5 systemd[1]: Finished pvebanner.service - Proxmox VE Login Banner.
Jul 06 18:13:08 carbon5 systemd[1]: Reached target pve-storage.target - PVE Storage Target.
Jul 06 18:13:08 carbon5 systemd[1]: Started pve-daily-update.timer - Daily PVE download activities.
Jul 06 18:13:09 carbon5 systemd[1]: Starting pve-cluster.service - The Proxmox VE cluster filesystem...
Jul 06 18:13:10 carbon5 systemd[1]: Started pve-cluster.service - The Proxmox VE cluster filesystem.
Jul 06 18:13:10 carbon5 systemd[1]: Starting pve-firewall.service - Proxmox VE firewall...
Jul 06 18:13:10 carbon5 systemd[1]: Starting pvedaemon.service - PVE API Daemon...
Jul 06 18:13:10 carbon5 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Jul 06 18:13:12 carbon5 pve-firewall[6927]: starting server
Jul 06 18:13:12 carbon5 pvestatd[6930]: starting server
Jul 06 18:13:13 carbon5 systemd[1]: Started pve-firewall.service - Proxmox VE firewall.
Jul 06 18:13:13 carbon5 systemd[1]: Started pvestatd.service - PVE Status Daemon.
Jul 06 18:13:13 carbon5 pvedaemon[6958]: starting server
Jul 06 18:13:13 carbon5 pvedaemon[6958]: starting 3 worker(s)
Jul 06 18:13:13 carbon5 pvedaemon[6958]: worker 6959 started
Jul 06 18:13:13 carbon5 pvedaemon[6958]: worker 6960 started
Jul 06 18:13:13 carbon5 pvedaemon[6958]: worker 6961 started
Jul 06 18:13:13 carbon5 systemd[1]: Started pvedaemon.service - PVE API Daemon.
Jul 06 18:13:13 carbon5 systemd[1]: Starting pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon...
Jul 06 18:13:13 carbon5 systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Jul 06 18:13:15 carbon5 pve-ha-crm[6970]: starting server
Jul 06 18:13:15 carbon5 pve-ha-crm[6970]: status change startup => wait_for_quorum
Jul 06 18:13:15 carbon5 systemd[1]: Started pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon.
Jul 06 18:13:16 carbon5 pveproxy[6969]: Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface.
Jul 06 18:13:16 carbon5 pveproxy[7098]: starting server
Jul 06 18:13:16 carbon5 pveproxy[7098]: starting 3 worker(s)
Jul 06 18:13:16 carbon5 pveproxy[7098]: worker 7101 started
Jul 06 18:13:16 carbon5 pveproxy[7098]: worker 7102 started
Jul 06 18:13:16 carbon5 pveproxy[7098]: worker 7103 started
Jul 06 18:13:16 carbon5 systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Jul 06 18:13:16 carbon5 systemd[1]: Starting pve-ha-lrm.service - PVE Local HA Resource Manager Daemon...
Jul 06 18:13:18 carbon5 pve-ha-lrm[7120]: starting server
Jul 06 18:13:18 carbon5 pve-ha-lrm[7120]: status change startup => wait_for_agent_lock
Jul 06 18:13:18 carbon5 systemd[1]: Started pve-ha-lrm.service - PVE Local HA Resource Manager Daemon.
Jul 06 18:13:18 carbon5 systemd[1]: Starting pve-guests.service - PVE guests...
Jul 06 18:13:20 carbon5 pve-guests[7122]: <root@pam> starting task UPID:carbon5:00001BD3:00001536:64A73C80:startall::root@pam:
Jul 06 18:13:20 carbon5 pve-guests[7122]: <root@pam> end task UPID:carbon5:00001BD3:00001536:64A73C80:startall::root@pam: OK
Jul 06 18:13:20 carbon5 systemd[1]: Finished pve-guests.service - PVE guests.
Jul 06 18:13:20 carbon5 systemd[1]: Starting pvescheduler.service - Proxmox VE scheduler...
Jul 06 18:13:21 carbon5 pvescheduler[7125]: starting server
Jul 06 18:13:21 carbon5 systemd[1]: Started pvescheduler.service - Proxmox VE scheduler.
root@carbon5:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 24:6e:96:55:d1:c4 brd ff:ff:ff:ff:ff:ff
    altname enp6s0f0
3: eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 24:6e:96:55:d1:c5 brd ff:ff:ff:ff:ff:ff
    altname enp6s0f1
    inet6 fe80::266e:96ff:fe55:d1c5/64 scope link
       valid_lft forever preferred_lft forever
4: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr1 state DOWN group default qlen 1000
    link/ether 24:6e:96:55:d1:c0 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 24:6e:96:55:d1:c2 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
6: enp3s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:36:9f:de:79:b4 brd ff:ff:ff:ff:ff:ff
7: enp3s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:36:9f:de:79:b6 brd ff:ff:ff:ff:ff:ff
8: enp131s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:36:9f:df:ca:c0 brd ff:ff:ff:ff:ff:ff
9: enp131s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a0:36:9f:df:ca:c2 brd ff:ff:ff:ff:ff:ff
10: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:55:d1:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.188.20.4/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::266e:96ff:fe55:d1c4/64 scope link
       valid_lft forever preferred_lft forever
11: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 24:6e:96:55:d1:c0 brd ff:ff:ff:ff:ff:ff
    inet 10.188.21.4/24 scope global vmbr1
       valid_lft forever preferred_lft forever
root@carbon5:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno4

iface eno4 inet manual

auto eno3

iface eno3 inet manual

iface eno1 inet manual

iface eno2 inet manual

iface enp3s0f0 inet manual

iface enp3s0f1 inet manual

iface enp131s0f0 inet manual

iface enp131s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.188.20.4/24
        gateway 10.188.20.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet static
        address 10.188.21.4/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

root@carbon5:~# sysctl -p
root@carbon5:~# journalctl -f
Jul 06 18:13:25 carbon5 kernel: L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
Jul 06 18:13:25 carbon5 kernel: FS-Cache: Loaded
Jul 06 18:13:26 carbon5 kernel: NFS: Registering the id_resolver key type
Jul 06 18:13:26 carbon5 kernel: Key type id_resolver registered
Jul 06 18:13:26 carbon5 kernel: Key type id_legacy registered
Jul 06 18:13:26 carbon5 nfsrahead[7296]: setting /mnt/pve/NFS-VMs readahead to 128
Jul 06 18:14:09 carbon5 pvedaemon[6961]: <root@pam> successful auth for user 'root@pam'
Jul 06 18:17:01 carbon5 CRON[11788]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 06 18:17:01 carbon5 CRON[11789]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 06 18:17:01 carbon5 CRON[11788]: pam_unix(cron:session): session closed for user root
^C
root@carbon5:~# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:85            0.0.0.0:*               LISTEN      6958/pvedaemon
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      6730/master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6523/sshd: /usr/sbi
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/init
tcp6       0      0 ::1:25                  :::*                    LISTEN      6730/master
tcp6       0      0 :::8006                 :::*                    LISTEN      7098/pveproxy
tcp6       0      0 :::22                   :::*                    LISTEN      6523/sshd: /usr/sbi
tcp6       0      0 :::111                  :::*                    LISTEN      1/init
tcp6       0      0 :::3128                 :::*                    LISTEN      7118/spiceproxy
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1/init
udp        0      0 127.0.0.1:323           0.0.0.0:*                           6608/chronyd
udp6       0      0 :::111                  :::*                                1/init
udp6       0      0 ::1:323                 :::*                                6608/chronyd
root@carbon5:~# systemctl status pve-cluster pveproxy pvedaemon
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; preset: enabled)
     Active: active (running) since Thu 2023-07-06 18:13:10 EDT; 5min ago
    Process: 6664 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
   Main PID: 6692 (pmxcfs)
      Tasks: 6 (limit: 309296)
     Memory: 46.9M
        CPU: 576ms
     CGroup: /system.slice/pve-cluster.service
             └─6692 /usr/bin/pmxcfs

Jul 06 18:13:09 carbon5 systemd[1]: Starting pve-cluster.service - The Proxmox VE cluster filesystem...
Jul 06 18:13:10 carbon5 systemd[1]: Started pve-cluster.service - The Proxmox VE cluster filesystem.

● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled)
     Active: active (running) since Thu 2023-07-06 18:13:16 EDT; 4min 57s ago
    Process: 6964 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
    Process: 6969 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
   Main PID: 7098 (pveproxy)
      Tasks: 4 (limit: 309296)
     Memory: 159.8M
        CPU: 16.516s
     CGroup: /system.slice/pveproxy.service
             ├─7098 pveproxy
             ├─7101 "pveproxy worker"
             ├─7102 "pveproxy worker"
             └─7103 "pveproxy worker"

Jul 06 18:13:13 carbon5 systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Jul 06 18:13:16 carbon5 pveproxy[6969]: Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface.
Jul 06 18:13:16 carbon5 pveproxy[7098]: starting server
Jul 06 18:13:16 carbon5 pveproxy[7098]: starting 3 worker(s)
Jul 06 18:13:16 carbon5 pveproxy[7098]: worker 7101 started
Jul 06 18:13:16 carbon5 pveproxy[7098]: worker 7102 started
Jul 06 18:13:16 carbon5 pveproxy[7098]: worker 7103 started
Jul 06 18:13:16 carbon5 systemd[1]: Started pveproxy.service - PVE API Proxy Server.

● pvedaemon.service - PVE API Daemon
     Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; preset: enabled)
     Active: active (running) since Thu 2023-07-06 18:13:13 EDT; 5min ago
    Process: 6740 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
   Main PID: 6958 (pvedaemon)
      Tasks: 4 (limit: 309296)
     Memory: 211.1M
        CPU: 3.188s
     CGroup: /system.slice/pvedaemon.service
             ├─6958 pvedaemon
             ├─6959 "pvedaemon worker"
             ├─6960 "pvedaemon worker"
             └─6961 "pvedaemon worker"

Jul 06 18:13:10 carbon5 systemd[1]: Starting pvedaemon.service - PVE API Daemon...
Jul 06 18:13:13 carbon5 pvedaemon[6958]: starting server
Jul 06 18:13:13 carbon5 pvedaemon[6958]: starting 3 worker(s)
Jul 06 18:13:13 carbon5 pvedaemon[6958]: worker 6959 started
Jul 06 18:13:13 carbon5 pvedaemon[6958]: worker 6960 started
Jul 06 18:13:13 carbon5 pvedaemon[6958]: worker 6961 started
Jul 06 18:13:13 carbon5 systemd[1]: Started pvedaemon.service - PVE API Daemon.
Jul 06 18:14:09 carbon5 pvedaemon[6961]: <root@pam> successful auth for user 'root@pam'
root@carbon5:~# pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-5.15: 7.3-3
pve-kernel-6.2.16-3-pve: 6.2.16-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.5
pve-cluster: 8.0.1
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
root@carbon5:~#


It's just that the VMs and containers are no longer "hooked up" to the "outside world."
 
Last edited:
Looking back over those logs, it looks better, and I have the IP for the second bridge set to 10.188.21.4/24, but I don't know how to make anything talk to it. Do I need to adjust the hosts file too for IPs in the new range? I think I am getting closer, but I am still unsure how to proceed. And BTW, thanks again for all of your assistance.
 
My bad, I updated the link to the public version of the docs.

Regarding "This site can’t provide a secure connection", that is to be expected, you are using a secure https without configuring it, you can tell your browser to proceed anyways.

Let us take a step back. You have two network bridges; why? Do you need two? If I understand correctly, you only want to have:

10.188.20.3 for a web service inside a VM, I gather?
10.188.20.4 for Proxmox VE's web interface
10.188.20.8 for the pfSense container

Could you please re-post the contents of /etc/network/interfaces, /etc/pve/qemu-server/A.conf, and /etc/pve/lxc/B.conf, where A and B are the IDs of the respective VMs and containers? Please post the output inside a [CODE][/CODE] block instead of an inline code block.
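If you are unsure which IDs exist, you can list them and dump the configs on the host like this (100 and 103 below are only example IDs):

Code:
qm list          # VMs and their IDs
pct list         # containers and their IDs
qm config 100    # same content as /etc/pve/qemu-server/100.conf
pct config 103   # same content as /etc/pve/lxc/103.conf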

If I understand your needs correctly, you could remove (or ignore) vmbr1 in the network interfaces and make the container and VM use vmbr0 instead. Then you have to go to /etc/network/interfaces inside each VM (or container) and make sure they have the IP you need; for example, you will want the following in your VM:

Code:
auto vmbr0
iface vmbr0 inet static
    address 10.188.20.3/24
    gateway 10.188.20.1
    # bridge settings

The last network config you sent from your Proxmox VE looks alright.
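One note on the example above: inside a Debian-based guest the interface is usually named eth0 or ens18 rather than vmbr0, so the guest-side file would look more like this (a sketch, assuming the guest calls its NIC eth0):

Code:
auto eth0
iface eth0 inet static
        address 10.188.20.3/24
        gateway 10.188.20.1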
 
Last edited:
My bad, I updated the link to the public version of the docs.

Not a problem, I guessed this and read that section some more...

Regarding "This site can’t provide a secure connection", that is to be expected, you are using a secure https without configuring it, you can tell your browser to proceed anyways.

No, this is not your normal certificate issue: there is no way to accept the warning and proceed, since there is no button to click on.

Let us take a step back. You have two network bridges, why? Do you need to have two? If I understand correctly you only want to have:

This is from my hosts file:

Code:
127.0.0.1 localhost.localdomain localhost
10.188.20.3 helium11.mysites.com helium11
10.188.20.4 carbon5.mysites.com carbon5
10.188.20.5 proxmox.pbs.usereit.com proxmox
10.188.20.6 vapor.mysites.com vapor
10.188.20.7 nginxproxymanager.mysites.com nginxproxymanager
10.188.20.8 pfsense.mysites.com pfsense
="proxmox_one.png"
There is also FreePBX on 10.188.20.105, but since that is totally internal, I never added it to the hosts file you see above. Also note that the Proxmox Backup Server (PBS) instance is actually on another server, used for backups.

Yes, helium11 is a CWP7 web control panel with websites on it. carbon5 is the Proxmox instance's name. vapor is a Nextcloud server with Collabora Office installed as well. nginxproxymanager is what it says it is; I can delete it, as I am not going to go that route, since pfSense and HAProxy are the plan. That's what the container is for, to help me become more knowledgeable on pfSense and HAProxy. That (pfSense) is also the reason I set up another bridge, to have a LAN side and a WAN side to make things work.
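In Proxmox terms that WAN/LAN split just means the pfSense VM gets one NIC on each bridge; from the host it could be set up roughly like this (a sketch; my actual 104.conf is further down, and which NIC becomes WAN or LAN is decided inside pfSense itself):

Code:
qm set 104 --net0 e1000,bridge=vmbr0   # e.g. the WAN side
qm set 104 --net1 e1000,bridge=vmbr1   # e.g. the LAN side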

Could you please re post the contents of /etc/network/interfaces, /etc/pve/qemu-server/A.conf, /etc/pve/lxc/B.conf, here A and B are the number of the respective VM and containers.

cat /etc/network/interfaces

Code:
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno4
iface eno4 inet manual

auto eno3
iface eno3 inet manual

iface eno1 inet manual

iface eno2 inet manual

iface enp3s0f0 inet manual

iface enp3s0f1 inet manual

iface enp131s0f0 inet manual

iface enp131s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.188.20.4/24
        gateway 10.188.20.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet static
        address 10.188.21.4/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Code:
root@carbon5:/etc/pve/qemu-server# cat 100.conf
boot: order=scsi0
cores: 14
ide2: ISO:iso/CentOS-7-x86_64-Minimal-2207-02.iso,media=cdrom,size=988M
memory: 32770
meta: creation-qemu=7.2.0,ctime=1685895413
name: helium11.mysite.com
net0: virtio=56:32:93:5B:28:0A,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: POOL01:vm-100-disk-0,iothread=1,size=1010G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=ee119198-0bf6-465b-98d7-a4b695573d20
sockets: 2
vmgenid: 8add64e9-f937-48db-8ba3-ffb4c8a98875

root@carbon5:/etc/pve/qemu-server# cat 102.conf
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 14
efidisk0: POOL02:vm-102-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide2: ISO:iso/SNG7-PBX16-64bit-2302-1.iso,media=cdrom,size=2375M
memory: 32770
meta: creation-qemu=7.2.0,ctime=1686648705
name: freePBX
net0: virtio=C2:D1:A1:53:81:30,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: POOL02:vm-102-disk-1,iothread=1,size=64G
scsihw: virtio-scsi-single
smbios1: uuid=35383e9c-4e8b-4f4b-8df5-564769f0b649
sockets: 2
vmgenid: 73774259-e784-4e70-a7fb-5d08dc2d91e2

root@carbon5:/etc/pve/qemu-server# cat 104.conf
boot: order=scsi0;ide2
cores: 14
ide2: ISO:iso/pfSense-CE-2.7.0-RELEASE-amd64.iso,media=cdrom,size=747284K
memory: 4096
meta: creation-qemu=7.2.0,ctime=1687997499
name: pfsense
net0: e1000=02:53:CB:C2:54:3E,bridge=vmbr0,queues=8
net1: e1000=12:B8:54:62:A7:67,bridge=vmbr1,queues=8
numa: 0
ostype: l26
scsi0: MIRROR01:vm-104-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=e4446353-8767-412f-81ff-48c6d34c810d
sockets: 2
vmgenid: eb2992f3-220a-4db9-9797-60174dce6903


root@carbon5:/etc/pve/qemu-server#


has the same files as above; I can't find anything for containers, just VMs.

auto vmbr0
iface vmbr0 inet static
address 10.188.20.3/24
gateway 10.188.20.1
# bridge settings

The above would make me think that was the smoking gun, but I cannot find /etc/network/interface or /etc/network/interfaces anywhere in the VMs or containers... *Edit: I found some. -->

Code:
vapor has at /etc/network/interfaces:

# UNCONFIGURED INTERFACES
# remove the above line if you edit this file

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 10.188.20.6/24
        gateway 10.188.20.1
        dns-nameservers 8.8.8.8 8.8.4.4

nginxproxymanager has:

root@nginxproxymanager:/etc/network# cat interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 10.188.20.7/24
        gateway 10.188.20.1

pi-hole has:

root@Pi-hole:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 10.188.20.246/24
        gateway 10.188.20.1

When I read your reply I thought I was going to be all intelligent and fix this with ease, but all I can find about networking in any of these cases is for the VMs, and those look OK to me; but what do I know? I can't find the equivalent inside the containers...
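For what it's worth, the CT side seems to be driven from the host: each container has a net0 line in /etc/pve/lxc/<ID>.conf, and PVE generates the container's /etc/network/interfaces from it. Something like the following shows and changes it from the host (a sketch; the ID 200 and the address are placeholder values):

Code:
pct config 200     # shows the net0 line for that container
pct set 200 --net0 name=eth0,bridge=vmbr0,ip=10.188.20.50/24,gw=10.188.20.1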
 
Last edited:
Oh boy. I should have thought of this right away: my ISP came in a week or so ago and replaced one black box with two, and their equipment was (and is) all messed up. I will be forever grateful for the time you put in on this while I pulled my hair out grasping at straws...

Perhaps this exercise in futility will some day make another person question, well, in this case, THE OBVIOUS. Mark it solved (I'll update the title).

I can't thank you enough for the time you put in on this. Thank you Maximiliano!
 
Last edited:
