[SOLVED] Power on and no web, no ssh, and no ping but all VMs are reachable

Yam5429

New Member
Oct 24, 2022
Hi all,

I've run into a problem with my Proxmox server and am at a loss for what to do. It's been up and running in its current form on the current hardware for a couple of years now. Yesterday I powered it down to add more memory. Either the motherboard is finicky or the RAM is bad, and I was only able to add one additional stick, so I have 3 of 4 slots full until I can figure out whether it's the mobo or the RAM. Powered it back up and I can't access it. No :8006, no ssh, and no ping from either my workstation or my firewall. All VMs and containers inside are seemingly fine and I can reach them on the LAN without issue.

When I moved to this hardware I had clustered it with my old hardware so I could move the VMs over. When I rebooted yesterday I got a message that the VMs couldn't start initially because they had lost quorum. Fine, I'd been planning on removing that older server anyway, so I did, and they're now two independent Proxmox servers. No more quorum errors, and over the reboots I've done since yesterday all of the VMs and containers have started right up each time.

I usually connect to 192.168.1.202, which is one of the ports on a dual-port 10G NIC. You'll also see .203 below; that's the Ethernet port on the motherboard, which I've tested in the past but usually leave disconnected.

Looks like there's a problem with zfs-import-scan? Would that prevent the node from being reachable to/from anything on the network? It seems like a ZFS issue shouldn't take down the entire network for a node, but while searching for a solution I do see a lot of people with seemingly unrelated problems taking out their GUI. I also assume that having an odd number of RAM sticks installed wouldn't break connectivity to the node while the VMs/containers remain reachable.
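
In case it helps, these are the commands I'm planning to run next to dig into that failed unit (just my guess at the right places to look; they only read state and don't change anything):
Code:
# show why the unit failed, plus its log lines from this boot
systemctl status zfs-import-scan.service
journalctl -b -u zfs-import-scan.service

# check whether the pools actually got imported anyway
zpool status
zpool import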

I wrote the output of what seem to be the usual suggestions and requests for information to a thumb drive so I could post it here. I hit the character limit trying to add it all, so I'll put some here and some in a reply. I have two ZFS pools: one that's just two NVMe drives and one that's a bunch of hard disks. The NVMe one doesn't currently contain anything because I had a syslog VM set up on it that ate through the drives' TBW rating pretty quickly, so they now show SMART warnings of almost 200% TBW usage. At some point I'll destroy that pool or replace the drives, but there's currently nothing saved on them as far as I remember.

pveversion -v
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-6 (running version: 7.1-6/4e61e21c)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-3
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

systemctl list-units --failed
Code:
  UNIT                    LOAD   ACTIVE SUB    DESCRIPTION
● zfs-import-scan.service loaded failed failed Import ZFS pools by device scanning

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.

ip addr
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr0 state DOWN group default qlen 1000
    link/ether 7c:10:c9:41:02:37 brd ff:ff:ff:ff:ff:ff
3: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr2 state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ac brd ff:ff:ff:ff:ff:ff
4: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ae brd ff:ff:ff:ff:ff:ff
5: wlp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b4:0e:de:7f:f7:29 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 7c:10:c9:41:02:37 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.203/24 scope global vmbr0
       valid_lft forever preferred_lft forever
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ae brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a236:9fff:fe21:ffae/64 scope link
       valid_lft forever preferred_lft forever
8: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ac brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.202/24 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fe21:ffac/64 scope link
       valid_lft forever preferred_lft forever
9: veth107i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether fe:91:31:e5:55:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth104i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether fe:4d:03:66:fe:25 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether de:a4:8a:a4:21:3f brd ff:ff:ff:ff:ff:ff
12: tap106i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 5e:53:02:1f:f7:90 brd ff:ff:ff:ff:ff:ff
13: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether fe:ef:d3:7b:70:68 brd ff:ff:ff:ff:ff:ff link-netnsid 2
14: tap105i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 32:c6:06:90:e2:26 brd ff:ff:ff:ff:ff:ff

cat /etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface enp6s0 inet manual
#Mobo port

iface enp1s0f0 inet manual
#Inside 10g port

iface enp1s0f1 inet manual
#Outside 10g port

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.203/24
    bridge-ports enp6s0
    bridge-stp off
    bridge-fd 0

iface wlp5s0 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0f1
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet static
    address 192.168.1.202/24
    gateway 192.168.1.1
    bridge-ports enp1s0f0
    bridge-stp off
    bridge-fd 0

cat /etc/hosts
Code:
127.0.0.1 localhost.localdomain localhost
192.168.1.202 proxmox.local proxmox

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

hostname -A
Code:
proxmox.local

hostname -I
Code:
192.168.1.203 192.168.1.202

ip -c a
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr0 state DOWN group default qlen 1000
    link/ether 7c:10:c9:41:02:37 brd ff:ff:ff:ff:ff:ff
3: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr2 state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ac brd ff:ff:ff:ff:ff:ff
4: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ae brd ff:ff:ff:ff:ff:ff
5: wlp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b4:0e:de:7f:f7:29 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 7c:10:c9:41:02:37 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.203/24 scope global vmbr0
       valid_lft forever preferred_lft forever
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ae brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a236:9fff:fe21:ffae/64 scope link
       valid_lft forever preferred_lft forever
8: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a0:36:9f:21:ff:ac brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.202/24 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fe21:ffac/64 scope link
       valid_lft forever preferred_lft forever
9: veth107i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether fe:91:31:e5:55:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth104i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether fe:4d:03:66:fe:25 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether de:a4:8a:a4:21:3f brd ff:ff:ff:ff:ff:ff
12: tap106i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 5e:53:02:1f:f7:90 brd ff:ff:ff:ff:ff:ff
13: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether fe:ef:d3:7b:70:68 brd ff:ff:ff:ff:ff:ff link-netnsid 2
14: tap105i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 32:c6:06:90:e2:26 brd ff:ff:ff:ff:ff:ff

systemctl is-enabled networking
Code:
enabled

ip route
Code:
default via 192.168.1.1 dev vmbr2 proto kernel onlink 
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.203 linkdown 
192.168.1.0/24 dev vmbr2 proto kernel scope link src 192.168.1.202

ip link
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 7c:10:c9:41:02:37 brd ff:ff:ff:ff:ff:ff
3: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr2 state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:21:ff:ac brd ff:ff:ff:ff:ff:ff
4: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:21:ff:ae brd ff:ff:ff:ff:ff:ff
5: wlp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether b4:0e:de:7f:f7:29 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 7c:10:c9:41:02:37 brd ff:ff:ff:ff:ff:ff
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:21:ff:ae brd ff:ff:ff:ff:ff:ff
8: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:21:ff:ac brd ff:ff:ff:ff:ff:ff
9: veth107i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP mode DEFAULT group default qlen 1000
    link/ether fe:91:31:e5:55:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth104i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP mode DEFAULT group default qlen 1000
    link/ether fe:4d:03:66:fe:25 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether de:a4:8a:a4:21:3f brd ff:ff:ff:ff:ff:ff
12: tap106i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 5e:53:02:1f:f7:90 brd ff:ff:ff:ff:ff:ff
13: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP mode DEFAULT group default qlen 1000
    link/ether fe:ef:d3:7b:70:68 brd ff:ff:ff:ff:ff:ff link-netnsid 2
14: tap105i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 32:c6:06:90:e2:26 brd ff:ff:ff:ff:ff:ff
 
Continued from main post...

I also have the output of journalctl -b but it's 200k characters so it's not letting me post it here.

systemctl status 'pve*'
Code:
● pve-lxc-syscalld.service - Proxmox VE LXC Syscall Daemon
     Loaded: loaded (/lib/systemd/system/pve-lxc-syscalld.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:36 EDT; 32min ago
   Main PID: 3652 (pve-lxc-syscall)
      Tasks: 13 (limit: 56893)
     Memory: 1.6M
        CPU: 2ms
     CGroup: /system.slice/pve-lxc-syscalld.service
             └─3652 /usr/lib/x86_64-linux-gnu/pve-lxc-syscalld/pve-lxc-syscalld --system /run/pve/lxc-syscalld.sock

Oct 24 17:36:36 proxmox systemd[1]: Starting Proxmox VE LXC Syscall Daemon...
Oct 24 17:36:36 proxmox systemd[1]: Started Proxmox VE LXC Syscall Daemon.

● pvefw-logger.service - Proxmox VE firewall logger
     Loaded: loaded (/lib/systemd/system/pvefw-logger.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:36 EDT; 32min ago
    Process: 3615 ExecStart=/usr/sbin/pvefw-logger (code=exited, status=0/SUCCESS)
   Main PID: 3644 (pvefw-logger)
      Tasks: 2 (limit: 56893)
     Memory: 1.6M
        CPU: 129ms
     CGroup: /system.slice/pvefw-logger.service
             └─3644 /usr/sbin/pvefw-logger

Oct 24 17:36:36 proxmox systemd[1]: Starting Proxmox VE firewall logger...
Oct 24 17:36:36 proxmox pvefw-logger[3644]: starting pvefw logger
Oct 24 17:36:36 proxmox systemd[1]: Started Proxmox VE firewall logger.

● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:38 EDT; 32min ago
    Process: 3960 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
   Main PID: 3974 (pmxcfs)
      Tasks: 6 (limit: 56893)
     Memory: 39.9M
        CPU: 1.711s
     CGroup: /system.slice/pve-cluster.service
             └─3974 /usr/bin/pmxcfs

Oct 24 17:36:37 proxmox systemd[1]: Starting The Proxmox VE cluster filesystem...
Oct 24 17:36:38 proxmox systemd[1]: Started The Proxmox VE cluster filesystem.

● pvebanner.service - Proxmox VE Login Banner
     Loaded: loaded (/lib/systemd/system/pvebanner.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2022-10-24 17:36:36 EDT; 32min ago
    Process: 3614 ExecStart=/usr/bin/pvebanner (code=exited, status=0/SUCCESS)
   Main PID: 3614 (code=exited, status=0/SUCCESS)
        CPU: 145ms

Oct 24 17:36:36 proxmox systemd[1]: Starting Proxmox VE Login Banner...
Oct 24 17:36:36 proxmox systemd[1]: Finished Proxmox VE Login Banner.

● pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:40 EDT; 32min ago
    Process: 4120 ExecStart=/usr/sbin/pve-ha-crm start (code=exited, status=0/SUCCESS)
   Main PID: 4124 (pve-ha-crm)
      Tasks: 1 (limit: 56893)
     Memory: 92.2M
        CPU: 542ms
     CGroup: /system.slice/pve-ha-crm.service
             └─4124 pve-ha-crm

Oct 24 17:36:39 proxmox systemd[1]: Starting PVE Cluster HA Resource Manager Daemon...
Oct 24 17:36:40 proxmox pve-ha-crm[4124]: starting server
Oct 24 17:36:40 proxmox pve-ha-crm[4124]: status change startup => wait_for_quorum
Oct 24 17:36:40 proxmox systemd[1]: Started PVE Cluster HA Resource Manager Daemon.

● pve-ha-lrm.service - PVE Local HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:40 EDT; 32min ago
    Process: 4292 ExecStart=/usr/sbin/pve-ha-lrm start (code=exited, status=0/SUCCESS)
   Main PID: 4298 (pve-ha-lrm)
      Tasks: 1 (limit: 56893)
     Memory: 91.9M
        CPU: 649ms
     CGroup: /system.slice/pve-ha-lrm.service
             └─4298 pve-ha-lrm

Oct 24 17:36:40 proxmox systemd[1]: Starting PVE Local HA Resource Manager Daemon...
Oct 24 17:36:40 proxmox pve-ha-lrm[4298]: starting server
Oct 24 17:36:40 proxmox pve-ha-lrm[4298]: status change startup => wait_for_agent_lock
Oct 24 17:36:40 proxmox systemd[1]: Started PVE Local HA Resource Manager Daemon.

● pve-container@104.service - PVE LXC Container: 104
     Loaded: loaded (/lib/systemd/system/pve-container@.service; static)
     Active: active (running) since Mon 2022-10-24 17:36:48 EDT; 31min ago
       Docs: man:lxc-start
             man:lxc
             man:pct
   Main PID: 4908 (lxc-start)
      Tasks: 0 (limit: 56893)
     Memory: 648.0K
        CPU: 522ms
     CGroup: /system.slice/system-pve\x2dcontainer.slice/pve-container@104.service
             ‣ 4908 /usr/bin/lxc-start -F -n 104

Oct 24 17:36:48 proxmox systemd[1]: Started PVE LXC Container: 104.

● pvestatd.service - PVE Status Daemon
     Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:39 EDT; 32min ago
    Process: 4083 ExecStart=/usr/bin/pvestatd start (code=exited, status=0/SUCCESS)
   Main PID: 4088 (pvestatd)
      Tasks: 1 (limit: 56893)
     Memory: 108.6M
        CPU: 23.544s
     CGroup: /system.slice/pvestatd.service
             └─4088 pvestatd

Oct 24 17:36:38 proxmox systemd[1]: Starting PVE Status Daemon...
Oct 24 17:36:39 proxmox pvestatd[4088]: starting server
Oct 24 17:36:39 proxmox systemd[1]: Started PVE Status Daemon.
Oct 24 17:36:51 proxmox pvestatd[4088]: modified cpu set for lxc/107: 0-3
Oct 24 17:36:51 proxmox pvestatd[4088]: modified cpu set for lxc/104: 4-11

● pve-container@107.service - PVE LXC Container: 107
     Loaded: loaded (/lib/systemd/system/pve-container@.service; static)
     Active: active (running) since Mon 2022-10-24 17:36:41 EDT; 32min ago
       Docs: man:lxc-start
             man:lxc
             man:pct
   Main PID: 4306 (lxc-start)
      Tasks: 0 (limit: 56893)
     Memory: 832.0K
        CPU: 444ms
     CGroup: /system.slice/system-pve\x2dcontainer.slice/pve-container@107.service
             ‣ 4306 /usr/bin/lxc-start -F -n 107

Oct 24 17:36:41 proxmox systemd[1]: Started PVE LXC Container: 107.

● pve-firewall.service - Proxmox VE firewall
     Loaded: loaded (/lib/systemd/system/pve-firewall.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:39 EDT; 32min ago
    Process: 4080 ExecStartPre=/usr/bin/update-alternatives --set ebtables /usr/sbin/ebtables-legacy (code=exited, status=0/SUCCESS)
    Process: 4084 ExecStartPre=/usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy (code=exited, status=0/SUCCESS)
    Process: 4085 ExecStartPre=/usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy (code=exited, status=0/SUCCESS)
    Process: 4086 ExecStart=/usr/sbin/pve-firewall start (code=exited, status=0/SUCCESS)
   Main PID: 4089 (pve-firewall)
      Tasks: 1 (limit: 56893)
     Memory: 82.3M
        CPU: 7.327s
     CGroup: /system.slice/pve-firewall.service
             └─4089 pve-firewall

Oct 24 17:36:38 proxmox systemd[1]: Starting Proxmox VE firewall...
Oct 24 17:36:39 proxmox pve-firewall[4089]: starting server
Oct 24 17:36:39 proxmox systemd[1]: Started Proxmox VE firewall.

● pvenetcommit.service - Commit Proxmox VE network changes
     Loaded: loaded (/lib/systemd/system/pvenetcommit.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2022-10-24 17:36:36 EDT; 32min ago
    Process: 3616 ExecStartPre=/bin/rm -f /etc/openvswitch/conf.db (code=exited, status=0/SUCCESS)
    Process: 3619 ExecStart=sh -c if [ -f ${FN}.new ]; then mv ${FN}.new ${FN}; fi (code=exited, status=0/SUCCESS)
   Main PID: 3619 (code=exited, status=0/SUCCESS)
        CPU: 1ms

Oct 24 17:36:36 proxmox systemd[1]: Starting Commit Proxmox VE network changes...
Oct 24 17:36:36 proxmox systemd[1]: Finished Commit Proxmox VE network changes.

● pve-container@102.service - PVE LXC Container: 102
     Loaded: loaded (/lib/systemd/system/pve-container@.service; static)
     Active: active (running) since Mon 2022-10-24 17:38:02 EDT; 30min ago
       Docs: man:lxc-start
             man:lxc
             man:pct
   Main PID: 13987 (lxc-start)
      Tasks: 0 (limit: 56893)
     Memory: 628.0K
        CPU: 514ms
     CGroup: /system.slice/system-pve\x2dcontainer.slice/pve-container@102.service
             ‣ 13987 /usr/bin/lxc-start -F -n 102

Oct 24 17:38:02 proxmox systemd[1]: Started PVE LXC Container: 102.

● pvescheduler.service - Proxmox VE scheduler
     Loaded: loaded (/lib/systemd/system/pvescheduler.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:39 EDT; 32min ago
    Process: 4082 ExecStart=/usr/bin/pvescheduler start (code=exited, status=0/SUCCESS)
   Main PID: 4113 (pvescheduler)
      Tasks: 1 (limit: 56893)
     Memory: 103.6M
        CPU: 791ms
     CGroup: /system.slice/pvescheduler.service
             └─4113 pvescheduler

Oct 24 17:36:38 proxmox systemd[1]: Starting Proxmox VE scheduler...
Oct 24 17:36:39 proxmox pvescheduler[4113]: starting server
Oct 24 17:36:39 proxmox systemd[1]: Started Proxmox VE scheduler.

● pve-daily-update.timer - Daily PVE download activities
     Loaded: loaded (/lib/systemd/system/pve-daily-update.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Mon 2022-10-24 17:36:37 EDT; 32min ago
    Trigger: Tue 2022-10-25 04:29:53 EDT; 10h left
   Triggers: ● pve-daily-update.service

Oct 24 17:36:37 proxmox systemd[1]: Started Daily PVE download activities.

● pve-storage.target - PVE Storage Target
     Loaded: loaded (/lib/systemd/system/pve-storage.target; static)
     Active: active since Mon 2022-10-24 17:36:37 EDT; 32min ago

Oct 24 17:36:37 proxmox systemd[1]: Reached target PVE Storage Target.

● pvedaemon.service - PVE API Daemon
     Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:39 EDT; 32min ago
    Process: 4081 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
   Main PID: 4116 (pvedaemon)
      Tasks: 4 (limit: 56893)
     Memory: 139.2M
        CPU: 804ms
     CGroup: /system.slice/pvedaemon.service
             ├─4116 pvedaemon
             ├─4117 pvedaemon worker
             ├─4118 pvedaemon worker
             └─4119 pvedaemon worker

Oct 24 17:36:38 proxmox systemd[1]: Starting PVE API Daemon...
Oct 24 17:36:39 proxmox pvedaemon[4116]: starting server
Oct 24 17:36:39 proxmox pvedaemon[4116]: starting 3 worker(s)
Oct 24 17:36:39 proxmox pvedaemon[4116]: worker 4117 started
Oct 24 17:36:39 proxmox pvedaemon[4116]: worker 4118 started
Oct 24 17:36:39 proxmox pvedaemon[4116]: worker 4119 started
Oct 24 17:36:39 proxmox systemd[1]: Started PVE API Daemon.

● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-24 17:36:40 EDT; 32min ago
    Process: 4121 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
    Process: 4123 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
   Main PID: 4288 (pveproxy)
      Tasks: 4 (limit: 56893)
     Memory: 134.8M
        CPU: 947ms
     CGroup: /system.slice/pveproxy.service
             ├─4288 pveproxy
             ├─4289 pveproxy worker
             ├─4290 pveproxy worker
             └─4291 pveproxy worker

Oct 24 17:36:39 proxmox systemd[1]: Starting PVE API Proxy Server...
Oct 24 17:36:40 proxmox pveproxy[4288]: starting server
Oct 24 17:36:40 proxmox pveproxy[4288]: starting 3 worker(s)
Oct 24 17:36:40 proxmox pveproxy[4288]: worker 4289 started
Oct 24 17:36:40 proxmox pveproxy[4288]: worker 4290 started
Oct 24 17:36:40 proxmox pveproxy[4288]: worker 4291 started
Oct 24 17:36:40 proxmox systemd[1]: Started PVE API Proxy Server.

● pve-guests.service - PVE guests
     Loaded: loaded (/lib/systemd/system/pve-guests.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2022-10-24 17:39:47 EDT; 29min ago
    Process: 4299 ExecStartPre=/usr/share/pve-manager/helpers/pve-startall-delay (code=exited, status=0/SUCCESS)
    Process: 4300 ExecStart=/usr/bin/pvesh --nooutput create /nodes/localhost/startall (code=exited, status=0/SUCCESS)
   Main PID: 4300 (code=exited, status=0/SUCCESS)
        CPU: 850ms

Oct 24 17:38:02 proxmox pvesh[4300]: Starting CT 102
Oct 24 17:38:02 proxmox pve-guests[4301]: <root@pam> starting task UPID:proxmox:00003683:000037BE:635705BA:vzstart:102:root@pam:
Oct 24 17:38:02 proxmox pve-guests[13955]: starting CT 102: UPID:proxmox:00003683:000037BE:635705BA:vzstart:102:root@pam:
Oct 24 17:38:06 proxmox pvesh[4300]: Waiting for 60 seconds (startup delay)
Oct 24 17:39:06 proxmox pvesh[4300]: Starting VM 105
Oct 24 17:39:06 proxmox pve-guests[4301]: <root@pam> starting task UPID:proxmox:00005999:000050BF:635705FA:qmstart:105:root@pam:
Oct 24 17:39:06 proxmox pve-guests[22937]: start VM 105: UPID:proxmox:00005999:000050BF:635705FA:qmstart:105:root@pam:
Oct 24 17:39:07 proxmox pvesh[4300]: Waiting for 40 seconds (startup delay)
Oct 24 17:39:47 proxmox pve-guests[4300]: <root@pam> end task UPID:proxmox:000010CD:00001815:63570569:startall::root@pam: OK
Oct 24 17:39:47 proxmox systemd[1]: Finished PVE guests.
 
Powered it back up and I can't access it. No :8006, no ssh, and no ping from either my workstation or my firewall. All VMs and containers inside are seemingly fine and I can reach them on the LAN without issue.
If you can't ping or SSH into the box, then it's not a cluster or PVE services issue. It's strange that you can access your VMs but not the underlying interface. What network are those VMs on? What about your workstation and firewall? What does "arp -n" show from the workstation and firewall after you try to ping the PVE IP?
Force vmbr0 down and don't bring it up on boot if you are not using it. Retest with the interface fully administratively down.
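
Something along these lines should do it (a rough sketch only; vmbr0/enp6s0 and the .202 address are taken from your output above):
Code:
# on the PVE host: administratively down the unused bridge and its port
ip link set vmbr0 down
ip link set enp6s0 down

# on the workstation / firewall, right after a ping attempt to the host:
ping -c 3 192.168.1.202
arp -n | grep 192.168.1.202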


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
If you can't ping or SSH into the box, then it's not a cluster or PVE services issue. It's strange that you can access your VMs but not the underlying interface. What network are those VMs on? What about your workstation and firewall? What does "arp -n" show from the workstation and firewall after you try to ping the PVE IP?
Force vmbr0 down and don't bring it up on boot if you are not using it. Retest with the interface fully administratively down.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

Thanks! To set it down, is that just a matter of running ip link set vmbr0 down?
 
Same issue here after an update this morning.
But if I modify my
Code:
/etc/network/interfaces
there is no change after reboot. It seems the new settings in this file aren't being read. Also interesting: my UniFi UDM Pro shows that the VMs only have a 10 Mb/s connection, but I have a 10 Gb/s connection...
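
Is there something I'm missing about how the file gets applied? I assumed that with ifupdown2 something like this would pick up the edits even without a reboot (not sure if that's the right approach):
Code:
# apply the current /etc/network/interfaces live (ifupdown2)
ifreload -a

# check that the networking service is enabled and came up cleanly at boot
systemctl is-enabled networking
systemctl status networking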
 
I also tried switching to a lower kernel version, but no change.

pveversion:

pve-manager/7.2-11/b76d3178 (running kernel: 5.11.22-7-pve) # the manually selected older kernel
 
Same issue here after an update this morning.
But if I modify my
Code:
/etc/network/interfaces
there is no change after reboot. It seems the new settings in this file aren't being read. Also interesting: my UniFi UDM Pro shows that the VMs only have a 10 Mb/s connection, but I have a 10 Gb/s connection...

One of my VMs is my firewall. I just ran iperf between my desktop and the firewall and it's reporting 255 Mbps, so I'm not seeing the slow VM network speed you're seeing. I also haven't updated Proxmox recently; I was planning on doing that once it came back up after adding the RAM but didn't get that far. There shouldn't have been a version change of anything on my end unless it happened on its own, which I don't think is the case.
 
I would "ip addr del 192.168.1.0/24 dev vmbr0" in addition to setting it down. And remove/comment it's IP assignment in /etc/network/interfaces


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

That did the trick. I ran ip link set vmbr0 down and then commented out the part of /etc/network/interfaces referencing vmbr0, and before I could even get back to my computer it was working. Rebooted anyway to make sure it kept working, and it did. I also tried running ip addr del 192.168.1.0/24 dev vmbr0 and it responded with "RTNETLINK answers: Cannot assign requested address"; I don't know whether that's an error or expected. Either way, all good now.
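
For anyone finding this later, the part I commented out is the vmbr0 stanza from my /etc/network/interfaces posted above, i.e. it now looks like this:
Code:
#auto vmbr0
#iface vmbr0 inet static
#    address 192.168.1.203/24
#    bridge-ports enp6s0
#    bridge-stp off
#    bridge-fd 0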

Any recommendations for how I'd track down what the problem with vmbr0's configuration is? Not a big deal since I don't typically use that port, but I'd be curious to know what went wrong and how to fix it should I need it in the future.

Thanks again, this was a huge help, and getting it fixed is a big weight off my shoulders!
 
Any recommendations for how I'd track down what the problem with vmbr0's configuration is?
I think you already did. Don't dual-home a system unless you really know what you are doing and what you are trying to achieve.
When troubleshooting a networking issue, always try to reduce the number of variables to a minimum to isolate the problem.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I think you already did. Don't dual-home a system unless you really know what you are doing and what you are trying to achieve.
When troubleshooting a networking issue, always try to reduce the number of variables to a minimum to isolate the problem.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

That makes sense. At least having gone through this problem I know that if one of the ports on that PCIe NIC goes bad, I should be able to do the opposite of this: down the bad port, bring vmbr0 up, and at least temporarily use the port on the motherboard. Thanks again!
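
Noting the rough plan here for future me, in case that 10G port ever dies (untested, and the default route has to move over to whichever bridge carries the management IP):
Code:
# take the dead 10G-backed bridge out of play
ip link set vmbr2 down
ip addr del 192.168.1.202/24 dev vmbr2

# bring the motherboard-port bridge up instead
ip link set vmbr0 up
ip addr add 192.168.1.203/24 dev vmbr0
ip route add default via 192.168.1.1 dev vmbr0

# then swap the comments in /etc/network/interfaces (and add the gateway line to vmbr0) to make it persistent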
 
That did the trick. I ran ip link set vmbr0 down and then commented out the part of /etc/network/interfaces referencing vmbr0, and before I could even get back to my computer it was working. Rebooted anyway to make sure it kept working, and it did. I also tried running ip addr del 192.168.1.0/24 dev vmbr0 and it responded with "RTNETLINK answers: Cannot assign requested address"; I don't know whether that's an error or expected. Either way, all good now.

Any recommendations for how I'd track down what the problem with vmbr0's configuration is? Not a big deal since I don't typically use that port, but I'd be curious to know what went wrong and how to fix it should I need it in the future.

Thanks again, this was a huge help, and getting it fixed is a big weight off my shoulders!
@bbgeek17 I am having the same issue and followed these steps, but now my Ethernet port has stopped responding altogether.

Please explain things like I'm five; I am not an IT engineer and am trying to learn as I run into issues.

When you say you commented out the "/etc/network/interfaces part referencing vmbr0", was it just the "auto vmbr0" line or something else?

This is my /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

iface enp2s0 inet manual


auto vmbr0
iface vmbr0 inet static
    address 192.168.2.10
    gateway 192.168.100.1
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

If I follow any of these steps my LAN port stops working until I run dhclient enp2s0; then it comes back to life. What am I doing wrong, and are there any suggestions on what I can do to fix it? Any guidance would be greatly appreciated.
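
For reference, is something like this closer to what the vmbr0 stanza should look like? I'm guessing at the /24 prefix and at the gateway being on the same subnet as the address, so please correct me if those assumptions are wrong:
Code:
auto vmbr0
iface vmbr0 inet static
    # prefix length added (assuming a /24 LAN)
    address 192.168.2.10/24
    # assuming the gateway sits on the same 192.168.2.0/24 subnet
    gateway 192.168.2.1
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0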

Thanks
 
