Networking disabled after login with two devices?

Jorn

New Member
Jun 8, 2020
Hey guys,

I am pretty new to the server business and tried to set up a home server and ended up with the following problem:

I set up a Proxmox server and everything was running properly. Access via the web interface and via SSH from my Linux machine worked smoothly. Yesterday, I logged in from two devices via SSH at the same time: a) my smartphone, connected to my LAN via VPN, using a terminal emulator, and b) the terminal of my Linux machine via WLAN. From that point on, I was still able to log in to the server with a keyboard connected directly to it, but the web interface could no longer establish a connection (the server did not respond, or took too long to respond), and I could no longer connect via SSH from any device. I tried to ping the server from inside my LAN, without success. I tried to scan the server for open ports using nmap, but the terminal said the "server seems to be down". I then tried to install a package via the keyboard on the running server, which got me an error message like "failed to fetch package". So I assume my server has no connection to the internet. On the other hand, my FritzBox recognizes the server and WoL is still working.
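(For reference, a minimal sketch of those reachability checks from another machine on the LAN; the server's address, 192.168.178.25, appears later in this thread, so adjust as needed:)

Code:
# Basic reachability test from another machine on the same LAN.
ping -c 4 192.168.178.25

# Port scan; -Pn skips host discovery in case ICMP is being dropped.
# 22 = SSH, 8006 = Proxmox web interface.
nmap -Pn -p 22,8006 192.168.178.25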

So my best guess is that Proxmox disabled some networking settings because I tried to log in from two different devices at the same time?

If you need further information, please let me know. I am running the latest OS version. Any kind of help is greatly appreciated =)

Regards
Jorn
 
Hi,

Is your host still connected to the Internet after you connect from your smartphone?

Did you check whether there are two hosts with the same name in your network?

Can you also check journalctl -f to see if there are any errors?
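(For reference, a quick sketch of watching the journal; these are standard journalctl flags, nothing Proxmox-specific assumed:)

Code:
journalctl -f          # follow new journal entries live while reproducing the issue
journalctl -p err -b   # only error-priority messages from the current boot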
 
Hi Moayad,

thank you for your reply. There is only one host with that name in my network. I am not sure whether there is an internet connection or not, but installing packages is not possible.

Unfortunately, I can't even upload poor-quality pictures of my screen, as the files become too large. Is there a proper way to do this? I guess sharing the output of journalctl -f would help a lot more.

I can see that port 1 and port 2 entered the blocking state and then the disabled state. Later on, they changed from blocking to forwarding. Furthermore, there is an RRDC update error, and in between I can find something like apparmor="DENIED" operation="mount".
 
Hi,

You can write the output of journalctl to a USB stick, then upload it.
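(A minimal sketch of that, assuming the stick shows up as /dev/sdb1 — a placeholder; check lsblk for the real device name:)

Code:
lsblk                               # find the USB stick's device name
mount /dev/sdb1 /mnt                # mount it (device name is an assumption)
journalctl -b > /mnt/journal.txt    # dump the full journal of the current boot
umount /mnt                         # flush writes before unplugging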


Is the firewall running?
 
Hi,

so the command pve-firewall status gives me Status: disabled/running. I managed to copy the file /var/log/syslog, and this is the first part of it:

Code:
Jun  9 21:43:20 pve rsyslogd:  [origin software="rsyslogd" swVersion="8.1901.0" x-pid="762" x-info="https://www.rsyslog.com"] rsyslogd was HUPed
Jun  9 21:43:20 pve systemd[1]: logrotate.service: Succeeded.
Jun  9 21:43:20 pve systemd[1]: Started Rotate log files.
Jun  9 21:43:20 pve systemd[1]: Started Login Service.
Jun  9 21:43:20 pve postfix/postfix-script[1125]: warning: symlink leaves directory: /etc/postfix/./makedefs.out
Jun  9 21:43:20 pve postfix/postfix-script[1174]: starting the Postfix mail system
Jun  9 21:43:20 pve postfix/master[1179]: daemon started -- version 3.4.10, configuration /etc/postfix
Jun  9 21:43:20 pve systemd[1]: Started Postfix Mail Transport Agent (instance -).
Jun  9 21:43:20 pve systemd[1]: Starting Postfix Mail Transport Agent...
Jun  9 21:43:20 pve systemd[1]: Started Postfix Mail Transport Agent.
Jun  9 21:43:20 pve postfix/pickup[1181]: C563F240C86: uid=0 from=<root>
Jun  9 21:43:20 pve postfix/cleanup[1187]: C563F240C86: message-id=<20200609194320.C563F240C86@pve.homeserver.local>
Jun  9 21:43:20 pve postfix/qmgr[1183]: C563F240C86: from=<root@pve.homeserver.local>, size=1046, nrcpt=1 (queue active)
Jun  9 21:43:20 pve systemd[1]: apt-daily.service: Succeeded.
Jun  9 21:43:20 pve systemd[1]: Started Daily apt download activities.
Jun  9 21:43:20 pve systemd[1]: Starting Daily apt upgrade and clean activities...
Jun  9 21:43:21 pve pvemailforward[1199]: mail forward failed: Connection refused
Jun  9 21:43:21 pve postfix/local[1195]: C563F240C86: to=<root@pve.homeserver.local>, orig_to=<root>, relay=local, delay=0.6, delays=0.4/0.01/0/0.19, dsn=2.0.0, status=sent (delivered to command: /usr/bin/pvemailforward)
Jun  9 21:43:21 pve postfix/qmgr[1183]: C563F240C86: removed
Jun  9 21:43:21 pve systemd[1]: apt-daily-upgrade.service: Succeeded.
Jun  9 21:43:21 pve systemd[1]: Started Daily apt upgrade and clean activities.
Jun  9 21:43:21 pve kernel: [    7.576945] vmbr0: port 1(enp5s0) entered disabled state
Jun  9 21:43:21 pve iscsid: iSCSI daemon with pid=931 started!
Jun  9 21:43:21 pve systemd[1]: Started The Proxmox VE cluster filesystem.
Jun  9 21:43:21 pve systemd[1]: Started Regular background program processing daemon.
Jun  9 21:43:21 pve systemd[1]: Starting PVE Status Daemon...
Jun  9 21:43:21 pve systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun  9 21:43:21 pve systemd[1]: Starting PVE API Daemon...
Jun  9 21:43:21 pve cron[1256]: (CRON) INFO (pidfile fd = 3)
Jun  9 21:43:21 pve systemd[1]: Starting Daily PVE download activities...
Jun  9 21:43:21 pve systemd[1]: Starting Proxmox VE firewall...
Jun  9 21:43:21 pve cron[1256]: (CRON) INFO (Running @reboot jobs)
Jun  9 21:43:21 pve kernel: [    8.151317] r8169 0000:05:00.0 enp5s0: Link is Up - 100Mbps/Full - flow control rx/tx
Jun  9 21:43:21 pve kernel: [    8.151329] vmbr0: port 1(enp5s0) entered blocking state
Jun  9 21:43:21 pve kernel: [    8.151330] vmbr0: port 1(enp5s0) entered forwarding state
Jun  9 21:43:21 pve kernel: [    8.151409] IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
Jun  9 21:43:22 pve pve-firewall[1264]: starting server
Jun  9 21:43:22 pve systemd[1]: Started Proxmox VE firewall.
Jun  9 21:43:22 pve pvestatd[1268]: starting server
Jun  9 21:43:22 pve systemd[1]: Started PVE Status Daemon.
Jun  9 21:43:22 pve pvedaemon[1289]: starting server
Jun  9 21:43:22 pve pvedaemon[1289]: starting 3 worker(s)
Jun  9 21:43:22 pve pvedaemon[1289]: worker 1290 started
Jun  9 21:43:22 pve pvedaemon[1289]: worker 1291 started
Jun  9 21:43:22 pve pvedaemon[1289]: worker 1292 started
Jun  9 21:43:22 pve systemd[1]: Started PVE API Daemon.
Jun  9 21:43:22 pve systemd[1]: Starting PVE API Proxy Server...
Jun  9 21:43:22 pve systemd[1]: Starting PVE Cluster HA Resource Manager Daemon...
Jun  9 21:43:22 pve pve-ha-crm[1297]: starting server
Jun  9 21:43:22 pve pve-ha-crm[1297]: status change startup => wait_for_quorum
Jun  9 21:43:22 pve systemd[1]: Started PVE Cluster HA Resource Manager Daemon.
Jun  9 21:43:23 pve pveproxy[1298]: starting server
Jun  9 21:43:23 pve pveproxy[1298]: starting 3 worker(s)
Jun  9 21:43:23 pve pveproxy[1298]: worker 1299 started
Jun  9 21:43:23 pve pveproxy[1298]: worker 1300 started
Jun  9 21:43:23 pve pveproxy[1298]: worker 1301 started
Jun  9 21:43:23 pve systemd[1]: Started PVE API Proxy Server.
Jun  9 21:43:23 pve systemd[1]: Starting PVE Local HA Resource Manager Daemon...
Jun  9 21:43:23 pve systemd[1]: Starting PVE SPICE Proxy Server...
Jun  9 21:43:23 pve spiceproxy[1304]: starting server
Jun  9 21:43:23 pve spiceproxy[1304]: starting 1 worker(s)
Jun  9 21:43:23 pve spiceproxy[1304]: worker 1305 started
Jun  9 21:43:23 pve systemd[1]: Started PVE SPICE Proxy Server.
Jun  9 21:43:23 pve pve-ha-lrm[1306]: starting server
Jun  9 21:43:23 pve pve-ha-lrm[1306]: status change startup => wait_for_agent_lock
Jun  9 21:43:23 pve systemd[1]: Started PVE Local HA Resource Manager Daemon.
Jun  9 21:43:23 pve systemd[1]: Starting PVE guests...
Jun  9 21:43:24 pve pve-guests[1308]: <root@pam> starting task UPID:pve:0000051D:00000416:5EDFE65C:startall::root@pam:
Jun  9 21:43:24 pve pvesh[1308]: Starting CT 101
Jun  9 21:43:24 pve pve-guests[1310]: starting CT 101: UPID:pve:0000051E:00000417:5EDFE65C:vzstart:101:root@pam:
Jun  9 21:43:24 pve pve-guests[1309]: <root@pam> starting task UPID:pve:0000051E:00000417:5EDFE65C:vzstart:101:root@pam:
Jun  9 21:43:24 pve systemd[1]: Created slice PVE LXC Container Slice.
Jun  9 21:43:24 pve systemd[1]: Started PVE LXC Container: 101.
Jun  9 21:43:24 pve kernel: [   10.783365] EXT4-fs warning (device loop0): ext4_multi_mount_protect:322: MMP interval 42 higher than expected, please wait.
Jun  9 21:43:24 pve kernel: [   10.783365]
Jun  9 21:43:25 pve pve-guests[1308]: <root@pam> end task UPID:pve:0000051D:00000416:5EDFE65C:startall::root@pam: OK
Jun  9 21:43:25 pve systemd[1]: Started PVE guests.
Jun  9 21:43:25 pve systemd[1]: Reached target Multi-User System.
Jun  9 21:43:25 pve systemd[1]: Reached target Graphical Interface.
Jun  9 21:43:25 pve systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jun  9 21:43:25 pve systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Jun  9 21:43:25 pve systemd[1]: Started Update UTMP about System Runlevel Changes.
Jun  9 21:43:48 pve systemd[1]: systemd-fsckd.service: Succeeded.
Jun  9 21:44:00 pve systemd[1]: Starting Proxmox VE replication runner...
Jun  9 21:44:00 pve systemd[1]: pvesr.service: Succeeded.
Jun  9 21:44:00 pve systemd[1]: Started Proxmox VE replication runner.
Jun  9 21:44:02 pve pveupdate[1259]: <root@pam> starting task UPID:pve:0000058F:000012E1:5EDFE682:aptupdate::root@pam:
Jun  9 21:44:10 pve kernel: [   56.269122] EXT4-fs (loop0): 6 orphan inodes deleted
Jun  9 21:44:10 pve kernel: [   56.269123] EXT4-fs (loop0): recovery complete
Jun  9 21:44:10 pve kernel: [   56.275347] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null)
Jun  9 21:44:10 pve kernel: [   56.296925] kauditd_printk_skb: 6 callbacks suppressed
Jun  9 21:44:10 pve kernel: [   56.296926] audit: type=1400 audit(1591731850.024:18): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-101_</var/lib/lxc>" pid=1457 comm="apparmor_parser"
 
And here is the second part, so you have the whole log:

Code:
Jun  9 21:44:10 pve systemd-udevd[1460]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun  9 21:44:10 pve systemd-udevd[1460]: Using default interface naming scheme 'v240'.
Jun  9 21:44:10 pve systemd-udevd[1460]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun  9 21:44:10 pve systemd-udevd[1460]: Could not generate persistent MAC address for fwbr101i0: No such file or directory
Jun  9 21:44:10 pve systemd-udevd[1464]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun  9 21:44:10 pve systemd-udevd[1464]: Using default interface naming scheme 'v240'.
Jun  9 21:44:10 pve systemd-udevd[1464]: Could not generate persistent MAC address for fwpr101p0: No such file or directory
Jun  9 21:44:10 pve systemd-udevd[1455]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun  9 21:44:10 pve systemd-udevd[1455]: Using default interface naming scheme 'v240'.
Jun  9 21:44:10 pve systemd-udevd[1455]: Could not generate persistent MAC address for fwln101i0: No such file or directory
Jun  9 21:44:10 pve kernel: [   56.667192] fwbr101i0: port 1(fwln101i0) entered blocking state
Jun  9 21:44:10 pve kernel: [   56.667193] fwbr101i0: port 1(fwln101i0) entered disabled state
Jun  9 21:44:10 pve kernel: [   56.667238] device fwln101i0 entered promiscuous mode
Jun  9 21:44:10 pve kernel: [   56.667266] fwbr101i0: port 1(fwln101i0) entered blocking state
Jun  9 21:44:10 pve kernel: [   56.667266] fwbr101i0: port 1(fwln101i0) entered forwarding state
Jun  9 21:44:10 pve kernel: [   56.669344] vmbr0: port 2(fwpr101p0) entered blocking state
Jun  9 21:44:10 pve kernel: [   56.669346] vmbr0: port 2(fwpr101p0) entered disabled state
Jun  9 21:44:10 pve kernel: [   56.669389] device fwpr101p0 entered promiscuous mode
Jun  9 21:44:10 pve kernel: [   56.669414] vmbr0: port 2(fwpr101p0) entered blocking state
Jun  9 21:44:10 pve kernel: [   56.669415] vmbr0: port 2(fwpr101p0) entered forwarding state
Jun  9 21:44:10 pve kernel: [   56.671493] fwbr101i0: port 2(veth101i0) entered blocking state
Jun  9 21:44:10 pve kernel: [   56.671495] fwbr101i0: port 2(veth101i0) entered disabled state
Jun  9 21:44:10 pve kernel: [   56.671557] device veth101i0 entered promiscuous mode
Jun  9 21:44:10 pve kernel: [   56.690208] eth0: renamed from vethkcaKKA
Jun  9 21:44:10 pve systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 1458 (lxc-start)
Jun  9 21:44:10 pve systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jun  9 21:44:10 pve systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jun  9 21:44:10 pve kernel: [   56.805048] audit: type=1400 audit(1591731850.532:19): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/proc/sys/kernel/random/boot_id" pid=1458 comm="lxc-start" srcname="/dev/.lxc-boot-id" flags="rw, bind"
Jun  9 21:44:10 pve pvestatd[1268]: modified cpu set for lxc/101: 0
Jun  9 21:44:10 pve kernel: [   56.996316] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jun  9 21:44:10 pve kernel: [   56.996346] fwbr101i0: port 2(veth101i0) entered blocking state
Jun  9 21:44:10 pve kernel: [   56.996347] fwbr101i0: port 2(veth101i0) entered forwarding state
Jun  9 21:44:10 pve pvestatd[1268]: auth key pair too old, rotating..
Jun  9 21:44:10 pve pvestatd[1268]: status update time (38.775 seconds)
Jun  9 21:44:10 pve kernel: [   57.092870] audit: type=1400 audit(1591731850.820:20): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="/usr/bin/man" pid=1740 comm="apparmor_parser"
Jun  9 21:44:10 pve kernel: [   57.092872] audit: type=1400 audit(1591731850.820:21): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="man_filter" pid=1740 comm="apparmor_parser"
Jun  9 21:44:10 pve kernel: [   57.092873] audit: type=1400 audit(1591731850.820:22): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="man_groff" pid=1740 comm="apparmor_parser"
Jun  9 21:44:10 pve kernel: [   57.093956] audit: type=1400 audit(1591731850.820:23): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="/usr/sbin/tcpdump" pid=1743 comm="apparmor_parser"
Jun  9 21:44:10 pve kernel: [   57.094343] audit: type=1400 audit(1591731850.820:24): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="/usr/sbin/mysqld" pid=1741 comm="apparmor_parser"
Jun  9 21:44:10 pve kernel: [   57.096429] audit: type=1400 audit(1591731850.824:25): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="/sbin/dhclient" pid=1738 comm="apparmor_parser"
Jun  9 21:44:10 pve kernel: [   57.096431] audit: type=1400 audit(1591731850.824:26): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1738 comm="apparmor_parser"
Jun  9 21:44:10 pve kernel: [   57.096433] audit: type=1400 audit(1591731850.824:27): apparmor="STATUS" operation="profile_load" label="lxc-101_</var/lib/lxc>//&:lxc-101_<-var-lib-lxc>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=1738 comm="apparmor_parser"
Jun  9 21:44:10 pve pmxcfs[1049]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/ISO_Images: -1
Jun  9 21:44:10 pve pmxcfs[1049]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/S100: -1
Jun  9 21:44:10 pve pmxcfs[1049]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/local-lvm: -1
Jun  9 21:44:10 pve pmxcfs[1049]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/local: -1
Jun  9 21:45:00 pve systemd[1]: Starting Proxmox VE replication runner...
Jun  9 21:45:00 pve systemd[1]: pvesr.service: Succeeded.
Jun  9 21:45:00 pve systemd[1]: Started Proxmox VE replication runner.
Jun  9 21:45:02 pve pveupdate[1423]: update new package list: /var/lib/pve-manager/pkgupdates
Jun  9 21:45:04 pve pveupdate[1259]: <root@pam> end task UPID:pve:0000058F:000012E1:5EDFE682:aptupdate::root@pam: OK
Jun  9 21:45:04 pve systemd[1]: pve-daily-update.service: Succeeded.
Jun  9 21:45:04 pve systemd[1]: Started Daily PVE download activities.
Jun  9 21:45:04 pve systemd[1]: Startup finished in 19.123s (firmware) + 5.320s (loader) + 3.364s (kernel) + 1min 46.957s (userspace) = 2min 14.765s.
Jun  9 21:46:00 pve systemd[1]: Starting Proxmox VE replication runner...
Jun  9 21:46:00 pve systemd[1]: pvesr.service: Succeeded.
Jun  9 21:46:00 pve systemd[1]: Started Proxmox VE replication runner.
Jun  9 21:46:55 pve systemd[1]: Created slice User Slice of UID 0.
Jun  9 21:46:55 pve systemd[1]: Starting User Runtime Directory /run/user/0...
Jun  9 21:46:55 pve systemd[1]: Started User Runtime Directory /run/user/0.
Jun  9 21:46:55 pve systemd[1]: Starting User Manager for UID 0...
Jun  9 21:46:55 pve systemd[2983]: Listening on GnuPG network certificate management daemon.
Jun  9 21:46:55 pve systemd[2983]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Jun  9 21:46:55 pve systemd[2983]: Reached target Paths.
Jun  9 21:46:55 pve systemd[2983]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Jun  9 21:46:55 pve systemd[2983]: Listening on GnuPG cryptographic agent and passphrase cache.
Jun  9 21:46:55 pve systemd[2983]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Jun  9 21:46:55 pve systemd[2983]: Reached target Sockets.
Jun  9 21:46:55 pve systemd[2983]: Reached target Timers.
Jun  9 21:46:55 pve systemd[2983]: Reached target Basic System.
Jun  9 21:46:55 pve systemd[2983]: Reached target Default.
Jun  9 21:46:55 pve systemd[2983]: Startup finished in 28ms.
Jun  9 21:46:55 pve systemd[1]: Started User Manager for UID 0.
Jun  9 21:46:55 pve systemd[1]: Started Session 1 of user root.
Jun  9 21:47:00 pve systemd[1]: Starting Proxmox VE replication runner...
Jun  9 21:47:00 pve systemd[1]: pvesr.service: Succeeded.
Jun  9 21:47:00 pve systemd[1]: Started Proxmox VE replication runner.
Jun  9 21:48:00 pve systemd[1]: Starting Proxmox VE replication runner...
Jun  9 21:48:00 pve systemd[1]: pvesr.service: Succeeded.
Jun  9 21:48:00 pve systemd[1]: Started Proxmox VE replication runner.
Jun  9 21:49:00 pve systemd[1]: Starting Proxmox VE replication runner...
Jun  9 21:49:00 pve systemd[1]: pvesr.service: Succeeded.
Jun  9 21:49:00 pve systemd[1]: Started Proxmox VE replication runner.

Looking forward to your suggestions and thanks a lot in advance!

KR
Jorn
 
Hello,

Please post your network configuration (cat /etc/network/interfaces) and the output of ip a and pveversion -v as well.
 
Hi,

okay, /etc/network/interfaces is modified in the last line but has worked without problems. I'm not sure if this is the best way to enable WoL:

Code:
auto lo
iface lo inet loopback

iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.178.25
    netmask 255.255.255.0
    gateway 192.168.178.1
    bridge_ports enp5s0
    bridge_stp off
    bridge_fd 0
        up ethtool -s enp5s0 wol g
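(As an aside: an up hook like this is a common way to do it, since ethtool's WoL setting does not persist across reboots on many NICs. A quick way to verify it took effect after boot — look for "Wake-on: g":)

Code:
ethtool enp5s0 | grep -i wake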


ip a gives us:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether a8:a1:59:17:bf:48 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a8:a1:59:17:bf:48 brd ff:ff:ff:ff:ff:ff
    inet 192.168.178.25/24 brd 192.168.178.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa1:59ff:fe17:bf48/64 scope link
       valid_lft forever preferred_lft forever
4: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
    link/ether fe:02:8c:0e:f1:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:11:c4:aa:2e:05 brd ff:ff:ff:ff:ff:ff
6: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:8d:af:4d:ba:bc brd ff:ff:ff:ff:ff:ff
7: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
    link/ether 1a:11:c4:aa:2e:05 brd ff:ff:ff:ff:ff:ff


and pveversion -v:

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
Hi Moayad,

I tried to run the command ifreload -a, but unfortunately the answer was -bash: command not found, and as I mentioned before, I am not able to run package installations or updates...
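(For what it's worth, ifreload ships with ifupdown2, while the pveversion output above lists classic ifupdown, which would explain the "command not found". With classic ifupdown, rough equivalents are:)

Code:
systemctl restart networking     # re-run all interface configuration
# or, for just the bridge (briefly drops connectivity over it):
ifdown vmbr0 && ifup vmbr0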

But I removed the line up ethtool -s enp5s0 wol g from the network config, rebooted the system, checked that all changes had been applied successfully, and then tried the web GUI, but I still have no access to the server, and the same goes for SSH...

KR Jorn
 
Hi Moayad,

short update:

I don't know why, but alternating the IP address of my server from XXX.XXX.XXX.25 to XXX.XXX.XXX.40 and then back to XXX.XXX.XXX.25 (I had read about something similar in another thread in this forum), rebooting the system in between each change, made my server work again. I would love to understand why, so if there is a reasonable explanation for that, please let me know =)
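(A sketch of that procedure, inferring from the interfaces config shown earlier in this thread that the XXX placeholders correspond to the 192.168.178.x addresses there:)

Code:
nano /etc/network/interfaces    # change "address 192.168.178.25" to another free IP
reboot
# ...verify access, then change the address back and reboot again.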

Anyway, thank you very much for your time and for guiding me towards a working system! But still... why? :D

KR
Jorn
 
Hello, I think I may be having the same problem as you. I see the "Could not generate persistent MAC address" error in my journalctl output as well.

How did you alternate the IP? Did you just edit /etc/network/interfaces back and forth? I'm unable to access the GUI.
 
