[SOLVED] No GUI after network change

janse

Member
Feb 11, 2021
Hey everyone,
I got my new firewall/router today and moved my Proxmox host into a new subnet. I changed the IP address in Proxmox from 192.168.188.99 to 192.168.2.99. The host is pingable and I can connect to it over SSH, but the web GUI is not reachable. Can someone help me figure this out?

Thanks in advance!
 
This is my /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

iface enp4s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.99
        subnetmask 255.255.255.0
        gateway 192.168.2.1
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0

This is my /etc/hosts:

Code:
192.168.2.99 pve

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
 
This is the output of ip addr:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 54:04:a6:67:82:5c brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 54:04:a6:67:82:5c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.99/32 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5604:a6ff:fe67:825c/64 scope link
       valid_lft forever preferred_lft forever
 
subnetmask 255.255.255.0
I think the correct config keyword would be 'netmask'; however, nowadays writing the IP/netmask in CIDR notation is preferred:
Code:
address 192.168.2.99/24

See the output of your ip addr: the address came up with a /32 host mask instead of the intended /24:
inet 192.168.2.99/32
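For reference, a minimal sketch of how the corrected vmbr0 stanza could look, assuming the rest of the file stays as you posted it. Apply it with a reboot, or with `ifreload -a` if ifupdown2 is installed:
Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.2.99/24
        gateway 192.168.2.1
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0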

I hope this helps!
 
Hey! Thank you for the tip.
I changed it to /24 and rebooted, but I still cannot access the GUI. In another thread someone mentioned `tail -f /var/log/pveproxy/access.log`.
This is its output:
Code:
root@127:~# tail -f /var/log/pveproxy/access.log
192.168.188.84 - root@pam [11/02/2021:14:39:37 +0100] "GET /api2/json/cluster/resources HTTP/1.1" 200 1029
192.168.188.84 - root@pam [11/02/2021:14:39:39 +0100] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1125
192.168.188.84 - root@pam [11/02/2021:14:39:40 +0100] "GET /api2/extjs/nodes/pve/network?_dc=1613050820682 HTTP/1.1" 200 898
192.168.188.84 - root@pam [11/02/2021:14:39:40 +0100] "GET /api2/json/cluster/resources HTTP/1.1" 200 1045
192.168.188.84 - root@pam [11/02/2021:14:39:41 +0100] "GET /api2/json/nodes/pve/status HTTP/1.1" 200 704
192.168.188.84 - root@pam [11/02/2021:14:39:43 +0100] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1143
192.168.188.84 - root@pam [11/02/2021:14:39:44 +0100] "GET /api2/json/cluster/resources HTTP/1.1" 200 1050
192.168.188.84 - root@pam [11/02/2021:14:39:46 +0100] "PUT /api2/extjs/nodes/pve/network HTTP/1.1" 200 89
192.168.188.84 - root@pam [11/02/2021:14:39:46 +0100] "GET /api2/json/nodes/pve/tasks/UPID%3Apve%3A0000111F%3A000108ED%3A602533A2%3Asrvreload%3Anetworking%3Aroot%40pam%3A/status HTTP/1.1" 200 222
192.168.188.84 - root@pam [11/02/2021:14:39:46 +0100] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1171
^C

If I interpret it correctly, my connection requests are not even reaching the web service (the logged requests above are all from my old subnet).
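A quick way to check whether anything answers on port 8006 would be something like the following (a sketch, assuming curl and nc are available):
Code:
# on the PVE host itself, ignoring the self-signed certificate
curl -k https://192.168.2.99:8006
# from a client in the new subnet, test plain TCP reachability
nc -vz 192.168.2.99 8006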
 
I think I found something... can you have a look at this, @Stoiko Ivanov?
Code:
root@127:~# netstat -an | grep 8006
tcp        2      0 0.0.0.0:8006            0.0.0.0:*               LISTEN
tcp      514      0 192.168.2.99:8006       192.168.1.10:62522      CLOSE_WAIT
 
That shows that pveproxy is listening on port 8006 (as it should).

* do you have any firewall rules configured? (`iptables-save`)
* do you see any problematic messages in the host's journal? (`journalctl --since '2021-02-11'`)
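If the full journal is too noisy, a narrower query limited to the relevant units might be easier to read (a sketch):
Code:
journalctl -u pveproxy -u pve-cluster --since '2021-02-11'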
 
I have not configured anything in the firewall. Here is the output of `iptables-save`:
Code:
root@127:~# iptables-save
# Generated by iptables-save v1.8.2 on Thu Feb 11 16:48:44 2021
*raw
:PREROUTING ACCEPT [3321:860996]
:OUTPUT ACCEPT [1768:160293]
COMMIT
# Completed on Thu Feb 11 16:48:44 2021
# Generated by iptables-save v1.8.2 on Thu Feb 11 16:48:44 2021
*filter
:INPUT ACCEPT [1928:370847]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1803:163686]
COMMIT
# Completed on Thu Feb 11 16:48:44 2021
root@127:~#

I don't know much about the output from journalctl, but I can't see anything network-related in there:
Code:
root@127:~# journalctl --since '2021-02-11'
-- Logs begin at Thu 2021-02-11 16:08:11 CET, end at Thu 2021-02-11 16:50:41 CET. --
Feb 11 16:08:11 127.0.0.1localhost kernel: Linux version 5.4.78-2-pve (build@pve) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP PVE 5.4.78-2 (Thu, 03 Dec 2020 14:26:17 +0100) ()
Feb 11 16:08:11 127.0.0.1localhost kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-5.4.78-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on
Feb 11 16:08:11 127.0.0.1localhost kernel: KERNEL supported cpus:
Feb 11 16:08:11 127.0.0.1localhost kernel:   Intel GenuineIntel
Feb 11 16:08:11 127.0.0.1localhost kernel:   AMD AuthenticAMD
Feb 11 16:08:11 127.0.0.1localhost kernel:   Hygon HygonGenuine
Feb 11 16:08:11 127.0.0.1localhost kernel:   Centaur CentaurHauls
Feb 11 16:08:11 127.0.0.1localhost kernel:   zhaoxin   Shanghai
Feb 11 16:08:11 127.0.0.1localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 11 16:08:11 127.0.0.1localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 11 16:08:11 127.0.0.1localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 11 16:08:11 127.0.0.1localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 11 16:08:11 127.0.0.1localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-provided physical RAM map:
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009d7ff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x000000000009d800-0x000000000009ffff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000001fffffff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x0000000020000000-0x00000000201fffff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x0000000020200000-0x000000003fffffff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000401fffff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x0000000040200000-0x00000000bad29fff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000bad2a000-0x00000000bad85fff] ACPI NVS
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000bad86000-0x00000000bada5fff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000bada6000-0x00000000badb6fff] ACPI NVS
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000badb7000-0x00000000badcdfff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000badce000-0x00000000badcffff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000badd0000-0x00000000badd9fff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000badda000-0x00000000bade3fff] ACPI NVS
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000bade4000-0x00000000bae3dfff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000bae3e000-0x00000000bae80fff] ACPI NVS
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000bae81000-0x00000000baffffff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000bb800000-0x00000000bf9fffff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fdfffff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: NX (Execute Disable) protection: active
Feb 11 16:08:11 127.0.0.1localhost kernel: SMBIOS 2.6 present.
Feb 11 16:08:11 127.0.0.1localhost kernel: DMI: ASUSTeK Computer INC. CM6630_CM6730_CM6830./CM6630_CM6730_CM6830., BIOS 0505 07/18/2011
Feb 11 16:08:11 127.0.0.1localhost kernel: tsc: Fast TSC calibration using PIT
Feb 11 16:08:11 127.0.0.1localhost kernel: tsc: Detected 3392.110 MHz processor
Feb 11 16:08:11 127.0.0.1localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 11 16:08:11 127.0.0.1localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 11 16:08:11 127.0.0.1localhost kernel: last_pfn = 0x23fe00 max_arch_pfn = 0x400000000
Feb 11 16:08:11 127.0.0.1localhost kernel: MTRR default type: uncachable
Feb 11 16:08:11 127.0.0.1localhost kernel: MTRR fixed ranges enabled:
Feb 11 16:08:11 127.0.0.1localhost kernel:   00000-9FFFF write-back
Feb 11 16:08:11 127.0.0.1localhost kernel:   A0000-BFFFF uncachable
Feb 11 16:08:11 127.0.0.1localhost kernel:   C0000-D7FFF write-protect
Feb 11 16:08:11 127.0.0.1localhost kernel:   D8000-E7FFF uncachable
Feb 11 16:08:11 127.0.0.1localhost kernel:   E8000-FFFFF write-protect
Feb 11 16:08:11 127.0.0.1localhost kernel: MTRR variable ranges enabled:
Feb 11 16:08:11 127.0.0.1localhost kernel:   0 base 000000000 mask E00000000 write-back
Feb 11 16:08:11 127.0.0.1localhost kernel:   1 base 200000000 mask FC0000000 write-back
Feb 11 16:08:11 127.0.0.1localhost kernel:   2 base 0BB800000 mask FFF800000 uncachable
Feb 11 16:08:11 127.0.0.1localhost kernel:   3 base 0BC000000 mask FFC000000 uncachable
Feb 11 16:08:11 127.0.0.1localhost kernel:   4 base 0C0000000 mask FC0000000 uncachable
Feb 11 16:08:11 127.0.0.1localhost kernel:   5 base 23FE00000 mask FFFE00000 uncachable
Feb 11 16:08:11 127.0.0.1localhost kernel:   6 disabled
Feb 11 16:08:11 127.0.0.1localhost kernel:   7 disabled
Feb 11 16:08:11 127.0.0.1localhost kernel:   8 disabled
 
127.0.0.1localhost
Seems something is off with your hostname; check /etc/hostname and /etc/hosts.

You need to make sure that `ping -c1 $(uname -n)` runs successfully.
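For reference, one way to set the hostname consistently (a sketch, assuming the node is meant to be called pve):
Code:
# writes the short name to /etc/hostname and applies it immediately
hostnamectl set-hostname pve
# /etc/hosts must then map that name to the host's address, e.g.:
#   192.168.2.99 pve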
 
I also found these errors:
Code:
root@127:~# service pve-cluster status
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2021-02-11 16:08:22 CET; 58min ago
  Process: 993 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
 Main PID: 999 (pmxcfs)
    Tasks: 6 (limit: 4915)
   Memory: 28.1M
   CGroup: /system.slice/pve-cluster.service
           └─999 /usr/bin/pmxcfs

Feb 11 16:08:21 127.0.0.1localhost pmxcfs[999]: [dcdb] crit: cpg_initialize failed: 2
Feb 11 16:08:21 127.0.0.1localhost pmxcfs[999]: [dcdb] crit: can't initialize service
Feb 11 16:08:21 127.0.0.1localhost pmxcfs[999]: [status] crit: cpg_initialize failed: 2
Feb 11 16:08:21 127.0.0.1localhost pmxcfs[999]: [status] crit: can't initialize service
Feb 11 16:08:22 127.0.0.1localhost systemd[1]: Started The Proxmox VE cluster filesystem.
Feb 11 16:08:27 127.0.0.1localhost pmxcfs[999]: [status] notice: update cluster info (cluster name  cluster, version = 2)
Feb 11 16:08:27 127.0.0.1localhost pmxcfs[999]: [dcdb] notice: members: 1/999
Feb 11 16:08:27 127.0.0.1localhost pmxcfs[999]: [dcdb] notice: all data is up to date
Feb 11 16:08:27 127.0.0.1localhost pmxcfs[999]: [status] notice: members: 1/999
Feb 11 16:08:27 127.0.0.1localhost pmxcfs[999]: [status] notice: all data is up to date
root@127:~# service pve-manager status
● pve-guests.service - PVE guests
   Loaded: loaded (/lib/systemd/system/pve-guests.service; enabled; vendor preset: enabled)
   Active: activating (start) since Thu 2021-02-11 16:08:25 CET; 58min ago
  Process: 1081 ExecStartPre=/usr/share/pve-manager/helpers/pve-startall-delay (code=exited, status=0/SUCCESS)
 Main PID: 1082 (pvesh)
    Tasks: 2 (limit: 4915)
   Memory: 111.9M
   CGroup: /system.slice/pve-guests.service
           ├─1082 /usr/bin/perl /usr/bin/pvesh --nooutput create /nodes/localhost/startall
           └─1083 task UPID:127:0000043B:00000795:6025486A:startall::root@pam:

Feb 11 16:08:25 127.0.0.1localhost systemd[1]: Starting PVE guests...
Feb 11 16:08:26 127.0.0.1localhost pve-guests[1082]: <root@pam> starting task UPID:127:0000043B:00000795:6025486A:startall::root@pam:
Feb 11 16:08:26 127.0.0.1localhost pvesh[1082]: waiting for quorum ...
root@127:~# service pvenetcommit status
● pvenetcommit.service - Commit Proxmox VE network changes
   Loaded: loaded (/lib/systemd/system/pvenetcommit.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2021-02-11 16:08:14 CET; 59min ago
  Process: 627 ExecStartPre=/bin/rm -f /etc/openvswitch/conf.db (code=exited, status=0/SUCCESS)
  Process: 633 ExecStartPre=/bin/mv /etc/network/interfaces.new /etc/network/interfaces (code=exited, status=1/FAILURE)
  Process: 638 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 638 (code=exited, status=0/SUCCESS)

Feb 11 16:08:14 127.0.0.1localhost systemd[1]: Starting Commit Proxmox VE network changes...
Feb 11 16:08:14 127.0.0.1localhost mv[633]: /bin/mv: cannot stat '/etc/network/interfaces.new': No such file or directory
Feb 11 16:08:14 127.0.0.1localhost systemd[1]: Started Commit Proxmox VE network changes.
root@127:~# service pveproxy status
● pveproxy.service - PVE API Proxy Server
   Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2021-02-11 16:08:24 CET; 1h 0min ago
  Process: 1068 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
  Process: 1070 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
 Main PID: 1072 (pveproxy)
    Tasks: 4 (limit: 4915)
   Memory: 135.2M
   CGroup: /system.slice/pveproxy.service
           ├─ 1072 pveproxy
           ├─13617 pveproxy worker
           ├─13618 pveproxy worker
           └─13619 pveproxy worker

Feb 11 17:08:33 127.0.0.1localhost pveproxy[1072]: starting 2 worker(s)
Feb 11 17:08:33 127.0.0.1localhost pveproxy[1072]: worker 13617 started
Feb 11 17:08:33 127.0.0.1localhost pveproxy[1072]: worker 13618 started
Feb 11 17:08:33 127.0.0.1localhost pveproxy[13617]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1775.
Feb 11 17:08:33 127.0.0.1localhost pveproxy[13618]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1775.
Feb 11 17:08:33 127.0.0.1localhost pveproxy[13601]: worker exit
Feb 11 17:08:33 127.0.0.1localhost pveproxy[1072]: worker 13601 finished
Feb 11 17:08:33 127.0.0.1localhost pveproxy[1072]: starting 1 worker(s)
Feb 11 17:08:33 127.0.0.1localhost pveproxy[1072]: worker 13619 started
Feb 11 17:08:33 127.0.0.1localhost pveproxy[13619]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1775.
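If /etc/pve/local/pve-ssl.key is still missing once the hostname is fixed, regenerating the node certificates should recreate it; a sketch (pvecm updatecerts is the same tool the pveproxy unit runs in its ExecStartPre above):
Code:
pvecm updatecerts --force
systemctl restart pveproxy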
 
check /etc/hostname and /etc/hosts
/etc/hostname
Code:
127.0.0.1 localhost
192.168.2.99 pve pve.server
/etc/hosts
Code:
192.168.2.99 pve

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
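Looking at both files, the hosts-style entries appear to have ended up in /etc/hostname, which is how the hostname became '127.0.0.1localhost'. A corrected pair might look like this (assuming the node name pve):
Code:
# /etc/hostname -- the short name only, one line
pve

# /etc/hosts -- name resolution, including localhost
127.0.0.1 localhost
192.168.2.99 pve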
 
After fixing /etc/hostname, it now runs successfully:
Code:
root@pve:~# ping -c1 $(uname -n)
PING pve (192.168.2.99) 56(84) bytes of data.
64 bytes from pve (192.168.2.99): icmp_seq=1 ttl=64 time=0.054 ms
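With name resolution working again, restarting the PVE services (or simply rebooting) should bring the GUI back; a sketch:
Code:
systemctl restart pve-cluster pvedaemon pveproxy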
 
