brand new install -- networking not working on reboot

rykr

Aug 22, 2020
I see lots of posts talking about having to do an ifdown vmbr0 followed by an ifup vmbr0 to get networking working. That is exactly my situation. This is a brand new install. I haven't installed anything, no VMs, just booting the host. What could be the problem?
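
For reference, the workaround everyone mentions (and what I have to run after every boot) is something like:

Code:
ifdown vmbr0 && ifup vmbr0
# or, equivalently:
systemctl restart networking.service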
 
Did you maybe choose the wrong NIC during installation?
You can edit /etc/network/interfaces to your liking.
 
I'm having the same issue, but only on one of my servers! It's a Ryzen 1600 on an ASRock AB350 Pro4 motherboard. Even on a FRESH install, the first boot works great. But after a reboot, I lose all network connectivity. I've changed cables, switch ports, etc.

I still have a Windows SSD connected, so when I boot into that instead of PVE, the networking comes back. Then I can reboot back into PVE again with no issues. But once I reboot again, I lose my connection AGAIN.
 

can you post your /etc/hosts and /etc/network/interfaces files?
also the output of the ip a command
 

See below. But I'm fairly certain this is unrelated to any settings; I have double-checked everything several times. Now that I've seen at least two other people mention the exact same problem, I'm thinking it might be a kernel/hardware-related issue.



***/etc/hosts***

127.0.0.1 localhost.localdomain localhost
172.16.1.41 pve2.willyb.me pve2

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


***/etc/network/interfaces***

auto lo
iface lo inet loopback

iface enp37s0 inet manual

auto vmbr0
iface vmbr0 inet static
address 172.16.1.41
netmask 255.255.255.0
gateway 172.16.1.1
bridge_ports enp37s0
bridge_stp off
bridge_fd 0

***ip address***

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enp37s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 70:85:c2:2c:ae:36 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 70:85:c2:2c:ae:36 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.41/24 brd 172.16.1.255 scope global vmbr0
valid_lft forever preferred_lft forever
 
So I got my Windows 10 VM up and running, passing through my existing Windows 10 disk and my GPU. All of that is working well. What is not working: the network on the Proxmox host is down on boot. I have to do a systemctl restart networking.service to bring it up, on every boot of the host.

Also, the Windows VM has no network access. I'm using virtio and I've installed the driver. Inside Windows it's listed as Ethernet 9, it says it is connected and appears to have an IP address, but I can only ping the host. I cannot ping the rest of my network.
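
A couple of host-side checks that should show whether the guest NIC is actually on vmbr0 and whether the physical port is enslaved (the VMID 100 below is just a placeholder):

Code:
qm config 100 | grep ^net     # which bridge the guest NIC uses, e.g. net0: virtio=...,bridge=vmbr0
bridge link show              # the physical port should be listed as a member of vmbr0
ip -br addr show vmbr0        # bridge state and address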

Would love to sort this out.
 
I have this same issue. I built and set up a new server with 6.2-4 and I have to stop and start all networks to get the network running on the PVE host after boot.

I have PVE 6.1-3 running on different hardware without this issue. I'm wondering if I should attempt to install an older version to get around this problem.

Before bouncing network:
Code:
root@pve2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:9f:83:95 brd ff:ff:ff:ff:ff:ff
3: enp11s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:30:48:c8:63:4e brd ff:ff:ff:ff:ff:ff
4: enp6s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:9f:83:94 brd ff:ff:ff:ff:ff:ff
5: enp11s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:30:48:c8:63:4f brd ff:ff:ff:ff:ff:ff
6: enp5s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:9f:83:97 brd ff:ff:ff:ff:ff:ff
7: enp5s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:9f:83:96 brd ff:ff:ff:ff:ff:ff

After bouncing network:
Code:
root@pve2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:9f:83:95 brd ff:ff:ff:ff:ff:ff
3: enp11s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:30:48:c8:63:4e brd ff:ff:ff:ff:ff:ff
4: enp6s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:9f:83:94 brd ff:ff:ff:ff:ff:ff
5: enp11s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:30:48:c8:63:4f brd ff:ff:ff:ff:ff:ff
6: enp5s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:9f:83:97 brd ff:ff:ff:ff:ff:ff
7: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 00:15:17:9f:83:96 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:15:17:9f:83:96 brd ff:ff:ff:ff:ff:ff
    inet 192.168.77.12/24 brd 192.168.77.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:17ff:fe9f:8396/64 scope link
       valid_lft forever preferred_lft forever

/etc/network/interfaces
Code:
iface lo inet loopback

iface enp5s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.77.12
        netmask 255.255.255.0
        gateway 192.168.77.1
        bridge_ports enp5s0f1
        bridge_stp off
        bridge_fd 0

iface enp11s0f0 inet manual

iface enp6s0f0 inet manual

iface enp11s0f1 inet manual

iface enp6s0f1 inet manual

iface enp5s0f0 inet manual

/etc/hosts
Code:
127.0.0.1 localhost.localdomain localhost
192.168.77.12 pve2.mynetwork.local pve2

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
 
hi,

sorry for not responding before.

could you take a look at the logs, e.g. /var/log/syslog, dmesg and journalctl?

around the time of boot will be interesting. maybe it could be some driver issue?
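
for the boot window, something like this should be enough:

Code:
journalctl -b     # everything from the current boot
dmesg -T          # kernel ring buffer with human-readable timestamps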
 
I appreciate your reply. I'm curious about your line of thinking. Wouldn't a failed driver prevent the network from working at all? The network just fails to start during boot, but can start post-boot. It seems as if it's trying to start before dependencies are available.

Notable syslog events:
Code:
Sep 02 09:57:32 pve2 kernel: megaraid_sas 0000:02:00.0: Failed to transition controller to ready from megasas_init_fw!
Sep 02 09:57:33 pve2 kernel: megaraid_sas 0000:02:00.0: ADP_RESET_GEN2: HostDiag=1e2
Sep 02 09:58:10 pve2 systemd[1]: ifupdown-pre.service: Main process exited, code=exited, status=1/FAILURE
Sep 02 09:58:10 pve2 systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.
Sep 02 09:58:10 pve2 systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
Sep 02 09:58:10 pve2 systemd[1]: Dependency failed for Raise network interfaces.
Sep 02 09:58:10 pve2 systemd[1]: networking.service: Job networking.service/start failed with result 'dependency'.
Sep 02 09:58:10 pve2 systemd[1]: Reached target Network.
Sep 02 09:58:10 pve2 systemd[1]: Reached target Network is Online.
Sep 02 09:58:10 pve2 systemd[1]: Starting iSCSI initiator daemon (iscsid)...
Sep 02 09:58:10 pve2 systemd[1]: systemd-udev-settle.service: Main process exited, code=exited, status=1/FAILURE
Sep 02 09:58:10 pve2 systemd[1]: systemd-udev-settle.service: Failed with result 'exit-code'.
Sep 02 09:58:10 pve2 systemd[1]: Failed to start udev Wait for Complete Device Initialization.
Sep 02 09:58:10 pve2 systemd[1]: Dependency failed for Import ZFS pools by cache file.
Sep 02 09:58:10 pve2 systemd[1]: zfs-import-cache.service: Job zfs-import-cache.service/start failed with result 'dependency'.
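
If I'm reading this right, both ifupdown-pre.service and systemd-udev-settle.service essentially just run udevadm settle; when that times out (the RAID controller is still resetting at this point, and the roughly two-minute gap from the early boot messages looks like the default 120 s settle timeout), "Raise network interfaces" is skipped as a failed dependency and networking.service never starts. The commands I'm using to dig into those units (nothing box-specific, the timeout value is just an example):

Code:
systemctl status ifupdown-pre.service systemd-udev-settle.service
journalctl -b -u ifupdown-pre.service -u systemd-udev-settle.service -u networking.service
udevadm settle --timeout=180; echo $?    # non-zero exit means settle timed out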

Code:
Sep 02 09:58:51 pve2 systemd-udevd[669]: Using default interface naming scheme 'v240'.
Sep 02 09:58:51 pve2 systemd-udevd[669]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Sep 02 09:58:51 pve2 systemd-udevd[669]: Could not generate persistent MAC address for vmbr0: No such file or directory
Sep 02 09:58:51 pve2 kernel: vmbr0: port 1(enp5s0f1) entered blocking state
Sep 02 09:58:51 pve2 kernel: vmbr0: port 1(enp5s0f1) entered disabled state
Sep 02 09:58:51 pve2 kernel: device enp5s0f1 entered promiscuous mode

Code:
Sep 02 09:56:09 pve2 kernel: e1000e 0000:06:00.0 eth0: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:95
Sep 02 09:56:09 pve2 kernel: e1000e 0000:06:00.0 eth0: Intel(R) PRO/1000 Network Connection
Sep 02 09:56:09 pve2 kernel: e1000e 0000:06:00.0 eth0: MAC: 0, PHY: 4, PBA No: D72468-003
Sep 02 09:56:09 pve2 kernel: igb 0000:0b:00.0: added PHC on eth1
Sep 02 09:56:09 pve2 kernel: igb 0000:0b:00.0: eth1: (PCIe:2.5Gb/s:Width x4) 00:30:48:c8:63:4e
Sep 02 09:56:09 pve2 kernel: igb 0000:0b:00.0: eth1: PBA No: Unknown
Sep 02 09:56:09 pve2 kernel: e1000e 0000:06:00.1 eth2: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:94
Sep 02 09:56:09 pve2 kernel: e1000e 0000:06:00.1 eth2: Intel(R) PRO/1000 Network Connection
Sep 02 09:56:10 pve2 kernel: e1000e 0000:06:00.1 eth2: MAC: 0, PHY: 4, PBA No: D72468-003
Sep 02 09:56:10 pve2 kernel: igb 0000:0b:00.1: added PHC on eth3
Sep 02 09:56:10 pve2 kernel: igb 0000:0b:00.1: eth3: (PCIe:2.5Gb/s:Width x4) 00:30:48:c8:63:4f
Sep 02 09:56:10 pve2 kernel: igb 0000:0b:00.1: eth3: PBA No: Unknown
Sep 02 09:56:10 pve2 kernel: igb 0000:0b:00.1 enp11s0f1: renamed from eth3
Sep 02 09:56:10 pve2 kernel: igb 0000:0b:00.0 enp11s0f0: renamed from eth1
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.0 eth1: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:97
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.0 eth1: Intel(R) PRO/1000 Network Connection
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.0 eth1: MAC: 0, PHY: 4, PBA No: D72468-003
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.1 eth3: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:96
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.1 eth3: Intel(R) PRO/1000 Network Connection
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.1 eth3: MAC: 0, PHY: 4, PBA No: D72468-003
Sep 02 09:56:10 pve2 kernel: e1000e 0000:06:00.0 enp6s0f0: renamed from eth0
Sep 02 09:56:10 pve2 kernel: e1000e 0000:06:00.1 enp6s0f1: renamed from eth2
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.1 enp5s0f1: renamed from eth3
Sep 02 09:56:10 pve2 kernel: e1000e 0000:05:00.0 enp5s0f0: renamed from eth1

Here is everything "eth" in dmesg:
Code:
root@pve2:~# dmesg | grep eth
[    2.143818] e1000e 0000:06:00.0 eth0: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:95
[    2.143820] e1000e 0000:06:00.0 eth0: Intel(R) PRO/1000 Network Connection
[    2.143901] e1000e 0000:06:00.0 eth0: MAC: 0, PHY: 4, PBA No: D72468-003
[    2.163760] igb 0000:0b:00.0: added PHC on eth1
[    2.163763] igb 0000:0b:00.0: eth1: (PCIe:2.5Gb/s:Width x4) 00:30:48:c8:63:4e
[    2.163765] igb 0000:0b:00.0: eth1: PBA No: Unknown
[    2.303721] e1000e 0000:06:00.1 eth2: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:94
[    2.303723] e1000e 0000:06:00.1 eth2: Intel(R) PRO/1000 Network Connection
[    2.303804] e1000e 0000:06:00.1 eth2: MAC: 0, PHY: 4, PBA No: D72468-003
[    2.347791] igb 0000:0b:00.1: added PHC on eth3
[    2.347794] igb 0000:0b:00.1: eth3: (PCIe:2.5Gb/s:Width x4) 00:30:48:c8:63:4f
[    2.347796] igb 0000:0b:00.1: eth3: PBA No: Unknown
[    2.348891] igb 0000:0b:00.1 enp11s0f1: renamed from eth3
[    2.363589] igb 0000:0b:00.0 enp11s0f0: renamed from eth1
[    2.467719] e1000e 0000:05:00.0 eth1: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:97
[    2.467721] e1000e 0000:05:00.0 eth1: Intel(R) PRO/1000 Network Connection
[    2.467801] e1000e 0000:05:00.0 eth1: MAC: 0, PHY: 4, PBA No: D72468-003
[    2.631787] e1000e 0000:05:00.1 eth3: (PCI Express:2.5GT/s:Width x4) 00:15:17:9f:83:96
[    2.631789] e1000e 0000:05:00.1 eth3: Intel(R) PRO/1000 Network Connection
[    2.631869] e1000e 0000:05:00.1 eth3: MAC: 0, PHY: 4, PBA No: D72468-003
[    2.633036] e1000e 0000:06:00.1 enp6s0f1: renamed from eth2
[    2.683575] e1000e 0000:05:00.1 enp5s0f1: renamed from eth3
[    2.719512] e1000e 0000:06:00.0 enp6s0f0: renamed from eth0
[    2.759553] e1000e 0000:05:00.0 enp5s0f0: renamed from eth1
 
Installed 6.1-2 on this hardware and got the same result. Since I have 6.1 installed on other hardware without issues, I'm assuming that there's something about this hardware.

I will continue to investigate.
 
Wouldn't a failed driver prevent the network from working at all?

well i was thinking of a specific case, namely the e1000 driver bug [0], since i know about that one. but that's an intermittent drop that happens every now and then, which i suppose isn't your case.

Since I have 6.1 installed on other hardware without issues, I'm assuming that there's something about this hardware.
how is this hardware different from the other?



[0]: https://forum.proxmox.com/threads/e1000-driver-hang.58284/#post-279338
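
if i remember correctly, the workaround usually mentioned in that thread is to disable some offloading on the affected NIC, something like (the interface name here is just an example):

Code:
ethtool -K enp5s0f1 tso off gso off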
 
I'm using a Z490 board with an I225-V LAN port. It works fine as soon as I do a systemctl restart networking.service. No idea why it won't come up at boot.
 
how is this hardware different from the other?
My first Proxmox server is old consumer-grade hardware with a single processor and a single on-board NIC. This new (to me) hardware is server-grade, with multiple processors and multiple NICs (details below).

Since yesterday, I've successfully installed Kubuntu and Arch Linux on this hardware and had no issues with the network interfaces after installation. My working hypothesis is that this problem is specific to the Proxmox installation process combined with a subset of hardware.

The failing hardware in detail:
SuperMicro X8DT3-LN4F motherboard
2x Intel Xeon X5650
Intel PRO/1000 PT Quad Port Server Adapter (39Y6138)
Adaptec 2405
System drive is SATA (not on the RAID)

As an aside, it looks like this issue isn't new and may be related to multiple NICs:
https://desantolo.com/2017/02/troub...issues-after-fresh-install-of-proxmox-ve-4-4/
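
If it does turn out to be naming-related (multiple NICs being enumerated in a different order between boots), one mitigation I'm aware of is pinning the name to the MAC with a systemd .link file; a sketch, using the MAC of enp5s0f1 from my output above:

Code:
# /etc/systemd/network/10-enp5s0f1.link
[Match]
MACAddress=00:15:17:9f:83:96

[Link]
Name=enp5s0f1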

My next step is to install Debian in the same manner as Kubuntu and Arch to see if it works.
 
Debian 10.5.0 installs and the network interface works after reboot.

I installed the Proxmox packages on top of this Debian install and the network continues to start after boot. So I think the problem is the result of the Proxmox ISO installer and not the Proxmox packages themselves.
 
Have you also installed the Proxmox kernel on top of Debian?
 
I assume so, but maybe you can tell me. I followed these instructions:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
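
(The gist of those steps, after adding the Proxmox repository and key as described on that page, is essentially:)

Code:
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi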

Is there a way to verify?

uname -ar or pveversion -v

I installed the Proxmox packages on top of this Debian install and the network continues to start after boot. So I think the problem is the result of the Proxmox ISO installer and not the Proxmox packages themselves.
thanks for looking into it! this is quite surprising since the installer doesn't really do anything special either. i've read the link you sent, will try to reproduce it here and let you know.
 
Code:
$ uname -ar
Linux [redacted] 5.4.60-1-pve #1 SMP PVE 5.4.60-1 (Mon, 31 Aug 2020 10:36:22 +0200) x86_64 GNU/Linux

Code:
$ pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.4.60-1-pve: 5.4.60-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-1
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-10
pve-cluster: 6.1-8
pve-container: 3.1-13
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-13
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
 

yes, you have the pve kernel installed (note the -pve suffix at the end of the kernel version)

so it does seem to be an installer issue
 
