Slow Internet Performance in the VM

bferrell

Well-Known Member
Nov 16, 2018
I'm having a similar difficulty to the one in this thread, and I'm hoping someone can help me with it.

I have a cluster of Dell servers, each with 512G of RAM, and they are not stressed (screenshot at bottom). Each has a 1G NIC for corosync, a 10G NIC for image traffic (VLAN 101), and a 10G NIC for VM traffic (VLAN 100 - config below).

iperf from both the host and the client is good (see below), near the 10G limit. I have upgraded to AltaFiber 2G down/1G up connectivity, and I regularly get that through my Unifi hardware. I'm using the VirtIO network driver for the VM, but speeds are quite low: speedtest-cli on both the host and the VM comes in well below the plan. The host gets nearly 1G down, the client about half that, and both are limited to about 400M up. Web-based speedtests on the client are significantly worse (149 Mbps both ways). How can I improve performance?


speedtest.png
pveversion -v
root@svr-04:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.60-2-pve: 5.15.60-2
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-4.15: 5.4-8
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

Host Networking
root@svr-04:/etc/network# more interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp4s0f0
iface enp4s0f0 inet manual

auto eno1
iface eno1 inet static
address 192.168.102.14/24

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto enp4s0f1
iface enp4s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.100.14/24
gateway 192.168.100.1
bridge-ports enp4s0f0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 192.168.101.14/24
bridge-ports enp4s0f1
bridge-stp off
bridge-fd 0

HOST iperf
Connecting to host 192.168.100.39, port 5201
[ 5] local 192.168.100.14 port 35440 connected to 192.168.100.39 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.07 GBytes 9.22 Gbits/sec 278 1.01 MBytes
[ 5] 1.00-2.00 sec 1.08 GBytes 9.28 Gbits/sec 277 1004 KBytes
[ 5] 2.00-3.00 sec 1.08 GBytes 9.31 Gbits/sec 151 980 KBytes
[ 5] 3.00-4.00 sec 1.09 GBytes 9.32 Gbits/sec 56 983 KBytes
[ 5] 4.00-5.00 sec 1.07 GBytes 9.19 Gbits/sec 85 916 KBytes
[ 5] 5.00-6.00 sec 1.08 GBytes 9.28 Gbits/sec 27 598 KBytes
[ 5] 6.00-7.00 sec 1.08 GBytes 9.29 Gbits/sec 77 1.35 MBytes
[ 5] 7.00-8.00 sec 1.08 GBytes 9.31 Gbits/sec 61 981 KBytes
[ 5] 8.00-9.00 sec 1.09 GBytes 9.32 Gbits/sec 38 1.01 MBytes
[ 5] 9.00-10.00 sec 1.08 GBytes 9.31 Gbits/sec 10 1.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.8 GBytes 9.28 Gbits/sec 1060 sender
[ 5] 0.00-10.04 sec 10.8 GBytes 9.24 Gbits/sec receiver

Host speedtest-cli
root@svr-04:~# speedtest-cli --server 14757
Retrieving speedtest.net configuration...
Testing from Cincinnati Bell (74.83.94.146)...
Retrieving speedtest.net server list...
Retrieving information for the selected server...
Hosted by Pineville Fiber (Pineville, NC) [564.44 km]: 29.822 ms
Testing download speed................................................................................
Download: 923.29 Mbit/s
Testing upload speed......................................................................................................
Upload: 308.68 Mbit/s

Client config (qm config 701)
root@svr-04:~# qm config 701
agent: 1
bootdisk: scsi0
cores: 4
memory: 16384
name: Tautulli
net0: virtio=66:6F:8E:08:21:17,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: BeforePythonFixes
scsi0: FN3_IMAGES:701/vm-701-disk-0.qcow2,discard=on,iothread=1,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=a4fec121-6a55-4fc8-9973-3fd08abfead0
sockets: 4
vga: vmware

Client network
bferrell@tautulli:/etc/netplan$ more 50-cloud-init.yaml
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ens18:
            dhcp4: true
    version: 2

Client iperf (Ubuntu 22.04)
Connecting to host 192.168.100.39, port 5201
[ 5] local 192.168.100.137 port 36338 connected to 192.168.100.39 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1015 MBytes 8.51 Gbits/sec 951 979 KBytes
[ 5] 1.00-2.00 sec 1.03 GBytes 8.85 Gbits/sec 7 913 KBytes
[ 5] 2.00-3.00 sec 1.05 GBytes 9.02 Gbits/sec 0 1.47 MBytes
[ 5] 3.00-4.00 sec 1005 MBytes 8.43 Gbits/sec 0 1.70 MBytes
[ 5] 4.00-5.00 sec 780 MBytes 6.54 Gbits/sec 0 1.81 MBytes
[ 5] 5.00-6.00 sec 931 MBytes 7.81 Gbits/sec 16 1.14 MBytes
[ 5] 6.00-7.00 sec 1015 MBytes 8.52 Gbits/sec 42 949 KBytes
[ 5] 7.00-8.00 sec 1004 MBytes 8.41 Gbits/sec 0 1.33 MBytes
[ 5] 8.00-9.00 sec 1024 MBytes 8.59 Gbits/sec 121 1.10 MBytes
[ 5] 9.00-10.00 sec 975 MBytes 8.18 Gbits/sec 297 1.18 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 9.65 GBytes 8.29 Gbits/sec 1434 sender
[ 5] 0.00-10.04 sec 9.65 GBytes 8.26 Gbits/sec receiver

Client speedtest-cli
bferrell@tautulli:~$ speedtest-cli --server 14757
Retrieving speedtest.net configuration...
Testing from Cincinnati Bell (74.83.94.146)...
Retrieving speedtest.net server list...
Retrieving information for the selected server...
Hosted by Pineville Fiber (Pineville, NC) [564.44 km]: 30.316 ms
Testing download speed................................................................................
Download: 405.70 Mbit/s
Testing upload speed......................................................................................................
Upload: 366.38 Mbit/s

Host Load
1673898252738.png
 
I was GOING to bring out the old "measure, don't guess" but you've done an awesome job of measuring already! The only thing you haven't measured is the bandwidth available between the guest VM and another machine on your network.

You should also check that you haven't done anything silly in /etc/network/interfaces - feel free to look at my proxmox playbook; here's a documented interfaces file you can refer to:

https://github.com/xrobau/ansible-proxmox-host/blob/master/example.interfaces.std

To answer your second post - no. That is not typical. I regularly get 10gbit wire speed from client VMs. Something is fiddling with/dropping traffic.
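
For reference, a minimal sketch of that guest-to-LAN measurement with iperf3, assuming another machine on the LAN as the server (192.168.100.39 is used that way elsewhere in this thread):

Bash:
# on the other LAN machine
iperf3 -s
# inside the guest VM: upload direction
iperf3 -c 192.168.100.39
# download direction (server sends, guest receives)
iperf3 -c 192.168.100.39 -R
# several parallel streams, in case a single TCP stream is the limit
iperf3 -c 192.168.100.39 -P 4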
 
Well, to be fair, if you look at the VM client's iperf to my M1 Mac on the LAN, I get pretty close to line speed; I just don't get it to the WAN with Speedtest on the web or the CLI. That's why I was trying to make the distinction: the host and the VM are both able to pass traffic at near the 10G line rate, they just don't do it to the WAN. When you say line speed I assume you mean iperf, but do you get that through the WAN as well?

But the fact that it can iperf at over 8G makes me think the network interfaces are, at least mostly, configured OK - though I'm not a Linux network expert, and my Unifi gear handles VLANs a bit oddly compared to the rest of the world. I haven't set jumbo frames, as every time I've tried it has caused issues, and generally I haven't found that to hurt me (with the possible exception of what I'm working on here).

Looking at your guide, I think my setup is pretty similar. I'm not making the Proxmox host or the guests VLAN aware; the host has 3 interfaces on VLANs 100/101/102, where 100 and 101 are 10G NICs (100 for general use, 101 for the FreeNAS storage network) and 102 is 1G for cluster management. They're all /24s, manually configured, with 100/101 bridged to the router, but not bonded because they're already large pipes. I might have a mistake here, but if so it's not obvious to me. iperf is passing the traffic, and I don't have layer 3 switching, so inter-VLAN traffic all goes to the router (a UXG), and there shouldn't be any additional overhead to send it out the WAN port. Actually, that's not entirely true: the Mac and the guest VM are both on VLAN 100, so that traffic passes just on the switch. Each PVE host NIC is on a switch port dedicated to a specific VLAN, though, not trunked.

/etc/network/interfaces
root@svr-03:/etc/network# more interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp4s0f0
iface enp4s0f0 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto eno1
iface eno1 inet static
address 192.168.102.13/24

iface eno2 inet manual

auto enp4s0f1
iface enp4s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.100.13/24
gateway 192.168.100.1
bridge-ports enp4s0f0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 192.168.101.13/24
bridge-ports enp4s0f1
bridge-stp off
bridge-fd 0

This iperf below is between my guest VM and a machine on my network - 8.4 Gb/sec, which is quite good, but only 200/300 on speedtest (when I get 1,900/900 on my Mac). This last test is with the VM as the only guest on an R720 with 32 CPUs and 512G of RAM; the guest is Ubuntu 22.04, allocated 8G and 4 processors, with VirtIO SCSI and network drivers.

bferrell@ubuntu-2204-template:~$ iperf3 -c 192.168.100.39
Connecting to host 192.168.100.39, port 5201
[ 5] local 192.168.100.198 port 40598 connected to 192.168.100.39 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 978 MBytes 8.20 Gbits/sec 336 1.74 MBytes
[ 5] 1.00-2.00 sec 1019 MBytes 8.54 Gbits/sec 0 2.08 MBytes
[ 5] 2.00-3.00 sec 1.01 GBytes 8.72 Gbits/sec 13 1.50 MBytes
[ 5] 3.00-4.00 sec 992 MBytes 8.32 Gbits/sec 191 956 KBytes
[ 5] 4.00-5.00 sec 1004 MBytes 8.42 Gbits/sec 0 1.42 MBytes
[ 5] 5.00-6.00 sec 1.03 GBytes 8.83 Gbits/sec 7 1.01 MBytes
[ 5] 6.00-7.00 sec 1.04 GBytes 8.97 Gbits/sec 0 1.59 MBytes
[ 5] 7.00-8.00 sec 961 MBytes 8.07 Gbits/sec 0 1.83 MBytes
[ 5] 8.00-9.00 sec 955 MBytes 8.01 Gbits/sec 11 936 KBytes
[ 5] 9.00-10.00 sec 1009 MBytes 8.46 Gbits/sec 0 1.47 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 9.84 GBytes 8.45 Gbits/sec 558 sender
[ 5] 0.00-10.04 sec 9.84 GBytes 8.42 Gbits/sec receiver

iperf Done.

guest VM (on VLAN100, @ 192.168.100.198)
1674164777555.png

M1 Mac that hosted the iperf result (also on VLAN100, but a couple of switches away from the VM @ 192.168.100.39)
1674164808554.png
 
OK, it is something in the routing. If I host iperf3 on my VMs and put my Mac on VLAN101, I get just over 3 Gbps, so crossing the VLAN cuts the speed roughly in half for some reason. Not nearly as much as I'm losing to the WAN, but there's something there - I'm just not sure what.

It's also interesting, because if I do an iperf from my PVE host to my FreeNAS on VLAN101 it saturates at 9.4 Gbps. Does that tell me the PVE host is sending it out the correct VLAN and the switch is forwarding it, but when I'm relying on my Unifi UXG router, it's bottlenecking?

root@freenas3[~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.101.13, port 55474
[ 5] local 192.168.101.103 port 5201 connected to 192.168.101.13 port 55488
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.09 GBytes 9.41 Gbits/sec
[ 5] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 2.00-3.00 sec 1.09 GBytes 9.41 Gbits/sec
[ 5] 3.00-4.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 4.00-5.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 5.00-6.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 6.00-7.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 7.00-8.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 8.00-9.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 9.00-10.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 10.00-10.00 sec 700 KBytes 8.85 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 11.0 GBytes 9.41 Gbits/sec receiver

iperf from Mac on 192.168.101.75 VLAN101 (where the PVE FreeNAS storage lives)
Brett-MacBook-Pro:~ brettferrell$ iperf3 -c 192.168.100.198
Connecting to host 192.168.100.198, port 5201
[ 5] local 192.168.101.75 port 49369 connected to 192.168.100.198 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 447 MBytes 3.75 Gbits/sec
[ 5] 1.00-2.00 sec 403 MBytes 3.38 Gbits/sec
[ 5] 2.00-3.00 sec 403 MBytes 3.38 Gbits/sec
[ 5] 3.00-4.00 sec 357 MBytes 3.00 Gbits/sec
[ 5] 4.00-5.00 sec 449 MBytes 3.77 Gbits/sec
[ 5] 5.00-6.00 sec 367 MBytes 3.08 Gbits/sec
[ 5] 6.00-7.00 sec 324 MBytes 2.72 Gbits/sec
[ 5] 7.00-8.00 sec 316 MBytes 2.65 Gbits/sec
[ 5] 8.00-9.00 sec 401 MBytes 3.37 Gbits/sec
[ 5] 9.00-10.00 sec 429 MBytes 3.60 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 3.81 GBytes 3.27 Gbits/sec sender
[ 5] 0.00-10.00 sec 3.81 GBytes 3.27 Gbits/sec receiver

iperf Done.
 
Update: when I looked at the routing, I knew that wasn't right, so I popped my trusty XG-8 back into the network. I'd "upgraded" from it to the new UXG from Ubiquiti/Unifi (the XG-8 is EOL), but the UXG was limiting me to about 2.5 Gbps - the XG-8 has double the inter-VLAN routing performance - and now I see the below.

My HOST can route to VLAN100 at the full 10G, and my GUEST can get close (7.5G), but the guest still only gets about 300Mbps to the internet. The test iperf server is on VLAN101, a FreeNAS box used for backing up VMs. The HOSTS can get 1G down and 0.5G up, but the GUESTS are limited to 300/300M. Shouldn't the GUEST be able to do as well as the HOST (assuming there's no significant contention with other GUESTS)? This R720 has dual Intel(R) Xeon(R) E5-2660 0 @ 2.20GHz CPUs (32 threads) and 512G of RAM, with basically only NextCloud idling on it currently, so it should have all the resources.

Summary
iperf to/from a VM on the LAN exceeds 6-7 Gbps in every case, even when crossing VLANs, and 8-9+ Gbps on the same VLAN. The host can speedtest-cli to the internet at 1G down and 400-500M up, and my Macs can do the full 2G/1G, but the GUEST VMs can only do 300 both ways in the web speedtest, and worse in speedtest-cli.

Host Speedtest-CLI
Note that although it does 1G down and 500Mbps up, I should really get 1.9G/0.9G (my Macs do), and this is very consistent across my 5 hosts (R620s and R720s).
root@svr-00:~# speedtest-cli
Retrieving speedtest.net configuration...
Testing from Cincinnati Bell (74.83.92.121)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Dish Wireless (Cleveland, OH) [343.58 km]: 25.863 ms
Testing download speed................................................................................
Download: 1028.47 Mbit/s
Testing upload speed......................................................................................................
Upload: 479.89 Mbit/s

VM Speedtest
1674857637463.png
and CLI is better on download but way worse on upload

bferrell@clone:~$ speedtest-cli
Retrieving speedtest.net configuration...
Testing from Cincinnati Bell (74.83.92.121)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Dish Wireless (Cleveland, OH) [343.58 km]: 19.035 ms
Testing download speed................................................................................
Download: 539.67 Mbit/s
Testing upload speed......................................................................................................
Upload: 4.12 Mbit/s

HOST PVE box
Accepted connection from 192.168.100.10, port 43318
[ 5] local 192.168.101.102 port 5201 connected to 192.168.100.10 port 43332
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 861 MBytes 7.22 Gbits/sec
[ 5] 1.00-2.00 sec 1.06 GBytes 9.08 Gbits/sec
[ 5] 2.00-3.00 sec 1.06 GBytes 9.12 Gbits/sec
[ 5] 3.00-4.00 sec 1.06 GBytes 9.13 Gbits/sec
[ 5] 4.00-5.00 sec 1.05 GBytes 9.03 Gbits/sec
[ 5] 5.00-6.00 sec 1.07 GBytes 9.21 Gbits/sec
[ 5] 6.00-7.00 sec 1.05 GBytes 8.99 Gbits/sec
[ 5] 7.00-8.00 sec 1.05 GBytes 9.06 Gbits/sec
[ 5] 8.00-9.00 sec 1.06 GBytes 9.12 Gbits/sec
[ 5] 9.00-10.00 sec 1.07 GBytes 9.20 Gbits/sec
[ 5] 10.00-10.00 sec 1.26 MBytes 9.16 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 10.4 GBytes 8.92 Gbits/sec receiver

Guest VM Ubuntu 22.04
Accepted connection from 192.168.100.186, port 49056
[ 5] local 192.168.101.102 port 5201 connected to 192.168.100.186 port 49060
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 654 MBytes 5.49 Gbits/sec
[ 5] 1.00-2.00 sec 688 MBytes 5.77 Gbits/sec
[ 5] 2.00-3.00 sec 860 MBytes 7.22 Gbits/sec
[ 5] 3.00-4.00 sec 822 MBytes 6.89 Gbits/sec
[ 5] 4.00-5.00 sec 949 MBytes 7.96 Gbits/sec
[ 5] 5.00-6.00 sec 944 MBytes 7.92 Gbits/sec
[ 5] 6.00-7.00 sec 912 MBytes 7.65 Gbits/sec
[ 5] 7.00-8.00 sec 1.01 GBytes 8.70 Gbits/sec
[ 5] 8.00-9.00 sec 1.06 GBytes 9.09 Gbits/sec
[ 5] 9.00-10.00 sec 1001 MBytes 8.39 Gbits/sec
[ 5] 10.00-10.00 sec 1.28 MBytes 6.53 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 8.74 GBytes 7.51 Gbits/sec receiver

Speedtest on Macbook M1 Ultra, 1.9G/.95G
 
One more data point: I just installed OpenSpeedTest on one of my FreeNAS servers (the PVE cluster's primary backup NAS) on VLAN101, and it can get full speed through the WAN.


OpenSpeedTest from the GUEST VM (192.168.100.186) to my M1 Mac Mini (192.168.100.183) with a 10G port (VLAN100, same as the GUEST), and then to FreeNAS #5 (VLAN101). It seems like it should be able to do better, but I would be happy if I could get this through the WAN. I would expect it to do well over 1G on the LAN, though, since they can iperf at that speed (9 Gbps). The bottom image is from my M1 MacBook to the M1 Mini, at full 10G speed...

iperf to the same M1 Mac (192.168.100.183)
bferrell@clone:~$ iperf3 -c 192.168.100.183
Connecting to host 192.168.100.183, port 5201
[ 4] local 192.168.100.186 port 35970 connected to 192.168.100.183 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 980 MBytes 8.22 Gbits/sec 0 4.02 MBytes
[ 4] 1.00-2.00 sec 1.02 GBytes 8.72 Gbits/sec 0 4.02 MBytes
[ 4] 2.00-3.00 sec 1.00 GBytes 8.62 Gbits/sec 0 4.02 MBytes
[ 4] 3.00-4.00 sec 1.07 GBytes 9.20 Gbits/sec 0 4.02 MBytes
[ 4] 4.00-5.00 sec 1.10 GBytes 9.41 Gbits/sec 0 4.02 MBytes
[ 4] 5.00-6.00 sec 1.09 GBytes 9.35 Gbits/sec 0 4.02 MBytes
[ 4] 6.00-7.00 sec 1.04 GBytes 8.91 Gbits/sec 0 4.02 MBytes
[ 4] 7.00-8.00 sec 1.03 GBytes 8.87 Gbits/sec 0 4.02 MBytes
[ 4] 8.00-9.00 sec 1.07 GBytes 9.15 Gbits/sec 0 4.02 MBytes
[ 4] 9.00-10.00 sec 1.05 GBytes 9.00 Gbits/sec 0 4.02 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 10.4 GBytes 8.94 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 10.4 GBytes 8.94 Gbits/sec receiver

iperf Done.

to M1 Mac Mini (SSD 10G port)
1674876413057.png


to FreeNAS host
1674875781579.png


From M1 Macbook to M1 Mini (both 10G ports)

1674876862829.png
 
Here is some more data.

My ISP gives out 2 IPs, and I have 2 10G Unifi routers, so I connected both of them to an unmanaged 2.5G switch and had my MacBook run OpenSpeedTest on the "other/external" (UXG) router's network, so I could test my VM all the way across the WAN port of the main router (XG-8).

I think this tells me my speed issues are with the speedtest servers, and not the VM/PVE HOST interface - correct? This test is from my GUEST VM, across my LAN, through the WAN port, onto the WAN side of my second Unifi router, and on to my M1 MacBook - and it's 1G+ in both directions (through the 2.5G switch and a 2.5G USB Ethernet adapter on the MacBook), so that's not bad. As you see below, my Mac Mini fully consumes the link, but the GUEST VM is doing pretty well.

What's not clear to me is why my Macs can run a full-bandwidth test to my ISP's speedtest server (speedtest.cincinnatibell.com) while my guests are throttled - but I think that's what this test is telling me.

1674879793570.png

1674880571772.png

This is a quick diagram of how I did this test: essentially putting my OpenSpeedTest box on the internet, but only one hop away, on hardware I own, so that I could test all the way across my WAN port with known-good hardware.

1674881705445.png
 
Just guessing at this point, but what's the guest load? Maybe try setting cpu to host and set multiqueue on the network device
Btw, iothread=1 on your disk only gives benefits with scsihw: virtio-scsi-single. IIrc the GUI even shows a warning for this combination :)
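
For anyone following along, a rough sketch of applying those suggestions to the VM from this thread (VMID 701; the MAC, bridge, and firewall flag are copied from the qm config above, and the queue count of 4 is just an example). The guest needs a stop/start for these to take effect.

Bash:
# enable multiqueue on the existing virtio NIC (queues= is the multiqueue setting)
qm set 701 --net0 virtio=66:6F:8E:08:21:17,bridge=vmbr0,firewall=1,queues=4
# expose the host CPU type to the guest
qm set 701 --cpu host
# iothread=1 on the disk only pays off with the single-controller SCSI variant
qm set 701 --scsihw virtio-scsi-single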
 
Just guessing at this point, but what's the guest load? Maybe try setting cpu to host and set multiqueue on the network device
Btw, iothread=1 on your disk only gives benefits with scsihw: virtio-scsi-single. IIrc the GUI even shows a warning for this combination :)

I have set network queues=32 (same as the host) and the CPU type to host. Guest load was a third of CPU and memory, or less. With CPU=host it does pretty well, although I'd still expect a bit better. A similar Ubuntu 22.04 guest on this box, given 32 CPUs and 32 network queues, only did 177/292; I changed the CPU to host and now it's 700/800, so that's what it's most sensitive to. Should I expect better?

1675108690671.png

1675104088685.png
1675104334793.png
 

Attachments

  • 1675104287585.png
I've got the same problem; in my situation, inside the VM it's very slow, about 10 Mb/s :(
Please open a new thread and provide the following information
  • Output of qm config VMID
  • Performance on host
  • Hardware on host
  • How you measure the speed
  • anything else you might deem helpful
 
I have set network queues=32 (same as the host) and the CPU type to host. Guest load was a third of CPU and memory, or less. With CPU=host it does pretty well, although I'd still expect a bit better. A similar Ubuntu 22.04 guest on this box, given 32 CPUs and 32 network queues, only did 177/292; I changed the CPU to host and now it's 700/800, so that's what it's most sensitive to. Should I expect better?

Do your host and the Mac also use a 1500 MTU?

Some general tips:
On a multi-socket system, you should enable NUMA.
The 32 cores for the guest seem a tad much to me. cpu=host just means the guest gets the host's CPU model and flags; you don't need to match the host's number of sockets/cores.
If the CPU makes a big difference, you could also try playing around with the mitigation flags.

I would also expect a bit better performance, but I don't have a comparison to know for sure. Always make sure to compare host and VM.
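
A quick sketch of checking both of those points, assuming the interface names used earlier in this thread (vmbr0/enp4s0f0 on the host, ens18 in the guest) and VMID 701:

Bash:
# host side: confirm the bridge and the physical NIC are at the MTU you expect
ip link show vmbr0 | grep -o 'mtu [0-9]*'
ip link show enp4s0f0 | grep -o 'mtu [0-9]*'
# guest side: the same check for the virtio NIC
ip link show ens18 | grep -o 'mtu [0-9]*'
# enable NUMA for the VM on a multi-socket host (requires a VM restart)
qm set 701 --numa 1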

I think this tells me my speed issues are with the speedtest servers, and not the VM/PVE HOST interface - correct?
I rather trust iperf, and that seems to look decently enough : )
 
Just to follow up with the same problem: the issue is SSL decryption and how the CPU handles it. If you try a Linux distro with a desktop, you can see how performance improves when adding CPUs. I tried setting the VM CPU to host without success. The problem is CPU SSL decoding.
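
One way to sanity-check that theory is to compare raw AES throughput on the host and inside the guest with OpenSSL's built-in benchmark; if the guest numbers are far below the host's, the guest CPU type is probably hiding AES-NI:

Bash:
# run on the host and then inside the guest, and compare the numbers
openssl speed -evp aes-128-gcm
# confirm the guest actually sees the aes CPU flag at all
grep -o -m1 aes /proc/cpuinfo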
 
Of course, that's a good idea.

You can run iperf through ssh and see if the numbers match. E.g.
Bash:
# on the machine you want to use as the iperf server
iperf -s
# open a new terminal (still on the server) and reverse-forward port 5001 to the client
ssh -R CLIENT:5001:localhost:5001 USER@CLIENT
# in the resulting shell on the client, run iperf against the forwarded port so the traffic flows through ssh
iperf -c localhost
 
The default CPU type is 'kvm64', which has NO ACCELERATION. This means things like AES are done in software. Unless you have a reason otherwise, you should be using either cpu=host or, at BARE MINIMUM, cpu=westmere. Also make sure NUMA is enabled (there's no reason not to, even on a single-CPU server), and use sockets=1 and cores=howevermanyyouwant - within reason, obviously, don't overspec your host!
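
For illustration, that advice translated into qm commands, using VMID 701 from earlier in the thread and an arbitrary core count of 8:

Bash:
# expose the host CPU to the guest (or fall back to westmere, which still carries the AES flag)
qm set 701 --cpu host
#qm set 701 --cpu westmere
# enable NUMA and keep the topology simple: one socket, a sensible number of cores
qm set 701 --numa 1 --sockets 1 --cores 8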
 
Hi Folks,

I have a very similar issue to the OP's and am really scratching my head over this.

My setup:
Intel(R) Xeon(R) Gold 6132 14-core CPU
96GB RAM
C622 chipset
4x 1G NICs on a PCIe expansion card (one used for management, used by vmbr0)
2x 10Gb SFP+ on the motherboard (one active for VMs, used by vmbr99)

The network is Unifi (UDM Pro; the 1G switch is a simple USW-24-PoE, with a USW-Aggregation for the 10G).
The 10G switch port is set to trunk (all VLANs); the 1G port is set to VLAN 50.

pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-13 (running version: 7.4-13/46c37d9c)
pve-kernel-5.15: 7.4-3
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-4
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

more interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp182s0f3 inet manual

iface enp182s0f0 inet manual

iface enp182s0f1 inet manual

iface enp182s0f2 inet manual

iface eno1 inet manual
mtu 9000

iface eno2 inet manual
mtu 9000

auto vmbr0
iface vmbr0 inet static
address 192.168.50.7/24
gateway 192.168.50.1
bridge-ports enp182s0f3
bridge-stp off
bridge-fd 0

auto vmbr99
iface vmbr99 inet static
address 192.168.7.0/24
bridge-ports eno1 eno2
bridge-stp off
bridge-fd 0
mtu 9000

lspci | awk '/[Nn]et/ {print $1}' | xargs -i% lspci -ks %
b5:00.0 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (rev 09)
DeviceName: Intel LAN X722 #1
Subsystem: Super Micro Computer Inc Ethernet Connection X722 for 10GbE SFP+
Kernel driver in use: i40e
Kernel modules: i40e
b5:00.1 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (rev 09)
DeviceName: Intel LAN X722 #2
Subsystem: Super Micro Computer Inc Ethernet Connection X722 for 10GbE SFP+
Kernel driver in use: i40e
Kernel modules: i40e
b6:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
Subsystem: Cisco Systems Inc I350 Gigabit Network Connection
Kernel driver in use: igb
Kernel modules: igb
b6:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
Subsystem: Cisco Systems Inc I350 Gigabit Network Connection
Kernel driver in use: igb
Kernel modules: igb
b6:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
Subsystem: Cisco Systems Inc I350 Gigabit Network Connection
Kernel driver in use: igb
Kernel modules: igb
b6:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
Subsystem: Cisco Systems Inc I350 Gigabit Network Connection
Kernel driver in use: igb
Kernel modules: igb

qm config 107
boot: c
bootdisk: scsi0
cipassword: **********
ciuser: benji
cores: 4
cpu: host,flags=+aes
ide2: directory:107/vm-107-cloudinit.qcow2,media=cdrom,size=4M
ipconfig0: ip=dhcp
memory: 8192
meta: creation-qemu=7.2.0,ctime=1687204953
name: supertest
nameserver: *******
net0: virtio=B2:2D:94:81:B5:75,bridge=vmbr0
numa: 1
scsi0: directory:107/vm-107-disk-0.raw,size=53452M
scsihw: virtio-scsi-pci
searchdomain:*********
serial0: socket
smbios1: uuid=c7681c86-8783-490e-b9ff-f8cfa5197e5f
sockets: 1
vga: serial0

more 50-cloud-init.yaml
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
            match:
                macaddress: b2:2d:94:81:b5:75
            set-name: eth0

My main issue is that no matter which VLAN I use on bridge vmbr99 for my VMs, I always get terrible internet connectivity. The host is fine, and if I set a VM to use vmbr0 (same as the host), it's fine there too.

speedtest
Retrieving speedtest.net configuration...
Testing from EBOX (192.222.231.218)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Bell Mobility (Montréal, QC) [1.39 km]: 14.214 ms
Testing download speed................................................................................
Download: 21.15 Mbit/s
Testing upload speed......................................................................................................
Upload: 49.34 Mbit/s

speedtest
Retrieving speedtest.net configuration...
Testing from EBOX (192.222.231.218)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Altima Telecom 10G (Montréal, QC) [1.39 km]: 14.237 ms
Testing download speed................................................................................
Download: 11.69 Mbit/s
Testing upload speed......................................................................................................
Upload: 49.41 Mbit/s

speedtest
Retrieving speedtest.net configuration...
Testing from EBOX (192.222.231.218)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Fibrenoire Internet (Montréal, QC) [1.39 km]: 17.412 ms
Testing download speed................................................................................
Download: 10.66 Mbit/s
Testing upload speed......................................................................................................
Upload: 49.24 Mbit/s

speedtest
Retrieving speedtest.net configuration...
Testing from EBOX (192.222.231.218)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Bell Mobility (Montréal, QC) [1.39 km]: 14.109 ms
Testing download speed................................................................................
Download: 344.67 Mbit/s
Testing upload speed......................................................................................................
Upload: 49.09 Mbit/s

speedtest
Retrieving speedtest.net configuration...
Testing from EBOX (192.222.231.218)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Altima Telecom 10G (Montréal, QC) [1.39 km]: 13.468 ms
Testing download speed................................................................................
Download: 339.89 Mbit/s
Testing upload speed......................................................................................................
Upload: 48.84 Mbit/s

I can confirm the issue in all VMs on all my Proxmox nodes, including a Windows 11 VM. And it's not just the test - real downloads are dog-slow (for example with wget). I know the interface works correctly at 10Gb because I can download a file from my TrueNAS file server at 300MB/s from spinning-rust disks when on the same VLAN6.

I can also confirm that I have no such problems on any of my physical computers or laptops; I even set up a bare-metal Ubuntu server and it has no issues at all, regardless of the VLAN.

Another really strange aspect: I also have a TrueNAS server hosting an Ubuntu Server VM, and it is similarly having poor internet connectivity. It used to work fine.

The iperf3 results are a little weird.
An iperf3 test from inside the VM on VLAN6 to the TrueNAS server, also on VLAN6, yields perfect results:

iperf3 -c 192.168.6.5
Connecting to host 192.168.6.5, port 5201
[ 5] local 192.168.6.61 port 56604 connected to 192.168.6.5 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.04 GBytes 8.92 Gbits/sec 1566 1.06 MBytes
[ 5] 1.00-2.00 sec 1.07 GBytes 9.22 Gbits/sec 844 1.34 MBytes
[ 5] 2.00-3.00 sec 1.07 GBytes 9.16 Gbits/sec 1763 1.16 MBytes
[ 5] 3.00-4.00 sec 1.08 GBytes 9.27 Gbits/sec 743 1.42 MBytes
[ 5] 4.00-5.00 sec 1.08 GBytes 9.31 Gbits/sec 1048 966 KBytes
[ 5] 5.00-6.00 sec 1.08 GBytes 9.26 Gbits/sec 304 1.24 MBytes
[ 5] 6.00-7.00 sec 1.07 GBytes 9.18 Gbits/sec 181 1.40 MBytes
[ 5] 7.00-8.00 sec 845 MBytes 7.09 Gbits/sec 713 823 KBytes
[ 5] 8.00-9.00 sec 1.06 GBytes 9.07 Gbits/sec 621 689 KBytes
[ 5] 9.00-10.00 sec 1.07 GBytes 9.19 Gbits/sec 306 973 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.4 GBytes 8.97 Gbits/sec 8089 sender
[ 5] 0.00-10.04 sec 10.4 GBytes 8.93 Gbits/sec receiver
iperf Done.

Change the VM to any other VLAN and the speed drops off a cliff:
iperf3 -c 192.168.6.5
Connecting to host 192.168.6.5, port 5201
[ 5] local 192.168.8.61 port 38986 connected to 192.168.6.5 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.22 MBytes 10.3 Mbits/sec 107 1.41 KBytes
[ 5] 1.00-2.00 sec 1.10 MBytes 9.21 Mbits/sec 78 2.83 KBytes
[ 5] 2.00-3.00 sec 1.49 MBytes 12.5 Mbits/sec 150 2.83 KBytes
[ 5] 3.00-4.00 sec 636 KBytes 5.21 Mbits/sec 47 1.41 KBytes
[ 5] 4.00-5.00 sec 191 KBytes 1.56 Mbits/sec 26 1.41 KBytes
[ 5] 5.00-6.00 sec 1.26 MBytes 10.5 Mbits/sec 108 1.41 KBytes
[ 5] 6.00-7.00 sec 1.12 MBytes 9.40 Mbits/sec 76 2.83 KBytes
[ 5] 7.00-8.00 sec 827 KBytes 6.78 Mbits/sec 75 1.41 KBytes
[ 5] 8.00-9.00 sec 318 KBytes 2.61 Mbits/sec 33 1.41 KBytes
[ 5] 9.00-10.00 sec 191 KBytes 1.56 Mbits/sec 26 1.41 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 8.30 MBytes 6.97 Mbits/sec 726 sender
[ 5] 0.00-10.04 sec 8.18 MBytes 6.83 Mbits/sec receiver

Even iperf3 from the Proxmox host itself (192.168.50.7 on vmbr0) looks like crap:

iperf3 -c 192.168.6.5
Connecting to host 192.168.6.5, port 5201
[ 5] local 192.168.50.7 port 48120 connected to 192.168.6.5 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.38 MBytes 11.5 Mbits/sec 111 2.83 KBytes
[ 5] 1.00-2.00 sec 382 KBytes 3.13 Mbits/sec 39 1.41 KBytes
[ 5] 2.00-3.00 sec 127 KBytes 1.04 Mbits/sec 14 1.41 KBytes
[ 5] 3.00-4.00 sec 255 KBytes 2.08 Mbits/sec 21 1.41 KBytes
[ 5] 4.00-5.00 sec 0.00 Bytes 0.00 bits/sec 1 1.41 KBytes
[ 5] 5.00-6.00 sec 0.00 Bytes 0.00 bits/sec 1 1.41 KBytes
[ 5] 6.00-7.00 sec 0.00 Bytes 0.00 bits/sec 2 2.83 KBytes
[ 5] 7.00-8.00 sec 509 KBytes 4.17 Mbits/sec 45 2.83 KBytes
[ 5] 8.00-9.00 sec 127 KBytes 1.04 Mbits/sec 19 2.83 KBytes
[ 5] 9.00-10.00 sec 382 KBytes 3.13 Mbits/sec 25 1.41 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 3.12 MBytes 2.61 Mbits/sec 278 sender
[ 5] 0.00-10.04 sec 2.92 MBytes 2.44 Mbits/sec receiver
iperf Done.

I'm really not sure what's going on with iperf3 here...

I'm really at a loss here; this doesn't make any sense...
The best I can tell is that there's something wrong with the vmbr99 bridge interface, but it works perfectly when the VM is on the same VLAN as my TrueNAS server...

Any idea?

I think I'll upgrade the whole cluster to v8 to see if this is somehow a bug that got fixed...
 
Update:

Upgrading to v8 provided no improvement.

Here's a screenshot of a speedtest from the Win11 VM on the same host:

1687495098040.png
 
Could you solve the problem? I have the exact same issue.

I have a fresh Proxmox install with an OPNsense VM. Using VLANs, everything is working fine, except for the speed from Proxmox to OPNsense itself.

Proxmox host:
enp1s0.100 = VLAN 100, so I can still connect even when OPNsense is offline
vmbr0 = enp1s0 (VLAN aware), bridge for service VMs/LXCs

OPNsense VM:
1 PCIe card with dual 1Gbit ports via PCI passthrough
IGB0 = WAN
IGB1 = VLAN 10,20,30,40,100


enp1s0.100 & IGB1 are connected to an L2+ switch (TL-SG2218 1.0)
IGB0 is connected to the WAN (modem)

iperf3 from Proxmox on native VLAN 100 -> PC = 1Gbit

Speedtest from a PC (VLAN 100) works at full speed (1Gbit)
Speedtest from WiFi (VLAN 20) works at full speed (1Gbit)
Speedtest from Proxmox itself (VLAN 100) only manages 10Mbit

Since I have passed my PCIe dual-NIC card through, I can't select VirtIO mode as suggested in many other solutions. I don't know where the error is, because iperf3 over enp1s0 (VLAN 100) works at 1Gbit. Only when Proxmox itself uses this connection does it drop to about 10Mbit.
 

Attachments

  • Screenshot 2023-08-28 020754.png
  • Screenshot 2023-08-28 020804.png
  • Screenshot 2023-08-28 020929.png
  • Screenshot 2023-08-28 020937.png
  • Screenshot 2023-08-28 020946.png
  • Screenshot 2023-08-28 021217.png
  • Screenshot 2023-08-28 020732.png
Hello, same problem here with Proxmox 8: the node and the VMs had very low speeds. I reformatted the server with plain Debian 11 and the speeds returned to normal.
 
