UPDATE: I tried installing a brand-new VM to replace one of the migrated ones, and it has none of the issues described in my initial post. I will probably reinstall the others over time as well. Any ideas as to what could be causing this with VMs migrated from an older KVM server?
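(In case it helps anyone else: one thing I plan to rule out on the remaining migrated guests, purely a guess on my part, is a duplicate or stale MAC address carried over from the old host. On the Proxmox side the VM NIC definitions live in /etc/pve/qemu-server, so overlaps are easy to spot:)
Code:
# List the NIC line of every VM config on the node; two VMs sharing
# a MAC would explain erratic, load-dependent packet loss:
grep -H '^net' /etc/pve/qemu-server/*.conf

# Inside a migrated Ubuntu/CentOS guest, stale persistent-net rules
# copied over from the old host can also misbind interfaces:
cat /etc/udev/rules.d/70-persistent-net.rules 2>/dev/null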
Hello!
Not sure if this should be in the install/configure forum or here, but the main problem is networking-related, so...
This week I took the plunge and reinstalled my server, moving it to Proxmox. The old install was a regular Ubuntu server with KVM. I had no problems exporting my qcow2 files to an external disk and then importing them into Proxmox. After booting the VMs, however, I'm seeing a lot of packet loss on them.
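(For reference, the import step looked roughly like this; the VM ID 100, the file path and the storage name local-lvm are placeholders, and this assumes the qm importdisk subcommand available in PVE 5.x:)
Code:
# Create the empty VM first (GUI or qm create), then attach the
# exported image to it as a new disk on the target storage:
qm importdisk 100 /mnt/external/vm-100-disk.qcow2 local-lvm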
The hardware is an Intel Core i7 950 @ 3.07 GHz with 24 GB of RAM and 4x 1 TB HDDs. I installed Debian stretch first and then added Proxmox (5.2.1) on top, since I'm running software RAID 5 across the disks. I know that's not ideal from a performance point of view, but it worked fine on the old install. The VMs are a mix of Ubuntu 16.04 and CentOS 6/7.
The problem: almost any load on a VM, even just attempting to log in over SSH or running any application on the VM, causes 100% packet loss. I see no packet loss to the host itself. I have tried different network device models (VirtIO, RTL8139 and E1000) and different disk buses as well (VirtIO, IDE, SCSI); they make no difference. Ping response times are normal when replies do get through: ~0.2 ms from host to VM and 1-3 ms from my laptop on Wi-Fi to the VMs. The host's summary page in the Proxmox web GUI shows very little CPU load (~2-5%) and a load average of about 0.47. I set up the bridges for the interfaces myself, since they aren't created by default when Proxmox is installed on top of Debian.
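(For reference, this is roughly how I reproduce it and how I switched NIC models between tests; the VM ID 100 and the guest IP 192.168.15.50 are placeholders, not my real values:)
Code:
# From another machine, start a continuous ping to the guest:
ping 192.168.15.50

# Then generate even trivial load, e.g. an SSH login attempt;
# the ping replies stop almost immediately:
ssh user@192.168.15.50

# Switching the NIC model between tests (with the guest powered off):
qm set 100 --net0 e1000,bridge=vmbr0
qm set 100 --net0 virtio,bridge=vmbr0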
My interfaces config:
Code:
auto lo
iface lo inet loopback

allow-hotplug enp4s0
allow-hotplug enp6s0

iface enp4s0 inet manual
iface enp6s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.15.6
        netmask 255.255.255.0
        gateway 192.168.15.1
        bridge_ports enp4s0
        bridge_stp off
        bridge_fd 0
#Inside

auto vmbr1
iface vmbr1 inet manual
        bridge_ports enp6s0
        bridge_stp off
        bridge_fd 0
#Outside
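(The bridges come up and look sane as far as I can tell; this is what I checked on the host, in case I'm missing something:)
Code:
# Confirm each bridge exists and has the right physical port enslaved:
brctl show

# Look for RX/TX errors or drops on the physical NICs and the bridges:
ip -s link show enp4s0
ip -s link show vmbr0

# Driver-level error counters (output varies by NIC driver):
ethtool -S enp4s0 | grep -i -e err -e drop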
Host kernel:
Linux 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200)
Kernel on one of the Ubuntu VMs:
4.4.0-127-generic
Any ideas?