I wonder if this depends on the number of cores assigned to the VM. I have 6 cores / 12 threads and assigned 6 cores to each of my VMs. I overprovisioned this on purpose because the systems are probably never fully utilized at the same time.
It would be interesting to see if the number of cores...
@gkovacs Thank you for your comment.
I have just set and applied the settings you suggested, but it didn't change anything for me: I still get kicked out and can't reconnect with RDP while a backup is running, and I get network delays whenever there is disk I/O.
As far as I understand the settings...
Thank you. The more I read and think about it, the more confusing it gets.
I do backups from within VMs on PVE 4.4-17; one machine has 1.8 TB of data, and even during a full backup I have no packet drops and my ping responses are fine. Using ZFS + KVM + VirtIO + OpenVPN there, but it is a...
Since ping requests are handled mainly by CPU / RAM, I didn't expect local storage I/O to introduce this kind of latency. Not even with I/O wait.
But doing some benchmarking on the storage was a good hint and honestly the results are confusing to me:
I downloaded CrystalDiskMark 5.2.2 x64 and...
No, my bad network experience is not limited to the communication over OpenVPN:
My monitoring daemon (monit) does ping tests from the PVE host to both Windows VMs directly via the bridge. There is no OpenVPN involved.
And these pings are affected by the I/O load in the guest VM. The backup...
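For reference, a minimal monit host check along these lines could look like the following sketch (the address, hostname, and thresholds are placeholders, not my exact config):

```shell
# /etc/monit/conf.d/vm-ping -- sketch; 192.168.100.10 is a placeholder
# for the VM's address on the bridge.
check host vm100 with address 192.168.100.10
    if failed ping count 5 with timeout 3 seconds then alert
```

monit then alerts whenever five consecutive ICMP echoes to the VM go unanswered within the timeout, which is exactly what fires here during backups.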
So "iptables -nvL | grep NFQUEUE" does not return anything here.
Shorewall just compiles the rules from my config files and loads them with iptables on startup. It does not run in userspace.
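If anyone wants to double-check this on their own Shorewall host, something like the following should reveal whether NFQUEUE is in play anywhere (the compiled-script path may differ per distribution):

```shell
# Look for NFQUEUE targets in the live ruleset...
iptables -nvL | grep -i NFQUEUE
iptables-save | grep -i NFQUEUE
# ...and in the script Shorewall compiled from the config files.
grep -i NFQUEUE /var/lib/shorewall/firewall 2>/dev/null
```

If all three come back empty, no packets are being diverted to userspace via NFQUEUE.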
It seems that we have different issues. Regarding networking, all I have done is define the...
Interesting. So when you don't use NFQUEUE your network is reliable even under IO load in your VMs? And that's the only thing you've changed?
I wonder if we have the same issue then, since as far as I know I don't use NFQUEUE. At least not on purpose.
I'm using shorewall on my PVE host, which...
Hi Udo,
most of the traffic is between VM100 and VM101. VM100 is the Domain Controller / file server and Datev DB server, while VM101 is the Terminal Server. The "outside world" connects via OpenVPN (tap0) to reach the Terminal Server via RDP. There is also some traffic targeting a locally...
I'm from Hamburg. We probably both speak German, right? However, for a personal conversation let's switch to PM or something else; I want to stay on topic in this thread.
Sounds interesting, but I don't think it is related to my problem: there is nothing wrong with a plain bridge setup if I don't need additional switching features. I have this setup running on a few other PVE installs with 4.4 and earlier versions and never had a problem with it.
My network is...
I have just created a bug report since this is a serious issue and I have no ideas left what I can try: https://bugzilla.proxmox.com/show_bug.cgi?id=1494
Good point, and I really crossed my fingers for this to help, but it did not. Same issue with iothread enabled for both virtual disks.
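For anyone who wants to try the same thing, enabling iothread per disk can be done roughly like this (the VM ID, storage name, and disk names are examples, adjust to your setup):

```shell
# iothread requires the virtio-scsi-single controller, one SCSI
# controller per disk. VM 100 and "local-zfs" are example names.
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-1,iothread=1
qm set 100 --scsi1 local-zfs:vm-100-disk-2,iothread=1
```

The VM needs a full stop/start (not just a guest reboot) for the new controller and I/O threads to take effect.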
I did a ping test this time while booting the machine and noticed dropped packets and ping response times of over 4 seconds. Even starting applications on the...
Looks similar to my ping tests, but I get even more than 8,000 ms and dropped packets while doing a backup job from inside the VM (not a PVE backup).
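To capture when exactly the spikes happen relative to the backup, I run the ping with timestamps and keep only the bad samples (the address is a placeholder for the VM):

```shell
# -D prefixes each reply with a Unix timestamp (iputils ping);
# 10.0.0.101 stands in for the VM's bridge address.
ping -D -i 0.5 10.0.0.101 | grep -E 'time=[0-9]{4,}'
```

That filter only passes replies of 1,000 ms or more, so the output lines line up directly with the backup job's I/O bursts.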
I can't try the io-thread option at the moment because the system is used during the working hours. But I will try it tonight.
@micro Have you tried...
Oh, this makes sense: it happens especially when I do a backup, which causes high I/O and network load at the same time.
Is this an "official" issue? Is there a bug opened for it?
I wonder which component introduces the issue: KVM version?
Are there any workarounds known that can make it less bad...
No, you are right: same here. My ping values were from my local DSL line through OpenVPN to the bridge. I get 0.13 ms response time on the local bridge from the PVE host.
I thought about it and should add a few details about my network setup:
The NICs of both Windows Server 2016 VMs are connected to a bridge:
brctl show vmbr1
bridge name     bridge id               STP enabled     interfaces
vmbr1           8000.aace816c169a       no              tap0...
I guess you might get more opinions on the perfect storage setup than the number of people you ask; here are my thoughts:
- You use no hardware RAID controller in your setup, and I think this is perfectly fine. Giving ZFS direct access to the disks (JBOD) is the correct way to deploy ZFS pools.
- You have your SSDs mirrored so...
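In zpool terms, a mirrored SSD pool on raw disks looks roughly like this (pool and device names are examples, not from your setup):

```shell
# Create a mirror directly on the raw disks -- no HW RAID in between.
# /dev/disk/by-id names survive reordering better than /dev/sdX.
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-SSD_SERIAL_A /dev/disk/by-id/ata-SSD_SERIAL_B
zpool status tank
```

ashift=12 forces 4K-aligned writes, which is usually what you want on SSDs even when they report 512-byte sectors.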
I'm running PVE 5.0-30 with two KVM machines that have Windows Server 2016 installed.
I did everything the way William explained in this video: https://www.proxmox.com/de/training/video-tutorials/item/install-windows-2016-server-on-proxmox-ve.
So I use ZFS and VirtIO for storage (SCSI) and...
While trying to make my first post I was informed that I'm not allowed to post URLs because of the anti-spam policy.
I replaced http with x in my thread, since the URLs are needed to clarify which patch I'm talking about.
Sorry for that, I really don't want to spam or advertise anything.