I have tried to install these two .deb packages:
# dpkg -i *.deb
Selecting previously unselected package libiscsi4.
(Reading database ... 66968 files and directories currently installed.)
Preparing to unpack libiscsi4_1.15.0-1_amd64.deb ...
Unpacking libiscsi4 (1.15.0-1) ...
dpkg: warning...
# uname -a
Linux pve7 4.13.3-1-pve #1 SMP PVE 4.13.3-2 (Wed, 27 Sep 2017 14:01:40 +0200) x86_64 GNU/Linux
I have installed pve-kernel-4.13.3-1-pve_4.13.3-2_amd64.deb on my test system and I can't see any difference. I still have 18% packet loss during the boot process, and I was pinging from the host...
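For reference, this is roughly how I measure the loss; just a sketch, assuming the guest answers on 192.168.1.50 (a placeholder address, adjust to your VM):
# ping -i 0.2 -c 200 192.168.1.50
That sends 200 probes at 0.2 s intervals while the guest boots; the summary line at the end ("... packets transmitted, ... received, ...% packet loss") is where the 18% comes from.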
No you haven't.
Those were just my thoughts, because I really want to get a real solution for this issue, even though I can live with IDE at the moment.
So you suggested installing the pve-qemu-kvm package from PVE 4? I somehow missed that.
That's a good idea, I think I will try that.
But I don't...
Same result here.
I already thought about installing PVE 4.4 on my test system to check if I get the same issue there. However, I would lose my 5.0 test system then and would not be able to do tests with PVE 5.0 until I reinstall everything again, which is quite time-consuming.
@aderumier Thank you.
I have installed your version of qemu-kvm:
# dpkg -i pve-qemu-kvm_2.9.1-1_amd64.deb
(Reading database ... 60826 files and directories currently installed.)
Preparing to unpack pve-qemu-kvm_2.9.1-1_amd64.deb ...
Unpacking pve-qemu-kvm (2.9.1-1) over (2.9.1-1) ...
Setting up...
I have installed the 4.4.8-1 kernel on my test system:
# uname -a
Linux pve7 4.4.8-1-pve #1 SMP Tue May 31 07:12:32 CEST 2016 x86_64 GNU/Linux
But still the same issue.
Hi @aderumier, I just installed the .deb you provided and rebooted the host to make sure everything was started with the new version.
But I get an error when I try to start the VM:
kvm: symbol lookup error: kvm: undefined symbol: rbd_aio_writev
command 'kvm -version' failed: exit code 127
TASK...
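My guess is that this pve-qemu-kvm build was linked against a newer librbd than the one installed on my host; a quick way to check (assuming the binary lives at /usr/bin/kvm):
# ldd /usr/bin/kvm | grep rbd
# dpkg -l librbd1 librados2
If the installed librbd1 predates the version the package was built against, the rbd_aio_writev symbol will simply not be there.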
Yes, you were right. I switched back to IDE on my test system after a host reboot but couldn't get the Windows VM back up and running. It kept rebooting over and over again. There was something going on with my guest system.
I removed the disks and added new ones and then restored the VM and...
@Phinitris Please can you add your setup and the details from your post to this bug report as well? https://bugzilla.proxmox.com/show_bug.cgi?id=1494
Hopefully it helps us to locate the issue. We still don't have any confirmation from the Proxmox team on whether anyone was able to reproduce the issue.
Ok, so this should only be relevant in combination with iothread as aderumier explained.
I will check this out anyway as soon as I can spare the time.
I think if we can find this one thing that is different on your setup compared to one of our installations where we suffer from this issue, we...
That is interesting, actually. One difference I see now is that you are using "VirtIO SCSI single", which I haven't even tried; I was simply using "VirtIO SCSI".
Can you tell me the difference? Is it one single port then?
I have no I/O issues with LXC. I have a mail server running on the same host and it performs perfectly fine and does not drop or delay packets even under load.
Do you mean the transparent_hugepage setting is unrelated to the issue in general, or to the problem with my vm that doesn't boot anymore?
I can't rule out the second assumption; I haven't had the time yet to investigate any further.
Hi aderumier,
Both options were set to "madvise" on my system as well (whatever that means):
# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
# cat /sys/kernel/mm/transparent_hugepage/defrag
always defer [madvise] never
I stopped my machine, set both options to never and...
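For anyone who wants to try the same, these are the commands involved (as root; the change is not persistent across reboots):
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag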
I have PVE 5.0 running on two systems experiencing the issues stated before:
Intel® Xeon® E5-1650 v3 Hexa-Core with 256 GB DDR4 ECC RAM
Intel® Xeon® E3-1275 v5 Quad-Core with 64 GB DDR4 ECC RAM
Both systems have 2 x 4 TB SATA 6 Gb/s 7200 rpm HDDs, Enterprise Class. These are custom systems from a...
@micro Thank you, that helped a lot!
Since this issue bothered me so much, I took a new server (different hardware, fewer cores, less RAM) and installed PVE 5.0 freshly via the installer ISO. Then I created a new KVM VM with the same settings that I used for my windows machines (just with less...
@micro So you have Linux guests, right? I wonder if I can shut down my Windows VMs, switch the storage from VirtIO SCSI to IDE and boot them up more or less safely to check out if this works better.
I've never done this before and I'm a little afraid I might bork the Windows VM.
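If I do try it, I guess the CLI steps would look roughly like this; only a sketch, assuming VM ID 100 and a volume named local-lvm:vm-100-disk-1 (both placeholders, check the real names with qm config first and have a backup):
# qm config 100
# qm set 100 --delete scsi0
# qm set 100 --ide0 local-lvm:vm-100-disk-1
# qm set 100 --bootdisk ide0
As far as I understand, deleting scsi0 should only detach the disk (the volume should then show up as unusedX), but I would double-check that before touching a production VM.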
I would expect the CPU load to rise dramatically when switching from VirtIO to IDE, since IDE is fully emulated, right? Or is this not so much the case?
What about the R/W performance in your VMs with IDE?
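For a rough comparison I would run a quick sequential test inside a Linux guest; just a sketch, the file path is arbitrary and direct I/O is used so the page cache doesn't skew the numbers:
# dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct
# dd if=/root/ddtest of=/dev/null bs=1M iflag=direct
# rm /root/ddtest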
Where did you get this IO Wait graph from? This is not from the Proxmox web interface, is...
Ok, this would be too easy for a solution anyways.
I just checked how the network behaves on another system with PVE 3.4-16 and Windows Server 2012 when I stress the disk IO: doing a full backup gives me an average of 0.2 ms and a max of 8.015 ms in ping tests over the whole process...
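The test itself was nothing fancy, roughly like this (a sketch; VM ID 101 and guest IP 192.168.1.60 are placeholders):
# ping 192.168.1.60 | tee /tmp/ping-during-backup.log
in one shell on the host, and
# vzdump 101 --mode snapshot
in a second one; after the backup finishes I stop the ping with Ctrl-C and read the rtt min/avg/max summary at the end of the log.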