Proxmox VE 2.2 released!

Re: Low performance of data transfer over cifs mounts

So the problem only shows up on SMB mounts?

Sorry, I reversed the figures for scp; the correct ones are:

with 2 cores: 4 MB/s
with 1 core: 70 MB/s

The problem disappears on a 32-bit Ubuntu 12.10 guest (but the guest kernel is much more recent, of course).
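
For anyone who wants to reproduce the scp figures, a minimal sketch of this kind of test (the host and path are placeholders, not the ones from the original test):

Code:
# pull a large file over scp and discard it locally -- 192.0.2.10 and the path are hypothetical
scp root@192.0.2.10:/tmp/1000M.zip /dev/null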

rob
 
Re: Low performance of data transfer over cifs mounts

My last post got blocked; I think it might be because it contained the real URLs.

So here goes. The transfer speed is equally bad no matter the protocol (HTTP, SCP, FTP, etc.), as long as the VM using VirtIO has more than one core assigned. If you use the Intel E1000 instead, it works decently, though not as well as VirtIO used to.
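
For reference, the NIC model is chosen on the net0 line of the VM config, so switching between the two for testing is a one-line change (a sketch; the MAC address below is a placeholder):

Code:
# in /etc/pve/qemu-server/<VMID>.conf -- the MAC address is a placeholder
# VirtIO model:
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
# or Intel E1000 model:
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0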

From the Proxmox main server:

root@asgaard:~# wget -O /dev/null SOMEURL/1000M.zip
--2012-10-31 17:16:04-- SOMEURL/1000M.zip
Resolving SOMEURL... SOMEIP
Connecting to SOMEURL|SOMEIP|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576001 (1000M) [application/zip]
Saving to: “/dev/null”


100%[==================================================>] 1,048,576,001 98.0M/s in 10s


2012-10-31 17:16:15 (96.9 MB/s) - “/dev/null”


root@asgaard:~#
*******************************************************************************************


From a KVM VM Debian x64 stable testing (4 cores) with VirtIO:
root@mimer:~# wget -O /dev/null SOMEURL/1000M.zip
--2012-10-31 17:30:07-- SOMEURL/1000M.zip
Resolving SOMEURL... SOMEIP
Connecting to SOMEURL|SOMEIP|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576001 (1000M) [application/zip]
Saving to: “/dev/null”


100%[==================================================>] 1,048,576,001 10.4M/s in 1m 57s


2012-10-31 17:32:04 (8.56 MB/s) - “/dev/null”


root@mimer:~#
*******************************************************************************************


From the same KVM VM Debian x64 stable testing (4 cores) with Intel E1000:
root@mimer:~# wget -O /dev/null SOMEURL/1000M.zip
--2012-10-31 17:35:10-- SOMEURL/1000M.zip
Resolving SOMEURL... SOMEIP
Connecting to SOMEURL|SOMEIP|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576001 (1000M) [application/zip]
Saving to: “/dev/null”


100%[==================================================>] 1,048,576,001 90.9M/s in 11s


2012-10-31 17:35:22 (87.5 MB/s) - “/dev/null”


root@mimer:~#
*******************************************************************************************


Before, I had better performance with VirtIO than with the Intel E1000, so I find it a bit odd that the performance is so bad after upgrading to Proxmox 2.2.
 
Re: Low performance of data transfer over cifs mounts

Are we the only two experiencing this problem?

Can anyone from the Proxmox team reproduce this bug?
 
Re: Low performance of data transfer over cifs mounts

Not on Debian testing (wheezy), but on Debian stable (squeeze). Upgrading the kernel inside squeeze (to 3.2, from backports) seems to fix the issue.
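
A minimal sketch of that kernel upgrade, assuming a stock squeeze guest with no backports entry yet (run inside the VM; the package name is the one confirmed later in this thread):

Code:
# add squeeze-backports and pull in the 3.2 kernel -- adjust to your setup
echo "deb http://backports.debian.org/debian-backports squeeze-backports main" >> /etc/apt/sources.list
apt-get update
apt-get -t squeeze-backports install linux-image-3.2.0-0.bpo.3-amd64
reboot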
 
Re: Low performance of data transfer over cifs mounts

Not on Debian testing (wheezy), but on Debian stable (squeeze). Upgrading the kernel inside squeeze (to 3.2, from backports) seems to fix the issue.

Thanks Tom, you're right; I wrote it wrong. The server I tested with was actually stable, and upgrading the kernel from backports to 3.2 solved it. Thanks!
 
Re: Low performance of data transfer over cifs mounts

Thanks Tom, you're right; I wrote it wrong. The server I tested with was actually stable, and upgrading the kernel from backports to 3.2 solved it. Thanks!

I confirm, too: linux-image-3.2.0-0.bpo.3-amd64 from squeeze-backports does not exhibit the issue.

rob
 
Re: Low performance of data transfer over cifs mounts

I confirm, too: linux-image-3.2.0-0.bpo.3-amd64 from squeeze-backports does not exhibit the issue.

rob

Hi,

Is this problem affecting Ubuntu 10.04 LTS (x64)? I planned to upgrade my server from 2.1 to 2.2 this weekend...
I have some machines with 2 cores assigned... If I understand correctly, after reducing the cores to 1 everything works OK, yes?

Regards,
michu
 
Re: Low performance of data transfer over cifs mounts

I suggest you assign one core for now. I just did some tests here with 10.04 LTS (x64): I got 33 MB/s with two cores and 100 MB/s with one core. But we are looking for a fix; let's see.
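
A sketch of that workaround from the host shell, assuming VM ID 100 (a placeholder) and a VM restart afterwards:

Code:
# reduce the guest to a single core -- 100 is a placeholder VMID
qm set 100 -sockets 1 -cores 1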
 
Re: Low performance of data transfer over cifs mounts

I suggest you assign one core for now. I just did some tests here with 10.04 LTS (x64): I got 33 MB/s with two cores and 100 MB/s with one core. But we are looking for a fix; let's see.

We did the same, being currently unable to upgrade that host to squeeze.

rob
 
Re: Low performance of data transfer over cifs mounts

Having trouble under 2.2 with the management interface (a port-channel/bond0):

00:04.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet Pro 100 (rev 10)
00:05.0 Ethernet controller: 3Com Corporation 3c905B 100BaseTX [Cyclone] (rev 30)

It is dropping packets now and SSH sessions are dropping; this started with the upgrade from 2.1 to 2.2. I would roll back if I could do it easily... will suffer for now.
 
Re: Low performance of data transfer over cifs mounts

I suggest you assign one core for now. I just did some tests here with 10.04 LTS (x64): I got 33 MB/s with two cores and 100 MB/s with one core. But we are looking for a fix; let's see.
Did you find the fix? I am facing the same problem.
 
Re: Low performance of data transfer over cifs mounts

I've installed PVE 2.2 on an HP server (2x Xeon, HW RAID with battery, 2x SAS disks), and the install process looked fast and rather good. But when I created a VM with 8 cores (4 cores x 2 sockets) and installed Windows 2003 R2 x86 on it, the VM is rather slow while CPU usage goes up to 100%. The same VM used on a PVE 2.1 server shows no slowness. Then I created a new VM with several cores and installed Linux on it: the same slowness.

The forum gave me the idea that this is the "irqchip" problem, so I gave it a try and added
Code:
args: -machine pc,kernel_irqchip=off
to the VM .conf file.
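
For clarity, a hypothetical excerpt of where that line ends up (the VMID and the other settings are placeholders):

Code:
# /etc/pve/qemu-server/<VMID>.conf -- hypothetical excerpt
args: -machine pc,kernel_irqchip=off
sockets: 2
cores: 4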

To my great surprise, the Linux VM benefited from that, while the Windows machine won't even boot.

OK, I reinstalled the server from the Debian disk and installed PVE on it using the apt repository. The same picture.

As a check, I used the pvetest repository instead of the plain pve one, and installed PVE that way. And this time the VMs are working much better!
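
For reference, the repository switch is a one-line difference in apt; a sketch for a squeeze-based PVE 2.x host (verify against the official wiki before using this on production):

Code:
# /etc/apt/sources.list -- pvetest instead of the stable pve repository
deb http://download.proxmox.com/debian squeeze pvetest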

So the question is: are there any plans to fix that "slowness issue" in the upcoming PVE version (2.3?), and/or will there be a package upgrade for 2.2 in the near future (so I'll wait before rolling 2.2 out on other servers), or should I use pvetest to install 2.2 hosts? The latter is really annoying, as setting up a production server from a test branch is not good anyway.

Thank you anyway for a great product like PVE!
 
Re: Low performance of data transfer over cifs mounts

The suggested fix is to install a newer guest kernel.

I'd like to, but the primary guest OS is exactly that, Win2003R2x32, and it won't work at all with "...kernel_irqchip=off". This fix looks like it is coming in 2.3, but it does not look like it is considered a problem at all.

Just for a minute: why do you think that the inability to run an old OS under the hypervisor (or only being able to run it really slowly) is not a bug and should be fixed on the guest OS side? How would you feel if you had your car repaired by the official service and afterwards found out that you cannot sit in it, because they pushed the roof down an inch or two, and when you ask them they just recommend using a shorter ("not as tall as you") driver instead of you? ;)
 
Re: Low performance of data transfer over cifs mounts

If you have an issue with Win2003R2x32, please open a new thread; include all info and also your VMID.conf file.

btw, I am not aware of any problems with this guest OS.
 
Re: Low performance of data transfer over cifs mounts

How would you feel if you had your car repaired by the official service and afterwards found out that you cannot sit in it, because they pushed the roof down an inch or two, and when you ask them they just recommend using a shorter ("not as tall as you") driver instead of you? ;)

You pay for your official car service (but you do not pay me)?
 
Re: Low performance of data transfer over cifs mounts

You pay for your official car service (but you do not pay me)?

Gotcha! Yes, you're right; all I can do is thank you for your help anyway. Seriously, you guys are doing a really great job!

But (I think you'll agree) there's a kind of bug here (or is this an intended feature?), as PVE 2.1 was able to run the same VM without that slowness.

OK, I'll open a separate thread; maybe someone else will find the discussion there useful.
 
