Issue with Kernel 2.6.35-1-pve

I personally use a custom 2.6.38 kernel (based on Ubuntu Natty).

I use it on AMD Opteron (4x 12 cores) and Intel Xeon (55xx, 56xx) hosts.

No CPU or disk I/O problems.

I'm using iSCSI direct LUNs on a Nexenta SAN, around 20,000 IOPS read/write (virtio disks).
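For reference, this is roughly how such a direct-LUN setup looks as an iSCSI storage entry in /etc/pve/storage.cfg (sketch only; the storage name, portal and target below are placeholders, not my real values):

iscsi: nexenta-san
        portal 192.168.0.10
        target iqn.2010-09.org.example:disk1
        content images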


http://forum.proxmox.com/threads/6194-Troubles-with-latest-virtio-drivers-for-Windows-and-latest-PVE-1.8?p=36316&highlight=#post36316


Hi spirit, do you have an updated version of your custom kernel? I could share it via my Dropbox if you wish.
Or could you tell me how you compiled it? Perhaps via PM.

Thanks a lot
macday
 
Another question,

does the Debian Squeeze backports kernel 2.6.39-bpo.xx work with Proxmox? Which patches could be missing? I know it is not supported. Has anyone tried it?

macday
 
OK, I'm just testing...

Hardware: Dell PE 2950 Perc5i Sata Raid10
proxmox 1.8 -
Kernel: Linux 2.6.39-bpo.2-amd64 (from Squeeze Backports)

VM1: Debian Squeeze with Linux 2.6.39-bpo.2-amd64 (from Squeeze Backports)
CPU: 2
RAM: 4 GB
HDD: 600GB Virtio (LVM)
NIC: virtio
SW: lessfs for deduplication

Scenario: copying 160 GB from NFS to lessfs (stress test for network, disk and CPU)
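(The copy itself is nothing fancy - hypothetical example of how such a run could be done and timed; paths and server name are placeholders:)

mount -t nfs nfs-server:/export /mnt/nfs     # hypothetical NFS source share
time cp -a /mnt/nfs/testdata /mnt/lessfs/    # the ~160 GB copy onto the lessfs mount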

...running... will keep you up to date.
 
spirit, those are very good results!

Please post your custom kernel .config, or the kernel itself.

I'm testing Proxmox 2.6.32 on an i7-920 | 24 GB RAM | 2x 1 TB RAID 0 (Marvell), and everything works well.
On the 4x 12-core AMD server the tests go bad, bad, bad... :(

Please help me set the server up to test its maximum performance...

thanks
 
Here is an updated version of the 2.6.38 kernel (built from Ubuntu 2.6.38-10.46):

https://rapidshare.com/files/332034815/pve-kernel-2.6.38-1-pve_2.6.38-2_amd64.deb
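(It installs like any other kernel package - sketch:)

dpkg -i pve-kernel-2.6.38-1-pve_2.6.38-2_amd64.deb
update-grub    # refresh /boot/grub/menu.lst, then reboot into the new kernel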



Fan: my results are from a Linux VM.

Did you test from a Linux VM, or only from Windows VMs?

I'm using an iSCSI SAN with 2x 1 Gbit per KVM host, and I can saturate both links with sequential or random reads and writes.

(The SAN is a Nexenta box: 100 GB RAM, 1 TB SSD, 6 TB RAID 10 15k.)

I'm using the iSCSI LUNs directly with cache=none, so I bypass the host filesystem.
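In the VM config file that is just the usual virtio line with cache=none appended - sketch with a placeholder volume name, not my real LUN ID:

virtio0: nexenta-san:lun1,cache=none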

The VMs are Debian Squeeze with the 2.6.32 kernel.



Here are my benchmarks, run with fio.

Random read:

fio --filename=/dev/vdb --direct=1 --rw=randread --bs=4k --size=20G --numjobs=200 --runtime=10 --group_reporting --name=file1
Result: 36170 IOPS

Random write:

Test command: fio --filename=/dev/vdb --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=200 --runtime=10 --group_reporting --name=file1
Result: 35140 IOPS

Sequential read:

Test command: fio --filename=/dev/vdb --direct=1 --rw=randread --bs=1m --size=5G --numjobs=200 --runtime=10 --group_reporting --name=file1
Result: 232440 KB/s

Sequential write:

Test command: fio --filename=/dev/vdb --direct=1 --rw=randwrite --bs=1m --size=5G --numjobs=200 --runtime=10 --group_reporting --name=file1
Result: 200582 KB/s
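(Small note on the two "sequential" runs above: they still use --rw=randread / --rw=randwrite, only with a 1 MB block size. For a strictly sequential pattern, the same commands with the rw mode swapped would be:)

fio --filename=/dev/vdb --direct=1 --rw=read --bs=1m --size=5G --numjobs=200 --runtime=10 --group_reporting --name=file1
fio --filename=/dev/vdb --direct=1 --rw=write --bs=1m --size=5G --numjobs=200 --runtime=10 --group_reporting --name=file1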
 
Forgot to say: I'm using the deadline I/O scheduler.

/boot/grub/menu.lst

title Proxmox Virtual Environment, kernel 2.6.38-1-pve
root (hd0,0)
kernel /vmlinuz-2.6.38-1-pve root=/dev/mapper/pve-root ro elevator=deadline
initrd /initrd.img-2.6.38-1-pve
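(To try deadline on a running system without a reboot - assuming the disk is sda and the scheduler is compiled in - the sysfs interface works too:)

cat /sys/block/sda/queue/scheduler             # lists available schedulers, the active one in [brackets]
echo deadline > /sys/block/sda/queue/scheduler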
 
spirit, thanks.
I'll go compile and test...
I run only Windows VMs (virtio 1.1.6 for HDD & LAN).
(4x AMD 6172 / 256 GB RAM / Adaptec 6805 with 8x 1 TB RAID 0)
 
...I have bad results :)

Installed 2.6.38-1-pve (it does not support the Adaptec 6805).
Compiling the Adaptec aacraid 1.1.7-28000 driver against the 2.6.35-1-pve kernel source: compile error ('ioctl').
Compiling it against the 2.6.38.8 kernel source: compile error.
Compiling the aacraid driver unpacked from kernel 2.6.39.1 against 2.6.38.8 or 2.6.39.1: compiles OK, but loading the driver fails with "invalid format".

Built and installed kernel 2.6.39.1 using the .config from 2.6.38-1-pve - that works (rough sketch below).
Added elevator=deadline to the grub menu.lst.
Ran the VM test and got bad results: read/write speed = 30 MB/s for a virtio raw disk in a Windows guest.
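(Rough sketch of that 2.6.39.1 build, reusing the PVE config - assuming the packaged config sits in /boot; not my exact commands:)

cd linux-2.6.39.1
cp /boot/config-2.6.38-1-pve .config       # start from the PVE kernel config
make oldconfig                             # answer the prompts for new 2.6.39 options
make -j48 && make modules_install install  # build, then install modules and kernel image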

Stress test - maximum number of running VMs:
2.6.32-x-pve = 40
2.6.35-x-pve = 60
2.6.39-1 = 60

Web interface: CPU utilization = 97%

top - 12:46:28 up 9:09, 1 user, load average: 34.48, 32.06, 29.61
Tasks: 464 total, 55 running, 409 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.4%us, 93.6%sy, 0.0%ni, 2.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 264678408k total, 19292736k used, 245385672k free, 230304k buffers
Swap: 0k total, 0k used, 0k free, 123960k cached

# pveversion -v
pve-manager: 1.8-19 (pve-manager/1.8/6379)
running kernel: 2.6.39.1-pve
proxmox-ve-2.6.35: 1.8-12
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.35-2-pve: 2.6.35-12
pve-kernel-2.6.38-1-pve: 2.6.38-2
qemu-server: 1.1-30
pve-firmware: 1.0-12
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.28-1pve1
vzdump: 1.2.6-1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6

Low throughput on the data volume (RAID 0 across 8x 1 TB SAS2 7,200 rpm HDDs):
# pveperf /var/lib/vz
CPU BOGOMIPS: 211205.03
REGEX/SECOND: 856446
HD SIZE: 3023.80 GB (/dev/mapper/pve-data)
BUFFERED READS: 366.76 MB/sec
AVERAGE SEEK TIME: 11.32 ms
FSYNCS/SECOND: 1399.80
DNS EXT: 64.69 ms
DNS INT: 2.17 ms (xxx)

BIOS settings (changed these; no effect on the results):

RD890 Configuration - IOMMU: Enabled

Memory Configuration - Bank Interleaving: Auto
Memory Configuration - Node Interleaving: Disable (recommended VMware) / Auto
Memory Configuration - Channel Interleaving: Auto
Memory Configuration - CS Sparing Enable: Disabled
Memory Configuration - Bank Swizzle Mode: Enable
Memory Configuration - Ungang DCT: Always

Any ideas?

Should I try installing http://sourceforge.net/projects/kvm/files/qemu-kvm/0.15.0/ ?
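(If I go that route, a rough, untested build from the tarball would be something like the following - it would live alongside the pve-qemu-kvm package rather than replace it cleanly:)

tar xjf qemu-kvm-0.15.0.tar.bz2 && cd qemu-kvm-0.15.0
./configure --prefix=/usr/local    # keep it out of the packaged /usr paths
make && make install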


=========================================================

I downloaded and installed kernel 3.0.1 from http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.0.1.tar.bz2

Ran the HDD performance test in a Windows guest - still bad (30 MB/s read/write).

Ran the stress test - 60 WinXP VMs - good :)

Very good:
60 VMs - CPU utilization (web interface) = 10-18%
(with kernel 2.6.39.1: 45 VMs = 30% CPU, 60 VMs = 99% CPU)

top - 22:06:57 up 1:08, 1 user, load average: 0.40, 1.14, 1.52
Tasks: 458 total, 6 running, 452 sleeping, 0 stopped, 0 zombie
Cpu(s): 8.3%us, 6.5%sy, 0.0%ni, 85.2%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 264613392k total, 19032328k used, 245581064k free, 109216k buffers
Swap: 0k total, 0k used, 0k free, 28120k cached

I need to start more VMs to push CPU usage up to 80-90%.
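(Something like this hypothetical loop can be used to batch-start the clones for the stress test - not my exact commands:)

for i in $(seq 102 161); do qm start $i; sleep 5; done   # stagger the boots a little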

One question remains:
why is the disk speed so low?

Guest: WinXP SP3, virtio 1.1.6 drivers for HDD & LAN

config:
name: 101
vlan0: virtio=XX:XX:XX:XX:XX:XX
bootdisk: virtio0
virtio0: local:101/vm-101-disk-1.raw,cache=none
ostype: wxp
boot: c
memory: 256
sockets: 1
cpuunits: 8
vga: cirrus

How I create the VMs:
#!/bin/bash
# Clone VMs from a reference WinXP disk image

for (( i=102; i<999; i++ )); do
echo ${i}
# create the VM with a 2 GB raw virtio disk as a placeholder
/usr/sbin/qm create $i --name $i --vlan0 virtio --virtio0 local:2,format=raw --bootdisk virtio0 --ostype wxp --memory 256 --onboot no --sockets 1 --boot c --vga cirrus --cpuunits 8

# overwrite the freshly created disk with the reference WinXP image
cp /var/lib/vz/wxp.raw /var/lib/vz/images/$i/vm-$i-disk-1.raw

done

Am I doing this correctly - copying the reference disk image over the one that was just created?

ps: # pveversion -v
pve-manager: 1.8-19 (pve-manager/1.8/6379)
running kernel: 3.0.1-pve
proxmox-ve-2.6.35: 1.8-12
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.35-2-pve: 2.6.35-12
pve-kernel-2.6.38-1-pve: 2.6.38-2
qemu-server: 1.1-30
pve-firmware: 1.0-12
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.28-1pve1
vzdump: 1.2.6-1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6
 
Thanks e100. Somehow I missed that detail of your setup. For our test, we are only using the default local LVM (/dev/pve/data) setup with an assortment of file disk types and formats.

We've narrowed the performance issue down to a combination of image format and kernel version. VMDK on 2.6.32-4-pve performs poorly, with or without caching.

http://accends.files.wordpress.com/2011/08/proxmoxkernelanddisktypes.gif

This will be our final test - VMDK and QCOW2 against the suggested kernel issue. For now, we will stay with the original Proxmox 1.8 release kernel, based on our own study of performance between disk types - http://accends.files.wordpress.com/2011/08/proxmoxvedisks1.gif

This goes without saying, test, test, test, before you deploy into production.




Great, thank you for sharing. Very helpful.
 
FAN:


maybe this concerns you:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=634149 - "kvm extremely __slow__ under 2.6.39-2-amd64 compared to 2.6.32-5-amd64"

"which went into mainline with 2.6.35, there has beenquite some changes in PIO emulation handling in kvmwhich resulted in correct but slow (as opposed byfast but incorrect) emulation. This slowed downguests that use PIO to access disks. Among theseare WinXP (it switches from DMA to PIO after someI/O errors), WinNT (ditto) and - apparently - Hurd.Someone needs to investigate why Hurd does not useDMA in this case - PIO is long obsolete technology.I'd mark this as "notabug" (not possible with BTS)or "wontfix", but it's definitely possible tooptimize the new code further to speed things up.If it's worth the effort is another question."


Could you run your benchmark with Win2008 or Win2003 to compare results?

Also, use the virtio 1.2.0 drivers - they include block driver speedups.




https://rhn.redhat.com/errata/RHBA-2011-0782.html: virtio-win bug fix and enhancement update
 
spirit, thanks.

I tested virtio 1.1.6 vs virtio 1.2.0 on one WinXP VM:
1.1.6 = 30 MB/s
1.2.0 = 25 MB/s

If I create a new VM and install WinXP Atom, the results are:
vm-1001-disk-1.raw, virtio 1.1.6 drivers
CrystalDiskMark 3.0.1 (2 runs | 500 MB | C: 42% (854/2044 MB)), Read | Write in MB/s:
Seq: 56.44 | 224.1
512K: 50.26 | 211.3
4K: 1.821 | 9.665
4K QD32: 17.76 | 10.53

HD Tune Pro 4.6.1
read
min - 20.9 MB/s
max - 908.8 MB/s
average - 341.9 MB/s
access time - 2.42 ms
burst rate - 166.9 MB/s
cpu usage - 13.3%

graphic:
-- -- ---
| | | | | |
| | | | |
---- -- --

If I instead create a new VM and copy vm-1001-disk-1.raw to vm-1002-disk-1.raw, the results are:

vm-1002-disk-1.raw, virtio 1.1.6 driver
CrystalDiskMark 3.0.1 (2 runs | 500 MB | C: 42% (854/2044 MB)), Read | Write in MB/s:
Seq: 153.5 | 226.1
512K: 68.82 | 206.0
4K: 1.985 | 9.419
4K QD32: 17.71 | 10.39

HD Tune Pro 4.6.1
read
min - 76.3 MB/s
max - 358.1 MB/s
average - 191.6 MB/s
access time - 4.74 ms
burst rate - 133.4 MB/s
cpu usage - 19.6%

graphic:


/\/\/\/\/\/\

ps: cache - default
 
Tests on the old WinXP VL VM:


copy vm-1000-disk-1.raw
cache - default
CrystalDiskMark 3.0.1 (2 runs | 500 MB | C: 73% (1484/2044 MB)), Read | Write in MB/s:
Seq: 85.21 | 212.8
512K: 57.56 | 198.7
4K: 1.266 | 8.438
4K QD32: 10.99 | 8.929

HD Tune Pro 4.6.1
read
min - 35.8 MB/s
max - 371.9 MB/s
average - 139.2 MB/s
access time - 7.19 ms
burst rate - 131.9 MB/s
cpu usage - 12.2%

graphic:

/\//\/\/\/\/\

copy vm-1000-disk-1.raw
cache=none
CrystalDiskMark 3.0.1 (2 runs | 500 MB | C: 73% (1484/2044 MB)), Read | Write in MB/s:
Seq: 92.43 | 217.6
512K: 52.80 | 200.4
4K: 1.187 | 8.281
4K QD32: 11.74 | 9.095

HD Tune Pro 4.6.1
read
min - 45.0 MB/s
max - 327.0 MB/s
average - 139.2 MB/s
access time - 7.41 ms
burst rate - 120.2 MB/s
cpu usage - 14.2%

graphic:

/\/\/\/\/\/\

==========================
create vm-1000-disk-1.raw
cache - default
CrystalDiskMark 3.0.1 (2 runs | 500 MB | C: 73% (1483/2044 MB)), Read | Write in MB/s:
Seq: 93.92 | 219.4
512K: 55.89 | 204.3
4K: 1.503 | 8.435
4K QD32: 13.79 | 8.779

HD Tune Pro 4.6.1
read
min - 55.9 MB/s
max - 371.7 MB/s
average - 194.6 MB/s
access time - 6.27 ms
burst rate - 122.1 MB/s
cpu usage - 22.9%

graphic:

/\/\/\/\/\/\

create vm-1000-disk-1.raw
cache=none
CrystalDiskMark 3.0.1 (2 runs | 500 MB | C: 73% (1483/2044 MB)), Read | Write in MB/s:
Seq: 84.88 | 217.4
512K: 55.29 | 198.3
4K: 1.531 | 8.617
4K QD32: 13.67 | 9.314

HD Tune Pro 4.6.1
read
min - 51.3 MB/s
max - 356.8 MB/s
average - 126.9 MB/s
access time - 6.35 ms
burst rate - 116.9 MB/s
cpu usage - 15.0%

graphic:

/\/\/\/\/\/\

ps: all tests on kernel 3.0.1
 
The last couple of benchmark posts would be better if they were laid out in a structured way; the original poster had the right idea. This text-based output requires unnecessary work just to decipher. Please post the screenshots.
 
I've found the slow-HDD problem.

With kernel 2.6.35, pveperf on /dev/pve/data = 550 MB/s.
With kernel 3.0.1, pveperf on /dev/pve/data = 300 MB/s (ext3) and 310 MB/s (ext4).
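(A quick way to cross-check that read throughput outside pveperf - hypothetical, a direct sequential read straight off the LV:)

dd if=/dev/pve/data of=/dev/null bs=1M count=4096 iflag=direct   # ~4 GB O_DIRECT sequential read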

RAID 0 across 8 SAS2 HDDs on an Adaptec 6805 controller (no BBU; write cache forced to always-on in the controller BIOS).

Any ideas?
 
