pveperf / Big Difference in FSYNCS/SECOND: Kernel 2.6.32-34 vs 3.10.0-5?

I've installed Proxmox on an SSD (Crucial M4).

Why is there such a big difference in FSYNCS/SECOND in pveperf between kernel 2.6.32-34 and kernel 3.10.0-5?

Kernel 2.6.32-34

CPU BOGOMIPS: 18358.28
REGEX/SECOND: 1097405
HD SIZE: 14.52 GB (/dev/mapper/pve-root)
BUFFERED READS: 115.48 MB/sec
AVERAGE SEEK TIME: 0.19 ms
FSYNCS/SECOND: 1603.92
DNS EXT: 256.85 ms
DNS INT: 81.54 ms (local)

Kernel 3.10.0-5

CPU BOGOMIPS: 18358.76
REGEX/SECOND: 1177520
HD SIZE: 14.39 GB (/dev/mapper/pve-root)
BUFFERED READS: 129.98 MB/sec
AVERAGE SEEK TIME: 0.19 ms
FSYNCS/SECOND: 134.95
DNS EXT: 82.03 ms
DNS INT: 54.82 ms (local)


mount

sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=3044510,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=2443980k,mode=755)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=4887940k)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,data=ordered)
/dev/sda1 on /boot type ext3 (rw,relatime,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)


df -h

Filesystem            Size  Used Avail Use% Mounted on
rootfs                 15G  6.7G  7.1G  49% /
udev                   10M     0   10M   0% /dev
tmpfs                 2.4G  308K  2.4G   1% /run
/dev/mapper/pve-root   15G  6.7G  7.1G  49% /
tmpfs                 5.0M  4.0K  5.0M   1% /run/lock
tmpfs                 4.7G   28M  4.7G   1% /run/shm
/dev/mapper/pve-data   30G  2.2G   27G   8% /var/lib/vz
/dev/sda1             487M  233M  225M  51% /boot
/dev/fuse              30M   20K   30M   1% /etc/pve
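
As far as I understand, the FSYNCS/SECOND value is simply how many small write+fsync cycles complete per second. The rough bash loop below (not the actual pveperf code, and /root/fsync_test is just a throwaway file) gives a feel for what is being measured, although shell overhead makes it slower than the real test:
Code:
#!/bin/bash
# count how many small write + fsync cycles finish in 5 seconds
count=0
start=$(date +%s)
while [ $(( $(date +%s) - start )) -lt 5 ]; do
    dd if=/dev/zero of=/root/fsync_test bs=4k count=1 conv=fsync 2>/dev/null
    count=$((count + 1))
done
echo "~$((count / 5)) fsyncs/second"
rm -f /root/fsync_test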
 
Probably in kernel 3.10 barriers are enabled by default; try mounting the SSD with the "nobarrier" option in fstab.
For example, since I'm using ext4, I have:
Code:
# grep nobarrier /etc/fstab
/dev/pve/root / ext4 errors=remount-ro,noatime,nobarrier 0 1
/dev/pve/data /var/lib/vz ext4 defaults,noatime,nobarrier 0 1
Take the time to understand the implications of nobarrier for your data safety.
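
If you want to check the effect before editing fstab, a remount should be enough. This is just a rough sketch, not tested on your setup; on some kernels ext3 only accepts barrier=0/1 instead of nobarrier:
Code:
# disable barriers on the root filesystem and re-run the benchmark
mount -o remount,barrier=0 /
pveperf /
# re-enable them afterwards
mount -o remount,barrier=1 /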
 
I used the standard installation of Proxmox 3.1, which at that time used ext3:


cat /etc/fstab

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=832c72bb-b7f5-449a-a357-111663f5123b /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
 
Yes, but disable barriers. I posted an example from my Proxmox home server, which uses ext4, just to give you an idea. Please try my suggestion while keeping your ext3 setup (of course on the partition used by pveperf, and of course reboot afterwards).
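
For your ext3 root and data partitions that would look roughly like this (a sketch adapted from your posted fstab, not tested here; ext3 traditionally takes barrier=0, while newer kernels also accept nobarrier):
Code:
# /etc/fstab -- same lines as before, with barriers disabled
/dev/pve/root / ext3 errors=remount-ro,barrier=0 0 1
/dev/pve/data /var/lib/vz ext3 defaults,barrier=0 0 1
After the reboot, run pveperf again and compare the FSYNCS/SECOND value.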