Weird pveperf results

jm.trillo
Dec 3, 2012
I'm a newbie with Proxmox. I installed it on a machine to run some tests and see what we can do with it. pveperf is giving me results that I don't know whether they're normal.

If I just run pveperf, I get results that seem to be OK, considering it's using only a RAID 1:

Code:
pveperf
CPU BOGOMIPS:      57596.88
REGEX/SECOND:      1498626
HD SIZE:           19.38 GB (/dev/sda1)
BUFFERED READS:    169.84 MB/sec
AVERAGE SEEK TIME: 7.58 ms
FSYNCS/SECOND:     1150.48
DNS EXT:           34.08 ms
DNS INT:           2.77 ms (ovh.net)

However, if I run pveperf over /var/lib/vz (the LVM volume), I get much lower fsync results:

Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      57596.88
REGEX/SECOND:      1547928
HD SIZE:           968.87 GB (/dev/mapper/pve-data)
BUFFERED READS:    187.72 MB/sec
AVERAGE SEEK TIME: 12.75 ms
FSYNCS/SECOND:     167.90
DNS EXT:           31.41 ms
DNS INT:           3.00 ms (ovh.net)


The machine has a single RAID 1 (hardware, with write cache on), with a primary partition on /, the LVM volume on /var/lib/vz, and swap. I've tried reinstalling it all without LVM and this doesn't seem to happen; I consistently got 800-900 fsyncs/second. So my questions are:

1) Is this the normal behaviour I should expect with pveperf and LVM, or am I really losing that much fsync performance? After googling a bit I have not found a similar case.

2) If this is a weird hardware/kernel compatibility problem, would I be better off reinstalling everything without LVM and forgetting about snapshot features and so on? The machine is an OVH EG 64G, btw.

Thanks in advance for any help.

PS: Sorry if this gets duplicated, I'm having some problems posting. EDIT: Ah, the "you need approval" message flashed too fast the first time I tried to post. Sorry :)
 
No, pveperf results should not differ with LVM. But make sure that your server is NOT under load when you run pveperf.
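For example, a quick check (iostat comes with the sysstat package):

Code:
uptime
iostat -x 1 5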

What filesystem do you use? Post:

Code:
cat /proc/mounts
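You mainly want to compare the barrier and data= mount options on / and /var/lib/vz, since barriers make a big difference for ext3 fsync rates. For example:

Code:
grep -e sda1 -e pve-data /proc/mounts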
 
Hi,

Check the filesystem type on / and /var/lib/vz, and check the mount options: cat /etc/fstab

Regards,
michu
 
Here:

Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda1 / ext3 errors=remount-ro 0 1
/dev/sda2 swap swap defaults 0 0
/dev/pve/data /var/lib/vz ext3 defaults 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0


The system isn't under any load and I've repeated the tests several times. I'm using ext3.

EDIT: and cat /proc/mounts:

Code:
cat /proc/mounts
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
none /dev devtmpfs rw,relatime,size=32961120k,nr_inodes=8240280,mode=755 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
/dev/sda1 / ext3 rw,relatime,errors=remount-ro,barrier=0,data=ordered 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/config configfs rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0
beancounter /proc/vz/beancounter cgroup rw,relatime,blkio,name=beancounter 0 0
container /proc/vz/container cgroup rw,relatime,freezer,devices,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,relatime,cpuacct,cpu,cpuset,name=fairsched 0 0
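For a rough cross-check of fsync speed outside pveperf, I could run something like this on both filesystems (just a sketch; the test file paths are throwaway examples and get removed at the end):

Code:
dd if=/dev/zero of=/root/ddtest bs=4k count=1000 oflag=dsync
dd if=/dev/zero of=/var/lib/vz/ddtest bs=4k count=1000 oflag=dsync
rm /root/ddtest /var/lib/vz/ddtest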
 
I'm reinstalling manually over Debian and creating the new LVM by hand, etc., to check whether it's some error in the OVH installer. If this keeps happening, will I get similar performance using plain ext3? I can live without snapshots for the moment as long as the performance is good.
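The manual LVM part would be something along these lines (just a sketch; the /dev/sda3 partition is an assumption, to be adjusted to the real layout):

Code:
pvcreate /dev/sda3
vgcreate pve /dev/sda3
lvcreate -n data -l 100%FREE pve
mkfs.ext3 /dev/pve/data
mount /dev/pve/data /var/lib/vz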
 
I suggest you test your hard drive (also check the SMART status) - is this a single drive?
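For example with smartmontools (behind a hardware RAID controller smartctl may need a -d option matching the controller to reach the physical disks):

Code:
smartctl -H /dev/sda
smartctl -a /dev/sda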
 
2x 2TB SATA3 on a hardware RAID 1. I'll run some more hardware tests once I've finished this install. Since the problem only showed up on LVM, I didn't check the hardware much.
 
It looks like it's hardware after all. I've installed again, keeping the same default layout (one partition for /, another for /var/lib/vz), except the one on /var/lib/vz is plain ext3 instead of LVM. The fsync count still drops on that partition, so there must be faulty sectors or something on the HDD.
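I'll try to confirm that with a read-only surface scan, something like (device name assumed; without -w badblocks does not write to the disk):

Code:
badblocks -sv /dev/sda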
 
