I just installed a new SSD and am doing some performance tweaks before moving VMs to the box.
I initially thought ext4 would be the better option as it's the more modern filesystem.
When running pveperf, though, I was a bit concerned by the low FSYNCS/SECOND, as I thought SSDs would kill at this. Here is my initial test with ext4:
Code:
root@proxdev:~# pveperf
CPU BOGOMIPS: 23145.28
REGEX/SECOND: 1085182
HD SIZE: 27.19 GB (/dev/mapper/pve-root)
BUFFERED READS: 514.59 MB/sec
AVERAGE SEEK TIME: 0.07 ms
FSYNCS/SECOND: 584.79
DNS EXT: 188.68 ms
DNS INT: 1.77 ms (opcommshq.local)
It's not terrible, but definitely not what I was expecting.
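In case anyone wants to reproduce this outside of pveperf, here's a rough sketch of what the FSYNCS/SECOND figure measures: repeated small writes, each followed by an fsync. (This loop is my own approximation, not pveperf's actual code; `dd`'s `conv=fsync` calls fsync on the output file before exiting. Run it on the filesystem you want to measure.)

```shell
#!/bin/sh
# Rough approximation of pveperf's FSYNCS/SECOND: time N small
# synchronous writes. Each dd invocation writes 512 bytes and then
# fsyncs before exiting (conv=fsync).
N=50
f=./fsync_test.bin
start=$(date +%s%N)    # GNU date: nanoseconds since the epoch
i=0
while [ "$i" -lt "$N" ]; do
    dd if=/dev/zero of="$f" bs=512 count=1 conv=fsync 2>/dev/null
    i=$((i + 1))
done
end=$(date +%s%N)
rm -f "$f"
elapsed_ms=$(( (end - start) / 1000000 ))
echo "$N fsyncs in ${elapsed_ms} ms"
```

Dividing N by the elapsed time gives an fsync rate in the same ballpark as pveperf's number, though dd's per-invocation overhead makes it a bit pessimistic.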
Here are my mount points (note: I added noatime just as a test; it made no significant difference):
Code:
root@proxdev:~# cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=3070269,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=2458496k,mode=755 0 0
/dev/mapper/pve-root / ext4 rw,noatime,errors=remount-ro,barrier=1,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=4916980k 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/mapper/pve-data /var/lib/vz ext4 rw,noatime,barrier=1,data=ordered 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
beancounter /proc/vz/beancounter cgroup rw,relatime,blkio,name=beancounter 0 0
container /proc/vz/container cgroup rw,relatime,freezer,devices,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,relatime,cpuacct,cpu,cpuset,name=fairsched 0 0
I then read through some of the threads here and decided to reinstall using ext3. To my surprise, the numbers are now through the roof:
Code:
root@pve:~# pveperf
CPU BOGOMIPS: 23147.12
REGEX/SECOND: 1067073
HD SIZE: 27.31 GB (/dev/mapper/pve-root)
BUFFERED READS: 514.26 MB/sec
AVERAGE SEEK TIME: 0.07 ms
FSYNCS/SECOND: 3794.54
DNS EXT: 199.36 ms
DNS INT: 1.94 ms (opcommshq.local)
Mount points:
Code:
root@pve:~# cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=3070266,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=2458496k,mode=755 0 0
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=4916980k 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
beancounter /proc/vz/beancounter cgroup rw,relatime,blkio,name=beancounter 0 0
container /proc/vz/container cgroup rw,relatime,freezer,devices,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,relatime,cpuacct,cpu,cpuset,name=fairsched 0 0
Why the huge difference? I hadn't tweaked anything; it was a fresh installation on both occasions. Needless to say, I'm going to stick with ext3.
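For what it's worth, the one difference I can spot between the two installs, beyond the filesystem type itself, is in the mount options above: the ext4 mounts show barrier=1 while the ext3 mounts show barrier=0. A quick way to pull just those fields out of /proc/mounts for comparison (a sketch; adjust the mount points to taste):

```shell
# Print device, filesystem type and mount options for the root and
# VM-storage filesystems, so the barrier/data settings are easy to
# compare side by side between installs.
awk '$2 == "/" || $2 == "/var/lib/vz" { print $1, $3, $4 }' /proc/mounts
```

I haven't confirmed that the barrier setting alone explains the gap, but it's the only option that differs between the two listings.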
Relevant versions:
Code:
root@pve:~# pveversion --verbose
proxmox-ve-2.6.32: 3.3-147 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-1 (running version: 3.4-1/3f2d890e)
pve-kernel-2.6.32-37-pve: 2.6.32-147
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.3-20
pve-firmware: 1.1-3
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-31
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-12
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
TIA