Hi
I have two servers with the same spec, except one has a hardware RAID controller in RAID 10 with BBU write-back cache on ext3. With no VMs on it yet, it gives the following:
root@vz-cpt-1:~# dd if=/dev/zero of=test bs=1024k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 56.9353 s, 302 MB/s
root@vz-cpt-1:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.36386 s, 319 MB/s
root@vz-cpt-1:~# pveperf
CPU BOGOMIPS: 57601.56
REGEX/SECOND: 1902166
HD SIZE: 196.86 GB (/dev/mapper/pve-root)
BUFFERED READS: 408.02 MB/sec
AVERAGE SEEK TIME: 6.98 ms
FSYNCS/SECOND: 4999.66
DNS EXT: 132.71 ms
DNS INT: 19.54 ms
Whereas another Proxmox server, this one running 4 KVM VMs on ZFS RAID 10, gives this:
root@vz-cpt-2:/var/lib/vz# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.882696 s, 1.2 GB/s
root@vz-cpt-2:/var/lib/vz# dd if=/dev/zero of=test bs=1024k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 6.69261 s, 2.6 GB/s
root@vz-cpt-2:/var/lib/vz# pveperf
CPU BOGOMIPS: 55994.88
REGEX/SECOND: 2703129
HD SIZE: 5448.00 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 38.94
DNS EXT: 138.54 ms
DNS INT: 20.29 ms
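One caveat I'm aware of: dd from /dev/zero probably overstates the ZFS numbers, since ZFS datasets commonly run with compression=lz4, and all-zero blocks compress to almost nothing, so very little actually hits the disks. As a rough stand-in (gzip here, not lz4, and /tmp paths are just examples), you can see how completely zeros collapse:

```shell
# Sketch: zero-filled data is almost perfectly compressible, so a ZFS
# dataset with lz4 compression enabled barely touches disk when fed
# /dev/zero. gzip stands in for lz4 to illustrate the effect.
dd if=/dev/zero of=/tmp/zeros.bin bs=1M count=16 2>/dev/null
gzip -kf /tmp/zeros.bin
ls -l /tmp/zeros.bin /tmp/zeros.bin.gz
```

The 16 MB zero file shrinks to a few KB, which is why the 2.6 GB/s figure above may say little about real disk throughput.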
So the ZFS server seems far faster, apart from the terrible fsync rate. Does this mean I should throw away the hardware RAID controller and stick with ZFS instead?
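For what it's worth, a sync-heavy test closer to what pveperf's FSYNCS/SECOND measures might be more telling than the streaming dd runs above. A minimal sketch (random data defeats compression, oflag=dsync forces every 4K write to stable storage; the /tmp path is just an example):

```shell
# Sketch: many small synchronous writes, roughly the pattern that makes
# FSYNCS/SECOND matter for VM workloads. Random input defeats ZFS
# compression; oflag=dsync syncs each 4K block to stable storage.
dd if=/dev/urandom of=/tmp/syncfile bs=4k count=256 oflag=dsync
```

If fio is available, something like fio with --fdatasync=1 gives a more controlled version of the same idea.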