Proxmox 3, Local Storage File System: XFS, ext3, or ext4

hsman (Guest)
Hi,

I am installing the Proxmox 3 ISO on an SSD, and have connected 4x 2TB disks to the same server, configured as Linux software RAID 10, for installing VMs on later.

For this RAID 10 storage (4x 2TB SATA HDDs, 4TB usable after RAID 10), I am considering XFS, ext3, or ext4. I tried all three; the stats are below.
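(For reference, a four-disk mdadm RAID 10 like this is typically built along the following lines; a minimal sketch, assuming the data disks are /dev/sdb through /dev/sde:)

# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mkfs.xfs /dev/md0        (mkfs.ext3 / mkfs.ext4 for the other runs)
# mkdir -p /vmdisk
# mount /dev/md0 /vmdisk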

XFS
# pveperf /vmdisk
CPU BOGOMIPS: 52680.40
REGEX/SECOND: 1642595
HD SIZE: 3723.95 GB (/dev/md0)
BUFFERED READS: 242.83 MB/sec
AVERAGE SEEK TIME: 23.33 ms
FSYNCS/SECOND: 83.99
DNS EXT: 200.15 ms
DNS INT: 319.92 ms (domain.net)


# dd if=/dev/zero of=output bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 1.07498 s, 2.0 GB/s

ext3
# pveperf /vmdisk
CPU BOGOMIPS: 52680.40
REGEX/SECOND: 1504592
HD SIZE: 3667.32 GB (/dev/md0)
BUFFERED READS: 275.75 MB/sec
AVERAGE SEEK TIME: 19.00 ms
FSYNCS/SECOND: 857.42
DNS EXT: 270.64 ms
DNS INT: 273.77 ms (domain.net)

# dd if=/dev/zero of=output bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 2.4559 s, 874 MB/s

ext4
# pveperf /vmdisk
CPU BOGOMIPS: 52680.40
REGEX/SECOND: 1564521
HD SIZE: 3667.32 GB (/dev/md0)
BUFFERED READS: 283.32 MB/sec
AVERAGE SEEK TIME: 19.11 ms
FSYNCS/SECOND: 43.67
DNS EXT: 192.82 ms
DNS INT: 236.91 ms (domain.net)


# dd if=/dev/zero of=output bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 1.2642 s, 1.7 GB/s



The results seem to show that ext3 has the best fsync rate, but the worst dd throughput. I wonder which one I should choose for this. I need advice.


Thank you
 
Also you need to make sure you align your disk partitions.
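(Alignment can be checked with parted; a sketch assuming the array members are partitions like /dev/sdb1. Partitions starting at sector 2048, i.e. on a 1MiB boundary, are aligned for 4K-sector drives:)

# parted /dev/sdb align-check optimal 1
# parted /dev/sdb unit s print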

RAID 10, 4x 2TB drives:

root@host-02:~# pveperf
CPU BOGOMIPS: 72529.52
REGEX/SECOND: 890159
HD SIZE: 177.18 GB (/dev/mapper/pve-root)
BUFFERED READS: 352.57 MB/sec
AVERAGE SEEK TIME: 9.05 ms
FSYNCS/SECOND: 2612.64
DNS EXT: 81.47 ms
DNS INT: 66.70 ms (google.com)


CPU BOGOMIPS: 72698.80
REGEX/SECOND: 868050
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 252.89 MB/sec
AVERAGE SEEK TIME: 8.30 ms
FSYNCS/SECOND: 2873.22
DNS EXT: 95.27 ms
DNS INT: 66.85 ms (google.com)
 
Seek times are really bad. And an FSYNC rate below 100 is IMHO unusable.

Hi Dietmar,

So is the suggestion still ext3? I hope I did not get any configuration wrong here. I wonder why the dd test shows ext3 as much slower than the other two (ext4 and XFS); does that mean writes are much slower?
 
Also you need to make sure you align your disk partitions.

[pveperf results quoted from Dragoon's post above]


Hi Dragoon,

Can you share your config? I am also using software RAID 10, but Proxmox is installed on an SSD. On the SSD I get good results like yours, but not on the RAID 10 storage where the VMs are going to be placed.
 
My suggestion is to use a reasonable HW RAID controller.

Hi Dietmar,

At this moment there is no budget for a good hardware RAID card, and a cheap RAID card is no different from software RAID, while creating the additional risk of the RAID card itself failing.

I found that there is also the option of storage on a direct block device, which means I can create a volume group and Proxmox can assign space from it. Will this be more efficient?

How can we test this?
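(If you want to try that: a minimal sketch, with /dev/md0 as the RAID 10 array and vg_vmdisk as an arbitrary volume group name. The storage is then registered in /etc/pve/storage.cfg so Proxmox hands out raw LVs to the VMs:)

# pvcreate /dev/md0
# vgcreate vg_vmdisk /dev/md0

and in /etc/pve/storage.cfg:

lvm: vmstore
        vgname vg_vmdisk
        content images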

I also created a small 80G LV, mounted with ext3, and it gives a better result:
root@proxmox:/# pveperf /vmdisk
CPU BOGOMIPS: 52680.40
REGEX/SECOND: 1628508
HD SIZE: 78.74 GB (/dev/mapper/vg_vmdisk-vmdisk)
BUFFERED READS: 265.92 MB/sec
AVERAGE SEEK TIME: 10.72 ms
FSYNCS/SECOND: 919.57
DNS EXT: 269.67 ms
DNS INT: 473.31 ms (domain.net)

Obviously, ext3 is only effective on a small partition, not on a large one. XFS, and maybe ext4, may handle large partitions better, but here the performance drops on both.
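(For reference, an 80G test LV like the one above can be created along these lines; the names match the /dev/mapper/vg_vmdisk-vmdisk device shown in the output:)

# lvcreate -L 80G -n vmdisk vg_vmdisk
# mkfs.ext3 /dev/vg_vmdisk/vmdisk
# mount /dev/vg_vmdisk/vmdisk /vmdisk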
 
Thanks for this advice. With conv=fsync, dd gets 200MB/s instead of the 900MB/s previously measured for ext3, and approximately the same for XFS, at 220MB/s instead of 2GB/s.

It seems dd gives consistent results this way.
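(For reference, the fsync-honest variant of the earlier write test is simply:)

# dd if=/dev/zero of=output bs=8k count=256k conv=fsync

(conv=fsync makes dd physically flush the data to disk before reporting, so the page cache no longer inflates the number.)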
 
My suggestion is to use a reasonable HW RAID controller.

Hi Dietmar,

There are some interesting stats here. I have a FreeNAS server up, with software ZFS (RAIDZ1), and allocated 100G of space for an NFS mount (over a 100Mbit switch).

On the storage itself, dd gets only 135MB/s, while over the network on the Proxmox server I get 9.7MB/s (which is fine, as it is just a 100Mbit switch). But pveperf shows the following:

CPU BOGOMIPS: 52680.40
REGEX/SECOND: 1609659
HD SIZE: 100.00 GB (x.x.x.x:/mnt/vPool1/iso)
FSYNCS/SECOND: 450.44
DNS EXT: 236.19 ms
DNS INT: 286.55 ms (domain.net)


Amazing: the fsync value is higher than on the ext4-mounted local storage, even though it is over a 100Mbit network connection while the local storage is on at least a 3Gb/s SATA cable!
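(For anyone reproducing this: the share can be mounted by hand and tested with pveperf; the export path is the one from the output above:)

# mkdir -p /mnt/nfs-test
# mount -t nfs x.x.x.x:/mnt/vPool1/iso /mnt/nfs-test
# pveperf /mnt/nfs-test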
 
Amazing: the fsync value is higher than on the ext4-mounted local storage, even though it is over a 100Mbit network connection while the local storage is on at least a 3Gb/s SATA cable!

pveperf is not a perfect benchmarking tool. We just use it to detect hardware problems on local storage.
 
pveperf is not a perfect benchmarking tool. We just use it to detect hardware problems on local storage.

Hi Dietmar,

So can I conclude that, for local storage, ext3 is still the best choice for large storage, like the 4x 2TB case?

And is the recommendation for this to use a block device, LVM, or a mounted local directory? It looks like LVM will be faster, right? LVM has the benefit of resizing, but will be slow on snapshots, especially if the disk allocated to the VM is large, like 1TB, right?

Thanks for the advice.
 
So can I conclude that, for local storage, ext3 is still the best choice for large storage, like the 4x 2TB case?

We currently use ext3 as default for all installations.

And is the recommendation for this to use a block device, LVM, or a mounted local directory? It looks like LVM will be faster, right?

Maybe, but I guess there is not much difference. Your drives seem to have 20ms seek times, so everything will be slow anyway.

LVM has the benefit of resizing, but will be slow on snapshots, especially if the disk allocated to the VM is large, like 1TB, right?

You can also resize files. And you can also have a file system on top of LVM, like we do with our default installation.
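(For example, growing an LV and the filesystem on top of it, reusing the vg_vmdisk names from earlier in the thread; ext3/ext4 can usually be grown while mounted:)

# lvextend -L +100G /dev/vg_vmdisk/vmdisk
# resize2fs /dev/vg_vmdisk/vmdisk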
 
[Dietmar's three replies quoted above]


Hi Dietmar,

Thanks for the advice. I installed on LVM, and the dd speed inside the VM with conv=fsync is 195MB/s.

For LVM resizing, the OS on top of the LVM device must also be installed on LVM, otherwise there is no point, right?
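(A sketch of what that looks like inside an LVM-based guest after the virtual disk has been enlarged on the host; the partition and VG names here are assumptions:)

# pvresize /dev/vda2                    (grow the PV into the new space)
# lvextend -L +50G /dev/vg_guest/root   (grow the logical volume)
# resize2fs /dev/vg_guest/root          (grow the filesystem)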
 
