ext4 with lvm vs zfs raid 10

sahostking

Hi, I have a test box with Proxmox 3.4 (not moving to 4 yet), but I would like to know whether I should go with LVM on ext4 across 6 disks, or ZFS RAID 10 across 6 disks. The reason is snapshots, but I also need good performance for containers.
 
Hi,
perhaps there is another option: LVM + ext4 on a (good) HW RAID controller ;)

You must test it on your config!
I did such a test some months ago: 6 HDDs as ZFS RAID-Z2 (with an SSD for journal/read cache, i.e. ZIL + L2ARC) against 5 disks in RAID 5 on an Areca SAS controller.
All disks were 24/7 SATA disks in this test!
The ZFS storage was much, much slower for me than the RAID volume, so we stayed with the HW RAID controller for this server.
Perhaps it looks better with different hardware (it wasn't a lack of RAM; the server had 32 GB).

Udo
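Udo's advice to test on your own config can be followed with fio. This is only a sketch, assuming fio is installed and that /tank/testfile sits on the pool or volume under test (both assumptions); 4k sync random writes approximate the load a ZIL absorbs:

```shell
# Hypothetical benchmark: 4k synchronous random writes for 60 seconds.
# Adjust --filename to a path on the storage you want to compare.
fio --name=randwrite --filename=/tank/testfile --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --sync=1 --runtime=60 --time_based --group_reporting
```

Running the same job file against both the ZFS pool and the HW RAID volume gives directly comparable IOPS and latency numbers.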
 
If you are using an SSD ZIL (log), much depends on the quality of that SSD: not only raw read or write speed but also latency. For random writes, latency is crucial, and since the ZIL sees only random writes, you need the kind of SSD that favors random writes. To improve read speed you should add a cache. This should also be an SSD, and since the cache is a read cache, this SSD should be optimized for fast reads.
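Attaching such SSDs to an existing pool takes one command each. A minimal sketch, assuming a pool named "tank" and placeholder device paths (in practice, use stable /dev/disk/by-id/ names):

```shell
# Add a low-latency SSD partition as the separate log device (slog/ZIL).
zpool add tank log /dev/disk/by-id/ata-LOG_SSD-part1

# Add a read-optimized SSD partition as L2ARC (read cache).
zpool add tank cache /dev/disk/by-id/ata-CACHE_SSD-part1

# Verify that "logs" and "cache" sections now appear in the pool layout.
zpool status tank
```

Note that the log device can be small (seconds of write throughput), while the cache device benefits from being larger.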
 

Hi mir,
this is the reason why I used an Intel DC S3700 for this test (a very good SSD that I also use for Ceph journaling).

Udo
 
Hi, I would love to set up a ZFS RAID 10 with 4 disks and an SSD DC S3710 (I have problems after boot, but that's in another thread), but your message scares me! I have 32 GB of RAM too.
Just out of curiosity, what model of Areca card are you using? How did you set things up so ZFS gains "direct access" to the disks? Is that card well supported by the Proxmox drivers, or did you have to add more recent ones from the vendor's site (if so, how did you inject them during installation?), and which driver is it using? I was unable to make an Areca HBA 1320 work (lots of I/O errors, unusable).
Thanks a lot
 
Hi,
the SATA disks (and the SSD for ZIL + caching) were connected to an LSI SAS HBA (LSI 9201-16i), using the mpt2sas driver.
The RAID controller is an ARC-1880.

The SAS HBA from Areca is unfortunately not really Linux-aware; I was also unable to use it when I gave it a try (some time ago).

Udo
 
Hi,

I'm noticing something. We have the following:

6 x 2 TB disks which we make into a ZFS RAID 10, where the OS and virtual machines reside.
Should we rather use 2 x SSD (RAID 1?) for the OS and then 4 x enterprise SATA disks (ZFS RAID 10)?

Will we see a massive difference or not?

I see a lot of you choose SSDs for the OS, or am I wrong?

Note: we want a setup where we can host VPS servers for a few of our clients, but obviously not too many.
 
In terms of random I/O performance you will see a significant improvement with SSD storage for the OS. It then depends on how much I/O you are doing: if you have the usual boot + app startup + network I/O, you will benefit when booting multiple VMs simultaneously.
If you seldom reboot the hypervisor(s), then a "RAID 10" with L2ARC and slog will be the best option in terms of performance per price.
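For the 6-disk case from the question, such a "RAID 10" with slog and L2ARC can be sketched as three mirrored pairs plus two SSD partitions. Pool name and device names below are placeholders, not a recommendation for your exact hardware:

```shell
# Sketch: 6-disk ZFS "RAID 10" (stripe of three mirrors) with slog and L2ARC.
# Replace /dev/sdX and the SSD partitions with your real devices
# (preferably /dev/disk/by-id/ paths).
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  log   /dev/disk/by-id/ata-SSD-part1 \
  cache /dev/disk/by-id/ata-SSD-part2
```

This keeps all 6 spindles in the data pool, and the SSD accelerates sync writes (slog) and repeated reads (L2ARC) without reserving two whole disks for the OS.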
 
