I had some spare hardware lying around, so I decided to benchmark a few different local (not SAN) storage formats and try to show the pros/cons of each. Help me out if I'm missing any important points.
Test bed:
Dell R730xd, H730P RAID card (array specs at the bottom)
2x Xeon E5-2683 / 64GB RAM
1x SanDisk Ultra II PVE boot drive, attached to the H730P, all cache disabled, single-drive RAID0
8x Samsung 850 Pro 512GB, RAID10 with all cache enabled
PVE 5.0b2, Windows 7 VM with 22 cores and 8GB RAM; nothing else is running on the host and there is no other data on the storage.
I will compare LVM (thick and thin), ZFS, and EXT4 directory storage with raw + qcow2 files.
LVM thin pool (the default install) = thin provisioning, only written bytes actually use space, fast snapshotting. ~10% read perf hit vs bare LVM and supposedly slower write perf as well; probably the best choice for most setups. (Rough thick-vs-thin sketch after this list.)
LVM = supposedly the fastest, no thin provisioning, so each VM immediately uses 100% of the space you provision... not sure about the write speed shown, it's obviously slower than LVM-thin here.
ZFS = many bells and whistles, possibly unneeded tuning complexity, fast remote backups via snapshot diffs (send/receive sketch below), thin provisioning, easy expansion and more. 30-40% slower vs LVM.
EXT4-formatted drive mounted as a directory, .raw file on that directory = ~55% read perf hit vs LVM (qcow2 is known to be slightly slower still, but offers snapshots; see the qemu-img sketch below).
EXT4-formatted drive mounted as a directory, .qcow2 file on that directory = 60% read perf hit vs LVM.
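For the thick vs. thin LVM point, here's a minimal sketch of the difference at the CLI, assuming the "pve" volume group and "data" thin pool you get from a stock install (the vm-10x disk names are made up, and PVE normally does this for you when you add a disk to a VM):

```python
import subprocess

# Assumed names from a default PVE install: VG "pve", thin pool "data".
# The vm-10x-disk-0 names below are hypothetical.
VG = "pve"

# Thick LVM: the full 32G is carved out of the VG immediately,
# whether or not the guest ever writes anything.
subprocess.run(["lvcreate", "-L", "32G", "-n", "vm-102-disk-0", VG], check=True)

# Thin LVM: the volume is backed by the thin pool, and only blocks the
# guest actually writes consume pool space.
subprocess.run(["lvcreate", "-V", "32G", "-T", f"{VG}/data",
                "-n", "vm-103-disk-0"], check=True)

# lvs shows the thick LV fully allocated while the thin LV's Data% sits near zero.
subprocess.run(["lvs", VG], check=True)
```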
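And this is roughly what I mean by ZFS remote snapshot diff backups: after a full initial send, each run only ships the blocks that changed between two snapshots. Dataset, target pool and host names here are hypothetical, and it's just a sketch, not a complete backup script:

```python
import subprocess

# Hypothetical names; adjust to your pool layout and backup target.
DATASET = "rpool/data/vm-100-disk-0"
TARGET = "backup/vm-100-disk-0"
REMOTE = "root@backuphost"

def snapshot(name: str) -> None:
    """Take a named snapshot of the VM disk dataset."""
    subprocess.run(["zfs", "snapshot", f"{DATASET}@{name}"], check=True)

def send_incremental(prev: str, curr: str) -> None:
    """Ship only the blocks that changed between two snapshots to the remote pool."""
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{DATASET}@{prev}", f"{DATASET}@{curr}"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", TARGET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()

snapshot("daily-2")
send_incremental("daily-1", "daily-2")  # assumes daily-1 already exists on both ends
```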
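For the directory storage cases, the raw vs. qcow2 trade-off boils down to this: raw is a plain file with no metadata layer, while qcow2 adds copy-on-write metadata that costs some performance but gives you snapshots even on plain EXT4. Minimal sketch with made-up paths under the default /var/lib/vz directory storage:

```python
import subprocess

# Hypothetical image paths on an EXT4 directory storage.
RAW = "/var/lib/vz/images/101/vm-101-disk-0.raw"
QCOW2 = "/var/lib/vz/images/101/vm-101-disk-0.qcow2"

# raw: I/O goes more or less straight through to the file,
# but the format itself can't do snapshots.
subprocess.run(["qemu-img", "create", "-f", "raw", RAW, "32G"], check=True)

# qcow2: copy-on-write metadata adds overhead on writes, which is what
# buys you internal snapshots on plain directory storage.
subprocess.run(["qemu-img", "create", "-f", "qcow2", QCOW2, "32G"], check=True)
subprocess.run(["qemu-img", "snapshot", "-c", "before-update", QCOW2], check=True)
```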
Single SSD LVM-thin (pve OS drive):
Obviously this is all synthetic; take it for what you will, real-world application behavior may differ.
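If you want to rerun something similar on a Linux guest or directly on the host, a rough fio wrapper like this will give comparable sequential and 4k random numbers. The block sizes, queue depths and test path are just placeholders, not what was used for the screenshots above:

```python
import json
import subprocess

# Hypothetical test file on whichever storage backend you're comparing.
TESTFILE = "/mnt/testvol/fio-test.bin"

def run_fio(name: str, rw: str, bs: str, iodepth: int) -> float:
    """Run one fio job and return bandwidth in MiB/s."""
    out = subprocess.run(
        ["fio", f"--name={name}", f"--filename={TESTFILE}", "--size=4G",
         f"--rw={rw}", f"--bs={bs}", f"--iodepth={iodepth}",
         "--ioengine=libaio", "--direct=1", "--runtime=30",
         "--time_based", "--output-format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    job = json.loads(out)["jobs"][0]
    side = job["read"] if "read" in rw else job["write"]
    return side["bw"] / 1024  # fio reports bandwidth in KiB/s

for name, rw, bs, qd in [("seq-read", "read", "1M", 8),
                         ("seq-write", "write", "1M", 8),
                         ("rand-read-4k", "randread", "4k", 32),
                         ("rand-write-4k", "randwrite", "4k", 32)]:
    print(f"{name}: {run_fio(name, rw, bs, qd):.0f} MiB/s")
```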
Here are the storage specs: