I want to benchmark my ZFS pools on a server that is already running with data (mostly testing data, but I don't want to lose it). For that purpose I found this, a ZFS benchmark done by the Proxmox team, so it looks like a great fit.
Does anyone know how to run the benchmark on a ZFS pool? And how can I run it inside a Linux VM and a Windows VM?
In this PDF I saw the command that was used: https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020
command: fio --ioengine=libaio --filename=/dev/sdx --direct=1...
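Note that the command from the PDF writes directly to a raw device (`/dev/sdx`), which destroys whatever is on it — that is fine for the Proxmox team's lab disks, but not for a pool that already holds data. A safer variant (my sketch, not from the PDF; the pool name `tank` and the size/runtime values are assumptions) points fio at a throwaway dataset on the pool instead:

```shell
# Create a disposable dataset so existing data is never touched.
zfs create tank/fio-test

# 4k random sync writes, roughly matching the PDF's access pattern.
# Note: older ZFS releases reject --direct=1 (no O_DIRECT support);
# drop it there and rely on --sync=1 instead.
fio --ioengine=libaio --directory=/tank/fio-test --name=randwrite \
    --rw=randwrite --bs=4k --size=4G --numjobs=1 --iodepth=1 \
    --direct=1 --sync=1 --runtime=60 --time_based --group_reporting

# Remove the test dataset afterwards.
zfs destroy tank/fio-test
```

Inside a Linux or Windows VM the same fio invocation can be run against a file on the guest's file system; for Windows, the fio Windows build uses `--ioengine=windowsaio` instead of `libaio`.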
After fighting with ZFS's memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID1), ext4, and simple qcow2 images for the VMs, stored on the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and...
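For reference, the setup described above can be sketched roughly like this (a minimal sketch; the device names, mount point, and image size are placeholders, not taken from the post):

```shell
# Two-disk RAID1 mirror via mdraid; /dev/sdb and /dev/sdc are
# placeholder disks -- adjust to your hardware.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Plain ext4 on top of the mirror, mounted where the VM images live.
mkfs.ext4 /dev/md0
mkdir -p /var/lib/vz/images
mount /dev/md0 /var/lib/vz/images

# A simple qcow2 image used as a VM disk.
qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-0.qcow2 32G
```

The trade-off named in the post is real: qcow2 on ext4 on md adds copy-on-write and file-system layers that raw zvols avoid, but it sidesteps ZFS's ARC memory pressure.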
I have set up (and configured) Ceph on a 3-node cluster.
All nodes have
- 48 HDDs
- 4 SSDs
For best performance I set up every HDD as a data device and every SSD as a journal (log) device.
This means I created 12 partitions on each SSD and created an OSD like this on node A:
pveceph createosd /dev/sda -journal_dev...
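With 48 HDDs and 4 SSDs per node, that works out to 12 journal partitions per SSD, one per HDD. A sketch of that layout (filestore-era `pveceph` syntax; the device names and the 10G partition size are my assumptions, adjust to your hardware):

```shell
# Carve 12 journal partitions out of one SSD (placeholder /dev/sde).
for i in $(seq 1 12); do
    sgdisk -n "${i}:0:+10G" /dev/sde
done

# One OSD per HDD, each pointed at its own SSD journal partition.
pveceph createosd /dev/sda -journal_dev /dev/sde1
pveceph createosd /dev/sdb -journal_dev /dev/sde2
# ... repeat for the remaining HDDs and journal partitions.
```

On current Ceph releases (BlueStore), the equivalent role is played by a DB/WAL device rather than a filestore journal, so the flag names differ.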
An interesting article can be read here: http://www.ilsistemista.net/index.php/virtualization/47-zfs-btrfs-xfs-ext4-and-lvm-with-kvm-a-storage-performance-comparison.html
It compares the performance of various file systems, and of combinations of file systems plus volume managers, used as storage...