cluster performance degradation

I mean compression is off by default, but if you enable it, it defaults to lz4, which is fine.
If you want to benchmark your pool for VM/LXC usage, which means reading and writing blocks inside a big image, then fio (apt install fio) is the tool for that. If you want to test individual files served like on a normal fileserver, I would prefer elbencho.
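If you go the elbencho route, a minimal run could look like the lines below. The mount point /tank, the thread count and the sizes are placeholder assumptions, and the exact flags are worth double-checking against elbencho --help for your version:

# write a test file with 4 threads, 4 GiB size, 1 MiB blocks, bypassing the page cache
elbencho -w -t 4 -s 4g -b 1m --direct /tank/elbencho-testfile
# read the same file back
elbencho -r -t 4 -s 4g -b 1m --direct /tank/elbencho-testfile
# remove the test file afterwards
rm /tank/elbencho-testfile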
 
But then, when I create the ZFS pool, which RAID level do I have to pick, mirror or RAID 10? I can put in 6 HDDs of 16 TB each.

I want to test which solution gives the best read and write performance, even at the expense of space.
 
Don't say RAID10 etc. when you mean ZFS raid levels, which have their own names, so as not to confuse any readers!
A ZFS mirror is like a RAID1 when using 2 disks, and a pool built from several 2-disk mirror vdevs automatically behaves like a RAID10.
For your 6 disks the command looks like this: "zpool create my-wish-name mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg" BUT
please do NOT use sd* disk names!! First go there: "cd /dev/disk/by-id ; ls -l" and take the disk names you see there!!
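As an illustration only, with made-up serial numbers (take the real ata-.../nvme-... names that ls -l shows for your six drives), the full-path version of that command would be:

cd /dev/disk/by-id ; ls -l
# three striped 2-disk mirrors from the six 16 TB drives (replace the placeholder IDs!)
zpool create my-wish-name \
  mirror /dev/disk/by-id/ata-EXAMPLE_16TB_SERIAL1 /dev/disk/by-id/ata-EXAMPLE_16TB_SERIAL2 \
  mirror /dev/disk/by-id/ata-EXAMPLE_16TB_SERIAL3 /dev/disk/by-id/ata-EXAMPLE_16TB_SERIAL4 \
  mirror /dev/disk/by-id/ata-EXAMPLE_16TB_SERIAL5 /dev/disk/by-id/ata-EXAMPLE_16TB_SERIAL6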
 
Yes, but these are striped mirrors, not just plain mirrors, so it is easier to call them RAID10 (which means striped mirror).
By default, raidz(1,2,3) will always be slower than anything else, because it has to do mathematical calculations for parity.
Execute this on the ZFS pool:
sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

And you will get some numbers for read and write.
 
No, I don't have to use compression; I wanted to understand whether I get more read and write performance if I disable it, or whether it changes anything at all.
Use lz4, and just make sure you monitor your node's CPU/RAM/disk usage with a monitoring tool such as https://www.librenms.org/
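To see what compression is actually doing on an existing pool, a couple of read-only commands are enough (the pool name rpool below is just an example, use your own pool name):

# which algorithm is configured (off, on, lz4, zstd, ...)
zfs get compression rpool
# how well the data already written actually compresses
zfs get compressratio rpool
# enable lz4 explicitly; this only affects data written from now on
zfs set compression=lz4 rpool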

Set up PBS and configure a proper backup schedule.

Life should be good after :)
 
Oh dear, yes, RAID10 in the PVE ZFS GUI ... it should be called multi-mirror or something like that ...
Going without compression would only possibly make sense if you put together a bunch of NVMe drives and the CPUs can no longer keep up with the data flow, which would have to be measured in that specific configuration.
 
I was reading this, and it seemed to me that I have to add 10 GB of RAM for every 8 TB; doing the calculation with 48 TB, should I then have 480 GB of RAM? In the file /etc/modprobe.d/zfs.conf I currently have the value 17179869184, which should be about 17 GB (i.e. 16 GiB) if I'm not mistaken, so should I increase it to improve the performance of the pool?

Limit ZFS Memory Usage

ZFS uses 50 % of the host memory for the Adaptive Replacement Cache (ARC) by default. For new installations starting with Proxmox VE 8.1, the ARC usage limit will be set to 10 % of the installed physical memory, clamped to a maximum of 16 GiB. This value is written to /etc/modprobe.d/zfs.conf.
Allocating enough memory for the ARC is crucial for IO performance, so reduce it with caution. As a general rule of thumb, allocate at least 2 GiB Base + 1 GiB/TiB-Storage. For example, if you have a pool with 8 TiB of available storage space then you should use 10 GiB of memory for the ARC.
ZFS also enforces a minimum value of 64 MiB.
You can change the ARC usage limit for the current boot (a reboot resets this change again) by writing to the zfs_arc_max module parameter directly:
echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
To permanently change the ARC limits, add (or change if already present) the following line to /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=8589934592
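For reference, 8589934592 is 8 * 1024^3, i.e. an 8 GiB limit, and 17179869184 is exactly 16 GiB. If you want to inspect or recompute the value on your own node, something along these lines works (the 16 is just an example target in GiB; arc_summary comes with the ZFS userland tools):

# current limit in bytes (0 means the built-in default is used)
cat /sys/module/zfs/parameters/zfs_arc_max
# bytes for a 16 GiB limit
echo $((16 * 1024 * 1024 * 1024))
# current ARC size and hit-rate statistics
arc_summary | head -n 40
# with root on ZFS, refresh the initramfs after editing /etc/modprobe.d/zfs.conf
update-initramfs -u -k all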
 
That's mostly for reading, as writing has a transactional mechanism that flushes roughly every 5 s. ARC memory also depends on your installed memory, as you cannot give ZFS 60 GB if you only have 64 GB installed. It also depends on how the server is used: for a fileserver you would give ZFS up to 80 % of the installed memory, but on a hypervisor like PVE you need the memory for your VMs/LXCs ... as a rule for PVE I would not go below 16 GB absolute or below 10 % of RAM.
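Just to make that rule of thumb concrete, here is a tiny unofficial shell snippet that prints the larger of 16 GiB and 10 % of installed RAM as a suggested lower bound for zfs_arc_max on a PVE host:

# installed RAM in bytes (MemTotal in /proc/meminfo is in kB)
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
total=$((total_kb * 1024))
# 10 % of RAM and the 16 GiB absolute floor
tenpct=$((total / 10))
floor=$((16 * 1024 * 1024 * 1024))
# print whichever is larger as the suggested minimum ARC limit
if [ "$tenpct" -gt "$floor" ]; then echo "$tenpct"; else echo "$floor"; fi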
 
