cluster performance degradation

From one of my nodes; just adjust the values to your needs:
Code:
~# cat /etc/modprobe.d/zfs.conf 

#
#  4 GiB = 4294967296
#  8 GiB = 8589934592
# 12 GiB = 12884901888
# 16 GiB = 17179869184
# 24 GiB = 25769803776
# 32 GiB = 34359738368
#
# Don't forget:
# update-initramfs -u 
 
options zfs zfs_arc_min=8589934592
options zfs zfs_arc_max=17179869184

To calculate such numbers yourself:
Code:
$ echo "32 * 1024 * 1024 * 1024" | bc
34359738368
:)
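To verify that the new limits are actually in effect after the reboot, you can read back the module parameters and the live ARC size; a quick check (these are the standard ZFS-on-Linux locations):
Code:
# current ARC limits in bytes (0 means the built-in default is used)
cat /sys/module/zfs/parameters/zfs_arc_min
cat /sys/module/zfs/parameters/zfs_arc_max

# current ARC size in bytes
grep '^size' /proc/spl/kstat/zfs/arcstats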
 
After the updates I reinstalled everything with local ZFS RAID 10, and the latency now seems better; the VMs respond quickly. But I have a problem with the cluster: is it normal that if I restart one node, the other one also restarts? Currently the cluster is made up of 2 nodes, but I thought that since the VMs now run locally it could also be done with just one node. Why, if I restart pve1, does pve2 also restart and all the VMs go down?
 
You should not run a cluster with only 2 nodes. If you want to have just 2 nodes, you need a QDevice as a tie-breaker; otherwise, when one node goes down the other loses quorum, and with HA enabled it will fence (reboot) itself.
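If a third full node is not an option, any small external machine (a VM, a NAS, even a Raspberry Pi) can provide the third vote. A rough sketch, assuming the QDevice host runs Debian and is reachable at 192.168.1.50 (hypothetical address, replace with yours):
Code:
# on the external QDevice host
apt install corosync-qnetd

# on all cluster nodes
apt install corosync-qdevice

# from one cluster node, register the QDevice
pvecm qdevice setup 192.168.1.50

# verify quorum and the expected/total votes
pvecm status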
 
 
I had another problem: I made a ZFS pool with RAID 10 and 6 HDDs. I tried removing an HDD, and the pool went into SUSPENDED state and the VMs are no longer online. Shouldn't everything have kept working with only one disk out?


[screenshot attachment]

How do I run the zpool clear command?
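For reference, clearing the error state once the missing disk is back (or has been replaced) is done per pool; a minimal sketch, assuming the pool is called tank (replace with your actual pool name):
Code:
# show which vdev faulted and why the pool was suspended
zpool status -v tank

# once the device is available again, clear the errors and resume I/O
zpool clear tank

# if the disk actually has to be swapped for a new one
zpool replace tank <old-device> <new-device>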
 
No, I haven't tried CEPH yet.
But CEPH or ZFS will kill HDDs, because they add overhead.
I can't buy SSDs now; I bought 12 16 TB HDDs. Unfortunately they gave me the wrong advice.
I can tell you that with local ZFS in RAID 10 things are going much better than before, when I had Ceph and the 10G NICs.
 