cluster performance degradation

From one of my nodes, just adjust:
Code:
~# cat /etc/modprobe.d/zfs.conf 

#
#  4 GiB = 4294967296
#  8 GiB = 8589934592
# 12 GiB = 12884901888
# 16 GiB = 17179869184
# 24 GiB = 25769803776
# 32 GiB = 34359738368
#
# Don't forget:
# update-initramfs -u 
 
options zfs zfs_arc_min=8589934592
options zfs zfs_arc_max=17179869184

To calculate such numbers yourself:
Code:
$ echo "32 * 1024 * 1024 * 1024" | bc
34359738368
:)
 
After the updates, I reinstalled everything with local ZFS RAID 10, and the latency now seems better: the VMs respond quickly. But I have a problem with the cluster. Is it normal that if I restart one node, the other one also restarts? The cluster currently consists of 2 nodes, but since the VMs now run locally, I thought it could also be done with just one node. Why does pve2 also restart, taking all the VMs down, when I restart pve1?
 
You should not run a cluster with only 2 nodes: when one node goes down, the survivor loses quorum and fences itself. If you want to keep 2 nodes, you need a QDevice as an external quorum vote.
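For reference, adding a QDevice to a 2-node cluster looks roughly like this (a sketch based on the standard Proxmox VE procedure; the third machine and its address are placeholders you must supply yourself):

```shell
# On a third machine (outside the cluster) that will provide the extra vote:
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# On one cluster node, register the QDevice (replace <QDEVICE-IP> with the
# address of the third machine; this prompts for its root credentials):
pvecm qdevice setup <QDEVICE-IP>

# Verify quorum afterwards:
pvecm status
```

The third machine does not need to be a Proxmox node; any small Debian box or VM outside the cluster works, as long as it is reachable from both nodes.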
 
 
I had another problem: I made a ZFS pool with RAID 10 and 6 HDDs. I tried to remove one HDD, and the pool went into the SUSPENDED state and the VMs are no longer online. Shouldn't everything have kept working with just one disk out?



How do I send the zpool clear command?
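In case it helps: `zpool clear` is run as root from a shell on the node. A minimal sketch (the pool name `tank` is only an example; use the name shown by `zpool status`):

```shell
zpool status -v      # inspect the pool state and see which device faulted
zpool clear tank     # clear the error counters and try to resume the pool
zpool status -v      # confirm whether the pool is back ONLINE
```

Note that if the removed disk is still missing, clearing the errors alone may not bring the pool back; the device usually has to be reattached or replaced first.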
 