because now I'm out of money, what is the exact value of 32G to put in the file /etc/modprobe.d/zfs.conf
~# cat /etc/modprobe.d/zfs.conf
#
# 4 GiB = 4294967296
# 8 GiB = 8589934592
# 12 GiB = 12884901888
# 16 GiB = 17179869184
# 24 GiB = 25769803776
# 32 GiB = 34359738368
#
# Don't forget:
# update-initramfs -u
options zfs zfs_arc_min=8589934592
options zfs zfs_arc_max=17179869184
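For reference, the limits can also be applied to the running system without a reboot by writing to the ZFS module parameters under /sys (a sketch, assuming the zfs kernel module is loaded; the values are the 8 GiB / 16 GiB ones from the example file above):

```shell
# Apply the ARC limits immediately, without rebooting.
# These parameter files exist once the zfs kernel module is loaded.
echo 8589934592  > /sys/module/zfs/parameters/zfs_arc_min   # 8 GiB
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max   # 16 GiB
# The modprobe.d file (plus update-initramfs -u) makes the change
# persist across reboots; the echo above only affects the running system.
```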
$ echo "32 * 1024 * 1024 * 1024" | bc
34359738368
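The whole table of byte values in the comments can be reproduced with plain shell arithmetic instead of bc:

```shell
# Print the byte value for each GiB size used in the zfs.conf comments.
for gib in 4 8 12 16 24 32; do
  echo "$gib GiB = $((gib * 1024 * 1024 * 1024))"
done
# The last line printed is: 32 GiB = 34359738368
```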
You do not need to; my file is just an example.

In my /etc/modprobe.d/zfs.conf file I only find
options zfs zfs_arc_max= ...... Do I also have to insert options zfs zfs_arc_min?
When a cluster member determines that it is no longer in the cluster quorum, the LRM waits for a new quorum to form. As long as there is no quorum the node cannot reset the watchdog. This will trigger a reboot after the...
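Whether the node currently has quorum can be checked from the shell (a sketch; pvecm is the standard Proxmox VE cluster command, and watchdog-mux is the Proxmox watchdog multiplexer service):

```shell
# Show cluster membership and quorum state; look for "Quorate: Yes".
pvecm status
# Inspect recent watchdog activity around the reboot:
journalctl -u watchdog-mux --since "1 hour ago"
```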
How do I clear the warnings?
zpool scrub local_ZFS
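Once the scrub finishes, the error counters behind the warnings can be inspected and reset (a sketch using standard zpool commands; local_ZFS is the pool name from this thread):

```shell
# Check scrub progress/results and any devices with listed errors:
zpool status -v local_ZFS
# After the scrub completes, clear the per-device error counters
# so the warning goes away:
zpool clear local_ZFS
```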
It should; try again.

Shouldn't everything have continued to work with only one disk out?
zpool scrub local_ZFS
cannot scrub local_ZFS: currently scrubbing; use 'zpool scrub -s' to cancel scrub
With ZFS RAID 10 local they are faster.

BTW, HDDs are slow, even more so with Ceph or ZFS.
At the end of the scrub the errors are cleared. During the scrub the disks slow down, because all data is read to verify integrity.
It will be too slow at the beginning, for example for Windows as RDS.
I can't buy SSDs now; I bought 12 × 16 TB HDDs, unfortunately they gave me the wrong advice.

No, I haven't tried Ceph yet.
But Ceph or ZFS kill HDDs, because they add overhead.