Hi,
we added an email address to Administration -> User Blacklist:
But the email is still delivered:
Sep 29 12:52:34 pmg-01 pmg-smtp-filter[1815039]: 401A7633578ECED94F: SA score=1/5 time=5.279 bayes=0.00 autolearn=no autolearn_force=no...
A hypervisor with a single disk is nonsense unless you have multiple such hypervisors backed by network storage.
Solution: 2x SSD RAID (PVE + VM/CT) + HDD RAID as data/backup storage only, or 2x SSD RAID (PVE) + 2x SSD RAID (VM/CT), etc. (see the sketch below).
Drop 10 Gbps RJ45 and use SFP+ only (10/25 Gbps variants, DAC or fibre). Power consumption and latency are both better.
You can use an extra 2x NVMe disks per node as a separate data pool.
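As an illustration of the second variant, a minimal sketch assuming ZFS mirrors; the pool name, storage ID and device paths are placeholders, not from this thread:

# mirrored SSD/NVMe pool for guest disks (adjust devices to your hardware)
zpool create -o ashift=12 vmpool mirror /dev/nvme0n1 /dev/nvme1n1
# register it in PVE as storage for VM images and container root filesystems
pvesm add zfspool vmpool -pool vmpool -content images,rootdir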
ad 1]
a] 1 corosync link on the Ceph bond + 1 corosync link on the mesh
b] 1 corosync link on a 1 Gbps link via a dedicated switch + 1 corosync link on the mesh
c] 1 corosync link on 10 Gbps + 1 corosync link on 10 Gbps - all without mesh
etc...
tl;dr: split the corosync links so they don't depend on one logical... (see the sketch below)
Check https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network and the other network-related parts of https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster.
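As an illustration of variant c] (cluster name and addresses are placeholders, not from this thread), two independent corosync knet links can be defined when the cluster is created and joined:

# on the first node: create the cluster with two separate corosync links
pvecm create mycluster --link0 10.10.10.1 --link1 10.10.20.1
# on every other node: join and pass its own address for both links
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.10.20.2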
You can reuse the same node name. Search the PVE docs for removing/adding a cluster node.
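A minimal sketch of that procedure, with a hypothetical node name and cluster IP; read the "Remove a Cluster Node" section of the docs carefully before running anything:

# on a remaining cluster member, after the old node is powered off for good
pvecm delnode oldnode
# on the reinstalled node (same name is fine), join the cluster again
pvecm add 10.10.10.1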
Non-UEFI boot way: https://aaronlauterer.com/blog/2021/move-grub-and-boot-to-other-disk/ (or search Google).
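Roughly, on a legacy-BIOS Debian/PVE install the final step of that post boils down to something like the following (the disk name is a placeholder; copying /boot and fixing fstab come first, see the article):

# reinstall the boot loader onto the new disk and regenerate its config
grub-install /dev/sdb
update-grub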
Do you really understand what you are benchmarking? You can't calculate Ceph IOPS from raw SSD IOPS.
Read this: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
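If you want numbers that reflect the whole Ceph stack rather than a single SSD, benchmark through Ceph itself, e.g. with rados bench against a scratch pool (the pool name and runtime here are placeholders):

# 60 s write test with 16 concurrent 4M ops, keeping the objects for the read test
rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
# sequential read of the objects written above
rados bench -p testpool 60 seq -t 16
# remove the benchmark objects afterwards
rados -p testpool cleanup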