It should usually be enabled anyway, but you can check it with
hdparm -W /dev/sda
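If it turns out the write cache is off, hdparm can also switch it on (same device as above; 1 enables, 0 disables the drive's write cache):
hdparm -W1 /dev/sda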
By the way, SSDs are generally good for CEPH, but the Evo will certainly cost you a lot of performance. It is best to switch to enterprise SSDs such as the Samsung PM883 here.
The question here is why you are using this at all. If you want to ensure that no data is lost in the event of a power outage, then neither none nor writeback would be the right choice for you. More about it here:
https://pve.proxmox.com/wiki/Performance_Tweaks#Disk_Cache
I've actually always used writeback and have never had performance problems or data loss.
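For reference, the cache mode is just a disk option on the VM, so you can change it on the CLI as well as in the GUI. A quick sketch - the VMID and the storage/volume names are made up:
qm set 101 --scsi0 ceph-pool:vm-101-disk-0,cache=writeback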
You don't have to set host, but x86-64-v2-AES will cost you performance.
If you only have identical CPU models in the cluster anyway, you can set host. If you have a mixed cluster, you should find the lowest common denominator and set that; live migration between old and new nodes is then still possible. With the host CPU type you usually cannot migrate between different CPU models.
You can only benefit from the performance of the CPU with the right CPU model. Otherwise, functions may be emulated that the CPU could do natively and faster. An example of this is AES.
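Setting the CPU type is again just one VM option (VMID 101 is only an example):
qm set 101 --cpu host   # pass the node's CPU features through to the VM
qm set 101 --cpu x86-64-v2-AES   # or a common denominator model for a mixed cluster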
Here you have to differentiate. The node does NUMA anyway, but now the VM could also do NUMA.
You only get a real benefit if your application is also NUMA-aware.
In principle, NUMA is advantageous when the resources assigned to your VM can no longer be served by a single CPU with its own RAM. In multi-CPU systems, each CPU manages its own RAM. If a process runs on CPU 2 but is currently using CPU 1's RAM, CPU 2 first has to go through CPU 1 to reach that RAM. The detour via the QPI link costs performance and latency. With NUMA, an application can pin the RAM it uses to one CPU and only fall back to the other CPU's RAM when necessary.
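If you want to try it, NUMA is simply another VM option; the socket/core split should match what the node actually has. The VMID and the numbers here are only placeholders:
qm set 101 --numa 1 --sockets 2 --cores 8
numactl --hardware   # run on the node (package numactl) to see its real NUMA layout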
As far as I know, it's set to none by default now anyway.
In my opinion, this can simply be ignored today.
Often it doesn't make much of a difference whether you use librbd (user space) or krbd (kernel page cache). There is a video about this on the CEPH YouTube channel:
https://www.youtube.com/watch?v=cJegSAGWnco
You have to decide for yourself and test what makes sense or not.
However, new features are often more available for librbd than for krbd. At least it used to be that way, but I don't know if it's still that way today.
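If you want to try KRBD, it is a flag on the RBD storage itself, not on the VM (the storage ID ceph-rbd is just an example); as far as I know, running VMs only pick up the change after a stop/start or a migration:
pvesm set ceph-rbd --krbd 1   # map images via the kernel RBD module instead of librbd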
But it has to be said that you get the greatest benefit from enterprise switches and SSDs, since they account for most of the latency and therefore performance. Things like the I/O scheduler, VM cache or KRBD etc. are subtleties - if you have the time and desire to deal with them, you can do that. But you can also simply use that time for your customers instead of chasing the last 100 IOPS ;-)
//EDIT:
If you want to use an EFI/TPM device in the VM, then KRBD is always used:
https://forum.proxmox.com/threads/dual-stack-ceph-krbd-and-efi-tpm-problem.137234/