Where were you looking to see the CPU frequency change?
On the host node, if you run "cat /proc/cpuinfo | grep MHz" you should see the CPU frequency changing.
If you run something demanding you should see the frequency ramp up where possible, depending on the CPU and the number of active cores.
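A quick way to watch this live (a minimal sketch; the "stress" utility is an assumption, any CPU-heavy workload will do):

# refresh the per-core clock speed every second
watch -n1 'grep "MHz" /proc/cpuinfo'

# in another shell, generate some load so the governor ramps the cores up
stress --cpu 4 --timeout 60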
If the m.2 is being used just for the boot disk then I see no issues at all. You do have a single point of failure, so just make sure you keep a backup of your VM config files in case the m.2 ever dies.
Obviously, if this is a mission-critical server then you would be better off placing the boot drive...
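For the config backup mentioned above, something along these lines works (a rough sketch; the destination path is just an example, put it anywhere that isn't the m.2):

# VM/CT configs live under /etc/pve on the host
tar czf /mnt/backup/pve-config-$(date +%F).tar.gz /etc/pve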
You can create one OSD per disk; the number of OSDs you require really depends on your use case.
How big are the disks? How much usable storage do you require on CEPH?
6 OSDs will work, 4 nodes would be better, but it really comes down to what you require from the cluster in performance and...
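As a rough back-of-the-envelope example of usable capacity (the disk sizes and safety margin here are assumptions, not your actual numbers):

# 6 OSDs x 1 TB raw, replicated pool with size=3
RAW_TB=6
REPLICA_SIZE=3
# keep ~20% headroom so CEPH can rebalance after a failure
echo "usable ≈ $(echo "$RAW_TB / $REPLICA_SIZE * 0.8" | bc -l) TB"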
Sorry, where did I say that?
All I stated was that you said you have 3 servers, each with 2 disks. Therefore you have the capacity for 6 OSDs.
That will work fine; the only issue being that with a replication of 2, if you lose a whole host CEPH won't be able to automatically repair and will run on a...
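If you later want the extra safety margin, checking and changing the replication is done per pool (the pool name "rbd" below is just an example):

ceph osd pool get rbd size
ceph osd pool get rbd min_size

# move to 3 copies once you have the capacity for it
ceph osd pool set rbd size 3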
12 is just a baseline recommendation.
From the looks of it you will have 6 OSDs? This would be a small cluster, but no issues, it would work and function fine. It just means that if one node goes down you'll only have 2 replicas while you repair it or bring a new node online.
In the Hetzner installimage they have a custom image that is Debian 10 + Proxmox.
Once installed, you just need to enable the bridge by adding the following to /etc/network/interfaces:
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address "Server IP"
        bridge-ports eno1
From the CEPH ML it's pretty much recommended that any disk that was good for a Filestore journal (https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/) will be good for the Bluestore WAL + DB, as they both have similar I/O demands.
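The test from that article boils down to single-job sync 4k writes, roughly like this (the device path is a placeholder, and it writes to the raw device, so only run it against an empty disk):

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test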
What you're seeing in your screenshot is the size of the disk (its max size) that you specified when creating the disk.
CEPH stores RBD data in 4MB objects, so it will only use the amount of disk space that is actually in use on the filesystem within the VM.
If you have saved lots of files and then...
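If you want to compare the provisioned size with what is actually consumed, rbd can report it per image (the pool and image names below are just examples):

# PROVISIONED is the size you set, USED is the 4MB objects actually allocated
rbd du ceph-vm/vm-100-disk-0

# or for every image in the pool
rbd du -p ceph-vm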
The main difference is that with the enterprise repo you get support via tickets, and the repo runs slightly older but better tested packages.
The open-source one is more bleeding edge (still tested) and has the message when you log in; apart from that there are no differences in what it can do.
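Under the hood it's just a different apt repository; roughly (shown here for Proxmox 6 on Debian 10 Buster):

# enterprise repo, needs a valid subscription key
# /etc/apt/sources.list.d/pve-enterprise.list
deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

# no-subscription repo, fine for home/test use
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription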
I currently have a cluster of 6 nodes. I plan to bring up some extra nodes; however, due to a network limitation I won't be able to connect them to the cluster network straight away, but I need to get some VMs running on them.
If I make sure I manually change the new VM IDs so they...
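One way to avoid an ID clash (a sketch of the idea; the 9000+ range and the VM settings are just examples):

# on the existing cluster, list the VMIDs already in use
pvesh get /cluster/resources --type vm

# on the standalone node, create new guests in a range you know is free
qm create 9001 --name temp-vm --memory 4096 --net0 virtio,bridge=vmbr0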
Caching is not recommended in CEPH and is slowly being filtered out of the software over time.
What you can do with Bluestore is place the WAL & DB of the OSD onto an SSD, so the metadata is retrieved quickly via the SSD and the SAS disk is left with just the raw data I/O. This can be read about...
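On Proxmox this is a single option when creating the OSD (device paths are placeholders; double-check the option names against your pveceph version):

# data on the SAS disk, RocksDB (and the WAL, which follows the DB by default) on the SSD
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1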
The config file just sets defaults; when you make a pool you can override these, as you must have done to create a 2/2 pool.
Doing what you did was correct, and once the sync is finished you'll get a health good message.
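For example, the cluster-wide defaults versus the per-pool override (the pool name and values below are only illustrative):

# defaults for newly created pools, set in ceph.conf:
#   osd_pool_default_size = 3
#   osd_pool_default_min_size = 2

# per-pool settings, which win over the defaults:
ceph osd pool set mypool size 2
ceph osd pool set mypool min_size 2

# watch the re-sync until it reports HEALTH_OK
ceph -s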
Regarding the upgrade of the disks from 500GB -> 1TB: how much data does each disk...