Well, we seem to know different professionals; most I know don't even have an opinion on systemd. Those I know who use Debian (myself included on my private hardware) don't hate systemd; my guess is that they would use Devuan if they did. The...
You talked about building an old Proxmox Wheezy version on Devuan - this is old software with known bugs and you would put users at risk. This is a really bad idea. The current Proxmox version depends heavily on systemd, so you cannot build this with...
Hello,
the HC (hyper-converged) Ceph setup doesn't utilize SDN; it works with basic Linux network interfaces/bridges. SDN is meant for use with VMs/CTs.
Would you like to describe the setup you have in mind?
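For reference, Ceph in a hyper-converged setup just binds to an IP on a regular interface or bridge, like the vmbr0 the Proxmox installer creates in /etc/network/interfaces. A minimal sketch with made-up addresses and NIC name:

  auto vmbr0
  iface vmbr0 inet static
          address 192.0.2.10/24
          gateway 192.0.2.1
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0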
Another issue might be that EC (like ZFS RAIDZ compared to mirrors) might hurt VM performance compared to the default setup, or am I missing something? I'm aware that in larger clusters (8 nodes and more) the scale-out nature of Ceph fixes this.
With m=1 you have the same redundancy as with size=2 and min_size=1; in other words, you have a RAID5.
You will lose data in this setup.
You could run with k=2 and m=2, but you will still have to cope with the EC overhead (more CPU and more network...
With 5 nodes you can have k=2 and m=2, which gives you 200% raw usage instead of the 300% of size=3 replicated pools.
But this is still a very small cluster for erasure coding.
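For illustration, such a pool could be set up with the plain Ceph CLI roughly like this (profile and pool names are made up, and for RBD you additionally need a replicated metadata pool plus the data-pool option when creating images):

  # define a k=2, m=2 profile that spreads chunks across hosts
  ceph osd erasure-code-profile set ec-22 k=2 m=2 crush-failure-domain=host
  # create the EC pool and allow partial overwrites, which RBD requires
  ceph osd pool create ecpool 128 128 erasure ec-22
  ceph osd pool set ecpool allow_ec_overwrites true
  ceph osd pool application enable ecpool rbd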
The total capacity of the cluster is defined as the sum of all OSDs. This number only changes when you add or remove disks.
Do not confuse that with the maximum space available to pools, which depends on the replication factor or erasure code...
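A quick worked example with made-up numbers: 4 nodes with 3 x 4 TB OSDs each report 48 TB total capacity, but the space pools can actually store is that raw number divided by the overhead:

  48 TB / 3       = 16 TB usable with a size=3 replicated pool
  48 TB x 2/(2+2) = 24 TB usable with a k=2, m=2 EC pool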
Hello,
do you happen to have your Open-E JovianDSS on dedicated storage hardware? I have no experience with the platform, but from what I could google it supports NFS, iSCSI and CIFS; they even have a doc file on it...
Or, as a real-world example referenced by Proxmox developer @dcsapak in an earlier discussion on these parameters:
Here is another old discussion: https://forum.proxmox.com/threads/cannot-create-ceph-pool-with-min_size-1-or-min_size-2.77676/
So you will lose 25% of the capacity in case of a dead node.
Make sure to set the nearfull ratio to 0.75 so that you get a warning when OSDs have less than 25% free space.
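The threshold can be adjusted at runtime with the standard Ceph command:

  ceph osd set-nearfull-ratio 0.75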
https://bennetgallein.de/tools/ceph-calculator
I would guess that, with 3/2 replication, at some point while rebalancing it ends up with 4 copies (better to have an extra than not enough) and eventually removes one to get back to 3.
The DB/WAL are both things you CAN put on other disks; this is only recommended if those disks are significantly faster than your data disk, e.g. NVRAM for NVMe, or NVMe/SAS SSDs for spinning disks. You can read up on exactly what they do, but the WAL is...
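As a sketch of how an OSD with a separate DB device is created (device paths are made up; pveceph osd create has equivalent options):

  # data on the slow disk, RocksDB (and implicitly the WAL) on the fast NVMe
  ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1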
OMG. My test VM was not optimized for Virtio.
I built another VM and retested.
Now it looks so much better that I doubt the results.
This is much better than the old VSAN cluster delivered.
Atto. Default test.
Atto. Write cache disabled...