gurubert's latest activity

  • gurubert
    gurubert reacted to Johannes S's post in the thread Building ProxMox on Devuan with Like.
    Well, we seem to know different professionals; most I know don't even have an opinion on systemd. Those I know who use Debian (myself included on my private hardware) don't hate systemd; my guess is that they would use Devuan if they did. The...
  • gurubert
    gurubert reacted to tom's post in the thread Building ProxMox on Devuan with Like.
    You talked about building an old Proxmox Wheezy version on Devuan - this is old software with known bugs and you will put users at risk. This is a really bad idea. The current Proxmox version heavily depends on systemd, so you cannot build this with...
  • gurubert
    gurubert reacted to dietmar's post in the thread Building ProxMox on Devuan with Like.
    It seems you really believe that. But as said above, we need many features from systemd, so Devuan is not an option.
  • gurubert
    gurubert reacted to dietmar's post in the thread Building ProxMox on Devuan with Like.
    Using old software is almost always a bad idea! Besides, I think that Debian is much more stable than Devuan and easier to handle (thanks to systemd).
  • gurubert
    gurubert replied to the thread SDN / Ceph Private Network.
    Why would you want to run storage traffic through a tunnel like VXLAN?
  • gurubert
    gurubert reacted to fba's post in the thread SDN / Ceph Private Network with Like.
    Hello, the HC Ceph setup doesn't utilize SDN; it works with basic Linux network interfaces/bridges. SDN is meant for use with VMs/CTs. Would you like to describe the setup you have in mind?
  • gurubert
    Usually when building a Ceph cluster one starts with the MONs and not the OSDs.
  • gurubert
    gurubert reacted to Johannes S's post in the thread Ceph DB/WAL on SSD with Like.
    Another issue might be that EC (like ZFS RAIDZ compared to mirrors) might hurt VM performance compared to the default setup, or am I missing something? I'm aware that in larger clusters (8 nodes and more) the scale-out nature of Ceph fixes this.
  • gurubert
    gurubert replied to the thread Ceph DB/WAL on SSD.
    With m=1 you have the same redundancy as with size=2 and min_size=1, or in other words a RAID5. You will lose data in this setup. You could run with k=2 and m=2, but you will still have to cope with the EC overhead (more CPU and more network...
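    To make that redundancy comparison concrete, here is a small illustrative sketch (not from the quoted post; the helper functions are hypothetical and assume a CRUSH failure domain of host, i.e. one shard or replica per host):

        # Illustrative only: how many simultaneous host failures a pool survives,
        # assuming failure domain = host and one shard/replica per host.

        def ec_failures_tolerated(k: int, m: int) -> int:
            # An erasure-coded pool can lose up to m shards and still rebuild.
            return m

        def replicated_failures_tolerated(size: int) -> int:
            # A replicated pool keeps `size` copies; it survives size - 1 losses.
            return size - 1

        print(ec_failures_tolerated(k=2, m=1))        # 1 -> same as size=2 (RAID5-like)
        print(replicated_failures_tolerated(size=2))  # 1
        print(ec_failures_tolerated(k=2, m=2))        # 2 -> same protection as size=3
        print(replicated_failures_tolerated(size=3))  # 2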
  • gurubert
    gurubert replied to the thread Ceph DB/WAL on SSD.
    With 5 nodes you can have k=2 and m=2, which gives you 200% raw usage instead of the 300% of size=3 replicated pools. But this is still a very small cluster for erasure coding.
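    The 200% vs. 300% figure follows from simple arithmetic: an EC pool consumes (k+m)/k units of raw space per unit of user data, while a replicated pool stores size full copies. A tiny illustrative calculation (not Ceph output):

        # Raw space consumed per byte of user data (illustrative arithmetic only).

        def ec_raw_usage(k: int, m: int) -> float:
            return (k + m) / k          # e.g. k=2, m=2 -> 2.0 (200%)

        def replicated_raw_usage(size: int) -> float:
            return float(size)          # e.g. size=3 -> 3.0 (300%)

        print(f"EC k=2,m=2 : {ec_raw_usage(2, 2):.0%}")       # 200%
        print(f"replica x3 : {replicated_raw_usage(3):.0%}")  # 300%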
  • gurubert
    gurubert replied to the thread Ceph DB/WAL on SSD.
    EC with only 4 nodes is not useful. You need at least 8 or 10 nodes to get useful k and m values for erasure coding.
  • gurubert
    gurubert replied to the thread CEPH cache disk.
    The total capacity of the cluster is defined as the sum of the sizes of all OSDs. This number only changes when you add or remove disks. Do not confuse that with the maximum available space for pools, which depends on the replication factor or erasure code...
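    A rough sketch of that distinction, with made-up numbers (real usable space also depends on full ratios, balancing and failure-domain layout):

        # Illustrative: raw cluster capacity vs. space usable by a pool.

        osd_sizes_tb = [4.0] * 12            # hypothetical: 12 OSDs of 4 TB each
        raw_capacity_tb = sum(osd_sizes_tb)  # "total capacity" = sum of all OSDs = 48 TB

        size = 3                             # replicated pool with 3 copies
        usable_tb = raw_capacity_tb / size   # roughly 16 TB, before full-ratio headroom

        print(raw_capacity_tb, usable_tb)    # 48.0 16.0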
  • gurubert
    Hello, do you happen to have your Open-E JovianDSS on dedicated storage hardware? I have no experience with the platform, but from what I could google it supports NFS, iSCSI and CIFS; they even have a doc file on it...
  • gurubert
    Or, as a real-world example referenced by Proxmox developer @dcsapak in an earlier discussion on these parameters, here is another old discussion: https://forum.proxmox.com/threads/cannot-create-ceph-pool-with-min_size-1-or-min_size-2.77676/
  • gurubert
    gurubert replied to the thread Ceph DB/WAL on SSD.
    So you will lose 25% capacity in case of a dead node. Make sure to set the nearfull ratio to 0.75 so that you get a warning when OSDs have less than 25% free space. https://bennetgallein.de/tools/ceph-calculator
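    The 25% / 0.75 numbers come from the node count; a small sketch of that arithmetic, assuming 4 equally sized nodes as in this thread (illustrative, not a Ceph command):

        # With n equally sized nodes, one dead node takes 1/n of the raw capacity
        # with it. For the cluster to re-replicate that data onto the survivors,
        # the remaining OSDs need at least that much free space, hence a nearfull
        # warning threshold of 1 - 1/n.

        n_nodes = 4
        capacity_lost = 1 / n_nodes               # 0.25 -> 25%
        suggested_nearfull = 1 - capacity_lost    # 0.75

        print(capacity_lost, suggested_nearfull)  # 0.25 0.75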
  • gurubert
    Read https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/ to get an idea
  • gurubert
    Do not run pools with size=2 and min_size=1. You will lose data.
  • gurubert
    gurubert reacted to SteveITS's post in the thread CEPH cache disk with Like.
    I would guess, with 3/2 replication, at some point while rebalancing it ends up with 4 copies (better to have an extra than not enough), and eventually removes one to get back to 3.
  • gurubert
    gurubert reacted to guruevi's post in the thread CEPH cache disk with Like.
    The DB/WAL are both things you CAN put on other disks, but that is only recommended if those disks are significantly faster than your data disks, e.g. NVRAM for NVMe, or NVMe/SAS SSD for spinning disks. You can read up on exactly what they do, but the WAL is...
  • gurubert
    gurubert reacted to tcabernoch's post in the thread CEPH cache disk with Like.
    OMG. My test VM was not optimized for VirtIO. I built another VM and retested. Now it looks so much better that I doubt the results. This is much better than the old vSAN cluster delivered. Atto, default test. Atto, write cache disabled...