Hello all, I currently have 4 servers, each running an NVMe disk on a PCIe card. I set up a cluster and a Ceph cluster on it, but I'm getting really poor performance. When I run the Ceph bench I get about 140 MB/s over a 1G line. I'm going to upgrade to 10G next week (waiting on some NICs), but shouldn't I be getting more than 140 MB/s with 1G Ethernet? The NVMe disks are rated for something like 4000 MB/s (I know distributed will never be close to bare metal)...
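For context, here's a rough back-of-envelope calculation of what a gigabit link can actually deliver as payload, assuming typical Ethernet/IP/TCP framing overhead of around 6% (the exact overhead depends on MTU and protocol, so treat the numbers as approximate). A single 1G link tops out well under 140 MB/s, so an aggregate 140 MB/s from the bench likely already includes reads served from the local OSD:

```python
# Back-of-envelope payload ceiling for a network link.
# The 0.94 efficiency factor is an assumption for typical
# Ethernet/IP/TCP overhead at standard MTU; real numbers vary.

def line_rate_mb_s(link_gbit: float, efficiency: float = 0.94) -> float:
    """Approximate usable payload throughput in MB/s for a link speed in Gbit/s."""
    return link_gbit * 1000 / 8 * efficiency

print(round(line_rate_mb_s(1)))   # 1 GbE:  ~118 MB/s usable
print(round(line_rate_mb_s(10)))  # 10 GbE: ~1175 MB/s usable
```

So the 10G upgrade should move the bottleneck off the network; after that, single-disk-per-node Ceph latency becomes the limiting factor instead.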
That being said, this is for a homelab. I run things like Home Assistant for my home automation that I want to keep up as much as possible, so that's why I picked Ceph in the first place, for the redundancy. I'm also doing backups to an Unraid server. Should I keep using Ceph? Is it not worth it? Should I switch to something else? I keep reading that you need a ton of servers or disks for Ceph to make sense... I only have 4 servers and they each have 1 disk in Ceph.