I have not used GlusterFS "recently" (within the last few years), but I did use it in the past, and the results were unstable at best, abysmal at worst, and the management was miserable. We had consistent daemon crashes under benchmark loads, and even as plain backup storage it was finicky; we found it unreliable and fragile.
The Gluster mailing list is utterly dead, the website is dead, and the repo is more or less dead. It's in "critical maintenance fix" mode, which is what Red Hat said would happen.
I don't think Ceph is unusably slow, but one critical issue I see repeatedly is that people set it up with very few OSDs, which is not what it was designed for; its performance comes from spreading I/O across many disks in parallel.
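If you're looking at an existing cluster, a couple of stock ceph commands show how many OSDs you actually have and how data is spread across them (exact output varies by release):

    # OSD count and up/in state
    ceph osd stat

    # per-OSD utilization laid out along the CRUSH tree (host -> disk)
    ceph osd df tree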
In my homelab (or more accurately, home prod), I get very usable performance out of 24x 500 GB 2.5" laptop SATA drives, paired with 8x 500 GB SATA SSDs for DB/WAL, spread across 4 hosts. That is not an especially unreasonable number of disks imo, and someone running fewer or less demanding workloads would likely be fine with half as many.
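For reference, building out that kind of layout by hand is roughly one ceph-volume call per spinning disk, pointing block.db at a slice of one of the SSDs. The device paths here are placeholders, and a cephadm/orchestrator deployment would express the same thing as an OSD service spec instead:

    # one OSD per HDD, with its RocksDB/WAL on a partition or LV of an SSD
    # (device paths are examples only)
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/sda1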
The RBD volumes on this Ceph cluster back an Elasticsearch + Graylog cluster, with the bulk of the message volume being NetFlow from a pair of OPNsense router VMs, so the workload is roughly 6-10x more write than read most of the time. I'm not really using CephFS, since RBD volumes are more performant for backing VMs and containers to begin with.
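If you've never touched RBD, the model is just a pool plus named block-device images that get attached to VMs. A minimal sketch of doing it by hand (pool and image names are made up, and in practice hypervisor tooling usually drives this for you):

    # create a replicated pool and mark it for RBD use
    ceph osd pool create vmstore
    rbd pool init vmstore

    # carve out a block device image for a VM disk
    rbd create vmstore/graylog-data --size 500G

    # map it on a client host; it shows up as /dev/rbdX
    rbd map vmstore/graylog-data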