You don't actually need a backup. You just need the FSID. The rest can be recreated by hand. I would scour your logs for the FSID. If you can't find it, you're SOL.
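A few places it tends to survive, assuming a mostly default install (these are the usual paths, adjust for your setup):

grep fsid /etc/ceph/ceph.conf
grep -m1 fsid /var/log/ceph/ceph-mon.*.log
cat /var/lib/ceph/osd/ceph-*/ceph_fsid   # each mounted OSD dir keeps a copy of the cluster fsid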
If these are encrypted OSDs and you lost the keys on the monitor, you're SOL.
Edit: Actually you're probably also going to have to...
The usual. smartctl can run long tests on offline disks, and there are other options out there if you do some research.
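For example (sdX is a placeholder for your disk; the extended test runs in the drive's firmware and can take hours on big disks):

smartctl -t long /dev/sdX   # start the extended self-test
smartctl -a /dev/sdX        # check progress/results and the SMART attributes afterwards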
They were in the cluster already, so they're technically in production even if the cluster isn't. You want to test the disks for errors right after getting them so you can return them to...
I dunno. I've never run that command. Some quick research says it's a bad command to run.
If your Ceph cluster is still operational, I would back up all data and reinstall. If it's not operational, you might want to unplug all of the drives so nothing gets overwritten, and send them to a data recovery...
You should be checking every disk for errors before putting it into production. Run a test on the disk before trying to RMA it; if it's fine, your issue is elsewhere.
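If you want a burn-in beyond SMART, a destructive write pass with badblocks is one option (this wipes the disk; sdX is a placeholder, only run it on an empty drive):

badblocks -wsv /dev/sdX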
Your 4k IOPS are horrid, but that could be due to the drives themselves and not anything wrong with your setup.
100MB/s on your write test is actually pretty good for such a small cluster. I'm more interested in your read tests.
You can use the following tests for more performance statistics...
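For example, rados bench against a throwaway pool gives you cluster-level numbers (testbench is just a placeholder name; --no-cleanup keeps the objects around for the read passes):

ceph osd pool create testbench 64 64
rados bench -p testbench 60 write --no-cleanup
rados bench -p testbench 60 seq    # sequential reads
rados bench -p testbench 60 rand   # random reads
rados -p testbench cleanup

And fio on a raw device gives you per-drive 4k numbers (destructive to /dev/sdX, so only on a disk that's out of the cluster):

fio --name=4k-randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting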
As far as networking goes, depending on how big your setup is, I'd just put 2-3 2x40Gbps VPI IB cards in your main node and get 4x10Gbps splitter cables, bringing your total capacity to 16x10Gbps or 24x10Gbps. That removes the need for a switch. You can also still have a public+private network...
I found that completely reinstalling from scratch fixed the issue. Only this time I didn't install from the ISO; I installed on top of Debian 10 manually. I'm also not on a fully upgraded Proxmox install, so it could also be one of the upgrades causing the problem.
Bumpity bumpity bump.
If compression is really controlled on the pool side, then I shouldn't be seeing different OSDs with different compression statistics. I'm only using two pools, both attached to CephFS.
Are you able to access the internet through pfSense directly?
If not, this is a routing issue between your node and Hetzner. If you can, then it's a configuration issue on the pfSense side of things.
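Quickest check is from the pfSense console shell (option 8 on the menu, if I remember right):

ping -c 3 8.8.8.8      # tests routing out of the WAN
ping -c 3 google.com   # tests DNS on top of that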
Can I please get some assistance with this?
Some OSDs have compression, others don't. Example: my 1TB HDD is compressing, but my 3TB one is not. See the output below for bluestore_compressed.
root@e3-02:~# ceph daemon osd.6 perf dump | grep bluestore
"bluestore": {...
That's not what I've read in the Ceph documentation or on the users mailing list. From what I understand, both the pool and the OSD need to have compression enabled. It's apparently a bluestore value that has to be set.
However, none of the documentation covers how to change the bluestore value on...
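To see what an OSD actually has set, the admin socket works (osd.6 is just the one from my output above):

root@e3-02:~# ceph daemon osd.6 config show | grep compression

And my best guess for changing it at runtime is injectargs with bluestore_compression_mode, but I haven't confirmed it actually takes effect:

root@e3-02:~# ceph tell osd.* injectargs '--bluestore_compression_mode=aggressive'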
I've set compression up on the pool, but I can't find any documentation on how to change the bluestore values on the OSDs themselves to enable compression. How would one change this?
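For reference, the pool side was just this (the pool name is mine, and the algorithm choice is arbitrary):

root@e3-02:~# ceph osd pool set mypool compression_algorithm snappy
root@e3-02:~# ceph osd pool set mypool compression_mode aggressive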