Yeah, there is some history on this. The ceph-1 node did fail (probably in 2020, as indicated) and I've removed it, with the eventual plan of reinstalling the OS and putting it back into the cluster.
After reducing the number of nodes, things seem to be better.
root@proxmox-ceph-2:~# ceph status
cluster...
Here is the output you requested.
root@proxmox-ceph-2:~# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 28.36053 - 28 TiB 2.4 TiB 2.4 TiB 121 MiB 26 GiB 26 TiB...
I'm not concerned with data loss at this point as I don't think the data exists to be recovered.
root@proxmox-ceph-2:~# ceph osd pool ls detail
pool 1 'ceph' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 pg_num_target 64 pgp_num_target 64...
Had a chassis die and a couple of drives die in other chassis after a power event. I know there is going to be some data loss, but I'm trying to get the cluster into a healthy state. I've been working on this for about a week now and decided to ask for help.
I know there are pg issues with this cluster...
I'm running it from one of my compute nodes, which does have access to the Ceph storage via RBD. For what it's worth, I did find a workaround: creating a backup and then extracting it to get the raw files for qemu-img.
How do I access disks on Ceph storage (not CephFS)? Specifically, I need to move a couple of VMs to VMware, but when I try to run qemu-img on them to convert them it says unknown protocol 'ceph'.
I've also tried this directly on one of my Ceph storage hosts, but I get the exact same error.
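In case it helps anyone hitting the same error, a rough sketch of another way around it (the pool name 'ceph' and image name 'vm-100-disk-0' below are just placeholders for whatever rbd ls shows on your cluster): export the RBD image to a raw file with the rbd CLI on one of the Ceph hosts, then point qemu-img at the raw file, so qemu-img never needs rbd protocol support compiled in.

rbd ls ceph
rbd export ceph/vm-100-disk-0 /tmp/vm-100-disk-0.raw
qemu-img convert -f raw -O vmdk /tmp/vm-100-disk-0.raw /tmp/vm-100-disk-0.vmdk

The resulting vmdk (or the raw file itself) can then be copied over to the VMware side.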
Upon further research, I see that running Proxmox off the FlexFlash is less than ideal due to the limited write endurance of the FlexFlash. Luckily for me this is just a lab environment, so I'll probably just let it (the cluster) die off.
I've just lost my root drive, which was on Cisco FlexFlash. It was supposed to be RAID-0, but when I forced the master switch it completely failed to boot, whereas on the other disk I was getting fsck errors and was unable to write to the filesystem even after a repair. For simplicity I think...
Thanks a lot for your time here. I was under the impression that CephFS clients didn't need the Ceph packages installed, since it was working without them until a few weeks ago... not sure exactly which upgrade broke this. I do see the documentation clearly states that Ceph clients also need this...
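(For anyone following along, on a Debian-based client the packages in question are roughly ceph-common plus a copy of the cluster's ceph.conf and a keyring under /etc/ceph; something like the following, where ceph-mon-1 is just a placeholder for one of your monitor hosts:)

apt install ceph-common
scp root@ceph-mon-1:/etc/ceph/ceph.conf /etc/ceph/
scp root@ceph-mon-1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/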
Hoping it was something as simple as the authentication being set up wrong, I followed the steps in https://pve.proxmox.com/wiki/Storage:_CephFS. Before I made any changes I validated that the appropriate keys were in place, and they all are.
Here is my version info.
I've tried quite a few different mechanisms to mount this, with no luck.
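Roughly, the attempts looked like this, using both the kernel client and the FUSE client (the monitor address 10.0.0.1, the client name 'cephfsuser', and the secret file path are placeholders for my actual values):

mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=cephfsuser,secretfile=/etc/ceph/cephfsuser.secret
ceph-fuse -n client.cephfsuser -m 10.0.0.1:6789 /mnt/cephfs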
Looking through the dmesg output after a few of these attempts, I saw this.
I don't recall setting up authentication, but maybe I did. It's been some time. Anyway, I logged into one of the ceph...
Although there are ISOs on that as well, the VM that I'm attempting to migrate doesn't have one mounted. Below is a screenshot showing that VM's setup.