Search results

  1. Ceph Issues After Chassis Failure

    The ceph status hasn't changed since my last note. I think I'll just fully delete this and start over. I do appreciate your time.
  2. Ceph Issues After Chassis Failure

    Yeah, there is some history on this. The ceph-1 node did fail (probably in 2020, as indicated) and I've removed it, with eventual plans to reinstall the OS and put it back into the cluster. After reducing the nodes, things seem to be better. root@proxmox-ceph-2:~# ceph status cluster...
  3. Ceph Issues After Chassis Failure

    Here is the output you requested. root@proxmox-ceph-2:~# ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -1 28.36053 - 28 TiB 2.4 TiB 2.4 TiB 121 MiB 26 GiB 26 TiB...
  4. Ceph Issues After Chassis Failure

    These nodes have been rebooted multiple times.
  5. Ceph Issues After Chassis Failure

    I've restarted both manager services via the WebUI but I'm still unable to mark these as deleted.
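
    A rough sketch, not from the thread: assuming "these" are the OSDs from the failed node, a dead OSD is usually removed from the cluster maps via the CLI on a monitor node rather than the WebUI; osd.12 below is a placeholder ID.

      # take the dead OSD out of data placement, then purge it from the
      # CRUSH map, the auth database and the OSD map in one step
      ceph osd out osd.12
      ceph osd purge osd.12 --yes-i-really-mean-it

      # older, step-by-step equivalent
      ceph osd crush remove osd.12
      ceph auth del osd.12
      ceph osd rm osd.12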
  6. Ceph Issues After Chassis Failure

    I'm not concerned with data loss at this point as I don't think the data exists to be recovered. root@proxmox-ceph-2:~# ceph osd pool ls detail pool 1 'ceph' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 pg_num_target 64 pgp_num_target 64...
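
    The pg_num_target values below pg_num in this output mean a PG merge toward 64 PGs is already queued. A hedged sketch for inspecting or driving it (the pool name 'ceph' is taken from the output above; the autoscale-status command assumes the pg_autoscaler mgr module is enabled):

      # see what the autoscaler thinks each pool should have
      ceph osd pool autoscale-status

      # or set the target explicitly and let Ceph merge PGs in the background
      ceph osd pool set ceph pg_num 64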
  7. Ceph Issues After Chassis Failure

    Had a chassis die and a couple of drives die in other chassis after a power event. I know there is going to be some data loss, but I'm trying to get the cluster into a healthy state. I've been working on this for about a week now and decided to ask for help. I know there are pg issues with this cluster...
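
    Not from the post, but a minimal sketch of the commands usually used to see which PGs are unhealthy and why, before deciding what can still be recovered:

      # lists each warning/error along with the PG IDs and OSDs involved
      ceph health detail

      # PGs that are stuck in a non active+clean state
      ceph pg dump_stuck unclean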
  8. Accessing VM Discs on CEPH Filesystems

    I'm running it from one of my compute nodes, which does have access to the ceph filesystem via rdb. For what it's worth, I did find a workaround by creating a backup & then extracting it to get the raw files for qemu-img.
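
    An alternative sketch to the backup-and-extract workaround (not what the poster did): rbd can export an image straight to a local raw file, which qemu-img can then convert; the pool and image names below are placeholders.

      # copy the RBD image out of the pool as a plain raw file
      rbd export ceph/vm-100-disk-0 /tmp/vm-100-disk-0.raw

      # convert the raw file for VMware
      qemu-img convert -f raw -O vmdk /tmp/vm-100-disk-0.raw /tmp/vm-100-disk-0.vmdk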
  9. Accessing VM Discs on CEPH Filesystems

    I just did, with the same result. Note that the error isn't about an invalid/missing file from qemu-img but an unknown protocol of "rdb".
  10. Accessing VM Discs on CEPH Filesystems

    That doesn't seem to work either.
  11. Accessing VM Discs on CEPH Filesystems

    How do I access discs on CEPH filesystems (not cephfs)? Specifically, I need to move a couple of VMs to VMware, but when I try to run qemu-img on them to convert them it's saying unknown protocol ceph. I've also tried this directly on one of my CEPH storage hosts but I get the exact same error.
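
    A hedged sketch of the direct conversion, assuming the qemu-img build includes the rbd driver (the Proxmox build does) and using placeholder pool/image names; note that the protocol prefix qemu-img understands is spelled rbd:, not ceph: or rdb:.

      # read the disk straight from the pool and write a VMware-friendly vmdk
      qemu-img convert -f raw rbd:ceph/vm-100-disk-0 -O vmdk /tmp/vm-100-disk-0.vmdk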
  12. Ceph Node Down - Proper Restore Procedures

    Upon further research I see that running Proxmox off the FlexFlash is less than ideal due to the limited write endurance of the FlexFlash. Luckily for me this is just a lab environment, so I'll probably just let it (the cluster) die off.
  13. Ceph Node Down - Proper Restore Procedures

    I've just lost my root drive, which was on Cisco FlexFlash. It was supposed to be raid-0, but when I forced the master switch it completely fails to boot, whereas on the other disk I was getting fsck errors and was unable to write to the filesystem even after a repair. For simplicity I think...
  14. [SOLVED] Ceph Migrations Are Failing

    Thanks a lot for your time here. I was under the impression that the cephfs clients didn't need the ceph stuff installed, since it was working without it until a few weeks ago... not sure exactly what upgrade broke this. I do see the documentation clearly states that ceph clients also need this...
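
    For reference, a minimal sketch of what "the ceph stuff" usually means on a Debian-based client node (package names assumed from the standard Ceph/Proxmox repositories):

      # CLI tools, plus the cephfs mount helper on recent releases
      apt install ceph-common

      # only needed if mounting cephfs via FUSE instead of the kernel client
      apt install ceph-fuse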
  15. [SOLVED] Ceph Migrations Are Failing

    Ceph is 14.2.8, Proxmox VE is 6.1-8. Yes, I've got a 3-node ceph cluster with 7 compute hosts (10 servers total).
  16. [SOLVED] Ceph Migrations Are Failing

    My cephfs clients are configured as follows. /etc/pve/priv/ceph/ceph.keyring [client.admin] key = {{ REDACTED }} caps mds = "allow *" caps mgr = "allow *" caps mon = "allow *" caps osd = "allow *" /etc/pve/priv/ceph/cephfs.secret {{ REDACTED }} The {{...
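
    A hedged sketch of what PVE expects here, assuming the storage ID is 'cephfs' and the command is run somewhere the admin keyring is reachable: the .keyring file uses the full keyring format shown above, while the .secret file should contain only the bare key.

      # write just the key (no [client.admin] header, no caps) into the secret file
      ceph auth get-key client.admin > /etc/pve/priv/ceph/cephfs.secret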
  17. [SOLVED] Ceph Migrations Are Failing

    Hoping it was something as simple as the authentication being set up wrong, I followed the steps in https://pve.proxmox.com/wiki/Storage:_CephFS. Before I made any changes I validated that the appropriate keys were in place, and they all are.
  18. [SOLVED] Ceph Migrations Are Failing

    Here is my version info. I've tried quite a few different mechanisms to mount this, with no luck. Looking through the dmesg output after doing this a few times, I saw this. I don't recall setting up authentication, but maybe I did. It's been some time. Anyway, I logged into one of the ceph...
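
    A minimal sketch of a manual kernel-client mount for narrowing this down (the monitor address and mount point are placeholders; the secretfile path is the one used in this thread); failures from this path show up in dmesg.

      mkdir -p /mnt/cephfs-test
      mount -t ceph 10.0.0.11:6789:/ /mnt/cephfs-test \
          -o name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret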
  19. [SOLVED] Ceph Migrations Are Failing

    Although there are ISOs on that as well, the VM that I'm attempting to migrate doesn't have one mounted. Below is a screenshot showing that VM's setup.
  20. [SOLVED] Ceph Migrations Are Failing

    Yes, I'm using cephfs on my "compute" hosts. dmesg doesn't have any output that seems useful either.
