Search results

  1. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    So for the record, I've succeeded in getting Ceph working. First there were some ghost monitors that I managed to delete from the monmap. Then I had some ACL issues on the directory structure: rocksdb: IO error: While opening a file for sequentially reading...
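    For anyone hitting the same thing, the usual recipe looks roughly like the sketch below. The monitor ID "server" and the dead node names are placeholders, not values from this thread, and the ownership fix at the end is what resolves the kind of "ACL issues" mentioned above.

        # stop the surviving monitor before touching its map (ID is a placeholder)
        systemctl stop ceph-mon@server

        # extract the current monmap, list it, and drop the ghost monitors by name
        ceph-mon -i server --extract-monmap /tmp/monmap
        monmaptool --print /tmp/monmap
        monmaptool /tmp/monmap --rm deadnode1
        monmaptool /tmp/monmap --rm deadnode2

        # inject the cleaned map back, fix ownership, and restart the monitor
        ceph-mon -i server --inject-monmap /tmp/monmap
        chown -R ceph:ceph /var/lib/ceph/mon/ceph-server
        systemctl start ceph-mon@server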
  2. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    Hi @fabian, FMI, do you think official Proxmox support could cover this case? I mean, changing the Ceph config so that it works the way it did until recently. Regards
  3. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    Thanks for your advice, I'll try to restore the behaviour that was working and upgrade ASAP.
  4. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    Additional info: the one-node Ceph was working perfectly; it just failed recently and I'm looking for the reason. A recent update changed the NIC name and maybe that's the cause, but I can't find any clue supporting this hypothesis.
  5. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    Understood. Sadly I have some VMs on this server that haven't been backed up for a long time. They're not that critical, but they took a long time to set up. If I reinstall everything, will I be able to recover the OSDs?
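    For reference: as long as the OSD disks themselves are intact, a reinstalled node can usually re-detect them. A minimal sketch, assuming LVM-based OSDs created with ceph-volume, and that ceph.conf plus the cluster keyrings have been restored first:

        # show the OSDs ceph-volume finds on the local disks
        ceph-volume lvm list

        # recreate the tmpfs mounts under /var/lib/ceph/osd and start the OSD services
        ceph-volume lvm activate --all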
  6. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    Hm, actually it won't be possible to upgrade without getting Ceph working first: pve6to7 fails.
  7. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    My underlying question is: if I upgrade the server, will that allow Ceph to run on this single node? Regards
  8. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    Hi @fabian, thanks for the clarification, you're right. So to sum up, Ceph can't run on a single node? Or do we also need to adapt the quorum for Ceph? Thanks for taking the time to read and answer.
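    The two quorums are separate things: pvecm only affects the Proxmox/corosync side, while the Ceph monitors compute their own quorum from the monmap. A rough sketch for checking both on the last surviving node (the monmap edit itself is the one sketched under result 1):

        # Proxmox/corosync quorum: allow a single node to be quorate
        pvecm status
        pvecm expected 1

        # Ceph monitor quorum: still derived from the monmap, so dead
        # monitors have to be removed from it; this call hangs without quorum
        timeout 10 ceph quorum_status --format json-pretty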
  9. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    So maybe the issue isn't corosync but Ceph. The Ceph logs are throwing: e13 handle_auth_request failed to assign global_id 2024-12-11T14:55:53.720+0100 7f815b4c5700 -1 mon.server@1(probing) e13 get_health_metrics reporting 4 slow ops, oldest is auth(proto 0 29 bytes epoch 0)...
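    When a monitor is stuck "probing" like this, its admin socket still answers even though the ceph CLI times out. A small sketch, with mon.server as a placeholder ID:

        # query the monitor directly over its admin socket (works without quorum)
        ceph daemon mon.server mon_status

        # same query via the socket path, if the short form does not resolve
        ceph --admin-daemon /var/run/ceph/ceph-mon.server.asok mon_status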
  10. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    I've also tried turning this into a standalone server: https://forum.proxmox.com/threads/proxmox-ve-6-removing-cluster-configuration.56259/#post-259203 Ceph still isn't starting, but I no longer have pmxcfs issues at boot.
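    The linked post boils down to the standard "separate a node without reinstalling" steps; roughly, as a sketch, and only once it is certain the node will never rejoin the old cluster:

        # stop the cluster stack and start pmxcfs in local mode
        systemctl stop pve-cluster corosync
        pmxcfs -l

        # remove the corosync configuration so the node comes up standalone
        rm /etc/pve/corosync.conf
        rm -r /etc/corosync/*

        # stop the local-mode instance and start the service normally
        killall pmxcfs
        systemctl start pve-cluster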
  11. [SOLVED] PVE6.4 pmxcfs fail to initialize and ceph failed on a one node cluster

    Hi everyone, I have a one-node server that used to be part of a 4-node cluster. The server has 2 disks with OSDs, and VMs + CTs using Ceph. A few days ago Ceph became unresponsive, showing a question mark and timeouts (500) in the web UI. We updated PVE 6.4 to the latest release, all the...
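    First-pass triage for this combination of symptoms, since pmxcfs refuses to start without corosync quorum once a node has been clustered; a minimal sketch:

        # is the cluster filesystem / corosync stack actually up?
        systemctl status pve-cluster corosync
        journalctl -b -u pve-cluster -u corosync

        # quorum state of the (remaining) Proxmox cluster
        pvecm status

        # ceph -s hangs without monitor quorum, so bound it
        timeout 10 ceph -s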
  12. Server crash when backup to PBS

    Could you add the SOLVED prefix to your issue?
  13. [SOLVED] CEPH Reef osd still shutdown

    I confirm that the firewall was responsible for this issue.
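    For reference, the ports Ceph needs open between nodes: 3300 and 6789 for the monitors, 6800-7300 for OSDs and the MGR. A quick sketch for checking whether the node firewall is in the way:

        # current Proxmox firewall state on this node
        pve-firewall status

        # which ports the local Ceph daemons actually listen on
        ss -tlnp | grep ceph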
  14. [SOLVED] CEPH Reef osd still shutdown

    Hm... the firewall on the new node seems to have a bad setup. I'm monitoring the new setup and will add more logs anyway to be sure.
  15. [SOLVED] CEPH Reef osd still shutdown

    Thanks, I'll check this ASAP.
  16. Server crash when backup to PBS

    Hi @vch @feisal, do you have any more information? Regards
  17. [SOLVED] CEPH Reef osd still shutdown

    And here are all the logs between the OSD start and stop: osd.4 pg_epoch: 34842 pg[2.1d7( v 34623'14234948 (34483'14233294,34623'14234948] lb MIN local-lis/les=34622/34623 n=0 ec=48/48 lis/c=34622/34620 les/c/f=34623/34621/0 sis=34838) [7,1] r=-1 lpr=34842 pi=[33949,34838)/1 crt=34623'14234948 lcod 0'0...
  18. [SOLVED] CEPH Reef osd still shutdown

    And more: lis/c=34622/34620 les/c/f=34623/34621/0 sis=34629) [6,0]/[6,1] r=-1 lpr=34831 pi=[34314,34629)/1 crt=34623'14369446 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray 2024-05-18T16:20:43.607+0000 71872b0006c0 0 log_channel(cluster) log [WRN] : Monitor...
  19. [SOLVED] CEPH Reef osd still shutdown

    Thanks for your support @spirit. Here are the logs that look significant to me: 2024-05-18T16:18:18.292+0000 71872b0006c0 0 log_channel(cluster) log [WRN] : Monitor daemon marked osd.4 down, but it is still running 2024-05-18T16:18:18.292+0000 71872b0006c0 0 log_channel(cluster)...
  20. [SOLVED] CEPH Reef osd still shutdown

    Hm... seems to also be referenced here: https://tracker.ceph.com/issues/43417
