Recent content by ProxCH

  1. [SOLVED] cannot start ha resource when ceph in health_warn state

    Answering my own question: setting mon osd reporter subtree level = osd at the global level did the trick! Cheers
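
    For reference, a minimal sketch of where that option lives, assuming an otherwise stock ceph.conf (only the relevant line shown):

        [global]
        # Count individual OSDs as failure reporters. The default subtree
        # level is "host", which requires reports from two different hosts
        # before an OSD is marked down -- hard to satisfy when only two
        # nodes carry OSDs and one of them is the node that failed.
        mon osd reporter subtree level = osd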
  2. [SOLVED] cannot start ha resource when ceph in health_warn state

    Hello, I am encountering this same issue. Here is my architecture: 3 nodes, Ceph on all 3, but only 2 nodes hosting 2 OSDs each. I have the exact symptoms described above, and I guess that your fix should work for me too, but first I'd like to be sure that it is correct ...
  3. Ceph - Multiples OSD and POOLS

    I'm not sure I follow you; why can't the NUC see the cluster? To me, the NUC is fully integrated into the cluster now; am I missing something? So in my case, I have this in the corosync config: totem { cluster_name: HomeLab config_version: 3 interface { bindnetaddr: 192.168.7.20...
  4. Ceph - Multiples OSD and POOLS

    You talked about the totem magic; is that related to this KB? https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network Are you talking about "Separate After Cluster Creation", with this example? totem { cluster_name: thomas-testcluster config_version: 3 ip_version: ipv4 secauth: on...
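
    The wiki example the post points at, reconstructed from the snippet above plus the usual corosync.conf layout (corosync 2.x syntax, as PVE used at the time; the bindnetaddr is a placeholder):

        totem {
          cluster_name: thomas-testcluster
          config_version: 3
          ip_version: ipv4
          secauth: on
          version: 2
          interface {
            # bindnetaddr picks the network corosync binds to; changing it
            # is how cluster traffic is moved to a separate subnet after
            # cluster creation
            bindnetaddr: 10.10.10.0
            ringnumber: 0
          }
        }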
  5. Ceph - Multiples OSD and POOLS

    Super! Many, many thanks again for all your answers. This was really helpful. Now I will look into how to present network interfaces in different subnets without binding an IP on each. Cheers!
  6. Ceph - Multiples OSD and POOLS

    Thank you a lot for your time and patience, AlexLup. I have started the Ceph configuration over: I ran a "pveceph purge" and re-created the Ceph cluster, initializing it on the PUBLIC network this time, so I now have all three nodes. See the new configuration: [global] auth client...
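
    A sketch of what that rebuild boils down to (pveceph spelling as in PVE 5.x, the era of this thread; newer releases spell the last step "pveceph mon create", and the public subnet is assumed from the rest of the thread):

        pveceph purge                          # drop the old Ceph config from the node
        pveceph init --network 192.168.7.0/24  # re-initialize on the PUBLIC network
        pveceph createmon                      # then recreate a monitor on each node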
  7. Ceph - Multiples OSD and POOLS

    OK, but Ceph is broken with this configuration: [global] auth client required = cephx auth cluster required = cephx auth service required = cephx cluster network = 192.168.10.0/24 fsid = 532deb0f-4b17-4343-9112-g26f78ce6125 keyring =...
  8. Ceph - Multiples OSD and POOLS

    To be sure: should this entry be on the PUBLIC or the storage LAN? [mon.host2] host = host2 mon addr = 192.168.10.20:6789
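
    For the record, Ceph monitors have to be reachable by clients and by every node, so the mon addr belongs on the PUBLIC network. A sketch, assuming the public network is 192.168.7.0/24 and a hypothetical .22 address for host2:

        [mon.host2]
        host = host2
        # must be an address on the PUBLIC network, not the OSD/cluster one
        mon addr = 192.168.7.22:6789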
  9. Ceph - Multiples OSD and POOLS

    This is nice. I tried to modify the Ceph config, but now everything is broken, so I have to figure this out. No more Ceph storage :) fortunately I only have a test machine on it.
  10. Ceph - Multiples OSD and POOLS

    Hello! So, with such a configuration I may not need an additional switch then? As all 3 nodes will be able to talk on the public network... Right? Cheers
  11. Ceph - Multiples OSD and POOLS

    Hello AlexLup, Currently I have: [global] ... cluster network = 192.168.10.0/24 ... public network = 192.168.10.0/24 [mon.host1] host = host1 mon addr = 192.168.10.21:6789 [mon.host2] host = host2 mon addr =...
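
    For contrast, a sketch of the split the thread is working towards: the public network on the subnet every node can reach, the cluster network reserved for OSD replication. Subnets are taken from the rest of the thread; the .7.x monitor address is hypothetical:

        [global]
        # monitors and clients talk here -- every node needs a leg in it
        public network = 192.168.7.0/24
        # OSD-to-OSD replication traffic only
        cluster network = 192.168.10.0/24

        [mon.host1]
        host = host1
        mon addr = 192.168.7.21:6789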
  12. Ceph - Multiples OSD and POOLS

    I have built the corosync cluster on the .7 network. It's still a LAN / private network. The .10 network is exclusively used by the Ceph setup. I also think that having an additional switch will help. I chose a GS110MX, which has 2x 10GbE and 8x 1GbE ports. Which model did you choose?
  13. Ceph - Multiples OSD and POOLS

    Yeah, but the thing is that the first two nodes are connected together with a direct cable, so it's impossible to put the NUC in between... except with a switch :( So I definitely need a switch. I should have one tomorrow. Many thanks!
  14. Ceph - Multiples OSD and POOLS

    I fully understand. Just to be very clear: - All three nodes have a cluster IP in network 192.168.7.0/24. This is where the Proxmox cluster setup has been done. - Two of the three nodes have the Ceph mon / initialization in network 192.168.10.0/24. This is a non-switched network, as there is a crossover cable...
  15. Ceph - Multiples OSD and POOLS

    Hi AlexLup, Of course, I mentioned it 3 messages above :P No worries... In fact my cluster does have 3 nodes now; the only thing is that when one of the two nodes with Ceph deployed goes down, the other node loses access to the storage pool and is able to see it and restart the VM only after the other...
