Recent content by Bruno Emanuel

  1. Reinstall CEPH on Proxmox 6

    I solved my problem by increasing the timeout of the mon service (see the sketch for this item after the list).
  2. Reinstall CEPH on Proxmox 6

    Unfortunately, I did the upgrade to Ceph 14 and couldn't roll back. Can I send some logs to you? Is there a way to contribute to the debugging?
  3. Summary Note

    I was looking in the API for a way to set values in the Notes field. My idea is to put the IP address into Summary/Notes dynamically (see the sketch for this item after the list).
  4. NFS Shares: storage is not online (500)

    Just for the record: recently I had a problem with access to my NFS storage. I'm using Proxmox 4.2. The message I received was "NFS_Storage is not online" (see the sketch for this item after the list). I read on https://forums.freenas.org/index.php?threads/nfs-mount-times-out.7270/ : "During the mount process, NFS apparently does a...
  5. Ceph Cluster using RBD down

    osd stat
    osdmap e1410: 7 osds: 7 up, 4 in; 38 remapped pgs

    osd tree
    ID WEIGHT  TYPE NAME              UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 4.33992 root default
    -2 0.53998     host srvl00pmx002
     0 0.26999...
  6. Ceph Cluster using RBD down

    Thanks, I noticed that the monitor on the 4th node had not been started. After removing and recreating it, this works. The folder /var/lib/ceph/osd/ceph-$id had not been created; when I recreated the folder it was OK. After that I forced a start and now everything is OK (see the sketch for this item after the list).

    # pveceph status
    { "quorum_names" : [ "2", "0"...
  7. Ceph Cluster using RBD down

    CRUSH and Ceph configuration attached.
  8. Ceph Cluster using RBD down

    Hi, at the first moment:

    # ceph -s
        cluster 901bdd67-0f28-4050-a0c9-68c45ee19dc1
         health HEALTH_WARN
                64 pgs degraded
                64 pgs stuck degraded
                64 pgs stuck unclean
                64 pgs stuck undersized
                64 pgs undersized
                recovery...
  9. Ceph Cluster using RBD down

    We have 4 nodes using Ceph and monitoring the storage. When one of them is turned off, the storage goes down. The network is working and communicating, but the storage stops (see the sketch for this item after the list).
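
Sketch for item 1, the mon service timeout: the post does not say which timeout was increased, so this assumes it was the systemd start timeout of the monitor unit; the hostname pve1 is only a placeholder.

    # On the affected node, add a drop-in override for the monitor unit
    systemctl edit ceph-mon@pve1.service
    # In the editor that opens, add:
    #   [Service]
    #   TimeoutStartSec=300
    systemctl restart ceph-mon@pve1.service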
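
Sketch for item 3, setting the Notes field: the Summary/Notes box of a guest is its "description" config field, which the Proxmox VE API exposes; the node name, VM ID and IP address below are placeholders.

    # Through the API (pvesh is the CLI wrapper around it)
    pvesh set /nodes/pve1/qemu/100/config --description "IP: 192.0.2.10"
    # Or, for a VM, directly with qm
    qm set 100 --description "IP: 192.0.2.10"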
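
Sketch for item 4, the NFS storage reported as not online: a few generic checks to narrow down whether the NFS server is reachable and exporting; the server address is a placeholder.

    pvesm status              # which storages Proxmox currently considers online
    showmount -e 192.0.2.20   # exports offered by the NFS server
    rpcinfo -p 192.0.2.20     # confirm portmapper, mountd and nfs answer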
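
Sketch for item 6, the missing OSD directory: roughly the recovery the post describes, recreating the mount point and starting the OSD again; $id stands for the OSD number, and depending on the Ceph release the directory may need to be owned by ceph:ceph and the unit name may differ.

    mkdir -p /var/lib/ceph/osd/ceph-$id
    chown ceph:ceph /var/lib/ceph/osd/ceph-$id   # skip on releases where OSDs run as root
    systemctl start ceph-osd@$id
    ceph osd tree                                # verify the OSD is up/in again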
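
Sketch for item 9, storage going down when one node is off: with 4 monitors a majority of 3 must stay up, so it is worth confirming that every monitor is really running and in quorum; the later posts in this thread show that one monitor had not been started.

    pveceph status                            # Proxmox view of the Ceph cluster
    ceph quorum_status --format json-pretty   # which monitors currently form the quorum
    ceph mon stat                             # short monitor summary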