Recent content by mariodt

  1. [SOLVED] Problem upgrading Ceph to Nautilus

    Now my cluster is up and running WITH this option since August 24th, BUT I have a mixed OSD environment (bluestore and filestore): is it likely that with only bluestore OSDs this option would not be required?
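
    A quick way to confirm which backend each OSD actually uses; the OSD id below is a placeholder, not taken from this cluster:

        # Count OSDs per object store backend (filestore vs bluestore)
        ceph osd count-metadata osd_objectstore
        # Or check a single OSD, e.g. osd.0
        ceph osd metadata 0 | grep osd_objectstore
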
  2. [SOLVED] Problem upgrading Ceph to Nautilus

    Solved: reinserted the osd option, ran the volume scan, fixed the JSON files, activated the volumes, and manually started the mon. Now the OSDs and the mon are up and running, and I can continue the upgrade on the other nodes... Thank you very much, Stoiko. Mario
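
    A minimal sketch of that sequence, assuming legacy ceph-disk-created filestore OSDs and the node name c01 from the posted configuration:

        # Scan the existing OSDs and write their metadata as JSON files under /etc/ceph/osd/
        ceph-volume simple scan
        # Activate the scanned OSDs so systemd manages them again
        ceph-volume simple activate --all
        # Start the monitor on this node by hand
        systemctl start ceph-mon@c01
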
  3. [SOLVED] Problem upgrading Ceph to Nautilus

    1) Option added again, rebooted, not solved, option removed again. 2) I've restarted the OSDs on only one node (of 6) without success; if I run "systemctl restart ceph-osd.target" to restart the OSDs on all nodes, all OSDs go down and the VMs become unresponsive. Do I need to shut down ALL the VMs...
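
    For reference, the usual pattern is to set noout and restart the OSDs one node at a time rather than cluster-wide; a rough sketch:

        # Keep CRUSH from rebalancing while OSDs bounce
        ceph osd set noout
        # ceph-osd.target only restarts the OSDs of the node it is run on
        systemctl restart ceph-osd.target
        # Wait until all OSDs report up again before moving to the next node
        ceph -s
        # Once every node is done
        ceph osd unset noout
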
  4. [SOLVED] Problem upgrading Ceph to Nautilus

    You are right, I missed it. Now all the managers are running. The problem persists for the OSDs (filestore type): they are in the "down/in" state, and rebooting or issuing "systemctl restart ceph-osd.target" does not solve it. Note that in the old ceph.conf file there was the following option: [osd]...
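
    A few checks for OSDs stuck in "down/in"; osd.12 is just a placeholder id:

        # Which OSDs are down, and on which host
        ceph osd tree | grep down
        # Why a specific OSD daemon fails to start
        journalctl -r -u ceph-osd@12 -n 50
        # Restart just that OSD instead of the whole target
        systemctl restart ceph-osd@12
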
  5. [SOLVED] Problem upgrading Ceph to Nautilus

    New configuration after upgrade:

    root@c01:/etc# cat /etc/pve/ceph.conf
    [global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         auth supported = cephx
         cluster network = 192.168.0.0/22
         filestore xattr...
  6. [SOLVED] Problem upgrading Ceph to Nautilus

    There are thousands of hints; I'm pasting the more interesting ones. It seems to be a keyring/authentication problem after adapting ceph.conf as described in the guide. journalctl -r:
    ...
    Aug 22 08:46:00 c01 systemd[1]: Starting Proxmox VE replication runner...
    Aug 22 08:45:26 c01 systemd[1]: Failed to...
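
    To narrow the journal down to the Ceph daemons and rule out a missing keyring (the paths assume the default cluster name "ceph" and the node c01):

        # Only the Ceph unit logs, newest first
        journalctl -r -u 'ceph-mon@*' -u 'ceph-mgr@*' -u 'ceph-osd@*' -n 100
        # The keyrings referenced from ceph.conf must exist and be readable
        ls -l /etc/pve/priv/ceph.client.admin.keyring
        ls -l /var/lib/ceph/mon/ceph-c01/keyring
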
  7. [SOLVED] Problem upgrading Ceph to Nautilus

    Hello, I'm following this guide https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus to upgrade Ceph on my 6-node cluster. Everything goes fine until "systemctl restart ceph-mgr.target"; at this point the 3 managers don't restart. Issuing "ceph -s" shows: services: ... mgr: no daemons...
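
    A sketch of how one might inspect why the managers stay down after that step (c01 stands in for each node name):

        systemctl restart ceph-mgr.target
        # Exact failure reason for the local manager daemon
        systemctl status ceph-mgr@c01
        journalctl -r -u ceph-mgr@c01 -n 50
        # Cluster-wide service overview
        ceph -s
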
  8. [SOLVED] "pve configuration filesystem not mounted" after creating a cluster

    For me a reboot does not solve the problem: the network is up, /etc/pve is not available, and in the logs and from some commands I get: ipcc_send_rec failed: Connection refused
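
    /etc/pve is the pmxcfs cluster filesystem mounted by the pve-cluster service, and "ipcc_send_rec failed: Connection refused" usually means that service is not answering; a rough checklist:

        # Is the cluster filesystem service running?
        systemctl status pve-cluster
        journalctl -r -u pve-cluster -n 50
        # Quorum state of the new cluster
        pvecm status
        # Try bringing the filesystem back up
        systemctl restart pve-cluster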