Search results

  1. [SOLVED] ceph startup script not working

    Hi Alwin, We followed the guide you mentioned and everything went fine. I find it strange that on a native Debian 9 + ceph luminous + proxmox 5, the ceph service is correctly launched by the /etc/init.d/ceph script, but following the guide, the ceph.service script is a template copied from...
  2. [SOLVED] ceph startup script not working

    Hi! Sorry for the late answer, we found out that the source was a weird, useless systemd script that was supposed to do the job (/etc/systemd/system/ceph.service). In fact, systemctl start ceph calls that script, which does nothing, so OSDs do not come up and filesystems are not mounted. The trick...
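    The mechanism described in that reply can be sketched as follows. A unit file under /etc/systemd/system shadows the packaged one, so a do-nothing /etc/systemd/system/ceph.service makes `systemctl start ceph` a no-op. This is a minimal detection sketch, not the poster's exact fix; the remediation commands are left as comments because they change system state:

    ```shell
    # Check whether a local unit file shadows the packaged ceph.service.
    # Units in /etc/systemd/system take precedence over /lib/systemd/system.
    UNIT=/etc/systemd/system/ceph.service

    if [ -e "$UNIT" ]; then
        STATUS=override
        echo "local override present: $UNIT"
        # Inspect what systemd would actually run:
        #   systemctl cat ceph.service
        # If it is the do-nothing stub, move it aside and reload:
        #   mv "$UNIT" "$UNIT.disabled"
        #   systemctl daemon-reload
        #   systemctl start ceph.service
    else
        STATUS=clean
        echo "no local override; the packaged unit is in effect"
    fi
    ```

    Note that `systemctl daemon-reload` is required after moving or deleting a unit file, or systemd keeps acting on its cached copy.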
  3. [SOLVED] ceph startup script not working

    Hi! My configuration (before upgrading to proxmox 5 on Debian stretch): - 3 proxmox nodes running Debian jessie - proxmox installed on top of Debian jessie - 2 hard drives per node as OSDs = total of 6 OSDs. Today we upgraded our "proxmox 4 + ceph hammer" to "proxmox 5 + ceph luminous" following...
  4. ceph upgrade

    Thank you a lot. The upgrade notes seem incomplete to me. Yes, I meant jewel to luminous. Regarding the ceph hammer to jewel step, I have some worries you could help me with: - When upgrading Ceph, since it is done node by node, do I have to shut down the VMs/CTs on the to-be-upgraded proxmox...
  5. ceph upgrade

    Hi! I have proxmox 4 on three nodes and ceph hammer on each. I want to upgrade ceph from hammer to jewel and then from jewel to luminous. Since the upgrade is done node by node, will there be a risk during the process while some nodes run ceph hammer and the others ceph jewel (those being...
  6. upgrading from 4.4-24 with ceph to 5.xx

    Tim, thank you for your reply. - I just meant we do not use any dedicated NFS for VM storage - The upgrade document states "no VM or CT running"; does it mean we would need to shut all the VMs down on all nodes at once? Regards,
  7. upgrading from 4.4-24 with ceph to 5.xx

    Hi, I am aware of this: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 - We have three (3) identical nodes: 256 GB of RAM, 4 TB of HDD, ... same on each node - Each node is running proxmox 4.4-24 with Ceph enabled - We do not have any shared storage; all VMs are on the nodes' hard drives. Could...