Search results

  1. Force quorum in ceph

    Try to force quorum by running this command: pvecm expected 1
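
    A minimal sketch of that recovery step, assuming a single surviving node; this is a temporary measure to regain quorum, not a permanent setting:

      # temporarily tell corosync that one vote is enough for quorum
      pvecm expected 1
      # check the cluster state afterwards
      pvecm status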
  2. Guest Software RAID 1

    I would recommend letting your Proxmox server handle the RAID and exporting it as storage. That way you can run multiple VMs on the same storage if you want, or easily migrate a VM to another storage location.
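
    A minimal sketch of that setup, assuming mdadm software RAID 1 on two spare disks (/dev/sdb, /dev/sdc, and the storage ID "raid1" are placeholder names):

      # build the mirror and put a filesystem on it
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
      mkfs.ext4 /dev/md0
      mkdir -p /mnt/raid1
      mount /dev/md0 /mnt/raid1
      # register the mount point as a Proxmox directory storage
      pvesm add dir raid1 --path /mnt/raid1 --content images,rootdir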
  3. XenServer to Proxmox

    Did you run your guest in Xen with a UEFI BIOS? See this post.
  4. CephFS MDS Failover

    Ok, I realized that I had not created the metadata servers I wanted. I must have created only a single metadata server, which has the same id as the cluster, so I'll have to remove that one and run the properly named ones instead. I used this forum post as an example to create them. I...
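
    That comes down to roughly the Ceph docs' manual MDS deployment steps; a sketch assuming the default cluster name "ceph" and an example daemon id matching the node name (here "pve1"):

      # create the data directory and a keyring for the new daemon
      mkdir -p /var/lib/ceph/mds/ceph-pve1
      ceph auth get-or-create mds.pve1 mon 'allow profile mds' mgr 'allow profile mds' mds 'allow *' osd 'allow *' \
        -o /var/lib/ceph/mds/ceph-pve1/keyring
      chown -R ceph:ceph /var/lib/ceph/mds/ceph-pve1
      systemctl enable --now ceph-mds@pve1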
  5. CephFS MDS Failover

    Would you be willing to share the config file with the mds section so I can compare?
  6. CephFS MDS Failover

    Oh wow... okay... I am only running 1 mds, the other 2 failed to start, and I have no idea why. This is the status:

      ceph-mds@54da8900-a9db-4a57-923c-a62dbec8c82a.service - Ceph metadata server daemon
        Loaded: loaded (/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: enabled)...
  7. [SOLVED] CephFS Mount Connection Timed Out

    You are correct, I just ran into that problem. Proxmox will FAIL to boot if you try and mount cephfs through fstab on the same server running the services. I suspect it has to do with the fact that ceph tries to mount prior to ceph services running. I got around that by mounting the cephfs...
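
    The poster's exact workaround is cut off above; one common way to defer a cephfs fstab mount until the network and local Ceph services are up is a systemd automount entry (monitor address and mount point are placeholders):

      # /etc/fstab -- mount on first access instead of at boot
      10.0.0.1:6789:/  /mnt/cephfs  ceph  noauto,x-systemd.automount,_netdev  0  0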
  8. CephFS MDS Failover

    Hello everyone, there is a fully functional cephfs running on a 3 node cluster. It was created very simply; here is the conf related to mds:

      [mds]
      keyring = /var/lib/ceph/mds/54da8900-a9db-4a57-923c-a62dbec8c82a/keyring
      mds data =...
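
    The excerpt cuts off mid-line; purely for illustration (not the poster's actual file), a generic [mds] section of that shape would use the ceph.conf $cluster and $id metavariables instead of literal fsid paths:

      [mds]
      keyring = /var/lib/ceph/mds/$cluster-$id/keyring
      mds data = /var/lib/ceph/mds/$cluster-$id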
  9. [SOLVED] CephFS Mount Connection Timed Out

    Ahhh made such a dumb mistake. I was pointing to the Proxmox network, not ceph network address!!!! Freaking embarrassing.
  10. [SOLVED] CephFS Mount Connection Timed Out

    Yes, I have created a cephfs_data and a cephfs_metadata pool and a ceph filesystem on top of them.
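
    For reference, a sketch of the usual command sequence for that; the placement-group counts are placeholders to size to the cluster:

      ceph osd pool create cephfs_data 64
      ceph osd pool create cephfs_metadata 64
      # metadata pool first, data pool second
      ceph fs new cephfs cephfs_metadata cephfs_data
      ceph fs status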
  11. [SOLVED] CephFS Mount Connection Timed Out

    This is on a 3 node cluster. The versions: Proxmox kernel Linux 4.13.13-1-pve (#1 SMP PVE 4.13.13-31); Ceph 12.2.2. Successfully created a cephfs as far as I can tell. Cephx is disabled, though I did create the cephfs while the cluster still had cephx enabled, and I disabled it shortly...
  12. Is Ceph too slow and how to optimize it?

    When disabling cephx, can I restart each host one by one, or does the entire cluster need to be off and then on again to get this to work?
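
    For context, disabling cephx means setting the three auth options to none in ceph.conf and restarting the daemons so they pick up the change; a sketch:

      [global]
      auth_cluster_required = none
      auth_service_required = none
      auth_client_required = none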
  13. Support for gotty (or similar) web terminal?

    I'm very interested in this as well, as I frequently copy and paste from PuTTY. How exactly do I get this running and connect in this way?
  14. Ceph - Network File Share Drive

    Okay, understood. Thanks
  15. Ceph - Network File Share Drive

    After looking around at what others have done, I see it can be quite troublesome. Are there any plans to implement management of CephFS through the GUI?
  16. Ceph - Network File Share Drive

    Thanks Wolfgang, now I know what I need to research. I already found a lot of threads with good information.
  17. Ceph - Network File Share Drive

    I am aware that Ceph is not ideal for storing files, but I do not want to run both Gluster and Ceph on the nodes. Right now we are running the file shares from FreeNAS, and I'd like to remove single points of failure as much as possible. Are there any strategies for creating redundant network...
  18. Extend Linux hard disk

    Boot the VM with a live CD such as the Ubuntu live CD, then extend the LVM with a GUI: run sudo add-apt-repository universe, then sudo apt-get install system-config-lvm, then start system-config-lvm and extend the LVM in that GUI. If you cannot change the LVM you may have to run sudo vgchange -ay first. Once the LVM is extended...
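
    The steps above, collected in order (run from the live environment; package name as given in the post):

      sudo add-apt-repository universe
      sudo apt-get install system-config-lvm
      sudo vgchange -ay        # activate the volume group if it is not visible
      sudo system-config-lvm   # then extend the LV in the GUI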