Search results

  1. CEPH warning: too many PGs per OSD (225 > max 200)

    Hi, I just updated my cluster today to: proxmox-ve: 5.1-30 (running kernel: 4.13.8-3-pve) from the previous version and I got the warning above. Having followed all the advice (using PG calculator etc) when I set up the cluster and having updated and rebooted many times over the past few...
  2. After 5.1 update, local-lvm cannot be selected for new kvm or container

    I have the same boot error - how did you solve it please?
  3. How to back up VMs on a 4 node cluster to a USB drive

    Thank you for your answer, I now realise my question was badly worded - apologies for that. I want to back up my VMs to an external USB drive. I mounted the USB drive on node1 and added it as storage for VM backups, at which point all the nodes gained a storage icon that I thought was pointing...
  4. How to back up VMs on a 4 node cluster to a USB drive

    Hi all, What would be a good way to back up a cluster to an external USB drive? Has anyone done this at all?
  5. CEPH: HEALTH_WARN mon 1 is low on available space

    Figured out the issue - it was backups going into the wrong place.....
  6. CEPH: HEALTH_WARN mon 1 is low on available space

    Any suggestions as to what can be done? This is a vanilla 4 node Proxmox install with CEPH and 3 VMs currently. It's not really in use yet, so I was very surprised to see this crop up. This is the output from df ... Filesystem Size Used Avail Use% Mounted on udev 16G...
  7. CEPH: HEALTH_WARN mon 1 is low on available space

    Hi, I'm not sure what to do about this, can anyone help? The problem seems to be in /dev/mapper/pve-root - I'm not sure what this is. proxmox-ve: 5.0-20 (running kernel: 4.10.17-2-pve) pve-manager: 5.0-30 (running version: 5.0-30/5ab26bc) pve-kernel-4.10.17-2-pve: 4.10.17-20...
  8. Invalid server ID

    I've had to rebuild a node in my cluster, and when I try to add the subscription ID I get the error: "Invalid server Id". Please let me know what to do... Thanks!
  9. help please: pveceph purge fails

    I'm trying to remove Ceph on a node in a 4 node cluster before rebuilding the node. I have destroyed the OSD and removed the manager and monitor from the node. I removed the monitor IPs from storage.cfg. I ran: pveceph stop. Then ran: pveceph purge and got: detected running ceph services-...
  10. mount ceph storage as extra disk in vm

    My GUI blindness I guess! I can now see that there's an add disk option for a VM....
  11. mount ceph storage as extra disk in vm

    How can you mount some extra ceph storage in a Linux VM? I'm sure this must be easy but I can't see it right now!
  12. Issue adding CEPH pools via GUI (PVE5.0-30)

    Pretty sure that was the issue, thanks. The create pool dialog is different now.
  13. Issue adding CEPH pools via GUI (PVE5.0-30)

    root@clus1:~# ceph status cluster: id: 9f23f6cf-8065-48fe-a059-25d7842d85b1 health: HEALTH_OK services: mon: 4 daemons, quorum 0,1,2,3 mgr: clus1(active), standbys: clus3, clus4, clus2 osd: 4 osds: 4 up, 4 in data: pools: 1 pools, 300 pgs objects: 0...
  14. Issue adding CEPH pools via GUI (PVE5.0-30)

    I tried it any way from the command line and it apparently worked fine. I can also now see it in the GUI.
  15. Issue adding CEPH pools via GUI (PVE5.0-30)

    Is there anything you can advise I look into as everything I've looked at seems OK?
  16. Ceph OSD on PVE5 "got timeout (500)" in GUI

    Please post if you manage to create pools from the GUI as it won't allow me to do this.
  17. Issue adding CEPH pools via GUI (PVE5.0-30)

    Can I just add a pool using the ceph command directly, or does that break the integration?
  18. Issue adding CEPH pools via GUI (PVE5.0-30)

    root@clus1:~# ceph status cluster: id: 9f23f6cf-8065-48fe-a059-25d7842d85b1 health: HEALTH_OK services: mon: 4 daemons, quorum 0,1,2,3 mgr: clus1(active), standbys: clus3, clus4, clus2 osd: 4 osds: 4 up, 4 in data: pools: 0 pools, 0 pgs...
  19. Issue adding CEPH pools via GUI (PVE5.0-30)

    There's also no default pool, which I thought seemed odd.
  20. Issue adding CEPH pools via GUI (PVE5.0-30)

    I rebooted each node in the cluster, not individual services
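
For the "too many PGs per OSD (225 > max 200)" thread: on the Ceph Luminous release shipped with PVE 5.1, pg_num cannot be reduced on an existing pool, so the usual short-term workaround is to raise the monitor limit. A minimal sketch, assuming the cluster-wide config lives in /etc/pve/ceph.conf and with 300 chosen only for illustration:

    # /etc/pve/ceph.conf -- raise the per-OSD PG limit (Luminous default is 200)
    [global]
        mon_max_pg_per_osd = 300

    # restart the monitors one node at a time so they pick up the new value
    systemctl restart ceph-mon.target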
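
For the "back up VMs on a 4 node cluster to a USB drive" thread: a directory storage restricted to the node that holds the drive keeps the other nodes from showing a dead storage entry. A hedged sketch, assuming the drive is mounted at /mnt/usb-backup on node1 (device, path and storage ID are placeholders):

    # mount the USB drive on node1 (device name is an assumption)
    mount /dev/sdX1 /mnt/usb-backup

    # add it as a backup-only directory storage, visible only on node1
    pvesm add dir usb-backup --path /mnt/usb-backup --content backup --nodes node1

    # run a manual backup of VM 100 to it
    vzdump 100 --storage usb-backup --mode snapshot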
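
For the "pveceph purge fails" thread: "detected running ceph services" suggests daemons were still active on the node, so stopping the Ceph systemd targets before retrying is a reasonable next step. A sketch of that idea, not a verified recovery procedure:

    # stop any ceph daemons still running on this node
    systemctl stop ceph-mon.target ceph-mgr.target ceph-osd.target

    # check that nothing ceph-related is left running
    systemctl list-units 'ceph*' --state=running

    # then retry the purge
    pveceph purge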
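
For the "mount ceph storage as extra disk in vm" thread: the GUI route is the VM's Hardware -> Add -> Hard Disk dialog; the CLI equivalent allocates a new volume on the RBD-backed storage and attaches it. A sketch assuming VM ID 100 and a storage entry named ceph-vm (both placeholders):

    # attach a new 32 GB disk from the ceph-backed storage to VM 100
    qm set 100 --scsi1 ceph-vm:32

    # inside the guest: format and mount it (the device name is an assumption)
    mkfs.ext4 /dev/sdb
    mount /dev/sdb /mnt/extra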
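
For the "Issue adding CEPH pools via GUI" thread: creating a pool from the command line does not break the PVE integration; it only needs to be registered as storage afterwards so it is usable for disk images. A sketch using the PVE 5 era tooling, with the pool name and PG count chosen for illustration:

    # create the pool (300 PGs, matching the ceph status output above)
    pveceph createpool vm-pool --pg_num 300

    # register it as RBD storage for VM disks and container volumes;
    # on a hyper-converged setup the monitors are read from the local ceph.conf
    pvesm add rbd ceph-vm --pool vm-pool --content images,rootdir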
