Search results

  1. HA Problems

    Any idea what could cause the fencing?
  2. HA Problems

    The node INETC1434 just rebooted by itself when I added a dummy VM to the HA group.
    root@INETC1434:~# ha-manager groupconfig
    group: HA
            nodes INETC1242,INETC1536,INETC1209,INETC1434
            nofailback 0
            restricted 0
    When I now try to move a VM back to INETC1434, all I get is the...
  3. HA Problems

    Here is the output. Something does seem very wrong, and I can't see anything in the logs.
    root@INETC1434:~# systemctl status pve-ha-crm pve-ha-lrm watchdog-mux
    ● pve-ha-crm.service - PVE Cluster Ressource Manager Daemon
       Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor...
  4. HA Problems

    Hello, I have added a new node to a Proxmox Ceph cluster and added the new node to the HA group. When I try to move a VM to the new node, nothing happens and no error is shown; if I remove the VM from the HA group it will migrate, then when I add the VM back to the HA group it will say fencing, then the VM...
  5. too few PGs per OSD (21 < min 30)

    I also see on here that those commands are different from what is said on the Ceph page (http://docs.ceph.com/docs/master/rados/operations/placement-groups/#set-the-number-of-placement-groups), and I also have to adjust pgp_num as well.
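The adjustment mentioned in the snippet above can be sketched as a pair of `ceph osd pool set` calls, raising pg_num first and then pgp_num to match. The pool name STORAGE and the target of 256 are taken from this thread, so they may not fit your setup; this sketch only prints the commands (a dry run) — remove the `echo` to actually apply them on a node with Ceph installed.

```shell
# Dry-run sketch: print the two pool settings that need to change.
# Pool name (STORAGE) and PG count (256) are assumptions from this thread.
pool=STORAGE
pgs=256
for setting in pg_num pgp_num; do
  echo ceph osd pool set "$pool" "$setting" "$pgs"
done
```

Note that pg_num must be increased before pgp_num can follow it; Ceph will refuse to set pgp_num higher than pg_num.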
  6. too few PGs per OSD (21 < min 30)

    Hello, thanks for your help so far. I am still a little concerned because, looking at the script that calculator generates, it makes a new pool:
    ceph osd pool create STORAGE 256
    ceph osd pool set STORAGE size 3
    while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done
    It doesn't...
  7. too few PGs per OSD (21 < min 30)

    OK, just to confirm: if I run this command, ceph osd pool create STORAGE 256, it will not break the current storage pool or stop any VMs from working?
  8. too few PGs per OSD (21 < min 30)

    Thank you. This is my current setup: 3 OSDs per node across 3 nodes, so 9 OSDs in total, all 1 TB SSDs. So what would you recommend I set it to?
  9. too few PGs per OSD (21 < min 30)

    Hello, I have added an extra disk to each node and added them as OSDs; I now have a total of 3 disks per node across 3 nodes. I am now getting the following error in my Ceph status: "too few PGs per OSD (21 < min 30)". Is there a way to resolve this?
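For a cluster of the size described in this thread, the usual rule of thumb is roughly 100 PGs per OSD divided by the pool's replica count, rounded to a power of two. A minimal shell sketch, using the 9 OSDs and size 3 from this thread (not an official formula for every case):

```shell
# Rule-of-thumb PG estimate:
# total PGs ≈ (OSDs * 100) / replica_size, rounded up to the next power of two.
osds=9
size=3
target=$(( osds * 100 / size ))   # 300 for 9 OSDs at size 3
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"
```

Rounding up gives 512 here, while the calculator referenced in this thread chose 256; which way to round is a judgment call that depends on how much the cluster is expected to grow, since too many PGs per OSD is also a warning condition.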
  10. CEPH Cluster 1 Disk per node

    Hello, is it possible, for example, to run a Ceph cluster with one physical Ceph disk per node? Also, is it possible to run the Proxmox OS from a USB thumb drive?
  11. CLOUD-INIT UNABLE TO LOGIN

    Does anyone have any ideas on this?
  12. CLOUD-INIT UNABLE TO LOGIN

    Hello, I have tried a few images and am currently trying the CentOS 7 generic image. When it boots, it updates cloud-init and the OS, but I can't log in via the console even if I change the root password via Cloud-Init within Proxmox. I have also disabled these in cloud.conf: ssh_pwauth...
  13. Mixed Standard and CEPH Cluster

    Hello. Is it possible to have both standard Proxmox on, for example, 3 servers with local disks and Ceph on 3 separate servers, all in the same cluster? Would that cause any issues, or do they all have to be one or the other, or should I make 2 separate clusters? Thanks, Chris
  14. [SOLVED] CEPH Cluster

    I have now fixed it; I needed to zap the disk and add it via the command line, for example:
    ceph-disk zap /dev/sdb
    pveceph createosd /dev/sdb
    Then it should show in ceph osd stat. Example:
    root@INETC1084:~# ceph osd stat
    6 osds: 6 up, 6 in
  15. [SOLVED] CEPH Cluster

    Just did it manually and it still does not work:
    root@INETC1083:~# pveceph createosd /dev/sdc
    create OSD on /dev/sdc (bluestore)
    Caution: invalid backup GPT header, but valid main header; regenerating backup header from main header...
  16. [SOLVED] CEPH Cluster

    When I add the disk via the OSD dialog, it seems to add but does not show up in the OSD section; it just shows "default". It does this on all 3 nodes. I am currently trying to do it manually.
  17. [SOLVED] CEPH Cluster

    Hello, this is driving me insane. I have followed the Ceph video and documentation exactly, but it does not work. When I create the OSD it is created but doesn't show in the GUI; it just shows "default". I also made the pool and it shows as 0 available. I think some documentation needs to be...
  18. Backup Error

    Would this need to be done on the node itself or on the FreeNAS where the NFS share is presented? It had been working fine and this suddenly happened.
  19. Backup Error

    Hello, I am backing up a VM via NFS and am getting the following error:
    INFO: starting new backup job: vzdump 2108 --mode snapshot --node INETC1083 --remove 0 --compress lzo --storage BACKUPS
    ERROR: Backup of VM 2108 failed - unable to create temporary directory...
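The "unable to create temporary directory" error above usually comes down to the dump directory on the NFS storage being unmounted or unwritable from the node. A diagnostic sketch, assuming the default Proxmox layout for an NFS storage named BACKUPS (both the path and the storage name are assumptions from this thread):

```shell
# Check that vzdump's dump directory exists and is writable from this node.
# /mnt/pve/BACKUPS/dump is the assumed default mount path for an NFS storage
# named BACKUPS; override via DUMPDIR if your layout differs.
dumpdir="${DUMPDIR:-/mnt/pve/BACKUPS/dump}"
if [ -d "$dumpdir" ] && [ -w "$dumpdir" ]; then
  echo "OK: $dumpdir is writable"
else
  echo "PROBLEM: $dumpdir is missing or not writable"
fi
```

If the check fails, the fix belongs on whichever side broke: the export permissions on the FreeNAS end, or a stale/unmounted NFS mount on the node itself — which is exactly the question the follow-up post in this thread asks.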
  20. RAID-Z 1 ZFS - CEPH

    Hello, I have 2 SSDs in each of the servers and installed Proxmox with RAID-Z1. When I then go to set up a Ceph OSD, I get "No Disk Unused". Is there a way around this? I want RAID 1 for redundancy and don't want to spend over £1000 on more SSDs. Is there a fix for this?