Search results

  1. J

    Add more disks

    Well, a RAIDZ would be the RAID5 equivalent. http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/ Check this article for more information on setting it up. http://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/
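
    For reference, a minimal command-line sketch of creating a RAIDZ (single-parity, RAID5-like) pool, assuming three spare disks; the pool name and device paths below are placeholders:

        # Pool name and device paths are placeholders; check yours with lsblk or ls /dev/disk/by-id/
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
        zpool status tank   # verify the layout and health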
  2. J

    drbdmanage license change

    I have used DRBD9 in the past (finally, more than 2 nodes!) and it works very well, but I would bet that, for the sake of simplicity, Ceph will continue to dominate their attention. The GPL deal might have been the final push over the edge.
  3. J

    DRBD in Promox 4?

    @Jerome, you can run CEPH with your 1 Gb network and regular SATA drives. I would suggest setting up a virtual environment in VirtualBox with 3 Proxmox 4.4 nodes and testing it out. You just won't get great performance from the lesser equipment; it is very I/O intensive.
  4. J

    3 host cluster with ceph suggestion

    Your capacity with CEPH will be determined by the number of disks (OSDs) and the number of replicas of the PGs.
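
    As a rough worked example (the numbers are hypothetical): with 8 OSDs of 256 GB each and 3 replicas, usable space is roughly the raw total divided by the replica count, before overhead and the near-full safety margin.

        echo $(( 8 * 256 / 3 ))   # ~682 GB usable out of 2048 GB raw (hypothetical numbers)
        ceph df                   # the authoritative usage/availability figures from the cluster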
  5. J

    How to I update without dist-upgrade?

    I have seen that kind of video effect myself, but given some time, it corrects itself on my end. Can you SSH/ping it after, say, 5 minutes?
  6. J

    Proxmox cluster on a shared SAN storage

    I use FreeNAS 9.10 and it works quite well via NFS. Just make sure to read up on the ZFS file system.
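
    If it helps, a sketch of attaching an NFS export as Proxmox storage from the CLI; the storage name, server address and export path are placeholders for whatever your FreeNAS box uses:

        pvesm add nfs freenas-nfs --server 192.168.1.50 --export /mnt/tank/proxmox --content images,iso
        pvesm status   # confirm the new storage shows up as active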
  7. J

    Backup issue x2

    Just something to reference the error. http://stackoverflow.com/questions/20737204/comprehensive-list-of-rsync-error-codes
  8. J

    CEPH testing issue with Proxmox 4.4

    OK, due to work matters I never got around to connecting everything back up again, but I believe I know why this happened. My min_size is 3 and probably should have been 1. With only 1 replica available, that may well be what blocked I/O in this case...
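
    For anyone hitting the same thing, a sketch of how to check and lower it (pool name taken from my test setup; adjust to yours):

        ceph osd pool get test min_size     # see the current value
        ceph osd pool set test min_size 1   # allow I/O to continue with a single replica left (less safety)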
  9. J

    Totally lost with Ceph

    Please note the part about setting up the "keyring".
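
    Roughly, the keyring step on Proxmox comes down to copying the Ceph admin keyring to a file named after the RBD storage entry; the storage ID "ceph-rbd" below is a placeholder:

        mkdir -p /etc/pve/priv/ceph
        cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-rbd.keyring   # filename must match the storage ID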
  10. J

    Totally lost with Ceph

    In the web interface for one of your nodes, select a node in the left-hand list. Under the Ceph option, find the Pools button. Once you have created a pool, choose the Datacenter entry; under Storage, add an RBD entry to set up a shared storage connection.
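
    If you prefer doing it without the GUI, the rough equivalent (monitor addresses, pool and storage names are placeholders) is to create the pool with "pveceph createpool test" on a Ceph node and add an entry like this to /etc/pve/storage.cfg:

        rbd: ceph-rbd
                monhost 10.0.0.1 10.0.0.2 10.0.0.3
                pool test
                content images
                username admin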
  11. J

    Ceph (or CephFS) for vzdump backup storage?

    Take your pick: http://tracker.ceph.com/projects/cephfs/issues?sort=tracker%2Cid%3Adesc
  12. J

    Proxmox backup speed extremely slow for VM's stored on Ceph??

    http://pve.proxmox.com/pipermail/pve-devel/2016-February/019282.html Dietmar specifically answers the question here.
  13. J

    Proxmox backup speed extremely slow for VM's stored on Ceph??

    You could always try this technique to get the entire pool(s). http://nicksabine.com/post/ceph-backup/
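
    I have not re-checked the exact steps in that post, but the basic idea is along these lines (pool name and backup path are placeholders):

        for img in $(rbd ls -p rbd); do
            rbd export "rbd/${img}" "/mnt/backup/${img}.img"   # dump each RBD image in the pool to a file
        done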
  14. J

    CEPH testing issue with Proxmox 4.4

    I had to take it offline for today but will post ASAP. Thanks for the help. To answer the quick questions: no, noout is not active; no, it never recovered.
  15. J

    CEPH testing issue with Proxmox 4.4

    # begin crush map
    tunable choose_local_tries 0
    tunable choose_local_fallback_tries 0
    tunable choose_total_tries 50
    tunable chooseleaf_descend_once 1
    tunable straw_calc_version 1

    # devices
    device 0 osd.0
    device 1 osd.1
    device 2 osd.2
    device 3 osd.3
    device 4 osd.4
    device 5 osd.5
    device 6 osd.6
    ...
  16. J

    CEPH testing issue with Proxmox 4.4

    Has anyone tested a failure with CEPH 0.94.9 using 3 replicas under 4.4? I have a test environment set up using 4 Proxmox 4.4 nodes (clustered), each with 2 OSDs (8 total). Each OSD is 256 GB. 3 monitors. A pool named test is configured for 3/3. 256 PGs. 3 VMs using 100 GB raw disks...
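
    For anyone wanting to reproduce the pool side of this setup, it is roughly (run from any Ceph node; pool name as above):

        ceph osd pool create test 256 256     # 256 PGs / 256 PGPs
        ceph osd pool set test size 3
        ceph osd pool set test min_size 3
        ceph osd pool get test size           # confirm the replication settings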
  17. J

    Cannot upload ISO > 2GB

    I have seen this issue with previous versions of Proxmox, but 4.4 works fine for me. I uploaded a 5.6 GB ISO to multiple nodes with no problems. The local filesystem was ZFS, though.
  18. J

    Where are the virtual machines held?

    https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_vzdump Backup and restore.
  19. J

    How to fully remove/cleanup DRBD9 ?

    It would probably be easier just to set up the Ceph environment from scratch with your equipment. That re-configuring idea sounds like a disaster waiting to happen.
  20. J

    HA beaviour

    Misunderstanding. Disregard.