Search results

  1. M

    Migrate VMs to Ceph storage

    Can you really daisy-chain IB HBAs together in a token-ring-like setup? I thought you could only have point-to-point connections...?
  2. M

    Migrate VMs to Ceph storage

    Yeah, but that's eBay of all places... Most likely second-hand or refurbished, most likely without any warranty, no vendor support, plus you can't even properly file eBay purchases with accounting; they'd just laugh at you (well, at least over here they would). You really can't buy enterprise...
  3. M

    Migrate VMs to Ceph storage

    The timeouts you describe sound like your network couldn't handle the load, plus you might not have separated the Ceph cluster and VM networks (to prevent busy Ceph recoveries from interfering with VM traffic). What you have described, as far as performance goes, is expected behaviour...
  4. M

    On-the-fly TrueCrypt like Encryption program

    dm-crypt is just an encryption layer, but yes, you can use it with Ceph now.
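A minimal sketch of what "using dm-crypt with Ceph" can look like, assuming a hypothetical disk `/dev/sdX` and the `ceph-disk` tool of that era (which grew a `--dmcrypt` flag); not from the original post:

```shell
# Plain dm-crypt/LUKS on an arbitrary disk, independent of Ceph:
cryptsetup luksFormat /dev/sdX           # one-time: write the LUKS header
cryptsetup luksOpen /dev/sdX cryptdisk   # map it to /dev/mapper/cryptdisk
mkfs.xfs /dev/mapper/cryptdisk           # use the mapping like any block device

# With Ceph, ceph-disk can set up an encrypted OSD in one step:
ceph-disk prepare --dmcrypt /dev/sdX
```

The encryption is transparent to everything above the device-mapper layer, which is why Ceph OSDs can sit on it unchanged.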
  5. M

    On-the-fly TrueCrypt like Encryption program

    Yeah, you can continue using TC for the time being; it just won't get any more upgrades or new features. As for ZFS: ZFS actually has its own (transparent!) encryption built in, but not in ZFS on Linux, because they forked an outdated zpool version (because it's the last one whose code base...
  6. M

    Ceph storage Journal question

    I wanna say no, but that depends on whether the single-device setup is "fast enough" for what you want/expect. With SSDs it is very likely that the network will be the bottleneck anyhow. This is something you should benchmark with your actual setup, in my opinion.
  7. M

    Ceph storage Journal question

    Hot-swap in this context means that you can remove a disk from Ceph node A and put it into node B (to balance capacities, for instance) and it will continue running there. While nice, I wouldn't consider it a deal-breaker if you lost this capability. Journals: at the moment, a Ceph write...
  8. M

    Can't start guest on Proxmox 3.1: unable to parse worker upid '...'

    Is there a quick workaround for that, maybe? The server in question is about to be decommissioned anyhow. (Different guy, same problem; it's a super old 2.3, even.)
  9. M

    No Storage option in dropdown for backup operation

    Storages added to Proxmox aren't "allowed" to contain backups by default; you have to specify which one (of the supported storages) should contain backups in the Storage section.
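For reference, the same setting as it ends up in `/etc/pve/storage.cfg`; the storage name and path here are made-up examples, the key line is `content backup`:

```
dir: backupstore
        path /mnt/backup
        content backup
        maxfiles 3
```

Only storages whose `content` list includes `backup` appear in the backup dropdown.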
  10. M

    [SOLVED] Backup VM increases...

    This ominous "operation" is called trimming (same thing you do with SSDs), by the way ;) Might as well give the guy the proper search term to use.
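A quick sketch of that search term in practice, assuming a Linux guest and hypothetical image filenames (not from the thread):

```shell
# Inside the guest: discard unused filesystem blocks so the backing
# image can shrink again; -v prints how much was trimmed
fstrim -v /

# Offline alternative on the host: qemu-img convert skips zeroed
# blocks by default, producing a compacted copy of the image
qemu-img convert -O qcow2 original.qcow2 compacted.qcow2
```

Whether the trim actually reaches the host image depends on the virtual disk supporting discard, so treat this as a starting point rather than a recipe.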
  11. M

    Migration issues

    I've had that issue between incompatible Xeons with containers; not entirely sure whether that'd apply here, but it might be worth checking out. It turned out the older Xeon doesn't have the 'xsave' flag, so we had to add "noxsave" to the kernel boot line in /boot/grub/grub.cfg on the machines with...
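A sketch of making that boot parameter stick on a GRUB 2 system; note that editing `/boot/grub/grub.cfg` directly (as the post describes) is overwritten by `update-grub`, so the persistent route goes through `/etc/default/grub`:

```shell
# Prepend "noxsave" to the kernel command line GRUB generates
sed -i 's/^GRUB_CMDLINE_LINUX="/&noxsave /' /etc/default/grub
update-grub
reboot

# After rebooting, the xsave flag should no longer be advertised:
grep xsave /proc/cpuinfo
```

This levels the CPU feature set down to the older Xeon so migrated containers don't trip over a missing flag.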
  12. M

    [SOLVED] Mark [SOLVED]

    I was thinking... depending on how detailed the permission system of this forum (vB) is, might it be possible to grant the permission to add this solved tag to certain trusted individuals, so that they can mark threads that have been solved even if the thread starter doesn't (not...
  13. M

    Ceph OSDs on Proxmox Node with VMs - not so good idea

    Well, Icinga is a fork of Nagios because Nagios is basically a one-man project where the maintainer often refused changes and made development extremely slow, hence people moved to Icinga. It's also a frequently stated recommendation. However, I was talking more about the bigger picture. Once you get to a...
  14. M

    Ceph OSDs on Proxmox Node with VMs - not so good idea

    80% CPU load across all cores is way too high. If you're overcommitting resources like that, I don't find it unreasonable that this becomes a problem during recovery scenarios. Ceph has recommended hardware specs, and obviously those don't just disappear when you colocate Ceph daemons with...
  15. M

    pve-kernel-3.10.0-3-pve does not work on 2 of 4 systems

    Re: ceph cluster - which kernel is best to use? Since KVM/QEMU uses the userland librbd to interface with Ceph, the kernel doesn't matter. As such, I would recommend staying away from experimental kernels, since they are... well... experimental. The Ceph nodes themselves, as well as machines you want to use...
  16. M

    ZFS backup tool

    This would appear to be something similar to what SUSE has built for Btrfs (snapper), yes? PS: why do I have to scroll so much on a web page that only has ~10 sentences on it? /E: yeah, no, it's not the screen size, haha
  17. M

    reset ceph cluster

    Maybe ceph-deploy is looking for more than just the "ceph" package (ceph-common and librbd come to mind). Luckily, ceph-deploy can take care of that too: ceph-deploy uninstall {hostname [hostname] ...} To get the packages back onto your nodes after you have issued purgedata, you can use...
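The uninstall/reinstall cycle sketched above, spelled out with hypothetical hostnames node1..node3 (the snippet is truncated, so the exact follow-up command is an assumption on my part):

```shell
# Tear down: remove the Ceph packages, then wipe the cluster data
ceph-deploy uninstall node1 node2 node3   # removes ceph, ceph-common, librbd, ...
ceph-deploy purgedata node1 node2 node3   # deletes /var/lib/ceph and /etc/ceph

# Rebuild: reinstall the packages and start a fresh cluster definition
ceph-deploy install node1 node2 node3
ceph-deploy new node1 node2 node3
```

All four subcommands (`uninstall`, `purgedata`, `install`, `new`) are standard ceph-deploy; only the hostnames are placeholders.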
  18. M

    reset ceph cluster

    The easiest way I can think of would be with ceph-deploy. Sadly, I don't know whether that's available in the repositories a vanilla PVE system has registered. If it's not (check with "aptitude install ceph-deploy"), you'll have to add the repo. NOTE: all the following steps only need to be...
  19. M

    ceph osd down/out

    /var/log/ceph/ceph-osd.5.log should tell you why it's not starting. As for OSD 6... it seems like you removed the /var/lib/ceph/osd/ceph-6 directory? PS: OSDs typically aren't defined via ceph.conf anymore; that's deprecated. They are now started via udev rules, which allows you to move a disk to...