Search results

  1. M

    Proxmox VE 3.3 released!

The normal upgrade procedures for Ceph apply. First of all, you should upgrade Ceph before upgrading Proxmox (as Proxmox suggests). Then you upgrade the Ceph packages on all nodes at the same time (Ceph daemons will continue running the old version until restarted!). Then you can restart the...
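A rough sketch of that rolling-upgrade order, assuming the sysvinit-style Ceph service scripts of that era (package names and service invocations may differ on your setup):

```shell
# 1. On every node, pull in the new Ceph packages first
#    (running daemons keep the old version until restarted).
apt-get update && apt-get install --only-upgrade ceph ceph-common

# 2. Restart monitors one node at a time, letting quorum recover.
service ceph restart mon

# 3. Then restart OSDs node by node, checking health in between.
service ceph restart osd
ceph -s    # wait for HEALTH_OK before moving to the next node

# 4. Only then proceed with the Proxmox upgrade itself.
apt-get dist-upgrade
```

The point is the ordering: packages everywhere first, then daemons restarted in a controlled rolling fashion, mons before OSDs.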
  2. M

    Harddisk requirements

I would suggest using 8GB flash drives as the bare minimum to store a Linux OS. 16 or 32GB would be better (most 2.5" disks you can stuff into the internal slots of servers come with 32-64GB capacity). Also, you should be aware that flash drives aren't exactly known for being durable. Even name...
  3. M

    Shellshock bash security update

The danger of this vulnerability has been blown way out of proportion by the media yet again. Your system is only vulnerable to this particular exploit if it has inherent security flaws and you're asking for trouble in the first place: - you're running (f)CGI - using svn/git over...
  4. M

    Few Ceph questions

Yes. Also, I edited my last post
  5. M

    Few Ceph questions

It's a matter of risk analysis. If you want to be able to mitigate the failure of a disk while one of your datacenters is down, then you need 3 copies. BTW, the remainder of the thread I linked talks about dry-testing the CRUSH ruleset with crushtool. Also, the rule Sage posted only works for 3...
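For the dry-testing mentioned above, a sketch of the crushtool workflow (rule number and replica count are placeholders for your own ruleset):

```shell
# Grab the compiled crushmap from the running cluster
ceph osd getcrushmap -o crushmap.bin

# Decompile to text, inspect/edit, recompile
crushtool -d crushmap.bin -o crushmap.txt
crushtool -c crushmap.txt -o crushmap.new

# Dry-test: simulate placements for rule 1 with 3 replicas,
# without touching the live cluster
crushtool -i crushmap.new --test --rule 1 --num-rep 3 \
    --show-statistics --show-mappings
```

`--show-mappings` prints which OSDs each simulated input maps to, so you can verify that copies land in the buckets (hosts, datacenters) you intended before injecting the map.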
  6. M

    Few Ceph questions

Well yes, having non-legacy tunables set is always a good idea, regardless of the situation :) Also, regarding the CRUSH rule I mentioned, Sage actually gave a detailed example of how to do that in this thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg13028.html
  7. M

    Few Ceph questions

Much appreciated. I'm currently looking at implementing such a metro cluster for 2 separate customers of mine who have asked for this to be done. The required crushmap alterations should be as simple as introducing datacenter buckets to separate the hosts. Also, it is possible to define...
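A minimal illustration of such datacenter buckets in a decompiled crushmap (bucket, host, and rule names are made up; a rule along these lines places 2 copies in each of 2 datacenters for size=4):

```
datacenter dc-a {
        id -10
        alg straw
        hash 0
        item node1 weight 1.000
        item node2 weight 1.000
}

rule metro {
        ruleset 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}
```

This is only a fragment: the datacenter buckets must also be referenced from a root bucket, and the exact `firstn` counts depend on the replication count you pick.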
  8. M

    Few Ceph questions

Since you still seem to be testing this: would you mind testing whether /etc/pve syncs up fine after 3 of 5 nodes are down and you do some writing to /etc/pve on the 2 remaining nodes?
  9. M

    Few Ceph questions

Be aware that the /etc/pve filesystem will switch to read-only, since cman will lose quorum as well, meaning that you can't move the container/VM configs into different node folders. I suppose you could lower the "expected votes", but I'm not sure how well /etc/pve deals with split-brain situations...
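If you do go down the expected-votes route, the knob on a Proxmox 3.x / cman stack looks roughly like this (use with care, for the split-brain reasons the post warns about):

```shell
# Tell the cluster stack to expect only 2 votes, so the
# 2 surviving nodes regain quorum and /etc/pve becomes writable
pvecm expected 2

# equivalently, via cman directly:
cman_tool expected -e 2

# verify quorum state afterwards
pvecm status
```

The danger is exactly the split-brain case: if the "down" nodes come back with their own quorate view, both sides consider themselves authoritative.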
  10. M

    Few Ceph questions

AFAIK, for the situation you described, your only way to add monitors is by injecting a new monmap into the surviving monitors. This means that while the cluster is fully up and running, you grab a monmap (do all of this on one of the nodes that will still be up in your tests): ceph mon...
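The truncated command presumably continues along these lines; a hedged sketch of the monmap surgery (monitor names are placeholders, and the surviving mons must be stopped before injecting):

```shell
# While the cluster is still healthy, save the current monmap
ceph mon getmap -o /tmp/monmap

# Remove the monitors that will be lost in the test
monmaptool --rm mon-c --rm mon-d --rm mon-e /tmp/monmap
monmaptool --print /tmp/monmap   # sanity-check the result

# Stop the surviving monitor(s), inject the trimmed map, restart
service ceph stop mon
ceph-mon -i mon-a --inject-monmap /tmp/monmap
service ceph start mon
```

With only the surviving mons left in the map, they can form quorum among themselves and the cluster becomes manageable again.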
  11. M

    Few Ceph questions

Not the first time I've heard of people not liking DRBD; can't blame them. The speed of the connection between your datacenters is completely irrelevant for this issue, though. You could either have an equal number of monitors in each datacenter, or have 1 more mon than that in one location. Either...
  12. M

    Few Ceph questions

As a side note: don't go above size=4 (replication count), as doing so introduces a major slowdown for writes, because a write is only acknowledged after all copies have been written. Also, you actually CAN have a fully automatic setup with just 2 locations for Ceph. This is a little complicated...
  13. M

    IPv6 forwarding OpenVZ

Not from me, I'm afraid, sorry. I've never touched openvswitch
  14. M

    IPv6 forwarding OpenVZ

That is what I meant, yes. However, I just realized that you may not need to do this at all when using vmbr. It's been a long while since I messed with this
  15. M

    IPv6 forwarding OpenVZ

That means you have to put the container's IPv6 address there, like 2001:abcd::42 or whatever it may be. On the host, as my text specifies. No, that should work the same way.
  16. M

    Ceph backup

Maybe it'd be worth it to collaborate with the Ceph guys on this, especially since there's another issue involving backups to be solved (by the Ceph guys): cache pools. More to the point: if you have a cache pool in front of your Ceph storage and you take a backup, you essentially clear out the...
  17. M

    flat storage migration?

Well, the backup sparses things up; I would've hoped it'd be possible to write only non-null sectors to the RBD during storage migration. But I suppose I'd be better off duplicating the VM and rsyncing the contents... thanks anyhow
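One hedged way to get that sparse behaviour by hand, assuming qemu-img with RBD support is available (image paths and pool/image names are placeholders):

```shell
# Convert the flat image straight into RBD; -S makes qemu-img
# detect runs of zeroes of that size and skip writing them
qemu-img convert -p -S 4k vm-disk.raw -O raw rbd:targetpool/vm-100-disk-1

# Or, when duplicating the VM instead: copy the guest's files
# while preserving sparseness
rsync -a --sparse /source/data/ /target/data/
```

Either way, only the ~100GB of actual data crosses the wire instead of the full 4TB allocation.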
  18. M

    flat storage migration?

Hi, I'm looking at a Proxmox 3.0 cluster that is connected to 2 distinct Ceph clusters (don't ask). Now, unfortunately, somebody set up a KVM with a 4TB disk. The actual usage is like <100GB, but I have noticed that storage migrations actually do reserve the space on the target storage, which is...
  19. M

    High available ZFS storage

I'm not exactly sure what your problem is, but nobody said anything about a Debian platform. You do realize that storage FOR Proxmox is generally hosted on OTHER physical machines, right?
  20. M

    Cloud Controller

The cluster stack Synnefo is currently running on top of any number of Ganeti clusters; maybe it'd be worth it to collaborate with them on getting Proxmox support into Synnefo