Search results

  1. Hardware/Concept for Ceph Cluster

    How are you presenting the disks to your server? Is it via a RAID card?
  2. [SOLVED] Proxmox 4 / Cluster over MPLS

    What is the latency like? Were you able to run a large number of omping packets without a drop? Can every node ping the others / resolve the corosync hostnames, etc.?
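For reference, the sort of checks being asked about might look like the following (hostnames are placeholders; the counts and intervals are illustrative, not from the thread):

```shell
# long-running multicast test between all nodes (run on every node at once):
omping -c 600 -i 1 -q node1 node2 node3

# basic reachability and name resolution for each corosync ring hostname:
ping -c 3 node2
getent hosts node2
```

A sustained omping run with zero loss and low latency is the usual bar for a healthy corosync link.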
  3. PVE 5.0 ETA

    Is the roadmap fairly up to date, or are there other key/big features planned for 5.0?
  4. corosync update reboot

    Fully understood about maintenance, and yes, if I am doing anything more than an "apt-get dist-upgrade" then I would remove all VMs first. But when applying updates that require no reboot of the node, and so can be done live, if there is an update for the corosync package, this obviously...
  5. corosync update reboot

    I understood this was still about a VM moving to another node, but will this also stop a node from powering off if corosync does not respond? I know they may sound the same; I just wanted to make sure!
  6. KVM disk has disappeared from Ceph pool

    I will leave this to the devs; it may be a bug or just a one-off. The main thing is that, as you have proven, your storage is still there and usable. To me it does not look like a Ceph issue, otherwise you would not be able to create a new disk and make use of it.
  7. KVM disk has disappeared from Ceph pool

    Can you paste the contents of your VM's config file? It is located at /etc/pve/qemu-server.
  8. KVM disk has disappeared from Ceph pool

    What does "rbd ls pool2" show in a terminal?
  9. corosync update reboot

    I have recently been trying out HA in Proxmox and just ran an apt-get dist-upgrade on one node; this included an update for corosync, causing the node to be fenced (rebooted). 1/ I have done some searching on the forum and have found a reply stating to stop two services on the node in question before updating...
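The two services that reply most likely means (my assumption, based on how Proxmox HA fencing works — verify on your own cluster first) are the HA local and cluster resource managers; stopping them disarms the watchdog so a corosync update cannot trigger a fence:

```shell
# disarm HA / the watchdog before touching corosync (order matters):
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm

apt-get update && apt-get dist-upgrade

# re-arm HA afterwards:
systemctl start pve-ha-crm
systemctl start pve-ha-lrm
```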
  10. Understanding Ceph Failure Behavior

    Sure, I agree with you there; I was just confirming.
  11. Understanding Ceph Failure Behavior

    Maybe I understand it wrong, but I/O won't stop if osd_backfill_full_ratio is reached; only recovery will stop. I/O will only stop at mon_osd_full_ratio, which is a different value (the backfill ratio is by default lower than the full ratio, to stop a backfill from making an OSD go full and causing writes to be blocked).
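The distinction between the two thresholds can be sketched like this (the ratio values are common defaults of that era, hard-coded here for illustration rather than read from a real cluster):

```python
# Illustrative sketch of how Ceph's two "full" thresholds gate different things.
# Values are typical defaults, not taken from a live cluster.

OSD_BACKFILL_FULL_RATIO = 0.85  # backfill/recovery to an OSD stops here
MON_OSD_FULL_RATIO = 0.95       # client writes are blocked here

def osd_state(usage: float) -> str:
    """Classify an OSD by its usage fraction (0.0 - 1.0)."""
    if usage >= MON_OSD_FULL_RATIO:
        return "full: client I/O blocked"
    if usage >= OSD_BACKFILL_FULL_RATIO:
        return "backfillfull: backfill suspended, client I/O still served"
    return "ok: backfill and client I/O both allowed"

print(osd_state(0.80))  # ok: backfill and client I/O both allowed
print(osd_state(0.90))  # backfillfull: backfill suspended, client I/O still served
print(osd_state(0.97))  # full: client I/O blocked
```

The gap between the two ratios is deliberate: backfill is cut off early enough that a rebalance alone cannot push an OSD into the write-blocking state.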
  12. Understanding Ceph Failure Behavior

    Ceph has built-in limits per OSD/PG/pool; any rebalance will be suspended once such a limit is hit. Once new storage is added, the rebalance will continue, as long as the usage % is lower than the threshold. However, this does mean that during this time you run the risk of a further failure...
  13. Advice Regarding 5 node Proxmox Ceph Setup

    A) It could help, depending on how good the RAID1 disks are. B) No, as long as they are set up correctly. C) With a size of 3 you can expect slightly worse performance than with 2. The rados benchmark is what I'd expect, as you're hitting around 50% of the drives' write performance.
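The "around 50%" figure follows from filestore journal colocation: every client write hits the same disk twice (journal, then data), and replication multiplies writes across OSDs. A back-of-the-envelope sketch, with hypothetical disk counts and speeds (not the poster's actual hardware):

```python
# Rough expected aggregate write throughput for a filestore Ceph pool.
# All inputs are hypothetical examples, not figures from the thread.

def expected_cluster_write_mb_s(num_osds: int, disk_write_mb_s: float,
                                replicas: int, colocated_journal: bool = True) -> float:
    # Colocated journal: each write lands twice on the same disk -> half speed.
    per_osd = disk_write_mb_s / (2 if colocated_journal else 1)
    # Replication multiplies the data written across the cluster.
    return num_osds * per_osd / replicas

# 20 OSDs of ~200 MB/s each, pool size 2, journal on the same disk:
print(expected_cluster_write_mb_s(20, 200, 2))  # -> 1000.0
```

Moving journals to separate SSDs removes the factor of 2; raising the pool size from 2 to 3 costs another third of aggregate write throughput, which matches the "slightly worse with size 3" remark above.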
  14. Please help ! my proxmox broken after update

    What are you confused about? We both said the same thing.
  15. ceph.conf : I made changes, how to make them take effect now?

    Depending on the value you changed, you need to restart that service for it to pick up the new value; however, you can also inject the variables live into the running services, as described at: http://www.sebastien-han.fr/blog/2012/10/22/ceph-inject-configuration-without-restart/
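As a concrete sketch of that injection approach (the option name and value here are only examples, not the setting the poster changed):

```shell
# Persist the change across restarts by editing /etc/ceph/ceph.conf, e.g.:
#   [osd]
#   osd_max_backfills = 1
#
# ...and/or push it into the running daemons without a restart:
ceph tell osd.* injectargs '--osd_max_backfills 1'
```

Note that injected values live only in the running daemons; without the matching ceph.conf edit they are lost on the next restart.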
  16. Proxmox Ceph 10G setup

    It is documented in many places online, including: http://ceph-users.ceph.narkive.com/px2L2fHc/is-it-still-unsafe-to-map-a-rbd-device-on-an-osd-server I have yet to see a clear statement that a particular kernel version has the issue 100% fixed, and it is not something I wish to test myself. KRBD vs...
  17. Advice Regarding 5 node Proxmox Ceph Setup

    1/ I would set your PG number to 2048 (this shouldn't affect performance too much, but it is a better value than the 1050 you have set). 2/ What tests are you running to check performance, and what results are you getting? As you're colocating the journal on the same disks, you will only get a max of 1/2...
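The 2048 suggestion is consistent with the usual rule of thumb: roughly 100 PGs per OSD divided by the pool's replica count, rounded up to a power of two. A sketch of that rule (the OSD counts below are made-up inputs, not the poster's cluster):

```python
# Rule-of-thumb PG count: (num_osds * target_per_osd / pool_size),
# rounded up to the next power of two. Inputs are hypothetical.

def recommended_pgs(num_osds: int, pool_size: int, target_per_osd: int = 100) -> int:
    raw = num_osds * target_per_osd / pool_size
    pgs = 1
    while pgs < raw:
        pgs *= 2
    return pgs

# e.g. 5 nodes with 12 OSDs each = 60 OSDs, replicated size 3:
print(recommended_pgs(60, 3))  # -> 2048
```

A power of two also matters on its own: with a non-power-of-two count like 1050, PGs end up unevenly sized, which is part of why 2048 is "a better value".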
  18. Please help ! my proxmox broken after update

    apt-get update && apt-get dist-upgrade
  19. [SOLVED] upgrade issue's

    apt-get update; apt-get dist-upgrade; I would hold off on the autoremove until you have checked that everything is working correctly.
  20. Hardware/Concept for Ceph Cluster

    Using the hardware you have, on each server (2 * SM863, 4 * MX300): on each SM863 create 1 partition for Proxmox (equal size, 80GB) and 2 journal partitions (4 * 10GB in total across both SSDs). ZFS-mirror the two Proxmox partitions from the SM863s during the Proxmox installer, and use the remaining 4 journal partitions, one for each MX300 OSD...
