Search results

  1. wahmed

    3rd party Ceph dashboard connection being refused on Proxmox 4

    Ceph Calamari did not work out very well for me. Every few hours the Calamari server kept getting disconnected and the Calamari minions kept crashing. Tried 2 clean installs but got the same result. I went back to Ceph Dashboard from Crapworks. Got a response from the developer. By default the Ceph Dashboard...
  2. wahmed

    Cluster Problem

    From Proxmox 4 on, you need multicast to form a quorum. In a datacenter this is not always possible, but you can contact SoYouStart and ask them to enable/configure multicast for your ports.
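
    A rough way to verify multicast between the nodes (a sketch, not from the quoted post; hostnames are placeholders) is the omping test, run on all nodes at the same time:
    #omping -c 10000 -i 0.001 -F -q node1 node2 node3
    If omping reports multicast packet loss, corosync will not be able to form a quorum until multicast is fixed or enabled by the provider.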
  3. wahmed

    systemd-sysv-generator messages during openvswitch installation

    Is anybody getting this message during openvswitch installation, and does anyone know what it means? Also, to the Proxmox devs: is it possible to include the openvswitch package in the base Proxmox installation ISO? For those of us who use openvswitch it seems like we have to go through additional steps of configuring...
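
    Only as a sketch of those extra steps (package name as in the Debian repositories), installing Open vSwitch on a node usually starts with:
    #apt-get install openvswitch-switch
    #ovs-vsctl show
    The bridge and port definitions then go into /etc/network/interfaces; ovs-vsctl show just confirms that the vswitch daemon is up.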
  4. wahmed

    3rd party Ceph dashboard connection being refused on Proxmox 4

    [SOLVED] Re: 3rd party Ceph dashboard connection being refused on Proxmox 4 Thanks Spirit! The Ceph Dashboard from Crapworks has port 5000 hard coded. That is probably why I cannot access that dashboard. I contacted the developer to ask if he can make the port configurable. I installed Ceph Calamari...
  5. wahmed

    Proxmox VE Ceph Server released (beta)

    You can configure VMs as MONs for Ceph, but it is rather risky. If the storage the VMs are on becomes inaccessible for some reason, your MONs will never come online, thus potentially risking massive data loss. You can have the majority of MONs on physical machines and some as VMs. We use all...
  6. wahmed

    3rd party Ceph dashboard connection being refused on Proxmox 4

    I have been using Ceph Dashboard (https://github.com/Crapworks/ceph-dash) with Proxmox 3.2 for quite some time without any issue. After a clean upgrade to Proxmox 4 I can no longer access the dashboard on port 5000. The GUI is usually accessed at http://proxmox:5000. I also tried to install...
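
    A quick troubleshooting sketch (not from the thread) to narrow down the refused connection, run on the Proxmox node itself:
    #ss -tlnp | grep 5000
    #curl -v http://localhost:5000
    If nothing is listening on port 5000, the dashboard service never started; if it answers locally but not remotely, the firewall is the likely culprit.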
  7. wahmed

    Migrating Physical to Virtual

    What is the image type of your VM disk image: qcow2, raw, or vmdk? Is it Windows or Linux on the physical node that you are trying to convert to virtual? Did you restore the image using Clonezilla inside the VM?
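
    If the restored image turns out to be in the wrong format, it can be converted with qemu-img; a minimal sketch with hypothetical file names:
    #qemu-img convert -f vmdk -O qcow2 physical-disk.vmdk vm-100-disk-1.qcow2
    #qemu-img info vm-100-disk-1.qcow2
    The second command confirms the resulting format and virtual size before attaching the image to the VM.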
  8. wahmed

    Urgent: Proxmox/Ceph Support Needed

    The high load may be the result of all the rebalancing Ceph is trying to do. Eric's original post says the cluster lost 33% of its disks, but we do not know what caused the loss. After the loss I believe he marked the OSDs OUT, which started rebalancing, and as far as I can understand it never finished...
  9. wahmed

    NAS Solution for backup

    As others have already commented, I too vouch for FreeNAS. It is what you call "Just Works". We have 2+ dozen FreeNAS and Gluster deployments running 24/7 for the last several years, mainly used as backup storage. No complaints thus far. For distributed backup storage with redundancy, Gluster is excellent...
  10. wahmed

    Urgent: Proxmox/Ceph Support Needed

    Yep, agree with you. That's why I suggested that he does not change the replica size before he achieves a healthy cluster across 3 nodes. Eric, to make things clearer, you must achieve a healthy cluster with all 3 nodes and all OSDs active before you change the replica size. Only after the replica size is changed should...
  11. wahmed

    Urgent: Proxmox/Ceph Support Needed

    Yes, you need to decrease the replica size for all pools currently with 3 replicas. I would suggest taking down 1 OSD at a time from node 3.
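
    The replica change itself is done per pool; a minimal sketch, assuming a pool named rbd and a cluster that is already healthy:
    #ceph osd pool get rbd size
    #ceph osd pool set rbd size 2
    #ceph osd pool set rbd min_size 1
    The min_size value here is only an example; pick values that match the redundancy you actually need.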
  12. wahmed

    Urgent: Proxmox/Ceph Support Needed

    High IO/latency is normal on Ceph OSDs during rebuilding, as you probably already know. Using just about any SSD does not increase Ceph performance. For example, Intel DC3500/DC3700 series SSDs are extremely good for the Ceph journal, whereas other brands/models will not give you any noticeable...
  13. wahmed

    Urgent: Proxmox/Ceph Support Needed

    For now, let it finish rebuilding and achieve HEALTH_OK status. After that, disable noout with: #ceph osd unset noout Then observe the Ceph cluster behavior. What are you trying to achieve here in the end? Take OSDs out of node 3, or take the whole node 3 out of the cluster?
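
    In command form, the sequence above is roughly:
    #ceph -s
    #ceph osd unset noout
    #ceph -w
    The first call confirms HEALTH_OK before changing anything; the last one watches how the cluster reacts once noout is cleared.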
  14. wahmed

    Urgent: Proxmox/Ceph Support Needed

    Mark all the OSDs IN. OUT tells Ceph that those OSDs are not in use, and it will try to move data off those OSDs and redistribute it among the rest of the IN OSDs. What does your #ceph osd tree look like? Post the results of: #ceph osd tree and #ceph -s If you have not taken out a large number of OSDs...
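
    Marking OSDs back IN is done per OSD ID; a minimal sketch with placeholder IDs:
    #ceph osd in 0
    #ceph osd in 1
    #ceph osd tree
    #ceph -s
    The tree and status output then show whether the OSDs are up/in and whether recovery has started.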
  15. wahmed

    Urgent: Proxmox/Ceph Support Needed

    Not sure why you want to change the replica size from 3 to 2. I don't think that will help with the situation you currently have at hand. Some details of what caused the 33% loss would definitely help us provide applicable help, info such as what exactly was done (e.g. marking OUT) before the issue...
  16. wahmed

    PVE 3.4 CEPH cluster - failed node recovery

    Run the following command to remove the bucket/item from the CrushMAP: #ceph osd crush remove host=pmc1 Since your cluster is perfectly fine at this point, it is safe to remove the dead host from the CrushMAP.
  17. wahmed

    PVE 3.4 CEPH cluster - failed node recovery

    I see you still have pmc1 joined in the Proxmox cluster. Remove it with: #pvecm delnode pmc1 If the host is still in the crushmap after running that, then just remove it from ceph.conf.
  18. wahmed

    using ceph with proxmox

    Ceph management through the Proxmox GUI is very limited, for the right reason. You can manage OSDs, pools, and MONs and check status through the GUI. But all other advanced tasks, such as benchmarking, PG repair, retrieving/injecting Ceph configuration and many, many more, must be done through the CLI. I don't see...
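
    A few examples of the kind of CLI-only tasks meant here (OSD and PG IDs are placeholders; the daemon command must be run on the host where that OSD lives):
    #ceph tell osd.0 bench
    #ceph pg repair 2.1f
    #ceph daemon osd.0 config show
    None of these are exposed in the Proxmox GUI.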
  19. wahmed

    using ceph with proxmox

    You are somewhat correct. We use Proxmox nodes all around simply because it is convenient to see all nodes in the cluster from the GUI. Other than that, you are correct. In an emergency we can move some VMs to Proxmox+Ceph if need be. Sorry, I should have abbreviated. Since you use Ceph I assumed...
  20. wahmed

    using ceph with proxmox

    For day-to-day operation it works fine, assuming there are plenty of resources to go around. But when Ceph goes into rebalancing mode, that is when CPU and memory really start to churn. In those cases your heavyweight VMs, or Ceph, or both will suffer due to CPU/memory shortage. You can also...
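
    One common mitigation (a sketch, not from the quoted post) is to throttle recovery so rebalancing takes longer but leaves more CPU and memory for the VMs:
    #ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    These values are only illustrative; the trade-off is slower rebalancing in exchange for more responsive guests.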
