Search results

  1. [SOLVED] PVE 5.1 Ceph - HA VM start timeout after relocation

    Hello Alwin, Thank you very much for your help! No, datacenter is not a logical level; it reflects the physical locations. The addition of another PVE node in another datacenter is planned in order to always have 3 running PVE nodes in case of a datacenter disaster ;) The issue was indeed the...
  2. [SOLVED] PVE 5.1 Ceph - HA VM start timeout after relocation

    Hi Alwin, Around 12:46, I powered off the node on which the test-ubuntu VM (id 114) was running, and it was relocated. Please find some log files. kernel.log Nov 27 12:48:40 node2 pve-ha-lrm[21325]: <root@pam> starting task UPID:node2:0000534E:09D30DB1:5A1BFB98:qmstart:114:root@pam: Nov 27...
  3. [SOLVED] PVE 5.1 Ceph - HA VM start timeout after relocation

    This is my storage.cfg file:

    zfspool: local-zfs
        pool rpool
        content rootdir,images
        nodes anonymized
        sparse 0

    dir: local
        path /var/lib/vz
        content images,rootdir,vztmpl,iso
        maxfiles 0

    rbd: ceph-rbd-ssd
        content images
        krbd 0
    ...
  4. Multiple Ceph pools possible?

    Hi Alwin, Thank you again for your help. Best regards, Saiki
  5. [SOLVED] PVE 5.1 Ceph - HA VM start timeout after relocation

    Hi Alwin, As requested, I edited my previous message with code formatting :) Yes to all your questions. The storage and network are properly configured. I could manually migrate the test-ubuntu VM to every node without any issue. Best regards, Saiki
  6. [SOLVED] PVE 5.1 Ceph - HA VM start timeout after relocation

    Hi everyone, I have my test cluster set up with 4 PVE 5.1 nodes, all having their storage on a Ceph SSD pool. I followed the wiki in order to set up HA. I created an HA group, named cluster, including the 4 nodes with the same priority. I then enabled HA on a test VM, named test-ubuntu, with the following...
  7. Multiple Ceph pools possible?

    Hi Alwin, Thank you for your help. I am referring to this article: http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map I eventually solved this issue. I chose to use device classes in order to separate hdd and ssd, and it works fine. This issue probably...
  8. [SOLVED] Ceph SSD pool slow performance

    Hi everyone, Thank you for your replies. I figured out that the issue comes from the network connection. Indeed, I noticed communication errors between some nodes, which obviously affect Ceph performance. Furthermore, my cloud provider has QoS enabled, which limited networking to 5 Gbps. It has...
  9. Multiple Ceph pools possible?

    My bad, please find the crush map configuration
  10. Multiple Ceph pools possible?

    Hello Fabian, Thanks for your reply. Please find the two versions of my crush map configurations. For the old version, these are the commands I used to set up the replication rules: ceph osd crush rule create-replicated replicated-ssd root-ssd datacenter ceph osd crush rule...
  11. Multiple Ceph pools possible?

    Hello, Thank you for asking this question. I have the exact same need. There are 12 SSD OSDs and 4 HDD OSDs within my architecture (PVE 5.1 with integrated Ceph Luminous). I updated the CRUSH map by adding datacenter levels and then I created two replication rules using these commands. ceph osd...
  12. [SOLVED] Ceph SSD pool slow performance

    Hello, I am currently facing a performance issue with my Ceph SSD pool. There are 4 nodes (connected at 10 Gbps) across two datacenters, each of them having 3 SSD OSDs. An iperf benchmark reports at least 5 Gbps with a 1500-byte MTU on the network interfaces between the two datacenters. SSD...
  13. Proxmox VE 5.1 and Ceph Filesystem

    Hello everyone, Context: I need to set up an HA architecture between two datacenters for virtualization and a distributed storage system. The latter would be used as block devices and filesystems for Proxmox virtual machines and also our legacy infrastructure. Implementation: - 2 datacenters - 4...
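Several of the results above reference truncated `ceph osd crush rule create-replicated` commands for separating SSD and HDD placement. As a hedged sketch only (the actual rule, root, and pool names are cut off in the snippets, so the names below are assumptions, not taken from the threads), per-device-class rules on Ceph Luminous typically look like this:

```shell
# Sketch only: assumes a Luminous cluster whose OSDs report the "ssd" and
# "hdd" device classes. Rule and pool names here are hypothetical.
# Syntax: ceph osd crush rule create-replicated <name> <root> <failure-domain> [<class>]

# One replicated rule per device class, with datacenter as the failure domain:
ceph osd crush rule create-replicated replicated-ssd default datacenter ssd
ceph osd crush rule create-replicated replicated-hdd default datacenter hdd

# Attach each rule to its pool so placement stays on the matching class:
ceph osd pool set ceph-rbd-ssd crush_rule replicated-ssd
ceph osd pool set ceph-rbd-hdd crush_rule replicated-hdd
```

Using device classes this way avoids maintaining separate roots (e.g. root-ssd) in the CRUSH map, which matches the resolution described in result 7 above.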
