Hello Alwin,
Thank you very much for your help!
No, the datacenter is not a logical level; it reflects the physical locations.
The addition of another PVE node in another datacenter is planned in order to always have 3 running PVE nodes in case of a datacenter disaster ;)
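As a side note on quorum, once that extra node is added, the vote count can be checked directly on any node (a quick sketch; the output naturally depends on your cluster):
pvecm status   # shows quorum information, expected votes and total votes
pvecm nodes    # lists the cluster members and their votes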
The issue was indeed the...
Hi Alwin,
Around 12:46, I powered off the node on which the test-ubuntu VM (id 114) was running, and it was relocated.
Please find some log files.
kernel.log
Nov 27 12:48:40 node2 pve-ha-lrm[21325]: <root@pam> starting task UPID:node2:0000534E:09D30DB1:5A1BFB98:qmstart:114:root@pam:
Nov 27...
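For what it's worth, while reproducing such a failover test the HA state can also be followed from the CLI (a minimal sketch; the time range is just the window around my test):
ha-manager status                                        # shows where vm:114 is currently running and its HA state
journalctl -u pve-ha-crm -u pve-ha-lrm --since "12:40"   # HA manager log entries around the failover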
Hi Alwin,
As requested, I edited my previous message with code formatting :)
Yes to all of your questions. The storage and network are properly configured. I could manually migrate the test-ubuntu VM to every node without any issue.
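For reference, that kind of manual migration can be done from the CLI like this (a minimal sketch; the target node name is only an example):
qm migrate 114 node3 --online   # live-migrate the test-ubuntu VM (id 114) to another node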
Best regards,
Saiki
Hi everyone,
My test cluster is set up with 4 PVE 5.1 nodes that have their storage on a Ceph SSD pool.
I followed the wiki in order to set up HA.
I created an HA group, named cluster, including the 4 nodes with the same priority.
I then enabled HA on a test VM, named test-ubuntu, with the following...
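The exact settings are cut off above; purely as an illustrative sketch of such a setup on PVE 5.1 (node names are placeholders, not necessarily the actual configuration):
ha-manager groupadd cluster --nodes node1,node2,node3,node4   # HA group with all 4 nodes at the same priority
ha-manager add vm:114 --group cluster                         # enable HA for the test-ubuntu VM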
Hi Alwin,
Thank you for your help.
I am referring to this article: http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map
I eventually got this issue solved. I chose to use device classes in order to separate hdd and ssd, and it works fine.
This issue probably...
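For anyone finding this later, the class-based separation mentioned above only needs one rule per device class on Luminous (a sketch assuming the default root; rule names are just examples):
ceph osd crush rule create-replicated replicated-ssd default host ssd   # rule restricted to ssd-class OSDs
ceph osd crush rule create-replicated replicated-hdd default host hdd   # rule restricted to hdd-class OSDs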
Hi everyone,
Thank you for your replies.
I figured out the issue, which comes from the network connection.
Indeed, I noticed communication errors between some nodes, which obviously affects Ceph performance.
Furthermore, my cloud provider has QoS enabled, which limited networking to 5 Gbps. It has...
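In case it helps someone else, a couple of generic checks for this kind of problem (a sketch, not necessarily how I found it; interface names differ per host):
ip -s link show dev eth0   # RX/TX error and drop counters on the Ceph-facing interface
ceph -s                    # overall cluster health while the errors occur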
Hello Fabian,
Thanks for your reply.
Please find the two versions of my CRUSH map configuration.
For the old version, these are the commands I used to set up the replication rules:
ceph osd crush rule create-replicated replicated-ssd root-ssd datacenter
ceph osd crush rule...
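The second command is cut off above; whatever its exact arguments, the resulting rules can be inspected with (a quick sketch):
ceph osd crush rule ls                    # list all CRUSH rules
ceph osd crush rule dump replicated-ssd   # show the steps of a given rule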
Hello,
Thank you for asking this question.
I have the exact same need.
There are 12 SSD OSDs and 4 HDD OSDs within my architecture (PVE 5.1 with integrated Ceph Luminous).
I updated the CRUSH map, adding datacenter levels, and then created two replication rules using these commands.
ceph osd...
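The exact commands are truncated above; as a generic sketch of how datacenter buckets can be added to a CRUSH map (bucket and host names are placeholders):
ceph osd crush add-bucket dc1 datacenter   # create a datacenter bucket
ceph osd crush move dc1 root=default       # attach it under the root
ceph osd crush move node1 datacenter=dc1   # move a host under its datacenter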
Hello,
I am currently facing a performance issue with my Ceph SSD pool.
There are 4 nodes (connected at 10 Gbps) across two datacenters, each of them having 3 SSD OSDs.
An iperf benchmark reports a minimum of 5 Gbps with a 1500-byte MTU on the network interfaces between the two datacenters.
SSD...
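For completeness, such an inter-datacenter measurement can be run like this (a sketch, not necessarily my exact invocation; the peer address is a placeholder):
iperf -s                        # on a node in the first datacenter
iperf -c <peer-ip> -P 4 -t 30   # from a node in the second datacenter, 4 parallel streams for 30 seconds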
Hello everyone,
Context :
I need to set up an HA architecture between two datacenters for virtualization and a distributed storage system. The latter would be used as block devices and filesystems for Proxmox virtual machines and also for our legacy infrastructure.
Implementation :
- 2 datacenters
- 4...
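As for consuming the storage outside Proxmox, a rough sketch of both access paths on a Luminous cluster (pool, image, monitor and mount point names are placeholders):
rbd create ssd-pool/legacy-disk --size 100G   # block device for the legacy infrastructure
rbd map ssd-pool/legacy-disk                  # exposes it as /dev/rbdX on the client
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret   # CephFS mount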