PVE Cluster Shared Storage

majorchen

New Member
May 27, 2019
During testing we found that with 3 OSDs running in Ceph, if one OSD's network is interrupted or its host is powered off and restarted, all VMs using Ceph have to wait for the OSD heartbeat timeout before they work normally again.
That is not very friendly in a production environment. Apart from shortening the heartbeat time, is there a better solution?
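By "shortening the heartbeat time" I mean tuning options like these in ceph.conf (the values below are only an illustration, not a recommendation):

Code:
[global]
# how often OSDs ping their peers (default 6 s)
osd heartbeat interval = 3
# how long missed heartbeats are tolerated before an OSD is reported down (default 20 s)
osd heartbeat grace = 10
# how long a down OSD stays "in" before it is marked out (default 600 s)
mon osd down out interval = 300

Lowering these makes the cluster notice a dead OSD sooner, but values that are too aggressive can make OSDs flap.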
 
Hi!
What is your configuration? The information is a little sparse.
How many nodes do you have, how many OSDs per host, and is the Ceph cluster on a physically separated network?
What is in your ceph.conf, and which OSD heartbeat settings did you change?
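For example, the output of something like this would already help (assuming a standard PVE-managed Ceph setup, where ceph.conf lives under /etc/pve):

Code:
pveversion -v
ceph -s
ceph osd tree
cat /etc/pve/ceph.conf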

And please change the topic prefix "Tutorial", I think that is wrong.

regards,
roman
 
Thank you for pointing out the mistake.
It is just a minimal cluster: 3 hosts with only 1 OSD per host. The Ceph config is the default except that the pool has min_size=1. During a single-point-of-failure test while importing data into MySQL, we found that data could not be written.
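For reference, the effective pool settings can be checked with something like this ("<poolname>" is a placeholder for the real pool name):

Code:
ceph osd pool get <poolname> size
ceph osd pool get <poolname> min_size
ceph osd dump | grep 'replicated size'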
 
With three nodes and Ceph min_size 1 it will not work correctly, I guess. Keep the defaults: size = 3 and min_size = 2. If you change e.g. the size to 2 and migrate a VM, the Ceph cluster can end up in read-only mode, because of the size 2 in a three-node cluster.
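As a rough sketch, with a pool called for example "vm-pool" (replace with your real pool name), that would be:

Code:
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2

With 3 hosts and 1 OSD per host, size 3 puts one replica on each host, so losing a single host still leaves min_size = 2 replicas and the pool stays writable.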
 
After modifying min_size, the test result is the same as before. It is still necessary to wait for the OSD heartbeat: writes only resume once the problematic OSD is considered DOWN.
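For reference, the moment the OSD is actually marked down can be watched with something like the following (osd.0 is only an example id, and the second command has to run on that OSD's host):

Code:
# watch the cluster log until the failed OSD is reported down
ceph -w
# show the grace period a running OSD currently uses
ceph daemon osd.0 config get osd_heartbeat_grace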
 
Well, as you can see from this picture: do you have a shared storage solution that can be used in a production environment?
[attached image]
 



I don't know what you want to say with this picture, but yes, of course Ceph is a shared storage solution that can be used in a production environment. We have been doing that for more than five years!

Read this about Ceph and RAID:

Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks (OSDs) on its own, using a RAID controller normally doesn't improve performance or availability. On the contrary, Ceph is designed to handle whole disks on its own, without any abstraction in between. RAID controllers are not designed for the Ceph use case and may complicate things and sometimes even reduce performance, as their write and caching algorithms may interfere with the ones from Ceph.


From: https://pve.proxmox.com/pve-docs/chapter-pveceph.html
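In practice that means giving each raw disk to Ceph as its own OSD instead of hiding the disks behind a RAID volume, for example (/dev/sdb is only a placeholder device):

Code:
# with the Proxmox tooling
pveceph createosd /dev/sdb
# or with plain Ceph tooling
ceph-volume lvm create --data /dev/sdb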


Best regards, roman
 
