Proxmox HA storage and performance

supervache

Hi.
I have multiple Proxmox 7 clusters for different needs.
On a "staging" cluster (for staging / preproduction infrastructure) I don't have any HA. But on the production cluster, I configured two Ceph storages (SSD and HDD).

Developers have always pointed out to me that deploying code is significantly slower in production than in pre-production. So today I benchmarked lots of setups: dedicated servers, LXC containers on Ceph or not, using Ceph HDD or Ceph NVMe...

For example, on the production cluster I have 3 Proxmox nodes. Each node is strictly identical in hardware configuration.
Code:
NAME                                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda (HDD RAID1)                               8:0    0   5,5T  0 disk
├─sda1                                        8:1    0     1M  0 part
├─sda2                                        8:2    0   512M  0 part /boot
├─sda3                                        8:3    0     1G  0 part
└─sda4                                        8:4    0   5,5T  0 part
  └─system--lnwic-root                      253:2    0   5,5T  0 lvm  /
sdb (same HDD model as the two in sda's RAID) 8:16   0   5,5T  0 disk
└─ceph--96b52363--2628--4fc3--bd2a--379305739b7f-osd--block--865c0e3e--47ec--4f41--a186--e9b0234bd29e
                                            253:1    0   5,5T  0 lvm
sdc (NVMe)                                    8:32   0 953,3G  0 disk
└─ceph--98afa5f9--e075--4ee5--964c--4b5125bd645c-osd--block--7f397012--0efb--41ae--a5c1--84657eba72ff


Running dd if=/dev/zero of=/tmp/BenchFile bs=1G count=3 conv=fdatasync several times on the same system always returns the same order of magnitude, but comparing the different servers and storage types shows big differences in speed. Here is a summary of my measurements:
On the Proxmox host, writing to the RAID array (sda4): 250 MB/s
On an LXC container stored on Ceph NVMe: 123 MB/s
On an LXC container stored on Ceph HDD: 95 MB/s
On another server with an NVMe disk but no Ceph (and no virtualization): 415 MB/s
On my personal computer with NVMe (Intel NUC): 1600 MB/s . . .
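
(Side note: dd with a single big sequential stream is only a rough test. Something sync-heavy is closer to what code deployments actually do; a small fio run along these lines, with an arbitrary test file and size, would be more telling:)
Code:
fio --name=syncwrite --filename=/tmp/fio.bench --rw=write --bs=4k \
    --size=256M --ioengine=libaio --direct=1 --fsync=1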

So Ceph, even when used on NVMe disks, seems slow. The developers were right (I never doubted them ;) )
Is it possible that my configuration is bad, or is this only down to the network link and protocol?

What are other options on Scaleway / OVH to use HA services on Proxmox with something more efficient than Ceph? Block storage maybe? How did you do it? Are the write speeds on your Ceph clusters comparable to mine?

Thank you
 
Ceph speed depends on the network capacity between each of the nodes that host OSDs.

We need more details about the tested cluster to help.

If, for example, you have VMs running on the same network as the cluster storage and the monitors, you lose capacity there too.

I suggest having at least 4x 10 Gb interfaces, or at minimum 3x 10 Gb:

1x management
1x public Ceph
1x cluster Ceph
1x internet transit for your customers
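
In ceph.conf that separation between the public and cluster Ceph networks would look something like this (the subnets are placeholders):
Code:
[global]
    # clients and monitors (placeholder subnet)
    public_network = 10.10.10.0/24
    # OSD replication traffic only (placeholder subnet)
    cluster_network = 10.10.20.0/24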

And configure a link1 for Corosync on the Ceph cluster network as a backup, even if that is not ideal.
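
In /etc/pve/corosync.conf that is one extra ring address per node, roughly like this (names and addresses are placeholders):
Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    # link0 on the management network
    ring0_addr: 10.10.30.1
    # backup link1 on the Ceph cluster network
    ring1_addr: 10.10.20.1
  }
  # ... same pattern for the other two nodes
}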
 
On a LXC container stored on ceph nvme : 123 MB/s
On a LXC container stored on ceph HDD : 95 MB/s
Since the results are very close, can you tell us the exact models of the HDDs and NVMes?
What is the network speed and also latency between the nodes in the Ceph network?

Ceph will make a few network round trips for each write operation. That's why, besides enough bandwidth, low latency is very important for good performance.
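
If you want to see what the cluster itself delivers without the LXC layer in between, rados bench against a test pool is a quick check (the pool name is a placeholder; --no-cleanup keeps the objects for the read test, cleanup removes them afterwards):
Code:
rados bench -p testpool 10 write --no-cleanup
rados bench -p testpool 10 seq
rados -p testpool cleanup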
 
@DC-CA1:
Thank you for your reminder. I suspected that network performance played an important role, but my cluster architecture is not as good as what you describe. On each host, I have one interface for the public side (WAN) and one private interface (LAN) that everything else uses: Corosync, Ceph, and communications between my containers and VMs...

That is part of the explanation. I don't know how to do better with hosted dedicated servers.

@aaron:
We have 3 hosts at Scaleway "Dedibox" (previously named "online.net"): Core-4-L SATA, with an additional SSD drive.


LAN speed: 10 Gbps (not enough interfaces for a coherent mesh, I understand)
3x TOSHIBA MG04ACA6 (Toshiba quotes a write speed of about 180 MiB/s for these disks)
--> for the Proxmox OS part, a RAID 1 behind the H730P RAID controller
--> for Ceph, the third disk without RAID
1x SAMSUNG MZ7LN1T0 (it's a SATA SSD and not an NVMe as I thought; I can't find the official write speed, it may be 520 MB/s sequential write)

Other misc information:
CPU: 2x Intel® Xeon E5 2660v4 14C/28T
RAM: 256 GB DDR4 @ 2133 MHz
WAN speed: 750 Mbps



Do you have experience with any particular vendors that offer the tooling for this kind of infrastructure? OVH? Scaleway? (I work for a French company, which prefers to host its servers in Europe.)
 
Okay, the SSD looks like a decent one. What is the latency between the nodes? That could be another reason for poor Ceph performance.
 
Are you asking about ICMP latency (ping round trip between nodes)? Or is there a place to find this information with the PVE CLI?

Ping between the 3 nodes is about 0.165 ms to 0.195 ms.
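(Measured with plain ICMP over the LAN interface, something like ping -c 100 -q <node-ip>.)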
 
The architecture provided by my datacenter does not allow more connections between my servers, so I know I can't do better with what I have now.

So I come back to my other question: Proxmox, VMware, Xen... same fight! What do companies use to manage their VMs in datacenters such as OVH / Scaleway / Hetzner?
These are the main European providers of dedicated servers with Proxmox pre-installed, as far as I know. Almost every business needs redundancy, so there is probably an alternative solution to "Ceph in Proxmox" for all these companies, right?
 
