Hi.
I have multiple proxmox 7 clusters for different needs.
On a "staging" cluster (for staging / preproduction infrastructure) I don't have any HA. But on the production cluster, I configure two ceph storages (SSD and HDD).
Developers have always pointed out to me that deploying code is significantly slower in production than in pre-production. So today I benchmarked a number of setups: dedicated servers, LXC containers with and without Ceph, on Ceph HDD and on Ceph NVMe...
For example, on the production cluster I have 3 Proxmox nodes. Each node is strictly identical in hardware configuration.
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda (HDD RAID1) 8:0 0 5,5T 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 512M 0 part /boot
├─sda3 8:3 0 1G 0 part
└─sda4 8:4 0 5,5T 0 part
└─system--lnwic-root 253:2 0 5,5T 0 lvm /
sdb (same HDD as the 2 used for sda's RAID) 8:16 0 5,5T 0 disk
└─ceph--96b52363--2628--4fc3--bd2a--379305739b7f-osd--block--865c0e3e--47ec--4f41--a186--e9b0234bd29e
253:1 0 5,5T 0 lvm
sdc (Nvme) 8:32 0 953,3G 0 disk
└─ceph--98afa5f9--e075--4ee5--964c--4b5125bd645c-osd--block--7f397012--0efb--41ae--a5c1--84657eba72ff
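So each node contributes one HDD OSD and one NVMe OSD. To double-check which device class and CRUSH rule back each pool, commands like these should show the mapping (I am not pasting the full output here):
Code:
ceph osd df tree
ceph osd crush rule ls
ceph osd pool ls detail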
Code:
dd if=/dev/zero of=/tmp/BenchFile bs=1G count=3 conv=fdatasync
Repeated several times on the same system, it always returns the same order of magnitude. Comparing the different types of servers and storage, however, shows big differences in write speed. Here is a summary of my measurements:
On the Proxmox host, writing to the RAID array (sda4): 250 MB/s
On an LXC container stored on Ceph NVMe: 123 MB/s
On an LXC container stored on Ceph HDD: 95 MB/s
On another server with an NVMe disk but no Ceph (and no virtualization): 415 MB/s
On my personal computer with NVMe (Intel NUC): 1600 MB/s . . .
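One thing I realize is that dd writes a single big sequential stream, so it probably hides Ceph's per-write network round trips. Something like rados bench against the pool and fio with direct / sync writes inside the container should give a more realistic picture (the pool name and file paths below are just examples from my setup):
Code:
# Raw write throughput of the pool, run directly on a Proxmox node (pool name is an example):
rados bench -p ceph-nvme 30 write --no-cleanup
rados -p ceph-nvme cleanup

# Inside the LXC container: sequential writes with direct I/O, then small fsync'd writes (latency-bound):
fio --name=seqwrite --rw=write --bs=4M --size=3G --direct=1 --ioengine=libaio --iodepth=16 --filename=/tmp/fio-seq
fio --name=syncwrite --rw=write --bs=4k --size=512M --fsync=1 --filename=/tmp/fio-sync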
So Ceph, even when used on NVMe disks, seems slow. The developers were right (I never doubted them).
Is it possible that my configuration is bad, or is this just down to the network link and the protocol?
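To at least rule out the raw network link between the nodes, I can measure it with iperf3 (the IP below is a placeholder for another node's address on the Ceph network):
Code:
# On node 1:
iperf3 -s
# On node 2:
iperf3 -c 10.10.10.1 -t 30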
What are the other options on Scaleway / OVH to run HA services on Proxmox with something more efficient than Ceph? Block storage maybe? How did you do it? Are the write speeds on your Ceph clusters comparable to mine?
Thank you