Install Ceph Server on Proxmox VE (Video tutorial)

martin
Proxmox Staff Member
We just created a new tutorial for installing Ceph Jewel on Proxmox VE.

The Ceph Server integration has been available for three years now and is a widely used component for building a truly open-source, hyper-converged virtualization and storage setup that is highly scalable and without limits.

Video Tutorial
Install Ceph Server on Proxmox VE

Documentation
https://pve.proxmox.com/wiki/Ceph_Server
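For those who prefer the command line, the setup in the video boils down to just a few pveceph commands (adapt the version, network and device names to your environment; see the wiki above for the exact syntax on your release):

pveceph install -version jewel        # on every node
pveceph init --network 10.10.10.0/24  # once, using your own Ceph network
pveceph createmon                     # on each monitor node (at least three)
pveceph createosd /dev/sdX            # for every disk that should become an OSD
pveceph createpool mypool             # then add the pool as RBD storage in the GUI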

Any comments and feedback are very welcome!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Thank you for the excellent video.

Question: in the Debian Stretch KVM guest, which program is used for the benchmarks?

We used gnome-disk-utility (Disks):

> apt install gnome-disk-utility
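If you would rather benchmark from the command line, fio works well too; a minimal example (the job parameters below are just an illustration, tune them to your workload):

> apt install fio
> fio --name=randwrite --filename=/tmp/fio.test --size=1G --bs=4k --rw=randwrite --ioengine=libaio --direct=1 --runtime=30 --time_based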
 
That is an awesome video. Very easy to follow along, etc. I'll keep this in mind for a project I'm working on. Thanks
 
Hi,

I need your help. I'm getting very poor performance.
I have a 3-node Proxmox cluster built on HP DL580 G7 servers. Each server has a dual-port 10 Gbps NIC.
Each node has 4 x 600 GB 15K 2.5" SAS drives and 4 x 1 TB 7.2K SATA drives.

Each node has the following partitions (I'm using logical volumes as OSDs):

Node 1
100GB for Proxmox (7.2 K SATA)
2.63 TB (7.2K) OSD.3
1.63 TB (15K) OSD.0

Node 2
100GB for Proxmox (7.2 K SATA)
2.63 TB (7.2K) OSD.4
1.63 TB (15K) OSD.1

Node 3
100GB for Proxmox (7.2 K SATA)
2.63 TB (7.2K) OSD.5
1.63 TB (15K) OSD.2

I created two CRUSH rulesets and two pools:


Ruleset 0
osd.0, osd.1, osd.2
Pool ceph-performance uses ruleset 0


Ruleset 1
osd.3, osd.4, osd.5
Pool ceph-capacity uses ruleset 1
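
For reference, the pools were tied to the rulesets roughly like this (the rulesets themselves are defined in the CRUSH map; the PG counts below are only placeholders):

ceph osd pool create ceph-performance 128 128 replicated
ceph osd pool set ceph-performance crush_ruleset 0
ceph osd pool create ceph-capacity 128 128 replicated
ceph osd pool set ceph-capacity crush_ruleset 1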

Proxmox and Ceph versions:
pve-manager/4.4-1/eb2d6f1e (running kernel: 4.4.35-1-pve)
ceph version 0.94.10

Performance:
LXC container running on ceph-capacity storage:
[root@TEST01 /]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 15.9393 s, 67.4 MB/s

LXC container running on local 7.2K storage without Ceph:
[root@TEST02 /]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 2.12752 s, 505 MB/s

root@XXXXXXXX:/# rados -p ceph-capacity bench 10 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects
Object prefix: benchmark_data_xxxxxxx_65731
sec  Cur ops  started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
  0        0        0         0         0         0            -          0
  1       16       46        30   119.951       120     0.694944   0.389095
  2       16       76        60   119.959       120     0.547648   0.462089
  3       16       80        64   85.3068        16     0.610866   0.468057
  4       16       96        80   79.9765        64      1.83572   0.701601
  5       16      107        91   72.7796        44      1.25032   0.774318
  6       16      122       106   70.6472        60     0.799959    0.81822
  7       16      133       117   66.8386        44      1.51327     0.8709
  8       16      145       129   64.4822        48      1.11328   0.913094
  9       16      158       142   63.0938        52     0.683712   0.917917
 10       16      158       142   56.7846         0            -   0.917917

Total time run: 10.390136
Total writes made: 159
Write size: 4194304
Bandwidth (MB/sec): 61.2119
Stddev Bandwidth: 40.3764
Max bandwidth (MB/sec): 120
Min bandwidth (MB/sec): 0
Average IOPS: 15
Average Latency(s): 1.02432
Stddev Latency(s): 0.567672
Max latency(s): 2.57507
Min latency(s): 0.135008
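
If it helps, I can also collect per-OSD and network numbers; I was planning to use something like:

ceph tell osd.0 bench        # raw write speed of a single OSD
iperf -s                     # on one node
iperf -c <ip-of-first-node>  # on a second node, to verify the 10 Gbps link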

Any help will be really appreciated.

Thank You,
Dave
 
Hi ... Love your video ...
I upgraded to 4.4 and ran all updates, but I ended up with 4.4-1 while your video shows 4.4-2. Among other things, mine does not have the Win 10/2016 option under the OS tab when creating a new KVM guest. Any suggestions? Thank you :)
 
Hi,

as Udo already told the poster before you: please open a new thread.
 

Any insight on when erasure-coded pools and CephFS will become available through the Proxmox web interface?
 
There are no plans for this.

I'm curious: what would you use CephFS in the PVE GUI for?

We would love to use CephFS for backup storage. Currently, to back up VMs to Ceph, we have to run OpenMediaVault as a KVM guest with a huge raw disk on a Ceph pool and share it over NFS. This has a number of shortcomings:
- it's a single point of failure, even though Ceph isn't
- storage capacity is not easy to expand; we have to extend partitions and filesystems (while adding capacity to Ceph is trivial)
- hard NFS mounts can cause cluster communication problems and huge delays on reboots
- its performance is not optimal

If CephFS provided a file store on top of a Ceph pool, it would be great for backups, templates and ISO images, without the inherent problems and complexity of NFS.
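
As a stop-gap we could probably mount CephFS by hand (it needs an MDS, which Proxmox does not set up for you) and add the mount point as a plain directory storage; an untested sketch, with a placeholder monitor address:

mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# /etc/pve/storage.cfg
dir: cephfs-backup
        path /mnt/cephfs/backup
        content backup,iso,vztmpl

Native support in the GUI would still be much nicer.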
 
I think in general you'd agree Ceph should not be used to back up VMs that use the same Ceph storage.

If you replied to my post: I never said our VMs use the same Ceph storage. In fact they run on local storage, and we already use Ceph as backup storage that can withstand node failure. Anyway, this is not an argument against CephFS as backup storage, which would be very useful either way.
 
Hi all,
I have a Proxmox cluster with four nodes. On two of the nodes, when I try to create an OSD, the disks are not shown.
Do you have any idea?

Best Regards
Roberto
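
P.S. Could it be leftover partitions or old filesystem signatures on those disks? Before I wipe anything, is something along these lines the right approach (sdX is just a placeholder)?

lsblk                     # check for existing partitions on the disks
ceph-disk zap /dev/sdX    # wipe the disk (destroys all data!)
pveceph createosd /dev/sdX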
 
