Ceph storage and VMs on the same Proxmox hosts?

openaspace
Hi.
Looking at these video guides:
https://youtu.be/GgliWaOfvsA
https://youtu.be/jFFLINtNnXs
the user places the Ceph storage and the VMs on the same hosts, with Ceph distributed across 3 hosts.

My questions are:
1) When there is a migration, is the transfer of a VM fast because the data is already replicated to the other hosts by Ceph?
3) Is it possible to start with this configuration on only one host and add more Ceph hosts in the future? :rolleyes: Yes, no quorum will exist, but will the storage replication still work correctly, migrating the VMs manually?
4) What happens if the storage size on each host is different, e.g. 4 HDs of 4 TB, 4 HDs of 3 TB and 4 HDs of 6 TB? Will Ceph build the cluster using the smallest HD size of 3 TB?
5) Why does the guide not use ZFS, but LVM without RAID, for the main Proxmox host OS?

Thanks for the support.
 
1) Yes. Since the storage does not live on any particular node, Proxmox only has to sync the RAM across, and the VM moves with just a small blip. How long it takes depends on how much of the RAM is changing during the migration (example below).
3) It is possible, but it will make growing to extra nodes much harder later, as you need to adjust the monmap specifically for it to run on only one host. It will work, but it is highly discouraged outside of dev/test. A minimum of 3 hosts from the start, using 3-way replication, is your best bet (see the ceph.conf sketch below).
4) Ceph uses a weight to determine how much data a disk receives: a 6 TB disk will get double the data of a 3 TB disk, so all disks technically stay around the same utilisation percentage. However, this does not always work out and can also cause performance issues, where the single 6 TB disk receives double the I/O while the 3 TB disk only gets half. It is again recommended to keep the disk sizes/specs as close as possible (see the reweight example below).
5) ZFS has high RAM requirements and Ceph also needs a lot of RAM. The Proxmox OS disk does very little I/O, since all the I/O goes to the Ceph OSDs, so ZFS there is a waste of RAM/resources that would be better spent on Ceph. Some people use two small disks in RAID 1, network boot, or even run off a small USB / SATA DOM, keeping all free SATA/SAS ports for Ceph OSD disks.
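To illustrate those points (all IDs, names and values below are only examples for a default Proxmox/Ceph setup; adjust them to your own environment):

For 1), with the disks on shared Ceph storage a live migration only has to copy the RAM:

Code:
qm migrate 100 node2 --online   # VM 100 moves to node2; only RAM is transferred

For 3), a single-host cluster only works if you tell Ceph to replicate across OSDs instead of across hosts, with something like this in /etc/pve/ceph.conf before any pools are created:

Code:
[global]
    osd_pool_default_size = 2       # replicas per object instead of the usual 3
    osd_pool_default_min_size = 1
    osd_crush_chooseleaf_type = 0   # 0 = place replicas per OSD, not per host

For 4), you can see and adjust how much data each disk is expected to hold via its CRUSH weight (by default roughly its size in TiB):

Code:
ceph osd df tree                    # utilisation and weight per OSD
ceph osd crush reweight osd.3 2.7   # example: lower the weight of osd.3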

Really, thank you, "sg90"!
I will try to work with this Xeon with a spare SSD disk.
Would it be better to add a USB key for Proxmox and use the SSD as a Ceph cache, or use the spare SSD for Proxmox itself?

...and finally... I will run a single VM with 16 GB of RAM and an attached raw disk of 3 TB (the full available disk space)...
Would there be any negative contraindication to this local storage approach?


[Attachment: Server auction - Hetzner Online GmbH (2).png]
 
So to confirm, you're looking at running Ceph on a single host with 2 OSDs? That is really not possible; as a minimum you want 3 OSDs for 3-way replication.

But yes, run Proxmox on the SSD and leave the two disks for OSDs. Still, you're really better off just using one of their current lines that has 4 disks, so you can at least run 3-way replication properly on a single host (or asking support whether the auction server you're looking at supports an extra disk).
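If you do go that way, creating the OSDs on the two data disks is one command per disk (the device names here are just examples; check them with lsblk first):

Code:
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc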

Is this a dev/test environment for you to learn and gain experience with Ceph/Proxmox?
 
Hi,
no, I will work with 3 hosts, each with an SSD for the Proxmox OS and 2 disks of 3 TB each for Ceph per host.
Production environment, not a test.

I already have two spare hosts working with ZFS sync for uptime in case the main server fails. I don't want to keep losing time setting up spare servers, plus the hours of work spent maintaining the current system.
 
Hi,
no, I will work with 3 hosts, each with an SSD for the Proxmox OS and 2 disks of 3 TB each for Ceph per host.

Then this will work. I am guessing you'll be using their vSwitch? This means your Ceph network will be limited to 1 Gbps, so don't expect crazy performance.
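(For reference, the Hetzner vSwitch ends up as a VLAN interface with a reduced MTU of 1400, which you would then use as the Ceph cluster network. A rough /etc/network/interfaces sketch, where the NIC name, VLAN ID and address are assumptions for your setup:)

Code:
auto enp0s31f6.4000
iface enp0s31f6.4000 inet static
    address 10.10.10.1/24
    mtu 1400
    vlan-raw-device enp0s31f6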
 
Then this will work. I am guessing you'll be using their vSwitch? This means your Ceph network will be limited to 1 Gbps, so don't expect crazy performance.
Yes, I know... but Hetzner only offers a 1 Gbps vSwitch, and it is the only company with competitive prices and flat unmetered public traffic.

My VM will be sending data to the public network all the time at a minimum of 500 Mbps, with 1 Gbps peaks... and only Hetzner allows this at this price...

It's an experimental file sharing system, and I need to keep costs low in case it doesn't make money. For 150 €/month I can experiment and see whether the business works without losing my blood :)
 
Yes, I know... but Hetzner only offers a 1 Gbps vSwitch, and it is the only company with competitive prices and flat unmetered public traffic.

My VM will be sending data to the public network all the time at a minimum of 500 Mbps, with 1 Gbps peaks... and only Hetzner allows this at this price...

Just making you aware: hopefully you're not expecting that external 500 Mbps stream to be served from the Ceph filesystem, as with 1 Gbps total you won't have the network for it.

I would suggest hosting the VM OS on the SSD (with external backups, since it is a single SSD) and only storing slower / larger files on Ceph. Ideally you'd want RAID 1 SSDs etc., but I am guessing you're doing this on a budget.
 
Fundamentally I need redundancy and big storage space; my files are a minimum of 240 GB each as single files.
Crypto ciphers are slow... and after some tests, the maximum speed I can reach decoding PGP-encrypted files is only 1 Gbps.

Therefore, if I make money, I will work with groups of Ceph clusters, each of 3 hosts, behind a main 10 Gbps load balancer that only manages the traffic without doing any CPU work on the data.

Fundamentally, 1 Gbps out to the public network for each cluster will be enough, due to the limit of crypto cipher speed.

One 10 Gbps front load balancer server, and 3 separate clusters of 3 hosts each, for a total of 3 Gbps upload speed and 10 servers.
 
Really, thank you.
For 10 € more per server, there are offers with 2 SSDs plus 2 enterprise HDs.

[Attachment 13684]

That would help; it would give you some protection against a single SSD failure (backups are still a good idea for any files you really care about). As long as the VM OS disk requirements fit within 240 GB, along with 20-30 GB for Proxmox and other OS files, the above 3 servers look fine.
 
Fundamentally I need redundancy and big storage space; my files are a minimum of 240 GB each as single files.
Crypto ciphers are slow... and after some tests, the maximum speed I can reach decoding PGP-encrypted files is only 1 Gbps.

Therefore, if I make money, I will work with groups of Ceph clusters, each of 3 hosts, behind a main 10 Gbps load balancer that only manages the traffic without doing any CPU work on the data.

Fundamentally, 1 Gbps out to the public network for each cluster will be enough, due to the limit of crypto cipher speed.

Each 3-server cluster using 6 × 3 TB disks will give you a maximum raw cluster size of 18 TB and about 6 TB usable with 3-way replication. Ideally, though, I'd say keep it under 4 TB used, as Ceph should never be run near full.
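A quick way to keep an eye on that once the cluster is running (standard Ceph commands; the ratios shown are the usual defaults):

Code:
ceph df                       # raw capacity vs. used, plus per-pool usage
ceph osd df tree              # utilisation per OSD
ceph osd dump | grep ratio    # nearfull (0.85) and full (0.95) thresholds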
 
Thank you, "sg90"!
One last doubt: are there any contraindications to keeping the VM on the SSD disk and attaching a second raw disk from the Ceph storage?
 
Thank you, "sg90"!
One last doubt: are there any contraindications to keeping the VM on the SSD disk and attaching a second raw disk from the Ceph storage?

Yeah, no problems there; you can add disks from multiple storage pools/locations in Proxmox to the same VM.
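For example, something along these lines (the VM ID, storage IDs and sizes are only placeholders for your setup):

Code:
qm set 100 --scsi0 local-lvm:32     # OS disk on the SSD-backed local storage
qm set 100 --scsi1 ceph-rbd:3000    # ~3 TB raw data disk on the Ceph pool

When the size is given like that, Proxmox allocates a new volume of that many GB on the chosen storage.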
 
Give me your PayPal email, I will offer you a beer for the new year! :)

No need :) donate to a local charity or something. If you need any help with the setup, my Skype is in my profile; feel free to fire any questions at me, glad to help when I can.
 
Yeah, no problems there; you can add disks from multiple storage pools/locations in Proxmox to the same VM.
A further reflection... if I place the main VM on the SSD RAID 1, it will be managed as ZFS and will use the Proxmox replication system, not Ceph...
unless I also create a Ceph pool on the SSDs...?
 
A further reflection... if I place the main VM on the SSD RAID 1, it will be managed as ZFS and will use the Proxmox replication system, not Ceph...
unless I also create a Ceph pool on the SSDs...?

I'd suggest 100% not to use ZFS here and just use mdadm RAID 1, which can be set up using Hetzner's installimage.
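For reference, the software RAID part of an installimage config looks roughly like this (a sketch only; the drive names and partition layout are assumptions, check the current installimage documentation):

Code:
SWRAID 1
SWRAIDLEVEL 1
DRIVE1 /dev/sda
DRIVE2 /dev/sdb
PART /boot ext4 1G
PART swap  swap 8G
PART /     ext4 all

Afterwards you can verify the array with cat /proc/mdstat.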
 
