Kevin Saruwatari

Dec 23, 2015
A previous post of mine was taken down for mentioning a competitor, so I won't name them here. Apologies if this is the wrong place for a feature request; there doesn't seem to be a dedicated place in the forums to ask.

In this other hypervisor, storage lived locally on the nodes (as opposed to on a central backend), and data was striped with parity node to node across the network, much like RAID 5. This enables high availability and live migration, and adding a new node scales compute and storage in the cluster at the same time.
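The RAID 5-style idea can be sketched in a few lines: each node holds one data chunk, a parity node holds the XOR of all chunks, and any single lost chunk can be rebuilt from the survivors. This is just a toy illustration of the principle, not any particular product's implementation:

```python
# Toy RAID 5-style parity across "nodes": parity = XOR of all data chunks.
from functools import reduce

def make_parity(chunks):
    """XOR equal-sized data chunks together to form a parity chunk."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def recover(surviving_chunks, parity):
    """Rebuild the single missing chunk: XOR the survivors with the parity."""
    return make_parity(list(surviving_chunks) + [parity])

data = [b"node", b"wise", b"data"]   # equal-sized chunks on three nodes
parity = make_parity(data)           # stored on a fourth node
# "node 1" dies; rebuild its chunk from the other two plus parity:
rebuilt = recover([data[0], data[2]], parity)
assert rebuilt == b"wise"
```

The XOR trick means you only pay one extra chunk of capacity per stripe instead of a full copy, which is why RAID 5-like schemes are more space-efficient than plain mirroring.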

The other nice feature is that each VM is allocated storage space, but the hypervisor decides whether the data lands on SSD or platter. It keeps frequently accessed data on SSDs and moves older data to platter (or back again if it becomes hot), all automatically; much like a hybrid drive, I suppose.

I would like to see storage features like this in Proxmox, or at least see a debate on the pros and cons of the approach.

Kev
 
Guess I should state that I'm a lightweight in this world. I play with clusters mostly for educational recreation.

I don't have experience with Ceph or Gluster, but from a Proxmox standpoint, aren't they used to set up a standalone backend storage infrastructure that gets mounted into the nodes as shared storage?

Or do you run them on a ProxMox node to make the node's local storage a component of a storage cluster?

The former is my understanding of how they would be used; the latter is closer to what I am requesting... plus quite a bit more. Of course, being a lightweight, maybe I am unaware of a way to do what I'm asking for.

Also, can't ZFS do dedup?
 
With Ceph on Proxmox you typically allocate only the OS SSD to Proxmox itself; the remaining SSDs/HDDs become OSDs for a standalone Ceph node or Ceph cluster. Using replicated pools you can do RAID 1 with X copies; using erasure-coded pools you can split your data into data chunks and parity chunks. Using a custom location hook you can separate SSDs from HDDs. With that you can create SSD and HDD pools, and even set up SSD cache pools in front of HDD pools (which basically decide automatically which data is kept on SSD and which on HDD). You then plug those pools into Proxmox much like you would NFS, using (k)rbd.
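To give a flavour of what that setup involves, here is a rough sketch using a recent Ceph release, where CRUSH device classes replace the custom location hook mentioned above. Pool names, PG counts, and the k/m split are made-up examples, not recommendations:

```shell
# CRUSH rules pinned to a device class (Luminous or newer)
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd crush rule create-replicated hdd-rule default host hdd

# Replicated pool keeping 3 copies, restricted to SSDs
ceph osd pool create ssd-pool 64 64 replicated ssd-rule
ceph osd pool set ssd-pool size 3

# Erasure-coded pool: 4 data chunks + 2 parity chunks, restricted to HDDs
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-device-class=hdd
ceph osd pool create hdd-pool 64 64 erasure ec42

# Tag the pools for RBD use before pointing Proxmox at them
ceph osd pool application enable ssd-pool rbd
```

These commands assume a working Ceph cluster with OSDs already labelled by device class; on the 2015-era releases being discussed in this thread you would need the location-hook approach instead.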

AFAIK Gluster (we only did a short eval before settling on Ceph) works like RAID 1 during writes, but like RAID 0 during reads.


That is a broad oversimplification in both cases, though.
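For reference, a minimal two-way replicated Gluster volume looks something like the following; the hostnames and brick paths are placeholders:

```shell
# Two-way replicated volume across two nodes: every file is written to
# both bricks (RAID 1-like), while reads can be served from either replica.
gluster volume create gv0 replica 2 node1:/data/brick1 node2:/data/brick1
gluster volume start gv0

# Mount the volume on a client via the FUSE driver
mount -t glusterfs node1:/gv0 /mnt/gv0
```

With `replica 2` there is no parity, only full copies, so usable capacity is half of the raw brick capacity.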

Yes, ZFS does deduplication if you enable it, though it uses RAM for the dedup table.
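Enabling it is a one-liner, but plan for the dedup table to eat RAM; a commonly cited rule of thumb is on the order of 5 GB of RAM per TB of deduplicated data. The pool and dataset names here are examples:

```shell
# Turn on deduplication for one dataset (affects new writes only)
zfs set dedup=on tank/vmstore

# Check the achieved dedup ratio at the pool level
zpool list -o name,size,alloc,dedup tank
```

If the dedup table no longer fits in RAM (or the ARC), write performance degrades badly, which is why dedup is usually recommended only for datasets with genuinely high duplication, such as many near-identical VM images.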
 
Interesting. I will have a closer look at Ceph. I recall looking into it when Proxmox talked about it in the release notes, but I don't remember why I didn't act on it... probably didn't understand it!

Should be easy to try. I always put Proxmox on a little 60 GB SSD in a swap cartridge and mount my data drives. That makes it easy to Clonezilla it before I test something.

Thanks!
 

Just be warned, I oversimplified this heavily. There are probably 40-100 hours of reading the Ceph documentation and Sébastien Han's blog, consuming all of Wido den Hollander's Ceph mailing list comments, and labbing before the example above becomes "feasible" :p
 
That's fine. Like I mentioned, I'm into this more as a recreational education exercise, so I don't mind putting in the time.

I went back and re-read the Proxmox ceph-server wiki and recall why I didn't pursue this: I couldn't afford to set up the 10 Gb backend network. But that must have been almost 3 years ago. I should look into switches, adapters, etc. and see if I can play with it now!

By the way, is InfiniBand an option for the Ceph network? I have no experience with it either, but I was browsing around on eBay a while ago and noticed the equipment looked pretty affordable.
 
Dual/quad 1 Gb links are sufficient for small numbers of OSDs and small numbers of hosts.

Heck, for labbing, even a Windows computer running VirtualBox with 3 Proxmox nodes and multiple vdisks on a single SSD and a single HDD works. You just can't compare the benchmarks with anything useful :p

Anyway, Ceph does not do deduplication (yet).
 
Interesting, I can definitely afford to do it at 1 Gb.

But is InfiniBand a good (trouble-free) option? eBay has 40 Gb switches for under $400 (used) and dual-port NICs for under $60. If this is a good setup, I'd be willing to try it on my little cluster.
 

InfiniBand support is planned, but not yet available. The Mellanox team is working on it (xio-messenger).
Maybe for 2016, but I'm not sure.
 