Proxmox 4, LXC and GlusterFS

ianux

New Member
Oct 13, 2015
Hello,

I just installed Proxmox 4 and I have a few questions about a setup I thought of.

I have a few servers with a lot of disk space (hardware RAID, presented as one physical volume) and I am wondering how to share this space across the cluster, with HA and space optimization in mind. I thought about distributed replicated GlusterFS volumes with 2 replicas, like some sort of RAID 5. The idea is to have one partition dedicated to LXC raw images and the other for backup (snapshot) purposes. Is it safe to have 2 GlusterFS volumes on one disk?
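
For concreteness, the kind of volume I mean would be created roughly like this (hostnames and brick paths are just placeholders; with replica 2 and four bricks, Gluster builds two replica pairs and distributes files across them):

gluster volume create lxc-data replica 2 \
    srv1:/bricks/lxc srv2:/bricks/lxc \
    srv3:/bricks/lxc srv4:/bricks/lxc
gluster volume start lxc-data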

I have 2 NICs at 1 Gb/s, one for public traffic and the other used by the PVE cluster. Can I use the latter for GlusterFS? Also, what is the overhead of using GlusterFS? I think I have enough RAM and CPU power to run about 50 CTs among 5 or 6 servers, but what could be the bottleneck here? Bandwidth? Disk speed?

My servers have about 8 TB of disk space each and I only need 2 or 3 TB purely for production, backups excluded. What could be a good HA solution without wasting too much disk space? Ceph is only documented with 2 nodes, and DRBD9 is presented as a technology preview, so I think GlusterFS is the only distributed filesystem that could be used like some sort of SAN. Does anyone have good production experience with that kind of setup?
 
If you want to use shared or distributed storage you should have a dedicated network for it.
If cluster communication and storage share one network, you get quorum problems under load.
And you should have 10 Gbit or faster for this network, because the storage network has to transport at least as much data as the VMs write (with 2 replicas, every write crosses the network at least twice).
This is true for all distributed storage types, so the network is usually the bottleneck.
I would recommend Ceph for fully featured LXC (snapshots do not work on GlusterFS).
In the wiki it is documented with 3 nodes:
https://pve.proxmox.com/wiki/Ceph_Server
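
For illustration, a dedicated storage NIC could look like this in /etc/network/interfaces on each node (the interface name eth2 and the 10.10.10.0/24 subnet are assumptions, not from this thread):

auto eth2
iface eth2 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        # storage-only network: no gateway, keeps Ceph/Gluster traffic
        # away from cluster communication and public traffic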
 
You suggest placing LXC containers on a Ceph block device? Are you sure? :)
Or maybe you mean CephFS? :)
 
LXC on Ceph, with krbd providing the block device.
 
Make a filesystem on the block device and you have a filesystem you can mount wherever you like.
It works the same way an HDD or SSD works, only you have an RBD block device instead of a real one.
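
A minimal sketch of that, assuming the image is already mapped to /dev/rbd0 (device name and mount point are made up):

mkfs.ext4 /dev/rbd0              # ordinary filesystem on the RBD block device
mkdir -p /mnt/rbd-data
mount /dev/rbd0 /mnt/rbd-data    # from here it behaves like any local disk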
 
LXC containers cannot be live migrated.
 
Make a filesystem on the block device and you have a filesystem you can mount wherever you like.
It works the same way an HDD or SSD works, only you have an RBD block device instead of a real one.

I tried to implement this idea.
For testing I created an RBD image (size 40 GiB):

rbd create backup-store --size 40960 --pool ceph_stor

rbd ls -l ceph_stor

NAME            SIZE PARENT FMT PROT LOCK
backup-store  40960M          1
vm-100-disk-1  5120M          2
...

The image "backup-store" is present.

I'm trying to determine which disk corresponds to the created image via "fdisk -l", and I do not see any drive of this 40 GiB size among /dev/rbd*...

Did I misunderstand something? :confused:
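
The step that seems to be missing above is mapping: an RBD image only shows up as a /dev/rbd* device once it has been mapped on the host. A sketch, with the resulting device name as an assumption:

rbd map ceph_stor/backup-store   # prints the device it is mapped to, e.g. /dev/rbd1
rbd showmapped                   # lists pool, image and device for every mapped image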
 
Doesn't LXD offer some additional great features, and also support live migration?

I think we have more features - at least our toolkit is better integrated into the Proxmox cluster stack, and has better support
for different storage types, HA, fully featured network setup/integration ...
 
