LXC on network storage - recommendations?

encore

Well-Known Member
May 4, 2018
Hi guys,

we've been playing around with Proxmox for a few weeks and want to move our entire infrastructure (3,500 VPS) away from SolusVM. Proxmox seems very flexible to us, but so far we have not found a satisfactory shared-storage solution for LXC.

So far we have tried creating a zpool, setting up a zvol on it, and putting an ext4 filesystem on top. That volume is then exported via iSCSI and mounted on the individual nodes as a directory. After just a few hours the first ext4 filesystem errors appeared and had to be repaired. We assume the problem is concurrent access from different nodes to the same ext4 filesystem; ext4 does not seem to be designed for this.

So we are still looking for a fast and stable solution for central network storage from which LXC containers can be hosted by several nodes. Thin provisioning is important; the full booked disk space should not be allocated immediately.
What do you recommend?
NFS?

Thank you!
Marvin
 
If you want a useful answer, you'll have to define the problem more granularly:

1. How many containers, and what resources will they need (RAM, CPU, disk space, etc.)?
2. How many nodes are you planning to spread them across, and with what fault tolerance?
3. How fast does the storage need to be (in IOPS)?
4. Budget.
 

Thanks for your fast reply, aleskysilk.
My request was more a general one about LXC containers on shared storage. I think there are not too many options compared with KVM VMs on shared storage. By now I am just searching for a working solution, since an ext4 partition shared by several nodes does not work.

We'll migrate about 1,300 LXC containers to the new Proxmox cluster, each with 2-64 GB maximum memory, 2-8 booked CPU cores, and 25-200 GB of SSD space. Only ~10% of the booked space is actually used, and the same goes for memory, CPU, etc.

We're using dual E5-2690 v2 blade center servers with 256 GB DDR3 ECC memory each, probably about 10 of these nodes.
The storage will consist only of SSDs. I like RAID 10 for speed plus a bit of fault tolerance; we are using it at the moment and have had good experiences with it.

I'm thinking of a solution where the storage pool can easily be extended with new SSDs, but several separate storages added over time would also not be a problem.
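
To put the thin-provisioning point in numbers, here is a rough back-of-the-envelope in Python. The average booked size per container and the growth headroom are assumptions for illustration, not measured values:

Code:
# Rough sizing estimate for ~1300 thin-provisioned LXC containers.
# The averages below are assumptions, not measured values.
containers = 1300
avg_booked_gb = 100        # assumed average of the 25-200 GB booked per CT
used_fraction = 0.10       # only ~10% of the booked space is really used
growth_headroom = 2.0      # assumed safety factor for growth and snapshots

booked_tb = containers * avg_booked_gb / 1000
used_tb = booked_tb * used_fraction
planned_tb = used_tb * growth_headroom

print(f"booked (thin-provisioned): {booked_tb:.0f} TB")   # 130 TB
print(f"actually used today:       {used_tb:.0f} TB")     # 13 TB
print(f"planned usable capacity:   {planned_tb:.0f} TB")  # 26 TB

With those assumptions the booked space is around 130 TB, but only ~13 TB is actually in use, so something in the range of 25-30 TB of usable, thin-provisioned storage should cover it with headroom.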
 
If I'm reading you correctly, you want ~26 TB of usable space. Why are you not considering Ceph? Do you not have disk slots available on your nodes, or is there some other reason?
Thanks for the hint. To be honest, I have never worked with Ceph before; I've just been reading up on it. If I understand it correctly, each CT node needs to run as a storage server too? If not, we would still need at least 3 separate storage servers, right?
The blade center servers on which the containers will run don't have any free disk slots. They have only 2 slots, which hold an SSD RAID 1 for the OS (Proxmox). So we need to store all CTs on an external storage server, which has 16 disk slots.
 

Gotcha. Well, then you'll need a central storage device that can provide adequate performance. All we can do here is suggest solutions in general (e.g. an iSCSI SAN); the specifics (performance/availability/cost/supportability) are really something you'll need to architect for your own needs and constraints.
 

Wouldn't it be possible to set up a separate Ceph cluster with 3 storage nodes, plus the 10 blade center nodes where the CTs will run (but not be stored)? For example, each Ceph node gets 8x 1 TB SSDs (plus the OS disks), and I add this Ceph cluster to Proxmox as storage. Would it be possible to store all of the blade centers' CTs on the Ceph cluster, and how much disk space would be usable in this example configuration?
If that works, I could store both LXC containers and KVM virtual machines on that Ceph cluster, couldn't I? And would thin provisioning work for both?
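
My own rough estimate, if I understand Ceph's default 3-way replication and the usual ~85% near-full guideline correctly, would be something like this (Python, all values just the example figures from above):

Code:
# Usable-capacity estimate for the example 3-node Ceph cluster above.
# Assumes the default replicated pool size of 3 and the ~85% near-full guideline.
nodes = 3
ssds_per_node = 8
ssd_size_tb = 1.0
replicas = 3               # Ceph default pool size
nearfull_ratio = 0.85      # don't plan to fill OSDs much past this

raw_tb = nodes * ssds_per_node * ssd_size_tb
usable_tb = raw_tb / replicas
practical_tb = usable_tb * nearfull_ratio

print(f"raw SSD capacity:           {raw_tb:.0f} TB")      # 24 TB
print(f"usable with 3x replication: {usable_tb:.0f} TB")   # 8 TB
print(f"practical planning limit:   {practical_tb:.1f} TB")  # ~6.8 TB

So 8x 1 TB per node would only yield roughly 8 TB usable; reaching the ~26 TB target would need more or larger SSDs (or more nodes). RBD images are thin-provisioned, so both KVM disks and LXC volumes consume space only as data is actually written.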
 
I am not answering #7, just wanted to say that we use Ceph and it works great now, but it did not with inferior SSDs. We followed the PVE wiki suggestions and now have a stable 24-SSD Ceph setup across 3 nodes.
 
Also still not answering #7: this book has great advice on Ceph. From the PVE help docs: [Ahmed16] Wasim Ahmed. Mastering Proxmox - Second Edition. Packt Publishing, 2016. ISBN 978-1785888243

There may be a 3rd edition; just get the latest.
 
- What exactly does "it did not work great with inferior SSDs" mean? What issues did you have?
- Are your VPS running on the Ceph cluster nodes as well, or is the cluster only used to store them?
 
Some of the SSDs were showing high wear-out rates, and some had errors per the smartd reports. Those disks had the same issue with ZFS RAID 10. They were Intel SSDs that are otherwise great for a ZFS-mirrored OS install.

We only use Ceph for KVM and LXC. Note: for the Ceph storage, use KRBD. The Help button should show the best way to set it up for KVM and LXC, though I have not checked.
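
For reference, the relevant entry in /etc/pve/storage.cfg looks roughly like the sketch below; the storage ID, pool name, and monitor addresses are placeholders for your own values, and content images,rootdir lets the same storage hold both KVM disks and LXC root volumes:

Code:
# sketch only - storage ID, pool, and monitor IPs are placeholders
rbd: ceph-ssd
        pool lxc-and-vm
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        username admin
        content images,rootdir
        krbd 1

For an external Ceph cluster the keyring also has to be copied to /etc/pve/priv/ceph/<storage id>.keyring on the PVE nodes.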
 
What model Intel was this?
 

After reading https://pve.proxmox.com/wiki/Ceph_Server#Recommended_hardware we are using these: 480 GB INTEL SSDSC2BB480G7. After 18 months wear-out is less than 2%, though we do not do a lot of data I/O. I think those are referred to as Intel SSD DC S3520 on the PVE wiki.

These are the drives I used for ZFS, then Ceph, and now for a PVE rpool [and for us they are excellent for that]; this is the model that did not work out: INTEL SSDSC2BF480A5. I'll get the other model number later if I can get back to this.
 
