Proxmox VE 5.0 beta2 released!

Our Ceph packages don't have redundant logrotate scripts ;) The ones in Debian and the ones provided by the Ceph project (which ours are based on) use different Debian packaging - one has the logrotate script in ceph-common, the other in ceph-base, and neither cares about the other's quirks. Since our pve-qemu-kvm package depends on ceph-common, you initially get the ceph-common package from Debian Stretch installed. When you then install the Ceph packages from our repository (e.g., via "pveceph install"), the old logrotate script is not cleaned up because it is a conffile. We'll include some kind of cleanup in our next Luminous packages (based on 12.0.3).
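
In the meantime, you can check which package owns the duplicate logrotate snippet and remove the stale one by hand. This is only a sketch - the file names under /etc/logrotate.d may differ, so verify what is actually installed first:
Code:
# list the ceph-related logrotate snippets and the packages that own them
ls /etc/logrotate.d/ | grep -i ceph
dpkg -S /etc/logrotate.d/ceph*

# remove the leftover conffile from the replaced Debian package
# (file name is an example - check the dpkg -S output first)
rm /etc/logrotate.d/ceph-common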

@fabian, will there be support for Bluestore in Ceph Luminous in Proxmox v5.0? Really looking forward to this feature, as it could improve HDD performance by up to 2x compared to the XFS filesystem.

Would also love to see a pveceph createmds command to add an MDS for CephFS. :)
 
@fabian, will there be support for Bluestore in Ceph Luminous in Proxmox v5.0? Really looking forward to this feature, as it could improve HDD performance by up to 2x compared to the XFS filesystem.

still experimental, so not exposed via pveceph. you can create bluestore OSDs manually though (by setting the experimental shoot-yourself-in-the-foot option and using "ceph-disk" directly)

Would also love to see a pveceph createmds command to add an MDS for CephFS. :)

maybe once cephfs has seen more stability improvements. you can already set it up manually if you want to play around with it (but I don't really see the use case except for shared ISO storage, given its current state)
 
Thx.

Thanks.

For Bluestore, do I just run ceph-disk and select the --bluestore option manually? It's the default OSD backend in Luminous.

And how do I deploy CephFS without ceph-deploy? I found no way to install ceph-deploy from the Proxmox Ceph repository. CephFS is already HA active-active in Luminous, and it would be great to store VM backups there along with ISO images. :)
 
For Bluestore, do I just run ceph-disk and select the --bluestore option manually? It's the default OSD backend in Luminous.

not sure where you got this from - you need to set a special option in the ceph configuration file to even allow starting bluestore OSDs, which is aptly called "enable experimental unrecoverable data corrupting features = bluestore". it's about as far away from being the default as possible, short of not being compiled in at all.
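
for reference, the manual route looks roughly like this - a sketch only, the device name is a placeholder and all of this is explicitly experimental:
Code:
# /etc/ceph/ceph.conf, [global] section - required before bluestore OSDs will start at all
enable experimental unrecoverable data corrupting features = bluestore

# prepare a bluestore OSD with ceph-disk (replace /dev/sdX with a real, empty disk)
ceph-disk prepare --bluestore /dev/sdX
# activation is usually triggered via udev; otherwise activate the data partition manually
ceph-disk activate /dev/sdX1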

And how do I deploy CephFS without ceph-deploy? I found no way to install ceph-deploy from the Proxmox Ceph repository. CephFS is already HA active-active in Luminous, and it would be great to store VM backups there along with ISO images. :)

basically do what ceph-deploy does (replace MDSINSTANCENAME with something of your choice):
Code:
# create the MDS data directory and hand it to the ceph user
mkdir -p /var/lib/ceph/mds/ceph-MDSINSTANCENAME
chown -R ceph:ceph /var/lib/ceph/mds/ceph-MDSINSTANCENAME
# create the MDS key via the bootstrap-mds keyring and store it in that directory
ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.MDSINSTANCENAME osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-MDSINSTANCENAME/keyring
# start the daemon now and enable it at boot
systemctl start ceph-mds@MDSINSTANCENAME
systemctl enable ceph-mds@MDSINSTANCENAME

rinse, repeat for as many MDS instances as you want.

then follow http://docs.ceph.com/docs/master/cephfs/createfs/ and http://docs.ceph.com/docs/master/cephfs/kernel/
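
roughly, what those two pages boil down to is something like this (pool names and PG counts are just examples, adjust them to your cluster):
Code:
# create data and metadata pools
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# create the filesystem on top of them
ceph fs new cephfs cephfs_metadata cephfs_data

# mount with the kernel client (needs a client key with suitable caps)
mount -t ceph MON-IP:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret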

but again, I am not sure where you got "CephFS is already HA active-active in Luminous" - while there have been lots of fixes in that area, multimds is explicitly still marked as experimental:
http://docs.ceph.com/docs/luminous/cephfs/best-practices/

For the best chance of a happy healthy filesystem, use a single active MDS and do not use snapshots. Both of these are the default.
http://docs.ceph.com/docs/luminous/cephfs/experimental-features/

Multiple active MDSes are generally stable under trivial workloads, but often break in the presence of any failure, and do not have enough testing to offer any stability guarantees. If a filesystem with multiple active MDSes does experience failure, it will require (generally extensive) manual intervention. There are serious known bugs.

similar limitations apply to running multiple file systems in one cluster, as well as snapshotting directories on cephfs.
 
Thanks for the detailed walkthrough @fabian, I'll give it a try this week.

I got the information from Sage's update at the OpenStack Summit in Boston three weeks ago. The slides say Bluestore is default and stable (slide 4), and that CephFS finally has multiple active daemons (slide 21).

Reference:
https://www.slideshare.net/sageweil1/community-update-at-openstack-summit-boston
 
we'll see what the RC and final release of Luminous will bring - but the current state as of 12.0.3 (which is still a dev release after all) is as I described ;)
 

Awesome! Perhaps they encountered some bugs and postponed it. I'm also interested in the ordered writeback cache for RBD, which should improve write latency (and hopefully write throughput as well) on Gigabit network interfaces, coming in Ceph Mimic this year.
 
Well, regarding my problem: it turned out to be an issue with the combination of 3PAR 8200, multipath, and LVM. I disabled discards in the LVM layer and the problems stopped.
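
(For reference: "disabled discards in the LVM layer" most likely means the issue_discards option in lvm.conf - the exact change isn't spelled out here, so treat this snippet as a guess.)
Code:
# /etc/lvm/lvm.conf - devices section
devices {
    # do not issue discards to the underlying storage when LVs are removed or shrunk
    issue_discards = 0
}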

Anyway, I now have a different issue:
Failed to restart pvedaemon.service: Unit pvedaemon.service is masked.
Failed to restart pveproxy.service: Unit pveproxy.service is masked.
successfully added node 'prox-prod-dc1-srv05' to cluster.

The web interface stops working after the first restart, and the node still shows a red cross when viewed from the Proxmox 4.4 cluster.
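
(For anyone who runs into the same symptom: the masked state itself can be inspected and cleared with plain systemctl. This is a generic sketch, not a confirmed fix for the underlying upgrade problem.)
Code:
# masked units are symlinked to /dev/null - check where the mask comes from
systemctl status pvedaemon.service pveproxy.service
ls -l /etc/systemd/system/pvedaemon.service /etc/systemd/system/pveproxy.service

# remove the mask and try restarting again
systemctl unmask pvedaemon.service pveproxy.service
systemctl restart pvedaemon.service pveproxy.service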
 
Is there really no support for running LXC containers on GlusterFS? Is this feature forthcoming?
 
mhmm the problem here is that there is no kernel driver for mounting glusterfs volumes, and using the fuse client is very slow..
 
In my tests, the fuse client writes simultaneously to two gluster nodes (distributed, replica 2), so the total write speed is limited to half of the bandwidth minus the overhead. With two bonded network connections on the Proxmox host, it seems to perform writes with acceptable performance.

Update: That is the behavior when running VMs on GlusterFS from Proxmox. I have no experience testing LXC with GlusterFS.
 
yes, but VMs use the glusterfs driver from qemu, not the fuse client
 
Hmm. You're right. The fuse client performed only 61% as fast as the qemu driver in a test I just ran. Could we not add support for the fuse client with a disclaimer about performance? I am only one use case, but my LXC containers host very lightweight apps that do not require high-performing disks.
 
I suppose there's no need for code changes. I found a workaround that allows me to use the fuse client and works with LXC with only a simple config change. I guess that solves my specific needs (LXC on GlusterFS for redundancy purposes, not performance).
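
(The workaround isn't spelled out above; a common approach is to mount the volume with the fuse client on the host and pass a directory into the container as a bind mount. The volume name, paths, and container ID below are made up for illustration.)
Code:
# on the Proxmox host: mount the gluster volume via the fuse client
mkdir -p /mnt/gluster
mount -t glusterfs gluster1:/gv0 /mnt/gluster

# bind-mount a subdirectory into container 101 at /data
pct set 101 -mp0 /mnt/gluster/ct101,mp=/data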
 
Does anybody have an idea of when PVE 5.0 will be released? Now that there's a firm date for Stretch, how long will it take to wrap PVE around the final release?

I'm waiting to build install recipes...

+1. Looking for the plan, as Stretch will be released later today.
 
Also excited here for PVE 5.0 and Debian 9. We also need a template for Debian 9 LXC containers. I think if we had the new Debian 9 template we could at least start creating new containers and then upgrade to PVE 5.0 when it is ready.
 