Ceph (or CephFS) for vzdump backup storage?

gkovacs

Well-Known Member
Dec 22, 2008
Budapest, Hungary
We have a small Ceph Hammer cluster (only a few monitors and fewer than 10 OSDs), yet it proves very useful for low-IO guest storage. Our Ceph cluster runs on our Proxmox nodes but has its own separate gigabit LAN, and performance is adequate for our needs.

We would like to use it as backup storage as well, but when a VirtIO disk on a Ceph pool is shared over NFS (via OpenMediaVault running as a KVM guest), read/write performance becomes extremely low, making it unusable for vzdump backups.

Is it possible to use a Hammer / Jewel Ceph pool for vzdump backups without going through a VM?

or

Has anyone tried mounting CephFS as directory storage for Proxmox as it's already included in Jewel?
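To make the question concrete, this is roughly what mounting CephFS as plain directory storage would look like. A hedged sketch only: the monitor address, secret file path, mount point, and storage name are all placeholders, and it assumes the kernel CephFS client available in Jewel-era kernels.

```shell
# Mount CephFS on each Proxmox node (placeholder monitor/key/paths):
mkdir -p /mnt/cephfs
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/cephfs.secret

# Register the mount point as an ordinary directory storage for backups:
pvesm add dir cephfs-backup --path /mnt/cephfs --content backup
```

With that in place, vzdump could target `cephfs-backup` like any other directory storage, with no NFS VM in the data path.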
 

wolfgang

Proxmox Retired Staff
Retired Staff
Oct 1, 2014

No, but I would wait until the next release, because there is still a bug in it.

gkovacs

Well-Known Member
Dec 22, 2008
Budapest, Hungary
No, but I would wait until the next release, because there is still a bug in it.

What bug? According to the Jewel release notes, CephFS is declared stable and the necessary repair and recovery tools are there! There is even a volume manager included that could be used to create the Proxmox storage plugin!

CephFS:
This is the first release in which CephFS is declared stable! Several features are disabled by default, including snapshots and multiple active MDS servers.
The repair and disaster recovery tools are now feature-complete.
A new cephfs-volume-manager module is included that provides a high-level interface for creating “shares” for OpenStack Manila and similar projects.
There is now experimental support for multiple CephFS file systems within a single cluster.

See here:
http://ceph.com/releases/v10-2-0-jewel-released/
 

wolfgang

Proxmox Retired Staff
Retired Staff
Oct 1, 2014
I never said it is unstable, but I would wait until the next release.

What I meant by bug is that multi-MDS is not working perfectly (as you yourself wrote).
Why would you want a distributed FS with a single point of failure?
 

gkovacs

Well-Known Member
Dec 22, 2008
Budapest, Hungary
I never said it is unstable.
What I meant by bug is that multi-MDS is not working perfectly (as you yourself wrote).
Multiple active MDS daemons do not work, which means that if the active one goes down, another one has to become active for CephFS to keep working. This is an annoyance, yes, but not a showstopper for us.
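In practice the single-active-MDS limitation is usually handled by running a standby daemon. A rough sketch, assuming a Jewel cluster managed with ceph-deploy (the hostname is a placeholder):

```shell
# Inspect the MDS map: one daemon should show "up:active",
# any others "up:standby".
ceph mds stat

# Deploy a second MDS daemon on another node; a standby takes over
# automatically when the active MDS fails.
ceph-deploy mds create node2
```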

For what do you like to have a distributed FS with a single point of failure?
We would like to see CephFS implemented as a Proxmox storage plugin for vzdump backups. Backup storage has lower uptime requirements than VM storage, so we are fine with the single active MDS until it gets fixed.
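For backup use the invocation would be unremarkable once such a storage existed. A hedged sketch, assuming a directory storage named `cephfs-backup` (a placeholder name) backed by a CephFS mount:

```shell
# Snapshot-mode backup of guest 100 to the CephFS-backed storage,
# LZO-compressed:
vzdump 100 --storage cephfs-backup --mode snapshot --compress lzo
```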
 

alexskysilk

Renowned Member
Oct 16, 2015
803
105
63
Chatsworth, CA
www.skysilk.com
If you are really ok with using the same disks for backup and production, creating a VM/container for the purpose and sharing it out over NFS is a lot less complex (and a lot more dependable today) than adding another layer for Ceph to process. In the future that may change...
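The NFS route described above maps to a single storage definition on the Proxmox side. A sketch with placeholder server address, export path, and storage name:

```shell
# Register an NFS export from the storage VM as backup storage:
pvesm add nfs vm-backup --server 10.0.0.5 \
    --export /export/backup --content backup
```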
 

e100

Renowned Member
Nov 6, 2010
Columbus, Ohio
ulbuilder.wordpress.com
If you are really ok with using the same disks for backup and production

I would not consider any online storage system a backup; it may be part of a backup system, but it is not a complete backup.


More related recommendations here:
https://forum.proxmox.com/threads/howto-upgrade-ceph-hammer-to-jewel.31692/#post-158359

I think it would be great to be able to use CephFS for vzdump, even better if it was all integrated in Proxmox. But that leaves me wondering if the GUI should attempt to prevent (or just alert) users when they make mistakes like backing up a VM into the same storage.
 

alexskysilk

Renowned Member
Oct 16, 2015
Chatsworth, CA
www.skysilk.com
I think it would be great to be able to use cephfs for vzdump,

This is the part I'm not following. Ceph isn't free: it requires CPU power for monitors and MDS daemons, its erasure coding isn't as reliable as ZFS, and replication groups are inefficient for this purpose. Considering that backup storage is not mission critical, what would you see as the advantages over a ZFS appliance?
 
