KRBD - rbd image as backup storage

Hi All!

I want to implement storage for VM and CT backups inside the Ceph storage of my cluster, as an rbd image.
From any cluster node, via krbd, I would create a file system on that rbd image and mount it in a local folder.
This local folder would then be used as the backup storage for VMs.

For testing I created an rbd image (size 40GiB):

rbd create backup-store --size 40960 --pool ceph_stor

rbd ls -l ceph_stor


NAME           SIZE    PARENT  FMT  PROT  LOCK
backup-store   40960M          1
vm-100-disk-1  5120M           2
...

I'm trying to determine which disk corresponds to the created image via fdisk -l,
but I do not see any drive of that 40GiB size...

Any idea? :confused:
 
Hi,
you have to map it to a device first, then you can use it.

rbd map ceph_stor/backup-store
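
After mapping, the remaining steps could look roughly like this (a sketch only - the mount point /mnt/backup-store and xfs are just examples):

# show which /dev/rbdX device the image got
rbd showmapped

# the ceph udev rules also create a /dev/rbd/<pool>/<image> symlink,
# which is easier to reference than the numbered device
mkfs.xfs /dev/rbd/ceph_stor/backup-store
mkdir -p /mnt/backup-store
mount /dev/rbd/ceph_stor/backup-store /mnt/backup-store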
 
Never try to mount an ext2/ext3/ext4/xfs file system on several nodes at the same time - That way you will destroy your data.
 

Hi!
I tried rbd map. It's OK. I made a file system and mounted the image to a local directory.
The mount for this directory I placed in /etc/fstab.
What is the best way to run rbd map automatically before the /etc/fstab mount when a node boots?
Maybe via /etc/rc.local?
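
I am thinking about something like the rbdmap helper that ships with Ceph (an untested sketch - id=admin and the keyring path are just the defaults, adjust to your setup):

# /etc/ceph/rbdmap - images the rbdmap service maps at boot
ceph_stor/backup-store id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab - noauto/_netdev so boot does not stall before the image is mapped
/dev/rbd/ceph_stor/backup-store /mnt/backup-store xfs noauto,_netdev 0 0

Whether rbdmap also mounts the matching fstab entry may depend on the Ceph version; if not, an explicit mount in /etc/rc.local after the mapping would be the simple fallback.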
 
You could try CephFS. It will be available on all nodes.

I would use a dedicated backup server with RAID5, exported via NFS (less risk).
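
A rough sketch of that NFS route (paths, network and the storage name are only examples):

# /etc/exports on the backup server
/srv/backups 10.0.0.0/24(rw,sync,no_subtree_check)

# on a Proxmox node, add it as backup storage (see man pvesm for the exact options)
pvesm add nfs backup-nfs --server 10.0.0.50 --export /srv/backups --content backup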
 
And what are the risks of using HA storage - rbd (Ceph)?
I assume you mean the same cluster, right?

Short answer: keeping backups on the same system as the VM data (or the data to be backed up) is not a backup. It is analogous to how RAID1 is not a backup but an increase in fault tolerance.


Long answer:
The pros and cons are as follows:

• Dedicated backup server with RAID5, exported via NFS
+ true backup (data loss less likely)
+ separate server (external copy of the data)
+ redundancy done via RAID5 parity (saves space + allows HDDs to fail)
- single point of failure (1 server)
- single point of failure (1 RAID controller)



• Backup on a Ceph pool (same cluster as the VM data)
+ you can potentially keep as many replicas as you have OSDs (you can lose HDDs up to (ReplicationNumber - 1))
+ you can use erasure-coded pools to save space (efficiency increases with the number of OSDs) and to have different parity levels for different pools (see the erasure-code sketch at the end of this post)
+ no single point of failure (RAID controller - you have at minimum 3 in JBOD mode, so you can lose 2)
+ no single point of failure (server - you have at minimum 3, so you can lose 2)
+ the likelihood of ever needing your backups is decreased
- single point of failure (the Ceph software - e.g. an update, a broken crushmap, other types of snafu) - you lose both the VM data and the backups - this makes it not a true backup


• Backup on a Ceph pool (dedicated backup Ceph cluster)
+ all the positive points from above
+ no single point of failure (Ceph software)
+ true backup


ps.: a "Ceph Cluster" can be a single Node running Ceph on a separate network. e.g. a "ceph cluster" of 3 nodes on 10.1.1.1/24 and a "Ceph Cluster" of 1 node on 10.1.2.1/24
 
- single point of failure (the Ceph software - e.g. an update, a broken crushmap, other types of snafu) - you lose both the VM data and the backups - this makes it not a true backup...

P.S.: a "Ceph cluster" can be a single node running Ceph on a separate network, e.g. a "Ceph cluster" of 3 nodes on 10.1.1.1/24 and a "Ceph cluster" of 1 node on 10.1.2.1/24.

I agree with all the arguments, but a single node for the Ceph storage... it does not seem like a good idea.
I plan to build a second, 2-node cluster with Ceph storage for backups of the main cluster's VMs.
There are two options:
via krbd -> mount -> iSCSI
via CephFS

CephFS seems like a good idea, but I have not tried it yet... and I do not know about its stability in production.
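
If I go the CephFS route, I understand the setup would be roughly this (an untested sketch - pool names, PG counts and the monitor address are only examples, and a running MDS is required):

# pools for CephFS data and metadata, then the file system itself
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new backupfs cephfs_metadata cephfs_data

# kernel-client mount on a node (secretfile holds the client key)
mount -t ceph 10.1.2.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret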
 
I agree with all the arguments, but a single node for the Ceph storage... it does not seem like a good idea.
[...]

A single-node Ceph cluster is nothing more than a "RAID system" or a "NAS" based on Ceph storage. I was just pointing out that this is possible (not that it is a good idea).

At work we use Ceph pools (EC + SSD R4/2 cache tiers / replicated 4/2) -> KRBD -> OpenMediaVault on 20+ OMV machines across 3 different Ceph clusters, serving some 70+ Proxmox nodes and some 300+ clients.
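
Roughly how such a cache tier is wired up (a sketch only - pool names are examples, placing the cache pool on SSDs needs a matching CRUSH rule, and hit_set/target sizing still has to be tuned):

# replicated pool on SSDs to act as the cache
ceph osd pool create ssd_cache 128 128 replicated
# attach it as a writeback cache tier in front of the EC pool
ceph osd tier add ec_backup ssd_cache
ceph osd tier cache-mode ssd_cache writeback
ceph osd tier set-overlay ec_backup ssd_cache
ceph osd pool set ssd_cache hit_set_type bloom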