ceph with one-copy block device

I wonder how Proxmox VE reacts if a VM is stored on a one-copy block device and the OSD or the machine crashes. Will it be reported as failing correctly?
Has nothing to do with Proxmox VE per se. Ceph will do its thing.

I'm using a 1-copy strategy to set up database followers that can be started/stopped at any time, as described in this paper:
When the database is replicating to another backend, sitting on different storage, then the redundancy is built in at the DB level. But this only works if those DBs are running independently of one another (down to the storage layer).
 
> Has nothing to do with Proxmox VE per se. Ceph will do its thing.

Ok, perfect. I was worried there was something special about it :)

> When the database is replicating to another backend, sitting on different storage, then the redundancy is built in at the DB level. But this only works if those DBs are running independently of one another (down to the storage layer).

Indeed. IMO the "beauty" of it is that it abstracts the storage logic: instead of setting up RAID per machine as you would with local storage, you configure the data placement you want, i.e. consistent across 3 machines or just 1, as in this case. I will report soon how it went.
 
> Indeed. IMO the "beauty" of it is that it abstracts the storage logic: instead of setting up RAID per machine as you would with local storage, you configure the data placement you want, i.e. consistent across 3 machines or just 1, as in this case. I will report soon how it went.
A pool with size 1 (and min_size 1) keeps only one copy of each PG in the pool. So any hardware malfunction on an OSD will lose the data.
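To make this concrete, here is a minimal sketch of how such a one-copy pool could be created and registered in Proxmox VE. The pool name (`followers`), storage ID (`ceph-followers`), and PG count are assumptions for illustration, not taken from the thread.

```shell
# Sketch: create a 1-copy replicated pool for disposable follower disks.
# Pool and storage names are placeholders.
ceph osd pool create followers 64 64 replicated

# Keep a single copy of each object; recent Ceph releases may require
# --yes-i-really-mean-it to allow size 1.
ceph osd pool set followers size 1 --yes-i-really-mean-it
ceph osd pool set followers min_size 1

# Tag the pool for RBD use.
ceph osd pool application enable followers rbd

# Register it in Proxmox VE as an RBD storage backend for VM disks.
pvesm add rbd ceph-followers --pool followers --content images
```

Any VM disk placed on this storage then lives on exactly one OSD, which is the trade-off being discussed: fast and cheap, but lost on the first hardware failure.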
 
> A pool with size 1 (and min_size 1) keeps only one copy of each PG in the pool. So any hardware malfunction on an OSD will lose the data.
which is fine for a follower, IMO. In that case you have the following design:
  • Master (or any big data node) on a 3-copy (or 2-copy) pool. Data is written there first.
  • Reads happen on followers; data is replicated from the master. You can also filter out parts of it.

This allows you to scale the readers dynamically across the nodes depending on the needs or the load. In that case you don't care if a follower (aka read node) crashes, since you can easily launch a new one (or provision it) with the data from the master.
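The "launch a new follower from the master" step above could look like the following sketch, assuming PostgreSQL streaming replication as the database (the thread does not name one); the host, user, and data directory are placeholders.

```shell
# Sketch: reprovision a lost read follower from the master, assuming
# PostgreSQL streaming replication. Host, user, and paths are placeholders.

# Take a fresh base backup from the master and configure it as a standby:
# -R writes standby.signal and primary_conninfo, -X stream pulls WAL inline.
pg_basebackup -h master.example.com -U replicator \
    -D /var/lib/postgresql/data -R -X stream -P

# Start the follower; it catches up from the master's WAL stream.
pg_ctl -D /var/lib/postgresql/data start
```

Since the follower's disk sits on the 1-copy Ceph pool, losing the OSD simply means repeating these two commands on a fresh VM rather than recovering the disk.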
 
> Master (or any big data node) on a 3-copy (or 2-copy) pool. Data is written there first.
To clarify, I am talking about Ceph's data redundancy, not about the database.
 