Ceph librbd vs krbd

librbd and krbd are just two different clients; the Ceph pool does not care which one you use to access an RBD image.

In Proxmox VE we use librbd for VMs by default and krbd for containers.
You can, however, enforce the kernel RBD driver for VMs as well by enabling the "krbd" option in the PVE storage configuration of the pool.
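
For reference, a minimal sketch of what that looks like; the storage name "ceph-vm" and pool name "vm-pool" here are just placeholders. In /etc/pve/storage.cfg the relevant entry carries the krbd flag:

    rbd: ceph-vm
            pool vm-pool
            content images
            krbd 1

Or set it on an existing storage from the CLI:

    pvesm set ceph-vm --krbd 1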

Differences between the two clients are:
librbd adopts newer storage features more quickly, but with the current 5.4-based kernel you get all the important features in the kernel client too.
It's said that krbd is often a bit faster than librbd, but in my experience librbd isn't slow either, and you'd need quite a fast Ceph cluster to see a difference.
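
If you want to measure it yourself, here is a quick fio sketch; the pool name "testpool" and image name "testimg" are placeholders, and the write test destroys data, so use a scratch image. For the librbd path, fio's rbd ioengine talks to the cluster directly:

    fio --name=librbd-test --ioengine=rbd --clientname=admin \
        --pool=testpool --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --direct=1 \
        --runtime=60 --time_based

For the krbd path, map the image through the kernel driver and test the resulting block device:

    rbd map testpool/testimg        # creates e.g. /dev/rbd0
    fio --name=krbd-test --filename=/dev/rbd0 \
        --rw=randwrite --bs=4k --iodepth=32 --direct=1 \
        --runtime=60 --time_based
    rbd unmap testpool/testimg

Run both against the same image and compare the reported IOPS and latency.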
 
Hi All,

We just experienced a bug that caused us to switch to krbd.

Is there a good reason to switch back once the bug is resolved? It seems that krbd might be faster and I don't see any features that I'm giving up.

best,

James
 
Also did the switch. Nothing to complain about so far. I guess krbd is the way to go.