ZFS in Guests

ianmbetts

Mar 11, 2020
Hi,
I am running a cluster with Ceph Bluestore and some guest VMs that use the ZFS file system.
To date I have thought it prudent to set up a virtual RAID-Z in these VMs, i.e. provide a minimum of three virtual disks to the guest.

The primary reason for using ZFS is features such as compression and snapshots, and because it is the default FS
for certain guest OSes (e.g. TrueNAS and pfSense).

Since I am now using Ceph Bluestore, I am thinking that the bit-rot protection provided by ZFS is superfluous, since
bit rot will already be detected and corrected by Ceph. In that case I think I no longer need RAID-Z and can install
the VM with ZFS on a single virtual disk.

I cannot find any discussion of such a configuration and wonder what people's opinions are.
Is it safe? Will it make any difference to performance or resource usage?

Thanks in advance.
 
In theory it should be fine with a single virtual disk, as Ceph will take care of redundancy and bit-rot protection. And performance should be better with a single virtual disk, as you remove overhead by not needing to do all the parity calculations and not wasting IOPS/bandwidth on unnecessary parity data.
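To make that concrete, here is a minimal sketch of the two layouts inside the guest (pool and device names are just examples; on a VirtIO-SCSI VM the extra disks would typically show up as /dev/sdb, /dev/sdc, ...):

# RAID-Z across three virtual disks (the layout described above)
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Single virtual disk, leaving redundancy and bit-rot repair to Ceph underneath
zpool create tank /dev/sdb

# Features like compression and snapshots work the same either way
zfs set compression=lz4 tank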
 
Just to update this after running for six months with a multi-virtual-disk RAID-Z.

It's NOT a good idea.

Online backups leave the RAID-Z in an inconsistent state, which
means restoring a backup always involves subsequently having to fix a corrupted ZFS pool.

Backup with the VM shutdown is fine.

The better solution is what I proposed (ZFS on a single virtual disk).
I have been running this for a couple of months now with no issues.
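For anyone who hits the same issue: as an illustration only, the sort of check I would run after restoring such a backup (pool name is an example):

# Verify the restored pool and let ZFS re-check every block
zpool status -v tank
zpool scrub tank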
 
I also have a couple of VMs with a ZFS pool inside. For all those disks, I disabled backup via PVE and just replicate the pool to the backup system.
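Roughly, and with the VMID, disk, host and dataset names as placeholders, that looks like:

# Exclude the guest's ZFS data disk from PVE vzdump backups
qm set 100 --scsi1 local-zfs:vm-100-disk-1,backup=0

# Replicate the pool itself instead (initial full send; incrementals would use -i)
zfs snapshot -r tank@repl-2024-06-01
zfs send -R tank@repl-2024-06-01 | ssh backuphost zfs recv -Fu backup/tank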
 
I would venture that this is probably not a good way to do it; CoW on CoW is just a recipe for poor performance and worse. Instead, I'd attach CephFS directly to the VM and use the ceph_snapshot VFS object.
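A rough sketch of that approach, assuming a kernel CephFS mount in the guest with Samba on top (monitor address, client name and paths are made up):

# Mount CephFS directly inside the guest
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=smbgw,secretfile=/etc/ceph/smbgw.secret

# The Samba share on that path then enables the snapshot module in smb.conf,
# e.g. "vfs objects = ceph_snapshots" (module name as documented in current Samba releases)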
 
I'm curious; what guest requires ZFS?
I have had two apps that use ZFS (both because they are running on BSD):
pfSense and TrueNAS.

I have since moved pfSense to dedicated redundant bare metal, because it's hard to do remote diagnosis on a sick cluster when you are logged in via a firewall running on said cluster - lol

I am still running TrueNAS as a VM. It provides SMB NAS on our office LAN and NFS storage to PVE for ISO images.
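For reference, pointing PVE at the TrueNAS NFS export for ISO storage is a one-liner (server address, export path and storage ID are examples):

pvesm add nfs truenas-iso --server 192.168.1.20 --export /mnt/tank/iso --content iso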
 
Neither one of these REQUIRES ZFS; both work just fine with UFS.
While in theory you can use UFS with FreeBSD, those two are appliances and you should only use what they allow you to do. While OPNsense allows me to decide between UFS and ZFS, TrueNAS Core just requires ZFS.
 
Not to beat a dead horse: you are used to driving your car to the grocery store, so you made a fleet of trucks to carry your car to the grocery store. TrueNAS is not meant to be used as you describe; it wants to control the hard drives directly, under which conditions creating a zpool is ideal. You want to create an overcomplicated mousetrap, and you're welcome to do so. I'm simply pointing out that there are better ways within the architecture you describe.

And you most certainly can use a UFS volume with TrueNAS Core, but it's a much better solution not to use it at all within a Ceph cluster.
 
With respect, this is one of the primary use cases for virtualization, i.e. being able to leverage diverse best-in-class applications and preserve legacy investments, with the advantage of benefiting from a consistent approach to backup and high availability.
So it really depends what you mean by "better solution".
This meets all my requirements and has the advantage that it utilizes a long-established and robust application that is widely known and well supported.
The performance is also perfectly adequate.
 
"With respect, this is one of the primary use cases for virtualization, i.e. being able to leverage diverse best-in-class applications,"
Certainly.
"and preserve legacy investments,"
A filer isn't a legacy investment; the payload can exist on nearly any type of guest that supports smbd/nfsd.
"with the advantage of benefiting from a consistent approach to backup and high availability."
Backing up filer assets and backing up VMs don't have the same requirements. Using the same tool for both means either using a tool meant for both use cases (e.g. Veeam) or necessarily making compromises for one in favor of the other.
"This meets all my requirements and has the advantage that it utilizes a long-established and robust application that is widely known and well supported.
The performance is also perfectly adequate."
As others told you, putting ZFS in a guest is not a "supported" solution, ESPECIALLY when the backing store is a CoW file system. In any case, I can only point out WHY this isn't a good idea, but ultimately it's up to you to deploy, administer, and support it; you do you.
 
"As others told you, putting zfs in a guest is not a "supported" solution"

Nowhere in this thread has anyone said any such thing.
Other than yours, the only other responses are both from members who see it as a viable solution, one of whom is actually doing the same thing.

Anyway, it's a dead horse. End.
 
