PVE 5.2 on ZFS, use as plain disk only?

alexc

Renowned Member
Apr 13, 2015
I have a freshly installed host under PVE 5.2 on two HDDs set up as ZFS RAID1. I used to store VM disks on a plain filesystem, so I have no plans to use the rpool/data pool. What happens if I remove its definition from storage.cfg? Can I then mount this pool (or destroy it and create another one) under /var/lib/vz and keep using the good old familiar approach?

Is rpool/data some kind of special dataset or name for PVE? Will I break something if I disable its use in PVE and remove it from the disk?
 
You understand ZFS completely wrong. VMs are stored on rpool/data; this is the default ZFS dataset for VMs. On ZFS it is a very bad idea to store VM disks directly on the filesystem. ZFS is not designed for that. ZFS has a lot of nice features that can be used. If you don't need ZFS features, have a look at a hardware RAID. So please read the documentation.
https://pve.proxmox.com/wiki/ZFS_on_Linux
https://pve.proxmox.com/wiki/Storage:_ZFS

Personally: ZFS really is the best, I never use hardware RAID anymore.
 
ZFS is very good, really (I just love it on storage, and ZFS is designed for many things, including filesystem storage too), and using it as intended is the perfect approach, but that is not my question.

What I need is to host VM disks (qcow2 files) on a plain filesystem over ZFS - there is some extra overhead in that, but doing so I just don't need the "rpool/data" dataset, so what happens if I remove it?

And yes, I can neither afford a hardware RAID, nor do I want to play roulette by setting up Debian over software RAID and putting PVE packages on top.
 
What I need is to host VM disks (qcow2 files) on a plain filesystem over ZFS - there is some extra overhead in that, but doing so I just don't need the "rpool/data" dataset, so what happens if I remove it?
No. Then ZFS is the wrong choice. If you plan to save VMs directly on the filesystem, ZFS makes no sense for you. And it is a really bad idea: first, it is much slower than anything else, and you could suffer data loss.
And yes, I can neither afford a hardware RAID, nor do I want to play roulette by setting up Debian over software RAID and putting PVE packages on top.
Then you have a problem ;)

What can you do?
Install only PVE on these two hard drives with ZFS (two little SSDs, no Evo...) and do not use them for VMs. Use normal software RAID (not supported, but it is an old standby and works fine) for the other HDDs to store VMs on an ext4 filesystem.
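As a rough sketch of that split (device names and the storage ID are assumptions, not from the thread):

```shell
# Assemble the spare HDDs into a software RAID1 (sdb/sdc are placeholders)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put ext4 on it and mount it somewhere permanent
mkfs.ext4 /dev/md0
mkdir -p /mnt/vmstore
mount /dev/md0 /mnt/vmstore   # add to /etc/fstab for boot persistence

# Register it with PVE as a directory storage for qcow2 images
pvesm add dir vmstore --path /mnt/vmstore --content images
```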
 
No. Then ZFS is the wrong choice. If you plan to save VMs directly on the filesystem, ZFS makes no sense for you. And it is a really bad idea: first, it is much slower than anything else, and you could suffer data loss.

Then you have a problem ;)

What can you do?
Install only PVE on these two hard drives with ZFS (two little SSDs, no Evo...) and do not use them for VMs. Use normal software RAID (not supported, but it is an old standby and works fine) for the other HDDs to store VMs on an ext4 filesystem.

Let me tell my inner ideas :)

First of all, I have a 2-HDD blade host to which I can add neither a hardware RAID card, nor an extra disk, nor even an SD card to boot from. This is the given configuration; I simply cannot change it at will.

I saw a software RAID break down once, and I have no desire to see that repeated. ZFS looks a bit more robust (and yes, all these magic features are nice to have, too :) ). So ZFS appears to be a good choice (but I haven't benchmarked it against software RAID or even a single HDD).

I do like the way PVE installs out of the box (boot the ISO and next-next-next, you know), and I hate playing with a separate Debian install.

What I can do now is:

1) Use ZFS as intended (yes, keep VMs in ZFS, not on top of it). The only problem is, if I need to do something with the VM data (say, copy a VM), I can shut it down and copy its disks with the "qcow2 over filesystem" approach, but I can't if I store the VM inside ZFS. The same is true if I decide to migrate a VM (in fact, its disks) to another PVE host: copying qcow2 files is easier than copying VM data out of ZFS.

2) Use one HDD to keep the OS and some VMs, and the second HDD for VM backups only (no mirror, a plain old backup every night or so). This is easy to support, and even if the OS drive dies I can restore last night's state from the backup disk. A poor man's HA approach :)

3) Install Debian, create the software RAID Debian-style, and set up PVE on top of Debian after that. Messy, and there are a lot of things to take care of (the PVE packages expect certain things like LVM to be set up already).

Hard to choose, really. What I love PVE for is that its default setup works out of the box; doing "way 3" isn't that easy or reproducible. Way 2 is the easiest one but is kind of "amateur" :)
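For what it's worth, way 2 would boil down to a nightly vzdump job onto the second disk, roughly like this (the storage ID "backupdisk" and VMID 100 are assumptions):

```shell
# Back up VM 100 to the directory storage on the second HDD;
# snapshot mode keeps the VM running during the backup
vzdump 100 --storage backupdisk --mode snapshot --compress lzo
```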
 
What I need is to host VM disks (qcow2 files) on a plain filesystem over ZFS - there is some extra overhead in that, but doing so I just don't need the "rpool/data" dataset, so what happens if I remove it?
This IS possible. You'd just create a ZFS dataset and use it as a mounted filesystem location; as a matter of fact, /var/lib/vz will already be available for that purpose.

What @fireon was trying to tell you is that this is a BAD IDEA.
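A minimal sketch of that possible-but-discouraged setup (the dataset name is a placeholder, and creating it assumes /var/lib/vz is still empty):

```shell
# Give /var/lib/vz its own dataset instead of leaving it on the root dataset
zfs create -o mountpoint=/var/lib/vz rpool/vmfiles

# The default "local" directory storage already points at /var/lib/vz;
# just allow disk images (qcow2) on it alongside the usual content types
pvesm set local --content images,iso,vztmpl,backup
```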
So ZFS appears to be a good choice (but I haven't benchmarked it against software RAID or even a single HDD).
You don't really have many options with two drives. Performance will be basically the same whether you use single disks or a RAID1.

Use ZFS as intended (yes, keep VMs in ZFS, not on top of it). The only problem is, if I need to do something with the VM data (say, copy a VM), I can shut it down and copy its disks with the "qcow2 over filesystem" approach, but I can't if I store the VM inside ZFS.
Completely untrue. You can back up and restore images live, without turning anything off, using ZFS; you can also dd the zvol to a raw image at will, or take a snapshot. This is much more flexible than working with qcow2 files.
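For example, with a zvol-backed disk (the VMID and dataset names are assumptions), a live copy can be taken from a snapshot:

```shell
# Take a crash-consistent snapshot while the VM keeps running
zfs snapshot rpool/data/vm-100-disk-1@copy

# Snapshot block devices are hidden by default; expose them, then dd to raw
zfs set snapdev=visible rpool/data/vm-100-disk-1
dd if=/dev/zvol/rpool/data/vm-100-disk-1@copy of=/tmp/vm-100-disk-1.raw bs=1M

# Or serialize the snapshot as a stream instead
zfs send rpool/data/vm-100-disk-1@copy > /tmp/vm-100-disk-1.zfs
```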

Hard to choose, really.
No, it's not. There is no use case for two drives that would not be best served by ZFS.
 
And it is a really bad idea. First it is much more slower then everything and you could suffer data loss.

We've never seen proof of this "data loss theory", but it is performance-wise the worst setup for such a simple stack. I love ZFS, but it still has at least one big drawback: it cannot do non-linear snapshotting (e.g. snapshot trees, as qcow2 can). I have used ZFS for years with an exposed directory to store my qcow2 images and I never ran into a problem. I specifically need these snapshot trees, and the machine already had ZFS, so I just went with it. Is it fast? No, not as fast as using ZFS directly, but my pool is a 6-bay enterprise SSD pool, so a little performance loss is negligible. Still, I cannot recommend this performance-wise for a two-disk setup.

No, it's not. There is no use case for two drives that would not be best served by ZFS.

And it is the only viable choice if you want to have a supported system with software raid.

One final remark on the /var/lib/vz usage:

IIRC, since PVE 4.2 there is no /var/lib/vz for images anymore; thin LVM has taken its place. So you will have to change your point of view eventually. Best time to learn and fall in love with ZFS :-D
 
I can shut it down and copy its disks with the "qcow2 over filesystem" approach, but I can't if I store the VM inside ZFS. The same is true if I decide to migrate a VM (in fact, its disks) to another PVE host: copying qcow2 files is easier than copying VM data out of ZFS.
That's completely wrong. With the replication feature, "zfs send" and pve-zsync you have everything you need. You are also able to migrate online from ZFS to ZFS without a shutdown. Or you can use the nice web-based replication feature (I prefer that).
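Sketched with assumed host, VMID, and dataset names:

```shell
# One-off replication of a VM disk snapshot to another ZFS host
zfs snapshot rpool/data/vm-100-disk-1@migrate
zfs send rpool/data/vm-100-disk-1@migrate | ssh otherhost zfs receive rpool/data/vm-100-disk-1

# Or schedule recurring incremental syncs of the whole VM with pve-zsync
pve-zsync create --source 100 --dest otherhost:rpool/backup --maxsnap 7 --verbose
```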
 
