PVE 5: VM on mounted zfs?

alexc

Renowned Member
Apr 13, 2015
In PVE 5.2 I can add, say, a couple of HDDs as a ZFS-based mirror. The ZFS pool can then be added as storage. But the ZFS pool can also be mounted as a general disk.

But say I added the ZFS pool as ZFS storage, or added it as directory storage. What are the pros and cons of each option? The ZFS pool appears to be the modern way, but the data cannot easily be extracted (only via backup) to be copied to, say, another PVE node. Using ZFS as mounted directory storage appears to have bigger overhead.

And another question: what will happen if I use the same ZFS pool both as PVE ZFS-pool storage and as directory storage?
 
You can add a ZFS dataset as a Directory entry in Proxmox. If you do this, Proxmox will create the typical directory structure you see with ext4 (dump, images, templates, etc.). I think the biggest reason people choose this option is familiarity. People can see the .qcow2 files and move them around as they wish. But, the performance of the VM will suffer.
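If you want to go this route, here's a minimal sketch; the dataset name and storage ID (tank/pve-dir, tank-dir) are just examples of mine, not anything PVE creates for you:

    # create a dataset on the existing pool and register its mountpoint as a Directory storage
    zfs create tank/pve-dir
    pvesm add dir tank-dir --path /tank/pve-dir --content images,iso,vztmpl,backup

Proxmox will then lay down the usual dump/images/template subdirectories under /tank/pve-dir.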

When adding a ZFS storage entry, Proxmox will use zvols for each virtual machine disk. If necessary, these can be accessed at /dev/zvol/zpool-name/vm-100-disk-1 as if you were accessing /dev/sdb, for example. As for moving a zvol-based virtual disk from one PVE node to another, I'm sure Proxmox has a feature to do this; I'm just not familiar with it. In the absolute worst case, you can use ZFS send/receive to migrate a zvol.
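For what it's worth, a send/receive sketch along those lines (pool name, VM ID, snapshot name, and target host are all placeholders):

    # snapshot the zvol and stream it to the other node over SSH
    zfs snapshot tank/vm-100-disk-1@migrate
    zfs send tank/vm-100-disk-1@migrate | ssh other-node zfs receive tank/vm-100-disk-1

You'd then adjust the VM config on the target node to point at the received zvol.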

Regarding using a ZFS pool as both PVE ZFS storage and directory storage: you can do this. It would allow you to create virtual machines using the PVE ZFS storage, while keeping things like ISOs on the same data disks.
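For example, both storage entries can sit on the same pool (the names here are again just illustrative):

    # zvol-backed VM disks straight on the pool
    pvesm add zfspool tank-vms --pool tank --content images,rootdir

    # a dataset mounted as a directory, for ISOs and backups
    zfs create tank/media
    pvesm add dir tank-media --path /tank/media --content iso,backup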
 
I think the biggest reason people choose this option is familiarity. People can see the .qcow2 files and move them around as they wish. But, the performance of the VM will suffer.

Yes, familiarity is exactly my case, but I will explore the option of accessing zvols via the path you've cited.

May I please ask your opinion on a different part of the game, the performance? ZFS is a great thing, but it needs to be tuned very well to show perfect performance, and PVE won't do that automatically. Are there any tuning manuals (besides those old Sun and FreeBSD ones that are focused mainly on memory allocation, without any focus on a specific PVE disk load)?

This is a very interesting question, since you can buy or rent a node with no RAID controller for much less money, with mighty ZFS in mind. But if the performance will suffer from a far-from-ideal setup, then RAID may be the better option, isn't it?

Thank you!
 
In my mind, traditional RAID controllers are on their way out. Next generation file systems, and ZFS specifically, are only going to get better as time goes on. Sure, you can get a server with a RAID controller, use the familiar ext4 on top of LVM, and you'll get some, if not most of the benefits of a next generation file system. Or you could just use a next generation file system. It's not like ZFS is particularly complex compared to LVM.

I've done a lot of searching around the internet for performance tuning ZFS for virtual machines. By and large, the biggest piece of misinformation I have ever been given came from a talk hosted by some Citrix admins. They suggested that ZFS's ARC could be kept very low, because virtual machines will have their OS caching file system data themselves. The theory was that your performance would suffer upon initial start of the VM, but would get better as the OS cached data.

Having tried to make this work for months, I can tell you that this is false. ZFS's ARC is a better form of cache than any operating system's first-in-first-out cache will ever be. So instead, you should grow the ARC (at baseline, my Proxmox machines get a zfs_arc_max of 10GB and a zfs_arc_min of 8GB), and let the ARC handle the majority of the caching. This way, you can generally save on the memory utilization of the virtual machine. My standard for Windows Server 2016 VMs is 2 cores, 4GB of memory. If you're running a database, such as SQL Server, add memory for its cache.
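In case it's useful, this is how those limits would be set on a Proxmox node; a sketch assuming the stock Debian/PVE setup where ZFS module options live in /etc/modprobe.d, with my 10GB/8GB baseline converted to bytes:

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=10737418240   # 10 GiB
    options zfs zfs_arc_min=8589934592    # 8 GiB

    # rebuild the initramfs so the options apply at boot, then reboot
    update-initramfs -u -k all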

The next recommendation is disk configuration. Again, in my opinion, parity RAID is dead. Storage is damn near free, these days. Use pools of mirrors instead of RAIDZ-Anything. With a pool of mirrors, you have flexibility in performance and redundancy. Take 6 disks, and decide if you would rather have additional performance, or additional redundancy. If you need performance, you will have three 2-disk mirrors. If you need redundancy, you will have two 3-disk mirrors (and still get better performance than RAIDZ-Anything).
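As a concrete sketch of both layouts (device names are placeholders; substitute your own, ideally /dev/disk/by-id paths, before running anything this destructive):

    # six disks as three 2-way mirrors: maximum performance
    zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

    # or six disks as two 3-way mirrors: each vdev survives two disk failures
    zpool create tank mirror sda sdb sdc mirror sdd sde sdf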
 
