Setting ZFS as the primary storage for container disk images

gacott

Renowned Member
Dec 26, 2009
So this is what I'd like to do, and I can't quite figure it out. I'm sure I'm missing something simple. I have a fairly large cluster, so migration is a big part of the motivation here.

1. I'd like to install on an M.2 drive. This is simple, and I have accomplished this easily.
2. I am installing on a server with eight drives, and I am creating a ZFS pool out of the eight drives intended for disk images and containers.

a. I'd like to use ALL of the M.2 drive storage for templates, isos, and backups, rather than containers and disk images. Basically, get rid of the LVM and use it as file-based storage.

b. Doing this would hopefully allow the eight-drive ZFS pool to become the default for containers and disk images.

It seems this should all be set up at installation, or at least initiated then, but I'm not clear how to go about it. Or should I just get rid of the LVM after installation and go from there? Is there an easier way? I'd prefer not to install ZFS across everything, so having that M.2 as the OS and file storage seems like a good move to me, but I'm not sure.

Thanks in advance.
 
a. I'd like to use ALL of the M.2 drive storage for templates, isos, and backups, rather than containers and disk images. Basically, get rid of the LVM and use it as file-based storage.
You can destroy (or tell the installer to not create it in the first place) the "data" LV (your thin pool) and then use the free space of the VG to extend your "root" LV and then resize the ext4 of the root filesystem (something like lvextend --resizefs -l +100%FREE /dev/pve/root).
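A minimal sketch of those steps after installation, assuming the default "pve" volume group and the default "local-lvm" storage name (adjust both if yours differ):

```shell
# Remove the LVM-thin storage entry from the PVE config first
# (assumes the default storage name "local-lvm").
pvesm remove local-lvm

# Destroy the "data" thin pool LV in the default "pve" VG.
lvremove /dev/pve/data

# Grow the root LV into the freed space and resize its ext4
# filesystem in one step.
lvextend --resizefs -l +100%FREE /dev/pve/root
```

If you tell the installer not to create the thin pool in the first place (maxvz set to 0), only the lvextend step is needed.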
 
Everything should work as expected if you create the ZFS pool from the installer and also install PVE directly on ZFS from the installer. Adding the M.2 drive afterwards should be easy.
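Adding the M.2 afterwards as file-based storage could look something like this; the device name, mount point, and storage ID are placeholders for illustration:

```shell
# Format the M.2 drive and mount it (device name is an example --
# check yours with lsblk). Add an /etc/fstab entry to persist.
mkfs.ext4 /dev/nvme0n1p1
mkdir -p /mnt/m2store
mount /dev/nvme0n1p1 /mnt/m2store

# Register it in PVE as directory storage limited to ISOs,
# container templates, and backups.
pvesm add dir m2store --path /mnt/m2store --content iso,vztmpl,backup
```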
 
You can destroy (or tell the installer to not create it in the first place) the "data" LV (your thin pool) and then use the free space of the VG to extend your "root" LV and then resize the ext4 of the root filesystem (something like lvextend --resizefs -l +100%FREE /dev/pve/root).
This is exactly what I was after, thanks. I set maxvz to 0 within the installer and then resized it, worked great. But this created another issue for me, the one that actually started me down this path.

Once I add this machine to the cluster, the cluster automatically shows a local-lvm for the machine with a question mark next to it. Fine, that shows it doesn't connect, but I don't want it there at all. Is it there because both local and local-lvm are defined as cluster-wide storage?

Then I add my ZFS storage there as well. The issue is that I'm trying to migrate machines off some servers I need to rebuild like this, and those machines are on local-lvm. When I try the migration, it wants to go to another local-lvm, and this machine doesn't have one; I want to migrate to the ZFS storage instead. I guess this would have to be done through the CLI, maybe?
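A sketch of how this can be handled on the CLI (the node names, VM/CT IDs, and the storage ID "zfspool" are placeholders):

```shell
# Restrict local-lvm to the nodes that actually have it, so it no
# longer appears with a question mark on the rebuilt node.
pvesm set local-lvm --nodes node1,node2

# Migrate a VM, redirecting its disks to the ZFS storage on the
# target instead of looking for a matching local-lvm.
qm migrate 100 newnode --targetstorage zfspool

# Containers use pct; recent PVE versions accept --targetstorage
# here as well (restart mode for a running container).
pct migrate 200 newnode --restart --targetstorage zfspool
```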

Again, thank you.
 
