How to properly delete 'local-zfs' storage?

mircolino

New Member
Feb 9, 2023
New to Proxmox, migrating from Hyper-V. I have an Intel NUC with a single Samsung 980 Pro 2TB SSD.
I installed PVE using EXT4 following a NetworkChuck video and deleted the local-lvm storage:

From GUI:
delete Datacenter -> Storage -> local-lvm

From shell:
# lvremove /dev/pve/data

# lvresize -l +100%FREE /dev/pve/root
# resize2fs /dev/mapper/pve-root

Everything was perfect until I realized I cannot take snapshots of Windows VMs that use TPM .raw disks.

So I decided to reinstall PVE using ZFS (RAID0).
I deleted local-zfs from the GUI but when I execute a zfs list I still have /rpool/data:

# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            4.45G  1.75T   104K  /rpool
rpool/ROOT       4.45G  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1 4.45G  1.75T  4.45G  /
rpool/data         96K  1.75T    96K  /rpool/data


What is the proper way of deleting the local-zfs storage?
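For reference, removing a storage entry in the GUI (or with pvesm) only deletes its definition from /etc/pve/storage.cfg; the backing ZFS dataset stays behind. A sketch of the full removal, assuming the default rpool/data dataset shown above (both commands are destructive, so double-check the dataset is empty first):

```shell
# Remove the storage definition (same effect as deleting it in the GUI)
pvesm remove local-zfs

# Destroy the now-unreferenced dataset that backed it
zfs destroy rpool/data
```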
 
Everything was perfect until I realized I cannot take snapshots of Windows VMs that use TPM .raw disks.

So I decided to reinstall PVE using ZFS (RAID0).
I deleted local-zfs from the GUI but when I execute a zfs list I still have /rpool/data:
"local-zfs" is the place where you can store raw disks that are snapshotable. When deleting it you again end up with just "local" where you can't snapshot raw disks.

If you want snapshots, you either have to use qcow2 as the format on "local", or raw on "local-zfs"/"local-lvm".
 
"local-zfs" is the place where you can store raw disks that are snapshotable. When deleting it you again end up with just "local" where you can't snapshot raw disks.

If you want snapshots, you either have to use qcow2 as the format on "local", or raw on "local-zfs"/"local-lvm".
Somehow I naively thought ZFS had native snapshot capability. But you are right: even on ZFS, I still cannot take snapshots of VMs with .raw disks on "local" storage. If I remove the TPM and its .raw disk from the VM, snapshots work.

When I add a TPM (which is required for Windows 11) to a VM, PVE automatically creates a small .raw disk, and that's what prevents snapshots from working. Any way to convert the TPM .raw disk to .qcow2, or better yet, force PVE to create the TPM disk in .qcow2 format?
 
Somehow I naively thought ZFS had native snapshot capability.
It does. But "local" is a directory storage that doesn't make any use of the underlying storage's features. "local-lvm" and "local-zfs" are "LVMThin" and "ZFSPool" type storages, which store the virtual disks at block level and make use of the native snapshot features of LVM-Thin or ZFS.

When I add a TPM (which is required for Windows 11) to a VM, PVE automatically creates a small .raw disk, and that's what prevents snapshots from working. Any way to convert the TPM .raw disk to .qcow2, or better yet, force PVE to create the TPM disk in .qcow2 format?
I don't know if PVE requires the TPM state disk to be raw. You could try the qm importdisk command with the "--format qcow2" flag to import a raw file as a qcow2 file; it will then convert it.
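A sketch of that conversion (the VM ID 100 and the source path are placeholders; on newer PVE releases the command is spelled qm disk import):

```shell
# Import the raw TPM state file onto the "local" storage, converting it to qcow2
qm importdisk 100 /var/lib/vz/images/100/vm-100-disk-1.raw local --format qcow2
```

Whether PVE will then accept the qcow2 file as a TPM state disk is a separate question; this only covers the conversion itself.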
 
Is there a specific reason you want to use "local" for virtual disks and not the default "local-lvm" or "local-zfs"?
Because "local" should result in a bit less performance and a bit less SSD life expectation, as you get more overhead because of the nested filesystems which could be avoided. And you are missing features like quotas, replication and so on the LVM/ZFS would offer.

You usually only use a directory storage for virtual disks if you have to, because you are using NFS as shared storage, or if you need a specific qcow2 feature, like the ability to undo a snapshot rollback.
 
Is there a specific reason you want to use "local" for virtual disks and not the default "local-lvm" or "local-zfs"?
This is a small home server with a single 2TB SSD running OPNsense, Pi-hole, a UniFi controller, Home Assistant, and OpenMediaVault sharing a "local" folder with Samba.
A default PVE installation gives me a "local" storage of ~100GB and a "local-lvm" storage of ~1.9TB, leaving very little room to set up my samba share.
Ideally I would need the opposite: a "local-lvm" storage of ~100GB and "local" storage of ~1.9TB.

Is there a tutorial on how to expand/shrink "local" and "local-lvm"?

UPDATE:
I actually noticed at https://pve.proxmox.com/wiki/Installation that at installation time you can set the "maxvz" size.
I'll try reinstalling PVE with maxvz=100GB.
 
Ideally I would need the opposite: a "local-lvm" storage of ~100GB and "local" storage of ~1.9TB.
There are advanced options in the installer where you can specify how big "local" and "local-lvm" should be.

And with ZFS this is no problem at all, as both "local" and "local-zfs" share the full 2 TB. So you don't have to decide in advance how much space you want for your files/folders and how much for virtual disks.
 
New to Proxmox, migrating from Hyper-V. I have an Intel NUC with a single Samsung 980 Pro 2TB SSD.
I installed PVE using EXT4 following a NetworkChuck video and deleted the local-lvm storage:

From GUI:
delete Datacenter -> Storage -> local-lvm

From shell:
# lvremove /dev/pve/data

# lvresize -l +100%FREE /dev/pve/root
# resize2fs /dev/mapper/pve-root

Everything was perfect until I realized I cannot take snapshots of Windows VMs that use TPM .raw disks.

So I decided to reinstall PVE using ZFS (RAID0).
I deleted local-zfs from the GUI but when I execute a zfs list I still have /rpool/data:

# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            4.45G  1.75T   104K  /rpool
rpool/ROOT       4.45G  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1 4.45G  1.75T  4.45G  /
rpool/data         96K  1.75T    96K  /rpool/data


What is the proper way of deleting the local-zfs storage?
Did you ever figure this out? I don't see the answer here. I need to delete local-zfs from the CLI.
 
Did you ever figure this out? I don't see the answer here. I need to delete local-zfs from the CLI.
Then don't install with ZFS if you want to delete it afterwards. What you want to do makes no sense; I think there is a conceptual error in your storage plan.
 
Did you ever figure this out? I don't see the answer here. I need to delete local-zfs from the CLI.
Since then I have become a bit more familiar with Linux/Proxmox and ended up with a different storage plan.
My single-node Proxmox NUC now has two 2TB SSDs: a primary NVMe and a secondary SATA.
I installed Proxmox on the primary SSD with "local" and "local-zfs". Then I moved "local" onto the secondary SSD with the following:

Code:
From the PVE GUI:

- To initialize the disk, I created a new "directory", with the storage checkbox selected, using the secondary disk (/dev/sda1)
- Then I deleted the newly created "directory" and the associated storage configuration but without wiping the disk

From the PVE Shell:

# blkid

/dev/sda1: UUID="<UUID_CODE>" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="<PARTUUID_CODE>"

# nano /etc/fstab

UUID=<UUID_CODE> /var/lib/vz ext4 defaults 0 0

# rm -rf /var/lib/vz/*
# mount -a

Now I have a 2TB "local-zfs" on the primary SSD and a 2TB "local" on the secondary/slower SSD.

I then created a SAMBA container and mounted my library on "local-zfs":

Code:
Datacenter -> <NODE_PVE> -> <SAMBA_CT> -> Resources  -> Add -> Mount Point: 0, local-zfs, 2048, /mnt/library

this "library" mount shares the same space and I see it in SAMBA as a 2T volume

Now my "local_zfs" holds VMs, CTs and "library", while "local" holds ISO images and backups/snapshots.

I like the idea of having two different SSDs. The chances of both failing at the same time are lower.
If the secondary fails, I still have a fully functional PVE node. If the primary fails, I'll restore the latest backup from the secondary SSD.
For a home installation IMHO it's good enough.

Hope this helps.
 
Then don't install with ZFS if you want to delete it afterwards. What you want to do makes no sense; I think there is a conceptual error in your storage plan.
I am new to Proxmox and Linux. I didn't know you could choose not to install it during setup. I have two SSDs set up with RAID1. I don't like the idea of having both a local-zfs and a local; I would rather just have everything in one place. I watched a video on NetworkChuck's YouTube channel and he said to delete the local-zfs because it takes up tons of storage just for ISOs. I am sure there is a conceptual error, as I am new. Maybe you could better explain this to me?
 
"local" is a filestorage and therefore for your files like ISOs, templates, backups, ... ."local-zfs" is a block storage and for virtual disks of your VMs/LXCs. Removing "local-zfs" would mean you aren't storing virtual disks as block devices anymore which will waste performance. And both "local" and "local-zfs" share the same space. So there is no waste of space. Both only consume that much amount of space of the data you store on them.
Store nothing on "local-zfs" and 100% of the space will be available for "local".
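To illustrate, here is roughly what a default /etc/pve/storage.cfg looks like after a ZFS install (a sketch; dataset names and content types may differ on your system):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
```

Both entries sit on the same pool, which is why their free space is shared.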
 
