Can I create a ZFS volume in Proxmox, then mount and format it on my VM?

adowson

New Member
Jun 15, 2020
I have a ZFS pool that has been migrated to Proxmox. I have a couple of VMs running, and I want them all to have access to the zpool. However, I want to manage folder permissions on the VM side, since it's much easier to control access on a per-user basis there.

Can I create a logical block volume with "zfs create -V 1tb zpool/volume" and pass it through to an existing VM as secondary storage to be formatted?
 
Of course. If the ZFS pool is added as a storage in PVE (Datacenter -> Storage), you can simply add a hard drive to your VM (Hardware -> Add -> Hard Disk) and select your ZFS pool as the backing storage.

In the background, that will run essentially the command you gave to create the volume.
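For reference, under the hood it boils down to something like this (the VM ID, storage name and disk numbering here are just examples; PVE picks them automatically):

Code:
# PVE creates a zvol on the pool for the new disk ...
zfs create -V 1T zpool/vm-100-disk-0
# ... and attaches it to the VM, roughly equivalent to
qm set 100 --scsi1 zpool:vm-100-disk-0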
 

I managed to figure that out; sorry, I forgot to update.

However, within Proxmox there is now /dev/zd0, which Proxmox has mapped to the Debian VM as /dev/sdb.

How do I format the volume? Do I do it in the VM or in the Proxmox hypervisor? On the VM there is /dev/sdb, but there's no /dev/sdb1 to actually format. I've tried adding a GPT partition table, but there's still nothing for me to format.
 
In general, never touch VM disks from PVE. Only ever let the VM do that.

If you want to install Debian, just let the installer do its thing and treat the entire disk as empty. There is no /dev/sdb1 because nothing has been created on it yet; /dev/zd0 is completely empty. To the VM that zvol is simply a disk; it doesn't know anything about ZFS.
 

I've only touched it on the VM end; I just wasn't sure if I was doing something wrong. What I'm trying to do is use the ZFS-backed volume as a secondary storage device (for data) in the VM, which I'll probably mount at /mnt. The VM itself is already installed on the primary SSD.
 
Well yes, then you just format it like any other disk from within your VM. If you're not familiar with GPT and partitions, I recommend 'gparted'; it's a visual tool to help set up and format disks. Run it inside your VM to create an empty GPT, then a partition, and finally format that partition with ext4 or whatever.
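If you'd rather do it on the command line inside the VM, the same steps look roughly like this (assuming the disk shows up as /dev/sdb, as above):

Code:
# create an empty GPT label and one partition spanning the whole disk
parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
# format the new partition and mount it
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt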
 

Huh, that's exactly what I did, but I was under the impression that once you added a GPT partition table it would provide a /dev/sdb1 to put the ext4 on.

I'll give gparted a try; I think I'm having a layer 8 fault :).
 
You don't need a partition to put a filesystem on it. Just format /dev/sdb directly and mount it.
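Inside the VM that would be just (again assuming the disk is /dev/sdb; double-check with lsblk first):

Code:
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt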

Realised why I was getting confused. I was looking for a UUID to mount it, but those are only generated for partitions, so a partitionless disk only shows up as /dev/sdb.

However, I'm now running into a different problem. I'm trying to expand the ZFS volume for the VM by 5837GiB. There is 10025GiB free space, but when I attempt to increase the size by 5837GiB it says that the "size is greater than available space".

Can't tell if it's a weird ZFS quirk or a Proxmox one.
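In case it matters, the CLI equivalent of the resize I'm attempting would be roughly this (VM ID and disk slot are just examples):

Code:
qm resize 100 scsi1 +5837G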
 
Realised why I was getting confused. I was looking for a UUID to mount it, but those are only generated for partitions, so a partitionless disk only shows up as /dev/sdb.

UUIDs are generated for filesystems; you mean PARTUUIDs, which are generated for partitions.
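Once the filesystem exists you can read its UUID with blkid and mount it via /etc/fstab even without a partition; a minimal sketch (the UUID below is a placeholder):

Code:
# show the filesystem UUID of the whole-disk filesystem
blkid /dev/sdb
# then an /etc/fstab entry along the lines of
# UUID=<uuid-from-blkid>  /mnt  ext4  defaults  0  2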

There is 10025GiB free space

According to whom? zfs or zpool? Please post the output of zpool list and zpool status.
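For comparison it helps to look at both, since they report different things: zpool list shows raw capacity including parity, while zfs list shows usable space after raidz overhead:

Code:
zpool list zpool
zfs list -o space zpool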
 

Zpool. From the Proxmox GUI (Datacenter -> zpool).


Output of the commands requested:

Code:
root@pve:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zpool    29T  13.5T  15.5T        -         -     2%    46%  1.00x    ONLINE  -
root@pve:~# zpool status
  pool: zpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0 days 05:14:22 with 0 errors on Sun Jun 14 05:38:23 2020
config:

    NAME                                          STATE     READ WRITE CKSUM
    zpool                                         ONLINE       0     0     0
      raidz2-0                                    ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K0JVCS8J  ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2KU8ALK  ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K3NP6P6U  ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K4TREF20  ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K5SN0Z8S  ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VV8365  ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6VD8UX3  ONLINE       0     0     0
        ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7CKDRLK  ONLINE       0     0     0

errors: No known data errors
 
Yeah, the culprit is most probably raidz2, which just uses too much space. 5TB instead of 5.7TB should work for your expansion.
In the worst case, with raidz2 and volumes at a 16k volblocksize, you end up storing twice the amount of data.
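If you want to see how much the zvol actually occupies compared to its nominal size, something like this will show it (the dataset name is a guess; yours will differ):

Code:
zfs get volsize,volblocksize,used,refreservation zpool/vm-100-disk-0

A larger volblocksize (the "Block Size" field of the ZFS storage under Datacenter -> Storage) reduces the raidz padding overhead, but it only applies to zvols created after the change.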
 

Managed to expand it to about 950GB; there's about 350GB left now. I had heard of the volblocksize attribute but mistakenly thought it only really influenced performance. It makes sense though since I'm guessing there's extra padding for the zvol.

I think I might go back to the drawing board on this one; the aim was to manage the file permissions on the VM itself.
 
I had heard of the volblocksize attribute but mistakenly thought it only really influenced performance. It makes sense though since I'm guessing there's extra padding for the zvol.

Yes, you're not alone. A lot of people (me included) had/have problems with raidz2, which is why we generally recommend a raid10 equivalent (striped mirror vdevs) instead. Yes, it's not the same level of failure tolerance, but AFAIK the probability of two disks in the same mirror failing is lower than that of any two random disks failing.
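For reference, a raid10 equivalent is just a pool of striped mirror vdevs; with hypothetical disk names, creating one looks like this (it wipes those disks, so purely as an illustration):

Code:
zpool create tank \
  mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B \
  mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D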