Cannot create VM on ZFS pool: only local storage shows up as an option

onoxsis

New Member
Sep 12, 2022
Hi, I am very new to Proxmox and have the following setup:

2x 60GB enterprise SSDs in ZFS RAID 1, where Proxmox is installed; these show up as local and local-zfs, and I can create VMs there without an issue. The problem, of course, is that there is very little space. So I added 2x 900GB 15k SCSI drives in a ZFS RAID 1, intending to install my VMs on them. However, after creating the pool and selecting Create Virtual Machine, the storage dropdown only lets me choose "local"; it will not let me select my ZFS pool.

Output of storage config:

dir: local
	path /var/lib/vz
	content backup,vztmpl,iso

zfspool: local-zfs
	pool rpool/data
	content images,rootdir
	sparse 1

zfspool: Storage1
	pool Storage1
	content images,rootdir
	mountpoint /Storage1
	sparse 0
 

Attachments

  • datacenterview.jpg (119.4 KB)
  • storageoptions.jpg (46.4 KB)
That is where you select the storage to load your ISO from, not the storage where you store the virtual disk.
 
Well, now I feel very stupid. I found the issue: the version of Chrome I was using had an extension that was conflicting with the dropdown list and not allowing me to see any choice but local. I can now select my ZFS pool for storage and it allows me to save.

Thanks so much. One other question: how do I upload ISOs to this ZFS pool? It says it only allows Disk image and Container, with no option to store ISO images.
 
That "Storage1" is a block storage, so it can't store any files. If you want to store files like ISOs, backups, LXC templates and so on on top of that ZFS pool, you need to create a dataset first and then create a Directory storage pointing to the mountpoint of that dataset.

Create a dataset:
zfs create Storage1/ISOs
Enable zstd compression to waste less space as ISOs are usually well compressible and not accessed that often:
zfs set compression=zstd Storage1/ISOs
Prevent unnecessary writes by enabling relatime:
zfs set relatime=on Storage1/ISOs

Create a Directory storage for ISOs:
pvesm add dir Storage1_ISOs --content iso --is_mountpoint yes --shared 0 --path "/Storage1/ISOs"
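If the pvesm command succeeds, an entry roughly like the following should appear in /etc/pve/storage.cfg (a sketch based on the flags above, not actual captured output):

```
dir: Storage1_ISOs
	path /Storage1/ISOs
	content iso
	is_mountpoint yes
	shared 0
```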
 
Thank you very much, this worked perfectly :)
I have a Dell M1220 full of 24x 300GB SCSI drives. If I wanted a quick way to remove all partitions from these using the terminal, what command would that be? Going through fdisk one at a time is very painful. And last, if I wanted to set up ZFS across all 24x 300GB drives for the fastest speed with redundancy for VMs, what would you recommend? Something like RAID 10?
 
Oops, forgot to include the screenshot where you can see all the partitions for each drive. It attempted to create ZFS for this PERC H810 controller connected to this array, but it failed through the GUI; not sure if it was because of so many drives?
 

Attachments

  • dellstoragearragy.jpg (308.6 KB)
Thank you very much, this worked perfectly :)
I have a Dell M1220 full of 24x 300GB SCSI drives. If I wanted a quick way to remove all partitions from these using the terminal, what command would that be? Going through fdisk one at a time is very painful.
Not sure. But if fdisk is too tedious, you can also wipe them using the PVE webUI at YourNode -> Disks -> Select a disk -> Wipe Disk.
Or you create the ZFS pool using the CLI with the "-f" flag, which forces zpool to use (and partition) the disks even if they already contain data. Then you don't need to wipe them first.
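If you do want to clear the old signatures from all disks at once from the terminal, one common approach is a loop over wipefs. The device range below is only a guess for 24 disks; verify your actual device names with lsblk before running anything destructive:

```shell
# DANGER: removes all filesystem/RAID/partition-table signatures from each disk.
# /dev/sd{b..y} is a placeholder covering 24 disks (sdb through sdy);
# check lsblk first and adjust the range to your real devices.
for disk in /dev/sd{b..y}; do
    wipefs -a "$disk"
done
```

The brace expansion `{b..y}` is what makes this cover exactly 24 devices without typing each name.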
And last, if I wanted to set up ZFS across all 24x 300GB drives for the fastest speed with redundancy for VMs, what would you recommend? Something like RAID 10?
Yes. For best performance, a raid10 of 12x 2-disk mirrors. For better redundancy, a raid10 of 8x 3-disk mirrors. Only the first one can be created using the webUI.
 
OK, I was afraid it would have to be done that way; that will take a lot of time for so many disks. Is there not a nice terminal command that would do the same thing? As well as a terminal command to create the raid10 of 12x 2-disk mirrors?
Sorry again, still a newbie at this, but I really appreciate your help.
 
Is there not a nice terminal command that would do the same thing? As well as a terminal command to create the raid10 of 12x 2-disk mirrors?
Code:
zpool create -f -o ashift=12 NameOfthePool mirror /dev/disk/by-id/Your1stDisk /dev/disk/by-id/Your2ndDisk mirror /dev/disk/by-id/Your3rdDisk /dev/disk/by-id/Your4thDisk mirror /dev/disk/by-id/Your5thDisk /dev/disk/by-id/Your6thDisk ... mirror /dev/disk/by-id/Your23rdDisk /dev/disk/by-id/Your24thDisk

zfs create NameOfthePool/data

pvesm add zfspool NameOfYourStorage --blocksize 64K --content images,rootdir --pool NameOfthePool/data --sparse 1 --mountpoint /NameOfthePool/data

Also keep in mind that a ZFS pool shouldn't be filled to more than 80%, or it will become slow. So you might want to set a quota with something like zfs set quota=2880G NameOfthePool so you can't completely fill it up by accident.
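For reference, this is my reading of where the 2880G figure comes from, assuming 12 usable two-disk mirrors of 300GB each:

```shell
# 12 two-disk mirrors of 300GB disks give roughly 3600GB of usable space
# (each mirror contributes one disk's worth of capacity).
usable_gb=$((12 * 300))
# 80% of that is the suggested quota, leaving headroom so the pool stays fast.
quota_gb=$((usable_gb * 80 / 100))
echo "${quota_gb}G"   # prints 2880G
```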
 
Thank you very much, I will give this a shot.
 
In case all your disks use 512B physical sectors, you might also want to consider using ashift=9 and an 8K blocksize. But then you won't be able to add or replace disks with 4K physical sectors.
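To connect the two numbers: ashift is the power of two of the sector size ZFS assumes for the pool, so ashift=9 corresponds to 512B sectors and ashift=12 to 4K sectors. A quick sketch (the lsblk column names are the usual util-linux ones; check your version):

```shell
# ashift is the log2 of the assumed sector size:
echo "ashift=9  -> $((1 << 9)) byte sectors"    # prints 512
echo "ashift=12 -> $((1 << 12)) byte sectors"   # prints 4096
# To see each whole disk's logical/physical sector size, run on the PVE node:
# lsblk -d -o NAME,LOG-SEC,PHY-SEC
```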
 
