background: I am working towards migrating approx 12 clients from ESXi (the straight-up free version) to proxmox. I have been using ESXi, and ghettoVCB, to read and write to NFS, hosted on ZFS provided by FreeBSD, for close to 20 years (along with all the other stuff you can do with zfs). So I have a lot of very-effective ways of doing things that no longer work quite the same way. My experience with zfs on linux has been good, but pretty much limited to shared NFS storage for home (PVR, books, music, ad-hoc backups) and zfs send/receive operations for a colleague.
So - I've got a Proxmox VE 8.3 Community Subscription up and running on my test bed (dual-CPU Xeon, 256 GB RAM, no TPM). My plan (following long practice) is to have the OS boot from a boot mirror (a pair of 256 GB Samsung SSDs). I performed a default install from the ISO to the two 256 GB SSDs, choosing ZFS RAID1 at the appropriate moment during the install.
I also have (for my testbed) a RAIDZ1 pool with a ZIL partition on a separate SSD. For clients these will be enterprise-grade SSDs; for this part of the learning curve they are a motley collection of 1 TB and 4 TB consumer SSDs, plus a never-used 120 GB SSD (for the ZIL and swap). This zpool is now set up, using the command line, and visible in the webUI (Server View -> Datacenter -> pve1 -> Disks -> ZFS -> datapool0). I have not yet created a dataset on this pool, but assume it will be called datastore0.
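For what it's worth, the step I have in mind is roughly this (dataset and storage names are just my planned ones from above, not anything that exists yet):

```
# create the dataset that will hold guest disks (datastore0 is my assumed name)
zfs create datapool0/datastore0

# register it with Proxmox as a zfspool storage, rather than hand-editing storage.cfg
pvesm add zfspool datastore0 --pool datapool0/datastore0 --content images,rootdir --sparse 1
```

(As I understand it, pvesm writes the same kind of entry into /etc/pve/storage.cfg that the installer does, so this should be equivalent to editing the file by hand - corrections welcome.)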
So, in the docs I read that
The installer automatically partitions the disks, creates a ZFS pool called rpool, and installs the root file system on the ZFS subvolume rpool/ROOT/pve-1.
Another subvolume called rpool/data is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in /etc/pve/storage.cfg:
Code:
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
Now, obviously, I'm thinking I can edit this file and replace the second line of that entry -- pool rpool/data -- with the new zpool and dataset on my large-enough RAIDZ1 pool.
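Concretely, I'm picturing the entry ending up something like this (datapool0/datastore0 being my assumed names from above; an alternative would presumably be to leave local-zfs alone and add a second zfspool entry under its own storage ID):

```
zfspool: local-zfs
	pool datapool0/datastore0
	sparse
	content images,rootdir
```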
I'd also like to zfs destroy rpool/data, since it is (AFAIK) completely redundant, and I really dislike finding unused defaults lying about to trip up future me, or my replacement.
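If rpool/data really can go, my rough plan for that part would be (assuming no guests have been created on local-zfs yet):

```
# remove the storage definition first, so Proxmox stops referencing it
pvesm remove local-zfs

# then destroy the now-unreferenced dataset
# (zfs destroy refuses if the dataset still has children or is busy, which is a useful safety net)
zfs destroy rpool/data
```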
So -- any guidance as to the pitfalls, other ways to do this, etc. would be much appreciated, along with pointers to high-value reference sections of the docs.