Hi all.
I'm looking for help as I'm new to zfs.
I have a 20TB HDD that I will reformat with ZFS, aiming to store some VM disks and backups as well as my data, which is nicely split into directories like video, photo, music, documents, docker-volumes... Right now everything is under /mnt/hdd20T/data/(vidéo...)
I have a privileged LXC with a bind-mount of /mnt/hdd20T that shares everything over NFS and SMB; other LXCs also bind-mount parts of it, for example for my Plex/Jellyfin...
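For reference, this kind of bind-mount is declared with Proxmox's pct tool roughly like below (the container ID 101 is a placeholder; adjust paths and ID to your setup):

```shell
# Bind-mount the host directory into container 101 at the same path
# (mp0 is the first mount-point slot; use mp1, mp2, ... for more)
pct set 101 -mp0 /mnt/hdd20T,mp=/mnt/hdd20T
```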
I do this because I want to have USB backups. Of course my USB HDDs are smaller than 20TB, but overall I have enough space.
I dislike RAID because I don't need HA, and all of my hardware is commodity gear with less-than-perfect reliability, even if it's good enough for my homelab and NAS use.
I want my backups to be self-contained, meaning that if I take one drive and plug it into a Linux machine (or even Windows with OpenZFS for Windows), I can quickly access the single file I urgently need from my backups.
Today I do this with rsync to several targets (USB or NFS) with daily or weekly backups.
I'm considering creating a single-disk (no-redundancy) zpool on the 20TB drive, then one dataset per folder (video, photo, ...), and using the snapshot and replication (zfs send/receive) features to replace my rsync scripts whenever the backup target is also ZFS.
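If both ends are ZFS, the rsync replacement could look roughly like this (pool and dataset names here are made up; the first run sends a full stream, later runs send only the increment between two snapshots):

```shell
# One-time: full replication of the video dataset to the USB pool
zfs snapshot tank/video@2024-06-01
zfs send tank/video@2024-06-01 | zfs recv -u usbpool/video

# Later: incremental update between two snapshots
# (-i sends only the delta; -u receives without auto-mounting)
zfs snapshot tank/video@2024-06-08
zfs send -i @2024-06-01 tank/video@2024-06-08 | zfs recv -u usbpool/video
```

Incremental sends only transfer changed blocks, so they are typically much faster than rsync's file-level scan over millions of files.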
Being new to ZFS, I wanted to check here that this is a good idea, and ask whether there is any advice on how best to do this.
I will not be unplugging most of the USB drives regularly, but some of them I would have liked to keep offline. I have a few ZigBee plugs and automations that could power the drives up or down, but with these USB drives in ZFS I believe I would need to trigger some zfs commands whenever I plug a drive in: not only mount the datasets, but first import the pool. The only auto-mount solution I have found for PVE 8 is a systemd script on GitHub that claims not to work with ZFS (https://github.com/theyo-tester/automount-pve) and messes with the dependencies of zfs-import-cache and zfs-import-scan... So I think I have quite a bit of work and learning to do, and any advice is welcome at this stage.
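The manual plug/unplug dance, assuming a USB pool named usbpool, would be something like:

```shell
# After plugging the drive in: import the pool (datasets mount automatically
# by default; add -d /dev/disk/by-id if the default device scan misses it)
zpool import usbpool

# Before powering the drive down: export cleanly, so the pool is consistent
# and can be imported on any other machine
zpool export usbpool
```

Wiring those two commands into udev rules or the ZigBee automation is the part that needs care, mainly so an export always happens before power-off.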
My main concern right now is to ensure that with a single zpool I can create as many datasets as I want, with no per-dataset space constraints, and let them grow as they need, all sharing the 20TB.
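On that last point: datasets in one pool all draw from the same free space by default, with no sizes fixed up front; quotas and reservations are optional per-dataset knobs. A sketch (pool name and device ID are hypothetical):

```shell
# Create the pool on the whole disk (use the stable by-id path, not /dev/sdX)
zpool create tank /dev/disk/by-id/ata-EXAMPLE-SERIAL

# Datasets cost nothing up front and share the pool's free space
zfs create tank/video
zfs create tank/photo
zfs create tank/docker-volumes

# Optional: cap one dataset, or guarantee space for another
zfs set quota=2T tank/photo
zfs set reservation=500G tank/docker-volumes
```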