(originally posted on a Reddit thread but not getting a reply so I figured I'd newb-out here a minute)
Background:
I've been learning Proxmox basics for a while just to get comfortable with it.
I was using Unraid last year for a bunch of media and file containers alongside a Windows VM. Most of the last decade or more for me has been on Windows, and my 1990s/2000s Linux experience is definitely dated.
So far I'm pretty happy with how I've been able to set up QEMU VMs and basic ZFS usage. However, I'm still completely at zero on LXC/LXD experience.
I haven't bothered trying to set up Docker containers like I did on Unraid; I just organized my data and put it into hibernation for a while while working on this.
Goal:
I'd like to go ahead and get a storage skeleton set up that won't bite me in the future when I delve into more container work. This is just for a home server + gaming VMs; nothing mission critical aside from my personal data.
Any input on the following is welcome. I'm just trying to get a sane setup that I won't feel the need to massively rearchitect later.
My plan is to have a main Linux desktop running on one GPU while the Windows VM can spin up when needed on the second GPU, with containers running in the background.
...
Hardware:
Storage:
- 4x 8TB NAS hard drives (on LSI HBA, 2 WD Red, 2 HGST)
- 2x 1TB SATA consumer SSDs (Samsung 860 EVO)
- 1x 512GB SATA consumer SSD (Samsung 850 EVO)
- 1x 1TB consumer NVMe (Adata SX8200PNP, original revision)
In mapping out my needs, my current plan is:
- 4x NAS drives in RAIDZ ... will be used for long-term storage (media, /home *backup*, maybe snapshots intended to last more than a couple of days)
- 2x 1TB SSDs ... used for:
  + VM root disk storage (i.e., in a Win10 VM, putting C:\ here but using the NVMe for game and editing storage)
  + intake of file transfers (media, etc.) for processing prior to being dropped into the spinning-rust RAIDZ
  + Container storage
- 1x 512GB SSD ... for the Proxmox /rpool ... hear me out before decrying that this should go on the 1TB drives. My thinking is to minimize wear on the 1TB drives and rely on system backups to the RAIDZ if the drive ever needs replacing. But I'm not sold on this; it's just how I've been doing it up to now.
- 1TB NVMe ... passed through to VMs via thin storage for games, video editing, etc.
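For what it's worth, here's a minimal sketch of how the two ZFS pools in that plan might be created and registered with Proxmox. The pool names (tank, ssdpool) and the device paths are placeholders I made up; real /dev/disk/by-id/ names should be looked up first.

```shell
# Placeholder device names -- check the real ones with: ls -l /dev/disk/by-id/
# 4x 8TB NAS drives as a single RAIDZ vdev for long-term storage:
zpool create -o ashift=12 tank raidz \
    /dev/disk/by-id/ata-WDC_RED_A /dev/disk/by-id/ata-WDC_RED_B \
    /dev/disk/by-id/ata-HGST_A /dev/disk/by-id/ata-HGST_B

# 2x 1TB SSDs as a mirror for VM root disks, intake, and containers:
zpool create -o ashift=12 ssdpool mirror \
    /dev/disk/by-id/ata-Samsung_860_EVO_A /dev/disk/by-id/ata-Samsung_860_EVO_B

# Register both pools with Proxmox as VM image + container rootfs storage;
# --sparse 1 makes zvols thin-provisioned:
pvesm add zfspool tank --pool tank --content images,rootdir --sparse 1
pvesm add zfspool ssdpool --pool ssdpool --content images,rootdir --sparse 1
```

These commands are destructive to the listed disks, so obviously only a sketch, not something to paste blindly.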
...
Actual Question:
What is the best file system layout for the SSDs above?
1. I'd like to avoid massive headaches if I want to run an unprivileged LXC (or LXD) container at some point, as I definitely plan to do that, probably with a Docker instance inside it ... I know enough to worry about that, but not enough to look smart yet.
Is ZFS still a good option there (again, without headaches mapping UIDs, etc.), maybe using a directory for LXC storage, or do I need to be looking into LVM-thin or ext4?
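On the UID-mapping worry: as I understand it, the usual pattern on Proxmox is editing the container's config file under /etc/pve/lxc/. A sketch of what that looks like, with a made-up container ID of 101 and mapping container UID/GID 1000 straight through to host 1000 (e.g. to share a bind mount), while everything else keeps the standard 100000+ offset:

```
# /etc/pve/lxc/101.conf (hypothetical container ID)
unprivileged: 1
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

The host also has to permit root to delegate that ID, via a `root:1000:1` line in /etc/subuid and /etc/subgid. Whether I've got that exactly right is part of what I'm asking.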
2. For the NVMe, I'm not concerned with FULL bare-metal performance, but I don't want to waste the NVMe's speed either. What would be the suggested method?
I would like to thin-provision it so that more than one VM can have stuff on it without permanently claiming its allocated space, and I don't want to share the allocation verbatim across all VMs. (I'm planning a Linux VM for editing, with Windows only for games where I don't find Proton working well.)
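If LVM-thin turns out to be the answer here, my understanding of the setup is roughly the following; the device path, VG name, and pool name are all placeholders:

```shell
# Assumed NVMe device path -- verify with lsblk before touching anything.
pvcreate /dev/nvme0n1
vgcreate nvme_vg /dev/nvme0n1

# Carve most of the VG into a thin pool; each VM disk on it then only
# consumes real space as blocks are actually written:
lvcreate --type thin-pool -l 95%FREE -n nvme_thin nvme_vg

# Register it with Proxmox as VM disk storage:
pvesm add lvmthin nvme_fast --vgname nvme_vg --thinpool nvme_thin --content images
```

Again only a sketch; the alternative I'm weighing is just a third ZFS pool on the NVMe with sparse zvols.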
3. I believe I've read that taking a ZFS snapshot of a running VM with passed-through PCI devices requires shutting the VM down first. I'm OK with that versus running qcow2, but if there's an argument for qcow2 in my use case, now is the time for me to hear it.
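If the shutdown-first approach is indeed required, I assume it could be scripted along these lines with the qm tool (VMID and snapshot name are placeholders):

```shell
#!/bin/sh
# Sketch: cold-snapshot a PCI-passthrough VM, assuming qm behaves as I expect.
VMID=100
SNAP="nightly-$(date +%Y%m%d)"

qm shutdown "$VMID" --timeout 120   # ask the guest to shut down gracefully
qm wait "$VMID"                     # block until the VM has actually stopped
qm snapshot "$VMID" "$SNAP"         # disk-only snapshot (no RAM state)
qm start "$VMID"
```

Happy to be told there's a cleaner way.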