I'm setting up a Proxmox VE server primarily for running an Ubuntu Server VM with several Docker services / containers for file-/media-hosting, while having the added flexibility of being able to run additional VMs / LXC containers if required.
The thought is to install Proxmox on two regular SSDs, use two M.2 disks for VMs, and 4 x 8TB HDDs for storage. All three sets of disks will be mirrored using ZFS. I will make use of iGPU / QuickSync passthrough for hardware transcoding in Plex. I will create a zpool with a dataset structure for the media / file storage on the storage drives and pass the complete pool to the Ubuntu Server VM for use by the various Docker services.
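Concretely, I'm imagining something like this for the storage pool, a stripe of two mirrors with a few datasets on top (pool name, dataset names, and disk IDs are just placeholders):

```shell
# 4 x 8TB HDDs as two mirrored vdevs striped together (disk IDs are placeholders)
zpool create tank \
  mirror /dev/disk/by-id/ata-HDD_1 /dev/disk/by-id/ata-HDD_2 \
  mirror /dev/disk/by-id/ata-HDD_3 /dev/disk/by-id/ata-HDD_4

# Dataset structure for the Docker services to use
zfs create tank/media
zfs create tank/files
```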
What I could really use some input on is how to do the resource allocation for the VM, i.e. how much memory and how many CPU cores/sockets I should assign. The main purpose of the server is the Docker file-/media-hosting containers (Nextcloud, Plex, Calibre, Radarr, etc.), so I would like them to have as much as possible.
Particularly with regard to memory, I am uncertain whether I should assign most of it to the VM or "leave it for the host", since I do not know whether it is the VM or the host that will use it for ZFS. I currently have 32GB of RAM (planning to expand to 64GB later). The common recommendation for ZFS is 4GB + 1GB per TB of raw disk space, which in my case totals about 38GB, so I'm a little short for now. In any case I imagine the host (Proxmox) handles the root and VM drives and will need some "ZFS memory" for those, but which side needs the "ZFS memory" for the storage drives? I hope the question makes sense.
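For reference, here is how I arrived at the ~38GB figure, plus the knob I understand can cap the ARC on the host if the host ends up owning the storage pool (the 34TB raw total and the 16GiB cap are my assumptions):

```shell
# Rule of thumb: 4 GB base + 1 GB per TB of raw pool capacity
RAW_TB=34                   # 4 x 8 TB HDDs plus ~2 TB across the SSD/M.2 mirrors (assumed)
echo "$((4 + RAW_TB)) GB"   # -> 38 GB

# If the host owns the pool, the ARC can be capped so the VMs keep the rest,
# e.g. 16 GiB (16 * 1024^3 = 17179869184 bytes) via the OpenZFS module option:
# echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
# update-initramfs -u   # takes effect on next boot
```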
Regarding CPU assignment, is it correct that the core count assigned to a VM does not really need to reflect the actual number of physical cores, but acts more like a balancing number for how much each VM is prioritized under heavy load? If so, could I assign all the cores for now? And what about sockets in that case?
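For context, I gather the relevant knobs on the Proxmox side would be something like the following (the VMID and values are placeholders, and my understanding of the semantics may be off, which is partly what I'm asking about):

```shell
# Hypothetical VMID 100: as I understand it, --cores is a hard cap on vCPUs
# the VM can use, while --cpuunits is the relative scheduling weight that
# decides how CPU time is shared under contention
qm set 100 --sockets 1 --cores 8 --cpuunits 1024
```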
I'd appreciate any input and feedback on my questions, as well as comments on any other aspect of my setup.