Resource allocation for Proxmox running Ubuntu Server VM on ZFS

norsemangrey

Member
Feb 8, 2021
I'm setting up a Proxmox VE server primarily to run an Ubuntu Server VM with several Docker services / containers for file and media hosting, while keeping the flexibility of being able to run additional VMs / LXC containers if required.

The plan is to install Proxmox on two regular SSDs, use two M.2 disks for VMs, and 4 x 8TB HDDs for storage. All three sets of disks will be mirrored using ZFS. I will make use of iGPU / QuickSync passthrough for hardware transcoding in Plex. I will create a zpool with a dataset structure for the media / file storage on the storage drives and pass the complete pool to the Ubuntu Server VM for use by the various Docker services.
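
For illustration, the pool and dataset layout I have in mind would look something like this (the pool name tank, the dataset names, and the disk IDs are just placeholders, assuming the four HDDs go into two mirrored pairs):

    # Create a pool of two mirror vdevs from the four 8TB disks
    zpool create -o ashift=12 tank \
      mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 \
      mirror /dev/disk/by-id/ata-HDD3 /dev/disk/by-id/ata-HDD4

    # Dataset structure for the various services
    zfs create tank/media
    zfs create tank/files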

What I could really use some input on is how to do the resource allocation for the VM, i.e. how much memory and how many CPU cores/sockets I should assign. The main purpose of the server is the Docker file-/media-hosting containers (Nextcloud, Plex, Calibre, Radarr, etc.), so I would like them to have as much as possible.

Particularly with regard to memory, I am uncertain whether I should assign most of it to the VM or "leave it for the host", since I do not know whether it is the VM or the host that will use it for ZFS. I currently have 32GB of RAM (planning to expand to 64GB later). The recommendation for ZFS is 4GB + 1GB per TB of raw disk space, which in my case totals about 38GB (4GB + 32GB for the 4 x 8TB HDDs, plus a few GB for the SSDs and M.2 disks), so I'm a little on the short side for now. In any case I imagine the host (Proxmox) handles the root and VM drives and will need some "ZFS memory" for those, but who needs the "ZFS memory" for the storage drives? Hope the question makes sense.

Regarding CPU assignment, is it correct that the core assignment does not really need to reflect the actual number of physical cores, but is more of a weighting for how much each VM is prioritized under heavy load? If so, could I assign all the cores for now? And what about sockets in that case?

Appreciate any input and feedback on my questions or comments to any other aspects of my setup.
 
Since you only want to run Docker on your VM, you should probably consider switching to an LXC container instead of a VM.

Yes, a VM is the recommended way to run Docker (stability etc.), but if you want maximum performance, you should switch to LXC.
One caveat: I don't know how to get Docker running with ZFS as the filesystem if you use LXC. That's not an issue with a VM.
Even when I used the Docker ZFS storage driver, it didn't work on my side. But there is surely a way, and someone here should know it.

If you use a VM and you only have 32GB of RAM, you can limit the ZFS ARC to 8GB. But I wouldn't go below that.
Keep in mind that even if you have a ton of Docker instances, they don't consume that much RAM. So 32GB of total RAM, or 16GB for the VM, should be more than enough.
ZFS itself uses the RAM to cache blocks (not files) of any ZFS volume. So it's highly efficient, and it doesn't matter what filesystem your VM uses.
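
A quick sketch of how that cap can be applied at runtime on the Proxmox host (takes effect immediately, but is lost on reboot; see further down for the persistent module options):

    # Show the current ARC size and limits
    arc_summary | head -n 25

    # Cap the ARC at 8 GiB (8 * 1024^3 bytes) until the next reboot
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
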
For CPU assignment, you could try enabling NUMA.
NUMA itself, as far as I know, doesn't really need a multi-CPU platform; it will just try to spread the workload better between the cores.
If you want to assign the maximum possible processing power to your VM, don't assign all cores.
I can't tell you whether assigning only 8/16 physical cores or 12/16 leads to better performance; that probably needs to be benchmarked. But never assign all cores to a VM; that will definitely lead to less processing power. I would recommend assigning either 8/16 (physical cores only) or more, but not all. If your CPU has 4 cores / 8 threads, then either 4 or 6.
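
For example, something like this from the Proxmox host (the VM ID 100 is just a placeholder, assuming an 8-core/16-thread CPU and the 16GB-for-the-VM suggestion above):

    # 8 vCPUs on a single socket, NUMA enabled, host CPU type exposed to the guest
    qm set 100 --sockets 1 --cores 8 --numa 1 --cpu host

    # 16 GiB of RAM for the VM (the value is in MiB)
    qm set 100 --memory 16384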

However, probably the best hint of all:
All of this makes minimal difference. In the end it almost doesn't matter how you allocate everything; it will run fine most of the time. Just follow these two rules:
1. Give your VM at least half the cores.
2. For RAM, you can simply reduce the ZFS ARC max size and give your VM more RAM if it needs it. Even a 4GB ARC will work fine. The larger the ARC, the faster your hard drives will (maybe) be at reading, since the ARC only caches the most frequently used blocks/files.

Cheers
 
Since you only want to run Docker on your VM, you should probably consider switching to an LXC container instead of a VM.

Yes, a VM is the recommended way to run Docker (stability etc.), but if you want maximum performance, you should switch to LXC.
One caveat: I don't know how to get Docker running with ZFS as the filesystem if you use LXC. That's not an issue with a VM.
Even when I used the Docker ZFS storage driver, it didn't work on my side. But there is surely a way, and someone here should know it.
@Ramalama Many thanks for the reply. Regarding LXC, I have looked into this previously and have decided to go with a VM. On top of everything else there are just too many extra factors to consider that might mess things up. I have the following hardware, so there should be enough juice for running a full VM, I believe :)


If you use a VM and you only have 32GB of RAM, you can limit the ZFS ARC to 8GB. But I wouldn't go below that.
Is this basically restricting how much RAM is used for ZFS? How do I do this?
Keep in mind that even if you have a ton of Docker instances, they don't consume that much RAM. So 32GB of total RAM, or 16GB for the VM, should be more than enough.
ZFS itself uses the RAM to cache blocks (not files) of any ZFS volume. So it's highly efficient, and it doesn't matter what filesystem your VM uses.
So if I'm understanding you correctly, you are saying that the memory consumed for ZFS is consumed on the host side, and not in the VM on which the ZFS storage pool is mounted?
For CPU assignment, you could try enabling NUMA.
NUMA itself, as far as I know, doesn't really need a multi-CPU platform; it will just try to spread the workload better between the cores.
If you want to assign the maximum possible processing power to your VM, don't assign all cores.
I can't tell you whether assigning only 8/16 physical cores or 12/16 leads to better performance; that probably needs to be benchmarked. But never assign all cores to a VM; that will definitely lead to less processing power. I would recommend assigning either 8/16 (physical cores only) or more, but not all. If your CPU has 4 cores / 8 threads, then either 4 or 6.
Thanks for the tips.
 
You can change that [the ZFS ARC limit] easily, no need to worry.


That [the iGPU / QuickSync passthrough] is something you should worry about.

Before you do anything else: Try to get this working first.
@norsemangrey
Exactly what he said: the iGPU is a pain in the ass to pass through. You have only two options.
The best and probably easiest is to run Plex inside an LXC container; passing the iGPU through to a container is actually relatively simple (see the sketch below).
But to pass the iGPU through to a VM, you need to google and dig your way through GVT-d and GVT-g. It is not extremely complicated, but I haven't seen anyone here on the forum write that it works xD
Then again, most people don't reply when something is working, so that's probably a good sign.
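
For the LXC route, the usual sketch is to allow the DRM devices and bind-mount the host's /dev/dri into the container (assuming a privileged container; 101 is just an example container ID):

    # /etc/pve/lxc/101.conf
    # Allow access to the DRM character devices (major number 226)
    lxc.cgroup2.devices.allow: c 226:* rwm
    # Bind the host's /dev/dri (iGPU render nodes) into the container
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

Inside the container, Plex can then use the render node (typically /dev/dri/renderD128) for QuickSync as if it were on bare metal.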

About ZFS: yes, you can easily limit the size.
Basically you only need to change the ZFS ARC max size and min size. Google for it; you have to set them as module options to keep them persistent across reboots. It is well documented. I would simply set the min size to 2GB and the max size to 8GB; then the ARC will grow to at most 8GB if it needs to.

I could simply tell you where and how exactly to do it, but if you google it and find out yourself, you will be cleverer afterwards xD
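
But as a rough reference, the module options in question look like this (the 2GB min / 8GB max values from above, expressed in bytes):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_min=2147483648
    options zfs zfs_arc_max=8589934592

    # Rebuild the initramfs so the limits survive reboots, then reboot
    update-initramfs -u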

Cheers
 
