TL;DR: Questions at the bottom
Sidenote: I typically use Ceph for all my professional and personal Proxmox needs (other than the occasional ZFS RAID1 for the OS disks).
I have a small personal project going, and it's not going as expected at all.
Some Specs:
32 GB RAM
2x 3 TB HDD
Proxmox installed via the installer as a ZFS-based RAID0 to utilize the space fully.
Latest updates, with openvswitch.
Code:
zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      2.53T  2.74T    96K  /rpool
rpool/ROOT                 3.35G  2.74T    96K  /rpool/ROOT
rpool/ROOT/pve-1           3.35G  2.74T  3.35G  /
rpool/data                 2.52T  2.74T    96K  /rpool/data
rpool/data/vm-100-disk-1     64K  2.74T    64K  -
rpool/data/vm-1001-disk-1  2.52T  2.74T  2.52T  -
rpool/swap                 8.50G  2.75T   255M  -
Currently all required functionality sits in a single, semi-working Debian VM. That's bad for multiple reasons, but specifically because I can't set limits on a specific service's access to server resources, and because of the extreme IO delay (95% at times) I am encountering.
I'm looking to split this into 5 VMs:
- VM-1 stores media data, anywhere from 3-50 GB per file.
- VM-2 handles file uploads (upload server).
- VM-3 handles media-data cutting/compression (compression/cutting server).
- VM-4 displays and approves said compression (Plex server).
- VM-5 distributes said media data to multiple services and then deletes it from VM-2.
Question 1: Is this the best way to handle this kind of setup with regard to ZFS?
Should I use a single vDisk per VM?
Should I use multiple vDisks per VM (as in an OS disk and a data disk)?
I feel like I should use separate vDisks for OS and media data. Not sure, though.
Is NFS the best way to share the same dataset between multiple VMs?
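If the shared-dataset route is taken, a plain NFS export from the storage VM is one common approach. A minimal sketch; the export path, subnet, and IP here are assumptions, not something from my actual setup:

```shell
# On VM-1 (storage VM): export the media directory via /etc/exports
# (path and subnet are hypothetical)
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On VM-2..VM-5: mount it, e.g. via an /etc/fstab line like:
# 192.168.1.11:/srv/media  /mnt/media  nfs  defaults,_netdev  0  0
```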
Question 2: What caching mode is best to choose for the vDisks in Proxmox when using ZFS?
Direct sync, write through, write back, none?
Use IOThread?
I kind of feel I should use something like Direct Sync with iothread=on for the OS vDisks and No Cache for the data vDisks (given that the files stored and accessed on the data vDisks are larger than the server's maximum RAM).
Not sure on this.
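For reference, cache mode and IOThread are set per disk in the VM config. A sketch of what that could look like via the CLI; the VM ID, storage name, and disk names are placeholders, and note that iothread needs the virtio-scsi-single controller:

```shell
# Hypothetical VM 101: enable the per-disk IOThread-capable controller
qm set 101 --scsihw virtio-scsi-single

# OS disk with a safe cache mode, data disk with no host-side caching
qm set 101 --scsi0 local-zfs:vm-101-disk-0,cache=directsync,iothread=1
qm set 101 --scsi1 local-zfs:vm-101-disk-1,cache=none,iothread=1
```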
Question 3: How do I stop ZFS from consuming all my RAM for its caching?
I don't use dedupe; it seems useless to dedupe 5x 2 GB of Debian OS data at the expense of gigabytes worth of RAM.
ZFS probably sticks most of my RAM into ARC caching anyway (which I do not need for the large files, because a 50 GB file does not fit in 32 GB of RAM). At present it consumes 27 GB out of the 32.
I tried to limit it using
https://pve.proxmox.com/wiki/ZFS_on_Linux#_limit_zfs_memory_usage
^^ Targeting 10-12 GB: 1 GB per TB of disk space, plus 4 GB for other stuff. Not sure how that is going as of yet; will need to do more tests.
Code:
nano /etc/modprobe.d/zfs.conf

options zfs zfs_arc_min=10737418240
options zfs zfs_arc_max=12884901888

update-initramfs -u
Not sure if it is the best method, though.
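For what it's worth, the two byte values in that zfs.conf correspond to 10 GiB and 12 GiB; a quick sanity check of the conversion:

```python
# zfs_arc_min / zfs_arc_max take raw byte values; these are GiB conversions
def gib_to_bytes(gib: int) -> int:
    return gib * 1024 ** 3

print(gib_to_bytes(10))  # 10737418240 (zfs_arc_min)
print(gib_to_bytes(12))  # 12884901888 (zfs_arc_max)
```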
Question 4: How do I stop a Debian VM from also caching a data vDisk?
It looks right now as if Debian is using the 10 GB I had assigned to its VM to also cache the media data (or parts of it).
That seems redundant and counterproductive:
ZFS ARC should be used for the small files, which means they are kept in RAM twice (ZFS ARC + Debian VM RAM).
And large files probably make zero sense to cache, since a single file is most likely larger than the RAM assigned to ZFS or the Debian VM.
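On the guest side, one option for applications you control is to advise the kernel to drop a file's cached pages after reading it, via posix_fadvise. A minimal sketch; the path and chunk size are placeholders, and this is only a hint to the kernel, not a guarantee:

```python
import os

def read_without_keeping_cache(path: str, chunk_size: int = 1 << 20) -> int:
    """Read a file sequentially, then hint the kernel that its pages
    are not needed again, so a large media file does not crowd out
    more useful data in the guest's page cache. Returns bytes read."""
    total = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            chunk = os.read(fd, chunk_size)
            if not chunk:
                break
            total += len(chunk)
        # Advise: cached pages for the whole file (offset 0, length 0 = all)
        # can be dropped.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
    return total
```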
Any help appreciated.