Hello everyone,
I'm wondering how I should attach my drives for NAS duty to a VM. I have two options in mind.
The VM will run services like Samba, Emby, MiniDLNA, Radarr, etc.
The drives will be set up as a ZFS RAID on the host.
Option 1
Attach a new virtual disk to the VM; the disk then gets formatted ext4 in the guest.
Less flexible, and I'm not sure ZFS still offers all of its benefits when it's only backing a block device (the host still gets checksumming and snapshots at the zvol level, but the guest's ext4 can't use them directly).
Overhead is probably much lower than Option 2.
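For reference, a minimal sketch of Option 1, assuming the host uses libvirt/QEMU (pool name "tank", VM name "nas", and the 500G size are just placeholders; a different hypervisor would use its own attach mechanism):

```sh
# On the host: create a zvol on the existing pool (names/size are examples)
zfs create -V 500G tank/vm-nas-data

# Attach it to the libvirt guest "nas" as a virtio block device
virsh attach-disk nas /dev/zvol/tank/vm-nas-data vdb --persistent

# Inside the guest: format and mount it
mkfs.ext4 /dev/vdb
mount /dev/vdb /srv/media
```

ZFS snapshots and send/receive still work against the zvol on the host, so backups stay on the host side even though the guest only sees plain ext4.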
Option 2
An NFS share on the host, mounted inside the guest.
Allows way more flexibility than Option 1.
Adds a fair amount of overhead? (needs an NFS service on the host, and disk access won't be direct; there's network/TCP overhead instead)
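A minimal sketch of Option 2, assuming a dataset at /tank/media on the host and example IPs (192.168.1.10 for the host, 192.168.1.50 for the guest; adjust for your network):

```sh
# On the host, in /etc/exports (restrict to the guest's IP):
# /tank/media 192.168.1.50(rw,sync,no_subtree_check)

# Re-export and make sure the NFS server is running
exportfs -ra

# In the guest: mount manually, or via an fstab entry like
# 192.168.1.10:/tank/media /srv/media nfs defaults,_netdev 0 0
mount -t nfs 192.168.1.10:/tank/media /srv/media
```

With this setup the host keeps full native ZFS access to the files (snapshots, per-dataset properties, easy resizing), which is where the flexibility comes from.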
My guess is that directly attached storage will perform much better than a network mount.
What are your opinions and/or experiences with this?
I'll do some tests once I find time for it in the next couple of days.
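When I get to testing, something like fio run against both mount points should give comparable numbers (paths and sizes below are just examples):

```sh
# Sequential write throughput -- run once on the ext4 disk, once on the NFS mount
fio --name=seqwrite --rw=write --bs=1M --size=2G \
    --direct=1 --directory=/srv/media

# Random 4k reads, which usually show network overhead the most
fio --name=randread --rw=randread --bs=4k --size=1G --runtime=60 \
    --direct=1 --directory=/srv/media
```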