Local ZFS versus NFS share for VMs/CTs

Aug 4, 2020
At the moment I run a hyperconverged Proxmox server hosting VMs and CTs on a local ZFS mirror (SSD). Since I need additional machines (needing more RAM, which is not possible on this server), I'm thinking about buying a new server for Proxmox only, plus a second one serving an NFS share (TrueNAS SCALE with an SSD mirror) over a 10G connection.
Can anyone give me information about the speed difference when using an NFS share for VM/CT storage compared to the local ZFS?
 
A local ZFS pool will probably always be faster. Benchmarks from someone else won't tell you much, though, because everyone's hardware/network environment is different.
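If you want numbers for your own hardware rather than guesses, a short fio run against both datastores gives a rough A/B comparison. Here is a minimal sketch (the mount points are assumptions, so point them at your actual ZFS dataset and NFS mount; fio must be installed on the host):

```python
#!/usr/bin/env python3
"""Rough A/B comparison of two datastore paths using fio.

A sketch only: the paths in TARGETS are hypothetical -- replace them
with your local ZFS dataset and your NFS/CIFS mount point.
"""
import json
import subprocess

# Hypothetical mount points -- adjust to your environment.
TARGETS = {
    "local-zfs": "/rpool/data/bench",
    "nfs-share": "/mnt/pve/truenas-nfs/bench",
}

def run_fio(directory: str) -> dict:
    """Run a short random-write job and return fio's JSON result."""
    out = subprocess.run(
        [
            "fio",
            "--name=bench",
            f"--directory={directory}",
            "--rw=randwrite",
            "--bs=4k",
            "--size=1G",
            "--fsync=1",        # fsync each write, so we don't just benchmark the page cache
            "--runtime=30",
            "--time_based",
            "--output-format=json",
        ],
        check=True,
        capture_output=True,
        text=True,
    )
    return json.loads(out.stdout)

for label, path in TARGETS.items():
    write = run_fio(path)["jobs"][0]["write"]
    # fio reports bandwidth in KiB/s.
    print(f"{label}: {write['bw'] / 1024:.1f} MiB/s, {write['iops']:.0f} IOPS")
```

Small synced random writes are usually where NFS falls furthest behind local storage (extra round trips plus sync semantics), so that case is worth testing specifically; sequential reads over 10G will look much closer.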
 
I am currently playing with this. I have three PBS servers running in a Proxmox cluster of 3 machines (Ryzen 3900X, HP z440, HP z840). The PBS machines are as follows:
  1. HP z440 with a local ZFS raidz1 of 3 disks.
  2. HP z840 with a TrueNAS CIFS share.
  3. A TrueNAS "instances" VM with a ZFS zvol on virtio-blk.

I have about 30 VMs that don't change much, so the actual writes are minimal.

I expected sizeable differences between the PBSes, but as far as time is concerned they are all about the same. The longest job is the Ryzen system, which has 20 VMs and containers; against the three PBS targets it took 22:27, 22:41, and 22:11. Each run reads about 380 GB of data but writes very little, since the guests haven't changed much.

So as far as time is concerned, I have not seen any benefit. I don't know about the other advantages (such as ZFS deduplication).

The reason I'm testing is that I want to take down the HP z440 box and move its drives to another machine, and I don't know whether to use CIFS/NFS or a TrueNAS VM with its own zvol.

The advantage of CIFS/NFS would be that I can see the files, which gives me some confidence that I could reinstall PBS and reuse the backups (I have heard this is very doable, but I have not actually tried it). I'm also not sure how I would get at the TrueNAS zvol if that PBS went down (probably possible, but I have not tried that either).
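For what it's worth, that file-level visibility is real: a PBS datastore is just a directory tree, with snapshots under vm/<vmid>/<timestamp>/ and ct/<vmid>/<timestamp>/ next to the shared .chunks directory. A minimal sketch that inventories what's on the share (the mount point is an assumption; and since the actual data lives in .chunks, this only tells you what exists, not that it restores cleanly):

```python
#!/usr/bin/env python3
"""List PBS snapshots visible on a file-level (CIFS/NFS) datastore.

A sketch only: /mnt/pbs-datastore is an assumed mount point for the
share. PBS lays out snapshots as <type>/<vmid>/<timestamp>/ directories
alongside the .chunks directory that holds the deduplicated data.
"""
from pathlib import Path

DATASTORE = Path("/mnt/pbs-datastore")  # assumed mount point

for group_type in ("vm", "ct"):
    base = DATASTORE / group_type
    if not base.is_dir():
        continue
    for vmid_dir in sorted(base.iterdir()):
        if not vmid_dir.is_dir():
            continue  # skip stray files such as the per-group owner file
        snapshots = sorted(p.name for p in vmid_dir.iterdir() if p.is_dir())
        print(f"{group_type}/{vmid_dir.name}: {len(snapshots)} snapshot(s)")
        for snap in snapshots:
            print(f"  {snap}")
```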

BTW, my TrueNAS runs on the HP z840, so that PBS is actually doubly virtualized: HP z840 --> TrueNAS --> PBS. I sort of expected this one to be the slowest, but it is marginally faster than the others.

I would like to hear if anybody else sees advantages of ZFS over CIFS/NFS for PBS datastores.
 
Details
VMID Name Status Time Size Filename
101 rye-vmint22 ok 1m 1s 80 GiB vm/101/2025-05-28T10:00:04Z
102 homepage ok 23s 2.34 GiB ct/102/2025-05-28T10:01:05Z
104 rye-pihole ok 1m 45s 2.593 GiB ct/104/2025-05-28T10:01:28Z
105 rye-unifi ok 1m 54s 2.902 GiB ct/105/2025-05-28T10:03:13Z
106 rye-haos ok 2m 52s 32.001 GiB vm/106/2025-05-28T10:05:07Z
107 debian ok 1m 57s 3.906 GiB ct/107/2025-05-28T10:07:59Z
110 rye-watcharr ok 14s 2.562 GiB ct/110/2025-05-28T10:09:56Z
111 rye-lmde ok 1m 24s 120.001 GiB vm/111/2025-05-28T10:10:10Z
114 rye-win11 ok 2m 2s 120.004 GiB vm/114/2025-05-28T10:11:34Z
120 rye-ngproxy ok 2m 40s 2.157 GiB ct/120/2025-05-28T10:13:36Z
123 rye-deb12 ok 14s 2.474 GiB ct/123/2025-05-28T10:16:16Z
125 rye-jyp ok 1m 18s 1.858 GiB ct/125/2025-05-28T10:16:30Z
127 rye-onedev ok 1m 2s 1.676 GiB ct/127/2025-05-28T10:17:48Z
129 rye-cockpit ok 33s 961.074 MiB ct/129/2025-05-28T10:18:50Z
201 rye-peanut ok 1m 51s 3.613 GiB ct/201/2025-05-28T10:19:23Z

Total running time: 21m 11s
Total size: 379.026 GiB
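As a quick sanity check on those totals, dividing the reported size by the wall-clock time gives the effective rate; since almost nothing changed, this is essentially a read/verify rate rather than write throughput:

```python
# Effective rate for the run above: 379.026 GiB over 21m 11s.
total_gib = 379.026
total_seconds = 21 * 60 + 11  # 21m 11s
print(f"{total_gib * 1024 / total_seconds:.0f} MiB/s")  # prints 305
```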