[SOLVED] Proxmox VE with NFS storage on TrueNAS - Question about disk format / snapshots / backup

MisterDeeds

Dear all

I have a 3-node cluster with shared storage mounted via NFS. So far I have used a Synology with BTRFS as the storage backend, and the VM disks were stored in qcow2 format, so I could easily create snapshots via the Proxmox interface. Backups via PBS also ran smoothly, since it could use snapshots and incremental backups.

Now I want to switch from Synology to TrueNAS SCALE, which brings ZFS as the file system. I have read that the VM disks should then be stored in raw format, but in that case the ability to create snapshots directly via Proxmox is no longer available. I also noticed on a test system that backups via Proxmox Backup Server take much longer (I suspect because it can no longer use snapshots and therefore incremental backups).

Does anyone run the same setup and have a solution for this?

Thank you very much and best regards
 
Hi,

I've been using Proxmox with TrueNAS for quite a while and it works without problems. TrueNAS uses ZFS internally, but that doesn't mean you have to consume it as ZFS from Proxmox (for example via ZFS over iSCSI). You can simply use a dataset that is exported via NFS. As you can see in https://pve.proxmox.com/wiki/Storage, you keep all features when you use the qcow2 format, which I think is the best way.
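
Just as a sketch (storage ID, VM number and path are only placeholders, not from my real setup), a qcow2 disk on such an NFS storage can be allocated and checked roughly like this:

Code:
# allocate a new 32G qcow2 image for VM 100 on the NFS storage "nfs-truenas"
pvesm alloc nfs-truenas 100 vm-100-disk-1.qcow2 32G --format qcow2

# verify the image format afterwards
qemu-img info /mnt/pve/nfs-truenas/images/100/vm-100-disk-1.qcow2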

I use the following configuration in TrueNAS:
[Screenshot: TrueNAS NFS service settings]

"Number of Servers" set to 12 (matching the 12 threads of my Ryzen 3600X) and "Enable NFSv4" checked.
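
On the Proxmox side you can double-check which NFS version was actually negotiated, for example like this:

Code:
# show mounted NFS shares including the negotiated options (look for vers=4.x)
nfsstat -m

# or list NFSv4 mounts directly
mount -t nfs4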
 
Dear crs369

Thank you for the answer! I am a bit unsure, as I have often read that qcow2 on ZFS causes performance issues (see e.g. https://forum.proxmox.com/threads/using-zfs-with-qcow2.118643/). I had previously run a Synology storage with BTRFS (also a copy-on-write file system) and that worked quickly and without problems, so I'm a little confused why so many people write that CoW on CoW is not a good idea...

And many thanks for the screenshot. Since I use TrueNAS SCALE, it looks slightly different there. I assume the thread count corresponds to the "Number of Servers" setting in TrueNAS Core.
[Screenshot: TrueNAS SCALE NFS service settings]
 
so I'm a little confused why so many people write that CoW on CoW is not a good idea...
Because CoW causes a lot of overhead, and overhead doesn't add up, it multiplies. It also really depends on the workload: a few big async reads/writes might not be as bad as a lot of small sync writes, and so on. An example with fictional numbers:
You sync-write 4K of data, and this also causes 3 copies of 4K of metadata to be written. Because it is a sync write, everything is written twice, so 8x 4K in total. Now nest the file systems: each of those 8x 4K writes again causes 8 writes, so you end up with 64x 4K writes. Nest it again and you get 512x 4K writes, and so on. This is exponential and can get really bad.
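
Just to make the exponential part explicit (same fictional factor of 8 per CoW layer as above):

Code:
# fictional amplification factor of 8 per nested CoW layer
for n in 1 2 3; do
    echo "$n layer(s): $((8**n)) x 4K writes for a single 4K sync write"
done
# 1 layer(s): 8 x 4K writes for a single 4K sync write
# 2 layer(s): 64 x 4K writes for a single 4K sync write
# 3 layer(s): 512 x 4K writes for a single 4K sync write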

It's best to test it yourself to see whether it works for your workload or not. Some people run qcow2 on top of ZFS and don't really care because their storage is fast enough and they don't see a big performance drop. Others try to avoid it because they already have massive overhead and amplifying that again would make it much worse.
 
One point that I didn't mention is the Proxmox configuration for NFSv4: in /etc/pve/storage.cfg you must set the options to vers=4
Code:
nfs: proxsharedssd
        export /mnt/mypoolssd/proxmox/proxshared
        path /mnt/pve/proxsharedssd
        server 192.168.0.10
        content images,snippets,rootdir
        options vers=4
        prune-backups keep-all=1
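
If you prefer the CLI over editing storage.cfg by hand, the same storage can (as far as I know) also be added like this:

Code:
pvesm add nfs proxsharedssd --server 192.168.0.10 \
    --export /mnt/mypoolssd/proxmox/proxshared \
    --path /mnt/pve/proxsharedssd \
    --content images,snippets,rootdir \
    --options vers=4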

Without qcow2 you don't have snapshots, and that is one of the main features that you don't want to miss.

Actually, it really depends on what you need: whether you use it just for fun and experiments, or as a high-performance database server for your company. In the latter case there is more to consider than just raw vs. qcow2.

Here are two little benchmarks with fio:

Code:
Throughput Performance Tests
============================

Test file random read/writes
fio --filename=testfile --size=10GB --direct=1 --rw=randrw --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=throughput-test-job --eta-newline=1

qcow2
   READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=4336MiB (4547MB), run=120243-120243msec
  WRITE: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=4361MiB (4572MB), run=120243-120243msec


raw
   READ: bw=36.8MiB/s (38.5MB/s), 36.8MiB/s-36.8MiB/s (38.5MB/s-38.5MB/s), io=4420MiB (4635MB), run=120258-120258msec
  WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=4446MiB (4662MB), run=120258-120258msec

Test sequential reads (without direct)
fio --filename=testfile  --rw=read --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=throughput-test-job --eta-newline=1

qcow2
  READ: bw=1333MiB/s (1398MB/s), 1333MiB/s-1333MiB/s (1398MB/s-1398MB/s), io=156GiB (168GB), run=120020-120020msec

raw
  READ: bw=1329MiB/s (1393MB/s), 1329MiB/s-1329MiB/s (1393MB/s-1393MB/s), io=156GiB (167GB), run=120009-120009msec
As you can see, not much difference, at least for me!

To understand the results:
  • NFS share on TrueNAS
  • The storage pool on TrueNAS is an SSD pool with 4x Samsung 980 Pro in RAIDZ1
  • The dataset on TrueNAS uses zstd compression
  • The Proxmox server is connected to TrueNAS via 10 Gbit
So the sequential read result of 1329 MiB/s is more than the network capacity, which means we get better values because of the compression.
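
If someone wants to reproduce the compression setup, this is roughly what it looks like on the TrueNAS shell (the dataset name is derived from the export path in my storage.cfg above):

Code:
# enable zstd compression on the dataset behind the NFS export
zfs set compression=zstd mypoolssd/proxmox/proxshared

# check the achieved compression ratio later on
zfs get compression,compressratio mypoolssd/proxmox/proxshared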

If it helps: I have about 25 VMs on Proxmox/TrueNAS, and the TBW for my SSD pool is 3 TB per month.
 
Hi all

Thanks for the feedback and tips! I have now solved it via ZFS over iSCSI, which works well for me. The disks are stored as raw volumes directly on the ZFS of TrueNAS. Thanks to the FreeNAS iSCSI API plugin (https://github.com/TheGrandWazoo/freenas-proxmox), the necessary zvols for the disks are created automatically on TrueNAS. This also means that, via auto-snapshots, all VMs can be rolled back individually. In the following post I described how it worked for me:

https://forum.proxmox.com/threads/g...-with-pve-5x-and-freenas-11.54611/post-558104
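
For reference, the resulting entry in /etc/pve/storage.cfg looks roughly like the sketch below; pool, portal and target are placeholders, the iscsiprovider value "freenas" comes from the plugin, and the plugin also wants its own API credential options, which I leave out here:

Code:
zfs: truenas-iscsi
        blocksize 4k
        iscsiprovider freenas
        pool mypool/proxmox
        portal 192.168.0.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        content images
        sparse 1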
 
