degudejung

Member
Jun 17, 2021
Hi,
I'm fairly new to Proxmox and Linux, so please excuse my noobiness.

Objective
Trying to move away from a Mac Mini hosting SMB shares (the crooked Apple way), Time Machine backups, and some Debian/Windows VMs running in VirtualBox for homelab stuff. Moving towards a "real" (home) server on a Debian base to
- run a few virtual machines
- run several docker containers
- provide real/standard SMB/NFS shares
- act as Time Machine host for all the Macs in the house
- do all that with a reasonable degree of stability and security

Hardware
- old Dell T110 with Xeon X3440, 4 Cores, 8 Threads
- 16 GB ECC RAM
- 2x 1Gig-Ethernet, currently no link aggregation, only 1 connected, to get started (network infrastructure is, of course, also pure gigabit)
- onboard SATA-III:
-- 1x 120 GB SSD for primary OS
-- 2x 1 TB SSD for VM disks, ISOs and other "fast stuff"
- on PERC H200 (PCIe), flashed to LSI 9211-8i IT mode with FW P20, used as HBA:
-- 2x 4 TB HDD for ZFS Storage
-- 2x 2 TB HDD for ZFS Storage
- on PCIe Gen 2:
-- 1x 32 GB NVMe, used as ZFS log and cache

Configuration
- bare metal OS: Proxmox VE 6.4.8
- both 1 TB SSDs as LVM storage
- VM101:
-- OpenMediaVault (OMV) current release
-- VM disk sits on LVM storage (SSD)
-- 2 Cores, 8 GB RAM (min. 2 GB)
-- physical disks on the HBA attached to the VM via "qm set 101 -scsi1 /dev/disk/by-id/..."
-- within OMV created a ZFS pool: mirror (2x 4 TB) + mirror (2x 2 TB) + log 8 GB + cache 16 GB (rough equivalent shown below this list)
-- created dataset "test" on that ZFS
-- created a shared folder "share_test" within that dataset
-- shared that folder "share_test" via SMB
- other VMs, not relevant here
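
The pool layout is roughly equivalent to a zpool create like the one below. This is just a sketch to illustrate the structure; "tank" and the device paths are placeholders, not my actual IDs, and the log/cache entries would point at two partitions of the 32 GB NVMe.
Code:
zpool create tank \
  mirror /dev/disk/by-id/ata-4TB-disk1 /dev/disk/by-id/ata-4TB-disk2 \
  mirror /dev/disk/by-id/ata-2TB-disk1 /dev/disk/by-id/ata-2TB-disk2 \
  log   /dev/disk/by-id/nvme-32GB-part1 \
  cache /dev/disk/by-id/nvme-32GB-part2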

Problem(s)
1. Theoretically I should be able to up/download to share_test at ~110 MB/s. I did achieve exactly that with an installation of TrueNAS Core on the exact same setup. Since my hardware is apparently too old for FreeBSD virtualization, I ditched TrueNAS in favor of PVE. Now, with PVE, I get transfer speeds of only ~65 MB/s - roughly 40% slower. Interestingly, it makes no difference whether I rsync/scp to the local/LVM storage of the PVE host or into the OMV VM. It's pretty much the same speed.

I don't need enterprise performance. But since several Macs and media centers can be accessing the "NAS part" of the server, I want to make sure I get the best speed out of it that I can. Are there any performance tweaks I don't see? Did I configure anything wrong? What can I do to get as close to 110 MB/s as possible?

2. Since I'm new to most of this, I tried to set up the storage components to the best of my limited knowledge. Does my config make sense to more experienced PVE users? Is there a good read about "What storage / filesystem type to use for what use case in PVE?"

3. I have not configured any backup routines yet. I would prefer to backup to the ZFS pool on OMV (since it is the biggest pool). Is that smart or what would be better?

Thank you for any support in advance!
 
- 16 GB ECC RAM
You probably want way more RAM. A rule of thumb would be 16 GB of RAM alone for ZFS's ARC inside your OMV VM.
-- 1x 120 GB SSD for primary OS
-- 2x 1 TB SSD for VM disks, ISOs and other "fast stuff"
- on PERC H200 (PCIe), flashed to HBA with latest LSI-firmware:
-- 2x 4 TB HDD for ZFS Storage
-- 2x 2 TB HDD for ZFS Storage
- on PCIe Gen 2:
-- 1x 32 GB NVMe, used as ZFS log and cache
I would get another 120 GB SSD so you can mirror the system disk too. They cost basically nothing and you don't need to set everything up again if your system disk fails. Keep in mind that Proxmox can't back itself up. You would need to shut down the server every time and boot into something like Clonezilla to back it up.
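If the system disk is (re)installed on ZFS, the second SSD can even be added later as a mirror. Rough sketch only, assuming the default PVE ZFS partition layout and example device names sda/sdb; with the default ext4/LVM install you would have to reinstall or clone instead:
Code:
# sda = existing system disk, sdb = new SSD (example names!)
sgdisk /dev/sda -R /dev/sdb              # copy the partition table to the new disk
sgdisk -G /dev/sdb                       # randomize its GUIDs
zpool attach rpool /dev/sda3 /dev/sdb3   # attach the ZFS partition as a mirror
proxmox-boot-tool format /dev/sdb2       # make the new ESP bootable
proxmox-boot-tool init /dev/sdb2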
Problem(s)
1. Theoretically I should be able to up/download to share_test at ~110 MB/s. I did achieve exactly that with an installation of TrueNAS Core on the exact same setup. Since my hardware is apparently too old for FreeBSD virtualization, I ditched TrueNAS in favor of PVE. Now, with PVE, I get transfer speeds of only ~65 MB/s - roughly 40% slower. Interestingly, it makes no difference whether I rsync/scp to the local/LVM storage of the PVE host or into the OMV VM. It's pretty much the same speed.

I don't need enterprise performance. But since several Macs and media centers can be accessing the "NAS part" of the server, I want to make sure I get the best speed out of it that I can. Are there any performance tweaks I don't see? Did I configure anything wrong? What can I do to get as close to 110 MB/s as possible?
You could use fio to test the performance of the storage and iperf3 to test the performance of the network. Run both on the host and in the guest to see where the bottleneck is.
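Something like this would do as a starting point (the IP is just an example; run the fio line once on the host and once inside the VM, in the directory you want to benchmark):
Code:
# network: start a server on the machine you want to test...
iperf3 -s
# ...and connect to it from another PC (-R also tests the reverse direction)
iperf3 -c 192.168.1.10
iperf3 -c 192.168.1.10 -R
# storage: simple sequential write test
fio --name=seqwrite --ioengine=posixaio --rw=write --bs=1M --size=4G \
    --numjobs=1 --runtime=60 --time_based --end_fsync=1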
When you are using "qm set" you aren't really using passthrough. The drives are still being virtualized by VirtIO (and therefore use a 512B logical block size by default, so you are writing with a 128K recordsize to a 512B-LBA virtual disk that sits on a 4K-LBA physical disk). If the HDDs are attached to a dedicated HBA, you could try PCI passthrough to pass the complete controller through to the OMV VM. Only that way does the OMV VM get direct, physical access to the drives.
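Rough outline of what that looks like; the PCI address 01:00.0 is only an example (check lspci for the real one), and the CPU/board must support VT-d:
Code:
# enable the IOMMU: add "intel_iommu=on" to GRUB_CMDLINE_LINUX_DEFAULT
nano /etc/default/grub
update-grub
# load the VFIO modules at boot
echo -e "vfio\nvfio_iommu_type1\nvfio_pci\nvfio_virqfd" >> /etc/modules
reboot
# find the HBA's PCI address
lspci | grep -i lsi
# pass the whole controller through to VM 101 (address is an example)
qm set 101 -hostpci0 01:00.0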
Also, most of the time an SSD for log/cache isn't worth it. I have seen a lot of setups where the pool was a lot faster after removing the log/cache devices. For one thing, your RAM is way faster than any NVMe SSD. If you use a cache device, it needs extra RAM for its index, so less data can be kept in the ARC (RAM) and more has to be loaded from the slower NVMe SSD. It would be faster to just add more RAM so everything can be served from the ARC. The rule of thumb is: "Don't buy an L2ARC SSD if you could buy more RAM instead." So an L2ARC is only useful if you have already maxed out your RAM and still want a bigger read cache. And the SLOG is only used for sync writes. Most probably 99.9% of your writes inside the OMV VM will be async writes, and those can't be cached by the SLOG at all.
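Removing them is non-destructive and works on the live pool, so it is easy to test both ways (pool and device names are examples, check zpool status for the real ones):
Code:
zpool status tank                    # note the names of the log and cache devices
zpool remove tank nvme-32GB-part1    # remove the SLOG
zpool remove tank nvme-32GB-part2    # remove the L2ARC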
2. Since I'm new to most of this, I tried to set up the storage components to the best of my limited knowledge. Does my config make sense to more experienced PVE users? Is there a good read about "What storage / filesystem type to use for what use case in PVE?"
I would use ZFS with at least a mirror for everything. LVM is faster, but you lose all the great ZFS features. Without ZFS you have no bit-rot protection, so your data isn't as safe. The best backup is useless if you can't tell whether the data was already corrupted before it was backed up, because the filesystem isn't capable of detecting it.
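That is also why regular scrubs are worth it on ZFS: they read back all data, verify the checksums and repair from the mirror copy if something rotted (pool name is an example):
Code:
zpool scrub tank       # verify all checksums, auto-repair from the mirror
zpool status -v tank   # scrub progress and any unrecoverable errors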
3. I have not configured any backup routines yet. I would prefer to backup to the ZFS pool on OMV (since it is the biggest pool). Is that smart or what would be better?
That's something I wouldn't do. You want your backups stored somewhere else so you can recover your VMs even if the complete server dies or won't boot anymore. How would you recover the OMV VM from a backup if you need the OMV VM running to access the backups?
And remember that RAID never replaces a backup. So you at least want another NAS or external USB disks to back everything up (especially your OMV data).
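For example, with an external USB disk added as a directory storage in PVE ("usb-backup" is just a placeholder name), a manual backup of the OMV VM would look like this; the same can be scheduled under Datacenter -> Backup:
Code:
vzdump 101 --storage usb-backup --mode snapshot --compress zstd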
 
I would double-check your processor stats... pretty sure you listed a 4-core processor and the T110 only has 1 CPU.

Honestly, I think you would be better off with a T610 or T710 if you can find one. They won't cost much more and they have way more capacity. Make sure the first thing you do is update the firmware.

I have a T610 with dual 6-core/12-thread Xeons, 128 GB RAM, and 6 or 8 drives using ZFS.
The Dell SAS 6/i card is cheap and can be flashed to IT mode; it's only 3 Gb/s, but it works great for most things.
Also make sure you get redundant power supplies, which I don't think the T110 had... or at least they weren't very common.

This is just my home server, but it runs multiple (10-ish) VMs and lots of containers, and I've banged on it for about 3 years now and it has been solid.
 
I would double-check your processor stats... pretty sure you listed a 4-core processor and the T110 only has 1 CPU.

Honestly, I think you would be better off with a T610 or T710 if you can find one. They won't cost much more and they have way more capacity. Make sure the first thing you do is update the firmware.

I have a T610 with dual 6-core/12-thread Xeons, 128 GB RAM, and 6 or 8 drives using ZFS.
The Dell SAS 6/i card is cheap and can be flashed to IT mode; it's only 3 Gb/s, but it works great for most things.
Also make sure you get redundant power supplies, which I don't think the T110 had... or at least they weren't very common.

This is just my home server, but it runs multiple (10-ish) VMs and lots of containers, and I've banged on it for about 3 years now and it has been solid.
Thank you.

Indeed, the server has only 1 CPU with 4 cores. I just copy-pasted what the PVE summary shows about the node. I guess it's not a big deal, so I probably won't try to force PVE to correct that.

The SAS card is "upgraded" to an LSI 9211-8i in IT mode with FW P20, but thanks for the hint. I updated the initial post.

As for the rest, of course, better hardware is always better :cool: In my case, I'll probably have to stick with what's available and try to make the best of it for now.
 
Thank you, Dunuin! You put a lot of work into that and it's appreciated!

You probably want way more RAM. A rule of thumb would be 16 GB of RAM alone for ZFS's ARC inside your OMV VM.
=> OK, I've been pushing data in and out of the ZFS pool inside the VM (14 GB RAM dedicated to it) all day, but it never climbed beyond 5-6 GB of RAM used. Of course, the more the merrier, but unfortunately the server won't take more than 16 GB, so I'll see how far that gets me.
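In case it helps others, current and maximum ARC size can be checked inside the VM roughly like this (just a sketch):
Code:
# current ARC size and configured maximum, in bytes
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# or the human-readable summary shipped with zfsutils-linux
arc_summary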

I would get another 120 GB SSD so you can mirror the system disk too. They cost basically nothing and you don't need to set everything up again if your system disk fails. Keep in mind that Proxmox can't back itself up. You would need to shut down the server every time and boot into something like Clonezilla to back it up.
=> Thanks, makes total sense. That's ordered.

You could use fio to test the performance of the storage and iperf3 to test the performance of the network. Run both on the host and in the guest to see where the bottleneck is.
=> On-system performance is good (enough); the ZFS pool writes at > 300 MB/s, the SSDs at ~600 MB/s, and that's fine with me. I don't understand where that huge drop in performance to ~65 MB/s comes from. Didn't get results with iperf3 yet [see update below!].

When you are using "qm set" you aren't really using passthrough. The drives are still being virtualized by VirtIO (and therefore use a 512B logical block size by default, so you are writing with a 128K recordsize to a 512B-LBA virtual disk that sits on a 4K-LBA physical disk). If the HDDs are attached to a dedicated HBA, you could try PCI passthrough to pass the complete controller through to the OMV VM. Only that way does the OMV VM get direct, physical access to the drives.
=> Ah OK. PCIe passthrough looked like a handful with the tutorial I found, so I hoped to get around it. Might reconsider; your explanation makes it sound like VirtIO SCSI isn't very efficient here.

Also, most of the time an SSD for log/cache isn't worth it. I have seen a lot of setups where the pool was a lot faster after removing the log/cache devices. For one thing, your RAM is way faster than any NVMe SSD. If you use a cache device, it needs extra RAM for its index, so less data can be kept in the ARC (RAM) and more has to be loaded from the slower NVMe SSD. It would be faster to just add more RAM so everything can be served from the ARC. The rule of thumb is: "Don't buy an L2ARC SSD if you could buy more RAM instead." So an L2ARC is only useful if you have already maxed out your RAM and still want a bigger read cache. And the SLOG is only used for sync writes. Most probably 99.9% of your writes inside the OMV VM will be async writes, and those can't be cached by the SLOG at all.
=> Hmm, good to know. In fact, I haven't seen a performance boost from the NVMe so far.

How would you recover the OMV VM from a backup if you need the OMV VM running to access the backups?
=> Are you saying I cannot just import/mount that ZFS pool on any other ZFS-capable (virtual) machine? Is a ZFS pool always tied to its "master"? If so, is there something like a header that can be made available to another (virtual) machine so it can run the pool without this master?

Update network speed
iperf3 confirms that the network speed is excellent. All connections and directions basically reach full gigabit speed: other PC <-> PVE host and other PC <-> PVE guest VM (OMV) both show a constant 110-112 MByte/s, or 940 MBit/s.

Update disk speed
Using fio with the following test routine:
Code:
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1

random-write: (g=0): rw=randwrite, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=16

for OS (PVE) SSD:
Code:
WRITE: bw=168MiB/s (176MB/s), 9518KiB/s-12.7MiB/s (9747kB/s-13.3MB/s), io=11.1GiB (11.9GB), run=60987-67238msec

for VM SSD with LVM:
Code:
WRITE: bw=3729MiB/s (3910MB/s), 224MiB/s-242MiB/s (235MB/s-254MB/s), io=219GiB (235GB), run=60001-60006msec

for ZFS pool (performed from within the VM):
Code:
WRITE: bw=91.7MiB/s (96.2MB/s), 3675KiB/s-7750KiB/s (3763kB/s-7936kB/s), io=5720MiB (5998MB), run=60385-62366msec
=> :eek: OMG, that's terrible! During the test, performance even drops below 50 MiB/s here and there. Especially given that the "raw" drives (before being virtualized through VirtIO SCSI, attached to the OMV VM and pooled in ZFS) performed nicely with dd if=/dev/zero..., at write speeds of > 300 MB/s for all drives.
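
Before going further, I'll try to confirm whether the 512B emulation mentioned above is actually in play. Just a sketch, the pool name is a placeholder:
Code:
# inside the OMV VM: sector sizes the virtual disks report
lsblk -o NAME,SIZE,PHY-SEC,LOG-SEC
# on the PVE host: the real sector sizes of the HDDs behind the HBA
lsblk -o NAME,SIZE,PHY-SEC,LOG-SEC
# ashift the pool was created with (12 = 4K sectors)
zpool get ashift tank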

Maybe I should really give the PCIe passthrough a try...
 
