Hi,
I'm fairly new to Proxmox and Linux, so please excuse my noobiness.
Objective
Trying to move away from a Mac mini hosting SMB shares (the crooked Apple way), Time Machine backups and some Debian/Windows VMs running in VirtualBox for homelab stuff. Moving towards a "real" (home) server on a Debian basis to
- run a few virtual machines
- run several docker containers
- provide real/standard SMB/NFS shares
- act as Time Machine host for all the Macs in the house
- do all that with a reasonable degree of stability and security
Hardware
- old Dell T110 with Xeon X3440, 4 Cores, 8 Threads
- 16 GB ECC RAM
- 2x 1Gig-Ethernet, currently no link aggregation, only 1 connected, to get started (network infrastructure is, of course, also pure gigabit)
- onboard SATA-III:
-- 1x 120 GB SSD for primary OS
-- 2x 1 TB SSD for VM disks, ISOs and other "fast stuff"
- on PERC H200 (PCIe), flashed to LSI 9211-8i in IT mode with FW P20, used as HBA:
-- 2x 4 TB HDD for ZFS Storage
-- 2x 2 TB HDD for ZFS Storage
- on PCIe (Gen 2):
-- 1x 32 GB NVMe for ZFS, used as log and cache
Configuration
- bare metal OS: Proxmox VE 6.4.8
- both 1 TB SSDs as LVM storage
- VM101:
-- OpenMediaVault (OMV) current release
-- VM disk sits on the LVM storage (SSD)
-- 2 Cores, 8 GB RAM (min. 2 GB)
-- physical disks on the HBA attached to the VM via "qm set 101 -scsi1 /dev/disk/by-id/..." (see the command sketch after this list)
-- within OMV created a ZFS pool: mirror (2x 4 TB) + mirror (2x 2 TB), 8 GB log, 16 GB cache
-- created dataset "test" on that ZFS
-- created a shared folder "share_test" within that dataset
-- shared that folder "share_test" via SMB
- other VMs, not relevant here
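For completeness, a rough sketch of the passthrough and pool layout, roughly equivalent to what I did (device IDs and the NVMe partition names below are placeholders, not the exact ones I used):

# on the PVE host: pass the four HDDs on the HBA through to VM 101 by ID
qm set 101 -scsi1 /dev/disk/by-id/ata-<4TB-disk-1>
qm set 101 -scsi2 /dev/disk/by-id/ata-<4TB-disk-2>
qm set 101 -scsi3 /dev/disk/by-id/ata-<2TB-disk-1>
qm set 101 -scsi4 /dev/disk/by-id/ata-<2TB-disk-2>

# inside OMV (via the ZFS plugin, equivalent to): two mirrors, plus log and cache on the NVMe
zpool create tank mirror <4TB-disk-1> <4TB-disk-2> mirror <2TB-disk-1> <2TB-disk-2>
zpool add tank log <nvme-8GB-partition>
zpool add tank cache <nvme-16GB-partition>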
Problem(s)
1. Theoretically I should be able to up/download to share_test at ~110 MB/s. I did achieve exactly that with an installation of TrueNAS Core on the exact same setup. Since my hardware is apparently too old for FreeBSD virtualization, I ditched TrueNAS in favor of PVE. Now, with PVE, I get transfer speeds of only ~65 MB/s - roughly 40% slower. Interestingly, it makes no difference whether I rsync/scp to the local/LVM storage of the PVE host or into the OMV VM. It's pretty much the same speed.
I don't need enterprise performance. But since several Macs and media centers can be accessing the "NAS part" of the server at once, I want to make sure I get the best speed out of it that I can. Are there any performance tweaks I'm not seeing? Did I configure anything wrong? What can I do to get as close to 110 MB/s as possible? (My rough measuring approach is sketched below.)
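In case it helps with the diagnosis, this is roughly how I have been measuring (assuming iperf3 is installed on both ends; IPs and paths are placeholders):

# raw network throughput, client -> PVE host and client -> OMV VM
iperf3 -s                          # on the PVE host, then on the OMV VM
iperf3 -c <pve-or-omv-ip> -t 30    # on a Mac/client

# local write speed inside the OMV VM (zeros compress well, so only a rough number)
dd if=/dev/zero of=/tank/test/ddtest bs=1M count=4096 conv=fdatasync

# end-to-end copy from a client over SSH
rsync -av --progress bigfile.iso root@<omv-vm-ip>:/tank/test/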
2. Since I'm new to most of this, I tried to set up the storage components to the best of my limited knowledge. Does my config make sense to more experienced PVE users? Is there a good read about "What storage / filesystem type to use for what use case in PVE?"
3. I have not configured any backup routines yet. I would prefer to back up to the ZFS pool on OMV (since it is the biggest pool). Is that smart, or what would be better? (My rough idea is sketched below.)
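My rough idea, in case it is not completely off (storage name, IP and export path are made up):

# /etc/pve/storage.cfg - add an NFS export from the OMV pool as a backup target
nfs: omv-backup
    server <omv-vm-ip>
    export /tank/backup
    path /mnt/pve/omv-backup
    content backup

# then schedule vzdump jobs (Datacenter -> Backup) or run them manually, e.g.
vzdump <vmid> --storage omv-backup --mode snapshot --compress zstd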
Thank you for any support in advance!