I/O and Memory issues, losing my mind...

chipsharpdotcom

New Member
Feb 28, 2024
I need some help here. This is my first encounter with Proxmox. I made the switch after ESXi's free version went away.

Here's my hardware:
Dell PowerEdge R710
128GB of RAM
24 x Intel(R) Xeon(R) CPU X5670 @ 2.93GHz (2 sockets)
6x 6TB 12Gb/s SAS drives connected to an HBA, configured as a ZFS pool for VM images
1x 256GB SSD for the OS
An NFS connection to my other server running UNRAID that stores ISOs and some data backups

I've scoured the forums and tried various suggestions in different threads to try to get some better performance out of my ZFS array and it's just not happening for me. I've enabled thin provisioning, turned on discard, enabled SSD emulation, dialed back the memory allocated to ZFS, and my VMs still run so slow I feel like I need to get out and push.
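
For reference, the "dialed back the memory allocated to ZFS" part was done the usual way, something like this (a sketch; the 16 GiB cap is just an example value):
Code:
# /etc/modprobe.d/zfs.conf - cap the ZFS ARC at 16 GiB (16 * 1024^3 bytes)
options zfs zfs_arc_max=17179869184
# rebuild the initramfs and reboot so the new limit takes effect
update-initramfs -u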

Additionally, my VMs in the GUI are reporting that they're using almost every bit of RAM allocated to them, though htop doesn't seem to agree.

I need some help to troubleshoot this. I never had any problem running 8 simultaneous VMs under ESXi, so I doubt very much that I have a hardware problem; more likely it's a configuration issue, and most probably I just don't understand the ins and outs of Proxmox well enough to tailor my configuration properly.

If someone could help me troubleshoot this, I would be greatly appreciative. You can speak Linux to me, I'm well versed, but ZFS is a new concept for me and I suspect I may just not have it set up correctly. For all the fanfare that ZFS got over HW RAID in terms of efficiency and flexibility, I have yet to have that experience.
 
I don't see information on how you configured the six drives for ZFS. I assume they are HDDs (which make and model?), because 6TB is a weird size for SSDs. Is it a stripe of three mirrors or a RAIDz1/2/3? The first is not great and the latter is terrible for running VMs because of the low IOPS and terrible writes.
Ironically, Proxmox itself runs fine on a slow HDD but wears out consumer SSDs due to logging. While an SSD has better IOPS for VMs, you're wasting it on the Proxmox OS instead.
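
If you can post the output of something along these lines, the layout will be obvious (standard ZFS commands, nothing Proxmox-specific):
Code:
# vdev layout of every pool (mirror / raidz1 / raidz2 ...)
zpool status
# capacity and usage broken down per vdev
zpool list -v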

EDIT: Turns out you are using a RAIDz1 of rotating HDDs, which is terrible for running VMs. Use a stripe (of mirrors if you only need half the capacity) instead, or better yet get enterprise SSDs.
 
Additionally, my VMs in the GUI are reporting that they're using almost every bit of RAM allocated to them, though htop doesn't seem to agree.
Most likely your VMs are using all/most of that memory as cache even though it isn't "actively" being used - hence htop is correct. PVE, however, isn't aware of what's actually being done with the memory - just that it's being used. This is a very common subject on these & other forums - I don't believe it should concern you. If you want, you can look into the subject of "ballooning", which will cause the VM (under the right conditions) to pass back unused memory to the PVE host.
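
If you want to see it for yourself: inside a guest, free -h separates cache from what's actually in use, and ballooning is just a flag on the VM. A rough sketch (VM ID 100 and the 4096 MiB floor are placeholders; the guest also needs the virtio balloon driver loaded):
Code:
# inside the guest: 'available' is what matters, 'buff/cache' is reclaimable
free -h
# on the PVE host: enable ballooning for VM 100 with a 4096 MiB minimum
qm set 100 --balloon 4096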

Concerning your ZFS array I can't directly help you as I don't use it - however I would start here in this excellent post, that has a wealth of info.

One other thing that may be worthwhile looking into is to check whether the correct driver/config is loaded for the HBA.
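
A quick way to check that (assuming a typical LSI/mpt-based HBA - adjust the grep patterns to whatever card you have):
Code:
# show the HBA and which kernel driver is bound to it
lspci -k | grep -A3 -i 'sas\|raid'
# look for the driver loading and any errors at boot
dmesg | grep -i 'mpt\|sas'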
 
As to the memory disparity I'm fine with that for the most part. I'll live to fight another day on that one.

As for my disk performance... considering the disk type/size, what kind of configuration would you recommend? I need as much space as possible for VMs, since my Plex server is one of them and it requires a lot of storage. Under VMware I was just running a giant RAID5 array as a single disk, but after reading all the users singing the praises of ZFS I decided to go that route, and I have yet to get any tolerable performance out of VM-to-VM activity or out of large operations. (The small web server and DNS server I deployed on the same box run just fine.)
 
I see you are using 12Gb/s drives. Are they on the disk backplane? I doubt it handles more than 6Gb/s. What is the speed of the HBA? Maybe the R710 can't keep up and the combination is barely working? Try iperf to see what kind of speeds you are getting: VM to VM, VM to host, VM to network, and host to network, in both directions.
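For example (iperf3 syntax; the IP is a placeholder for whichever end is receiving):
Code:
# on the receiving end (VM or host)
iperf3 -s
# on the sending end; -R reverses the direction of the test
iperf3 -c 192.168.1.50
iperf3 -c 192.168.1.50 -R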
Good luck
 
From what I've read, 12Gb/s on SAS is mostly a marketing number: it's 2x 6Gb/s, since SAS drives are dual-ported and each port gives 6Gb/s access to the drive, but most drives will never saturate even one port, let alone both, outside of very specific tasks. I don't think that's the droid I'm looking for.
 
In this case let's let slow be defined as "virtual machine is nearly unresponsive during a 50GB NFS file transfer between two VMs on the same host, despite the host console acting normally."

RAIDZ1 is performing slowly on this box under Proxmox, whereas RAID5 under ESXi performed more than adequately.
 
The X5670 is a very old CPU (= slow by today's standards). A server with such a CPU doesn't support 12 Gb/s SAS on the disk backplane.
Someone else will have to comment on the ZFS side, but you need to show a perftest/fio result at least.
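For example, a quick 4k random sync-write test run on the host against the pool (a sketch: the pool path is an assumption, adjust it to wherever your VM storage lives, and note that it creates a 4G test file):
Code:
# 4k random sync writes for 60s - a rough worst case for VM workloads
# (--direct=1 may be refused on ZFS depending on version, hence sync writes)
fio --name=randwrite --filename=/yourpool/fio.test --size=4G \
    --rw=randwrite --bs=4k --ioengine=psync --sync=1 \
    --runtime=60 --time_based --group_reporting
# clean up the test file afterwards
rm /yourpool/fio.test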
 
In this case let's let slow be defined as "virtual machine is nearly unresponsive during a 50GB NFS file transfer between two VMs on the same host, despite the host console acting normally."
Maybe you missed this, or maybe you don't believe me, but your RAIDz1 of HDDs is terrible for running VMs, and writes are the absolute worst (the small ZFS default volblocksize and its overhead, on top of at most the speed of one HDD).
RAIDZ1 is performing slowly on this box under Proxmox, whereas RAID5 under ESXi performed more than adequately.
RAIDz1 is not at all the same as hardware RAID5 (with a battery backup?).
 
Maybe you missed this, or maybe you don't believe me, but your RAIDz1 of HDDs is terrible for running VMs, and writes are the absolute worst (the small ZFS default volblocksize and its overhead, on top of at most the speed of one HDD).

RAIDz1 is not at all the same as hardware RAID5 (with a battery backup?).
I had missed it when I replied earlier this morning. I didn't see the ninja edit. I'm working on backing up my important data to my unraid box now so I can reconfigure the disk array in Proxmox, but to what, I'm still not clear. I was of the understanding that RAIDz1 IS a striped configuration. According to https://pve.proxmox.com/wiki/ZFS_on_Linux it's supposed to be similar to RAID5 which is obviously striped w/parity. My goals are as-good-as-this-hardware-will-provide performance, with highest reliable capacity, reliable in this case meaning _some_ level of fault tolerance for a dead drive, but I don't want to sacrifice half my capacity for it.
 
I had missed it when I replied earlier this morning. I didn't see the ninja edit.
Before the edit, I warned about RAIDz1 and you showed a RAIDZ1, so I expected you would put it together.
I was of the understanding that RAIDz1 IS a striped configuration. According to https://pve.proxmox.com/wiki/ZFS_on_Linux it's supposed to be similar to RAID5 which is obviously striped w/parity.
It's only really similar in the sense that RAID5 and RAIDz1 can lose one drive without data loss.
Each write is striped in pieces, but there is a lot of padding (due to the small volblocksize), which causes a lot of write amplification. The write speed/IOPS on a RAIDz is no more than that of a single drive. zvols are also slower than files on ZFS (unfortunately), and there is additional metadata and checksum overhead compared to hardware RAID5.
A hardware RAID5 controller (with battery backup) can cache sync writes, which ZFS cannot unless the drives themselves can cache sync writes safely (PLP).
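If you want to check or adjust that part on your setup, a sketch (the pool, zvol, and storage names here are assumptions; a changed blocksize only applies to newly created disks):
Code:
# check the volblocksize of an existing VM disk (zvol)
zfs get volblocksize yourpool/vm-100-disk-0
# raise the default block size for new disks on the Proxmox ZFS storage
# (larger blocks reduce RAIDz padding, at some cost for small random I/O)
pvesm set local-zfs --blocksize 16k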
My goals are as-good-as-this-hardware-will-provide performance, with highest reliable capacity, reliable in this case meaning _some_ level of fault tolerance for a dead drive, but I don't want to sacrifice half my capacity for it.
Then use the RAID5 controller as you did on ESXi (I assume). ZFS cannot give you the same performance on the same hardware, it can only give you more features and bit-flip protection.
Or just sell the drives and buy second-hand enterprise SSDs...
 
I'm working on backing up my important data to my unraid box now so I can reconfigure the disk array in Proxmox, but to what, I'm still not clear
It's probably time to reassess what you're trying to accomplish. Parity RAID (be it hardware or RAIDZ) will never be optimal for virtualization. Single parity is a bad idea all around, as it provides the absolute minimum fault tolerance and is a single disk fault away from operating without a safety net (and consequently with no assurance of write integrity).

You are also using 3.5" spinning drives on a 12-year-old server. "Performance" is probably the wrong metric for you to chase; the only performance you'll accomplish is profitability for your utility provider - but that's neither here nor there. What is your use case? If it's a typical home lab type (a bunch of small Docker instances and one NAS distro), any configuration will work just fine given the network load you're placing on it.
 
Before the edit, I warned about RAIDz1 and you showed a RAIDZ1, so I expected you would put it together.
I did, inasmuch as I understood it was bad, but I wanted to understand the "why" behind that statement, which I now do thanks to your most recent response, and for that I thank you.

Then use the RAID5 controller as you did on ESXi (I assume). ZFS cannot give you the same performance on the same hardware, it can only give you more features and bit-flip protection.
This is where I suspected I would land, but I wanted to have a sanity check and a good understanding of why before I landed there. Thank you again.

Or just sell the drives and buy second-hand enterprise SSDs...
I hear you, and I understand the logic behind that, but for the capacity I want/need vs. what I have, that would put me into a whole different set of hardware, and as this is all just for my homelab and uses, that road likely comes with a whole host of domestic issues that I'd rather avoid, if you know what I mean. ;)

Again, thank you for the information, I think I have what I need at this point. Thank you all for your time.
 
It's probably time to reassess what you're trying to accomplish. Parity RAID (be it hardware or RAIDZ) will never be optimal for virtualization. Single parity is a bad idea all around, as it provides the absolute minimum fault tolerance and is a single disk fault away from operating without a safety net (and consequently with no assurance of write integrity).

You are also using 3.5" spinning drives on a 12-year-old server. "Performance" is probably the wrong metric for you to chase; the only performance you'll accomplish is profitability for your utility provider - but that's neither here nor there. What is your use case? If it's a typical home lab type (a bunch of small Docker instances and one NAS distro), any configuration will work just fine given the network load you're placing on it.
The "high cost" guests on my host are my Plex VM and my *arr VM. I found out that I had problems when my *arr applications and NZBGet were trying to download, unpack, and import to the storage point (my Plex VM) via an NFS share exported from the Plex VM. I have several other VMs of various shapes and sizes that do relatively low volume in terms of disk IO and network traffic, I'm less concerned about those.
 
I found out that I had problems when my *arr applications and NZBGet were trying to download, unpack, and import to the storage point (my Plex VM) via an NFS share exported from the Plex VM.
Let me suggest an alternative: set up the base store on Proxmox and NOT inside a guest, set up your guests as LXC containers, and map their datapoints directly using mount points. This configuration will perform better than any other configuration, as there is no storage abstraction at all.
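
A minimal sketch of what that could look like (the pool name, container ID, and paths are assumptions; with an unprivileged container you may also need to sort out UID/GID mapping for write access):
Code:
# create a dataset on the host for the media/downloads
zfs create yourpool/media
# bind it into container 101, visible inside the guest at /media
pct set 101 -mp0 /yourpool/media,mp=/media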
 
Let me suggest an alternative: set up the base store on Proxmox and NOT inside a guest, set up your guests as LXC containers, and map their datapoints directly using mount points. This configuration will perform better than any other configuration, as there is no storage abstraction at all.
That's a good suggestion! I may look into that. The LXC containers give me a little pause for the *arrs/downloaders because I've got those all tucked behind a VPN.
 
