Upgraded to SSDs, now getting extremely poor performance on containers

Method

Active Member
Aug 10, 2019
I was on Proxmox 5.3 before, and I decided to toss in my spare SSDs, a couple of 128GB Samsung 840 Pros, and reinstall on those with 6.0.

The server itself is a Dell R710 (2x 5670s) with an H200 in IT mode: 4x 2TB drives as my main ZFS pool for my NAS, plus the aforementioned 2x 128GB SSDs.

Before, I had installed Proxmox on a pair of regular 750GB 7200rpm drives. I wish I remembered how I installed it, because clearly something was different. I'm not too familiar with how Proxmox works, but I know I had a "local-lvm" that I'm fairly sure the containers existed on, and now I no longer have a "local-lvm" but instead a "local-zfs".

The problem I'm running into is that the containers _seem_ to be operating just fine, just as they did when they were on the rotational drives. But various CLI tasks like apt update/upgrade are _extremely_ slow, and sometimes hang outright. I was trying to install Kibana on a fresh container and it froze midway through the install, forcing me to reboot the container.

I run a website, Plex, and a Jira instance on these containers; they all seem to actually _run_ just fine, but doing anything in the CLI seems to break completely.

If I try to create an LVM, the dialog window says "No disks unused". I have a suspicion that I'll need to reinstall proxmox somehow, but I'm not sure what I should do differently if that's the solution.
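For anyone wanting to check the same thing before reinstalling, something like this shows how the installer laid out storage (a sketch, assuming a shell on the Proxmox host; pvesm and zfs only exist on a Proxmox/ZFS box, so it falls back to a note elsewhere):

```shell
# Check what the installer actually set up: "local-zfs" comes from the ZFS
# install path, "local-lvm" from the LVM one. Guard for missing tools.
status_out="$(pvesm status 2>/dev/null || echo 'pvesm not found (not a Proxmox host)')"
echo "$status_out"

# List the ZFS datasets that would back a "local-zfs" storage.
zfs_out="$(zfs list -o name,used,avail,mountpoint 2>/dev/null || echo 'zfs not found')"
echo "$zfs_out"
```

On a ZFS install you'd expect to see a local-zfs entry in the first listing and an rpool/data dataset in the second; "No disks unused" in the LVM dialog just means every disk is already claimed by the ZFS pool.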
 
This is expected. If you had local-lvm, you were using a single disk. Now you are using ZFS, which is slower but has many features such as redundancy.
 

I'm not sure this is accurate. I had two 750GB rotational drives in use before; they _both_ were in use. I still have them, untouched. I could toss them back in and see exactly how they were set up.
 
Also, I'm not sure how it's "expected" for SSDs to perform so poorly under ZFS... to the point of hanging/freezing on mundane tasks...
 
Just loaded them back up again, I guess I was mistaken. No 'local-lvm'.

I guess my next question here is: how the hell do I get my SSDs to perform better?
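One thing worth measuring is sync-write performance, since that's the specific pattern where consumer SSDs without power-loss protection fall over under ZFS. A rough sketch using plain dd (fio is the more serious tool; the file path here is just an example, point it at the pool you want to measure):

```shell
# Rough sync-write check: oflag=dsync makes every 4k write wait for the
# device to acknowledge it, similar to the sync writes ZFS issues for
# container workloads. Slow numbers here (single-digit MB/s) point at the
# drives, not at ZFS itself.
testfile="./dsync-test.bin"   # example path; use a file on the pool under test
result="$(dd if=/dev/zero of="$testfile" bs=4k count=250 oflag=dsync 2>&1 | tail -n 1)"
echo "$result"                # GNU dd prints a throughput summary as its last line
rm -f "$testfile"
```

If the dsync number is bad while plain buffered writes are fine, the usual mitigations are a fast SLOG device or enterprise SSDs; some people set sync=disabled on a dataset, but that trades away crash safety.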
 
Those consumer-grade SSDs perform poorly when you use them with ZFS. If you want ZFS, use Intel Datacenter SSDs.
 
I see many posts about people using the Pro samsung drives without these kinds of issues.

I was able to track this down further. I had a container that was misbehaving and causing a lot of I/O wait. In addition, the one thing I could consistently replicate was trying to install Kibana and having it fail: it would _every time_ reach 17% and stop/hang, causing additional I/O wait, which slowed the other containers down. With these two issues out of the picture, my containers' performance does seem to be back to normal.
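In case it helps anyone chasing a similar hang: the way to spot what's generating the I/O wait is to look for processes stuck in uninterruptible sleep ("D" state), since those are the ones blocked on disk I/O. A quick sketch, assuming a standard Linux procps ps:

```shell
# List processes in uninterruptible sleep ("D" state); these are blocked on
# I/O and are what drives the iowait number up. wchan shows roughly where
# in the kernel each one is waiting.
echo "STATE PID WCHAN COMMAND"
stuck="$(ps -eo state=,pid=,wchan=,comm= | awk '$1 ~ /^D/')"
if [ -n "$stuck" ]; then
    echo "$stuck"
else
    echo "(no D-state processes at the moment)"
fi
```

Running that while Kibana sat at 17% would have pointed straight at the stuck installer process instead of making every container look slow.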

Not sure I can get my question answered here, but I suppose my next step is figuring out why the hell Kibana is hanging on install.

For posterity, and if anyone cares to follow along: https://discuss.elastic.co/t/install-on-lxc-hangs-at-17/194724

edit: more evidence this is a Kibana issue https://github.com/elastic/kibana/issues/17176
 