Feature request: ATA trim support (for SSDs)

Maxnet
Aug 6, 2009
I was wondering if there are any plans to backport support for the ATA TRIM command to the PVE kernel.

TRIM is necessary to maintain good write performance when using solid-state drives.

In addition to kernel support this would also require that the installation procedure creates a file system that supports trim like ext4.
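For illustration, an ext4 filesystem mounted with online discard would look roughly like this in /etc/fstab (the device name and mount point here are placeholders, not anything the installer actually creates):

```
# example /etc/fstab entry -- /dev/sdb1 and /var/lib/vz are placeholders
/dev/sdb1  /var/lib/vz  ext4  noatime,discard  0  2
```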


Explanation of the purpose of TRIM: http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=10
Kernel tree with TRIM: http://git.kernel.org/?p=linux/kernel/git/willy/ssd.git
 
We are currently at kernel 2.6.24, and OpenVZ uses ext3 - so this is unlikely to happen soon.

Fast forward to 2011... Any word on whether OpenVZ supports ext4... or whether ext3 now supports TRIM? I've done lots of googling, but all I've been able to find on these questions is confusing and conflicting advice.

Curtis
 
The upcoming Proxmox VE 2.x is based on Squeeze and will use ext4 by default.
 

Unfortunately, I just realized that Debian Squeeze is only running the 2.6.32 kernel... so, even though Proxmox 2.x will support ext4, I believe TRIM support was added in 2.6.33. So, I think this means no SSD support for Proxmox any time soon. :-(
 

I just realized... Proxmox 1.8 uses the 2.6.32-33 kernel... so it seems to me that I should already be able to get ext4 and TRIM support without waiting for Proxmox 2. Seems all I would need to do is install Proxmox 1.8 on a standard HDD, and then move the /var/lib/vz over to the ext4 partition on the SSD after the initial install, right? I realize it would not be a supported configuration, but I don't see how that would be much different than what we're doing on other servers where I've moved the /var/lib/vz partition over to a large raid volume (one that is larger than the proxmox installer supports).
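A rough sketch of that move (I haven't verified this on 1.8; /dev/sdb1, the temporary mount point, and the idea of stopping services first are all assumptions to adapt to your system):

```shell
# Sketch: relocate /var/lib/vz onto an ext4 SSD partition after install.
# Stop any running VMs/containers before copying.
mkfs.ext4 /dev/sdb1                        # create the ext4 filesystem on the SSD
mkdir -p /mnt/ssd
mount /dev/sdb1 /mnt/ssd                   # temporary mount point
cp -a /var/lib/vz/. /mnt/ssd/              # copy data, preserving ownership/perms
mv /var/lib/vz /var/lib/vz.old             # keep the original until verified
mkdir /var/lib/vz
umount /mnt/ssd
mount -o noatime,discard /dev/sdb1 /var/lib/vz
# then add the matching line to /etc/fstab so the mount persists across reboots
```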

Does that make sense?

Curtis
 
Here's my follow-up on this. Proxmox 1.8 did allow me to use the "discard" option in fstab, and it seemed to take effect, because running "mount" showed that the discard option was recognized...

Code:
# mount|grep discard
/dev/sdb1 on /var/lib/vz type ext4 (rw,noatime,discard)

Unfortunately, it didn't seem to actually do anything on my Samsung 470 Series SSD, based on the test method described here:

http://techgage.com/print/enabling_and_testing_ssd_trim_support_under_linux

...after the rm and sync, the data was not actually wiped (or zeroed). Oh well... I guess I really will have to wait for Proxmox 2 (assuming it uses a newer kernel). Hard to tell... the roadmap for Proxmox 2.0 says it will use "longterm 2.6.32"... which doesn't seem as specific as what it says for Proxmox 1.8, which specifies pve-kernel-2.6.32 (2.6.32-33)...

http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_1.8

...I thought the "-33" part meant that it would include TRIM support which I thought had been backported since "-31"... but I guess not. :(
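For reference, the zero-check I used follows the linked article and looks roughly like this (the LBA shown is a placeholder you'd read off the --fibmap output; /dev/sdb and the file path are examples, and it assumes a reasonably recent hdparm):

```shell
# Sketch of the TRIM zero-check: write a file, find its sector, delete it,
# then read that sector raw -- all zeroes means the drive acted on the TRIM.
echo "trimtest" > /var/lib/vz/trimtest.txt
sync
hdparm --fibmap /var/lib/vz/trimtest.txt    # note the file's first LBA, e.g. 12345678
rm /var/lib/vz/trimtest.txt
sync
sleep 60                                    # give the drive time to process the TRIM
hdparm --read-sector 12345678 /dev/sdb      # placeholder LBA from the --fibmap output
```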

Curtis
 
One final follow-up here. I tested for TRIM support in Debian Squeeze, and it still does not work with that version. Too bad... I guess that means SSDs under Proxmox 2.0 will suffer performance degradation and probably will not be viable in a production environment. I tested TRIM with Debian Wheezy, and it works there... but I guess Wheezy won't be able to support OpenVZ any time soon.
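As a quicker sanity check before the full zero-test, you can ask the drive itself whether it advertises TRIM at all, independent of what the kernel does with it (/dev/sdb is a placeholder):

```shell
# A TRIM-capable SSD should list "Data Set Management TRIM supported"
# in its identify data; a blank result means the drive itself can't TRIM.
hdparm -I /dev/sdb | grep -i trim
```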

Curtis
 
Did you test with kernel 2.6.35? This is for KVM-only environments, I believe.
 
Did you test with kernel 2.6.35? This is for KVM-only environments, I believe.

Nope. The kernel version on my Debian Wheezy install is 2.6.39-2. My only interest (so far) with Proxmox had been OpenVZ, but your post just made me realize that perhaps I should consider giving the 2.6.35 kernel with KVM a try. Since these boxes will only be running a single VM, I suppose there wouldn't be too much loss of performance by going with KVM on them instead of OpenVZ. Thanks for the idea.

Of course, since ext3 does not support TRIM, I guess I will have to install Proxmox on a standard drive and use SSD as a secondary drive only. I haven't really looked at KVM yet, hopefully it supports adding KVM VMs on a secondary drive.

Curtis
 
You've confused me... isn't KVM faster than OpenVZ, typically? Particularly when configured so that the host cpu features are passed through?

Also, it's possible to upgrade the ext3 partitions of the Proxmox install to ext4 offline, using a Linux live CD. Make sure /etc/fstab is updated to mount those partitions as ext4 prior to boot, or it will hang. Note: the /boot partition must remain ext3 with the version of GRUB that ships with Proxmox VE 1.8. I haven't done it myself under Proxmox, but I have done it on other systems without issue, so I imagine it would be similar. Another user mentioned they had upgraded successfully this way under Proxmox 1.8.
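For anyone attempting it, the offline conversion is roughly the standard ext3-to-ext4 procedure (from a live CD, on an unmounted filesystem; /dev/sda2 is a placeholder, and you should have a backup first):

```shell
# Enable the ext4 on-disk features on the existing ext3 filesystem...
tune2fs -O extents,uninit_bg,dir_index /dev/sda2
# ...then fsck is mandatory after changing features (-f force, -D optimize dirs)
e2fsck -fD /dev/sda2
# finally change the fstype field for that partition in /etc/fstab to ext4
```

Note that only files written after the conversion get extent-mapped; existing files stay in the old block-mapped format, which is harmless but means the benefit arrives gradually.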
 
You've confused me... isn't KVM faster than OpenVZ, typically? Particularly when configured so that the host cpu features are passed through?

Not that I'm aware. KVM, like all full virtualization technologies, requires a hypervisor layer, and therefore has additional overhead that isn't required with OpenVZ. OpenVZ, on the other hand, is more of a glorified chroot, so you're running pretty much on bare metal. The main trade-off, of course, is that with OpenVZ you don't get to run Windows (or even your own kernel), because there's no hypervisor.

I would be interested to know if the story has somehow changed since I've researched it, but so far, the articles I've read on the subject read a lot like this one:

http://codemonkey.ravelry.com/2009/12/01/kvm-vs-xen-vs-bare-metal/

Well, that article is a comparison of KVM against bare metal, but if you scroll down, one of the commenters mentions that their OpenVZ tests show no significant performance hit compared to bare metal. So far, that has been my experience too.

Curtis
 
Ah, I see our confusion. When you say "on bare metal" you're meaning that the applications in each containerized environment run as fast as they would as native applications on the host OS, which is more or less true.
 
Your "isn't KVM faster than OpenVZ" statement still has me curious... in what cases have you found KVM to be faster than OpenVZ?
 
