Proxmox 2.0 not likely to have SSD support

Curtis, this is a good bit of research. Thank you for spending the time. I too was interested in using SSDs for a virtual environment - small Windows RDP servers for application work only, with all file storage on other servers - so I thought SSDs would be ideal for this type of situation.

I am going to use 2.5" SAS disks running at 15k RPM with 64 MB cache and a RAID card with 1 GB of cache.

Hope it works for you too.

S
 
Hi,
I'm using mixed storage with PVE 1.x: SATA, SAS and SSD (all connected via hardware RAID). I have had no trouble with the SSDs yet (but I don't use the cheap ones).

Udo
 

Yes, but how long have you been using them, and what are they being used for? I wanted to use SSDs for Windows RDP servers - data stored on Linux boxes, so writes would be minimal. I've gone off the idea after a lot of googling, etc.

Stephen
 
Hi,
On the SSD storage there are three VM disks belonging to three Windows servers. Two of them are for a MySQL database and the third is for an MSSQL database (but none of them are high-traffic).
The VMs have been running for 96, 91 and 81 days now...
Until now, no one has reported any issue (database program too slow or anything else).

Udo
 
Just got back from vacation, and happy to see I'm not the only one interested in using SSDs with Proxmox. :-)

Udo, it sounds like you've had good luck with them so far. However, if you've got a lot of free space on the SSDs and low traffic, that would explain why things haven't slowed down for you yet. Without TRIM support, once you've written enough to have filled the SSDs once (even though you have plenty of space left), you might notice a sudden drop in performance. If you don't, then I would be very interested to know what brand you're using, because I have not found any SSDs that are smart enough to do TRIM-style clean-up without the OS and file system supporting it.

Just to be clear... garbage collection and TRIM are not the same thing. Yes, some SSDs have automatic garbage collection, but TRIM requires support from both the OS *and* the file system to work. If you can find an SSD that claims not to suffer from this problem, and some published benchmarking studies/reviews that back up the claim, I'd love to see them.
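For what it's worth, a quick way to check whether a given drive even advertises the TRIM command (which says nothing about whether the OS and file system will ever issue it) is something like this - the device path is just an example:

Code:
# replace /dev/sdX with your SSD; a TRIM-capable drive reports a
# "Data Set Management TRIM supported" line
hdparm -I /dev/sdX | grep -i trim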

Curtis
 
Hi Curtis,
I will do benchmarking - but on a test system. This will take a little time, because I'm on holiday next week.
What is the best benchmarking scenario? One VM using bonnie++ to measure, and another VM using stress to write a lot of data - and this over a long (hours to days) period?
Recommended disk sizes? I'm thinking 1/3 of the SSD space for the bonnie++ VM and 2/3 for the stress VM?

Udo
 

Udo, that is very kind of you to consider doing some benchmarks; I wasn't suggesting that you do that. I was just hoping you were going to tell me that you had found a particular brand that performed well without the TRIM command, and that you had found some published benchmarks to back it up. I don't really know what the best way would be to go about the tests, but to see the kind of results I expect, take a look at this:

http://www.mysqlperformanceblog.com/2010/07/14/on-benchmarks-on-ssd/

The huge drop in performance comes when every cell has been written to at least once and the automatic garbage collection kicks in... every write from that point forward has to do an erase before it can write. With TRIM support, the OS does this in advance so that you always have good write speed. The one thing I'm not sure about, however, is whether on a busy server the TRIM command would have the same impact as garbage collection... unless it's smart enough to run only when disk activity slows down.

If I had the time and resources, the test I would really like to see is a script that inserts into a MySQL table for long enough that garbage collection kicks in, to see what happens to the insert speed. Of course, this would require some deletes too, to keep the drive from filling up. And then the same test on a system with TRIM enabled, to see if there is any difference in performance between the two. I imagine someone has already done this testing; I just haven't had any luck finding published results.
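Something along these lines is what I have in mind - just a rough sketch, and the database/table names and batch sizes are made up, so treat it as an outline rather than a finished test:

Code:
#!/bin/bash
# Rough sketch: time batches of inserts into a scratch InnoDB table,
# with periodic deletes so the table never completely fills the drive.
mysql -e "CREATE DATABASE IF NOT EXISTS trimtest;
          CREATE TABLE IF NOT EXISTS trimtest.t (id INT AUTO_INCREMENT PRIMARY KEY, pad CHAR(255)) ENGINE=InnoDB;
          INSERT INTO trimtest.t (pad) VALUES (REPEAT('x',255));"
i=0
while true; do
    start=$(date +%s)
    # copy up to 100000 existing rows back into the table (roughly doubles it early on)
    mysql -e "INSERT INTO trimtest.t (pad) SELECT pad FROM trimtest.t LIMIT 100000;"
    echo "batch $i took $(( $(date +%s) - start ))s"
    # every 10th batch, delete the oldest rows so the drive doesn't fill up
    if [ $((i % 10)) -eq 9 ]; then
        mysql -e "DELETE FROM trimtest.t ORDER BY id LIMIT 500000;"
    fi
    i=$((i + 1))
done

Run the same loop on a box with TRIM enabled and compare the batch times over a few hours; that difference is what I'm after.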

Curtis
 
Hi,
Yes, I assume you see the effect on all SSDs - but with cheap SSDs it's extreme, and I hope that with a "good" one you can still use it in production. But we will see after the test.

I had previously done a test with an SSD as a spool disk for backup (many simultaneous writes, one fast read), and the cheap Samsung had lower performance than a SATA disk after a short time. The two Intel SSDs (RAID-0, SLC) do a good job - enough to stream 115 MB/s to the LTO-4 drive (with writes going on at the same time).

Udo
 
I'm assuming the Samsung you tested was one of their earlier offerings and not their newer 470 series? The 470 series has great reviews, which is why I ended up going with it.

Side note... the Intel drives also support TRIM, which is what led me to believe that they suffer from the same performance issues as everyone else's. If they didn't need TRIM to stay optimal, it seems like they would not have bothered offering TRIM support.

Curtis
 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]

Good news. I found a workaround that I've tested under Proxmox 1.8 with a Samsung 470 series drive. The wiper.sh utility that comes with hdparm effectively does the same thing as the automatic TRIM (aka "discard") feature built into newer kernels. Unfortunately, wiper.sh does not come with the hdparm package in the default repository, but I downloaded and installed hdparm version 9.37 from SourceForge (http://sourceforge.net/projects/hdparm/) and it works. This is with the pve-kernel-2.6.32 kernel (since I'm using OpenVZ). The only downside is that you have to tweak the wiper.sh script to be able to run it from cron, because the current version does not have a switch to override the confirmation prompt.
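Roughly, the steps look like this - the tarball name, install paths and device are just examples, and the cron line assumes you've either tweaked the script as described or pipe a "y" into it:

Code:
# grab hdparm 9.37 from the SourceForge page above, then:
tar xzf hdparm-9.37.tar.gz
cd hdparm-9.37
make && make install                  # installs the newer hdparm binary
cp wiper/wiper.sh /usr/local/sbin/    # wiper.sh ships in the wiper/ subdirectory

# nightly cron entry (e.g. /etc/cron.d/wiper) - without --commit the script only does a
# dry run, and the stock script still asks for confirmation, hence the "echo y":
30 3 * * * root echo y | /usr/local/sbin/wiper.sh --commit /dev/sda1 >> /var/log/wiper.log 2>&1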

Speaking of the confirmation prompt... I have only tested wiper.sh with a Samsung 470 series drive. You will definitely want to run your own tests to make sure there is no data loss on other brands. In fact, I'll have to report back later on whether things continue to be stable over the long haul. But, so far it looks very encouraging.

This is good news for me, because I have an application that really needs the speed of an SSD and am trying to keep everything under the Proxmox umbrella.
 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]

How do I update the thread title with a [SOLVED] tag?
This is not a "solution"... it's a workaround. Ext4 has native TRIM support in the filesystem... the wiper.sh approach is quite different.
 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]

Yes, I agree that it is not as elegant as the automatic TRIM (discard) support found in ext4 (and actually ext4 isn't enough... you also need kernel support), but my tests do show that it works. The problem I was trying to resolve was the performance degradation that occurs in SSDs over time, and wiper.sh seems to do the trick. But yes, if your only goal is to have TRIM working, then this is not a solution, only a workaround.
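For comparison, the "real" route on ext4 with a new enough kernel is just the discard mount option - the device and mount point here are only examples:

Code:
# /etc/fstab entry with online TRIM enabled (needs ext4 plus a kernel that supports discard)
/dev/sdb1   /var/lib/vz   ext4   defaults,noatime,discard   0   2

# or, for an already-mounted filesystem:
mount -o remount,discard /var/lib/vz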

Side note... there were two tests that I performed to make sure it was working:

1. The first test I did was the one posted on this page: http://askubuntu.com/questions/18903/how-to-enable-trim (running the wiper.sh script after the "rm" command), and it worked. I did notice that with smaller file sizes (such as the one suggested here: http://techgage.com/print/enabling_and_testing_ssd_trim_support_under_linux ) it didn't always seem to work right away; I had to delete more files before the wiper.sh script was able to clear the sector. So, while wiper.sh does not zero out every deleted file immediately, it should clear enough of them that the SSD always has "fresh" storage space to write to. (A rough sketch of this check follows the list.)

2. The second test was filling up the SSD and then repeatedly deleting and rewriting files until performance degradation set in. The drive went from the 220+ MB/sec range down to around 100 MB/sec. Running the wiper.sh tool instantly brought it back to the original speed.
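For reference, the check from that askubuntu page boils down to something like the following - the mount point, device and sector number are of course just examples:

Code:
# create a small test file and make sure it hits the disk
dd if=/dev/urandom of=/mnt2/trimtest.bin bs=1M count=1
sync
# find the starting LBA of the file...
hdparm --fibmap /mnt2/trimtest.bin
# ...and read that sector directly; it should show non-zero data
hdparm --read-sector 123456789 /dev/sde
# delete the file and run wiper.sh (or let the discard mount option handle it)
rm /mnt2/trimtest.bin
wiper.sh --commit /dev/sde1
# re-read the same sector; if the TRIM worked, it now comes back as all zeroes
hdparm --read-sector 123456789 /dev/sde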

So yeah, this is not as cool as having automatic TRIM, but it will let me use SSDs with Proxmox, which is what I was looking for. It will be great when/if Proxmox gets TRIM support (which, actually, the "KVM only" kernel probably already has). Although what I really expect to happen is that LXC will mature and OpenVZ will eventually fade away, which would also solve the problem, since LXC is supported in the latest kernels. But that's another topic altogether. ;-)
 
Hi Curtis,
I will do benchmarking - but on a test system. This will take a little time, because I'm on holiday next week.
...
Hi,
I have done some benchmarks!
In short:
Heavy IO on a RAID-1 (two OCZ Vertex 2 120 GB) on an Areca RAID controller (ARC-1222):
On PVE with ext3, and also with Devil-Linux and ext4 without TRIM, the performance drops after a while to approx. 130 MB/s (from around 200 MB/s before). With Devil-Linux, ext4 and TRIM the performance doesn't drop. But in either case the throughput is not stable.

With one OCZ Vertex 2 connected directly I get different results:
No drop at all (PVE ext4, Devil-Linux ext4 with and without TRIM)!
The throughput is even a little higher without TRIM (approx. 225 MB/s with, 240 MB/s without).
Even with PVE (1.8) and ext4 the result looks very nice (see picture - test time is 4h45min).
pve_ssd_ext4.png

How do I test?
I start 4 bonnie++ processes with different IO sizes,
and take the results of iostat every 10 seconds to see how much data has been read from and written to the device.

Here are the jobs:
Code:
bonnie++ -d /mnt/testdir1 -s 1G -x 1000 -r 512 -u root > /root/da_1ssd/bonnie01_notrim.output &
bonnie++ -d /mnt/testdir2 -s 2G -x 500 -r 512 -u root > /root/da_1ssd/bonnie02_notrim.output &
bonnie++ -d /mnt/testdir4 -s 4G -x 200 -r 512 -u root > /root/da_1ssd/bonnie04_notrim.output &
bonnie++ -d /mnt/testdir8 -s 8G -x 100 -u root > /root/da_1ssd/bonnie08_notrim.output &

and

iostat -dmxt 10 sdd > iostat_da_1ssd_notrim_output_10sec.txt
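For anyone who wants to pull the throughput numbers out of the iostat log afterwards, something like this works - the column layout depends on the sysstat version, hence the header lookup:

Code:
# print the write throughput (wMB/s) for sdd from the 10-second iostat samples
awk '/Device/ { for (i = 1; i <= NF; i++) if ($i == "wMB/s") col = i }
     $1 == "sdd" && col { print $col }' iostat_da_1ssd_notrim_output_10sec.txt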
Summary:
1. It looks to me like it's no problem to use SSDs with PVE.
2. I need to look for a current firmware for my RAID controller (the one installed is pretty old) - perhaps with a newer one the results will be better.

These benchmarks don't show what happens with LVM storage...

Udo
 
Hi Udo,

We are testing ext4 for PVE 2.0, and we get very bad fsync rates (with pveperf, compared to ext3). Can you confirm that?
 
Hi Dietmar,
No, this looks OK to me - here are the results for an ext4-formatted SSD:
Code:
CPU BOGOMIPS:      24083.05
REGEX/SECOND:      942485
HD SIZE:           110.03 GB (/dev/sde1)
BUFFERED READS:    186.70 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND:     3376.65

mount
/dev/sde1 on /mnt2 type ext4 (rw,noatime)

pve-manager: 1.8-18 (pve-manager/1.8/6070)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-33
pve-kernel-2.6.32-4-pve: 2.6.32-33
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.28-1pve1
vzdump: 1.2-14
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6
I can reformat the same disk as ext3 and post the result soon.

Udo
 
Can you also post 'cat /proc/mounts' (regarding ext4) for these tests?
 
Oops,
I was too fast -- the disk is now ext3... just a moment.

This time without the mount option "noatime":
Code:
CPU BOGOMIPS:      24083.05
REGEX/SECOND:      992743
HD SIZE:           110.03 GB (/dev/sde1)
BUFFERED READS:    186.31 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND:     3267.72

cat /mnt/proc_mounts 
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev tmpfs rw,relatime,size=10240k,mode=755 0 0
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
usbfs /proc/bus/usb usbfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/sda1 /boot ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/sde1 /mnt2 ext4 rw,relatime,barrier=1,nodelalloc,data=ordered 0 0
/dev/sdb1 /mnt vfat rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=utf8,shortname=mixed,errors=remount-ro 0 0
Udo
 
