Proxmox 2.0 not likely to have SSD support

Oops,
I was too fast -- the disk is now ext3... just a moment.

This time without the mount option "noatime":
Code:
CPU BOGOMIPS:      24083.05
REGEX/SECOND:      992743
HD SIZE:           110.03 GB (/dev/sde1)
BUFFERED READS:    186.31 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND:     3267.72

Code:
cat /mnt/proc_mounts
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev tmpfs rw,relatime,size=10240k,mode=755 0 0
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
usbfs /proc/bus/usb usbfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/sda1 /boot ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/sde1 /mnt2 ext4 rw,relatime,barrier=1,nodelalloc,data=ordered 0 0
/dev/sdb1 /mnt vfat rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=utf8,shortname=mixed,errors=remount-ro 0 0
Udo

You mount the ext4 with nodelalloc (/dev/sde1 /mnt2 ext4 rw,relatime,barrier=1,nodelalloc,data=ordered 0 0).

I assume removing this option will lead to substantially lower fsyncs/second - can you try this?
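Something along these lines should do it (an untested sketch, device and mountpoint taken from your output above):
Code:
# remount /dev/sde1 without nodelalloc (i.e. with default delayed allocation) and re-run pveperf
umount /mnt2
mount -t ext4 -o relatime /dev/sde1 /mnt2
pveperf /mnt2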
 
Hi Tom,
just tried it (on another system - aptosid, with a two-SSD RAID 1 - but the same effect):
Code:
root@powerbox:~/perf# cat /proc/mounts | grep sde
/dev/sde1 /mnt ext4 rw,noatime,user_xattr,acl,barrier=1,data=ordered 0 0

root@powerbox:/pve/usr/lib/perl5# /pve/usr/bin/pveperf /mnt
CPU BOGOMIPS:      24082.55
REGEX/SECOND:      1207613
HD SIZE:           110.00 GB (/dev/sde1)
BUFFERED READS:    388.24 MB/sec
AVERAGE SEEK TIME: 0.30 ms
FSYNCS/SECOND:     325.84
DNS EXT:           151.51 ms
root@powerbox:/pve/usr/lib/perl5# mount -o remount,nodelalloc /mnt
root@powerbox:/pve/usr/lib/perl5# /pve/usr/bin/pveperf /mnt
CPU BOGOMIPS:      24082.55
REGEX/SECOND:      1194654
HD SIZE:           110.00 GB (/dev/sde1)
BUFFERED READS:    365.07 MB/sec
AVERAGE SEEK TIME: 0.29 ms
FSYNCS/SECOND:     2695.58
DNS EXT:           148.65 ms
Udo
 
Thanks Udo, I opened a new thread; please add your final conclusion there and include some details.
 
I have done some benchmarks!

I know this reply is a bit late (things got busy for me for a while), but thanks for posting your benchmarks, Udo! It's good to know that performance suffered a little with TRIM enabled (I was wondering if that might be the case). So there might actually be an advantage to using the wiper.sh script over TRIM... it takes less than a minute to run on a 128GB SSD, and with the workloads I'm expecting for this server, a daily run will probably be more than enough. As I understand how wiper.sh works, this will come at the cost of one write cycle per day, which is pretty nominal.
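For the record, the kind of cron job I have in mind is roughly this (a sketch only -- the install path, the target filesystem, and answering the prompt by piping "yes" are all assumptions, and the prompt may still need the script tweak mentioned elsewhere in this thread):
Code:
#!/bin/sh
# /etc/cron.daily/trim-ssd -- sketch only; adjust the wiper.sh path and the target filesystem
# wiper.sh defaults to a dry run; --commit makes it actually issue the TRIM commands
echo yes | /usr/local/sbin/wiper.sh --commit /var/lib/vz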

Curtis
 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]

Good news. I found a workaround that I've tested under Proxmox 1.8 with a Samsung 470 series drive. The wiper.sh utility that comes with hdparm effectively does the same thing as the automatic TRIM (aka "discard") feature built into newer kernels. Unfortunately, wiper.sh does not come with the hdparm package available in the default repository, but I downloaded and installed hdparm version 9.37 from SourceForge (http://sourceforge.net/projects/hdparm/) and it works. This is with the pve-kernel-2.6.32 kernel (since I'm using OpenVZ). The only downside is that you have to tweak the wiper.sh script to be able to run it from cron, because the current version does not have a switch to override the confirmation prompt.
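Roughly what I did, in case it saves someone a few minutes (a sketch from memory; grab the hdparm-9.37 tarball from the SourceForge page above first, and you need gcc/make installed):
Code:
# build hdparm 9.37 from source; wiper.sh ships in the wiper/ subdirectory of the tarball
tar xzf hdparm-9.37.tar.gz
cd hdparm-9.37
make && make install
cp wiper/wiper.sh /usr/local/sbin/
# do a dry run first (no --commit) to see what it would actually trim
/usr/local/sbin/wiper.sh /var/lib/vz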

Speaking of the confirmation prompt... I have only tested wiper.sh with a Samsung 470 series drive. You will definitely want to run your own tests to make sure there is no data loss on other brands. In fact, I'll have to report back later on whether things continue to be stable over the long haul. But, so far it looks very encouraging.

This is good news for me, because I have an application that really needs the speed of an SSD and I am trying to keep everything under the Proxmox umbrella.

isparks_curtis,

I've been looking at the wiper.sh thread on the OCZ boards, and there has been a long-running discussion about wiper.sh's capabilities with regard to ext3 and LVM. It was my understanding that wiper.sh still could not TRIM an ext3 partition that was mounted read/write ("online"), and that only ext4 was supposed to support online TRIM. I thought that for ext3 you had to have the filesystem unmounted, and that for LVM to work you had to patch wiper.sh so that it understands it needs to find the offset between the start of the physical disk and the start of the logical volume.
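In other words, my understanding is that the difference boils down to something like this (an untested sketch with made-up device and mount point, just to check I have it right):
Code:
# ext3: offline only (as I understand it) - the filesystem has to be unmounted first
umount /mnt/ssd
wiper.sh --commit /dev/sdb1
mount /dev/sdb1 /mnt/ssd

# ext4: online trim is supposed to work on the mounted filesystem
wiper.sh --commit /mnt/ssd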

I am running my Proxmox 1.9 install on an SSD and while I'm still getting 230MiB/sec read and write in bonnie++ after several months...I know it is only a matter of time before I have to TRIM or do a full wipe. So I am quite curious.

Did you convert to ext4 or change the default LVM layout in any way? I did see somewhere that someone had mounted their SSD somewhere inside /var/lib/vz, which in certain scenarios could explain away my confusion about both LVM and ext*, but I cannot remember if that was you and (honestly) after a long day I do not feel like looking again!
 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]


I had heard the same thing about wiper.sh... that it only works with ext3 if you unmount it first. Because of that, I installed the SSD as a secondary drive so that I could format it ext4. I then edited /etc/fstab and pointed /var/lib/vz to the ext4 partition I had created on the SSD drive. So far, it's working well.
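The fstab entry itself is nothing special -- something like this (the device name and options here are just an example, adjust for your own disks):
Code:
# /etc/fstab -- example only: the SSD's ext4 partition mounted as the Proxmox VZ storage
/dev/sdb1   /var/lib/vz   ext4   defaults,noatime   0   2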

Hope this helps. :)

Curtis
 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]

It does help. I actually ended up with an odd "solution"...doing nothing!

bonnie++ testing was still giving ~230MiB/s read and write after 6 months of the Proxmox system being installed on the SSD. I had 3 or 4 VMs on there, medium use. So I (temporarily) filled up the pve-data LV to ~90% with some ISOs and other crap and then ran bonnie++ 25 times with a 10GB test file. At the peak disk-space usage of each bonnie++ pass, pve-data had 2.5GB free and the whole SSD (including pve-root and /boot) had less than 10GB free. I cannot imagine that this usage level did not push me over the cliff and into the mode where the SSD does garbage collection, especially since it had already been in service for 6 months.
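The stress loop itself was nothing fancy -- roughly this (from memory; same bonnie++ invocation as in my output further down):
Code:
# run bonnie++ 25 times in a row with a 10GB test file on pve-data
for i in $(seq 1 25); do
    bonnie++ -d /var/lib/vz/bonnie/ -s 10G -u root
done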

After all of this abuse the SSD had slowed to ~217MiB/s read and write, and most of the other metrics were down slightly too. So... I deleted the junk files to get my space back and left it as is, with no TRIM.

There is one thing I still wonder about on your setup...and that is whether it broke the backups. It was my understanding that using snapshot-backups required /var/lib/vz to be backed by LVM, not a separate disk. Did you set up LVM on the SSD too, or just go without snapshots (or do they somehow still work!?)?

Thanks! I'm sure this correspondence will be most useful to future visitors.
 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]


I would be interested in knowing what SSD drive you are using. My testing with the Samsung 470 did show a slowdown after I hammered that drive. I'm not sure how any SSD would get around this problem without using either TRIM or wiper.sh. But I guess if you're satisfied with your results, that's all that counts.

There is one thing I still wonder about on your setup...and that is whether it broke the backups. It was my understanding that using snapshot-backups required /var/lib/vz to be backed by LVM, not a separate disk. Did you set up LVM on the SSD too, or just go without snapshots (or do they somehow still work!?)?

Since wiper.sh needs ext4 to trim a mounted filesystem, I set up Proxmox on a standard SATA drive with ext3 and then pointed /var/lib/vz to an ext4 partition on the SSD drive. So, you're right, I don't get LVM snapshots. But I use rsync for backups, and that works fine for my purposes. (And MySQL replication to back up MySQL data without downtime.)
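For what it's worth, the backup itself is just a nightly rsync along these lines (a sketch -- the host and destination path are placeholders):
Code:
# nightly copy of the VZ storage to another box instead of vzdump snapshots
rsync -aH --numeric-ids --delete /var/lib/vz/ backupserver:/backup/proxmox/vz/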

Curtis

 
Re: Proxmox 2.0 not likely to have SSD support [SOLVED]

I would be interested in knowing what SSD drive you are using.

It is an OCZ Agility 2 120GB 3.5-inch drive. Here are a handful of metrics I could think of off the top of my head (the first one intended to show the system install date). None of this is quite 'pure' because I ran them in the middle of the day with 2 Windows VMs and two Ubuntu containers running, and I'm not sure how active the users were at the time:

Code:
hostname:/# ls -la / | grep lost
drwx------   2 root root 16384 Mar  9  2011 lost+found

hostname:/# hdparm -i /dev/sdb

/dev/sdb:

 Model=OCZ-AGILITY2 3.5                        , FwRev=1.28    , SerialNo=OCZ-64R1NENIP89K0N1V
 Config={ Fixed }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=0kB, MaxMultSect=16, MultSect=?8?
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=224674128
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode


hostname:/# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root  9.9G  4.7G  4.8G  50% /
tmpfs                 2.0G     0  2.0G   0% /lib/init/rw
udev                   10M  796K  9.3M   8% /dev
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/mapper/pve-data   87G   32G   55G  37% /var/lib/vz
/dev/sdb1             504M   49M  430M  11% /boot
/dev/sdc1             917G  142G  729G  17% /mnt/******
/dev/sdd1             917G  176G  695G  21% /mnt/*******

hostname:/# pveperf
CPU BOGOMIPS:      19201.19
REGEX/SECOND:      788579
HD SIZE:           9.84 GB (/dev/mapper/pve-root)
BUFFERED READS:    178.46 MB/sec
AVERAGE SEEK TIME: 0.29 ms
FSYNCS/SECOND:     1540.22
DNS EXT:           2993.10 ms
DNS INT:           0.58 ms (domainname.com)

hostname:/# bonnie++ -d /var/lib/vz/bonnie/ -s 10G -u root
Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hostname       10G 66201  99 217962  50 75724  13 68363  94 203649  17  4644   8
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
hostname,10G,66201,99,217962,50,75724,13,68363,94,203649,17,4644.4,8,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
hostname:/#

Seems in line. The prior ~230MiB/s from bonnie++ I referred to was after a few months' use, and the prior ~217MiB/s was after stressing the drive. I looked up the specs for that drive and it looks like I'm down 50-70MiB/s read and write from what the drive is spec'ed to do. I may have given the misleading impression before that I'd only lost ~15MiB/s. :) http://www.newegg.com/Product/Product.aspx?Item=N82E16820227593

Since wiper.sh needs ext4 to trim a mounted filesystem, I set up Proxmox on a standard SATA drive with ext3 and then pointed /var/lib/vz to an ext4 partition on the SSD drive. So, you're right, I don't get LVM snapshots. But I use rsync for backups, and that works fine for my purposes. (And MySQL replication to back up MySQL data without downtime.)

Curtis

Makes sense. The snapshots worked out to be really convenient, but if I weren't depending on them as much, I'd have done the same thing you did.
 
