Using SSDs with Proxmox 2.0 -- How To?

As suggested elsewhere, I tried formatting with:

mkfs.ext4 -b 4096 -E stride=128,stripe-width=128 /dev/sdb1

Still got roughly the same performance...
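For reference, stride and stripe-width here are counted in filesystem blocks, so 128 x 4 KiB = 512 KiB, which matches a common SSD erase-block size (that's an assumption on my part -- check the drive's datasheet). A quick way to confirm the values actually got recorded in the superblock:

tune2fs -l /dev/sdb1 | grep -iE 'stride|stripe'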
 
Just found out something else interesting. Perhaps it's a Linux thing I just never noticed before, or maybe it's something to do with ext4, but:

# cd /var/lib/vz
# du -h
4.0K ./images
4.0K ./private
4.0K ./root
4.0K ./dump
4.0K ./template/cache
4.0K ./template/iso
4.0K ./template/qemu
16K ./template
4.0K ./lock
40K .

(so 40K total being used, but)

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve2-data 454G 199M 431G 1% /var/lib/vz

df shows 199M being used? This is a 512GB drive, and I always thought the
reason the initial size was smaller (454G in this case) was the
formatting and filesystem overhead, so where is the other ~199MB of space
going in this case? Is this a bug somewhere, or am I just missing something
obvious (most likely... it's early and I haven't slept yet)?
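My best guess so far: df counts filesystem metadata that du can't see, and on ext4 the biggest piece is usually the journal (typically 128 MiB by default on a filesystem this size), plus inode tables and other bookkeeping. Something like this should show it (same pve2-data volume as above):

dumpe2fs -h /dev/mapper/pve2-data | grep -iE 'journal|reserved'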
 
So did you ever get the speed up on your SSD? Which SSD were you using?

No, actually, I never got the speeds any higher on this. I just ran with it and chalked it up to purchasing the wrong drive for the job. The drives I purchased were Crucial M4 512GB drives. I contacted Crucial about the problem and the consensus was that they're not designed for server purposes...

I guess I should have done more research; I just figured ANY SSD would be better than what I had (and in the end, I still think it was, but for probably not much more money it could have been better still).
 
Very interesting discovery that GT1 pointed out:
A disk partition, mounted with discard:
/dev/sda1 on / type ext4 (rw,noatime,nodiratime,discard,data=ordered,errors=remount-ro)
CPU BOGOMIPS: 32549.58
REGEX/SECOND: 1280800
HD SIZE: 9.18 GB (/dev/sda1)
BUFFERED READS: 110.06 MB/sec
AVERAGE SEEK TIME: 0.31 ms
FSYNCS/SECOND: 272.49
DNS EXT: 59.21 ms
DNS INT: 1.05 ms (datanom.net)

The same partition, mounted without discard:
/dev/sda1 on / type ext4 (rw,noatime,nodiratime,data=ordered,barrier=1,errors=remount-ro)
CPU BOGOMIPS: 32550.42
REGEX/SECOND: 1312899
HD SIZE: 9.18 GB (/dev/sda1)
BUFFERED READS: 112.57 MB/sec
AVERAGE SEEK TIME: 0.31 ms
FSYNCS/SECOND: 675.21
DNS EXT: 68.32 ms
DNS INT: 1.07 ms (datanom.net)


An LVM partition, mounted with discard:
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,noatime,nodiratime,discard,data=ordered)
CPU BOGOMIPS: 32549.58
REGEX/SECOND: 1251371
HD SIZE: 88.59 GB (/dev/mapper/pve-data)
BUFFERED READS: 112.36 MB/sec
AVERAGE SEEK TIME: 0.28 ms
FSYNCS/SECOND: 274.38
DNS EXT: 65.60 ms
DNS INT: 0.94 ms (datanom.net)

The same LVM volume, mounted without discard:
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,noatime,nodiratime,data=ordered)
CPU BOGOMIPS: 32550.42
REGEX/SECOND: 1251491
HD SIZE: 81.70 GB (/dev/mapper/pve-data)
BUFFERED READS: 111.12 MB/sec
AVERAGE SEEK TIME: 0.28 ms
FSYNCS/SECOND: 781.10
DNS EXT: 65.11 ms
DNS INT: 1.00 ms (datanom.net)

Disk partition without discard: fsyncs/second up about 2.5x (272 -> 675)
LVM partition without discard: fsyncs/second up almost 3x (274 -> 781)
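For anyone who wants to reproduce the comparison (these look like pveperf runs), you can toggle the option on a live mount and re-run the benchmark, roughly:

mount -o remount,nodiscard /var/lib/vz
pveperf /var/lib/vz
mount -o remount,discard /var/lib/vz
pveperf /var/lib/vz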

Is there something wrong with the backported discard implementation in the current Red Hat 2.6.32 kernel?
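Until that's answered, the usual workaround is to drop discard from the mount options and trim free space on a schedule instead, e.g. with util-linux's fstrim (assuming it's available on your install):

fstrim -v /var/lib/vz

and a cron entry to run it weekly, something like:

echo '@weekly root /sbin/fstrim /var/lib/vz' > /etc/cron.d/fstrim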
 
