Proxmox on Sata Flash device

twocell

I'm thinking about installing Proxmox on a SATA flash device like this one:

http://www.amazon.com/7-pin-Flash-Drive-Module-Type-1/dp/B003C50AEQ

The advantages are fast reads, reliability, low energy consumption, and freeing up drive bays in the front of the case for storage drives.

I'm planning on storing the virtual machines on a separate Openfiler box using iSCSI over gigabit Ethernet (rough sketch of the client side below).

Downside is slower writes, but that's not too important for the OS, right? How much disk space is necessary for the base install? Are there any other factors I haven't considered?
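To be concrete, the rough setup I have in mind on the Proxmox side is logging in to the Openfiler target with open-iscsi and putting an LVM volume group on the exported LUN - something along these lines, where the portal address, IQN, device name and VG name are all made up:
Code:
# discover and log in to the Openfiler target (open-iscsi package)
iscsiadm -m discovery -t sendtargets -p 192.168.10.20
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.vmstore -p 192.168.10.20 --login

# the LUN shows up as a new block device (here /dev/sdb); put a VG on it
pvcreate /dev/sdb
vgcreate vg_vmstore /dev/sdb
As far as I understand, Proxmox can then use that volume group as LVM storage for the guest disks.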
 
Hi,
don't expect good performance from that SSD - the quoted 40 MB/s write speed is not very high, and it is a peak value. Under simultaneous access, the speed of cheap SSDs drops after a short period of use.
Is this for home use or for business?

Udo
 
Business: it's going to be a mission-critical cluster. I have found devices that promise better performance/reliability, but they are expensive ($250 USD), about the same price as a desktop SSD with far less capacity. I still don't think I've found the right device; the first one I linked to is definitely the WRONG device to use. I gather the MLC NAND devices pay for their higher capacity with a lower number of write cycles, and from what I have read you want an SLC device with a wear-leveling controller. Still, most flash devices are rated at about a million write cycles - at one write per second that's less than 12 days. Information on these DOM flash devices is pretty scarce, though; I'm not sure I've got enough to take a risk on them just yet. On the other hand, this market is changing very quickly, and maybe it's just a matter of finding the right device/manufacturer.

How much writing does Proxmox do? Does anyone have a Proxmox setup with all the virtual machines on another device? I'd love to see some iostats if anyone can post theirs. Can Proxmox run without swap?

I set down this path after noticing that Openfiler charges for preinstalled flash DOM devices. Maybe Openfiler is optimized for embedded use whereas Proxmox isn't.
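For anyone who wants to check this on an existing node, I guess something like the following would give a rough idea of the host's own write volume and swap usage (sda is just an example device; iostat comes from the sysstat package, and the swappiness value is only a starting point):
Code:
# write throughput for one device, sampled every 60 seconds
iostat -d -m sda 60

# total data written since boot, from the kernel counters (field 10 = sectors written)
awk '$3 == "sda" {printf "%.1f MB written since boot\n", $10 * 512 / 1024 / 1024}' /proc/diskstats

# is the node actually swapping?
free -m
swapon -s

# keep swap, but make the kernel use it only under real memory pressure
sysctl -w vm.swappiness=10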
 
Hi,
I did two short tests with an Intel X25-E and one OCZ - the OCZ doesn't look too bad (http://www.amazon.com/OCZ-Technolog...GK/ref=sr_1_40?ie=UTF8&qid=1287242249&sr=8-40).
But I don't use them as a Proxmox disk.

I don't know if it's a good idea to run a virtualization node without swap; normally the system uses a little bit of swap space.

Udo
 
Are there any other factors I haven't considered?

I wanted to do a similar configuration to achieve wider RAID10 spanning for the data partition(s), and it didn't work.
My RAID controller doesn't allow slicing RAID10 virtual disk sizes, and with a span of more than 2 pairs I end up with a volume over 2TB, on which PVE doesn't install.

So I tried installing PVE to a fast (Kingston E-Series, SLC) SSD, which was attached to the motherboard via a SATA II connector.

What happened was that the system wouldn't boot, even when I directed the BIOS to boot from the SATA port rather than the RAID controller.
All I would get was a blank screen - I don't know what happened to GRUB.
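In case it is a GRUB/MBR problem, one thing I may still try is reinstalling GRUB onto the SSD from a rescue shell - roughly like this, assuming the SSD shows up as /dev/sdb and the installer's default "pve" LVM layout (device names are only an example):
Code:
# from a rescue/live system
vgchange -ay pve
mount /dev/pve/root /mnt
mount /dev/sdb1 /mnt/boot        # only if there is a separate /boot partition
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt grub-install /dev/sdb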
 
Hi,
BTW, I got an OCZ disk today (for another server) but first ran some tests with it as a Proxmox disk.
This is the pveperf output with the plain OCZ disk (no RAID controller):
Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      42670.29
REGEX/SECOND:      883425
HD SIZE:           86.05 GB (/dev/mapper/pve-data)
BUFFERED READS:    169.96 MB/sec
AVERAGE SEEK TIME: 0.11 ms
FSYNCS/SECOND:     2326.42
DNS EXT:           62.55 ms
DNS INT:           2.26 ms

Udo
 
Hi,
can you change the disk order (not the boot order) in the BIOS?

Udo

I did, and it didn't help.
The only sections where any disk ordering is mentioned are part of the boot section.
The boot menu includes a sub-entry where I can define whether it boots from "Virtual Disk" (the RAID controller) or "SATA Device".
The entry right under that, the second sub-entry, defines the order within "SATA Device" - there I can change the order of the SATA ports.

I also tried swapping physical connectors on the motherboard, which had no effect.
On another attempt, which still doesn't seem pertinent but I'll mention anyway, I zeroed the SSD in case there was anything going on with the boot sector. Even so, there was no change.
 
PVE's up & running with pve-root on the Kingston E-series SATA II SSD now.

The RAID controller is now freed up to do a >2TB LVM volume, and there are 2 more HDDs in the RAID10 (2 x 2 x 2).

The SSD is a little slower at reads than when pve-root was on the 2 x 2 RAID10, even though the array's seek times are much higher - oh well. We'll see how it pans out.

The values from the LVM storage don't seem realistic, not sure how to get proper numbers from that.

All with the 2.6.35-1 kernel, no irqbalance, and the system idling.

SSD (Kingston E-Series / Intel X25-E)
Code:
Bascule:~# pveperf
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      831448
HD SIZE:           7.14 GB (/dev/mapper/pve-root)
BUFFERED READS:    198.57 MB/sec
AVERAGE SEEK TIME: 0.26 ms
FSYNCS/SECOND:     1901.70
DNS EXT:           2002.69 ms
DNS INT:           2002.28 ms (cluster.sss.local)
Bascule:~# pveperf
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      825323
HD SIZE:           7.14 GB (/dev/mapper/pve-root)
BUFFERED READS:    198.52 MB/sec
AVERAGE SEEK TIME: 0.26 ms
FSYNCS/SECOND:     1867.56
DNS EXT:           2002.70 ms
DNS INT:           2002.60 ms (cluster.sss.local)
Bascule:~# pveperf
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      826885
HD SIZE:           7.14 GB (/dev/mapper/pve-root)
BUFFERED READS:    193.67 MB/sec
AVERAGE SEEK TIME: 0.26 ms
FSYNCS/SECOND:     1895.95
DNS EXT:           2002.70 ms
DNS INT:           2002.59 ms (cluster.sss.local)
Bascule:~# pveperf
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      808783
HD SIZE:           7.14 GB (/dev/mapper/pve-root)
BUFFERED READS:    197.89 MB/sec
AVERAGE SEEK TIME: 0.26 ms
FSYNCS/SECOND:     1867.85
DNS EXT:           2002.69 ms
DNS INT:           2002.64 ms (cluster.sss.local)
Bascule:~# pveperf
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      814357
HD SIZE:           7.14 GB (/dev/mapper/pve-root)
BUFFERED READS:    194.60 MB/sec
AVERAGE SEEK TIME: 0.26 ms
FSYNCS/SECOND:     1849.88
DNS EXT:           2002.81 ms
DNS INT:           2002.59 ms (cluster.sss.local)
Bascule:~#
Dell PERC 6/i, 64KB Stripe, Write-Back, Adaptive Read-Ahead, 6x 7200rpm 1TB RAID10 2 x 2 x 2
Code:
Bascule:~# pveperf /dev/LDatastore0/
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      818920
HD SIZE:           0.01 GB (udev)
FSYNCS/SECOND:     49527.28
DNS EXT:           2002.24 ms
DNS INT:           2002.61 ms (cluster.sss.local)
Bascule:~# pveperf /dev/LDatastore0/
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      804580
HD SIZE:           0.01 GB (udev)
FSYNCS/SECOND:     49458.18
DNS EXT:           2002.38 ms
DNS INT:           2002.26 ms (cluster.sss.local)
Bascule:~# pveperf /dev/LDatastore0/
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      821409
HD SIZE:           0.01 GB (udev)
FSYNCS/SECOND:     49076.27
DNS EXT:           2002.66 ms
DNS INT:           2002.28 ms (cluster.sss.local)
Bascule:~# pveperf /dev/LDatastore0/
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      819992
HD SIZE:           0.01 GB (udev)
FSYNCS/SECOND:     50059.78
DNS EXT:           2002.30 ms
DNS INT:           2002.61 ms (cluster.sss.local)
Bascule:~# pveperf /dev/LDatastore0/
CPU BOGOMIPS:      53203.97
REGEX/SECOND:      820939
HD SIZE:           0.01 GB (udev)
FSYNCS/SECOND:     49606.28
DNS EXT:           2002.72 ms
DNS INT:           2002.59 ms (cluster.sss.local)
Bascule:~#

qmrestore .tgz:
Code:
INFO: 8589934592 bytes copied, 24 s, 341.33 MiB/s
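For anyone reading along, that number comes from restoring a KVM backup with qmrestore; an invocation of that general shape looks roughly like the following (archive path and VM ID are placeholders):
Code:
qmrestore /backup/vzdump-qemu-101.tgz 101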
 
...
The values from the LVM storage don't seem realistic, not sure how to get proper numbers from that.
Hi,
to check pveperf on the LVM storage you can do the following:
Code:
# create a temporary LV in the VG, put a filesystem on it and benchmark that
lvcreate -n testlv -L 20G LDatastore0
mkfs.ext3 /dev/LDatastore0/testlv
mount /dev/LDatastore0/testlv /mnt
pveperf /mnt

# clean up afterwards
umount /mnt
lvremove /dev/LDatastore0/testlv
Udo
 
to check pveperf on the LVM storage you can do the following

Thanks Udo, that's more like it.
Moving PVE off that disk array and adding another span/pair for the VMs gained ~100 MB/s of read speed over the previous 2 x 2 configuration, which had given this:
Code:
BUFFERED READS:    206.95 MB/sec
AVERAGE SEEK TIME: 9.20 ms
FSYNCS/SECOND:     2544.88
New Test Results:
Code:
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      815116
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    308.32 MB/sec
AVERAGE SEEK TIME: 7.22 ms
FSYNCS/SECOND:     2425.52
DNS EXT:           2002.83 ms
DNS INT:           2002.65 ms (cluster.sss.local)
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      806488
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    314.94 MB/sec
AVERAGE SEEK TIME: 7.37 ms
FSYNCS/SECOND:     2456.48
DNS EXT:           2002.41 ms
DNS INT:           2002.23 ms (cluster.sss.local)
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      824835
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    315.17 MB/sec
AVERAGE SEEK TIME: 7.18 ms
FSYNCS/SECOND:     2439.31
DNS EXT:           2002.81 ms
DNS INT:           2002.60 ms (cluster.sss.local)
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      817236
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    312.32 MB/sec
AVERAGE SEEK TIME: 7.27 ms
FSYNCS/SECOND:     2483.49
DNS EXT:           2002.53 ms
DNS INT:           2002.69 ms (cluster.sss.local)
Bascule:~#
Most recent qmrestore to that disk:
Code:
INFO: 85903540224 bytes (86 GB) copied, 1049.03 s, 81.9 MB/s
 
