If I have a single Proxmox server with a number of SATA hard drives (no RAID controller) and I need a storage VM (think FreeNAS or similar), is there a simple way to provide the VM with direct disk access?
In other words, in the interests of disk performance, is there a way to directly connect...
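One approach that seems to fit here is handing the whole block device to a KVM guest rather than putting an image file on it. A rough sketch, where the VM ID 101 and the by-id name are just placeholders for your own setup:

~# ls -l /dev/disk/by-id/                           # pick a stable name for the drive to hand over
~# qm set 101 -virtio1 /dev/disk/by-id/ata-XXXXXX   # attach it to guest 101 as a raw virtio disk

The guest then sees the raw disk, although IO still goes through the hypervisor's block layer, so it is passthrough of the device rather than of the SATA controller.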
Is there an issue with ext4? Is that why you recommend ext3 or xfs?
I know btrfs is still experimental and, being copy-on-write, will have a performance overhead, but in a RAID10 setup I thought the overhead would be mitigated.. Guess I was wrong.. :)
Was originally ext4.. Now set up with Btrfs, and I have attempted using Btrfs in a RAID10 configuration..
# btrfs filesystem df /var/lib/vz
Data, RAID10: total=10.00GB, used=8.10GB
Data: total=8.00MB, used=0.00
System, RAID10: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata...
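For anyone wanting to try a btrfs RAID10 layout like the one above, the usual commands are something along these lines (device names are placeholders, and the convert filters need a reasonably recent btrfs-progs):

~# mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde   # fresh filesystem, data+metadata RAID10

Or, to convert an existing single-device filesystem after adding drives:

~# btrfs device add /dev/sdc /dev/sdd /dev/sde /var/lib/vz
~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /var/lib/vz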
Probably a dumb question, but how do you enable/disable the cache on SATA disks directly? (there is no hardware RAID controller with any form of battery-backed cache)
Thanks..
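For what it's worth, on plain SATA drives the on-disk write cache is usually toggled with hdparm; a quick sketch, assuming the disk is /dev/sda:

~# hdparm -W /dev/sda    # query the current write-cache setting
~# hdparm -W0 /dev/sda   # disable the write cache (safer with no battery backup)
~# hdparm -W1 /dev/sda   # enable the write cache (faster, riskier on power loss)

The setting doesn't necessarily survive a reboot, so it normally goes into a startup script or hdparm.conf.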
I have run those tests and get >100MB/s (similar to the pveperf result seen in the original post).. The issue doesn't appear to be raw throughput but IO/transactional performance, which seems odd..
Thanks for the reply..
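To put a number on the transactional side specifically, a synchronous-write test along these lines (file name and size are arbitrary) gives an IOPS figure in the same spirit as pveperf's FSYNCS/SECOND:

~# fio --name=fsync-test --filename=/vz/fsync-test.dat --size=128m --rw=write --bs=4k --fdatasync=1

That forces a flush after every 4k write, which is roughly the pattern a database or a busy container generates, and is where these boxes seem to fall over.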
We have two servers there now, one Kimsufi and the other OVH.. Both have shocking disk performance..
No hardware RAID on either, but both are at less than 100 FSYNCS/sec, and the Kimsufi one, as above, is at less than 20 FSYNCS/sec..
Even my old Core2 desktop in my office that...
Hi,
Have set up Proxmox VE on an OVH dedicated server using their install..
The disk IO performance is VERY bad.. Has anyone else used their servers and worked out how to speed things up??
Thanks.
~# pveperf /vz/
CPU BOGOMIPS: 44685.28
REGEX/SECOND: 1120717
HD SIZE: 903.80...
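For comparison, a plain sequential write test next to a synchronous one is a quick way to see whether raw throughput or flushed writes are the problem; a sketch, with the test file path being arbitrary:

~# dd if=/dev/zero of=/vz/ddtest bs=1M count=1024 conv=fdatasync   # sequential write, one flush at the end
~# dd if=/dev/zero of=/vz/ddtest bs=4k count=5000 oflag=dsync      # flush every 4k write, tracks the FSYNCS figure
~# rm /vz/ddtest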
For anyone finding this thread..
Don't use balance-rr for your bond.. Although it has the highest raw throughput because it uses the links simultaneously, the VMs' networking doesn't appear to like it very much..
In my testing balance-alb and balance-tlb gave intermittent connectivity issues...
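As a reference point, a bond mode that tends to behave with bridged VM traffic is active-backup (or 802.3ad if the switch supports LACP); a minimal /etc/network/interfaces sketch, with interface names and addresses as placeholders:

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode active-backup
        bond_miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

The VMs keep bridging onto vmbr0 as usual; only the bond mode underneath changes.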
Hi.. I'm not getting the ~950Mbps I would expect to see on a 2x1Gbps link.. I am getting 95Mbps, sometimes 140Mbps.. Still working on it to see if I can work it out..
Out of interest, it seems that the VMs' network failure had something to do with using the balance-rr mode on the bond.. Still...
I have been playing with network bonding over the last two days, and the primary issue I am having is that when I use bonding the networks in the VMs fail.. They just won't connect to the network..
Here is my network config..
# network interface settings
auto lo
iface lo inet loopback
iface...