Search results

  1. Storage VM

    If I have a single Proxmox server with a number of SATA hard drives (no RAID controller) and I need a storage VM (think FreeNAS or similar), is there a simple way to provide the VM with direct disk access? In other words, in the interests of disk performance, is there a way to directly connect...
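    A hedged sketch of one way to do this with a KVM guest: Proxmox's qm tool can attach a whole host disk to a VM via its stable /dev/disk/by-id path. The VM ID (100), the bus slot and the disk ID below are placeholders, not values from this thread.

        # attach a whole SATA disk to VM 100 as a VirtIO block device;
        # the by-id path keeps the mapping stable across reboots
        qm set 100 -virtio1 /dev/disk/by-id/ata-<model>_<serial>

    This passes through the block device rather than the controller, so the guest sees a plain virtual disk (no SMART data); passing the whole SATA controller through via PCI is the alternative, but it needs IOMMU support.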
  2. VERY slow disk IO on OVH dedicated server..

    Is there an issue with ext4? Is that why you recommend ext3 or xfs? I know btrfs is still experimental and, being copy-on-write, will have a performance overhead, but in a RAID10 setup I thought the overhead would be mitigated. Guess I was wrong. :)
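    If the copy-on-write overhead itself is the worry, one common mitigation (a sketch, not something from this thread; the path is an example) is to disable COW for the directory that holds the VM images:

        # new files created under here are written in place (NOCOW)
        chattr +C /var/lib/vz/images
        # or disable COW filesystem-wide at mount time
        # mount -o nodatacow /dev/sdX /var/lib/vz

    Note that +C only affects files created after the flag is set, and NOCOW also disables btrfs data checksumming for those files.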
  3. VERY slow disk IO on OVH dedicated server..

    Was originally ext4. Now set up with Btrfs and have attempted using Btrfs in a RAID10 configuration.

        # btrfs filesystem df /var/lib/vz
        Data, RAID10: total=10.00GB, used=8.10GB
        Data: total=8.00MB, used=0.00
        System, RAID10: total=16.00MB, used=4.00KB
        System: total=4.00MB, used=0.00
        Metadata...
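    For reference, a minimal sketch of how a four-disk btrfs RAID10 volume like the one above is typically created (device names are examples only):

        # stripe+mirror both data and metadata across the four disks
        mkfs.btrfs -d raid10 -m raid10 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
        mount /dev/sda3 /var/lib/vz   # mounting any member device mounts the whole volume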
  4. VERY slow disk IO on OVH dedicated server..

    Appears to be on:

        hdparm -W /dev/sd[abcd]

        /dev/sda:
         write-caching =  1 (on)
        /dev/sdb:
         write-caching =  1 (on)
        /dev/sdc:
         write-caching =  1 (on)
        /dev/sdd:
         write-caching =  1 (on)
  5. VERY slow disk IO on OVH dedicated server..

    Probably a dumb question, but how do you enable/disable the cache on SATA disks directly? (There is no hardware RAID controller with any form of battery-backed cache.) Thanks.
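    Answering the question above in sketch form: the drive's own cache is toggled with hdparm, no RAID controller involved (the device name is an example):

        hdparm -W /dev/sda    # query the current write-cache setting
        hdparm -W0 /dev/sda   # disable the volatile write cache
        hdparm -W1 /dev/sda   # re-enable it

    With no battery-backed cache, the write cache plus barriers/fsync is what keeps data safe on power loss, which is also why fsync-heavy workloads are slow here.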
  6. VERY slow disk IO on OVH dedicated server..

    I have run those tests and get >100MB/s (similar to the pveperf result seen in the original post). The issue doesn't appear to be raw throughput but IO/transactional performance, which seems odd.
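    One way to measure the transactional side separately from raw throughput is a small synchronous-write test; this fio invocation is only a sketch (the path and sizes are arbitrary) approximating what pveperf's FSYNCS/SECOND reports:

        # 4k writes with an fsync after every write
        fio --name=fsync-test --directory=/vz --rw=write --bs=4k \
            --size=128m --fsync=1 --ioengine=sync

    Big sequential MB/s numbers won't expose this bottleneck; the fsync rate will.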
  7. VERY slow disk IO on OVH dedicated server..

    On the slower of the two servers:

        root@in1:~# hdparm /dev/sd[abcd]

        /dev/sda:
         multcount    = 16 (on)
         IO_support   =  0 (default)
         readonly     =  0 (off)
         readahead    = 256 (on)
         geometry     = 121601/255/63, sectors = 1953525168, start = 0

        /dev/sdb:
         multcount    = 16 (on)...
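    For a quick sequential-read sanity check on the same drives (cached vs buffered reads), the usual companion command is:

        hdparm -tT /dev/sda /dev/sdb /dev/sdc /dev/sdd

    Good numbers here alongside a poor pveperf FSYNCS/SECOND would point at fsync/barrier behaviour rather than the drives themselves.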
  8. VERY slow disk IO on OVH dedicated server..

    Didn't change anything; it's as it came. I have never changed anything on my test PC either, it just worked out of the box.
  9. VERY slow disk IO on OVH dedicated server..

    Thanks for the reply. We have two servers there now: one is Kimsufi and the other is OVH. Both have shocking performance. No hardware RAID on either, but both are at less than 100 FSYNCS/sec, the Kimsufi one as above at less than 20 FSYNCS/sec. Even my old Core2 desktop in my office that...
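    A rough way to compare all of these machines on the same footing, without pveperf, is a small synchronous-write loop with dd (the target path is arbitrary):

        # 1000 x 4k writes, each forced to stable storage before the next
        dd if=/dev/zero of=/vz/dd-sync-test bs=4k count=1000 oflag=dsync
        rm /vz/dd-sync-test

    The writes-per-second this reports should roughly track pveperf's FSYNCS/SECOND figure.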
  10. VERY slow disk IO on OVH dedicated server..

    Hi, I have set up ProxmoxVE on an OVH dedicated server using their install. The disk IO performance is VERY bad. Has anyone else used their servers and worked out how to speed things up? Thanks.

        ~# pveperf /vz/
        CPU BOGOMIPS:    44685.28
        REGEX/SECOND:    1120717
        HD SIZE:         903.80...
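    Two low-risk things worth checking first on a box like this (a diagnostic sketch, not a fix from the thread):

        # what filesystem and mount options (barriers?) is /vz using?
        mount | grep /vz
        # which I/O scheduler are the disks on?
        cat /sys/block/sd[abcd]/queue/scheduler

    Barriers plus the drives' write caches are the safe default but cost fsync performance; disabling barriers without a battery-backed cache trades that safety away and isn't recommended.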
  11. Bonding causes VM networking to fail.

    For anyone finding this thread: don't use balance-rr for your bond. Although it has the highest raw throughput because it uses the links simultaneously, the VMs' networking doesn't appear to like it very much. In my testing, balance-alb and balance-tlb gave intermittent connectivity issues...
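    As a starting point, a minimal sketch of a bond setup that tends to be friendlier to bridged VMs: active-backup needs no switch support, while 802.3ad needs LACP configured on the switch. The interface names and addresses below are placeholders.

        auto bond0
        iface bond0 inet manual
            slaves eth0 eth1
            bond_miimon 100
            bond_mode active-backup

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0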
  12. Bonding causes VM networking to fail.

    Hi. I'm not getting the ~950Mbps on a 2x1Gbps link which I would expect to see; I am getting 95Mbps, sometimes 140Mbps. Still working on it to see if I can work it out. Out of interest, it seems that the VMs' network failure had something to do with using the balance-rr mode on the bond. Still...
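    To separate bond throughput from guest problems, a host-to-host iperf run plus a check of the negotiated link speeds is a quick sanity test (10.0.0.2 and the interface names are placeholders):

        # 95 Mbit/s is suspiciously close to a 100 Mbit link, so check each slave first
        ethtool eth0 | grep -i speed
        ethtool eth1 | grep -i speed
        # then measure between the hosts: receiver...
        iperf -s
        # ...and sender, with several parallel streams so more than one slave can be used
        iperf -c 10.0.0.2 -P 4 -t 30

    A single TCP stream normally rides one slave only, so roughly 940 Mbit/s per stream is the realistic ceiling in most modes other than balance-rr.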
  13. Bonding causes VM networking to fail.

    I have been playing with network bonding over the last two days, and the primary issue I am having is that when I use bonding the networking in the VMs fails. They just won't connect to the network. Here is my network config:

        # network interface settings
        auto lo
        iface lo inet loopback
        iface...
  14. No Quorum!!

    Thanks Dietmar, that solved it. Sorry if these are "newbie" issues. Now I can get back to trying to get NIC bonding to work reliably. :)
  15. No Quorum!!

    Also getting lots and lots of these messages in syslog...
  16. No Quorum!!

    Great!! That worked to get the services to start. The issue now is that from the web interface on cnode1 I am not able to manage cnode2; I just get "Unable to get IP for node 'cnode2' - node offline? (500)". How do I get the web interface working again so I can manage both nodes?
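    The web interface looks the other node up by name, so a first check (a sketch; the addresses and domain are placeholders) is that each server resolves the other, typically via /etc/hosts, and that the cluster sees both members:

        # /etc/hosts on both nodes
        192.168.1.11   cnode1.example.local cnode1
        192.168.1.12   cnode2.example.local cnode2

        # cluster membership as the nodes see it
        pvecm status
        pvecm nodes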
  17. No Quorum!!

    Hi, I seem to be having a problem with my two-node test cluster. It was working before I tried to set up network bonding; since then I can't seem to get the cluster up. I have changed the network config on both servers back to just using the NIC directly on vmbr0, but still not having any...
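    For anyone landing here with the same symptom: on a two-node cluster, losing one node's vote is enough to lose quorum. The usual workaround on a test setup (a sketch, not a production recommendation) is to lower the expected vote count so the cluster services can start:

        pvecm expected 1   # tell the cluster one vote is enough for quorum
        pvecm status       # confirm quorum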
  18. Public cloud storage architecture suggestions..

    I'll have to do some testing. As I said in a previous post, I think the Sheepdog design is the best I have seen for VMs, but GlusterFS is far more mature by comparison, so if I am going to attempt a clustered storage backend over a single SAN/NAS then GlusterFS is probably the more logical option.
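    If it helps anyone experimenting along the same lines: mounting an existing GlusterFS volume on a Proxmox host is just a FUSE mount (the server and volume names are placeholders), and the mount point can then be added as ordinary "Directory" storage:

        apt-get install glusterfs-client
        mkdir -p /mnt/gluster-vmstore
        mount -t glusterfs gluster1.example.local:/vmstore /mnt/gluster-vmstore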
  19. Public cloud storage architecture suggestions..

    That all sounds very interesting. To me, Sheepdog sounds like the best-designed system for running VM disk images, and that seems to be reflected in your IO numbers. Looking forward to seeing how it develops. What version of KVM is currently in ProxmoxVE?
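    The packaged KVM version is easy to check on the host itself; these are standard commands rather than anything specific to this thread:

        pveversion -v   # lists the versions of the PVE packages, including the KVM/QEMU build
        kvm --version   # asks the emulator directly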
