Search results

  1. Tuning performance in VM with sceduler

    It is 4184 and 1049 iops on cfq and ceph, so it is a little bit slower than writeback.
  2. Tuning performance in VM with sceduler

    So after setting barrier=0 and rebooting just to be sure, the results on gluster with cfq are 4648 and 1161, so it is even lower than with barriers in my case. (A mount sketch for the barrier option follows after this list.)
  3. Tuning performance in VM with sceduler

    Cache is writeback as I wrote, and the mount options are the following: /dev/disk/by-uuid/e4027256-ecf7-4257-ac5a-30c228d2f74a on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered). I made no changes to the VM except the scheduler.
  4. Tuning performance in VM with sceduler

    The VM is a default Debian installation with ext4, running on a virtio drive.
  5. ceph performance seems very slow

    Spirit: I made some benchmarks showing similar results for ceph and gluster, but gluster was a little bit faster with every scheduler http://forum.proxmox.com/threads/20433-Tuning-performance-in-VM-with-sceduler?p=104256#post104256 Blackpaw: I read through those links but did not find much...
  6. Tuning performance in VM with sceduler

    I now have a testing cluster running on the latest Proxmox nodes, which have Intel SSD drives, connected with a 10gb switch. One pool is gluster on zfs based on two striped drives, and the second one is a ceph pool based on two osd drives on two servers. The VM is the latest Debian with writeback cache and raw format...
  7. ceph performance seems very slow

    Sorry, I meant ordinary ssd drives ;) I made some of the config changes you posted and the fio results are slightly better: read: 3675, write: 1228. But almost the same for the bonnie benchmark. I just do not know which test is closer to real life for providing storage for a VM. (An example fio invocation is sketched after this list.)
  8. ceph performance seems very slow

    Spirit, I can see you have much better results with just ordinary drives. Have you done some tuning on the ceph pool? Or have you compared ceph to gluster in terms of performance?
  9. ceph performance seems very slow

    Well, I have similar results. These days I am trying to decide between ceph and gluster. I have a three node cluster with just two storage nodes connected through a 10gbps switch. I dedicated two Intel SSDs to ceph with xfs and two SSDs to gluster on zfs on each server. I created a Debian VM with virtio...
  10. Could not read qcow2 header: Operation not permitted (500) on GlusterFS volume

    I experienced the same today. It is caused by a split-brain of this file on the gluster storage. Just try gluster volume heal YOUR_STORE info split-brain and you will probably see it in the output. Then you can use splitmount from this address... (The heal command is sketched after this list.)
  11. can not select primary slave when bonding in active-backup mode in the "right" way

    Hi, I want to configure an active-backup bonding interface on Proxmox 3.3 running kernel 3.10, and I need to select the primary slave because it is a faster card connected to a faster switch. But unfortunately the interface with the lower number is automatically selected as the primary slave, in my case eth2... (An interfaces sketch follows after this list.)
  12. restore a directory from vma.lzo backup

    Hi, I had to restore some files from a backup but I was unable to find any completely working howto here or on the wiki, which is a pity imo. This is my working solution for a VM without LVM (howto for LVM: http://alexeytorkhov.blogspot.cz/2009/09/mounting-raw-and-qcow2-vm-disk-images.html)... (The rough steps are sketched after this list.)
  13. vzdump feature-request

    Thank you, I somehow missed this feature. Very nice!
  14. vzdump feature-request

    I had backups from ESXi stored each time in separate directories containing the VM name and date, and inside each directory there was a config file as well as the data files. From my point of view it would also be beneficial to include a button in the GUI to restore a VM from a backup completely instead of...
  15. storage migration virtio failed

    I experienced this when moving a disk from nfs to another nfs storage. Turning off the machine, moving the disk and turning it on again solved the problem. Luckily this one was not an important VM.
  16. [SOLVED] restarted nfs - web frontend stopped working

    I finally found a solution here - http://forum.proxmox.com/archive/index.php/t-16196.html . I had to stop pve-cluster, manually kill the dlm_controld and fenced processes, and start pve-cluster again. Everything is working now, but it is strange that all of this starts just from restarting an external nfs server... (The recovery steps are sketched after this list.)
  17. [SOLVED] restarted nfs - web frontend stopped working

    Ok, I tried to restart cman and pve-cluster on every node, and on the console everything was reported OK. But now there are these errors on each node, and I can now see this in the logs of all nodes: Sep 23 10:43:20 cluster corosync[2794]: [SERV ] Unloading all Corosync service engines. Sep 23...
  18. [SOLVED] restarted nfs - web frontend stopped working

    Hi, I currently have a three node cluster, one node on 3.3 and two on 3.2. It was running without any problem, but today I had to restart one nfs server. I received some error logs about vanished connections and the servers stopped "seeing" each other on the web page. I experienced this a few months...
  19. storage migration virtio failed

    Hi, I am now in the process of migrating some machines from Proxmox 3.2 to Proxmox 3.3. With one machine I get the following error after reaching 100%: qmp command 'block-job-complete' failed - The active block job for device 'drive-virtio0' cannot be completed. I have successfully done live...
  20. Updated gluster possible?

    I have just tried to move a VM drive from nfs to glusterfs 3.5.2-1 and I can see exactly the same messages. But unfortunately after this nothing is happening and the VM has frozen. Before canceling the job I was able to see the image with the correct size on my gluster storage, but no transfer was...
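
Notes on a few of the results above (sketches only, not taken from the original posts):

Result 2 mentions testing with barrier=0. A minimal sketch of how that remount might look inside the VM, reusing the device UUID shown in result 3; disabling barriers trades crash safety for write speed and is shown here only to illustrate the tunable:

    # remount the root ext4 filesystem without write barriers (temporary)
    mount -o remount,barrier=0 /
    # to persist across reboots, the fstab line would carry the option, e.g.:
    # UUID=e4027256-ecf7-4257-ac5a-30c228d2f74a  /  ext4  errors=remount-ro,barrier=0  0  1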
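
Results 1, 2 and 7 quote read and write IOPS from fio, but the exact job file is not shown in the snippets. A generic 4k random I/O invocation of the kind such numbers usually come from, with all job names and sizes assumed:

    # assumed fio jobs, not the ones used in the thread
    fio --name=randread --rw=randread --bs=4k --size=1G --ioengine=libaio --direct=1 --iodepth=32
    fio --name=randwrite --rw=randwrite --bs=4k --size=1G --ioengine=libaio --direct=1 --iodepth=32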
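
Result 10 suggests checking for split-brain on the GlusterFS volume. The command as described there, with YOUR_STORE standing in for the actual volume name:

    # list files in split-brain on the volume backing the Proxmox storage
    gluster volume heal YOUR_STORE info split-brain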
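
Result 11 describes wanting a specific primary slave for an active-backup bond. A sketch of the ifenslave-style /etc/network/interfaces stanza this usually involves on Proxmox 3.x; the address and the choice of eth3 as the faster card are assumptions, only bond-primary reflects the point of the post:

    # assumed example stanza, not the poster's actual config
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        slaves eth2 eth3
        bond-mode active-backup
        bond-primary eth3
        bond-miimon 100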
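
Result 12 refers to a working solution for restoring files from a vma.lzo backup of a VM without LVM. A rough sketch of the usual steps; the archive name, VM id and partition numbering are placeholders, not taken from the post:

    # decompress the vzdump archive and unpack the raw disk images
    lzop -d vzdump-qemu-100-2014_09_23-00_00_01.vma.lzo
    vma extract vzdump-qemu-100-2014_09_23-00_00_01.vma /tmp/restore
    # map the partitions of the extracted raw image and mount the one you need
    kpartx -av /tmp/restore/disk-drive-virtio0.raw
    mount /dev/mapper/loop0p1 /mnt/restore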
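
Result 16 lists the recovery steps for the stuck web frontend. Rendered as shell commands on a Proxmox 3.x node, following that description:

    service pve-cluster stop
    killall dlm_controld fenced    # the processes named in the post
    service pve-cluster start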
