The cache is writeback, as I wrote, and the mount options are the following:
/dev/disk/by-uuid/e4027256-ecf7-4257-ac5a-30c228d2f74a on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
I made no changes to the VM except the scheduler.
Spirit: I made some benchmarks showing similar results for Ceph and Gluster, but Gluster was a little bit faster with every scheduler: http://forum.proxmox.com/threads/20433-Tuning-performance-in-VM-with-sceduler?p=104256#post104256
Blackpaw: I read through those links but did not find much...
I now have a testing cluster running on the latest Proxmox nodes, which have Intel SSD drives, connected with a 10Gb switch. One pool is Gluster on ZFS based on two striped drives, and the second one is a Ceph pool based on two OSD drives on two servers. The VM is the latest Debian with writeback cache and raw format...
Sorry, I meant ordinary SSD drives ;)
I made some of the config changes you posted, and the results from fio are slightly better:
read: 3675 write: 1228
But they are almost the same for the bonnie benchmark. I just do not know which test is closer to real life when it comes to providing storage for VMs.
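For the record, the kind of fio run I mean looks roughly like this (the file path and all parameters here are only illustrative, not necessarily the exact job I used):

    fio --name=randrw-test --filename=/var/lib/vz/fio.test --size=1G \
        --direct=1 --ioengine=libaio --bs=4k --iodepth=32 --numjobs=1 \
        --rw=randrw --rwmixread=75 --runtime=60 --time_based --group_reporting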
Spirit, I can see you have much better results with just ordinary drives. Have you done any tuning on the Ceph pool? Or have you compared Ceph to Gluster in terms of performance?
Well, I have similar results. These days I am trying to decide between Ceph and Gluster. I have a three-node cluster with just two storage nodes connected through a 10Gbps switch. I dedicated two Intel SSDs to Ceph with XFS and two SSDs to Gluster on ZFS on each server. I created a Debian VM with virtio...
I experienced the same today. It is caused by a split-brain of this file on the Gluster storage. Just try gluster volume heal YOUR_STORE info split-brain and you will probably see it in the output. Then you can use splitmount from this address...
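In case it helps, the commands I mean are along these lines (YOUR_STORE is a placeholder for your volume name):

    # list files currently in split-brain on the volume
    gluster volume heal YOUR_STORE info split-brain
    # show the overall heal status of the volume
    gluster volume heal YOUR_STORE info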
Hi,
I want to configure an active-backup bonding interface on Proxmox 3.3 running kernel 3.10, and I need to select the primary slave because it is a faster card connected to a faster switch. But unfortunately, the interface with the lower number, in my case eth2, is automatically selected as the primary slave...
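To make it concrete, the kind of /etc/network/interfaces setup I am after looks roughly like this (interface names and addresses are placeholders; eth1 here stands for the faster card that should be the primary):

    auto bond0
    iface bond0 inet manual
            bond-slaves eth1 eth2
            bond-mode active-backup
            bond-primary eth1
            bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0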
Hi,
I had to restore some files from a backup, but I was unable to find any completely working howto here or on the wiki, which is a pity in my opinion.
This is my working solution for a VM without LVM (a howto for LVM: http://alexeytorkhov.blogspot.cz/2009/09/mounting-raw-and-qcow2-vm-disk-images.html ) ...
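The rough idea, as a sketch (file names and paths are placeholders, the extracted image name may differ, and the details may not match my exact steps):

    lzop -d vzdump-qemu-100-YYYY_MM_DD-HH_MM_SS.vma.lzo                 # decompress the backup if it is .lzo
    vma extract vzdump-qemu-100-YYYY_MM_DD-HH_MM_SS.vma /tmp/restore    # unpack the config + raw disk image(s)
    losetup -f --show /tmp/restore/disk-drive-virtio0.raw               # attach the raw image, prints e.g. /dev/loop0
    kpartx -av /dev/loop0                                               # creates /dev/mapper/loop0p1 for the partition
    mount /dev/mapper/loop0p1 /mnt                                      # mount it and copy the files out
    # afterwards: umount /mnt; kpartx -d /dev/loop0; losetup -d /dev/loop0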
I had backups from ESXi, each stored in a separate directory containing the VM name and date, and inside this directory there was a config file as well as the data files. In my opinion it would also be beneficial to include a button in the GUI to restore a VM from a backup completely instead of...
I experienced this when moving a disk from NFS to another NFS storage. Turning off the machine, moving the disk and turning it on again solved the problem. Luckily this one was not an important VM.
I finally found a solution here - http://forum.proxmox.com/archive/index.php/t-16196.html .
I had to stop pve-cluster, manually kill the dlm_controld and fenced processes, and start pve-cluster again.
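In command form it was roughly this (a sketch; on Proxmox 3.x the services can also be handled via /etc/init.d):

    service pve-cluster stop        # stop the cluster filesystem service
    killall dlm_controld fenced     # kill the stuck lock/fence daemons by hand
    service pve-cluster start       # bring the cluster filesystem back up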
Everything is working now, but it is strange that all of this started just because of restarting an external NFS server...
OK, I tried to restart cman and pve-cluster on every node, and in the console everything was reported as OK.
But now there are these errors on each node:
and now I can see this in the logs of all nodes:
Sep 23 10:43:20 cluster corosync[2794]: [SERV ] Unloading all Corosync service engines.
Sep 23...
Hi,
I currently have a three-node cluster, one node on 3.3 and two on 3.2.
It was running without any problems, but today I had to restart one NFS server.
I received some error logs about vanished connections, and the servers stopped "seeing" each other on the web page.
I experienced this a few months...
Hi,
I am now in the process of migrating some machines from Proxmox 3.2 to Proxmox 3.3. With one machine I get the following error after reaching 100%:
qmp command 'block-job-complete' failed - The active block job for device 'drive-virtio0' cannot be completed
I have successfully done live...
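If someone hits the same thing: one way to at least inspect and abort the stuck job is via the VM monitor; a sketch with a placeholder VM ID (note that cancelling only abandons the move, the disk stays on the source storage):

    qm monitor 100
    qm> info block-jobs
    qm> block_job_cancel drive-virtio0
    # leave the monitor with Ctrl+C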
I have just tried to move a VM drive from NFS to GlusterFS 3.5.2-1 and I can see exactly the same messages. But unfortunately after this nothing happens and the VM freezes. Before canceling the job I was able to see the image with the correct size on my Gluster storage, but no transfer was...