Is it possible to convert zvols to raw or qcow2 with qemu, for example if you want to move those disks to other servers?
Can you show me a command line for how to do it?
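For reference, a zvol is exposed as a normal block device under /dev/zvol/<pool>/<name>, so qemu-img can read it directly. A minimal sketch, assuming a pool named storagepool and a disk named vm-100-disk-0 (both placeholders for your real names):

qemu-img convert -f raw -O qcow2 /dev/zvol/storagepool/vm-100-disk-0 /mnt/export/vm-100-disk-0.qcow2

Shut the VM down first so the image is consistent, and use -O raw instead if the target server does not need qcow2 features.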
Today I see a very good improvement after changing IDE to SATA... it seems this was the biggest problem. I haven't tested writes yet, but it looks 100% better.
Before installing Proxmox I read the Proxmox ZFS wiki quickly, without checking every step, and these words: If you are experimenting with an installation...
It looks like some VMs have disk type IDE, including the one where I tested writes. I changed that one to SATA and writes are performing a little better, so should I change the other VMs to SATA, or even VirtIO?
Should I use VirtIO? Are guests going to work better on ZFS storage with VirtIO, like before in...
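If you want to try the switch from the CLI instead of the GUI, one way is to detach the disk and re-attach it on a virtio bus. A sketch with qm, assuming VM ID 100 and a volume vm-100-disk-0 on a storage called local-zfs (all placeholders):

qm set 100 --delete ide0                        # detach the IDE disk; it shows up as "unused"
qm set 100 --virtio0 local-zfs:vm-100-disk-0    # re-attach the same volume as virtio0
qm set 100 --boot order=virtio0                 # keep the VM booting from that disk (newer PVE syntax)

Make sure the guest has VirtIO drivers before switching: Linux has them built in, Windows needs the virtio-win drivers installed first.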
I know that with this configuration it can't be a miracle, but even without SSDs, with just an md RAID 1 of plain drives, I had better writes with the same virtual machines... I feel something is not working well in my configuration.
I will shut down all the other VMs early in the morning, leave only one running, and try...
My ZFS is completely dying on writes, very very slow... today I tried to copy a file inside one virtual machine, and look what it shows:
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
...
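(That header is the normal zpool iostat layout; to watch the same counters live while reproducing the stall, assuming the pool is named storagepool, you can run:)

zpool iostat -v storagepool 1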
I am sorry for writing so much, but this problem seems to be hot and needs more attention until I fix it.
fdisk output for one SSD looks like this:
Disk /dev/sda: 279.5 GiB, 300069052416 bytes, 586072368 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes...
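Since the drive reports 512-byte logical / 4096-byte physical sectors, it is worth verifying the pool was created with ashift=12; a pool built with ashift=9 on 4K-sector drives is a classic cause of terrible write performance. One way to check, assuming the pool is named storagepool:

zdb -C storagepool | grep ashift

If it prints ashift: 9, the only real fix is recreating the pool, because ashift is fixed per vdev at creation time.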
Sounds interesting, running with no L2ARC and no separate ZIL (SLOG) device; I guess you have lots of RAM.
Is it safe to set atime=off?
I have mail servers, web servers etc., all of them running independently in their own zvols. Is it safe to turn atime off? I have seen that it increases performance a bit, but can this...
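One nuance: atime only applies to ZFS datasets (filesystems); a zvol is a plain block device, and the guest filesystem inside it keeps its own atime. If you do want it off on the host datasets, it is a single command and takes effect immediately, assuming the pool is named storagepool:

zfs set atime=off storagepool

Child datasets inherit it unless they override it. On newer ZFS, zfs set relatime=on is a middle ground that behaves like the Linux relatime mount option.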
It's okay for the ZFS flush strategy to drop throughput a bit, but not like in my case, down to 0 with big stalls... anyway, can you send me your ZFS config?
Your zfs set commands and your zfs.conf file, please.
I am going to compare mine with yours, guys.
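For anyone following along: on Proxmox the file in question is /etc/modprobe.d/zfs.conf, which holds ZFS kernel module options. A minimal sketch of what one might look like; the values are illustrative, not a recommendation:

# /etc/modprobe.d/zfs.conf
# cap ARC at 8 GiB, keep at least 1 GiB (example values only)
options zfs zfs_arc_max=8589934592
options zfs zfs_arc_min=1073741824

After editing it, run update-initramfs -u and reboot so the options are applied when the module loads.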
In the first output, where you see the log device being written, a copy is running inside a VM, which is a sync operation.
The second output is only the storagepool.
Yes, I have adjusted some ZFS parameters; the IO wait dropped a bit, but not to where it is reasonable.
cat /sys/module/zfs/parameters/zfs_arc_max
8589934592
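Since the symptom is writes that burst and then stall to zero, the other tunables worth reading next to arc_max are the dirty-data limit and the txg timeout, which control how much buffered write data ZFS accumulates before forcing a flush. Just reading them, no changes:

cat /sys/module/zfs/parameters/zfs_dirty_data_max
cat /sys/module/zfs/parameters/zfs_txg_timeout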
zfs get dedup,atime,copies,primarycache,secondarycache
NAME         PROPERTY  VALUE  SOURCE
storagepool  dedup     off    default
storagepool  atime     off    ...
Thank you.
It's working now, and I will open a ticket with OVH and ask them why they created such a problem with the permissions. My disks have passed the tests, which lets me breathe easily, because I have some other problems with IO wait and write performance on ZFS, but I am discussing those in another topic...
That would take days until they reply... their support is not as fast as you think, and I don't want to create new trouble by giving them root access to my server etc. I thought you would help me change the permissions and run a SMART test, because I need to check the health of my drives.
How do I do that test? Should I use something with dd? Can you help with that? As I said in the topic, my two problems are slow write speed and high iowait.
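dd only measures throughput; for drive health you want smartctl from the smartmontools package. A sketch, assuming the drive is /dev/sda:

smartctl -a /dev/sda             # dump SMART health status and attributes
smartctl -t short /dev/sda       # start a short self-test (takes a couple of minutes)
smartctl -l selftest /dev/sda    # read the result once the test has finished

If the drives sit behind a hardware RAID controller, smartctl needs a -d option (for example -d megaraid,N) to reach them.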
What do you mean by that? All my virtual machines are part of storagepool. When I copy a big file it goes fast for the first ~500 MB, then drops to 0 and hangs for a bit, then goes again slowly at 20-30 MB/s or less, then the same again.
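That burst-then-stall pattern is worth reproducing on the host, outside any VM, to see whether the pool itself behaves the same. A simple test with dd, assuming a dataset mounted at /storagepool (conv=fdatasync makes dd wait until the data actually reaches disk, so the number is honest; note that if compression is enabled on the dataset, zeros compress away and inflate the result):

dd if=/dev/zero of=/storagepool/ddtest.bin bs=1M count=4096 conv=fdatasync
rm /storagepool/ddtest.bin

If the host shows the same stalls, the problem is in the pool or its tuning; if the host is fine, look at the VM disk bus and cache mode instead.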