I really don't know what to do. I've tried to restore one VM and launch just that one, but nothing changed. Should I redo the ZFS pool? Or maybe reinstall the whole system?
Could it be a chipset problem? Can I plug in a SATA expander with a Marvell chipset, or is that considered just as problematic as a RAID controller? It's this one:
http://www.speeddragon.com/index.php?controller=Default&action=ProductInfo&Id=609
It doesn't manage RAID, it's just a SATA chipset with...
Mmmh, okay, a different implementation; now I understand better.
Well, I don't know how to identify whether it's a zvol. I restored the VMs from backup and they showed up as disks themselves, so raw partitions? They are not files like qcow2, I mean. But maybe I need to do something special to check?
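If it helps, a quick way to check is to list the pool's block volumes and look at how the VM disks are attached (a sketch; the pool name `zpoolada` comes from the `pveperf` output in this thread, and VMID 100 is just an example — adjust both):

```shell
# List ZFS block volumes (zvols); VM disks stored on ZFS storage normally
# appear here as e.g. zpoolada/vm-100-disk-0. Prints a note if zfs is absent.
zfs list -t volume -o name,volsize 2>/dev/null || echo "zfs not available here"

# Show how a given VM's disks are attached (ide/sata/scsi/virtio);
# 100 is an example VMID, replace with one of yours.
qm config 100 2>/dev/null | grep -E '^(ide|sata|scsi|virtio)' \
    || echo "qm not available here"
```

If the disks show up under `zfs list -t volume`, they are zvols rather than image files on a filesystem.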
They are on IDE...
So this command that I ran earlier wasn't good either:
root@ns001007:~# pveperf /dev/zpoolada/
CPU BOGOMIPS: 63980.16
REGEX/SECOND: 3302348
HD SIZE: 0.01 GB (udev)
FSYNCS/SECOND: 161446.17
DNS EXT: 43.48 ms
DNS INT: 26.18 ms
I will put in a couple of...
The Samsung is normally irrelevant to my problem, since the Proxmox system is installed on it and of course none of my VMs run on it, so no iodelay there. I always separate the system from data storage or VM hosting.
I will try to find some documentation about the fio-based test because I'm not...
Any idea of what a good workload for an iostat test would be?
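For what it's worth, a minimal fio sync-write test roughly matches what pveperf's FSYNCS/SECOND measures (a sketch; the target path is an assumption — point it at a file on the pool, e.g. /zpoolada/fiotest):

```shell
# Hypothetical fsync-heavy random-write test; set TARGET to a file on the
# ZFS pool you want to measure. Prints a note if fio is not installed.
TARGET=${TARGET:-/tmp/fiotest}
fio --name=syncwrite --filename="$TARGET" --size=64M \
    --rw=randwrite --bs=4k --ioengine=sync --fsync=1 \
    --runtime=10 --time_based 2>/dev/null \
    || echo "fio not installed here"
rm -f "$TARGET"

# In a second shell, watch per-device latency and utilisation while it runs:
# iostat -x 2
```

The `--fsync=1` makes fio sync after every write, which is the pattern that hurts most on ZFS without a fast log device, so it's a reasonable stand-in for the fsync numbers you've been comparing.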
Resetting the VM is a bad idea since it keeps everything in RAM.
I would be really glad to have some clarification about pveperf and whether I'm testing the right locations, since the default test doesn't seem right, etc...
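On the location question: judging from the output above, /dev/zpoolada/ sits on udev's tmpfs (hence "HD SIZE: 0.01 GB (udev)"), so that run benchmarked RAM, not the pool. pveperf takes a filesystem path, so the pool's mountpoint should be the target (a sketch; /zpoolada as the mountpoint is an assumption, confirm it first):

```shell
# Confirm where the pool is actually mounted (prints a note if zfs is absent).
zfs get -H -o value mountpoint zpoolada 2>/dev/null \
    || echo "zfs not available here"

# Benchmark the pool itself; FSYNCS/SECOND here reflects the real disks,
# not udev's tmpfs.
pveperf /zpoolada 2>/dev/null || echo "pveperf not available here"
```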
And I'm more worried about...
Maybe that's better?
root@ns001007:~# pveperf /dev/zpoolada/
CPU BOGOMIPS: 63980.16
REGEX/SECOND: 3302348
HD SIZE: 0.01 GB (udev)
FSYNCS/SECOND: 161446.17
DNS EXT: 43.48 ms
DNS INT: 26.18 ms
root@ns001007:~# pveperf
CPU BOGOMIPS: 63980.16
REGEX/SECOND: 3284731
HD SIZE: 39.25 GB (/dev/dm-0)
BUFFERED READS: 106.39 MB/sec
AVERAGE SEEK TIME: 0.09 ms
FSYNCS/SECOND: 220.90
DNS EXT: 46.27 ms
DNS INT: 102.49 ms
root@ns001007:~# pveperf...
@fireon and @LnxBil Maybe I wasn't clear enough, but I clearly stated that I previously had an Adaptec configuration without ZFS, with RAID 10 on it, and I didn't have any I/O problems with it. I wanted to change because of the security features of ZFS, not really for performance, because I don't believe...
Just in case it's caused by the chipset:
root@ns001007:~# lspci
00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)...
@fireon Since you've made such big publicity for ZFS, I've tried it on my home Proxmox server for testing, and it doesn't really do what it should regarding RAID performance.
But maybe you would be interested in helping...
During my test here, I launched several apps and used others in my Windows 10 VM, with all the other VMs running.
Should I stress test the HDD inside a VM to get more accurate results? Or something else maybe?
It actually seems slower than my Adaptec RAID, and I don't really see why, since it's not a CPU...
Thanks for the RAM indication. I'm such a newbie in this area of knowledge, so...
I did not enable deduplication, indeed. I tried to keep the setup as simple as I could. That's why I removed the Adaptec RAID; I wanted something more efficient than an external solution.
But I did not limit the RAM...
Thanks :)
And more generally, should I expect the same results as with my Adaptec RAID configuration, or maybe my system isn't big enough to be as efficient as that?
Is it possible to know how much RAM is taken by ZFS?
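Yes: on Linux, ZFS accounts its cache (the ARC) in /proc/spl/kstat/zfs/arcstats, and the `size` field is the current ARC usage in bytes. A small sketch (the 8 GiB fallback value is made up so the snippet runs on machines without ZFS):

```shell
# Read the current ARC size from the kernel's ZFS statistics; the arcstats
# file has lines of the form "name type data", so $3 is the value in bytes.
ARCSTATS=/proc/spl/kstat/zfs/arcstats
if [ -r "$ARCSTATS" ]; then
    SIZE=$(awk '$1 == "size" {print $3}' "$ARCSTATS")
else
    SIZE=$((8 * 1024 * 1024 * 1024))   # sample value: pretend an 8 GiB ARC
fi
echo "ARC size: $((SIZE / 1024 / 1024)) MiB"
```

By default the ARC is allowed to grow to roughly half of RAM; it can be capped with the `zfs_arc_max` module parameter if the host needs the memory for VMs.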
Hi,
A little introduction: I had a big problem with high I/O delay and errors in file contents inside VMs on my OVH (provider) server a few months ago, after a consistency error in my mdadm RAID. So I had to completely restore all my VMs to get my setup running again. Since this event...
@JedMeister Of course I know I can report the bugs to you, and I'm actually doing so under the boistordu login. But for example, you had a problem with the rtorrent server in 14.0, which is okay now in 14.1, but it would be nice to have multiple sources to deploy rapidly and not spend several...