I've tried both ZVOLs and raw disk images on a ZFS dataset, and both have the same issue. Any intensive write to the VM's disk makes the load shoot up; it reaches 32 if I let it keep running. I've never seen load that high on ext4 systems just from writes. I've tried almost all 'zfs set'...
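One thing worth checking while the load is high: on Linux, load average also counts threads in uninterruptible sleep (state D), so write-induced load spikes usually mean threads blocked on I/O rather than CPU work. A minimal sketch to see which threads are blocking (the awk filter is just a quick one-liner of mine, not a standard tool; z_wr_iss or txg_sync showing up here would point at the ZFS write path):

ps -eo state,comm | awk '$1 == "D"' | sort | uniq -c | sort -rn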
1) Sadly I only have 32GB of RAM, so I cannot increase the ARC to that size (see the sketch after this list for how I'm reading the current ARC limits).
2) I cannot add more disks to this system, I currently have 2x HDDs in RAID 1.
3) I have an L2ARC, but this does not help much with writes, which are my problem.
4) I will attempt this one.
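Regarding point 1, here is a minimal sketch of how to read the current ARC size, target, and ceiling on ZFS on Linux, and how one would cap it persistently via a module option. The 8 GiB value is only a placeholder for illustration, not a recommendation:

# Current ARC size, target (c) and maximum (c_max)
awk '/^(size|c|c_max) / {printf "%s = %.1f GiB\n", $1, $3/2^30}' /proc/spl/kstat/zfs/arcstats
# Persistent cap, applied at module load (value is a placeholder; overwrites the file)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf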
I'm sorry but I'm confused. Writing zeros is still writing, it's just not a real-world scenario. Going back to the issue: if I write from urandom on an ext4 filesystem I have no issue and get a consistent 104 MB/s. With ZFS on the same hardware, writes crawl to a stop, dropping below 20 MB/s. Both...
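One way to make the two numbers directly comparable is to take the RNG out of the timed path entirely: pre-generate the random data in tmpfs once, then write only that. A minimal sketch, assuming roughly 1 GiB of free RAM for the staging file:

# Stage random data in RAM first, so only the filesystem write is timed
dd if=/dev/urandom of=/dev/shm/rand.bin bs=1M count=1024
dd if=/dev/shm/rand.bin of=/root/test bs=1M conv=fsync status=progress
rm /dev/shm/rand.bin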
It looks like the block size is the same for my LUKS devices.
root@zfs-test ~ # blockdev --getbsz /dev/mapper/sda5_crypt
512
root@zfs-test ~ # blockdev --getbsz /dev/mapper/sdb5_crypt
512
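Worth noting: blockdev --getbsz reports the kernel's soft block size, which is commonly 512 regardless of the drive underneath. For the ashift question, the logical and physical sector sizes of the actual disks are the relevant numbers:

blockdev --getss /dev/sda    # logical sector size
blockdev --getpbsz /dev/sda  # physical sector size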
I've tried these options on my pool, but still no luck in decreasing the load, sadly.
Recent additions:
zfs set sync=disabled pool0
zfs set checksum=off pool0
zfs set atime=off pool0
zfs set redundant_metadata=most pool0
zfs set xattr=sa pool0
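To confirm the properties actually took effect, the whole set can be read back in one call (and one caveat worth stating: checksum=off removes ZFS's ability to detect corruption, so it's only reasonable as a temporary test, not a tuning):

zfs get sync,checksum,atime,redundant_metadata,xattr pool0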
ZVOL options at the moment:
root@zfs-test ~ # zfs get...
Tried setting sync=disabled on the pool, but no real performance increase. On a server with ext4 on LUKS and mdadm RAID 1, I get about 104 MB/s writing from urandom, which is disappointing since I expected better performance from ZFS. I'm really not sure why ZFS causes such poor...
I found this issue posted for ZoL: https://github.com/zfsonlinux/zfs/issues/7787#issuecomment-412508089
Is anyone else having issues with high load when doing writes?
I have haveged running on both the host and the VM, so available entropy is pretty high: currently 3599 on the host and 1841 on the VM. I'm running the dd tests from the VM. Unfortunately the load still shoots up regardless of available entropy.
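For anyone reproducing this, those figures are the kernel's entropy pool estimate, readable as shown below; note also that on Linux /dev/urandom never blocks on this estimate, so entropy shouldn't be the throughput limiter here in the first place:

cat /proc/sys/kernel/random/entropy_avail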
So it seems that only dd from /dev/urandom kills the server; /dev/zero doesn't cause the load to shoot up much. Question is, why does writing random data (the write pattern itself is sequential either way) hurt my zpool so much?
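A quick sanity check that separates the cost of generating the data from the cost of storing it: read both sources straight into /dev/null so the pool isn't involved at all. If urandom alone tops out near the ZFS figure, the bottleneck is the RNG (or the CPU it shares with LUKS), not the pool:

dd if=/dev/zero of=/dev/null bs=1M count=4096
dd if=/dev/urandom of=/dev/null bs=1M count=4096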
Thank you for your suggestion. I just tried to turn compression and atime off on the pool. This did not increase performance unfortunately. My processor does support AES-NI and has it enabled.
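For completeness, here's how I'd double-check both of those claims from the shell; cryptsetup benchmark measures in-kernel cipher throughput, which puts an upper bound on what the LUKS layer can pass through:

grep -m1 -o aes /proc/cpuinfo   # prints "aes" if the CPU flag is present
cryptsetup benchmark            # per-cipher throughput, including aes-xts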
I've read that I should be using ashift=12 even if my disks report 512-byte sectors rather than 4096? I am going to try...
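One caveat worth flagging: ashift is fixed per vdev at creation time, so trying this means recreating the pool. A minimal sketch, assuming the pool name and LUKS devices from the earlier posts:

# Destroys existing data; ashift cannot be changed on an existing vdev
zpool create -o ashift=12 pool0 mirror /dev/mapper/sda5_crypt /dev/mapper/sdb5_crypt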
I'm experimenting with a mirrored ZFS pool on top of LUKS devices. I've been performing write tests with dd in a VM using VirtIO SCSI, specifically with these options:
dd if=/dev/urandom of=/root/test bs=4096 status=progress
It seems like the load shoots up very high, but I am unsure what the...
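As written, that dd has no count, so it runs until interrupted, and bs=4096 makes the urandom read path expensive. A bounded variant, plus two ways to watch where the load comes from while it runs in a second terminal (the 4 GiB count is just an assumption, picked to be large enough to defeat caching):

dd if=/dev/urandom of=/root/test bs=1M count=4096 conv=fsync status=progress
# In a second terminal:
zpool iostat -v pool0 1   # per-vdev throughput inside the pool
iostat -x 1               # sysstat package; per-device utilisation below LUKS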
I really need someone to help me with IPv6. I can't figure it out, and I've been trying on and off to troubleshoot it for a month with no success. I have a Hetzner server, and IPv4 in the VMs works fine, but I can't for the life of me get IPv6 working. I'll post some configs to troubleshoot...
I've discovered why the connection is dropping. Not sure how to fix it though...
Here's an example of what I see when I look at the ARP table:
Address HWtype HWaddress Flags Mask Iface
5.x.xxx.33 ether 0c:96:12:f5:f0:a7 C eth0
Entries: 1...
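To catch exactly when that neighbour entry goes away, something like this should show it happening in real time; eth0 is the interface from the table above:

ip -s neigh show dev eth0   # current entries with state and usage timers
ip monitor neigh            # streams neighbour-table changes as they happen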
I'm using a bridged config and it works fine without LUKS. With the system encrypted and dropbear used to unlock the disk at boot, everything still works. But once I install Proxmox and configure the network, it is only stable until I close my SSH session. So basically everything goes fine until I close my ssh...