What I want to say is that native ZFS on Oracle Solaris is much faster than ZFS on Linux. That's why I say ZFS on Linux is not yet ready as production storage. BTRFS, on the other hand, is roughly on par with ext4+RAW in terms of speed.
And you can disable a whole bunch of those features - ZoL will be...
I'm talking about LOCAL target/storage.
Just compare benchmarks of RAW+ext4 with ZFS on the same hardware. ZFS will be much slower and there's nothing you can do about it - there are tons of complaints on the official ZoL tracker (github.com/zfsonlinux) and the devs haven't focused on those problems yet. +...
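For example, a rough way to get comparable numbers is to run the same fio job on both setups (the file path and sizes below are just placeholders, adjust to your hardware):

fio --name=randwrite --filename=/mnt/test/fio.dat --rw=randwrite --bs=4k --size=4G --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based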
@mir Thanks.
@LnxBil yeah, I mean discard=on on PVE.
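In case it helps anyone, enabling it per disk should look something like this (the VMID, storage and volume name are just examples):

qm set 100 --scsi0 local-zfs:vm-100-disk-1,discard=on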
@melanch0lia "10+ VMs" means more than 10 VMs regardless of the load. I don't have any big iowait either (0-8% depending on load) and a load average of 2-5, but I do see peaks of 7-10 while running dd alongside something else, or just while booting up...
Thanks.
This should be in the official wiki.
Small question - does pve-zsync create snapshots itself?
I currently use zfs-auto-snapshot https://github.com/zfsonlinux/zfs-auto-snapshot/blob/master/src/zfs-auto-snapshot.sh
Will pve-zsync still create a snapshot if it cannot reach the other host / fails to replicate...
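For reference, the basic usage I'm looking at is something like this, if I read the wiki correctly (VMID, target host and pool are placeholders):

pve-zsync create --source 100 --dest 192.168.1.2:tank/backup --name nightly --maxsnap 7
pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --name nightly --maxsnap 7 --verbose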
Hello,
Guys, could you share your average load average or iowait on a production host with ZFS (ZVOL) as storage for 10+ VMs?
Also, what load average should be considered critical or problematic?..
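To keep the numbers comparable, something like this should be enough (extended stats sampled every 5 seconds, 3 reports, plus the current load averages):

iostat -x 5 3
cat /proc/loadavg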
Reverting to 2.6.32-39-pve from 2.6.32-41-pve fixes the problem.
I would never have thought this was related.
Repeatedly reproduced on two separate Proxmox VEs.
The only changes I see:
pve-kernel-2.6.32 (2.6.32-164) unstable; urgency=low
* update to latest stable...
Remark: the VLANs are configured INSIDE the Linux guests, not on the Debian host.
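By "inside the guests" I mean the standard Debian-style VLAN interfaces, roughly like this (interface name, VLAN ID and addresses are just examples):

# /etc/network/interfaces inside the guest (requires the vlan package / 8021q module)
auto eth0.100
iface eth0.100 inet static
    address 192.168.100.10
    netmask 255.255.255.0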
Hello,
Linux host 2.6.32-41-pve #1 SMP Sat Sep 12 13:10:00 CEST 2015 x86_64 GNU/Linux
pve-manager/3.4-11/6502936f (running kernel: 2.6.32-41-pve)
After the latest apt-get upgrade, VLAN-tagged networking inside KVM guests stopped working...
Here's iostat -kxz 1 during a file copy.
It seems like a problem with disk load, but it shouldn't be that...
Also, should the write cache on the controller (LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS) be turned ON or OFF? And NCQ?
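For checking the current state, something like this should work (the device name is just an example; sdparm needs to be installed):

sdparm --get=WCE /dev/sda                  # drive write cache enable (1 = on)
cat /sys/block/sda/device/queue_depth      # effective queue depth (NCQ/TCQ)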
Device: rrqm/s wrqm/s r/s w/s rkB/s...
No, the copy really started at the 1st-2nd line.
You can see the jump to 31% I/O wait earlier in the output:
2 2 0 10998664 137136 222376 0 0 18507 76895 35992 73596 6 9 54 31
I have lowered the ARC cache on the fly to 24 GB and will check iostat.
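In case someone asks, lowering it on the fly is done via the module parameter (the value is 24 GiB in bytes; the ARC may take a while to shrink to the new limit, and to make it persistent the same value goes into /etc/modprobe.d/zfs.conf as "options zfs zfs_arc_max=..."):

echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max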
What could be causing the drives to be so slow at writing?
scsi...
Problem reproduced during a file copy.
Here is my "vmstat 1" when I started copying.
Before starting - load average ~2.
After start - load average ~8-9
As I can see, there are some peaks:
# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free...
If I set drop_caches to 2 (to free reclaimable slab objects, including dentries and inodes), my RAM frees up to 52 GB. After a while it gets used up to ~82-84 GB again.
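For reference, by "setting drop_caches to 2" I mean this (sync first so dirty pages are flushed):

sync
echo 2 > /proc/sys/vm/drop_caches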
Here is my vmstat 1 for load average: 1.82, 1.90, 1.98
procs -----------memory---------- ---swap-- -----io---- -system--...