Absolutely agree.
I will try to run more tests... but for now I will keep sync=standard, and only if I run into serious issues while working will I think about temporarily disabling it. And if I find time (and resources) I will rebuild the RAID properly for ZFS.
Thanks for your comments and...
On the source...
EDIT: After some tests, indeed, a simple VM migration doesn't seem to severely impact IO delay on the destination. Almost the same with sync=standard or sync=disabled.
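For anyone who wants to repeat this comparison: a minimal sketch, assuming a VM with ID 100, a destination node called pve2 and a pool named rpool (all placeholder names, not taken from this thread), is to live-migrate the VM and confirm which sync mode the destination pool is using before each run:

# live-migrate VM 100 to node pve2 (VM ID and node name are examples)
qm migrate 100 pve2 --online

# on the destination, check the sync mode in effect for that run
zfs get sync rpool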
Sudden reboot...? Man, what kind of HW do you have...?
...just joking, sorry :p
Yes, this was a newbie...
It's not a throughput issue... it's a problem with IO delay completely stalling the rest of the VMs... The same issue arises when migrating any VM between nodes... so I don't understand why this occurs if writes are not synced.
Can you point to any other scenario for this, please?
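One way to check whether sync writes really are what stalls the other VMs (just a sketch; rpool as the pool name is an assumption, it isn't stated here) is to watch the pool and the physical disks on the affected node while the migration or heavy write is running:

# per-vdev bandwidth and IOPS, refreshed every second
zpool iostat -v rpool 1

# per-device utilization and latency (iostat comes from the sysstat package)
iostat -x 1

If the SSDs sit near 100% utilization with high await during those bursts, the stall is at the pool level and every VM on that pool will feel it.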
It's my main...
I'm considering enabling sync only for critical VMs (see the sketch below)... The rest are used for remote development, so in the worst case I'm happy as long as at least the FS doesn't get corrupted, which should be guaranteed by ZFS.
Keeping sync disabled for the root zpool also enables, if I'm not wrong, the extra...
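A rough sketch of that per-VM idea, assuming the pool is called rpool and the critical VM's disk is the zvol rpool/data/vm-101-disk-0 (both names are only examples; Proxmox usually keeps zvols under <pool>/data): keep the relaxed default on the pool and re-enable sync only where it matters.

# check the current pool-wide default
zfs get sync rpool

# honour the guest's sync requests again, but only for the critical VM's disk
zfs set sync=standard rpool/data/vm-101-disk-0

# list which datasets override the inherited value
zfs get -r -s local sync rpool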
After reviewing many, many ZFS parameters I've decided to set sync=disabled on the pool (we have a UPS and a battery-backed controller) and the problem is gone...
Thx.
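For completeness, the pool-wide change described above is a one-liner; a minimal sketch, assuming the pool is named rpool (the post doesn't say):

# treat all writes as asynchronous; only reasonable with UPS + battery-backed cache,
# and a few seconds of recent writes can still be lost on power failure
zfs set sync=disabled rpool

# verify that every child dataset inherits the new value
zfs get -r sync rpool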
Sorry, most probably I'm missing something... but at this point I think there is something else going on... Looking at this:
This is the result of running CrystalDiskMark inside a VM on node 1 (the case studied in this thread); a comparable host-side fio test is sketched below.
Getting...
Yes, I know which VM is the problem (it's not always the same one: large data being moved between disks, intense DB work, etc...), but I guess that a single VM shouldn't hang the whole node and the rest of the VMs... Right? Thx
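To put those guest-side numbers next to what the host itself can do, a rough host-side equivalent of the sync-write part of that benchmark can be run with fio (the file path, size and runtime below are just examples):

# 4k random writes with an fsync after every write, i.e. worst-case sync load
fio --name=syncwrite --filename=/rpool/data/fio.test --rw=randwrite \
    --bs=4k --size=1G --ioengine=libaio --iodepth=1 --numjobs=1 \
    --fsync=1 --runtime=60 --time_based

# clean up the test file afterwards
rm /rpool/data/fio.test

Running it once with sync=standard and once with sync=disabled on the dataset should show how much of the stall comes from synchronous writes.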
PVE version 5.2-2
Kernel Version:
- Linux 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200)
CPU:
- 24 x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz (2 Sockets)
Storage:
- 8 x SSD SATA3 INTEL S3520 800GB 6Gb/s 3D MLC in a ZFS pool (HW RAID 5, ...yeah, we didn't know ZFS...
We're using HW RAID because we didn't know all the ZFS features and recommendations when we initially set up the system (we assumed HW RAID is always better... and we were using EXT4); a possible ZFS-native layout is sketched below.
The SSD disks are all 8 x SATA3 INTEL S3520 800GB 6Gb/s 3D MLC, and about RAM...
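For reference, if the pool is ever rebuilt "the ZFS way" (controller in HBA/JBOD mode so ZFS sees the 8 SSDs directly), one common layout for VM workloads is striped mirrors; a sketch with placeholder pool and device names (tank and SSD1..SSD8 are not the real ones):

# RAID10-like layout: 4 mirrored pairs striped together, ashift=12 for 4k sectors
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/SSD1 /dev/disk/by-id/SSD2 \
    mirror /dev/disk/by-id/SSD3 /dev/disk/by-id/SSD4 \
    mirror /dev/disk/by-id/SSD5 /dev/disk/by-id/SSD6 \
    mirror /dev/disk/by-id/SSD7 /dev/disk/by-id/SSD8

RAIDZ2 over the 8 disks is the other obvious option and gives more usable space, at the cost of IOPS.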
Hi, first of all, hello to everybody. I'm new as a user, but I've been reading you for a long time looking for support. Thx!
My question... We have a production PVE cluster (4 nodes + 2 test nodes), updated to the latest 5.1. It has been running for more than 1 year and now we have jumped to ZFS (I've...