ZFS / cache=writeback / safe ?

Jan 9, 2012
http://pve.proxmox.com/wiki/ZFS
  • kvm config:
  • change cache to Write Back

If this is not set, the following happens:

qm start 4016
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
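
For reference, the disk's cache mode can be adjusted either in the VM configuration file or with qm set. A minimal sketch, assuming the directory storage is named "pve-storage" (that storage name is only an example for illustration, not taken from this thread):

# /etc/pve/qemu-server/4016.conf -- the disk line ends up looking like this:
#   virtio1: pve-storage:4016/vm-4016-disk-1.raw,cache=writeback
# or set the same thing from the shell:
qm set 4016 --virtio1 pve-storage:4016/vm-4016-disk-1.raw,cache=writeback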


http://forum.proxmox.com/threads/8829-Is-it-Cache-writeback-safe?p=49937#post49937
Re: Is it Cache=writeback safe?
NO, that is very dangerous.


To use a secure file system, I have to use an unsafe setting? Isn't that a contradiction?
What exactly can happen in the worst case?
 
You refer to an old post; newer kernels and a newer QEMU version are in place now. For ZFS, cache=writeback is the recommended setting.

You can also set it to cache=writethrough.
 
OK, but what exactly can happen in the worst case of a crash?

ZFS on the host remains consistent.
But what about the guest VMs (Windows NTFS / Linux ext3/4): could the file systems of the VMs be damaged or become inconsistent? Or is only the part that was in the cache missing?
 
Hi,

The guest FS will not be harmed.
Right, in the worst case you lose the cache!
 
And isn't losing the cache a really bad thing for the guest? It is like running a physical OS on a RAID card with cache and no BBU, and that would be a very bad thing if you lose power.
Why should it be OK here instead? ZFS will be fine, but the guest FS will think it has flushed its cache and that its data is safe on disk, when that is not the case.
Thanks in advance
 
Yes as a zvol, but not as a local dir.
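
As a rough sketch of that distinction (the storage names, pool, and path below are placeholders, not taken from this thread), a zvol-backed storage and a plain directory storage are defined differently in /etc/pve/storage.cfg:

# zvols: each VM disk is a block device inside the ZFS pool
zfspool: local-zfs
        pool rpool/data
        content images
        sparse

# "local dir": VM disks are raw/qcow2 files on a ZFS-backed directory
dir: pve-storage
        path /data/pve-storage
        content images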
 
I use ZFS with a local dir, and there it is only possible to use "writeback" and "writethrough".


https://pve.proxmox.com/wiki/Performance_Tweaks
-> Do I understand correctly that cache=writethrough is the safest variant of them all?

I tried this option with my ZFS SSD mirror pool; the performance is good, with a slightly higher "IO delay" than with "writeback", but OK.


But there are two notes in the wiki that I do not understand:

Note: The information below is based on using raw volumes, other volume formats may behave differently.
Then how is it for qcow2?

Avoid to use cache=directsync and writethrough with qcow2 files.
Why? I use qcow2 for all of my VMs.
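
If you are unsure which format a given disk image actually uses, qemu-img can tell you (the path is just the example image from earlier in this thread):

qemu-img info /data/pve-storage/images/4016/vm-4016-disk-1.raw
# the "file format:" line of the output reports raw or qcow2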
 
Yes as a zvol, but not as a local dir.

I *really* wish the GUI would stop me from setting cache=none on a local directory qcow2 file when it is sitting on a ZFS file system :( I wasted about an hour yesterday figuring out what could be causing "exited with error code 1" every time I tried to start my new VM.

Also, is there a table somewhere that summarizes the current best practices for which cache mode to choose (or avoid) per storage type? Digging through the forums, it is hard to tell what is current info vs. old info, and who's full of crap vs. who really knows what they're talking about.
 
I agree, vkhera.

It is hard to find the best practice for this in the forum. It would be nice if it appeared in the wiki (and was kept up to date).
 
You refer to an old post; newer kernels and a newer QEMU version are in place now. For ZFS, cache=writeback is the recommended setting.

You can also set it to cache=writethrough.

Is this still the recommended setting? Currently I'm running all VMs on zvols, with cache = Default (none). Will IO performance be improved with cache=writeback?
 
Of course the IO performance will be increased. Writeback means that the write is acknowledged to the OS as soon as the block is written to the cache, not to disk (please refer to e.g. Wikipedia for cache architectures). Therefore single IO operations hit the cache and not the disk directly. Once it reaches ZFS, it is at least written to the SLOG and can then be replayed.
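
As an aside, the SLOG mentioned here only catches synchronous writes. A minimal sketch of adding and checking one, assuming a pool called "tank" and a placeholder device path (neither comes from this thread):

zpool add tank log /dev/disk/by-id/ata-EXAMPLE-SSD   # dedicated log device
zpool status tank   # the device now shows up under a separate "logs" section
zfs get sync tank   # "standard" means sync writes go through the ZIL/SLOG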

I hate arguments based on "what happens when I unplug the power or push reset". Anything can (and eventually will) happen with such a setup. It does not depend on the storage architecture, the operating system, or which constellation Jupiter is currently in. You will have something in an unknown state; that is normal. Just imagine writing a file to disk and the machine rebooting in the middle of the save process. Naturally, you can throw the file away because it is not in a consistent state. Avoid those situations at all costs. You will always have open and partially written files or blocks on disk, because there is always something open, e.g. logfiles. Yet it will not destroy the data that was already written correctly to disk prior to the crash. This is also independent of the storage or operating system used.
 
So if I understand you correctly, you would prefer the "new" default setting of cache=none when working with zvols? I'm just asking because it worked out of the box with zvols on Proxmox 4, whereas I had to switch to cache=writeback on Proxmox 3.x before (where raw files were used).
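
For comparison, this is roughly how the two setups look in a VM config file; the VM ID, storage names, and size below are made up for illustration:

# Proxmox 4: zvol on a zfspool storage, cache left at the default (none)
scsi0: local-zfs:vm-100-disk-1,size=32G
# Proxmox 3.x style: raw file on a directory storage backed by ZFS
virtio0: pve-storage:100/vm-100-disk-1.raw,cache=writeback,size=32G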
 
