Is Writeback caching safe?

cysiacom

Renowned Member
Jan 19, 2015
Hi everyone.

In the Windows 2012 Guest Best Practices (https://pve.proxmox.com/wiki/Windows_2012_guest_best_practices) the following option is recommended:
  • Select Bus/Device: VIRTIO, Storage: "your preferred storage" and Cache: Write back in the Hard Disk tab and click Next.
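For reference, here is a minimal sketch of what that wizard choice ends up looking like in the VM's config file. The VM ID (100), storage name (local-lvm), disk name, and size are made-up examples, not values from the wiki:

```shell
# Hypothetical disk line as it might appear in /etc/pve/qemu-server/100.conf
# after selecting VIRTIO + "Write back" in the wizard (all names are examples).
cat > /tmp/100.conf <<'EOF'
virtio0: local-lvm:vm-100-disk-0,cache=writeback,size=60G
EOF

# Count how many disks in the sample config use writeback caching.
grep -c 'cache=writeback' /tmp/100.conf
```

On a real node you would look at the actual file under /etc/pve/qemu-server/ instead of this sample.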

I've read a lot about which caching option you should choose when setting up a guest. What's more, there are many threads in this forum stating that writeback is dangerous:
http://forum.proxmox.com/threads/10608-Cache-Performance

Sometimes even really dangerous:
http://forum.proxmox.com/threads/8829-Is-it-Cache-writeback-safe

:confused::confused::confused:
What am I missing?
Is it really safe enough to be recommended as a "best practice" in a wiki article?

I am completely confused.

Thanks for any advice or comment.

Valentin
 
Yes, it's safe as long as you don't disable barriers in your guests.
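A quick way to check the guest side of this is to look for mounts with barriers disabled. This is a sketch using a sample /proc/mounts-style file; the devices and mount options are made up for illustration:

```shell
# Flag filesystems mounted with barriers disabled (nobarrier / barrier=0),
# which would make host-side writeback caching unsafe on power loss.
cat > /tmp/mounts.sample <<'EOF'
/dev/vda1 / ext4 rw,relatime,barrier=0 0 0
/dev/vdb1 /data xfs rw,relatime 0 0
EOF

grep -E '(nobarrier|barrier=0)' /tmp/mounts.sample \
  && echo "UNSAFE mounts found" \
  || echo "all mounts keep barriers"
```

Inside a real Linux guest you would grep /proc/mounts directly instead of the sample file.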

(Also note that the Windows virtio driver had a flush bug, which was fixed in the latest version.)

By safe, I mean that you can't break your filesystem in case of a power failure, but of course you can lose the last x seconds of data.
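Those "x seconds" are bounded by how long the host lets dirty pages sit in its page cache before writing them back; on typical Linux defaults vm.dirty_expire_centisecs is 3000 (30 seconds). A small sketch, reading a sample value rather than a live host:

```shell
# How long dirty pages may sit in the host page cache before the kernel
# writes them back (value is in centiseconds; 3000 = 30 seconds).
# Sample value here; on a real host: sysctl vm.dirty_expire_centisecs
echo "vm.dirty_expire_centisecs = 3000" > /tmp/sysctl.sample

awk -F' = ' '{ printf "max unflushed window: %d seconds\n", $2 / 100 }' /tmp/sysctl.sample
```

Guest-issued flushes (barriers) still force data to stable storage sooner than this, which is why keeping barriers enabled matters.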


Also note that writeback is really only useful for certain storage types, like Ceph, or local storage behind a RAID controller with a battery-backed cache (which is already doing writeback caching itself).
 
Spirit: Why is writeback useful only for some storage types? So without a BBU it is not recommended? I can only run my VMs on Gluster when using writeback; without it, my VMs cannot start due to I/O errors.
 
Writeback w/o BBU means you're nearly GUARANTEED TO LOSE DATA in case of problems.
Then it's your choice whether to enable it or not. Maybe in a test environment it's OK, but I wouldn't do it in production. And what is the point of testing in a different configuration than production anyway?
 
NDK73: I understand the concept of a BBU, but I have no idea why writeback would be useful only for some storage types, as spirit wrote.
I have problems booting my migrated VMs on GlusterFS without setting the cache to writeback; I found this solution somewhere. When I create a new VM on top of Gluster, there is no problem at all.