About "cache=none" in PVE

cesarpk

Well-Known Member
Mar 31, 2012
Hi people,

I have a question for anyone who knows QEMU caching well, or for the PVE developer team.

The question is: how does QEMU manage the write cache when a disk is configured with "cache=none"?

Based on this link:
http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaat/liaatbpkvmguestcache.htm

And this link:
http://www.ilsistemista.net/index.p...s-on-red-hat-enterprise-linux-62.html?start=2

My understanding is that the write cache then lives only in the volatile buffer of the RAID controller, or failing that, in the volatile buffer of the HDD itself (if there is no RAID controller). So if we have a RAID controller with a BBU, we should not need to worry about data loss.

But many people have told me that with "cache=none" enabled, QEMU keeps its write cache in the RAM of the host or guest.

Is that correct? And
which QEMU cache options give me this behavior (writes cached only in the buffer of the RAID controller/disk, never in the RAM of the host/guest)?

Best regards
Cesar
 

This is not a problem, because the guest uses fsync to make sure all data has been transferred to the disk.
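To illustrate the point about fsync: a guest filesystem does the equivalent of the sketch below for every durable write. The write() lands in a volatile cache; fsync() is what forces the data down to stable storage. This is a minimal illustration using a temporary file, not anything PVE-specific:

```python
import os
import tempfile

# write() only puts the data into the (volatile) page cache;
# fsync() blocks until the kernel reports the data stable on the device,
# which is why a flush-respecting storage stack keeps the guest safe.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"critical data\n")
    os.fsync(fd)  # returns only after the data has been pushed toward the device
finally:
    os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data)
```

Whether fsync reaches the platters or stops at a battery-backed controller cache depends on the storage stack below the guest, which is exactly what this thread discusses.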
 

Hi Dietmar, thanks for your answer, but can you be more specific? The most important question for me is: which QEMU cache options give me this behavior (writes cached only in the buffer of the RAID controller/disk, never in the RAM of the host/guest)?

The context is this: in case of a power loss, the VM's data is what matters most.

Note: I am not worried about losing data that sits in the buffer of the HDD itself (without a RAID controller), since I use DRBD and the HDD buffer sits at a layer below DRBD (the HDD buffer can also be managed by DRBD).

Best regards
Cesar

Edited to add: where does fsync move the data?
Possible answers:
1- From RAM to the buffer of the RAID controller/HDD
2- From the buffer of the RAID controller/HDD to the disk platters
3- From RAM to the disk platters
 

Normally, the fsync is issued by the guest, to flush the data from every buffer down to the disk platters (guest buffer, host buffer, disk buffer).

But a RAID controller with cache and battery can fake this, and reply to the guest that the data is already written to disk.

Note that you could try cache=directsync; it disables the write buffer on the guest side too, so every write is synchronous.
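For reference, in Proxmox VE the cache mode is set per virtual disk. A hedged example of trying cache=directsync (the VM ID, bus slot, and storage/volume names below are placeholders for your own setup):

```shell
# Set cache=directsync on an existing virtio disk of VM 100
# (VM ID "100", slot "virtio0", and volume "local-lvm:vm-100-disk-0"
#  are placeholders; use the values from your own VM config).
qm set 100 --virtio0 local-lvm:vm-100-disk-0,cache=directsync

# Verify the resulting config line
qm config 100 | grep virtio0
```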
 

Thanks spirit for your answer, but I would like to ask about this scenario, and I will be very grateful if you can answer me:

1- I have configured my RAID controller in write-back mode.
2- The RAID controller has a BBU.

Which QEMU cache option should I use so that my VMs' disk writes land only in the buffer of the RAID controller, and not in any other memory or on the disk platters?

Best regards
Cesar
 

cache=directsync disables both the host buffer and the guest buffer.

As for the disks' own write buffers: generally, the RAID controller disables them when it manages the cache itself (LSI cards work that way, for example).
 

Many thanks spirit for your answer, and please let me ask one last question, for this scenario:

1- Each PVE host has a single HDD without a RAID controller.
2- Both PVE hosts run DRBD (DRBD manages the HDD's buffer).

I want the VMs to use only the HDD's buffer for writes, never RAM. Which QEMU cache option should I use? (In this case DRBD manages the HDD buffer, and this must be transparent to the VMs.)

Best regards
Cesar
 

For this case, you need the buffer enabled on the host (which runs DRBD), so I think cache=writethrough (cache enabled on the host, cache disabled in the guest).


To summarize the cache modes:

directsync: host cache disabled, guest cache disabled
writethrough: host cache enabled, guest cache disabled
none: host cache disabled, guest cache enabled
writeback: host cache enabled, guest cache enabled
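Internally, QEMU documents these named modes as shorthand for two independent flags: cache.direct (open the image with O_DIRECT, bypassing the host page cache) and cache.writeback (whether the guest-visible disk advertises a volatile write cache). A small sketch of that mapping (the helper function names are mine, for illustration only):

```python
# QEMU cache modes decomposed into the two underlying flags:
#   direct    -> O_DIRECT: bypass the host page cache
#   writeback -> the emulated disk advertises a volatile write cache
#                (so the guest must issue flushes for durability)
CACHE_MODES = {
    "writeback":    {"direct": False, "writeback": True},
    "none":         {"direct": True,  "writeback": True},
    "writethrough": {"direct": False, "writeback": False},
    "directsync":   {"direct": True,  "writeback": False},
}

def uses_host_cache(mode):
    """Host page cache is in play unless O_DIRECT is used."""
    return not CACHE_MODES[mode]["direct"]

def guest_sees_write_cache(mode):
    """Guest-side write caching follows the writeback flag."""
    return CACHE_MODES[mode]["writeback"]

print(uses_host_cache("none"), guest_sees_write_cache("none"))
```

This matches the summary above: for example "none" means no host page cache (direct I/O) but a guest-visible write cache, so the guest still relies on fsync/flush for durability.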
 

Spirit, thanks again for your answer, and excuse me please if I keep coming back to this topic, but I still don't understand it very well.
If I enable write buffering on the host (via QEMU with writethrough), and I also have the DRBD write cache enabled to gain write speed (because I use standard SATA disks), then I will have two write caches on the same host (one from QEMU and one from DRBD, both in RAM). My understanding is that two write caches on the same host will perform worse, because they consume more CPU cycles and resources in general (data traveling over PCI, etc.).

Given that, can you please explain which option is best for me, and why?

Best regards
Cesar
 

I don't know how DRBD manages its cache, so I can't help there.
I think it should respect fsync (you need to verify this), so directsync won't help here. So I think you need the host buffer plus the DRBD buffer.
cache=none might also be OK (guest buffer without host buffer), plus the DRBD buffer.

I think you need to benchmark it, to see whether the double buffering has an impact.
 
