Proxmox 1.8 adds cache=none, VMs do not boot on FUSE?

adoII

Hi,
I have upgraded Proxmox to 1.8, which adds cache=none to the kvm options for raw disks.

However: if I start a VM whose image is on a zfs-fuse filesystem, the VM won't start. I have debugged it this far:

kvm -drive file=/tank/images/136/vm-136-disk-1.raw,if=virtio,index=0,cache=none,boot=on
gives:
kvm: -drive file=/tank/images/136/vm-136-disk-1.raw,if=virtio,index=0,cache=none,boot=on: could not open disk image /tank/images/136/vm-136-disk-1.raw: Invalid argument

When I change cache=none to cache=writeback, it works:

san04:/var/log# kvm -drive file=/tank/images/136/vm-136-disk-1.raw,if=virtio,index=0,cache=writeback,boot=on

Strangely enough, when I move the image to NFS storage, the VM can be started with cache=none.

Does anybody know what is happening here? Is it simply that fuse/zfs-fuse does not support kvm's cache options?

For now I have worked around it by adding cache=writethrough to the vmid.conf files.
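
The resulting drive line in /etc/qemu-server/136.conf looks roughly like this (the storage name "san" is just from my setup):

virtio0: san:136/vm-136-disk-1.raw,cache=writethrough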
 
Aah,
found the reason: FUSE filesystems do not support O_DIRECT, which is exactly what cache=none uses to open the image.
Maybe Proxmox could check this before magically adding cache=none to the options.
That would have saved me some time.
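
By the way, an easy way to test whether a filesystem supports O_DIRECT at all is a direct-I/O read with GNU dd (the path is just my image from above, any file on the filesystem will do):

dd if=/tank/images/136/vm-136-disk-1.raw of=/dev/null bs=4096 count=1 iflag=direct

On zfs-fuse this fails with the same "Invalid argument", while on local ext3 or on NFS it just reads the block.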
 
I do not think that a lot of people are using FUSE-based filesystems for storing virtual disks ...

And I am sorry that we do not "magically" know that you are storing your disks on a filesystem that is neither supported nor recommended.
 
I agree that one should have good reasons to use FUSE filesystems for virtual machines.

In the case of ZFS, there are good reasons to use it in some scenarios.

For example, you can snapshot your running VM and transfer the big initial copy to another Proxmox machine, which might take hours for large images over gigabit.

Later you power down the VM, transfer an incremental snapshot of only a few hundred megabytes, and power the machine up on the other host with minimal downtime.
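
A rough sketch of how this looks with zfs send/receive (the hostname is an example, and the dataset layout is just from my setup; I give each VM its own dataset under tank/images so the snapshots are per-VM):

# while the vm keeps running: snapshot and do the big initial transfer
zfs snapshot tank/images/136@base
zfs send tank/images/136@base | ssh otherhost zfs recv tank/images/136

# later: power the vm down, send only the small increment, start it on the other host
zfs snapshot tank/images/136@final
zfs send -i @base tank/images/136@final | ssh otherhost zfs recv -F tank/images/136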

This is an acceptable way to do a near-live migration, e.g. for setups where there is no SAN.

Incremental snapshots are also great for backups. Using ZFS we image 40 VMs, some of them larger than 100 GB, in 30 minutes.
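
The backups are essentially the same send/receive run on a schedule; a minimal sketch with example host names and dates (one send per VM dataset, done in a loop for all 40):

zfs snapshot -r tank/images@2011-04-06
zfs send -i @2011-04-05 tank/images/136@2011-04-06 | ssh backuphost zfs recv -F tank/backup/136
zfs destroy -r tank/images@2011-04-05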