[SOLVED] Can't convert a RAW disk to QCOW2 on local zfs storage

n4bz0r (Member) · Nov 17, 2022
Hello!

For some reason, when I try to convert a RAW disk stored on local-zfs to QCOW2, the format dropdown list is inactive. But if I try to move the disk (Move Disk button) to a remote storage (an SMB share), the dropdown is active.

It seems like I've missed something during the installation that makes it behave this way. Are there any prerequisites for using the QCOW2 format, and is there a way to make QCOW2 work on local storage without having to do something drastic like reinstalling Proxmox? Thanks!

Some info:
- Proxmox version: the latest (sorry, can't check the exact version atm, but it was updated today)
- Repository: no-subscription
- The server only has one SSD (for both the system and guests' data) and Proxmox is installed on a ZFS (RAID 0) partition
- The machines I'm having the issue with are in a cluster
 

Attachments

  • 111(1).png (5.6 KB)
  • 222(1).png (14.4 KB)
  • 333(1).png (6.2 KB)
Qcow2 needs a filesystem to store the qcow2 file on. A ZFS pool storage, however, stores virtual disks on zvol block devices in raw format. If you want to use qcow2, you need a directory storage pointing to the mountpoint of a ZFS dataset.
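A minimal CLI sketch of that setup (the dataset name `rpool/qcow2-images` and the storage ID `zfs-dir` are assumptions; adjust them to your pool layout):

```shell
# Create a ZFS dataset (a filesystem, not a zvol) to hold the qcow2 files;
# by default it gets mounted at /rpool/qcow2-images
zfs create rpool/qcow2-images

# Register it in Proxmox as a directory storage allowed to hold disk images
pvesm add dir zfs-dir --path /rpool/qcow2-images --content images
```

After that, the zfs-dir storage should offer qcow2 in the format dropdown; the GUI's "Move Disk" corresponds roughly to `qm move-disk <vmid> <disk> zfs-dir --format qcow2` (spelled `qm move_disk` on older versions).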
 
> Qcow2 needs a filesystem to store the qcow2 file on. A ZFS pool storage, however, stores virtual disks on zvol block devices in raw format. If you want to use qcow2, you need a directory storage pointing to the mountpoint of a ZFS dataset.
Thanks for the response!

So, basically, I can check 'Disk Image' on the 'local' storage (mounted at /var/lib/vz) and ditch 'local-zfs' for VM data altogether? Or would it be better practice to create a separate directory?

I just tried checking 'Disk Image' on 'local', and indeed the format dropdown becomes available when moving a disk there. But that leads me to a few questions:

- Why isn't this the default? Is QCOW2 not stable enough yet?
- Why are there two storages set up by default? Are there any benefits to storing the disks in RAW format?
 
> Thanks for the response!
>
> So, basically, I can check 'Disk Image' on the 'local' storage (mounted at /var/lib/vz) and ditch 'local-zfs' for VM data altogether? Or would it be better practice to create a separate directory?
I would create another dataset and a directory storage for it. That makes it easier to manage and allows setting different ZFS options (quota, compression, encryption, ...). But yes, you could just store those qcow2 files on the "local" storage.
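For instance (dataset name again an assumption), such options can be set when creating the dataset or changed later:

```shell
# Options applied at creation time ...
zfs create -o compression=zstd -o quota=100G rpool/qcow2-images

# ... or adjusted later on an existing dataset
zfs set compression=lz4 rpool/qcow2-images
zfs set quota=200G rpool/qcow2-images
```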
> I just tried checking 'Disk Image' on 'local', and indeed the format dropdown becomes available when moving a disk there. But that leads me to a few questions:
>
> - Why isn't this the default? Is QCOW2 not stable enough yet?
> - Why are there two storages set up by default? Are there any benefits to storing the disks in RAW format?
Using qcow2 adds extra overhead, since you get more nested layers and, especially, Copy-on-Write (qcow2) on top of Copy-on-Write (ZFS). So raw is usually the better choice.
 
> Using qcow2 adds extra overhead, since you get more nested layers and, especially, Copy-on-Write (qcow2) on top of Copy-on-Write (ZFS). So raw is usually the better choice.
Unless you want tree-like snapshots, or want to move around between snapshots without destroying them.
 
Thank you, that pretty much clears everything up!

To exhaust the topic, if that's not too much: how severe can the performance losses with qcow2 get? And I'm not quite sure, but stacking filesystems should also lead to noticeable additional SSD wear, right?
 
> How severe can the performance losses with qcow2 get?
I have no numbers for you, but you will notice it (which is fine for me).

> And I'm not quite sure, but stacking filesystems should also lead to noticeable additional SSD wear, right?
I don't think so. If the first-level filesystem does a CoW write, the underlying FS performs that write anyway, so there is no "additional" write if you look at the whole stack; it is just handed down to the next level. If you consider combining QCOW2 and ZFS snapshots, though, the picture is different.
 
