special interest thread: qcow2 woes and interesting findings

RolandK

Famous Member
Mar 5, 2019
this is not meant to make users/customers uncertain about product quality, but whoever is deeper into using qcow2 may want to have a look at https://bugzilla.proxmox.com/show_bug.cgi?id=7012 - it's about a case where qcow2 performs pathologically slowly, and while searching for the cause, i found other interesting things that raise questions.

these findings may also be of interest in the context of other performance issues.

just by the way - how/where are YOU using qcow2?

do you know about the "vm freezing on qcow2 snapshot removal" issue?

furthermore, i'd like to use this thread to discuss why proxmox is still not using more modern options for qcow2, which apparently can dramatically improve performance - the improvements have been around for a while ( https://bugzilla.proxmox.com/show_bug.cgi?id=1989 )
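
to give an idea of what is meant by "more modern options" - a rough sketch of creating a qcow2 image with subcluster allocation, a larger cluster size and preallocated metadata (assumed example, not verbatim from the bug report; extended_l2 needs qemu >= 5.2, and the filename/size are placeholders):

# qemu-img create -f qcow2 -o cluster_size=128k,extended_l2=on,preallocation=metadata vm-100-disk-0.qcow2 100G
# qemu-img info vm-100-disk-0.qcow2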

this is NOT a discussion on whether qcow2 should be used with zfs or not.
 
Did you compare zfs with other directory storages? Due to the overhead of qcow2 compared to raw (the filesystem inside the qcow2 plus the zfs filesystem layer underneath), some performance regression is to be expected. I wonder whether zfs behaves differently than btrfs or xfs / ext4 on lvm.
 
i don't see a performance regression. the performance disadvantages of qcow2 on top of zfs are greatly exaggerated, imho.


qcow2:
# dd if=/dev/zero of=/dev/sdb bs=1024k status=progress count=10240
9954131968 bytes (10 GB, 9.3 GiB) copied, 12 s, 829 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 13.0702 s, 822 MB/s

# dd if=/dev/zero of=/dev/sdb bs=1024k status=progress count=10240 oflag=direct
9844031488 bytes (9.8 GB, 9.2 GiB) copied, 7 s, 1.4 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 7.63457 s, 1.4 GB/s

# dd if=/dev/zero of=/dev/sdb bs=1024k status=progress count=10240 oflag=sync
10458497024 bytes (10 GB, 9.7 GiB) copied, 23 s, 455 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 24.0784 s, 446 MB/s

raw:
# dd if=/dev/zero of=/dev/sdc bs=1024k status=progress count=10240
10543431680 bytes (11 GB, 9.8 GiB) copied, 13 s, 811 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 13.6078 s, 789 MB/s

# dd if=/dev/zero of=/dev/sdc bs=1024k status=progress count=10240 oflag=direct
10137632768 bytes (10 GB, 9.4 GiB) copied, 9 s, 1.1 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 9.54142 s, 1.1 GB/s

# dd if=/dev/zero of=/dev/sdc bs=1024k status=progress count=10240 oflag=sync
10543431680 bytes (11 GB, 9.8 GiB) copied, 36 s, 293 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 37.0498 s, 290 MB/s
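
side note: writing zeros with dd can flatter the numbers when compression on the underlying zfs dataset (or zero detection elsewhere in the stack) kicks in. a hypothetical fio run with refilled random buffers and direct i/o may be more representative - like the dd runs above it overwrites /dev/sdb, and fio must be installed in the guest:

# fio --name=seqwrite --filename=/dev/sdb --rw=write --bs=1M --size=10G --direct=1 --ioengine=libaio --iodepth=8 --refill_buffers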