Help with TRIM on Virtio SCSI Single

Tmanok

Hi Everyone,

I'm wondering whether or not VirtIO SCSI Single supports TRIM in guests. I have observed it not working automatically during disk migration with a hook script, but when the disk controller is set to VirtIO SCSI, it works as expected with no other changes. Evidently, I'd prefer to use VirtIO SCSI Single due to the performance gains, notably with iothread.
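For context, a hook script of the kind I mean looks roughly like this (a simplified, hypothetical sketch, not my exact script):

```bash
#!/bin/bash
# Hypothetical sketch of a PVE hook script, attached with:
#   qm set <vmid> --hookscript <volume>
# PVE invokes the script with the VM ID and the current phase as arguments.
vmid="$1"
phase="$2"

if [ "$phase" = "post-start" ]; then
    # Trim all mounted filesystems via the QEMU guest agent
    # (requires the agent to be installed and enabled in the guest).
    qm guest exec "$vmid" -- fstrim -a
fi
```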

Is this issue known? Does anyone have any solution for this?
Thank you,


Tmanok
 
Hi,
it works on my setup. Please share the output of `pveversion -v` and `qm config <ID>` with the ID of your VM. What is the source and target storage of the move operation? What exactly are you doing in the hook script, and how do you invoke it? FYI, you can just install the QEMU guest agent inside the VM and then enable the option to trim cloned disks.
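For example (the VM ID `103` and the volume name are placeholders; this assumes the qemu-guest-agent package is installed inside the guest):

```bash
# Enable the guest agent and automatic fstrim after clone/move operations:
qm set 103 --agent enabled=1,fstrim_cloned_disks=1

# The virtual disk also needs discard enabled, otherwise trim requests
# are not passed down to the storage; re-specify the drive with discard=on:
qm set 103 --scsi0 local-lvm:vm-103-disk-0,discard=on
```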
 
Hi,

attached is the output you requested. After some more testing, it seems that fstrim works only once on a freshly started VM (tested Debian 11 & Arch), regardless of `virtio scsi` or `virtio scsi single`. A second run of `fstrim -v /` yields 0 trimmed bytes. I guess that is expected, but then the underlying storage isn't trimmed. How do we get a thin-provisioned storage after a disk move?

Cheers,
Alwin

EDIT: tested with RBD & LVM-thin storage; I can reproduce it consistently when moving between these two storages.
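For reference, roughly what I'm checking (the pool and volume names are placeholders):

```bash
# Inside the guest: the first run reports trimmed bytes, a second run
# right after it reports 0 bytes trimmed.
fstrim -v /
fstrim -v /

# On the host, after the disk move, the thin storage stays fully allocated:
lvs                           # LVM-thin: check the Data% column
rbd du <pool>/vm-<ID>-disk-0  # RBD: check the per-image used size
```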
 

Attachments

  • pveversion.txt
  • case.txt
  • UPID_vl-srv1_001E7AFB-00ED5BBE-6408662B_qmmove_103_root@pam.txt
Hi Alwin :)

Thanks, I understand the issue now and am able to reproduce it. To summarize (you already mentioned the first two points):
  • It only happens if an fstrim was already run before the disk move.
  • It doesn't matter whether VirtIO SCSI, VirtIO SCSI single, or SATA is used.
  • It's not a recent regression in QEMU, I get the same behavior with pve-qemu-kvm=6.0.0-4 too.
  • It also happens with Debian 9 with kernel 4.9.
  • If I write a large file with zeroes and remove it in the VM, trim will do something again (see the sketch below).
  • If I issue two fstrim calls right after one another, the function scsi_disk_emulate_unmap in QEMU is not even reached for the second one.
  • A reboot of the guest makes fstrim work again.
From the last two points I'd guess that this is not a QEMU issue at all, but there is some kind of optimization inside the guest to avoid issuing duplicate trim requests. But the guest cannot know about the underlying disk move, so it cannot know that it shouldn't use that optimization.
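For completeness, the zero-file check from the list above as guest-side commands (file name and size are arbitrary):

```bash
# A second fstrim right after the first trims nothing on ext4:
fstrim -v /
fstrim -v /   # reports 0 B trimmed

# Writing and deleting data frees blocks again, after which the next
# fstrim actually issues trim requests once more:
dd if=/dev/zero of=/zerofill bs=1M count=1024
sync
rm /zerofill
fstrim -v /   # trims again
```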
 
Yup. I've changed the filesystem from ext4 to xfs, and with xfs it works.

Do you have any good pointers on how to get the old behavior back? :)

Cheers,
Alwin

EDIT: maybe I just never paid attention to it. :rolleyes: But the post in the link explains it.
https://serverfault.com/questions/1...return-same-value-unlike-ext4/1113129#1113129
 
Good find! So it really is a guest optimization (in particular in ext4), and there's nothing we can do except ask the kernel devs to make it optional.
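You can see the filesystem difference in isolation with a loop-mounted image; a quick sketch (paths are arbitrary; loop devices support discard by punching holes in the backing file):

```bash
truncate -s 1G /tmp/test.img
mkfs.ext4 -F /tmp/test.img    # for comparison: mkfs.xfs -f /tmp/test.img
mkdir -p /mnt/trimtest
mount -o loop /tmp/test.img /mnt/trimtest

fstrim -v /mnt/trimtest       # both filesystems report trimmed bytes
fstrim -v /mnt/trimtest       # ext4: 0 B until remount; xfs: trims again

umount /mnt/trimtest
```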
 
Given that EXT4 is the default and most supported file system for Linux servers, is this something PVE can talk "shop" with the kernel devs about?

Thanks, Fiona! I'd do it myself, but I feel hopelessly out of my depth when it comes to communicating the issue upstream.


Tmanok
 
Hi,
there is https://bugzilla.kernel.org/, but often you'll have more luck writing a mail to the relevant mailing list and maintainers.
In this case that is linux-ext4@vger.kernel.org; see the EXT4 FILE SYSTEM entry in https://www.kernel.org/doc/linux/MAINTAINERS.

There was already a brief discussion about this once: https://lore.kernel.org/all/20211025094227.yio3cjpboxumt5ml@work/ (probably doesn't hurt to mention that either).
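If you have a kernel tree checked out, scripts/get_maintainer.pl tells you whom to address, e.g.:

```bash
# From the root of a Linux kernel source tree:
./scripts/get_maintainer.pl -f fs/ext4/super.c
# The output should include the ext4 maintainers and
# linux-ext4@vger.kernel.org.
```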
 
