Question re: Making Sure Virtual Disk Shows Up as TRIM-Compatible SSD?

Sep 1, 2022
I just set up my first ever VM, a test Ubuntu Server LTS install.
Underlying storage system is a ZFS mirrored pair of SATA SSDs.
I enabled SSD Emulation to try to force the VM to recognize the attached virtual disk as an SSD.

tl;dr I cannot enable TRIM within the VM and I'm not sure I've correctly set up the virtual OS storage drive to be treated as an SSD. Is this right?

I would guess that this has something to do with LU Thin Provisioning being on (my VM storage is sparse enabled), but that's just a guess.

I thought I'd done okay:
Image


The guide tells me to be sure to enable TRIM support, and when I went to do that, I realized something's a bit off.
  1. lshw thinks the disk is a 5400 RPM HDD.
  2. smartctl sees an SSD with a 512-byte block size (my SSDs are set to ashift=13, and Proxmox's dataset for this VM disk defines volblocksize=64k), with "LU is thin provisioned." Vendor is QEMU, and Revision is 2.5+.
  3. hdparm -I sees a non-removable ATA device with 512-byte sectors, no TRIM or SMART support, and 0 logical cylinders/heads/sectors per track.
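For cross-checking inside the guest, the kernel's own view of each disk can be read straight from sysfs. This is a generic sketch: device names under /sys/block are whatever the guest assigned (sda, vda, ...), and nothing here is specific to this VM.

```shell
# Print each block device's rotational flag as the guest kernel sees it:
# 0 = treated as an SSD (non-rotational), 1 = treated as a spinning disk.
for q in /sys/block/*/queue/rotational; do
    [ -e "$q" ] || continue
    dev=${q#/sys/block/}; dev=${dev%/queue/rotational}
    printf '%s: rotational=%s\n' "$dev" "$(cat "$q")"
done
# "lsblk -D" additionally shows DISC-GRAN/DISC-MAX; nonzero values there
# mean the device advertises discard (TRIM) support to the guest.
```

If rotational reads 1 despite the SSD-emulation checkbox, fixing how the virtual disk is presented is the cleaner route than overriding the flag from inside the guest.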
 
Virtio SCSI with the discard checkbox enabled should be fine. With that, the guest OS should be able to send TRIM commands over the virtual disk controller down to the physical disk controller.
Does your physical disk controller support TRIM? Not all RAID controllers do (and RAID controllers shouldn't be used with ZFS anyway).

It's normal that the guest OS displays it as 512B. No matter what ashift you choose or what sector size the physical disk uses, a virtual disk will always use 512B/512B logical/physical sectors. That can only be changed to 512B/4K logical/physical sectors by adding KVM arguments directly to the VM's config file.
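The same discard/SSD settings can also be applied from the PVE host CLI instead of the GUI. A sketch, assuming a hypothetical VM ID 100 with an existing disk on a ZFS storage named vmStore (both placeholders, not from this thread):

```shell
# Hypothetical IDs: VM 100, disk vm-100-disk-0 on ZFS storage "vmStore".
# discard=on lets guest TRIM reach the zvol; ssd=1 presents the disk as an SSD.
qm set 100 --scsi0 vmStore:vm-100-disk-0,discard=on,ssd=1
# The VM needs a full stop/start (not just a guest reboot) to pick up the change.
```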
 
Virtio SCSI with the discard checkbox enabled should be fine. With that, the guest OS should be able to send TRIM commands over the virtual disk controller down to the physical disk controller.
Does your physical disk controller support TRIM? Not all RAID controllers do (and RAID controllers shouldn't be used with ZFS anyway).
Thanks! This was really confusing and worrying, so I'm glad to get some clarity. :)

This is my first PVE server, and just to keep it simple for teaching myself, I rolled it out on a mini PC running a mobile Ryzen 5900HX (8c/16t). It's a Beelink GR9 (GTR5?): https://www.servethehome.com/beelink-gtr5-gr9-dual-2-5gbe-amd-ryzen-9-5900hx-mini-pc-review/

I have a much more complex piece of hardware, with HBA cards and SAS2 SSDs and other hardware that's going to become my primary node, but I don't want to start messing with it until I have a better idea of what I'm doing.

Right now, I've got the OS running on a single NVMe SSD that came with the PC, which includes the root pool (rpool), and a pair of mirrored SATA SSDs working as my only VM/CT storage pool.

From within the PVE web UI's shell panel, I can see TRIM support on both of the disks making up my VM storage pool:
Code:
root@andromeda-ii:~# smartctl -a /dev/sda | grep -i TRIM
TRIM Command:     Available, deterministic, zeroed
root@andromeda-ii:~# smartctl -a /dev/sdb | grep -i TRIM
TRIM Command:     Available, deterministic, zeroed

Here are all the PCI devices Proxmox can see:
Code:
root@andromeda-ii:~# lspci
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Renoir IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge
00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge
00:02.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus
00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 51)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 166a
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 166b
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 166c
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 166d
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 166e
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 166f
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1670
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1671
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
03:00.0 Network controller: MEDIATEK Corp. Device 0608
04:00.0 Non-Volatile memory controller: Intel Corporation SSD 660P Series (rev 03)
05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cezanne (rev c4)
05:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Device 1637
05:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
05:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1
05:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1
05:00.5 Multimedia controller: Advanced Micro Devices, Inc. [AMD] Raven/Raven2/FireFlight/Renoir Audio Processor (rev 01)
05:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) HD Audio Controller
06:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 81)
06:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 81)

It's normal that the guest OS displays it as 512B. No matter what ashift you choose or what sector size the physical disk uses, a virtual disk will always use 512B/512B logical/physical sectors. That can only be changed to 512B/4K logical/physical sectors by adding KVM arguments directly to the VM's config file.

Thanks again, so much. That's exactly what I needed to know. :)

Is there a good reason to adjust this in the VM config files? None of the tutorials I've worked with so far really go into this level of detail.
I have no reason to want to tweak the defaults unless there are real performance implications for actual use.

I'm assuming a well-written tutorial on installing, e.g., Windows 10 into a VM would mention if I needed to adjust the sector size within the VM.
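As for the original goal of enabling TRIM inside the VM: once the virtual disk advertises discard, the usual guest-side setup is the periodic fstrim timer. A sketch only; distro defaults vary (recent Ubuntu releases ship with the timer already enabled):

```shell
# One-off: trim all mounted filesystems that support discard (needs root).
fstrim -av
# Ongoing: enable the weekly trim timer shipped with util-linux/systemd.
systemctl enable --now fstrim.timer
```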
 
Then I don't see why it shouldn't work.
Changing the sector size isn't supported by PVE via the GUI, CLI or API. You basically have to tell the underlying KVM directly to do that, so I don't think many people actually do. I tried to compare 512B/512B to 512B/4K while doing a lot of fio benchmarks but didn't see any noticeable differences in performance or write amplification. So I just stick with 512B/512B, even though in theory 512B/4K should be better when the physical disks also use 512B/4K sectors so it matches.
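For reference, "adding KVM arguments" usually means an `args:` line in the VM's config file. The following is only a hedged sketch; I have not verified this exact property name against current QEMU/PVE versions:

```
# Hypothetical line in /etc/pve/qemu-server/<vmid>.conf (syntax unverified):
args: -global scsi-hd.physical_block_size=4096
```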


One thing you could check is whether your ZFS storage at "Datacenter -> Storage -> YourZfsStorage -> Edit" has the thin-provisioning checkbox enabled. Without it there will be no thin provisioning for the guests. I don't think thin provisioning is needed for TRIM to work (even thick-provisioned SSDs benefit from TRIM, so they don't have to rely only on the SSD's internal GC), but you could still try whether it helps for newly created VMs. Otherwise I'm out of ideas.
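The fio comparisons mentioned above could be rerun with a job file roughly like this. Every parameter here is illustrative (names, sizes, and paths are assumptions, not the original benchmark):

```
; randwrite-4k.fio -- illustrative job file, run with: fio randwrite-4k.fio
[global]
ioengine=libaio
direct=1
time_based
runtime=60

[randwrite-4k]
rw=randwrite
bs=4k
size=1G
directory=/mnt/testvol   ; point at a filesystem on the virtual disk under test
```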
 
Then I don't see why it shouldn't work.

Changing the sector size isn't supported by PVE via the GUI, CLI or API. You basically have to tell the underlying KVM directly to do that, so I don't think many people actually do. I tried to compare 512B/512B to 512B/4K while doing a lot of fio benchmarks but didn't see any noticeable differences in performance or write amplification. So I just stick with 512B/512B, even though in theory 512B/4K should be better when the physical disks also use 512B/4K sectors so it matches.

It really surprises me that adjusting the sector sizes makes no difference in performance or, in particular, write amplification, given the pages upon pages I read warning of the dangers of write amplification/SSD murder if I got the sector values wrong. Odd.

One thing you could check is whether your ZFS storage at "Datacenter -> Storage -> YourZfsStorage -> Edit" has the thin-provisioning checkbox enabled. Without it there will be no thin provisioning for the guests. I don't think thin provisioning is needed for TRIM to work (even thick-provisioned SSDs benefit from TRIM, so they don't have to rely only on the SSD's internal GC), but you could still try whether it helps for newly created VMs. Otherwise I'm out of ideas.

Thanks. I do have thin provisioning enabled. Here's a sample virtual disk from one of my cloned VMs based on the template where this is happening. Looks like it's working. :)

1662406939954.png

Also, refreservation = none.

Code:
#] zfs get volblocksize,volsize,refreservation,used vmStore/vmDisks64k/vm-901-disk-0

NAME                              PROPERTY        VALUE      SOURCE
vmStore/vmDisks64k/vm-901-disk-0  volblocksize    64K        -
vmStore/vmDisks64k/vm-901-disk-0  volsize         10G        local
vmStore/vmDisks64k/vm-901-disk-0  refreservation  none       default
vmStore/vmDisks64k/vm-901-disk-0  used            1.84G      -
 