Hello.
I assembled an mdadm RAID0 array from two Samsung 970 EVO Plus NVMe SSDs, created an LVM volume group on it, and passed a thick-provisioned LV as the virtual disk of a CentOS 8 guest.
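For reference, a setup like this can be reproduced roughly as follows. Device names, VG/LV names, and the LV size are illustrative assumptions, not the poster's exact values; these commands are destructive and require root:

```shell
# Assemble a two-device RAID0 array from the NVMe drives (device names assumed)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Put LVM on top of the array
pvcreate /dev/md0
vgcreate nvme_vg /dev/md0

# Carve out a thick-provisioned LV to hand to the VM as its disk
lvcreate -L 200G -n vm-disk nvme_vg
```

The resulting `/dev/nvme_vg/vm-disk` block device can then be attached to the guest as a raw disk.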
On the hypervisor, this RAID delivers about 7 GB/s read throughput.
When I test inside the guest OS with:
fio --readonly --name=onessd...
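The full command is truncated above; a sequential-read test of this kind might look like the following fio job file. All parameter values here are illustrative assumptions, not the poster's exact options:

```ini
; onessd.fio -- illustrative sequential-read benchmark (values assumed)
[onessd]
readonly=1               ; safety: no writes to the device
filename=/dev/vda        ; guest disk device (assumed name)
rw=read                  ; sequential reads
bs=1M                    ; large blocks to measure throughput
ioengine=libaio
iodepth=32
direct=1                 ; bypass the page cache
runtime=60
time_based=1
```

Run it with `fio onessd.fio` inside the guest and compare the reported bandwidth against the host's ~7 GB/s.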
Guys, raidz2 on 4 devices? Seriously? And what advantage does it have over RAID10 in this configuration? =) IMHO, ZFS on Linux is dead at the moment. It cuts the performance of enterprise SSDs and even NVMe drives several-fold, sometimes by tens of times. It performs worse on consumer-class HDDs than...
Yes, I ran into the same problem. The VM freezes if you pull out a USB device while the guest OS is accessing it. Everything used to work fine. The only workaround I have found so far is to cleanly stop access to the device and detach it from inside the guest system first.
I use LVM-thin as my main storage and have repeatedly noticed that cloning or restoring VMs in Proxmox often leaves the VM broken when LVM-thin is the target storage. With Windows 10 this means that after cloning the VM boots only into recovery mode and does not see any drives...
Well, changing the record block size to 256k in the Proxmox settings increased VM speed to 140-200 MB/s. That was the cause. But it is still half the speed the drives deliver on the host. I keep digging.
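If the storage in question is ZFS-backed, the setting described likely corresponds to the `blocksize` property of a `zfspool` entry in `/etc/pve/storage.cfg`, roughly like this. The storage and pool names are assumptions:

```
# /etc/pve/storage.cfg -- ZFS storage entry (storage and pool names assumed)
zfspool: local-zfs
        pool rpool/data
        blocksize 256k
        content images,rootdir
        sparse 1
```

The `blocksize` value sets the volblocksize for newly created zvols, so it only affects disks created after the change; existing VM disks keep their original block size.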