Recent content by Shadow2091

  1. NVMe performance inside the guest OS

    With depth=1 and direct=1, ~5.2 GB/s read.
  2. NVMe performance inside the guest OS

    Hello. I assembled an mdadm RAID0 from two Samsung 970 EVO Plus NVMe SSDs, created an LVM VG on it, and passed a thick LV to a CentOS 8 virtual machine as its disk. On the hypervisor this RAID delivers about 7 GB/s read. When I test inside the guest OS with: fio --readonly --name=onessd... (a hedged sketch of such a setup and test appears after this list)
  3. Cloning and recovery problems on LVM-thin

    In my case, the problem was solved after enabling disk zeroing (see the thin-pool zeroing sketch after this list).
  4. USB Drive Crashes VM

    Checked with the latest kernel version - the VM still breaks if you remove a mounted USB drive.
  5. 4 drive config

    Guys, RAIDZ2 on 4 devices? Seriously? What advantage does it have over RAID10 in this configuration? =) IMHO, ZFS on Linux is dead at the moment. It cuts the performance of enterprise SSDs and even NVMe drives severalfold, sometimes by tens of times. It performs worse on a consumer-class HDD than...
  6. 4 drive config

    Software RAID5 with mdadm?
  7. Cloning and recovery problems on LVM-thin

    Yes, I always use cloning from a template.
  8. Cloning and recovery problems on LVM-thin

    I also found that the problem occurs only with full cloning; a linked clone always works correctly.
  9. USB Drive Crashes VM

    Yes, I ran into the same problem as you. The VM freezes if you pull out a USB device that the guest OS is still accessing. Everything used to work fine before. The only workaround I have found so far is to properly stop access to the device and disconnect it from inside the guest system first (see the sketch after this list).
  10. Cloning and recovery problems on LVM-thin

    I use LVM-thin as my main storage and have repeatedly noticed that cloning or restoring VMs with Proxmox often leaves the VM broken if LVM-thin is the target storage. With Windows 10, after cloning the VM boots only into recovery mode and does not see any drives...
  11. The guest operating system kills the performance of the disk subsystem.

    Well, changing the record size to 256k in the Proxmox settings raised VM speed to 140-200 MB/s, so that was the cause. But it is still half of what the drives deliver on the host. I'll keep digging (see the recordsize sketch after this list).
  12. The guest operating system kills the performance of the disk subsystem.

    Yep. As far as I understand, this is not a mirror but an analogue of RAID5 with parity.
  13. The guest operating system kills the performance of the disk subsystem.

    Good afternoon, I'm asking for help. Setup: Proxmox 6, Xeon 2665, 128 GB REG ECC RAM, ZFS, 16 GB ARC, 4 x 600 GB enterprise 10k SAS HDDs in RAIDZ-1.

    root@virt:/mnt/zfs/RZHDD/vmstore# zpool status -v
      pool: RZHDD
     state: ONLINE
      scan: none requested
    config: NAME...
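For the NVMe read test in items 1 and 2 above, a minimal sketch of how such a RAID0/LVM setup and a queue-depth-1 fio read test might look. The device names, VG/LV names, and most fio parameters here are assumptions for illustration; the original poster's full fio command is truncated in the snippet.

    # Assemble a RAID0 array from the two NVMe SSDs (device names assumed)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

    # Create an LVM volume group and a thick LV on top of the array
    pvcreate /dev/md0
    vgcreate nvme_vg /dev/md0
    lvcreate -L 500G -n vm_disk nvme_vg

    # Sequential read test at queue depth 1 with direct I/O;
    # --readonly guards against accidental writes to the block device
    fio --readonly --name=onessd --filename=/dev/nvme_vg/vm_disk \
        --rw=read --bs=1M --iodepth=1 --direct=1 --ioengine=libaio \
        --runtime=60 --time_based --group_reporting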
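Regarding the "disk zeroing" fix mentioned in item 3, one way to enable zeroing of newly provisioned blocks on an LVM thin pool is sketched below. The pool name pve/data is an assumption (the thin pool created by the Proxmox installer by default); the snippet does not say exactly where the poster enabled it.

    # Inspect the thin pool; a 'z' in the 8th character of the attr column
    # means newly provisioned blocks are zeroed before use
    lvs -a pve/data

    # Enable zeroing on the thin pool (pool name is an assumption)
    lvchange --zero y pve/data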
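For the USB workaround described in item 9, a minimal sketch of what "stop access and disconnect from inside the guest" could look like on a Linux guest; the mount point and device name are assumptions.

    # Inside the guest: flush pending writes and unmount the USB filesystem
    sync
    umount /mnt/usb                    # mount point assumed

    # Power the device off cleanly via udisks before physically unplugging it
    udisksctl power-off -b /dev/sdb    # device name assumed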
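For the 256k record size change in item 11: on a directory-style ZFS storage this corresponds to the dataset's recordsize property (for zvol-backed storage, the Proxmox "Block Size" field sets volblocksize instead). The dataset name below is inferred from the path shown in item 13 and is an assumption; recordsize only affects files written after the change.

    # Check the current record size of the dataset (name assumed from item 13)
    zfs get recordsize RZHDD/vmstore

    # Set a 256k record size; existing files keep their old record size
    zfs set recordsize=256K RZHDD/vmstore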
