fdcastel's latest activity

  • Update: I rebuilt the VMs using Virtio SCSI, and performance again reached roughly double that of Virtio Block (matching the results seen when the drives were formatted with 512-byte blocks). This indicates that switching the NVMe block size...
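    A minimal sketch of the virtio-blk-to-SCSI switch described above, assuming a hypothetical VM id 101 whose disk vm-101-disk-0 lives on a storage named local-lvm (all names illustrative; adjust to the actual setup):
      qm set 101 --scsihw virtio-scsi-single      # dedicated SCSI controller per disk, so iothread=1 takes effect
      qm set 101 --delete virtio0                 # detach the virtio-blk disk; it reappears as 'unused0'
      qm set 101 --scsi0 local-lvm:vm-101-disk-0,discard=on,iothread=1,ssd=1
      qm set 101 --boot order=scsi0               # point the boot order at the reattached disk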
  • Thank you, @ucholak, for sharing your experience and expertise -- I really appreciated it. You gave me a glimpse of hope, but unfortunately it faded rather quickly :) I reformatted my NVMe drives to 4K using: nvme format /dev/nvme2n1...
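    A hedged sketch of that reformat step, assuming the drive exposes a 4096-byte LBA format at index 1 (the index is drive-specific; check it first, as in the nvme id-ns suggestion quoted further down). Reformatting destroys all data on the namespace:
      nvme format /dev/nvme2n1 -l 1                  # -l selects the LBA format index whose Data Size is 4096 bytes (destructive!)
      nvme id-ns /dev/nvme2n1 -H | grep "in use"     # confirm the 4096-byte format is now the one marked in use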
  • fdcastel reacted to ucholak's post in the thread Proxmox x Hyper-V storage performance with Like.
    I think this is maybe a bug, or an analysis of an older state (blk missing queues?). At the beginning of the chapter, you have: From my study and usage: scsi translates SCSI commands to the virtio (virtqueues) layer (overhead ~10-15%), and has NOW only...
  • fdcastel reacted to ucholak's post in the thread Proxmox x Hyper-V storage performance with Like.
    Just checking: is it also formatted like this, or better, for 4Kn? For example:
    nvme id-ns /dev/nvmeXn0 -H | grep LBA
    nvme format /dev/nvmeXn0 -l 3
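    For reference, that check prints one line per LBA format the drive supports, and the index of the 4096-byte entry is what gets passed to nvme format -l. Illustrative (abridged) output on a drive still using 512-byte sectors might look roughly like:
      LBA Format  0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
      LBA Format  1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better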
  • But for now, no matter what I try, Hyper-V often delivers about 20%~25% better random I/O performance than Proxmox, and this has a direct impact on my application’s database response times. And, believe me, I tried EVERYTHING...
  • Thanks, @ucholak! I wasn't aware of this command. It appears it's not:
    # nvme list
    Node Generic SN Model Namespace Usage Format...
  • I didn't even try. According to Proxmox VE documentation: The VirtIO Block controller, often just called VirtIO or virtio-blk, is an older type of paravirtualized controller. It has been superseded by the VirtIO SCSI Controller, in terms of...
  • fdcastel reacted to fba's post in the thread Proxmox x Hyper-V storage performance with Like.
    cache=none leaves the cache of the storage system enabled; use directsync instead. See here for a comparison of caching modes: https://pve.proxmox.com/wiki/Performance_Tweaks#Disk_Cache
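    A minimal sketch of applying that suggestion to an existing disk, again assuming a hypothetical VM 101 with vm-101-disk-0 on local-lvm; the whole drive string is re-specified with cache=directsync added:
      qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=directsync,discard=on,iothread=1,ssd=1
      qm config 101 | grep scsi0        # verify the line now contains cache=directsync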
  • No. Both servers are configured identically: 2x1.92 TB drives (mirrored, "RAID 1") for the operating system, and another 2x1.92 TB drives (mirrored, "RAID 1") for the VMs.
  • - why identical tests on identical hardware are producing significantly different results?
    - why the Hyper-V benchmarks seem to align more closely with the manufacturer’s published performance? (It might simply be coincidence)
    - why Hyper-V...
  • @spirit I’m available to run any additional tests you’d like. These systems are up solely for testing, and I can rebuild them as needed.
  • I believe @spirit has nailed the issue of RND4K Q32T1 performance:
    Windows guest on Proxmox: during the RND4K Q32T1 test, CPU went to 12% (100% of 1 core on an 8-vCPU VM).
    Windows guest on Hyper-V Server: during the RND4K Q32T1 test, CPU...
    • [screenshots attached]
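    To watch the same single-core bottleneck outside CrystalDiskMark, a rough fio equivalent of the RND4K Q32T1 pass (one job, queue depth 32, 4 KiB random reads) can be run in a Linux guest or on the host while per-core CPU usage is observed; file name, size and runtime are illustrative:
      fio --name=rnd4k-q32t1 --filename=/tmp/fio.test --size=4G \
          --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
          --ioengine=libaio --direct=1 --time_based --runtime=60 --group_reporting
      # in a second terminal, run 'top' and press '1' to show per-core load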
  • Thank you, @spirit, for the valuable insights. I'm doing some tests now. But I believe you’ve nailed the issue. Please let's continue on https://forum.proxmox.com/threads/proxmox-x-hyper-v-storage-performance.177355/ I'll post the results there.
  • I’m embarrassed to admit that I just realized I’ve hijacked this thread. I’ve opened a new one here: https://forum.proxmox.com/threads/proxmox-x-hyper-v-storage-performance.177355/
  • At first glance, Proxmox appears to offer substantial improvements over the old setup, with a few important observations: 1) According to Samsung’s official specifications, this model is rated for 6800 MB/s sequential read and 2700 MB/s...
  • For reference, the existing Hyper-V Server deployment (on identical hardware) yields the following results:
    C:\> fsutil fsinfo sectorInfo C:
    LogicalBytesPerSector : 512
    PhysicalBytesPerSectorForAtomicity ...
    • [screenshot attached]
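    The rough Linux-side counterpart of that fsutil check, useful for comparing what the Proxmox host reports against what the guest sees (device name as in the earlier nvme posts; adjust as needed):
      lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme2n1     # logical and physical sector size as seen by the host
      blockdev --getss --getpbsz /dev/nvme2n1        # the same values from the block layer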
  • I’m evaluating Proxmox for potential use in a professional environment to host Windows VMs. The current production setup runs on Microsoft Hyper-V Server. Results follow: 1) Using --scsi0 "$VM_STORAGE:$VM_DISKSIZE,discard=on,iothread=1,ssd=1"...
    • [benchmark screenshots attached]
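    For context, a hedged sketch of a full qm create invocation around the --scsi0 option quoted above; the VM id, name, memory and disk size are placeholders, and $VM_STORAGE/$VM_DISKSIZE stand for the same shell variables used in the post:
      VM_STORAGE=local-lvm
      VM_DISKSIZE=64        # GiB, illustrative
      qm create 9001 --name win-bench --ostype win11 \
          --machine q35 --bios ovmf --efidisk0 "$VM_STORAGE:1" \
          --cores 8 --cpu host --memory 16384 \
          --scsihw virtio-scsi-single \
          --scsi0 "$VM_STORAGE:$VM_DISKSIZE,discard=on,iothread=1,ssd=1" \
          --net0 virtio,bridge=vmbr0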
  • Exactly. Results of Post #3 come from running CrystalDiskMark on guest VMs hosted on Proxmox 9.1.1. This was done on a test system with no other load. Results in Post #5 come from running CrystalDiskMark on guest VMs hosted on Hyper-V Server...
  • At first glance, Proxmox appears to offer substantial improvements over the old setup, with a few important observations: 1) According to Samsung’s official specifications, this model is rated for 6800 MB/s sequential read and 2700 MB/s...
  • For reference, the existing Hyper-V Server deployment (on identical hardware) yields the following results:
    C:\> fsutil fsinfo sectorInfo C:
    LogicalBytesPerSector : 512
    PhysicalBytesPerSectorForAtomicity ...
    • [screenshot attached]