high IO delay

gernazdasch

New Member
Jun 9, 2022
Hello.

I am having big IO delay problems on Proxmox 7.2: I see major speed issues, and sometimes VMs appear to freeze.

Screenshot 2022-09-25 at 02-37-40 Proxmox-VE - Proxmox Virtual Environment.png
Screenshot 2022-09-25 at 04-56-44 Proxmox-VE - Proxmox Virtual Environment.png

My current configuration is:
CPU: AMD Ryzen 7 3700X 8-Core Processor
RAM: 64 GB DDR4
HDD: 2x 12TB TOSHIBA MG07ACA1 (probably configured in software RAID1)

Screen Shot 2022-09-26 at 2.38.39 PM.png

The speed issues and VM freezes appear when I am installing a new OS into new VMs, especially Windows:
Screenshot 2022-09-25 at 02-55-52 Proxmox-VE - Proxmox Virtual Environment.png
Screenshot 2022-09-25 at 02-38-39 Proxmox-VE - Proxmox Virtual Environment.png

Code:
pveperf
CPU BOGOMIPS:      115195.36
REGEX/SECOND:      3861393
HD SIZE:           11053.95 GB (/dev/md2)
BUFFERED READS:    228.65 MB/sec
AVERAGE SEEK TIME: 16.23 ms
FSYNCS/SECOND:     53.92
DNS EXT:           20.71 ms

I know that HDDs are slow compared to SSDs, but I did not imagine they were this slow.

I have read that this HDD model uses CMR/PMR technology, not SMR. I also have to say that I installed 10 VMs at the same time. Is there any tool that can rearrange the data on disk to optimize read/write speed? Or would a simple copy of the whole disk image from the CLI solve this? I don't have time to reinstall all the VMs on another machine.

On this Proxmox machine, when the system is idle, I see no more than 0.5 IO delay; I don't know what the normal values are. I would also like to say that I used Proxmox VE 6.4-15 on other HDD systems and did not have this problem at all; IO delay would not go above 0.005 (on a single HDD, not even RAID).
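
For reference, here is a rough way to double-check the sync-write rate outside pveperf (a sketch, assuming fio is installed; the test file path is just an example):

Code:
# each 4k random write is followed by an fsync, so the reported IOPS
# roughly corresponds to pveperf's FSYNCS/SECOND; delete fio.test afterwards
fio --name=fsynctest --filename=/root/fio.test --size=256M \
    --bs=4k --rw=randwrite --fsync=1 --ioengine=psync \
    --runtime=30 --time_based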

Thank you.
 
Try running "sysctl -w vm.swappiness=10" or "sysctl -w vm.swappiness=0" on the server to keep swap/cache off the HDD. Also try disabling the page file in Windows; you should find some performance benefit.

The TOSHIBA MG07ACA1 HDD has a small buffer and low read/write speeds, so 10 VMs sharing it will all see low read/write speed.
Suggestion 1: get an SSD (say 250GB or 500GB) and put the Windows OS C: drive on it (for example 50GB), with the data D: drive on your Toshiba drives; this would improve performance for the Windows OS.
Suggestion 2: get an SSD (say 250GB) and put only the Windows page file on it as the D: drive; it would not boost much, but it helps to some extent.
Suggestion 3: get an SSD (say 500GB or 1TB), take a backup of all VMs to an external HDD first, and set up ZFS RAID1 on the 12TB HDDs with the SSD as a ZFS special device; performance improves a lot (a rough sketch of that layout follows).
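
For Suggestion 3, a minimal sketch (device names are placeholders, and the special vdev should itself be mirrored, since losing it loses the whole pool):

Code:
# mirror the two 12TB HDDs, with mirrored SSDs as the special device
# (metadata and small blocks land on the SSDs, which helps HDD latency a lot)
zpool create tank mirror /dev/sdX /dev/sdY special mirror /dev/ssd1 /dev/ssd2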
 
I hope this command will not corrupt anything. Does it need a restart?

All your solutions need an SSD attached, but the server is in production and up and running. Asking the datacenter to mount an extra SSD would cost time and money that we cannot afford right now.

I still don't understand why it is so slow; look at the old server and compare it to the new one.

OLD SERVER:
Code:
CPU:
    Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50GHz

HDD:
   *-disk
       description: SCSI Disk
       product: RS3DC080
       vendor: Intel
       physical id: 2.0.0
       bus info: scsi@0:2.0.0
       logical name: /dev/sda
       version: 4.68
       serial: 00b5e065398e29312a90e2ed0ab00506
       size: 1861GiB (1998GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=d913c365-3f1f-48e3-a248-4d3d28d9e485 logicalsectorsize=512 sectorsize=512
     *-volume:0
          description: BIOS Boot partition
          vendor: EFI
          physical id: 1
          bus info: scsi@0:2.0.0,1
          logical name: /dev/sda1
          serial: a2390291-be36-42e8-8caf-1d72a177fc24
          capacity: 1023KiB
          capabilities: nofs
     *-volume:1
          description: EXT4 volume
          vendor: Linux
          physical id: 2
          bus info: scsi@0:2.0.0,2
          logical name: /dev/sda2
          logical name: /boot
          version: 1.0
          serial: 8b82933c-3ac5-44d4-bc6c-e680d671580d
          size: 977MiB
          capabilities: journaled extended_attributes large_files huge_files dir_nlink recover 64bit extents ext4 ext2 initialized
          configuration: created=2022-06-06 17:12:31 filesystem=ext4 lastmountpoint=/boot modified=2022-06-14 08:47:49 mount.fstype=ext4 mount.options=rw,relatime,stripe=64 mounted=2022-06-14 08:47:49 state=mounted
     *-volume:2
          description: Linux swap volume
          vendor: Linux
          physical id: 3
          bus info: scsi@0:2.0.0,3
          logical name: /dev/sda3
          version: 1
          serial: 96592bd4-6858-47e9-9413-a3bb3ded6d45
          size: 3905MiB
          capacity: 3905MiB
          capabilities: nofs swap initialized
          configuration: filesystem=swap pagesize=4095
    *-volume:3
          description: EXT4 volume
          vendor: Linux
          physical id: 4
          bus info: scsi@0:2.0.0,4
          logical name: /dev/sda4
          logical name: /tmp
          version: 1.0
          serial: da464336-fe5e-4f24-a7f0-75f4f4833205
          size: 7813MiB
          capabilities: journaled extended_attributes large_files huge_files dir_nlink recover 64bit extents ext4 ext2 initialized
          configuration: created=2022-06-06 17:12:31 filesystem=ext4 lastmountpoint=/tmp modified=2022-06-14 08:47:49 mount.fstype=ext4 mount.options=rw,relatime,stripe=64 mounted=2022-06-14 08:47:49 state=mounted
     *-volume:4
          description: EXT4 volume
          vendor: Linux
          physical id: 5
          bus info: scsi@0:2.0.0,5
          logical name: /dev/sda5
          logical name: /
          version: 1.0
          serial: de9899d6-3c85-443e-8e52-2cf50fd42cd4
          size: 1849GiB
          capabilities: journaled extended_attributes large_files huge_files dir_nlink recover 64bit extents ext4 ext2 initialized
          configuration: created=2022-06-06 17:12:31 filesystem=ext4 lastmountpoint=/ modified=2022-06-14 08:47:45 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,stripe=64 mounted=2022-06-14 08:47:47 state=mounted

MEMORY:
                  total        used        free      shared  buff/cache   available
    Mem:          32000       17595        1441         243       12962       13698
    Swap:          3905        1481        2424


NEW SERVER:
Code:
CPU:
    AMD Ryzen 7 3700X 8-Core Processor

HDD:
   *-sata
    description: SATA controller
    product: 400 Series Chipset SATA Controller
    vendor: Advanced Micro Devices, Inc. [AMD]
    physical id: 0.1
    bus info: pci@0000:01:00.1
    logical name: scsi0
    logical name: scsi1
    version: 01
    width: 32 bits
    clock: 33MHz
    capabilities: sata msi pm pciexpress ahci_1.0 bus_master cap_list rom emulated
    configuration: driver=ahci latency=0
    resources: irq:39 memory:fc780000-fc79ffff memory:fc700000-fc77ffff
    *-disk:0
       description: ATA Disk
       product: TOSHIBA MG07ACA1
       vendor: Toshiba
       physical id: 0
       bus info: scsi@0:0.0.0
       logical name: /dev/sda
       version: 0102
       serial: 8180A01JF9BG
       size: 10TiB (12TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=b496570d-7b14-44b8-9fab-2c7f1ea61003 logicalsectorsize=512 sectorsize=4096
     *-volume:0
          description: RAID partition
          vendor: Linux
          physical id: 1
          bus info: scsi@0:0.0.0,1
          logical name: /dev/sda1
          serial: e80670fa-d9aa-4bb5-a701-0d932d58974a
          capacity: 31GiB
          capabilities: multi
     *-volume:1
          description: RAID partition
          vendor: Linux
          physical id: 2
          bus info: scsi@0:0.0.0,2
          logical name: /dev/sda2
          serial: 661604bb-4cdb-4272-af17-9d0a7b29d05a
          capacity: 1023MiB
          capabilities: multi
     *-volume:2
          description: RAID partition
          vendor: Linux
          physical id: 3
          bus info: scsi@0:0.0.0,3
          logical name: /dev/sda3
          serial: be7e148a-8f8f-4d86-a430-1d60bb675f57
          capacity: 10TiB
          capabilities: multi
     *-volume:3
          description: BIOS Boot partition
          vendor: EFI
          physical id: 4
          bus info: scsi@0:0.0.0,4
          logical name: /dev/sda4
          serial: a42afb38-4b50-4268-b1c1-a6c8988d5deb
          capacity: 1023KiB
          capabilities: nofs
    *-disk:1
       description: ATA Disk
       product: TOSHIBA MG07ACA1
       vendor: Toshiba
       physical id: 1
       bus info: scsi@1:0.0.0
       logical name: /dev/sdb
       version: 0102
       serial: 8180A01KF9BG
       size: 10TiB (12TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=c432ae54-e92b-48b3-98b7-0b2b6de90ad1 logicalsectorsize=512 sectorsize=4096
     *-volume:0
          description: RAID partition
          vendor: Linux
          physical id: 1
          bus info: scsi@1:0.0.0,1
          logical name: /dev/sdb1
          serial: 76779197-daa8-403d-988b-3ecfc9b76d00
          capacity: 31GiB
          capabilities: multi
     *-volume:1
          description: RAID partition
          vendor: Linux
          physical id: 2
          bus info: scsi@1:0.0.0,2
          logical name: /dev/sdb2
          serial: 6abcde0d-d8cf-4a09-98bf-ea2131a1672b
          capacity: 1023MiB
          capabilities: multi
     *-volume:2
          description: RAID partition
          vendor: Linux
          physical id: 3
          bus info: scsi@1:0.0.0,3
          logical name: /dev/sdb3
          serial: 7de3657a-a219-4b51-aed2-5a5ef2bebcc4
          capacity: 10TiB
          capabilities: multi
     *-volume:3
          description: BIOS Boot partition
          vendor: EFI
          physical id: 4
          bus info: scsi@1:0.0.0,4
          logical name: /dev/sdb4
          serial: 343917ff-0a3e-4b91-a0d1-6dcec103f438
          capacity: 1023KiB
          capabilities: nofs


MEMORY:
                   total        used        free      shared  buff/cache   available
    Mem:           64234       45368       14158          40        4707       18110
    Swap:          32734        3486       29248

I believed that an enterprise-grade HDD would perform much better than that.
 
IMO, your old server uses an accelerated (= cache + battery) HW RAID controller.
So this would make the difference? As far as I can see, even with plenty of CPU, RAM, and storage, you can't have multiple VMs all reading and writing at the same time. Probably running a single OS on this machine, with fewer concurrent reads/writes, would not push up the IO delay.
 
So this would make the difference?
yes.
sorry for my poor english:
the IOPS of a HW cache count in the thousands instead of sub-hundred, and with a size of 1GB/2GB it absorbs the peak usage; one Windows 10 can easily saturate one HDD on a fresh install thanks to Defender, Search, and Windows Update.
I can't use Windows 10 on a single HDD.
I think your old server has multiple 10k RPM SAS disks, surely 3 in RAID5.
 
No RAID5 (I hear it is slow for what I need); I use either RAID1 or RAID10 (which I think is the best).

Honestly, I have not used Windows on HDDs for a long time; I am doing it now because I need the storage, otherwise I would have got an SSD for this server, that's for sure.

Why does Windows do that? Linux doesn't seem to.

From what I understand so far: if you are ever going to get a server with HDDs, at least have them in hardware RAID10 with enterprise HDDs, and it will be decently faster. 90%+ of servers with HDDs are not set up like that.
 
90% of servers have a HW RAID controller with cache + battery.
If you skip HW RAID + cache, then SSDs are mandatory.
 
Not necessarily, depending on the use case. 4x HDD in ZFS striped mirrors (equivalent to RAID10) with sync=disabled would be reasonable for some home or test environments (sync=disabled is risky, but for home use where data integrity isn't paramount, it is the way to extract the most usable performance).
Adjusting volblocksize on the pool based on the workload also helps significantly: if it is left at the default 8K while most operations write larger blocks, it hurts HDD IOPS more than it needs to.

With SSDs it is much more set-and-forget for casual use, unless you need to squeeze out more performance by tuning further (tuning will still get you better performance; it just won't be as drastic as with untuned HDDs in ZFS).
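
A minimal sketch of that layout (pool name, device paths, storage ID, and the 16k value are all placeholders, not a recommendation for the production box above):

Code:
# four disks as striped mirrors (RAID10 equivalent)
zpool create tank mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
# risky: drops sync-write guarantees; only where data loss is tolerable
zfs set sync=disabled tank
# register the pool in Proxmox with a larger volblocksize for new VM disks
pvesm add zfspool tank-vm --pool tank --blocksize 16k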
 
I don't know if Proxmox has ZFS; I use LVM because it is more stable. ZFS might be interesting and possibly faster, but it does not have the same support as LVM.

I ran some tests across various servers, and it is interesting to see the results:

SERVER1 [NEW SERVER] (2x 12TB HDD, software raid1)
Code:
CPU BOGOMIPS:      115115.36
REGEX/SECOND:      3912208
HD SIZE:           11053.95 GB (/dev/md2)
BUFFERED READS:    217.50 MB/sec
AVERAGE SEEK TIME: 17.59 ms
FSYNCS/SECOND:     41.45
DNS EXT:           16.32 ms

SERVER2 [OLD SERVER] (2xHDD hardware raid)
Code:
CPU BOGOMIPS:      25545.12
REGEX/SECOND:      2176356
HD SIZE:           19.50 GB (/dev/md3)
BUFFERED READS:    180.42 MB/sec
AVERAGE SEEK TIME: 6.76 ms
FSYNCS/SECOND:     54.19
DNS EXT:           47.54 ms
DNS INT:           0.71 ms (local)

SERVER3 [ANOTHER SERVER] (2xSSD)
Code:
CPU BOGOMIPS:      54344.68
REGEX/SECOND:      1272167
HD SIZE:           54.57 GB (/dev/mapper/pve-root)
BUFFERED READS:    396.00 MB/sec
AVERAGE SEEK TIME: 0.07 ms
FSYNCS/SECOND:     3182.85
DNS EXT:           36.55 ms
DNS INT:           28.91 ms (b2.proxmox)

SERVER4 [ANOTHER SERVER] (3 hdd, software raid1)
Code:
CPU BOGOMIPS:      25440.92
REGEX/SECOND:      2176126
HD SIZE:           19.50 GB (/dev/md3)
BUFFERED READS:    180.42 MB/sec
AVERAGE SEEK TIME: 6.76 ms
FSYNCS/SECOND:     54.19
DNS EXT:           47.54 ms
DNS INT:           0.71 ms (local)

SERVER5 [ANOTHER SERVER] (4x HDD software raid10)
Code:
CPU BOGOMIPS:      59191.80
REGEX/SECOND:      3671447
HD SIZE:           195.68 GB (/dev/md3)
BUFFERED READS:    158.78 MB/sec
AVERAGE SEEK TIME: 8.51 ms
FSYNCS/SECOND:     51.53
DNS EXT:           28.66 ms
DNS INT:           0.56 ms (local)

SERVER6 [ANOTHER SERVER] (SSD)
Code:
CPU BOGOMIPS:      177641.20
REGEX/SECOND:      4392110
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    474.07 MB/sec
AVERAGE SEEK TIME: 0.11 ms
FSYNCS/SECOND:     2593.74
DNS EXT:           62.28 ms
DNS INT:           55.05 ms (bear5)

The conclusion is simple: get an SSD. An HDD can't handle that many concurrent IO operations (which you will certainly have when running 20+ VM images), and if you do get HDDs, make sure you have at least RAID10 on a performant hardware RAID controller.

Maybe some optimisations can be made; I am still trying variants.
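
While testing, a simple way to watch where the time goes (assuming the sysstat package is installed):

Code:
# refresh extended per-disk stats every 5 seconds while the VMs are busy;
# %util near 100 with high await means the disks themselves are the bottleneck
iostat -x 5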
 
A HW RAID controller is faster than software RAID (mdadm) only if cache + battery (BBU) is used.
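
A quick way to check what the mdadm array is doing (a sketch; /dev/md2 taken from the pveperf output above):

Code:
# array state and any resync/check in progress
cat /proc/mdstat
# details, including whether an internal write-intent bitmap is active
# (an internal bitmap adds extra writes on every update)
mdadm --detail /dev/md2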
 
"sysctl -w vm.swappiness=10" or "sysctl -w vm.swappiness=0" does not corrupts anything..unless you are doing too much swap or page-file...as this would put load on RAM (fill it up) ..and if RAM is not free enough..it might crash.
Safer way Part would be get all VM Down...run the command ..check RAM unitization and get VM up. , But yes SSD need to be planned for performance benefits. , as simple as like 1TB 2 SSD as Raid where only Windows OS c: drive would be there and page-file, would also give you good Boost.
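
On the restart question: sysctl -w takes effect immediately, no reboot needed, but the value is lost on restart. To make it persistent:

Code:
# applies immediately
sysctl -w vm.swappiness=10
# persist across reboots
echo "vm.swappiness = 10" >> /etc/sysctl.conf
sysctl -p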
 
A HW RAID controller is faster than software RAID (mdadm) only if cache + battery (BBU) is used.
So basically there are two options:
1. SSD (simple but expensive)
2. if you are ever going to get HDDs, get them in hardware RAID (RAID10 if you can, that would be perfect) with cache + battery (BBU); otherwise HDDs are very slow when running multiple OSes, that is for sure

and if RAM is not free enough.
I would never use more than 80% of RAM anyway; I believe swap memory is slow and useless.

as simple as two 1TB SSDs in RAID
Why would I need RAID when using SSDs? They are already very fast (about 3.5x faster); look at the seek times in the benchmarks above.
 
Screenshot 2022-09-30 at 21-42-35 Proxmox-VE - Proxmox Virtual Environment.png
Any idea if disk formats make a difference? Maybe this is part of the problem. I don't know what the differences between qcow2 and raw are.
 
I do think file-based storage (raw and qcow2 files) is often slower than block-based storage (LVM, ZFS), but I don't have much experience with raw and qcow2.
The overhead of IDE emulation most likely makes it (much) slower than VirtIO-SCSI (which needs driver support from the operating system inside the VM).
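
A sketch of switching an existing VM over (the VM ID 100 and volume name are placeholders; a Windows guest needs the VirtIO drivers installed before it can see the disk):

Code:
# use the VirtIO SCSI controller for VM 100
qm set 100 --scsihw virtio-scsi-pci
# attach the existing volume as scsi0 instead of ide0
qm set 100 --scsi0 local-lvm:vm-100-disk-0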
 
sysctl -w vm.swappiness=10 did not improve anything; same problems.

MY FINAL CONCLUSION:
1. Get an SSD; HDDs are slow.
2. If you still want HDDs (for storage purposes, because they are big), at least get 4x HDD in hardware RAID10.

The reason is simple: an HDD is terrible when you try to run 10-20 OSes simultaneously that all share the same disk.
 
So this is an interesting discussion.
I'm not having as much trouble as the OP here, but I would still like to clarify what's what.

Proxmox is not recommended with HW RAID, yet this discussion seems to suggest the opposite?
I do have HW RAID with battery, a normal, quite high-performance server. Yet I changed to a HW RAID controller that allows passthrough, so I could stop using HW RAID, as suggested.
That brings quite high IO delays from time to time and not-that-great performance.

While using HW RAID is probably quicker and safer (to some extent), why is HW RAID not supported by Proxmox? I.e., which way should we go if we have the hardware capability for a proper HW RAID?
 
Who says a RAID controller is not supported in Proxmox?
It has worked fine for maybe the last 10 years. It just doesn't have features similar to ZFS, but everything works. And snapshots are better than ZFS's.
 
I haven't said it's not supported, I said it's not recommended, because of ZFS and Ceph.
But what about a cluster without Ceph? Not sure about ZFS honestly, so those are what I'm thinking of; that's why I'm bypassing HW RAID with passthrough. I'm wondering if there is a HW way.
 