[SOLVED] Very poor IO Delay / FSYNCS/SECOND on RAID1 Sata disk

Buzznative (New Member), Jan 5, 2021
Hello,

We migrated three old servers running Proxmox 5.2 to three new EG-32 servers at OVH (Serveur EG-32 - Xeon E3-1270v6 - 32GB - SoftRaid 2x4To) running Proxmox 6.2.
In December we moved all the VMs to the new servers, and since then we have been seeing very low FSYNCS/SECOND and high IO delay. We never had this issue on the old servers (~50 VMs on each server).

Before this week I had never run pveperf: no problems, so don't touch anything!
On the new servers we get results like this:

Code:
#: pveperf /var/lib/vz
CPU BOGOMIPS:      60798.40
REGEX/SECOND:      3994018
HD SIZE:           3666.44 GB (/dev/md2)
BUFFERED READS:    75.55 MB/sec
AVERAGE SEEK TIME: 134.40 ms
FSYNCS/SECOND:     0.81
DNS EXT:           59.08 ms

And we get this on all the new servers. Very bad, no?

On one of the old servers the values are not great either, but we have no problems there:
Code:
# pveperf /var/lib/vz/
CPU BOGOMIPS:      38399.84
REGEX/SECOND:      2501531
HD SIZE:           3623.08 GB (/dev/mapper/pve-data)
BUFFERED READS:    272.37 MB/sec
AVERAGE SEEK TIME: 13.77 ms
FSYNCS/SECOND:     36.25
DNS EXT:           20.70 ms

I don't know what to look for or what to change. Similar problems seem to come up with ZFS, but we have a hardware RAID1 with two disks, not ZFS, and the lowest FSYNCS/SECOND I've seen in other threads is around 40-50, not less than 1...

Could you help me improve the performance?
If any information is missing, just ask ;)

Thanks !
 
I don't know whether the HDDs are SMR or how to check that.

Code:
# smartctl -a /dev/sda
Model Family:     HGST Ultrastar 7K6000
Device Model:     HGST HUS726040ALA610
 
On the old server, we have:
Code:
# smartctl -a /dev/sda
Device Model:     HGST HUS726020ALA610

It seems to be an older version of the HDD?
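If it helps, one way to check whether the kernel sees a drive as zoned (host-aware/host-managed SMR) is shown below; drive-managed SMR disks still report "none", so it is not fully conclusive:
Code:
# lsblk reports a ZONED column on reasonably recent util-linux
lsblk -o NAME,MODEL,ROTA,ZONED /dev/sda /dev/sdb
# "none" = not host-managed/host-aware
cat /sys/block/sda/queue/zoned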
 
It's the 2TB predecessor model, it seems.
But both are 512n, SATA 6G drives with CMR.
So no obvious difference there :/
 
It seems you are using mdadm software RAID (judging by /dev/md2), which is not supported. I don't use it myself, so this is just a guess.
You should run pveperf with all VMs NOT running; with VMs running the numbers are meaningless.
The old server has the usual storage layout (/dev/mapper/pve-data, so thin LVM); the new server is... I have no idea! Please provide more data, it is a totally different storage setup IMHO.
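To see how the md arrays are assembled, and to cross-check the fsync figure independently of pveperf, something like this should work (fio has to be installed; the size and path are only illustrative):
Code:
# show the raid level and member disks of each array
cat /proc/mdstat
# small fsync-after-every-write benchmark, run with the VMs stopped
fio --name=fsynctest --directory=/var/lib/vz --rw=write --bs=4k --size=256M --fsync=1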
 
Indeed, mdadm is the default software RAID on OVH servers, but on the old servers we don't have this issue, so why does it happen on the new ones?

For pveperf, I ran it on the old server with all the VMs running and still got ten times better performance, so it seems possible to improve the performance even on mdadm?
 
Please provide the full specs of both machines so we have a little more to compare.
 
OK. If any information is missing, ask and I will add it.

New Server Cpu:
Code:
processor    : 0
vendor_id    : GenuineIntel
cpu family    : 6
model        : 158
model name    : Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
stepping    : 9
microcode    : 0x8e
cpu MHz        : 3965.233
cache size    : 8192 KB
physical id    : 0
siblings    : 8
core id        : 0
cpu cores    : 4
apicid        : 0
initial apicid    : 0
fpu        : yes
fpu_exception    : yes
cpuid level    : 22
wp        : yes
flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
bugs        : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds
bogomips    : 7599.80
clflush size    : 64
cache_alignment    : 64
address sizes    : 39 bits physical, 48 bits virtual
power management:
Same information for the 7 other cores.

New server RAM
Code:
# free
              total        used        free      shared  buff/cache   available
Mem:       32835576     1003172    30854012      107448      978392    31271804
Swap:       3142644      649164     2493480
Code:
# cat /proc/meminfo
MemTotal:       32566376 kB
MemFree:          311060 kB
MemAvailable:   14412556 kB
Buffers:         1035132 kB
Cached:         13793896 kB
SwapCached:        76260 kB
Active:         12377720 kB
Inactive:       16601940 kB
Active(anon):    8341796 kB
Inactive(anon):  7062460 kB
Active(file):    4035924 kB
Inactive(file):  9539480 kB
Unevictable:      246764 kB
Mlocked:          246764 kB
SwapTotal:       1046520 kB
SwapFree:              0 kB
Dirty:              2476 kB
Writeback:             0 kB
AnonPages:      14329560 kB
Mapped:          2439236 kB
Shmem:           1233060 kB
KReclaimable:     993180 kB
Slab:            2228652 kB
SReclaimable:     993180 kB
SUnreclaim:      1235472 kB
KernelStack:      130592 kB
PageTables:       385988 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    17329708 kB
Committed_AS:   77050556 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      230456 kB
VmallocChunk:          0 kB
Percpu:            40128 kB
HardwareCorrupted:     0 kB
AnonHugePages:    663552 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:     2584796 kB
DirectMap2M:    30644224 kB
DirectMap1G:           0 kB

New Server Disk (I am not showing the /dev/loopXX entries for readability)
Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G   26M  3.1G   1% /run
/dev/md2              3.6T  972G  2.5T  28% /
tmpfs                  16G   63M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
/dev/sdb1             510M  5.3M  505M   2% /boot/efi
/dev/fuse              30M  116K   30M   1% /etc/pve
tmpfs                 3.2G     0  3.2G   0% /run/user/0
Code:
Disk /dev/sdb: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HGST HUS726T4TAL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5A5546E2-3794-4D35-A5F2-2FC5BEF0542C

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048    1048575    1046528  511M EFI System
/dev/sdb2     1048576 7812980735 7811932160  3,7T Linux RAID
/dev/sdb3  7812980736 7814027263    1046528  511M Linux swap


Disk /dev/sda: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HGST HUS726T4TAL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 665E8F4F-615E-4C4A-9B68-F179D5B807F2

Device          Start        End    Sectors  Size Type
/dev/sda1        2048    1048575    1046528  511M EFI System
/dev/sda2     1048576 7812980735 7811932160  3,7T Linux RAID
/dev/sda3  7812980736 7814027263    1046528  511M Linux swap
/dev/sda4  7814035215 7814037134       1920  960K Linux filesystem


Disk /dev/md2: 3,7 TiB, 3999709200384 bytes, 7811932032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Code:
# hdparm -i /dev/sda

/dev/sda:

 Model=HGST HUS726T4TALA6L1, FwRev=VLGNX41C, SerialNo=V6JJLYUS
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=56
 BuffType=DualPortCache, BuffSize=unknown, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=7814037168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode

# hdparm -i /dev/sdb

/dev/sdb:

 Model=HGST HUS726T4TALA6L1, FwRev=VLGNX41C, SerialNo=V6J60A2S
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=56
 BuffType=DualPortCache, BuffSize=unknown, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=7814037168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode
 
Old Server CPU:
Code:
processor    : 0
vendor_id    : GenuineIntel
cpu family    : 6
model        : 86
model name    : Intel(R) Xeon(R) CPU D-1521 @ 2.40GHz
stepping    : 3
microcode    : 0x7000013
cpu MHz        : 2147.563
cache size    : 6144 KB
physical id    : 0
siblings    : 8
core id        : 0
cpu cores    : 4
apicid        : 0
initial apicid    : 0
fpu        : yes
fpu_exception    : yes
cpuid level    : 20
wp        : yes
flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts flush_l1d
bugs        : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips    : 4799.98
clflush size    : 64
cache_alignment    : 64
address sizes    : 46 bits physical, 48 bits virtual
power management:
8 cores in total

Old server RAM
Code:
# free
              total        used        free      shared  buff/cache   available
Mem:       32566376    16419716      319612     1233044    15827048    14425964
Swap:       1046520     1046444          76
Code:
# cat /proc/meminfo
MemTotal:       32835576 kB
MemFree:        30856308 kB
MemAvailable:   31274176 kB
Buffers:           46676 kB
Cached:           775052 kB
SwapCached:       144524 kB
Active:           431208 kB
Inactive:         861392 kB
Active(anon):     328524 kB
Inactive(anon):   254496 kB
Active(file):     102684 kB
Inactive(file):   606896 kB
Unevictable:       85564 kB
Mlocked:           85564 kB
SwapTotal:       3142644 kB
SwapFree:        2493480 kB
Dirty:               208 kB
Writeback:             0 kB
AnonPages:        452876 kB
Mapped:           103668 kB
Shmem:            107448 kB
Slab:             410444 kB
SReclaimable:     156720 kB
SUnreclaim:       253724 kB
KernelStack:        4016 kB
PageTables:        16784 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    19560432 kB
Committed_AS:    4115204 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:    10802884 kB
DirectMap2M:    22646784 kB
DirectMap1G:     2097152 kB


Old server Disk

Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G  322M  2.9G  11% /run
/dev/md2               20G  8.9G  9.3G  49% /
tmpfs                  16G   46M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/pve-data  3.6T  1.4T  2.1T  40% /var/lib/vz
/dev/fuse              30M   40K   30M   1% /etc/pve
Code:
# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5B5960FC-9378-479C-8EFC-B7FBA6356226

Device          Start        End    Sectors    Size Type
/dev/sda1          40       2048       2009 1004.5K BIOS boot
/dev/sda2        4096   40962047   40957952   19.5G Linux RAID
/dev/sda3    40962048   43057151    2095104   1023M Linux RAID
/dev/sda4    43057152 3907018751 3863961600    1.8T Linux RAID


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9BF1E8A9-9CD6-4179-8945-6E38F39CB845

Device          Start        End    Sectors    Size Type
/dev/sdb1          40       2048       2009 1004.5K BIOS boot
/dev/sdb2        4096   40962047   40957952   19.5G Linux RAID
/dev/sdb3    40962048   43057151    2095104   1023M Linux swap
/dev/sdb4    43057152 3907018751 3863961600    1.8T Linux RAID


Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AD82C406-8248-4B84-B57A-D6D3F518EB87

Device          Start        End    Sectors    Size Type
/dev/sdc1          40       2048       2009 1004.5K BIOS boot
/dev/sdc2        4096   40962047   40957952   19.5G Linux RAID
/dev/sdc3    40962048   43057151    2095104   1023M Linux swap
/dev/sdc4    43057152 3907018751 3863961600    1.8T Linux RAID


Disk /dev/md2: 19.5 GiB, 20970405888 bytes, 40957824 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md4: 3.6 TiB, 3956695629824 bytes, 7727921152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes


Disk /dev/mapper/pve-data: 3.6 TiB, 3952401711104 bytes, 7719534592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Code:
# hdparm -i /dev/sda

/dev/sda:

 Model=HGST HUS726020ALA610, FwRev=A5GNT920, SerialNo=N4G2XVYY
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=56
 BuffType=DualPortCache, BuffSize=unknown, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=3907029168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode

# hdparm -i /dev/sdb

/dev/sdb:

 Model=HGST HUS726020ALA610, FwRev=A5GNT920, SerialNo=N4G2VJWY
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=56
 BuffType=DualPortCache, BuffSize=unknown, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=3907029168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode
 
Indeed, mdadm is the default software RAID on OVH servers, but on the old servers we don't have this issue, so why does it happen on the new ones?

For pveperf, I ran it on the old server with all the VMs running and still got ten times better performance, so it seems possible to improve the performance even on mdadm?
If you check your df -h output, you will see that you don't have the same kind of storage configured!
The old server has
/dev/mapper/pve-data 3.6T 1.4T 2.1T 40% /var/lib/vz
which is LVM-type storage (VM disks sit directly on a block device, without an intermediate filesystem on the Proxmox side).
The new server... where does it put the VMs? /dev/md2? What kind of storage is that? Certainly not LVM as on the old server... a filesystem? Maybe ext4? I remember that with ext3 the "nobarrier" option made a HUGE difference (beware of possible data loss; recent ext4 versions may have addressed this).
Could you please provide the output of these two commands?
Code:
mount
cat /etc/pve/storage.cfg
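If you also want to look at the write-barrier behaviour specifically, something along these lines could be used; the remount is risky, only for a short test, and current ext4 versions may warn that barrier=0/nobarrier is deprecated:
Code:
# effective mount options of the root filesystem (barriers are on by default unless barrier=0/nobarrier shows up)
findmnt -no FSTYPE,OPTIONS /
# test only, risks data loss on power failure:
# mount -o remount,barrier=0 /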
 
On the new server, each VM has its own disk, for example:
Code:
Disk /dev/loop55: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
To me this looks like a change between Proxmox 5 and 6.
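(For reference, losetup can show which file each loop device is backed by; on a "dir" storage, container volumes usually live as raw files under /var/lib/vz/images/<vmid>/, so the path below is only an example:)
Code:
losetup --list /dev/loop55
# typically points at something like /var/lib/vz/images/106/vm-106-disk-0.raw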

And here is the output (I have redacted the NFS server information):
Code:
# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16260252k,nr_inodes=4065063,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=3256640k,mode=755)
/dev/md2 on / type ext4 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=41,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14836)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/sdb1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
[ovh backup ftp hostname]:/export/ftpbackup/[OVHServerName] on /mnt/pve/nfs-back-02 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=51.77.152.78,local_lock=none,addr=10.21.131.41)
[ovh backup ftp hostname]:/export/ftpbackup/[OVHServerName] on /mnt/pve/nfs-back-01 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=51.77.152.78,local_lock=none,addr=10.21.131.41)
[ovh backup ftp hostname]:/export/ftpbackup/[OVHServerName] on /mnt/pve/nfs-back-03 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=51.77.152.78,local_lock=none,addr=10.21.131.41)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3256636k,mode=700)

# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content rootdir,snippets,vztmpl,backup,images,iso
    maxfiles 7
    shared 0

nfs: nfs-back-01
    export /export/ftpbackup/[OVHServerName]
    path /mnt/pve/nfs-back-01
    server [ovh backup ftp hostname]
    content iso,vztmpl,images,backup,snippets
    maxfiles 5

nfs: nfs-back-02
    export /export/ftpbackup/[OVHServerName]
    path /mnt/pve/nfs-back-02
    server [ovh backup ftp hostname]
    content iso,vztmpl,images,backup,snippets
    maxfiles 5

nfs: nfs-back-03
    export /export/ftpbackup/[OVHServerName]
    path /mnt/pve/nfs-back-03
    server [ovh backup ftp hostname]
    content vztmpl,iso,snippets,backup,images
    maxfiles 5

On the old server

Code:
# mount
sysfs on /sys type sysfs (rw,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16396752k,nr_inodes=4099188,mode=755)
devpts on /dev/pts type devpts (rw,relatime,mode=600,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=3283560k,mode=755)
/dev/md2 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (ro,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=42,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15462)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (ro,relatime)
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,relatime,stripe=256,data=ordered)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)

# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content rootdir,iso,vztmpl,images,backup
    maxfiles 5
    shared 0

root@kili:~#
 
The old server's mount output has the line
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,relatime,stripe=256,data=ordered)
The new server does not. The old server's VM storage was based on LVM; the new server... I don't understand! /dev/loop55? It seems you have raw files on your local ext4 partition that are accessed as block devices through loop mounts... I have no idea, but I would install Proxmox 6 from scratch.
I have a server at the Scaleway hoster; I did a "custom" installation and used the ZFS RAID capabilities (my storage is on SSDs, so I don't know whether it performs well enough on your SATA disks, but it should perform much, much better than your current situation).
In short, I think this is not a hardware problem (slow disks) but an insane storage configuration.
 
Can you please post details about your mdadm configuration?

On the old server:
Code:
# mdadm --detail --scan
ARRAY /dev/md2 metadata=0.90 UUID=b35014e9:92c5dc37:a4d2adc2:26fd5302
ARRAY /dev/md4 metadata=0.90 UUID=655cac0e:41e7cb4e:a4d2adc2:26fd5302

On the new server:
Code:
# mdadm --detail --scan
ARRAY /dev/md2 metadata=0.90 UUID=d090d79b:04b40376:a4d2adc2:26fd5302

Really strange, this is the OVH default... So the mdadm configuration uses only one disk? Crazy...
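To make it explicit how many member devices the array really has, something like the following should show it (if only one /dev/sdX2 member appears, the second disk never joined the mirror):
Code:
mdadm --detail /dev/md2
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda /dev/sdb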

The old server's mount output has the line
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,relatime,stripe=256,data=ordered)
The new server does not. The old server's VM storage was based on LVM; the new server... I don't understand! /dev/loop55? It seems you have raw files on your local ext4 partition that are accessed as block devices through loop mounts... I have no idea, but I would install Proxmox 6 from scratch.
I have a server at the Scaleway hoster; I did a "custom" installation and used the ZFS RAID capabilities (my storage is on SSDs, so I don't know whether it performs well enough on your SATA disks, but it should perform much, much better than your current situation).
In short, I think this is not a hardware problem (slow disks) but an insane storage configuration.

In the Proxmox GUI, on the new server it is shown as Container type:
new_disk_gui.png
and on the old one as Disk Image type:
old_disk_gui.png

Maybe this is the problem? I don't know where I can check this configuration...
 
In the Proxmox GUI, on the new server it is shown as Container type:
View attachment 22557
and on the old one as Disk Image type:
View attachment 22558

Maybe this is the problem? I don't know where I can check this configuration...
How on earth did you migrate the VMs? Backup and restore? In the GUI, are they listed as VMs or as containers?
You can't turn a VM into a container with a simple backup/restore, so I have no idea what's going on. Where is the image above taken from? I haven't found a view like that in the GUI (maybe my fault).
What is the output of
qm config 106
I hope someone else will help you, because you are in a setup I can't understand and have never experienced, and the only thing I'm pretty sure of is that it's messed up :(
 
I think your storage configuration is totally not what you expect.
Technically you can have a RAID containing one disk, but then you are not comparing like with like.
I think that is safe to conclude, and hence your performance varies a lot (and I mean a lot!).
Fix your setup and I guess your numbers will look alike.
 
How on earth did you migrate the VMs? Backup and restore? In the GUI, are they listed as VMs or as containers?
You can't turn a VM into a container with a simple backup/restore, so I have no idea what's going on. Where is the image above taken from? I haven't found a view like that in the GUI (maybe my fault).
What is the output of
qm config 106
I hope someone else will help you, because you are in a setup I can't understand and have never experienced, and the only thing I'm pretty sure of is that it's messed up :(
Yep, I made a backup on the old server and restored it after scp, with this configuration:
restore.png

All the VMs are LXC containers.

I took the screenshot at [nodename] > storage local ([nodename]) (at the end of the VM list).

For qm config, I get an error... (106 is on this server)...
Code:
# qm config 106
Configuration file 'nodes/[nodename]/qemu-server/106.conf' does not exist
 
Yep, I made a backup on the old server and restored it after scp, with this configuration:
View attachment 22560

All the VMs are LXC containers.

I took the screenshot at [nodename] > storage local ([nodename]) (at the end of the VM list).

For qm config, I get an error... (106 is on this server)...
Code:
# qm config 106
Configuration file 'nodes/[nodename]/qemu-server/106.conf' does not exist
Ah, OK, I have no experience with LXC containers, so maybe that is just how PVE 6.x vs 5.x shows their "disks". Of course the qm config command fails; I was convinced your VMs were KVM, not LXC, sorry.
So we are back to the starting point...
Could some LXC container be doing much more I/O? Can you produce a pveperf run with all VMs shut down? And also an hdparm -tT /dev/sda on both servers (old and new)?
If you disable the NFS mounts (just in case, just as a test), does anything improve?
It could really be that mdraid with only a single drive is not working fine (also, why do you have only one drive? Is your hardware configuration at OVH correct? You stated "SoftRaid 2x4To"; if that is the standard config, where is the second drive? Have you asked OVH?). Is the "embedded" motherboard RAID enabled and conflicting with MD? Maybe the server was prepared badly (a mistake by OVH); ask them?
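For reference, a test sequence along these lines might help once there is a maintenance window (pvesm's disable flag and pct config are standard PVE tools; the storage IDs are the ones from storage.cfg above, adjust as needed):
Code:
# with all guests stopped:
pveperf /var/lib/vz
hdparm -tT /dev/sda
hdparm -tT /dev/sdb

# temporarily disable the NFS storages instead of unmounting by hand
pvesm set nfs-back-01 --disable 1
pvesm set nfs-back-02 --disable 1
pvesm set nfs-back-03 --disable 1

# and since these are containers, the equivalent of "qm config 106" is:
pct config 106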
 
