Slow (or impossible) boot time of VMs

Brononius

This server had been running for almost 2 years without real issues. Last week, I removed the cluster on my server and updated everything (now running 6.3.2).
With all these changes, I also rebooted the server, and at first sight everything was working.

But after approximately 2 days, I noticed some slow reaction times. When I boot a VM now, I see the BIOS booting from the hard disk, and after that just a black screen. After stopping almost all my VMs (one of them is my firewall/routing/...), the problem is gone, and the VM that was failing earlier starts normally.

I just booted 5 VMs that are really important for daily use (home automation, data...). But now, after 24 hours, the problem has come back, and I can't start a 'new' VM.
The strange thing is that the machines that are currently running don't have any problem; it's only new 'boots' that fail.

Any tips on how I can find out what the real problem is? :$
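
The only thing I've thought of so far is watching the journal around one of these hanging starts, roughly like this (just a sketch):

Code:
# follow the journal from another shell while starting the VM
journalctl -f

# or look back right after a slow start
journalctl --since "15 minutes ago"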


Some PVE info:

Code:
pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1


----------------------------------
pvestatd status
running
root@stampertj:~# ^C
root@stampertj:~# systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
   Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-11-27 21:13:25 CET; 23h ago
  Process: 16645 ExecStart=/usr/bin/pvestatd start (code=exited, status=0/SUCCESS)
Main PID: 16658 (pvestatd)
    Tasks: 1 (limit: 7372)
   Memory: 94.6M
   CGroup: /system.slice/pvestatd.service
           └─16658 pvestatd

nov 28 09:28:53 stampertj pvestatd[16658]: status update time (5.547 seconds)
nov 28 09:53:48 stampertj pvestatd[16658]: status update time (9.622 seconds)
nov 28 11:26:14 stampertj pvestatd[16658]: status update time (5.356 seconds)
nov 28 11:26:37 stampertj pvestatd[16658]: status update time (8.147 seconds)
nov 28 11:26:56 stampertj pvestatd[16658]: status update time (18.621 seconds)
nov 28 12:09:57 stampertj pvestatd[16658]: status update time (11.773 seconds)
nov 28 12:41:33 stampertj pvestatd[16658]: status update time (16.426 seconds)
nov 28 13:06:39 stampertj pvestatd[16658]: status update time (5.308 seconds)
nov 28 17:49:24 stampertj pvestatd[16658]: auth key pair too old, rotating..


-----------------
systemctl status pvedaemon
● pvedaemon.service - PVE API Daemon
   Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-11-27 10:47:23 CET; 1 day 9h ago
  Process: 3246 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
Main PID: 3320 (pvedaemon)
    Tasks: 6 (limit: 7372)
   Memory: 291.2M
   CGroup: /system.slice/pvedaemon.service
           ├─ 3320 pvedaemon
           ├─14811 pvedaemon worker
           ├─25716 pvedaemon worker
           ├─29663 pvedaemon worker
           ├─47269 task UPID:stampertj:0000B8A5:00B862B4:5FC2A30F:vncproxy:613:root@pam:
           └─47271 /usr/bin/perl /usr/sbin/qm vncproxy 613

nov 28 20:20:42 stampertj pvedaemon[25716]: <root@pam> starting task UPID:stampertj:0000B85B:00B86104:5FC2A30A:vncproxy:613:root@pam:
nov 28 20:20:43 stampertj qm[47197]: VM 613 qmp command failed - VM 613 not running
nov 28 20:20:43 stampertj pvedaemon[47195]: Failed to run vncproxy.
nov 28 20:20:43 stampertj pvedaemon[25716]: <root@pam> end task UPID:stampertj:0000B85B:00B86104:5FC2A30A:vncproxy:613:root@pam: Failed to run vncproxy.
nov 28 20:20:44 stampertj pvedaemon[47219]: start VM 613: UPID:stampertj:0000B873:00B861D5:5FC2A30C:qmstart:613:root@pam:
nov 28 20:20:44 stampertj pvedaemon[25716]: <root@pam> starting task UPID:stampertj:0000B873:00B861D5:5FC2A30C:qmstart:613:root@pam:
nov 28 20:20:46 stampertj pvedaemon[25716]: <root@pam> end task UPID:stampertj:0000B873:00B861D5:5FC2A30C:qmstart:613:root@pam: OK
nov 28 20:20:47 stampertj pvedaemon[47269]: starting vnc proxy UPID:stampertj:0000B8A5:00B862B4:5FC2A30F:vncproxy:613:root@pam:
nov 28 20:20:47 stampertj pvedaemon[14811]: <root@pam> starting task UPID:stampertj:0000B8A5:00B862B4:5FC2A30F:vncproxy:613:root@pam:
nov 28 20:27:10 stampertj pvedaemon[14811]: <root@pam> successful auth for user 'root@pam'

-----------------


systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
   Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-11-27 10:47:26 CET; 1 day 9h ago
  Process: 3324 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
  Process: 3327 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
  Process: 40778 ExecReload=/usr/bin/pveproxy restart (code=exited, status=0/SUCCESS)
Main PID: 3329 (pveproxy)
    Tasks: 4 (limit: 7372)
   Memory: 195.8M
   CGroup: /system.slice/pveproxy.service
           ├─  547 pveproxy worker
           ├─ 3329 pveproxy
           ├─40785 pveproxy worker
           └─40787 pveproxy worker

nov 28 00:00:32 stampertj pveproxy[29214]: worker exit
nov 28 00:00:32 stampertj pveproxy[31445]: worker exit
nov 28 00:00:32 stampertj pveproxy[28008]: worker exit
nov 28 00:00:32 stampertj pveproxy[3329]: worker 31445 finished
nov 28 00:00:32 stampertj pveproxy[3329]: worker 28008 finished
nov 28 00:00:32 stampertj pveproxy[3329]: worker 29214 finished
nov 28 20:25:10 stampertj pveproxy[40786]: worker exit
nov 28 20:25:10 stampertj pveproxy[3329]: worker 40786 finished
nov 28 20:25:10 stampertj pveproxy[3329]: starting 1 worker(s)
nov 28 20:25:10 stampertj pveproxy[3329]: worker 547 started



---------------------------------------
pvesm status
Name             Type     Status           Total            Used       Available        %
VD3               lvm     active      1171517440       729808896       441708544   62.30%
VD4               lvm     active      1171517440               0      1171517440    0.00%
data              lvm     active      4883214336      2394947584      2488266752   49.04%
local             dir     active        98559220        16879832        76629840   17.13%
local-lvm     lvmthin     active      1826881536       404106195      1422775340   22.12%


Some hardware info:

Code:
lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       46 bits physical, 48 bits virtual
CPU(s):              24
On-line CPU(s) list: 0-23
Thread(s) per core:  2
Core(s) per socket:  6
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               45
Model name:          Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Stepping:            7
CPU MHz:             2238.334
CPU max MHz:         2500,0000
CPU min MHz:         1200,0000
BogoMIPS:            4000.02
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            15360K
NUMA node0 CPU(s):   0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s):   1,3,5,7,9,11,13,15,17,19,21,23
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm pti tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts

------
free
              total        used        free      shared  buff/cache   available
Mem:      396212660    81392340   310449960      319432     4370360   312120068
Swap:       8388604           0     8388604

------
df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  189G     0  189G   0% /dev
tmpfs                  38G  267M   38G   1% /run
/dev/mapper/pve-root   94G   17G   74G  19% /
tmpfs                 189G   46M  189G   1% /dev/shm
tmpfs                 5,0M     0  5,0M   0% /run/lock
tmpfs                 189G     0  189G   0% /sys/fs/cgroup
/dev/fuse              30M   28K   30M   1% /etc/pve
tmpfs                  38G     0   38G   0% /run/user/0
 
Looks like RAM and disk space aren't full. If the CPU were the problem, I think you would have seen that yourself in the GUI.
You could run iostat to check whether your HDDs can't handle the IOPS. That would slow down the system if you start too many VMs.

Code:
# install sysstat
apt-get install sysstat

# run iostat for 10 minutes
iostat 600 2
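
If the plain numbers are hard to interpret, the extended view might help as well (just a sketch, pick whatever interval you like); it adds average wait times (r_await/w_await) and %util per device, which makes an overloaded disk easier to spot:

Code:
# extended per-device statistics: 5 reports, 60 seconds apart
iostat -x 60 5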
 
These are the results of the test; I ran it twice.
What am I looking at/for? :$

Normally, I'm running about 18 VMs: 4 machines with heavy activity (video surveillance, firewall, home automation and monitoring). The other machines aren't doing that much (small web servers for a tiny blog, a data server...). The setup has been in place for several months/years, and nothing has changed at the hardware level.


Code:
 iostat 600 2
Linux 5.4.73-1-pve (stampertj)     29-11-20     _x86_64_    (24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          10,89    0,00    2,24    1,70    0,00   85,17

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb              24,47      2365,26       194,29  391182769   32133690
sda             155,64       381,77      1092,74   63139906  180724232
sdc               3,56        42,50       225,61    7028692   37312666
sdd               0,20        25,76         0,00    4260248          0
dm-5              0,30        76,35         0,00   12626940          0
dm-6              2,82         7,54        78,29    1246648   12948688
dm-7              0,00         0,02         0,00       3152          0
dm-9              0,00         0,02         0,00       3268          0
dm-10             3,24         4,30        20,65     710965    3415916
dm-11             0,21         0,75         0,07     124536      10896
dm-12           160,04       273,64      1072,82   45256863  177429736
dm-13           160,04       273,64      1072,82   45256863  177429736
dm-15             0,24         3,72         0,24     615831      40028
dm-16            21,59        32,25       164,18    5333330   27153951
dm-17            11,27        96,16        79,00   15903104   13065236
dm-18             0,42         5,21         3,15     861340     520684
dm-19             0,41         5,15         2,61     852457     431176
dm-20             1,97        15,85        11,52    2620705    1904768
dm-21            11,55         7,61       168,52    1258685   27871257
dm-22             0,06         2,91         0,00     481601          9
dm-23             1,90        13,06        24,17    2160270    3997678
dm-24             8,14        31,32        28,79    5180660    4762304
dm-25             2,66        24,00        38,10    3968894    6301498
dm-26             4,84        11,74        15,81    1942424    2615200
dm-27            95,00        25,07       532,17    4146385   88013307
dm-0              0,86        50,54        19,59    8358101    3240340
dm-1              0,06         2,28         0,40     376407      65924

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           7,81    0,00    2,42    1,01    0,00   88,76

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb               3,18        76,80        80,82      46080      48492
sda             204,09      1464,97      1267,82     878981     760692
sdc               0,50        25,61         1,25      15364        752
sdd               0,20        25,60         0,00      15360          0
dm-5              0,00         0,00         0,00          0          0
dm-6              2,37         0,00        79,38          0      47628
dm-7              0,00         0,00         0,00          0          0
dm-9              0,00         0,00         0,00          0          0
dm-10             3,28         0,00        19,31          0      11588
dm-11             0,52         2,09         0,00       1256          0
dm-12           203,57      1360,35      1249,31     816208     749584
dm-13           203,57      1360,35      1249,31     816208     749584
dm-15             0,00         0,00         0,00          0          0
dm-16            22,25        12,92       190,70       7752     114420
dm-17             1,06         0,00         4,76          0       2856
dm-18             0,00         0,00         0,00          0          0
dm-19            31,03       904,07       166,63     542440      99980
dm-20             6,48        28,30        36,94      16980      22164
dm-21             0,00         0,00         0,00          0          0
dm-22             0,00         0,00         0,00          0          0
dm-23            28,06       143,67       273,89      86204     164332
dm-24             3,11         0,01        11,73          8       7040
dm-25             7,15       276,59        17,73     165952      10640
dm-26             0,00         0,00         0,00          0          0
dm-27           104,72         0,00       546,92          0     328152
dm-0              0,21         0,00         1,44          0        864
dm-1              0,30         0,01         1,25          4        752

Second test:
Code:
iostat 600 2
Linux 5.4.73-1-pve (stampertj)     29-11-20     _x86_64_    (24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          10,87    0,00    2,24    1,70    0,00   85,19

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb              24,36      2353,74       193,71  391247281   32199374
sda             155,85       385,99      1093,22   64160959  181718668
sdc               3,54        42,41       224,48    7050200   37313694
sdd               0,20        25,76         0,00    4281752          0
dm-5              0,30        75,96         0,00   12626940          0
dm-6              2,81         7,50        78,29    1246648   13013108
dm-7              0,00         0,02         0,00       3152          0
dm-9              0,00         0,02         0,00       3268          0
dm-10             3,24         4,28        20,65     710965    3432136
dm-11             0,21         0,76         0,07     126020      10896
dm-12           160,22       277,88      1073,30   46190339  178408640
dm-13           160,22       277,88      1073,30   46190339  178408640
dm-15             0,24         3,70         0,24     615831      40028
dm-16            21,59        32,14       164,30    5342838   27310027
dm-17            11,22        95,67        78,62   15903104   13068996
dm-18             0,42         5,18         3,13     861340     520684
dm-19             0,52         8,39         3,20    1394997     532212
dm-20             1,98        15,87        11,60    2637689    1928580
dm-21            11,49         7,57       167,67    1258685   27871257
dm-22             0,06         2,90         0,00     481601          9
dm-23             2,02        13,52        25,29    2246950    4204022
dm-24             8,12        31,17        28,73    5180672    4776100
dm-25             2,71        25,57        38,04    4249774    6322450
dm-26             4,82        11,69        15,73    1942424    2615200
dm-27            95,05        24,94       532,21    4146385   88466435
dm-0              0,86        50,28        19,50    8358101    3241604
dm-1              0,07         2,26         0,40     376411      66952

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           7,74    0,00    2,32    0,85    0,00   89,09

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb               3,12        76,80        79,33      46080      47596
sda             203,21      1129,29      1247,17     677576     748304
sdc               0,49        25,62         1,14      15372        684
sdd               0,20        25,60         0,00      15360          0
dm-5              0,00         0,00         0,00          0          0
dm-6              2,32         0,00        77,99          0      46796
dm-7              0,00         0,00         0,00          0          0
dm-9              0,00         0,00         0,00          0          0
dm-10             3,17         0,00        18,95          0      11368
dm-11             0,29         1,17         0,00        704          0
dm-12           203,05      1026,30      1228,99     615780     737396
dm-13           203,05      1026,30      1228,99     615780     737396
dm-15             0,00         0,00         0,00          0          0
dm-16            22,57        55,04       224,22      33024     134532
dm-17             1,07         0,00         4,76          0       2856
dm-18             0,00         0,00         0,00          0          0
dm-19            15,87       472,45        23,46     283472      14076
dm-20             2,09         0,05        12,85         28       7708
dm-21             0,00         0,00         0,00          0          0
dm-22             0,00         0,00         0,00          0          0
dm-23            15,82         1,31       191,88        784     115128
dm-24             3,76         0,00        14,23          0       8536
dm-25            37,10       497,45       216,96     298472     130176
dm-26             0,00         0,00         0,00          0          0
dm-27           104,77         0,00       540,64          0     324384
dm-0              0,19         0,00         1,33          0        800
dm-1              0,29         0,02         1,14         12        684
 
What I also noticed is that when I check with top, some VMs go really high in % usage, sometimes towards 500%. I don't see this in the GUI.
The machine here at ~300% is Zoneminder, a video surveillance system. But it's the same for other systems (e.g. monitoring, home automation), so it's not just one system.


Code:
top -c

top - 13:50:01 up  1:38,  1 user,  load average: 4,87, 5,84, 6,74
Tasks: 420 total,   4 running, 416 sleeping,   0 stopped,   0 zombie
%Cpu(s): 18,1 us,  3,2 sy,  0,0 ni, 75,0 id,  3,5 wa,  0,0 hi,  0,2 si,  0,0 st
MiB Mem : 386926,4 total, 342586,7 free,  42567,8 used,   1771,9 buff/cache
MiB Swap:   8192,0 total,   8192,0 free,      0,0 used. 342122,2 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                           
 6388 root      20   0   33,3g   9,9g  10856 S 294,0   2,6 236:56.66 /usr/bin/kvm -id 713 -name Quasimodo -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/713.qmp,server,nowait -mon ch+
 3400 root      20   0   48,8g  10,8g  10720 S  97,0   2,9  95:17.32 /usr/bin/kvm -id 999 -name Zeus -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/999.qmp,server,nowait -mon chardev+
 6831 root      20   0   16,7g   3,5g  11416 S  66,2   0,9  27:39.77 /usr/bin/kvm -id 711 -name Doornroosje2 -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/711.qmp,server,nowait -mon+
41164 root      20   0  281004  91524  19884 R  29,5   0,0   0:00.89 /usr/bin/perl -T /usr/bin/pvesr run --mail 1                                                                                     
13951 root      20   0 9101368   8,0g  10948 S  16,6   2,1  11:58.53 /usr/bin/kvm -id 622 -name Botje -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/622.qmp,server,nowait -mon charde+
 5682 root      20   0 9066768   1,5g  10832 S   6,3   0,4  10:10.17 /usr/bin/kvm -id 714 -name Baloo -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/714.qmp,server,nowait -mon charde+
 5210 root      20   0 7137272   1,0g  11268 S   6,0   0,3  10:24.66 /usr/bin/kvm -id 834 -name MelodyAriel -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/834.qmp,server,nowait -mon +
 3445 root      20   0       0      0      0 S   2,6   0,0   2:09.19 [vhost-3400]


[screenshot: 2020-11-29 13.49.46]
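
(Side note on those percentages: as far as I understand, top reports %CPU per logical core, so a KVM process with several vCPUs can legitimately show 300-500%, while the GUI shows usage relative to the VM's assigned cores. A quick way to compare against the configured vCPUs, e.g. for VM 713 above:)

Code:
# show the configured vCPU topology of VM 713 (Quasimodo)
qm config 713 | grep -E '^(sockets|cores|vcpus)'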
 
Your IO delay is quite high, and your sda is doing 200 tps (IOPS). Depending on your HDD, it is only capable of handling 120 to 350 IOPS.
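
To see which LVs those dm-* devices belong to (and therefore which VM disks are hitting sda), you could map the kernel names to the LVM volume names, for example (a rough sketch):

Code:
# map dm-N kernel names to LVM volume names
lsblk -o KNAME,NAME,TYPE,SIZE
# or
dmsetup ls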
 
Your IO delay is quite high, and your sda is doing 200 tps (IOPS). Depending on your HDD, it is only capable of handling 120 to 350 IOPS.

These disks are 'standard' SATA disks on a hardware RAID controller.
Is there anything (besides changing the disks) I can do to lower the IO delays?
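Or would it make sense to put bandwidth/IOPS limits on the busiest VM disks in the meantime? Something like this (just guessing at the disk bus and volume name, they'd have to match the output of 'qm config'):

Code:
# hypothetical example: cap read/write IOPS of VM 713's first disk
qm set 713 --scsi0 VD3:vm-713-disk-0,iops_rd=150,iops_wr=150

I think the same limits can also be set in the GUI under the disk's Bandwidth options.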

PS: I didn't have this issue for over 2 years, so I want (hope) to believe it's something in software?
 
I moved some VMs to other disks, and that seems to solve the issue. I guess the 2 SATA disks in VD1 can't handle it. Now I'm using a combination of VD1 to VD4, and so far the VMs are running much better.

Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          23,79    0,00    3,47    0,02    0,00   72,73

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             106,84       138,06       581,47      82836     348884
sdb              46,42      7429,49        80,13    4457692      48080
sdc              63,67        35,32      1627,79      21192     976672
sdd              46,96        27,51       421,68      16504     253006
dm-1              0,01         0,00         0,04          0         24
dm-4             43,64      7352,69         2,79    4411612       1672
dm-5              0,00         0,00         0,00          0          0
dm-6              2,24         0,00        77,35          0      46408
dm-7              0,00         0,00         0,00          0          0
dm-9              0,00         0,00         0,00          0          0
dm-10             3,43         0,00        19,97          0      11980
dm-11             0,03         0,13         0,00         76          0
dm-12           104,34        35,52       562,31      21312     337384
dm-13           104,34        35,52       562,31      21312     337384
dm-16             0,00         0,00         0,00          0          0
dm-17             2,47        35,52         4,83      21312       2900
dm-19             0,00         0,00         0,00          0          0
dm-20             0,00         0,00         0,00          0          0
dm-21             0,00         0,00         0,00          0          0
dm-22             0,00         0,00         0,00          0          0
dm-23             0,00         0,00         0,00          0          0
dm-24             0,00         0,00         0,00          0          0
dm-26             0,00         0,00         0,00          0          0
dm-27            99,58         0,00       541,65          0     324988
dm-2              0,00         0,00         0,00          0          0
dm-3             22,07         0,79       201,61        472     120968
dm-28             0,45         0,00         4,08          0       2448
dm-29             0,97         0,00         7,23          0       4338
dm-30             2,39         1,12        20,01        672      12008
dm-31             1,74         0,00        11,66          0       6996
dm-18             0,98         0,00         4,31          0       2588
dm-0              1,47         0,00         8,04          0       4824
dm-32             2,30         0,00        15,83          0       9496
dm-25             0,59         0,00         3,61          0       2164
dm-8             52,95         9,71       710,96       5828     426576
dm-15             9,05         0,01       907,23          4     544336
dm-33            17,27         0,00       169,51          0     101708

So I guess that in the future, I'll need to think about changing the SATA disks to e.g. SSDs...
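
For anyone who ends up in the same situation: moving a disk to another storage can be done per disk, roughly like this (a sketch; the disk name and target storage have to match your own setup):

Code:
# move VM 713's scsi0 disk to the VD4 storage and drop the old copy
qm move_disk 713 scsi0 VD4 --delete 1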

Thank you very much for your support!!!
 