When I run snapshot backups with fast LZO compression of my KVMs via Proxmox, I see very significant slowdowns, not only on the system being backed up but also on the other KVMs on the same server.
These systems become unusable for the duration of the backup.
This is the case for all KVMs with disks larger than 60GB (some up to 200GB).
The problem does not seem to occur on KVMs with a 30GB disk.
I'm backing up to a local disk.
My KVMs use the VirtIO SCSI controller.
The server load climbs to 8 during backups.
I'm using Proxmox VE 5.1-41.
My server specs:
- 2x 500GB Samsung MZ7LN512HCHP SSDs
- 12-core CPU @ 3.5GHz
- 100GB RAM
The /boot partition is on RAID 1 and the / partition is on RAID 5.
The host uses ext4, as do the KVMs.
Here is the output of a few commands:
Code:
cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
Code:
pveperf
CPU BOGOMIPS: 83900.76
REGEX/SECOND: 3170830
HD SIZE: 936.10 GB (/dev/md1)
BUFFERED READS: 1034.51 MB/sec
AVERAGE SEEK TIME: 0.14 ms
FSYNCS/SECOND: 37.45
DNS EXT: 20.46 ms
The same command during a backup:
CPU BOGOMIPS: 83900.76
REGEX/SECOND: 3087087
HD SIZE: 936.10 GB (/dev/md1)
BUFFERED READS: 53.97 MB/sec
AVERAGE SEEK TIME: 11.27 ms
FSYNCS/SECOND: 0.05
DNS EXT: 17.49 ms
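In the meantime I was wondering whether simply throttling the backup job would keep the other KVMs responsive. From what I understand of the vzdump documentation, something like this in /etc/vzdump.conf might do it (the 50000 KB/s value is just a guess on my part, not something I've tested):

```
# /etc/vzdump.conf -- sketch, untested; bwlimit value is a guess
bwlimit: 50000   # cap backup I/O at roughly 50 MB/s
ionice: 7        # run the backup at the lowest best-effort I/O priority
```

Would capping the bandwidth like this be a reasonable workaround, or does it just make the backup window longer without fixing the underlying contention?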
I'm not sure what additional information I could provide.
Is this behavior normal?
----------------
Besides that, I wonder whether this could be due to the type of controller I use for my KVMs.
What is the difference between VirtIO SCSI and LSI 53C895A?
I also see that the cache is disabled on my KVM disks, although there is the option to configure it as Direct Sync, Write Through, etc.
Would that change anything?
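If changing the cache mode is worth trying, I assume it would be something like the command below (100 being the VM ID and the storage/volume names being examples from my setup, not literal values to copy):

```
# Sketch, untested: switch the first SCSI disk of VM 100 to writeback cache
qm set 100 --scsi0 local:100/vm-100-disk-1.qcow2,cache=writeback
```

Is that the right approach, and is writeback safe here given the host is on a battery-less software RAID?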
I just followed the advice on your wiki when configuring these things, but this situation seems to tell a different story.
I also read this topic but couldn't find a solution there. It's a fairly old thread and its configurations are quite different from mine.