Hello,
after upgrading to kernel 5.11, icinga can no longer read diskstats via procfs:
perl -MData::Printer -MSys::Statistics::Linux::DiskStats -E'my $lxs = Sys::Statistics::Linux::DiskStats->new; $lxs->init; p $lxs->get;'
Sys::Statistics::Linux::DiskStats: no diskstats...
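For anyone hitting the same error: as far as I can tell, the module parses /proc/diskstats with a fixed field count, and newer kernels append extra columns. A quick read-only shell check of what the kernel actually exposes:

```shell
# Confirm the kernel still exposes the statistics at all; if this prints
# data, the problem is in the parser, not in procfs:
head -n 3 /proc/diskstats

# Count the fields per line. Kernel 4.18 added discard stats (fields
# 15-17) and 5.5 added flush stats (18-20); a parser expecting exactly
# 14 fields will reject such lines:
awk '{ print NF; exit }' /proc/diskstats
```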
I was able to fix the problem.
- changed clocksource for VMs (clocksource=acpi_pm, was kvm_clock previously)
- on the virtualization host
/etc/sysctl.conf
# 0 avoids swapping processes out of physical memory for as long as
# possible.
vm.swappiness = 0
# lessen the IO blocking effect that...
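After editing /etc/sysctl.conf, the settings can be applied and verified without a reboot; a minimal sketch (the reload needs root):

```shell
# Reload all settings from /etc/sysctl.conf:
sysctl -p /etc/sysctl.conf

# Verify the value actually took effect; both commands read the same
# kernel knob:
sysctl vm.swappiness
cat /proc/sys/vm/swappiness
```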
Hello again,
I have since been able to solve the problem.
The last two changes, which may have tipped the balance,
were:
- increased the filesystem watch limit on the virtualization server
(fs.inotify.max_user_instances=256)
- changed the clock source on the VMs...
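For reference, both changes can be checked from the shell; a sketch assuming standard procfs/sysfs paths:

```shell
# Current inotify instance limit; raise it at runtime with
# `sysctl -w fs.inotify.max_user_instances=256` (root) and persist it
# in /etc/sysctl.conf:
cat /proc/sys/fs/inotify/max_user_instances

# Clock source the VM is using, and the alternatives the kernel offers:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
```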
Lucio, I added your request for a qcow2 cluster size parameter to the ticket
https://bugzilla.proxmox.com/show_bug.cgi?id=1989
Which application is your screenshot taken from?
There is a qemu parameter to set the size of the qcow2 L2 cache properly (the default cache only covers about the first 8 GB of the disk), which has a dramatic impact on performance, but it is currently unusable with Proxmox PVE...
Is there any workaround? E.g. can I create the qcow2 image manually, so I don't need to add the parameter later?
And is there any plan to integrate this parameter as a Proxmox PVE feature in the future?
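Not an official answer, but one workaround that only uses documented qemu-img options: the L2 cache has to cover disk_size * 8 / cluster_size bytes, so creating the image with a larger cluster size (2 MiB is the qcow2 maximum) lets the default cache cover far more of the disk, making the runtime l2-cache-size parameter much less important. A sketch (file name and size are placeholders):

```shell
# Create the image manually with 2 MiB clusters instead of the 64 KiB
# default, then attach it to the VM:
qemu-img create -f qcow2 -o cluster_size=2097152 vm-disk.qcow2 100G

# Inspect the result; 'cluster_size' should report 2097152:
qemu-img info vm-disk.qcow2
```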
I just read that ionice generally does not work on NFS:
https://forum.proxmox.com/threads/about-of-ionice-in-vzdump.16485/#post-84955
Maybe you can use another location instead?
As far as I know, none of the other available I/O schedulers honour ionice priorities, so you would have to use CFQ...
I'm not an expert, but I have similar problems (not involving NFS, though). What you could try as a workaround is a) switch the host to the CFQ I/O scheduler and then b) run vzdump with the ionice parameter in the shell. See man vzdump.
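A sketch of those two steps (the device name and VMID 100 are placeholders; on recent multi-queue kernels CFQ is gone and bfq is the closest priority-aware replacement, as far as I know):

```shell
# a) See which schedulers the kernel offers for the backup source disk
#    (the active one is shown in brackets), then switch to cfq:
cat /sys/block/sda/queue/scheduler
echo cfq > /sys/block/sda/queue/scheduler

# b) Run the backup with the lowest best-effort I/O priority:
vzdump 100 --ionice 7
```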
I installed the latest BIOS and intel-microcode package. Can I still use the "noibrs noibpb" kernel parameters to disable these functions? Or do I have to downgrade the BIOS again?
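Either way, you can check what the running kernel actually ends up doing, independent of BIOS/microcode version; the sysfs vulnerability interface is read-only and safe (x86 kernels since 4.15). Note that on current mainline kernels the parameters are spelled nospectre_v2 / mitigations=off; noibrs/noibpb came from the earlier out-of-tree patches, as far as I know:

```shell
# Effective mitigation state as the kernel sees it; this reflects both
# the microcode capabilities and any nospectre_v2/noibrs-style
# parameters on the kernel command line:
grep . /sys/devices/system/cpu/vulnerabilities/*
```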
I have no experience with ZFS yet. But it looks like there are three options:
1) retrofit a regular SATA3 controller (~50 euros)
2) retrofit a SATA RAID controller (it needs a battery to be able to use 'write back'; 200-400 euros, depending on taste)
3) ZFS with a new board, RAM (and possibly...
Before you change anything about the hardware - how is your packet size (MTU) configured?
See also
https://forum.proxmox.com/threads/enable-mtu-9000-jumbo-frames.34523/
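Two quick checks before touching hardware; a sketch (192.168.1.10 is a placeholder for the peer you want to test against):

```shell
# Show the configured MTU of every interface:
ip -o link show | awk '{ print $2, $4, $5 }'

# Test whether jumbo frames actually pass end-to-end: 8972 bytes of ICMP
# payload + 28 bytes of headers = 9000, and -M do forbids fragmentation,
# so this only succeeds if every hop really supports MTU 9000:
ping -c 3 -M do -s 8972 192.168.1.10
```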