>There are some knobs that can limit ksmd, like multiplying KSM_SLEEP_MSEC (put a 0 behind it) and lowering KSM_NPAGES_MAX in /etc/ksmtuned.conf
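for reference, the knobs quoted above live in /etc/ksmtuned.conf. a sketch of the kind of change meant (the concrete values are illustrative, not a recommendation):

```shell
# /etc/ksmtuned.conf -- illustrative values only
# sleep time between ksmd scan batches; appending a 0 (10x) lowers ksmd cpu load
KSM_SLEEP_MSEC=100
# upper bound on pages scanned per batch; lowering it also reduces ksmd cpu time
KSM_NPAGES_MAX=1250
```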
@leesteken, yes, i know, but this post is not about individually tuning KSM; it's about making it work correctly by default
>Are you sure it is not...
thanks, but this post is not about getting rid of KSM, it's about making it more CPU friendly and sorting out issues in conjunction with ZFS, i.e. making it behave better by default.
this is taken from my servers. on more than half of them, ksmd is the top cpu consumer - though ARC size is...
hello,
see subject.
there are these two bug reports / RFEs which have not gotten any attention for a long time now:
1. RFE: ZFS ARC size not taken into account by pvestatd or ksmtuned
https://bugzilla.proxmox.com/show_bug.cgi?id=3859
2. ksmtuned using vsz instead of rss...
i tried it some days ago and it's a pain in the ass, and it's also broken with 8.1.x
https://github.com/Telmate/terraform-provider-proxmox/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc
i have explicitly set "format qcow2" for the datastore because i want nothing but qcow2 on it - but the disk is still raw after "qm disk move..."
so i wonder, what's the purpose of this param or where does it apply (and where not)
i guess my expectation is wrong that it applies to "qm disk...
hello, there is some format param described at
https://pve.proxmox.com/wiki/Storage
format
Default image format (raw|qcow2|vmdk)
which also can be set with command
pvesm set <SR> --format <format>
which storage actions should this format param apply to, when being set ?
i would expect that...
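for illustration, the documented command from above could be used like this (the storage name "mydir" is hypothetical):

```shell
# set the default image format for a storage (storage name "mydir" is an example)
pvesm set mydir --format qcow2
```

the open question is exactly which storage actions then honor that default - newly allocated disks apparently should, while "qm disk move" apparently does not.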
i have made a proof of concept and could successfully use a file as a virtual USB stick via the dummy_hcd/g_mass_storage kernel modules
https://bugzilla.proxmox.com/show_bug.cgi?id=4879
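the proof of concept boils down to something like the following sketch (path and size are assumptions; dummy_hcd must be available in the running kernel, and this needs root):

```shell
# create a backing file for the virtual USB stick (path and size are examples)
truncate -s 1G /var/tmp/usbstick.img
# load a dummy USB host controller, then attach the mass-storage gadget to it
modprobe dummy_hcd
modprobe g_mass_storage file=/var/tmp/usbstick.img removable=1
# the file now shows up as a USB mass-storage device on the dummy controller
```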
>I have a single VM (OPNsense) that is writing around 70k per minute (showing in VM summary tab) which is less than 5MB / hour.
what do you measure inside the VM?
there is a bugzilla ticket for this: https://bugzilla.proxmox.com/show_bug.cgi?id=4835
unfortunately it is currently closed as "works for me". if there is no objection or reaction, i will try to reopen it in the next few days. or maybe someone else will reopen it.
@TosH45 @Wintendo
can you disable the NFS mounts to see whether the errors below go away, to make sure it's not the NFS mounts causing this problem?
there is some ticket and linked threads on the failing of zfs-import systemd service at https://bugzilla.proxmox.com/show_bug.cgi?id=4835 , these appear to be non critical or...