Yes, even on live-migrated VMs this issue occurred. I have the feeling that a rolling restart takes a lot more time than it did before the CVE patch.
Maybe there is a problem with some sort of timeout being hit? The weird thing is, regardless of what I tried to reproduce this on a test...
After passing the stage where the CVE patch (CVE-2021-20288: Unauthorized global_id reuse in cephx) for mon_warn_on_insecure_global_id_reclaim came into play, and after doing further rolling upgrades up to the latest version, we are seeing weird behavior when executing: ceph.target on a single node
all...
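For reference, a rough sketch of the commands typically involved at that stage (a generic example, not necessarily our exact procedure):

```
# Check whether the monitors still warn about insecure global_id reclaim
ceph health detail

# Once all clients and daemons are patched, stop allowing insecure reclaim
ceph config set mon auth_allow_insecure_global_id_reclaim false

# Rolling restart of all Ceph daemons on a single node
systemctl restart ceph.target
```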
I agree with you that it's suboptimal, but it's the only way to get the highest performance out of this.
In my use case it might work because of the deployment chain I use: symlinking "current" to a timestamped release directory. So I deploy code to the host machine (r/w) mount, into a release directory named by timestamp...
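Roughly, that deployment chain looks like this (paths and names below are purely illustrative):

```
# Hypothetical layout: /srv/releases holds timestamped builds,
# /srv/current is the symlink the web server serves from.
RELEASE=/srv/releases/$(date +%Y%m%d%H%M%S)
mkdir -p "$RELEASE"
rsync -a build/ "$RELEASE"/

# Point "current" at the new release
ln -sfn "$RELEASE" /srv/current
```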
I'm using ext2, which is not a journaling filesystem, so it's safe for this.
I do have a GlusterFS filesystem which I use for all the dynamic content such as uploads, but I want to serve the core components of my web apps via fast local storage distributed with the virtual machines within the...
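As a rough sketch of that split (mount points and volume names are made up for illustration):

```
# /etc/fstab inside a VM - illustrative entries only
# Fast local, read-only storage for the webapp core:
/dev/vdb1          /var/www/app      ext2       ro,noatime        0 0
# GlusterFS volume for dynamic content such as uploads:
gluster1:/uploads  /var/www/uploads  glusterfs  defaults,_netdev  0 0
```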
Hi Wolfgang,
thanks a lot, that did the trick.
After benchmarking read speeds for 9p shares, I realized it doesn't seem fast enough for my use case.
Streaming a 1 GB file from SSD gives me a read speed of ~174 MB/s, so I'm going to try the other way: mounting
the partition read-only with virtio, like...
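The benchmark was a simple sequential read along these lines (the path is illustrative and the exact command may have differed):

```
# Stream a ~1 GB file from the 9p share and measure throughput
dd if=/mnt/9pshare/testfile-1G of=/dev/null bs=1M status=progress
```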
The 9p_virtio approach (http://www.linux-kvm.org/page/9p_virtio) is a very useful one, especially for clustered environments where you need a shared webroot (read-only) within several KVM VMs, sharing config files, etc.
That shouldn't be that hard to implement because it's already implemented in...
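For illustration, plain QEMU already supports this kind of share via something like the following (share path and mount tag are just examples, other VM options omitted):

```
# Host side: export a directory to the guest over virtio-9p
qemu-system-x86_64 ... \
  -fsdev local,id=webroot,path=/srv/webroot,security_model=none \
  -device virtio-9p-pci,fsdev=webroot,mount_tag=webroot

# Guest side: mount the share read-only
mount -t 9p -o trans=virtio,version=9p2000.L,ro webroot /var/www/app
```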