We see the same "Guest has not initialized display ..." message if we set "Freeze CPU at startup".
However, freeze=1 does not show up in your qm config, so maybe the problem is with the CPU type of the VM.
Did you try "host" already?
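For reference, you can check and change the CPU type on the CLI as well (a sketch; `100` is a placeholder VM ID, use your own):

```shell
# Show the current VM configuration; look for the "cpu:" line
qm config 100

# Switch the virtual CPU type to "host" (passes the host CPU model through to the guest)
qm set 100 --cpu host
```

The same setting is available in the GUI under Hardware -> Processors -> Type.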
Hey,
if Ceph / rados bench is OK but the performance inside the VM is not, then the attachment of the VM needs a look as well: which controller the virtual disk uses, which cache settings, things like that.
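A quick way to see how the disk is attached (a sketch; `100` is a placeholder VM ID):

```shell
# Show controller type, disk bus and cache mode of the VM's virtual disks
qm config 100 | grep -E 'scsihw|scsi[0-9]|virtio[0-9]|sata[0-9]|ide[0-9]'
```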
I think that would make it worse. In Ceph it is best to use nearly identical capacities on the nodes. If you increase your "big nodes" further, Ceph cannot distribute data its usual way.
I think it is best to exchange the small SSDs one by one with 2 TB SSDs, beginning with the node with the lowest...
Hey, please use code tags around the output.
I can't see any big mistake. Your pool seems to be nearly full.
In Ceph you can't (in the normal way) equalize all OSDs. That is because of PG placement; it does not work on byte level or anything similar. But it doesn't hurt.
You can try to increase the PG count i...
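A sketch of checking utilization and raising the PG count (the pool name `rbd` and the value 256 are placeholders; the right target depends on OSD count and replica size, and `pg_num` should be a power of two):

```shell
# Check utilization per OSD and per pool first
ceph osd df
ceph df

# Then raise the PG count of the pool (pgp_num must follow pg_num)
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
```

Note that changing the PG count triggers data movement, so do it outside peak hours.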
I found a hint on a German news portal; maybe it is related.
https://www.golem.de/news/secure-boot-virtuelle-maschinen-mit-windows-server-2022-booten-nicht-2302-172076.html
When it comes to performance, consider using virtio.
First you should choose SCSI for the hard disk, not SATA. The same for the network: choose virtio instead of e1000.
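On the CLI that would look roughly like this (a sketch; `100` and the bridge `vmbr0` are assumptions, adjust to your setup):

```shell
# Use the virtio-scsi controller for the disks
qm set 100 --scsihw virtio-scsi-pci

# Use a virtio NIC instead of the emulated e1000
qm set 100 --net0 virtio,bridge=vmbr0
```

Remember that the guest needs virtio drivers (Windows guests need the virtio-win drivers installed before switching).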
Although not generally recommended, we use the HP 840 in RAID mode with JBOD-RAID0 so that we can use the cache settings and the battery/capacitor-backed cache. You must know what you are doing in this case and check it against your use case (e.g. data security).
Using the HP controller without its features...
We use HP G9 as well, but with another HBA (P840), and we use RAID mode. As far as I remember we did not have these problems with locally attached storage (ZFS, e.g.). But we used the battery-backed write cache of the P840. Without that, most performance was "ugly".
I do not remember the details...
So you are benchmarking Ceph?
Then there are thousands of parameters to look at: network settings for Ceph, the Ceph version, Ceph storage / OSD parameters, pool redundancy, number of nodes, switch settings, and many more aspects.
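Before tuning anything, it helps to baseline the cluster itself, outside any VM. A sketch with `rados bench` (the pool name `testpool` is an assumption, use a scratch pool):

```shell
# 60 s of 4 MB sequential writes; keep the objects for the read tests
rados bench -p testpool 60 write --no-cleanup

# Sequential and random reads against the same objects
rados bench -p testpool 60 seq
rados bench -p testpool 60 rand

# Remove the benchmark objects afterwards
rados -p testpool cleanup
```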
If I see it right, you are testing the Ceph behavior (fsync=1) inside the VM. That will be "nested" in some way, I think.
Normal processes in the VM will usually not use fsync. A DBMS may use fsync=1, but then mostly sequentially (redo logs, e.g.).
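If you want to reproduce that DBMS-like pattern inside the VM, a sketch of an fio job (all values here are example assumptions, not a recommendation):

```shell
# Sequential 4k writes with an fsync after every write,
# roughly mimicking a redo log workload
fio --name=redolog --rw=write --bs=4k --size=256m --fsync=1 --numjobs=1
```

Comparing that number with the same job run directly on the host shows how much the virtualization layer costs you.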
Btw, regarding the monitoring tool: it is actually correct that the free value is used there, and that enough is free. In my opinion you will not get them "equal".