We have been staying on the 5.13.x kernel because the entire 5.15 kernel is a mess IMO, with live migration being the #1 problem.
I can't do any testing on 5.15.x for that reason.
KSM sharing is pretty much broken in the 6.2.x kernel and PVE 8. Leaving this here for more attention.
https://forum.proxmox.com/threads/ksm-memory-sharing-not-working-as-expected-on-6-2-x-kernel.131082/
Also worth mentioning, I am having performance issues on the 6.2.x kernels even without KSM enabled. We have pretty much lost 20-30% performance on the 6.2.x kernels.
These are heavy-lifting CentOS7 LAMP servers that have no issues on the 5.13 and 5.15 kernels (besides live migration on 5.15.x...
Yikes, the 5.15 kernel is a mess too. Live migration between different CPUs and between identical CPUs is pretty much broken altogether.
Looks like we are going back to the 5.13 kernel, as nothing has been stable in 6-12 months.
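For anyone else rolling back, pinning the older kernel so upgrades don't boot you back onto 6.2.x is roughly this (the version string below is just an example, check what's actually installed first):

proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.13.19-6-pve
reboot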
Hey all. I have a 7-node cluster with roughly 600 CentOS7 VMs running on it. We typically average anywhere from 150-500G in KSM memory sharing.
After moving to the 6.2.x kernel on 2 of the 7 front ends, those 2 front ends are barely using any KSM sharing.
Front end with 6.2.x
Front end with...
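For anyone wanting to compare on their own nodes, KSM usage can be read straight from sysfs, something like:

grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
# pages_sharing * page size (4 KiB) is roughly the memory KSM is saving:
awk '{printf "%.1f GiB shared by KSM\n", $1*4096/1073741824}' /sys/kernel/mm/ksm/pages_sharing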
Getting this trying to move a disk.
Moving disks from Ceph to LVM/iSCSI storage. What I don't understand is that all the disks currently on the LVM/iSCSI storage are using aio=io_uring by default.
So if aio=io_uring is bad for this disk type, then why is it the default when creating a...
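If io_uring does turn out to be the problem on the LVM/iSCSI-backed disks, the AIO mode can be overridden per disk. Something like the below should work (VM ID, storage name, and disk slot are just placeholders for illustration):

qm config 100 | grep scsi0                                  # see the current drive line
qm set 100 --scsi0 lvm-iscsi:vm-100-disk-0,aio=native,cache=none

(aio=native needs a cache mode that uses O_DIRECT, hence cache=none.)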
I have some iSCSI-based storage with LVM on top. I introduced a new LUN to all 7 nodes this morning.
On one of the hosts I created a physical volume and volume group on mpath6.
iSCSI looks good and so does multipath on all the hosts. Each host has the following.
mpath6...
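For reference, the PV/VG creation on that one host was just the usual LVM-on-multipath steps, roughly (the VG name here is just an example):

pvcreate /dev/mapper/mpath6
vgcreate vg_lun6 /dev/mapper/mpath6
vgs    # confirm the new VG shows up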
I am using a project called Benji.
Benji doesn't like the operations feature.
To work around this I used the Proxmox disk move feature to move the 2 disks to other storage, then back to Ceph. The operations feature was gone after that and everything has been solid. No idea how 2 of 600 RBDs had that...
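If anyone wants to scan for other images that picked up that flag, a quick loop over the pool should show it (pool name here is just an example):

for img in $(rbd ls ceph-vm); do
  rbd info ceph-vm/"$img" | grep -q 'features:.*operations' && echo "$img has the operations feature"
done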
I'm still fighting performance issues.
I have found stop/start only works for a few days.
These are very active CentOS7 Apache servers, but performance is night and day on the 5.15 kernels.
That makes sense.
So I guess the question is, why didn't the backup run?
As you can see, prox1 started its backup on Sunday at 12am; prox2 is nowhere to be seen. The backup job has Node: set to All.
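On prox2 itself, assuming a recent PVE where pvescheduler runs the backup jobs, I'd first check whether a vzdump task was even attempted around that time, e.g.:

journalctl -u pvescheduler | grep -i vzdump
grep vzdump /var/log/pve/tasks/index    # index of finished tasks, includes backup runs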