I'm not ready to shut down the whole Ceph cluster, but I realised I have debug logging enabled, which is the default with a Proxmox + Ceph install.
I would like to turn off debugging, and I'd really rather not reboot a live cluster.
Is it easier to just do the following (quoting from another Google result)...
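Debug levels can normally be lowered at runtime without any reboot. A minimal sketch (daemon globs and levels are examples; adjust to whichever subsystems are noisy on your cluster):

```shell
# Inject lower debug levels into all running daemons at runtime (no restart):
ceph tell osd.* injectargs '--debug_ms 0/0 --debug_osd 0/0'
ceph tell mon.* injectargs '--debug_ms 0/0 --debug_mon 0/0'

# On recent releases, the centralized config store makes the change
# persist across future daemon restarts as well:
ceph config set global debug_ms 0/0
```

`injectargs` changes only the running processes, while `ceph config set` records the setting cluster-wide, so doing both covers the live cluster and any later restarts.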
We have Micron 5210 drives in Ceph.
I read this today:
It states we must disable the write cache?
Should I do this on all our drives?
Can we do it on a live Ceph cluster...
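For SATA drives, the volatile write cache can usually be toggled on a live system with `hdparm`; a sketch (`/dev/sdX` is a placeholder for your device):

```shell
# Query the current write-cache state (read-only, safe on a live disk)
hdparm -W /dev/sdX

# Disable the volatile write cache; takes effect immediately
hdparm -W 0 /dev/sdX
```

Note the setting may not survive a power cycle on all drives, so it is commonly reapplied at boot (e.g. via a udev rule).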
I have never seen this before. Usually a disk fails completely, but this is new. Please advise whether this disk has failed or not. I have another 11 of these disks and they don't give these results; only this particular one does.
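To judge whether a single suspect disk is failing, the usual first step is to compare its SMART data against the healthy ones. A sketch, with `/dev/sdX` as a placeholder:

```shell
# Full SMART report; compare Reallocated_Sector_Ct, Current_Pending_Sector
# and Offline_Uncorrectable against the other 11 drives
smartctl -a /dev/sdX

# Kick off a long self-test, then read the result log once it finishes
smartctl -t long /dev/sdX
smartctl -l selftest /dev/sdX
```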
We typically only run KVM VMs in Proxmox and currently use krbd. I was informed by a colleague that librbd is better for QEMU/KVM workloads. We mainly have VMs hosting websites and SQL.
He stated there have been major improvements to librbd recently that make it better? Something about it being rewritten...
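In Proxmox the choice between the two is a per-storage flag, so it is easy to test both. A sketch, assuming an existing RBD storage with the ID `ceph-rbd` (substitute your own storage ID):

```shell
# krbd=0: QEMU uses librbd and talks to the cluster in userspace
pvesm set ceph-rbd --krbd 0

# krbd=1: disks are mapped via the kernel RBD driver (/dev/rbdX)
pvesm set ceph-rbd --krbd 1
```

Running VMs keep their current attachment; the flag applies when a disk is next (re)attached, so a benchmark of both modes on the same workload is the most reliable way to settle the question.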
I did that, however the load average of the host still went up to 303 at one point.
I didn't think the host would be that affected by this. We ran over 80 LXC servers on Proxmox 6, but we are now using Proxmox 7 and I am starting to think it's something in this version.
So I created a new LXC container and set cores to 2 and the CPU limit to 2.
The server itself has 64 GB of memory and 24 cores (2 sockets x 12-core processors).
However, when this server is heavily tested and load goes up, we see this in top on the node:
top - 08:34:49 up 10:55, 3 users, load average...
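The two settings can also be applied to an existing container from the CLI; a sketch (container ID 101 is a placeholder):

```shell
# cores  = number of CPUs visible inside the container
# cpulimit = cap in whole-CPU units of actual CPU time
pct set 101 --cores 2 --cpulimit 2
```

One thing to keep in mind when reading the numbers: the host's load average counts runnable tasks across all containers, so many throttled containers queueing for CPU can still push the node's load figure very high even though each one is capped.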
We have cPanel CentOS 7 servers on Proxmox 6 using LXC.
Are there any known issues we should be aware of? We need to upgrade around 80 LXC containers, as systemd is outdated on them and they are running CentOS 7.
Does anyone have experience with this, or know of any issues we should watch out for?
Can we set garbage collection and pruning to run during business hours, say from 7am to 5pm, rather than at night at the same time as the backups?
It seems to slow the backup server somewhat.
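Proxmox Backup Server schedules these jobs with systemd-style calendar events, which can be changed from the CLI. A sketch, assuming a datastore named `store1` and a prune job named `daily-prune` (both placeholders):

```shell
# Run garbage collection daily at 07:00 instead of the default nightly slot
proxmox-backup-manager datastore update store1 --gc-schedule '07:00'

# Move an existing prune job to the same daytime window
proxmox-backup-manager prune-job update daily-prune --schedule '07:00'
```

A calendar event like `'07:00'` means "every day at 07:00"; more elaborate expressions (e.g. `'mon..fri 07:00'`) are also accepted, so the jobs can be kept strictly inside business hours.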
UPDATE: Never mind, found it.
I think the reason was that most of our servers were licensed (enterprise repo), but our two new servers weren't licensed yet.
We then licensed them a week ago but didn't reboot them. When we tried to replace OSDs it kept freezing; looking at the logs, at some point in the creating and...