Did that, however the load average of the host still went up to 303 at one point.
Didn't think the host would be that affected by this. We run over 80 LXC servers on Proxmox 6, however we are now using Proxmox 7 and I am starting to think it's something in this version.
So I created a new LXC container and set cores to 2 and the CPU limit to 2.
The server itself has 64 GB of memory and 24 cores (12-core processors x 2 sockets).
However, when this server is heavily tested and the load goes up, we see this in top on the node:
top - 08:34:49 up 10:55, 3 users, load average...
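For reference, this is roughly how the container was limited; a minimal sketch assuming the container ID is 101 (the CTID is just a placeholder):

# cap the container at 2 cores and a CPU-time limit of 2 cores
pct set 101 --cores 2 --cpulimit 2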
We have cPanel CentOS 7 servers on Proxmox 6 using LXC.
Are there any known issues we should be aware of? We need to upgrade around 80 LXC containers, as systemd is outdated on these and they are using CentOS 7.
Anyone have experience with this, or aware of any issues?
Planning...
Hi guys
Can we set the time for garbage collection and pruning to start during business hours, say from 7am to 5pm, rather than it running during the night at the same time as the backups run?
It seems to slow the backup server somewhat.
UPDATE: Never mind. Found it.
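For anyone else looking, I believe the CLI side of it looks something like the below (the datastore name backup1 is just a placeholder; the schedule is a systemd-style calendar event):

# run garbage collection daily at 12:00 instead of during the nightly backup window
proxmox-backup-manager datastore update backup1 --gc-schedule '12:00'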
Thanks
thanks
I think the reason was that we had most servers licensed (enterprise repo), but our two new servers we hadn't licensed yet.
We then licensed them a week ago but didn't reboot them. When we tried to replace OSDs it kept freezing, looking at the logs, at some point in the creating and...
Weirdly enough, I just created a new pool and started moving things onto it after enabling compression, but somehow I feel it's not working properly. I even used "force" and "lz4".
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 38 TiB 15 TiB 23 TiB 23 TiB...
Not sure if it's working.
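One way I know of to check whether compression is actually being applied, assuming a Ceph version that reports per-pool compression counters (pool name is a placeholder):

# the per-pool USED COMPR / UNDER COMPR columns show how much data got compressed
ceph df detail
# confirm the settings actually stuck on the pool
ceph osd pool get <pool> compression_mode
ceph osd pool get <pool> compression_algorithm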
I know I have to rewrite the data to the OSDs, so I assume this may work?
Moving the disk via NFS to a remote server, then deleting it off the data_ceph pool when it becomes (unused), then moving it back to data_ceph?
Will this process work in getting the data compressed on...
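A sketch of what I have in mind, assuming VMID 100, disk scsi0 and an NFS storage called nfs-tmp (all placeholders, double-check before running):

# move the disk off Ceph onto NFS, deleting the source copy once the move succeeds
qm move_disk 100 scsi0 nfs-tmp --delete 1
# then move it back so the data gets rewritten (and hopefully compressed) on data_ceph
qm move_disk 100 scsi0 data_ceph --delete 1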
Trying to run the following:
ceph daemon osd.6 perf
Can't get admin socket path: unable to get conf option admin_socket for osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid types are: auth, mon, osd, mds, mgr, client\n"
Not sure what is wrong.
ceph.conf is as per below...
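For reference, the variants I plan to try next (assuming osd.6 actually runs on the node I'm on; not verified):

# run the perf dump on the node that hosts osd.6
ceph daemon osd.6 perf dump
# or talk to the admin socket directly (the socket path may differ per setup)
ceph --admin-daemon /var/run/ceph/ceph-osd.6.asok perf dump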
I'm trying stopping an OSD, OUTing it, then destroying the data on the OSD and re-adding it. In theory that sounds like it may work, as new data replicated to the OSD should be compressed?
Hi
Is it possible to have Ceph compression work on existing pools? I think since I only enabled it now, compression is only working with new data. How do I compress existing data? I am using aggressive mode with lz4.
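For context, this is roughly what I ran to enable it (pool name is a placeholder); as far as I understand, only data written after this gets compressed, and existing objects stay as-is until rewritten:

# enable lz4 compression in aggressive mode on an existing pool
ceph osd pool set <pool> compression_algorithm lz4
ceph osd pool set <pool> compression_mode aggressive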
Hi guys
We wanted to move to 2/2 for a bit while we wait for our new SSDs to arrive, as we have limited storage space in one cluster now. However, when doing so and moving from 3/2 to 2/2, we noticed that all our VMs pause or become "read only" when Ceph is rebalancing, if a disk is taken out and a...
seconds and only did one OSD at a time. Did it numerous times, like a lot. Still not one issue so far. And we host VMs on it hosting thousands of cPanel accounts using over 5.7 TB of storage.
I think doing it for more than one OSD at a time may be super risky if one holds a copy of the other's PGs.
I have been doing the following and have had no issues as yet:
Stopped the OSD
Then clicked OUT
Then destroyed the data. I didn't even consider waiting for it to show Health OK.
Did it multiple times with no issues. Seems Ceph can handle the order of things fine. Roughly the CLI equivalent is sketched below.
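As far as I understand it, the GUI steps map to something like this (OSD ID 6 is just an example; double-check before running):

# stop the OSD daemon on its node
systemctl stop ceph-osd@6
# mark it out so Ceph rebalances its data away
ceph osd out 6
# destroy the OSD via the Proxmox wrapper; --cleanup also wipes the partitions
pveceph osd destroy 6 --cleanup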
Yip, confirmed. Changed back to 3x replication now, no more VMs freezing due to high IO wait when taking out OSDs and rebalancing happening. Everything still runs smoothly at the cost of a small performance penalty.
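For completeness, the pool-level change I'm referring to (pool name is a placeholder):

# back to 3 replicas, keeping a minimum of 2 for I/O to continue
ceph osd pool set <pool> size 3
ceph osd pool set <pool> min_size 2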