Search results

  1. cpu limit not working.

    Did that, however the load average of the host still went up to 303 at one point. Didn't think the host would be that affected by this. We run over 80 LXC servers on Proxmox 6; however, we are now using Proxmox 7 and I am starting to think it's something in this version.
  2. cpu limit not working.

    So I created a new LXC container and set cores to 2 and CPU limit to 2 (see the CPU limit sketch after these results). The server itself has 64 GB of memory and 24 cores (12-core processors x 2 sockets). However, when this server is heavily tested and load goes up, in top we see this on the node: top - 08:34:49 up 10:55, 3 users, load average...
  3. Upgrading to Proxmox 7

    Thanks, let me try it out and see how it goes.
  4. Upgrading to Proxmox 7

    We have cPanel CentOS 7 servers on Proxmox 6 using LXC. Are there any known issues we should be aware of? We need to upgrade around 80 LXC containers, as systemd is outdated on these and they are using CentOS 7. Does anyone have experience with this or know of any issues? Planning...
  5. VM doesn't start Proxmox 6 - timeout waiting on systemd

    Interesting. No more migrating from one server to the next and then back again. Thanks.
  6. [SOLVED] garbage collection and pruning time

    Hi guys. Can we set the time for garbage collection and pruning to start during business hours, say from 7am to 5pm, rather than having it run during the night at the same time as the backups? It seems to slow the backup server somewhat. UPDATE: Never mind, found it. Thanks (schedule sketch after these results).
  7. How to delete Ghost OSDs

    Thanks. I think the reason was that we had most servers licensed (enterprise repo), but our two new servers we hadn't licensed yet. We then licensed them a week ago but didn't reboot them. When we tried to replace OSDs it kept freezing; looking at the logs, at some point in the creating and...
  8. How to delete Ghost OSDs

    Yes, 25 OSDs. Yes, we replaced some SSDs 3 days ago. I see a list of numbers from 0 to 26.
  9. How to delete Ghost OSDs

    I noticed two ghost OSDs today. How does one delete them, or should we? (Removal sketch after these results.)
  10. Compress existing data on Ceph

    Weirdly enough, I just created a new pool and started moving things onto it after enabling compression, but somehow I feel it's not working properly. I even used "force" and "lz4". --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED ssd 38 TiB 15 TiB 23 TiB 23 TiB...
  11. Compress existing data on Ceph

    Not sure if it's working. I know I have to rewrite the data to the OSDs, so I assume this may work: moving the disk via NFS to a remote server, deleting it off the data_ceph pool once it becomes (unused), then moving it back to data_ceph? Will this process work in getting the data compressed on...
  12. unable to get conf option admin_socket for osd

    Trying to run the following: ceph daemon osd.6 perf. It fails with: Can't get admin socket path: unable to get conf option admin_socket for osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid types are: auth, mon, osd, mds, mgr, client\n". Not sure what is wrong. ceph.conf is as per below...
  13. Compress existing data on Ceph

    I'm trying stopping an OSD, OUTing it, destroying the data on the OSD, and then re-adding it. In theory that sounds like it may work, as new data replicated to the OSD should be compressed?
  14. Compress existing data on Ceph

    Hi. Is it possible to have Ceph compression work on existing pools? I think since I only enabled it now, compression is only working on new data. How do I compress existing data? I am using aggressive mode with lz4 (compression sketch after these results).
  15. Replication 3/2 vs 2/2 and read only

    Yes, that's why I said I will wait for the extra disks to arrive :) Thanks.
  16. Replication 3/2 vs 2/2 and read only

    That helps. I really don't want to use 2/1 as this is a production cluster, so I will rather wait for the disks to arrive. Thanks.
  17. Replication 3/2 vs 2/2 and read only

    Hi guys. We wanted to move to 2/2 for a bit while we wait for our new SSDs to arrive, as we have limited storage space in one cluster right now. However, when doing so and moving from 3/2 to 2/2, we notice that all our VMs pause or become "read only" when Ceph is rebalancing if a disk is taken out and a...
  18. Ceph and diskspace

    seconds and only did one OSD at a time. Did it numerous times, like a lot. Still not one issue so far. And we host VMs on it hosting 1000s of cPanel accounts using over 5.7 TB of storage. I think doing it for more than 1 OSD at a time may be super risky if the one holds a copy of the other's PG.
  19. Ceph and diskspace

    I have been doing the following and have had no issues as yet: stopped the OSD, then clicked OUT, then destroyed the data (CLI sketch after these results). I didn't even consider waiting for it to show Health OK. Did it multiple times with no issues. Seems Ceph can handle the order of things fine.
  20. Replication 3 to 2 - ceph - pgs degraded

    Yip, confirmed. Changed back to 3x replication now; no more VMs freezing due to high IO wait when taking out OSDs while rebalancing happens. Everything still runs smoothly at the cost of a small performance penalty.
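
Command sketches

For the "cpu limit not working" threads above, a minimal sketch of how cores and the CPU limit can be set on a container from the CLI, assuming a hypothetical container ID 101 (the same values are available under the container's Resources in the GUI):

    # Assumption: 101 is the test container from the thread.
    # "cores" limits how many host cores the container may use at once;
    # "cpulimit" caps total CPU time (2 = the equivalent of two full cores).
    pct set 101 --cores 2 --cpulimit 2

    # Check what ended up in the container config
    pct config 101 | grep -E 'cores|cpulimit'

Note that cpulimit only throttles CPU time: many processes inside the container competing for that limited CPU still show up as runnable tasks, so the host load average can climb even though actual CPU usage stays capped.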
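
For the "[SOLVED] garbage collection and pruning time" thread, a sketch of moving the Proxmox Backup Server schedules into business hours, assuming a hypothetical datastore named backup1; the schedule strings are systemd-style calendar events:

    # Assumption: the datastore is called backup1 - adjust to your setup.
    # Start garbage collection at 07:30 every day instead of overnight.
    proxmox-backup-manager datastore update backup1 --gc-schedule '07:30'

    # Depending on the PBS version, pruning is either a datastore option
    # (as below) or configured as a separate prune job in the GUI.
    proxmox-backup-manager datastore update backup1 --prune-schedule '12:00'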
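
For the "How to delete Ghost OSDs" thread, a sketch of how leftover OSD entries are usually removed once the backing disk is gone, using osd.26 purely as an illustrative ID:

    # See which OSDs exist and which are down/out
    ceph osd tree

    # Remove the stale entry: CRUSH map, auth key and the OSD id itself
    ceph osd crush remove osd.26
    ceph auth del osd.26
    ceph osd rm osd.26

    # On recent Ceph releases the three steps above can be replaced by:
    # ceph osd purge 26 --yes-i-really-mean-it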
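
For the "Compress existing data on Ceph" threads, a sketch of enabling lz4 compression on an existing pool; BlueStore only compresses data as it is written, which is why objects stored before compression was enabled stay uncompressed until they are rewritten (for example by moving a disk image off the pool and back, or by rebuilding OSDs one at a time). The pool name data_ceph is taken from the thread:

    # Enable lz4 compression on the pool ("force" compresses regardless of
    # client hints, "aggressive" honours incompressible hints)
    ceph osd pool set data_ceph compression_algorithm lz4
    ceph osd pool set data_ceph compression_mode aggressive

    # Newer releases show USED COMPR / UNDER COMPR columns here
    ceph df detail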
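
For the "unable to get conf option admin_socket for osd" thread, a sketch of two ways the perf counters are normally queried; the admin socket only exists on the node that actually runs osd.6, so the command has to be executed there:

    # On the host that runs osd.6 (note the "dump" subcommand)
    ceph daemon osd.6 perf dump

    # Or address the admin socket directly; the default path is shown and
    # may differ if the cluster uses a non-default name or location
    ceph daemon /var/run/ceph/ceph-osd.6.asok perf dump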
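
For the "Replication 3/2 vs 2/2 and read only" threads, size/min_size is a per-pool setting; a sketch of changing it and back, again assuming the pool is named data_ceph. With size=2 and min_size=2, a PG stops serving I/O as soon as one of its two copies is unavailable, which matches the pausing/read-only behaviour described above:

    # Temporarily keep only 2 copies, still requiring 2 for I/O
    ceph osd pool set data_ceph size 2
    ceph osd pool set data_ceph min_size 2

    # Back to the default 3/2 once the new SSDs are in place
    ceph osd pool set data_ceph size 3
    ceph osd pool set data_ceph min_size 2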
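
For the "Ceph and diskspace" posts, a sketch of the stop/out/destroy sequence described above as it looks on a Proxmox node's CLI, using osd.12 and /dev/sdX purely as placeholders; replacing one OSD at a time and letting recovery finish in between is the cautious variant:

    # Take the OSD out and stop its daemon
    ceph osd out osd.12
    systemctl stop ceph-osd@12

    # Destroy it and clean up the disk
    pveceph osd destroy 12 --cleanup

    # Create the new OSD on the replacement disk
    pveceph osd create /dev/sdX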
