Search results

  1. Ceph: Moving data from one OSD to others

    We always use 3/2 replication; however, we noticed that space was running out, changed to 2/2 to keep things going, and did not act fast enough. In South Africa there is a shortage of enterprise SSDs, so we are waiting on new stock we ordered via Amazon. But we have now found that two disks are failing... (see the sketch after this list)
  2. Block IPs from RBLs via the Proxmox firewall

    Hi guys, I would like an easy way to block the IPs from the following lists in the PVE firewall, as per https://forum.proxmox.com/threads/automated-proxmox-firewall-management.22813/: OpenBL Base, Spamhaus DROP and EDROP, Blocklist.de, STRONGIPS, ISC DSHIELD, Emerging Threats, CINS. Are there any scripts... (see the sketch after this list)
  3. Suricata install and check

    So I am thinking of installing Suricata, but I just want to check whether this is correct: apt-get install suricata; modprobe nfnetlink_queue; nano /etc/pve/firewall/132.fw and add the following to that file: [OPTIONS] ips: 1 ips_queues: 0. Now go to Proxmox and make sure the firewall is enabled on the Datacenter, which it... (see the sketch after this list)
  4. Ceph CRUSH rule for HDD

    Hi, so we have been using the CRUSH rule "replicated_rule" for SSDs only. I now want to add HDDs to each server so we have some "slow" storage with plenty of disk space. Are the steps as below? 1. Run the following on any node: ceph osd crush rule create-replicated replicated_hdd default... (see the sketch after this list)
  5. Ceph debugging off during runtime

    Hi guys, I am not ready to shut down the whole Ceph cluster, but I realised I have debugging on, which is the default with the Proxmox and Ceph install. I would like to turn debugging off and would really rather not reboot a live cluster. Is it easiest to just do the following, quoted from another Google result... (see the sketch after this list)
  6. Disabling write cache on SSDs with Ceph

    Hi guys, we have Micron 5210 drives in Ceph. I read this today: https://yourcmc.ru/wiki/index.php?title=Ceph_performance&mobileaction=toggle_view_desktop#Drive_cache_is_slowing_you_down and it states we must disable the write cache? Should I do this on all our drives, and can we do it on a live Ceph... (see the sketch after this list)
  7. Disk failure or not?

    Hi, I have never seen this before. Usually a disk fails completely, but this is new. Please advise whether this disk has failed or not. I have another 11 of these disks and they don't give these results, only this particular one.
  8. Ceph librbd vs krbd

    We typically only have KVM VMs in Proxmox and currently use krbd. I was informed by a colleague that librbd is better for QEMU/KVM workloads. We mainly have VMs hosting websites and SQL. He stated there have been major improvements to librbd recently that make it better, something about it being rewritten...
  9. Quotas issue with cgroup2

    Is there no support for quotas on LXC with cgroup2? If that is the case, should we stick to cgroup1 (legacy)? Thanks. (see the sketch after this list)
  10. CPU limit not working

    So I created a new LXC container and set cores to 2 and the CPU limit to 2. The server itself has 64 GB of memory and 24 cores (12-core processors x 2 sockets). However, when this server is under heavy load testing, in top we see this on the node: top - 08:34:49 up 10:55, 3 users, load average...
  11. Upgrading to Proxmox 7

    We have cPanel CentOS 7 servers on Proxmox 6 using LXC. We need to upgrade around 80 LXC containers, as systemd is outdated on these CentOS 7 systems. Does anyone have experience with this, or know of any issues we should be aware of? Planning...
  12. [SOLVED] Garbage collection and pruning time

    Hi guys, can we set garbage collection and pruning to start during business hours, say from 7am to 5pm, rather than running at night at the same time as the backups? It seems to slow the backup server somewhat. UPDATE: Never mind, found it. Thanks.
  13. How to delete ghost OSDs

    I noticed two ghost OSDs today. How does one delete them, or should we? (see the sketch after this list)
  14. Unable to get conf option admin_socket for osd

    Trying to run the following: ceph daemon osd.6 perf. It returns: Can't get admin socket path: unable to get conf option admin_socket for osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid types are: auth, mon, osd, mds, mgr, client\n". Not sure what is wrong. ceph.conf is as per below... (see the sketch after this list)
  15. Compress existing data on Ceph

    Hi, is it possible to have Ceph compression work on existing pools? I think that since I only enabled it now, compression only applies to new data. How do I compress the existing data? I am using aggressive mode with lz4. (see the sketch after this list)
  16. Replication 3/2 vs 2/2 and read-only

    Hi guys, we wanted to move to 2/2 for a while as we wait for our new SSDs to arrive, since we currently have limited storage space in one cluster. However, after moving from 3/2 to 2/2 we notice that all our VMs pause or become "read only" while Ceph is rebalancing if a disk is taken out and a...
  17. Replication 3 to 2 - ceph - pgs degraded

    I think I may have found something. I had an issue with disk space and therefore changed from replication x3 to x2, knowing the possible risks; it was meant to be temporary while I add more OSDs. But now, when I add more OSDs to new servers, I am noticing very high IO wait and the servers "freeze". I...
  18. Ceph and disk space

    I have read in some posts that usable capacity is measured based on the smallest OSD disk in the cluster? So for example, if we have the following on each of 6 nodes: 2 x 2TB and 1 x 500GB. Are we saying disk space is lost due to the 500GB? Should we rather just remove the 500GB? We just had...
  19. Ceph - stop and out a disk

    Hey guys, I have a question I forgot to test during our testing phase. If we stop and out a disk but then realise it was the wrong disk, can we just bring it back in again without "destroying the data" on it first? Is there any risk in doing so? Will Ceph just use the data on the OSD and just... (see the sketch after this list)
  20. krbd on an existing Ceph pool

    Hi guys, we have a stable, well-working Ceph cluster with one Ceph pool that all the data is on, and a few VMs currently running on that pool. I noticed there is an option called krbd, and some forum posts state that performance can be increased by enabling krbd on the... (see the sketch after this list)
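
For item 1 (moving data off failing OSDs), a minimal sketch assuming the failing disks are osd.4 and osd.7 (placeholder IDs); the usual approach is to mark them out and let Ceph rebalance before touching the hardware:

    ceph osd out osd.4            # Ceph starts backfilling its PGs elsewhere
    ceph osd out osd.7
    ceph -s                       # watch recovery until HEALTH_OK
    ceph osd df tree              # confirm the two OSDs have drained
    systemctl stop ceph-osd@4     # on the hosting node, once drained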
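
For item 2 (blocking RBL lists via the PVE firewall), a rough sketch using pvesh against the cluster firewall IPSet API; the IPSet name "blocklist" and the LIST_URL variable are placeholders, and the grep pattern assumes a plain one-CIDR-per-line feed:

    pvesh create /cluster/firewall/ipset --name blocklist --comment "imported RBL"
    curl -s "$LIST_URL" | grep -oE '^[0-9.]+/[0-9]+' | while read -r cidr; do
        pvesh create /cluster/firewall/ipset/blocklist --cidr "$cidr"
    done
    # then reference the set as "+blocklist" in a DROP rule at datacenter level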
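
For item 3 (Suricata/IPS check), the steps quoted in the post collected into one block; 132 is the poster's example VMID, and the [OPTIONS] lines should be merged into any existing [OPTIONS] section rather than duplicated:

    apt-get install suricata
    modprobe nfnetlink_queue
    nano /etc/pve/firewall/132.fw    # add (or merge into) the following section:
    #   [OPTIONS]
    #   ips: 1
    #   ips_queues: 0
    # then enable the firewall at Datacenter, node and VM level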
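
For item 4 (a CRUSH rule for HDDs), a minimal sketch assuming the default "host" failure domain and the device classes reported by the OSDs; "hdd_pool" is a placeholder pool name:

    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd pool create hdd_pool 128 128 replicated replicated_hdd
    # or point an existing pool at the new rule instead:
    ceph osd pool set hdd_pool crush_rule replicated_hdd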
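
For item 5 (turning Ceph debug logging off at runtime), a sketch using injectargs, which changes the running daemons without a restart; the values revert on restart unless they are also persisted in ceph.conf or the config database:

    ceph tell osd.* injectargs '--debug_osd 0/0 --debug_ms 0/0'
    ceph tell mon.* injectargs '--debug_mon 0/0 --debug_ms 0/0 --debug_paxos 0/0'
    # on newer releases the central config store works as well, e.g.:
    ceph config set global debug_ms 0/0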
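
For item 6 (disabling the volatile write cache), a sketch for SATA drives such as the Micron 5210 using hdparm; /dev/sdX is a placeholder, the change is live but not persistent across reboots, and it is worth testing on one OSD first:

    hdparm -W 0 /dev/sdX    # disable the volatile write cache
    hdparm -W /dev/sdX      # verify the current setting
    # a udev rule or similar is one way to reapply this at every boot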
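
For item 9 (quotas and cgroup2), one commonly suggested workaround is booting the node back into the legacy cgroup (v1) hierarchy via the kernel command line; a sketch of the GRUB change, assuming that is the route chosen:

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
    # then regenerate the config and reboot:
    update-grub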
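
For item 13 (deleting ghost OSDs), the usual removal sequence for an OSD entry with no backing daemon; osd.12 is a placeholder ID, and "ceph osd tree" should confirm the ID first:

    ceph osd out osd.12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm osd.12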
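
For item 14 (the admin_socket error), a likely cause is running the command on a node that does not host osd.6, since "ceph daemon" only talks to a local admin socket; a sketch of the two usual forms, run on the node hosting osd.6:

    ceph daemon osd.6 perf dump
    # or address the admin socket directly:
    ceph --admin-daemon /var/run/ceph/ceph-osd.6.asok perf dump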
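
For item 15 (compressing existing data), compression settings only affect data written after they are set; a sketch for an existing pool, where "mypool" is a placeholder name:

    ceph osd pool set mypool compression_algorithm lz4
    ceph osd pool set mypool compression_mode aggressive
    # existing objects stay uncompressed until rewritten, e.g. by moving each
    # VM disk to another storage and back so the data is written again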
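
For item 19 (bringing a stopped/out OSD back), an OSD that was only stopped and marked out can normally rejoin with its data intact; osd.3 is a placeholder ID:

    systemctl start ceph-osd@3    # on the node hosting the OSD
    ceph osd in osd.3
    ceph -s                       # backfill/recovery should converge again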
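
For item 20 (krbd on an existing pool), the krbd flag is set on the Proxmox storage definition rather than on the Ceph pool itself; "cephstor" is a placeholder storage ID, and running guests only pick the change up once their disks are deactivated and reactivated (stop/start or migration):

    pvesm set cephstor --krbd 1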
