Search results

  1. Replication 3 to 2 - ceph - pgs degraded

    I think I may have found something. I had an issue with disk space and hence changed from replication x3 to x2, knowing the possible risks. It was meant to be temporary while I added more OSDs. But now when I add more OSDs on new servers I am noticing very high I/O wait and the servers "freeze". I...
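For reference, the replication change described above maps to pool-level commands roughly like the following sketch; "vm-pool" is a placeholder pool name, not one named in the post:

```shell
# Drop the replication factor from 3 to 2 (risky, as the post notes):
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1

# Restore x3 replication once the new OSDs are in place:
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2
```

These need a healthy cluster and the right pool name; adjusting `min_size` alongside `size` avoids I/O blocking when a copy is lost.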
  2. Ceph and diskspace

    I am reading in some posts that usable disk space is measured based on the smallest OSD disk in the cluster? So for example, if we have the below on each of 6 nodes: 2 x 2TB, 1 x 500GB. Are we saying disk space is lost due to the 500GB? Should we rather just remove the 500GB? We just had...
  3. ceph - stop and out a disk

    OK, I see: using "out" on its own I think is not a good idea, only together with "stop". We just did stop and out, then destroyed the data on it using the "More" dropdown, and it cleaned the disks. We re-added the disks and now all is well in the world once more :)
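The GUI steps described above correspond roughly to this CLI sequence (a sketch; OSD id 5 and device /dev/sdd are placeholders, not values from the post):

```shell
ceph osd out 5                     # mark the OSD out so data rebalances away
systemctl stop ceph-osd@5          # "Stop" in the GUI
pveceph osd destroy 5 --cleanup 1  # "Destroy" in the More dropdown; wipes the disk
pveceph osd create /dev/sdd        # re-add the cleaned disk as a fresh OSD
```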
  4. ceph - stop and out a disk

    Hey guys, I have a question I forgot to test during our testing phase. If we stop and out a disk but then realise we did the wrong disk, can we just bring it back in again without "destroying the data" on it first? Any risk in doing so? Will Ceph just use the data on the OSD and just...
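For reference, reversing a stop-and-out without touching the data would look something like this sketch; OSD id 5 is a placeholder:

```shell
ceph osd in 5                  # mark the OSD back in
systemctl start ceph-osd@5     # start the daemon again
ceph -s                        # watch recovery settle back to HEALTH_OK
```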
  5. KRBD on existing ceph pool

    Hi guys, we have a stable, well-working Ceph cluster with one Ceph pool where all data is on. We have a few VMs currently running on that pool. I noticed there is an option called KRBD, and some forum posts state that performance can be increased by enabling KRBD on the...
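If it helps, enabling KRBD on an existing Proxmox RBD storage is a one-liner; "ceph-vm" is a placeholder storage ID, not one named in the post:

```shell
pvesm set ceph-vm --krbd 1
# Running VMs typically need a full stop/start (not just a guest reboot)
# to have their disks remapped through the kernel RBD client.
```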
  6. Installing on Dell R710 with 8 500G Drives

    Hi, some SAS disks that you buy on eBay or Amazon sometimes come with NetApp firmware on them. We had numerous issues with these. Just ensure that is not the case. This link, which I saved a few years back, can help you fix it; it just requires a low-level format due to the sector size...
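The low-level format alluded to is typically done with sg_format from the sg3_utils package; a sketch, with /dev/sdb as a placeholder (this destroys all data on the disk):

```shell
# Reformat a 520-byte-sector NetApp disk to the 512-byte sectors Linux expects:
sg_format --format --size=512 /dev/sdb
```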
  7. Ceph and Monitors and Managers

    Hi guys, for now we have been creating a monitor on every server we set up; not sure why, we just have :) Now let's say we have an 11-node cluster with a monitor on each: do you think that's overkill? Also, on a 4-node Ceph cluster which we also have set up, do you think 2 monitors would suffice, as I...
  8. How can I add an SSD as journal/DB to a Ceph OSD on a running cluster node?

    https://forum.proxmox.com/threads/proxmox-ceph-what-happens-when-you-loose-a-journal-disk.24148/ As per the above link, I assume all data on the OSD will go "poof" as well. But as it's Ceph and you have other nodes that contain replicas of the data, you just add the replacement SSDs in place and wipe...
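One way this plays out on a running node is to destroy and recreate each affected OSD with the SSD as its DB device, letting Ceph backfill from the replicas as described above. A sketch assuming Proxmox's pveceph tooling; the OSD id and device paths are placeholders:

```shell
pveceph osd destroy 3 --cleanup 1                # remove the old OSD, wipe the disk
pveceph osd create /dev/sdd --db_dev /dev/nvme0n1  # recreate with DB on the SSD
ceph -s                                          # backfill repopulates it from replicas
```

Doing this one OSD at a time keeps redundancy intact while each one backfills.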
  9. [SOLVED] Trim/discard With CEPH Rbd

    Hi, I've been searching as well, as we are playing with Ceph on some test servers at the moment, but found the same issue. An external source on rook-ceph says some options need to be enabled: https://github.com/rook/rook/issues/6964 bdev_async_discard and bdev_enable_discard on the OSDs...
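The two options named in that issue can be set cluster-wide via the monitor config database; a sketch (option names as given in the linked issue; newer Ceph releases may rename them):

```shell
ceph config set osd bdev_enable_discard true
ceph config set osd bdev_async_discard true
ceph config get osd bdev_enable_discard   # verify the setting took
```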
  10. [SOLVED] Console Timeout

    Thanks guys, I struggled for a day with this problem and then came across this post. It was ESET; I had to turn off all SSL/TLS filtering and now it works fine.
  11. Ceph SAS OSD's to SSD Question

    I wanted to ask this question: is this not related to the amount of data being written and read, and only if it is maxed out? Looking at iotop, each VM we will be hosting does around 25 to 50MB/s during busy periods of the day, and we have around 15 VMs. So surely you are meaning it will only become...
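A back-of-envelope check of the aggregate those numbers imply:

```shell
# 15 VMs at 25-50 MB/s each during busy periods:
echo $((15 * 25)) $((15 * 50))   # prints: 375 750  (MB/s aggregate range)
# A single 10Gb link moves roughly 1250 MB/s, so the range fits on paper,
# though Ceph replication multiplies write traffic on the cluster network.
```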
  12. Failed deactivating swap /dev/pve/swap

    Yes, it's still happening, it seems. I just do swapoff before I reboot now, but that sometimes takes ages.
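The workaround mentioned above, as a sketch:

```shell
# Empty swap before rebooting so systemd does not hang deactivating /dev/pve/swap:
swapoff -a        # can take a long time if swap is heavily used
swapon --show     # confirm nothing is listed before rebooting
```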
  13. Ceph SAS OSD's to SSD Question

    Isn't that dependent on the amount of data? The server has 4 x 1Gb Ethernet ports. I could bond the dual 10Gb ports on the network cards, as they come with 2 x 10Gb ports on each card.
  14. Ceph SAS OSD's to SSD Question

    ooooo - this is sooo easy then :) I can't wait to test this out. Thanks for the response.
  15. Ceph SAS OSD's to SSD Question

    If we have a currently running Ceph cluster with the following: 7 nodes, each a Dell R610 server with 64 GB memory, 1 x 480GB PM863a SSD for the Proxmox OS, 5 x 600GB enterprise 10K SAS disks for OSDs, a 10Gb Ethernet network, and a Dell H200 card. Let's say these nodes are doing OK running only...
  16. i/o disk limit

    I've just noticed the same. Any ideas yet?
  17. lvm issue

    I found that even after a reboot it would still happen. I then thought maybe it's the monitoring, as nothing else queries the LVM partitions except monitoring checking disk space. I then disabled SNMP, rebooted again, and left it running since last night. It's been over 8 hours with no grey-out of VMs on...
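If the poller in question is the standard Debian snmpd service (an assumption; the post does not name it), the disable step could look like:

```shell
systemctl disable --now snmpd      # stop it and prevent it starting at boot
systemctl status snmpd --no-pager  # confirm it is inactive
```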
  18. Failed deactivating swap /dev/pve/swap

    Busy rebooting one server now and noticed this same issue on a Dell R710 server.
  19. lvm issue

    I notice that our 2 nodes in our cluster are greyed out now after the update and restart of the server. I see this in the logs:
    Aug 4 18:10:57 pve-2 systemd[1]: pvestatd.service: Found left-over process 21897 (vgs) in control group while starting unit. Ignoring.
    Aug 4 18:10:57 pve-2 systemd[1]: This...
  20. lvm errors in 5.4.44-1+

    I do see this running dmesg though:
    [Mon Jul 6 02:42:32 2020] vgs             D    0 15259   1668 0x00000000
    [Mon Jul 6 02:42:32 2020] Call Trace:
    [Mon Jul 6 02:42:32 2020]  __schedule+0x2e6/0x6f0
    [Mon Jul 6 02:42:32 2020]  schedule+0x33/0xa0
    [Mon Jul 6 02:42:32 2020]...
