Search results

  1. Cannot stop OSD in WebUI

    I added ld5507-hdd_strgbox to /etc/hosts, but this does not solve the issue. The error message is different, though.
  2. [SOLVED] Ceph health warning: backfillfull

    Hello! The issue with the failed disks is resolved, meaning all disks are back in Ceph. However, I would like to take this opportunity to ask for clarification about the available and allocated disk space (see the capacity-check sketch after this results list). In my setup I have 4 OSD nodes. Each node has 2 (storage) enclosures. Each enclosure has 24 HDDs...
  3. Cannot stop OSD in WebUI

    Hm... if the available button(s) do not work, I would call this a bug. Using the CLI helps to complete the task at hand, but it's not a solution for the issue with the WebUI. If you need more info or data to solve this issue, please advise and I will try to provide it. THX
  4. Cannot stop OSD in WebUI

    Hi, I want to stop several OSDs in order to start node maintenance (see the maintenance sketch after this results list), but I get an error message indicating a communication error. Please check the attached screenshot for details. Please note that I have defined different storage types (NVME, HDD, etc.) in the crush map in order to use different...
  5. [SOLVED] Ceph health warning: backfillfull

    Hi Alwin, the faulty host is ld5507-hdd_strgbox, meaning there are two enclosures connected, each with 24 HDDs of 2TB. The enclosures show 15 and 16 failing HDDs respectively. The investigation into why 31 disks failed at the same time is ongoing. The error reported by every single disk is this...
  6. [SOLVED] Ceph health warning: backfillfull

    Hello, in my cluster consisting of 4 OSD nodes there is an HDD failure, which currently affects 31 disks. Each node has 48 HDDs of 2TB connected. This results in this crushmap: root hdd_strgbox { id -17 # do not change unnecessarily id -19 class hdd # do not change...
  7. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    In Ceph's documentation the hardware requirements for ceph-osd are: Processor: 1x 64-bit AMD-64, 1x 32-bit ARM dual-core or better, or 1x i386 dual-core; RAM: ~1GB for 1TB of storage per daemon; Volume Storage: 1x storage drive per daemon; Journal: 1x SSD partition per...
  8. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    I have ~60 OSDs per node. My understanding is that restarting the service would trigger PG re-placement. Not really an option, in my opinion.
  9. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    I didn't try to restart any Ceph services. Which services do you suggest restarting?
  10. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    Hi, I have updated the title because the issue is getting worse. And this is not a single occurrence; multiple nodes are affected. Therefore I'm posting the current statistics regarding memory allocation: root@ld5505:# free -h total used free shared...
  11. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    All right. I agree that the terminology is not the same across the different tools, so let's stick with what the free command displays. I'll take node ld5505 as the currently most painful example, because this node runs with ~80% RAM allocation without any CT / VM running: root@ld5505:~# free -m...
  12. Reverse Proxy required for accessing web services running in CT / VM

    Now I get an HTTP 501 error when opening this URL: https://ld3955/ipam. I'm hesitant, but I think I will go for a dedicated reverse proxy service provided by HAProxy and leave the Nginx configuration untouched.
  13. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    This issue is getting serious now because PVE reports 88% RAM allocation without any VM / CT running! And the node in question has 250GB of RAM. As this is not a single-node issue, I would conclude that something is wrong with PVE.
  14. Reverse Proxy required for accessing web services running in CT / VM

    Hi, I have configured Nginx to access the Proxmox WebUI via port 443 based on this documentation. This works like a charm. However, I need an additional solution to access web services running on specific CTs / VMs. In my case the web service is NIPAP. This web service only has a connection to...
  15. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    Thanks for this excursion into the theory of memory allocation on Linux systems. But I have the impression that you are simply ignoring the facts documented by the monitoring tools glances and netdata. One could argue that a single monitor is inaccurate. But two different tools reporting the same metrics that...
  16. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    There's no doubt that other services / processes running on that node allocate memory. But here we are talking about an additional memory allocation of 100GB. And the reported used memory differs when using other monitoring tools, e.g. glances or netdata. As a matter of fact the available...
  17. Proxmox reports: 88% memory allocation, but no VM / CT runs - Is this a memory leak caused by Ceph?

    Hi, in the Proxmox WebUI the monitor reports 88% memory allocation (see screenshot). This is confirmed by the free command (see the memory-check sketch after this results list): root@ld5505:~# free -h total used free shared buff/cache available Mem: 251G 221G 1,2G 125M 29G...
  18. [SOLVED] Ceph cluster network vs. Ceph public network: which data is transferred over which network?

    Well, in this particular scenario I tried to outwit Ceph. This means the client communicates with the Ceph cluster over the cluster network (see the network note after this results list): the client's NFS share is mounted via the cluster-network NIC. However, this does not have the expected effect if I use host ld3955, which is neither a MON nor...
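Several of the results above concern stopping OSDs for node maintenance (items 3 and 4) and the worry that restarting OSD services triggers PG re-placement (item 8). Below is a minimal CLI sketch of the usual maintenance workflow, assuming shell access to a node with the Ceph admin keyring; the OSD ID 12 is only a placeholder.

    # Prevent Ceph from marking stopped OSDs "out", so no backfill/rebalance starts
    ceph osd set noout

    # Stop the OSD daemon(s) on the node under maintenance (placeholder ID)
    systemctl stop ceph-osd@12.service

    # ...perform the maintenance, then start the daemon(s) again...
    systemctl start ceph-osd@12.service

    # Restore normal behaviour once all OSDs are back up and in
    ceph osd unset noout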
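For the backfillfull threads (items 2, 5 and 6), the question of available versus allocated disk space is usually answered with the commands sketched below; this is a generic sketch, not output taken from the thread itself.

    # Cluster-wide raw usage and per-pool usage
    ceph df

    # Per-OSD utilisation, weight and variance, laid out along the CRUSH tree
    ceph osd df tree

    # The ratios that trigger the nearfull / backfillfull / full warnings
    ceph osd dump | grep -i ratio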
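For the memory-allocation thread (items 7-11, 13 and 15-17), a quick way to check how much of the memory that free reports as "used" is actually held by the ceph-osd daemons is sketched below. The last line assumes a Ceph release with the centralised config database and the osd_memory_target option; the 4 GiB value is only an example.

    # Sum the resident memory of all ceph-osd processes on this node (in GiB)
    ps -C ceph-osd -o rss= | awk '{sum += $1} END {printf "%.1f GiB\n", sum/1048576}'

    # Inspect the memory pools of a single OSD via its admin socket
    # (placeholder ID 0; run on the node that hosts this OSD)
    ceph daemon osd.0 dump_mempools

    # Optionally cap the per-OSD memory target (value in bytes, here 4 GiB)
    ceph config set osd osd_memory_target 4294967296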
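Finally, regarding the cluster-network vs. public-network question (item 18): Ceph clients always talk to MONs and OSDs over the public network, while the cluster network carries only OSD-to-OSD replication, recovery and heartbeat traffic, which is why mounting a share over the cluster-network NIC does not redirect client I/O. A quick way to check which networks are configured (the path assumes a Proxmox-managed Ceph setup):

    # Show the configured Ceph networks on a PVE node
    grep -E 'public.network|cluster.network' /etc/pve/ceph.conf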
