Hello!
The issue with the failed disks is resolved, meaning all disks are back in Ceph.
However, I would like to take this opportunity to ask for clarification on the available and allocated disk space.
In my setup I have 4 OSD nodes.
Each node has 2 (storage) enclosures.
Each enclosure has 24 HDDs...
Hm... if the available button(s) don't work as intended, I would call this a bug.
Using the CLI helps to complete the task at hand, but it's not a solution for the issue with the WebUI.
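For reference, the CLI commands I mean (a sketch; there may be other ways to get the same numbers):

ceph df             # cluster-wide raw capacity and per-pool usage
ceph osd df tree    # per-OSD utilisation laid out along the CRUSH tree (nodes / enclosures)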
If you need more info or data to solve this issue please advise and I will try to provide it.
THX
Hi,
I want to stop several OSDs to start node maintenance.
However, I get an error message indicating a communication error. Please check the attached screenshot for details.
Please note that I have defined different storage types (NVME, HDD, etc.) in the crush map in order to use different...
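For completeness, the usual way to do this from the CLI would be something like this (a sketch; <ID> stands for the numeric OSD id):

ceph osd set noout              # prevent rebalancing while the node is down
systemctl stop ceph-osd@<ID>    # stop the OSD daemon(s) on the node to be serviced
# ... perform maintenance, then:
systemctl start ceph-osd@<ID>
ceph osd unset noout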
Hi Alwin,
the faulty host is ld5507-hdd_strgbox, meaning there are two enclosures connected, each with 24 HDDs à 2TB.
On the two enclosures there are 15 and 16 HDDs respectively showing a failure.
The investigation into why 31 disks fail at the same time is ongoing.
The error of every single disk is this...
Hello,
in my cluster consisting of 4 OSD nodes there's an HDD failure.
This currently affects 31 disks.
Each node has 48 HDDs à 2TB connected.
This results in this crushmap:
root hdd_strgbox {
        id -17                  # do not change unnecessarily
        id -19 class hdd        # do not change...
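For reference, such an excerpt comes from decompiling the binary CRUSH map; a sketch of the usual commands (output file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt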
In Ceph's documentation the hardware requirements are:
Process    Criteria         Minimum Recommended
ceph-osd   Processor        1x 64-bit AMD-64
                            1x 32-bit ARM dual-core or better
                            1x i386 dual-core
           RAM              ~1GB for 1TB of storage per daemon
           Volume Storage   1x storage drive per daemon
           Journal          1x SSD partition per...
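Applied to my nodes (48 HDDs à 2TB = 96TB per node), that RAM guideline alone already corresponds to roughly 96GB per node just for the OSD daemons.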
Hi,
I have updated the title because the issue is getting worse.
And this issue is not a single occurrence, but multiple nodes are affected.
Therefore I post the current statistics regarding memory allocation:
root@ld5505:~# free -h
              total        used        free      shared...
All right. I agree that the terminology is not the same across the different tools.
Therefore let's stick with what is displayed by the command free.
I take node ld5505 as the currently most painful example, because this node runs at ~80% RAM allocation w/o any CT / VM running:
root@ld5505:~# free -m...
Now I get an
HTTP Error 501
when opening this URL: https://ld3955/ipam
I'm hesitating, but I think I will go for a dedicated Reverse Proxy service provided by HAProxy and leave the Nginx configuration untouched.
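A minimal sketch of what such an HAProxy frontend/backend could look like (listen port, certificate path and backend address are placeholders, not my real values):

frontend nipap_in
    bind *:<frontend-port> ssl crt /etc/haproxy/certs/<cert>.pem
    mode http
    default_backend nipap_backend

backend nipap_backend
    mode http
    # the CT/VM that runs the NIPAP web service
    server nipap1 <ct-ip>:<ct-port> check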
This issue is getting serious now because PVE reports 88% RAM allocation w/o any VM / CT running!
And the relevant node has 250GB RAM.
As this is not a single-node issue, I would conclude that something is wrong with PVE.
Hi,
I have configured Nginx to access the Proxmox WebUI via port 443 based on this documentation.
This works like a charm.
However, I need an additional solution to access web services running on specific CTs / VMs.
In my case this web service is: NIPAP
This web service has only connection to...
Thanks for this excursion into the theory of memory allocation on Linux systems.
But I have the impression that you simply ignore the facts documented by the monitoring tools glances and netdata.
One could argue that a monitor is inaccurate. But two different tools reporting the same metrics that...
There's no doubt that other services / processes running on that node allocate memory.
But here we are talking about an additional memory allocation of 100GB.
And there's a difference in the reported used memory when using other monitoring tools, e.g. glances or netdata.
As a matter of fact the available...
Hi,
in the Proxmox WebUI the monitor reports 88% memory allocation (see screenshot).
This is confirmed by the command free:
root@ld5505:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           251G        221G        1,2G        125M         29G...
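To narrow down where that used memory actually sits, something like this should help (a sketch; slabtop needs root):

ps aux --sort=-rss | head -n 15                          # processes by resident memory
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo     # kernel slab allocations
slabtop -o | head -n 20                                  # largest slab caches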
Well, in this particular scenario I tried to outwit Ceph.
This means the client communicates with the Ceph cluster over the cluster network:
The client's NFS share is mounted over the cluster-network NIC.
However, this does not have the expected impact if I use host ld3955 which is neither a MON nor...
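For context, the split is defined by these ceph.conf options (the subnets below are placeholders): as far as I understand it, clients always talk to MONs/OSDs over the public network, while the cluster network only carries OSD-to-OSD replication and heartbeat traffic.

[global]
public_network  = 10.0.0.0/24    # placeholder: client <-> MON/OSD traffic
cluster_network = 10.0.1.0/24    # placeholder: OSD <-> OSD replication/heartbeat only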