I asked a similar question around a year ago, but I could not find it, so I'll ask it here again.
Proxmox cluster based on 6.3-2, 10 nodes.
Ceph pool based on 24 SAS3 OSDs (4 or 8 TB each); more will be added soon. (Split across 3 nodes; one more node will be added this week.)
we plan to add more...
Hey all; I've made use of Proxmox for a while and I'm looking to take my first few steps into ceph -- I have 3x QCT QuantaGrid SD1Q-1ULH that I will be making use of.
Each of these nodes has a Xeon D-1541 with an OCP Mezz 10Gb SFP+ (QCT Intel® 82599ES dual-port 10G SFP+ OCP mezzanine) and...
I've read posts about Proxmox not supporting Ceph EC. The last one was dated May last year. I'd like to ask whether, almost one year later, this is still Proxmox's stance, or whether we can expect EC support in the near future (2021).
I'm considering setting up a separate Ceph cluster (storage only...
FYI, one (1) of our Proxmox nodes (hosts) triggered the health warning status "clock skew detected on mon.host" in Ceph today, starting at 6:27 AM on 26 March 2021. There were no configuration changes, but it suddenly raised the "clock skew detected on mon.host" alarm. Kindly advise me...
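For anyone hitting the same warning: the monitors compare their clocks against each other, so the first step is usually to check the time source on each node. A minimal sketch of the diagnostics, assuming chrony is the time daemon and using a placeholder mon hostname:

```shell
# Check overall time-sync state on each node
timedatectl status
# If chrony is the time source, inspect its servers and current offset
chronyc sources -v
chronyc tracking
# Ask Ceph how much skew each monitor currently sees
ceph time-sync-status
# After fixing NTP/chrony, restart the affected monitor
# (hostname "host" is a placeholder for the mon in the warning)
systemctl restart ceph-mon@host.service
```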
I have a Ceph cluster with 3 HPE nodes, each with 10x 1TB SAS and 2x 1TB NVMe; the config is below.
The replication and Ceph network is 10Gb, but performance is very low...
In a VM I got (in sequential mode): read 230 MB/s, write 65 MB/s.
What I can do/check to tune my storage environment?
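One way to narrow down where the bottleneck sits is to benchmark the pool directly with `rados bench` and compare that against an in-VM `fio` run; if the raw pool numbers are also low, the problem is in Ceph/network rather than the VM stack. A sketch, assuming a throwaway pool named `testpool`:

```shell
# Write benchmark against the pool itself (30 seconds, keep objects for the read test)
rados bench -p testpool 30 write --no-cleanup
# Sequential read benchmark against the objects just written
rados bench -p testpool 30 seq
# Remove the benchmark objects afterwards
rados -p testpool cleanup

# Inside the VM, a comparable sequential read with fio (requires fio installed)
fio --name=seqread --ioengine=libaio --direct=1 --rw=read \
    --bs=4M --size=4G --numjobs=1 --filename=/tmp/fio.test
```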
I'm using the latest Proxmox (6.3) and experiencing a strange issue during live migration of KVM machines running on Ceph block storage (the Ceph cluster was created through Proxmox). The cluster has been running fine for several years (it was previously on Proxmox 5). This issue started only lately, I...
Hi to all
We have recently upgraded our cluster from version 5.4 to version 6.3. Our cluster is composed of 6 Ceph nodes and 5 hypervisors. All servers have the same package versions. Here are the details:
proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve)
I'm again hitting a "mon low disk space" warning on my small Proxmox cluster. I've seen many threads about this warning, but I'm unable to resolve the problem.
Average OSD usage is only 5%.
The / partition (where /var/lib/ceph is located) is 72% full (19 GB available), and this seems to be a problem for Ceph.
# df -h...
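For context, this warning fires when the monitor's data partition drops below `mon_data_avail_warn` percent free (30% by default), regardless of how full the OSDs are. A sketch of the usual checks and remedies, using a placeholder mon ID:

```shell
# See how much space the monitor's RocksDB store actually uses
du -sh /var/lib/ceph/mon/*/store.db
# Ask the monitor to compact its store (mon ID "pve1" is a placeholder)
ceph tell mon.pve1 compact
# If the partition simply is small, the warning threshold can be lowered
# (default is 30% free; this sets it to 20%)
ceph config set mon mon_data_avail_warn 20
```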
Just wondering: is it possible to have a Proxmox cluster with both a local ZFS datastore per host and Ceph on the same cluster?
5 or 6 hosts
10 bays per host
4 bays for ZFS mirror
6 bays for Ceph
Is it possible to have this mixed storage design in a single cluster?
We have set up a 3-node Proxmox VE/Ceph cluster with one network/VLAN/IP range for both the public and the cluster network.
Now we decided to separate public and cluster network into 2 VLANs/IP ranges.
1. Is there a way to do this in an easy way? (we don't have workload, yet - so service...
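For reference, the split is expressed in `ceph.conf` roughly like this (both subnets are placeholders). OSDs pick up the new cluster network on restart, but monitors keep their address in the monmap, so moving the public network under the mons takes more care (re-creating or re-addressing them one at a time):

```
[global]
    public_network  = 10.10.10.0/24
    cluster_network = 10.10.20.0/24
```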
This week we have been balancing storage across our 5-node cluster. Everything is going relatively smoothly, but I am getting a warning in Ceph:
"pgs not being deep-scrubbed in time"
This only began happening AFTER we made changes to the disks on one of our nodes; Ceph is still healing properly...
I'm looking into a solution for local storage live migration. Since Proxmox doesn't seem to support "XCP-like" local storage migrations, it appears I need to build a Ceph cluster. Let's consider a basic 3-machine cluster (PX1, PX2 and PX3) with 1 Gbps connectivity between them via a...
After a hardware failure in a 3-node Proxmox VE 6.3 cluster, I replaced the hardware and re-joined the new node.
The replaced node is called hystou1; the 2 other nodes are hystou2 and hystou3.
I had a couple of minor issues when re-joining the new node since it has the same name, and I had to remove...
I wanted to know the status of non-Linux OS support for direct access to Ceph's native I/O paths via RADOS.
I am trying to evaluate which existing programs would allow direct Ceph cluster access, in order to avoid using iSCSI or CIFS gateways.
The idea is to use such programs to...
While I understand that this is not really a Proxmox question, I've found no other place to ask this into so here we go:
I've installed Proxmox with 3 nodes, each having additional disks installed in order to form a Ceph cluster. I did so, having 3 osds (1 on each host) and creating two...
Hi, I've started to use PBS for VM and container backups, but I can't find a way to back up Ceph file systems... I've created a CephFS in the Proxmox cluster.
Is there any proper way to do it? If not, are there any plans for Proxmox or PBS to support this feature in upcoming releases?
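As far as I know, PBS backs up guests and file trees rather than CephFS as such, but one workaround is to mount the CephFS on a node and back up the mounted tree file-level with `proxmox-backup-client`. A sketch with placeholder mon address, secret file, archive name and repository:

```shell
# Mount the CephFS on the node (mon address and secret file are placeholders)
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
# Back up the mounted tree to a PBS datastore as a pxar archive
# (user, host and datastore names are placeholders)
proxmox-backup-client backup cephfs.pxar:/mnt/cephfs \
    --repository backup@pbs@pbs-host:datastore1
```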
I'm trying to replace an HDD with an SSD.
As I understand it, I mark the target OSD out, wait for the cluster to become HEALTH_OK, then destroy the OSD and remove the HDD physically.
But after the 'osd out' operation, HEALTH_WARN never clears. How can I fix it?
My version is Virtual Environment 5.4-15.
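One thing worth noting: on a small cluster with 3 replicas and only 3 hosts, marking an OSD out can leave PGs with nowhere to re-replicate, so HEALTH_OK may never return until the replacement disk is in place. A sketch of the usual replacement sequence, assuming OSD id 3 as a placeholder:

```shell
# Mark the OSD out so data drains off it
ceph osd out 3
# Watch recovery; HEALTH_WARN is expected while PGs backfill
ceph -w
# Once all PGs are active+clean, stop the daemon and remove the OSD
systemctl stop ceph-osd@3.service
ceph osd purge 3 --yes-i-really-mean-it
```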
I'm seeing this error randomly when attempting live migration (it usually works for the same VM after 1-2 attempts) and consistently when I try to show the contents of the second Ceph pool. I have 2 Ceph pools within Proxmox; one has a replication rule where it replicates to SSDs only...