Hi, I have two errors regarding Ceph.
I have ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable) on all nodes.
1- Reduced data availability: 128 pgs inactive, 5 pgs stale:
pg 1.8 is stuck stale for 2d, current state stale+active+clean, last acting [1,2,3]
pg 1.a is stuck...
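As a minimal diagnostic sketch for the stale PGs above (assuming the standard Ceph CLI on a monitor node; the PG id `1.8` and OSD id `1` are taken from the output shown):

```shell
# List all PGs currently stuck in the stale state.
ceph pg dump_stuck stale

# Query one of the affected PGs for detail (peering state, acting set).
ceph pg 1.8 query

# Check that the OSDs in the acting set [1,2,3] are actually up and in.
ceph osd tree

# If an OSD daemon is down on a node, restarting it often clears
# stale PGs (run on the node hosting osd.1).
systemctl restart ceph-osd@1
```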
I am trying to upgrade to Proxmox 8.
After finishing the update of all nodes to 7.4-16 (and rebooting each node after install)
and upgrading Ceph from Pacific to Quincy,
the update went smoothly without issues,
but after the update this issue occurred.
chrony is set up and running on all nodes across the...
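A sketch of the post-upgrade checks (assuming the standard Ceph and chrony CLIs) that verify every daemon is actually running Quincy and that time sync is healthy:

```shell
# All mon/mgr/osd daemons should report 17.2.x here; a mixed list
# means some daemons were not restarted after the upgrade.
ceph versions

# Confirm time sync on each node (clock skew also causes mon/PG warnings).
chronyc tracking

# Once every daemon runs Quincy, persist the release requirement.
ceph osd require-osd-release quincy
```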
I have a license at the lowest-cost Proxmox VE subscription level. I installed fresh Proxmox 8 in the last few days of the beta, then updated to release.
I removed my license from my old box and added it to the new one -- no problem. I can get packages from the enterprise repo. But not...
I have a test cluster of 3 VMs in virtualbox on my pc.
Their IPs changed as follows:
A: 172.16.0.150 > 192.168.100.1 (NOT a gateway)
B: 172.16.0.151 > 192.168.100.2
C: 172.16.0.152 > 192.168.100.2
Now I get the following message when I run 'service ceph-mon@local status':
Processor -- bind...
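One way a bind error like this can arise after an IP change is that the monitor map still contains the old 172.16.0.x addresses. A hedged sketch of rewriting the monmap offline with `monmaptool` (the mon id `local` comes from the unit name above; the new address used here is an assumption):

```shell
# Stop the monitor before touching its store.
systemctl stop ceph-mon@local

# Extract the current monmap from the stopped monitor.
ceph-mon -i local --extract-monmap /tmp/monmap

# Inspect it: the old 172.16.0.x addresses will still be listed.
monmaptool --print /tmp/monmap

# Remove the stale entry and re-add the mon with its new address.
monmaptool --rm local /tmp/monmap
monmaptool --add local 192.168.100.1:6789 /tmp/monmap

# Inject the fixed map and start the monitor again.
ceph-mon -i local --inject-monmap /tmp/monmap
systemctl start ceph-mon@local
```

The `mon_host` entry in `/etc/ceph/ceph.conf` (and any `public_network` setting) would also need to match the new subnet, otherwise clients keep trying the old addresses.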
I have a cluster of 4 nodes with Proxmox 7.1-9 and Ceph Pacific v16.2.7. This weekend I would like to upgrade Proxmox to 7.3 and Ceph to Quincy (the latest version). My Ceph cluster is made of 1 pool, consisting of 8 SSDs in each node.
My questions are these:
-The 7.3 PVE version is...
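For an upgrade weekend like this, a common precaution (sketched here with standard Ceph commands) is to stop the cluster from rebalancing while nodes are rebooted one at a time:

```shell
# Prevent CRUSH from marking rebooting OSDs "out" and triggering
# unnecessary data movement during the upgrade.
ceph osd set noout

# Upgrade and reboot one node at a time; between nodes, wait until
# the cluster reports healthy again (apart from the noout warning).
ceph -s

# When every node is upgraded and healthy, clear the flag.
ceph osd unset noout
```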
After the overall good feedback in the Ceph Quincy preview thread, and no new issues popping up when testing the first point release (17.2.1), we are confident to mark the Proxmox VE integration and build of Ceph Quincy as stable and supported when used with pve-manager in version...