Hi,
I have a cluster of 3 compute nodes and 3 storage nodes.
I wanted to upgrade to PVE 7.4 and Ceph Quincy.
Followed the official documentation https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
All went OK until I restarted the OSDs on one of the storage nodes.
Some are upgraded and some...
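In case it helps anyone hitting the same thing, this is roughly how I'd check which daemons are still on the old release and restart the OSDs node by node (a sketch following the wiki steps; adjust to your cluster):

# show which version each daemon type is actually running
ceph versions

# keep Ceph from rebalancing while OSDs bounce
ceph osd set noout

# on one storage node at a time: restart its OSDs, then wait for HEALTH_OK
systemctl restart ceph-osd.target
ceph -s

# once every OSD reports the new version, clear the flag
ceph osd unset noout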
I upgraded my Ceph cluster without properly following the mon upgrade, so the mons were never migrated off LevelDB.
Proxmox and Ceph were updated to the latest versions for the current release.
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
The upgrade to Quincy states a recommendation that the mons be using RocksDB...
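For anyone in the same situation, this is roughly what I'd check and how I'd move a mon off LevelDB, based on the wiki (a sketch; it assumes one mon per node and that the mon ID matches the hostname):

# the backend should report "rocksdb"
cat /var/lib/ceph/mon/ceph-$(hostname)/kv_backend

# if it still says "leveldb", destroy and recreate that mon (one at a time,
# waiting for quorum to recover before touching the next one)
pveceph mon destroy $(hostname)
pveceph mon create
ceph mon stat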
Hi.
I have a question regarding the upgrade process for Proxmox VE in combination with Ceph.
Currently, my Proxmox VE setup is running version 7, and I also have Ceph installed with version 15.2.17 (Octopus). I am planning to upgrade Proxmox VE to version 8, as per the official upgrade...
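In case the ordering is the question: as far as I understand the guides, Ceph has to be brought forward to Quincy while still on PVE 7, one release at a time, before starting the PVE 7 to 8 upgrade. A rough sketch (it assumes the stock Ceph repo file at /etc/apt/sources.list.d/ceph.list; follow the Octopus-to-Pacific and Pacific-to-Quincy wiki pages for the full per-release steps):

# confirm what is currently installed and running
pveversion
ceph versions

# move the Ceph repo forward one release (repeat later for pacific -> quincy),
# then upgrade and restart mons, mgrs and OSDs per the respective wiki guide
sed -i 's/octopus/pacific/' /etc/apt/sources.list.d/ceph.list
apt update && apt full-upgrade

# only once the whole cluster is on Quincy, run the PVE 8 readiness check
pve7to8 --full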
I run a 3-node PVE cluster with Ceph.
I migrated all VMs away from node 3, upgraded to the latest Ceph (Quincy), and then started the PVE 7 to 8 upgrade on node 3.
After rebooting node 3 (now PVE 8), everything seemed to work well. So I migrated two VMs, one each from node 1 (still on PVE 7) and node 2...
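For what it's worth, these are the checks I'd run before touching each remaining node (a sketch; nothing node-specific assumed):

# cluster must be healthy and quorate before the next node is upgraded
ceph -s
pvecm status

# run the readiness checklist on the node about to be upgraded
pve7to8 --full

# optionally stop Ceph from rebalancing while the node reboots,
# and clear the flag again once it is back up
ceph osd set noout
ceph osd unset noout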
Yesterday I upgraded my Proxmox servers following https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
I am facing the issue of no longer being able to create new OSDs:
# pveceph osd create /dev/sdb -db_dev /dev/nvme1n1
binary not installed: /usr/sbin/ceph-volume
Any ideas?
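In case someone else runs into this: with Quincy, ceph-volume is shipped as its own package, so installing it explicitly should bring the binary back (a sketch; package name as in the Debian/Proxmox Quincy packaging):

# install the split-out ceph-volume package
apt update
apt install ceph-volume

# then retry the OSD creation
pveceph osd create /dev/sdb -db_dev /dev/nvme1n1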
Hi Community!
After the overall good feedback in the Ceph Quincy preview thread, and no new issues popping up when testing the first point release (17.2.1), we are confident to mark the Proxmox VE integration and build of Ceph Quincy as stable and supported when used with pve-manager in version...