  1. [SOLVED] ceph-volume gone after upgrade to Quincy?

     Yesterday I upgraded my Proxmox servers following https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy and am no longer able to create new OSDs: "# pveceph osd create /dev/sdb -db_dev /dev/nvme1n1" fails with "binary not installed: /usr/sbin/ceph-volume". Any ideas?
  2. ceph-volume lvm create => error connecting to the cluster

     Hello, I am trying to get Ceph up and running again after migrating Proxmox v5 to v6 on 3 nodes. After the upgrade, the old Luminous setup no longer worked due to 2 "ghost" monitors (mon.0 and mon.1). I couldn't remove them, and the GUI says "mon_command failed - command not known (500)". So...
  3. ceph-disk or ceph-volume ?

     Today I added some new HDDs to our storage nodes. All HDDs are Seagate IronWolf 8TB. As you can see in the attachment, the new HDDs are shown with a different size. The only difference I know of is that I created the old OSDs with ceph-volume myself, not via the GUI. I used ceph-volume because ceph-disk is deprecated...
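The error in the first thread has a common cause on Debian-based systems: starting with Quincy, ceph-volume is no longer bundled with the OSD packages and can end up uninstalled after the Pacific-to-Quincy upgrade. A minimal diagnostic sketch, assuming that packaging (a separate "ceph-volume" Debian package, as on Proxmox's Quincy repositories):

```shell
#!/bin/sh
# Sketch for the "binary not installed: /usr/sbin/ceph-volume" error.
# Assumption: on Ceph Quincy (Debian/Proxmox), ceph-volume ships in its
# own "ceph-volume" package, which the upgrade may not have pulled in.
if command -v ceph-volume >/dev/null 2>&1; then
    echo "ceph-volume present"
else
    echo "ceph-volume missing - try: apt install ceph-volume"
fi
```

Once the binary is back, the "pveceph osd create" invocation from the first thread should no longer abort with the missing-binary message.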

