Search results

  1. Ceph upgrade from Jewel to Luminous

    Dear Sir/Madam, We are currently in the process of upgrading our environment from Proxmox 4.4 with Ceph Hammer to 5.2 with Ceph Luminous. But now we have the same problem as a few months ago: we are again unable to locate the right packages for Ceph Luminous 12.2.8...
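
    A minimal sketch of the APT source that would provide those packages on Debian Stretch; the repository path is an assumption based on the Proxmox download server layout, so verify it against the official upgrade guide before use:

        # /etc/apt/sources.list.d/ceph.list (assumed path, verify first)
        deb http://download.proxmox.com/debian/ceph-luminous stretch main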

  2. Ceph upgrade from Jewel to Luminous

    Okay, thanks for the information. We will wait for the Proxmox update. Just out of curiosity, is there no other way to upgrade now? We can't seem to find an old Ceph repo that will work on Jessie.

  3. Ceph upgrade from Jewel to Luminous

    Hi, We currently have an environment that runs on Proxmox 4.4 with Ceph Hammer, and we want to upgrade it to Proxmox 5.2 with Ceph Luminous. In our test environment we first upgraded Ceph Hammer to Jewel; that upgrade was successful and the cluster is healthy. Now we're trying to upgrade Ceph...
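
    For context, a hedged sketch of the usual Ceph rolling-upgrade pattern; this is simplified and omits release-specific steps, so check the Proxmox wiki article on the Jewel-to-Luminous upgrade before running anything:

        ceph osd set noout                        # stop rebalancing during restarts
        apt-get update && apt-get dist-upgrade    # on each node: pull new packages
        systemctl restart ceph-mon.target         # restart monitors first
        systemctl restart ceph-osd.target         # then the OSDs
        ceph osd unset noout                      # once every daemon runs Luminous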

  4. reverse proxy nginx noVNC problem since V5

    Thank you so much! libpve-http-server-perl version 2.0-9 fixed all my nginx reverse proxy problems. Keep up the good work!
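
    noVNC consoles run over WebSockets, so besides the fixed package the reverse proxy must forward the upgrade headers. A minimal nginx location sketch, assuming Proxmox listens on its default port 8006:

        location / {
            proxy_pass https://127.0.0.1:8006;
            proxy_http_version 1.1;
            # forward WebSocket upgrade headers so noVNC consoles work
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }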

  5. Ceph Monhost (Storage.cfg) edit

    Everything works again after manually editing the storage.cfg with the new monhosts. Still, it is strange that it didn't update automatically; even in a freshly installed test environment, storage.cfg was not updated after deleting a Ceph monitor. So I think there is a bug in proxmox-ve 4.4-90...
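
    For anyone doing the same manual fix: the monhost list lives in the RBD entry of /etc/pve/storage.cfg. A sketch with a placeholder storage ID and addresses:

        rbd: ceph-vm
            monhost 10.0.0.11 10.0.0.12 10.0.0.13
            pool rbd
            content images
            username admin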

  6. Ceph Monhost (Storage.cfg) edit

    proxmox-ve: 4.4-90 (running kernel: 4.4.67-1-pve)
    pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    pve-kernel-4.4.13-2-pve: 4.4.13-58
    pve-kernel-4.4.67-1-pve: 4.4.67-90
    lvm2: 2.02.116-pve3
    corosync-pve: 2.4.2-2~pve4+1
    libqb0: 1.0.1-1
    pve-cluster: 4.0-52...

  7. Ceph Monhost (Storage.cfg) edit

    Dear Proxmox people, We added 3 new nodes to our Proxmox cluster with the goal of making them the primary monitors for our Ceph storage layer. Steps we took (see the sketch after this list):
    - Added them to the Proxmox cluster
    - Made them Ceph monitors
    - Waited a day and then destroyed the old ones
    - Everything kept running...
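
    On Proxmox VE 4.x those monitor steps map to the pveceph tool; a hedged sketch, where the monitor ID is an example:

        pveceph createmon      # run on each new node after installing Ceph
        ceph mon stat          # confirm the new monitors joined the quorum
        pveceph destroymon 0   # then remove an old monitor by its ID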

  8. Ceph.conf is empty after editing it on a running Ceph cluster

    We wanted to edit /etc/pve/ceph.conf to decrease the I/O when removing/adding an SSD to the cluster. I found on the Proxmox forum that we can use osd max backfills = 1 and osd recovery max active = 1 to reduce the I/O while the cluster is rebuilding. So I picked one node and directly edited the...
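
    Those two options belong in the [osd] section of ceph.conf, and they can also be injected at runtime without touching the file at all, which sidesteps the risk of truncating it. A sketch of both forms, using standard Ceph syntax:

        # /etc/pve/ceph.conf
        [osd]
            osd max backfills = 1
            osd recovery max active = 1

        # or apply at runtime to all OSDs
        ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'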
