luminous

  1. L

    Ceph performance

    I need some input on tuning performance on a new cluster I have set up. The new cluster has 2 pools (one for HDDs and one for SSDs). For now it's only three nodes. I have separate networks: 1 x 1 Gb/s NIC for corosync, 2 x bonded 1 Gb/s NICs for Ceph, and 1 x 1 Gb/s NIC for the Proxmox bridged VMs...
  2. ssaman

    Update instructions for Proxmox/Ceph 4.4 (Jewel) to 5.x (Luminous)

    Hello Proxmox Support, we want to safely upgrade our system to the newest version. The challenge is to upgrade 5 nodes at once with the smallest possible downtime. We are afraid that something may go wrong. We have over 30 servers that are all running. We also read that there is a new (Blue)store. So...
  3. F

    Ceph Luminous with Bluestore - slow VM read

    Hi everyone, recently we installed Proxmox with Ceph Luminous and BlueStore on our brand new cluster, and we are experiencing problems with slow reads inside VMs. We tried different settings in the Proxmox VM configuration but the read speed is still the same - around 20-40 MB/s. Here is our hardware configuration...
  4. Volker Lieder

    Merge Proxmox 4.4 to 5.1

    Hi, we have a running setup with: 4 servers running Debian 8.x with Proxmox 4.4 and Ceph "hammer" with 30 OSDs. Last weekend, to grow the cluster further, we installed 4 servers with Debian Stretch and Proxmox 5.1. If we log in to the Proxmox panel, the new servers are marked as "offline". I didn't see any hints...
  5. TwiX

    Ceph 12.2.1 update - Weird syslog

    Hi, I've just updated a 3-node PVE 5.0 cluster with the latest Luminous packages. Everything seems to be good after the upgrade and reboot, but on one node I see weird syslog entries relating to the "osd.12" service. Oct 12 20:51:32 dc-prox-13 systemd[1]: ceph-osd@12.service: Service hold-off time over...
  6. H

    [SOLVED] Ceph luminous required for PVE 5 if Ceph is external

    We are planning an upgrade of PVE 4.x to 5.x. Does the requirement[1] for Ceph Luminous also apply when using an external Ceph cluster? [1] https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0#Upgrade_the_basic_system_to_Debian_Stretch_and_PVE_5.0
  7. T

    Limit Ceph Luminous RAM usage

    I am trying to limit my OSD RAM usage. Currently my OSDs (3) are using ~70% of my RAM (the RAM is now completely full and is lagging the host). Is there a way to limit the RAM usage of each OSD?
  8. T

    Increase Ceph recovery speed

    I'm in the middle of migrating my current OSDs to BlueStore, but the recovery speed is quite low (5600 kB/s, ~10 objects/s). Is there a way to increase the speed? I currently have no virtual machines running on the cluster, so performance doesn't matter at the moment. Only the recovery is running.
  9. D

    Ceph status page Health and OSD strangeness

    So I have successfully upgraded my Proxmox 5.0-29 environment running Luminous 12.1.1. The weird thing is that when I look at the health screen in the GUI, it displays the Performance section, but the Health section is blank, and so is what it thinks the OSDs are (0 in, 0 out). However, if I go to Ceph->OSD...
  10. J

    Ceph Luminous backup improvement?

    I am using the Proxmox 5 beta with Ceph Luminous configured in a 3-node cluster. It works very well. I noticed that backups of the VMs (housed on RBD) seem to run much faster now. Am I dreaming, or did something get fixed? INFO: status: 73% (3962830848/5368709120), sparse 73% (3942830080)...
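Thread 1 asks about squeezing performance out of bonded 1 Gb/s links. One common step is moving OSD replication traffic off the client-facing network via ceph.conf; the sketch below uses two hypothetical subnets, which you would substitute with your own:

```ini
[global]
    # Hypothetical subnets -- replace with your actual ranges.
    # Client/monitor traffic (VMs, RBD clients):
    public network = 10.10.10.0/24
    # OSD-to-OSD replication and heartbeat traffic:
    cluster network = 10.10.20.0/24
```

OSDs need a restart to pick up the change; the separation keeps replication from competing with client I/O on saturated gigabit bonds.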
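For limiting per-OSD RAM on Luminous BlueStore (thread 7), the main knobs are the BlueStore cache sizes, which default to 1 GiB per HDD OSD and 3 GiB per SSD OSD. The values below are illustrative examples, not recommendations:

```ini
[osd]
    # Values are in bytes; shrinking them reduces per-OSD RAM use
    # at the cost of read caching.
    bluestore cache size hdd = 536870912    # 512 MiB (example)
    bluestore cache size ssd = 1073741824   # 1 GiB (example)
```

Note that the cache is only part of an OSD's memory footprint, so total usage will still sit somewhat above these figures.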
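To speed up recovery and backfill while no client workload matters (thread 8), Luminous exposes throttles that can be raised temporarily; again, the numbers here are examples only:

```ini
[osd]
    # Luminous defaults: osd max backfills = 1,
    # osd recovery max active = 3, osd recovery sleep hdd = 0.1 s.
    osd max backfills = 4
    osd recovery max active = 8
    osd recovery sleep hdd = 0
```

The same settings can be applied at runtime with `ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'`, and should be reverted before VMs return, since aggressive recovery competes with client latency.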
