I need some input on tuning performance on a new cluster I have set up.
The new cluster has two pools (one for HDDs and one for SSDs). For now it has only three nodes.
I have separate networks: 1 x 1 Gb/s NIC for corosync, 2 x bonded 1 Gb/s NICs for Ceph, and 1 x 1 Gb/s NIC for the Proxmox bridged VMs...
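A network split like the one above is usually expressed in `ceph.conf`. The snippet below is a sketch only; the subnets are hypothetical placeholders and must be replaced with the actual bonded Ceph network:

```ini
# Hypothetical ceph.conf fragment: point Ceph at the bonded NICs.
# 10.10.10.0/24 is an assumed example subnet, not from the post.
[global]
    public network  = 10.10.10.0/24   # client/monitor traffic
    cluster network = 10.10.10.0/24   # OSD replication traffic (same bond here)
```

With only one dedicated Ceph bond, public and cluster networks typically point at the same subnet; a second bond would let replication traffic be separated.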
Hello Proxmox Support,
we want to safely upgrade our system to the newest version.
The challenge is to upgrade 5 nodes at once with minimal downtime.
We are afraid that something may go wrong.
We have over 30 servers that are all running.
We also read that there is a new storage backend (BlueStore). So...
Hi everyone,
recently we installed Proxmox with Ceph Luminous and BlueStore on our brand-new cluster, and we are experiencing problems with slow reads inside VMs. We tried different settings in the Proxmox VM configuration, but the read speed stays the same - around 20-40 MB/s.
Here is our hardware configuration...
Hi,
we have a running setup with:
4 servers running Debian 8.x with Proxmox 4.4 and Ceph "hammer" with 30 OSDs.
Last weekend, to grow the cluster further, we installed 4 servers with Debian Stretch and Proxmox 5.1.
If we log in to the Proxmox panel, the new servers are marked as "offline". I didn't see any hints...
Hi,
I've just updated a 3-node PVE 5.0 cluster with the latest Luminous packages.
Everything seems to be fine after the upgrade and reboot, but on one node I see weird syslog entries related to the "osd.12" service.
Oct 12 20:51:32 dc-prox-13 systemd[1]: ceph-osd@12.service: Service hold-off time over...
We are planning an upgrade from PVE 4.x to 5.x. Does the requirement[1] for Ceph Luminous also apply when using an external Ceph cluster?
[1] https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0#Upgrade_the_basic_system_to_Debian_Stretch_and_PVE_5.0
I am trying to limit my OSD RAM usage.
Currently my three OSDs are using ~70% of my RAM (RAM is now completely full, and the host is lagging).
Is there a way to limit the RAM usage for each OSD?
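On Luminous with BlueStore, per-OSD memory is dominated by the BlueStore cache, which can be capped in `ceph.conf`. The values below are illustrative examples, not recommendations; they need to match the host's actual RAM budget, and the OSDs must be restarted to apply them:

```ini
# Sketch: shrink BlueStore's per-OSD cache (example values only).
[osd]
    bluestore_cache_size_hdd = 536870912   # 512 MiB per HDD-backed OSD
    bluestore_cache_size_ssd = 1073741824  # 1 GiB per SSD-backed OSD
```

Note that the cache setting is a target, not a hard ceiling; OSDs carry additional overhead beyond the cache, so leave headroom when sizing.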
I'm in the middle of migrating my current OSDs to BlueStore, but the recovery speed is quite low (5600 kB/s, ~10 objects/s). Is there a way to increase the speed?
I currently have no virtual machines running on the cluster so performance doesn't matter at the moment. Only the recovery is running.
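With no client I/O to protect, the usual approach is to raise Ceph's recovery throttles. The settings below are a sketch with example values; they can also be changed at runtime with `ceph tell osd.* injectargs`, and should be turned back down before VMs go into production:

```ini
# Example ceph.conf throttles to speed up recovery/backfill
# while the cluster carries no client workload (values are examples).
[osd]
    osd_max_backfills = 4          # default 1 in Luminous
    osd_recovery_max_active = 8    # default 3 in Luminous
```

Higher values trade client latency for recovery throughput, which is exactly the right trade while the cluster is empty.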
So I have successfully upgraded my Proxmox 5.0-29 environment running Luminous 12.1.1. The weird thing is that when I look at the health screen in the GUI, it displays the Performance section, but the Health section is blank, and so is the OSD summary (0 in, 0 out). However, if I go to Ceph->OSD...
I am using the Proxmox 5 beta with Ceph Luminous configured in a 3-node cluster. It works very well. I noticed that backups of the VMs (housed in the RBD) seem to run much faster now. Am I dreaming, or did something get fixed?
INFO: status: 73% (3962830848/5368709120), sparse 73% (3942830080)...