Search results

  1. [SOLVED] Ceph storage fully although storage is still available

    Can you please explain how you have your ceph cluster set up? It is unconventional to have all SSDs on one host and all HDDs on another host.
  2. [SOLVED] Ceph storage fully although storage is still available

    Check the OSD ratios. Is one OSD holding more data than the others? This can be seen on the Ceph > OSD page, Used (%). If one of the OSDs is holding a higher percentage than the others, you can manually set the reweight via the command line to a lower value to help force a...
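As a concrete sketch of the reweight step described above (the OSD id and the numeric values are examples, not taken from the thread):

```shell
# Show per-OSD utilization to spot the outlier
ceph osd df tree

# Temporarily lower the reweight of an over-full OSD (e.g. osd.7)
# so Ceph moves some PGs off it; values range from 0.0 to 1.0
ceph osd reweight osd.7 0.9

# Or let Ceph pick reweights automatically, targeting OSDs
# above 120% of the average utilization
ceph osd reweight-by-utilization 120
```

Note that `ceph osd reweight` is an override on top of the CRUSH weight and is reset if the OSD is marked out and back in; it is meant as a temporary balancing tool.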
  3. [SOLVED] Some ceph related questions (autoscale)

    Old thread, but great advice. Thank you @aaron. This was my issue also. Could this be added to the Proxmox Wiki? I searched the Ceph documentation and didn't see mention of this issue.
  4. Ceph Storage and possible new pool

    I would only recommend having multiple pools if the duplication is different or if the drive type is different, e.g. HDD & SSD. If you follow this path, you must set up different rules with different classes for drive type. Again, this is only needed if you have different drive types. If all...
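The per-device-class rules mentioned here can be set up along these lines (rule and pool names are illustrative):

```shell
# One replicated rule per device class, with host as the failure domain
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Point each pool at the matching rule
ceph osd pool set pool_hdd crush_rule replicated_hdd
ceph osd pool set pool_ssd crush_rule replicated_ssd
```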
  5. Proxmox Ceph Cluster Configuration

    Look into erasure coding: I use it on my cephFS implementation on a 3 node, 4 OSDs per node cluster. It works well. I would not use erasure coding for VM storage.
  6. PVE 5.4-3 CEPH High RAM Usage?

    I use ceph bluestore OSDs... if you do also, review this reference doc: TLDR: set the maximum memory usage per each OSD in ceph.conf. I have done this without any issues. [global] other settings...
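A minimal ceph.conf fragment for this, assuming BlueStore OSDs where the relevant knob is `osd_memory_target` (the 2 GiB figure is an example, not a recommendation; the setting can also live under `[global]` as in the quoted post):

```ini
[osd]
# Cap each OSD's memory target at 2 GiB (value is in bytes; default 4 GiB)
osd_memory_target = 2147483648
```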
  7. [SOLVED] Ceph missing more that 50% of storage capacity

    Good news... I read up on the Ceph docs and found the following: A pool can then be changed to use the new rule with: ceph osd pool set <pool-name> crush_rule <rule-name> AKA - Pools can have NEW rule sets applied!
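For reference, the sequence looks like this (placeholders as in the quoted docs):

```shell
# List the available CRUSH rules
ceph osd crush rule ls

# Switch an existing pool to a new rule; Ceph then rebalances
# the pool's data to satisfy the new rule
ceph osd pool set <pool-name> crush_rule <rule-name>
```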
  8. [SOLVED] Ceph missing more that 50% of storage capacity

    Please clarify whether I understand your problem correctly: the 'general' pool is storing information on the SSDs? The advice below depends on my understanding above being correct. Just create a new temporary pool and empty everything off of the current problematic pool. Then delete the...
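One way to sketch the "temporary pool" approach for an RBD pool (pool and image names here are hypothetical; on Proxmox VE, the per-disk "Move disk" action achieves the same migration from the GUI):

```shell
# Create a temporary pool (the PG count is an example) and enable RBD on it
ceph osd pool create temp 128
ceph osd pool application enable temp rbd

# Copy each image out of the problematic pool, then remove the original
rbd ls general
rbd cp general/vm-100-disk-0 temp/vm-100-disk-0
rbd rm general/vm-100-disk-0
```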
  9. [SOLVED] Ceph missing more that 50% of storage capacity

    I think it will require a completely new pool that you migrate over to; however, I am not 100% positive.
  10. [SOLVED] Ceph missing more that 50% of storage capacity

    Ceph does use ALL OSDs for any pool that does not have a drive-type limitation. Your theory is likely valid. Also, FYI: the Total column is the amount of storage being used, not the total capacity of the pool.
  11. How to use google apps smtp to email warnings

    Good information. This got it working for me. Thanks!
  12. Host Network Not Starting

    I am still experiencing this error... must manually restart networking.service upon reboot for OVS network to start up. Does anyone know anything about this bug? Thanks!
  13. Design recommendations - proxmox 4.2

    Based upon my knowledge, however limited: if using Ceph or GlusterFS, you can definitely set up HA as long as you have duplicates of the data, e.g. set the Ceph replication level (pool size) to 3. Also, I suspect you would be able to do Ceph on a 3-node cluster with 1 SSD journal device (I would...
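Setting the replication level described above is one command per pool (placeholder pool name; `min_size 2` keeps I/O available with one copy down):

```shell
# Keep 3 copies of every object, and stay writable with 2 copies online
ceph osd pool set <pool-name> size 3
ceph osd pool set <pool-name> min_size 2
```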
  14. Design recommendations - proxmox 4.2

    I would definitely recommend considering Ceph. However, I would want to have a subscription for the customer due to the upcoming changes in Proxmox with the recent Hammer-to-Jewel upgrade. Also, pveceph install does not run correctly due to changes on the Ceph end. It is easy to...
  15. pvedaemon timed out

    Constant error in syslog for host ... pvedaemon (4 to 5 digit number) got timeout. Here are my package versions... proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve) pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c) pve-kernel-4.4.6-1-pve: 4.4.6-48 pve-kernel-4.4.10-1-pve: 4.4.10-54 lvm2...
  16. pveceph install issues

    I am attempting to install ceph version Hammer via pveceph: pveceph install -version hammer However, this is failing on me with the following output: root@node1:~# pveceph install -version hammer download and import ceph repository keys unable to download ceph release key: 500 Can't connect to...
  17. [SOLVED] Live Migration Network

    As a follow-up question to the above question... which interface do disk migrations occur on? From my observations, I believe it to be either the default vmbr0 or the corosync network (if segmented off of vmbr0)... Same question for NFS mounts, which network interface is responsible for...
  18. [SOLVED] Live Migration Network

    Thank you for confirming that information. It was what I suspected, however when setting up my new network, I wanted to be sure.
  19. [SOLVED] Live Migration Network

    Is it possible to select the network on which live migration occurs? Is it the default vmbr0? Or is it the same network as corosync - in which case I can change the network via this wiki guide: Or is it completely random? I am unable to...
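On current Proxmox VE releases the migration network can be pinned cluster-wide in /etc/pve/datacenter.cfg; the CIDR below is an example:

```
# /etc/pve/datacenter.cfg
# Send live-migration traffic over a dedicated network;
# 'secure' tunnels the migration through SSH
migration: secure,network=10.10.10.0/24
```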
  20. Proxmox 3 Node Cluster with Ceph, Networking

    I would recommend you forgo the need for a 10 GbE switch by directly linking the machines if your cluster is only 3 nodes. You would need three Intel X520-DA2s, which can be had on eBay for ~$100 per device. This would enable you to set up a static ceph network without a switch and have the...
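A sketch of that switchless setup, following the routed variant described in the Proxmox "Full Mesh Network for Ceph Server" wiki (interface names and addresses are examples; each node keeps one Ceph IP and adds a /32 route out of each direct link):

```
# /etc/network/interfaces on node1 (10.15.15.50);
# ens19 is cabled to node2 (.51), ens20 to node3 (.52)
auto ens19
iface ens19 inet static
    address 10.15.15.50/24
    up ip route add 10.15.15.51/32 dev ens19
    down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
    address 10.15.15.50/24
    up ip route add 10.15.15.52/32 dev ens20
    down ip route del 10.15.15.52/32
```

The other two nodes mirror this with their own address and routes to their two peers.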

