Search results

  1.

    [SOLVED] cannot start ha resource when ceph in health_warn state

    Are you sure that you set min_size for this pool to 1? Please show 'ceph health detail' while the cluster is in the health_warn state.
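
    A minimal sketch of the commands involved, assuming the pool is named 'rbd' (substitute your own pool name):

    ceph osd pool get rbd min_size     # check the current min_size
    ceph osd pool set rbd min_size 1   # allow I/O with only one replica left
    ceph health detail                 # details while the cluster is in health_warn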
  2.

    [SOLVED] how to disable ksm sharing in proxmox v3.4

    Try this: echo 2 > /sys/kernel/mm/ksm/run
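
    For reference, a quick sketch of checking and disabling KSM through sysfs (these paths are the standard kernel interface; making the change persistent across reboots is not covered here):

    cat /sys/kernel/mm/ksm/run             # 0 = stopped, 1 = running, 2 = stop and unmerge
    cat /sys/kernel/mm/ksm/pages_sharing   # pages currently being shared
    echo 2 > /sys/kernel/mm/ksm/run        # stop KSM and unmerge existing shared pages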
  3.

    ceph : [client] rbd cache = true override qemu cache=none|writeback

    http://docs.ceph.com/docs/master/rbd/qemu-rbd/#qemu-cache-options
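
    For illustration, the ceph.conf section this refers to on the client/hypervisor side (the option names are the standard RBD cache settings; the values are examples, not a recommendation):

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true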
  4.

    Problem with CEPH after upgrade

    If your pool has size = 3, then each OSD has (1024 * 3 / 12) = 256 placement groups. Now you'll have to either: - add a new node with 4 OSDs (or add 4 OSDs to existing nodes), so there will be (1024 * 3 / 16) = 192 PGs per OSD (and this is the best way); or - change the variable 'mon pg warn max per osd' to some...
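
    If you go for the config change instead, a sketch of the ceph.conf entry (300 is only an example; pick a threshold above your current PG-per-OSD count):

    [mon]
        mon pg warn max per osd = 300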
  5.

    Single Node CEPH - HEALTH_WARN

    Please show your crush map (from Ceph->Configuration).
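
    The crush map can also be pulled from the CLI, for example (crushmap.bin / crushmap.txt are just example file names, and crushtool must be available):

    ceph osd tree                               # quick view of the crush hierarchy
    ceph osd getcrushmap -o crushmap.bin        # export the compiled crush map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile it to readable text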
  6.

    restart pve-manager - restart containers

    To update pve-kernel you need to restart a node.
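
    A quick way to see whether a newer kernel is installed than the one currently running (output format varies by release):

    uname -r                          # kernel currently running
    pveversion -v | grep pve-kernel   # pve-kernel packages installed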
  7.

    CEPH problem after upgrade to 5.1 / slow requests + stuck request

    Explanation and solution: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023223.html
  8.

    CEPH warning: too many PGs per OSD (225 > max 200)

    http://ceph.com/community/new-luminous-pg-overdose-protection/
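
    In Luminous the 'max 200' in that message corresponds to the default of mon_max_pg_per_osd. As a temporary workaround you could raise it in ceph.conf, roughly like the sketch below (300 is only an example, the mons/mgr may need a restart, and reducing the PG count per OSD remains the proper fix):

    [global]
        mon max pg per osd = 300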
  9.

    CEPH problem after upgrade to 5.1 / slow requests + stuck request

    Recovery is in progress. Wait until it completes. I wonder what the primary cause of this failure is. Maybe you didn't wait for HEALTH_OK between every step of the upgrade? Or upgraded with noout set? Did the status become HEALTH_ERR after the reboot of the last node?
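
    Two standard commands to follow the recovery while you wait:

    ceph -s   # current health and recovery/backfill progress
    ceph -w   # watch status changes and the cluster log live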
  10.

    CEPH problem after upgrade to 5.1 / slow requests + stuck request

    This happens when you use the wrong command to remove an OSD: ceph osd rm osd.16 instead of ceph osd rm 16. Have you tried: - rebooting the node containing osd.16 (with the noout flag)? - setting osd.16 as lost?
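
    For reference, a sketch of the usual removal sequence for OSD id 16, per the standard Ceph procedure (run it only if you really intend to remove the OSD; the last step is destructive):

    ceph osd out 16                           # stop placing data on it
    systemctl stop ceph-osd@16                # on the node hosting osd.16
    ceph osd crush remove osd.16              # remove it from the crush map
    ceph auth del osd.16                      # remove its auth key
    ceph osd rm 16                            # remove the OSD itself (numeric id!)
    ceph osd lost 16 --yes-i-really-mean-it   # only if its data is truly unrecoverable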
  11.

    OSD issues after migration

    So, you installed PVE 5.1 on the new hardware and moved the OSD disks from the old hardware to the new? IMO it will not work that way.
  12.

    CEPH problem after upgrade to 5.1 / slow requests + stuck request

    Have you tried this: http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/ ?
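
    The usual starting points from that guide look like this (1.23 is a placeholder pg id):

    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    ceph pg dump_stuck stale
    ceph pg 1.23 query   # replace 1.23 with an affected pg id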
  13.

    CEPH problem after upgrade to 5.1 / slow requests + stuck request

    The cluster is in a normal recovery state. So where is the bottleneck? Try to find it with the atop utility. Are the disks SSDs or spinners? Did you have a problem with flapping OSDs?
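
    For example (atop takes a refresh interval in seconds; ceph osd perf lists per-OSD latencies, which helps to spot one slow disk):

    apt-get install atop   # if not installed yet
    atop 2                 # watch the busiest disks and OSD processes
    ceph osd perf          # per-OSD commit/apply latency figures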
  14.

    CEPH problem after upgrade to 5.1 / slow requests + stuck request

    Is your Ceph network 1 Gbps? You don't need more than 3 mons. If you need more storage space, you should add a node with OSDs only (without a mon). A 1 Gbps network is no good for the recovery/backfill process.
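
    A quick way to check the usable bandwidth between two nodes on the Ceph network, assuming iperf3 is installed (the IP is a placeholder):

    apt-get install iperf3         # on both nodes
    iperf3 -s                      # on the first node
    iperf3 -c <ip-of-first-node>   # on the second node; reports the achievable throughput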
  15.

    [SOLVED] [PVE5] Internet on VM but not to ProxMox host

    Add a default gateway to the Proxmox host: ip route add default via 192.168.1.1
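
    That command is lost after a reboot; to make it persistent the gateway normally goes into /etc/network/interfaces on the bridge, roughly like this (vmbr0, eth0 and the host address are examples matching the post):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0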
  16.

    Add nodes to existing cluster

    You don't need to delete the VMs, or even stop them.
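
    Joining is done from the new node with pvecm, for example (the IP is that of any existing cluster member; the existing nodes and their VMs are untouched):

    pvecm add <ip-of-existing-cluster-node>   # run on the node being added
    pvecm status                              # verify quorum afterwards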
  17.

    IGMP over switch uplink issue

    If you don't know exactly what IGMP snooping does, you don't need it.
  18.

    IGMP over switch uplink issue

    You should disable IGMP snooping on both switches.
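
    After changing the switch settings, multicast between the nodes can be verified with omping, roughly like this (node1/node2/node3 are placeholders; run the same command on every node at the same time):

    omping -c 600 -i 1 -q node1 node2 node3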
