Search results

  1. J

    VM Start "got timeout" if 10+ interfaces added

    So maybe you should move the VLANs inside the VM instead of adding one interface for each.
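    A minimal sketch of that approach, assuming a Linux guest with a single trunk interface named eth0 and VLAN 100 as an example:
    ip link add link eth0 name eth0.100 type vlan id 100    # VLAN sub-interface inside the guest
    ip link set eth0.100 up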
  2. J

    Number of CPUs decreased after restarting Proxmox

    So you have a hardware problem. Remove all the cards and try again. With a kernel panic? Please show the kernel messages.
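    For example (a sketch; on a systemd-based Proxmox host), the kernel messages can be collected with:
    dmesg
    journalctl -k -b -1    # kernel log of the previous boot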
  3. J

    Number of CPUs decreased after restarting Proxmox

    That's the reason. Remove 'acpi=off' from the kernel command line and you will see all the cores.
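    As a sketch, assuming the host boots via GRUB: remove acpi=off from GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    update-grub    # regenerate the GRUB config
    reboot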
  4. J

    ceph 12.1.0-pve2 high ram usage

    That's not true; see http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/
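    If the goal is to cap BlueStore's cache memory, a hedged ceph.conf sketch (option names from the Luminous docs linked above; values illustrative):
    [osd]
    bluestore_cache_size_hdd = 1073741824    # 1 GiB per HDD-backed osd
    bluestore_cache_size_ssd = 3221225472    # 3 GiB per SSD-backed osd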
  5. J

    How do I mount a raw image?

    losetup -o 1048576 /dev/loop22 disk-drive-ide0.raw
    mount /dev/loop22 /mnt/123
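    The offset is the partition's start sector times 512; as a sketch, it can be read from the image itself:
    fdisk -l disk-drive-ide0.raw    # e.g. start sector 2048 -> offset 2048 * 512 = 1048576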
  6. J

    Proxmox+ceph on partition

    This is outdated. Use lvm and ceph-volume: http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#bluestore
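    A minimal sketch of that workflow, assuming /dev/sdX is a blank disk (ceph-volume consumes the whole device, or an LV you prepared):
    ceph-volume lvm prepare --bluestore --data /dev/sdX
    ceph-volume lvm activate --all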
  7. J

    Moving OSD's from 1 node to another

    Are you sure you're using the right /dev/sdX device and that it isn't mounted?
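    As a quick check (sdX illustrative):
    lsblk /dev/sdX
    grep sdX /proc/mounts    # no output means it isn't mounted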
  8. J

    Calculating Journal Size - ceph

    If you like the 'filestore design with journal' performance, you need to set up bcache, as mentioned elsewhere on the forum. Moving the DB + WAL to an SSD didn't improve write speed by a noticeable factor.
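    A hedged bcache sketch, assuming bcache-tools is installed, /dev/sdX is the (empty) OSD disk and /dev/nvme0n1p1 is a spare SSD partition:
    make-bcache -C /dev/nvme0n1p1 -B /dev/sdX    # SSD as cache, HDD as backing device
    echo writeback > /sys/block/bcache0/bcache/cache_mode    # optional, trades safety for speed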
  9. J

    Moving OSD's from 1 node to another

    So osd.6 is the one taken from prox1 to prox2? I guess you're trying to add a disk that is already registered in ceph as osd.6.
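    To confirm, the OSD map shows whether osd.6 is already registered (and on which host):
    ceph osd tree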
  10. J

    Moving OSD's from 1 node to another

    1. When you move an osd from one node to another, you don't need to destroy and re-create it. It will be automagically discovered and added. 2. To remove an osd from the cluster, use ceph osd purge {id} --yes-i-really-mean-it 3. In ceph commands you should use the numeric osd id, i.e. ceph auth del 7...
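    A hedged sketch of the move itself, assuming a ceph-disk-created osd.6 (udev normally activates it on the new node by itself):
    systemctl stop ceph-osd@6    # on the old node, then move the disk
    ceph-disk activate-all       # on the new node, if udev didn't pick it up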
  11. J

    [SOLVED] Added disks, updated pg_num, recovery going very slow

    My 2 cents: 1. pg_num should be a power of 2 (in this case, 1024). 2. You did too many jobs at once. You should: a) add the first osd; b) wait for HEALTH_OK; c) add the second osd; d) wait for HEALTH_OK; ... z) increase pg_num. 3. The 'too many PGs per osd' warning warns you about a real problem, you...
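    For step z), a sketch assuming the pool is named rbd (pgp_num must be raised to match pg_num):
    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024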
  12. J

    CephFS - filesystem is degraded

    So now you have a problem with osds 0 and 11 (no space left on device?). With mon osd full ratio = .98 and mon osd nearfull ratio = .95 you only disable the warning; it will not free space on your osds. Maybe reweighting these osds will help.
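    A sketch of reweighting (osd id and weight illustrative):
    ceph osd reweight 0 0.85
    ceph osd reweight-by-utilization    # or let ceph pick the overfull osds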
  13. J

    CephFS - filesystem is degraded

    How long did you run this cluster in the HEALTH_WARN state? With size=2 you accept the risk of some data loss.
  14. J

    CephFS - filesystem is degraded

    Check what is causing the peering problem: http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#placement-group-down-peering-failure
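    As a starting point in the spirit of that page (pg id illustrative):
    ceph pg dump_stuck inactive
    ceph pg 1.2f query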
  15. J

    CephFS - filesystem is degraded

    Probably yes. Please show the output of ceph health detail
  16. J

    CephFS - filesystem is degraded

    You have PGs stuck in the activating state. Follow this: https://forum.proxmox.com/threads/ceph-problem-after-upgrade-to-5-1-slow-requests-stuck-request.38586/ and then wait for the recovery to complete.
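    Recovery progress can be followed with, e.g.:
    ceph -w    # or: watch ceph -s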
  17. J

    [SOLVED] osd not being created

    Add the osd manually:
    ceph-disk prepare --bluestore /dev/sdX --osd-id {id} --osd-uuid `uuidgen`
    ceph-disk activate /dev/sdX1
  18. J

    Lost files after a power outage

    modprobe nbd max_part=8
    qemu-nbd -c /dev/nbd0 /path/to/image.qcow2
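    After that, the partitions appear as /dev/nbd0p1, /dev/nbd0p2, ... (a sketch; the partition number depends on the image):
    mount /dev/nbd0p1 /mnt
    umount /mnt && qemu-nbd -d /dev/nbd0    # disconnect when done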
  19. J

    [SOLVED] cannot start ha resource when ceph in health_warn state

    ceph osd pool get [your pool name] size
    ceph osd pool get [your pool name] min_size
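    If min_size equals size, the pool blocks I/O as soon as one replica is down; a common (hedged) workaround for a size=2 pool, at the cost of redundancy during recovery:
    ceph osd pool set [your pool name] min_size 1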
