Search results

  1. CEPH MON issue on one node

    and ceph -s log:
      cluster:
        id:     bf79845c-f78b-4b28-8bf9-85fb8d320a38
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum pve11,pve12,pve13 (age 10h)
        mgr: pve11(active, since 5w), standbys: pve12, pve13
        osd: 18 osds: 18 up (since 4d), 18 in (since 13M)
      data...
  2. CEPH MON issue on one node

    When the mon service is active, the cluster is healthy. The problem started after updating Ceph to Quincy (via the Proxmox upgrade path, before the PVE update from version 7 to 8).
  3. CEPH MON issue on one node

    PVE updated and restarted, still the same issue with the mon on node pve13. The disk is almost empty, with only 11% of space used. Any ideas? Update: attached logs from journalctl --since '1 week ago' -u ceph-mon@pve13 >> ceph-mon_log.txt
  4. CEPH MON issue on one node

    root@pve13:~# df -hT /var/log/ceph/
    Filesystem           Type  Size  Used Avail Use% Mounted on
    /dev/mapper/pve-root ext4   94G  9.7G   80G  11% /
    root@pve13:~#
  5. CEPH MON issue on one node

    Hi, I have an issue with the ceph mon on one node, a few weeks after updating to version 17.2.7. Below are logs from the mon's syslog. Once every few days the service stops and I have to start it manually. It works fine for a few days and then these errors come back. The system disk has a lot of free space. It is in...
  6. SWAP usage 100%

    We have the same situation after upgrading a 3-node cluster from 6.4 to 7.4 and adding 128 GB of RAM to node 1. Is there any solution? We changed the sysctl parameter vm.swappiness to 10.
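The swappiness change described in that post can be applied and persisted as follows — a minimal sketch; the value 10 comes from the post, while the drop-in file name 99-swappiness.conf is an arbitrary choice:

```shell
# Apply immediately (takes effect at runtime, lost on reboot):
sysctl vm.swappiness=10

# Persist across reboots via a sysctl drop-in file
# (the file name 99-swappiness.conf is an arbitrary choice):
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
sysctl --system    # reload all sysctl configuration files
```

Note that lowering swappiness only reduces the kernel's preference for swapping; it does not free swap that is already in use.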
  7. CEPH pools have too many placement groups

    Hi, we have a new installation: a 3-node Ceph cluster on Proxmox 6.3. Each node has 3 OSDs. We created a new pool using the GUI with the default 128 PGs. Now we have the health warning "pools have too many placement groups". Detail: 1 pools have too many placement groups; Pool POOL_CEPH has 128 placement groups, should...
  8. Subscription Key

    Hi, we changed our hardware and want to move our subscription keys from the old to the new environment. The old environment had 4 sockets per host (2 hosts); now we have 2 sockets and 3 hosts. Is there any way to change our active keys?
  9. Installing Proxmox Backup Server on FreeNAS 11.3 as bhyve VM

    Did you try this solution of installing Debian first and then PBS? I plan to do the same in our environment. Many thanks for PBS.
  10. Problem when backing up an LXC container

    The solution of changing the NFS export permissions to no_root_squash works for me. Thanks Mir
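The no_root_squash fix mentioned in that post is set on the NFS server's export line; a minimal sketch, assuming a hypothetical export path and client network:

```shell
# /etc/exports on the NFS server -- path and network are placeholders.
# no_root_squash stops the server from mapping client root to nobody,
# so a backup job running as root keeps its permissions:
#
#   /srv/backups 192.168.1.0/24(rw,sync,no_root_squash)

# After editing /etc/exports, re-export the file systems:
exportfs -ra
```

Be aware that no_root_squash weakens the server's isolation from clients, so it is best limited to trusted backup networks.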
