Search results

  1.

    dmesg shows many: fuse: Bad value for 'source'

    Same story:
    root@pve-node3:~# pveversion -v
    proxmox-ve: 6.2-1 (running kernel: 5.4.44-1-pve)
    pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
    pve-kernel-5.4: 6.2-3
    pve-kernel-helper: 6.2-3
    pve-kernel-5.3: 6.1-6
    pve-kernel-5.0: 6.0-11
    pve-kernel-5.4.44-1-pve: 5.4.44-1
    pve-kernel-5.3.18-3-pve...
  2.

    Proxmox VE 6.2 released!

    Would you mind moving this package to the pvetest repo?
  3.

    After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill shortly

    Tomas, could you please check/confirm that with one-way mirroring the following command: rbd mirror pool status rbd --verbose gives normal output when run on the backup node:
    root@pve-backup:~# rbd mirror pool status rbd
    health: OK
    images: 18 total
        18 replaying
    and a warning when run on the main cluster...
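    A minimal sketch of checking mirroring health from both sides with the same command quoted above; the pool name rbd is taken from the post, everything else is generic:

    # run on the backup (pulling) cluster and again on the main cluster
    rbd mirror pool status rbd --verbose   # per-image state and description
    rbd mirror pool info rbd               # mirroring mode and configured peers

    With one-way mirroring only the backup cluster should list a peer; the main cluster's pool info typically shows none.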
  4.

    After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill shortly

    I had to lower Replica/min to 2/1 from 3/2 to get some "extra space". Any ideas why the journaling data is not wiped after being pulled by the backup node? If I'm not mistaken, I set up a one-way mirror. How can I check that? Thanks
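    A minimal sketch of checking whether image journals are what is consuming the space; the image name vm-100-disk-0 is only a placeholder, the pool name rbd comes from the thread:

    rbd info rbd/vm-100-disk-0                             # "features" should list journaling if mirroring is active
    rbd journal status --pool rbd --image vm-100-disk-0    # an active_set far ahead of minimum_set suggests the journal is not being trimmed
    rbd du rbd/vm-100-disk-0                               # provisioned vs. used size of the image itself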
  5.

    After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill shortly

    After upgrading to PVE 6 and CEPH 14.2.4 I enabled pool mirroring to an independent node (following the PVE wiki). Since then my pool usage has been growing constantly, even though no VM disk changes are made. Could anybody help me sort out where my space is flowing out? Pool usage size is going to...
  6.

    Web UI cannot create CEPH monitor when multiple public nets are defined

    Well, this report is mine :) This post is mainly about asking for advice on a workaround
  7.

    Web UI cannot create CEPH monitor when multiple public nets are defined

    According to the CEPH docs (https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#id1) several public nets can be defined (useful in the case of rbd mirroring when the slave CEPH cluster is located in a separate location and/or monitors need to be created on a different network...
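    A minimal ceph.conf sketch of what "several public nets" looks like, per the Ceph docs linked above; the subnets and the monitor name are examples only:

    [global]
        public network = 10.10.10.0/24, 10.20.10.0/24

    [mon.pve-backup]
        public addr = 10.20.10.5    # pin a specific monitor to the second network

    The comma-separated list is what the Web UI reportedly trips over when it tries to pick a single monitor address.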
  8.

    [SOLVED] Ghost monitor in CEPH cluster

    Thanks. I managed to delete Ceph and reinstall it
  9.

    [SOLVED] Ghost monitor in CEPH cluster

    Alwin, thanks. I'll give it a try
  10.

    [SOLVED] Ghost monitor in CEPH cluster

    I did it. I even deleted the whole /var/lib/ceph folder and all ceph* related services in /etc/systemd/system/, and rebooted that node, but pveceph purge still says:
    root@pve-node4:~# pveceph purge
    detected running ceph services- unable to purge data
    What does pveceph purge check for as "running ceph...
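    The thread does not show what pveceph purge actually checks, but a quick way to see which ceph units systemd still knows about on the node is:

    systemctl list-units --all 'ceph*'       # loaded/failed units, including templated ones like ceph-mon@pve-node4
    systemctl list-unit-files 'ceph*'        # unit files still installed or enabled
    systemctl reset-failed 'ceph*'           # clear units stuck in the "failed" state

    Anything still listed as loaded or enabled is a likely candidate for the "running ceph services" complaint.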
  11.

    [SOLVED] Ghost monitor in CEPH cluster

    Nothing changes :(
    root@pve-node4:~# pveceph purge
    detected running ceph services- unable to purge data
    root@pve-node4:~# pveceph createmon
    monitor 'pve-node4' already exists
    root@pve-node4:~#
  12.

    [SOLVED] Ghost monitor in CEPH cluster

    Not sure if it's somehow related, but I don't have any OSDs in my cluster at the moment
    root@pve-node4:~# systemctl | grep ceph-
    ● ceph-mon@pve-node4.service    loaded failed failed    Ceph cluster...
  13.

    [SOLVED] Ghost monitor in CEPH cluster

    Yeap, the systemd service was enabled, but disabling it changes nothing. Ceph log on pve-node4 on mon start:
    Oct 04 13:41:25 pve-node4 systemd[1]: Started Ceph cluster monitor daemon.
    Oct 04 13:41:25 pve-node4 ceph-mon[436732]: 2019-10-04 13:41:25.495 7f5aed4ec440 -1 mon.pve-node4@-1(???) e14 not...
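    The log line above is cut off at "not...", so this is only a sketch of the usual next step: comparing the monmap the cluster has with the one this monitor has on disk. The file path is just an example.

    ceph mon dump                                        # current monmap as the quorate cluster sees it
    systemctl stop ceph-mon@pve-node4                    # the next command needs the mon stopped
    ceph-mon -i pve-node4 --extract-monmap /tmp/monmap   # dump the monmap stored by this monitor
    monmaptool --print /tmp/monmap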
  14.

    [SOLVED] Ghost monitor in CEPH cluster

    After an update from 5.x to 6.x one of the CEPH monitors became a "ghost", with status "stopped" and address "unknown". It can be neither started, created nor deleted, with errors as below:
    create: monitor address '10.10.10.104' already in use (500)
    destroy: no such monitor id 'pve-node4' (500)
    I deleted...
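    Not what was ultimately done in this thread (the node was purged and Ceph reinstalled), but a common manual cleanup sketch for a leftover monitor entry; the node name and paths come from the posts above, the exact steps are an assumption:

    ceph mon remove pve-node4                      # drop it from the cluster monmap (run from a healthy node)
    rm -rf /var/lib/ceph/mon/ceph-pve-node4        # remove the stale mon data directory on pve-node4
    systemctl disable --now ceph-mon@pve-node4     # stop and disable the unit
    # then delete the [mon.pve-node4] section / mon_host entry from /etc/pve/ceph.conf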
  15.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    In my environment, with libknet* 1.12-pve1 (from the no-subscription repo) the cluster has become much more stable (no "link down" events or corosync segfaults so far, >48 hrs)
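    A quick way to confirm which libknet build a node is actually running; the package name libknet1 is how Proxmox ships it, the grep is just a convenience:

    dpkg -l | grep -i libknet      # installed libknet1 version, e.g. 1.12-pve1
    apt policy libknet1            # which repository it came from and what else is available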
  16.

    [SOLVED] Why does KNET choose the ring with the higher priority instead of the lower one (as said in the manual)?

    Here is an answer... https://github.com/corosync/corosync/commit/0a323ff2ed0f2aff9cb691072906e69cb96ed662 The PVE wiki should be updated accordingly. Damn corosync...
  17.

    [SOLVED] Why does KNET choose the ring with the higher priority instead of the lower one (as said in the manual)?

    Could anyone explain why corosync (KNET) chooses the best link by the highest priority instead of the lowest one (as written in the PVE wiki)? Very confused with corosync3 indeed...
    quorum {
        provider: corosync_votequorum
    }
    totem {
        cluster_name: amarao-cluster
        config_version: 20
        interface...
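    For reference, a minimal totem sketch of how knet link priorities are applied; the link numbers and priority values are made up, and per the commit linked in the previous result the highest knet_link_priority wins in the default passive link_mode:

    totem {
        link_mode: passive
        interface {
            linknumber: 0
            knet_link_priority: 10    # backup link
        }
        interface {
            linknumber: 1
            knet_link_priority: 20    # preferred link (highest priority)
        }
    }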
  18.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Another observation: in my setups, only nodes with no swap (ZFS as root and an NFS share as datastore) and vm.swappiness=0 in sysctl.conf are affected. I do remember the unresolved issue with PVE 5.x where swap was still used by PVE processes even with vm.swappiness=0. Couldn't this be the case...
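    A minimal sketch of the swap-related checks the observation above relies on; the value comes from the post, nothing here is PVE-specific:

    swapon --show                            # is any swap device or file active on this node?
    sysctl vm.swappiness                     # current runtime value
    grep vm.swappiness /etc/sysctl.conf      # the persisted setting mentioned in the post (vm.swappiness=0)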
