Search results

  1. BlueFS spillover detected on 30 OSD(s)

    I agree with this assumption. One should at least be warned before an upgrade. I'm facing the same issue with 50+ OSDs and have no idea how to sort it out. I don't have another cluster to play with and found not much info on how to correctly destroy all OSDs on a single node, wipe all disks (as well...
  2. Multipath iSCSI /dev/mapper device is not created (Proxmox 6)

    Check your multipath.conf file. It seems one more “}” bracket is missing at the end; see the multipath.conf sketch after this list.
  3. [SOLVED] Warning after successful upgrade to PVE 6.x + Ceph Nautilus

    After a successful upgrade from PVE 5 to PVE 6 with Ceph, the warning message "Legacy BlueStore stats reporting detected on ..." appears on the Ceph monitoring panel. Have I missed something during the upgrade, or is this expected behavior? Thanks in advance
  4. lacp bond without speed increase

    A single connection will always be limited to the speed of a single interface. An LACP bond increases total throughput (read as the sum of all connections); see the bond configuration sketch after this list.
  5. Nodes unreachable in PVE Cluster

    My configs: root@pve2:~# cat /etc/network/interfaces # network interface settings; autogenerated # Please do NOT modify this file directly, unless you know what # you're doing. # # If you want to manage parts of the network configuration manually, # please utilize the 'source' or...
  6. Nodes unreachable in PVE Cluster

    I'm facing almost the same issue with a couple of setups after an upgrade to 5.4. Could you show your network config and lspci output? Perhaps we could find something in common.
  7. Proxmox cluster broke at upgrade

    There is no dedicated network, but the switch is not loaded (according to SNMP stats). And once again: everything was fine before the upgrade.
  8. Proxmox cluster broke at upgrade

    Below is how the omping result looks now: root@pve2:~# omping -c 600 -i 1 -q pve2 pve3 pve4A pve3 : waiting for response msg pve4A : waiting for response msg pve4A : joined (S,G) = (*, 232.43.211.234), pinging pve3 : joined (S,G) = (*, 232.43.211.234), pinging pve3 : given amount of query...
  9. Proxmox cluster broke at upgrade

    The omping test now shows 60% drop; that was not the case with 5.3 (I performed those tests on all cluster setups).
  10. Proxmox cluster broke at upgrade

    This morning I restarted corosync on all the nodes again. The cluster was working for a couple of minutes and then hung. May 15 09:40:10 pve1 systemd[1]: Starting Corosync Cluster Engine... May 15 09:40:10 pve1 corosync[24728]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready...
  11. After upgrade to 5.4 redundant corosync ring does not work as expected

    On another cluster I'm facing a different issue, again after an upgrade to 5.4. Could you please take a look at: https://forum.proxmox.com/threads/proxmox-cluster-broke-at-upgrade.54182/#post-250102 I'm fully confident that my network switches are configured in line with the PVE docs; IGMP snooping...
  12. Proxmox cluster broke at upgrade

    Here is a log from the node that became part of the cluster and then left: May 14 10:26:08 pve4A corosync[20900]: notice [MAIN ] Completed service synchronization, ready to provide service. May 14 10:26:08 pve4A corosync[20900]: [CPG ] downlist left_list: 0 received May 14 10:26:08 pve4A...
  13. Proxmox cluster broke at upgrade

    On the node that hangs I see: root@pve2:~# systemctl status pve-cluster ● pve-cluster.service - The Proxmox VE cluster filesystem Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2019-05-13 21:49:07 MSK; 16min ago...
  14. Proxmox cluster broke at upgrade

    I'm facing exactly the same issue. Unfortunately, reinstalling is not possible.
  15. mount error: exit code 16 (500) on cephfs mount

    Unmounting "/mnt/pve/cephfs-backup" works (temporarily) for all nodes but one; see the lsof/fuser sketch after this list. root@pve-node3:~# umount /mnt/pve/cephfs-backup umount: /mnt/pve/cephfs-backup: target is busy (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1).)...
  16. mount error: exit code 16 (500) on cephfs mount

    root@pve-node2:~# cat /etc/pve/storage.cfg dir: local disable path /var/lib/vz content images maxfiles 0 shared 0 zfspool: local-zfs disable pool rpool/data blocksize 8k content images nodes...
  17. After upgrade to 5.4 redundant corosync ring does not work as expected

    Yes, nothing has changed. No ideas so far. What has been checked: 3 older kernels (one of them used in a similar environment; the only difference on that setup is IPoIB instead of 10GbE on ring#0, and ring#0 is working in both setups). All the nodes were rebooted (VMs were migrated)...
  18. mount error: exit code 16 (500) on cephfs mount

    I'm facing the same issue with the latest 5.4. syslog: May 7 14:44:27 pve-node4 pvestatd[2269]: A filesystem is already mounted on /mnt/pve/cephfs-backup May 7 14:44:27 pve-node4 pvestatd[2269]: mount error: exit code 16 May 7 14:44:37 pve-node4 pvestatd[2269]: A filesystem is already mounted...
  19. CEPH-Log DBG messages - why?

    Is [mon.HOSTNAME] debug mon = 0/5 correct? (See the ceph.conf sketch after this list.)
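
A minimal multipath.conf sketch for the brace-balance point in result 2; the wwid and alias values are placeholders, not taken from that thread:

    defaults {
            user_friendly_names yes
    }

    multipaths {
            multipath {
                    wwid  3600a0b800012345600000000abcdef12   # placeholder WWID
                    alias mpath-iscsi0                        # placeholder alias
            }
    }   # <- the closing brace that is easy to drop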
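
For result 4, a minimal /etc/network/interfaces sketch of an LACP (802.3ad) bond on a PVE node; the interface names and addresses are assumptions, and any single connection is still capped at the speed of one member link:

    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2          # assumed NIC names
            bond-miimon 100
            bond-mode 802.3ad              # LACP
            bond-xmit-hash-policy layer3+4 # spreads different connections across members

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.10           # example address
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0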
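
For the busy cephfs mountpoint in result 15, a short shell sketch using standard fuser/lsof/umount options (the path is taken from that thread):

    # show which processes keep the mountpoint busy
    fuser -vm /mnt/pve/cephfs-backup
    lsof +f -- /mnt/pve/cephfs-backup

    # after stopping the offenders, retry; lazy unmount only as a last resort
    umount /mnt/pve/cephfs-backup || umount -l /mnt/pve/cephfs-backup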
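
For result 19, a ceph.conf sketch of where such a debug override typically lives; "0/5" means level 0 is written to the log file while level 5 is kept in the in-memory log. Whether to scope it to all monitors ([mon]) or a single daemon ([mon.HOSTNAME]) depends on the goal:

    [mon]
            # 0 = level written to the log file, 5 = level kept in memory
            debug mon = 0/5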