Hi,
We did separate corosync traffic from the rest.
Our setup is now (on each node):
2x10G LACP for production traffic
2x10G LACP for cluster traffic
Still, we experience corosync failures:
Oct 7 04:23:57 proxmox-siege-001 corosync[13778]: [KNET ] link: host: 4 link: 0 is down
Oct 7 04:23:57...
Sure... the network was working fine _before_ migrating to Proxmox 6, and suddenly (without any network change) packets need 35 seconds to travel from one port to another...
I'm more inclined to think the token was not received _because_ corosync caused trouble before...
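For what it's worth, this is how I've been inspecting the totem timing corosync actually runs with (a sketch; the exact cmap key names can vary between corosync versions):

# corosync-cmapctl | grep -i totem.token
# runtime.config.totem.token shows the effective token timeout in ms

If slow link convergence really were the problem, raising the token timeout in the totem section of /etc/pve/corosync.conf (and bumping config_version so the change propagates to all nodes) would be one knob to try, e.g. token: 10000 for 10 seconds. I'm not claiming that's the fix here, just a way to rule timing out.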
Hi,
After upgrading our 4 node cluster from PVE 5 to 6, we experience constant crashes (roughly once every 2 days).
Those crashes seem related to corosync.
Since numerous users are reporting such issues (broken clusters after upgrade, instabilities, ...), I wonder if it is possible to downgrade...
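I don't know whether a clean downgrade of corosync 3 is even supported on PVE 6, but as a stopgap one can at least freeze the cluster stack so apt stops pulling in new builds while debugging (package names are what I'd expect on Debian/PVE; check with dpkg -l first):

# dpkg -l | grep -E 'corosync|knet'
# apt-mark hold corosync libknet1

apt-mark unhold reverses this later.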
Hi,
I updated to the latest version and it seems to be fixed now:
701: Mar 01 10:35:36 INFO: Starting Backup of VM 701 (qemu)
701: Mar 01 10:35:36 INFO: status = running
701: Mar 01 10:35:37 INFO: backup mode: snapshot
701: Mar 01 10:35:37 INFO: ionice priority: 7
701: Mar 01 10:35:38 INFO...
Hi Dietmar,
# lsof | grep backups_tmm
# (no output)
# fuser /dev/VG_SSD_proxmox-vty-001/vzsnap-proxmox-vty-001-0
# (no output)
I would then assume nothing is keeping those open.
The backup job, however, ends with:
INFO: archive file size: 1.00GB
INFO: delete old backup...
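In case someone else hits this: lsof and fuser only see userspace holders, so I also checked the device-mapper open count directly, which catches kernel-side users too. A sketch; note that LVM escapes dashes in VG/LV names with double dashes in the dm name, so the name below is my best guess for my volumes:

# dmsetup info -c | grep vzsnap
# dmsetup info VG_SSD_proxmox--vty--001-vzsnap--proxmox--vty--001--0

If "Open count" is non-zero, something still holds the snapshot and lvremove will refuse to remove it.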
Hi,
I'm sorry to insist so heavily, but I believe there is a bug (or maybe a misunderstanding on my side) with vzdump/snapshot/LVM backups on 1.9.
I'm constantly getting "lvremove failed" while using snapshot backups.
My setup:
4 proxmox boxes (2.0-28):
# pveversion --verbose
pve-manager...
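What I'm doing as a workaround for now (clearly not a fix) is checking for and removing leftover vzdump snapshots by hand before the next run. vzdump names them vzsnap-<hostname>-<n>, as seen above, so something like:

# lvs --noheadings -o vg_name,lv_name | grep vzsnap
# lvremove -f /dev/<VG>/vzsnap-<hostname>-0

The -f skips the confirmation prompt; the placeholders obviously need the real VG and hostname.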