Perhaps this can shed some light:
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_corosync_over_bonds
Corosync is very strict about bonded interfaces.
You can follow the instructions on this site:
https://fohdeesha.com/docs/index.html
Click on PERC Cross Flash and find your RAID controller.
To be honest, I don't recall exactly what I did at the time.
But I think the Fohdeesha guide will help you.
Keep us posted about your progress.
Cheers
I know this can be a little annoying, but you can set the VM's HA state to disabled and then do a shutdown from the CLI inside the VM.
Also, if you hit the shutdown button in the web GUI, the VM will stay shut down, regardless of its state in the HA stack.
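As a concrete sketch (VMID 100 and the service ID vm:100 are just examples, substitute your own), disabling HA for a guest and then shutting it down from the node's CLI would look like:

```shell
# Tell the HA stack to stop managing this guest (sid vm:100 is an example)
ha-manager set vm:100 --state disabled

# Then shut the guest down; it stays off because HA no longer acts on it
qm shutdown 100
```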
Just for reference, there is OmniOS/illumos, a Solaris derivative, which offers ZFS-over-iSCSI with COMSTAR, and there is an iSCSI HA setup described here:
I never tested it, but it could be worth a look.
https://icicimov.github.io/blog/high-availability/ZFS-storage-with-OmniOS-and-iSCSI/
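For what it's worth, a ZFS-over-iSCSI storage entry in /etc/pve/storage.cfg using the COMSTAR provider would look roughly like this (storage name, pool, portal address, and target IQN are all placeholders, adjust to your setup):

```shell
# /etc/pve/storage.cfg fragment - a sketch, not tested against OmniOS
zfs: omnios-zfs
        iscsiprovider comstar
        portal 192.168.100.50
        target iqn.2010-08.org.illumos:02:mytarget
        pool tank
        blocksize 4k
        content images
        sparse 1
```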
I'm not sure about that.
But I read the docs[0] again, and there are some additions needed for PVE 9.
I will check it out.
SOLVED after changing this line in the interfaces file:
Before:
vmbr0
...
...
post-up /usr/bin/systemctl restart frr.service
After:
vmbr0
...
...
post-up...
Hi... Do I need to do something with fabrics when upgrading from Proxmox 8 to Proxmox 9?
Everything was fine, but after upgrading to PVE 9, the post-up systemctl restart frr.service line inside /etc/network/interfaces no longer works.
It's happening again...
New fresh installation.
After activating it, frr hangs on boot.
I am using these options in the frr daemons file:
bgpd=yes
ospfd=yes
ospf6d=yes
ripd=no
ripngd=no
isisd=no
pimd=no
pim6d=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
sharpd=no
pbrd=no
bfdd=yes
fabricd=yes
vrrpd=no
pathd=no...
Solved after upgrading from PVE 8 to PVE 9.
But now I got another issue:
https://forum.proxmox.com/threads/frr-service-doesnt-restart-when-call-ifreload-a-ou-doesnt-start-in-boot-time.180797/
Hi there.
I updated a 3-node cluster with Ceph from PVE 8 to PVE 9.
The PVE 8 nodes had an frr configuration providing a full-mesh network between the 3 nodes.
After the upgrade and restart, I noticed that networking hangs.
So I rebooted and entered rescue mode.
After commenting out the line:
#post-up...
Hi there...
We are using the remote sync feature a lot.
But I need to pull only the last backup from a certain group.
How can I achieve that?
Is there some regex that I can apply?
Thanks
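If you are on a recent PBS, the sync job options transfer-last and group-filter may do what you want. A sketch (the job ID s-pull and the group names are placeholders):

```shell
# Only transfer the most recent snapshot of each group
proxmox-backup-manager sync-job update s-pull --transfer-last 1

# Restrict the job to a single group...
proxmox-backup-manager sync-job update s-pull --group-filter 'group:vm/100'

# ...or match several groups with a regex filter
proxmox-backup-manager sync-job update s-pull --group-filter 'regex:^vm/1..$'
```

Combining transfer-last 1 with a group filter should give you just the latest backup of that group.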
Hi there.
Try using redorescue to create a backup of this VM directly over SSH to Proxmox:
- Download redorescue: https://sourceforge.net/projects/redobackup/files/latest/download
- Boot the VM in Hyper-V with this ISO and create a backup using an SSH session to your Proxmox VE host
- In the Proxmox side...
Hi...
I updated a Proxmox VE server to the latest version 8.
But suddenly ifreload -a doesn't work, leading to a broken frr network.
Here is the error:
ifreload -a
2026 Feb 3 09:50:15 proxmox01 Received signal 11 at 1770123015 (si_addr 0x7ffc89226628, PC 0x715cc9706dec); aborting...
2026 Feb 3 09:50:15...
Hi there.
I'm still having issues with this.
When the VMs using VXLAN are on the same physical host, the bitrate hits 10G!
But when a VM is migrated to another host, the bitrate drops to less than 1G!
Any advice?
Thanks
I will tell you what I do before trying to create a cluster.
If I want to create a cluster using a different IP than the one on the vmbr0 bridge, I put that IP in /etc/hosts.
Then I use
pvecm create cluster-name --link0 IP_I_WANT_TO_USE_TO_CLUSTER_COMMUNICATION_BUT_NOT_IN_VMBR0
I am not...
I think you should always use a static IP.
Besides that, check /etc/hosts to make sure the IP matches the server name, like this:
192.168.100.10 pve01.network.local pve01
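A quick way to verify the entry resolves as intended (hostname pve01 is from the example above):

```shell
# Should print the address from /etc/hosts, e.g. 192.168.100.10
getent hosts pve01

# Should print the node's management IP as Proxmox resolves it
hostname --ip-address
```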