Thanks.
To sum that up for posterity:
- the parser and checker are NOT part of the corosync utils
- `PVE::Corosync` lives at https://github.com/proxmox/pve-cluster/blob/master/src/PVE/Corosync.pm#L222
- `PVE::Cluster` lives at...
There's also a parser in Python by Red Hat:
- https://insights-core.readthedocs.io/en/latest/shared_parsers_catalog/corosync.html
There's another in Go by LINBIT:
- https://pkg.go.dev/github.com/LINBIT/gocorosync
And another in Rust by a SUSE...
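For what it's worth, the file format itself is simple enough that a minimal parser fits in a few lines. This is just an illustrative sketch (no validation, unlike the projects above); it handles the nested "name { ... }" sections and "key: value" leaves of corosync.conf:

```python
# Illustrative sketch only, not one of the parsers listed above.
# corosync.conf nests sections with "name {" ... "}" and uses "key: value" leaves.
def parse_corosync_conf(text):
    root = {}
    stack = [root]
    for raw in text.splitlines():
        line = raw.split('#', 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        if line.endswith('{'):
            # open a nested section; repeated names (e.g. "node") become a list
            section = {}
            stack[-1].setdefault(line[:-1].strip(), []).append(section)
            stack.append(section)
        elif line == '}':
            stack.pop()  # close the current section
        elif ':' in line:
            key, value = line.split(':', 1)  # split on the first colon only
            stack[-1][key.strip()] = value.strip()
    return root

with open('/etc/corosync/corosync.conf') as f:
    conf = parse_corosync_conf(f.read())
print(conf['totem'][0]['cluster_name'])
```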
Thanks to our amazing community, we’re rolling out our newest software update for Proxmox Backup Server 3.4. Your feedback has been instrumental in shaping these improvements.
This version is based on Debian 12.10 ("Bookworm") but uses the Linux...
We are excited to announce that our latest software version 8.4 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.10 "Bookworm" but uses Linux kernel 6.8.12 by default, with kernel 6.14 available as opt-in, QEMU...
Hello!
There are a few ways to configure VLANs in PVE. Using the SDN is a nice option if you're up for it, and making individual Linux VLAN interfaces (vmbr0.100, for example) is another. In the end, making the bridge VLAN aware and adding the...
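For reference, the VLAN-aware bridge variant looks roughly like this in /etc/network/interfaces (interface names, addresses, and the VID range here are placeholders for your setup):

```
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

With that in place you just set the VLAN tag on each VM's network device, or create a vmbr0.100-style interface on the host if the host itself needs an address in that VLAN.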
Currently not, but support for this is already on the mailing list [1] and will be added soon.
https://lists.proxmox.com/pipermail/pve-devel/2023-December/061136.html
Well, anyway, I tested Ceph today and moved the workloads onto it. Benchmark results follow (fio):
randwrite 4k: 19.5k IOPS
seqwrite 4M: 1070 MiB/s
randread 4k: 41.9k IOPS
seqread 4M: 1900 MiB/s
Since the numbers are OK and VMs seem to work fine for...
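For anyone wanting to reproduce numbers like these, fio runs along these lines would do it (the test path, size, and iodepth are assumptions, not the exact commands used above):

```bash
# 4k random write
fio --name=randwrite --filename=/mnt/ceph-test/fio.bin --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based

# 4M sequential read
fio --name=seqread --filename=/mnt/ceph-test/fio.bin --size=4G \
    --rw=read --bs=4M --iodepth=16 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```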
Ah, I often see that with older Dell and HP hardware. They released it back before PVE was cool. :-)
Then, you need to rely on the drivers in the kernel.
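If it helps, you can check which in-kernel driver and firmware a NIC is currently using (replace eno1 with your interface name):

```bash
# which kernel module and firmware the NIC is using
ethtool -i eno1

# PCI view of the adapter and the driver bound to it
lspci -nnk | grep -iA3 ethernet
```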
Ah! Thanks for that overview @markf1301
Your architecture makes sense. Yes, having dedicated hardware for PBS is the ideal situation, but there are many use cases where that is not possible for technical reasons or resource constraints (e.g...
Apologies for the confusion. Here is my current config:
Production site: PVE1 as the main server, plus PVE2/PBS (dual roles on one box; not recommended, but it works for failover and has a good-sized datastore).
DR/remote site: a single PVE3/PBS.
The...
Would there be a need to research a driver update for Proxmox for the 533FLR-T NIC? The firmware is up to date. The other option is to save myself possibly hours of troubleshooting and just order the same NIC the other servers have and be done with it. 99€...
Thank you very much for your insight.
We'll be adding a dedicated network for corosync, as you described, when deploying to production. We had it originally configured, but reinstalled the nodes without it while we were analyzing the broadcast issue...
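For anyone finding this later, the dedicated link ends up as an extra ringX_addr per node in /etc/corosync/corosync.conf, roughly like this (the addresses are placeholders; remember to bump config_version in the totem section when editing):

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    # dedicated corosync network
    ring0_addr: 10.10.10.61
    # existing network as a fallback link
    ring1_addr: 10.xxx.xxx.61
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.63
    ring1_addr: 10.xxx.xxx.63
  }
}
```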
I may not have understood your topology correctly.
My understanding was you had a PBS local to your PVE (site #1). Then, you had a remote site (site #2) with a PBS and a Synology together.
From this last post, it sounds like you have a PBS...
Thanks for sharing this information.
Please run the following variations on the iperf3 command:
# Node1 (server):
iperf3 -B 10.xxx.xxx.61 -s
# Node2 (client, with "-P 8" for 8 parallel streams):
iperf3 -B 10.xxx.xxx.63 -c 10.xxx.xxx.61 -P 8
iperf3 -B 10.xxx.xxx.63 -c...
Please post the iperf (iperf3) commands you are running on the server and clients.
Please share your /etc/network/interfaces file.
What model of NICs are you using on each host?
What model are the switches you are connecting your PVE hosts to?
Mounting the NFS share from your Synology and using a Sync Job takes a couple of extra steps to set up, but once running it is automatic and will not require you to SCP the files over manually. It would be slow, but the speed might...
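Roughly, the extra steps look like this (the hostname, export path, and datastore name below are placeholders, not your actual values):

```bash
# On the PBS host: mount the Synology NFS export persistently
mkdir -p /mnt/synology
echo 'synology.example.com:/volume1/pbs /mnt/synology nfs defaults 0 0' >> /etc/fstab
mount /mnt/synology

# Create a datastore on the mounted share
proxmox-backup-manager datastore create synology-store /mnt/synology/datastore
```

After that, a Sync Job pointed at that datastore keeps it updated on a schedule, configurable from the GUI.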
Thanks - I'll look into the removable datastores as an option. I was hoping for a way to just ssh (copy) the data from its current location to another 'dummy' device (Synology box) and use that as my existing cheap long-term storage. I'd of...
@daubner Thank you for sharing this information.
Your information makes it reasonable to conclude that a prolonged broadcast storm on the host network caused the reboot: a storm can starve corosync of traffic long enough for the node to lose quorum, at which point the HA watchdog fences (reboots) it. This is based on the assumption that the cluster was set up much like you...