I did some more testing with and without the libpve-network-perl patch so I could control the route-map more easily.
With the old frr and frr-pythontools packages I was not able to get a connection with the route-map enabled.
After disabling the route-map as stated in my first post, the connection suddenly...
The name itself in FRR is not used for anything outside the FRR config; only the isis net 10.0000.0000.0005.00 matters to the other devices.
It is nice, however, to have it configurable so the name matches the rest of your devices for administration.
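For illustration, this is roughly what that part of frr.conf looks like (the process name "core" and the interface are just examples here; only the net line has to match the rest of the network):

    ! The isis process name is only locally significant
    router isis core
     net 10.0000.0000.0005.00
     is-type level-2-only
    !
    interface ens19
     ip router isis core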
If we were to create another Proxmox cluster in a different...
I wasn't very clear on this one.
The idea for the dynamic name is more administrative.
FRR can only have one IS-IS process/domain, but the network as a whole isn't limited to one.
Making the name dynamic will help in case your network has multiple IS-IS domains.
I am testing on the latest stable release of Proxmox.
We...
Both fixes work so far!
I tried setting the interface block in frr.conf.local and it appeared in the frr.conf as expected.
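For reference, this is the kind of block I mean (interface and process name are just examples):

    ! Added via /etc/frr/frr.conf.local, ends up in the generated frr.conf
    interface ens19
     ip router isis core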
Your other change is working as well; I managed to get the setup working without needing frr.conf.local, which is nice!
My controller.cfg looks like this:
evpn: evpn...
Hi Spirit,
I opened 2 feature requests, one for the "interface" block and another for the GUI addition.
https://bugzilla.proxmox.com/show_bug.cgi?id=4901
https://bugzilla.proxmox.com/show_bug.cgi?id=4902
I saw more FRR users with issues regarding the route-map on GitHub; that part might be...
I've been trying to integrate the Proxmox SDN into an existing VXLAN network using IS-IS.
This way we'll be able to use the different vnets across multiple clusters, as well as bind them to a VLAN to attach legacy devices.
Our lab setup uses a route reflector on a spine switch and 2 leaf...
It has some documentation under "man qm".
The API token part is a bit unclear, but here's an example:
    qm remote-migrate 112 116 apitoken='Authorization: PVEAPIToken=root@pam!token1=your-api-key-goes-here',host=target.host.tld,port=443 --target-bridge vmbr0 --target-storage local-zfs --online...
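If the token itself is the missing piece: it can be created on the target side with pveum, something like this (token id "token1" as in the example above; --privsep 0 so it inherits the user's privileges):

    pveum user token add root@pam token1 --privsep 0

The secret it prints is what goes after "token1=" in the apitoken string.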
The Windows installer is very picky about the location of drivers.
You can't just set e:\ as the driver location; you have to give the full path, like e:\virtiostor\server2012\amd64\ (or something like that).
In short: would it be possible to hide nodes from the overview in the left panel if a user has nothing to do with them?
Some backstory:
I'm using separate clusters for my users at the moment.
The pro is that we have good separation between them; the con is loads of overhead and a lot of unused spares...
Hi Cedvan,
My response was still in draft; I must have forgotten to press post.
I added the driver in the Rancher web GUI under Global > Tools > Drivers > Node Drivers, via the Add Node Driver button.
Here you can paste the link to the binary.
Just got another error and catted every file in /var/log/pve/replicate/*.
Not a conclusive error but still wanted to include it.
The affected replication is 107-0.
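For completeness, this is all I ran to collect them:

    cat /var/log/pve/replicate/*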
    2021-04-21 17:00:14 101-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_101-0_1619016309__' on...
I wanted to update this thread again.
The cluster just got updated to the newest pve enterprise repo version (6.3-6 as of writing) including ZFS 2.0.
Even though this did not fix the issue it did give me some more information.
Instead of the general "no tunnel IP received" error, it now spits...
In my case the repair didn't help; the metadata didn't seem to be corrupted at all, either.
Many hours later I found this:
https://blog.monotok.org/lvm-transaction-id-mismatch-and-metadata-resize-error/
Changing the transaction_id for pve/data fixed the issue for me.
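Roughly the procedure from that post, in case the link ever goes dead (VG name "pve" as on a default install; the correct transaction_id is the one the mismatch error reports):

    # dump the current LVM metadata of the pve VG to a file
    vgcfgbackup -f /tmp/pve-metadata.txt pve
    # edit transaction_id of the thin pool to the expected value
    nano /tmp/pve-metadata.txt
    # write it back; --force is required for VGs with thin pools
    vgcfgrestore --force -f /tmp/pve-metadata.txt pve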
WARNING: This is a pretty...
Unfortunately I haven't been able to resolve the issue so far.
As these are mostly PBX VMs, they don't generate a lot of load (especially not on the storage), and I don't run backups either.
The systems are connected using a dedicated 10G backend link for replication so I don't suspect the...
Just wanted to note I'm having the same issue on pve 6.3-1:
proxmox-ve: 6.3-1 (running kernel: 5.4.60-1-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1...
Recently I upgraded a 3-node cluster running pve-5.3 to pve-6.3-3.
These nodes all have ZFS replication running to the two other nodes.
Since the upgrade I've been getting random errors about the replication failing.
I'm receiving an e-mail with "Replication Job: 127-2 failed - no tunnel IP...