SDN problems with Netbox as IPAM

Hi @WarmEthernet, @spirit,

I'm currently running Proxmox 8.2.4 with NetBox 4.1.1 in a Docker container. I noticed that the NetboxPlugin.pm file has been patched, but the issue seems to persist. It appears that NetBox is still returning the full CIDR (e.g., 192.168.1.1/24), and I'm not entirely sure what IP format the plugin expects.

Has anyone found a workaround for this bug, or could you provide more details on what the Proxmox plugin is looking for? Any insights would be appreciated!

Thanks!

I did a little debugging and I think the bug is caused by this return statement having the wrong scope.
I'm by no means a Perl expert, but from what I understand it doesn't actually return the IP value from the add_range_next_freeip subroutine.

This should fix it:
Git:
diff --git a/src/PVE/Network/SDN/Ipams/NetboxPlugin.pm b/src/PVE/Network/SDN/Ipams/NetboxPlugin.pm
index d923269..124c355 100644
--- a/src/PVE/Network/SDN/Ipams/NetboxPlugin.pm
+++ b/src/PVE/Network/SDN/Ipams/NetboxPlugin.pm
@@ -151,7 +151,7 @@ sub add_next_freeip {
 
     my $params = { dns_name => $hostname, description => $description };
 
-    eval {
+    my $ip = eval {
     my $result = PVE::Network::SDN::api_request("POST", "$url/ipam/prefixes/$internalid/available-ips/", $headers, $params);
     my ($ip, undef) = split(/\//, $result->{address});
     return $ip;
@@ -160,6 +160,8 @@ sub add_next_freeip {
     if ($@) {
     die "can't find free ip in subnet $cidr: $@" if !$noerr;
     }
+
+    return $ip;
 }
 
 sub add_range_next_freeip {
@@ -174,7 +176,7 @@ sub add_range_next_freeip {
 
     my $params = { dns_name => $data->{hostname}, description => $description };
 
-    eval {
+    my $ip = eval {
     my $result = PVE::Network::SDN::api_request("POST", "$url/ipam/ip-ranges/$internalid/available-ips/", $headers, $params);
     my ($ip, undef) = split(/\//, $result->{address});
     print "found ip free $ip in range $range->{'start-address'}-$range->{'end-address'}\n" if $ip;
@@ -184,6 +186,8 @@ sub add_range_next_freeip {
     if ($@) {
     die "can't find free ip in range $range->{'start-address'}-$range->{'end-address'}: $@" if !$noerr;
     }
+
+    return $ip;
 }
 
 sub del_ip {
 
THANK YOU!! This definitely helped, and NetBox IPAM works now. I had to restart the GUI before I was able to use the NetBox patch you made. I hope this gets into the next update/patch release.
 
Thanks @jon4hz this is the actual fix!
 
Any way we can get this thread's title changed to have [SOLVED] prepended, and possibly the original post updated with an edit stating that the fix is contained in reply #? Just a thought.

Also, are the PVE packages going to include this in an update soon, or is there a dev cycle that has to complete before we get the fix into the package? Only asking out of curiosity, definitely not trying to make your lives difficult.

Thanks for the hard work and diligent bug squashing!

Ronin
Resident Forum Lurker and Mental Patient
 
Also, are the PVE packages going to include this in an update soon, or is there a dev cycle that has to complete before we get the fix into the package?

Yeah, me too! I also submitted my patch to the Bugzilla issue, but unfortunately I haven't received any feedback yet. To be fair, my patch doesn't fix the whole issue, since there is still the problem that Proxmox doesn't add the IP range automatically. I'll see if I can find a free minute in the near future and patch that as well. From what I saw, it should be fairly simple: extend the add_subnet subroutine to also create an IP range.

Also, this is my first time contributing to Proxmox. So if submitting the patch to bugzilla isn't the way to go, please let me know! I saw that there is a mailing list, but to be honest, I have to research what the proper etiquette is to submit patches there first.
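In case it helps anyone who wants to try in the meantime, here's a rough, untested sketch of what that extension might look like, based purely on how the existing plugin calls the NetBox API. The helper name, the variable names ($url, $headers, $range), and the surrounding wiring are all assumptions on my part; only the /ipam/ip-ranges/ endpoint and its start_address/end_address/status fields come from the NetBox REST API.

```perl
# Untested sketch -- names are guessed from the existing NetboxPlugin.pm
# code paths; treat this as an outline, not a drop-in patch.
sub add_range {
    my ($url, $headers, $range) = @_;

    my $params = {
        start_address => $range->{'start-address'},
        end_address   => $range->{'end-address'},
        status        => 'active',
    };

    eval {
        PVE::Network::SDN::api_request(
            "POST", "$url/ipam/ip-ranges/", $headers, $params);
    };
    die "could not create ip range in netbox: $@" if $@;
}
```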
 

Sorry for the silence, I wasn't really available the last few days / weeks. I'll look into reviewing the patch. Submitting to the mailing list would be the preferred way of posting patches, we have a guideline for submitting patches in our wiki [1]. It might look intimidating at first, but it really isn't ;)

[1] https://pve.proxmox.com/wiki/Developer_Documentation#Sending_Patches
 
Hey @jon4hz, thanks for helping with the issue. You are correct, it does not automatically create the IP range, but I also noticed something else while testing your changes. When I deploy a new VM, it grabs two different IPs from NetBox: one upon creation of the VM, then another once I start it. The second one is the one the VM ends up using, and it disappears from NetBox once I remove the VM. But the initial IP grabbed when the VM is created needs to be manually removed from NetBox to free up the address; otherwise it falsely takes up an IP that should be free to use from the pool.

I'm not sure what versions of PVE or Netbox you are using, but I just updated mine -
PVE: 8.2.7
Netbox: 4.1.1
 

We still have to manually add IP ranges to NetBox, right?

Thanks
 
I'd love some guidance on how you managed to assign an IP address from NetBox to your deployed VM; I can't manage to make it work.

My config :
  • Netbox 3.1.5
  • PVE 8.3.3, frr functional with EVPN zone, dnsmasq installed on all nodes (is this necessary for EVPN zones?)
The NetBox plugin is correctly configured: I can see the prefix and gateway IP addresses created when I create the subnets of my two zones (simulating admin and data networks).

As seen in this thread, I created the corresponding IP ranges in NetBox from what I configured in the subnets, and set these IP ranges to "active" in NetBox.

I then clone a new VM from a Debian cloud image template with 2 NICs connected to these vNets and cloud-init enabled, but they don't get an IP.

What am I doing wrong? I can provide any log necessary. Here is the output of journalctl --since '1 minutes ago', launched after cloning and starting the VM:

Code:
Jan 30 11:41:47 pvirtocbhpewd03 ceph-mgr[2461]: 2025-01-30T11:41:47.649+0100 7e30c2e006c0 -1 --2- 192.168.250.3:0/2208199443 >> [v2:192.168.250.2:6811/2082057067,v1:192.168.250.2:6814/2082057067] conn(0x5a9bcaa49800 0x5a9bc9456580 unkno>
Jan 30 11:41:50 pvirtocbhpewd03 pvedaemon[1031197]: <root@pam> starting task UPID:pvirtocbhpewd03:000FF5AC:00F87861:679B576E:qmclone:1000001:root@pam:
Jan 30 11:41:50 pvirtocbhpewd03 pvedaemon[1045932]: clone base-1000001-disk-0: base-1000001-disk-0 snapname __base__ to vm-1000-disk-0
Jan 30 11:41:50 pvirtocbhpewd03 pvedaemon[1031197]: <root@pam> end task UPID:pvirtocbhpewd03:000FF5AC:00F87861:679B576E:qmclone:1000001:root@pam: OK
Jan 30 11:41:55 pvirtocbhpewd03 pvedaemon[1031197]: <root@pam> starting task UPID:pvirtocbhpewd03:000FF60E:00F87AAB:679B5773:qmstart:1000:root@pam:
Jan 30 11:41:55 pvirtocbhpewd03 pvedaemon[1046030]: start VM 1000: UPID:pvirtocbhpewd03:000FF60E:00F87AAB:679B5773:qmstart:1000:root@pam:
Jan 30 11:41:56 pvirtocbhpewd03 systemd[1]: Started 1000.scope.
Jan 30 11:41:57 pvirtocbhpewd03 kernel: tap1000i0: entered promiscuous mode
Jan 30 11:41:57 pvirtocbhpewd03 kernel: BEINET1: port 2(fwpr1000p0) entered blocking state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: BEINET1: port 2(fwpr1000p0) entered disabled state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwpr1000p0: entered allmulticast mode
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwpr1000p0: entered promiscuous mode
Jan 30 11:41:57 pvirtocbhpewd03 kernel: BEINET1: port 2(fwpr1000p0) entered blocking state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: BEINET1: port 2(fwpr1000p0) entered forwarding state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 1(fwln1000i0) entered blocking state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 1(fwln1000i0) entered disabled state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwln1000i0: entered allmulticast mode
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwln1000i0: entered promiscuous mode
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 1(fwln1000i0) entered blocking state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 1(fwln1000i0) entered forwarding state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 2(tap1000i0) entered blocking state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 2(tap1000i0) entered disabled state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: tap1000i0: entered allmulticast mode
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 2(tap1000i0) entered blocking state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: fwbr1000i0: port 2(tap1000i0) entered forwarding state
Jan 30 11:41:57 pvirtocbhpewd03 kernel: tap1000i1: entered promiscuous mode
Jan 30 11:41:58 pvirtocbhpewd03 kernel: BLSNET1: port 2(fwpr1000p1) entered blocking state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: BLSNET1: port 2(fwpr1000p1) entered disabled state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwpr1000p1: entered allmulticast mode
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwpr1000p1: entered promiscuous mode
Jan 30 11:41:58 pvirtocbhpewd03 kernel: BLSNET1: port 2(fwpr1000p1) entered blocking state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: BLSNET1: port 2(fwpr1000p1) entered forwarding state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 1(fwln1000i1) entered blocking state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 1(fwln1000i1) entered disabled state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwln1000i1: entered allmulticast mode
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwln1000i1: entered promiscuous mode
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 1(fwln1000i1) entered blocking state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 1(fwln1000i1) entered forwarding state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 2(tap1000i1) entered blocking state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 2(tap1000i1) entered disabled state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: tap1000i1: entered allmulticast mode
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 2(tap1000i1) entered blocking state
Jan 30 11:41:58 pvirtocbhpewd03 kernel: fwbr1000i1: port 2(tap1000i1) entered forwarding state
Jan 30 11:41:58 pvirtocbhpewd03 pvedaemon[1046030]: VM 1000 started with PID 1046081.
Jan 30 11:41:58 pvirtocbhpewd03 pvedaemon[1031197]: <root@pam> end task UPID:pvirtocbhpewd03:000FF60E:00F87AAB:679B5773:qmstart:1000:root@pam: OK
Jan 30 11:41:58 pvirtocbhpewd03 zebra[1931]: [WPPMZ-G9797] if_zebra_speed_update: BEINET1 old speed: 4294967295 new speed: 10000
Jan 30 11:41:58 pvirtocbhpewd03 zebra[1931]: [WPPMZ-G9797] if_zebra_speed_update: BLSNET1 old speed: 4294967295 new speed: 10000
Jan 30 11:42:02 pvirtocbhpewd03 ceph-mgr[2461]: 2025-01-30T11:42:02.649+0100 7e30c2e006c0 -1 --2- 192.168.250.3:0/2208199443 >> [v2:192.168.250.2:6811/2082057067,v1:192.168.250.2:6814/2082057067] conn(0x5a9bcaa49800 0x5a9bc9456580
 
EVPN zones currently do not support the DHCP feature, so if you're using the built-in DHCP feature this won't work (at the moment).
 
@shanreich it seems @JakeFrosty managed to make it work. If not with the DHCP feature, how could we make this work? The information in this thread is not very clear to me.
He used a Simple Zone rather than an EVPN zone. There are currently patches on the mailing list for making it work with other types of zones, but they haven't been applied yet.
 
Thank you very much for the clarification. Waiting for the patches for EVPN zones :)
 
Greetings,

So the same issue persists?
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.4 (running version: 8.3.4/65224a0f9cd294a3)
NetBox Community v4.2.4

I'm trying to fix it with the advice from this thread, but still no results.