SDN problems with Netbox as IPAM

WarmEthernet

PVE version: 8.2.2
Netbox Version: 4.0.1

Hello,

As stated in the title, I'm having issues deploying VMs when they are assigned to an SDN zone that uses Netbox for IPAM. When testing, I'm able to create a zone, vnet, subnet, and DHCP pool just fine and deploy a VM while that zone is set to use the default PVE IPAM. But when I then switch it to use Netbox as IPAM, I'm unable to even start the VM due to errors. Basically, the VM starts, finds an IP, allocates that IP in Netbox, and then appears to fail to find the bridge with the virtual network interface assigned to the VM. The bridge exists, and I can see the VM interface for about half a second before it disappears from the host. Below are some screenshots and outputs from the Proxmox GUI, CLI, and Netbox. I followed the SDN guide from the PVE docs, and I do have frr and dnsmasq installed and running as well.

SDN Setup:
zone-config.png, vnet-config.png, subnet-config.png, dhcp-range.png

VM Setup:
vm-hw-config.png

VM Start Errors:
Code:
root@test-pve-node1:~# qm start 100
can't find any free ip in zone zonetest for IPv4 at /usr/share/perl5/PVE/Network/SDN/Vnets.pm line 143.
found ip free 100.64.10.3 in range 100.64.10.2-100.64.10.254
kvm: -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 6400
start failed: QEMU exited with code 1

Code:
root@test-pve-node1:~# systemctl status frr
● frr.service - FRRouting
     Loaded: loaded (/lib/systemd/system/frr.service; enabled; preset: enabled)
     Active: active (running) since Fri 2024-05-17 11:24:42 PDT; 2 days ago
       Docs: https://frrouting.readthedocs.io/en/latest/setup.html
   Main PID: 803 (watchfrr)
     Status: "FRR Operational"
      Tasks: 15 (limit: 9300)
     Memory: 29.6M
        CPU: 46.755s
     CGroup: /system.slice/frr.service
             ├─803 /usr/lib/frr/watchfrr -d -F traditional zebra bgpd staticd bfdd
             ├─821 /usr/lib/frr/zebra -d -F traditional -A 127.0.0.1 -s 90000000
             ├─849 /usr/lib/frr/bgpd -d -F traditional -A 127.0.0.1
             ├─867 /usr/lib/frr/staticd -d -F traditional -A 127.0.0.1
             └─874 /usr/lib/frr/bfdd -d -F traditional -A 127.0.0.1

May 20 09:05:33 test-pve-node1 bgpd[849]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF tap100i0 in VRF 0
May 20 09:06:59 test-pve-node1 bgpd[849]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF tap100i0 in VRF 0
May 20 10:32:47 test-pve-node1 bgpd[849]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF tap100i0 in VRF 0


Netbox:
netbox-prefix.png, netbox-ip-range.png, netbox-ip-addresses.png

Interfaces:
Code:
root@test-pve-node1:~# cat /etc/network/interfaces.d/sdn
#version:41

auto vnettest
iface vnettest
    address 100.64.10.1/24
    post-up iptables -t nat -A POSTROUTING -s '100.64.10.0/24' -o vmbr0 -j SNAT --to-source 172.18.7.201
    post-down iptables -t nat -D POSTROUTING -s '100.64.10.0/24' -o vmbr0 -j SNAT --to-source 172.18.7.201
    post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    mtu 1460
    ip-forward on


Code:
root@test-pve-node1:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 54:b2:03:0b:b0:8e brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 14:4f:8a:02:72:5e brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 54:b2:03:0b:b0:8e brd ff:ff:ff:ff:ff:ff
13: vnettest: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 32:31:42:d4:44:ad brd ff:ff:ff:ff:ff:ff


Extra things we have tried:
- Enabling VLAN Aware
- Manually turning vnettest bridge to UP
- Removing the DNS Prefix
- Rebuilding the whole stack
- Manually adding the Prefix, IP range, and the IP addresses to a specific VRF in Netbox
- We made sure the API calls to Netbox were correct and that the token permissions were right (a sample check is shown right after this list)
- Checked firewall rules to the Netbox VM
- Added node firewall rules to accept DNS and DHCPfwd
- Updated the PVE cluster nodes
- Restarted dnsmasq for the zones
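
For reference, here is roughly how we sanity-checked the token and the DHCP range against the Netbox API. The URL and token are just placeholders matching our setup, so adjust them for your environment:

Code:
# Placeholder URL/token - adjust to your environment
NB_URL="https://10.255.103.200/api"
NB_TOKEN="my-super-secret-token-abcdefg"

# Token/permission check: should return the prefix PVE created (100.64.10.0/24)
curl -sk -H "Authorization: Token $NB_TOKEN" "$NB_URL/ipam/prefixes/?prefix=100.64.10.0/24"

# Check whether an IP range matching the PVE dhcp-range exists in Netbox
curl -sk -H "Authorization: Token $NB_TOKEN" "$NB_URL/ipam/ip-ranges/?start_address=100.64.10.2&end_address=100.64.10.254"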

We are pretty new to using Netbox, and this is our first attempt at using SDN within Proxmox, so I'm hoping we are just missing a simple option somewhere. The prefix and IP addresses are dynamically created in Netbox, but we do need to manually add the DHCP range of addresses to get the VM to successfully pull an address. Sorry for the wall of text and pictures, but I wanted to be thorough.

Thanks!
 
Can you post the resulting journal after starting a VM? Please adjust the since value accordingly; you can also use something like '30 minutes ago'.

Code:
journalctl --since '1970-01-01' > journal.txt
 
Can you post the resulting journal after starting a VM? Please adjust the since value accordingly; you can also use something like '30 minutes ago'.

Code:
journalctl --since '1970-01-01' > journal.txt
Here is the output of that. It's weird that it can't seem to find the interface. When I try to start it while watching watch ip link show, the tap100i0 interface shows up for a brief second and then just disappears again.

Code:
May 21 10:33:18 test-pve-node1 sshd[1207703]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
May 21 10:33:18 test-pve-node1 systemd-logind[726]: New session 111 of user root.
May 21 10:33:18 test-pve-node1 systemd[1]: Created slice user-0.slice - User Slice of UID 0.
May 21 10:33:18 test-pve-node1 systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
May 21 10:33:18 test-pve-node1 systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
May 21 10:33:18 test-pve-node1 systemd[1]: Starting user@0.service - User Manager for UID 0...
May 21 10:33:18 test-pve-node1 (systemd)[1207706]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
May 21 10:33:18 test-pve-node1 systemd[1207706]: Queued start job for default target default.target.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Created slice app.slice - User Application Slice.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Reached target paths.target - Paths.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Reached target timers.target - Timers.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
May 21 10:33:18 test-pve-node1 systemd[1207706]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
May 21 10:33:18 test-pve-node1 systemd[1207706]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
May 21 10:33:18 test-pve-node1 systemd[1207706]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Reached target sockets.target - Sockets.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Reached target basic.target - Basic System.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Reached target default.target - Main User Target.
May 21 10:33:18 test-pve-node1 systemd[1207706]: Startup finished in 73ms.
May 21 10:33:18 test-pve-node1 systemd[1]: Started user@0.service - User Manager for UID 0.
May 21 10:33:18 test-pve-node1 systemd[1]: Started session-111.scope - Session 111 of User root.
May 21 10:33:18 test-pve-node1 sshd[1207703]: pam_env(sshd:session): deprecated reading of user environment enabled
May 21 10:33:20 test-pve-node1 corosync[1143]:   [TOTEM ] Token has not been received in 2737 ms
May 21 10:33:21 test-pve-node1 corosync[1143]:   [TOTEM ] A processor failed, forming new configuration: token timed out (3650ms), waiting 4380ms for consensus.
May 21 10:33:24 test-pve-node1 corosync[1143]:   [TOTEM ] Token has not been received in 6388 ms
May 21 10:33:24 test-pve-node1 corosync[1143]:   [QUORUM] Sync members[3]: 1 2 3
May 21 10:33:24 test-pve-node1 corosync[1143]:   [TOTEM ] A new membership (1.17c9) was formed. Members
May 21 10:33:24 test-pve-node1 corosync[1143]:   [QUORUM] Members[3]: 1 2 3
May 21 10:33:24 test-pve-node1 corosync[1143]:   [MAIN  ] Completed service synchronization, ready to provide service.
May 21 10:33:26 test-pve-node1 qm[1207764]: <root@pam> starting task UPID:test-pve-node1:00126DD5:020AA87C:664CDAE6:qmstart:100:root@pam:
May 21 10:33:26 test-pve-node1 qm[1207765]: start VM 100: UPID:test-pve-node1:00126DD5:020AA87C:664CDAE6:qmstart:100:root@pam:
May 21 10:33:26 test-pve-node1 systemd[1]: Started 100.scope.
May 21 10:33:27 test-pve-node1 bgpd[849]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF tap100i0 in VRF 0
May 21 10:33:27 test-pve-node1 systemd[1]: 100.scope: Deactivated successfully.
May 21 10:33:27 test-pve-node1 systemd[1]: 100.scope: Consumed 1.005s CPU time.
May 21 10:33:27 test-pve-node1 qm[1207765]: start failed: QEMU exited with code 1
May 21 10:33:27 test-pve-node1 qm[1207764]: <root@pam> end task UPID:test-pve-node1:00126DD5:020AA87C:664CDAE6:qmstart:100:root@pam: start failed: QEMU exited with code 1
 
When I try to start it while watching watch ip link show, the tap100i0 interface shows up for a brief second and then just disappears again.

Most likely the interface gets created, then something fails in the SDN part, which leads to the VM start failing and the interface getting cleaned up again. I cannot really tell what's going wrong - this part is very weird:

Code:
can't find any free ip in zone zonetest for IPv4 at /usr/share/perl5/PVE/Network/SDN/Vnets.pm line 143.
found ip free 100.64.10.3 in range 100.64.10.2-100.64.10.254

Can you post the VM configuration, as well as the SDN configuration?

Code:
cat /etc/pve/sdn/*
qm config <vmid>
 
Most likely the interface gets created, then something fails in the SDN part, which leads to the VM start failing and the interface getting cleaned up again. I cannot really tell what's going wrong - this part is very weird:

Code:
can't find any free ip in zone zonetest for IPv4 at /usr/share/perl5/PVE/Network/SDN/Vnets.pm line 143.
found ip free 100.64.10.3 in range 100.64.10.2-100.64.10.254

Can you post the VM configuration, as well as the SDN configuration?

Code:
cat /etc/pve/sdn/*
qm config <vmid>
Here are those configurations.

cat /etc/pve/sdn/*
Code:
root@test-pve-node1:~# cat /etc/pve/sdn/*
pve: pve

netbox: nbtest
    token my-super-secret-token-abcdefg
    url https://10.255.103.200/api

subnet: zonetest-100.64.10.0-24
    vnet vnettest
    dhcp-range start-address=100.64.10.2,end-address=100.64.10.254
    dnszoneprefix prefixtest
    gateway 100.64.10.1
    snat 1

vnet: vnettest
    zone zonetest

simple: zonetest
    dhcp dnsmasq
    ipam nbtest
    mtu 1460

qm config 100:
Code:
root@test-pve-node1:~# qm config 100
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: local:iso/ubuntu-22.04.4-live-server-amd64.iso,media=cdrom,size=2055086K
memory: 2048
meta: creation-qemu=8.1.5,ctime=1716219690
name: test-vm
net0: virtio=BC:24:11:B3:5B:9C,bridge=vnettest,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-100-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=6f37f3d4-bac8-4b58-adfb-3f12d8ccfd3d
sockets: 1
vmgenid: 18daf660-7460-44e5-9940-6a033443652c
 
@shanreich

Maybe related to this recent commit?

View attachment 68640

It seems like it's trying to get an ID for the IP range via GET /api/ipam/ip-ranges/?start_address=172.16.66.100&end_address=172.16.66.199, and then it wants to get an available IP via /api/ipam/ip-ranges/{id}/available-ips/ (which fails because the ID is unset, since no IP range exists).
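
For anyone following along, the two calls can be reproduced by hand roughly like this (placeholder URL/token; the range values are the ones from the request above):

Code:
# 1) Look up the IP range to get its id (placeholder URL/token)
curl -sk -H "Authorization: Token <your-token>" \
    "https://<netbox>/api/ipam/ip-ranges/?start_address=172.16.66.100&end_address=172.16.66.199"
# -> if "count" is 0, no matching range exists and the id stays unset

# 2) Ask that range for an available IP - this is the call that fails without an id
curl -sk -H "Authorization: Token <your-token>" \
    "https://<netbox>/api/ipam/ip-ranges/<id>/available-ips/"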

It only seems to use the endpoint when DHCP is activated

The IP range is never created anywhere, so the first request already fails. Do you have any idea where it should get created that I'm missing? You have a better overview of the Netbox plugin, afaict. I only found two mentions of this endpoint in the SDN code, neither of which creates the IP range.

This also happens when downgrading to 0.9.6, which does not have the changes mentioned.

see https://git.proxmox.com/?p=pve-netw...e4cb638c788f94f3b01c8c13c6fb2d93;hb=HEAD#l165
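
If you want to poke around on an installed node rather than in the git web view, the relevant spots can be found with a quick grep (assuming the default package path):

Code:
# Find where the SDN code references the Netbox ip-ranges endpoints (default install path)
grep -rEn "available-ips|ip-ranges" /usr/share/perl5/PVE/Network/SDN/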


@WarmEthernet do you have IP ranges in your Netbox IPAM? I can reproduce something similar, but not quite.
EDIT: Re-read your post and saw that you manually created the IP range, never mind. I'll check it out further.
 
Do you have any idea where it should get created that I'm missing? You have a better overview of the Netbox plugin, afaict. I only found two mentions of this endpoint in the SDN code, neither of which creates the IP range.
Ok, got it!
As far as I remember, we currently don't create the range in the Netbox IPAM (or other external IPAMs).
We only create the subnet if it doesn't already exist in the IPAM.

I think that's because we don't have a specific API call when adding/deleting a range (it's just an option value of the subnet), so we would need to detect the add/delete by comparing old/new values when we update the subnet.


The current workaround is to create the range manually in Netbox.
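
If you'd rather do that via the API than the web UI, an untested sketch could look like this (placeholder URL/token; Netbox expects the start/end addresses with a prefix length, and the values here match the subnet from the first post):

Code:
# Untested sketch: create the missing IP range via the Netbox API (placeholder URL/token)
curl -sk -X POST \
    -H "Authorization: Token <your-token>" \
    -H "Content-Type: application/json" \
    -d '{"start_address": "100.64.10.2/24", "end_address": "100.64.10.254/24", "status": "active"}' \
    "https://<netbox>/api/ipam/ip-ranges/"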
 
Ok, got it!
As far as I remember, we currently don't create the range in the Netbox IPAM (or other external IPAMs).
We only create the subnet if it doesn't already exist in the IPAM.

I think that's because we don't have a specific API call when adding/deleting a range (it's just an option value of the subnet), so we would need to detect the add/delete by comparing old/new values when we update the subnet.


The current workaround is to create the range manually in Netbox.
That's what I have found as well. The prefix and individual IP addresses will automatically populate, but I do have to manually add the DHCP IP range to Netbox first to get it to pull an address for the VM.
 
That's what I have found as well. The prefix and individual IP addresses will automatically populate, but I do have to manually add the DHCP IP range to Netbox first to get it to pull an address for the VM.
Can you open a bug on bugzilla.proxmox.com? I'll try to add support for IP range creation.
 
I was able to replicate the failed start-up issue with SDN + Netbox using LXCs on a virtualized 3-node PVE cluster.

Even when I manually specified the prefix and ranges in Netbox ahead of time, the LXC refuses to start.

This line from the first code block below proves that PVE is definitely getting a free IP in the specified range in Netbox, but won't allow the LXC to continue starting.
DEBUG utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100013 lxc pre-start produced output: found ip free 100.64.3.12 in range 100.64.3.10-100.64.3.20

Note:
When configured to use the SDN zone that uses PVE's IPAM (pveipam), the same LXC starts successfully. See the last code block in this post.


Code:
pct start 100013 -debug
###
run_buffer: 571 Script exited with status 25
lxc_init: 845 Failed to run lxc.hook.pre-start for container "100013"
__lxc_start: 2034 Failed to initialize container "100013"
100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "100013", config section "lxc"
DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100013 lxc pre-start produced output: can't find any free ip in zone netboxd for IPv4 at /usr/share/perl5/PVE/Network/SDN/Vnets.pm line 143.

DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100013 lxc pre-start produced output: found ip free 100.64.3.12 in range 100.64.3.10-100.64.3.20

ERROR    utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 25
ERROR    start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "100013"
ERROR    start - ../src/lxc/start.c:__lxc_start:2034 - Failed to initialize container "100013"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "100013", config section "lxc"

TASK ERROR: startup for container '100013' failed

MANUALLY CREATED Netbox Prefix:
screenshot_20240525_192641.png

MANUALLY CREATED Netbox IP Range:
screenshot_20240525_192714.png

Netbox IP Addresses successfully created by PVE:
screenshot_20240525_192754.png

Code:
pveversion
pve-manager/8.2.2/9355359cd7afbae4 (running kernel: 6.8.4-3-pve)

Code:
cat /etc/pve/lxc/100013.conf
arch: amd64
cores: 2
features: nesting=1
hostname: test13
memory: 2048
net0: name=eth0,bridge=netboxd,hwaddr=BC:24:11:18:86:93,ip=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-100013-disk-0,size=8G
swap: 2048
unprivileged: 1

Code:
cat /etc/pve/sdn/subnets.cfg
subnet: pveipam-100.64.2.0-24
        vnet pveipam
        dhcp-range start-address=100.64.2.10,end-address=100.64.2.20
        gateway 100.64.2.1
        snat 1

subnet: netboxd-100.64.3.0-24
        vnet netboxd
        dhcp-range start-address=100.64.3.10,end-address=100.64.3.20
        gateway 100.64.3.1
        snat 1

Code:
cat /etc/pve/sdn/vnets.cfg
vnet: pveipam
        zone pveipam

vnet: netboxd
        zone netboxd
Code:
cat /etc/pve/sdn/zones.cfg
simple: pveipam
        dhcp dnsmasq
        ipam pve

simple: netboxd
        dhcp dnsmasq
        ipam netboxd

Code:
cat /etc/network/interfaces.d/sdn
#version:5

auto netboxd
iface netboxd
        address 100.64.3.1/24
        post-up iptables -t nat -A POSTROUTING -s '100.64.3.0/24' -o vmbr0 -j SNAT --to-source 10.255.103.53
        post-down iptables -t nat -D POSTROUTING -s '100.64.3.0/24' -o vmbr0 -j SNAT --to-source 10.255.103.53
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        ip-forward on

auto pveipam
iface pveipam
        address 100.64.2.1/24
        post-up iptables -t nat -A POSTROUTING -s '100.64.2.0/24' -o vmbr0 -j SNAT --to-source 10.255.103.53
        post-down iptables -t nat -D POSTROUTING -s '100.64.2.0/24' -o vmbr0 -j SNAT --to-source 10.255.103.53
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        ip-forward on

Last Code Block showing the LXC works when using PVE's IPAM:
Code:
root@nested-pve-n03:~# cat /etc/pve/lxc/100013.conf
arch: amd64
cores: 2
features: nesting=1
hostname: test13
memory: 2048
net0: name=eth0,bridge=pveipam,hwaddr=BC:24:11:18:86:93,ip=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-100013-disk-0,size=8G
swap: 2048
unprivileged: 1


root@nested-pve-n03:~# pct start 100013 -debug
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "100013", config section "lxc"
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:unpriv_systemd_create_scope:1498 - Running privileged, not using a systemd unit
DEBUG    seccomp - ../src/lxc/seccomp.c:parse_config_v2:664 - Host native arch is [3221225534]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:815 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:532 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:532 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:532 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:815 - Processing "[all]"
...snip...
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:1036 - Merging compat seccomp contexts into main context
INFO     start - ../src/lxc/start.c:lxc_init:882 - Container "100013" is initialized
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1669 - The monitor process uses "lxc.monitor/100013" as cgroup
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1777 - The container process uses "lxc/100013/ns" as inner and "lxc/100013" as limit cgroup
INFO     start - ../src/lxc/start.c:lxc_spawn:1769 - Cloned CLONE_NEWUSER
INFO     start - ../src/lxc/start.c:lxc_spawn:1769 - Cloned CLONE_NEWNS
INFO     start - ../src/lxc/start.c:lxc_spawn:1769 - Cloned CLONE_NEWPID
INFO     start - ../src/lxc/start.c:lxc_spawn:1769 - Cloned CLONE_NEWUTS
INFO     start - ../src/lxc/start.c:lxc_spawn:1769 - Cloned CLONE_NEWIPC
INFO     start - ../src/lxc/start.c:lxc_spawn:1769 - Cloned CLONE_NEWCGROUP
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:140 - Preserved user namespace via fd 17 and stashed path as user:/proc/25367/fd/17
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:140 - Preserved mnt namespace via fd 18 and stashed path as mnt:/proc/25367/fd/18
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:140 - Preserved pid namespace via fd 19 and stashed path as pid:/proc/25367/fd/19
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:140 - Preserved uts namespace via fd 20 and stashed path as uts:/proc/25367/fd/20
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:140 - Preserved ipc namespace via fd 21 and stashed path as ipc:/proc/25367/fd/21
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:140 - Preserved cgroup namespace via fd 22 and stashed path as cgroup:/proc/25367/fd/22
DEBUG    idmap_utils - ../src/lxc/idmap_utils.c:idmaptool_on_path_and_privileged:93 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG    idmap_utils - ../src/lxc/idmap_utils.c:idmaptool_on_path_and_privileged:93 - The binary "/usr/bin/newgidmap" does have the setuid bit set
DEBUG    idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:178 - Functional newuidmap and newgidmap binary found
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_setup_limits:3528 - Limits for the unified cgroup hierarchy have been setup
DEBUG    idmap_utils - ../src/lxc/idmap_utils.c:idmaptool_on_path_and_privileged:93 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG    idmap_utils - ../src/lxc/idmap_utils.c:idmaptool_on_path_and_privileged:93 - The binary "/usr/bin/newgidmap" does have the setuid bit set
INFO     idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:176 - Caller maps host root. Writing mapping directly
NOTICE   utils - ../src/lxc/utils.c:lxc_drop_groups:1572 - Dropped supplimentary groups
INFO     start - ../src/lxc/start.c:do_start:1105 - Unshared CLONE_NEWNET
NOTICE   utils - ../src/lxc/utils.c:lxc_drop_groups:1572 - Dropped supplimentary groups
NOTICE   utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1548 - Switched to gid 0
NOTICE   utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1557 - Switched to uid 0
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:140 - Preserved net namespace via fd 5 and stashed path as net:/proc/25367/fd/5
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/lxcnetaddbr" for container "100013", config section "net"
DEBUG    network - ../src/lxc/network.c:netdev_configure_server_veth:876 - Instantiated veth tunnel "veth100013i0 <--> vethUkl4ES"
DEBUG    conf - ../src/lxc/conf.c:lxc_mount_rootfs:1240 - Mounted rootfs "/var/lib/lxc/100013/rootfs" onto "/usr/lib/x86_64-linux-gnu/lxc/rootfs" with options "(null)"
INFO     conf - ../src/lxc/conf.c:setup_utsname:679 - Set hostname to "test13"
DEBUG    network - ../src/lxc/network.c:setup_hw_addr:3863 - Mac address "BC:24:11:18:86:93" on "eth0" has been setup
DEBUG    network - ../src/lxc/network.c:lxc_network_setup_in_child_namespaces_common:4004 - Network device "eth0" has been setup
INFO     network - ../src/lxc/network.c:lxc_setup_network_in_child_namespaces:4061 - Finished setting up network devices with caller assigned names
INFO     conf - ../src/lxc/conf.c:mount_autodev:1023 - Preparing "/dev"
INFO     conf - ../src/lxc/conf.c:mount_autodev:1084 - Prepared "/dev"
DEBUG    conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:539 - Invalid argument - Tried to ensure procfs is unmounted
DEBUG    conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:562 - Invalid argument - Tried to ensure sysfs is unmounted
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2219 - Remounting "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" to respect bind or remount options
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2238 - Flags for "/sys/fs/fuse/connections" were 4110, required extra flags are 14
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2282 - Mounted "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" with filesystem type "none"
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2282 - Mounted "proc" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/.lxc/proc" with filesystem type "proc"
DEBUG    conf - ../src/lxc/conf.c:mount_entry:2282 - Mounted "sys" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/.lxc/sys" with filesystem type "sysfs"
DEBUG    cgfsng - ../src/lxc/cgroups/cgfsng.c:__cgroupfs_mount:2187 - Mounted cgroup filesystem cgroup2 onto 19((null))
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.mount.hook" for container "100013", config section "lxc"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-autodev-hook" for container "100013", config section "lxc"
INFO     conf - ../src/lxc/conf.c:lxc_fill_autodev:1121 - Populating "/dev"
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/full) to 18(full)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/null) to 18(null)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/random) to 18(random)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/tty) to 18(tty)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/urandom) to 18(urandom)
DEBUG    conf - ../src/lxc/conf.c:lxc_fill_autodev:1205 - Bind mounted host device 16(dev/zero) to 18(zero)
INFO     conf - ../src/lxc/conf.c:lxc_fill_autodev:1209 - Populated "/dev"
INFO     conf - ../src/lxc/conf.c:lxc_transient_proc:3307 - Caller's PID is 1; /proc/self points to 1
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_devpts_child:1554 - Attached detached devpts mount 20 to 18/pts
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_devpts_child:1640 - Created "/dev/ptmx" file as bind mount target
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_devpts_child:1647 - Bind mounted "/dev/pts/ptmx" to "/dev/ptmx"
DEBUG    conf - ../src/lxc/conf.c:lxc_allocate_ttys:908 - Created tty with ptx fd 22 and pty fd 23 and index 1
DEBUG    conf - ../src/lxc/conf.c:lxc_allocate_ttys:908 - Created tty with ptx fd 24 and pty fd 25 and index 2
INFO     conf - ../src/lxc/conf.c:lxc_allocate_ttys:913 - Finished creating 2 tty devices
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_ttys:869 - Bind mounted "pts/1" onto "tty1"
DEBUG    conf - ../src/lxc/conf.c:lxc_setup_ttys:869 - Bind mounted "pts/2" onto "tty2"
INFO     conf - ../src/lxc/conf.c:lxc_setup_ttys:876 - Finished setting up 2 /dev/tty<N> device(s)
INFO     conf - ../src/lxc/conf.c:setup_personality:1720 - Set personality to "0lx0"
DEBUG    conf - ../src/lxc/conf.c:capabilities_deny:3006 - Capabilities have been setup
NOTICE   conf - ../src/lxc/conf.c:lxc_setup:4014 - The container "100013" is set up
INFO     apparmor - ../src/lxc/lsm/apparmor.c:apparmor_process_label_set_at:1189 - Set AppArmor label to "lxc-100013_</var/lib/lxc>//&:lxc-100013_<-var-lib-lxc>:"
INFO     apparmor - ../src/lxc/lsm/apparmor.c:apparmor_process_label_set:1234 - Changed AppArmor profile to lxc-100013_</var/lib/lxc>//&:lxc-100013_<-var-lib-lxc>:
DEBUG    terminal - ../src/lxc/terminal.c:lxc_terminal_peer_default:709 - No such device - The process does not have a controlling terminal
NOTICE   start - ../src/lxc/start.c:start:2201 - Exec'ing "/sbin/init"
NOTICE   start - ../src/lxc/start.c:post_start:2212 - Started "/sbin/init" with pid "25425"
NOTICE   start - ../src/lxc/start.c:signal_handler:447 - Received 17 from pid 25421 instead of container init 25425


root@nested-pve-n03:~# pct list
VMID       Status     Lock         Name              
100013     running                 test13
 
Hi there,

I'm not sure if my issue is closely related to this thread, but there are similarities, so here's what happened:

Today I tried to integrate Netbox (v4.0.3) with PVE (v8.2.2). The connection was successful, and the following happened:

  1. Creating a subnet in an SDN VNet with DHCP
    • Creates a prefix in Netbox with the network address
    • Creates an IP address in Netbox with the gateway address
    • Does not create an IP range (so I had to manually add it to Netbox)
  2. Assigning the above-mentioned VNet to a VM's network interface
    • Creates an IP address in Netbox from the IP range
    • Does not assign the given IP to the VM
    • Gives the following error message:
Screenshot 2024-06-04 at 3.07.31 PM.png

Any help is much appreciated.

Thanks
 
Hi there,

I'm not sure if my issue is closely related to this thread, but there are similarities, so here's what happened:

Today I tried to integrate Netbox (v4.0.3) with PVE (v8.2.2). The connection was successful, and the following happened:

  1. Creating a subnet in an SDN VNet with DHCP
    • Creates a prefix in Netbox with the network address
    • Creates an IP address in Netbox with the gateway address
    • Does not create an IP range (so I had to manually add it to Netbox)
  2. Assigning the above-mentioned VNet to a VM's network interface
    • Creates an IP address in Netbox from the IP range
    • Does not assign the given IP to the VM
    • Gives the following error message:
View attachment 69214

Any help is much appreciated.

Thanks
Thanks for the report. It seems to be related. The manual creation of the IP range is currently expected (I should add a note in the docs). But the IP search in the range was working previously, so maybe it's a regression with Netbox 4.x. I need to check that.
 
Thanks for the report. It seems to be related. The manual creation of the IP range is currently expected (I should add a note in the docs). But the IP search in the range was working previously, so maybe it's a regression with Netbox 4.x. I need to check that.
@spirit Have you had a chance to test with 4.X yet?
 
