CT not getting IP on 1 of 4 nodes

MoreDakka

Heyo,

I have a ZoneMinder TurnKey Linux CT that is unable to get an IP on Node02 of our cluster. It had a snapshot lock, but I couldn't find the snapshot in the config (/etc/pve/qemu-server/<vmid>.conf), so I ran:

"pct unlock 107"

Then I noticed that the CT wasn't getting an IP (it's set to DHCP). Rebooted the CT; no change. Migrated it to another host and it was able to get an IP again.
Migrated back to Node02, it loses the IP. Migrate to another node, it gets an IP again.
Migrated a VM to Node02: it has an IP, no problem. Migrated another CT to Node02: it keeps its IP.

I'm not sure how to troubleshoot this problem. It seems that one CT on one specific node cannot get an IP. Did I break something by forcing an unlock?

The CT is running fine other than the missing IP.

Thanks!
 
Hi,

this sounds like a network problem.
Try a static IP and test whether it works on the node where you get no DHCP lease.
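A sketch of how to set that from the node, if it helps (the bridge name vmbr0 and the gateway 10.1.0.1 are assumptions based on your subnet):

pct set 107 -net0 name=eth0,bridge=vmbr0,ip=10.1.0.116/16,gw=10.1.0.1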
 
Thanks Wolfgang, I didn't think of trying a static IP. However, that doesn't make a difference:

==========| On Node02 |==========
root@zm ~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1a:48:2d:5d:de:17 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.0.116/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::1848:2dff:fe5d:de17/64 scope link
valid_lft forever preferred_lft forever

root@zm ~# ping 10.1.0.1
PING 10.1.0.1 (10.1.0.1) 56(84) bytes of data.
^C
--- 10.1.0.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1013ms

root@zm ~# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
^C
root@zm ~#


==========| Migrated to Node03 |==========
root@zm ~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
42: eth0@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1a:48:2d:5d:de:17 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.0.116/16 brd 10.1.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2002:c760:1c3d:0:1848:2dff:fe5d:de17/64 scope global mngtmpaddr dynamic
valid_lft 24sec preferred_lft 14sec
inet6 fe80::1848:2dff:fe5d:de17/64 scope link
valid_lft forever preferred_lft forever
root@zm ~# ping 10.1.0.1
PING 10.1.0.1 (10.1.0.1) 56(84) bytes of data.
64 bytes from 10.1.0.1: icmp_seq=1 ttl=64 time=0.389 ms
64 bytes from 10.1.0.1: icmp_seq=2 ttl=64 time=0.808 ms
^C
--- 10.1.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1023ms
rtt min/avg/max/mdev = 0.389/0.598/0.808/0.210 ms
root@zm ~# route
Kernel IP routing table
Destination     Gateway     Genmask        Flags  Metric  Ref  Use  Iface
default         10.1.0.1    0.0.0.0        UG     0       0    0    eth0
10.1.0.0        0.0.0.0     255.255.0.0    U      0       0    0    eth0
root@zm ~#

It's like that Proxmox node hates this CT... :(
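A side note on reading the Node02 output above: plain route resolves hostnames and can hang when the network is down, which may be why I had to ^C it, so it's hard to tell whether the table is genuinely empty or route was just stuck on a lookup. Either of these prints the table numerically and would settle it:

ip route     # numeric routing table, no DNS lookups
route -n     # same information via net-tools

Node03, by comparison, clearly shows both the default route and the 10.1.0.0/16 link route.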
 
Maybe the firewall/ebtables?
Or do you have port security on this network?
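If you want to rule those out on Node02, something along these lines should work (veth107i0 is the usual Proxmox veth name for CT 107's net0; an assumption here):

pve-firewall status             # is the PVE firewall active on this node?
iptables-save | grep veth107    # any rules touching the CT's interface?
ebtables -L                     # layer-2 filter rules on the bridge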
 
Morning,
Looked at Node02 and the ZoneMinder CT; both are set to No firewall.

Is there a location where a config file might be stuck on Node02 for the ZoneMinder CT? Any time I migrate any other CT or VM to Node02, there are no problems. If I migrate the ZoneMinder CT to Node03 or 01, all the problems go away.

Thanks!
 
Is there a location where a config file might be stuck on Node02 for the ZoneMinder CT?
No, because if the network failed to come up, your container wouldn't start.
 
So is there no reason why this one ZoneMinder CT loses its IP on one node? All other VMs and CTs on that node work just fine.
 
Sorry, I should rephrase that so I don't sound like an ass :-/

Wondering what troubleshooting I can do to get networking working for this CT on the one node. I can statically assign an IP, and the OS shows it as assigned, but no traffic flows; I cannot ping the gateway. However, any other VM or CT I migrate to this node has no problems with traffic.
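One way I could narrow that down (a sketch; vmbr0 assumed as the bridge the CT attaches to): run tcpdump on the node's bridge while pinging the gateway from inside the CT, to see whether the ARP requests even reach the bridge and whether anything answers:

tcpdump -ni vmbr0 'arp or host 10.1.0.116'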
 
Hey Vikozo,

Currently this is a test environment; I need to make sure it works as it should before putting it into production.

That being said, it is the only CT or VM on that node. I have migrated other VMs/CTs to Node02; those have no problems with their networks.
The IPs for Node02 are 10.2.2.62 (management) and 10.10.1.162 (storage/migration).
The ZoneMinder CT's IP is 10.1.0.116.

If I have the ZoneMinder CT on nodes 1, 3, or 4, then 10.1.0.116 is pingable.
Migrate it to node 2 and the IP becomes unresponsive.

It's very confusing :(
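Since the same MAC (1a:48:2d:5d:de:17) works on the other three nodes, Wolfgang's port-security question seems worth chasing: a MAC-address limit on the switch port behind Node02's uplink would produce exactly this pattern. Two checks I can try on Node02 (vmbr0 assumed as the bridge name):

grep -A4 vmbr0 /etc/network/interfaces                  # compare bridge/uplink config with a working node
bridge fdb show br vmbr0 | grep -i 1a:48:2d:5d:de:17    # is the CT's MAC learned on the bridge?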
 
