pve5to6 fail on "Resolved node IP"

Hi people!

I'm facing a new problem.

I would like to upgrade my Proxmox installation (currently 5.4) on a server where I currently have only one node.

Following the documentation, I found a very useful command, "pve5to6", that checks upgrade compatibility.

After running this command I get the following message:

Code:
INFO: Checking if resolved IP is configured on local node ..
FAIL: Resolved node IP 'x.x.x.x' not configured or active for 'hostname'

x.x.x.x = the server's external IP

However, my /etc/hosts file seems to be filled in correctly, and the server works without any problem.
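For reference, the check expects the hostname to resolve to an address that is actually configured on the node, typically via an /etc/hosts entry along these lines (hypothetical name and documentation address, not my real ones):

Code:
127.0.0.1       localhost.localdomain localhost
192.0.2.10      pve1.example.com pve1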

Any idea?

Regards,
Pop :)
 
Is the IP listed on any interface in the 'ip a' output?
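You can compare what the hostname resolves to against the addresses actually configured, for example:

Code:
# address the hostname resolves to
getent hosts "$(hostname)"
# addresses actually configured on the node
ip -4 addr show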
 
Is the IP listed on any interface in the 'ip a' output?
mira, thank you for your answer.

Indeed, this IP does not appear anywhere when I run "ip a".

In fact, I have never been able to properly add it to the network configuration in /etc/network/interfaces.

I had talked about it here: https://forum.proxmox.com/threads/proxmox-container-no-internet.55887/#post-257431

I think I'm missing something important here, given that the server in question is a VPS with a public IP and a private IP.
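For anyone in a similar spot: many VPS providers route a single public IP point-to-point, so a minimal sketch of such a stanza in /etc/network/interfaces could look like this (interface name, addresses and gateway are placeholders, not my actual values):

Code:
auto eth0
iface eth0 inet static
    address 203.0.113.10        # public IP of the VPS
    netmask 255.255.255.255
    gateway 203.0.113.1
    pointopoint 203.0.113.1     # provider gateway, routed point-to-point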
 
I have the same error, but my IP is visible on vmbr0 in the 'ip a' output.
Did you find a solution, or did you try the upgrade despite the error?
 
Please post the complete error message, the 'ip a' output and the network config (/etc/network/interfaces).
If there are any public IPs in the output you might want to mask them.
 
Please post the complete error message, the 'ip a' output and the network config (/etc/network/interfaces).
If there are any public IPs in the output you might want to mask them.

Thank you for your answer.
This is the output of "ip a":

12: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:46:5d:9e:20:xx brd ff:ff:ff:ff:ff:ff
    inet xx.xx.xx.xxx peer gw_ip/32 brd xx.xx.xx.xxx scope global vmbr0
       valid_lft forever preferred_lft forever

xx.xx.xx.xxx = IP of the proxmox_host

Alias and hostname are both in /etc/hosts and reachable via ping.
The hostname has a valid DNS entry.
The IP is used only for the host.
 
I have the same error message; here are my outputs (the system runs fine ...):

pve5to6:
Code:
root@vhost01 ~ # pve5to6
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages uptodate

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 5.4-2

Checking running kernel version..
PASS: expected running kernel '4.15.18-18-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'hetzner-space' enabled and active.
PASS: storage 'storage-hdd' enabled and active.
SKIP: storage 'local' disabled.
PASS: storage 'storage-ssd' enabled and active.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
WARN: 9 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if the local node's hostname 'vhost01' is resolvable..
INFO: Checking if resolved IP is configured on local node..
FAIL: Resolved node IP 'xxx.xxx.xxx.xxx' not configured or active for 'vhost01'
INFO: Checking KVM nesting support, which breaks live migration for VMs using it..
PASS: KVM nested parameter not set.

= SUMMARY =

TOTAL: 15
PASSED: 10
SKIPPED: 3
WARNINGS: 1
FAILURES: 1

ATTENTION: Please check the output for detailed information!
Try to solve the problems one at a time and then run this checklist tool again.

ip a
Code:
root@vhost01 ~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 90:1b:0e:cc:cf:90 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.xxx.xxx peer 195.201.87.129/32 brd 195.201.87.191 scope global enp0s31f6
       valid_lft forever preferred_lft forever
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1e:80:da:1c:a1:95 brd ff:ff:ff:ff:ff:ff
    inet yyy.yyy.yyy.yyy/29 brd 46.4.217.47 scope global vmbr0
       valid_lft forever preferred_lft forever
4: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:a7:bc:ad:99:75 brd ff:ff:ff:ff:ff:ff
    inet zzz.zzz.zzzz.zzz/24 brd 10.119.0.255 scope global vmbr1
       valid_lft forever preferred_lft forever
5: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 1e:80:da:1c:a1:95 brd ff:ff:ff:ff:ff:ff
6: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 0a:a7:bc:ad:99:75 brd ff:ff:ff:ff:ff:ff
7: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 66:c9:34:7e:4e:7b brd ff:ff:ff:ff:ff:ff
8: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether f6:c0:cd:2e:2c:54 brd ff:ff:ff:ff:ff:ff
9: tap102i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 8e:cd:f0:14:ea:fe brd ff:ff:ff:ff:ff:ff
10: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 16:fb:e6:d3:de:02 brd ff:ff:ff:ff:ff:ff
11: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 7e:28:c0:28:86:92 brd ff:ff:ff:ff:ff:ff
12: tap104i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 02:89:86:8a:5e:64 brd ff:ff:ff:ff:ff:ff
13: tap105i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 72:eb:16:4b:ec:4c brd ff:ff:ff:ff:ff:ff
14: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 2e:99:ad:f2:8c:b5 brd ff:ff:ff:ff:ff:ff
15: tap107i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether ea:bb:b9:d5:f0:e2 brd ff:ff:ff:ff:ff:ff
16: tap108i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether ae:40:6a:cb:fe:3c brd ff:ff:ff:ff:ff:ff

/etc/network/interfaces:
Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp0s31f6
iface enp0s31f6 inet static
  address xxx.xxx.xxx.xxx
  netmask 255.255.255.192
  gateway xxx.xxx.xxx.129
  pointopoint xxx.xxx.xxx.129

auto vmbr0
iface vmbr0 inet static
  address yyy.yyy.yyy.yyy
  netmask 255.255.255.248
  gateway xxx.xxx.xxx.xxx
  bridge_ports none
  bridge_stp off
  bridge_fd 0

auto vmbr1
iface vmbr1 inet static
  address zzz.zzz.zzz.zzz
  netmask 255.255.255.0
  up route add -net zzz.zzz.zzz.zzz netmask 255.255.255.0 gw zzz.zzz.zzz.254
  down route del -net zzz.zzz.zzz.zzz netmask 255.255.255.0 gw zzz.zzz.zzz.254
  bridge_ports none
  bridge_stp off
  bridge_fd 0
 
I have the same error message; here are my outputs (the system runs fine ...):

It seems the detection is broken for point-to-point interfaces, since "ip a" does not print the prefix length in that case. Filed a bug: https://bugzilla.proxmox.com/show_bug.cgi?id=2303
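For comparison (placeholder addresses): a regular address is printed with its prefix length, while a point-to-point address is not, which is exactly what the check stumbles over:

Code:
# regular bridge address: prefix length present
inet 203.0.113.10/29 brd 203.0.113.15 scope global vmbr0
# point-to-point address: no prefix length after the address
inet 203.0.113.10 peer 203.0.113.1/32 scope global enp0s31f6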
 
Hello Fabian,

Thank you for your reply and the bug report.
Since it seems to be a detection issue in the update check script, can this specific FAIL output be safely ignored in my case when upgrading to Proxmox 6.0?
 
Hello Fabian,

Thank you for your reply and the bug report.
Since it seems to be a detection issue in the update check script, can this specific FAIL output be safely ignored in my case when upgrading to Proxmox 6.0?

Yes, if your hostname resolves to an address configured on the same host (this seems to be the case, but since you censored the actual addresses I just wanted to make sure).
 
Yes, if your hostname resolves to an address configured on the same host (this seems to be the case, but since you censored the actual addresses I just wanted to make sure).
Hello Fabian,
Wouldn't it be better to either drop the pve5to6 script from the upgrade howto entirely or patch it?
First, I find it extremely nerve-wracking when, after the system has been running reliably for months, you get an error message about an unresolvable hostname even though the opposite is true. Second, the Proxmox staff could at least explicitly point out that the error message can be ignored. You don't necessarily want to read whole stories to solve a problem, even if that can be enlightening at times; you want to see the solution quickly.
 
For everyone with the same problem:
According to the Proxmox staff, an upgrade from PVE 5.4 to PVE 6.0 seems to work here as long as the hostname in question indeed resolves to an IP that is configured on the host.
Whether this only applies to Debian Stretch, or whether it still works after a dist-upgrade to Debian Buster, has not been conclusively settled here so far.
 
Wouldn't it be better to either drop the pve5to6 script from the upgrade howto entirely or patch it?

Sorry, what? Why drop it? It's not forced; it's a recommendation and points out common potential pitfalls.

First, I find it extremely nerve-wracking when, after the system has been running reliably for months, you get an error message about an unresolvable hostname even though the opposite is true.

An unresolved node IP does not necessarily cause immediate problems, so "reliably for months" does not say much here.

But if you ran into the actual issue where point-to-point IP addresses were not recognized: a fix landed, but was not rolled out to the oldstable PVE 5 repo.

https://git.proxmox.com/?p=pve-common.git;a=commitdiff;h=204ac4388214902fe10239c434c001991ea62ee6

It seems it was a bit forgotten, so thanks for the hint; I just rolled the fix out to our repos, so future upgrades should not see this false positive anymore.
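On a PVE 5.4 node a regular package update should then pick up the fixed check (the fix ships in the libpve-common-perl package built from pve-common.git):

Code:
apt update
apt install libpve-common-perl
# re-run the checklist afterwards
pve5to6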
 
But if you ran into the actual issue where point-to-point IP addresses were not recognized: a fix landed, but was not rolled out to the oldstable PVE 5 repo.

https://git.proxmox.com/?p=pve-common.git;a=commitdiff;h=204ac4388214902fe10239c434c001991ea62ee6

It seems it was a bit forgotten, so thanks for the hint; I just rolled the fix out to our repos, so future upgrades should not see this false positive anymore.

Thank you for taking care of this. But in which directory can I find this Network.pm file on my current PVE 5.4 system?

In addition, may I ask whether PVE 5.4 will still run when I upgrade Debian Stretch to Buster?

Thank you very much
Diani
 
Thank you for taking care of this. But in which directory can I find this Network.pm file on my current PVE 5.4 system?
If you have already upgraded to Proxmox VE 6.0, you already have the fix; it just was not packaged for the 5.4 package repository.
We use the common Perl standard directory /usr/share/perl5, but you won't need to change anything there; just update :)
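If you are curious, standard Debian tooling will show the file and the package shipping it:

Code:
dpkg -S Network.pm | grep -i pve
ls -l /usr/share/perl5/PVE/Network.pm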

In addition, may I ask whether PVE 5.4 will still run when I upgrade Debian Stretch to Buster?

Hmm, I'm not sure I understand the question correctly. Proxmox VE 5.x is based on Debian 9 (Stretch); Proxmox VE 6.x is based on Debian 10 (Buster). But effectively PVE is its own Linux distribution, so if you upgrade PVE 5.4 to 6.x you always upgrade "all at once".
It's not really possible, and surely not supported, to install or use the PVE packages on a Debian installation that does not match.
So no, you cannot use PVE 5.4 on Debian Buster, but 6.0 works pretty well there ;)
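As a rough sketch of what the upgrade boils down to, per the upgrade howto (double-check the official guide before running anything; your repository file names may differ):

Code:
# switch Debian and PVE repositories from stretch to buster
sed -i 's/stretch/buster/g' /etc/apt/sources.list
sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/pve-enterprise.list
# upgrade everything in one go
apt update
apt dist-upgrade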
 
Okay, now I understand. If I want to do the dist-upgrade to Buster, I have to upgrade PVE 5.4 to PVE 6.0 too. My concern is that the VMs (mail server, name server, web server) will not start anymore. These have all already been upgraded successfully.

Thanks for your positive feedback. :)

Daini
 
Hi folks!
Sorry to bring this "old" thread up, but I am having the same issue: trying to upgrade from 5.4 to 6, and pve5to6 returns a FAIL.

Here is my pve5to6 failure:
FAIL: Resolved node IP '192.168.1.247' not configured or active for 'pve'

Here is my 'ip a':
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether d0:50:99:c0:b8:93 brd ff:ff:ff:ff:ff:ff
3: ens1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UP group default qlen 1000
    link/ether 68:05:ca:00:75:8b brd ff:ff:ff:ff:ff:ff
4: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether d0:50:99:c0:b8:94 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:50:99:c0:b8:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.247/24 brd 192.168.5.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::d250:99ff:fec0:b894/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:50:99:c0:b8:93 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d250:99ff:fec0:b893/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 68:05:ca:00:75:8b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6a05:caff:fe00:758b/64 scope link
       valid_lft forever preferred_lft forever
8: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 32:9a:34:c8:d0:03 brd ff:ff:ff:ff:ff:ff
9: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether c2:d9:9d:b7:df:1f brd ff:ff:ff:ff:ff:ff
10: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether e6:07:86:a3:db:b8 brd ff:ff:ff:ff:ff:ff
11: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether 5e:37:c0:a3:d6:c3 brd ff:ff:ff:ff:ff:ff

Here is my hosts file:
127.0.0.1 localhost.localdomain localhost
192.168.1.247 pve.example.invalid pve localhost

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

While my Proxmox machine is available at the IP shown in 'ip a' (i.e. 192.168.5.247), I do not have any clue why the "faulty" one (i.e. 192.168.1.247) is in "hosts". In fact, Proxmox is available only on the net handled by the pfSense VM (i.e. net 192.168.5.x).
What am I missing?
Additionally, I believe that in order to upgrade from 5 to 6 I also have to assign 'pve' an IP on my main net (i.e. 192.168.1.x), am I right?

All the best,
juspriss
 
If the hosts entry is wrong/outdated, simply correct it so that the hostname resolves to the IP.
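In this concrete case that means pointing the 'pve' entry at the address actually configured on vmbr0, e.g.:

Code:
# before
192.168.1.247   pve.example.invalid pve
# after
192.168.5.247   pve.example.invalid pve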
 
If the hosts entry is wrong/outdated, simply correct it so that the hostname resolves to the IP.
Lovely, thank you! I updated the hosts file as you wrote, and the red turned green.

Last question: I believe that in order to upgrade from 5 to 6 I also have to assign 'pve' an IP on my main net (i.e. the one that goes to the modem without passing through the pfSense one), am I right?
 
