[SOLVED] Restore to new instance of VE

Halfhidden

Member
May 14, 2021
Here's my story.
I installed and ran Proxmox 7 on a Dell PowerEdge R710. Things were good and I had about 6 containers and a couple of VMs. I realised how useful Proxmox was, and my projects grew to the point that I had to upgrade the hardware. I ended up buying a better-conditioned R710 with more RAM and better processors. So, I thought all I needed to do was swap over the HDD array, fire it back up, and that would be that. Nope. After a bit of a struggle I managed to reconfigure the network card so that I had the GUI again, but now none of the VMs or containers are accessible through their respective IPs, even though I can ping them just fine without loss. I can only imagine there is a mess left somewhere because of the hardware MAC change or something.
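(In case it helps, this is roughly how I check which MAC and bridge each guest is configured with on the host; the VMID 100 is just an example from my setup.)

Code:
# List the NIC line of every container config
grep -H net0 /etc/pve/lxc/*.conf

# Same for a VM, e.g. VMID 100
qm config 100 | grep ^net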
Anyway, I did build a PBS and ran it for about two weeks, backing up all of the containers and VMs. I chose stop-mode backups, so each container was stopped and backed up, and I scheduled a backup of all guests at midnight every day. No errors were recorded from the backups.
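(The job itself was created through the GUI; I believe the CLI equivalent is roughly the following, with pbs-backups standing in for whatever the PBS datastore is called on the PVE side.)

Code:
# Back up all guests in stop mode to the PBS-backed storage
vzdump --all 1 --mode stop --storage pbs-backups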

So I'm thinking: can I wipe my Proxmox VE install, start again on the new hardware, and then restore those backups to the new Proxmox VE? Or is there a better way?
Thanks for any input... I really appreciate it.
 
What's your output of cat /etc/network/interfaces and ip addr?
I'm at work at the moment but will do that when I return at about 11pm UK time :)
But I remember editing /etc/network/interfaces in order to set the correct network card name. It was after doing that that I was able to log in to the GUI. There is only one NIC plugged in.
 
Here we go, output from cat /etc/network/interfaces:
Code:
root@proxmox:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno3 inet manual

iface eno1 inet manual

iface eno2 inet manual

iface eno4 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.14/24
        gateway 192.168.1.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0



Output from ip addr:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether bc:30:5b:e4:f4:14 brd ff:ff:ff:ff:ff:ff
altname enp1s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether bc:30:5b:e4:f4:16 brd ff:ff:ff:ff:ff:ff
altname enp1s0f1
4: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether bc:30:5b:e4:f4:18 brd ff:ff:ff:ff:ff:ff
altname enp2s0f0
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether bc:30:5b:e4:f4:1a brd ff:ff:ff:ff:ff:ff
altname enp2s0f1
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether bc:30:5b:e4:f4:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.14/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::be30:5bff:fee4:f418/64 scope link
valid_lft forever preferred_lft forever
7: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr100i0 state UNKNOWN group default qlen 1000
link/ether 96:34:fa:a6:e2:c3 brd ff:ff:ff:ff:ff:ff
8: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 36:6a:be:b1:06:55 brd ff:ff:ff:ff:ff:ff
9: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 02:89:b1:24:1e:e0 brd ff:ff:ff:ff:ff:ff
10: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
link/ether e2:18:74:a9:ba:15 brd ff:ff:ff:ff:ff:ff
11: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:02:1f:b5:9f:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
12: veth105i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr105i0 state UP group default qlen 1000
link/ether fe:97:78:86:df:8c brd ff:ff:ff:ff:ff:ff link-netnsid 1
13: fwbr105i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 76:fb:2f:40:00:78 brd ff:ff:ff:ff:ff:ff
14: fwpr105p0@fwln105i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 3e:0e:61:ac:ab:63 brd ff:ff:ff:ff:ff:ff
15: fwln105i0@fwpr105p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr105i0 state UP group default qlen 1000
link/ether ca:6d:02:44:89:cb brd ff:ff:ff:ff:ff:ff
16: veth108i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr108i0 state UP group default qlen 1000
link/ether fe:dd:cf:95:fc:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 2
17: fwbr108i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 46:af:4a:96:99:61 brd ff:ff:ff:ff:ff:ff
18: fwpr108p0@fwln108i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 36:f0:cb:f8:bb:af brd ff:ff:ff:ff:ff:ff
19: fwln108i0@fwpr108p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr108i0 state UP group default qlen 1000
link/ether 3a:aa:81:e5:42:68 brd ff:ff:ff:ff:ff:ff
20: veth109i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr109i0 state UP group default qlen 1000
link/ether fe:22:1f:7c:c1:0b brd ff:ff:ff:ff:ff:ff link-netnsid 3
21: fwbr109i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 12:f3:64:6a:22:21 brd ff:ff:ff:ff:ff:ff
22: fwpr109p0@fwln109i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 3a:a9:ec:be:a5:f7 brd ff:ff:ff:ff:ff:ff
23: fwln109i0@fwpr109p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr109i0 state UP group default qlen 1000
link/ether 52:35:ef:cf:88:c2 brd ff:ff:ff:ff:ff:ff



As far as I can see, the NIC eno3 is in use, the IP is 192.168.1.14/24, which is correct, and the default gateway is also correct.


The gateway resolves to the router and I can access Proxmox VE through the GUI over the internal network. I understand that all of the VMs and containers use vmbr0 to reach the router, and I can ping them successfully.
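(For anyone wanting to double-check the same thing, something along these lines should show which ports are enslaved to the bridge and which neighbours the host sees on it; these are standard iproute2 commands, nothing specific to my setup.)

Code:
# Show which interfaces are enslaved to vmbr0
bridge link show | grep vmbr0

# Show the IP/MAC neighbours the host has learned on the bridge
ip neigh show dev vmbr0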
 
Yup, config looks fine. Are the guests set up to use static IPs or DHCP?
I guess you didn't change your physical network setup, and the router and so on are unchanged?
 
I set up static IPs for the containers, reserved at the router by MAC address. They all show in the router's IP address pool, and the MAC address for each container also shows up correctly.
So no, the router hasn't changed and the firewalls are just the same as before.
 
I wanted to post the results of the weekend. It turns out that changing my NIC dragged up a lot of problems. That said, I can conclude that none of them were actually the fault of Proxmox VE, well not directly. What I did in the end was to reconnect the Proxmox Backup Server to a new instance of Proxmox VE, restore the guests as "Unique", and then set up the NIC for each guest.

Following the advice from Oguz:
just add the PBS as a storage unit on your new PVE (Datacenter -> Storage -> Add -> Proxmox Backup Server) and enter the details of your existing datastore on PBS; nothing will be overwritten when adding the storage.

from there you should be able to restore whichever VM you need.
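(For reference, a rough CLI equivalent of the above; the storage name, PBS address, datastore, credentials and VMID are only placeholders for my setup, and the exact backup volume IDs can be listed with pvesm list <storage>.)

Code:
# Add the existing PBS datastore as storage on the new PVE node (nothing on PBS is overwritten)
pvesm add pbs pbs-backups --server <pbs-address> --datastore <datastore-name> \
    --username root@pam --password <password> --fingerprint <pbs-fingerprint>

# Restore a container from that storage; --unique 1 regenerates MAC addresses
pct restore 105 pbs-backups:backup/ct/105/<timestamp> --storage local-lvm --unique 1

# The equivalent for a VM is: qmrestore <volume-id> <vmid> --unique 1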

Then I simply set up new network settings for the guests and all worked fine.
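(Setting the guest network again can also be done from the CLI, roughly like this, with the VMID, bridge and addresses being examples from my network; for a VM the IP itself lives inside the guest OS, so only the bridge and NIC model are set on the host.)

Code:
# Point a container's NIC at the bridge with a static IP and gateway
pct set 105 --net0 name=eth0,bridge=vmbr0,ip=192.168.1.105/24,gw=192.168.1.1

# For a VM, attach the NIC to the bridge; the IP is configured inside the guest
qm set 100 --net0 virtio,bridge=vmbr0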


The mess was caused in part by me changing the NIC, but I also had Nginx acting as a reverse proxy for the other containers. The Ubuntu distro failed and I was left with a bit of a mess with DNS issues.
So restoring guests from the backup was a lifesaver. I had to build the Nginx Docker container again, but I got off lightly, I think.

Oh, the restore took well over 4 hours (there were a few TB of data), and once you've committed to a restore you must let it run its course. Unfortunately there is no progress bar; you just have to trust the system.
 
