No access to server after rebooting

Trojan

New Member
Dec 5, 2018
Hello,

A little new to PVE, so bear with me.....

I installed Debian 9 minimal on my server and then installed Proxmox on top of it. When I create a bridge in PVE, it does not show as active. If I then either reboot the node to apply the changes, or copy the interfaces.new file over the interfaces file to bring in the bridge configuration, I can no longer reach the server over SSH. I have tried rebooting from the PVE interface, rebooting from SSH (PuTTY), and also restarting networking.service after copying the interfaces file without a reboot.

Each time it locks me out of the server, and when I boot into rescue mode via KVM, the PVE boot screen is there asking for login details.

How do I stop PVE from booting and locking me out of the server each time I reboot it?

Thanks
 
Hi
How do I stop PVE from booting and locking me out of the server each time I reboot it?
I guess your network settings are not correct.
SSHd always works if the network is working.
 
I only installed it and created a bridge, that was it; I put the network device in the bridge ports, but when I reboot the server to apply the changes I can't get back into the server. I can only get in via the rescue KVM.
 
Sorry, but this indicates a misconfiguration.
You have to post your configuration if you'd like someone to help you.
 
/etc/network/interfaces
 
This is after a clean install of PVE on Debian 9, no bridge created yet:

Code:
auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp5s0
iface enp5s0 inet static
address xxx.xxx.xxx.180
netmask 255.255.255.192
gateway xxx.xxx.xxx.129
# route xxx.xxx.xxx.128/26 via xxx.xxx.xxx.129
up route add -net xxx.xxx.xxx.128 netmask 255.255.255.192 gw xxx.xxx.xxx.129 dev enp5s0

iface enp5s0 inet6 static
address xxxx:xx::2
netmask 64
gateway xxxx::1
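As an aside, the route command used in the up line comes from the net-tools package, which is no longer installed by default on Debian 9; an equivalent iproute2 form of that line (keeping the masked addresses from this thread) would be:

```shell
up ip route add xxx.xxx.xxx.128/26 via xxx.xxx.xxx.129 dev enp5s0
```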
 
Try
Code:
auto lo
iface lo inet loopback
iface lo inet6 loopback

iface enp5s0 inet manual
iface enp5s0 inet6 manual

auto vmbr0
iface vmbr0 inet static
address xxx.xxx.xxx.180
netmask 255.255.255.192
gateway xxx.xxx.xxx.129
bridge-ports enp5s0
bridge-stp off
bridge-fd 0
# route xxx.xxx.xxx.128/26 via xxx.xxx.xxx.129
up route add -net xxx.xxx.xxx.128 netmask 255.255.255.192 gw xxx.xxx.xxx.129 dev enp5s0

iface vmbr0 inet6 static
address xxxx:xx::2
netmask 64
gateway xxxx::1
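Once a config like this is in place and the node has been rebooted, a quick sanity check (a sketch, assuming the standard iproute2 tools on the host) is:

```shell
# The bridge should be UP and now hold the address
ip -br addr show vmbr0

# The physical NIC should be up but address-less
ip -br addr show enp5s0

# enp5s0 should appear as a port of vmbr0
bridge link show
```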
 
That worked! Rebooted and it logged in fine, webUI also back up and showing the bridge active.

Thanks a lot!

Edit: Actually, that was too early; I still don't have any connection with any containers somehow. Also, the network details (IP, subnet, gateway, etc.) have moved from the network device to the bridge; the network device doesn't show any IP now.
 
I have tried a couple of articles about container connections, but cannot find a solution that works yet.
 
That worked! Rebooted and it logged in fine, webUI also back up and showing the bridge active.

Thanks a lot!

Edit: Actually, that was too early; I still don't have any connection with any containers somehow. Also, the network details (IP, subnet, gateway, etc.) have moved from the network device to the bridge; the network device doesn't show any IP now.
This is correct behavior.
Your bridge is your network device now. In VMs and containers you need to select the bridge as the port for the network. The actual hardware works like in passthrough mode now. You still see the real MAC from the outside, but it is passed to the bridge now.
 
This is correct behavior.
Your bridge is your network device now. In VMs and containers you need to select the bridge as the port for the network. The actual hardware works like in passthrough mode now. You still see the real MAC from the outside, but it is passed to the bridge now.

When I create the container, I leave the name as eth0 and use the vmbr0 bridge; do I leave everything else as it is?
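For reference, the same NIC settings can also be applied from the host CLI with pct (the container ID 100 and the 192.0.2.x addresses below are placeholders for illustration, not values from this thread):

```shell
# Attach the container's eth0 to vmbr0 with a static address and gateway
pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.0.2.180/26,gw=192.0.2.129

# Inspect the resulting network config of the container
pct config 100 | grep '^net0'
```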
 
I guess the problem is your hoster: they will only accept packets from a known MAC address and reject all others.
Ask your hoster how to deal with VMs.
 
I think it's a problem with the interfaces file in the container, as there's not much in it:

Code:
# UNCONFIGURED INTERFACES
# remove the above line if you edit this file

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

#auto eth1
#iface eth1 inet dhcp

That's what I get in the config file in the container, but I'm unsure what settings to put in it?
 
Do you get an IP range from your provider?
If not, you have to use a NAT setup.
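A minimal masquerading (NAT) sketch for /etc/network/interfaces, along the lines of the Proxmox documentation (vmbr1 and the 10.10.10.0/24 subnet are arbitrary choices for illustration):

```
auto vmbr1
iface vmbr1 inet static
address 10.10.10.1
netmask 255.255.255.0
bridge-ports none
bridge-stp off
bridge-fd 0
post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o enp5s0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o enp5s0 -j MASQUERADE
```

Containers then attach to vmbr1 and use a static address in 10.10.10.0/24 with 10.10.10.1 as their gateway.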
 
No, I got only one IP address with the server, unfortunately. How do I set up NAT in PVE?
 
Thanks for the article, I will try to follow that to set up NAT rather than a routed config. Do I need to revert the settings back to the NIC, or can I do that from the bridge I have acting as the NIC now?
 
You should revert the config.
 
