Networking configuration after converting OpenVZ to LXC

Here is the output of `pct config 404` for my LXC container on Proxmox:
Code:
arch: i386
cpulimit: 1
cpuunits: 1024
hostname: gestionbibli.mydomain.fr
memory: 1024
nameserver: 147.94.59.21 147.94.59.22
net0: name=venet0,bridge=vmbr0,hwaddr=6E:7E:49:0E:F6:BD,ip=147.94.59.25/24,type=veth
ostype: debian
rootfs: vmstorage:404/vm-404-disk-0.raw,size=8G
searchdomain: mydomain.fr
swap: 1024
 
OK - you need to change the net0 line to look like this:
Code:
net0: name=eth0,bridge=vmbr0,gw=147.94.59.21,hwaddr=6E:7E:49:0E:F6:BD,ip=147.94.59.25/24,type=veth

Your node's '/etc/network/interfaces' should only contain the vmbr0 definition (and the `manual` stanza for the physical port).
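If you prefer not to edit '/etc/pve/lxc/404.conf' by hand, the same change can be applied with `pct set` - a sketch, using the values from your config above:
Code:
# Rewrite net0 in one step (values taken from the config posted above)
pct set 404 --net0 name=eth0,bridge=vmbr0,gw=147.94.59.21,hwaddr=6E:7E:49:0E:F6:BD,ip=147.94.59.25/24,type=veth
# Verify the result
pct config 404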

I hope this helps!
 
OK, so I did what you said. Here is my network config on the LXC container:
[Screenshot attachment: the container's network configuration]

And here is what I now have in /etc/network/interfaces on this container (404):

Code:
# Auto generated lo interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 147.94.59.25
        netmask 255.255.255.0
        gateway 147.94.59.1

But I still have no access...
 
This looks correct.

Now you also need to adapt your /etc/network/interfaces on the host - and reboot so that it becomes effective (alternatively, you could try `ifreload` if you have ifupdown2 installed).

Please make sure to read and understand the network documentation I linked above before rebooting (since you may lose network connectivity).
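For reference, a minimal sketch of applying network changes without a reboot, assuming the ifupdown2 package is available in your repositories:
Code:
# Install ifupdown2 (replaces the classic ifupdown)
apt install ifupdown2
# Re-apply /etc/network/interfaces without rebooting
ifreload -a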
 
OK, what you call "on the host" is my Proxmox server (posidonie)?
Because here I have:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 147.94.57.102
        netmask 255.255.254.0
        gateway 147.94.56.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
 
OK, what you call "on the host" is my Proxmox server (posidonie)?
Yes

The config looks OK - assuming your physical network interface is indeed called eno1 (verify with the output of `ip link`).

Also make sure that the network where your host (posidonie) is connected supports a bridged setup (multiple MAC addresses per port).
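A quick sketch for verifying the interface name and bridge membership from the host's shell (assuming the standard iproute2 tools):
Code:
# Confirm the physical NIC exists and is up
ip link show eno1
# List the ports attached to each bridge
bridge link show
# Show the addresses on the bridge itself
ip addr show vmbr0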
 
OK, yes, here is my `ip a` output:

Code:
root@posidonie:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 54:9f:35:20:80:58 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 54:9f:35:20:80:59 brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 54:9f:35:20:80:5a brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 54:9f:35:20:80:5b brd ff:ff:ff:ff:ff:ff
6: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:79:88 brd ff:ff:ff:ff:ff:ff
7: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:79:89 brd ff:ff:ff:ff:ff:ff
8: enp4s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:79:8a brd ff:ff:ff:ff:ff:ff
9: enp4s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:79:8b brd ff:ff:ff:ff:ff:ff
10: enp129s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:2c:94 brd ff:ff:ff:ff:ff:ff
11: enp129s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:2c:95 brd ff:ff:ff:ff:ff:ff
12: enp129s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:2c:96 brd ff:ff:ff:ff:ff:ff
13: enp129s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0a:f7:84:2c:97 brd ff:ff:ff:ff:ff:ff
14: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 54:9f:35:20:80:58 brd ff:ff:ff:ff:ff:ff
    inet 147.94.57.102/23 brd 147.94.57.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::569f:35ff:fe20:8058/64 scope link
       valid_lft forever preferred_lft forever
15: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 36:14:15:5e:92:4b brd ff:ff:ff:ff:ff:ff
28: vmbr0v73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 54:9f:35:20:80:58 brd ff:ff:ff:ff:ff:ff
29: eno1.73@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v73 state UP group default qlen 1000
    link/ether 54:9f:35:20:80:58 brd ff:ff:ff:ff:ff:ff
59: veth404i0@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:52:00:14:53:cb brd ff:ff:ff:ff:ff:ff link-netnsid 0
 
That looks OK.

I just noticed: your host has a netmask of /23 (and a different network than the container).
Is the container network 147.94.59.0/24 also directly configured on the infrastructure?

You need to check this with your provider/network team.
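One way to gather evidence from the host itself is to watch for ARP traffic on the bridge while the container tries to reach its gateway - a diagnostic sketch, assuming tcpdump is installed:
Code:
# Watch ARP requests/replies for the container's gateway on the bridge
tcpdump -eni vmbr0 arp and host 147.94.59.1
# If requests go out but no replies come back, the 147.94.59.0/24
# network is probably not reachable on this segment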
 
Yes: posidonie, my Proxmox server, is 147.94.57.102/23.
My container is 147.94.59.25/24.
The old Proxmox server where OpenVZ was working was 147.94.59.5.
Is this why my LXC doesn't work?
 
Is 147.94.59.25/24 configured in the same network/VLAN as 147.94.57.102/23?

If so, it should work - otherwise you need to adapt your configuration to the situation in your network. Ask your ISP/datacenter/network team.

Also try pinging your default gateway.
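A minimal sketch of testing gateway reachability from inside the container, run on the host (assuming CT ID 404 as above):
Code:
# Ping the container's default gateway from inside CT 404
pct exec 404 -- ping -c 3 147.94.59.1
# If that fails, check the container's own view of its interfaces
pct exec 404 -- ip addr
pct exec 404 -- ip route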
 
