VM and Container could not ping to network

Ahmad Dhamiri

New Member
Jul 25, 2019
Hi to the Proxmox community,

I recently installed a Proxmox VE 6.0 host (virtualized, on a POC server). The installation went fine, with all the local storage and networking running smoothly on the host, but when I try to provision one container and one VM inside the Proxmox host, neither can reach the network.

Here are screenshots of my Linux Mint VM and Ubuntu container inside the Proxmox host:
My Linux Mint VM
[screenshot: Linux Mint VM network status]
My Ubuntu container
[screenshot: Ubuntu container network status]
How do I overcome this problem? Is there a configuration I missed on the Proxmox host, the container, or the VM?

Regards,
Ahmad Dhamiri
 

Stoiko Ivanov

Proxmox Staff Member
May 2, 2018
Please post the network configuration of:
* your PVE-node
* the VM
* the container
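For example, all three can be collected like this (a sketch; 100 and 101 are placeholder VM/CT IDs, substitute your own):

```shell
# Run on the PVE node. 100 and 101 are placeholder VM/CT IDs -- use your own.
# The guards let this run harmlessly on a machine without the PVE tools.
[ -f /etc/network/interfaces ] && cat /etc/network/interfaces
command -v qm  >/dev/null 2>&1 && qm  config 100    # VM definition
command -v pct >/dev/null 2>&1 && pct config 101    # container definition
echo "config dump done"
```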

Thanks
 

Ahmad Dhamiri

Here are the configurations I could obtain from the host, container & VM.

Proxmox host
root@proxmox01:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface ens192 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.1.1.51
        netmask 255.255.255.0
        gateway 10.1.1.1
        bridge_ports ens192
        bridge_stp off
        bridge_fd 0

iface ens224 inet manual
VM configuration

[screenshot: VM network configuration]
Container configuration

root@dhamiri-test-ct:~# cat /etc/network/interfaces
# ifupdown has been replaced by netplan(5) on this system. See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
# sudo apt install ifupdown
root@dhamiri-test-ct:~# cat /etc/netplan/
cat: /etc/netplan/: Is a directory
root@dhamiri-test-ct:~#
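Since /etc/netplan is a directory, the files inside would be dumped with a glob instead, e.g.:

```shell
# cat on the directory fails; list it, then cat the YAML files inside.
# Fallback echoes keep this runnable on machines without netplan.
ls /etc/netplan 2>/dev/null || echo "no /etc/netplan here"
cat /etc/netplan/*.yaml 2>/dev/null || echo "no netplan files found"
```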
Forgive me, I was not able to retrieve the full configuration for either the container or the VM.

Regards,
Ahmad Dhamiri
 

Stoiko Ivanov

The config on the PVE node looks ok! - Can you:
* `ping 10.1.1.1`
* `ping 8.8.8.8`
from it?
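That is (addresses as given in this thread):

```shell
# From the PVE node: is the local gateway reachable, and the outside world?
ping -c 3 10.1.1.1    # local gateway
ping -c 3 8.8.8.8     # external address (tests routing beyond the LAN)
```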

The VM seems to use netplan with the systemd-networkd renderer for its config - please post:
* the live config: `ip link show`, `ip addr show`, `ip route show`
* all files under '/etc/systemd/network/*'
* the VM's config: `qm conf $vmid` (you need to run this last one on the PVE-node)

For the container please post the same information as for the VM
(only difference is that you get the container config with `pct conf $vmid`)
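Run inside the guest, that checklist boils down to something like (the netplan glob is included because /etc/netplan is a directory):

```shell
# Inside the VM or container:
ip link show                 # interface names and state
ip addr show                 # assigned addresses
ip route show                # default route / gateway present?
ls /etc/systemd/network/     # systemd-networkd units, if any
cat /etc/netplan/*.yaml      # netplan sources, if netplan is used

# On the PVE node:
vmid=100                     # example ID -- substitute your own
qm  config "$vmid"           # for a VM
pct config "$vmid"           # for a container
```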
 

Ahmad Dhamiri

Hi Stoiko, sorry for the late response,

1) The ping tests from the PVE node were successful
2) Here are the results for the checklist in your reply, for the VM
a) ip link show
[screenshot: `ip link show` output]

b) ip addr show
[screenshot: `ip addr show` output]

c) ip route show
[screenshot: `ip route show` output]

d) files under /etc/network
[screenshot: listing of /etc/network]

e) qm conf <vmid>
root@proxmox01:~# qm config 110
boot: c
bootdisk: scsi0
cores: 2
ide2: local:iso/linuxmint-19.2-cinnamon-64bit.iso,media=cdrom
memory: 7629
name: dhamiri-test-vm
net0: vmxnet3=36:C6:2E:41:94:4F,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local-lvm:vm-110-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=6cb6e86c-f942-427b-b8f5-da79408eef6a
sockets: 1
vmgenid: c1fb512e-b727-42d5-98e5-08026b1193c7
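One detail visible above is that net0 uses the vmxnet3 NIC model. Purely as an illustration of editing that line (not a diagnosis from this thread), the model could be switched to virtio, the usual choice for Linux guests, with `qm set` on the node:

```shell
# Example only: rewrite net0 for VM 110 to use virtio instead of vmxnet3,
# keeping the MAC address and bridge shown in the config above.
qm set 110 --net0 virtio=36:C6:2E:41:94:4F,bridge=vmbr0
```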
3) And here are the results of the same checklist for the container
1) ip link show
root@dhamiri-test-ct:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
30: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f2:6d:28:99:74:ea brd ff:ff:ff:ff:ff:ff link-netnsid 0
root@dhamiri-test-ct:~#
2) ip addr show
root@dhamiri-test-ct:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
30: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f2:6d:28:99:74:ea brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.1.61/24 brd 10.1.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f06d:28ff:fe99:74ea/64 scope link
       valid_lft forever preferred_lft forever
root@dhamiri-test-ct:~#
3) ip route show
root@dhamiri-test-ct:~# ip route show
default via 10.1.1.1 dev eth0 proto static
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.61
root@dhamiri-test-ct:~#
4) Files under /etc/network
root@dhamiri-test-ct:~# cd /etc/network
root@dhamiri-test-ct:/etc/network# ls
if-down.d if-up.d interfaces
root@dhamiri-test-ct:/etc/network#
5) pct conf <cid>
root@proxmox01:~# pct conf 101
arch: amd64
cores: 2
cpuunits: 4096
hostname: dhamiri-test-ct
memory: 4000
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=10.1.1.1,hwaddr=F2:6D:28:99:74:EA,ip=10.1.1.61/24,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-101-disk-0,size=10G
swap: 4000
unprivileged: 1
root@proxmox01:~#
That is all I could fetch for now.

Regards,
Ahmad Dhamiri
 
