Guest container reachability over bridge

untergeek · Jan 25, 2017
The machine has an i7-4790K CPU @ 4.00GHz and 32G of RAM; the storage backend is SSD. Pings to the host itself are clean, but pings to a guest container on the bridge drop roughly two thirds of their packets (details below).

Other machines in the cluster demonstrate the same behavior.

Host: 172.19.73.9
vmbr0 Link encap:Ethernet HWaddr 74:d4:35:e7:6d:4f
inet addr:172.19.73.9 Bcast:172.19.73.255 Mask:255.255.255.0
inet6 addr: fe80::76d4:35ff:fee7:6d4f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4735310 errors:0 dropped:870 overruns:0 frame:0
TX packets:3108084 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1651440111 (1.5 GiB) TX bytes:2147440929 (1.9 GiB)


Guest: 172.19.73.23 for container 102
veth102i0 Link encap:Ethernet HWaddr fe:4b:ce:a7:4a:b1
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:119512 errors:0 dropped:0 overruns:0 frame:0
TX packets:371129 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6470696 (6.1 MiB) TX bytes:42532795 (40.5 MiB)


Gateway: 172.19.73.1
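
One avenue I haven't ruled out: if something else on the segment were using the guest's MAC (3E:FC:69:E8:60:6F per the container config below), the bridge's forwarding table would show its port flapping, which produces exactly this kind of intermittent loss. Assuming bridge-utils is installed on the host, a quick check would be:

# List the bridge's forwarding table and look for the container's MAC;
# if its port number keeps changing, something else on the wire is using it
brctl showmacs vmbr0 | grep -i 3e:fc:69:e8:60:6f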

Container 102 config:
arch: amd64
cores: 2
hostname: REDACTED
memory: 11000
net0: name=eth0,bridge=vmbr0,gw=172.19.73.1,hwaddr=3E:FC:69:E8:60:6F,ip=172.19.73.23/32,type=veth
ostype: ubuntu
mp0: /bigdisk,mp=/bigdisk
rootfs: storage:vm-102-disk-1,size=50G
swap: 0
unprivileged: 0
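
One detail that stands out in that config: the container's IP is a /32 while the host's bridge sits on what looks like a /24 (Mask:255.255.255.0 above). Proxmox can make a /32-plus-gateway setup work, so this may be unrelated to the loss, but if the /32 is unintentional it could be aligned with something like the following (the /24 here is my assumption based on the host's netmask):

# Rewrite net0 with a /24 instead of the /32 (PVE 4.x pct syntax)
pct set 102 -net0 name=eth0,bridge=vmbr0,gw=172.19.73.1,hwaddr=3E:FC:69:E8:60:6F,ip=172.19.73.23/24,type=veth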


Pings to Host (from desktop):
--- 172.19.73.9 ping statistics ---
506 packets transmitted, 506 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.172/0.251/0.399/0.035 ms


Pings to Guest (from desktop):
--- 172.19.73.23 ping statistics ---
607 packets transmitted, 198 packets received, 67.4% packet loss
round-trip min/avg/max/stddev = 0.197/0.270/0.593/0.039 ms


Pings to Guest (from host):
--- 172.19.73.23 ping statistics ---
16 packets transmitted, 5 received, 68% packet loss, time 14996ms
rtt min/avg/max/mdev = 0.006/0.008/0.014/0.004 ms
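
A loss pattern like this, where most packets vanish but the ones that survive are answered quickly, is the sort of thing a duplicate IP or MAC on the segment can produce. One check worth running from the host is an ARP probe while watching which MAC answers; a sketch, assuming the iputils arping is installed:

# Probe the guest's IP on the bridge; replies from more than one MAC
# would point at a duplicate address on the segment
arping -I vmbr0 -c 4 172.19.73.23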



I can use the console in the web UI just fine, with no interruptions in service or availability, so the container itself stays up; it's only traffic over the bridge that suffers. I'm just very confused by this behavior and would like to know how to address it.
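
If it happens again, a capture on the host should show whether the echo requests even make it onto the bridge and which MAC is answering. A minimal sketch (-e prints link-level headers so MACs are visible, and the host filter matches the ARP traffic for that address too):

# Watch traffic to/from the guest's address on the bridge, with MACs shown
tcpdump -eni vmbr0 host 172.19.73.23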
 
This is the output of pveversion -v on all 3 boxes (it's the same on each):

proxmox-ve: 4.4-78 (running kernel: 4.4.35-2-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.35-2-pve: 4.4.35-78
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-85
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-71
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.1-1
pve-container: 1.0-90
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-5
lxcfs: 2.0.5-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
 
Tentative solution, though it doesn't explain how things got into this state:

  1. Shut down all nodes in the cluster and do a clean reboot.
  2. Do an apt update/upgrade to make sure all nodes are on the same patch level (roughly the commands sketched below).
  3. Reboot again.
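
For reference, steps 2 and 3 on each node were roughly the following (plain Debian tooling, nothing Proxmox-specific):

# Bring the node up to the current patch level, then restart it
apt-get update
apt-get dist-upgrade
reboot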

So far, it seems to be okay.
 
