I think we have found the root cause of this issue.
It's the lxc-freeze command, issued by pvesr, and it works as expected: the container freezes.
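To confirm the freeze from the host side, the container's freezer cgroup state can be watched while pvesr runs. This is a sketch assuming cgroup v1 and container ID 100 (both are assumptions, not taken from the thread):

```shell
# Poll the freezer state of container 100 while pvesr runs
# (path assumes cgroup v1; CTID 100 is a placeholder).
watch -n 0.1 cat /sys/fs/cgroup/freezer/lxc/100/freezer.state
# The state cycles THAWED -> FREEZING -> FROZEN and back while the
# container is frozen and thawed.
```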
I don't have enough time right now to find the reason for this; I need to run some tests (replication is off, as my colleagues said) and read the docs...
We separated the cluster network (moved it to a dedicated NIC); the cluster is fine, but the problem persists and, as expected, nothing changed.
We will try to profile the components with strace and perf, debug corosync and lxcfs (FUSE), and update glibc in the container.
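For the profiling step, something like the following could be used (a sketch, not our final methodology; attaching to lxcfs and the 5-second sampling window are assumptions):

```shell
# Attach strace to lxcfs with timestamps and per-syscall timings,
# to see which syscall blocks during the freeze window.
strace -f -tt -T -p "$(pidof lxcfs)" -o lxcfs.trace &

# Sample the whole system with call graphs for a few seconds,
# then inspect the hotspots.
perf record -a -g -- sleep 5
perf report --stdio | head -50
```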
It's not a convincing argument for a network engineer; the traffic from corosync (bidirectional!) is approximately (without DSCP and CoS markings):
400 packets * 162 bytes = 64,800 bytes in 120 ms.
Sustained for one second, that rate is about 4.3 Mbit/s (64,800 bytes * 8 bits / 0.120 s), or roughly 5 Mbit/s once per-frame Ethernet overhead is counted.
And a dedicated network interface is needed for ~5 Mbit/s of traffic?
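The arithmetic can be rechecked with a one-liner, using the capture figures quoted above:

```shell
# 400 packets x 162 bytes observed in a 120 ms window, converted to Mbit/s
# (raw payload rate, without Ethernet framing overhead).
awk 'BEGIN { printf "%.2f Mbit/s\n", 400 * 162 * 8 / 0.120 / 1e6 }'
# -> 4.32 Mbit/s
```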
We will try to set up a dedicated...
Many thanks for the answer, gosha.
Then, for internal traffic of up to 10 Mbps (on a Gigabit network) in a 3-node cluster (without shared storage or replication) with 2 containers, it is not necessary, I think (we observe this issue in a 2-node cluster with 1 container).
Or can you describe why this network...
We found something interesting: the container freezes for 1.2 s when network activity from corosync starts (unicast UDP 5404->5405 between cluster members) and unfreezes after it ends (and an RTP traffic burst is then detected).
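To correlate the freeze with the corosync bursts, a capture filtered to those ports can be left running. The interface name vmbr0 is an assumption; adjust it to the actual cluster link:

```shell
# Capture only corosync unicast traffic between cluster members,
# with inter-packet time deltas to line up against the freeze window.
tcpdump -i vmbr0 -nn -ttt 'udp and portrange 5404-5405'
```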
Any help would be appreciated.
Many thanks for the answer.
No, we use migration only to move the container to different hardware (powered off) for tests.
The 'freeze' problem persists on all nodes (in the lab and in production).
After analyzing the tcpdumps we saw very strange network problems inside the hosts (tcpdump on the external interface of the host...
"A long time ago" we tried to migrate our small office PBX (Asterisk 11 + CentOS 6) to Proxmox (an LXC container) and discovered a voice problem that we called "standstill" (one-way audio).
We have up to 7 concurrent SIP calls (a maximum of 1 Mbps of voice traffic).
Only one container running on...
Alwin, engin: many thanks for the answers and for pointing me in the right direction (udevadm test).
The problem was not in the kernel; it was in the udevd configuration (as applied by systemd).
Someone (a package or a user; I will investigate) created /etc/systemd/network/99-default.link:
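For anyone hitting the same thing: the actual file contents are not quoted here, but a 99-default.link that forces the rename back to ethX would look roughly like this (an illustration only, not the file from our node):

```shell
# A systemd .link file with NamePolicy=kernel keeps the kernel's ethX
# names, overriding the predictable enpXsY scheme. Printed as an example.
cat <<'EOF'
[Link]
NamePolicy=kernel
MACAddressPolicy=persistent
EOF
```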
Alwin, many thanks; I will read the docs about the changes in udev (and systemd) and reload the rules after that.
engin, you are right; the content of /etc/default/grub is here:
and the options net.ifnames=0 biosdevname=0 are not defined on either node.
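For contrast, if those options had been set, /etc/default/grub would contain something like the following (an illustration only; our nodes have no such options):

```shell
# What /etc/default/grub WOULD look like with predictable interface
# naming disabled (not our configuration; shown only for comparison):
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
# after editing, run: update-grub
```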
and there are no significant changes in /boot/config...:
Thanks for answer.
The BIOS has not been updated for more than a year, and the udev rules were not edited by hand.
On the node with the new kernel:
# udevadm info /sys/class/net/eth0
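Alongside udevadm info, the renaming decision itself can be inspected with a dry run; the grep pattern below is a guess at the relevant output lines, not exact udev output:

```shell
# Dry-run the udev rules for the NIC to see which .link file and name
# policy were applied ('udevadm test' makes no changes to the system).
udevadm test /sys/class/net/eth0 2>&1 | grep -iE 'link|ID_NET_NAME'
```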
Hi guys!
I had a 2-node cluster with 4.13.13-6-pve (4.13.13-41) and 4.13.13-5-pve (4.13.13-36).
The hardware is an HP DL360 G7 with an integrated 4-port Broadcom NetXtreme II network card.
After upgrading one node from 4.13.13-36 to 4.13.13-41, all Ethernet interface names changed from enp3s0f0 to eth0...