Wondering if initial connection attempts between two VMs on the same PVE 4.2 cluster being refused is caused by using the PVE firewall, and if so, whether it can be avoided. It looks as if some kind of connection [state] cache has to be populated before the connection is allowed, as the iptables rules dictate. The same thing happens again after idling for a while (as if a cache TTL expires). Any hints & clues appreciated, TIA!
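(For reference, the [state] cache I mean is presumably the netfilter conntrack table that the PVE firewall's iptables rules match against. A rough sketch of what one could check on the host, assuming the conntrack CLI from conntrack-tools is installed; the sysctls are the standard nf_conntrack ones:)

  conntrack -L | grep <redacted>.155    (list tracked connections toward the target VM)
  sysctl net.netfilter.nf_conntrack_tcp_timeout_established
  sysctl net.netfilter.nf_conntrack_generic_timeout
  iptables-save | grep -Ei 'conntrack|ctstate|state'    (which firewall rules match on connection state)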
The connection attempts below rapidly follow one another:
#n2:/> telnet dcs3.<redacted> 389
Trying <redacted>.155...
telnet: connect to address <redacted>.155: Connection refused
#n2:/> telnet dcs3.<redacted> 389
Trying <redacted>.155...
Connected to dcs3.<redacted>.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
#n2:/> telnet dcs4.<redacted> 389
Trying <redacted>.156...
telnet: connect to address <redacted>.156: Connection refused
#n2:/> telnet dcs4.<redacted> 389
Trying <redacted>.156...
Connected to dcs4.<redacted>.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
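(In case it helps anyone looking at this, a rough sketch of how one could watch the firewall side on the PVE host while reproducing the refusal; pve-firewall status/compile are the stock subcommands, and the log path is the default one, so adjust if your setup differs:)

  pve-firewall status              (confirm the firewall service is enabled and running)
  pve-firewall compile | less      (inspect the generated rule set without applying it)
  tail -f /var/log/pve-firewall.log    (watch for dropped/rejected packets during the telnet test)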
PVE 4.2 @ n1:
root@n1:~# pveversion -verbose
proxmox-ve: 4.2-51 (running kernel: 4.4.8-1-pve)
pve-manager: 4.2-5 (running version: 4.2-5/7cf09667)
pve-kernel-4.4.8-1-pve: 4.4.8-51
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-75
pve-firmware: 1.1-8
libpve-common-perl: 4.0-62
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-17
pve-container: 1.0-64
pve-firewall: 2.0-27
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
openvswitch-switch: 2.5.0-1