Search results

  1. stefws

    Initial cnx refused

    We don't appear to have the issue between two VMs with non-firewalled IFs, so I assume it's to do with plugging the extra FW/std. bridges into the communication path somehow.
      [root@speB ~]# arp | grep dcs
      [root@speB ~]# telnet dcs1 389
      Trying 10.45.69.16...
      Connected to dcs1.
      Escape character is '^]'...
  2. stefws

    Initial cnx refused

    It appears it could be an ARP cache warm-up issue; if I do an arping first there's no initial issue (a warm-up sketch follows these results):
      #n2:/> arping -I eth1 -f dcs4.<redacted>; telnet dcs4.<redacted> 389
      ARPING <redacted>.156 from <redacted>.185 eth1
      Unicast reply from <redacted>.156 [92:B9:56:CE:03:E6] 1.150ms
      Sent 1 probes (1...
  3. stefws

    Initial cnx refused

    Could it be the ARP cache maybe? On the destination VM I'm seeing relatively frequent ARP requests like these whenever there's communication, otherwise not, as if the server wants to ensure client peers are still there (on the same HN maybe, though the mac-addr should change during live migration, switches might...
  4. stefws

    Initial cnx refused

    According to this it should be the remote side not listening, only that is not the case. Seems more like some sort of cache needs to make a note of peers wanting to connect...
  5. stefws

    Initial cnx refused

    Anyone know what might trigger an RST reply to an initial (first in a while) SYN request between two VMs' firewalled NICs attached to the same vLAN, only split seconds later not to? tcpdump of the initial cnx attempt getting refused:
      15:04:56.786105 IP <redacted>.185.60362 > <redacted>.154.ldap: Flags [S], seq...
  6. stefws

    Initial cnx refused

    only the tested destinations are not routed via the default GW but via a directly attached NIC, very strange; other VMs are not seeing the same issue, though they are similar... got a feeling that I'm overlooking something...
  7. stefws

    Initial cnx refused

    Seems it's probably only VMs that have their default gateway via a floating IP on an HAProxy LB cluster... will investigate further
  8. stefws

    Initial cnx refused

    Wondering if initial connection attempts between two VMs on the same PVE 4.2 cluster being refused is due to using the PVE FW, and if so whether this could be avoided. It seems like some kind of connection [state] cache needs to be set initially before being allowed as the iptables rules dictate. Happens again after...
  9. stefws

    Port forwarding VM=>VM

    Searching on iptables and port forwarding and reading up on iptables forwarding might help you.
  10. stefws

    Port forwarding VM=>VM

    Specifically for SSH you could create an SSH tunnel; check the various links on this (a sketch of both approaches follows these results).
  11. stefws

    Randomly Inter VM NW cnx issues

    Stupid me :oops: Missed events like these in our HAProxy VM at peak traffic time (see the conntrack sketch after these results):
      May 31 12:10:00 hapA kernel: nf_conntrack: table full, dropping packet
      May 31 12:10:00 hapA kernel: nf_conntrack: table full, dropping packet
      May 31 12:10:00 hapA kernel: nf_conntrack: table full, dropping...
  12. stefws

    Randomly Inter VM NW cnx issues

    This NW connectivity issue causes our HAProxy to see real servers as flapping up/down, as health checking randomly fails to connect. So customers are seeing the LB service latency flapping as well :confused: See the HAProxy status samples, in part:
  13. stefws

    Randomly Inter VM NW cnx issues

    VMs use virtio_net on NICs @vmbr1 and usually e1000 on NICs @vmbr0, except the few that also have multiqueue NICs to this bridge. No HN vhost process (the HN-side userland process for virtio_net's vring, a shared memory buffer) is running maxed out on the HNs. So I don't understand the presumed packet drops...
  14. stefws

    Randomly Inter VM NW cnx issues

    A few drops on bond0, but even more on vmbr0, only this isn't used for inter-VM traffic. A few packets later: no errors/drops etc. on bond1 nor its slaves, but quite some drops (1 being too many) on vmbr1 (used for inter-VM traffic), only no packet counts; why do I see dropped packets... (see the counter-reading sketch after these results)
  15. stefws

    Randomly Inter VM NW cnx issues

    Got an application spread over multiple VMs across multiple hypervisor nodes (HNs), utilizing PVE firewalling. Some central NW VMs (load balancers) have multiqueue NICs to be able to handle more packets. HNs are HP ProLiant 360 Gen9, each having two bonded NICs, bond0 over two separate 1 Gbps...
  16. stefws

    Control chain PVEFW-FWBR-IN from PVE WebUI

    What would be the purpose of having rules at hypervisor node level, i.e. what's the use case compared with Datacenter and VM level rules?
  17. stefws

    Control chain PVEFW-FWBR-IN from PVE WebUI

    Forgot to enable the VM's firewall under VM->Firewall->Options; then both the group and the reference show up :)
  18. stefws

    Control chain PVEFW-FWBR-IN from PVE WebUI

    I define a security group at Datacenter level and add it to several VM rule sets, only I don't see this group show up in iptables and the rule sets for the referencing VMs... wondering why not?
  19. stefws

    Control chain PVEFW-FWBR-IN from PVE WebUI

    Seems per-VM rules go into the PVEFW-FWBR chains. Meaning, to simulate the former central rules, I need to duplicate them for every VM, right?
  20. stefws

    Control chain PVEFW-FWBR-IN from PVE WebUI

    Trying to grasp how to use the PVE firewall. Got a PVE cluster which only holds one tenant/application and am trying to replicate rules from a former central FW for it. Have defined global IPsets and security groups at Datacenter level. Adding rules at Datacenter level ends up in the... (a chain-inspection sketch follows these results)
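
Results 2 and 3 describe warming the ARP cache before the first connection attempt. A minimal sketch of that warm-up, assuming a hypothetical interface eth1 and destination host dcs4 in place of the redacted names:

  # prime the neighbour cache first (-f: stop after the first reply), then connect
  arping -I eth1 -f dcs4 && telnet dcs4 389
  # inspect the resulting neighbour cache entries on that interface
  ip neigh show dev eth1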
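
Results 9 and 10 point at iptables port forwarding and SSH tunnelling for reaching one VM through another. A rough sketch of both, with made-up addresses 192.168.1.10 (front VM) and 192.168.1.20 (target VM) and port 2222 that are not taken from the thread:

  # on the front VM: DNAT incoming port 2222 to the target VM's sshd
  sysctl -w net.ipv4.ip_forward=1
  iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.1.20:22
  iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.20 --dport 22 -j MASQUERADE
  # the FORWARD chain must also accept this traffic if its policy is DROP

  # or, for SSH only, tunnel through the front VM from a client machine
  ssh -L 2222:192.168.1.20:22 user@192.168.1.10
  ssh -p 2222 user@localhost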
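
Result 11 runs into the nf_conntrack table limit on the HAProxy VM. A sketch of how one might check and raise that limit; the value below is a placeholder, not taken from the thread:

  # compare current connection-tracking usage with the limit
  cat /proc/sys/net/netfilter/nf_conntrack_count
  cat /proc/sys/net/netfilter/nf_conntrack_max
  # raise the limit at runtime (example value only)
  sysctl -w net.netfilter.nf_conntrack_max=262144
  # make it persistent across reboots
  echo 'net.netfilter.nf_conntrack_max = 262144' >> /etc/sysctl.d/99-conntrack.conf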
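
Results 13-15 look for packet drops on the bonds and bridges. One way to read the per-interface error and drop counters discussed there (interface names as in the posts, eth0 is only an example slave):

  # RX/TX statistics including drops for the inter-VM bridge and its bond
  ip -s link show vmbr1
  ip -s link show bond1
  # NIC/driver level counters for a bond slave
  ethtool -S eth0 | grep -i drop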
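
Results 16-20 ask where Datacenter, node and VM level rules end up. A quick way to inspect the chains that pve-firewall generates on a node; the security-group name mygroup is hypothetical:

  # list the generated PVEFW chains and their rules
  iptables-save | grep -E '^:PVEFW|-A PVEFW'
  # have pve-firewall compile and print the rule set it would apply
  pve-firewall compile
  # check whether a given security group chain is referenced anywhere
  iptables-save | grep -i mygroup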
