Connection error 595 ... again

Discussion in 'Proxmox VE: Installation and configuration' started by tim taler, Feb 11, 2019.

  1. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    Hi all,
    I know this has been discussed before, but I ran into this error too and have an additional question/different set up.
    I'd like to run a two-node cluster in a hosted environment.
    Good news: the cluster is running, both nodes are "green" in the webgui, pvecm status looks good etc etc.

    Both nodes have two interfaces, one facing the public internet and one connected via a crosslink cable to the other host (10G).
    I created the cluster with:

    pvecm create <clustername> --bindnet0_addr <ip> --ring0_addr <ip>

    directing the cluster-internal traffic to the cross-linked interface.
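
    (A quick, hedged sanity check that corosync really bound to the cross-link address: corosync-cfgtool ships with corosync on a stock install and prints the ring status; the "id" shown for ring 0 should be the internal IP, not the public one.)

    corosync-cfgtool -s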

    The firewall allows all traffic between the two cross-linked interfaces AND all traffic between the two external addresses;
    IGMP is accepted between the cross-linked interfaces.

    Now when I log into the webgui of host A, both hosts are marked "green" in the left menu;
    when I click on host A everything works as expected (attributes are displayed in the main window)
    when I click on host B I get the "Connection Error 595: Connection refused"
    (AND vice versa, I can log into webgui of host B but from there can't access host A)

    I have multicast ONLY between the cross-linked interfaces;
    my provider doesn't allow multicast over the external addresses.
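
    (If multicast on the cross link is ever in doubt, the usual sanity check is omping between the internal addresses; it needs the omping package installed ("apt install omping") and has to be started on both nodes at roughly the same time. A sketch using the anonymized addresses from above:)

    omping -c 600 -i 1 -q <hostA_internal_address> <hostB_internal_address>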

    Might that be the reason (and if so, that would be ... sad and bad ;-) )?
    Or am I missing some special config step to make it work?

    Any hint is appreciated!

    TIA
     
  2. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
  3. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0

    Well as far as I understand, yes.
    I can't give our actual ips here, but here's our /etc/pve/firewall/cluster.fw

    vmbr0 has the public IP
    vmbr1 has the internal/corosync IP

    ----------------------------------------------------

    [OPTIONS]

    enable: 1
    policy_in: DROP

    [ALIASES]

    [IPSET ips_proxmox] # All IPs of servers running Proxmox

    <hostA_public_address> # hostA
    <hostA_internal_address> # hostA-corosync
    <hostB_public_address> # hostB
    <hostB_internal_address> # hostB-corosync

    [IPSET ips_proxmox_wan] # WAN IPs of servers running Proxmox

    <hostA_public_address> # hostA
    <hostB_public_address> # hostB

    [IPSET ips_static_admin_adress] # Reserved (Admin-)Desktop IPs

    <my_static_admin_adress> # admin


    [RULES]

    IN Ping(ACCEPT) # Allow ping
    GROUP sg_prox_corosync # Proxmox Corosync traffic
    GROUP sg_prox_corosync -i vmbr1 # Proxmox Corosync traffic
    GROUP sg_prox_cluster # Proxmox cluster traffic
    GROUP sg_prox_admin # Proxmox admin access
    IN REJECT -p tcp # Reject TCP
    IN DROP # DROP everything else

    [group sg_prox_admin] # Proxmox admin access

    IN ACCEPT -source +ips_static_admin_adress -p icmp # ICMP
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 22 # SSH
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 3128 # SPICE
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 5900:5999 # VNC Web console
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 8006 # HTTPS/WebGUI

    [group sg_prox_cluster] # Proxmox cluster traffic

    IN ACCEPT -source +ips_proxmox # Accept cluster traffic

    [group sg_prox_corosync] # IGMP for Corosync

    IN ACCEPT -p igmp # IGMP for Corosync

    ----------------------------------------------------
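
    (One hedged way to test these rules directly: with the firewall enabled, try to reach port 8006 on host B from host A, e.g. with curl. The GUI of the node you are logged into forwards requests to port 8006 of the other node, so if this is refused, the 595 error is expected:)

    curl -k https://<hostB_public_address>:8006/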
     
  4. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
    On which IPs is the web interface listening? Requests are forwarded from the node you are logged into to the other node if tasks need to be done there.
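
    (One way to check what pveproxy is bound to, run on each node; ss is part of iproute2 on Debian:)

    ss -tlnp | grep 8006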
     
  5. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    pveproxy listens on ":::8006".
    I access the WebGUI through the public address (+ips_proxmox_wan).
    Forwarded how? From which source to which destination? Public address to public address?
    I added an "IN ACCEPT" rule for +ips_proxmox_wan but still get the same error.


    cluster.fw is now:

    --------------------------------------------------------------------------

    [OPTIONS]

    enable: 1
    policy_in: DROP

    [ALIASES]

    [IPSET ips_proxmox] # All IPs of servers running Proxmox

    <hostA_public_address> # hostA
    <hostA_internal_address> # hostA-corosync
    <hostB_public_address> # hostB
    <hostB_internal_address> # hostB-corosync

    [IPSET ips_proxmox_wan] # WAN IPs of servers running Proxmox

    <hostA_public_address> # hostA
    <hostB_public_address> # hostB

    [IPSET ips_static_admin_adress] # Reserved (Admin-)Desktop IPs

    <my_static_admin_adress> # admin


    [RULES]

    IN Ping(ACCEPT) # Allow ping
    GROUP sg_prox_corosync # Proxmox Corosync traffic
    GROUP sg_prox_corosync -i vmbr1 # Proxmox Corosync traffic
    GROUP sg_prox_cluster # Proxmox cluster traffic
    GROUP sg_prox_admin # Proxmox admin access
    IN REJECT -p tcp # Reject TCP
    IN DROP # DROP everything else

    [group sg_prox_admin] # Proxmox admin access

    IN ACCEPT -source +ips_static_admin_adress -p icmp # ICMP
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 22 # SSH
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 3128 # SPICE
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 5900:5999 # VNC Web console
    IN ACCEPT -source +ips_static_admin_adress -dest +ips_proxmox_wan -p tcp -dport 8006 # HTTPS/WebGUI

    [group sg_prox_cluster] # Proxmox cluster traffic

    IN ACCEPT -source +ips_proxmox # Accept cluster traffic
    IN ACCEPT -source +ips_proxmox_wan # Accept cluster traffic

    [group sg_prox_corosync] # IGMP for Corosync

    IN ACCEPT -p igmp # IGMP for Corosync

    --------------------------------------------------------------------------
     
  6. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
    To what IP does the hostname resolve? Check the output of 'iptables-save' to see whether something else is dropping packets.
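
    (For example, something along these lines on each node:)

    iptables-save | grep -E 'DROP|REJECT'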
     
  7. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    The hostname resolves to the address in ips_proxmox_wan (the public IP address).

    I will check iptables again ... it would help, though, if the firewall actually logged when configured to do so:

    cat /etc/pve/nodes/<hostname>/host.fw
    [OPTIONS]

    tcpflags: 1
    smurf_log_level: info
    tcp_flags_log_level: info
    log_level_in: debug
    log_level_out: debug
     
  8. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
    You need to enable the firewall on the host to get debugging.
     
  9. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    Ah, I see,
    but the host firewall
    (/etc/pve/nodes/<node_name>/host.fw)
    takes precedence over the cluster-wide settings?
    (/etc/pve/firewall/cluster.fw)
    (meaning if I restrict a port in cluster.fw but allow it in host.fw, then it will be allowed?)

    Can I safely put the above-mentioned [OPTIONS] section into
    the cluster-wide configuration and have each host log accordingly?
     
  10. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
  11. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    The debug settings were set on both nodes,
    but only cluster.fw was configured.
    The firewall on the nodes was on, but no individual rules were configured.
    I will be away for a couple of days, but then I need to come back and get this solved.
    Thnx for having a look at this so far!
     
  12. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    So, I'm back and sadly the situation hasn't changed ...
    I can confirm that it is the firewall settings on the cluster that produce the "Connection refused (595)" error:
    when I switch off the cluster firewall through the GUI I can access the other host (error gone).

    I don't understand how my firewall rules can be too restrictive, since I allow everything in from the addresses of all interfaces;
    additionally I allow IGMP in.

    The firewall logging doesn't work - at least it doesn't log any rejected/dropped packets, although it is enabled on both hosts:

    -------------------
    cat /etc/pve/nodes/<node_name>/host.fw

    [OPTIONS]

    tcp_flags_log_level: info
    enable: 1
    smurf_log_level: info
    log_level_in: debug
    log_level_out: debug
    tcpflags: 1
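
    (With the host firewall enabled and log_level_in set, dropped/rejected packets should normally show up in the per-node firewall log, e.g.:)

    tail -f /var/log/pve-firewall.log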
     
  13. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
    What does the network configuration of the servers look like?
     
  14. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    network for host A looks like this:
    --------------------

    auto lo
    iface lo inet loopback
    iface lo inet6 loopback

    auto <if1>
    iface <if1> inet manual

    auto <if2>
    iface <if2> inet manual

    auto vmbr0
    iface vmbr0 inet static
    address <hostA_public_address>
    netmask <hostA_public_netmask>
    gateway <hostA_public_gateway>
    up route add -net <hostA_public_net> netmask <hostA_public_netmask> gw <hostA_public_gateway> dev vmbr0
    bridge-ports <if1>
    bridge-stp off
    bridge-fd 0

    iface vmbr0 inet6 static
    address <hostA_public_ipv6address>
    netmask 64
    gateway <hostA_public_ipv6gateway>
    bridge-ports <if1>
    bridge-stp off
    bridge-fd 0

    auto vmbr1
    iface vmbr1 inet static
    address <hostA_internal_address>
    netmask <hostA_internal_netmask>
    bridge-ports <if2>
    bridge-stp off
    bridge-fd 0

    -----------------------------

    "hostname" and "hostname -f" resolve to <hostA_public_address>
    (accordingly on host B)
    the connection between the two vmbr1 interfaces is okay... as said, everything seems okay; the cluster is happy, except for the GUI connection from host A to host B.

    and - as said - it is definitely a problem with the firewall (firewall off = problem goes away).

    Any tests I can run?
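
    (One hedged suggestion: since Alwin's questions above imply that the forwarded GUI request goes to whatever address the other node's name resolves to, it may be worth listing all addresses, IPv4 and IPv6, that host B's name resolves to from host A; anything in that list not covered by the firewall IPSETs is a candidate for the refused connection. <hostB_hostname> is just a placeholder here:)

    getent ahosts <hostB_hostname>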

    TIA
     
  15. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
    This may be the issue: the traffic is routed through your gateway to the second node, and that IP is not whitelisted. Check your 'ip route' output; this route should be obsolete.
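
    (For example, run on host A with the anonymized address from above:)

    ip route get <hostB_public_address>

    (If the output says "via <hostA_public_gateway>", traffic to host B really leaves through the provider gateway instead of staying local.)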
     
  16. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    Ah I see! thnx!!
    I will have a look into that and will report back.
    Best regards
     
  17. tim taler

    tim taler New Member

    Joined:
    Mar 8, 2018
    Messages:
    17
    Likes Received:
    0
    It wasn't the route ... it was IPv6.

    Just adding the IPv6 addresses to (in our case)
    "ips_proxmox" and
    "ips_proxmox_wan"

    solved the problem.
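
    (For anyone finding this later, a sketch of what the relevant IPSETs then look like; the <..._ipv6address> and <hostB_internal_address> placeholders just mirror the anonymized entries above:)

    [IPSET ips_proxmox] # All IPs of servers running Proxmox

    <hostA_public_address> # hostA
    <hostA_public_ipv6address> # hostA IPv6
    <hostA_internal_address> # hostA-corosync
    <hostB_public_address> # hostB
    <hostB_public_ipv6address> # hostB IPv6
    <hostB_internal_address> # hostB-corosync

    [IPSET ips_proxmox_wan] # WAN IPs of servers running Proxmox

    <hostA_public_address> # hostA
    <hostA_public_ipv6address> # hostA IPv6
    <hostB_public_address> # hostB
    <hostB_public_ipv6address> # hostB IPv6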

    Thnx for your effort!
     