Search results for query: 192.168.100.2

  The following words were not included in your search because they are too short, too long, or too common: 2
  1. B

    setting up my pfsense router

    Ooh, that might be the issue, isn't it? I might have also messed it up in the installation, because I know I set up one of my bridges to be 192.168.1.100, but I think I allowed DHCP and it changed it to, I think, the WAN. So I guess I try to set the 192.168.1.81 or whatever it was to 192.168.1.100?
  2. R

    Controlled bidirectional synchronization script for Proxmox VE (pve-zsync)

    Hello community, I'm sharing a solution I implemented to keep two (or more) Proxmox servers synchronized, with dynamic control of direction, bandwidth, and stop/start without halting the main process. It is intended for environments that need a mirror server...
  3. M

    Installation internet

    info: requesting link dump info: requesting address dump info: requesting netconf dump info: loading builtin modules from ['/usr/share/ifupdown2/addons'] info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found) info: module openvswitch_port not loaded (module init...
  4. J

    [SOLVED] Fixing broken cluster

    Thanks @fabian ! Here's the results for Server1: root@LAB-server1:~:$ pvecm status Cluster information ------------------- Name: LAB-home Config Version: 2 Transport: knet Secure auth: on Quorum information ------------------ Date: Thu Apr 16 09:04:41...
  5. F

    How to build isolated VXLAN networks across Proxmox cluster nodes — and why they still can't reach outside

    Before getting into it: this is not meant to be the only or universally best way to do this. You can solve similar problems with router VMs, OPNsense/pfSense, or more fabric-style designs. This post is specifically about a minimal-resource, practical pattern for making isolated VXLAN networks...
  6. M

    [SOLVED] VM Freeze - vCPU stuck in kvm_vcpu_block - Only Docker VMs affected

    Hi everyone, I've been dealing with a frustrating issue for a while now and after extensive debugging I've gathered enough data to hopefully get some expert input. I'm aware that there are already several threads about VM freezes in this forum, but none of them seem to match this specific case...
  7. B

    EVPN, IPv6 and routed underlay

    Hi, we are currently transforming our infrastructure to ipv6 only. Additionally we are moving away from LACP bonds to routed host addresses to get independent of switch vendors. Technically this is implemented by adding a /128 address to a loopback, dummy or bridge interface, and using a...
  8. R

    SMB Performance Woes

    Thank you so much for your help! I got a pretty major breakthrough today! The CPU was barely getting touched at all (<5%) while running FIO, so I tried changing the encryption but noticed it was already false; I turned off RejectUnencryptedAccess, but that didn't change anything. Debug is...
  9. J

    [SOLVED] Fixing broken cluster

    Hi Community, I have two small servers in my homelab: one at 192.168.1.100, the other at 192.168.110. I had a network issue and decided to set a static DHCP address for the second at 192.168.1.120. Unfortunately, this broke the cluster: the UI returns the error message "hostname lookup...
  10. R

    SMB Performance Woes

    Gotcha, the ping issue with the link-local addresses makes sense. Should I just assign static IPs to the interfaces, then? I was getting conflicting information about that + subnets. I tried adding rdma + vers=3.1.1 + cache=none, but it didn't seem to make a difference. Current fstab entry...
  11. fstrankowski

    NFS Share on Synology DS218+,

    Which AI did you use to generate that answer?
  12. P

    NFS Share on Synology DS218+,

    Hi hansB, The symptom you described—showmount eventually working but with a massive delay—could be a classic indicator of a DNS / Reverse DNS Resolution timeout. NFS servers (like your Synology NAS) often try to perform a reverse DNS lookup to verify the hostname of the incoming client IP...
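A quick way to check P's reverse-DNS theory is to time a PTR lookup for the client's address yourself; a multi-second result here would line up with the showmount delay described. A minimal sketch (the IP is just an example):

```python
import socket
import time

def rdns_lookup_time(ip):
    """Time a reverse DNS (PTR) lookup for the given IP.

    If this takes several seconds for the NFS client's address, the
    server (e.g. a Synology NAS) is likely stalling on the same lookup
    before answering showmount/mount requests.
    """
    start = time.monotonic()
    try:
        host = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        host = None  # no PTR record; an instant failure is harmless
    return host, time.monotonic() - start

host, elapsed = rdns_lookup_time("127.0.0.1")
print(host, round(elapsed, 3))
```

A fast answer (or fast failure) rules rDNS out; only a slow timeout supports the theory.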
  13. S

    Deleted container rootfs disk

    The following is my CT configuration. cores: 10 features: mount=nfs hostname: downloader memory: 16384 nameserver: 1.1.1.1 8.8.8.8 9.9.9.9 net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=BC:24:11:BE:C0:ED,ip=192.168.1.50/24,type=veth onboot: 1 ostype: centos startup...
  14. M

    [SOLVED] Turning on firewall with ACCEPT policies everywhere makes hosts unreachable

    @PaddraighOS thank you so much for looking into this and your reply. Great eyes you got. There is no masquerading going on whatsoever on the PVE, but you are still 100% right I think. Our PVE is connected to a firewall on a trunk port, and traffic between VLANs does actually go via that...
  15. P

    [SOLVED] Console access from behind an LB

    Hi. The console issue happens because Proxmox's noVNC/SPICE console connections use a ticket-based system where the ticket is issued by and valid only on the specific node running the VM. So when HAProxy routes your initial API/UI request to node A but the console WebSocket connection lands on...
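Given that diagnosis, the usual fix is to make the load balancer deterministic per client, so the console WebSocket reaches the same node that issued the ticket (HAProxy's `balance source` works this way). The core idea, sketched in Python with hypothetical node names:

```python
import hashlib

# Hypothetical backend names; any stable identifiers would do.
NODES = ["pve-node-a", "pve-node-b", "pve-node-c"]

def pick_node(client_ip, nodes=NODES):
    """Deterministically map a client IP to one backend node.

    Because the choice depends only on the client IP, the initial UI/API
    request and the later console WebSocket from the same client both
    land on the node that issued the node-local VNC ticket.
    """
    digest = hashlib.sha256(client_ip.encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

print(pick_node("203.0.113.7") == pick_node("203.0.113.7"))
```

The trade-off is the same as with any source-hash balancing: clients behind one NAT all pin to one node.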
  16. C

    Proxmox Vlan Issues 8.4 to 9.1

    Hey there, I'm unfortunately having some issues with 9.1, and I'm just here double-checking whether anyone else has any ideas or solutions for this odd problem. Unfortunately, this configuration works perfectly on my 8.4.1.7 PVE versions, but when updated to Proxmox 9 or fresh installed, this...
  17. t.lamprecht

    proxmox and WLAN, does that work?

    If WLAN already works in general, it should also be usable via a bond. It may be that the API/UI does not accept the name of the WLAN interface, but then it should still be configurable manually in /etc/network/interfaces, roughly like this (untested!): # ... existing config...
  18. I

    Lose Access after every Restart

    Ok then, will do. I will try all of these next week... was out of office most of today. Thanks for the usual assistance.
  19. N

    Lose Access after every Restart

    Hah... that really sounds strange... and interesting :) I'll be honest, I used some AI to analyse the information you provided. So here is what I found (what the AI recommends I check): 1. --- "Partner Mac Address: 00:00:00:00:00:00 Partner Churn State: churned In a healthy LACP...
  20. N

    Lose Access after every Restart

    Is it possible to dump the bond status after a reboot, while it is still not working? Something like: # cat /proc/net/bonding/bond0 > $SOME_TMP_FILE_1 Then bring up the connection manually and do the same command, but to another file, e.g.: # cat /proc/net/bonding/bond0 > $SOME_TMP_FILE_2...
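N's before/after dumps are easiest to compare mechanically. A small sketch that reports only the bond fields that changed between the broken and working state (the sample dump contents are assumptions, not from the thread):

```python
import difflib

def changed_bond_fields(before_text, after_text):
    """Return only the lines that differ between two bond status dumps,
    e.g. the contents of /proc/net/bonding/bond0 captured right after
    reboot and again after manually bringing the link up."""
    diff = difflib.unified_diff(
        before_text.splitlines(), after_text.splitlines(), lineterm="")
    return [line for line in diff
            if line[:1] in "+-" and not line.startswith(("+++", "---"))]

# Assumed example dumps, echoing the LACP symptoms mentioned above:
before = "MII Status: down\nPartner Mac Address: 00:00:00:00:00:00"
after = "MII Status: up\nPartner Mac Address: aa:bb:cc:dd:ee:ff"
for line in changed_bond_fields(before, after):
    print(line)
```

Lines prefixed `-` are the broken state, `+` the working one; identical fields are suppressed, which narrows attention to whatever the manual bring-up actually fixed.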