Search results

  1. Networking Question - eth0: entered promiscuous mode || eth0: left promiscuous mode loop in dmesg

    Looks like I found the culprit in another post: https://forum.proxmox.com/threads/e...spammed-by-eth0-left-promiscuous-mode.153412/ Odd behavior indeed ;-)
  2. Network device promiscuous mode

    Looks like I found the culprit in another post: https://forum.proxmox.com/threads/eth0-doesnt-exist-yet-console-is-spammed-by-eth0-left-promiscuous-mode.153412/ Odd behavior indeed ;-)
  3. eth0 doesn't exist, yet console is spammed by "eth0: left promiscuous mode"

    Sorry to hijack this thread - I have a similar issue occurring on latest version 8.4.1 - coincidentally, I also have an LXC running watchyourlan - so that looks like the culprit: Apr 27 12:18:36 FoxN100 kernel: eth0: entered promiscuous mode Apr 27 12:18:38 FoxN100 kernel: eth0: left...
  4. Networking Question - eth0: entered promiscuous mode || eth0: left promiscuous mode loop in dmesg

    Sorry to hijack this thread - I have a similar issue occurring on latest version 8.4.1: Apr 27 12:18:36 FoxN100 kernel: eth0: entered promiscuous mode Apr 27 12:18:38 FoxN100 kernel: eth0: left promiscuous mode Apr 27 12:18:38 FoxN100 kernel: eth1: entered promiscuous mode Apr 27 12:18:40...
  5. Network device promiscuous mode

    Sorry to hijack this thread - I have a similar issue occurring on latest version 8.4.1: Apr 27 12:18:36 FoxN100 kernel: eth0: entered promiscuous mode Apr 27 12:18:38 FoxN100 kernel: eth0: left promiscuous mode Apr 27 12:18:38 FoxN100 kernel: eth1: entered promiscuous mode Apr 27 12:18:40...
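The entered/left churn quoted in these results can be tallied per interface before hunting for the container responsible. A minimal sketch, assuming journalctl/dmesg-style log lines (the inlined samples are the ones quoted above; in practice you would pipe in `journalctl -k`):

```shell
# Tally "entered/left promiscuous mode" events per interface from kernel
# log lines; field 6 holds the interface name ("eth0:") in this format.
counts=$(awk '/promiscuous mode/ { iface=$6; sub(":", "", iface); n[iface]++ }
              END { for (i in n) print i, n[i] }' <<'EOF'
Apr 27 12:18:36 FoxN100 kernel: eth0: entered promiscuous mode
Apr 27 12:18:38 FoxN100 kernel: eth0: left promiscuous mode
Apr 27 12:18:38 FoxN100 kernel: eth1: entered promiscuous mode
EOF
)
echo "$counts"
```

An interface that keeps flipping rapidly points at whichever guest (here, the watchyourlan LXC) is repeatedly opening and closing a raw socket on the bridge.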
  6. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    After diving in head first with 6.14 - I somehow borked my cluster. Going back to 6.11 got everything back to "normal" - I have 5 different mini PCs running in the cluster - all over 10GB or LACP'd 2.5GB NW (aka 5GB) - I have no high traffic and I committed the cardinal sin of having the corosync...
  7. QEMU 9.2 available on pvetest and pve-no-subscription as of now

    Stupid question #354 - running latest PVE 8.3.4 - enabled pvetest and wanted to try QEMU 9.2 - after the update I see pve-qemu-kvm: 9.2.0-2 in pveversion -v, but the qemu-server version says: qemu-server: 8.3.8 :oops: Is that right ??
  8. PCI Pass-through: iommu & vfio on 8.2

    Kind of confusing - because there are still many references around (like here: Thomas-Krenn) which lead you to believe it's still relevant. I have several machines in my cluster which have been upgraded over time and still have those entries in /etc/modules - Guess it doesn't hurt . . . :rolleyes:
  9. PCI Pass-through: iommu & vfio on 8.2

    I noticed that IOMMU is enabled by default and no longer requires the "intel_iommu=on" boot parameter - are the vfio modules still required? In the past we always had to add: vfio vfio_iommu_type1 vfio_pci vfio_virqfd to the /etc/modules file ??
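For reference, the historical /etc/modules entries the post is asking about looked like this; note that on recent kernels (reportedly 6.2 onward) vfio_virqfd has been merged into the core vfio module, so at least that entry is legacy:

```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```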
  10. Pros and cons: create VLANs in Proxmox or in OPNsense?

    Which processor? I have an R86s device with an N5105 CPU (Amazon) with 16GB RAM - directly from the OPNSense VM I get 580MB in both directions with Speedtest-CLI, but over VLANs and WLAN I get around 400MB on average ?!?
  11. Pros and cons: create VLANs in Proxmox or in OPNsense?

    Quick question on this - which of the 3 options performs best ?!? Has anyone tested it?? I'm currently on option 2 and don't get the full throughput of my 600MB fiber line . . . Thanks in advance!
  12. Best practice - Migrate existing cluster NW to SDN ?

    Thanks Stefan! The documentation is a bit high level - but I've watched a few videos online and have the general understanding. I guess it would have been helpful for those of us who have legacy installations and want to "migrate" or convert to the SDN features - and know what is best...
  13. Best practice - Migrate existing cluster NW to SDN ?

    I thought as much - appreciate the offer, but my config is a bit convoluted! Maybe you can just give general pointers as a best-practice reference . . . ?!? Cheers, Robert
  14. Best practice - Migrate existing cluster NW to SDN ?

    This might be a newbie question: I have a five-node cluster which I started with Proxmox 7.1 and subsequently updated to the latest 8.1.11 - I have up to now manually configured each node with its own network settings (different machines & HW) Now that SDN is in full swing, is there a simple...
  15. Best practice - Migrate existing cluster NW to SDN ?

    This might be a newbie question: I have a five-node cluster which I started with Proxmox 7.1 and subsequently updated to the latest 8.1.11 - I have up to now manually configured each node with its own network settings (different machines & HW) Now that SDN is in full swing, is there a simple...
  16. What is considered unusually high disk io for an NVME SSD ?

    Thx! No ZFS on this box (R86s) - Small form factor and just one nvme slot - so a single 500G device using LVM & LVM-Thin
  17. What is considered unusually high disk io for an NVME SSD ?

    Thanks for the quick answer! I know the nvme can handle such volumes of writes - my question was more about sustained writes at that volume (presently around 5M) constantly over a long period . . . I checked the SMART values - and to my surprise :oops:, the nvme shows 13% wearout and 71TB...
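The SMART figures quoted above can be turned into a rough endurance projection. A back-of-the-envelope sketch using the post's numbers (13% wearout after 71 TB written) and assuming wear accumulates linearly, which is a simplification:

```shell
# Rough endurance projection from the SMART figures in the post:
# 13% wearout after 71 TB written implies ~100/13 times that in total.
written_tb=71
wearout_pct=13
projected_tbw=$(( written_tb * 100 / wearout_pct ))   # integer TB
echo "projected total endurance: ~${projected_tbw} TB"
```

With ~546 TB projected against 71 TB consumed so far, sustained writes at the observed rate still leave years of headroom on that device.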
  18. What is considered unusually high disk io for an NVME SSD ?

    Just wondering - I have a small cluster of 5 nodes running various stuff - one node is an R86s (N5105) device running OPNSense as a router - been working fine for many months now . . . but I noticed a problem recently which turned out to be the netflow monitor going nuts - I was experiencing...
  19. Strange behavior on a single node - hanging on Proxmox Helper-Scripts execution

    UPDATE - after more testing, I narrowed it down to wget defaulting to IPv6 and failing - if I force IPv4 - it works !!! Used this command: wget --inet4-only https://github.com/tteck/Proxmox/raw/main/turnkey/turnkey.sh so something strange about IPv6 from this particular node's console...