Kind of confusing - because there are still many references around (like here: Thomas-Krenn) that lead you to believe it's still relevant. I have several machines in my cluster which have been upgraded over time and still have those entries in /etc/modules - guess it doesn't hurt . . . :rolleyes:
I noticed that IOMMU is enabled by default and no longer requires the "intel_iommu=on" boot parameter - are the vfio modules still required? In the past we always had to add:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
to the /etc/modules file?
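For anyone checking their own setup, here's a quick way to verify what's actually active (a minimal sketch - output and module names can differ between kernel versions, and on recent kernels some of these modules are built in or merged, so a missing entry isn't necessarily a problem):
# Confirm the kernel brought up IOMMU / DMAR remapping
dmesg | grep -i -e DMAR -e IOMMU
# See which vfio modules are currently loaded
lsmod | grep vfio
# Show what is still listed in /etc/modules (stale entries are generally harmless)
cat /etc/modules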
What processor is it? I have an R86s device with an N5105 CPU (Amazon) and 16GB of RAM - directly from the OPNSense VM I get about 580 Mbit/s in both directions with Speedtest-CLI, but from the VLANs and WLAN I only get around 400 Mbit/s on average ?!?
Quick question on this - which of the three options performs best ?!? Has anyone tested it?? I'm currently on option 2 and don't get the full throughput of my 600 Mbit fibre line . . . Thanks in advance!
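To narrow down where the throughput goes missing, one thing that can help is a LAN-only measurement that leaves the fibre line out of the picture (a rough sketch, assuming iperf3 is available on both ends; 192.168.1.10 is just a placeholder for a host on your LAN):
# On the LAN host, start an iperf3 server
iperf3 -s
# From the OPNsense VM, measure towards that host for 30 seconds, then again in the reverse direction
iperf3 -c 192.168.1.10 -t 30
iperf3 -c 192.168.1.10 -t 30 -R
Repeating the same client test from a machine in one of the VLANs should show whether inter-VLAN routing, rather than the WAN option, is the bottleneck.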
Thanks Stefan! The documentation is a bit high-level - but I've watched a few videos online and have a general understanding. I guess it would have been helpful for those of us who have legacy installations and want to "migrate" or convert to the SDN features - and to know what is best...
I thought as much - appreciate the offer, but my config is a bit convoluted! Maybe you can just give some general pointers as a best-practice reference . . . ?!?
Cheers,
Robert
This might be a newbie question:
I have a five-node cluster which I started with Proxmox 7.1 and subsequently updated to the latest 8.1.11 - up to now I have manually configured each node with its own network settings (different machines & HW)
Now that SDN is in full swing, is there a simple...
Thanks for the quick answer! I know the NVMe can handle such volumes of writes - my question was more about sustained writes at that volume (presently around 5M) going on constantly over a long period . . .
I checked the SMART values - and to my surprise :oops:, the NVMe shows 13% wearout and 71TB...
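For anyone wanting to pull the same counters, this is roughly the approach (assuming the drive shows up as /dev/nvme0 - adjust to your device):
# "Percentage Used" and "Data Units Written" correspond to the wearout and TB-written figures
smartctl -a /dev/nvme0
# The same health log via nvme-cli, if that package is installed; each data unit is 512,000 bytes
nvme smart-log /dev/nvme0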
Just wondering - I have a small cluster of 5 nodes running various stuff - one node is an R86s (N5105) device running OPNSense as router - been working fine for many months now . . . but I noticed a problem recently which turned out to be the netflow monitor going nuts - I was experiencing...
UPDATE - after more testing, I narrowed it down to wget defaulting to IPv6 and failing - if I force IPv4, it works!!!
Used this command:
wget --inet4-only https://github.com/tteck/Proxmox/raw/main/turnkey/turnkey.sh
so something strange is going on with IPv6 from this particular node's console...
Funnily enough, I have a PiHole running as an LXC container on said node and from there I can use the wget command just fine to fetch the script - just not from the node console ?!? :confused:
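A couple of quick checks that may help pin down the IPv6 path from the node console (a rough sketch - substitute the host the failing download actually talks to; TARGET_HOST is just a placeholder):
# Can the node resolve an AAAA record for the target and reach it over IPv6?
ping -6 -c 3 TARGET_HOST
# Compare the two address families explicitly with curl
curl -6 -sv https://TARGET_HOST -o /dev/null
curl -4 -sv https://TARGET_HOST -o /dev/null
# Show the node's IPv6 addresses and default route
ip -6 addr show
ip -6 route show default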
Thx for the suggestions - here's what I checked:
Pinging with IPs and names works fine . . .
Networking config on all nodes appears to be correct . . .
Switches checked - all OK, and YES - using VLANs and LACP for quite some time without issue . . .
Firewall live view shows nothing strange when...
OK - so it appears to be something with the wget command not being able to fetch the script ?!? Hangs during the connection - here's the output from both examples - the broken node (R86s) and a working one (awowfox)
So it would appear to be a networking issue of sorts . . . will investigate...
Thanks for your prompt reply - The exact same command line works on 4 out of 5 Proxmox nodes - so it's not the scripts themselves, it's something with the bash execution on this particular machine ?!?
Take this one for example:
bash -c "$(wget -qLO -...
I have a small cluster of 5 mixed machines - one of which is running a virtual OPNSense as firewall. Everything is running fine, but recently I noticed a strange issue and I can't figure out what's causing it . . .
When running a script from Proxmox Helper Scripts...
Turns out, after many tests, my main machine which I use to access Proxmox was in the wrong VLAN segment - so this issue only affected ONE machine and Proxmox seems fine. So my bad for jumping to conclusions that something was wrong with the servers. Thanks again and have a great day!
Thanks Luckyj for your response - YES, I have multiple IPs assigned to different VLAN segments (see attachment) and that has worked fine for over 14 months - not the issue. Turns out, after many tests, my main machine which I use to access Proxmox was in the wrong VLAN segment - so this issue only...