Hello,
I'm writing this to get a better idea of why I should invest time, and maybe money, in building a PVE cluster or a shared storage pool.
As of now, I have a setup that mainly does failover and which I quite like:
1 mini PC with 6×1Gb LAN ports bonded in PVE for LAG, 32 GB RAM, a 500 GB SSD...
Sorry to revive this ancient thread, but it seems @dietmar has some insight into my current issue, as I just got a stuck container due to the NFS backend having somehow crashed.
Nonetheless, I was unable to solve the issue in any way other than ps aux | grep <containerID> and trying to kill...
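For anyone landing here with the same symptom, a sketch of the hunt (the container ID 101 is hypothetical, and the bracket trick keeps grep from matching its own entry in the process list):

```shell
CTID=101  # hypothetical container ID
# Find any lxc processes belonging to this container; "[l]xc" keeps grep
# from matching its own command line in the output:
ps aux | grep "[l]xc.*${CTID}" || echo "no matching processes"
# On the PVE host the escalation path would then be:
#   pct stop $CTID             # ask nicely first
#   pct stop $CTID --skiplock  # if a stale lock is in the way
#   kill -9 <pid>              # last resort; processes blocked in
#                              # uninterruptible NFS I/O (state D) ignore even SIGKILL
```

Note that if the processes are stuck in D state on a dead NFS mount, even SIGKILL will not reap them until the mount comes back or is force-unmounted.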
Hello,
I know it's probably nothing specific to Proxmox, but I'll try asking anyway:
Every day from 4 to 5 am there is something running on my system, making it hot, and I don't know what it is...
I have seen the CPU temperature spiking for a whole hour in netdata, and the nice PVE graphs show me it's...
To be totally complete, I was still missing a route; the final fix looks like this:
post-up ip rule add from 10.0.10.0/24 table 10Server prio 1
post-up ip route add default via 10.0.10.1 dev vmbr10 table 10Server
post-up ip route add 10.0.10.0/24 dev vmbr10 table...
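For completeness, a sketch of how the whole thing could live in /etc/network/interfaces so it survives reboots. The subnet, gateway, and table name come from the lines above; the bridge address and the table number in rt_tables.d are assumptions on my part:

```
# /etc/iproute2/rt_tables.d/10Server.conf — register the table name once, e.g.:
#   10 10Server

auto vmbr10
iface vmbr10 inet static
    address 10.0.10.2/24        # hypothetical host address on this bridge
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # answer traffic from this subnet via its own gateway, not the main table
    post-up ip rule add from 10.0.10.0/24 table 10Server prio 1
    post-up ip route add default via 10.0.10.1 dev vmbr10 table 10Server
    post-up ip route add 10.0.10.0/24 dev vmbr10 table 10Server
```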
Thanks for the feedback, yes!
The key in your answer for me is "if you can manage HA on an application layer, it will usually be better".
I'll keep my investigation of Proxmox clusters for another time and will first find a way to fail over 2 Traefik instances; then I'll come back to Proxmox...
Hmm, sorry, looking at it again it is indeed not clear, but I did solve it: the ip rule trick worked, no need for weight or anything else...
The final solution:
echo 200 myname >> /etc/iproute2/rt_tables.d/myname.conf
ip rule add from 10.0.10.0/24 table myname prio 1
ip route add default via...
Thanks for the reply and the ideas!
In fact, only unprivileged Docker containers will be exposed to the WAN, but the Docker engine that runs them will itself run in an unconfined LXC.
I've been looking into mounting the CIFS share on the PVE host and then bind-mounting it into the CT.
Leaving aside a few...
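The idea I'm looking into, mount on the host and bind-mount into the CT, would roughly look like this (share path, mountpoints, credentials file, and CT ID 101 are all hypothetical):

```
# /etc/fstab on the PVE host — mount the NAS share at boot:
//nas.lan/media  /mnt/nas-media  cifs  credentials=/root/.smbcred,_netdev,iocharset=utf8  0  0

# /etc/pve/lxc/101.conf — expose the host mountpoint inside the CT:
mp0: /mnt/nas-media,mp=/srv/media
```

The same bind mount can also be added from the CLI with `pct set 101 -mp0 /mnt/nas-media,mp=/srv/media`.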
Hello,
I use PVE to virtualize my OPNsense router, and I'm about to install and configure a second machine with another PVE and another virtual router that will use CARP to fail over the first virtual router.
I feel there is no need for me to use pve cluster features, especially since both...
Hello,
I have found a way, and I can now choose either a VM or an LXC to run my main Docker host on top of PVE. It will be the one exposed to the internet by my OPNsense router, and it will run Portainer to let me start most of my services and try out new Docker containers.
The LXC looks better for...
Hello,
======
Edit:
No, I've actually found a simpler solution to my problem: route the traffic back out the interface it came in on. I do this as follows:
echo 200 myname >> /etc/iproute2/rt_tables.d/myname.conf
ip rule add from 10.0.10.0/24 table myname prio 1
ip route add default via...
Ok, got this working; I actually had 2 issues:
Getting the bond working with a VLAN-aware bridge in Proxmox:
create a bond of all the physical interfaces you need
create vmbrX with this bond as its slave
attach the OPNsense VM to vmbrX to access all VLANs
attach VMs to vmbrX.Y, with Y being the VLAN
all traffic...
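The recipe above, sketched as an /etc/network/interfaces fragment. The NIC names, bond mode, and hash policy are assumptions; the "vmbrX.Y" attachment is done per-VM by setting a VLAN tag Y on its virtual NIC, while the router VM gets no tag and therefore sees all VLANs:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode balance-xor
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```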
Hello,
I'm getting desperate; I need help finding a setup where 2 Windows clients can download files from my NAS using SMBv3, both at 1 Gbit/s at the same time, for a total of 2 Gbit/s sent from the NAS...
I've tried a lot of things and got LAGG working between lots of parties, achieving 2 Gbit/s several...
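One avenue worth checking, assuming the NAS's SMB stack is Samba and exposes an smb.conf, is SMB3 multichannel, which lets a single client spread SMB traffic over several NICs; for two separate clients, a LAG with a layer3+4 hash should in principle already give 2 Gbit/s aggregate:

```
# Hypothetical smb.conf fragment on the NAS:
[global]
    # let SMB3 clients open several channels over multiple NICs
    server multi channel support = yes
```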
Thanks spirit, that was indeed my understanding ;)
I'm closing this thread: I spent hours trying out various setups, and I just fail to see why my OPNsense virtual router is able to send and receive at 2 Gbit/s to my NAS, but not when the traffic is actually being generated by 2 clients...
Hmm... This setup seems kinda dumb...
In fact, I do have connectivity when using bond-mode balance-xor, but regardless of the bond-xmit-hash-policy, it seems I cannot get over 1 Gbit/s of bandwidth...
I have tried layer2+3, even layer3+4, and also layer2.
I run the tests using iperf on my Synology...
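A detail worth remembering when benchmarking a bond (NAS address hypothetical; I'm using iperf3 here): with balance-xor every flow hashes onto a single slave, so one stream can never exceed 1 Gbit/s no matter the hash policy, and the aggregate only shows up with parallel streams:

```shell
# Hypothetical NAS address; adjust before running on the real network.
NAS=192.168.1.10
# -P 4 opens 4 parallel TCP streams with distinct source ports, so a
# layer3+4 hash can spread them across the bond's slaves; a single
# stream always lands on one slave and tops out at that link's speed.
echo "iperf3 -c $NAS -P 4 -t 30"
# iperf3 -c "$NAS" -P 4 -t 30   # run this on a host that can actually reach the NAS
```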
Ok, it seems a static LAG on my switch is actually bond-mode balance-xor.
I tried it, and for now it seems to work even with all interfaces plugged in, which was not the case earlier...
Will monitor and report...
Hello, I'm facing a strange issue with my bond: it "almost" works, but whether it does depends on the order in which I plug in the cables... I mean, same cable in the same place each time, but plugging patch cable A first and then patch cable B will not work, whereas plugging patch cable B first and then patch cable A...
Ok, my switch doesn't support STP so that's it. I'll have to look into something else.
I'm thinking about something that would check the status of the virtual router VM and, if it is down, attach the physical NIC to vmbr0, plus a hook before VM startup to detach the NIC from vmbr0 and pass...
Ok, I just looked up STP and it seems it will do the trick; only my vRouter needs it, and OPNsense lets me define a priority for each interface, so I will definitely be able to tell OPNsense to prefer the 10 Gbit/s link.
Even my incoming TL-SG1024DE has support for loop detection, although it claims to...
It's my home network, so production yes but test also ;) I'm still waiting to receive some hardware to be able to test without disturbing the rest too much, but on weekends production is totally interruptible ^^
I'll have a look into STP as I don't know what it is yet.
As of now, vmbr0 is not...