After drinking more coffee and finally waking up, it occurred to me that this is the silliest question in my life. I have been working with Proxmox since there WAS Proxmox, and I forgot that a bridge on server1 IS a switch. DOH!
I have 3 nodes in a cluster, each with a public IP on NIC adapter 1. The servers have 4-port NICs. I want to establish a private 10.10.0.0/24 network between them using crossover cables. They will be colocated in a DC, and I don't want any of the private IP traffic going through the DC network...
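For anyone finding this later, the bridge-as-switch plan would look roughly like this in /etc/network/interfaces; the interface names (eth1/eth2) and addresses are just placeholders, not my real config:

```
# On the "switch" node: bridge the two crossover ports together so the
# other two nodes can reach each other through it.
auto vmbr1
iface vmbr1 inet static
    address 10.10.0.1
    netmask 255.255.255.0
    bridge_ports eth1 eth2
    bridge_stp off
    bridge_fd 0

# On each of the other two nodes, the crossover port just gets an address.
auto eth1
iface eth1 inet static
    address 10.10.0.2
    netmask 255.255.255.0
```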
Hi Stefan, thanks for replying. I originally wanted to place a switch in the server cabinet and connect the secondary NICs directly to each other. Unfortunately, these 3 Proxmox servers are not in the same cabinet, or even the same subnet. At that point I realized I would have to do tunneling...
My main challenge here is that I do not have control over any of the external routers/switches leading into my 3 Proxmox hosts. I am having to use private IPs for all VMs. The only option I have is NATing everything out and port-forwarding it back in.
As far as I can tell, I need iptables rules...
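Something along these lines is what I'm picturing (eth0 as the public interface, 10.10.0.0/24 as the private VM network; the addresses and ports are only examples):

```
# Masquerade private VM traffic going out the public interface
iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

# Forward an external port back in to a VM, e.g. public :8080 -> 10.10.0.50:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.10.0.50:80
iptables -A FORWARD -p tcp -d 10.10.0.50 --dport 80 -j ACCEPT

# Make sure the host actually routes between the interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
```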
The most likely solution, after reading further, is to use the built-in SDN feature of Proxmox. There is also Open vSwitch, and I'm wondering if they are the same underneath. These 3 servers are not even in the same physical subnet, so SDN will be the only means to architect anything, really.
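As far as I understand it, both the Proxmox SDN VXLAN zones and an Open vSwitch overlay end up building VXLAN tunnels over the routed network between the hosts. Doing the same thing by hand with iproute2 looks roughly like this (the public IPs, VNI, and interface names are placeholders, and this is only to illustrate the mechanism, not literally what SDN configures):

```
# On node1 (public IP 203.0.113.1), peering with node2 and node3
ip link add vxlan10 type vxlan id 10 dstport 4789 local 203.0.113.1 dev eth0

# Static flood entries so unknown traffic is sent to the other two nodes
bridge fdb append 00:00:00:00:00:00 dev vxlan10 dst 203.0.113.2
bridge fdb append 00:00:00:00:00:00 dev vxlan10 dst 203.0.113.3

# Private overlay address on this node
ip addr add 10.10.0.1/24 dev vxlan10
ip link set vxlan10 up
```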
I don't...
I'm sure this question has been asked a thousand times, but some of the pieces I have found still come up short.
I have 3 Proxmox nodes in a cluster, sitting in a data center. I want to use these 3 nodes to run a k8s cluster, with the cluster spread among all three nodes. The idea is to have 3...
I think maybe the better question is how to do it manually. I can unpack a template on the ZFS box itself, then clone a conf file from another container since I have them all the same. What about quota or root?
In other words, what goes where? I am currently creating a CT on NFS and it will finish...
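To sketch what I mean by doing it manually on an OpenVZ-era node (the VMID, template name, and paths below are just examples, and I'm not certain the quota step is exactly right):

```
# Unpack the template into a new private area for CT 105
mkdir -p /var/lib/vz/private/105
tar xzf /var/lib/vz/template/cache/debian-7.0-standard_7.0-2_i386.tar.gz \
    -C /var/lib/vz/private/105

# The ROOT mountpoint only needs to exist; it gets mounted when the CT starts
mkdir -p /var/lib/vz/root/105

# Clone a config from an existing CT, then edit HOSTNAME, IP_ADDRESS, etc.
cp /etc/pve/openvz/104.conf /etc/pve/openvz/105.conf

# Quota files are per-CT; dropping any stale ones forces a rebuild on first start
vzquota drop 105 2>/dev/null || true
```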
I have InfiniBand cards in my 2 nodes. How can I force the cluster communication to go through that link and not vmbr0?
I saw a hint about this on Google, but it was for VE 2... I can't quite get it going.
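The VE 2 hint seems to boil down to the cluster traffic following whatever the node hostnames resolve to, so something like this in /etc/hosts on every node should push it onto the InfiniBand link (names and addresses are placeholders; I haven't confirmed this is still the right approach on current versions):

```
# /etc/hosts on every node: make the cluster node names resolve to the IB IPs
10.10.0.1   pve1
10.10.0.2   pve2
# then restart the cluster stack on each node, one node at a time
```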
I saw a thread about this from a few years back but it didn't go anywhere.
I have 2 nodes in a cluster and they are connected to an NFS share via InfiniBand to a ZFS storage server.
I used iperf to confirm throughput of around 4 Gbit/s, so the link is fine.
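In case the exact test matters, this is the sort of iperf invocation I mean, with 10.10.0.1 standing in for the storage server's InfiniBand address:

```
# On the ZFS storage server
iperf -s

# On a Proxmox node, test against the storage server's InfiniBand IP
iperf -c 10.10.0.1 -t 30 -P 4
```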
But when I create a new container and choose...
No idea what that refers to, but good point. I worry that I may have screwed up when I was playing with fencing a while back. I thought I could set up HA before reading the effin manual. I backed out of that but probably didn't turn fencing off properly. Also, my vmbr0 doesn't work on the second node.
Yeah, I'm just addressing the big data transfer initially. I can see sending the diffs over a WAN link, but for the initial 225 GB I would be concerned with checksumming and all that. One other thing: how are you going to deal with the DNS repropagation lag time to the new hosting?
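For the initial 225 GB, the sort of thing I have in mind is rsync over SSH with a checksum pass afterwards (the paths and the newhost name are placeholders):

```
# First pass: bulk copy, resumable if the WAN link drops
rsync -aHv --partial --progress /var/lib/vz/ newhost:/var/lib/vz/

# Second pass: re-walk everything with full checksums to catch silent corruption
rsync -aHvc /var/lib/vz/ newhost:/var/lib/vz/
```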
I cannot edit either options or resources for a container in 3.1.
Is this because I have two nodes in a cluster? Or because I have them stored on NFS?
Or did I just break it somehow? I see that the permissions are owner root and the group is the web server. I thought I should give www-data write...
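Before changing anything by hand, these are the checks I'd run, since /etc/pve is a fuse-mounted cluster filesystem that manages its own ownership and modes:

```
# Confirm the cluster filesystem is actually mounted and look at its perms
mount | grep /etc/pve
ls -l /etc/pve
ls -l /etc/pve/openvz
```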