I use pfSense (any firewall would do the same job here) and have all of my Proxmox hosts and VMs using pfSense as the NTP source of truth. pfSense syncs out to an internet NTP source, and even though my pfSense is a VM within my 7-node HCI cluster, I haven't had any issues other...
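For reference, pointing a PVE node's chrony at the firewall looks something like this (the 192.168.1.1 address is just an example, and the conf.d path assumes the default chrony install on a recent Debian-based PVE):

```
# /etc/chrony/conf.d/local-ntp.conf  (example path and address)
# Add the pfSense box as the preferred time source; the default
# Debian pool lines in /etc/chrony/chrony.conf can be commented
# out separately if you want pfSense to be the only source.
server 192.168.1.1 iburst prefer
```

After editing, `systemctl restart chronyd` and check `chronyc sources` to confirm the firewall is being used.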
I use both tagged and untagged VLANs, but with the OVS networking that is available within Proxmox; I always seemed to have issues when I started out with Proxmox and Linux Bridges for networking and haven't gone back to try it again.
You need to disable the enterprise repositories and enable the no-subscription repositories; then you will be able to install updates and install Ceph. Once you have a subscription, you can switch back to the enterprise repositories.
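On PVE 8 (Debian Bookworm) the repository files end up looking roughly like this; the suite names differ on other releases, so treat this as a sketch:

```
# /etc/apt/sources.list.d/pve-enterprise.list -- disable by commenting out:
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# /etc/apt/sources.list.d/pve-no-subscription.list -- add:
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# /etc/apt/sources.list.d/ceph.list -- Ceph without a subscription
# (ceph-quincy here is an example; match your Ceph release):
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
```

Run `apt update` afterwards; the subscription nag in the WebUI is separate and harmless.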
If you are using a VLAN-aware Linux Bridge or an OVS Bridge, you should be able to just add a VM's or LXC's interface to the corresponding bridge and it should work on the untagged/native VLAN. However, if you are trying to do this with an SDN setup, I am still working out how to deal...
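For the VLAN-aware Linux Bridge case, the node config is just the standard bridge stanza with VLAN awareness turned on (interface names and addresses below are examples):

```
# /etc/network/interfaces (fragment)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

A VM NIC attached to vmbr0 with no VLAN tag set then rides the untagged/native VLAN; setting a tag in the NIC options puts it on that tagged VLAN instead.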
Have you created a bridge named "SharSvcs"? Can you also post the following outputs:
System -> Network for each node in your cluster
Datacenter -> SDN
Datacenter -> SDN -> Zones
Datacenter -> SDN -> VNets
From what I know, in my use case the biggest difference was that OVS supported VLANs without having to specify it each time, but I haven't really looked into the differences between them in detail; I only did a little digging when I was first learning Proxmox and needed to have VLAN support...
I have done more experimenting with the SDN, and the need to always create a VLAN zone with a tagged VLAN, plus the fact that a simple zone cannot be associated with a bridge, leaves me with only 2 options that I can see for now:
Leave everything as is with multiple OVS Bridges and specifying the VLAN...
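The first option (multiple OVS Bridges, specifying the VLAN per port) looks roughly like this in /etc/network/interfaces; the names, address, and tag=10 are examples, and the openvswitch-switch package has to be installed:

```
# /etc/network/interfaces (fragment)
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 mgmt0

auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto mgmt0
iface mgmt0 inet static
    address 192.168.10.5/24
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
```

VM NICs attached to vmbr0 can then be tagged individually in the NIC options, while untagged traffic uses the native VLAN.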
You can use the HA features of Proxmox to have the VM fail over to another node automatically... My backhauls are not very quick right now, using only LACP-bonded 1 GbE links, but it works decently for my needs, and I will be scaling up to better hardware once I have a full plan on the...
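Enabling HA for a guest is a couple of `ha-manager` calls run on any cluster node (the VM ID 100 and the group name here are placeholders):

```
# Register VM 100 as an HA resource and keep it started
ha-manager add vm:100 --state started

# Optional: constrain fail-over to a specific group of nodes
ha-manager groupadd prod-nodes --nodes "pve1,pve2,pve3"
ha-manager set vm:100 --group prod-nodes

# Check the current HA state of all managed resources
ha-manager status
```

HA fail-over requires shared storage (e.g. Ceph) for the VM disks and a quorate cluster; without quorum the node will fence itself rather than start resources.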
@RyanMM - What is your plan for your long term solution? I ask as I have a 3 node Ceph cluster and would also like to have NFS and CIFS shares as well using the Ceph storage pool.
I then assume this would require me to set up the wireless interface from the CLI using the available documentation, as I do not see Proxmox VE adding this to the WebUI?
I am still planning my next switch upgrade, and the switches that are within my budget, with the speed and number of ports I want, do not support any type of multi-chassis aggregation or stacking. I have tried experimenting with setting up a nested bond within...
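The nested-bond idea can be sketched as two LACP bonds (one per switch) wrapped in an active-backup bond; it is not something I would call well supported, so treat this /etc/network/interfaces fragment as an experiment (interface names are examples):

```
auto bond1
iface bond1 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto bond2
iface bond2 inet manual
    bond-slaves eno3 eno4
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto bond0
iface bond0 inet manual
    bond-slaves bond1 bond2
    bond-mode active-backup
    bond-primary bond1
```

Only one LACP bond carries traffic at a time, so this buys switch-level redundancy but not extra aggregate bandwidth.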
- Odd number of votes: I have kept only 7 votes in the cluster and only physical nodes are given a vote.
- no more than 1 vote per physical machine: Each physical node has 1 vote, the virtual node does not have a vote.
- all cluster members provide exactly 1 vote: This is the only thing that is...
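The arithmetic behind those rules is simple: corosync needs a strict majority of the total expected votes. As a quick sketch in shell (assuming the default majority rule, with no last_man_standing or qdevice tweaks):

```shell
# Quorum under corosync's default majority rule is
# floor(total_votes / 2) + 1
total_votes=7
quorum=$(( total_votes / 2 + 1 ))
echo "With ${total_votes} votes, ${quorum} are required for quorum"
```

With 7 votes the threshold is 4, so the cluster stays quorate with up to 3 voting nodes down.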
While I totally agree this is not in any way a supported or production setup, and that no single node should ever provide more than 1 vote, I am not sure I understand this part of your reply:
My understanding is that you need a minimum of 3 nodes for a stable cluster and that you should always...
While I fully agree that using a VM as a node, whether hosted within the cluster itself or on another machine or cluster of any kind, is not following any sort of "best practices" and should not be blindly deployed in any production environment, I think I have come to a solution that I can work...
Here is how my workstation system is set up: I have 2 smaller SSDs (500 GB) that I use for booting Proxmox VE and for storing the few ISO images that I need, plus the VM disks for the few VMs that run on the system (Firewall [], TrueNAS, Windows [workstation with PCIe pass-through for a graphics...