You can use the HA features of Proxmox to have the VM fail over to another node automatically... My backhauls are not very quick right now, using only LACP-bonded 1 GbE links, but it works pretty decently for my needs, and I will be scaling up to better hardware once I have a full plan on the...
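For reference, a minimal sketch of putting a VM under HA management from the CLI. The VMID (100) is an assumption for illustration; a shared storage backend and at least three quorate nodes are required for HA to actually relocate the guest.

```shell
# Register the VM (assumed VMID 100) as an HA resource so it
# restarts or relocates automatically if its node fails.
ha-manager add vm:100 --state started --max_relocate 1

# Verify the HA stack sees the resource and has quorum.
ha-manager status
```

This is the CLI equivalent of Datacenter → HA → Add in the WebUI; HA groups can additionally pin the resource to preferred nodes.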
@RyanMM - What is your plan for your long-term solution? I ask because I have a 3-node Ceph cluster and would also like to serve NFS and CIFS shares from the Ceph storage pool.
I then assume this would require me to set up the wireless interface from the CLI using the available documentation, as I do not see Proxmox VE adding this to the WebUI?
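A rough sketch of what that CLI setup might look like on the Debian base underneath Proxmox VE, using the standard ifupdown + wpa_supplicant integration. The interface name (`wlp3s0`) and credentials are placeholders, not taken from the thread; note that bridging guests over a wireless uplink is generally problematic, so a routed/NAT setup is usually needed on top of this.

```shell
# Install the WPA supplicant and generate a config for the network
# (SSID and passphrase below are placeholder assumptions).
apt install wpasupplicant
wpa_passphrase "MySSID" "MyPassphrase" > /etc/wpa_supplicant/wpa_supplicant.conf

# Add a stanza like this to /etc/network/interfaces
# (wlp3s0 is an assumed interface name; check `ip link` for yours):
#
#   auto wlp3s0
#   iface wlp3s0 inet dhcp
#       wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

# Bring the interface up.
ifup wlp3s0
```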
I am still planning my next switch upgrade, and the switches within my budget that have the speed and number of ports I want do not support any type of multi-chassis aggregation or stacking. I have tried experimenting with setting up a nested bond within...
- Odd number of votes: I have kept only 7 votes in the cluster and only physical nodes are given a vote.
- no more than 1 vote per physical machine: Each physical node has 1 vote, the virtual node does not have a vote.
- all cluster members provide exactly 1 vote: This is the only thing that is...
While I totally agree that this is not in any way a supported or production setup, and that no single node should ever provide more than 1 vote, I am not sure I understand this part of your reply:
My understanding is that you need a minimum of 3 nodes for a stable cluster and that you should always...
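Either way, the vote and quorum state being discussed here can be inspected directly on any node:

```shell
# Show expected votes, total votes, and whether the cluster is quorate.
pvecm status

# List each cluster member; the vote count per node appears here,
# which is where an extra-vote misconfiguration would show up.
pvecm nodes
```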
While I fully agree that using a VM as a cluster node, whether hosted within the cluster itself or on another machine or cluster of any kind, does not follow any sort of "best practices" and should not be blindly deployed in a production environment, I think I have come to a solution that I can work...
Here is how my workstation system is set up: I have two smaller SSDs (500 GB each) that I use for booting Proxmox VE and for storing the few ISO images that I need, plus the VM disks for the few VMs that run on the system (Firewall [], TrueNAS, Windows [Workstation with PCIe pass-through for a graphics...
I'm just walking through the logic of not messing with the votes within Proxmox VE to see if this works....
Currently the cluster can lose any 3 nodes and everything keeps trucking along.
Moving to an 8-node cluster would mean needing 5 nodes online for quorum, which still allows for 3 nodes to...
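The arithmetic behind both figures is just the standard majority rule, quorum = floor(N/2) + 1, which can be checked quickly:

```shell
# Quorum math: a cluster needs floor(N/2) + 1 votes online to be quorate.
for n in 7 8; do
  q=$(( n / 2 + 1 ))
  echo "$n nodes: quorum at $q votes, can lose $(( n - q )) nodes"
done
# 7 nodes -> quorum 4, can lose 3
# 8 nodes -> quorum 5, can still only lose 3
```

This is why going from 7 to 8 nodes adds capacity but no extra failure tolerance: the even node count raises the quorum threshold without raising the number of survivable failures.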
Any VM that is going to use the CephFS storage location would need to have these commands run. I use it for storing Docker volumes in a shared space, as I have had issues using NFS or CIFS with some services that I run (SQLite DBs). What I did was install my distro of choice and then...
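The exact commands are cut off above, but a typical CephFS mount from inside a guest via the kernel client looks roughly like this. The monitor addresses, client name, and mount point below are assumptions for illustration, not values from the thread:

```shell
# Install the Ceph client tools (provides mount.ceph).
apt install ceph-common
mkdir -p /mnt/cephfs

# Copy /etc/ceph/ceph.conf and the client's secret from a cluster
# node first, then mount (monitor IPs below are placeholders):
mount -t ceph 192.168.1.11,192.168.1.12,192.168.1.13:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# To persist across reboots, add an /etc/fstab line such as:
# 192.168.1.11,192.168.1.12,192.168.1.13:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev 0 0
```

CephFS handles concurrent access from multiple clients natively, which is what makes it a better fit than NFS/CIFS for lock-sensitive workloads like SQLite files.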
I know that asking about or implementing a Proxmox VE node as a VM, either within or outside of a Proxmox VE cluster, can be a danger zone and a disputed topic. I am considering deploying a single virtual Proxmox VE node into my 7-node cluster for the following reasons:
It would provide a "Floating" or...
I was experimenting with SDN today to see if it would improve managing the various VLAN-based networks present on my network. I have 7 nodes, and each node has 5 × 1 GbE RJ45 interfaces. Two of those interfaces are bonded together and are configured for use by Proxmox VE and...
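For context, the underlying per-node setup that an SDN VLAN zone would sit on top of is an LACP bond under a VLAN-aware bridge. A sketch of the relevant `/etc/network/interfaces` stanzas, with interface names (`eno1`/`eno2`) and addresses as placeholder assumptions:

```shell
# /etc/network/interfaces sketch: LACP bond feeding a VLAN-aware bridge.
# NIC names and the management address are assumptions for illustration.

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With the bridge VLAN-aware, an SDN VLAN zone can then map virtual networks to tags on `vmbr0` cluster-wide instead of tagging each guest NIC by hand.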