You can use the HA features of Proxmox to have the VM fail over to another node automatically...My back hauls are not very quick right now as I am only using LACP-bonded 1GbE links, but it works reasonably well for my needs, and I will be scaling up to better hardware once I have a full plan on the...
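For reference, the LACP bond itself is just a few lines in /etc/network/interfaces; this is only a minimal sketch (eno1/eno2 are placeholder NIC names, and the switch ports have to be configured for 802.3ad as well), with bond0 then used as the bridge-port for vmbr0:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3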
@RyanMM - What is your plan for your long-term solution? I ask as I have a 3 node Ceph cluster and would also like to have NFS and CIFS shares using the Ceph storage pool.
I then assume this would require me to set up the wireless interface from the CLI using the available documentation, as I do not see Proxmox VE adding this to the WebUI?
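If it does come down to the CLI, my understanding is that on the Debian base it would look roughly like this in /etc/network/interfaces, with the wpasupplicant package installed (wlan0, the SSID, and the passphrase are placeholders):

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid MyHomeSSID
    wpa-psk MyPassphrase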
I am still planning my next switch upgrade, and the switches within my budget that have the speed and number of ports I want do not support any type of multi-chassis link aggregation or stacking. I have tried experimenting with setting up a nested bond within...
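Since the switches cannot do any multi-chassis aggregation, one switch-independent fallback would be an active-backup bond with one leg on each switch, which needs no switch-side configuration at all; just a rough sketch with placeholder NIC names:

auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-mode active-backup
    bond-primary eno3
    bond-miimon 100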
- Odd number of votes: I have kept only 7 votes in the cluster, and only physical nodes are given a vote.
- No more than 1 vote per physical machine: each physical node has 1 vote; the virtual node does not have a vote.
- All cluster members provide exactly 1 vote: this is the only thing that is...
While I totally agree that this is not in any way a supported or production setup, and that no single node should ever produce more than 1 vote, I am not sure I understand this part of your reply:
My understanding is that you need a minimum of 3 nodes for a stable cluster and that you should always...
While I fully agree that using a VM as a node, whether within the cluster itself or hosted on another machine or cluster of any kind, does not follow any sort of "best practices" and should not be blindly deployed in any production environment, I think I have come to a solution that I can work...
Here is how my workstation system is set up: I have 2 smaller SSDs (500 GB) that I use for booting Proxmox VE and for storing the few ISO images that I need, plus the VM disks for the few VMs that run on the system (Firewall [], TrueNAS, Windows [workstation with PCIe pass-through for a graphics...
I'm just walking through the logic of not messing with the votes within Proxmox VE to see if this works...
Currently the cluster can lose any 3 nodes and everything keeps trucking along.
Moving to an 8-node cluster would mean needing 5 nodes online for quorum, which still allows for 3 nodes to...
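For reference, quorum is a strict majority of the votes, floor(N/2) + 1: with 7 votes that is 4, so 3 nodes can drop and the cluster stays quorate; with 8 votes it becomes 5, which still only tolerates losing 3.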
Any VM that is going to use the CephFS storage location would need to have these commands run. I use it for storing Docker volumes in a shared space, as I have had issues using NFS or CIFS with some services that I run (SQLite DBs). What I did was install my distro of choice and then...
I know that asking about or implementing a Proxmox VE node as a VM, either inside or outside of a Proxmox VE cluster, is a danger zone and a disputed topic. I am considering deploying a single virtualized Proxmox VE node into my 7-node cluster for the following reasons:
It would provide a "Floating" or...
I was experimenting with SDN today to see if it would make it easier to manage the various VLAN-based networks present on my network. I have 7 nodes, and each node has 5 x 1GbE RJ45 interfaces. Two of those interfaces are bonded together and are configured for use by Proxmox VE and...
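For comparison, the non-SDN way of handling this is a VLAN-aware bridge sitting on top of the bond; a minimal sketch (bond0/vmbr0 and the VLAN range are placeholders):

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

Guests then just get the VLAN tag set on their virtual NIC instead of needing a VNet per VLAN.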
I didn't specify a mount size on mine, and when I created the CephFS share there was no way to specify the size under Proxmox VE. I currently have 2 Ceph clusters: one that is hyper-converged within my 7-node Proxmox VE cluster and another that is a 3-node cluster that is only used for...
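CephFS itself is not sized at creation time, but if a cap is wanted, a directory quota can be set from any client that has it mounted; a small sketch, assuming the share is mounted at /mnt/cephfs and the attr package is installed (quotas need a reasonably recent kernel client or ceph-fuse):

setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/shared   # 100 GiB cap on this directory
getfattr -n ceph.quota.max_bytes /mnt/cephfs/shared                   # check the value that was set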
This might not be the best way, but it is the way I have been doing it to mount my CephFS storage into some VMs:
* I will reference Debian/Ubuntu commands as that is the distribution family I use.
1) On your VM, install the ceph-common package: {sudo} apt install ceph-common
2) On your VM, execute: echo...
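For reference, the end result on the VM is typically an /etc/fstab entry along these lines; this is only a sketch, and the monitor address, CephFS client name, and paths are placeholders (the secret file just holds that client's key from ceph auth get-key):

192.168.1.21:6789:/ /mnt/cephfs ceph name=dockeruser,secretfile=/etc/ceph/dockeruser.secret,_netdev,noatime 0 0

With that in place, a plain mount /mnt/cephfs (or a reboot) brings the share up inside the VM.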
This might be a silly thought/question about the SDN features that are now generally available and installed by default on PVE 8.1.
Would it be possible/feasible to set up the SDN to take advantage of the wireless NIC in my workstation? The SDN has the ability with a simple zone...
I wanted to experiment with Ceph before moving to a bunch of new hardware later, and also to help me plan what I wanted/needed for the new hardware. I have set up 2 separate clusters using Proxmox and Ceph, and while it is not perfect or even close to what you would do in a "real production"...
Trying a different approach to solve this issue, I took the 4 OSDs from node 2, marked them DOWN and OUT, and moved the disks to node 3. I was then able to bring them IN but could not bring them UP. Node 2 was able to bring the now-blank disks from node 3 UP and IN after being able to...
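For anyone trying the same thing, the kind of commands involved in that shuffle are roughly these (the OSD ID is a placeholder; ceph-volume is what re-detects OSDs whose disks were physically moved to another host):

ceph osd out osd.4              # stop mapping new data to the OSD
systemctl stop ceph-osd@4       # stop the daemon so it goes DOWN
# move the disk to the other node, then on that node:
ceph-volume lvm activate --all  # scan local disks and start any OSDs found
ceph osd in osd.4               # allow data to be mapped to it again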