My current practice is to set any VMs I will keep running during maintenance to "ignored" in the HA manager, and then I stop any VMs that will not be kept running. I then put all the nodes into maintenance mode and manually move the running VMs as needed while completing any maintenance...
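The HA state changes described above can be sketched from the shell; the VMIDs below are placeholders, not from my setup:

```shell
# Mark a VM as "ignored" so the HA manager leaves it alone during maintenance
# (vm:100 is a placeholder service ID; substitute your own)
ha-manager set vm:100 --state ignored

# Cleanly shut down a VM that will not be kept running
qm shutdown 101

# After maintenance, hand the VM back to the HA manager
ha-manager set vm:100 --state started
```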
As of Proxmox 8.3.1 the UI shows which nodes are in maintenance mode, however you still need to use the command line to move them in and out of maintenance mode.
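For reference, the command-line side of this on recent Proxmox VE releases looks roughly like the following (the node name pve1 is a placeholder):

```shell
# Put a node into HA maintenance mode (available since PVE 7.3)
ha-manager crm-command node-maintenance enable pve1

# ...perform updates, reboots, etc...

# Take the node back out of maintenance mode
ha-manager crm-command node-maintenance disable pve1
```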
I would also like to see the ability to use both tagged and untagged VLANs with the SDN, as well as the expansion of the DHCP system beyond just simple networks.
Just tried the first half on one of my non-critical machines to test, while leaving my production machines alone until there's a fix. It worked without any issues. I'll report back on the second half once the issues have been fixed.
I have been testing/using native ZFS encryption on my root pool on a single-node install, and it has been working flawlessly. Though I did not set up remote unlock on this install, as it is a laptop with a DE also installed, so I have a built-in keyboard, mouse, and monitor for unlocking the boot...
I run 2 Ceph clusters in my home environment, and while neither is set up to production standards, hardware- or network-wise, they do meet my needs.
My first cluster is 7 nodes with two 1Gb links and 2 SSD OSDs per node. It won't win any speed contests, but it offers similar performance from either...
After reading about the issues with network interfaces changing on the upgrade to 8.2, I went through the 7 nodes I upgraded yesterday and used the instructions HERE to override the device names to eth#, with each name associated with the interface's MAC address, which is a consistent value. It also...
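This kind of name pinning is typically done with one systemd .link file per NIC; a minimal sketch, with a placeholder MAC address:

```
# /etc/systemd/network/10-eth0.link  (one file per interface)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff   ; placeholder; use the NIC's real MAC

[Link]
Name=eth0
```

After adding the files, the initramfs generally needs rebuilding (`update-initramfs -u -k all`) and the node rebooted for the new names to take effect.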
I have VMs that connect to multiple VLANs, and they all connect to a single bridge. If I do not specify a VLAN on a VM's interface, it gets the untagged VLAN for that bridge; if I specify one on the interface, it uses that tagged VLAN. For each VM that needs to be connected...
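A single VLAN-aware bridge like the one described can be sketched in /etc/network/interfaces; the port name eno1 and the VLAN range are assumptions, not my actual config:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1          # physical trunk port (placeholder name)
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094         # VLAN IDs allowed on the bridge
```

A VM NIC with no `tag=` set then lands on the bridge's untagged VLAN, while adding a tag on the interface (e.g. `tag=20` on `net0`) puts it on that tagged VLAN.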
I just finished the first batch of updates and had no issues on any of the systems I updated. Given the possible issue around interface renaming, I did apply the network naming changes that have been mentioned before and rebooted after applying those changes, before doing the update.
So far...
I have a DIY PiKVM setup that is connected to an 8-port KVM switch to add sort of a poor man's IPMI to some of my servers that are too old for the virtual console in their IPMI to work with a modern browser.
I have been thinking of keeping this setup and possibly adding a second one if and when I...
Are you wanting to do any PCIe passthrough, or looking to do any other, let's call it, more advanced or complicated configuration? Or are you just looking to install Proxmox VE and run 2 simple VMs? If they are more comfortable with Windows, could you not use something like RDP and VirtualBox...
Are you able to ping, say, Google or another external destination from all your nodes? Have you reviewed the networking configuration on all nodes to confirm it is correct? Have you checked your upstream switches and their settings if you are using any management features such as VLANs or LACP...
I'm in the same camp as @leesteken and would check with the writer of the script, as they are essentially unsupported modifications to the Proxmox VE environment. However, have you looked at the scripts and tried running the commands manually on the node that is having the issue to see if you can get...
I wanted all devices on my network to use the same time; this way only 1 device is going to the internet to get the time and the rest are getting it from that device. Eventually I would like to move to a Raspberry Pi or similar device that can provide time through GPS, but it will still probably...
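On Proxmox VE the time daemon is chrony by default, so the local-time-server idea above can be sketched like this; the upstream pool and LAN subnet are assumptions:

```
# /etc/chrony/chrony.conf on the one device that talks to the internet
pool pool.ntp.org iburst       # upstream time source (placeholder pool)
allow 192.168.1.0/24           # let LAN clients sync from this host
local stratum 10               # keep serving time if upstream is unreachable
```

Every other device then points its NTP client at this host instead of the internet.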