New tool: pmmaint

tuxis

Hi,

while managing a lot of clusters, we often run into the issue that customers have already decided where they want their VMs to run. So when doing maintenance on a cluster, after rebooting a node we need to move the VMs back to the node they were running on. Also, when you want to reboot a node, it's often quite some work to figure out where you can migrate VMs to without depleting the memory of a sibling node.

So we created pmmaint. It lets you take a snapshot of the current configuration and tell nodes to empty themselves. pmmaint will figure out where the VMs should fit, taking local storage (partially; it does not check available space) and CPU types into account. When you're done, simply run 'pmmaint restore <snapname>' and all your VMs will be nicely migrated back to where they came from.
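To give an idea of what "figuring out where the VMs should fit" involves, here is a minimal first-fit placement sketch based on free memory and CPU type. This is not pmmaint's actual code; the Node/VM fields and the strategy are assumptions for illustration only.

Code:
# Hypothetical first-fit placement sketch, not pmmaint's actual algorithm.
# Field names (free_mem_gb, cpu_model, mem_gb) are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    free_mem_gb: float
    cpu_model: str

@dataclass
class VM:
    vmid: int
    mem_gb: float
    cpu_model: str

def pick_target(vm: VM, candidates: list) -> Optional[Node]:
    """Return a node with the same CPU type and enough free memory, or None."""
    for node in sorted(candidates, key=lambda n: n.free_mem_gb, reverse=True):
        if node.cpu_model != vm.cpu_model:
            continue                      # don't migrate across CPU types
        if node.free_mem_gb < vm.mem_gb:
            continue                      # would deplete the sibling's memory
        node.free_mem_gb -= vm.mem_gb     # reserve the memory in the plan
        return node
    return None                           # nothing fits; keep the VM where it is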

Version 1.0 is available as a Debian package. See https://gitlab.tuxis.nl/oss_public/pmmaint for more information. If you have any questions, let us know and feel free to improve the code. I'm not a hardcore developer, so I'm pretty sure there are improvements to be made.

Mark
 
I will take a look at this asap, seems useful!

For the part that deals with where the VM/CT should be running, I usually just add a note to each VM with the name of the node it should normally be running on, and follow that while moving them in/out of a host for maintenance or after a hardware failure.

I'm wondering: why is Proxmoxer needed? Wouldn't just pvesh be enough?
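For comparison, querying the same API from Python via Proxmoxer versus pvesh on the CLI looks roughly like this (hostname and credentials are placeholders):

Code:
# Proxmoxer talks to the same REST API that pvesh exposes on the CLI.
# The hostname and credentials below are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.com", user="root@pam",
                     password="secret", verify_ssl=False)
nodes = proxmox.nodes.get()   # GET /api2/json/nodes

# Roughly equivalent on the CLI, without the extra Python dependency:
#   pvesh get /nodes --output-format json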
 
Nice work - seems like something that would be great to have integrated into the PMX GUI, with an "evacuate node" right-click-menu option.

Spacing needs a little help for larger hosts; this is my lab:


Code:
                    |Memory (GB)
hostname            | total free  used  | CPU
pmx1                | 125.7482.88 42.86 | AMD Opteron(tm) Processor 6380
pmx3                | 125.7471.39 54.35 | AMD Opteron(tm) Processor 6380
pmx5                | 125.8979.20 46.69 | AMD Opteron(tm) Processor 6380
pmx4                | 125.8871.61 54.26 | AMD Opteron(tm) Processor 6380
pmx0                | 251.58162.1789.41 | AMD Opteron(tm) Processor 6380
pmx2                | 125.8886.89 38.99 | AMD Opteron(tm) Processor 6380
 
Hi,

The memory should be in GB, so I'm guessing that your locale confuses the print on lines 707-711. Can you check what you get if you add
Code:
print(nodes[node].get_mem())

to /usr/sbin/pmmaint at line 707, shifting all the lines below it down by one?
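If it does turn out to be locale-dependent number formatting, a fixed-width f-string keeps the columns apart regardless of locale. This is only a sketch with made-up variable names, not the actual pmmaint code:

Code:
# Hypothetical sketch of locale-independent, fixed-width output; the row
# layout and variable names are assumptions, not pmmaint's actual code.
rows = [
    ("pmx0", 251.58, 162.17, 89.41, "AMD Opteron(tm) Processor 6380"),
    ("pmx1", 125.74, 82.88, 42.86, "AMD Opteron(tm) Processor 6380"),
]

print(f"{'hostname':<20} | {'total':>8} {'free':>8} {'used':>8} | CPU")
for name, total, free, used, cpu in rows:
    # f-string float formatting always uses '.' and pads to a fixed width,
    # so values like 251.58 no longer run into the next column.
    print(f"{name:<20} | {total:8.2f} {free:8.2f} {used:8.2f} | {cpu}")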
 
