In Datacenter > Summary there is a Resources tab that gives us the combined CPU, RAM and storage usage of all nodes as percentages.
I want to get these values with pvesh in order to create some Nagios checks.
Running the following commands I can get the values, more or less:
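(A sketch of what I have in mind, assuming jq is installed on the node and that the node/storage entries in the JSON expose cpu/maxcpu, mem/maxmem and disk/maxdisk fields; adjust if they differ.)

# cluster-wide CPU, RAM and storage usage in percent, rounded down
pvesh get /cluster/resources --output-format json | jq -r '
  ([.[] | select(.type == "node")] | "CPU: \((map(.cpu * .maxcpu) | add) / (map(.maxcpu) | add) * 100 | floor)%"),
  ([.[] | select(.type == "node")] | "RAM: \((map(.mem) | add) / (map(.maxmem) | add) * 100 | floor)%"),
  ([.[] | select(.type == "storage")] | "Storage: \((map(.disk) | add) / (map(.maxdisk) | add) * 100 | floor)%")'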
I noticed some changes in the output of the pvesh utility from 5.2 to 5.3. I was using it for some Nagios scripts and reporting tools that broke with the update.
Example command:
pvesh get /cluster/resources
It was giving back a very detailed JSON with all nodes, storages, VMs info etc. A very...
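For what it's worth, on 5.3 the detailed JSON can still be requested explicitly, since pvesh now takes an output format option (the default changed to a formatted table):

pvesh get /cluster/resources --output-format json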
A small update.
I configured the interface in /etc/network/interfaces and added bridge-mcrouter 0.
Restarted the node.
After successfully booting the node I can see that the value of /sys/devices/virtual/net/vmbr0/bridge/multicast_router is still 1.
Checked multicast traffic on a VM in this...
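A possible workaround until bridge-mcrouter is honoured would be to force the value by hand and re-check (just a sketch, not tried here yet):

echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_router
cat /sys/devices/virtual/net/vmbr0/bridge/multicast_router

and, to make it persistent, a post-up line with the same echo command in the vmbr0 stanza of /etc/network/interfaces.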
Thank you. This is where we were focusing as well.
We will try values in
It seems that they are integrated in network...
I would like to ask if there is a way to block all multicast traffic coming to a specific bridge on a Proxmox node. We can accomplish this at the moment with switch settings and ACLs. But a node-level configuration would be more efficient and "dummy" for all clusters, regardless of the networking...
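I was thinking of something along these lines on the node (only a sketch, not tested; the bridge name is just an example):

# drop multicast frames bridged in via vmbr0 (default ebtables filter table, FORWARD chain)
ebtables -A FORWARD --logical-in vmbr0 -d Multicast -j DROP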
It would be helpful to post some CPU usage metrics here or in another thread, to check the performance impact on various CPUs.
Updating all of our clusters, migrating VMs and rebooting nodes is quite a time-consuming procedure. Do we have an estimate for the next kernel? Maybe we can wait if it's...
Well, Debian supports Wheezy until May 2018; Ubuntu 12.04 only until April 2017, though. It's a little irrelevant to this topic; check https://forum.proxmox.com/threads/understanding-proxmox-3-4-eol-and-4-0.25079 for more detailed info and...
I did several tests (PVE 4.x) and it seems it's not fixed yet. Our clients seem to need this feature lately and we can't do anything.
Has anyone accomplished it with PVE 5.x versions?
Off topic: Docker for Windows also has problems, but it's still in beta.
If you have some more time to spare, I would suggest creating multiple OSDs per NVMe disk. I doubt Ceph at this stage will be able to fully utilize an NVMe disk hosting only one OSD. Hope I am wrong. I had very good results in the past with 0.8x Ceph versions, handling 2 or 3 OSDs per disk...
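If the cluster runs a Ceph release that ships ceph-volume, something like this should split one NVMe into several OSDs (just a sketch, the device name is an example; older releases need manual partitioning first):

# create 2 OSDs on a single NVMe device
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1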