Proxmox Datacenter Manager - First Alpha Release

So I ran it again just to see whether it was a fluke, and I get the same error. This is the error on the receiving host:

Code:
mtunnel started
received command 'version'
received command 'bwlimit'
received command 'disk-import'
ready

received command 'ticket'
received command 'query-disk-import'
disk-import: Importing image: 100% complete...done.

received command 'query-disk-import'
disk-import: successfully imported 'TWOTB:vm-103-disk-0'

received command 'config'
received command 'unlock'
received command 'start'
run_buffer: 571 Script exited with status 2
lxc_init: 845 Failed to run lxc.hook.pre-start for container "103"
__lxc_start: 2034 Failed to initialize container "103"
TASK ERROR: mtunnel exited unexpectedly
what about the system logs (journalctl)?
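In case it helps, a sketch of how the relevant logs could be gathered on the receiving host (assuming container 103 and a systemd journal; the log path is a placeholder):

Code:
# recent journal entries from this boot, limited to the failed start window
journalctl -b --since "10 minutes ago"
# or start the container in the foreground with debug logging for more detail
lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103.log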
 
The search box at the top is broken; it keeps searching and finds nothing ;p (Datacenter Manager 0.1.11 alpha).
In the first release it was working.
 
I also had an issue with my certificates. When I checked the systemctl status of proxmox-datacenter-api.service, I found the following:
I first tried the fingerprint from the actual cert, and then the one listed in the systemctl status, so even if you specify a fingerprint, it will look for a different one.
My assumption is that this is due to pveproxy?

Code:
root@PDM:/etc/proxmox-datacenter-manager# systemctl status proxmox-datacenter-api.service
● proxmox-datacenter-api.service - Proxmox Datacenter Manager API daemon
     Loaded: loaded (/lib/systemd/system/proxmox-datacenter-api.service; enabled; preset: enabled)
     Active: active (running) since Sat 2025-02-08 11:46:09 CET; 3h 34min ago
   Main PID: 21677 (proxmox-datacen)
      Tasks: 7 (limit: 2245)
     Memory: 26.2M
        CPU: 4.385s
     CGroup: /system.slice/proxmox-datacenter-api.service
             └─21677 /usr/libexec/proxmox/proxmox-datacenter-api

Feb 08 15:19:39 PDM proxmox-datacenter-api[21677]: bad fingerprint: 46:39:5a:d4:75:60:48:91:f7:e5:01:57:8b:4d:b5:77:65:ef:6f:29:34:cc:32:b9:63:7d:b2:d4:-redacted-
Feb 08 15:19:39 PDM proxmox-datacenter-api[21677]: expected fingerprint: 12:15:56:7f:33:98:38:37:94:f6:88:49:e1:bc:fc:a8:3e:b1:7e:18:db:1f:1b:45:7d:13:0e:-redacted-
Feb 08 15:20:05 PDM proxmox-datacenter-api[21677]: bad fingerprint: 12:15:56:7f:33:98:38:37:94:f6:88:49:e1:bc:fc:a8:3e:b1:7e:18:db:1f:1b:45:7d:13:0e:-redacted-
Feb 08 15:20:05 PDM proxmox-datacenter-api[21677]: expected fingerprint: 46:39:5a:d4:75:60:48:91:f7:e5:01:57:8b:4d:b5:77:65:ef:6f:29:34:cc:32:b9:63:7d:b2:d4:-redacted-
Feb 08 15:20:22 PDM proxmox-datacenter-api[21677]: bad fingerprint: 46:39:5a:d4:75:60:48:91:f7:e5:01:57:8b:4d:b5:77:65:ef:6f:29:34:cc:32:b9:63:7d:b2:d4:-redacted-
Feb 08 15:20:22 PDM proxmox-datacenter-api[21677]: expected fingerprint: 12:15:56:7f:33:98:38:37:94:f6:88:49:e1:bc:fc:a8:3e:b1:7e:18:db:1f:1b:45:7d:13:0e:-redacted-
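For comparison, one way to read the SHA-256 fingerprint of the certificate a node is actually serving (a sketch; the hostname is a placeholder for your PVE node):

Code:
# print the SHA-256 fingerprint of the certificate presented on port 8006
openssl s_client -connect pve.example.com:8006 </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256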

What fixed it for me was to reset all the certificates:
Code:
# back up the cluster CA and this node's proxy certificate (replace NODENAME with your node's name)
mv /etc/pve/pve-root-ca.pem /etc/pve/pve-root-ca.pem.BAK
mv /etc/pve/priv/pve-root-ca.key /etc/pve/priv/pve-root-ca.key.BAK
mv /etc/pve/nodes/NODENAME/pveproxy-ssl.pem /etc/pve/nodes/NODENAME/pveproxy-ssl.pem.BAK
mv /etc/pve/nodes/NODENAME/pveproxy-ssl.key /etc/pve/nodes/NODENAME/pveproxy-ssl.key.BAK
# regenerate the certificates and restart the proxy so the new ones are served
pvecm updatecerts -f
systemctl restart pveproxy
 
During my experience using the Proxmox Datacenter Manager (version 0.1.7 Alpha), I noticed that the interface for viewing the list of VMs and containers within a node is constrained to a small area. This limitation becomes especially noticeable when managing a node with a large number of VMs and containers.

The current interface does not provide a comfortable way to scroll through and view the entire list of VMs/containers. This forces constant scrolling in a small viewport or other workarounds, which is inefficient, especially when dealing with a node hosting many instances. Additionally, the display area is not resizable, which further limits usability.
So I tested PDM 0.1.11 Alpha a bit more thoroughly now.

I concur that the space for scrolling through VMs/containers is rather limited. Another option might be to allow scrolling some of the "Remote" and "Allocation" information out of view, or to make those parts foldable. I probably do not need generic cluster information in view while migrating VMs. I think making those parts foldable would be a good first step to increase the usability of the VM list a lot.

Migration of VMs works in my training/demo environment. Some observations:
  • When choosing not to delete the source VM on a live migration – on ZFS, thus slow – the source VM remained locked even after no processes related to it were running anymore and the destination VM had started (see the unlock sketch after this list).
  • When choosing live migration and not deleting the source VM, the source VM was only started after the migration had basically finished. This might be wise to avoid duplicate IP addresses ;). However, in that case it might be better not to make it an online migration to begin with?
  • Additionally, I upgraded the source VM's packages during the migration. The result: the destination VM was upgraded as well. If the online migration works similarly to the so-called "snapshot" mode, I get why. However… it might be an unexpected result, as one would expect the state of the VM at the time the migration started to be replicated on the destination.
  • When selecting a node to migrate to, PVE shows both memory and CPU usage of that node. PDM only shows memory. It would be nice if it could show CPU usage as well. Bonus points for adding some graphical representation in the drop-down. Not sure whether that is possible with the new framework.
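Regarding the first point, such a stale lock can usually be removed by hand once the migration task has ended (a sketch; 100 is a placeholder for the affected VMID):

Code:
# remove a leftover lock from the source VM
qm unlock 100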
I really like the remote migration settings when "Target Remote" is different from the source. Especially that "Detailed Mapping" is awesome! Great work.

Additionally, in that training/demo environment I have everything behind an Nginx reverse proxy with additional HTTP basic auth for further protection from outside access. When configuring remotes, I used hostnames that resolve to IP addresses in an RFC 1918 network. That means any links to open the original web UI of the PVE hosts do not work. When I alternatively chose public DNS names, PDM was not able to connect because it cannot perform that HTTP basic auth step (illustrated below). So what would be nice here is being able to configure some kind of URL mapping, i.e. use the RFC 1918 names for adding remotes, but allow configuring external URLs for the PVE web UI links, trusting that the web browser in question can access them just fine. I read in the FAQ that reverse proxies are not supported and that Wireguard/OpenVPN tunnels are recommended instead. However, a properly configured reverse proxy only allows web access, while a tunnel would allow access to other ports of the PVE nodes.
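To illustrate the auth problem (hostname and credentials are made up): the proxy rejects any request without the extra basic auth header, and PDM has no way to send it:

Code:
# without the proxy credentials, the reverse proxy rejects the request
curl -sk https://pve.example.com/api2/json/version
# with HTTP basic auth it goes through; PDM cannot add this header
curl -sk -u proxyuser:proxypass https://pve.example.com/api2/json/version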

I agree that a way to reboot and shut down nodes from the PDM GUI would be good.

I would also like to see a "Show fingerprint" button in the dashboard of a PVE host.
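Until then, the fingerprints can be read on the node itself (assuming a standard PVE 8 install):

Code:
# list the node's certificates including their SHA-256 fingerprints
pvenode cert info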

I also wonder about the security implications of PDM. Someone who has access to it could basically mess up the whole infrastructure. That might be a reason not to add reboot and shutdown of PVE nodes, or removal of VMs and other destructive actions. But then you could not really add access to VM/LXC guests either, and stopping all VMs would also be quite the denial-of-service attack.
One idea would be to let PDM automatically adapt to the permissions its API token has; some of that could then be configured by admins, of course with a minimum set of permissions required to do anything useful (a sketch follows below). Another idea would be to have PDM bring up an additional password prompt for certain more dangerous operations.
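As a sketch of the first idea (the token name 'pdm' is a placeholder; PVEAuditor is a built-in read-only role): an admin could hand PDM a token that only permits auditing, and PDM would then hide destructive actions:

Code:
# create a privilege-separated API token for PDM
pveum user token add root@pam pdm --privsep 1
# grant it read-only access to the whole tree via the built-in PVEAuditor role
pveum acl modify / --tokens 'root@pam!pdm' --roles PVEAuditor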
In the end it will be a balance between convenience and security and I do not think all that much can be done about it.

So far for now. PDM is really a fine product and I look forward to what you come up with for it. I bet some of the PVE roadmap items about large setups might end up being implemented within PDM instead of PVE, or at least partly in each of those.
 
Thank you for the great addition to PVE infrastructure.

We've installed PDM in our environment and connected it to two clusters, all nodes running:
pve-manager/8.3.2/3e76eec21c4a14a7 (running kernel: 6.8.12-1-pve)

When trying to migrate a VM between the clusters, we get the following error message in the Target Network field:

Code:
Error: api error (status = 400 Bad Request): api returned unexpected data - failed to parse api response

The storage portion is populated successfully.

Any suggestions?

Datacenter Manager 0.1.10

P.S. Just realized that the fact that we are running PVE as instances in OpenStack with a public/private network may have something to do with it?

Public IP from PDM point of view: 172.22.1.142
I ran into this same issue today, on version 0.1.11. I'm not running OpenStack and don't have any really complex networking... just got this error.

Workaround request: make the network portion optional and leave the NIC in an unconfigured state if the API call fails.
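For anyone debugging this, it may help to look at the raw response of the API call PDM presumably fails to parse (a guess at the endpoint involved; the node name is a placeholder; run on the target cluster):

Code:
# list bridge-like network devices on the target node, as JSON
pvesh get /nodes/pve-target/network --type any_bridge --output-format json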
 
Dear Proxmox Staff and Community members,

After searching through the previous posts and finding nothing like this, I would like to write down a feature request for customizable links on the dashboard:
For the PDM dashboard, the possibility to display custom links that lead to other administrative tools, e.g. monitoring dashboards such as the Ceph dashboard, Grafana, and other third-party tools, would be very useful.

Thank you and keep up your good work! I am on PDM version 0.1.11 now and everything works great with PVE >= 8.3.3.

Best regards,
Niklas
 