This isn't an official image from the Proxmox team though. This won't fly in a corporate environment.
An official LXC or Docker image from Proxmox Server Solutions would be a different story; I hope we will get something like that at some point.
what about the system logs (journalctl)?

So I ran it again just to see if it wasn't a fluke, and I have the same error. This is the error on the receiving host:
Code:
mtunnel started
received command 'version'
received command 'bwlimit'
received command 'disk-import'
ready
received command 'ticket'
received command 'query-disk-import'
disk-import: Importing image: 100% complete...done.
received command 'query-disk-import'
disk-import: successfully imported 'TWOTB:vm-103-disk-0'
received command 'config'
received command 'unlock'
received command 'start'
run_buffer: 571 Script exited with status 2
lxc_init: 845 Failed to run lxc.hook.pre-start for container "103"
__lxc_start: 2034 Failed to initialize container "103"
TASK ERROR: mtunnel exited unexpectedly
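In case it helps, here is a rough way to get more detail on the receiving host about why lxc.hook.pre-start fails for container 103 (run on the target node; the log file path below is just an example):

Code:
# system log entries around the container's start attempt
journalctl -b -u pve-container@103.service

# start the container in the foreground with debug logging
# written to a temporary file (path is an arbitrary example)
lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103.log

The debug log usually shows which hook script failed and what it printed.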
it is case sensitive

The search box at the top is broken, I keep searching and it finds nothing ;p (Datacenter Manager 0.1.11 ALPHA)
In the first release it was working.
root@PDM:/etc/proxmox-datacenter-manager# systemctl status proxmox-datacenter-api.service
● proxmox-datacenter-api.service - Proxmox Datacenter Manager API daemon
Loaded: loaded (/lib/systemd/system/proxmox-datacenter-api.service; enabled; preset: enabled)
Active: active (running) since Sat 2025-02-08 11:46:09 CET; 3h 34min ago
Main PID: 21677 (proxmox-datacen)
Tasks: 7 (limit: 2245)
Memory: 26.2M
CPU: 4.385s
CGroup: /system.slice/proxmox-datacenter-api.service
└─21677 /usr/libexec/proxmox/proxmox-datacenter-api
Feb 08 15:19:39 PDM proxmox-datacenter-api[21677]: bad fingerprint: 46:39:5a:d4:75:60:48:91:f7:e5:01:57:8b:4d:b5:77:65:ef:6f:29:34:cc:32:b9:63:7d:b2:d4:-redacted-
Feb 08 15:19:39 PDM proxmox-datacenter-api[21677]: expected fingerprint: 12:15:56:7f:33:98:38:37:94:f6:88:49:e1:bc:fc:a8:3e:b1:7e:18:db:1f:1b:45:7d:13:0e:-redacted-
Feb 08 15:20:05 PDM proxmox-datacenter-api[21677]: bad fingerprint: 12:15:56:7f:33:98:38:37:94:f6:88:49:e1:bc:fc:a8:3e:b1:7e:18:db:1f:1b:45:7d:13:0e:-redacted-
Feb 08 15:20:05 PDM proxmox-datacenter-api[21677]: expected fingerprint: 46:39:5a:d4:75:60:48:91:f7:e5:01:57:8b:4d:b5:77:65:ef:6f:29:34:cc:32:b9:63:7d:b2:d4:-redacted-
Feb 08 15:20:22 PDM proxmox-datacenter-api[21677]: bad fingerprint: 46:39:5a:d4:75:60:48:91:f7:e5:01:57:8b:4d:b5:77:65:ef:6f:29:34:cc:32:b9:63:7d:b2:d4:-redacted-
Feb 08 15:20:22 PDM proxmox-datacenter-api[21677]: expected fingerprint: 12:15:56:7f:33:98:38:37:94:f6:88:49:e1:bc:fc:a8:3e:b1:7e:18:db:1f:1b:45:7d:13:0e:-redacted-
mv /etc/pve/pve-root-ca.pem /etc/pve/pve-root-ca.pem.BAK
mv /etc/pve/priv/pve-root-ca.key /etc/pve/priv/pve-root-ca.key.BAK
mv /etc/pve/nodes/NODENAME/pveproxy-ssl.pem /etc/pve/nodes/NODENAME/pveproxy-ssl.pem.BAK
mv /etc/pve/nodes/NODENAME/pveproxy-ssl.key /etc/pve/nodes/NODENAME/pveproxy-ssl.key.BAK
pvecm updatecerts -f
systemctl restart pveproxy
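After regenerating the certificates like this, the fingerprint stored for the remote in PDM will no longer match the node. Assuming it is run on the PVE node, the current fingerprints can be read with:

Code:
# print the node's current certificate information, including fingerprints
pvenode cert info

The remote entry in PDM can then be updated to the new fingerprint.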
So I tested PDM 0.1.11 Alpha a bit more thoroughly now.

During my experience using the Proxmox Datacenter Manager (version 0.1.7 Alpha), I noticed that the interface for viewing the list of VMs and containers within a node is constrained to a small area. This limitation becomes especially noticeable when managing a node with a large number of VMs and containers.
The current interface does not provide an effective way to scroll through and view the entire list of VMs/containers comfortably. This forces frequent scrolling or other workarounds, which can be inefficient, especially when dealing with a node hosting many instances. Additionally, the display area is not resizable, further limiting usability.
I ran into this same issue today, version 0.1.11. I'm not running OpenStack and don't have any really complex networking... I just got this error.

Thank you for the great addition to the PVE infrastructure.
We've installed PDM in our environment and connected it to two clusters, all nodes running:
pve-manager/8.3.2/3e76eec21c4a14a7 (running kernel: 6.8.12-1-pve)
When trying to migrate the VM between clusters we get the following error message in the Target Network field:
Code:
Error: api error (status = 400 Bad Request): api returned unexpected data - failed to parse api response
The storage portion is populated successfully.
Any suggestions?
Datacenter Manager 0.1.10
P.S. I just realized that perhaps the fact that we are running PVE as instances in OpenStack with a public/private network may have something to do with it?
Public IP from PDM point of view: 172.22.1.142
I guess this is the same issue: https://forum.proxmox.com/threads/n...rse-api-response-in-network-selection.160166/
118431744 bytes (118 MB, 113 MiB) copied, 1 s, 118 MB/s
235495424 bytes (235 MB, 225 MiB) copied, 2 s, 118 MB/s
353292288 bytes (353 MB, 337 MiB) copied, 3 s, 118 MB/s
470605824 bytes (471 MB, 449 MiB) copied, 4 s, 117 MB/s
587771904 bytes (588 MB, 561 MiB) copied, 5 s, 118 MB/s
705404928 bytes (705 MB, 673 MiB) copied, 6 s, 117 MB/s
822829056 bytes (823 MB, 785 MiB) copied, 7 s, 117 MB/s
94037
api error (status = 400: api error (status = 596: ))
what versions are you running on those nodes?

Another one: when trying to move another KVM I now get this error:
Code:
api error (status = 400: api error (status = 596: ))
And when adding a cluster, the hostnames are read from the config. That's fine, but then the manager cannot resolve the names to IP addresses. I made a workaround (adding the host and the IP to the hosts file), but when trying to move the KVM it displays the hostname:8006 and does not try to resolve it via the hosts file.
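For reference, the workaround is just static entries on the PDM host; the addresses below are hypothetical placeholders, not actual values from this setup:

Code:
# /etc/hosts on the PDM host -- example entries only,
# replace with the node names and IPs from your cluster config
192.0.2.20  root20
192.0.2.21  root21
192.0.2.22  root22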
edit:
Found something: migrating VMs from the node that I added the remote with works just fine, but not from the other ones. I have root20, root21, root22 and added the PDM connection via root20; I can migrate VMs from this server just fine. From the other ones I get the message at the top.
Thank you.

No, that should be possible. I have 3 standalone hosts here (basically each their own cluster) and can migrate VMs around as I wish, as long as the hardware on the nodes in question is identical when the VM uses the host CPU type, or without any limitation if I use x86-64-v2-aes as the VM CPU (which should be compatible with most nodes).
Also, there is at the moment a limitation that you can't migrate between ZFS and other storage types, so if you use ZFS on one cluster you have to use it on all of them, or on none of them. I ran into this issue because one node had ZFS and the other two used LVM, and I couldn't migrate. After changing all of them to ZFS it works just fine. Changing all of them to LVM would work as well, but I like ZFS.
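As a minimal sketch of the CPU change mentioned above (assuming a VM with the hypothetical ID 100, run on the node currently hosting it):

Code:
# switch the VM to a generic CPU model so it can migrate between
# nodes with different physical CPUs (VMID 100 is just an example)
qm set 100 --cpu x86-64-v2-aes

The change takes effect on the next (re)start of the VM.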