Proxmox Datacenter Manager - First Alpha Release

Thank you very much for this much-needed software :D

I have tested a migration. The following things do not seem to work right now:

- If a VM has CD drives in its hardware (ISOs loaded), you can't migrate when the new host doesn't have the ISOs in the same version
- Migration is not possible right now when snapshots exist
- All hosts need a functional DNS entry for all their information to show up in the Datacenter Manager
 
I am surprised that you decided on an architecture where the PDM needs access to each PVE host/cluster... but not the other way round.
I can see a setup where the PDM is accessible on the internet making just as much sense (with the open ports/connections initiated and maintained from the PVE side)...

But even more obvious: why not add Tailscale to both PDM and PVE, and make it that much easier to join clusters across "whatever needs to be traversed"?

I manage some clusters/PVEs for which I will NEVER enable a direct VPN from any infrastructure in which I have a PDM running... But Tailscale built in to all PVEs, with the PDM <--> PVE connection running on top of that, would be a perfect solution for me...
(and I suspect for many others - it is very secure by design, especially if the Tailscale configs are added directly in PVE...)
 
@proxmox Staff - Thank you very much for another amazing piece of Software.
I just jumped in to test your ALPHA release and added my three single-node labs for testing.

It would be great if the VM console and the shell of a container could somehow be tunneled (SSH tunnel?) in order to allow datacenter admins to primarily work in the PDM.

There are just three bugs I already noticed:
1. ACME challenges
2. When there are spaces in the remote ID, a regex failure is displayed
3. Running tasks are nowhere visible

This is just an ALPHA... I'm already excited about what great features and possibilities the final version will bring
 
I am surprised that you decided on an architecture where the PDM needs access to each PVE host/cluster... but not the other way round.
I can see a setup where the PDM is accessible on the internet making just as much sense (with the open ports/connections initiated and maintained from the PVE side)...

But even more obvious: why not add Tailscale to both PDM and PVE, and make it that much easier to join clusters across "whatever needs to be traversed"?

I manage some clusters/PVEs for which I will NEVER enable a direct VPN from any infrastructure in which I have a PDM running... But Tailscale built in to all PVEs, with the PDM <--> PVE connection running on top of that, would be a perfect solution for me...
(and I suspect for many others - it is very secure by design, especially if the Tailscale configs are added directly in PVE...)
As PVE and PDM are Debian-based, you can use the full Linux network stack: just install the necessary packages and you have your VPN running.
But thank you for your thoughts and the idea, which could be a good way to go as an MSP.
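To make that concrete, here is a minimal sketch (purely illustrative, in Python) of what "set up your own tunnel, then point the PDM at the PVE over it" could look like from the operator's side: it only checks that an assumed tunnel-side address of a PVE node answers on the default API port before you add it as a remote. The address 10.80.0.11 and the workflow are assumptions about your own tunnel setup, not anything PDM provides.

```python
#!/usr/bin/env python3
"""Quick connectivity check before adding a PVE remote to the PDM.

Assumptions (adjust to your setup): a VPN/tunnel is already configured
with the distro's own packages, the PVE node is reachable over it at the
hypothetical address 10.80.0.11, and its API listens on the default 8006.
"""
import socket
import sys

PVE_TUNNEL_ADDR = "10.80.0.11"  # hypothetical tunnel-side address of the PVE node
PVE_API_PORT = 8006             # default PVE API/web port


def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    if reachable(PVE_TUNNEL_ADDR, PVE_API_PORT):
        print(f"{PVE_TUNNEL_ADDR}:{PVE_API_PORT} is reachable - safe to add as a PDM remote")
        sys.exit(0)
    print(f"{PVE_TUNNEL_ADDR}:{PVE_API_PORT} is NOT reachable - check the tunnel first")
    sys.exit(1)
```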
 
I can see a setup where the PDM is accessible on the internet making just as much sense (with the open ports/connections initiated and maintained from the PVE side)...
As the PDM can access all of one's infrastructure, having it open on the internet would mean having a single target that could be used to take over your whole infra. As there will be just a single PDM for most setups, it made more sense to have it as the central piece, and it avoids the need for some mesh of trust between all the different hosts or clusters.

But even more obvious: why not add Tailscale to both PDM and PVE, and make it that much easier to join clusters across "whatever needs to be traversed"?
You can already set up your preferred tunneling; we want to stay flexible. Tailscale or the like, while great, is not for everyone.

We're currently thinking through SDN use cases, and a tunneling "fabric" feature or the like might be an option, though mainly to connect the various clusters/nodes at different sites with each other.
 
As the PDM can access all of one's infrastructure, having it open on the internet would mean having a single target that could be used to take over your whole infra. As there will be just a single PDM for most setups, it made more sense to have it as the central piece, and it avoids the need for some mesh of trust between all the different hosts or clusters.
I am fully aware - and that is also why I suggest "something quite similar to what you mention below"
You can already set up your preferred tunneling; we want to stay flexible. Tailscale or the like, while great, is not for everyone.
I know - but this leaves it up to each individual to secure/implement this in a proper way...
We're currently thinking through SDN use cases, and a tunneling "fabric" feature or the like might be an option.
That is exactly what I would suggest... Not specifically Tailscale (since I agree it's probably not for everyone), but a solution "baked in" to both PVE and PDM, which by design could/would be VERY secure (simply by designing it to ONLY allow the bindings to be used by the PVE <--> PDM traffic)...

Building on the PVE SDN foundation would most likely be the best option - but again, I believe it should specifically NOT allow "generic traffic" between those cluster SDNs, but only allow these management functionalities to work... unless specifically opened up (by the admin) for other types of traffic...

Looking very much forward to where this will go in the future.... Could become really valuable :)
 
I believe it should specifically NOT allow "generic traffic" between those cluster SDNs, but only allow these management functionalities to work... unless specifically opened up (by the admin) for other types of traffic...
Yes, definitely; besides the obvious address-duplication and similar woes, it would be quite the security risk to connect all remotes to each other unconditionally and unfiltered.

SDN is a huge topic and IMO one of the most important ones for the PDM; some devs here have already been fleshing out ideas for a few weeks, and I hope we get some of them integrated over the next months.
 
VM migrations between two standalone servers (not in a cluster) with the same shared storage mounted (an NFS mount backed by a ZFS dataset on a TrueNAS server) appear to duplicate the VM disk and copy all the data into the duplicate. It would be nice if migrations on the same shared storage could just migrate the active state of the VM (RAM and CPU execution) and then reattach the existing disk on the target node, since the underlying storage is the same.

Overall though this is a really cool development, thanks for all the work on this!

There's a bugzilla request open for remote-migrate with shared storage:

https://bugzilla.proxmox.com/show_bug.cgi?id=4928

Both Ceph and NFS could be of interest as shared storage.

I assume that technically it would be simpler, since there's nothing to be done disk-wise during migration for shared storage, but maybe some basic checks that the storage is indeed the same would help prevent mistakes.

For this purpose, one way would be for Proxmox to add a unique UUID/magic file somewhere in the shared storage (NFS or Ceph) that is easy to check in this kind of use case.
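To illustrate the suggestion, here is a small sketch of the "magic file" idea. Everything in it is hypothetical: the marker file name, its location at the storage mount point, and the comparison workflow are illustrations of the proposal, not existing PVE/PDM behaviour.

```python
#!/usr/bin/env python3
"""Sketch of a 'magic file' check for detecting identical shared storage.

Hypothetical: the marker name (.pdm-storage-id), its location at the root
of the storage mount point, and the workflow are only an illustration of
the idea proposed above, not anything Proxmox does today.
"""
import uuid
from pathlib import Path

MARKER_NAME = ".pdm-storage-id"  # hypothetical marker file name


def ensure_marker(mountpoint: str) -> str:
    """Create the marker with a fresh UUID if it does not exist, return its UUID."""
    marker = Path(mountpoint) / MARKER_NAME
    if not marker.exists():
        marker.write_text(str(uuid.uuid4()) + "\n")
    return marker.read_text().strip()


def same_backing_storage(mountpoint_a: str, mountpoint_b: str) -> bool:
    """Treat two mount points as the same storage only if their marker UUIDs match."""
    marker_b = Path(mountpoint_b) / MARKER_NAME
    if not marker_b.exists():
        return False
    return ensure_marker(mountpoint_a) == marker_b.read_text().strip()


if __name__ == "__main__":
    # In practice the source node would call ensure_marker() and the target node
    # would compare; here both paths are on one host purely for demonstration.
    print(same_backing_storage("/mnt/pve/truenas-nfs", "/mnt/pve/truenas-nfs"))
```

A check like this would let a migration skip the disk copy only when both nodes can prove they see the very same NFS export or Ceph pool, which is the kind of basic safeguard mentioned above.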
 
Is there a plan for exposing an API for the PDM later on?
There is already one - the UI already uses the PDM REST API exclusively for everything.

It's far from stable; expect breakage of available endpoints, parameters, and return formats during the alpha and beta development phases. API docs like our other projects have are planned too.
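As a rough illustration of talking to that API directly: the sketch below assumes the PDM listens on port 8443 (as mentioned later in this thread), uses an /api2/json/... path layout and an API-token Authorization header modelled on the other Proxmox products. The endpoint path, token format, and host name are assumptions and, as noted above, may change during the alpha.

```python
#!/usr/bin/env python3
"""Minimal sketch of querying the PDM REST API directly.

Assumptions: port 8443, an /api2/json/... path layout, and a token header
following the PVE/PBS convention; all of these are unstable/guessed during
the alpha and may differ from the real PDM API.
"""
import json
import ssl
import urllib.request

PDM_HOST = "pdm.example.com"                   # hypothetical PDM address
API_TOKEN = "root@pam!monitoring=SECRET-UUID"  # hypothetical API token


def pdm_get(path: str) -> dict:
    """GET a PDM API path and return the decoded JSON response."""
    req = urllib.request.Request(f"https://{PDM_HOST}:8443{path}")
    # Token header format assumed to follow the PVE/PBS convention.
    req.add_header("Authorization", f"PVEAPIToken={API_TOKEN}")
    # Self-signed certificates are common on fresh installs; skip verification for this sketch.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical endpoint: list the configured remotes.
    print(json.dumps(pdm_get("/api2/json/remotes"), indent=2))
```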
 
Is there any reason why this couldn't be placed in a docker image? Additionally, how do y'all feel about releasing an official image?
 
Will we get some sort of embedded, fully featured web interface that proxies the calls via the PDM to PVE/PBS, so that we never have to go onto the web interfaces directly?

Also, can we specify which address is used to open links to the web interfaces? My browser can't access my PVE nodes via the same IPs as my PDM can, and I can't use split DNS since they have different ports due to me using a reverse proxy for browser access.
 
Will we get some sort of embedded, fully featured web interface that proxies the calls via the PDM to PVE/PBS, so that we never have to go onto the web interfaces directly?

Also, can we specify which address is used to open links to the web interfaces? My browser can't access my PVE nodes via the same IPs as my PDM can, and I can't use split DNS since they have different ports due to me using a reverse proxy for browser access.
This interface runs on port 8443; that is also mentioned in the intro.
 
There is already one - the UI already uses the PDM REST API exclusively for everything.

It's far from stable; expect breakage of available endpoints, parameters, and return formats during the alpha and beta development phases. API docs like our other projects have are planned too.
I think I communicated that the wrong way - are the PDM API and the docs publicly available now, or is that something that will come in the future?

Once again - awesome job
 
Looking great so far - I like the new UI widgets; nothing to report that hasn't already been reported.

Didn't see it mentioned - any plans for a native LXC for this?
 
Is there any reason why this couldn't be placed in a docker image? Additionally, how do y'all feel about releasing an official image?
There is no hard technical reason against it. Rather, we're in the Debian packaging business, so to say, and have streamlined our releases with that quite well, so it was the obvious first choice for releasing the alpha. Closer to a more stable release, we'll take another look at what appliances/images/... we'll provide besides our standard Debian packages and ISO images; one relatively sure thing is our standard CT image, as we already have a process for that, and quite probably also an OVA, which is relatively similar to what we already build now.
Didn't see it mentioned - any plans for a native LXC for this?
As spelled out above in a bit more detail: quite definitely.
 
