PDM: migrate VMs across firewall

proxwolfe

Renowned Member
Jun 20, 2020
In my homelab I have a small cluster from which I serve some applications that live in my DMZ. And I have another node inside my inner firewall on which I run some apps that need not be reachable from the public internet. Among others, I run PDM from there. While from inside the firewall it is possible to reach the DMZ PVE, it is not possible to reach the inner PVE from the DMZ.

Now, I would like to be able to move VMs between the DMZ PVE and the inner PVE. Apparently, this only works directly, and since the DMZ PVE can't reach the inner PVE, moving VMs from the DMZ PVE to the inner PVE is not possible. At least I am getting an "api error (status = 400: api error (status = 596: ))".

So this is either a cry for help (if there is a way to make this work) or a feature suggestion (if this doesn't work yet): I think it would be good if PDM could act as a conduit (since it can reach both source and target nodes) and relay the traffic. Alternatively, this should also work if, instead of pushing the VM from the DMZ PVE to the inner PVE, the inner PVE could pull the VM from the DMZ PVE.

I imagine that I am not the only one who wants to migrate VMs across firewalls?
 
My personal choice would be to establish a Wireguard tunnel.

My preferred method is to handcraft something like https://www.wireguard.com/quickstart/. But this requires one endpoint to be able to reach the other one; a single direction is sufficient.
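As a sketch of such a handcrafted point-to-point tunnel (all keys, hostnames, and the 10.99.0.0/24 subnet are hypothetical placeholders): the DMZ PVE is the reachable endpoint and the inner PVE dials out, so the inner firewall never needs an inbound rule.

```ini
# /etc/wireguard/wg0.conf on the DMZ PVE (the endpoint that is reachable)
[Interface]
Address = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <dmz-pve-private-key>

[Peer]
# inner PVE; no Endpoint needed, it connects outward to us
PublicKey = <inner-pve-public-key>
AllowedIPs = 10.99.0.2/32

# --- /etc/wireguard/wg0.conf on the inner PVE (initiates outward) ---
[Interface]
Address = 10.99.0.2/24
PrivateKey = <inner-pve-private-key>

[Peer]
PublicKey = <dmz-pve-public-key>
Endpoint = dmz-pve.example:51820
AllowedIPs = 10.99.0.1/32
# keep the outbound firewall/NAT state alive so the DMZ side can reach back in
PersistentKeepalive = 25
```

Bring the interface up on both sides with `wg-quick up wg0`; since only the inner side initiates the connection, the DMZ PVE can still reach the inner PVE over the tunnel once it is established.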

If that is not possible I build an additional external relay which is reachable from both endpoints.

(( I use several such tunnels. Nearly all of them "only" establish a point-to-point connection to get a specific job done. This is different from "route all my generic traffic to the outside world through that tunnel", as I do not need to activate routing/masquerading. ))
 
Hi,

Port forwarding on the firewall should also work, and exposes only the required port instead of the whole host (to the other host). This would also cause less computational overhead than wrapping the packet contents. On the other hand, there is usually no authentication mechanism like with the WireGuard setup.
(SSH reverse tunneling should be able to do the same job with onboard tools, but the configuration might be more complex, depending on personal knowledge.)
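A reverse-tunnel sketch with onboard tools (the hostnames and forwarded port are assumptions): the inner PVE opens an outbound SSH connection to the DMZ PVE and publishes its own API port there, so no connection is ever initiated inward.

```shell
# Run on the inner PVE. Afterwards, the DMZ PVE can reach the inner
# PVE's API (port 8006) via localhost:18006 on itself, while the
# actual TCP connection was opened from the inside out.
ssh -N -R 18006:localhost:8006 root@dmz-pve.example

# Optional resilience: autossh re-establishes the tunnel if it drops.
autossh -M 0 -N -R 18006:localhost:8006 root@dmz-pve.example
```

Whether the migration itself can be pointed at such a forwarded port is a separate question, but the same pattern works for any single TCP service.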

Maybe your firewall solution provides a load-balancing tool that can add an authentication mechanism to the port forwarding.

The best solution will likely depend on your architectural/security/compliance requirements.
The WireGuard solution might be very suitable for your needs.

Which firewall solution do you use?

BR, Lucas
 
I imagine that I am not the only one who wants to migrate VMs across firewalls?
I'm wondering why you don't just put the VMs in the DMZ on this PVE instead of the entire PVE? Don't you trust the VM isolation?

I'm asking because if you had all the PVEs on the same management network and only exposed the VMs to the appropriate local or publicly accessible networks, your life would be much easier.

EDIT: By the way, I wouldn't even necessarily say that your setup is more secure than what I proposed, especially if the PVE host and its management interfaces are located within that DMZ as well. Because the real security risks usually come from exposed management interfaces, not from someone breaking out of a VM and gaining root access on the host. I'm not saying the latter is completely unheard of, but it is much, much, much less likely than someone gaining access through an exposed management interface.
 
Or wait, do you mean that by “inner firewall” you are actually doing double NAT? Internet → NAT on Firewall 1 → DMZ → NAT on Firewall 2 → LAN? Are we even talking about actual firewalls here, or just two routers placed behind each other?

If the latter is the case, I guess the suggestions from the others in this thread are the only ways to make that work. However, I would then reconsider your overall network design. Double NAT tends to cause a lot of trouble while not providing that much security. In other words, don’t use NAT to isolate network segments; use proper firewall rules instead.

A simple example would be:
  • a PVE_MGMT network where the PVE management interfaces are located
  • a DMZ / PUBLIC network where publicly accessible services are located
  • a LOCAL_SERVICES network where internal services are located
  • a CLIENT_LAN for end-user devices
Then you can define proper firewall rules between those segments, all on a single firewall, instead of relying on NAT on a second router to isolate your public services from your local ones. Also, your Proxmox VE hosts can then reach each other while being properly isolated from both the PUBLIC / DMZ network and the LOCAL_SERVICES network, and in general this gives you much more control over what can reach what.
 
My personal choice would be to establish a Wireguard tunnel.

My preferred method is to handcraft something like https://www.wireguard.com/quickstart/. But this requires one endpoint to be able to reach the other one; a single direction is sufficient.
Up to now, it has been my policy to not allow anything from the outside into my inner firewall. Ideally, I would like to keep it that way. If no viable other solutions exist, wireguard certainly would be an option.

If that is not possible I build an additional external relay which is reachable from both endpoints.
And how would I use that to migrate a VM from outside in? Would I set up a new PVE and migrate the VM first from the DMZ PVE to that relay PVE and from there to my inner PVE? If the relay PVE were able to push migrate to my inner PVE, this wouldn't achieve my objective.
 
Hi,

Port forwarding on the firewall should also work, and exposes only the required port instead of the whole host (to the other host). This would also cause less computational overhead than wrapping the packet contents. On the other hand, there is usually no authentication mechanism like with the WireGuard setup.
(SSH reverse tunneling should be able to do the same job with onboard tools, but the configuration might be more complex, depending on personal knowledge.)
Up to now, it has been my policy to not allow anything from the outside into my inner firewall. Ideally, I would like to keep it that way. If no viable other solutions exist, port forwarding would indeed be an easy option.

Maybe your firewall solution provides a load-balancing tool that can add an authentication mechanism to the port forwarding.
But how would I integrate the authentication into the PDM migration process?

The best solution will likely depend on your architectural/security/compliance requirements.
The WireGuard solution might be very suitable for your needs.

Which firewall solution do you use?
pfSense
 
I'm wondering why you don't just put the VMs in the DMZ on this PVE instead of the entire PVE? Don't you trust the VM isolation?
Only the VMs live in the DMZ. The PVE that serves them does not.

I'm asking because if you had all the PVEs on the same management network and only exposed the VMs to the appropriate local or publicly accessible networks, your life would be much easier.

EDIT: By the way, I wouldn't even necessarily say that your setup is more secure than what I proposed, especially if the PVE host and its management interfaces are located within that DMZ as well. Because the real security risks usually come from exposed management interfaces, not from someone breaking out of a VM and gaining root access on the host. I'm not saying the latter is completely unheard of, but it is much, much, much less likely than someone gaining access through an exposed management interface.
The PVE management interface, of course, is on a separate management network and not reachable from the DMZ.
 
Or wait, do you mean that by “inner firewall” you are actually doing double NAT? Internet → NAT on Firewall 1 → DMZ → NAT on Firewall 2 → LAN? Are we even talking about actual firewalls here, or just two routers placed behind each other?
Yes, I believe that is a classic setup.

And yes, there are two fully fledged firewalls. One on the edge to the internet and one behind it at the edge of my private LAN.

If the latter is the case, I guess the suggestions from the others in this thread are the only ways to make that work.
It does seem so. Unless, of course, Proxmox takes up my suggestion to have PDM act as a conduit.

However, I would then reconsider your overall network design. Double NAT tends to cause a lot of trouble while not providing that much security. In other words, don’t use NAT to isolate network segments; use proper firewall rules instead.

A simple example would be:
  • a PVE_MGMT network where the PVE management interfaces are located
  • a DMZ / PUBLIC network where publicly accessible services are located
  • a LOCAL_SERVICES network where internal services are located
  • a CLIENT_LAN for end-user devices
Then you can define proper firewall rules between those segments, all on a single firewall, instead of relying on NAT on a second router to isolate your public services from your local ones. Also, your Proxmox VE hosts can then reach each other while being properly isolated from both the PUBLIC / DMZ network and the LOCAL_SERVICES network, and in general this gives you much more control over what can reach what.
I'm feeling relatively comfortable with my current setup, which is mostly like you describe above, except that my private LAN sits behind another firewall. But that is not causing major issues because traffic only ever goes out.
 
Yes, I believe that is a classic setup.
Hmm, “classic” is probably the right word here, in the sense that it’s usually not done this way anymore ;)

And yes, there are two fully fledged firewalls. One on the edge to the internet and one behind it at the edge of my private LAN.

Isn’t that a bit overkill for a homelab? I mean, unless you are trying to replicate a large enterprise environment with multiple firewalls, which is totally beyond my scope of knowledge, why not just do the network segmentation on a single pfSense instance?

I would argue that, ultimately, this would be even more secure unless you already have proper separation between end-user-reachable services and management interfaces on *both* firewalls.

The reason I’m saying that is that if the Proxmox VE management interface is on the same network as all your publicly available VMs, someone could reach it if one of your services is compromised. On the other hand, isolating the Proxmox VE hosts from each other isn’t really necessary anymore if the management interfaces are placed in their own isolated network, unless you’re worried that someone could break out of a VM and gain access to the host.

But yeah, if you want to keep your current setup, I can't really help, and I leave that to the others here...
 
Isn’t all of that a bit complicated for a homelab? I mean, unless you are trying to replicate an enterprise environment with multiple firewalls, which is totally beyond my scope of knowledge (and yours apparently too) ;), why not just do the network segmentation on a single pfSense instance?
I'm aiming for maximum security. But, admittedly, I'm still learning.

I would argue that, ultimately, this would be even more secure unless you already have proper separation between end-user-reachable services and management interfaces on *both* firewalls.
How so?

The reason is that if the Proxmox management interface is on the same network as all your publicly available VMs, someone could reach that if one of your services is compromised.
The PVE management network is, of course, separate from the DMZ network.
On the other hand, isolating the PVEs from each other isn't necessary anymore if the management interfaces are in their own isolated network, unless you're worried that someone could break out of a VM and gain access to the host.
That is, actually, the contingency I'm trying to provide for.

But yeah, if you want to keep your current setup, I can't really help, and I leave that to the others here...
Well, there is the obvious option of piercing my inner firewall. But I'm hoping to find a more sophisticated solution that does not compromise my inner firewall (or to learn that there is no other way).
 
And how would I use that to migrate a VM from outside in? Would I set up a new PVE and migrate the VM first from the DMZ PVE to that relay PVE and from there to my inner PVE?
There is no additional PVE involved ;-)

The "external relay" is just a commonly reachable rendezvous point to establish the tunnel. All payload traffic would then go through this device, which strongly influences the achievable bandwidth. This happens automatically for the chosen tunnel subnet, which needs to be conflict-free and not in use anywhere else.

In the end, all PVEs would have a static IP address in the tunnel and could "ping" each other and reach the services on the other end. Both believe they are using the very same network, with no routing involved.
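A sketch of such a rendezvous relay, assuming a hypothetical 10.99.0.0/24 tunnel subnet and placeholder keys: both PVEs dial out to the relay, and the relay forwards packets between them.

```ini
# /etc/wireguard/wg0.conf on the external relay (reachable from both sides)
[Interface]
Address = 10.99.0.254/24
ListenPort = 51820
PrivateKey = <relay-private-key>
# the relay must forward packets between the two spokes
PostUp = sysctl -w net.ipv4.ip_forward=1

[Peer]
# DMZ PVE
PublicKey = <dmz-pve-public-key>
AllowedIPs = 10.99.0.1/32

[Peer]
# inner PVE
PublicKey = <inner-pve-public-key>
AllowedIPs = 10.99.0.2/32
```

On each PVE, the only configured peer is the relay, with `AllowedIPs = 10.99.0.0/24` and `PersistentKeepalive = 25`, so each side sends traffic for the other spoke through the hub, and both only ever initiate outbound connections.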
 
There is no additional PVE involved ;-)

The "external relay" is just a commonly reachable rendezvous point to establish the tunnel. All payload traffic would then go through this device, which strongly influences the achievable bandwidth. This happens automatically for the chosen tunnel subnet, which needs to be conflict-free and not in use anywhere else.

In the end, all PVEs would have a static IP address in the tunnel and could "ping" each other and reach the services on the other end. Both believe they are using the very same network, with no routing involved.
Okay, so set up a Tailscale network, for example.

That is an option, but my objective is to let nothing from the outside reach behind my inner firewall. And having a common network between inside and outside would basically circumvent the inner firewall.
 
Two different firewalls can also help with a dual-vendor strategy, against platform backdoors.
I use the same approach at home: the ISP's one and my virtualized one on a PVE, as a single point of configuration for segregation.
My WireGuard relay instance is available on the internet and then provides access to the "DMZ", like UdoB explained.

I don't like Tailscale, as I don't want to trust VPN providers in general, and with a classic DynDNS setup my former flatmate and I had a lot of port-scanning traffic. WireGuard at least is/was available for all platforms, and still is for all the relevant ones we use at home.

It is a little bit difficult to determine an appropriate level of segregation from remote, as some people run nearly a complete private business in their "homelabs".

For this use case, load balancing with access limitation is indeed a little bit tricky, as it depends on the application, which needs to be capable of it.

If your PVEs are already on a dedicated management network, that could be a good solution. Migration takes place on the host level anyway, with the IP from name resolution, and there is an option in the datacenter configuration to determine the migration network.
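That datacenter option lives in `/etc/pve/datacenter.cfg`; a sketch, assuming a hypothetical dedicated subnet of 10.10.10.0/24:

```ini
# /etc/pve/datacenter.cfg
# route migration traffic over the dedicated network,
# using SSH-encrypted transfers
migration: secure,network=10.10.10.0/24
```

With this set, migrations pick the node address inside that network instead of whatever name resolution returns for the node's hostname.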

BR, Lucas
 
  • Like
Reactions: UdoB
If your PVEs are already on a dedicated management network, that could be a good solution. Migration takes place on the host level anyway, with the IP from name resolution, and there is an option in the datacenter configuration to determine the migration network.
Yes, that would work if both PVEs shared the same management network. But they don't. The networks outside and inside my inner firewall are totally segregated. Nothing goes in. That's the problem in my case. And, yes, I could put them on the same network, and then migration would not be an issue. But my objective is to keep the complete segregation of networks.
 
There are also alternatives to Tailscale that work similarly (they are all based on WireGuard), like Netbird or Headscale, and there are even more. Personally I use Headscale, which is an open-source implementation of Tailscale's protocol.
For your use case I would use the dedicated PDM VM as a subnet router, depending on your level of paranoia. OPNsense (maybe pfSense too) has a Tailscale plugin as well.
Regarding security, I wouldn't add PBS servers to PDM, because a bad actor could take over your PDM and wreak havoc on your complete infrastructure, including the backups.
 
There are also alternatives to Tailscale that work similarly (they are all based on WireGuard), like Netbird or Headscale, and there are even more. Personally I use Headscale, which is an open-source implementation of Tailscale's protocol.
I use headscale, too.

For your use case I would use the dedicated PDM VM as a subnet router, depending on your level of paranoia. OPNsense (maybe pfSense too) has a Tailscale plugin as well.
But establishing a common network between my otherwise segregated PVEs means circumventing my inner firewall. If I were willing to do this, it would be easier to remove the firewall instead of making an effort to circumvent it.

Regarding security, I wouldn't add PBS servers to PDM, because a bad actor could take over your PDM and wreak havoc on your complete infrastructure, including the backups.
That is a very good point. I have not added it yet and I will reconsider my initial inclination to add it. After all, I have a backup PBS that, for the same reasons, sits on a segregated network and is not reachable by the PBS on my normal management network. But there, the solution is easy. The backup PBS can pull the backups from the primary PBS.
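That pull direction can be configured as a sync job on the backup PBS; a sketch with hypothetical remote, host, and datastore names:

```shell
# On the backup (pulling) PBS: register the primary as a remote,
# then create a sync job that pulls its snapshots on a schedule.
proxmox-backup-manager remote create primary-pbs \
  --host primary-pbs.example \
  --auth-id 'sync@pbs' \
  --password 'SECRET'

proxmox-backup-manager sync-job create pull-primary \
  --remote primary-pbs \
  --remote-store main-datastore \
  --store backup-datastore \
  --schedule daily
```

Since the backup PBS initiates the connection, the primary never needs a route into the segregated backup network.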
 