LXC - Virtualize PVE inside PVE

TwiX

Hi,

We usually create KVM VMs, but for historical reasons we still have some old containers: OpenVZ containers that were migrated to LXC with PVE 4.
Now, with PVE 7, some of these LXC containers refuse to start properly (mainly because of an old systemd < 232). Also, LXC live migration is not possible.

So I'm wondering if virtualizing PVE 6 inside a KVM VM may make sense (and putting some LXC containers inside it). With the CPU type set to 'host', the expected performance loss should be minimal.
Also, we could live migrate all containers in one shot by live migrating the virtualized PVE itself.
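Something along these lines is roughly what I have in mind; the VMID, sizes, storage names and ISO file name below are just placeholders:

Code:
# on the physical PVE 7 host: check that nested virtualization is enabled (Intel; use kvm_amd for AMD)
cat /sys/module/kvm_intel/parameters/nested

# create the nested PVE 6 VM with CPU type 'host' so the guest gets the full host CPU feature set
qm create 9000 --name pve6-nested --memory 16384 --cores 8 \
  --cpu host --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:64 \
  --cdrom local:iso/proxmox-ve_6.4-1.iso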

What do you think?
 
Hi,
So I'm wondering if virtualizing PVE 6 inside a KVM VM may make sense (and putting some LXC containers inside it). With the CPU type set to 'host', the expected performance loss should be minimal.
Yes, that can work out OK. FWIW, we use PVE nested quite a lot for development and some functional testing here.
But I'd recommend against clustering that virtual instance with the host cluster; that would only create trouble and bootstrapping issues.

It'd mostly be a stop-gap measure: around 2022-07, PVE 6.4 will become EOL, but it can buy you some time to figure out how best to migrate to cgroup v2 (either switching to unprivileged + nesting and/or upgrading the CT's distro so that the CT uses a new enough systemd), while avoiding getting the whole machine stuck on PVE 6.4.
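As a rough sketch of the unprivileged + nesting route (CT ID, storage and archive name are placeholders; switching an existing CT to unprivileged needs a backup/restore cycle, it cannot simply be toggled):

Code:
# back up the old privileged CT (stopped, for a consistent archive)
vzdump 123 --mode stop --storage local --compress zstd

# restore it under a new CT ID as unprivileged, so the original stays untouched until verified
pct restore 1230 /var/lib/vz/dump/vzdump-lxc-123-<timestamp>.tar.zst --unprivileged 1

# enable the nesting feature so systemd inside the CT can manage its own cgroup hierarchy
pct set 1230 --features nesting=1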
 
Hi,

Yes, that can work out OK. FWIW, we use PVE nested quite a lot for development and some functional testing here.
But I'd recommend against clustering that virtual instance with the host cluster; that would only create trouble and bootstrapping issues.
Of course, this virtualized PVE would be a standalone server with only LXC containers inside.
It'd mostly be a stop-gap measure: around 2022-07, PVE 6.4 will become EOL, but it can buy you some time to figure out how best to migrate to cgroup v2 (either switching to unprivileged + nesting and/or upgrading the CT's distro so that the CT uses a new enough systemd), while avoiding getting the whole machine stuck on PVE 6.4.

Yes, we should plan to recreate these LXC containers as KVM VMs, or simply remove them when convenient...

Thanks :)
 
I am able to live migrate the virtualized PVE with a few LXC containers inside, but the containers lost connectivity for about 1 minute :/

I was expecting the LXC containers to still be reachable during the live migration :/
 
Well, that depends on your network equipment and config. For example, switches with MAC learning may record that the CT's MAC address came from port X and cache that for a while; once the container moves, it's no longer reachable on port X but on another switch port, and there can be some delay until the cached entry is invalidated.

In general, a few tens to hundreds of milliseconds of downtime also happens with live migration, but any network can have a hiccup, so that should not matter much. A minute sounds rather long, though.

How did you test the CT's connectivity? If you only tested from the outside, I'd also recommend checking from the inside
(e.g., start a ping to the gateway from a shell inside the CT before migrating the PVE VM).
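For example (CT ID and gateway IP are placeholders):

Code:
# open a shell inside the CT on the nested PVE node
pct enter 123
# from within the CT, keep this running while the outer PVE VM is live migrated
ping 192.168.1.1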
 
You're right. Pinging the gateway during the live migration did the trick :)
Only lost one ping.

I guess the CT was inactive on my first attempt.

This is a good alternative for getting live migration with CTs!
 
