Minimize downtime while migrating to 4.1

Hi,

I'm about to migrate a Proxmox VE 3.4 host to 4.1. I'm starting with a fresh 4.1 install on separate hardware. What's the best way to migrate OpenVZ hosts with minimal downtime? I understand containers need to be stopped, so the downtime needed for the migration sums up to:
  1. shut down the hosts
  2. perform a full backup, writing the backup data to an NFS share (5-10 GB per host)
  3. restore from that NFS share on the new host
  4. start the containers
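Concretely, I was planning to run something along these lines (container ID 100, the storage name nfs-backup and the exact options are just placeholders I still have to verify):
  # on the old 3.4 host: stop the container and back it up to the NFS storage
  vzctl stop 100
  vzdump 100 --storage nfs-backup --compress lzo

  # on the new 4.1 host: restore the backup as a container and start it
  pct restore 100 /mnt/pve/nfs-backup/dump/vzdump-openvz-100-<timestamp>.tar.lzo
  pct start 100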
Is there a way to further reduce downtime?

Regards

Christian
 
  1. shut down the hosts
  2. perform a full backup, writing the backup data to an NFS share (5-10 GB per host)

You wrote "host" - but you mean containers here, don't you?

If so: no, that's the minimum for downtime.
 
The minimum will always be the time it takes you to write a backup to shared storage and to read said backup back from shared storage. There is no way around it (unless you e.g. use hot-swappable disks and are talking about a couple of TB of storage, that is).
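As a rough back-of-the-envelope figure (assuming the NFS share sits behind roughly a gigabit link sustaining ~100 MB/s, which is an assumption on my part): 10 GB takes about 100 seconds to write and another ~100 seconds to read back during the restore, so even in the best case you are looking at a few minutes of downtime per container, plus restore and start overhead.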

PS: Are you migrating OpenVZ to LXC CTs by any chance? I assume you did some dry runs for your different OpenVZ configs? Since the PVE 4.x release there have consistently been threads along the lines of "how do I migrate with X exotic config".
 
PS: Are you migrating OpenVZ to LXC CTs by any chance? I assume you did some dry runs for your different OpenVZ configs? Since the PVE 4.x release there have consistently been threads along the lines of "how do I migrate with X exotic config".

Due to a lack of appropriate testing hardware, I migrated one container to see whether importing a backup works, which it did. I did not do intensive functional testing beforehand, because I assumed that Proxmox tested its release software better than I did. In particular, I expected LXC to work basically like OpenVZ does: share a kernel to provide lightweight Linux containers.

As I described in a thread named "Permission error w/ sockets inside CT since migration to PVE 4.1", things turned out rather badly. In short: AppArmor and/or LXC lets UNIX sockets be created inside a container with permissions very different from before. As a result, daemons in the same container can't connect to each other, rendering services unusable. I've learned that this is a known effect (it wasn't clear to me; I'm no expert in LXC nor AppArmor), which I find completely unacceptable for a 4.1 release of a software product like Proxmox VE.
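If anyone wants to check their own containers, comparing socket modes on the old and the new host makes the problem visible; for example (the paths below are just examples, adjust them to whatever services you run):
  # run inside the container, once on the old host and once on the new one
  ls -l /var/run/mysqld/mysqld.sock
  ls -l /var/spool/postfix/private/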
 
We've got a Proxmox lab cluster at work and I've got a single Proxmox node at home for those pesky mandatory home-office days and my personal enjoyment. As a best practice I always dry-run everything that I have not done at least once and check that it works, and I do not just mean Proxmox, especially if anything business-critical runs on it. Sounds like that's your situation (client service equals business-critical in my book). If it does not work, then you don't upgrade to the new version until you are sure it works or you have a working workaround; it's that simple.

What is acceptable or not is not for me to say; it's a very, very subjective topic. In any case, whenever major features are dropped anywhere in IT/software, there are bound to be some rough edges and features that behave differently or are not supported at all anymore. It's just the nature of the beast.

I only run KVM on our servers, so I can't judge the LXC/OpenVZ migration process. Sorry to hear it's causing issues.
 
What is acceptable or not is not for me to say; it's a very, very subjective topic. In any case, whenever major features are dropped anywhere in IT/software, there are bound to be some rough edges and features that behave differently or are not supported at all anymore. It's just the nature of the beast.

I assume there were good reasons for Proxmox staff to drop OpenVZ in favor of LXC in PVE 4.0. Being able to run a fairly standard kernel instead of a heavily patched OpenVZ kernel might be one, I guess.

That being said, I learned that I need to manually add a custom parameter, lxc.aa_profile: unconfined, to every LXC configuration in /etc/pve/lxc/*.conf because LXC conflicts with AppArmor. It's not because we're doing anything fancy here; this seems to be needed for basic services like postfix and mysql, and probably for anything that chroots or uses sockets, which is most of them, I guess.
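For reference, one of the configs then ends up looking roughly like this, with the raw LXC key appended after the PVE-managed keys (the ID 101 and all values below are made-up placeholders, e.g. /etc/pve/lxc/101.conf):
  arch: amd64
  hostname: ct101.example.com
  memory: 1024
  net0: name=eth0,bridge=vmbr0,ip=dhcp,type=veth
  ostype: debian
  rootfs: local:101/vm-101-disk-1.raw,size=8G
  swap: 512
  lxc.aa_profile: unconfined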

Bottom line(s)
  1. This should not have been like this in the first place.
  2. Failing that, it should at least have been documented somewhere.
  3. And given that there's no UI for setting this: there should be.
 
