Yes, but there will be downtime until the container and the services inside it have finished starting on the target node.
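If you want to script it, a restart-mode migration of a container would look roughly like this (the container ID 101, target node pve2 and the timeout are just placeholder values):

```
# Restart-mode migration: the container is shut down, transferred to the target
# node and started there again, so plan for downtime during transfer + startup.
pct migrate 101 pve2 --restart --timeout 180
```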
Concerning Docker inside LXC: please reconsider this approach, since it can break after updates due to changes in the underlying kernel and system services (which are shared by LXC and Docker/Podman and can therefore conflict).
Here is one recent example from the PVE 9 beta:
Hey Guys,
I updated to VE 9.0 Beta, and since then, my LXC Docker apps haven't been running. Every container is showing the same error message:
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "mqueue" to rootfs at "/dev/mqueue": change mount propagation through procfd: resolving path inside rootfs failed: lstat...
And two older ones:
Hi all,
I just updated my PVE to 7.3-4 and no service is reachable. I just checked and all my docker containers are gone. Seems like all docker containers in my lxc are gone.
This is what docker info looks like...
I will try to restore the lxc from my last backup. I hope someone can tell me what and how to check for issues further; currently I don't know how to proceed.
docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
compose: Docker Compose (Docker Inc...
Hello,
I had an LXC running Docker, running Compreface. It was working fine until I upgraded Proxmox and restarted the servers. A couple of things I don't understand happened.
- My Docker LXCs did not automatically start the containers when the LXC booted up, and many of them had to do fresh pulls when I ran `docker compose up -d`
- In the specific case of this LXC, the pull fails with the following error:
`failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /usr/local/lib/python3.7/site-packages/numpy-1.19.5.dist-info: invalid argument`
I tried restoring...
For VM migration I would suggest changing the CPU type to something that is compatible with both CPUs:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_virtual_machines_settings
In your case the default x86-64-v2-AES should already be enough for live migration. However, x86-64-v3 is supposed to work too and allows using EPYC features not present in older CPUs, so I would try that first before switching to x86-64-v2-AES.
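The CPU type can be changed in the GUI under Hardware -> Processors -> Type, or on the CLI; a minimal sketch, assuming VMID 100 as a placeholder (the VM needs to be shut down and started again for the new CPU type to take effect):

```
# Try the newer generic model first ...
qm set 100 --cpu x86-64-v3
# ... and fall back to the default if the older host CPU doesn't support v3:
qm set 100 --cpu x86-64-v2-AES
```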
Not sure about that, to be honest. What if the system time on the VM or the host is wrong and NTP time sync doesn't work (for whatever reason)? Isn't there a possibility that your node would then be considered untrustworthy? But I'm by no means an expert on the details of cryptocurrency ponzi schemes, so please take this with a grain of salt.
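If you want to rule that out, you could at least check whether the guest (and the host) consider their clocks synchronized; a quick sketch, assuming a systemd-based system with systemd-timesyncd:

```
# Look for "System clock synchronized: yes" and "NTP service: active"
timedatectl status
# Shows which time server systemd-timesyncd is (or isn't) talking to
timedatectl timesync-status
```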