Great!
How can I dump and restore on another node? I want CRIU to attempt live LXC migration.
I'd be grateful if you could show me how to proceed.
Hello everyone,
can someone tell me how I can get the .raw file of an image that is on local-lvm?
I would like to send it through scp to another PC as a backup. I'd prefer doing it this way so that I have the full image and not just a snapshot.
Thanks
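For reference, one way to do this could be the sketch below. It assumes the Proxmox default volume group (pve), and the VMID, disk index, and destination host are placeholders to adjust; stop the container/VM first so the copied image is consistent.

```shell
# Hypothetical values -- adjust VMID, disk index, VG name, and target host.
VMID=100
SRC="/dev/pve/vm-${VMID}-disk-0"     # local-lvm volumes appear as LVs here on a default install
OUT="/tmp/vm-${VMID}-disk-0.raw"

if [ -e "$SRC" ]; then
    # Copy the logical volume block-for-block into a raw image file.
    dd if="$SRC" of="$OUT" bs=4M status=progress
    # Ship the raw image to another machine as a backup.
    scp "$OUT" root@backup-host:/backups/
else
    echo "volume $SRC not found -- adjust the variables for your setup"
fi
```

Make sure there is enough free space for the raw file, since it will be the full provisioned size of the volume, not just the used blocks.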
Hello everyone,
has anyone installed CRIU on their Proxmox nodes to experiment with live migration?
How did you do it?
I know it's not stable and I won't install it in production. I would just like to install it in my lab (a kind of sandbox).
Any guidance would be great!
Thanks in advance
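For what it's worth, on a Debian-based lab node a first sanity check might look like the sketch below. The package name criu and the criu check subcommand are the upstream/Debian ones, but verify against your Proxmox version before relying on this:

```shell
PKG=criu
if command -v criu >/dev/null 2>&1; then
    # "criu check" probes the running kernel for checkpoint/restore support.
    criu check && echo "CRIU looks usable on this node"
else
    # Not installed yet; on a Debian-based lab node this is one way in.
    echo "criu not installed; try: apt-get update && apt-get install $PKG"
fi
```

If the packaged version is too old for your kernel, building from source per the upstream CRIU docs is the usual fallback.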
The last idea of the day...
What if I have this kind of problem?
In the container conf I see:
Root Disk | nfs:200/vm-200-disk-1.raw,size=4G
Could it be wrong?
I start thinking this is something with the remote execution itself:
/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=kcl-node2' root@10.81.59.102 pct start 200
Could changing something in the SSH config solve my issue?
@fabian I captured the destination node with tail -f /var/log/syslog while the migration was running, and it gets stuck at the highlighted (!!!!!!) command.
Source node:
root@kcl-node1:~# pct migrate 200 kcl-node2 --restart
2017-12-15 16:54:01 shutdown CT 200
2017-12-15 16:54:01 # lxc-stop -n...
It's really crazy though... I've sometimes seen it take 4s again...
Why do you say the problem is on shutdown, when we can see that it gets stuck while trying to start the container on the destination node?
Check these new messages I got on the receiving node:
Dec 15 14:53:53 kcl-node1...
The same container, when it's shut down and you start it:
Dec 15 10:23:44 kcl-node2 systemd[1]: Starting LXC Container: 200...
Dec 15 10:23:45 kcl-node2 kernel: [61797.616397] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null)
Dec 15 10:23:45 kcl-node2...
I managed to synchronize the clocks of the Proxmox nodes via NTP, but this didn't solve the problem: the migration still takes approximately 50s.
However, this work was not in vain: now I can check the logs correctly, because there's no time shift. In the destination node, it can be...
In fact, the logs I posted earlier for the destination node are not accurate, since there was a time shift.
This is the error that is slowing down the migration:
illegal attempt to update using time when last update time is (minimum one second step)
Ok... I understand that out of the 50s, the container is up for almost 40s... however, for some reason the process is not completing correctly...
I keep seeing the RRDC update error, and if I'm not mistaken it's due to time synchronization. I do not have internet access in the lab, so I am trying to sort out the...
Ok... it seems the problem is that the nodes are not synchronized: the time I get when I run date differs between them. I'll post how to sync them as soon as I figure it out.
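As an offline stopgap (no internet, so no public NTP servers), one sketch is to copy the clock from one node to the other over SSH. kcl-node1 here is just the peer from the logs above; treat it as a placeholder:

```shell
REF_NODE=root@kcl-node1   # assumed reference node; adjust for your lab

# Only attempt it if the peer is reachable non-interactively.
if ssh -o BatchMode=yes -o ConnectTimeout=3 "$REF_NODE" true 2>/dev/null; then
    # Read the peer's UTC clock in a format "date -s" accepts, then set it locally.
    REMOTE_TIME=$(ssh -o BatchMode=yes "$REF_NODE" date -u '+%Y-%m-%d %H:%M:%S')
    date -u -s "$REMOTE_TIME"
else
    echo "cannot reach $REF_NODE -- set REF_NODE to a reachable node"
fi
```

A more durable option is to run an NTP daemon on one node and point the other nodes at it, so the clocks stay in step without internet access.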
These are the logs in the source node:
Dec 14 12:27:26 kcl-node1 pct[8298]: <root@pam> starting task UPID:kcl-node1:0000206D:0004735E:5A326E2D:vzmigrate:200:root@pam:
Dec 14 12:27:26 kcl-node1 kernel: [ 2917.028122] vmbr0: port 2(veth200i0) entered disabled state
Dec 14 12:27:26 kcl-node1...