VM/CT migration done but stuck

AxelTwin

Well-Known Member
Oct 10, 2017
Hi everyone,
I migrated all my CTs with bulk migration from PVE1 to PVE2. It has been running for 8 hours now; the log says the storage was imported successfully, and I can see the VM/CT storage on both PVE nodes. However, the container storages on PVE2 are not fully imported: about 1 GB is missing for each CT, and the migration process seems to be stuck.
Should I keep waiting for the migration to finish, or abort and start again?

Code:
root@hyperviser:~# zfs list
NAME                             USED  AVAIL     REFER  MOUNTPOINT
rpool                            115G   315G      104K  /rpool
rpool/ROOT                      20.3G   315G       96K  /rpool/ROOT
rpool/ROOT/pve-1                20.3G   315G     20.3G  /
rpool/data                        96K   315G       96K  /rpool/data
rpool/vm-193-disk-0             17.6G   315G     17.6G  -
rpool/vm-283-disk-0             77.2G   315G     77.2G  -
storage                         4.30T   875G      140K  /storage
storage/data                    4.30T   875G     1.44T  /storage/data
storage/data/subvol-100-disk-0  1.93T   875G     1.93T  /storage/data/subvol-100-disk-0
storage/data/subvol-171-disk-0   888G   875G      888G  /storage/data/subvol-171-disk-0
storage/data/vm-102-disk-0      63.9G   875G     63.9G  -

Code:
root@hyperviser2:~# zfs list
NAME                             USED  AVAIL     REFER  MOUNTPOINT
rpool                            156G  72.6G      104K  /rpool
rpool/ROOT                      30.2G  72.6G       96K  /rpool/ROOT
rpool/ROOT/pve-1                30.2G  72.6G     30.2G  /
rpool/data                        96K  72.6G       96K  /rpool/data
rpool/vm-193-disk-0             17.6G  72.6G     17.6G  -
rpool/vm-241-disk-0              108G   111G     69.9G  -
storage                         5.65T  1.49T      100K  /storage
storage/data                    5.65T  1.49T     2.26T  /storage/data
storage/data/subvol-100-disk-0  1.92T  1.15T     1.92T  /storage/data/subvol-100-disk-0
storage/data/subvol-171-disk-0   887G  1013G      887G  /storage/data/subvol-171-disk-0

Code:
2023-04-22 04:34:15 04:34:15    885G   storage/data/subvol-171-disk-0@__migration__
2023-04-22 04:34:16 04:34:16    886G   storage/data/subvol-171-disk-0@__migration__
2023-04-22 04:34:22 successfully imported 'storage:subvol-171-disk-0'
2023-04-22 04:34:23 volume 'storage:subvol-171-disk-0' is 'storage:subvol-171-disk-0' on the target
2023-04-22 04:34:23 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=hyperviser2' root@192.168.1.111 pvesr set-state 171 \''{}'\' -

Code:
2023-04-22 07:12:32 07:12:32   1.91T   storage/data/subvol-100-disk-0@__migration__
2023-04-22 07:12:33 07:12:33   1.91T   storage/data/subvol-100-disk-0@__migration__
2023-04-22 07:12:44 successfully imported 'storage:subvol-100-disk-0'
2023-04-22 07:12:45 volume 'storage:subvol-100-disk-0' is 'storage:subvol-100-disk-0' on the target
2023-04-22 07:12:45 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=hyperviser2' root@192.168.1.111 pvesr set-state 100 \''{}'\' -
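When PVE migrates ZFS-backed volumes it uses zfs send/receive with a `__migration__` snapshot, as the logs above show. One way to tell a stalled transfer from a merely slow one is to check whether that snapshot still exists and whether the target dataset is still growing. A minimal sketch, using the dataset names from the logs above:

```shell
# On either node: list any leftover migration snapshots.
zfs list -t snapshot -o name,used,refer | grep __migration__

# On the receiving node: watch the dataset size; if REFER stops
# changing for a long time, the transfer has likely stalled.
watch -n 60 'zfs list -o name,used,refer storage/data/subvol-100-disk-0'
```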


CT config:

Code:
root@hyperviser:~# cat /etc/pve/nodes/hyperviser/lxc/100.conf
arch: amd64
cores: 6
cpuunits: 10000
features: nesting=1
hostname: srvdc.eec31.local
lock: migrate
memory: 16384
nameserver: 192.168.1.101 192.168.31.10 192.168.1.120
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=F2:5A:DA:AB:81:C2,ip=192.168.1.101/24,type=veth
net1: name=net31,bridge=vmbr31,firewall=1,hwaddr=72:7E:E5:D6:B3:C6,ip=192.168.31.10/24,type=veth
onboot: 0
ostype: ubuntu
protection: 1
rootfs: storage:subvol-100-disk-0,acl=1,size=3148G
searchdomain: eec31.local
swap: 8192
 

Ok, so I aborted the migration from the GUI.
CT 100's storage still appears on both PVE1 and PVE2 (missing 1 GB on PVE2).
CT 171's storage is gone from PVE1 and visible on PVE2 (missing 1 GB on PVE2).

Now on PVE2 I get the error shown in the attached screenshot.

It looks like the config hasn't been migrated to PVE2. But every time I go to the folder /etc/pve/nodes/hyperviser2/, the console hangs and I can do nothing...
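A hanging /etc/pve usually means pmxcfs (the pve-cluster service) is blocked, for example because the cluster lost quorum or because the aborted migration left a stale lock on the CT. A few read-only checks worth running first (standard PVE commands; the config path is taken from this thread):

```shell
# Is the cluster filesystem service healthy?
systemctl status pve-cluster

# Do we have quorum? /etc/pve goes read-only (or hangs) without it.
pvecm status

# Is the CT still locked by the aborted migration?
grep lock /etc/pve/nodes/hyperviser/lxc/100.conf
```

If the config still contains `lock: migrate`, `pct unlock 100` clears it, but only do that once you are sure no migration task is actually running anymore.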

migration process is still running:

Code:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   4593 root      rt   0  564216 170868  53336 S   1,0   0,1   4:32.90 corosync                                                                                 
1843170 root      20   0   10988   4400   3036 R   0,7   0,0   0:00.19 top                                                                                     
     53 root      20   0       0      0      0 S   0,3   0,0   0:00.72 ksoftirqd/6                                                                             
    327 root      20   0       0      0      0 I   0,3   0,0   0:05.90 kworker/11:1-mm_percpu_wq                                                               
   4072 root      20   0 2811816  15384  11192 S   0,3   0,0   2:06.62 proxmox-backup-                                                                         
   4272 backup    20   0 3154004  27256  15712 S   0,3   0,0   2:34.37 proxmox-backup-                                                                         
3138681 root      20   0   80204   2012   1808 S   0,3   0,0   0:03.41 pvefw-logger                                                                             
3138776 www-data  20   0  362208 136912   8480 S   0,3   0,1   0:04.22 pveproxy worker                                                                         
      1 root      20   0  164916   8708   5200 S   0,0   0,0   0:04.31 systemd                                                                                 
      2 root      20   0       0      0      0 S   0,0   0,0   9:35.11 kthreadd                                                                                 
      3 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 rcu_gp                                                                                   
      4 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 rcu_par_gp                                                                               
      5 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 slub_flushwq                                                                             
      6 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 netns                                                                                   
      8 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 kworker/0:0H-events_highpri                                                             
     10 root      20   0       0      0      0 I   0,0   0,0   0:00.10 kworker/u80:0-gid-cache-wq                                                               
     11 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 mm_percpu_wq                                                                             
     12 root      20   0       0      0      0 S   0,0   0,0   0:00.00 rcu_tasks_rude_                                                                         
     13 root      20   0       0      0      0 S   0,0   0,0   0:00.00 rcu_tasks_trace                                                                         
     14 root      20   0       0      0      0 S   0,0   0,0   0:00.66 ksoftirqd/0                                                                             
     15 root      20   0       0      0      0 I   0,0   0,0   0:46.22 rcu_sched                                                                               
     16 root      rt   0       0      0      0 S   0,0   0,0   0:00.10 migration/0                                                                             
     17 root     -51   0       0      0      0 S   0,0   0,0   0:00.00 idle_inject/0                                                                           
     19 root      20   0       0      0      0 S   0,0   0,0   0:00.00 cpuhp/0                                                                                 
     20 root      20   0       0      0      0 S   0,0   0,0   0:00.00 cpuhp/1                                                                                 
     21 root     -51   0       0      0      0 S   0,0   0,0   0:00.00 idle_inject/1                                                                           
     22 root      rt   0       0      0      0 S   0,0   0,0   0:01.55 migration/1                                                                             
     23 root      20   0       0      0      0 S   0,0   0,0   0:00.16 ksoftirqd/1                                                                             
     25 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 kworker/1:0H-kblockd                                                                     
     26 root      20   0       0      0      0 S   0,0   0,0   0:00.00 cpuhp/2                                                                                 
     27 root     -51   0       0      0      0 S   0,0   0,0   0:00.00 idle_inject/2                                                                           
     28 root      rt   0       0      0      0 S   0,0   0,0   0:01.56 migration/2                                                                             
     29 root      20   0       0      0      0 S   0,0   0,0   0:00.09 ksoftirqd/2                                                                             
     31 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 kworker/2:0H-kblockd                                                                     
     32 root      20   0       0      0      0 S   0,0   0,0   0:00.00 cpuhp/3                                                                                 
     33 root     -51   0       0      0      0 S   0,0   0,0   0:00.00 idle_inject/3                                                                           
     34 root      rt   0       0      0      0 S   0,0   0,0   0:01.55 migration/3                                                                             
     35 root      20   0       0      0      0 S   0,0   0,0   0:13.28 ksoftirqd/3                                                                             
     37 root       0 -20       0      0      0 I   0,0   0,0   0:00.00 kworker/3:0
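Note that the `migration/N` entries in the top output above are kernel scheduler threads (one per CPU), not the PVE migration task. To see whether a PVE migration worker is actually still running, the task list is more reliable. A sketch, assuming standard PVE tooling and the node name from this thread:

```shell
# List recent and running tasks on this node via the PVE API CLI:
pvesh get /nodes/hyperviser2/tasks --limit 10

# Or look for migration worker processes directly:
ps aux | grep -i migrate | grep -v 'migration/' | grep -v grep
```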

Any help would be greatly appreciated
 

Well, it seems the migration failed to transfer the config file and everything got stuck.
The CT storages were migrated, but only one of the two was deleted from the source.
I renamed the config files to .conf.bkp, rebooted both PVE nodes, manually moved the config files to the destination node, and deleted the remaining CT storage on the source.
Looks good now.
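For anyone landing in the same state, the recovery steps above correspond roughly to the following (paths and IDs taken from this thread; verify that no migration task is still running and that the target data is complete before destroying anything on the source):

```shell
# On the source node: back up the CT config, then move it into the
# target node's directory (all of /etc/pve is one clustered filesystem).
cp /etc/pve/nodes/hyperviser/lxc/100.conf /root/100.conf.bkp
mv /etc/pve/nodes/hyperviser/lxc/100.conf /etc/pve/nodes/hyperviser2/lxc/100.conf

# Clear a stale 'lock: migrate' entry if it is still present:
pct unlock 100

# Once the CT runs correctly on the target, remove the leftover
# source dataset and any lingering migration snapshot:
zfs destroy storage/data/subvol-100-disk-0@__migration__
zfs destroy storage/data/subvol-100-disk-0
```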
 
