Migrate VM: broken pipe

haiwan

2024-05-13 14:03:54 remote: started tunnel worker 'UPID:xy:002DBA84:03015538:6641AD4A:qmtunnel:138:root@pam!Qq123654:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2024-05-13 14:03:54 local WS tunnel version: 2
2024-05-13 14:03:54 remote WS tunnel version: 2
2024-05-13 14:03:54 minimum required WS tunnel version: 2
websocket tunnel started
2024-05-13 14:03:54 starting migration of VM 258 to node 'xy' (110.42.110.28)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2024-05-13 14:03:54 found local disk 'local:258/vm-258-disk-0.qcow2' (attached)
2024-05-13 14:03:54 found local disk 'local:258/vm-258-disk-1.qcow2' (attached)
2024-05-13 14:03:54 mapped: net1 from vmbr10 to vmbr0
2024-05-13 14:03:54 mapped: net0 from vmbr0 to vmbr0
2024-05-13 14:03:54 Allocating volume for drive 'scsi0' on remote storage 'Data'..
tunnel: -> sending command "disk" to remote
tunnel: <- got reply
2024-05-13 14:03:54 volume 'local:258/vm-258-disk-0.qcow2' is 'Data:138/vm-138-disk-0.qcow2' on the target
2024-05-13 14:03:54 Allocating volume for drive 'scsi1' on remote storage 'Data'..
tunnel: -> sending command "disk" to remote
tunnel: <- got reply
2024-05-13 14:03:55 volume 'local:258/vm-258-disk-1.qcow2' is 'Data:138/vm-138-disk-1.qcow2' on the target
tunnel: -> sending command "config" to remote
tunnel: <- got reply
tunnel: -> sending command "start" to remote
tunnel: <- got reply
2024-05-13 14:03:56 Setting up tunnel for '/run/qemu-server/258.migrate'
2024-05-13 14:03:56 Setting up tunnel for '/run/qemu-server/258_nbd.migrate'
2024-05-13 14:03:56 starting storage migration
2024-05-13 14:03:56 scsi0: start migration to nbd:unix:/run/qemu-server/258_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
tunnel: accepted new connection on '/run/qemu-server/258_nbd.migrate'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/qemu-server/258_nbd.migrate'
drive-scsi0: transferred 1.0 MiB of 40.0 GiB (0.00%) in 0s
drive-scsi0: transferred 43.6 MiB of 40.0 GiB (0.11%) in 1s
drive-scsi0: transferred 145.0 MiB of 40.0 GiB (0.35%) in 2s
drive-scsi0: transferred 162.0 MiB of 40.0 GiB (0.40%) in 3s
drive-scsi0: transferred 179.0 MiB of 40.0 GiB (0.44%) in 4s
drive-scsi0: transferred 195.0 MiB of 40.0 GiB (0.48%) in 5s
drive-scsi0: transferred 212.0 MiB of 40.0 GiB (0.52%) in 6s
drive-scsi0: transferred 229.0 MiB of 40.0 GiB (0.56%) in 7s
drive-scsi0: transferred 246.0 MiB of 40.0 GiB (0.60%) in 8s
drive-scsi0: transferred 262.0 MiB of 40.0 GiB (0.64%) in 9s
drive-scsi0: transferred 279.0 MiB of 40.0 GiB (0.68%) in 10s
drive-scsi0: transferred 296.0 MiB of 40.0 GiB (0.72%) in 11s
drive-scsi0: transferred 313.0 MiB of 40.0 GiB (0.76%) in 12s
drive-scsi0: transferred 330.0 MiB of 40.0 GiB (0.81%) in 14s
drive-scsi0: transferred 347.0 MiB of 40.0 GiB (0.85%) in 15s
drive-scsi0: transferred 364.0 MiB of 40.0 GiB (0.89%) in 16s
drive-scsi0: transferred 381.0 MiB of 40.0 GiB (0.93%) in 17s
drive-scsi0: transferred 398.4 MiB of 40.0 GiB (0.97%) in 18s
drive-scsi0: transferred 416.0 MiB of 40.0 GiB (1.02%) in 19s
drive-scsi0: transferred 432.0 MiB of 40.0 GiB (1.05%) in 20s
drive-scsi0: transferred 452.0 MiB of 40.0 GiB (1.10%) in 21s
drive-scsi0: transferred 469.0 MiB of 40.0 GiB (1.15%) in 22s
drive-scsi0: transferred 488.6 MiB of 40.0 GiB (1.19%) in 23s
drive-scsi0: transferred 505.0 MiB of 40.0 GiB (1.23%) in 24s
drive-scsi0: transferred 522.6 MiB of 40.0 GiB (1.28%) in 25s
drive-scsi0: transferred 541.0 MiB of 40.0 GiB (1.32%) in 26s
drive-scsi0: transferred 557.4 MiB of 40.0 GiB (1.36%) in 27s
drive-scsi0: transferred 575.0 MiB of 40.0 GiB (1.40%) in 28s
drive-scsi0: transferred 592.0 MiB of 40.0 GiB (1.45%) in 29s
drive-scsi0: transferred 609.0 MiB of 40.0 GiB (1.49%) in 30s
drive-scsi0: transferred 626.0 MiB of 40.0 GiB (1.53%) in 31s
drive-scsi0: transferred 643.4 MiB of 40.0 GiB (1.57%) in 32s
drive-scsi0: transferred 661.0 MiB of 40.0 GiB (1.61%) in 33s
drive-scsi0: transferred 679.4 MiB of 40.0 GiB (1.66%) in 34s
drive-scsi0: transferred 701.1 MiB of 40.0 GiB (1.71%) in 35s
drive-scsi0: transferred 719.4 MiB of 40.0 GiB (1.76%) in 36s
drive-scsi0: transferred 743.6 MiB of 40.0 GiB (1.82%) in 37s
drive-scsi0: transferred 765.4 MiB of 40.0 GiB (1.87%) in 38s
drive-scsi0: transferred 786.8 MiB of 40.0 GiB (1.92%) in 39s
drive-scsi0: transferred 804.0 MiB of 40.0 GiB (1.96%) in 40s
drive-scsi0: transferred 822.0 MiB of 40.0 GiB (2.01%) in 41s
drive-scsi0: transferred 839.0 MiB of 40.0 GiB (2.05%) in 42s
drive-scsi0: transferred 855.0 MiB of 40.0 GiB (2.09%) in 43s
drive-scsi0: transferred 872.0 MiB of 40.0 GiB (2.13%) in 44s
drive-scsi0: transferred 889.0 MiB of 40.0 GiB (2.17%) in 45s
drive-scsi0: transferred 907.4 MiB of 40.0 GiB (2.22%) in 46s
drive-scsi0: transferred 924.0 MiB of 40.0 GiB (2.26%) in 47s
drive-scsi0: transferred 941.0 MiB of 40.0 GiB (2.30%) in 48s
drive-scsi0: transferred 958.0 MiB of 40.0 GiB (2.34%) in 49s
drive-scsi0: transferred 976.2 MiB of 40.0 GiB (2.38%) in 50s
drive-scsi0: transferred 994.0 MiB of 40.0 GiB (2.43%) in 51s
drive-scsi0: transferred 1010.0 MiB of 40.0 GiB (2.47%) in 52s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.51%) in 53s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.55%) in 54s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.59%) in 55s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.63%) in 56s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.67%) in 57s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.71%) in 58s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.76%) in 59s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.80%) in 1m
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.84%) in 1m 1s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (2.88%) in 1m 2s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (2.92%) in 1m 3s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (2.97%) in 1m 4s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.01%) in 1m 5s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.05%) in 1m 6s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.09%) in 1m 7s
drive-scsi0: transferred 1.3 GiB of 40.0 GiB (3.13%) in 1m 8s
drive-scsi0: transferred 1.3 GiB of 40.0 GiB (3.17%) in 1m 9s
drive-scsi0: transferred 1.3 GiB of 40.0 GiB (3.21%) in 1m 10s
drive-scsi0: transferred 1.3 GiB of 40.0 GiB (3.25%) in 1m 11s
drive-scsi0: transferred 1.3 GiB of 40.0 GiB (3.29%) in 1m 12s
drive-scsi0: transferred 1.3 GiB of 40.0 GiB (3.33%) in 1m 13s
drive-scsi0: transferred 1.4 GiB of 40.0 GiB (3.38%) in 1m 14s
drive-scsi0: transferred 1.4 GiB of 40.0 GiB (3.43%) in 1m 15s
drive-scsi0: transferred 1.4 GiB of 40.0 GiB (3.47%) in 1m 16s
drive-scsi0: transferred 1.4 GiB of 40.0 GiB (3.52%) in 1m 17s
drive-scsi0: transferred 1.4 GiB of 40.0 GiB (3.56%) in 1m 18s
drive-scsi0: transferred 1.4 GiB of 40.0 GiB (3.60%) in 1m 19s
drive-scsi0: transferred 1.5 GiB of 40.0 GiB (3.64%) in 1m 20s
drive-scsi0: transferred 1.5 GiB of 40.0 GiB (3.68%) in 1m 21s
drive-scsi0: transferred 1.5 GiB of 40.0 GiB (3.73%) in 1m 22s
drive-scsi0: transferred 1.5 GiB of 40.0 GiB (3.78%) in 1m 23s
drive-scsi0: transferred 2.1 GiB of 40.0 GiB (5.31%) in 1m 24s
drive-scsi0: transferred 2.1 GiB of 40.0 GiB (5.36%) in 1m 25s
drive-scsi0: transferred 2.2 GiB of 40.0 GiB (5.40%) in 1m 26s
drive-scsi0: transferred 2.2 GiB of 40.0 GiB (5.44%) in 1m 27s
drive-scsi0: transferred 2.2 GiB of 40.0 GiB (5.48%) in 1m 28s
drive-scsi0: transferred 2.2 GiB of 40.0 GiB (5.52%) in 1m 29s
drive-scsi0: transferred 2.2 GiB of 40.0 GiB (5.56%) in 1m 30s
drive-scsi0: transferred 2.2 GiB of 40.0 GiB (5.60%) in 1m 31s
drive-scsi0: transferred 2.3 GiB of 40.0 GiB (5.64%) in 1m 32s
drive-scsi0: transferred 2.3 GiB of 40.0 GiB (5.68%) in 1m 33s
drive-scsi0: transferred 2.3 GiB of 40.0 GiB (5.72%) in 1m 34s
drive-scsi0: transferred 2.3 GiB of 40.0 GiB (5.82%) in 1m 35s
drive-scsi0: transferred 2.3 GiB of 40.0 GiB (5.85%) in 1m 36s
drive-scsi0: transferred 40.0 GiB of 40.0 GiB (100.00%) in 1m 37s, ready
all 'mirror' jobs are ready
2024-05-13 14:05:33 scsi1: start migration to nbd:unix:/run/qemu-server/258_nbd.migrate:exportname=drive-scsi1
drive mirror is starting for drive-scsi1
tunnel: accepted new connection on '/run/qemu-server/258_nbd.migrate'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/qemu-server/258_nbd.migrate'
drive-scsi1: transferred 0.0 B of 40.0 GiB (0.00%) in 0s
drive-scsi1: transferred 47.2 MiB of 40.0 GiB (0.12%) in 1s
drive-scsi1: transferred 64.0 MiB of 40.0 GiB (0.16%) in 2s
drive-scsi1: transferred 80.0 MiB of 40.0 GiB (0.20%) in 3s
drive-scsi1: transferred 97.0 MiB of 40.0 GiB (0.24%) in 4s
drive-scsi1: transferred 113.0 MiB of 40.0 GiB (0.28%) in 5s
TASK ERROR: broken pipe

We are using the API call "Migrate virtual machine to a remote cluster" (it creates a new migration task and is flagged as an EXPERIMENTAL feature).
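For reference, this is roughly how we invoke it from the CLI (a sketch only: the token secret, TLS fingerprint and exact option values below are placeholders, and option names may differ slightly between qemu-server versions; `qm remote-migrate` drives the same remote_migrate API endpoint):

Code:
# illustrative sketch: migrate local VM 258 to VM ID 138 on the remote node
# the token secret and fingerprint here are placeholders, not real values
qm remote-migrate 258 138 \
  'apitoken=PVEAPIToken=root@pam!Qq123654=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,host=110.42.110.28,fingerprint=AA:BB:...' \
  --target-bridge vmbr0 \
  --target-storage Data \
  --online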
 
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-4-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.0.9
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.5: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.4
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.9
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-1
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.2
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3
 
any hint on the remote side (task log/journal)?
 
any hint on the remote side (task log/journal)?
mtunnel started
received command 'version'
received command 'bwlimit'
received command 'bwlimit'
received command 'disk'
Formatting '/var/lib/vz/images/258/vm-258-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16
received command 'disk'
Formatting '/var/lib/vz/images/258/vm-258-disk-1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16
received command 'config'
Wide character in print at /usr/share/perl5/PVE/API2/Qemu.pm line 1716.
update VM 258: -agent 1 -bios seabios -boot order=scsi0 -cores 2 -cpu host -cpulimit 4 -description Creation method: prokvm
Order number: #1821
Member account: 1101937542
Email address: 1ddddsdf@qq.com -memory 4096 -name VM258 -net0 virtio=BC:24:11:9D:52:40,bridge=vmbr0,firewall=0,rate=1.25 -net1 virtio=BC:24:11:0C:BA:CF,bridge=vmbr0,firewall=0 -numa 0 -onboot 1 -ostype l26 -scsi0 local:258/vm-258-disk-0.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G -scsi1 local:258/vm-258-disk-1.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G -scsihw virtio-scsi-pci -serial0 socket -smbios1 uuid=338b1075-4544-4022-b822-c683076f8016 -sockets 2
received command 'start'
migration listens on unix:/run/qemu-server/258.migrate
storage migration listens on nbd:unix:/run/qemu-server/258_nbd.migrate:exportname=drive-scsi0 volume:local:258/vm-258-disk-0.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G
storage migration listens on nbd:unix:/run/qemu-server/258_nbd.migrate:exportname=drive-scsi1 volume:local:258/vm-258-disk-1.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G
received command 'ticket'
received command 'ticket'
TASK ERROR: mtunnel exited unexpectedly
 
and in the journal? did the started VM on the remote end crash?
 
on the target side, please check the journal during the migration..
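For example, to capture the target node's journal for the relevant time window (the timestamps below are just an example; running `journalctl -f` in a second shell while the migration runs works as well):

Code:
# follow the journal live while the migration is running
journalctl -f

# or dump the window around the failed task afterwards
journalctl --since "2024-05-13 14:03:00" --until "2024-05-13 14:10:00"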
 
on the target side, please check the journal during the migration..
Is checking just the task log enough?
Code:
mtunnel started
received command 'version'
received command 'bwlimit'
received command 'bwlimit'
received command 'disk'
Formatting '/var/lib/vz/images/258/vm-258-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16
received command 'disk'
Formatting '/var/lib/vz/images/258/vm-258-disk-1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16
received command 'config'
Wide character in print at /usr/share/perl5/PVE/API2/Qemu.pm line 1716.
update VM 258: -agent 1 -bios seabios -boot order=scsi0 -cores 2 -cpu host -cpulimit 4 -description Creation method: prokvm

Order number: #1821

Member account: 1101937542

Email address: 1101937542@qq.com -memory 4096 -name VM258 -net0 virtio=BC:24:11:9D:52:40,bridge=vmbr0,firewall=0,rate=1.25 -net1 virtio=BC:24:11:0C:BA:CF,bridge=vmbr0,firewall=0 -numa 0 -onboot 1 -ostype l26 -scsi0 local:258/vm-258-disk-0.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G -scsi1 local:258/vm-258-disk-1.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G -scsihw virtio-scsi-pci -serial0 socket -smbios1 uuid=338b1075-4544-4022-b822-c683076f8016 -sockets 2
received command 'start'
migration listens on unix:/run/qemu-server/258.migrate
storage migration listens on nbd:unix:/run/qemu-server/258_nbd.migrate:exportname=drive-scsi0 volume:local:258/vm-258-disk-0.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G
storage migration listens on nbd:unix:/run/qemu-server/258_nbd.migrate:exportname=drive-scsi1 volume:local:258/vm-258-disk-1.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G
received command 'ticket'
received command 'ticket'
TASK ERROR: mtunnel exited unexpectedly
 
please check the JOURNAL, like I asked!
 
please check the JOURNAL, like I asked!
root@xy:~# journalctl --since="2024-05-13 14:03:54"
May 13 14:03:54 xy pvedaemon[2985081]: <root@pam!Qq123654> starting task UPID:xy:002DBA84:03015538:6641AD4A:qmtunnel:138:root@pam!Qq123654:
May 13 14:03:55 xy pvedaemon[2996868]: [931B blob data]
May 13 14:03:55 xy pvedaemon[2996868]: Wide character in print at /usr/share/perl5/PVE/API2/Qemu.pm line 1716.
May 13 14:03:55 xy systemd[1]: Started 138.scope.
May 13 14:03:56 xy kernel: tap138i0: entered promiscuous mode
May 13 14:03:56 xy kernel: vmbr0: port 38(tap138i0) entered blocking state
May 13 14:03:56 xy kernel: vmbr0: port 38(tap138i0) entered disabled state
May 13 14:03:56 xy kernel: tap138i0: entered allmulticast mode
May 13 14:03:56 xy kernel: vmbr0: port 38(tap138i0) entered blocking state
May 13 14:03:56 xy kernel: vmbr0: port 38(tap138i0) entered forwarding state
May 13 14:03:56 xy pvedaemon[2984918]: VM 258 qmp command failed - VM 258 qmp command 'guest-ping' failed - got timeout
May 13 14:03:56 xy kernel: tap138i1: entered promiscuous mode
May 13 14:03:56 xy kernel: vmbr0: port 39(tap138i1) entered blocking state
May 13 14:03:56 xy kernel: vmbr0: port 39(tap138i1) entered disabled state
May 13 14:03:56 xy kernel: tap138i1: entered allmulticast mode
May 13 14:03:56 xy kernel: vmbr0: port 39(tap138i1) entered blocking state
May 13 14:03:56 xy kernel: vmbr0: port 39(tap138i1) entered forwarding state
May 13 14:04:16 xy pvedaemon[2985081]: VM 258 qmp command failed - VM 258 qmp command 'guest-ping' failed - got timeout
May 13 14:04:26 xy pveproxy[2991929]: detected empty handle
May 13 14:04:36 xy pvedaemon[2977947]: VM 258 qmp command failed - VM 258 qmp command 'guest-ping' failed - got timeout
May 13 14:04:52 xy pveproxy[2200]: worker 2983861 finished
May 13 14:04:52 xy pveproxy[2200]: starting 1 worker(s)
May 13 14:04:52 xy pveproxy[2200]: worker 2997417 started
May 13 14:04:54 xy pveproxy[2997412]: got inotify poll request in wrong process - disabling inotify
May 13 14:04:56 xy pvedaemon[2977947]: VM 258 qmp command failed - VM 258 qmp command 'guest-ping' failed - got timeout
May 13 14:04:59 xy pveproxy[2991929]: detected empty handle
May 13 14:05:16 xy pvedaemon[2985081]: VM 258 qmp command failed - VM 258 qmp command 'guest-ping' failed - got timeout
May 13 14:05:35 xy pvedaemon[2977947]: VM 258 qmp command failed - VM 258 qmp command 'guest-ping' failed - got timeout
 
We tried the test again:
Code:
2024-05-17 22:43:29 remote: started tunnel worker 'UPID:xy:001920D1:05403EE0:66476D11:qmtunnel:285:root@pam!Qq123654:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2024-05-17 22:43:29 local WS tunnel version: 2
2024-05-17 22:43:29 remote WS tunnel version: 2
2024-05-17 22:43:29 minimum required WS tunnel version: 2
websocket tunnel started
2024-05-17 22:43:29 starting migration of VM 260 to node 'xy' (110.42.110.28)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2024-05-17 22:43:29 found local disk 'local:260/vm-260-disk-0.qcow2' (attached)
2024-05-17 22:43:29 found local disk 'local:260/vm-260-disk-1.qcow2' (attached)
2024-05-17 22:43:29 mapped: net1 from vmbr10 to vmbr0
2024-05-17 22:43:29 mapped: net0 from vmbr0 to vmbr0
2024-05-17 22:43:29 Allocating volume for drive 'scsi0' on remote storage 'Data'..
tunnel: -> sending command "disk" to remote
tunnel: <- got reply
2024-05-17 22:43:29 volume 'local:260/vm-260-disk-0.qcow2' is 'Data:285/vm-285-disk-0.qcow2' on the target
2024-05-17 22:43:29 Allocating volume for drive 'scsi1' on remote storage 'Data'..
tunnel: -> sending command "disk" to remote
tunnel: <- got reply
2024-05-17 22:43:30 volume 'local:260/vm-260-disk-1.qcow2' is 'Data:285/vm-285-disk-1.qcow2' on the target
tunnel: -> sending command "config" to remote
tunnel: <- got reply
tunnel: -> sending command "start" to remote
tunnel: <- got reply
2024-05-17 22:43:31 Setting up tunnel for '/run/qemu-server/260.migrate'
2024-05-17 22:43:31 Setting up tunnel for '/run/qemu-server/260_nbd.migrate'
2024-05-17 22:43:31 starting storage migration
2024-05-17 22:43:31 scsi0: start migration to nbd:unix:/run/qemu-server/260_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
tunnel: accepted new connection on '/run/qemu-server/260_nbd.migrate'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/qemu-server/260_nbd.migrate'
drive-scsi0: transferred 1.0 MiB of 40.0 GiB (0.00%) in 0s
drive-scsi0: transferred 37.6 MiB of 40.0 GiB (0.09%) in 1s
drive-scsi0: transferred 133.4 MiB of 40.0 GiB (0.33%) in 2s
drive-scsi0: transferred 144.0 MiB of 40.0 GiB (0.35%) in 3s
drive-scsi0: transferred 156.0 MiB of 40.0 GiB (0.38%) in 4s
drive-scsi0: transferred 166.0 MiB of 40.0 GiB (0.41%) in 5s
drive-scsi0: transferred 178.0 MiB of 40.0 GiB (0.43%) in 6s
drive-scsi0: transferred 188.0 MiB of 40.0 GiB (0.46%) in 7s
drive-scsi0: transferred 202.0 MiB of 40.0 GiB (0.49%) in 8s
drive-scsi0: transferred 210.0 MiB of 40.0 GiB (0.51%) in 9s
drive-scsi0: transferred 221.0 MiB of 40.0 GiB (0.54%) in 10s
drive-scsi0: transferred 233.0 MiB of 40.0 GiB (0.57%) in 11s
drive-scsi0: transferred 246.0 MiB of 40.0 GiB (0.60%) in 12s
drive-scsi0: transferred 255.0 MiB of 40.0 GiB (0.62%) in 13s
drive-scsi0: transferred 269.0 MiB of 40.0 GiB (0.66%) in 14s
drive-scsi0: transferred 277.0 MiB of 40.0 GiB (0.68%) in 15s
drive-scsi0: transferred 291.0 MiB of 40.0 GiB (0.71%) in 16s
drive-scsi0: transferred 299.0 MiB of 40.0 GiB (0.73%) in 17s
drive-scsi0: transferred 313.0 MiB of 40.0 GiB (0.76%) in 18s
drive-scsi0: transferred 321.0 MiB of 40.0 GiB (0.78%) in 19s
drive-scsi0: transferred 335.0 MiB of 40.0 GiB (0.82%) in 20s
drive-scsi0: transferred 343.0 MiB of 40.0 GiB (0.84%) in 21s
drive-scsi0: transferred 358.0 MiB of 40.0 GiB (0.87%) in 22s
drive-scsi0: transferred 365.0 MiB of 40.0 GiB (0.89%) in 23s
drive-scsi0: transferred 380.0 MiB of 40.0 GiB (0.93%) in 24s
drive-scsi0: transferred 389.4 MiB of 40.0 GiB (0.95%) in 25s
drive-scsi0: transferred 400.0 MiB of 40.0 GiB (0.98%) in 26s
drive-scsi0: transferred 412.0 MiB of 40.0 GiB (1.01%) in 27s
drive-scsi0: transferred 425.0 MiB of 40.0 GiB (1.04%) in 28s
drive-scsi0: transferred 433.0 MiB of 40.0 GiB (1.06%) in 29s
drive-scsi0: transferred 449.0 MiB of 40.0 GiB (1.10%) in 30s
drive-scsi0: transferred 456.9 MiB of 40.0 GiB (1.12%) in 31s
drive-scsi0: transferred 468.0 MiB of 40.0 GiB (1.14%) in 32s
drive-scsi0: transferred 482.3 MiB of 40.0 GiB (1.18%) in 33s
drive-scsi0: transferred 496.0 MiB of 40.0 GiB (1.21%) in 34s
drive-scsi0: transferred 504.0 MiB of 40.0 GiB (1.23%) in 35s
drive-scsi0: transferred 517.6 MiB of 40.0 GiB (1.26%) in 36s
drive-scsi0: transferred 524.6 MiB of 40.0 GiB (1.28%) in 37s
drive-scsi0: transferred 542.0 MiB of 40.0 GiB (1.32%) in 38s
drive-scsi0: transferred 548.4 MiB of 40.0 GiB (1.34%) in 39s
drive-scsi0: transferred 561.0 MiB of 40.0 GiB (1.37%) in 40s
drive-scsi0: transferred 573.0 MiB of 40.0 GiB (1.40%) in 41s
drive-scsi0: transferred 585.6 MiB of 40.0 GiB (1.43%) in 42s
drive-scsi0: transferred 594.0 MiB of 40.0 GiB (1.45%) in 43s
drive-scsi0: transferred 609.0 MiB of 40.0 GiB (1.49%) in 44s
drive-scsi0: transferred 615.0 MiB of 40.0 GiB (1.50%) in 45s
drive-scsi0: transferred 631.0 MiB of 40.0 GiB (1.54%) in 46s
drive-scsi0: transferred 637.0 MiB of 40.0 GiB (1.56%) in 47s
drive-scsi0: transferred 654.4 MiB of 40.0 GiB (1.60%) in 48s
drive-scsi0: transferred 661.0 MiB of 40.0 GiB (1.61%) in 49s
drive-scsi0: transferred 674.1 MiB of 40.0 GiB (1.65%) in 50s
drive-scsi0: transferred 690.9 MiB of 40.0 GiB (1.69%) in 51s
drive-scsi0: transferred 701.1 MiB of 40.0 GiB (1.71%) in 52s
drive-scsi0: transferred 717.4 MiB of 40.0 GiB (1.75%) in 53s
drive-scsi0: transferred 725.4 MiB of 40.0 GiB (1.77%) in 54s
drive-scsi0: transferred 746.7 MiB of 40.0 GiB (1.82%) in 55s
drive-scsi0: transferred 756.1 MiB of 40.0 GiB (1.85%) in 56s
drive-scsi0: transferred 775.2 MiB of 40.0 GiB (1.89%) in 57s
drive-scsi0: transferred 781.2 MiB of 40.0 GiB (1.91%) in 58s
drive-scsi0: transferred 796.8 MiB of 40.0 GiB (1.95%) in 59s
drive-scsi0: transferred 809.9 MiB of 40.0 GiB (1.98%) in 1m
drive-scsi0: transferred 822.0 MiB of 40.0 GiB (2.01%) in 1m 1s
drive-scsi0: transferred 829.0 MiB of 40.0 GiB (2.02%) in 1m 2s
drive-scsi0: transferred 845.0 MiB of 40.0 GiB (2.06%) in 1m 3s
drive-scsi0: transferred 851.0 MiB of 40.0 GiB (2.08%) in 1m 4s
drive-scsi0: transferred 864.0 MiB of 40.0 GiB (2.11%) in 1m 5s
drive-scsi0: transferred 877.0 MiB of 40.0 GiB (2.14%) in 1m 6s
drive-scsi0: transferred 884.0 MiB of 40.0 GiB (2.16%) in 1m 7s
drive-scsi0: transferred 901.4 MiB of 40.0 GiB (2.20%) in 1m 8s
drive-scsi0: transferred 906.4 MiB of 40.0 GiB (2.21%) in 1m 9s
drive-scsi0: transferred 923.0 MiB of 40.0 GiB (2.25%) in 1m 10s
drive-scsi0: transferred 930.0 MiB of 40.0 GiB (2.27%) in 1m 11s
drive-scsi0: transferred 942.0 MiB of 40.0 GiB (2.30%) in 1m 12s
drive-scsi0: transferred 956.0 MiB of 40.0 GiB (2.33%) in 1m 13s
drive-scsi0: transferred 962.0 MiB of 40.0 GiB (2.35%) in 1m 14s
drive-scsi0: transferred 978.2 MiB of 40.0 GiB (2.39%) in 1m 15s
drive-scsi0: transferred 984.2 MiB of 40.0 GiB (2.40%) in 1m 16s
drive-scsi0: transferred 997.0 MiB of 40.0 GiB (2.43%) in 1m 17s
drive-scsi0: transferred 1009.0 MiB of 40.0 GiB (2.46%) in 1m 18s
drive-scsi0: transferred 1023.0 MiB of 40.0 GiB (2.50%) in 1m 19s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.51%) in 1m 20s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.55%) in 1m 21s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.57%) in 1m 22s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.61%) in 1m 23s
drive-scsi0: transferred 1.0 GiB of 40.0 GiB (2.62%) in 1m 24s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.66%) in 1m 25s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.67%) in 1m 26s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.71%) in 1m 27s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.73%) in 1m 28s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.77%) in 1m 29s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.78%) in 1m 30s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.83%) in 1m 31s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.84%) in 1m 32s
drive-scsi0: transferred 1.1 GiB of 40.0 GiB (2.87%) in 1m 33s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (2.90%) in 1m 34s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (2.93%) in 1m 35s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (2.95%) in 1m 36s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (2.99%) in 1m 37s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.00%) in 1m 38s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.04%) in 1m 39s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.06%) in 1m 40s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.09%) in 1m 41s
drive-scsi0: transferred 1.2 GiB of 40.0 GiB (3.12%) in 1m 42s
TASK ERROR: broken pipe
Code:
mtunnel started
received command 'version'
received command 'bwlimit'
received command 'bwlimit'
received command 'disk'
Formatting '/Data/images/285/vm-285-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16
received command 'disk'
Formatting '/Data/images/285/vm-285-disk-1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16
received command 'config'
Wide character in print at /usr/share/perl5/PVE/API2/Qemu.pm line 1716, <GEN134140> line 6.
update VM 285: -agent 1 -bios seabios -boot order=scsi0 -cores 1 -cpu host -cpulimit 2 -description Creation method: prokvm

Order number: #1862

Member account: 1101937542

Email address: 1101937542@qq.com -memory 2048 -name VM260 -net0 virtio=BC:24:11:F4:C1:E7,bridge=vmbr0,firewall=0,rate=1.25 -net1 virtio=BC:24:11:0E:3D:AB,bridge=vmbr0,firewall=0 -numa 0 -onboot 1 -ostype l26 -scsi0 Data:285/vm-285-disk-0.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G -scsi1 Data:285/vm-285-disk-1.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G -scsihw virtio-scsi-pci -serial0 socket -smbios1 uuid=ad4df691-ef56-4996-934d-4c09fb6a6fd1 -sockets 2
received command 'start'
migration listens on unix:/run/qemu-server/285.migrate
storage migration listens on nbd:unix:/run/qemu-server/285_nbd.migrate:exportname=drive-scsi0 volume:Data:285/vm-285-disk-0.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G
storage migration listens on nbd:unix:/run/qemu-server/285_nbd.migrate:exportname=drive-scsi1 volume:Data:285/vm-285-disk-1.qcow2,cache=none,format=qcow2,iops_rd=2400,iops_rd_max=2400,iops_wr=2400,iops_wr_max=2400,mbps_rd=200,mbps_rd_max=200,mbps_wr=200,mbps_wr_max=200,size=40G
received command 'ticket'
TASK ERROR: mtunnel exited unexpectedly
 
========
Journal on the target node during the migration:
May 17 22:43:29 xy pvedaemon[1607363]: <root@pam!Qq123654> starting task UPID:xy:001920D1:05403EE0:66476D11:qmtunnel:285:root@pam!Qq123654:
May 17 22:43:30 xy pvedaemon[1646801]: [931B blob data]
May 17 22:43:30 xy pvedaemon[1646801]: Wide character in print at /usr/share/perl5/PVE/API2/Qemu.pm line 1716, <GEN134140> line 6.
May 17 22:43:30 xy systemd[1]: Started 285.scope.
May 17 22:43:30 xy sshd[1646779]: Failed password for invalid user huanglongbo-admin from 172.232.248.83 port 47770 ssh2
May 17 22:43:31 xy kernel: tap285i0: entered promiscuous mode
May 17 22:43:31 xy kernel: vmbr0: port 58(tap285i0) entered blocking state
May 17 22:43:31 xy kernel: vmbr0: port 58(tap285i0) entered disabled state
May 17 22:43:31 xy kernel: tap285i0: entered allmulticast mode
May 17 22:43:31 xy kernel: vmbr0: port 58(tap285i0) entered blocking state
May 17 22:43:31 xy kernel: vmbr0: port 58(tap285i0) entered forwarding state
May 17 22:43:31 xy sshd[1646779]: Connection closed by invalid user huanglongbo-admin 172.232.248.83 port 47770 [preauth]
May 17 22:43:31 xy kernel: tap285i1: entered promiscuous mode
May 17 22:43:31 xy kernel: vmbr0: port 59(tap285i1) entered blocking state
May 17 22:43:31 xy kernel: vmbr0: port 59(tap285i1) entered disabled state
May 17 22:43:31 xy kernel: tap285i1: entered allmulticast mode
May 17 22:43:31 xy kernel: vmbr0: port 59(tap285i1) entered blocking state
May 17 22:43:31 xy kernel: vmbr0: port 59(tap285i1) entered forwarding state
May 17 22:43:35 xy sshd[1646791]: Invalid user sureshj from 172.232.248.83 port 49720
May 17 22:43:36 xy sshd[1646871]: Invalid user lsn from 172.232.248.83 port 49732
May 17 22:43:36 xy sshd[1646791]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:43:36 xy sshd[1646791]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:43:39 xy sshd[1646791]: Failed password for invalid user sureshj from 172.232.248.83 port 49720 ssh2
May 17 22:43:42 xy sshd[1646746]: Connection closed by 172.232.248.83 port 47762 [preauth]
May 17 22:43:46 xy sshd[1646791]: Connection closed by invalid user sureshj 172.232.248.83 port 49720 [preauth]
May 17 22:43:47 xy systemd-logind[1851]: Session 1449 logged out. Waiting for processes to exit.
May 17 22:43:47 xy systemd[1]: session-1449.scope: Deactivated successfully.
May 17 22:43:47 xy systemd-logind[1851]: Removed session 1449.
May 17 22:43:47 xy pvedaemon[1614179]: <root@pam> end task UPID:xy:0018EB7C:053BFF9D:66476232:vncshell::root@pam: OK
May 17 22:43:48 xy sshd[1646967]: Invalid user jzhou from 172.232.248.83 port 48448
May 17 22:43:48 xy pveproxy[1640220]: worker exit
May 17 22:43:51 xy sshd[1646967]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:43:51 xy sshd[1646967]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:43:52 xy sshd[1646967]: Failed password for invalid user jzhou from 172.232.248.83 port 48448 ssh2
May 17 22:43:53 xy sshd[1646871]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:43:53 xy sshd[1646871]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:43:55 xy sshd[1646967]: Connection closed by invalid user jzhou 172.232.248.83 port 48448 [preauth]
May 17 22:43:55 xy sshd[1646871]: Failed password for invalid user lsn from 172.232.248.83 port 49732 ssh2
May 17 22:43:57 xy sshd[1646871]: Connection closed by invalid user lsn 172.232.248.83 port 49732 [preauth]
May 17 22:43:58 xy systemd[1]: Stopping user@0.service - User Manager for UID 0...
May 17 22:43:58 xy systemd[1633157]: Activating special unit exit.target...
May 17 22:43:58 xy systemd[1633157]: Stopped target default.target - Main User Target.
May 17 22:43:58 xy systemd[1633157]: Stopped target basic.target - Basic System.
May 17 22:43:58 xy systemd[1633157]: Stopped target paths.target - Paths.
May 17 22:43:58 xy systemd[1633157]: Stopped target sockets.target - Sockets.
May 17 22:43:58 xy systemd[1633157]: Stopped target timers.target - Timers.
May 17 22:43:58 xy systemd[1633157]: Closed dirmngr.socket - GnuPG network certificate management daemon.
May 17 22:43:58 xy systemd[1633157]: Closed gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
May 17 22:43:58 xy systemd[1633157]: Closed gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
May 17 22:43:58 xy systemd[1633157]: Closed gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
May 17 22:43:58 xy systemd[1633157]: Closed gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
May 17 22:43:58 xy systemd[1633157]: Removed slice app.slice - User Application Slice.
May 17 22:43:58 xy systemd[1633157]: Reached target shutdown.target - Shutdown.
May 17 22:43:58 xy systemd[1633157]: Finished systemd-exit.service - Exit the Session.
May 17 22:43:58 xy systemd[1633157]: Reached target exit.target - Exit the Session.
May 17 22:43:58 xy systemd[1]: user@0.service: Deactivated successfully.
May 17 22:43:58 xy systemd[1]: Stopped user@0.service - User Manager for UID 0.
May 17 22:43:58 xy systemd[1]: Stopping user-runtime-dir@0.service - User Runtime Directory /run/user/0...
May 17 22:43:58 xy systemd[1]: run-user-0.mount: Deactivated successfully.
May 17 22:43:58 xy systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
May 17 22:43:58 xy systemd[1]: Stopped user-runtime-dir@0.service - User Runtime Directory /run/user/0.
May 17 22:43:58 xy systemd[1]: Removed slice user-0.slice - User Slice of UID 0.
May 17 22:43:58 xy systemd[1]: user-0.slice: Consumed 12.462s CPU time.
May 17 22:43:58 xy pve-ha-crm[2184]: successfully acquired lock 'ha_manager_lock'
May 17 22:43:58 xy pve-ha-crm[2184]: watchdog active
May 17 22:43:58 xy pve-ha-crm[2184]: status change wait_for_quorum => master
May 17 22:43:58 xy pve-ha-crm[2184]: node 'xy': state changed from 'unknown' => 'online'
May 17 22:43:58 xy pve-ha-crm[2184]: adding new service 'vm:285' on node 'xy'
May 17 22:43:59 xy pve-ha-crm[2184]: service 'vm:285': state changed from 'request_start' to 'started' (node = xy)
May 17 22:44:00 xy sshd[1647006]: Invalid user tes from 172.232.248.83 port 43212
May 17 22:44:02 xy sshd[1647006]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:44:02 xy sshd[1647006]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:44:02 xy pve-ha-lrm[2208]: successfully acquired lock 'ha_agent_xy_lock'
May 17 22:44:02 xy pve-ha-lrm[2208]: watchdog active
May 17 22:44:02 xy pve-ha-lrm[2208]: status change wait_for_agent_lock => active
May 17 22:44:03 xy sshd[1647006]: Failed password for invalid user tes from 172.232.248.83 port 43212 ssh2
May 17 22:44:03 xy sshd[1646932]: Connection reset by 172.232.248.83 port 48432 [preauth]
May 17 22:44:04 xy sshd[1647006]: Connection closed by invalid user tes 172.232.248.83 port 43212 [preauth]
May 17 22:44:08 xy sshd[1646919]: Invalid user yufeng from 172.232.248.83 port 49734
May 17 22:44:08 xy sshd[1646919]: Connection closed by invalid user yufeng 172.232.248.83 port 49734 [preauth]
May 17 22:44:09 xy sshd[1647011]: Invalid user tyler from 172.232.248.83 port 43228
May 17 22:44:12 xy sshd[1647011]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:44:12 xy sshd[1647011]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:44:14 xy sshd[1647011]: Failed password for invalid user tyler from 172.232.248.83 port 43228 ssh2
May 17 22:44:17 xy sshd[1647011]: Connection closed by invalid user tyler 172.232.248.83 port 43228 [preauth]
May 17 22:44:22 xy sshd[1647113]: Invalid user gg from 172.232.248.83 port 40530
May 17 22:44:24 xy sshd[1647113]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:44:24 xy sshd[1647113]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:44:26 xy sshd[1647113]: Failed password for invalid user gg from 172.232.248.83 port 40530 ssh2
May 17 22:44:28 xy sshd[1647113]: Connection closed by invalid user gg 172.232.248.83 port 40530 [preauth]
May 17 22:44:28 xy sshd[1647060]: Connection reset by 172.232.248.83 port 55704 [preauth]
May 17 22:44:33 xy sshd[1647081]: Connection closed by 172.232.248.83 port 40522 [preauth]
May 17 22:44:42 xy sshd[1647212]: Invalid user wangb from 172.232.248.83 port 60080
May 17 22:44:43 xy sshd[1647212]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:44:43 xy sshd[1647212]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:44:43 xy sshd[1647068]: Connection closed by 172.232.248.83 port 55712 [preauth]
May 17 22:44:45 xy sshd[1647212]: Failed password for invalid user wangb from 172.232.248.83 port 60080 ssh2
May 17 22:44:48 xy sshd[1647212]: Connection closed by invalid user wangb 172.232.248.83 port 60080 [preauth]
May 17 22:44:50 xy sshd[1647166]: Connection closed by 172.232.248.83 port 60250 [preauth]
May 17 22:44:51 xy sshd[1647156]: Connection reset by 172.232.248.83 port 60236 [preauth]
May 17 22:44:53 xy sshd[1647179]: Connection closed by 172.232.248.83 port 60066 [preauth]
May 17 22:44:59 xy sshd[1647227]: Invalid user fate from 172.232.248.83 port 37212
May 17 22:45:04 xy sshd[1647227]: pam_unix(sshd:auth): check pass; user unknown
May 17 22:45:04 xy sshd[1647227]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=172.232.248.83
May 17 22:45:06 xy sshd[1647227]: Failed password for invalid user fate from 172.232.248.83 port 37212 ssh2
May 17 22:45:06 xy sshd[1647227]: Connection closed by invalid user fate 172.232.248.83 port 37212 [preauth]
May 17 22:45:10 xy sshd[1647261]: Connection closed by 172.232.248.83 port 57342 [preauth]
May 17 22:45:15 xy pvedaemon[1646801]: mtunnel exited unexpectedly
May 17 22:45:15 xy pvedaemon[1607363]: <root@pam!Qq123654> end task UPID:xy:001920D1:05403EE0:66476D11:qmtunnel:285:root@pam!Qq123654: mtunnel exited unexpec>
May 17 22:45:15 xy QEMU[1646861]: kvm: Disconnect client, due to: Failed to read CMD_WRITE data: Unexpected end-of-file before all data were read
May 17 22:45:17 xy sshd[1647304]
 
those logs look like something is interrupting the connection between source and target node..
 
yes, but

- the VM on the target doesn't crash (until the tunnel is closed)
- the tunnel is just suddenly gone in the middle of a disk transfer

something is killing the connection
 
yes, but

- the VM on the target doesn't crash (until the tunnel is closed)
- the tunnel is just suddenly gone in the middle of a disk transfer

something is killing the connection
OK, we will check whether the network is OK.
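A minimal sketch of what we plan to check between the source node and the target (110.42.110.28), assuming the tunnel runs over the target's pveproxy on port 8006; packet sizes and counts are just examples:

Code:
# path-MTU check: 1472 B payload + 28 B headers = 1500; lower -s until it passes
ping -M do -s 1472 -c 5 110.42.110.28

# look for packet loss along the route
mtr -rwc 100 110.42.110.28

# confirm the API port stays reachable
curl -kv --max-time 10 https://110.42.110.28:8006/ -o /dev/null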
 
did you also check the journal on the source side? without any more indication of what's going on, this will be impossible to debug..
 
there is no PHP involved in a remote migration?
 
