Hi all,
I set this up a few days ago, so I used the beta1 ISO, and I've had the issue ever since I set up the machines.
I have two NUCs as my test environment, both with the same setup and hardware spec:
root@pve01:~# cat /proc/meminfo
Code:
MemTotal: 16345196 kB
MemFree: 14776328 kB
MemAvailable: 14821328 kB
root@pve01:~# cat /proc/cpuinfo
Code:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Core(TM) i5-3427U CPU @ 1.80GHz
stepping : 9
microcode : 0x1b
cpu MHz : 2398.558
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 2
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt
bugs :
bogomips : 4589.63
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
root@pve02:~# pveversion -v
Code:
proxmox-ve: 4.0-10 (running kernel: 4.2.0-1-pve)
pve-manager: 4.0-36 (running version: 4.0-36/9815097f)
pve-kernel-4.2.0-1-pve: 4.2.0-10
lvm2: 2.02.116-pve1
corosync-pve: 2.3.4-2
libqb0: 0.17.1-3
pve-cluster: 4.0-17
qemu-server: 4.0-23
pve-firmware: 1.1-7
libpve-common-perl: 4.0-20
libpve-access-control: 4.0-8
libpve-storage-perl: 4.0-21
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-5
pve-container: 0.9-18
pve-firewall: 2.0-11
pve-ha-manager: 1.0-5
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.4-pve3~jessie
The error I see when trying to live-migrate a VM is:
Code:
Sep 10 18:34:29 starting migration of VM 101 to node 'pve01' (10.10.10.31)
Sep 10 18:34:29 copying disk images
Sep 10 18:34:29 starting VM 101 on remote node 'pve01'
Sep 10 18:34:31 starting ssh migration tunnel
Sep 10 18:34:31 starting online/live migration on localhost:60000
Sep 10 18:34:31 migrate_set_speed: 8589934592
Sep 10 18:34:31 migrate_set_downtime: 0.1
Sep 10 18:34:33 migration status: active (transferred 244662773, remaining 798330880), total 1082793984)
Sep 10 18:34:35 migration status: active (transferred 477656869, remaining 499875840), total 1082793984)
Sep 10 18:34:37 ERROR: online migrate failure - aborting
Sep 10 18:34:37 aborting phase 2 - cleanup resources
Sep 10 18:34:37 migrate_cancel
Sep 10 18:34:38 ERROR: migration finished with problems (duration 00:00:09)
TASK ERROR: migration problems
while running "qm migrate 101 pve01 --online" I get this errors in syslog:
Code:
Sep 10 19:00:42 pve02 qm[6386]: <root@pam> starting task UPID:pve02:000018F3:0002CECD:55F1B73A:qmigrate:101:root@pam:
Sep 10 19:00:43 pve02 pmxcfs[960]: [status] notice: received log
Sep 10 19:00:43 pve02 pmxcfs[960]: [status] notice: received log
Sep 10 19:00:50 pve02 pmxcfs[960]: [status] notice: received log
Sep 10 19:00:50 pve02 pmxcfs[960]: [status] notice: received log
Sep 10 19:00:50 pve02 qm[6387]: migration problems
Sep 10 19:00:50 pve02 qm[6386]: <root@pam> end task UPID:pve02:000018F3:0002CECD:55F1B73A:qmigrate:101:root@pam: migration problems
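The source-side task log only says "online migrate failure" without a reason, so I assume the actual error is logged on the target node. My next step is to look there right after a failed attempt, roughly like this (default syslog location, so treat the exact paths as an assumption):
Code:
# on the target node (pve01), immediately after the failed migration
root@pve01:~# tail -n 100 /var/log/syslog
# narrow it down to QEMU/migration related lines
root@pve01:~# grep -iE 'qemu|migrat' /var/log/syslog | tail -n 20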
Cluster status seems to be fine:
root@pve01:~# pvecm status
Code:
Quorum information
------------------
Date: Thu Sep 10 18:50:49 2015
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 60
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.10.10.31 (local)
0x00000002 1 10.10.10.32
root@pve02:~# qm config 101
Code:
balloon: 512
bootdisk: virtio0
cores: 2
ide2: isos:iso/pmagic_2014_09_29.iso,media=cdrom
memory: 1024
name: tst
net0: virtio=E6:C5:9D:74:81:CB,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=c4caba6e-e35c-4508-bf79-a0bd17c2a29f
sockets: 1
virtio0: disks:101/vm-101-disk-1.raw,size=6G
The behaviour is the same with another VM using a qcow2 image...
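In case it matters, both disks live on the "disks" Gluster storage, so I'd also verify the images are visible through the FUSE mount on both nodes, roughly like this (the images/<vmid> path layout is my assumption based on how directory-style storages are laid out):
Code:
# check the image is reachable via the Gluster mount on both nodes (path is an assumption)
root@pve02:~# ls -lh /mnt/pve/disks/images/101/
root@pve01:~# ls -lh /mnt/pve/disks/images/101/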
root@pve02:~# cat /etc/pve/storage.cfg
Code:
dir: local
        path /var/lib/vz
        maxfiles 0
        shared
        content vztmpl

glusterfs: isos
        path /mnt/pve/isos
        volume isos
        content iso
        maxfiles 1
        server pve01.one.lan
        server2 pve02.one.lan

glusterfs: disks
        path /mnt/pve/disks
        volume disks
        content rootdir,images
        maxfiles 1
        server pve01.one.lan
        server2 pve02.one.lan
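Both nodes also act as Gluster servers for these volumes, so the volume health can be checked from either node; this is roughly what I'd run, assuming the gluster CLI is installed alongside the server packages:
Code:
# quick Gluster-side sanity check (assumes the gluster CLI is available on the nodes)
root@pve02:~# gluster volume info disks
root@pve02:~# gluster volume status disks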
ZFS is in a good condition too:
root@pve02:~# zpool list
Code:
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 74.5G 2.52G 72.0G - 1% 3% 1.00x ONLINE -
root@pve02:~# zfs list
Code:
NAME USED AVAIL REFER MOUNTPOINT
rpool 12.3G 59.8G 470M /rpool
rpool/ROOT 1.60G 59.8G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.60G 59.8G 1.60G /
rpool/disks 720K 45.0G 720K /rpool/disks
rpool/isos 470M 14.5G 470M /rpool/isos
rpool/swap 9.83G 69.7G 64K -
I can't see a problem myself. Is this a known bug, or is my setup faulty?