Proxmox VE 6.1 released!

Stefan_R

Proxmox Staff Member
Staff member
Jun 4, 2019
Vienna
Dec 5 09:17:07 node3 kernel: [3236224.935997] CPU: 26 PID: 3030113 Comm: kvm Tainted: P D O 5.0.21-3-pve #1
[...]
Dec 5 09:17:07 node3 kernel: [3236224.945437] Call Trace:
Dec 5 09:17:07 node3 kernel: [3236224.946020] kvm_vcpu_ioctl_get_hv_cpuid+0x44/0x220 [kvm]
I remember seeing a bug in get_hv_cpuid on the KVM mailing list at some point; it should have been fixed in newer kernels, though. That also explains why switching the OS type to "Linux" fixes it, since that disables the Hyper-V extensions for the VM.

Try rebooting the node to load the new 5.3 kernel included in 6.1, then your VM should run with OS type "Windows" as well.
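Before and after the reboot it can help to confirm which kernel is actually loaded. A minimal sketch (the `kernel_hint` helper is made up for illustration, not a Proxmox tool):

```shell
# Print the running kernel and hint whether a reboot into 5.3 is still pending.
kernel_hint() {
  case "$1" in
    5.0.*) echo "old 5.0 kernel still running - reboot into 5.3" ;;
    5.3.*) echo "5.3 kernel active" ;;
    *)     echo "unexpected kernel version: $1" ;;
  esac
}

kernel_hint "$(uname -r)"
```

`pveversion -v` additionally lists both the installed `pve-kernel-5.3` package and the running kernel, which should agree after the reboot.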
 

wech

Member
Sep 9, 2009
Try rebooting the node to load the new 5.3 kernel included in 6.1, then your VM should run with OS type "Windows" as well.
I just shut down the remaining Windows VMs and rebooted the host machine during a planned downtime at noon.
After the reboot everything works fine again - all Windows VMs came up fine :)
 
Jun 4, 2013
Slovakia
2x dell r320 reboot after upgrade to 6.1

server 1
Code:
[    5.771732] ipmi_ssif: IPMI SSIF Interface driver
[    5.913929] ------------[ cut here ]------------
[    5.913930] ------------[ cut here ]------------
[    5.913931] General protection fault in user access. Non-canonical address?
[    5.913931] General protection fault in user access. Non-canonical address?
[    5.913941] WARNING: CPU: 0 PID: 1729 at arch/x86/mm/extable.c:126 ex_handler_uaccess+0x52/0x60
[    5.913946] WARNING: CPU: 1 PID: 1730 at arch/x86/mm/extable.c:126 ex_handler_uaccess+0x52/0x60
[    5.913947] Modules linked in:
[    5.913948] Modules linked in:
[    5.913948]  ipmi_ssif intel_rapl_msr
[    5.913950]  ipmi_ssif
[    5.913950]  intel_rapl_common sb_edac
[    5.913952]  intel_rapl_msr
[    5.913952]  x86_pkg_temp_thermal intel_powerclamp
[    5.913954]  intel_rapl_common
[    5.913954]  coretemp
[    5.913955]  sb_edac
[    5.913956]  kvm_intel kvm
[    5.913957]  x86_pkg_temp_thermal
[    5.913957]  irqbypass crct10dif_pclmul
[    5.913959]  intel_powerclamp
[    5.913959]  crc32_pclmul ghash_clmulni_intel
[    5.913960]  coretemp
[    5.913961]  aesni_intel mgag200
[    5.913962]  kvm_intel
[    5.913963]  drm_vram_helper ttm
[    5.913964]  kvm
[    5.913965]  aes_x86_64 crypto_simd
[    5.913966]  irqbypass
[    5.913967]  dcdbas cryptd
[    5.913968]  crct10dif_pclmul
[    5.913968]  drm_kms_helper glue_helper
[    5.913970]  crc32_pclmul
[    5.913970]  intel_cstate intel_rapl_perf
[    5.913971]  ghash_clmulni_intel
[    5.913972]  drm pcspkr input_leds
[    5.913974]  aesni_intel
[    5.913974]  i2c_algo_bit fb_sys_fops
[    5.913976]  mgag200
[    5.913976]  syscopyarea sysfillrect
[    5.913978]  drm_vram_helper
[    5.913978]  sysimgblt joydev mei_me
[    5.913980]  ttm
[    5.913980]  mei ipmi_si
[    5.913982]  aes_x86_64
[    5.913982]  ipmi_devintf ipmi_msghandler
[    5.913983]  crypto_simd
[    5.913984]  mac_hid acpi_power_meter zram
[    5.913985]  dcdbas
[    5.913986]  vhost_net vhost
[    5.913987]  cryptd
[    5.913988]  tap ib_iser rdma_cm
[    5.913990]  drm_kms_helper
[    5.913990]  iw_cm ib_cm ib_core
[    5.913992]  glue_helper
[    5.913993]  iscsi_tcp libiscsi_tcp sunrpc
[    5.913994]  intel_cstate
[    5.913995]  libiscsi scsi_transport_iscsi
[    5.913996]  intel_rapl_perf
[    5.913997]  ip_tables x_tables
[    5.913998]  drm
[    5.913998]  autofs4 zfs(PO)
[    5.914000]  pcspkr
[    5.914000]  zunicode(PO)
[    5.914001]  input_leds
[    5.914002]  zlua(PO) zavl(PO)
[    5.914003]  i2c_algo_bit
[    5.914004]  icp(PO) hid_generic
[    5.914005]  fb_sys_fops
[    5.914005]  usbmouse
[    5.914006]  syscopyarea
[    5.914007]  usbkbd usbhid
[    5.914008]  sysfillrect
[    5.914009]  hid
[    5.914010]  sysimgblt
[    5.914010]  zcommon(PO) znvpair(PO)
[    5.914012]  joydev
[    5.914012]  spl(O) btrfs
[    5.914014]  mei_me
[    5.914014]  xor zstd_compress
[    5.914015]  mei
[    5.914016]  raid6_pq libcrc32c
[    5.914017]  ipmi_si
[    5.914018]  ahci lpc_ich libahci
[    5.914020]  ipmi_devintf
[    5.914020]  tg3 mpt3sas raid_class
[    5.914022]  ipmi_msghandler
[    5.914023]  scsi_transport_sas wmi
[    5.914025]  mac_hid acpi_power_meter zram
[    5.914028] CPU: 1 PID: 1730 Comm: kworker/u96:5 Tainted: P           O      5.3.10-1-pve #1
[    5.914028]  vhost_net vhost tap
[    5.914030] Hardware name: Dell Inc. PowerEdge R320/0R5KP9, BIOS 2.6.0 06/11/2018
[    5.914031]  ib_iser rdma_cm iw_cm
[    5.914034] RIP: 0010:ex_handler_uaccess+0x52/0x60
[    5.914034]  ib_cm ib_core iscsi_tcp
[    5.914037] Code: c4 08 b8 01 00 00 00 5b 5d c3 80 3d 45 d6 78 01 00 75 db 48 c7 c7 28 1f d4 a5 48 89 75 f0 c6 05 31 d6 78 01 01 e8 af a1 01 00 <0f> 0b 48 8b 75 f0 eb bc 66 0f 1f 44 00 00 66 66 66 66 90 55 80 3d
[    5.914038]  libiscsi_tcp sunrpc libiscsi
[    5.914040] RSP: 0018:ffffa7e88ffefcc0 EFLAGS: 00010282
[    5.914040]  scsi_transport_iscsi ip_tables x_tables
[    5.914042]  autofs4 zfs(PO)
[    5.914044] RAX: 0000000000000000 RBX: ffffffffa5802448 RCX: 0000000000000006
[    5.914045]  zunicode(PO) zlua(PO) zavl(PO)
[    5.914047] RDX: 0000000000000007 RSI: 0000000000000092 RDI: ffff97c77f057440
[    5.914047]  icp(PO) hid_generic usbmouse
[    5.914049] RBP: ffffa7e88ffefcd0 R08: 00000000000003fc R09: 0000000000000004
[    5.914050]  usbkbd usbhid
[    5.914052] R10: 0000000000000000 R11: 0000000000000001 R12: 000000000000000d
[    5.914052]  hid zcommon(PO) znvpair(PO)
[    5.914054] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[    5.914054]  spl(O) btrfs xor
[    5.914057] FS:  0000000000000000(0000) GS:ffff97c77f040000(0000) knlGS:0000000000000000
[    5.914057]  zstd_compress raid6_pq libcrc32c
[    5.914059] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    5.914060]  ahci lpc_ich
[    5.914062] CR2: 00007ffeda87e068 CR3: 00000003ece0a004 CR4: 00000000000606e0
[    5.914062]  libahci tg3 mpt3sas
[    5.914064] Call Trace:
[    5.914064]  raid_class scsi_transport_sas wmi
[    5.914068] CPU: 0 PID: 1729 Comm: kworker/u96:4 Tainted: P           O      5.3.10-1-pve #1
[    5.914070]  fixup_exception+0x4a/0x61
[    5.914071] Hardware name: Dell Inc. PowerEdge R320/0R5KP9, BIOS 2.6.0 06/11/2018
[    5.914074]  do_general_protection+0x4e/0x150
[    5.914077]  general_protection+0x28/0x30
[    5.914079] RIP: 0010:ex_handler_uaccess+0x52/0x60
[    5.914081] Code: c4 08 b8 01 00 00 00 5b 5d c3 80 3d 45 d6 78 01 00 75 db 48 c7 c7 28 1f d4 a5 48 89 75 f0 c6 05 31 d6 78 01 01 e8 af a1 01 00 <0f> 0b 48 8b 75 f0 eb bc 66 0f 1f 44 00 00 66 66 66 66 90 55 80 3d
[    5.914085] RIP: 0010:strnlen_user+0x4c/0x110
[    5.914086] Code: f8 0f 86 e1 00 00 00 48 29 f8 45 31 c9 66 66 90 0f ae e8 48 39 c6 49 89 fa 48 0f 46 c6 41 83 e2 07 48 83 e7 f8 31 c9 4c 01 d0 <4c> 8b 1f 85 c9 0f 85 96 00 00 00 42 8d 0c d5 00 00 00 00 41 b8 01
[    5.914087] RSP: 0018:ffffa7e88add7cc0 EFLAGS: 00010282
[    5.914089] RSP: 0018:ffffa7e88ffefde8 EFLAGS: 00010206
[    5.914090] RAX: 0000000000000000 RBX: ffffffffa5802448 RCX: 0000000000000000
[    5.914092] RAX: 0000000000020000 RBX: dc15c8d5fc4d2f00 RCX: 0000000000000000
[    5.914093] RDX: 0000000000000007 RSI: ffffffffa6583f7f RDI: 0000000000000246
[    5.914094] RDX: dc15c8d5fc4d2f00 RSI: 0000000000020000 RDI: dc15c8d5fc4d2f00
[    5.914096] RBP: ffffa7e88add7cd0 R08: ffffffffa6583f40 R09: 0000000000029fc0
[    5.914097] RBP: ffffa7e88ffefdf8 R08: 8080808080808080 R09: 0000000000000000
[    5.914098] R10: 0001286193ba1ff4 R11: ffffffffa6583f40 R12: 000000000000000d
[    5.914099] R10: 0000000000000000 R11: 0000000000000000 R12: 00007fffffffefe7
[    5.914100] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[    5.914102] R13: ffff97bf6d748fe7 R14: 0000000000000000 R15: fffff8234fb5d200
[    5.914103] FS:  0000000000000000(0000) GS:ffff97c77f000000(0000) knlGS:0000000000000000
[    5.914106]  ? _copy_from_user+0x3e/0x60
[    5.914109]  copy_strings.isra.35+0x92/0x380
[    5.914110] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    5.914111] CR2: 00007f331b985000 CR3: 00000003ece0a004 CR4: 00000000000606f0
[    5.914113] Call Trace:
[    5.914114]  __do_execve_file.isra.42+0x5b5/0x9d0
[    5.914117]  ? kmem_cache_alloc+0x100/0x220
[    5.914119]  fixup_exception+0x4a/0x61
[    5.914122]  do_general_protection+0x4e/0x150
[    5.914123]  do_execve+0x25/0x30
[    5.914126]  call_usermodehelper_exec_async+0x188/0x1b0
[    5.914128]  general_protection+0x28/0x30
[    5.914131] RIP: 0010:strnlen_user+0x4c/0x110
[    5.914132]  ? call_usermodehelper+0xb0/0xb0
[    5.914135]  ret_from_fork+0x35/0x40
[    5.914136] Code: f8 0f 86 e1 00 00 00 48 29 f8 45 31 c9 66 66 90 0f ae e8 48 39 c6 49 89 fa 48 0f 46 c6 41 83 e2 07 48 83 e7 f8 31 c9 4c 01 d0 <4c> 8b 1f 85 c9 0f 85 96 00 00 00 42 8d 0c d5 00 00 00 00 41 b8 01
[    5.914137] RSP: 0018:ffffa7e88add7de8 EFLAGS: 00010206
[    5.914139] RAX: 0000000000020000 RBX: 066bd1e2e6c21800 RCX: 0000000000000000
[    5.914140] ---[ end trace 08768478aabeb606 ]---
[    5.914141] RDX: 066bd1e2e6c21800 RSI: 0000000000020000 RDI: 066bd1e2e6c21800
[    5.914142] RBP: ffffa7e88add7df8 R08: 8080808080808080 R09: 0000000000000000
[    5.914143] R10: 0000000000000000 R11: 0000000000000000 R12: 00007fffffffefe6
[    5.914144] R13: ffff97c7568dafe6 R14: 0000000000000000 R15: fffff8236f5a3680
[    5.914147]  ? _copy_from_user+0x3e/0x60
[    5.914149]  copy_strings.isra.35+0x92/0x380
[    5.914151]  __do_execve_file.isra.42+0x5b5/0x9d0
[    5.914154]  ? kmem_cache_alloc+0x100/0x220
[    5.914156]  do_execve+0x25/0x30
[    5.914158]  call_usermodehelper_exec_async+0x188/0x1b0
[    5.914160]  ? call_usermodehelper+0xb0/0xb0
[    5.914162]  ret_from_fork+0x35/0x40
[    5.914164] ---[ end trace 08768478aabeb607 ]---
 
Jun 4, 2013
Slovakia
Server 2
Code:
[    5.814515] ipmi_ssif: IPMI SSIF Interface driver
[    5.918271] ------------[ cut here ]------------
[    5.918273] General protection fault in user access. Non-canonical address?
[    5.918282] WARNING: CPU: 10 PID: 1642 at arch/x86/mm/extable.c:126 ex_handler_uaccess+0x52/0x60
[    5.918283] Modules linked in: ipmi_ssif intel_rapl_msr intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel mgag200 drm_vram_helper ttm dcdbas drm_kms_helper aesni_intel drm aes_x86_64 crypto_simd cryptd glue_helper i2c_algo_bit fb_sys_fops intel_cstate syscopyarea intel_rapl_perf sysfillrect pcspkr sysimgblt joydev input_leds mei_me mei ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter mac_hid vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc zram ip_tables x_tables autofs4 zfs(PO) zunicode(PO) zlua(PO) zavl(PO) icp(PO) hid_generic usbkbd usbmouse usbhid hid zcommon(PO) znvpair(PO) spl(O) btrfs xor zstd_compress raid6_pq libcrc32c ahci libahci lpc_ich tg3 mpt3sas raid_class scsi_transport_sas wmi
[    5.918322] CPU: 10 PID: 1642 Comm: kworker/u96:4 Tainted: P           O      5.3.10-1-pve #1
[    5.918323] Hardware name: Dell Inc. PowerEdge R320/0R5KP9, BIOS 2.6.0 06/11/2018
[    5.918326] RIP: 0010:ex_handler_uaccess+0x52/0x60
[    5.918328] Code: c4 08 b8 01 00 00 00 5b 5d c3 80 3d 45 d6 78 01 00 75 db 48 c7 c7 28 1f 94 89 48 89 75 f0 c6 05 31 d6 78 01 01 e8 af a1 01 00 <0f> 0b 48 8b 75 f0 eb bc 66 0f 1f 44 00 00 66 66 66 66 90 55 80 3d
[    5.918329] RSP: 0018:ffffbfc10a537cc0 EFLAGS: 00010282
[    5.918330] RAX: 0000000000000000 RBX: ffffffff89402448 RCX: 0000000000000000
[    5.918331] RDX: 0000000000000007 RSI: ffffffff8a183f7f RDI: 0000000000000246
[    5.918332] RBP: ffffbfc10a537cd0 R08: ffffffff8a183f40 R09: 0000000000029fc0
[    5.918333] R10: 0001285c6e0ada9e R11: ffffffff8a183f40 R12: 000000000000000d
[    5.918334] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[    5.918335] FS:  0000000000000000(0000) GS:ffff9ddeff280000(0000) knlGS:0000000000000000
[    5.918336] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    5.918337] CR2: 00007f55f6ba1000 CR3: 000000001320a004 CR4: 00000000000606e0
[    5.918338] Call Trace:
[    5.918344]  fixup_exception+0x4a/0x61
[    5.918347]  do_general_protection+0x4e/0x150
[    5.918351]  general_protection+0x28/0x30
[    5.918355] RIP: 0010:strnlen_user+0x4c/0x110
[    5.918356] Code: f8 0f 86 e1 00 00 00 48 29 f8 45 31 c9 66 66 90 0f ae e8 48 39 c6 49 89 fa 48 0f 46 c6 41 83 e2 07 48 83 e7 f8 31 c9 4c 01 d0 <4c> 8b 1f 85 c9 0f 85 96 00 00 00 42 8d 0c d5 00 00 00 00 41 b8 01
[    5.918357] RSP: 0018:ffffbfc10a537de8 EFLAGS: 00010206
[    5.918358] RAX: 0000000000020000 RBX: 20233d8115419000 RCX: 0000000000000000
[    5.918359] RDX: 20233d8115419000 RSI: 0000000000020000 RDI: 20233d8115419000
[    5.918360] RBP: ffffbfc10a537df8 R08: 8080808080808080 R09: 0000000000000000
[    5.918361] R10: 0000000000000000 R11: 0000000000000000 R12: 00007fffffffefe7
[    5.918362] R13: ffff9ddeff9f7fe7 R14: 0000000000000000 R15: ffffe6792ffe7dc0
[    5.918366]  ? _copy_from_user+0x3e/0x60
[    5.918370]  copy_strings.isra.35+0x92/0x380
[    5.918373]  __do_execve_file.isra.42+0x5b5/0x9d0
[    5.918377]  ? kmem_cache_alloc+0x100/0x220
[    5.918380]  do_execve+0x25/0x30
[    5.918384]  call_usermodehelper_exec_async+0x188/0x1b0
[    5.918386]  ? call_usermodehelper+0xb0/0xb0
[    5.918390]  ret_from_fork+0x35/0x40
[    5.918392] ---[ end trace a1490dccb8423ddb ]---
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Server 2
Code:
[    5.918271] ------------[ cut here ]------------
[    5.918273] General protection fault in user access. Non-canonical address?
[    5.918282] WARNING: CPU: 10 PID: 1642 at arch/x86/mm/extable.c:126 ex_handler_uaccess+0x52/0x60
[...]
Do you use ZFS? If yes, it's a rather harmless warning which will disappear with an upcoming ZFS update. But yes, it looks pretty scary, sorry for that:

Add
Code:
# echo "options zfs zfs_vdev_scheduler=none" >> /etc/modprobe.d/zfs-no-vdev-sched.conf
then:
# update-grub
or if UEFI is used:
# pve-efiboot-tool refresh

EDIT: see https://github.com/zfsonlinux/zfs/issues/9417#issuecomment-548085631
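For clarity, the file that command creates should end up containing a single modprobe `options` line (standard modprobe.d syntax; shown here only for reference):

```
options zfs zfs_vdev_scheduler=none
```

After the next reboot, the value can be read back from `/sys/module/zfs/parameters/zfs_vdev_scheduler`, which should then print `none`.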
 

MateuszAdach

New Member
Dec 5, 2019
I've solved this problem: on the freshly installed servers the /etc/apparmor.d/tunables/proc file was missing. I just copied it over from another server, and the LXC containers start.

On two freshly installed servers with Proxmox 6.1 I cannot start LXC containers on local or LVM storage. I've attached `strace lxc-start -n 100 -F`:

Code:
root@pve1:~# pct start 100
Job for pve-container@100.service failed because the control process exited with error code.
See "systemctl status pve-container@100.service" and "journalctl -xe" for details.
command 'systemctl start pve-container@100' failed: exit code 1
root@pve1:~# lxc-start -n 100 -F
lxc-start: 100: conf.c: run_buffer: 352 Script exited with status 2
lxc-start: 100: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "100"
lxc-start: 100: start.c: __lxc_start: 2032 Failed to initialize container "100"
Segmentation fault
root@pve1:~# dmesg | tail
[   23.077662] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[   24.889958] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
[  169.129070] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[  169.144425] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[  533.341624] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[  533.357210] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[  552.452058] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[  552.470831] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[  552.480625] lxc-start[3695]: segfault at 50 ip 00007f40ebdecf8b sp 00007ffd01d84e40 error 4 in liblxc.so.1.6.0[7f40ebd93000+8a000]
[  552.480659] Code: 9b c0 ff ff 4d 85 ff 0f 85 82 02 00 00 66 90 48 8b 73 50 48 8b bb f8 00 00 00 e8 80 78 fa ff 4c 8b 74 24 10 48 89 de 4c 89 f7 <41> ff 56 50 4c 89 f7 48 89 de 41 ff 56 58 48 8b 83 f8 00 00 00 8b

Code:
root@pve1:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
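For reference, on a stock Debian-based install the file mentioned at the top of this post, `/etc/apparmor.d/tunables/proc`, is essentially just the AppArmor `@{PROC}` tunable definition (content as found on a reference system; verify against a working node before copying):

```
@{PROC}=/proc/
```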
 

arttk

New Member
Dec 5, 2019
I just updated my server and rebooted, and this is what I see.
Is this normal?
How can I fix it?

Dec 05 18:35:41 node kernel: EDAC skx MC1: HANDLING MCE MEMORY ERROR
Dec 05 18:35:41 node kernel: EDAC skx MC1: CPU 14: Machine Check Event: 0x0 Bank 8: 0xcc10000001010090
Dec 05 18:35:41 node kernel: EDAC skx MC1: TSC 0x0
Dec 05 18:35:41 node kernel: EDAC skx MC1: ADDR 0x201ff56d80
Dec 05 18:35:41 node kernel: EDAC skx MC1: MISC 0x200000c000201086
Dec 05 18:35:41 node kernel: EDAC skx MC1: PROCESSOR 0:0x50654 TIME 1575560141 SOCKET 0 APIC 0xd
Dec 05 18:35:41 node kernel: EDAC MC1: 16384 CE memory read error on CPU_SrcID#0_MC#1_Chan#0_DIMM#0 (channel:0 slot:0 page:0x201ff56 offset:0xd80 grain:32 syndrome:0x0 - OVERFLOW err_code:0x0101:0x0090 socket:0 imc:1 rank:0 bg:2 ba:3 row:0x1ffea col:0x3b8)
 

Stefan_R

Proxmox Staff Member
Staff member
Jun 4, 2019
Vienna
Dec 05 18:35:41 node kernel: EDAC skx MC1: HANDLING MCE MEMORY ERROR
[...]
Dec 05 18:35:41 node kernel: EDAC MC1: 16384 CE memory read error ...
That looks like an ECC memory error. Probably a faulty RAM module, maybe try a memtest and replace it.

Very unlikely that it has to do with the update though, it probably just manifested because of the reboot...
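To identify which DIMM slot reported the corrected error, the locator can be pulled straight out of the EDAC line (a sketch using the log line quoted above; mapping the label to a physical slot still needs the board manual or `dmidecode -t memory`):

```shell
# Extract the DIMM locator from the EDAC corrected-error line.
line='EDAC MC1: 16384 CE memory read error on CPU_SrcID#0_MC#1_Chan#0_DIMM#0 (channel:0 slot:0 page:0x201ff56 offset:0xd80 grain:32 syndrome:0x0 - OVERFLOW err_code:0x0101:0x0090 socket:0 imc:1 rank:0 bg:2 ba:3 row:0x1ffea col:0x3b8)'
echo "$line" | grep -o 'CPU_SrcID#[0-9]*_MC#[0-9]*_Chan#[0-9]*_DIMM#[0-9]*'
# -> CPU_SrcID#0_MC#1_Chan#0_DIMM#0
```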
 

Ops_Mass

New Member
Dec 9, 2019
Our containers don't start via the GUI since the upgrade to Proxmox 6.1:

lxc-start 100 20191209130904.462 INFO seccomp - seccomp.c:parse_config_v2:1008 - Merging compat seccomp contexts into main context
lxc-start 100 20191209130904.462 INFO conf - conf.c:run_script_argv:372 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "100", config section "lxc"
lxc-start 100 20191209130904.980 DEBUG conf - conf.c:run_buffer:340 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: symlink encountered at: //var

lxc-start 100 20191209130904.989 ERROR conf - conf.c:run_buffer:352 - Script exited with status 20
lxc-start 100 20191209130904.989 ERROR start - start.c:lxc_init:897 - Failed to run lxc.hook.pre-start for container "100"
lxc-start 100 20191209130904.990 ERROR start - start.c:__lxc_start:2032 - Failed to initialize container "100"
lxc-start 100 20191209130904.990 DEBUG lxccontainer - lxccontainer.c:wait_on_daemonized_start:862 - First child 4467 exited
lxc-start 100 20191209130904.990 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:865 - No such file or directory - Failed to receive the container state
lxc-start 100 20191209130904.990 ERROR lxc_start - tools/lxc_start.c:main:329 - The container failed to start
lxc-start 100 20191209130904.990 ERROR lxc_start - tools/lxc_start.c:main:332 - To get more details, run the container in foreground mode
lxc-start 100 20191209130904.990 ERROR lxc_start - tools/lxc_start.c:main:335 - Additional information can be obtained by setting the --logfile and --logpriority options
But we can start the containers manually in foreground mode with:
lxc-start -o lxc-start.log -lDEBUG -n 100 -F
Can someone help?
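The telling line is `symlink encountered at: //var`: the pre-start hook apparently refuses to set up the container when a path component inside the rootfs is a symlink. A minimal, self-contained demo of that kind of check (illustrative only; the fake rootfs and paths are made up):

```shell
# Recreate the situation the hook complains about: /var inside a
# throwaway fake rootfs is a symlink instead of a real directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/rootfs/real_var"
ln -s real_var "$tmp/rootfs/var"

for p in "$tmp/rootfs" "$tmp/rootfs/var"; do
  if [ -L "$p" ]; then
    echo "symlink encountered at: ${p#"$tmp"}"
  fi
done
rm -rf "$tmp"
# -> symlink encountered at: /rootfs/var
```

On a real container, `pct mount <vmid>` followed by `ls -ld` on the suspect path inside the mounted rootfs shows whether e.g. `/var` is a symlink.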
 

oguz

Proxmox Staff Member
Staff member
Nov 19, 2018

mvrhov

Member
Jan 29, 2011
Still clock jumps for me on 6.1 :/
After running the reboot command, the clock is in the future.

root@h01:~# date
Mon 09 Dec 2019 10:30:33 PM UTC
root@h01:~# date
Mon 09 Dec 2019 02:22:32 PM UTC
 
Sep 20, 2019
Do you use ZFS? If yes, it's a rather harmless warning which will disappear with an upcoming ZFS update. [...]
I am having the same problem on some servers and it causes some processes like pveproxy to go into D state and become unresponsive.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
1,948
298
103
South Tyrol/Italy
I am having the same problem on some servers and it causes some processes like pveproxy to go into D state and become unresponsive.
Then it's not the same cause, as that just produces a single such warning at boot during ZFS module load; after that there are no effects - if it is this specific "issue". So what does your warning look like? It would be good to see it with a bit of kernel log before and after.
 
Sep 20, 2019
Then it's not the same cause, as that just produces a single such warning at boot during ZFS module load [...]
Still, if it isn't related to that bug, then I am also getting some FUSE hangs which send pveproxy into D state:

Code:
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183852] pveproxy        D    0 2191549      1 0x00004004
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183861] Call Trace:
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183878]  __schedule+0x2bb/0x660
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183885]  schedule+0x33/0xa0
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183895]  request_wait_answer+0x133/0x210
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183907]  ? wait_woken+0x80/0x80
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183918]  __fuse_request_send+0x69/0x90
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183929]  fuse_request_send+0x29/0x30
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183939]  fuse_simple_request+0xdd/0x1a0
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183952]  fuse_dentry_revalidate+0x1a0/0x310
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.183974]  lookup_fast+0x292/0x310
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184017]  walk_component+0x49/0x330
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184031]  ? inode_permission+0x63/0x1a0
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184042]  link_path_walk.part.43+0x2c6/0x540
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184053]  path_parentat.isra.44+0x2f/0x80
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184068]  filename_parentat.isra.59.part.60+0xa4/0x180
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184188]  ? rrw_exit+0x5e/0x150 [zfs]
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184257]  ? rrm_exit+0x46/0x80 [zfs]
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184271]  filename_create+0x55/0x180
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184289]  ? getname_flags+0x6f/0x1e0
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184303]  do_mkdirat+0x59/0x110
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184313]  __x64_sys_mkdir+0x1b/0x20
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184324]  do_syscall_64+0x5a/0x130
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184334]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184344] RIP: 0033:0x7f1dace630d7
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184359] Code: Bad RIP value.
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184366] RSP: 002b:00007ffede9a5a38 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184376] RAX: ffffffffffffffda RBX: 00005564114f2260 RCX: 00007f1dace630d7
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184384] RDX: 00005564101ad1f4 RSI: 00000000000001ff RDI: 00005564152671b0
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184392] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000010
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184401] R10: 0000000000000000 R11: 0000000000000246 R12: 0000556413952ee8
Dec  9 09:19:01 proxmox-alfred5 kernel: [213270.184409] R13: 00005564152671b0 R14: 0000556414f40e30 R15: 00000000000001ff
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.015901] pveproxy        D    0 2191549      1 0x00004004
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.015909] Call Trace:
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.015925]  __schedule+0x2bb/0x660
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.015933]  schedule+0x33/0xa0
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.016274]  request_wait_answer+0x133/0x210
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.016522]  ? wait_woken+0x80/0x80
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.016997]  __fuse_request_send+0x69/0x90
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.017637]  fuse_request_send+0x29/0x30
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.018179]  fuse_simple_request+0xdd/0x1a0
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.018759]  fuse_dentry_revalidate+0x1a0/0x310
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.019342]  lookup_fast+0x292/0x310
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.019947]  walk_component+0x49/0x330
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.020497]  ? inode_permission+0x63/0x1a0
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.021075]  link_path_walk.part.43+0x2c6/0x540
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.021641]  path_parentat.isra.44+0x2f/0x80
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.022185]  filename_parentat.isra.59.part.60+0xa4/0x180
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.022843]  ? rrw_exit+0x5e/0x150 [zfs]
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.023308]  ? rrm_exit+0x46/0x80 [zfs]
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.023772]  filename_create+0x55/0x180
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.024233]  ? getname_flags+0x6f/0x1e0
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.024727]  do_mkdirat+0x59/0x110
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.025217]  __x64_sys_mkdir+0x1b/0x20
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.025701]  do_syscall_64+0x5a/0x130
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.026189]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.026673] RIP: 0033:0x7f1dace630d7
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.027168] Code: Bad RIP value.
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.027647] RSP: 002b:00007ffede9a5a38 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.028140] RAX: ffffffffffffffda RBX: 00005564114f2260 RCX: 00007f1dace630d7
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.028634] RDX: 00005564101ad1f4 RSI: 00000000000001ff RDI: 00005564152671b0
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.029133] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000010
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.029625] R10: 0000000000000000 R11: 0000000000000246 R12: 0000556413952ee8
Dec  9 09:21:02 proxmox-alfred5 kernel: [213391.030113] R13: 00005564152671b0 R14: 0000556414f40e30 R15: 00000000000001ff
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.848591] pveproxy        D    0 2191549      1 0x00004004
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.849120] Call Trace:
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.849643]  __schedule+0x2bb/0x660
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.850152]  schedule+0x33/0xa0
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.850673]  request_wait_answer+0x133/0x210
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.851191]  ? wait_woken+0x80/0x80
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.851741]  __fuse_request_send+0x69/0x90
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.852228]  fuse_request_send+0x29/0x30
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.852745]  fuse_simple_request+0xdd/0x1a0
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.853266]  fuse_dentry_revalidate+0x1a0/0x310
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.853782]  lookup_fast+0x292/0x310
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.854287]  walk_component+0x49/0x330
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.854804]  ? inode_permission+0x63/0x1a0
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.855321]  link_path_walk.part.43+0x2c6/0x540
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.855868]  path_parentat.isra.44+0x2f/0x80
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.856370]  filename_parentat.isra.59.part.60+0xa4/0x180
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.856984]  ? rrw_exit+0x5e/0x150 [zfs]
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.857477]  ? rrm_exit+0x46/0x80 [zfs]
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.857948]  filename_create+0x55/0x180
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.858477]  ? getname_flags+0x6f/0x1e0
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.858989]  do_mkdirat+0x59/0x110
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.859493]  __x64_sys_mkdir+0x1b/0x20
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.860017]  do_syscall_64+0x5a/0x130
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.860496]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.861000] RIP: 0033:0x7f1dace630d7
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.861506] Code: Bad RIP value.
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.861997] RSP: 002b:00007ffede9a5a38 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.862506] RAX: ffffffffffffffda RBX: 00005564114f2260 RCX: 00007f1dace630d7

Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.863008] RDX: 00005564101ad1f4 RSI: 00000000000001ff RDI: 00005564152671b0

Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.863511] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000010

Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.864032] R10: 0000000000000000 R11: 0000000000000246 R12: 0000556413952ee8

Dec  9 09:23:03 proxmox-alfred5 kernel: [213511.864510] R13: 00005564152671b0 R14: 0000556414f40e30 R15: 00000000000001ff
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
1,948
298
103
South Tyrol/Italy
Still, if it isn't related to that bug, then I am getting some FUSE crashes too, which send pveproxy into D state:
Never said that your issue isn't valid, just that it's surely not this one.

Can you restart the pve-cluster.service? Are there any IO errors? Maybe it'd be best to open another thread for this.
The first occurrence of such a message would be interesting too.
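For reference, restarting the service and digging out that first message could look something like this (a sketch to run on the affected node; the log filters are just examples):

```shell
# Restart pve-cluster, which runs pmxcfs, the FUSE filesystem behind /etc/pve:
systemctl restart pve-cluster.service

# Check the service log since boot for the earliest error:
journalctl -u pve-cluster -b --no-pager | head -n 50

# And look for the first FUSE-related message in the kernel log:
dmesg | grep -i -m1 fuse
```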
 

tkazmierczak

New Member
Dec 11, 2019
2
0
1
33
Hi,
After upgrading to 6.1, we are experiencing problems with online VM migration with local disks on ZFS storage. This worked for us on 5.4 from the console using "qm migrate <VMID> <HOST> --online=true --with-local-disks".

I must say that this is happening on a cluster which got upgraded from 5.4 directly to 6.1. On our test cluster, which was updated from 5.4 to 6.0 and then from 6.0 to 6.1, this does not happen.

Migrate command: qm migrate 12401 node24 --online=true --with-local-disks

Problem details:
The migration fails at the very end with the message: unable to open file '/etc/pve/nodes/node21/qemu-server/12401.conf.tmp.13207' - Permission denied. The aborted migration leaves the VM in a frozen and locked state; we had to unlock it with qm unlock and then stop/start it.

Migrate log:
Code:
2019-12-11 11:19:03 starting migration of VM 12401 to node 'node24' (10.88.1.24)
2019-12-11 11:19:03 found local disk 'local-zfs:vm-12401-disk-0' (in current VM config)
2019-12-11 11:19:03 copying local disk images
2019-12-11 11:19:03 starting VM 12401 on remote node 'node24'
2019-12-11 11:19:05 start remote tunnel
2019-12-11 11:19:06 ssh tunnel ver 1
2019-12-11 11:19:06 starting storage migration
2019-12-11 11:19:06 virtio0: start migration to nbd:10.88.1.24:60000:exportname=drive-virtio0
drive mirror is starting for drive-virtio0
drive-virtio0: transferred: 0 bytes remaining: 17179869184 bytes total: 17179869184 bytes progression: 0.00 % busy: 1 ready: 0
drive-virtio0: transferred: 116391936 bytes remaining: 17063477248 bytes total: 17179869184 bytes progression: 0.68 % busy: 1 ready: 0
drive-virtio0: transferred: 233832448 bytes remaining: 16946036736 bytes total: 17179869184 bytes progression: 1.36 % busy: 1 ready: 0
drive-virtio0: transferred: 351272960 bytes remaining: 16828596224 bytes total: 17179869184 bytes progression: 2.04 % busy: 1 ready: 0
drive-virtio0: transferred: 466616320 bytes remaining: 16713252864 bytes total: 17179869184 bytes progression: 2.72 % busy: 1 ready: 0
...
2019-12-11 11:21:39 migration xbzrle cachesize: 268435456 transferred 0 pages 0 cachemiss 0 overflow 0
2019-12-11 11:21:40 migration status: active (transferred 564820067, remaining 18894848), total 2165645312)
2019-12-11 11:21:40 migration xbzrle cachesize: 268435456 transferred 0 pages 0 cachemiss 9472 overflow 0
2019-12-11 11:21:40 migration speed: 13.30 MB/s - downtime 80 ms
2019-12-11 11:21:40 migration status: completed
drive-virtio0: transferred: 17187667968 bytes remaining: 0 bytes total: 17187667968 bytes progression: 100.00 % busy: 0 ready: 1
all mirroring jobs are ready
drive-virtio0: Completing block job...
drive-virtio0: Completed successfully.
drive-virtio0 : finished
2019-12-11 11:21:41 ERROR: unable to open file '/etc/pve/nodes/node21/qemu-server/12401.conf.tmp.13207' - Permission denied
2019-12-11 11:21:41 ERROR: migration finished with problems (duration 00:02:39)
TASK ERROR: migration problems
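Getting the VM back after the aborted migration can be sketched as follows (using VMID 12401 from the log above; commands as we used them):

```shell
# Clear the 'migrate' lock that the aborted migration leaves behind:
qm unlock 12401

# The VM is left frozen, so a full stop/start is needed:
qm stop 12401
qm start 12401
```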
 

tkazmierczak

New Member
Dec 11, 2019
2
0
1
33
Hi,
I can't be 100% sure that it was quorate, but that made me think a little. I ran more tests: once in a while the migration passed, but it often failed. I switched the migration traffic to a different network, not shared with PVE/corosync, and it started working more predictably. Probably the node lost quorum during the migration, and that was the cause of the error. Thanks for the suggestion, I'll report back if I find more problems.
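For anyone hitting the same thing: moving migration traffic off the corosync network can also be made permanent in /etc/pve/datacenter.cfg (a sketch; the CIDR below is an example value, adjust it to your dedicated migration network):

```
# /etc/pve/datacenter.cfg
migration: type=secure,network=10.88.2.0/24
```

With this set, migrations use the given network without having to pass it on the command line each time.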
 
