Hello,
I have now run into this problem for the second time.
I want to move a disk image from one NFS storage to a second one.
After a few seconds it fails with this error (the equivalent CLI call is sketched below the log):
May 15 08:55:16 sv-c-vdz1 pvedaemon[15193]: <nawroth_p@pve> move disk VM 108: move --disk virtio0 --storage nfs_fast
May 15 08:55:16 sv-c-vdz1 pvedaemon[15193]: <nawroth_p@pve> starting task UPID:sv-c-vdz1:0000692B:0383C5BD:5AFA8454:qmmove:108:nawroth_p@pve:
May 15 08:58:08 sv-c-vdz1 pvedaemon[27358]: <nawroth_p@pve> starting task UPID:sv-c-vdz1:00006B4D:03840901:5AFA8500:qmigrate:105:nawroth_p@pve:
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273497] kvm D 0 5297 1 0x00000000
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273500] Call Trace:
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273506] __schedule+0x3e0/0x870
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273509] ? hrtimer_try_to_cancel+0xc8/0x120
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273510] schedule+0x36/0x80
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273513] rwsem_down_write_failed+0x230/0x3a0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273516] call_rwsem_down_write_failed+0x17/0x30
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273517] ? call_rwsem_down_write_failed+0x17/0x30
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273518] down_write+0x2d/0x40
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273538] nfs_start_io_write+0x19/0x40 [nfs]
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273544] nfs_file_write+0x7c/0x250 [nfs]
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273547] new_sync_write+0xe7/0x140
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273549] __vfs_write+0x29/0x40
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273551] vfs_write+0xb5/0x1a0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273553] SyS_pwrite64+0x95/0xb0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273556] entry_SYSCALL_64_fastpath+0x24/0xab
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273557] RIP: 0033:0x7f94a1a6e963
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273558] RSP: 002b:00007f93877fc5d0 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273560] RAX: ffffffffffffffda RBX: 0000000000000189 RCX: 00007f94a1a6e963
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273561] RDX: 0000000000001000 RSI: 00007f9495016000 RDI: 0000000000000019
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273562] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273563] R10: 000000027f919e00 R11: 0000000000000293 R12: 00007f93877fc620
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273563] R13: 00007f94960bcf88 R14: 00007f93877ff700 R15: 00007f938e79ae00
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273634] kvm D 0 12788 1 0x00000000
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273635] Call Trace:
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273637] __schedule+0x3e0/0x870
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273638] schedule+0x36/0x80
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273641] io_schedule+0x16/0x40
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273645] wait_on_page_bit_common+0xf3/0x180
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273647] ? page_cache_tree_insert+0xc0/0xc0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273649] __filemap_fdatawait_range+0x114/0x180
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273651] ? __filemap_fdatawrite_range+0xd4/0x100
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273652] filemap_write_and_wait_range+0x57/0xa0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273658] nfs_file_fsync+0x34/0x1e0 [nfs]
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273661] vfs_fsync_range+0x4e/0xb0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273662] do_fsync+0x3d/0x70
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273664] SyS_fdatasync+0x13/0x20
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273665] entry_SYSCALL_64_fastpath+0x24/0xab
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273666] RIP: 0033:0x7f94a17a063d
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273667] RSP: 002b:00007f93837fc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273668] RAX: ffffffffffffffda RBX: 0000000000000189 RCX: 00007f94a17a063d
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273669] RDX: 00007f94960bcf30 RSI: 000055e06b379788 RDI: 0000000000000019
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273669] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273670] R10: 00007f93837fc620 R11: 0000000000000293 R12: 00007f93837fc620
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273671] R13: 00007f94960bcf88 R14: 00007f93837ff700 R15: 00007f938f612400
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273740] kvm D 0 27228 1 0x00000000
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273741] Call Trace:
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273743] __schedule+0x3e0/0x870
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273744] schedule+0x36/0x80
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273745] rwsem_down_write_failed+0x230/0x3a0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273747] ? update_curr+0x78/0x1c0
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273748] call_rwsem_down_write_failed+0x17/0x30
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273752] ? dentry_free+0x38/0x70
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273754] ? call_rwsem_down_write_failed+0x17/0x30
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273755] down_write+0x2d/0x40
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273761] nfs_start_io_write+0x19/0x40 [nfs]
May 15 08:59:04 sv-c-vdz1 kernel: [589895.273766] nfs_file_write+0x7c/0x250 [nfs]
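For reference, the "move disk" action in the first log line should correspond roughly to the following CLI call; this is just a sketch reconstructed from the logged parameters (VM 108, disk virtio0, target storage nfs_fast), not the exact command that was run:

    qm move_disk 108 virtio0 nfs_fast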
The storage is a NetApp with a 10 Gbit connection.
In this state all NFS mounts become unavailable, and running VMs can no longer be managed.
The only thing that helps then is rebooting the blade.
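Before rebooting, the hang can at least be confirmed with standard tools (a diagnostic sketch, nothing Proxmox-specific assumed):

    dmesg | grep -i "blocked for more than"           # kernel hung-task warnings like the traces above
    ps axo pid,stat,wchan:30,comm | awk '$2 ~ /D/'    # processes stuck in uninterruptible sleep (state D)
    grep nfs /proc/mounts                             # which NFS mounts are currently affected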
The version is:
proxmox-ve: 5.1-41 (running kernel: 4.13.13-6-pve)
pve-manager: 5.1-46 (running version: 5.1-46/ae8241d4)
pve-kernel-4.13.13-6-pve: 4.13.13-41
corosync: 2.4.2-pve3
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-common-perl: 5.0-28
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-17
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 2.1.1-3
lxcfs: 2.0.8-2
novnc-pve: 0.6-4
openvswitch-switch: 2.7.0-2
proxmox-widget-toolkit: 1.0-11
pve-cluster: 5.0-20
pve-container: 2.0-19
pve-docs: 5.1-16
pve-firewall: 3.0-5
pve-firmware: 2.0-3
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.9.1-9
pve-xtermjs: 1.0-2
qemu-server: 5.0-22
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
Disk moves like this have never caused problems in the past.
We have been running version 5 for about three months.
Peter