Hello,
I have just set up two Proxmox servers. On one of them I extract "OVA" archives, create new Proxmox VMs from the disks they contain (all Windows 2012 R2), verify that VirtIO works, and then create a backup with "vzdump".
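For completeness, the workflow on the source host looks roughly like this (VMID, storage name, and file names are only examples):

# an OVA is a plain tar archive containing the OVF descriptor and disk images
tar -xvf win2012r2.ova

# create an empty VM and import the extracted disk into it
qm create 100 --name win2012r2 --memory 4096 --net0 virtio,bridge=vmbr0
qm importdisk 100 win2012r2-disk1.vmdk local-lvm

# attach the imported (initially "unused") disk, volume name as reported by importdisk
qm set 100 --virtio0 local-lvm:vm-100-disk-1

# after verifying that the guest boots with VirtIO, back it up
vzdump 100 --dumpdir /mnt/backup/data --compress lzo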
I copy the backup over to the other Proxmox server and try to restore it there with "qmrestore <dumpfile> <id>". The restore runs up to 100%, after which nothing happens anymore. After about 5 minutes the web UI becomes unreachable as well.
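Concretely, the restore call looks like this (archive name and target storage again just examples):

qmrestore /mnt/backup/data/vzdump-qemu-100-2017_09_12-14_00_00.vma.lzo 101 --storage local-lvm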
Source and target system both run the same Proxmox version (pveversion output below).
The only thing I could pick out of the syslog is:
Sep 12 14:13:00 pve systemd[1]: Started Proxmox VE replication runner.
Sep 12 14:13:42 pve systemd-udevd[1633]: seq 4485 '/devices/virtual/block/dm-13' is taking a long time
Sep 12 14:14:00 pve systemd[1]: Starting Proxmox VE replication runner...
Sep 12 14:14:00 pve systemd[1]: Started Proxmox VE replication runner.
Sep 12 14:15:00 pve systemd[1]: Starting Proxmox VE replication runner...
Sep 12 14:15:00 pve systemd[1]: Started Proxmox VE replication runner.
Sep 12 14:15:01 pve CRON[3940]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Sep 12 14:15:06 pve kernel: [ 726.370314] INFO: task systemd-udevd:3665 blocked for more than 120 seconds.
Sep 12 14:15:06 pve kernel: [ 726.370348] Tainted: G O 4.10.17-2-pve #1
Sep 12 14:15:06 pve kernel: [ 726.370367] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 12 14:15:06 pve kernel: [ 726.370393] systemd-udevd D 0 3665 1633 0x00000100
Sep 12 14:15:06 pve kernel: [ 726.370396] Call Trace:
Sep 12 14:15:06 pve kernel: [ 726.370406] __schedule+0x233/0x6f0
Sep 12 14:15:06 pve kernel: [ 726.370408] schedule+0x36/0x80
Sep 12 14:15:06 pve kernel: [ 726.370410] schedule_preempt_disabled+0xe/0x10
Sep 12 14:15:06 pve kernel: [ 726.370411] __mutex_lock_slowpath+0x190/0x2a0
Sep 12 14:15:06 pve kernel: [ 726.370412] mutex_lock+0x2f/0x40
Sep 12 14:15:06 pve kernel: [ 726.370416] __blkdev_get+0x6d/0x400
Sep 12 14:15:06 pve kernel: [ 726.370417] blkdev_get+0x12a/0x330
Sep 12 14:15:06 pve kernel: [ 726.370419] blkdev_open+0x82/0xd0
Sep 12 14:15:06 pve kernel: [ 726.370423] do_dentry_open+0x20a/0x310
Sep 12 14:15:06 pve kernel: [ 726.370425] ? blkdev_get_by_dev+0x50/0x50
Sep 12 14:15:06 pve kernel: [ 726.370427] vfs_open+0x4c/0x70
Sep 12 14:15:06 pve kernel: [ 726.370428] ? may_open+0x9b/0x100
Sep 12 14:15:06 pve kernel: [ 726.370429] path_openat+0x659/0x14f0
Sep 12 14:15:06 pve kernel: [ 726.370432] ? find_next_bit+0x18/0x20
Sep 12 14:15:06 pve kernel: [ 726.370435] ? page_add_file_rmap+0xcc/0x130
Sep 12 14:15:06 pve kernel: [ 726.370438] ? filemap_map_pages+0x3eb/0x400
Sep 12 14:15:06 pve kernel: [ 726.370439] do_filp_open+0x91/0x100
Sep 12 14:15:06 pve kernel: [ 726.370441] ? __check_object_size+0x100/0x1d7
Sep 12 14:15:06 pve kernel: [ 726.370443] ? __alloc_fd+0x46/0x170
Sep 12 14:15:06 pve kernel: [ 726.370445] do_sys_open+0x135/0x280
Sep 12 14:15:06 pve kernel: [ 726.370446] SyS_open+0x1e/0x20
Sep 12 14:15:06 pve kernel: [ 726.370449] do_syscall_64+0x5b/0xc0
Sep 12 14:15:06 pve kernel: [ 726.370451] entry_SYSCALL64_slow_path+0x25/0x25
Sep 12 14:15:06 pve kernel: [ 726.370452] RIP: 0033:0x7f6e923e5820
Sep 12 14:15:06 pve kernel: [ 726.370453] RSP: 002b:00007ffc0d393808 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
Sep 12 14:15:06 pve kernel: [ 726.370454] RAX: ffffffffffffffda RBX: 000055bc6179b500 RCX: 00007f6e923e5820
Sep 12 14:15:06 pve kernel: [ 726.370455] RDX: 000055bc6054bd63 RSI: 0000000000080000 RDI: 000055bc617a7d90
Sep 12 14:15:06 pve kernel: [ 726.370456] RBP: 0000000000000001 R08: 000055bc6054b3f0 R09: 0000000000000110
Sep 12 14:15:06 pve kernel: [ 726.370456] R10: 00000000000002fe R11: 0000000000000246 R12: 0000000000000000
Sep 12 14:15:06 pve kernel: [ 726.370457] R13: 0000000000000000 R14: 000055bc6179ec00 R15: 00000000ffffffff
Sep 12 14:15:42 pve systemd-udevd[1633]: seq 4485 '/devices/virtual/block/dm-13' killed
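While the restore hangs, this is roughly what can be inspected from a second shell (generic commands; dm-13 and PID 3665 are taken from the trace above):

# the dm-13 device from the log should show up here
dmsetup info -c
lsblk /dev/dm-13

# list processes stuck in uninterruptible sleep (state D)
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'

# kernel stack of the hung udev worker
cat /proc/3665/stack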
Ein "vma verify" ergibt keine Fehler, das Backup ist Ok. Bin gerade etwas ratlos,...
ich habe gerade zwei Proxmox-Server installiert. Auf dem einen entpacke ich "OVA" Archive und erstelle mit den enthaltenen Disks neue Proxmox Maschinen (alles Windows 2012 R2), verifiziere dass VirtIO funktioniert und erstelle anschliessend ein Backup mittels "vdump".
Das Backup kopiere ich auf den anderen Proxmox-Server und versuche dieses über "qmrestore <dumpfile> <id>" zu restoren. Der Restore läuft bis 100%, danach geschieht nichts mehr. Nach ca. 5 Minuten ist dann auch das Web-UI nicht mehr erreichbar.
Source sowie Zielsystem laufen mit der gleichen Proxmox Version.
proxmox-ve: 5.0-19 (running kernel: 4.10.17-2-pve)
pve-manager: 5.0-30 (running version: 5.0-30/5ab26bc)
pve-kernel-4.10.17-2-pve: 4.10.17-19
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-3
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
Version info (identical on both hosts):
root@pve:/mnt/backup/data# pveversion
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve)
root@pve:/mnt/backup/data# pveversion -v
proxmox-ve: 5.0-19 (running kernel: 4.10.17-2-pve)
pve-manager: 5.0-30 (running version: 5.0-30/5ab26bc)
pve-kernel-4.10.17-2-pve: 4.10.17-19
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-3
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
Ein "vma verify" ergibt keine Fehler, das Backup ist Ok. Bin gerade etwas ratlos,...