So, the question is (details below): can I abruptly interrupt or stop this task and simply fire up the VM it has restored?
Longer story with details:
I did a VZdump of a server a few days ago; the backup took about 30 hours because it was compressed through gzip down to about 660 GB (a big server with lots of content on a boot drive plus 2 RAID arrays holding the data, and I VZdumped the whole thing).
I then physically moved the VZdump *.vma.gz to a new node (a single node, NOT part of a cluster, with its own local storage) and performed a restore, which took a bit over 24 hours. The task status shows it went to 100% complete, and THEN it has been stuck on "rescan volumes" for just under 4 full days.
Another VM that was created while this restore was running hasn't finished being created either and still shows as a running task; I suspect the first restoration has to complete before any more tasks can finish, but I don't know.
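For reference, the backup and restore were done with the standard tools, roughly like this (reconstructed from the task log and from memory, so the exact flags are my guess):
Code:
# Backup on the old node (VM 103, gzip compression -- this is the step that took ~30 hours)
vzdump 103 --compress gzip --mode snapshot

# Restore on the new node after copying the archive over (restored as VM 113)
qmrestore /mnt/mira2/dump/vzdump-qemu-103-2019_05_25-16_46_23.vma.gz 113 --storage local-lvm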
Here is what the task status looks like:
Code:
Virtual Environment 5.4-3
restore vma archive: zcat /mnt/mira2/dump/vzdump-qemu-103-2019_05_25-16_46_23.vma.gz | vma extract -v -r /var/tmp/vzdumptmp31505.fifo - /var/tmp/vzdumptmp31505
CFG: size: 649 name: qemu-server.conf
DEV: dev_id=1 size: 1127428915200 devname: drive-ide1
DEV: dev_id=2 size: 214748364800 devname: drive-ide3
DEV: dev_id=3 size: 49392123904 devname: drive-scsi0
CTIME: Sat May 25 16:46:26 2019
Using default stripesize 64.00 KiB.
Logical volume "vm-113-disk-0" created.
new volume ID is 'local-lvm:vm-113-disk-0'
map 'drive-ide1' to '/dev/pve/vm-113-disk-0' (write zeros = 0)
Using default stripesize 64.00 KiB.
Logical volume "vm-113-disk-1" created.
new volume ID is 'local-lvm:vm-113-disk-1'
map 'drive-ide3' to '/dev/pve/vm-113-disk-1' (write zeros = 0)
Using default stripesize 64.00 KiB.
Logical volume "vm-113-disk-2" created.
new volume ID is 'local-lvm:vm-113-disk-2'
map 'drive-scsi0' to '/dev/pve/vm-113-disk-2' (write zeros = 0)
progress 1% (read 13915717632 bytes, duration 1003 sec)
..
progress 100% (read 1391569403904 bytes, duration 86490 sec)
total bytes read 1391569403904, sparse bytes 611882180608 (44%)
space reduction due to 4K zero blocks 0.456%
rescan volumes...
And like I said, it's been stuck on this "rescan volumes" step for 4 days.
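If it helps, this is how I figure I can check whether the task is actually still doing anything (standard Linux commands; the PID below is just the number from the vzdumptmp path in my log, so it may not be the real worker PID):
Code:
# Is the restore pipeline (zcat | vma extract) still alive?
ps aux | grep -E '[v]ma extract|[z]cat'

# If it is, check whether the worker is sleeping or actually doing I/O
# (replace 31505 with the real PID from ps)
grep State /proc/31505/status
cat /proc/31505/io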
lsblk shows this:
Code:
lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   3.7T  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part
└─sda3                         8:3    0   3.7T  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-miraqcow             253:2    0 900.9G  0 lvm  /mnt/mira2
  ├─pve-data_tmeta           253:3    0    88M  0 lvm
  │ └─pve-data-tpool         253:5    0   2.7T  0 lvm
  │   ├─pve-data             253:6    0   2.7T  0 lvm
  │   ├─pve-vm--113--disk--0 253:7    0     1T  0 lvm
  │   ├─pve-vm--113--disk--1 253:8    0   200G  0 lvm
  │   └─pve-vm--113--disk--2 253:9    0    46G  0 lvm
  └─pve-data_tdata           253:4    0   2.7T  0 lvm
    └─pve-data-tpool         253:5    0   2.7T  0 lvm
      ├─pve-data             253:6    0   2.7T  0 lvm
      ├─pve-vm--113--disk--0 253:7    0     1T  0 lvm
      ├─pve-vm--113--disk--1 253:8    0   200G  0 lvm
      └─pve-vm--113--disk--2 253:9    0    46G  0 lvm
sr0                           11:0    1  1024M  0 rom
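Since lsblk already shows all three restored disks at the right sizes, I assume I could sanity-check the logical volumes directly with plain LVM commands like these (nothing Proxmox-specific):
Code:
# List all LVs in the pve volume group; the three vm-113 disks should show as active
lvs pve

# Detailed status of the restored volumes themselves
lvdisplay /dev/pve/vm-113-disk-0 /dev/pve/vm-113-disk-1 /dev/pve/vm-113-disk-2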
The config file of the VM in question is this:
Code:
agent: 1
balloon: 4096
bootdisk: scsi0
cores: 6
ide0: none,media=cdrom
ide1: local-lvm:vm-113-disk-0,size=1050G
ide2: none,media=cdrom
ide3: local-lvm:vm-113-disk-1,size=200G
memory: 6144
name: MiRAKermit
net0: virtio=12:A5:1C:99:CD:78,bridge=vmbr0
numa: 0
ostype: win10
scsi0: local-lvm:vm-113-disk-2,discard=on,size=46G
scsihw: virtio-scsi-pci
smbios1: uuid=93449426-4965-4cda-adc0-21120d339bb6
sockets: 1
So the question is: can I interrupt this task, perhaps run the volume rescan manually, and start the VM anyway?
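If interrupting really is safe, my rough plan would be something like this (just a sketch of what I'm thinking; the PID is a placeholder and I'm only guessing that qm unlock will be needed afterwards):
Code:
# Kill the stuck restore worker -- the extract itself finished (log hit 100%),
# so only the post-restore rescan should be lost
kill <worker-PID>    # placeholder, take the real PID from ps

# Re-run the volume rescan by hand; qm rescan updates disk sizes and
# picks up unreferenced disk images for the given VMID
qm rescan --vmid 113

# Clear the lock the aborted restore will likely have left, then boot it
qm unlock 113
qm start 113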