How to attach a VM disk to a different VM

rudiduarterodrigues

New Member
Sep 28, 2021
Hi Everyone

So basically my file server does not want to restore back to PVE; it keeps giving me a broken pipe error on the restore. I think it's a space issue, but I have 10TB free on the NAS where the image is. Maybe I cannot restore onto the same media the backup is on?

Is there a way I can map the disk to another machine and try to pull the contents off the C: drive?

Your help would be very much appreciated

Thanks
 
Please provide the VM config of the backup, as well as the storage config (/etc/pve/storage.cfg) and the log of the failed restore.
 
I don't quite understand your scenario.
Do you have a Proxmox host with storage (like NFS) mounted from an external NAS? Or is the NAS a VM as well?
What filesystem is on the storage containing your virtual disk?
You could try creating a second VM, detaching the disk from the first VM, and then attaching it to the other one. The second part needs to be done manually, though (renaming the virtual disk, editing the VM .conf files, etc.); see the sketch below.
Or try backing up the entire VM and restoring it (obviously) with a new VM ID, which will also assign new virtual disk IDs.
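For the manual route, something along these lines should work on LVM-thin storage. This is a minimal sketch, assuming the layout from the config posted later in this thread (VM 114, thin pool in the pve VG) and a hypothetical target VM 999; adjust the disk, names, and IDs to your setup:

# detach the data disk from VM 114 (the volume is kept and becomes an unusedX entry)
qm set 114 --delete virtio1
# drop the stale unused reference (assuming it landed on unused0; check with: qm config 114)
qm set 114 --delete unused0
# rename the thin LV so its name matches the new owner (VM 999 is hypothetical)
lvrename pve vm-114-disk-2 vm-999-disk-0
# attach the renamed volume to the new VM
qm set 999 --virtio1 local-lvm:vm-999-disk-0
# have PVE rescan storages so volume references are consistent again
qm rescan

After that, the disk shows up in VM 999 and a guest there can read the old contents.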
 
> Please provide the VM config of the backup, as well as the storage config (/etc/pve/storage.cfg) and the log of the failed restore.

Hi, here is the VM config from the backup:

#172.19.181.23
agent: 1
bootdisk: virtio0
cores: 4
memory: 4096
name: S16FILE01
net0: rtl8139=36:F5:2F:D0:5A:4D,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=5b415fbf-698e-4c7c-9a80-6a90b5bd5948
sockets: 1
virtio0: local-lvm:vm-114-disk-1,size=128G
virtio1: local-lvm:vm-114-disk-2,size=1T
#qmdump#map:virtio0:drive-virtio0:local-lvm:raw:
#qmdump#map:virtio1:drive-virtio1:local-lvm:raw:

And here is the storage.cfg:

dir: local
    path /var/lib/vz
    content iso,snippets,images,rootdir,backup,vztmpl
    shared 0

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

nfs: Unraid
    export /mnt/user/proxmox
    path /mnt/pve/Unraid
    server 172.19.181.30
    content rootdir,vztmpl,backup,images,snippets,iso
    prune-backups keep-all=1
 
I have a Proxmox host with a mounted NAS which holds the backup I want to restore,
but I can only restore back onto the NAS, because my local LVM has only 5TB left and apparently this VM is like 9TB.
 
There are only 2 disks configured with 1T and 128G. You should be able to restore it on local-lvm if there's enough space.
Please provide the task log of the failed restore.
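For reference, a quick way to check the available space from the CLI (a sketch; the pool name data and VG name pve come from the storage.cfg above):

# PVE's view of every storage: total / used / available (in KiB)
pvesm status
# the thin pool itself: size and percent of data used
lvs pve/data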
 
> There are only 2 disks configured with 1T and 128G. You should be able to restore it on local-lvm if there's enough space.
> Please provide the task log of the failed restore.
restore vma archive: lzop -d -c /mnt/pve/Unraid/dump/vzdump-qemu-114-2021_09_12-05_26_06.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp35814.fifo - /var/tmp/vzdumptmp35814
CFG: size: 425 name: qemu-server.conf
DEV: dev_id=1 size: 137438953472 devname: drive-virtio0
DEV: dev_id=2 size: 1099511627776 devname: drive-virtio1
CTIME: Sun Sep 12 05:26:20 2021
Formatting '/mnt/pve/Unraid/images/114/vm-114-disk-1.raw', fmt=raw size=137438953472
new volume ID is 'Unraid:114/vm-114-disk-1.raw'
Formatting '/mnt/pve/Unraid/images/114/vm-114-disk-2.raw', fmt=raw size=1099511627776
new volume ID is 'Unraid:114/vm-114-disk-2.raw'
map 'drive-virtio0' to '/mnt/pve/Unraid/images/114/vm-114-disk-1.raw' (write zeros = 0)
map 'drive-virtio1' to '/mnt/pve/Unraid/images/114/vm-114-disk-2.raw' (write zeros = 0)
progress 1% (read 12369526784 bytes, duration 327 sec)
progress 2% (read 24739053568 bytes, duration 704 sec)
progress 3% (read 37108580352 bytes, duration 1068 sec)
progress 4% (read 49478041600 bytes, duration 1418 sec)
progress 5% (read 61847568384 bytes, duration 1765 sec)
progress 6% (read 74217095168 bytes, duration 2166 sec)
progress 7% (read 86586556416 bytes, duration 2497 sec)
progress 8% (read 98956083200 bytes, duration 2788 sec)
progress 9% (read 111325609984 bytes, duration 3027 sec)
progress 10% (read 123695071232 bytes, duration 3390 sec)
progress 11% (read 136064598016 bytes, duration 3704 sec)
vma: restore failed - blk_pwrite to failed (-5)
/bin/bash: line 1: 35816 Broken pipe lzop -d -c /mnt/pve/Unraid/dump/vzdump-qemu-114-2021_09_12-05_26_06.vma.lzo
35817 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp35814.fifo - /var/tmp/vzdumptmp35814
temporary volume 'Unraid:114/vm-114-disk-1.raw' sucessfuly removed
temporary volume 'Unraid:114/vm-114-disk-2.raw' sucessfuly removed
no lock found trying to remove 'create' lock
error before or during data restore, some or all disks were not completely restored. VM 114 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && lzop -d -c /mnt/pve/Unraid/dump/vzdump-qemu-114-2021_09_12-05_26_06.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp35814.fifo - /var/tmp/vzdumptmp35814' failed: exit code 133
 
Please also provide the output of pvesm status.

If you want to restore the VM on a different storage, you should be able to specify a target storage in the GUI. If that's not possible, you could try the `qmrestore` command (see man qmrestore for more details).
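For example, something like this would restore to local-lvm rather than back onto the NAS (a sketch reusing the archive path and VM ID from the task log above; pick an unused VM ID if 114 still exists):

# restore the backup archive, placing the disks on local-lvm
qmrestore /mnt/pve/Unraid/dump/vzdump-qemu-114-2021_09_12-05_26_06.vma.lzo 114 --storage local-lvm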
 
> Please also provide the output of pvesm status.
>
> If you want to restore the VM on a different storage, you should be able to specify a target storage in the GUI. If that's not possible, you could try the `qmrestore` command (see man qmrestore for more details).
Name           Type      Status        Total        Used   Available       %
Unraid         nfs       active  11715333120  1693641728 10021691392  14.46%
local          dir       active     98497780     2668708    90779524   2.71%
local-lvm      lvmthin   active   5699006464   336241381  5362765082   5.90%
 
