use old qcow2 drive for new vm

Gino_B

New Member
Feb 8, 2023
Hello, I started my first Proxmox server a year ago. Since I didn't know much about Proxmox at the time, I didn't set it up with any RAID.
Now my first drive is broken and I needed to migrate to a new server. I connected the old hard drive to the new machine. On the old server it was mounted as a directory storage, and it holds a 12 TB qcow2 disk image.

root@pve:/mnt/pve/Backup/images/101# ls -l -h
total 8.5T
-rw-r----- 1 root root 11T Feb 4 13:18 vm-101-disk-0.qcow2

I really need the old files of this VM and I don't want to buy a new hard disk. I have a new ZFS pool (5x 3.8 TB),
but the import command ends with this error:

root@pve:/mnt/pve/Backup/images/101# qm importdisk 101 vm-101-disk-0.qcow2 BigRAID
importing disk 'vm-101-disk-0.qcow2' to VM 101 ...
zfs error: cannot create 'BigRAID/vm-101-disk-0': out of space

I can't understand why, because my space should be more than enough:

root@pve:~# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
BigRAID  393G  13.4T   393G  /BigRAID

The best way for me would be to attach the qcow2 image directly to the new VM.
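A rough sketch of how that direct attachment could look, assuming the old drive is configured on the new host as a directory storage named "Backup" (the storage name and the scsi1 slot are assumptions, not taken from your config):

```shell
# Let Proxmox scan the storage and register the existing image with VM 101;
# it will then show up as an "unused disk" on that VM
qm rescan --vmid 101

# Alternatively, attach the qcow2 to the VM directly from the directory storage
qm set 101 --scsi1 Backup:101/vm-101-disk-0.qcow2
```

Note this leaves the image on the old (directory) storage; the import to the ZFS pool is only needed if you want the disk to live on BigRAID.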
 
I would guess you used raidz1/2/3. With raidz there is padding overhead when you don't increase the volblocksize first, so you end up with something like 25% to 50% of the raw capacity as usable storage.

With 5x 3.8TB in a raidz1 and an ashift of 12 you get about 9.5TB of usable capacity with the default 8K volblocksize. With a volblocksize of 32K it would be 15.2TB.
But also keep in mind that a ZFS pool shouldn't be filled more than 80% for best performance, so those 15.2TB are more like 12TB.
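The arithmetic behind those numbers can be sketched like this (a simplified model of raidz allocation that ignores compression and metadata; the function name is mine):

```python
import math

def raidz_usable_fraction(ndisks, nparity, ashift, volblocksize):
    """Fraction of raw pool capacity a zvol block actually stores."""
    sector = 1 << ashift                       # 4K sectors for ashift=12
    data_sectors = volblocksize // sector      # sectors of real data per block
    # one parity sector per stripe of (ndisks - nparity) data sectors
    stripes = math.ceil(data_sectors / (ndisks - nparity))
    parity_sectors = stripes * nparity
    total = data_sectors + parity_sectors
    # raidz pads each allocation up to a multiple of (nparity + 1) sectors
    padded = math.ceil(total / (nparity + 1)) * (nparity + 1)
    return data_sectors / padded

raw_tb = 5 * 3.8                                        # 19 TB raw
print(raw_tb * raidz_usable_fraction(5, 1, 12, 8192))   # 8K blocks  -> 9.5
print(raw_tb * raidz_usable_fraction(5, 1, 12, 32768))  # 32K blocks -> 15.2
```

With 8K blocks each 8K of data needs 2 data sectors + 1 parity sector, padded to 4 sectors, so only half the raw space holds data; 32K blocks need 8 data + 2 parity = 10 sectors, which is 80% efficient.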

See here for padding overhead: https://web.archive.org/web/2021031...or-how-i-learned-stop-worrying-and-love-raidz
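A sketch of how to apply that before re-importing, assuming BigRAID is defined as a zfspool storage in Proxmox (the blocksize option only affects newly created zvols, not existing ones):

```shell
# Raise the volblocksize Proxmox uses for new zvols on this storage
pvesm set BigRAID --blocksize 32k

# Retry the import; the new zvol is created with the larger volblocksize
qm importdisk 101 vm-101-disk-0.qcow2 BigRAID
```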
 