Hi, dcsapak
thanks for the answer, and yes the work is done.
here is how I migrated the VM from an old Proxmox 3.4 host with LVM
to a new Proxmox 5.2 host with ZFS RAID 10:
storage overview:
Code:
root@SERVER:~# pvesm status
backup dir 1 480584400 242242108 213923272 53.60%
local dir 1 107756536 52071304 55685232 48.82%
pool dir 1 2884152536 254838024 2482801260 9.81%
root@SERVER:~# cat /etc/pve/storage.cfg
dir: pool
path /var/lib/vz/pool/
content images,iso,backup
maxfiles 2
nodes SERVER
dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0
dir: backup
path /var/lib/vz/backup
shared
content backup
maxfiles 2
which VM disks exist, and where they are:
Code:
root@SERVER:~# pvesm list pool
pool:110/vm-110-disk-1.qcow2 qcow2 665719930880 110
pool:110/vm-110-disk-2.raw raw 665719930880 110
root@SERVER:~# pvesm list local
local:110/vm-110-disk-1.raw raw 93415538688 110
local:iso/virtio-win.iso iso 316628992
root@SERVER:~# pvesm path local:110/vm-110-disk-1.raw
/var/lib/vz/images/110/vm-110-disk-1.raw
which type the VM disks are:
Code:
qemu-img info /var/lib/vz/pool/images/110/vm-110-disk-1.qcow2
convert from qcow2 to raw:
(two qcow2 disks with the same file name become two raw disks with different names)
Code:
qemu-img convert -f qcow2 -O raw /var/lib/vz/pool/dump/vm-110-disk-1.qcow2 vm-110-disk-1.raw
qemu-img convert -f qcow2 vm-110-disk-1.qcow2 -O raw vm-110-disk-2.raw
qemu-img info vm-110-disk-1.raw
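Before touching the VM config it is worth verifying the converted images; a small sketch, assuming qemu-utils is installed and using the example file names from above:

```shell
# Compare the payload of the source qcow2 and the converted raw image;
# prints "Images are identical." when the conversion was lossless.
qemu-img compare -f qcow2 -F raw vm-110-disk-1.qcow2 vm-110-disk-2.raw

# The virtual size of the raw image must match the original.
qemu-img info vm-110-disk-2.raw
```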
edit the VMID.conf file to change the disk lines from qcow2 to raw:
Code:
vim /etc/pve/nodes/SERVER/qemu-server/110.conf
…
virtio0: local:110/vm-110-disk-1.qcow2,format=qcow2,cache=writeback,size=87G
virtio1: pool:110/vm-110-disk-1.qcow2,format=qcow2,cache=writeback,size=620G
to
scsi0: local:110/vm-110-disk-1.raw,format=raw,cache=writeback,size=87G
scsi1: pool:110/vm-110-disk-2.raw,format=raw,cache=writeback,size=620G
here in this file I change three things:
1 - format qcow2 to raw
2 - the disk name
3 - the hard disk interface from virtIO to scsi
after that you also have to change the boot disk from virtio0 to scsi0
IMPORTANT: the change from VirtIO to SCSI is a separate topic, and it will not work until you have installed the necessary VirtIO SCSI drivers in the guest. This step is not required for this migration.
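If you do make the switch to SCSI later, the controller type matters as well; a hypothetical config snippet (the `scsihw` line selects the controller, `virtio-scsi-pci` is the usual choice on Proxmox 5.x):

```
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: local:110/vm-110-disk-1.raw,format=raw,cache=writeback,size=87G
```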
as a last test, start the VM on the old Proxmox 3.4. If everything is okay, shut down the VM and create the vzdump:
Code:
vzdump --compress lzo --mode stop --dumpdir /var/lib/vz/backup/dump/ 110
Copy the vzdump from the old Proxmox to the new Proxmox 5.2 ZFS-Raid10 host:
Code:
rsync -av --stats --progress vzdump-qemu-110-2018.* root@192.168.1.5:/var/lib/vz/dump/
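rsync verifies each file during transfer, but a final end-to-end check is cheap; a sketch, assuming the paths and host from the example above:

```shell
# On the old host: record the checksum of the dump.
sha256sum /var/lib/vz/backup/dump/vzdump-qemu-110-2018.*.vma.lzo

# On the new host: the sum printed here must match the one above.
ssh root@192.168.1.5 'sha256sum /var/lib/vz/dump/vzdump-qemu-110-2018.*.vma.lzo'
```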
And now on the new Proxmox host restore the vzdump:
Code:
qmrestore /var/lib/vz/dump/vzdump-qemu-110-2018_06_29-00_54_58.vma.lzo --storage local-zfs 110
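A quick way to confirm that the restore landed on the right storage, using standard Proxmox commands:

```shell
# Show the restored VM config; the disk lines should now point at local-zfs.
qm config 110

# List the zvols that qmrestore created on the ZFS storage.
pvesm list local-zfs
```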
to conclude, have a look at the ZFS pool:
Code:
root@SERVER02:/etc# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 245G 6.78T 24K /rpool
rpool/ROOT 116G 6.78T 24K /rpool/ROOT
rpool/ROOT/pve-1 116G 6.78T 116G /
rpool/data 120G 6.78T 24K /rpool/data
rpool/data/vm-110-disk-1 29.0G 6.78T 29.0G -
rpool/data/vm-110-disk-2 91.0G 6.78T 91.0G -
rpool/swap 8.50G 6.79T 134M -
regards maxprox