Migration from PVE 2.3-13 to 4.4-12

Denzuk

Member
Feb 6, 2017
Hi all! I have been given this task (see the subject line) and I don't understand how to transfer the VMs from the old server to the new one. Could someone point me to a manual or guide? (I'm very new to ZFS.)

These are the "old" VMs on the 2.3-13 PVE server:
[509][root.proxmox: images]$ ls * -l | grep root | head
-rw-r--r-- 1 root root 34365243392 Feb 6 16:17 vm-100-disk-1.qcow2
-rw-r--r-- 1 root root 34365243392 Apr 29 2013 vm-101-disk-1.qcow2
-rw-r--r-- 1 root root 10739318784 Feb 6 17:06 vm-102-disk-1.qcow2
-rw-r--r-- 1 root root 35433480192 Jul 28 2015 vm-103-disk-1.raw
-rw-r--r-- 1 root root 343597383680 Jan 30 15:41 vm-104-disk-2.raw
-rw-r--r-- 1 root root 4294967296 Feb 6 17:01 vm-104-disk-3.raw
-rw-r--r-- 1 root root 48325984256 Nov 4 13:14 vm-105-disk-1.qcow2
-rw-r--r-- 1 root root 3257008128 Nov 4 13:09 vm-106-disk-1.vmdk
-rw-r--r-- 1 root root 10739318784 Feb 6 17:06 vm-107-disk-1.qcow2
-rw-r--r-- 1 root root 21478375424 Nov 25 14:45 vm-108-disk-1.qcow2
[510][root.proxmox: images]$ pwd
/var/lib/vz/images
[511][root.proxmox: images]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve-root 95G 33G 58G 37% /
tmpfs 16G 0 16G 0% /lib/init/rw
udev 16G 320K 16G 1% /dev
tmpfs 16G 16M 16G 1% /dev/shm
/dev/mapper/pve-data 1.7T 1.6T 103G 94% /var/lib/vz
/dev/sda1 495M 100M 370M 22% /boot
/dev/fuse 30M 88K 30M 1% /etc/pve
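
From what I have read so far, one possible path might be to back up each VM with vzdump on the old server and copy the archives over. PVE 2.3 already writes the vma backup format, so something like this (untested; the dump directory and the new server's hostname are just examples):

# on the old 2.3-13 server: back up one VM into a compressed vma archive
vzdump 100 --compress lzo --dumpdir /var/lib/vz/dump
# copy the archive to the new server ("pve" is only an example hostname)
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo root@pve:/var/lib/vz/dump/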



And this is the 4.4 PVE server:
root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 791M 8.9M 782M 2% /run
rpool/ROOT/pve-1 886G 1.5G 885G 1% /
tmpfs 2.0G 43M 1.9G 3% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
rpool 885G 128K 885G 1% /rpool
rpool/ROOT 885G 128K 885G 1% /rpool/ROOT
rpool/data 885G 128K 885G 1% /rpool/data
/dev/fuse 30M 16K 30M 1% /etc/pve
root@pve:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 14.7G 884G 96K /rpool
rpool/ROOT 1.48G 884G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.48G 884G 1.48G /
rpool/data 4.67G 884G 96K /rpool/data
rpool/data/vm-100-disk-1 1.03G 884G 1.03G -
rpool/data/vm-101-disk-1 3.64G 884G 3.64G -
rpool/swap 8.50G 893G 108K -
root@pve:~# zpool status
pool: rpool
state: ONLINE
scan: scrub repaired 708K in 0h2m with 0 errors on Fri Feb 3 14:47:35 2017
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sda2 ONLINE 0 0 0
sdb2 ONLINE 0 0 0

errors: No known data errors
root@pve:~#
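
If that is the right direction, I assume the archive could then be restored straight onto the ZFS storage on the new server (assuming the default storage name local-zfs and a free VMID, here 200):

qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo 200 --storage local-zfs

Or, for a single disk image copied over by hand, maybe creating a zvol and converting the image into it would also work (sizes and names are examples; the zvol must be at least the virtual disk size shown by "qemu-img info"):

# on the new server, after copying the qcow2 file over
zfs create -V 32G rpool/data/vm-102-disk-1
qemu-img convert -p -O raw vm-102-disk-1.qcow2 /dev/zvol/rpool/data/vm-102-disk-1

Would that be the recommended way, or is there a better tool for this?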

And sorry for my bad English :(
 
