Assistance with migrating from Proxmox 1.9 to 4.4

jhbiz

New Member
May 10, 2017
I'd like a little bit of help migrating from Proxmox 1.9 to 4.4.

A disclaimer: I inherited this setup/environment and have no Linux knowledge or training. Everything, including where I am at the moment, comes from just poking around and reading about it.

So our environment is four Proxmox 1.9 servers in a cluster, I guess you could say. They were set up long ago, before they were my responsibility. Each server hosts only one VM that we still need. They are old, and I want to replace them with one single server, which I already have and on which I installed Proxmox 4.4. I want to migrate these 4 VMs to the new server. My idea is to back them up to external hard drives, then add that external hard drive as a 'storage location' on the new server, tell it that it is a location for "backups", and then restore each VM by choosing it and clicking restore from the backup menu.
Well, for one reason or another, this does not work. The VMs do not show up in the list of things on the drive within the GUI (nothing does), but it does recognize that the hard drive has space taken up on it.
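For context, a hedged aside: Proxmox directory storage only lists backup archives that sit in a dump/ subdirectory under the storage path, which is likely why a .tgz at the top of the drive shows up with ls but not in the GUI. As a rough sketch (the mount point and storage name here are illustrative, not what was actually used), registering the drive so the GUI can see the archive could look like this:

mkdir -p /mnt/backups2
mount /dev/sdb1 /mnt/backups2
pvesm add dir backups2 --path /mnt/backups2 --content backup    # register the path as backup storage
mkdir -p /mnt/backups2/dump                                     # the GUI lists archives from this subdirectory
mv /mnt/backups2/vzdump-qemu-106-2017_05_02-13_16_02.tgz /mnt/backups2/dump/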

So, I go digging some more. If I list the contents of the drive (which, to hammer home again that I do not know anything about Linux, I had to google that simple command to list files), the .tgz does show up. After yet another escapade through Google, I find that the command to restore the VM is: qmrestore --storage local-lvm backups2/vzdump-qemu-106-2017_05_02-13_16_02.tgz 101

Well as my luck would have it, it failed. How could that be? Well, here is what happens:


Device Start End Sectors Size Type
/dev/sdb1 2048 1953456127 1953454080 931.5G Microsoft basic data

root@vmhost6:~# mkdir backups
root@vmhost6:~# mount /dev/sdb1 backups
root@vmhost6:~#
root@vmhost6:~#
root@vmhost6:~# qmrestore --storage local-lvm backups/vzdump-qemu-106-2017_05_02-13_16_02.tgz 101
extracting archive '/root/backups/vzdump-qemu-106-2017_05_02-13_16_02.tgz'
extracting 'qemu-server.conf' from archive
extracting 'vm-disk-virtio0.qcow2' from archive
unable to restore 'vm-disk-virtio0.qcow2' to storage 'local-lvm'
storage type 'lvmthin' does not support format 'qcow2'
tar: vm-disk-virtio0.qcow2: Cannot write: Broken pipe


So I went and looked further, and discovered that this new "LVM-Thin" storage is apparently a whole different type of storage system and doesn't work quite as seamlessly as in prior versions of Proxmox. I discovered this thread here:
https://forum.proxmox.com/threads/local-lvm-storage-and-vm-format.27209/page-2
Specifically the post by "kinetica". I followed his instructions almost EXACTLY (I had to change a little of his formatting of the commands for it to function), but it didn't work. When I added, from the Proxmox GUI, the storage directory that I want to place my VMs in, it just showed the same amount of free space as "local" (100 GB) and not the 2 TB it should be.
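As a side note on that symptom: a directory storage that reports the same free space as "local" usually means the configured path is still sitting on the root filesystem rather than on the new volume. A quick check, using the default PVE path as an example:

df -h /var/lib/vz    # if this reports the ~100 GB root filesystem, the new volume is not mounted there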


I have been working at this off and on, researching and trying numerous solutions, from reading the wiki (please, for the love of god, don't just link me wiki articles blindly without pointing out SPECIFIC things within the article. Please don't just say "read the qmrestore article on the wiki" or "please read the lvm storage article". I have. They don't help me.) to googling error messages and command string options and all that. I've been at this for nearly 16-20 hours for what should, to me, be a simple one or two hour ordeal. I've even gone through the hassle of wiping 4.4 and installing 3.4, and after an extremely arduous time of jumping error-message hurdles, I got the VM to run on that. It does not have this 'new', 'better' LVM-thin storage.

My ultimate goal, and it really is simple here, is to just get these 4 VMs onto Proxmox 4.4 from 1.9. If anyone could simply give me some exact, simple, step-by-step instructions, that would be VERY greatly appreciated.
 
What storage format is the old server using? I see you have mentioned LVM-thin on the new setup, but I wasn't able to see the format used on the old system. If you're not 100% sure, just include an image of the storage tab for one of the old servers.
 
The "NFS" share/storage for the "backup" storage is dead. The NFS server is no longer functional, and getting one up and running right now is unfortunately out of the question. The other 'backups2' is simply an external hard drive.

Here is the screenshot: i.imgur.com/YJqdfnk.jpg

Thank you for the reply!
 
Okie, so from your image your old hosts were using directory/file-based storage, hence the qcow2 format; you're then trying to import into an LVM format on the new host.

LVM can only import a standard .raw file, so you have a few options.

1/ Convert the qcow2 files to .raw files; this will then allow you to import (dd) them into the LVM system on the new host (sketched below).

2/ Convert the new host from LVM to directory/file-based storage; this will allow you to copy the disk images / restore the backups as they are onto the new host.
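A minimal sketch of option 1, assuming the backup archive has already been extracted and a suitably sized LVM volume for the VM already exists (the file names and LV path are examples only):

qemu-img convert -f qcow2 -O raw vm-disk-virtio0.qcow2 vm-disk-virtio0.raw    # qcow2 -> raw
dd if=vm-disk-virtio0.raw of=/dev/pve/vm-101-disk-1 bs=1M conv=fsync          # write the raw image onto the LV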
 
I, too, have come to those conclusions.

Seeing as how I attempted to do #2, and was unsuccessful (that is what my paragraph starting with "So I went and looked further" was about), can you give me instructions on how to import the qcow2 into the LVM on the new server?
 
I've got the qcow2 file already converted into RAW format (I'm pretty sure I did it right). I messed with that all day yesterday, actually from the CLI in Proxmox itself.

It's the importing-into-the-lvm-storage that's giving me problems.
 
I am attempting to follow the instructions in the article you linked. I had no clue what "vg_name" was supposed to be, but after some research, I assume it's supposed to be the 'volume group' name? I discovered my volume group name by typing vgdisplay, and it is pve.

So I replace their example code with: lvcreate -L80G pve -n paging
Volume group "pve" has insufficient free space (4061 extents): 20480 required.
root@vmhost6:~#

yes, "paging" is the name for my vm, FYI.


Here is the entire output I have:





root@vmhost6:~/backups# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.18 TiB
PE Size 4.00 MiB
Total PE 571711
Alloc PE / Size 567650 / 2.17 TiB
Free PE / Size 4061 / 15.86 GiB
VG UUID EUXvHZ-DK5y-TfHO-3dbC-KnAb-C27f-xaHuSf

root@vmhost6:~# lvcreate -L80G pve -n paging
Volume group "pve" has insufficient free space (4061 extents): 20480 required.
root@vmhost6:~#



There should be PLENTY of free space; it says there's 2.18 TB. Hell, it's a fresh installation.
 
You need to create the LV on the thin pool, not on the VG (as vgdisplay says, the VG only has ~16G free). The easiest way would be to just create a disk of the desired size in the PVE GUI, and then replace its contents with that of your raw disk (e.g., with dd).
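A short sketch of that suggestion, assuming VM 101 was created in the GUI with a disk on local-lvm at least as large as the source image (PVE 4.x names such a thin LV vm-101-disk-1 by default) and the converted raw file sits at /root/paging.raw, which is an assumed path:

lvs pve                                                             # confirm the GUI-created thin LV exists
dd if=/root/paging.raw of=/dev/pve/vm-101-disk-1 bs=1M conv=fsync   # overwrite its contents with the raw image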
 
Thank you for your reply.

Would you be so kind as to be a little more specific about how I can do this? As I stated before, I have never heard of dd or 'creating a disk' within the GUI. Preferably with a little handholding or step-by-step commands.
 
You should convert your lvm-thin to directory-based storage, and then you could simply restore the old dumps as they are. For that you need to remove the thin pool, create a suitably sized LVM volume in its place, format it as ext4, add it as directory storage in the PVE web admin, and you're done. LVM-thin is more performant, yes (though it has other quirks you might need to address), but it might not be worth the hassle for those 4 VMs.

Anyway, if you go with the second option, you can mount the qcow2 images using qemu-nbd and then use dd to clone them into a freshly created VM disk (you can create them in the PVE web admin); there is no need to convert them to raw first.

I suggest some googling on the above methods before jumping at them.
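A rough sketch of the qemu-nbd route mentioned above, assuming the nbd kernel module is available and the destination VM disk is at least as large as the source (device and file paths are examples; on directory storage the destination would be a file such as /var/lib/vz/images/101/vm-101-disk-1.raw instead of an LV):

modprobe nbd max_part=8                                             # load the network block device module
qemu-nbd --connect=/dev/nbd0 /root/backups/vm-disk-virtio0.qcow2    # expose the qcow2 as a block device
dd if=/dev/nbd0 of=/dev/pve/vm-101-disk-1 bs=1M conv=fsync          # clone it into the freshly created VM disk
qemu-nbd --disconnect /dev/nbd0                                     # detach when finished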
 

Oh, trust me. I'd estimate I've invested over twenty hours by now in this simple task of importing a VM from 1.9 to 4.4. I've been doing nothing BUT googling.
And as far as I understand, your suggestion is exactly what I did here, pasted from my original post:
So I went and looked further, and discovered that this new "LVM-Thin" storage is apparently a whole different type of storage system and doesn't work quite as seamlessly as in prior versions of Proxmox. I discovered this thread here:
https://forum.proxmox.com/threads/local-lvm-storage-and-vm-format.27209/page-2
Specifically the post by "kinetica". I followed his instructions almost EXACTLY (I had to change a little of his formatting of the commands for it to function), but it didn't work. When I added, from the Proxmox GUI, the storage directory that I want to place my VMs in, it just showed the same amount of free space as "local" (100 GB) and not the 2 TB it should be.


Here is the output of my commands, as well as a screenshot of my GUI showing the incorrect storage size now being reported:



Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu May 11 11:10:00 2017 from 192.168.61.101
root@vmhost6:~# lvremove /dev/pve/data
Do you really want to remove and DISCARD active logical volume data? [y/n]: y
Logical volume "data" successfully removed
root@vmhost6:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID iHG2oD-7a9q-UBYg-Z5AF-RvY8-wZOV-QIxSqF
LV Write Access read/write
LV Creation host, time proxmox, 2017-05-10 08:32:17 -0600
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:1

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID kk2mTk-CUis-2jFC-L8LR-F8yb-zUQa-yxvATI
LV Write Access read/write
LV Creation host, time proxmox, 2017-05-10 08:32:17 -0600
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:0

root@vmhost6:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 2.2T 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 256M 0 part
└─sda3 8:3 0 2.2T 0 part
├─pve-root 251:0 0 96G 0 lvm /
└─pve-swap 251:1 0 8G 0 lvm [SWAP]
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 931.5G 0 part /root/backups
sr0 11:0 1 1024M 0 rom
root@vmhost6:~# lvcreate -L 2000G -n data pve
Logical volume "data" created.
root@vmhost6:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 2.2T 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 256M 0 part
└─sda3 8:3 0 2.2T 0 part
├─pve-root 251:0 0 96G 0 lvm /
├─pve-swap 251:1 0 8G 0 lvm [SWAP]
└─pve-data 251:2 0 2T 0 lvm
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 931.5G 0 part /root/backups
sr0 11:0 1 1024M 0 rom
root@vmhost6:~# mkfs.ext4 /dev/pve/data
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 524288000 4k blocks and 131072000 inodes
Filesystem UUID: 51c7e9c6-ad33-4c2d-b8c7-633fadb55c35
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

root@vmhost6:~# nano fstab
root@vmhost6:~# mount -a



And the images: http://imgur.com/a/6WCSa
 
From your post it looks to me like you haven't mounted the freshly created ext4 volume. I see you editing fstab, but please show its contents and also the contents of /proc/mounts. The size and usage of the VM storage is consistent with your root volume, further hinting that the big volume is not mounted.
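For reference, the requested information can be gathered with a few read-only commands (nothing here changes the system):

cat /etc/fstab       # the mount definitions the system actually uses
cat /proc/mounts     # what is mounted right now
df -h /var/lib/vz    # how much space the directory storage path really has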
 
root@vmhost6:~# nano fstab
GNU nano 2.2.6 File: fstab

/dev/pve/data /var/lib/vz ext4 defaults 0 1





root@vmhost6:~# nano /proc/mounts
GNU nano 2.2.6 File: /proc/mounts

sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=12370190,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=19798204k,mode=755 0 0
/dev/dm-0 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=21,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
/dev/sdb1 /root/backups ext3 rw,relatime,data=ordered 0 0



If I need to perform another step to 'mount' something, please explicitly tell me what to type, and where, and how.
 
Weird. The big data volume has the defaults flag, which includes auto, meaning that it should mount automatically when you issue mount -a or at startup. Yet it didn't. However, try simply issuing "mount /var/lib/vz" and see what happens.
 
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed May 17 13:29:06 2017 from 192.168.61.101
root@vmhost6:~# nano fstab
root@vmhost6:~# mount /var/lib/vz
mount: can't find /var/lib/vz in /etc/fstab
root@vmhost6:~#


root@vmhost6:~# nano fstab
GNU nano 2.2.6 File: fstab

/dev/pve/data /var/lib/vz ext4 defaults 0 1
 
root@vmhost6:~# mount /var/lib/vz
mount: can't find /var/lib/vz in /etc/fstab
Here's your problem. Please post the contents of the WHOLE /etc/fstab file: run "cat /etc/fstab" and copy-paste the output.
 
root@vmhost6:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
root@vmhost6:~#
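
That fstab has no /var/lib/vz entry at all: the earlier "nano fstab" edited a file named fstab in /root, not /etc/fstab, so the new data volume never got mounted. Assuming the ext4 volume created above is still in place, a minimal fix would be:

echo '/dev/pve/data /var/lib/vz ext4 defaults 0 1' >> /etc/fstab   # add the entry to the real fstab
mount /var/lib/vz                                                  # mount it now
df -h /var/lib/vz                                                  # should report roughly 2 TB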
 
