VM moving confusion

Tuckerdog

Hi All,

I am a Linux noob, but a long-time computer guy: PDP-11, DOS, Windows (all of them), OS/2, System/36/38/400... so I can learn.

I have had a Proxmox server for years, but the i7/16GB was long in the tooth. I purchased memory and a 4TB SATA drive, but still wasn't happy, so I put together an Epyc box with 126GB RAM and a 4TB drive, installed Proxmox on it, and plugged it into the same router/subnet as the old server. I created a cluster and the new server joined; both nodes are showing.

This is where I am getting lost in circles.

I want to get all the VMs from the old server over to the new server, but migrate is a no-go as I have no shared storage. Backups (vzdumps) copied to the new server keep overrunning the space, because the dump directory on the new one seems to live in the root space (96GB) rather than on the rest of the drive, like it does on the old one.

I have read a ton of info on this, or similar topics, and I must admit my shortcomings.

Any hand-holding would be great.
 
Why is migration not possible? What is your current storage setup? It should be possible to migrate from dir-based local storage as long as the same storages are defined on both nodes. Please post the contents of /etc/pve/storage.cfg.
If migration and restoring from backup are not a viable solution, storage replication might be an alternative https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pvesr
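For reference, a replication job is created from the CLI roughly like this (the job ID and schedule below are only examples; note that storage replication requires ZFS-backed storage on both nodes, so it would only apply here after a move away from directory storage):
Code:
# replicate guest 107 to node 'epyc' every 15 minutes
pvesr create-local-job 107-0 epyc --schedule '*/15'
# list the configured replication jobs
pvesr list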
 
the directory on the new one seems to be in the root space (96Gb),
You need to create storage for VM images on the rest of the disk. Create a large enough partition with a file system; then, in Proxmox, go to Storage and add that partition as a suitable type. Sorry, I'm not at the Proxmox UI right now, so I can't point to the exact spot. https://pve.proxmox.com/wiki/Storage
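From the shell, that would be roughly the following (the storage name, mount point, and content types are just examples; the partition must already be formatted and mounted):
Code:
# register an existing mounted file system as a directory storage
pvesm add dir bigdata --path /mnt/data --content images,backup,iso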

I have moved virtual machines by copying the dump over to the other Proxmox host and then restoring it there, roughly as sketched below.
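A minimal sketch of that route (the dump filename is illustrative; use whatever vzdump actually produced, and make sure the target storage has enough space):
Code:
# on the old node: copy an existing dump to the new node
scp /var/lib/vz/dump/vzdump-qemu-107-2019_04_24-00_00_00.vma.lzo root@10.0.0.11:/var/lib/vz/dump/
# on the new node: restore it under the same VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-107-2019_04_24-00_00_00.vma.lzo 107 --storage local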
 
Hi,

Both servers look like this:

dir: local
        path /var/lib/vz
        content vztmpl,images,backup,snippets,iso,rootdir
        maxfiles 5
        shared 0

Migration (live or stopped) gave an error about the file missing.
 
What is the exact error message during migration? Please post the full output.
 
2019-04-24 07:26:07 starting migration of VM 107 to node 'epyc' (10.0.0.11)
2019-04-24 07:26:07 found local disk 'local:107/vm-107-disk-0.qcow2' (in current VM config)
2019-04-24 07:26:07 found local disk 'local:iso/smeserver-9.2-x86_64.iso' (in current VM config)
2019-04-24 07:26:07 can't migrate local disk 'local:iso/smeserver-9.2-x86_64.iso': local cdrom image
2019-04-24 07:26:07 ERROR: Failed to sync data - can't migrate VM - check log
2019-04-24 07:26:07 aborting phase 1 - cleanup resources
2019-04-24 07:26:07 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate VM - check log
TASK ERROR: migration aborted
 
Hi,

Just realized what the error was saying... I removed the ISO from the CD-ROM drive and am trying it again...
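(For anyone else hitting this: the ISO can also be ejected from the CLI. A minimal sketch, assuming the CD drive sits on ide2, the installer default:)
Code:
# eject the ISO but keep the now-empty CD-ROM drive
qm set 107 --ide2 none,media=cdrom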
 
Hi,

OK... here is what I saw in the error log:

2019-04-24 10:22:52 starting migration of VM 103 to node 'epyc' (10.0.0.11)
2019-04-24 10:22:52 found local disk 'local:103/vm-103-disk-0.qcow2' (in current VM config)
2019-04-24 10:22:52 copying disk images
Formatting '/var/lib/vz/images/103/vm-103-disk-0.qcow2', fmt=qcow2 size=549755813888 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
dd: error writing '/var/lib/vz/images/103/vm-103-disk-0.qcow2': No space left on device
2247+5690647 records in
2247+5690646 records out
93882814464 bytes (94 GB, 87 GiB) copied, 1216.94 s, 77.1 MB/s
command 'dd 'of=/var/lib/vz/images/103/vm-103-disk-0.qcow2' 'conv=sparse' 'bs=64k'' failed: exit code 1
command 'dd 'if=/var/lib/vz/images/103/vm-103-disk-0.qcow2' 'bs=4k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2019-04-24 10:44:20 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size - -with-snapshots 1 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=epyc' root@10.0.0.11 -- pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size - -with-snapshots 1' failed: exit code 1
2019-04-24 10:44:20 aborting phase 1 - cleanup resources
2019-04-24 10:44:20 ERROR: found stale volume copy 'local:103/vm-103-disk-0.qcow2' on node 'epyc'
2019-04-24 10:44:20 ERROR: migration aborted (duration 00:21:29): Failed to sync data - command 'set -o pipefail && pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size - -with-snapshots 1 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=epyc' root@10.0.0.11 -- pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size - -with-snapshots 1' failed: exit code 1
TASK ERROR: migration aborted

What I understand is that the new system is not using the full 4TB drive, only the 98GB root, and its local storage should be connected to the remaining drive space but isn't...? But that is where my knowledge really thins out.
 
Hi,

Here is where the error is, I believe. The first output is from the old server, the second from the new one. I can see the mapping is not right, but I have no idea how to fix it. I have been foruming and googling for a week now, which is why I am here asking questions.

Linux proxmox 4.15.18-12-pve #1 SMP PVE 4.15.18-35 (Wed, 13 Mar 2019 08:24:42 +0100) x86_64

root@proxmox:~# df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
udev                  7.9G     0   7.9G    0%  /dev
tmpfs                 1.6G   73M   1.5G    5%  /run
/dev/mapper/pve-root   95G   41G    50G   45%  /
tmpfs                 7.9G   63M   7.8G    1%  /dev/shm
tmpfs                 5.0M     0   5.0M    0%  /run/lock
tmpfs                 7.9G     0   7.9G    0%  /sys/fs/cgroup
/dev/mapper/pve-data  3.5T  787G   2.7T   23%  /var/lib/vz
/dev/fuse              30M   20K    30M    1%  /etc/pve
tmpfs                 1.6G     0   1.6G    0%  /run/user/0


Linux epyc 4.15.18-12-pve #1 SMP PVE 4.15.18-35 (Wed, 13 Mar 2019 08:24:42 +0100) x86_64

root@epyc:~# df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
udev                   63G     0    63G    0%  /dev
tmpfs                  13G   34M    13G    1%  /run
/dev/mapper/pve-root   94G   14G    77G   15%  /
tmpfs                  63G   63M    63G    1%  /dev/shm
tmpfs                 5.0M     0   5.0M    0%  /run/lock
tmpfs                  63G     0    63G    0%  /sys/fs/cgroup
/dev/sda2             511M  304K   511M    1%  /boot/efi
/dev/fuse              30M   20K    30M    1%  /etc/pve
 
So yes, the problem is that you do not have enough space on the local storage of the new server. As you correctly note, its size is only 94G.
By default, the installer creates a volume group (VG) called pve and two logical volumes (LVs), one called root, the other called data. data is typically a block-based lvmthin storage (although it is not present in your storage.cfg; did you delete it?).
So, to get more space on local, you can either expand your root LV or create a new LV and mount it at /var/lib/vz, like on the old server.
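You can see what the installer actually created with the standard LVM tools (nothing Proxmox-specific here):
Code:
# list volume groups and their free space
vgs
# list the logical volumes in the 'pve' VG (look for 'root' and 'data')
lvs pve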
 
Hi Chris,

Thanks for your response in confirming my observations.

When setting up the original server, the "default settings" during install chose the storage usage/options. That is to say, I did not change how the local storage landed on that drive.

As for the data LV on the new server, I am not sure... I have had a week of trying ideas from other threads that seemed closely related... so it could well have been me.

I guess what I am getting at is:

1) How do I create a new LV and mount the local directory on it?

2) Regarding the missing data LV: can/should it be resurrected, and can/would it be used in conjunction with #1?

With any further advice, could you please show the commands needed for these operations?

Thanks in advance.
 
P.S. Expanding the root LV is not the best choice, I think...?
 
The steps would be the following:
Remove the existing lvm-thin LV
Code:
lvremove pve/data
Create a new LVM LV
Code:
lvcreate -L <size_of_lv> -n data pve
Create an ext4 filesystem on the new LV
Code:
mkfs.ext4 /dev/pve/data
Then add a new line to /etc/fstab:
Code:
/dev/pve/data /var/lib/vz ext4 defaults 0 2
so that the system mounts the LV at /var/lib/vz.
Make sure there is nothing under /var/lib/vz before mounting.
Finally mount the new LV with
Code:
mount -a
so you do not need to reboot. After this you should be able to migrate your VMs to the new machine.
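Put together, and purely as a sketch (here using -l 100%FREE so the new LV takes all remaining space in the pve VG; pick a fixed size with -L if you prefer):
Code:
lvremove pve/data                  # prompts for confirmation; destroys the thin pool and anything on it
lvcreate -l 100%FREE -n data pve   # new plain LV using all free extents in the VG
mkfs.ext4 /dev/pve/data
echo '/dev/pve/data /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
mount -a
df -h /var/lib/vz                  # verify the new space is mounted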
 
Hi Chris,

That's great... I'll try that when I get home tonight.

Thank you, and I will report back on the results.
 
Hi Chris,

Worked like a charm. I am moving VMs as I type this.

Thank you for the help.
 
