PVE 9.1 - vm migration host1 local storage to host2 local storage - cannot migrate from storage type 'dir' to 'dir'

TheTaran

Before the PVE 9.1 update, I was able to migrate my VMs from host1 local storage to host2 local storage, and LXC containers migrated just like normal VMs. Since the update, I get this message:


ERROR: migration aborted (duration 00:00:00): storage migration for 'vmstorage:199/vm-199-disk-0.raw' to storage 'vmstorage' failed - cannot migrate from storage type 'dir' to 'dir'
TASK ERROR: migration aborted


Any idea what is happening?
 
If anything between host1 and host2 differs even slightly, Proxmox now blocks the migration with this generic error.
Verify the differences between the two storage definitions on both nodes:
cat /etc/pve/storage.cfg
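
As a side note, reproducing the migration from the shell usually shows more of the log than the GUI task viewer. A minimal sketch, assuming VM 199 (from the error above) and a target node named pox2 (adjust both to your setup):
Bash:
# reproduce the failing migration from the CLI;
# --with-local-disks copies local disks, --targetstorage selects the destination storage
qm migrate 199 pox2 --online --with-local-disks --targetstorage vmstorage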
 
This is the output from the two nodes.

NODE1
Bash:
dir: local
        path /var/lib/vz
        content snippets
        shared 0

nfs: prox-nfs
        export /volume1/prox-nfs
        path /mnt/pve/prox-nfs
        server IP
        content backup,images,rootdir,import,vztmpl,iso
        options vers=4
        prune-backups keep-last=2

pbs: prox-backup
        datastore prox-backup
        server name
        content backup
        fingerprint secure
        prune-backups keep-all=1
        username backup@pbs

dir: vmstorage
        path /var/lib/vm-storage
        content images,backup,rootdir,vztmpl
        shared 0

NODE2
Bash:
dir: local
        path /var/lib/vz
        content snippets
        shared 0

nfs: prox-nfs
        export /volume1/prox-nfs
        path /mnt/pve/prox-nfs
        server IP
        content backup,images,rootdir,import,vztmpl,iso
        options vers=4
        prune-backups keep-last=2

pbs: prox-backup
        datastore prox-backup
        server name
        content backup
        fingerprint secure
        prune-backups keep-all=1
        username backup@pbs

dir: vmstorage
        path /var/lib/vm-storage
        content images,backup,rootdir,vztmpl
        shared 0

The storage which causes the problem is vmstorage. It is local on each node.
 
Verify on the two nodes:
df -T /var/lib/vm-storage
mount | grep vm-storage
The results must be identical.

After this, verify permissions and status (in case one node is offline):
ls -ld /var/lib/vm-storage
pvesm status | grep vmstorage

Proxmox requires identical storage capabilities on both nodes for this kind of migration.
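
To compare both sides in one step, here is a minimal sketch, assuming passwordless root SSH between the nodes and the hostnames pox1 and pox2 (adjust to your cluster):
Bash:
# compare filesystem source/type and directory ownership/permissions across both nodes
for cmd in 'df --output=source,fstype /var/lib/vm-storage' \
           'stat -c "%U:%G %a" /var/lib/vm-storage'; do
    diff <(ssh pox1 "$cmd") <(ssh pox2 "$cmd") && echo "match: $cmd"
done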
 
Everything is identical. :(

NODE1
Bash:
root@pox1:[~]: df -T /var/lib/vm-storage
Filesystem                Type  1K-blocks      Used  Available Use% Mounted on
/dev/mapper/pve-vmstorage xfs  1842492420 175967808 1666524612  10% /var/lib/vm-storage
root@pox1:[~]: mount | grep vm-storage
/dev/mapper/pve-vmstorage on /var/lib/vm-storage type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=8,swidth=8,noquota)
root@pox1:[~]: ls -ld /var/lib/vm-storage
drwxr-xr-x 8 root root 93 Sep  4 16:00 /var/lib/vm-storage
root@pox1:[~]: pvesm status | grep vmstorage
vmstorage           dir     active      1842492420       175967808      1666524612    9.55%

NODE2
Bash:
root@pox2:[~]: df -T /var/lib/vm-storage
Filesystem                Type  1K-blocks      Used  Available Use% Mounted on
/dev/mapper/pve-vmstorage xfs  1842492420 151968060 1690524360   9% /var/lib/vm-storage
root@pox2:[~]: mount | grep vm-storage
/dev/mapper/pve-vmstorage on /var/lib/vm-storage type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=8,swidth=8,noquota)
root@pox2:[~]: ls -ld /var/lib/vm-storage
drwxr-xr-x 8 root root 93 Sep  4 16:00 /var/lib/vm-storage
root@pox2:[~]: pvesm status | grep vmstorage
vmstorage           dir     active      1842492420       151968060      1690524360    8.25%
 
1 - verify the node names: pvecm nodes
2 - edit storage.cfg and add this line under the vmstorage entry: nodes nodename1,nodename2 (see the sketch below)
3 - restart the services:
systemctl restart pvedaemon
systemctl restart pvestatd
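
For reference, a sketch of how the vmstorage entry in /etc/pve/storage.cfg would look after step 2, assuming the node names are pox1 and pox2 (check them with pvecm nodes first):
Bash:
dir: vmstorage
        path /var/lib/vm-storage
        content images,backup,rootdir,vztmpl
        shared 0
        nodes pox1,pox2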