can't migrate vm unless i'm logged into that exact node

Please share the details about the current situation again
the situation is, when trying to migrate a vm from that host (but only if the vm is on that certain storage on that host) the cluster seems to wrongly think the vm has local disks attached to it. the issue storage is not shared, nor is an equally-named storage available on any other host. but still all hosts are disqualified from the migration:
[screenshot: migration dialog showing all target nodes as unavailable]

Because, again, a ZFS storage with the same name, which is available on multiple nodes, does count as local
does or doesn't? because when migrating from "local-zfs" on the issue node, the migration works. i can even move the vm's storage to the local-zfs storage before the migration and it works.
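for clarity, by moving the vm's storage i mean a normal disk move, i.e. something like this on the cli (vm id and disk name borrowed from the outputs further down in the thread):
Code:
qm disk move <vmid> scsi0 local-zfs
after that the migration goes through.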

Shared storages are really only the ones serving a common state to multiple nodes at the same time:
i'm not running any shared storage. each host is installed on zfs and some are running an additional zfs pool to get more on-host storage.

Because according to the pvesh output you posted, the VM was not running.
the vm state doesn't matter. this goes for running and powered-off vms. i just used a random vm to generate the error message for you earlier. this seems like a bit of a cluster bug to me. the issue host was recently fully updated and rebooted to try to resolve it, but no go.
 
the situation is, when trying to migrate a vm from that host (but only if the vm is on that certain storage on that host) the cluster seems to wrongly think the vm has local disks attached to it. the issue storage is not shared, nor is an equally-named storage available on any other host. but still all hosts are disqualified from the migration:
That is precisely the issue. The storage is not available on those nodes, so you cannot migrate it without specifying a different target storage. In the UI that can only be done when the VM is running (hence the original error message in your very first post).
does or doesn't? because when migrating from "local-zfs" on the issue node, the migration works. i can even move the vm's storage to the local-zfs storage before the migration and it works.
Does. It is a storage that is local to the node. No other node can access the volume that lies on that storage. Other nodes could have a storage with the same name that would then automatically be chosen as the target for copying the disk. But your configuration and output show that they don't have that, so you need to specify a different storage as the target manually.
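For example, assuming the VM is running, something along these lines from the source node's shell should do it (VM ID, target node and target storage are placeholders here):
Code:
qm migrate <vmid> <target node> --online --targetstorage <storage available on the target node>
The --targetstorage option can also take a mapping of the form source:target if different disks should end up on different storages.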
 
That is precisely the issue. The storage is not available on those nodes, so you cannot migrate it without specifying a different target storage. In the UI that can only be done when the VM is running (hence the original error message in your very first post).
and usually i would do that right in the gui and go. but it's not letting me, insisting it needs the exact storage "nvme4" to be available on the destination node (meaning it does let me select a different target storage, but the migrate button remains greyed out):
[screenshot: migration dialog with a different target storage selected, Migrate button greyed out]

it also erroneously claims the vm is not running, when the dialog box clearly states the mode is "online"

so you need to specify a different storage as the target manually.
yeah that's what usually works, except in this case. as you can see in the screenshot above, i have selected a different storage manually on the destination host. but still a no-go.
 
Hey,

i didn't quite get the whole situation, but sometimes there is a snapshot that still references the storage, nvme4 in your case.
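You can quickly check that on the source node, for example (VM ID is a placeholder):
Code:
qm listsnapshot <vmid>
If a snapshot shows up there, it may still reference a volume on nvme4.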
 
Hey,

i didn't quite get the whole situation, but sometimes there is a snapshot that still references the storage, nvme4 in your case.
just to confirm, there are no snapshots. also, this is not unique to a single vm; this is an issue with every vm on this host using the particular storage "nvme4"
 
Please post the output of the following while the VM is still running and the error message is like in your previous post in the UI:
Code:
pveversion -v
pvesh get /nodes/<insert source node name here>/qemu/108/migrate --output-format json-pretty
cat /etc/pve/storage.cfg
cat /etc/pve/qemu-server/108.conf
If you must censor the node names, please use a 1:1 mapping to pseudo-names.
 
pveversion -v
Code:
proxmox-ve: 8.4.0 (running kernel: 6.8.12-9-pve)
pve-manager: 8.4.1 (running version: 8.4.1/2a5fa54a8503f96d)
proxmox-kernel-helper: 8.1.1
proxmox-kernel-6.8: 6.8.12-9
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.0
libpve-cluster-perl: 8.1.0
libpve-common-perl: 8.3.1
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.0-1
proxmox-backup-file-restore: 3.4.0-1
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.10
pve-cluster: 8.1.0
pve-container: 5.2.6
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.3
pve-firewall: 5.1.1
pve-firmware: 3.15-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.2
pve-qemu-kvm: 9.2.0-5
pve-xtermjs: 5.5.0-2
qemu-server: 8.3.12
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2


pvesh get /nodes/<insert source node name here>/qemu/108/migrate --output-format json-pretty
Code:
root@MD72-HB2-1:~# pvesh get /nodes/MD72-HB2-1/qemu/108/migrate --output-format json-pretty
{
   "allowed_nodes" : [],
   "local_disks" : [
      {
         "cdrom" : 0,
         "drivename" : "scsi0",
         "is_attached" : 1,
         "is_tpmstate" : 0,
         "is_unused" : 0,
         "is_vmstate" : 0,
         "replicate" : 1,
         "shared" : 0,
         "size" : 107374182400,
         "volid" : "nvme4:vm-108-disk-0"
      }
   ],
   "local_resources" : [],
   "mapped-resource-info" : {},
   "mapped-resources" : [],
   "not_allowed_nodes" : {
      "MW83-RP0-1" : {
         "unavailable_storages" : [
            "nvme4"
         ]
      },
      "WS790-SAGE-1" : {
         "unavailable_storages" : [
            "nvme4"
         ]
      },
      "r730xd-1" : {
         "unavailable_storages" : [
            "nvme4"
         ]
      },
      "slgs1" : {
         "unavailable_storages" : [
            "nvme4"
         ]
      },
      "tmpc1" : {
         "unavailable_storages" : [
            "nvme4"
         ]
      }
   },
   "running" : 1
}


cat /etc/pve/storage.cfg
Code:
root@MD72-HB2-1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

zfspool: SATA3
        pool SATA3
        content rootdir,images
        mountpoint /SATA3
        nodes MD72-HB2-1
        sparse 1

zfspool: r730xd-1-rust
        pool r730xd-1-rust
        content rootdir,images
        nodes r730xd-1
        sparse 1

zfspool: nvme4
        pool nvme4
        content rootdir,images
        mountpoint /nvme4
        nodes MD72-HB2-1
        sparse 1

zfspool: nvme5
        pool nvme5
        content rootdir,images
        mountpoint /nvme5
        nodes MW83-RP0-1
        sparse 1


cat /etc/pve/qemu-server/108.conf
Code:
root@MD72-HB2-1:~# cat /etc/pve/qemu-server/108.conf
agent: 1
boot: order=scsi0;ide0
cores: 24
cpu: x86-64-v2-AES
ide0: none,media=cdrom
machine: pc-q35-8.0
memory: 16384
meta: creation-qemu=8.1.5,ctime=1723687819
name: Veeam1
net0: virtio=00:15:5a:ba:f0:24,bridge=vmbr0,firewall=1,mtu=1,tag=1
net1: virtio=BC:24:11:8A:E2:51,bridge=vmbr0,firewall=1,mtu=1,tag=901
numa: 1
ostype: win10
scsi0: nvme4:vm-108-disk-0,cache=writeback,discard=on,format=raw,iothread=1,size=100G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=b05a0839-dfaf-dedf-9de3-fb9e3255e393
sockets: 1
vmgenid: b4c27dd6-428c-1122-8693-b043891c7225
 
I cannot reproduce the issue here with such a configuration.

Please upgrade to the latest packages for Proxmox VE 8 and reload the UI in the browser (without keeping the browser cache) afterwards to see if the issue persists:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
https://pve.proxmox.com/wiki/Package_Repositories
yeah it looks like a weird bug of some kind somewhere. maybe even in the browser...

are you talking about updating the source node? cause i already did that... i'm completely confused as well about how this is even possible.
 
yeah it looks like a weird bug of some kind somewhere. maybe even in the browser...
What version do you see in the top-left corner of the web UI? In Firefox and Chromium, you can reload dropping the cache with Ctrl+Shift+R.
are you talking about updating the source node? cause i already did that... i'm completely confused as well about how this is even possible.
Do you mean after your last post? Because pve-manager: 8.4.1 from your output is not the latest available version for Proxmox VE 8.
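The usual way to bring a node fully up to date, assuming the repositories are configured as described in the linked documentation, is:
Code:
apt update
apt full-upgrade
followed by a reboot if a new kernel was installed.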
 
What version do you see in the top-left corner of the web UI? In Firefox and Chromium, you can reload dropping the cache with Ctrl+Shift+R.
that depends on which node i log into the web ui with. on the problem node the version number says 8.4.1, like you mentioned. what i mean by updating is that over the course of this thread i've tried updating to the latest version and rebooting the node.

but as i mentioned at the outset, when i connect to the web ui using the ip address of the problem node, the version number in the top-left corner says 8.4.1 and migrations work when using the ui, but not when using the shell. when i log into the cluster on any other node, migrations don't work in the ui or the shell.
 
You should keep your nodes updated to the same version. Again, 8.4.1 is not the latest available version for Proxmox VE 8.
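As a quick way to compare versions, you could run pveversion against each node, for example via SSH from one of them (node names taken from your earlier output; this assumes the usual root SSH between cluster nodes):
Code:
for n in MD72-HB2-1 MW83-RP0-1 WS790-SAGE-1 r730xd-1 slgs1 tmpc1; do
    echo -n "$n: "; ssh root@$n pveversion
done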