Reassign disk option not giving any VM options in list

Red Squirrel

Renowned Member
May 31, 2014
If I have an unused disk, how do I go about assigning it to a VM? When I use the reassign option, no VMs show up in the list to assign it to. I'm experimenting with converting vmdk files and found the command to convert them. I placed the converted disks with a test VM and named them following the same naming convention as the temporary disk I created initially, but when I go to reassign one, the drop-down gives me no options.
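(For context, the conversion step was roughly along these lines; the paths and file names below are just an example, the real ones differ:)

Code:
# convert the VMware disk to qcow2 inside the VM's image directory on the NFS storage
qemu-img convert -f vmdk -O qcow2 /mnt/pve/PVE_LUN2/images/102/Windows7-misc.vmdk /mnt/pve/PVE_LUN2/images/102/vm-102-disk-1.qcow2

# rescan so PVE picks the new image up as an unused disk
qm rescan --vmid 102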
 
Have you tried the CLI (man qm // search "qm disk move")? There's a rough example of a CLI reassign after the command list below.
Other than that, if you can provide more details about your environment, someone might be able to give you a better clue:

Code:
qm list
qm config [vmid]
qm status [vmid]
pvecm status
pvesm status
cat /etc/pve/storage.cfg
journalctl -f // while doing qm disk move or the GUI steps
tail -100 /var/log/pveproxy/access.log
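For example, reassigning an unused disk from one VM to another on the same node looks roughly like this (the VM IDs and disk names are placeholders; check man qm on your version for the exact options):

Code:
# hand unused0 of VM 102 over to VM 105 and attach it there as scsi0
# (placeholder IDs; both VMs need to be on the same node)
qm disk move 102 unused0 --target-vmid 105 --target-disk scsi0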



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Here's all the output. Nothing special happens when I try to reassign, since I never get any options in the drop-down to reassign to.

Screenshot from 2024-12-03 19-57-30.png

I also tried with a disk that was created within PVE, to rule out an issue with the conversion I did.

Code:
root@proxmoxtemp02:~# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID       
       102 testvm1              stopped    2048               0.00 0         
root@proxmoxtemp02:~# qm config 102
boot: order=ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
machine: pc-i440fx-9.0
memory: 2048
meta: creation-qemu=9.0.2,ctime=1733198303
name: testvm1
net0: e1000=BC:24:11:2B:B1:2F,bridge=vmbr0,firewall=1,link_down=1,tag=5
numa: 0
ostype: win8
scsihw: virtio-scsi-single
smbios1: uuid=ab3b8fb3-b48e-4862-aaaa-80b1efd1667c
sockets: 1
unused0: PVE_LUN2:102/Windows7-misc-flat.vmdk
unused1: PVE_LUN2:102/Windows7-misc.vmdk
unused2: PVE_LUN2:102/Windows7-misc.qcow2
unused3: PVE_LUN2:102/vm-102-disk-1.qcow2
unused4: PVE_LUN2:102/vm-102-disk-2.qcow2
unused5: PVE_LUN2:102/vm-102-disk-0.qcow2
vmgenid: 44318844-a33c-4aa7-8a99-a42bac6d1587
root@proxmoxtemp02:~# qm status
400 not enough arguments
qm status <vmid> [OPTIONS]
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# qm status 102
status: stopped
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# pvecm status
Cluster information
-------------------
Name:             PVEProduction
Config Version:   4
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Dec  3 19:56:05 2024
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1.60
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.1.1.39
0x00000002          1 10.1.1.40 (local)
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# pvesm status
Name             Type     Status           Total            Used       Available        %
PVE_LUN1          nfs     active     16910151680      6422746112      9628554240   37.98%
PVE_LUN2          nfs     active     19224941568     15977145344      2271165440   83.11%
PVE_LUN3          nfs     active      7690793984      5291480064      2008637440   68.80%
PVE_MISC          nfs     active     19224941568     15977145344      2271165440   83.11%
VM_LUN1           nfs     active     16910151680      6422746112      9628554240   37.98%
VM_LUN2           nfs     active     19224941568     15977145344      2271165440   83.11%
VM_LUN3           nfs     active      7690793984      5291480064      2008637440   68.80%
local             dir     active         8730792         2950260         5315432   33.79%
local-lvm     lvmthin     active         6873088               0         6873088    0.00%
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# cat /etc/pve/storage.cfg 
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: VM_LUN1
        export /volumes/raid1/vms_lun1/
        path /mnt/pve/VM_LUN1
        server isengard.i.anyf.ca
        content images
        options vers=3
        prune-backups keep-all=1

nfs: VM_LUN2
        export /volumes/raid2/vms_lun2/
        path /mnt/pve/VM_LUN2
        server isengard.i.anyf.ca
        content images
        options vers=3
        prune-backups keep-all=1

nfs: VM_LUN3
        export /volumes/raid3/vms_lun3/
        path /mnt/pve/VM_LUN3
        server isengard.i.anyf.ca
        content images
        options vers=3
        prune-backups keep-all=1

nfs: PVE_MISC
        export /volumes/raid2/pve_misc/
        path /mnt/pve/PVE_MISC
        server isengard.i.anyf.ca
        content vztmpl,iso,snippets,backup
        options vers=3
        prune-backups keep-all=1

nfs: PVE_LUN1
        export /volumes/raid1/pve_lun1/
        path /mnt/pve/PVE_LUN1
        server isengard.i.anyf.ca
        content images,rootdir
        options vers=3
        prune-backups keep-all=1

nfs: PVE_LUN2
        export /volumes/raid2/pve_lun2/
        path /mnt/pve/PVE_LUN2
        server isengard.i.anyf.ca
        content images,rootdir
        options vers=3
        prune-backups keep-all=1

nfs: PVE_LUN3
        export /volumes/raid3/pve_lun3/
        path /mnt/pve/PVE_LUN3
        server isengard.i.anyf.ca
        content images,rootdir
        options vers=3
        prune-backups keep-all=1

root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# journalctl -f 
Dec 03 19:55:10 proxmoxtemp02 systemd[1995269]: Reached target sockets.target - Sockets.
Dec 03 19:55:10 proxmoxtemp02 systemd[1995269]: Reached target basic.target - Basic System.
Dec 03 19:55:10 proxmoxtemp02 systemd[1995269]: Reached target default.target - Main User Target.
Dec 03 19:55:10 proxmoxtemp02 systemd[1995269]: Startup finished in 259ms.
Dec 03 19:55:10 proxmoxtemp02 systemd[1]: Started user@0.service - User Manager for UID 0.
Dec 03 19:55:10 proxmoxtemp02 systemd[1]: Started session-166.scope - Session 166 of User root.
Dec 03 19:55:10 proxmoxtemp02 sshd[1995266]: pam_env(sshd:session): deprecated reading of user environment enabled
Dec 03 19:55:10 proxmoxtemp02 login[1995289]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Dec 03 19:55:10 proxmoxtemp02 login[1995294]: ROOT LOGIN  on '/dev/pts/0' from '10.1.1.39'
Dec 03 19:56:19 proxmoxtemp02 pmxcfs[869]: [status] notice: received log
^C
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# 
root@proxmoxtemp02:~# tail -100 /var/log/pveproxy/access.log
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:46 -0500] "GET /api2/extjs/nodes/proxmoxtemp02/qemu/102/pending HTTP/1.1" 200 516
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:48 -0500] "GET /api2/json/nodes/proxmoxtemp02/status HTTP/1.1" 200 759
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:48 -0500] "GET /api2/json/nodes/proxmoxtemp02/qemu/102/status/current HTTP/1.1" 200 223
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:52 -0500] "GET /api2/json/nodes/proxmoxtemp02/status HTTP/1.1" 200 756
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:52 -0500] "GET /api2/extjs/nodes/proxmoxtemp02/qemu/102/pending HTTP/1.1" 200 519
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:52 -0500] "GET /api2/json/nodes/proxmoxtemp02/qemu/102/status/current HTTP/1.1" 200 223
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:53 -0500] "GET /api2/json/nodes/proxmoxtemp02/qemu/102/pending HTTP/1.1" 200 509
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:57 -0500] "GET /api2/json/nodes/proxmoxtemp02/status HTTP/1.1" 200 761
::ffff:10.1.1.39 - root@pam [03/12/2024:19:57:58 -0500] "GET /api2/json/nodes/proxmoxtemp02/qemu/102/status/current HTTP/1.1" 200 223
root@proxmoxtemp02:~#
 
Yeah, I just figured that out: editing the unused disk and assigning it a SATA port did the trick. I do have other VMs, but they're on another host, so I guess only the ones on the same host show up. I just migrated a VM over to this host and it now appears in the list.
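For anyone finding this later, the CLI equivalent of what I ended up doing is roughly this (the VM ID, storage, and disk names are just examples from my setup):

Code:
# attach one of the unused qcow2 images to VM 102 as its first SATA disk
qm set 102 --sata0 PVE_LUN2:102/vm-102-disk-0.qcow2

# to make a VM from the other node show up in the reassign list, migrate it over first
# (run on the node that currently hosts the VM; <vmid> is a placeholder)
qm migrate <vmid> proxmoxtemp02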