CRITICAL - Move Disk Issues

gbr

Hi,

I have NFS and iSCSI storage. I've slowly been moving off of iSCSI onto the NFS (better bandwidth to storage, better speed).

Last night, something weird happened. Basically, all my storage temporarily disconnected, both iSCSI and NFS. I'm suspecting a switch issue.

The problem is, I can no longer use the GUI to move a disk. The Move disk dialog comes up, but Target Storage and Format stay greyed out. I believe it's an issue with the iSCSI, and I would really like to move data off of it. The reason I think it's an iSCSI issue is that migrating a VM takes forever on iSCSI (waiting to actually start, and then waiting to start the VM on the new server before migrating), while on the NFS it's immediate.

I'm running 3.2. I'd like to try a command-line move, but I'm not sure of the format of the command.

So, two questions:

1. Where can I look to see why the GUI is greyed out?
2. What's the format of the command-line move (qm move?)

Gerald


PS: Interestingly, syslog is showing me a lot of "kernel: sd 6:0:0:0: [sde] Spinning up disk..." on all 3 of my Proxmox servers. Is that an iSCSI issue? This machine natively only has sda and sdb.
 
Anybody? I'm trying to move my disks off the iSCSI and onto the NFS. Sometimes it works, sometimes it doesn't. I'd say 1 out of 100 tries to move works. Ugh.
 
I think the storage migration should also be available from the command line ("#qm move_disk", probably), as a workaround?

Code:
# qm help move_disk
USAGE: qm move_disk <vmid> <disk> <storage> [OPTIONS]

  Move volume to different storage.

  <vmid>     integer (1 - N)

             The (unique) ID of the VM.

  <disk>     (ide0 | ide1 | ide2 | ide3 | sata0 | sata1 | sata2 | sata3 |
             sata4 | sata5 | scsi0 | scsi1 | scsi10 | scsi11 | scsi12 |
             scsi13 | scsi2 | scsi3 | scsi4 | scsi5 | scsi6 | scsi7 | scsi8
             | scsi9 | virtio0 | virtio1 | virtio10 | virtio11 | virtio12 |
             virtio13 | virtio14 | virtio15 | virtio2 | virtio3 | virtio4 |
             virtio5 | virtio6 | virtio7 | virtio8 | virtio9)

             The disk you want to move.

  <storage>  string

             Target Storage.

  -delete    boolean   (default=0)

             Delete the original disk after successful copy. By default the
             original disk is kept as unused disk.

  -digest    string

             Prevent changes if current configuration file has different
             SHA1 digest. This can be used to prevent concurrent
             modifications.

  -format    (qcow2 | raw | vmdk)

             Target Format.

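For example, to move virtio0 of VM 100 to a storage named "nfs1" (the VM ID and storage name here are only placeholders), converting it to qcow2:

Code:
# qm move_disk 100 virtio0 nfs1 -format qcow2

By default the original volume is kept as an unused disk; add "-delete 1" to remove it after a successful copy.
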
Marco
 
The command takes a LONG time (minutes) to actually do anything, but at least I can move the disks.
 
You could have some network or storage issue (I don't know which) that slows the disk move down so much that maybe the GUI times out and greys out the fields... difficult to say. But once the disks are in a safe place, you will hopefully have more options to restore a normal situation.

Marco
 
I didn't delete the old images during the disk move. Now, I can't. Yikes. PITA!
 
Are you running a cluster, and is it quorate? (Try "#pvecm status" and "#clustat" to find out.)

If your cluster lost quorum, for any reason, PVE protects against changes in the cluster by putting (at least) /etc/pve in read-only mode...
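A quick way to check is simply to try writing there (just a sketch; the exact error text can vary):

Code:
# touch /etc/pve/test-write

If that fails, /etc/pve is read-only and the node has most likely lost quorum.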

Only if you're sure that you lost quorum, you can temporarily restore the locked functionality by issuing

"#pvecm expected 1"

It could help to regain write access and unlock some cluster functions, but it is only an emergency solution; use it with care, and be careful...

Marco
 
Hi,

Definitely have quorum:
Code:
# pvecm status
Version: 6.2.0
Config Version: 4
Cluster Name: norscan-cluster
Cluster Id: 7860
Cluster Member: Yes
Cluster Generation: 544
Membership state: Cluster-Member
Nodes: 4
Expected votes: 4
Total votes: 4
Node votes: 1
Quorum: 3
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox-2
Node ID: 3
Multicast addresses: 239.192.30.210
Node addresses: 10.1.15.2

Just weird issues with the iSCSI, I'm thinking.

Gerald
 
Just weird issues with the iSCSI, I'm thinking.

Yes, the iSCSI raw images could still be "active", and locked, perhaps.
I think it is possible to use LVM commands to "deactivate" their logical volumes, so as to unlock them, and eventually delete them, but you have to be extra careful...
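Something along these lines, for example (the volume group and LV names are only placeholders; check the real names with "lvs" first):

Code:
# lvs
# lvchange -an /dev/VGNAME/vm-100-disk-1
# lvremove /dev/VGNAME/vm-100-disk-1

"lvchange -an" deactivates the logical volume, and "lvremove" deletes it for good, so double-check that no VM configuration still references it.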

Marco
 
Managed to remove the unused disk images, and then removed the iSCSI device from Proxmox. Everything is back to normal. Whew.