Hello,
I have a failing hard drive on one of our nodes, so I had to migrate my running VMs to another node. I succeeded, but it was not as easy as I thought.
A little summary of that server's setup: Proxmox 5.1 is installed on a hard drive. The pve volume group is on that drive, with the pve/root and pve/data logical volumes. The pve/data volume is not currently used, but it could be if I ever need bulk space for a VM. I also have a second drive, an SSD, on which I created the pve-ssd volume group and the pve-ssd/data logical volume. My VMs run from that volume. The failing disk is the hard drive.
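For reference, the SSD side was set up more or less like this (just a sketch; the device name /dev/sdb, the size, and the use of a thin pool are assumptions, adjust to your own layout):

    pvcreate /dev/sdb                          # make the SSD an LVM physical volume
    vgcreate pve-ssd /dev/sdb                  # dedicated volume group for VM storage
    lvcreate -L 200G -n data pve-ssd           # plain LV (size is a placeholder)
    lvconvert --type thin-pool pve-ssd/data    # convert it to a thin pool for VM disks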
The plan was to set up a temporary server, migrate the VMs to it, shut down the current server, remove it from the cluster, replace the hard drive, install Proxmox on the new hard drive, join the server to the cluster again, and migrate the VMs back.
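For the cluster part of the plan, the remove/re-join steps should look roughly like this (node name and IP below are placeholders, not my real values):

    # on a node that stays in the cluster, after the old node is shut down:
    pvecm delnode oldnode
    # later, on the freshly reinstalled node, to join it back:
    pvecm add <ip-of-an-existing-cluster-node>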
First try: I tried to migrate one VM to the temporary server. It failed immediately because the storage local-ssd-lvm was not found on the remote server. Really? Why does it assume all hosts are equal and have the same storage? The web UI should ask which storage I want to use on the remote node. So I added a drive to the temporary server and configured LVM using the same naming convention as on the original server. Then I configured Proxmox to mark the storage as available on both nodes.
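In case it helps someone, the relevant part of /etc/pve/storage.cfg ends up looking roughly like this (the node names and the lvmthin type are assumptions based on my description above, adapt them to your setup):

    lvmthin: local-ssd-lvm
            vgname pve-ssd
            thinpool data
            content images,rootdir
            nodes node1,tempnode

The nodes line is what marks the storage as available on both servers; the same thing can be done with pvesm set local-ssd-lvm --nodes node1,tempnode or from the web UI under Datacenter > Storage.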
Second try: this time it said it cannot migrate a VM with local disks. It was easy to do from the command line (I found another thread about that on the forum). I wonder why the web UI can't do that?
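For anyone hitting the same wall, the --with-local-disks option of qm migrate is what allows migrating a VM with local storage; the command is along these lines (the VM ID and target node name are placeholders):

    qm migrate 100 tempnode --online --with-local-disks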
Third try: all seems good. My VMs are currently migrating to the new node thanks to the command line!