Upgrading cluster / moving iSCSI

Nhoague

Renowned Member
Sep 29, 2012
Colorado, USA
Hello there,

I am planning my upgrade from 3.x to 4.x. I've read a lot of the blog posts, and they all recommend a full server backup and restore. However, I have shared storage over iSCSI, so I was hoping I could simply remove the old VM from the host, detach the iSCSI storage from the old host, set up the iSCSI storage on the new host, and then recreate the VM and point its hard disk at the iSCSI device.

In theory this all works, but when I actually do it, my guests (both Linux and Windows) don't boot. They boot correctly the first time, but then Linux throws a read-only file system error at me, and Windows installs new PCI bridge drivers (as if it had a new motherboard). After a reboot, both the Linux and Windows guests fail to boot.

Any hints on what I should try next?

Thanks!
 
Hi,

don't you have LVM on top of iSCSI? I did the same today with 3 nodes on 3.4:
  • Removed one node from the cluster and installed 4.3
  • Reconfigured the network (copied /etc/network/interfaces and /etc/udev/rules.d/70-persistent-net.rules)
  • Reattached iSCSI/multipath (the other nodes kept running); all logical volumes were there
  • Copied the storage resources from /etc/pve/storage.cfg to the new node (merged, not overwritten!)
  • Copied the .conf file of the stopped VM (/etc/pve/nodes/OLD-NODE/qemu-server/ID.conf) to the new host (/etc/pve/nodes/NEW-NODE/qemu-server/ID.conf) using scp (remove the conf file on the old node afterwards!!!)
  • Started the VM on the new node
Absolutely no problem. The vm reboots fine.
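The per-VM part of the steps above can be sketched as shell commands. This is only an illustration: the VMID and node names are placeholders, and the commands are echoed rather than executed so you can review them against your own setup first.

```shell
#!/bin/sh
# Placeholder values -- adjust for your cluster.
VMID=100
OLD_NODE=pve3-old
NEW_NODE=pve4-new

OLD_CONF="/etc/pve/nodes/$OLD_NODE/qemu-server/$VMID.conf"
NEW_CONF="/etc/pve/nodes/$NEW_NODE/qemu-server/$VMID.conf"

# On the old node: stop the VM, then push its config to the new node.
echo "qm stop $VMID"
echo "scp $OLD_CONF root@$NEW_NODE:$NEW_CONF"

# Remove the old copy so only one cluster still claims the VM.
echo "rm $OLD_CONF"
```

Drop the `echo` wrappers once the paths look right; the config file is the only thing that moves, since the disk itself stays on the shared storage.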

The "metadata" is just the tags and names of the logical volumes, plus the name of the shared storage in /etc/pve/storage.cfg.
So you have to make sure:
  • The storage name is absolutely identical
  • The configuration is moved so that the old cluster knows nothing about the VM
There is no database or anything like that, only properties that can be edited easily. That's what is absolutely great about Proxmox.
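For illustration, a matching iSCSI entry in /etc/pve/storage.cfg might look like the fragment below. The storage name, portal IP, and target IQN are placeholders, not values from this thread; the point is that the section header ("san-storage" here) is the storage name that must be identical on the new node.

```
iscsi: san-storage
        portal 192.0.2.10
        target iqn.2012-09.com.example:storage.lun0
        content none
```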
 
I would love to do it that way, but no, I do not run LVM on top of iSCSI. I have a Nimble Storage SAN, and the iSCSI volumes have a very aggressive snapshot schedule for DR and replication. Because of that, I have to run iSCSI directly to the VMs.

I do love Proxmox, and I'm in no way bashing it; I would assume this same scenario could happen on other KVM-based platforms as well.
 
Oh yeah, 100% sure. My iSCSI naming convention is pretty bulletproof. I haven't checked the blkid yet, but that was my next step when I get some time.
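As a hedged sketch of that blkid check: the filesystem UUID is stored on the disk itself, so it must read identically on the old and new node if the guest really sees the same LUN. The device path below is a placeholder, and the commands are echoed for review rather than run.

```shell
#!/bin/sh
# Placeholder device path -- substitute your actual iSCSI disk.
DEV=/dev/sdb

# Compare this output between the old and the new node; a differing
# UUID means the guest was handed a different (or cloned) volume.
echo "blkid $DEV"

# The session's target IQN should also match your naming convention:
echo "iscsiadm -m session -P 1"
```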

I was mainly wondering: is the PVE 4.x kernel so different that my guests would think they have all-new hardware?
 
I don't think the host kernel has any influence on the guest hardware (when running KVM machines). The guest hardware depends on the QEMU/KVM version and the configured OS type.
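One thing worth comparing, as an aside not from the thread itself: the guest-visible chipset comes from the machine type and OS type in the VM configuration, so checking those two settings on the old and new host can explain Windows re-detecting its hardware. The VMID is a placeholder and the command is echoed for review.

```shell
#!/bin/sh
# Placeholder VM ID -- use your own.
VMID=100

# Show the machine type and OS type lines of the VM config; run this
# on both hosts and compare (an absent 'machine' line means the QEMU
# default for that version, which can differ between PVE releases).
echo "qm config $VMID | grep -E 'machine|ostype'"
```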
 
What's weird is that when I boot a guest off the same iSCSI LUN on PVE 4.x, Windows pops up the "Getting devices ready" message, and once it boots into the GUI it runs the new-hardware wizard. Odd, huh?
 
