KVM located on smb share problems after upgrade to 1.8

  • Thread starter: rbamburg (Guest)
Greetings;

I set up a test cluster environment on v1.4 and had it working as I expected.
Disks were stored on a ReadyNAS share which was mounted as a local drive (/mnt/store/pveimages).
Two servers, a Master and a Node, both running AMD processors.
I was able to migrate a KVM from the Master to the Node and back again while it was running.

I attempted to install a 2nd Node with a new Intel i7 processor and had to upgrade Proxmox to 1.8 to support the NIC on the motherboard. Therefore I upgraded the Master and 1st Node to 1.8 also.

After the upgrade, the local KVMs still worked as expected, but those located on the SMB share would no longer run.

Found an earlier post which suggested adding cache=writethrough to the ide line in the KVM's config file, which then allowed the KVM to start.
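For anyone hitting the same problem, the change looks roughly like this in the VM's config file (the VM ID 113 is from my setup; the storage name "store" and the disk file name are placeholders, so adjust for your own):

```
# /etc/qemu-server/113.conf (Proxmox VE 1.x)
# before:
#   ide0: store:113/vm-113-disk-1.raw
# after appending the cache option:
ide0: store:113/vm-113-disk-1.raw,cache=writethrough
```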

However, now I am unable to migrate the KVM to a different machine, as the system seems to think the disk is local and attempts to copy the disk to the new node. I also got a message that it could not be migrated online because it uses a local disk:

/usr/bin/ssh -t -t -n -o BatchMode=yes 172.16.0.7 /usr/sbin/qmigrate --online 172.16.0.8 113
Jul 29 16:07:40 starting migration of VM 113 to host '172.16.0.8'
Jul 29 16:07:41 copying disk images
Jul 29 16:07:41 Failed to sync data - can't do online migration - VM uses local disks
Jul 29 16:07:41 migration aborted
Connection to 172.16.0.7 closed.
VM 113 migration failed -

Am I missing something? Does the external SMB share need to be connected a different way now?
Do I need to read the manual again???

Thanks for any help you can give.

BB
 
DUH! I didn't have the NAS storage marked "Shared". Changed that and the migration works.
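For reference, the checkbox in the GUI corresponds to a shared flag on the storage definition, which tells Proxmox the volume is visible to all nodes so it won't try to copy the disk on migration. A rough sketch of what the entry ends up looking like in /etc/pve/storage.cfg (the storage name "store" is a placeholder from my setup):

```
dir: store
        path /mnt/store/pveimages
        content images
        shared
```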

Does anyone have an answer to the cache=writethrough situation?