Hi Tinwen!
This is the current default behaviour for VM disks stored on network storage. You'll see the same effect if you put your disks on NFS, e.g. So migrating the storage to a local one is a two-step process:
1) Do the live migration to your desired node
2) Visit node->vm->hardware, find your disk and move it to a local storage via the 'Disk Action' dropdown menu.
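The same two steps can also be done from the CLI with `qm`. A minimal sketch, assuming VM id 100, target node 'node2', disk 'scsi0' and a local storage named 'local-lvm' (all placeholders, adjust to your setup; on older PVE releases the second command is spelled `qm move_disk`):

```shell
# 1) Live-migrate the VM; disks on shared storage stay where they are
qm migrate 100 node2 --online

# 2) Then, with the VM now on node2, move the disk to local storage
qm move-disk 100 scsi0 local-lvm
```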
I guess the design assumes you're fine once your disks are on shared storage (which is very likely true if you have Ceph RBD on a fast network). For local storage, on the other hand, moving the disks along with the migration is treated as important, because chances are good that the available storage differs from node to node.