Migrating an LXC CT with a bind mount point

morfair

Hi all!

In the LXC conf file I have this line:
Code:
mp0: /pvestorage/ceph/_custom/tftp/srv_tftp,mp=/srv/tftp
It works fine and backups work fine, but migration does not work!

Code:
Nov 24 01:30:17 ERROR: migration aborted (duration 00:00:00): can't determine assigned storage for mountpoint 'mp0'
TASK ERROR: migration aborted
 
I was able to migrate a container by editing the container configuration file: I commented out the mount points, migrated, and then uncommented them (see the sketch after the steps below).

  1. Log in to the current PVE host as root
  2. Edit the container file /etc/pve/lxc/100.conf with vi or another editor (use the correct number for your container)
  3. Comment out the mount point lines by putting a pound sign (#) at the beginning of each line
  4. Save and exit
  5. In the web console, migrate the container to the new PVE host
  6. Log in to the new host as root
  7. Edit the config file to remove the pound sign comment markers
  8. Start the container on the new host
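For anyone who wants to script this, here is a minimal sketch of the same comment-out / migrate / uncomment steps from the shell. Container ID 100, the node name "newnode" and the mp0 entry are just the examples from this thread, so adjust them; and if sed -i misbehaves on the /etc/pve cluster filesystem, simply edit the file with vi as described above.
Code:
# On the current host: comment out every mpX line in the container config
sed -i 's/^mp\([0-9]\+\):/#mp\1:/' /etc/pve/lxc/100.conf

# Migrate the container to the other node (same as the web console action)
pct migrate 100 newnode

# On the new host: remove the comment markers again and start the container
sed -i 's/^#mp\([0-9]\+\):/mp\1:/' /etc/pve/lxc/100.conf
pct start 100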
 
Bump. I just hit this issue. Any idea how to fix it? I can't edit the config files every time I need to migrate an LXC container.
 
I solved it for myself by doing the mount inside the container, treating it like a VM. I just added a small script in rc.local to do the mount command.

In my case it was a Glusterfs mount, and it works fine.

I haven't tried it for NFS or Ceph, but I expect that if Glusterfs worked, they are likely to work also. Just give it a try!
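To illustrate, this is roughly what that rc.local addition can look like for a GlusterFS volume. The server name gluster1 and volume name tftpvol are made-up placeholders; the container needs the glusterfs client installed, and depending on your setup it may have to be a privileged container (or have FUSE access) for the mount to work.
Code:
#!/bin/sh -e
# /etc/rc.local inside the container: mount the shared storage at boot
# instead of bind-mounting it from the host.
mkdir -p /srv/tftp
mountpoint -q /srv/tftp || mount -t glusterfs gluster1:/tftpvol /srv/tftp
exit 0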
 
The point is: mp0 and the other options are not handled correctly. Bind mount is just an example. And there's a reason for using a single NFS client for a single NFS export: resource-wise it makes a hell of a lot of sense.
 
Sure, I get your point. But I'm just another Proxmox user, using some pretty sophisticated software for free. I can't fix Proxmox for you. Good luck and hope you find a solution you are happy with! :)
 
Man, you make me feel guilty. LOL! I'm not asking you to fix Proxmox.

Forget free. Not even free software is free.
Proxmox is actually very expensive, but not in the commonly accepted way ;)
 
You need to add the shared=1 option to this line. That way the container can be migrated and the local mount point is simply ignored during the migration. You have to make sure yourself that the local mount point is available on every node (e.g. with gfs2).
Code:
mp0: /pvestorage/ceph/_custom/tftp/srv_tftp,mp=/srv/tftp,shared=1
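If you would rather not edit the config file by hand, the same line can also be set from the CLI with pct set; a small sketch, again assuming container ID 100 and the bind mount path from this thread:
Code:
pct set 100 -mp0 /pvestorage/ceph/_custom/tftp/srv_tftp,mp=/srv/tftp,shared=1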
 
You need to add the shared=1 option to this line. That way the container can be migrated and the local mount point is simply ignored during the migration. You have to make sure yourself that the local mount point is available on every node (e.g. with gfs2).
Code:
mp0: /pvestorage/ceph/_custom/tftp/srv_tftp,mp=/srv/tftp,shared=1
Solved my issue, thanks!
 
You need to add the shared=1 option to this line. That way the container can be migrated and the local mount point is simply ignored during the migration. You have to make sure yourself that the local mount point is available on every node (e.g. with gfs2).
Code:
mp0: /pvestorage/ceph/_custom/tftp/srv_tftp,mp=/srv/tftp,shared=1
Hi,

Yes, "shared=1" solved my issue too, but the problem is still there in PVE 6.2-15 : no way to set this option via GUI...
Or I missed something?

Christophe.
 
