[SOLVED] Migration fails

m0nn3

New Member
Nov 26, 2024
Hello,

I have a two-node cluster and wanted to migrate a container.

This produced the following error:

Code:
2024-11-26 21:08:04 ERROR: migration aborted (duration 00:00:00): cannot migrate local bind mount point 'mp0'

My config looks like this:

Code:
mp0: /mnt/pve/Medien,mp=/media/NAS,replicate=0

I added /mnt/pve/Medien via Datacenter -> Storage -> SMB/CIFS.

Do I need to add a parameter to mp0 in the config so that the mount point gets set up again after migration?
Or do I have to take a different approach to use my Samba server? The storage is available on both nodes anyway.
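Since the SMB/CIFS storage was added at the datacenter level, it should already be defined cluster-wide in /etc/pve/storage.cfg. A minimal sketch of what such an entry typically looks like, assuming the storage ID is Medien (the server, share, username and content values below are placeholders, not taken from the actual setup):

Code:
cifs: Medien
        path /mnt/pve/Medien
        server 192.168.178.10
        share Medien
        username someuser
        content backup

As long as the entry has no "nodes" restriction, both nodes mount it under /mnt/pve/Medien, which is what makes treating the bind mount as shared plausible in the first place.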

Thanks a lot in advance!


EDIT:

That error is gone (option: ,shared=1), but as soon as I migrate and the migration finishes, the container is automatically migrated back.
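For reference, the adjusted mount point line would then look roughly like this (same values as in the config above, only with the shared flag added, which tells Proxmox the bind-mounted path is expected to exist on every node, so the migration no longer blocks on it):

Code:
mp0: /mnt/pve/Medien,mp=/media/NAS,replicate=0,shared=1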

2024-11-26 21:27:35 starting migration of CT 105 to node 'thinkcentre' (192.168.178.161)
2024-11-26 21:27:35 ignoring shared 'bind' mount point 'mp0' ('/mnt/pve/Medien')
2024-11-26 21:27:35 found local volume 'SSD:subvol-105-disk-0' (in current VM config)
2024-11-26 21:27:35 start replication job
2024-11-26 21:27:35 guest => CT 105, running => 0
2024-11-26 21:27:35 volumes => SSD:subvol-105-disk-0
2024-11-26 21:27:36 create snapshot '__replicate_105-0_1732652855__' on SSD:subvol-105-disk-0
2024-11-26 21:27:36 using secure transmission, rate limit: none
2024-11-26 21:27:36 incremental sync 'SSD:subvol-105-disk-0' (__replicate_105-0_1732652782__ => __replicate_105-0_1732652855__)
2024-11-26 21:27:37 send from @__replicate_105-0_1732652782__ to SSD/subvol-105-disk-0@__replicate_105-0_1732652855__ estimated size is 37.9M
2024-11-26 21:27:37 total estimated size is 37.9M
2024-11-26 21:27:37 TIME SENT SNAPSHOT SSD/subvol-105-disk-0@__replicate_105-0_1732652855__
2024-11-26 21:27:38 successfully imported 'SSD:subvol-105-disk-0'
2024-11-26 21:27:38 delete previous replication snapshot '__replicate_105-0_1732652782__' on SSD:subvol-105-disk-0
2024-11-26 21:27:39 (remote_finalize_local_job) delete stale replication snapshot '__replicate_105-0_1732652782__' on SSD:subvol-105-disk-0
2024-11-26 21:27:39 end replication job
2024-11-26 21:27:39 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=thinkcentre' -o 'UserKnownHostsFile=/etc/pve/nodes/thinkcentre/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.178.161 pvesr set-state 105 \''{"local/elitedesk":{"last_node":"elitedesk","last_sync":1732652855,"last_try":1732652855,"duration":3.428566,"storeid_list":["SSD"],"fail_count":0,"last_iteration":1732652855}}'\'
2024-11-26 21:27:40 start final cleanup
2024-11-26 21:27:41 migration finished successfully (duration 00:00:06)
TASK OK
Then the other, automatic migration:
2024-11-26 21:27:53 starting migration of CT 105 to node 'elitedesk' (192.168.178.160)
2024-11-26 21:27:53 ignoring shared 'bind' mount point 'mp0' ('/mnt/pve/Medien')
2024-11-26 21:27:53 found local volume 'SSD:subvol-105-disk-0' (in current VM config)
2024-11-26 21:27:53 start replication job
2024-11-26 21:27:53 guest => CT 105, running => 0
2024-11-26 21:27:53 volumes => SSD:subvol-105-disk-0
2024-11-26 21:27:54 create snapshot '__replicate_105-0_1732652873__' on SSD:subvol-105-disk-0
2024-11-26 21:27:54 using secure transmission, rate limit: none
2024-11-26 21:27:54 incremental sync 'SSD:subvol-105-disk-0' (__replicate_105-0_1732652855__ => __replicate_105-0_1732652873__)
2024-11-26 21:27:54 send from @__replicate_105-0_1732652855__ to SSD/subvol-105-disk-0@__replicate_105-0_1732652873__ estimated size is 624B
2024-11-26 21:27:54 total estimated size is 624B
2024-11-26 21:27:54 TIME SENT SNAPSHOT SSD/subvol-105-disk-0@__replicate_105-0_1732652873__
2024-11-26 21:27:55 successfully imported 'SSD:subvol-105-disk-0'
2024-11-26 21:27:55 delete previous replication snapshot '__replicate_105-0_1732652855__' on SSD:subvol-105-disk-0
2024-11-26 21:27:56 (remote_finalize_local_job) delete stale replication snapshot '__replicate_105-0_1732652855__' on SSD:subvol-105-disk-0
2024-11-26 21:27:56 end replication job
2024-11-26 21:27:56 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=elitedesk' -o 'UserKnownHostsFile=/etc/pve/nodes/elitedesk/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.178.160 pvesr set-state 105 \''{"local/thinkcentre":{"fail_count":0,"storeid_list":["SSD"],"last_iteration":1732652873,"last_node":"thinkcentre","last_sync":1732652873,"duration":3.307596,"last_try":1732652873}}'\'
2024-11-26 21:27:57 start final cleanup
2024-11-26 21:27:58 migration finished successfully (duration 00:00:06)
TASK OK

I don't see any reason why it migrates back now.

Thanks in advance!

The mistake was that I had configured under Datacenter -> HA that the container should always run on one particular node.
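For anyone hitting the same behaviour: if the container is managed as an HA resource, the HA manager will move it back to its preferred node/group. A quick way to check this on the CLI and, if desired, take the CT out of HA again (ct:105 here matches the container from the logs above; adjust to your setup):

Code:
# list the HA resources and their assigned groups
ha-manager config
# show the current HA manager state
ha-manager status
# remove the container from HA management so manual migrations stick
ha-manager remove ct:105

Alternatively, the node priorities of the HA group can be adjusted under Datacenter -> HA -> Groups so the CT is not pinned to a single node.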

This post can be closed.
