[SOLVED] 401 No ticket with pct remote-migrate

Etienne Charlier

Well-Known Member
Good evening,

I'm trying to migrate LXC containers from one cluster (the host running this script) to a standalone Proxmox server.
I keep getting the error "401 No ticket".
Can someone help me with this?
Bash:
#!/bin/bash
SRV_02_TOKEN="root@pam!root-srv-02 fc2b2fa6-887a-4e46-b862-c0d6a754bd3a"
SRV_02_HOST="srv-02.phi8.ovh"
SRV_02_FINGERPRINT="DB:43:F3:EC:92:12:EE:E6:49:EA:43:AF:DB:FE:85:47:42:41:6E:A1:D9:0C:C2:8A:1B:42:A3:CD:45:45:86:F3"
SRV_02_ENDPOINT="apitoken=${SRV_02_TOKEN},host=${SRV_02_HOST},fingerprint=${SRV_02_FINGERPRINT}"
SRV_02_STORAGE="storage-1"
SRV_02_BRIDGE="LAN"
VMID=80001

pct remote-migrate $VMID $VMID "$SRV_02_ENDPOINT" \
    --target-bridge $SRV_02_BRIDGE \
    --target-storage $SRV_02_STORAGE \
    --restart


Code:
# bash -x migrate-to.sh
+ SRV_02_TOKEN='root@pam!root-srv-02 fc2b2fa6-887a-4e46-b862-c0d6a754bd3a'
+ SRV_02_HOST=srv-02.phi8.ovh
+ SRV_02_FINGERPRINT=DB:43:F3:EC:92:12:EE:E6:49:EA:43:AF:DB:FE:85:47:42:41:6E:A1:D9:0C:C2:8A:1B:42:A3:CD:45:45:86:F3
+ SRV_02_ENDPOINT='apitoken=root@pam!root-srv-02 fc2b2fa6-887a-4e46-b862-c0d6a754bd3a,host=srv-02.phi8.ovh,fingerprint=DB:43:F3:EC:92:12:EE:E6:49:EA:43:AF:DB:FE:85:47:42:41:6E:A1:D9:0C:C2:8A:1B:42:A3:CD:45:45:86:F3'
+ SRV_02_STORAGE=storage-1
+ SRV_02_BRIDGE=LAN
+ VMID=80001
+ pct remote-migrate 80001 80001 'apitoken=root@pam!root-srv-02 fc2b2fa6-887a-4e46-b862-c0d6a754bd3a,host=srv-02.phi8.ovh,fingerprint=DB:43:F3:EC:92:12:EE:E6:49:EA:43:AF:DB:FE:85:47:42:41:6E:A1:D9:0C:C2:8A:1B:42:A3:CD:45:45:86:F3' --target-bridge LAN --target-storage storage-1 --restart


401 No ticket
 
The token part is wrong. It should look like this: apitoken=PVEAPIToken=${TOKENID}=${TOKEN_SECRET}, where ${TOKENID} seems to be 'root@pam!root-srv-02' and ${TOKEN_SECRET} is 'fc2b2fa6-887a-4e46-b862-c0d6a754bd3a'. But I'd strongly advise you to delete that token and create a new one, since you posted it here publicly!
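For reference, a minimal sketch of the corrected variables (same token ID as above; the secret shown is a placeholder for a freshly generated one):

Bash:
# Corrected token format: apitoken expects "PVEAPIToken=<tokenid>=<secret>",
# not "<tokenid> <secret>" separated by a space.
SRV_02_TOKEN="PVEAPIToken=root@pam!root-srv-02=<new-token-secret>"   # placeholder secret, regenerate it
SRV_02_HOST="srv-02.phi8.ovh"
SRV_02_FINGERPRINT="DB:43:F3:EC:92:12:EE:E6:49:EA:43:AF:DB:FE:85:47:42:41:6E:A1:D9:0C:C2:8A:1B:42:A3:CD:45:45:86:F3"
SRV_02_ENDPOINT="apitoken=${SRV_02_TOKEN},host=${SRV_02_HOST},fingerprint=${SRV_02_FINGERPRINT}"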
 
Thanks for your answer!

The token was deleted right after creating the post ;-) (and the system is not visible from the Internet).
I didn't want to mess with the value, as I was "convinced" my issue was a syntax problem.

For what it's worth, a "full example" would be useful in the docs!

Many thanks,
Etienne
 
Thanks, the token syntax is OK now.

Next question:

After a successful run without --delete 1, I have two copies of my LXC: a "usable" one on the destination node, and one in "locked" mode on the source (shown as migrating, with the small plane icon).
I can't delete the source LXC (the computer says no: CT is locked (migrate) (500)).
What's the secret sauce to remove those "ghosts"?
 
Looks like pct unlock does the trick...
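For reference, a sketch of the cleanup on the source node (VMID taken from the script above; double-check that you are removing the source copy, not the freshly migrated one):

Bash:
# Clear the stale "migrate" lock on the source container, then remove the
# leftover source copy once the destination copy has been verified.
pct unlock 80001
pct destroy 80001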
 
Ticket syntax solved.

Using the same script, several LXCs can't be migrated: the ones with big additional raw disks (e.g. 100 G, like this one).

For other runs, just changing the VMID in the script is enough and it works!
What could it be?

Code:
root@san:~# ./migrate-to.sh
Establishing API connection with remote at 'srv-02.phi8.ovh'
2023-02-21 05:18:36 remote: started tunnel worker 'UPID:srv-02:0002D894:002763D8:63F4461C:vzmtunnel:90004:root@pam!root-srv-02:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2023-02-21 05:18:36 local WS tunnel version: 2
2023-02-21 05:18:36 remote WS tunnel version: 2
2023-02-21 05:18:36 minimum required WS tunnel version: 2
2023-02-21 05:18:36 websocket tunnel started
2023-02-21 05:18:36 shutdown CT 90004
2023-02-21 05:18:42 starting migration of CT 90004 to node 'srv-02' (srv-02.phi8.ovh)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2023-02-21 05:18:43 found local volume 'PM-ENIAC-1:90004/vm-90004-disk-0.raw' (in current VM config)
2023-02-21 05:18:43 found local volume 'PM-ENIAC-1:90004/vm-90004-disk-1.raw' (in current VM config)
tunnel: -> sending command "disk-import" to remote
tunnel: <- got reply
tunnel: accepted new connection on '/run/pve/90004.storage'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/pve/90004.storage'
5242880+0 records in
5242880+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 358.771 s, 59.9 MB/s
tunnel: -> sending command "query-disk-import" to remote
tunnel: done handling forwarded connection from '/run/pve/90004.storage'
tunnel: <- got reply
2023-02-21 05:24:43 disk-import: 2161+1302570 records in
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-02-21 05:24:44 disk-import: 2161+1302570 records out
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-02-21 05:24:45 disk-import: 21474836480 bytes (21 GB, 20 GiB) copied, 358.819 s, 59.8 MB/s
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-02-21 05:24:46 volume 'PM-ENIAC-1:90004/vm-90004-disk-0.raw' is 'storage-1:90004/vm-90004-disk-0.raw' on the target
tunnel: -> sending command "disk-import" to remote
tunnel: <- got reply
tunnel: accepted new connection on '/run/pve/90004.storage'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/pve/90004.storage'
26214400+0 records in
26214400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 1214.12 s, 88.4 MB/s
tunnel: done handling forwarded connection from '/run/pve/90004.storage'
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-02-21 05:45:03 disk-import: 752+6562443 records in
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-02-21 05:45:04 disk-import: 752+6562443 records out
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-02-21 05:45:05 disk-import: 107374182400 bytes (107 GB, 100 GiB) copied, 1214.12 s, 88.4 MB/s
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-02-21 05:45:06 volume 'PM-ENIAC-1:90004/vm-90004-disk-1.raw' is 'storage-1:90004/vm-90004-disk-1.raw' on the target
2023-02-21 05:45:06 mapped: net0 from vmbr2 to LAN
tunnel: -> sending command "config" to remote
tunnel: <- got reply
2023-02-21 05:45:06 ERROR: error - tunnel command '{"cmd":"config","firewall-config":null,"conf":"arch: amd64\ncores: 2\ncpulimit: 1\ncpuunits: 1000\nfeatures: fuse=1\nhostname: nc.volantis.phi8.ovh\nlock: migrate\nmemory: 2048\nmp0: storage-1:90004/vm-90004-disk-1.raw,mp=/data,acl=1,backup=1,size=100G\nnet0: name=net0,bridge=LAN,gw=172.16.0.1,hwaddr=9A:E7:5A:E5:36:AF,ip=172.16.90.4/16,type=veth\nonboot: 1\nostype: ubuntu\nrootfs: storage-1:90004/vm-90004-disk-0.raw,size=20G\nswap: 0\nunprivileged: 1\n"}' failed - failed to handle 'config' command - 403 Permission check failed (changing feature flags (except nesting) is only allowed for root@pam)
2023-02-21 05:45:06 aborting phase 1 - cleanup resources
2023-02-21 05:45:06 ERROR: found stale volume copy 'storage-1:90004/vm-90004-disk-0.raw' on node 'srv-02'
2023-02-21 05:45:06 ERROR: found stale volume copy 'storage-1:90004/vm-90004-disk-1.raw' on node 'srv-02'
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2023-02-21 05:45:09 start final cleanup
2023-02-21 05:45:09 start container on source node
2023-02-21 05:45:12 ERROR: migration aborted (duration 00:26:36): error - tunnel command '{"cmd":"config","firewall-config":null,"conf":"arch: amd64\ncores: 2\ncpulimit: 1\ncpuunits: 1000\nfeatures: fuse=1\nhostname: nc.volantis.phi8.ovh\nlock: migrate\nmemory: 2048\nmp0: storage-1:90004/vm-90004-disk-1.raw,mp=/data,acl=1,backup=1,size=100G\nnet0: name=net0,bridge=LAN,gw=172.16.0.1,hwaddr=9A:E7:5A:E5:36:AF,ip=172.16.90.4/16,type=veth\nonboot: 1\nostype: ubuntu\nrootfs: storage-1:90004/vm-90004-disk-0.raw,size=20G\nswap: 0\nunprivileged: 1\n"}' failed - failed to handle 'config' command - 403 Permission check failed (changing feature flags (except nesting) is only allowed for root@pam)
migration aborted
 
2023-02-21 05:45:06 ERROR: error - tunnel command '{"cmd":"config","firewall-config":null,"conf":"arch: amd64\ncores: 2\ncpulimit: 1\ncpuunits: 1000\nfeatures: fuse=1\nhostname: nc.volantis.phi8.ovh\nlock: migrate\nmemory: 2048\nmp0: storage-1:90004/vm-90004-disk-1.raw,mp=/data,acl=1,backup=1,size=100G\nnet0: name=net0,bridge=LAN,gw=172.16.0.1,hwaddr=9A:E7:5A:E5:36:AF,ip=172.16.90.4/16,type=veth\nonboot: 1\nostype: ubuntu\nrootfs: storage-1:90004/vm-90004-disk-0.raw,size=20G\nswap: 0\nunprivileged: 1\n"}' failed - failed to handle 'config' command - 403 Permission check failed (changing feature flags (except nesting) is only allowed for root@pam)

The "fuse" feature is a problem here (and in general, everything that is root@pam-only is, since the remote migration only works via an API token, which is by definition less privileged). There is a "SuperUser" patch series waiting to be picked up again that would allow delegating the "power" of root@pam to arbitrary users/tokens, but it's not finished yet:

https://bugzilla.proxmox.com/show_bug.cgi?id=2582
 
Thanks for the explanation!

If I understand correctly, I need to use backup/restore to migrate LXCs with multiple disks... while remote-migrate is in "preview"...
 
No, multiple mountpoints work fine; just nothing that requires root@pam (or some other things which are not implemented yet, like snapshots ;))
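For reference, a quick way to spot such flags before migrating (a sketch using the VMID from the log above):

Bash:
# Show the container's feature flags; anything besides nesting in this line
# will trigger the 403 "only allowed for root@pam" check during remote-migrate.
pct config 90004 | grep '^features:'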
 
The token was created for root@pam (root@pam is the only existing user on both the source and destination PVE).
I made sure the token is not restricted (I don't have the system at hand right now).
But I'll try to deactivate "fuse" and keep trying!
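For reference, a sketch of dropping the flag before retrying (VMID from the log above; pct set --delete removes the whole features line, so re-add any flags you still want, e.g. nesting):

Bash:
# Drop the root@pam-only "fuse" flag before retrying the migration:
pct set 90004 --delete features        # removes the whole "features:" line
# or re-set features without fuse, keeping e.g. nesting:
pct set 90004 --features nesting=1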
 
Yeah, but even a non-privilege-separated token of the user 'root@pam' is not equivalent to the user 'root@pam' for all the special "root required" checks ;)
 
Raah!! This is the "real root cause" of my issues!!!

Thanks a lot for your support!
Proxmox (Staff) rocks ;-)
 
