vSphere 5.5 -> PVE 8.2 Auto Import - 401 Error

mhammett

I have had some successful migrations from the same hosts, but on different datastores. The successful datastore is named "RAID10"; the unsuccessful one is named "NEW". "NEW" doesn't contain any special characters that would cause it to fail.




Code:
root@PVE1:~# systemctl status pve-esxi*
● pve-esxi-fuse-VMWare13.scope
     Loaded: loaded (/run/systemd/transient/pve-esxi-fuse-VMWare13.scope; transient)
  Transient: yes
     Active: active (running) since Sat 2024-06-15 13:02:24 CDT; 2h 46min ago
      Tasks: 6 (limit: 309312)
     Memory: 6.2M
        CPU: 11min 32.768s
     CGroup: /system.slice/pve-esxi-fuse-VMWare13.scope
             └─41384 /usr/libexec/pve-esxi-import-tools/esxi-folder-fuse --skip-cert-verification --change-user nobody --change-group nogroup -o allow_other --ready-fd 13 --user root --pas>

Jun 15 13:02:24 PVE1 systemd[1]: Started pve-esxi-fuse-VMWare13.scope.
Jun 15 13:02:28 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: esxi fuse mount ready
Jun 15 15:41:32 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: error looking up file or directory: http error code 401
Jun 15 15:41:32 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: error handling request: http error code 401

● pve-esxi-fuse-VMware15.scope
     Loaded: loaded (/run/systemd/transient/pve-esxi-fuse-VMware15.scope; transient)
  Transient: yes
     Active: active (running) since Sat 2024-06-15 13:20:52 CDT; 2h 28min ago
      Tasks: 6 (limit: 309312)
     Memory: 3.8M
        CPU: 9min 3.036s
     CGroup: /system.slice/pve-esxi-fuse-VMware15.scope
             └─45385 /usr/libexec/pve-esxi-import-tools/esxi-folder-fuse --skip-cert-verification --change-user nobody --change-group nogroup -o allow_other --ready-fd 14 --user root --pas>

Jun 15 13:20:52 PVE1 systemd[1]: Started pve-esxi-fuse-VMware15.scope.
Jun 15 13:21:03 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: esxi fuse mount ready
Jun 15 13:58:00 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error looking up file or directory: http error code 401
Jun 15 13:58:00 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error handling request: http error code 401
Jun 15 15:41:02 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error looking up file or directory: http error code 401
Jun 15 15:41:02 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error handling request: http error code 401
 
Where can I find additional logging on "esxi-folder-fuse" or "pve-esxi"?
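One place to check: the fuse helpers run as transient systemd scopes, so their output lands in the systemd journal. A minimal sketch, assuming the scope names follow the pve-esxi-fuse-<storage>.scope pattern shown above:

Code:
# all ESXi fuse scopes (journalctl accepts unit-name globs)
journalctl -u 'pve-esxi-fuse-*'

# or follow a single scope live while reproducing the failure
journalctl -f -u pve-esxi-fuse-VMware15.scope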



Code:
root@PVE1:~# ls -hal /run/pve/import/esxi/VMWare13/mnt/ha-datacenter/NEW/UBNT\ UniFi\ 2/
total 0
-r--r--r-- 1 root root 554 Dec 31  1969 'UBNT UniFi 2.vmdk'

That works, so it can presumably reach the data.
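To separate the fuse layer from ESXi itself, the same file can be requested straight from the datastore-browser HTTP endpoint that the import tooling relies on. A minimal sketch, assuming the standard ESXi /folder URL format; the host address is a placeholder:

Code:
# a 401 here would point at the ESXi auth/session layer rather than
# at esxi-folder-fuse (10.1.1.x is a placeholder for the ESXi host)
curl -k -u root 'https://10.1.1.x/folder/UBNT%20UniFi%202/UBNT%20UniFi%202.vmdk?dcPath=ha-datacenter&dsName=NEW'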
 
Code:
root@PVE1:~# journalctl -u pve-esxi*
Jun 15 13:02:24 PVE1 systemd[1]: Started pve-esxi-fuse-VMWare13.scope.
Jun 15 13:02:28 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: esxi fuse mount ready
Jun 15 13:20:52 PVE1 systemd[1]: Started pve-esxi-fuse-VMware15.scope.
Jun 15 13:21:03 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: esxi fuse mount ready
Jun 15 13:58:00 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error looking up file or directory: http error code 401
Jun 15 13:58:00 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error handling request: http error code 401
Jun 15 15:41:02 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error looking up file or directory: http error code 401
Jun 15 15:41:02 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error handling request: http error code 401
Jun 15 15:41:32 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: error looking up file or directory: http error code 401
Jun 15 15:41:32 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: error handling request: http error code 401
Jun 15 15:56:39 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: error handling request: cached read failed: http error code 500
Jun 16 03:11:02 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error looking up file or directory: http error code 401
Jun 16 03:11:02 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error handling request: http error code 401
Jun 16 03:14:17 PVE1 esxi-folder-fus[45385]: PVE1 esxi-folder-fuse[45385]: error handling request: cached read failed: http error code 500
Jun 16 03:27:57 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: STAT ON VERSION?
Jun 16 03:27:57 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: STAT ON VERSION?
Jun 16 03:27:59 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: STAT ON VERSION?
Jun 16 03:38:40 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: error looking up file or directory: http error code 401
Jun 16 03:38:40 PVE1 esxi-folder-fus[41384]: PVE1 esxi-folder-fuse[41384]: error handling request: http error code 401
 
/var/log on the vSphere host doesn't show anything. I determined that by grepping for one of the VMs I was trying to migrate, attempting a migration, grepping again, and then comparing the results.
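Roughly, that comparison looked like this (a sketch under the assumption that ESXi's busybox grep supports -r; the VM name is the one from this thread):

Code:
# on the ESXi host: snapshot log mentions of the VM, retry the import,
# snapshot again, then diff the two
grep -r NickBsmoker /var/log/ > /tmp/before.txt
# ... attempt the migration from the PVE side ...
grep -r NickBsmoker /var/log/ > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt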
 
From the PVE Web UI:

Code:
can't open '/run/pve/import/esxi/VMware15/mnt/ha-datacenter/RAID10/NickBsmoker/NickBsmoker.vmx' - No such file or directory (500)

From /var/log/pveproxy/access.log

Code:
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:44 -0500] "GET /api2/extjs/cluster/nextid?_dc=1718588127794 HTTP/1.1" 200 26
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:44 -0500] "GET /api2/json/nodes/PVE1/network?type=any_bridge HTTP/1.1" 200 390
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:44 -0500] "GET /api2/json/nodes/localhost/capabilities/qemu/cpu HTTP/1.1" 200 684
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:44 -0500] "GET /api2/json/nodes/PVE1/storage?format=1&content=images HTTP/1.1" 200 481
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:45 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1063
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:45 -0500] "GET /api2/json/cluster/resources HTTP/1.1" 200 1768
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:48 -0500] "GET /api2/extjs/nodes/PVE1/storage/VMware15/import-metadata?volume=ha-datacenter%2FRAID10%2FNickBsmoker%2FNickBsmoker.vmx&_dc=1718588127837 HTTP/1.1" 200 176
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:48 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1062
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:49 -0500] "GET /api2/json/cluster/resources HTTP/1.1" 200 1747
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:51 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1062
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:52 -0500] "GET /api2/json/cluster/resources HTTP/1.1" 200 1745
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:54 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1062
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:55 -0500] "GET /api2/json/cluster/resources HTTP/1.1" 200 1742
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:58 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1066
::ffff:10.1.1.37 - root@pam [16/06/2024:20:35:58 -0500] "GET /api2/json/cluster/resources HTTP/1.1" 200 1765
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:01 -0500] "GET /api2/json/cluster/tasks HTTP/1.1" 200 1066
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:01 -0500] "GET /api2/json/cluster/resources HTTP/1.1" 200 1765
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:02 -0500] "GET /api2/extjs/cluster/nextid?_dc=1718588145673 HTTP/1.1" 200 26
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:02 -0500] "GET /api2/json/nodes/PVE1/network?type=any_bridge HTTP/1.1" 200 386
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:02 -0500] "GET /api2/extjs/nodes/PVE1/storage/VMware15/import-metadata?volume=ha-datacenter%2FRAID10%2FNickBsmoker%2FNickBsmoker.vmx&_dc=1718588145712 HTTP/1.1" 200 176
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:02 -0500] "GET /api2/json/nodes/localhost/capabilities/qemu/cpu HTTP/1.1" 200 679
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:02 -0500] "GET /api2/extjs/cluster/nextid?_dc=1718588145805&vmid=101 HTTP/1.1" 200 26
::ffff:10.1.1.37 - root@pam [16/06/2024:20:36:02 -0500] "GET /api2/json/nodes/PVE1/storage?format=1&content=images HTTP/1.1" 200 481
 
Okay, I made something of a breakthrough, though I don't understand why.


Code:
root@PVE1:~# ls -hal /run/pve/import/esxi/VMware15/mnt/ha-datacenter/RAID10/NickBsmoker/NickBsmoker.vmx
ls: cannot access '/run/pve/import/esxi/VMware15/mnt/ha-datacenter/RAID10/NickBsmoker/NickBsmoker.vmx': No such file or directory
root@PVE1:~# ls -hal /run/pve/import/esxi/VMware15/mnt/ha-datacenter/RAID10/NickBsmoker/
total 0
-r--r--r-- 1 root root 531 Dec 31  1969 NickBsmoker.vmdk
root@PVE1:~# ls -hal /run/pve/import/esxi/VMware15/mnt/ha-datacenter/RAID10/BIND1/
total 0
-r--r--r-- 1 root root  64G Dec 31  1969 BIND1-flat.vmdk
-r--r--r-- 1 root root  526 Dec 31  1969 BIND1.vmdk
-r--r--r-- 1 root root 3.2K Dec 31  1969 BIND1.vmx

Proxmox sees only NickBsmoker.vmdk for the VM that won't migrate, yet it sees both VMDKs and the VMX for a VM that did migrate correctly.

I go to the vSphere host I'm pulling this from and:


Code:
/vmfs/volumes/dd362778-b976554f/NickBsmoker # ls -hal
total 1941909
drwxr-xr-x    1 root     root          15 Jun 17 00:49 .
drwxr-xr-x    1 root     root          57 Mar 24  2023 ..
-rw-------    1 root     root       16.0G Jun 17 00:49 NickBsmoker-flat.vmdk
-rw-------    1 root     root        8.5K Jun 17 00:49 NickBsmoker.nvram
-rw-------    1 root     root         531 Jun 16 17:03 NickBsmoker.vmdk
-rw-r--r--    1 root     root           0 Mar 23  2020 NickBsmoker.vmsd
-rwxr-xr-x    1 root     root        2.7K Jun 17 00:49 NickBsmoker.vmx
-rw-r--r--    1 root     root         266 Mar 23  2020 NickBsmoker.vmxf
-rw-r--r--    1 root     root      167.9K Jun 16 16:43 vmware-10.log
-rw-r--r--    1 root     root      167.7K May 30  2022 vmware-5.log
-rw-r--r--    1 root     root      147.0K May 31  2022 vmware-6.log
-rw-r--r--    1 root     root      146.9K Jun  1  2022 vmware-7.log
-rw-r--r--    1 root     root      285.7K Jun 16 15:14 vmware-8.log
-rw-r--r--    1 root     root      167.8K Jun 16 16:41 vmware-9.log
-rw-r--r--    1 root     root      167.9K Jun 17 00:49 vmware.log
/vmfs/volumes/dd362778-b976554f/BIND1 # ls -hal
total 6438138
drwxr-xr-x    1 root     root          15 Jun 16 13:15 .
drwxr-xr-x    1 root     root          57 Mar 24  2023 ..
-rw-------    1 root     root       64.0G Jun 16 13:15 BIND1-flat.vmdk
-rw-------    1 root     root        8.5K Jun  8 13:07 BIND1.nvram
-rw-------    1 root     root         526 Oct 24  2023 BIND1.vmdk
-rw-r--r--    1 root     root           0 May 18  2019 BIND1.vmsd
-rwxr-xr-x    1 root     root        3.2K Jun 16 13:15 BIND1.vmx
-rw-r--r--    1 root     root         363 Oct 24  2023 BIND1.vmxf
-rw-r--r--    1 root     root      148.9K May 31  2022 vmware-10.log
-rw-r--r--    1 root     root      208.9K Apr 10  2023 vmware-11.log
-rw-r--r--    1 root     root      148.7K May 10  2022 vmware-6.log
-rw-r--r--    1 root     root      148.7K May 10  2022 vmware-7.log
-rw-r--r--    1 root     root      169.7K May 30  2022 vmware-8.log
-rw-r--r--    1 root     root      169.8K May 30  2022 vmware-9.log
-rw-r--r--    1 root     root      411.0K Jun 16 13:15 vmware.log

I don't see any meaningful difference between the two. The files that are missing from the NickBsmoker mount have the same permissions on the VMware side as BIND1's, and BIND1 transferred correctly.
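One guess worth testing (my assumption, not something established in this thread): the fuse mount may be serving a cached directory listing taken before the .vmx was last touched. Stopping the transient scope drops the mount, and PVE should re-create it the next time the ESXi storage is browsed:

Code:
# drop the cached fuse mount; remount-on-next-access is an assumption
systemctl stop pve-esxi-fuse-VMware15.scope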
 
I had the same problem. I was able to solve it as follows:
I created a new VM on the ESXi host using the existing hard drive, and I was able to import this new VM into PVE.
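For reference, a less invasive variant of that idea is re-registering the existing VM so ESXi rewrites its inventory entry; whether that clears the 401 is untested here. A sketch using standard vim-cmd calls, where the vmid is an illustrative placeholder and the path is the one from earlier in this thread:

Code:
# on the ESXi host: look up the VM's id, unregister it, then register
# the same .vmx again (vmid 42 is a placeholder)
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/unregister 42
vim-cmd solo/registervm /vmfs/volumes/dd362778-b976554f/NickBsmoker/NickBsmoker.vmx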
 
