[SOLVED] NFS Storage, Empty content in GUI, but not really

jorel83 · Active Member · Dec 11, 2017
Hello people,

I have spent about 4 hours reading the forums about this same issue, but basically nothing written there solves the problem. Either this is a feature or a bug; I'm not sure which.

I have a 3-host cluster running Ceph. I'm in the process of leaving ESXi: I've gained a 4th Proxmox host and am attempting to migrate a few ESXi VMs.

The main problem is the NFS-mounted storage that I also use for backups: whatever I do, the UI only lists files that Proxmox itself has written to the NFS share. Everything else, such as vmdks converted to raw files, gets listed on the CLI but not in the UI.

I have added a new NFS share pointing directly at the directory where the files are, but to no avail. I also tried moving the raw files into the same directory the backup files are in, hoping they would be listed in the UI, but no.

The NAS serving NFS is a Synology running DSM 6.2.

root@proxmox1:/mnt/pve# ceph status
cluster:
id: ae11baad-49a5-49eb-b1af-0a7282073eb6
health: HEALTH_OK

services:
mon: 3 daemons, quorum proxmox1,proxmox2,proxmox3
mgr: proxmox3(active), standbys: proxmox1, proxmox2
osd: 7 osds: 7 up, 7 in

data:
pools: 1 pools, 256 pgs
objects: 5083 objects, 19734 MB
usage: 64361 MB used, 7156 GB / 7219 GB avail
pgs: 256 active+clean

io:
client: 9124 B/s wr, 0 op/s rd, 1 op/s wr

pvesm status
Name Type Status Total Used Available %
Ceph-pool_ct rbd active 2368782061 20208109 2348573952 0.85%
Ceph-pool_vm rbd active 2368782061 20208109 2348573952 0.85%
local dir active 59342780 5586192 50712444 9.41%
local-lvm lvmthin active 157179904 0 157179904 0.00%
nfs01 nfs active 8425266304 4290893312 4134372992 50.93%


The local disks are too small to move the vmdks there. Does anybody have any suggestions on how to proceed?

Thanks :)
 
can you post an example of a file that is listed on the cmdline but not in the gui?

there is a strict directory structure for storages:

dump: backup files (vma.gz, tar.gz, etc.)
images/<VMID>: vm disks with specific naming: vm-<ID>-disk-<NUM>.[raw|qcow2|vmdk]
template/cache: lxc templates
template/iso: vm iso images

only correctly placed (and correctly named) files get displayed in the gui

so you have to move the .raw/.vmdk/.qcow2 file to:

images/100/vm-100-disk-X.raw (or .vmdk/.qcow2; assuming the vmid is 100)

so that it shows up in the gui. if you want to see it attached to the vm as an unused disk, you have to do a 'qm rescan'
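roughly, the steps look like this (the storage path, filename and vmid below are just examples; the qm rescan is commented out since it only does anything on a real node):

```shell
# Example only: make a converted raw disk visible for VMID 100.
# STORE stands in for the real mount point, e.g. /mnt/pve/nfs01.
STORE=/tmp/nfs01-demo
mkdir -p "$STORE/images/100"
touch "$STORE/somevm.raw"                       # stand-in for the converted disk
mv "$STORE/somevm.raw" "$STORE/images/100/vm-100-disk-1.raw"
ls "$STORE/images/100"
# on the real node, afterwards:
# qm rescan --vmid 100    # registers the file as an unused disk on the vm
```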
 
Hi Dominik, thanks a lot for your reply! :)

Your answer was indeed the fix! Apparently I hadn't tried every combination of locations, but I moved the raw file as you described and it became visible in the GUI. I booted the machine off that file from the NAS and it worked, then moved the disk to Ceph via the famous "move" button :)

It would be great if your answer could be woven in to this part of the wiki: https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#VMware_to_Proxmox_VE_.28KVM.29

I would have done it myself, but it seems the wiki isn't open for registration :)

Thanks again!
 
