[SOLVED] Directory not listed in gui

Moxxorp

Member
After changing some bare-metal hardware in the server, everything worked except this directory.

The drives got rearranged: what was sdf1 is now sda1.

Where's the data? Deleted? Config issue?


Details:

All the VM hard disks from this location are missing:

i300muv:550504/vm-55054-disk-0.raw,cache=writeback,iothread=1,size=4g,ssd=1
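(For reference, pvesm path maps a volume ID to its expected location on disk; for this dir storage it resolves to:)

# pvesm path i300muv:550504/vm-55054-disk-0.raw
/rox/i300/i300muv/images/550504/vm-55054-disk-0.raw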

In the Proxmox GUI under node/Disks/Directory:
empty (nothing listed at all)


fstab has only default entries.

/etc/pve/storage.cfg references:
zfspool: i300
    pool i300
    content images,rootdir
    nodes wth
    sparse 1

dir: i300muv
    path /rox/i300/i300muv
    content backup,images
    maxfiles 6
    sparse 1

zfspool: i300_img
    pool i300
    content images
    sparse 0


# pvesm status | grep i300
Name      Type     Status  Total      Used       Available  %
i300      zfspool  active  282394624  158874384  123520240  56.26%
i300_img  zfspool  active  282394624  158874384  123520240  56.26%
i300muv   dir      active  186379008   62858880  123520128  33.73%


# zfs list
Everything except i300muv is listed.


# ll /rox/i300/i300muv -R
total 8
drwxr-xr-x 2 root root 4096 May 15 19:43 images
drwxr-xr-x 2 root root 4096 May 15 19:43 dump

/rox/i300/i300muv/images:
total 0

/rox/i300/i300muv/dump:
total 0


# ll /dev/disk/by-path | grep sda
pci-0000:03:00.0-sas-phy4-lun-0 -> ../../sda
pci-0000:03:00.0-sas-phy4-lun-0-part1 -> ../../sda1
pci-0000:03:00.0-sas-phy4-lun-0-part9 -> ../../sda9


# ll /dev/disk/by-label | grep sda
i300 -> ../../sda1


# ll /dev/disk/by-uuid | grep sda
11655003427998622145 -> ../../sda1
 
Fabian_E

Hi,
Are the zfs filesystems mounted properly? Use zfs list -o name,mounted,mountpoint to check. It might be the case that the directories in /rox/i300/i300muv get created by PVE at boot before the zfs filesystems are mounted, causing the mount operation to fail. If that's the case, use pvesm set i300muv --mkdir 0 to disable that behavior.
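If that's what happened, trying the mount by hand makes it obvious; ZFS refuses to mount over a non-empty directory with an error along the lines of:

# zfs mount i300
cannot mount '/rox/i300': directory is not empty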

# zfs list
Everything except i300muv is listed.

It's not a zfs filesystem, but a normal directory, so that is to be expected.
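You can see the split directly: zfs list only knows about datasets, while a plain directory below the mountpoint only shows up in a normal listing:

# zfs list -r i300
(lists datasets only; a directory like i300muv will never appear here)
# ls /rox/i300
(a normal listing shows i300muv, provided the pool is actually mounted)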
 
# zfs list -o name,mounted,mountpoint
NAME  MOUNTED  MOUNTPOINT
h2a   yes      /rox/h2a
i300  no       /rox/i300

Syslog error:
May 20 01:03:38 eno pveproxy[14949]: Warning: unable to close filehandle GEN4 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1593.
 
Plenty of free inodes.

# df -i
Filesystem            Inodes   IUsed  IFree    IUse%  Mounted on
udev                  4626105    718  4625387     1%  /dev
tmpfs                 4631361   1022  4630339     1%  /run
/dev/mapper/pve-root  3653632  62759  3590873     2%  /



# df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
/dev/mapper/pve-root   55G   55G      0  100%  /
(everything else 0-2%)
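To track down what is eating the root filesystem, something like this narrows it to a directory (-x keeps du on one filesystem, so mounted storages aren't counted):

# du -xh --max-depth=1 / | sort -h | tail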
 

Attachments

  • 20200520 MC 01 root.png
  • 20200520 MC 03 root proc files.png
The /proc/kcore is fine, it's not ... real.

The issue is a folder under root where I put an image at one point, while working out how to get it into the PVE construct. It took a while to fill the drive. Probably a coincidence that it filled up when I did the change, or I'm barking up the wrong tree.
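(For the record, the usual way to get a raw image into PVE, rather than parking it under /, appears to be qm importdisk; the VMID and source path below are just placeholders:)

# qm importdisk 55054 /root/some-image.raw i300
(imports the image as an unused disk on storage i300, to be attached in the VM's Hardware tab)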
 
Plenty of space now, 25 GB free. Still getting:

# zfs list -o name,mounted,mountpoint
NAME  MOUNTED  MOUNTPOINT
h2a   yes      /rox/h2a
i300  no       /rox/i300
 

Fabian_E

To mount the filesystem, you need to make sure that the mount point directory (i.e. /rox/i300) is empty. Then use zfs mount i300. Now you should see your data again. Use pvesm set i300muv --mkdir 0 to make sure PVE doesn't create new empty directories below the mount point during the next boot.
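Roughly, assuming the leftover directories really are empty (rmdir refuses to remove anything that isn't, so this cannot destroy data):

# rmdir /rox/i300/i300muv/images /rox/i300/i300muv/dump
# rmdir /rox/i300/i300muv
# zfs mount i300
# pvesm set i300muv --mkdir 0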
 
# zfs mount i300
GUI shows the dir.
Machines are booted.
Everything back to normal.

Proxmox Rox!
TY Fabian_E!
 
