unable to activate storage - directory is expected to be a mount point but is not mounted

Elmani335

Hey, hope everyone is doing well!

First of all, I'm sorry for the long post; I'm just trying to give all the important info I have. Basically it's all in the title: at a random moment, ALL my VMs shut down, and I tried restarting Proxmox (the PowerEdge R820 server directly), but I get the same error:

(I still have one VM running because it's on another drive.)

Code:
unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

The storage status also shows as unknown in the GUI (screenshot attached).

Here is the node information I have; the storage is set up as a directory (screenshot attached).

The systemctl status of mnt-pve-RAID_DATA.mount is shown in the attached screenshot.

Contents of /etc/pve/storage.cfg:
Bash:
root@local:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: RAID_DATA
        path /mnt/pve/RAID_DATA
        content backup,rootdir,images,vztmpl,snippets,iso
        is_mountpoint 1
        nodes local
        preallocation falloc
        shared 0

zfspool: D1-18To
        pool D1-18To
        content images,rootdir
        mountpoint /D1-18To
        nodes local

The fdisk -l output for the RAID_DATA disk:
Code:
Disk /dev/sda: 4.91 TiB, 5397163278336 bytes, 10541334528 sectors
Disk model: PERC H710     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 22C1FFAC-A004-4A02-854E-42A22353169B

Device     Start         End     Sectors  Size Type
/dev/sda1   2048 10541334494 10541332447  4.9T Linux filesystem

Output of ls -lh /dev/disk/by-uuid:
Code:
root@local:~# ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 12 Nov 26 21:37 02943FE6943FDABD -> ../../zd48p3
lrwxrwxrwx 1 root root 10 Nov 26 21:37 12CC-4126 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Nov 26 21:37 15791782985296914653 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Nov 26 21:37 20c893b4-0f05-4256-9728-0370dfe544ce -> ../../dm-1
lrwxrwxrwx 1 root root 11 Nov 26 21:37 22E649F6E649CB2D -> ../../zd0p2
lrwxrwxrwx 1 root root 12 Nov 26 21:37 2b0cb994-b7fb-4580-8f1a-b11a4018401b -> ../../zd32p2
lrwxrwxrwx 1 root root 10 Nov 26 21:37 36826f34-b57e-4f2b-939f-663681b5f2c2 -> ../../sda1
lrwxrwxrwx 1 root root 15 Nov 26 21:37 62D1-6091 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 11 Nov 26 21:37 7042486142482E62 -> ../../zd0p1
lrwxrwxrwx 1 root root 10 Nov 26 21:37 8bb64fdf-b3c9-48ea-b854-52a22a20a30f -> ../../dm-0
lrwxrwxrwx 1 root root 12 Nov 26 21:37 de3462f2-10c1-4ced-af89-d56654ee8811 -> ../../zd16p2
lrwxrwxrwx 1 root root 12 Nov 26 21:37 FA16ECEC16ECAAB9 -> ../../zd48p2

And sorry for the long text, but here is my lsblk output in case it helps:

(sda / sda1 is the RAID_DATA disk, 4.9T)

Code:
root@local:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   4.9T  0 disk
└─sda1               8:1    0   4.9T  0 part
sdb                  8:16   0  16.4T  0 disk
├─sdb1               8:17   0  16.4T  0 part
└─sdb9               8:25   0     8M  0 part
sdc                  8:32   1   956M  0 disk
├─sdc1               8:33   1   200M  0 part
└─sdc2               8:34   1   756M  0 part
zd0                230:0    0    32G  0 disk
├─zd0p1            230:1    0   549M  0 part
└─zd0p2            230:2    0  31.5G  0 part
zd16               230:16   0   100G  0 disk
├─zd16p1           230:17   0     1M  0 part
├─zd16p2           230:18   0     2G  0 part
└─zd16p3           230:19   0    98G  0 part
zd32               230:32   0   100G  0 disk
├─zd32p1           230:33   0     1M  0 part
├─zd32p2           230:34   0     2G  0 part
└─zd32p3           230:35   0    98G  0 part
zd48               230:48   0  14.6T  0 disk
├─zd48p1           230:49   0    16M  0 part
├─zd48p2           230:50   0  14.6T  0 part
└─zd48p3           230:51   0     2G  0 part
nvme0n1            259:0    0 465.8G  0 disk
├─nvme0n1p1        259:1    0  1007K  0 part
├─nvme0n1p2        259:2    0   512M  0 part /boot/efi
└─nvme0n1p3        259:3    0 465.3G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   3.5G  0 lvm
  │ └─pve-data     253:4    0 338.4G  0 lvm
  └─pve-data_tdata 253:3    0 338.4G  0 lvm
    └─pve-data     253:4    0 338.4G  0 lvm


I already tried to mount it manually, but it just left the console waiting literally forever; it was doing nothing.
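(What I ran was roughly the following; I took the device and mount point from my storage.cfg and fdisk output, so tell me if that's not the right way to do it:)
Bash:
# manual mount attempt - for me this never returns
mount /dev/sda1 /mnt/pve/RAID_DATA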

If anyone has any idea, I would appreciate any help.
 
I already tried to mount it manually, but it just left the console waiting literally forever; it was doing nothing.
Looks like Proxmox is right: it needs to be mounted but it's not. Check with journalctl (scroll with the arrow keys) to see if there are any clues or error messages around the time that you tried mounting it. Maybe there's a problem with the controller or drive(s) (something that's making it really slow, like rebuilding the RAID)?
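For example, something like this (just a sketch; the unit name is taken from your error message):
Bash:
# show this boot's log entries for the mount unit
journalctl -b -u mnt-pve-RAID_DATA.mount
# or follow kernel messages live while you retry the mount
journalctl -kf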
 
Looks like Proxmox is right: it needs to be mounted but it's not. Check with journalctl (scroll with the arrow keys) to see if there are any clues or error messages around the time that you tried mounting it. Maybe there's a problem with the controller or drive(s) (something that's making it really slow, like rebuilding the RAID)?
OK, I checked journalctl and started a manual mount again (screenshot attached). I let it run for a few minutes, and here is the output:
Bash:
Nov 26 22:24:28 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:24:38 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:24:49 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:24:57 local pvedaemon[2716]: <root@pam> starting task UPID:local:0000A175:00045742:6563B7A9:vncshell::root@pam:

Nov 26 22:24:57 local pvedaemon[41333]: starting termproxy UPID:local:0000A175:00045742:6563B7A9:vncshell::root@pam:

Nov 26 22:24:57 local pvedaemon[2717]: <root@pam> successful auth for user 'root@pam'

Nov 26 22:24:57 local login[41344]: pam_unix(login:session): session opened for user root(uid=0) by (uid=0)

Nov 26 22:24:57 local systemd-logind[2279]: New session 4 of user root.

Nov 26 22:24:57 local systemd[1]: Started Session 4 of user root.

Nov 26 22:24:57 local login[41378]: ROOT LOGIN  on '/dev/pts/1'

Nov 26 22:24:58 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:25:08 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:25:18 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:25:28 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:25:38 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:25:48 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Nov 26 22:25:58 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'


and it keeps going on like that with the same message.

But I also found this in journalctl:


Bash:
Nov 26 21:27:10 local systemd[1]: Mounting Mount storage 'RAID_DATA' under /mnt/pve...
Nov 26 21:28:40 local systemd[1]: mnt-pve-RAID_DATA.mount: Mounting timed out. Terminating.
Nov 26 21:30:10 local systemd[1]: mnt-pve-RAID_DATA.mount: Mount process timed out. Killing.
Nov 26 21:30:10 local systemd[1]: mnt-pve-RAID_DATA.mount: Killing process 1337 (mount) with signal SIGKILL.
Nov 26 21:31:41 local systemd[1]: mnt-pve-RAID_DATA.mount: Mount process still around after SIGKILL. Ignoring.
Nov 26 21:31:41 local systemd[1]: mnt-pve-RAID_DATA.mount: Failed with result 'timeout'.
Nov 26 21:31:41 local systemd[1]: mnt-pve-RAID_DATA.mount: Unit process 1337 (mount) remains running after unit stopped.
Nov 26 21:31:41 local systemd[1]: Failed to mount Mount storage 'RAID_DATA' under /mnt/pve.



I just don't know why it is not mounting. Are there any commands to check / see info on the disks and mounts, or even check for errors?
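(I'm thinking of something like the commands below, but I'm not sure which of them are actually relevant here:)
Bash:
# is anything actually mounted at the mount point?
findmnt /mnt/pve/RAID_DATA
# filesystem / UUID overview of all block devices
lsblk -f
# kernel messages mentioning the disk
dmesg | grep -i sda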
 
OK, so after manually running:
systemctl start mnt-pve-RAID_DATA.mount

I got this from journalctl -xe:
Bash:
Nov 27 00:10:19 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'
Nov 27 00:10:29 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'
Nov 27 00:10:37 local systemd[1]: mnt-pve-RAID_DATA.mount: Mount process still around after SIGKILL. Ignoring.
Nov 27 00:10:37 local systemd[1]: mnt-pve-RAID_DATA.mount: Failed with result 'timeout'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit mnt-pve-RAID_DATA.mount has entered the 'failed' state with result 'timeout'.
Nov 27 00:10:37 local systemd[1]: mnt-pve-RAID_DATA.mount: Unit process 1343 (mount) remains running after unit stopped.
Nov 27 00:10:37 local systemd[1]: mnt-pve-RAID_DATA.mount: Unit process 105954 (mount) remains running after unit stopped.
Nov 27 00:10:37 local systemd[1]: Failed to mount Mount storage 'RAID_DATA' under /mnt/pve.
░░ Subject: A start job for unit mnt-pve-RAID_DATA.mount has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit mnt-pve-RAID_DATA.mount has finished with a failure.
░░
░░ The job identifier is 879 and the job result is failed.
Nov 27 00:10:39 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'
Nov 27 00:10:49 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'
Nov 27 00:10:59 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'
Nov 27 00:11:09 local pvestatd[2713]: unable to activate storage 'RAID_DATA' - directory is expected to be a mount point but is not mounted: '/mnt/pve/RAID_DATA'

Now my question is: how do I give it more time to mount? This is approx. 5 TB of data, so maybe it needs more time, and it seems to time out after about 1 minute of execution, I think?
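(I was wondering if a drop-in override on the mount unit could raise the timeout; that's just a guess on my side, I haven't tried it yet:)
Bash:
# open an override file for the mount unit...
systemctl edit mnt-pve-RAID_DATA.mount
# ...and add something like:
# [Mount]
# TimeoutSec=15min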

Thx !
 
if you manually try to mount, does it eventually finish?
 
Have you looked in the iDRAC to see what condition your hardware is in, and whether there is an entry there about what could have happened? Does the controller work as it should?
 
if you manually try to mount, does it eventually finish?
I let a manual mount run for the whole night: nothing, not even logs in journalctl. I decided to add it to fstab and reboot, but now I can't even SSH into the server :confused: and there's no more web UI. I think Proxmox tries to mount it endlessly but can't, so it just never finishes booting up to the web UI, and now I have to use my KVM to control the Proxmox machine. Later I'll try to change the fstab to remove the non-mounting /dev/sda1; I just can't figure out why it doesn't want to mount.
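(The fstab entry I added was roughly like the line below; I guess adding nofail would at least stop the boot from hanging on it, if I understand the option correctly:)
Code:
# hypothetical /etc/fstab entry - UUID taken from my ls /dev/disk/by-uuid output, filesystem type assumed to be ext4
UUID=36826f34-b57e-4f2b-939f-663681b5f2c2 /mnt/pve/RAID_DATA ext4 defaults,nofail,x-systemd.device-timeout=30 0 2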
 
Have you looked in the iDRAC to see what condition your hardware is in, and whether there is an entry there about what could have happened? Does the controller work as it should?
Yes, I booted into the BIOS and checked the iDRAC. I did a health check on the 6 SAS disks (which make up the 5 TB RAID_DATA on /dev/sda) and all the drives are fine. But I think Proxmox might be starting up faster than the controller needs to initialize all 6 disks? Is that even possible? Shouldn't the R820 first boot and check the RAID controller before starting the OS (Proxmox)? I checked the BIOS but there isn't any "boot delay" option, so I don't know how I could delay the Proxmox boot to let the controller do its job properly; maybe that's the issue? (This is the first time I've had this mount issue with Proxmox.)
 
The controller is initialized when the server starts, and so are all the disks attached to it. So it's unlikely that PVE boots faster than the controller is ready to deliver data.

I'm thinking more along the lines that the VD is damaged and the controller is spitting out some kind of message, e.g. that it has been reset. Look in the syslog to see if there are any messages, regardless of whether it is mounted or not; if the controller doesn't want to cooperate, something will show up in the syslog. Otherwise try to mount and see if anything is logged.
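For example, you could watch the kernel log live while you retry the mount (the PERC H710 should show up via the megaraid_sas driver, if I'm not mistaken):
Bash:
# follow kernel messages with readable timestamps, filtered on the disk/controller
dmesg -wT | grep -iE 'sda|megaraid'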

Maybe a FW upgrade would help you solve the problem, or the controller might want to be reseated.
 
The controller is initialized when the server starts, and so are all the disks attached to it. So it's unlikely that PVE boots faster than the controller is ready to deliver data.

I'm thinking more along the lines that the VD is damaged and the controller is spitting out some kind of message, e.g. that it has been reset. Look in the syslog to see if there are any messages, regardless of whether it is mounted or not; if the controller doesn't want to cooperate, something will show up in the syslog. Otherwise try to mount and see if anything is logged.

Maybe a FW upgrade would help you solve the problem, or the controller might want to be reseated.
Thx for the reply!
Actually, my Proxmox OS runs on an NVMe drive, which sits on an NVMe-to-PCIe adapter. I will try to check the iDRAC logs to see if I can find anything. I'm not an expert and my English isn't the best; could you explain what "VD" and "FW upgrade" are? This is a simple homelab server I built to host microservices and some VMs to train myself on Linux machines.

thx !
 
VD = Virtual Disk; if you create a RAID, a VD is created automatically.
FW upgrade = Firmware upgrade.
 