[SOLVED] Storage Mount Error

Abu Sayed

New Member
Jul 25, 2023
I have installed Proxmox 8.2 and configured 3 VMs on it. After rebooting Proxmox, it shows this error:

unable to activate storage 'Disk' - directory is expected to be a mount point but is not mounted: '/mnt/pve/Disk' (500)
and the disk state shows as "unknown".
How can I resolve this?
 
The error message seems clear enough. The given location is expected to be a mount point, but nothing is mounted there. Since PVE doesn't mount anything there in a default installation, you must have set it up yourself. I can't tell you how to fix it with the information provided.

1. What is supposed to be mounted there? Does that drive still exist? (Post the output of "lsblk" in code tags.)
2. How did you mount it in the first place? Is it listed in /etc/fstab?
3. Are there any errors related to it in the journal? (The commands below will gather all of this.)
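
For example (the grep pattern is just based on the path in your error message; adjust as needed):

lsblk                                    # block devices and where they are mounted
cat /etc/fstab                           # any static mount entries
journalctl -b | grep -i 'mnt/pve/Disk'   # messages from the current boot mentioning that path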
 
I created two hardware RAID arrays on my disks. The first is a RAID 1 setup for the Proxmox OS on /dev/sda with a size of 744 GB. The second is a RAID 5 array for data, which is 1.5 TB on /dev/sdb.

Currently, my data disk is showing as "Unknown" and the status is "Active: NO."

I did not manually add any entries to /etc/fstab previously.

I've attached an image showing the lsblk output and a summary of the disks. Could you please advise me on how to recover all my VMs?

Lsblk Output:
---------------------------
root@ve:~# lsblk

NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                      8:0    0 744.1G  0 disk
├─sda1                   8:1    0  1007K  0 part
├─sda2                   8:2    0     1G  0 part /boot/efi
└─sda3                   8:3    0 743.1G  0 part
  ├─pve-swap           252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root           252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta     252:2    0   6.2G  0 lvm
  │ └─pve-data-tpool   252:4    0 610.7G  0 lvm
  │   └─pve-data       252:5    0 610.7G  1 lvm
  └─pve-data_tdata     252:3    0 610.7G  0 lvm
    └─pve-data-tpool   252:4    0 610.7G  0 lvm
      └─pve-data       252:5    0 610.7G  1 lvm
sdb                      8:16   0   1.5T  0 disk
===================================================
journalctl log:
----------------------

root@ve:~# journalctl -xe
Jul 30 21:44:21 ve pvedaemon[1307]: unable to activate storage 'Disk' - directory is expected to be a mount point but is n>
Jul 30 21:44:23 ve pvedaemon[1308]: unable to activate storage 'Disk' - directory is expected to be a mount point but is n>
Jul 30 21:44:25 ve pvedaemon[1308]: unable to activate storage 'Disk' - directory is expected to be a mount point but is n>
Jul 30 21:44:27 ve pvedaemon[1307]: unable to activate storage 'Disk' - directory is expected to be a mount point but is n>
Jul 30 21:44:29 ve pvedaemon[1307]: unable to activate storage 'Disk' - directory is expected to be a mount point but is n>
Jul 30 21:44:30 ve pvestatd[1280]: unable to activate storage 'Disk' - directory is expected to be a mount point but is no>
Jul 30 21:44:31 ve pvedaemon[103786]: unable to activate storage 'Disk' - directory is expected to be a mount point but is>
Jul 30 21:44:32 ve pvedaemon[103786]: unable to activate storage 'Disk' - directory is expected to be a mount point but is>
Jul 30 21:44:34 ve pvedaemon[1307]: unable to activate storage 'Disk' - directory is expected to be a mount point but is n>
 

Attachments

  • deskboard.png (105.2 KB)
Geez, why all the redacting? Nobody cares what your VMs are named.

It looks like /dev/sda is set up as a typical PVE storage. There are boot and efi partitions, /dev/sda1 & sda2. There is an LVM partition, /dev/sda3, with two logical volumes, "swap" and "root", that are used as the system swap and root partitions, respectively. The rest of the partition is configured as an lvm-thin called "data". That's about 610 GB of block storage and is where you would normally put your VM disks. In the GUI these show up as "local", for the root volume, and "local-lvm" for the data volume.
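
For reference, those two storages are defined in /etc/pve/storage.cfg; on a stock install the entries look roughly like this:

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images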

On the other hand /dev/sdb is just a disk that isn't partitioned. Did you format it with a filesystem using "mkfs"? Is there anything already on it?
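
One quick, non-destructive way to check is to look for filesystem signatures; if these show nothing for /dev/sdb, there is no filesystem on it yet:

blkid /dev/sdb      # prints the UUID and filesystem type if one exists
lsblk -f /dev/sdb   # shows FSTYPE for the disk and any partitions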

Moving on, "Disk" is the name of a directory storage you created. Based on the error message you told PVE that a volume was supposed to be mounted on /mnt/pve/Disk, but there is nothing present. Maybe you mounted it by hand and never added it to /etc/fstab? It is hard to say. It would be helpful to know what steps you took to add "Disk" in the first place.
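
It would also be worth posting the "Disk" entry from /etc/pve/storage.cfg. Based on the error, I'd guess it looks something like this (the content line is just an example):

dir: Disk
        path /mnt/pve/Disk
        content images,iso,backup
        is_mountpoint 1

The is_mountpoint option is what makes PVE refuse to activate the storage until something is actually mounted at that path.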

Assuming that you did format /dev/sdb with a filesystem, you could mount it like this:
mount /dev/sdb /mnt/pve/Disk

If that works, add it to /etc/fstab so it gets mounted when the system starts:
/dev/sdb /mnt/pve/Disk ext4 defaults 0 2
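
You can test the new entry right away without rebooting:

systemctl daemon-reload    # let systemd pick up the changed fstab
mount -a                   # mounts everything in fstab that isn't already mounted
findmnt /mnt/pve/Disk      # should now show the mounted filesystem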

If the mount command fails, then you will need to partition and format /dev/sdb first, then follow the previous steps. Note that this will destroy any data that's already there.
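
A rough sketch of that, assuming a single ext4 partition is what you want (again, this wipes /dev/sdb):

parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%   # new GPT label with one partition spanning the disk
mkfs.ext4 /dev/sdb1                                          # format the new partition
mount /dev/sdb1 /mnt/pve/Disk

In that case, point the fstab line above at /dev/sdb1 instead of /dev/sdb.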
 
