Storage inaccessible, grey question mark, status unknown

fox95

Member
Apr 14, 2022
I have two hard drives running on Proxmox.

The first drive is for local storage and has all my VMs' OSes, etc.

I added a second drive (two disks actually, in RAID 0) that I wanted to use as storage for two different VMs (1. Ubuntu, 2. TrueNAS).

I successfully added the drive and had it working properly between the two, and a folder on it was mounted on the Ubuntu server.

Then I shut everything down and forgot to unmount it, so I'm not sure if that is what caused it to disappear, because on reboot the storage says status unknown.

When I originally added it to the TrueNAS VM, I used the UUID to make sure it would keep its position and not get lost, rather than using just sdbX, etc.
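For reference, the stable identifiers mentioned here can be listed from a shell. A minimal sketch (the fstab line uses a placeholder UUID and mountpoint, purely for illustration):

```shell
# List stable identifiers for each block device; these survive reboots,
# unlike /dev/sdb, /dev/sdc, which can change enumeration order.
ls -l /dev/disk/by-uuid/
ls -l /dev/disk/by-id/

# Example /etc/fstab entry using a UUID (placeholder values):
# UUID=0a1b2c3d-1234-5678-9abc-def012345678  /mnt/storage  ext4  defaults,nofail  0  2
```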

The TrueNAS OS is on the other drive and it won't boot now, but Ubuntu will boot and it's on the same drive.

So I have fudged something up and hope that I can recover it without reinstalling everything. I haven't done anything yet to try and recover it, because I'm not sure of what I'm doing and don't want to ruin it if there's a chance to save it.

Do the disks need to be re-initialized?

Any help is greatly appreciated.


BTW, when I initially added this second drive, I did it as a directory. It appears to be showing as LVM now... I can't remember if it showed like that before or not, but I distinctly recall selecting "directory" when I first set it up.
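Whether the disk is really a directory storage or LVM can be checked from the Proxmox shell. A diagnostic sketch using standard tools:

```shell
# Show filesystems and LVM signatures per partition.
lsblk -f

# These list physical volumes, volume groups and logical volumes;
# they only show the disk if it really carries LVM metadata.
pvs; vgs; lvs

# Storage definitions as Proxmox sees them (dir, lvm, lvmthin, ...).
cat /etc/pve/storage.cfg

# Per-storage status, matching what the webUI shows.
pvesm status
```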
 

Attachments

  • prx3.JPG
  • prx2.JPG
  • prx1.JPG
Not sure how to help you, but the whole setup sounds wrong. TrueNAS uses ZFS, and ZFS shouldn't be used on top of RAID: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers

If you used the webUI to create a RAID 0, then it's ZFS as software RAID, and ZFS on top of ZFS is also a bad idea, as ZFS has massive overhead and nested ZFS will multiply that overhead.
Let's say ZFS has 3 times the write amplification compared to LVM-Thin. Then you only get 1/3 of the performance and the disks will wear 3 times faster. Put ZFS on top of ZFS and you get a write amplification factor of 9 (3*3), so only 1/9 of the performance and disks that die 9 times faster.
And you will waste a lot of space. A ZFS pool should only be filled to 80%. With ZFS on top of ZFS, that means only 64% of the capacity is usable, because you only get 80% of 80%.

And RAID 0 in general is a terrible idea for everything except temporary data, as losing a single disk will cause you to lose everything. So it's even less reliable than a single disk without any RAID.
And HDDs/SSDs are consumables. All of them will fail sooner or later. So with a single disk or RAID 0, it's not a question of if you will lose all your data, only how early.
And usually you choose TrueNAS with its ZFS because you really care about your data. Otherwise, running ZFS would be way too expensive.
 
I understand RAID 0. All data being put on the RAID 0 drive is moved off it very quickly, so failure isn't a concern. Speed was more important.

When I set up the two drives for TrueNAS, they were done in MegaRAID in the server BIOS.


If it is unrecoverable, what is your suggestion for the configuration of the VMs and storage?

I need Nextcloud running so that it is quick and easy to access for immediate data, but able to move all its data to a backup drive in the background.
 
In my opinion, the best option would be to buy an HBA card, use PCI passthrough to pass that HBA with all of its disks into the TrueNAS VM, and then create the RAID inside TrueNAS. You could then use iSCSI/NFS/SMB for your Nextcloud data directory so that the big data part of that VM is stored on the TrueNAS too. Another option would be to use disk passthrough. But keep in mind that with it, TrueNAS will only see virtual disks and has no direct access to the real physical disks like you would get with PCI passthrough of an HBA.
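The HBA passthrough route sketched above could look like this on the Proxmox host (this assumes IOMMU is already enabled in BIOS and on the kernel command line; the VM ID 100 and the PCI address 0000:01:00.0 are placeholders for your setup):

```shell
# Find the HBA's PCI address (vendor/device names vary per card).
lspci -nn | grep -i -e sas -e hba -e lsi

# Pass the whole controller, with all attached disks, into the TrueNAS VM.
qm set 100 -hostpci0 0000:01:00.0
```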

Or if you care more about performance and less about your data, then ZFS in general (and so TrueNAS) isn't a good choice. It might be better then to use a HW RAID card with cache and BBU, skip the whole ZFS/TrueNAS idea, and use LVM-Thin instead to store some virtual disks. You could then use VMs or LXCs with these virtual disks, format them with something simple like ext4 or XFS, and run your Nextcloud/NAS software on them.
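The LVM-Thin alternative can be set up from the CLI as well as the webUI. A minimal sketch, assuming the RAID volume shows up as /dev/sdb and using placeholder names (vg_data, thinpool, data-thin):

```shell
# Create an LVM thin pool spanning the device and register it as
# Proxmox storage for virtual disks.
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -l 100%FREE -T vg_data/thinpool
pvesm add lvmthin data-thin --vgname vg_data --thinpool thinpool
```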
 
In my opinion, the best option would be to buy a HBA card, use PCI passthrough to passthrough that HBA with all of its disks into the TrueNAS VM and then create the raid inside TrueNAS. You could then use iSCSI/NFS/SMB for your Nextcloud data directory so that the big data part of that VM is stored on the TrueNAS too. Another option would be to use disk passthrough. But keep in mind that with it, TrueNAS will only see virtual disks and got no direct access to the real physical disks like you would get with using PCI passthrough of a HBA.

Or if you care more about performance and less about your data, then ZFS in general (and so TrueNAS) isn't a good choice. Might be better than to use a HW Raid card with cache and BBU and skip the whole ZFS/TrueNAS idea and use LVM-Thin instead to store some virtual disks. You could then use VMs or LXCs with these virtual disks, format them with something simple like ext4 or XFS and run your Nextcloud/NAS software on them.
Thanks for the input. I'll consider all of it.

In the meantime, I'd like to understand why the one storage area became lost.

What would be the method for setting up Proxmox with separate storage drives? It seems like this should be easily done with no chance of the disk going missing. Isn't it the point of virtualization to get away from having to add more physical hardware?

Essentially I was just running two different machines that shared a common folder, and there was a breakdown somewhere.
 
Isn't it the point of virtualization to get away from having to add more physical hardware?
But you talked about performance, and there you will get the best performance by keeping it simple, with as few nested filesystems and storage or virtualization layers as possible. Stuff like ZFS is great, but the overhead easily adds up. One example I measured: when I write 1GB of 4k sync writes inside a Debian VM, it writes 62GB of data to the NAND of my SSDs. So every 1GB causes 62GB of writes, because of a write amplification factor of 62. I still use it, because I like the features and great data integrity of ZFS, but an SSD that wears 62 times faster with just 1/62th of the performance is really bad if you compare it to some HW RAID with XFS on top.
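A measurement like the one above can be reproduced with fio and smartctl. A sketch, where the test file path, device name, and SMART attribute names are assumptions that vary per system and SSD model:

```shell
# Inside the VM: generate 1GB of 4k sync writes (fio must be installed).
fio --name=syncwrite --filename=/tmp/testfile --size=1G \
    --bs=4k --rw=write --ioengine=sync --fsync=1

# On the host, read the SSD's NAND-writes counter before and after the
# run and compare; the attribute name differs between SSD vendors.
smartctl -a /dev/sda | grep -i -e written -e nand
```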
What would be the method for setting up Proxmox with separate storage drives? It seems like this should be easily done with no chance of the disk going missing.
You can format single disks with LVM/ZFS/LVM-Thin/ext4/xfs in the webUI at "YourNodeName -> Disks". Or you manually partition, format and mount your disks and add them as a storage at "Datacenter -> Storage -> Add".
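The manual route described here could be sketched as follows; the device name /dev/sdb, mountpoint, and storage ID are placeholders for illustration:

```shell
# Partition, format and mount the disk.
sgdisk -n 1:0:0 /dev/sdb          # one partition spanning the whole disk
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/datadisk
mount /dev/sdb1 /mnt/datadisk
# Add a matching /etc/fstab entry (by UUID) so the mount survives reboots,
# then register the mountpoint as a directory storage in Proxmox.
pvesm add dir datadisk --path /mnt/datadisk --content images,iso,backup
```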
I successfully added the drive and had it working properly between the two, and a folder on it was mounted on the Ubuntu server.
By the way... you can't add the same virtual disk to two VMs. If you do that, you corrupt your data when it's mounted on both VMs at the same time. That's like connecting the same physical HDD to two PCs in parallel. You need to use SMB/NFS network shares if you want to share data between two VMs.
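A minimal NFS share between two VMs, as suggested here, could look like this (the export path, subnet and server IP are placeholders, and the apt commands assume Debian/Ubuntu guests):

```shell
# On the VM that owns the data (e.g. the Ubuntu server): export it via NFS.
apt install nfs-kernel-server
echo '/srv/shared 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On the other VM: mount the share over the network.
apt install nfs-common
mount -t nfs 192.168.1.10:/srv/shared /mnt/shared
```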
 
I ran across this:

https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)


With some reading I think I understand just a bit of why ZFS can cause some havoc with what I'm trying to do, but I am very much a newbie, so this is another piece of the puzzle that I'm learning as I go about storage systems. It's a whole other world that I didn't realize existed... I thought all storage was the same and you just wrote data to a folder. :\
 
Responding with what fixed this, so someone in the future who searches this out doesn't come to a dead end.

Apparently when I set up TrueNAS I did not pass the HDD through to it properly with the serial number. So when I shut down the TrueNAS VM, it lost its path to the storage drive and all info on it was gone.

A very simple fix:
1. Enter the Proxmox shell.
2. ls -l /dev/disk/by-id (to see the missing drive's serial ID)
3. Enter this into the CLI: qm set 105 -scsi1 /dev/disk/by-id/<hard drive serial number>

Then the drive reappeared, I could start the VM normally, and its pool showed back up again in TrueNAS. No data loss, voila...
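To check that a fix like this is actually persisted in the VM's configuration (and so survives future shutdowns), the config can be inspected from the Proxmox shell, using VM ID 105 as in the steps above:

```shell
# The scsi1 line should show the /dev/disk/by-id path of the drive.
qm config 105 | grep -i scsi
```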

Is this the best way to set up TrueNAS? That remains to be seen as of now. After reading Dunuin's comments and doing some brief research, there seems to be a lot more at stake when using ZFS systems the way I'm intending. More experimentation will be necessary. It's quite possible my configuration continues on just fine for my purposes. But if you're storing sensitive documents, I see no reason to take any risks.

This video was most helpful in understanding how to pass the drives through to Proxmox:

https://www.youtube.com/watch?v=2mvCaqra6qY
 