Help please to sort out my storage

Batterbits

[Newbie useless git alert]
I have been running Proxmox for a couple of years, muddling through. It has become apparent that I haven't got a clue what I'm doing with regard to storage.

I currently have 2 VMs (Home Assistant and Blue Iris) and 3 CTs (Plex, Frigate and PiHole).
I want to move to Agent DVR for my surveillance, but I seem not to have any available space on my 10TB of disk, even though (to my mind) I'm not using it correctly.
I'd like to have 2 large storage areas on my filesystem, one for Plex media and one for CCTV recordings.
I would ideally like to keep my existing HA VM, the Plex CT, and the BI setup until I have Agent DVR set up and running. The Frigate CT can go, but I want to play with it again in the future.

Looking at the Plex CT, for example (I keep missing recordings due to "out of space" errors):
[screenshot]
Yet this is a different size to the CT volume:
[screenshot]
And in the console:
[screenshot]
[screenshot]


So, my questions:
What is the "best" way of assigning storage? Is it ZFS, or just directories, or thin volumes? And can I achieve this without starting again from a bare-metal install?
Is it possible to give the containers and VMs access to a big bucket of space, and for them to use what they need?


This is my hardware:
[screenshot]
LVM:
[screenshot]
LVM-Thin:
[screenshot]
ZFS:
[screenshot]

I accept that I should have come here first, so my apologies for trying to do it for myself!
 
Don't use the df command when working with ZFS. Use zfs list -o space and zpool list if you want to know how full your ZFS pools are. And keep in mind that a ZFS pool shouldn't be filled up more than 80%.

Would also be helpful to show us your pool layout: zpool status
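Something like this (assuming your pool is called ZFS-Data, as in the later posts - adjust if yours differs):

zpool list ZFS-Data              # overall size, allocated and free space of the pool
zfs list -o space -r ZFS-Data    # per-dataset/zvol usage, including snapshots and reservations
zpool status ZFS-Data            # pool layout (vdevs, raidz level, member disks) and health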
 
What do zpool get ashift ZFS-Data and zfs get volblocksize report?

I think I see two problems:
1.) You use a raidz1 but probably didn't increase the block size of your ZFS storage before creating your zvols. So you are probably wasting capacity, as every zvol will be bigger than needed because of padding overhead.
2.) Your ZFS storage is either not set to use thin provisioning, or your BI VM isn't using discard, so deleted stuff inside the guest won't actually be freed up on the ZFS pool.
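Both can be checked from the shell if you prefer - a rough sketch (the zvol/VM names are just examples, list your real ones with zfs list -t volume and qm list):

zpool get ashift ZFS-Data                             # pool sector-size exponent (12 = 4K sectors)
zfs get volblocksize ZFS-Data/vm-101-disk-0           # block size of one zvol (example name)
grep -A 10 "zfspool: ZFS-Data" /etc/pve/storage.cfg   # a "sparse 1" line means thin provisioning is on
qm config 101 | grep -E 'scsi|sata|virtio'            # the disk line should contain "discard=on"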
 
Here are those values:

[screenshot: ashift and volblocksize values]
Jup, that is bad. Now, every VM (but not LXC) basically needs 150% space. With a blocksize of 16K they would only need 112%, and with a blocksize of 64K only 103%. But this can't be changed later without destroying and recreating the virtual disks. The easiest way to fix this would be to back up your VMs 100 and 101, then change the "Block Size" at "Datacenter -> Storage -> ZFS-Data -> Edit" from the default 8K to 16K or 64K, and then restore both VMs, so new virtual disks with the correct volblocksize will be created.
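(For anyone wondering where those percentages come from, assuming a 4-disk raidz1 with ashift=12, i.e. 4K sectors: an 8K block is 2 data sectors + 1 parity sector, and raidz pads every allocation to a multiple of parity+1 = 2 sectors, so 4 sectors (16K) get written; measured against the pool's nominal 3/4 data ratio that reports as 16K * 3/4 = 12K, i.e. 150% of the 8K of data. A 16K block is 4 data + 2 parity = 6 sectors, 24K * 3/4 = 18K, ~112%; a 64K block is 16 data + 6 parity = 22 sectors, 88K * 3/4 = 66K, ~103%.)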
 
Then I would make sure that the "Thin Provisioning" checkbox is set at "Datacenter -> Storage -> ZFS-Data -> Edit" as well, so you don't waste space when a VM/LXC isn't using it.

And I would also set a pool-wide quota, so you can never completely fill the pool by accident. zfs set quota=1.96T ZFS-Data for example would make sure that 20% of the pool is always kept free, which you should be doing manually anyway. Or a 90% quota with zfs set quota=2.2T ZFS-Data, where you then need to monitor the pool and remove stuff as soon as it is filled more than 1.96T.

Filling the pool more than 80% will make it slow and fragment faster, so this should be avoided.
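If you prefer the CLI, the same two settings look roughly like this (storage/pool name ZFS-Data as above; 1.96T being roughly 80% of your usable pool space):

pvesm set ZFS-Data --sparse 1            # same as ticking the "Thin provisioning" checkbox in the GUI
zfs set quota=1.96T ZFS-Data             # hard cap so the pool can never be filled past ~80%
zfs get quota,used,available ZFS-Data    # verify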
 
Thanks Dunuin.
I can easily take a backup of 100 - it's only 40GB, but the BlueIris VM (101) is 1.4TB, which I don't seem to have room for:

[screenshot]

It has about 600GB of files which I don't need, but I think the backup will still require 1.4TB even if I delete the old files. Any way around this?
 
Make sure the "discard" checkbox of the VM's virtual disks is set. Then let the guest OS do a trim/discard (on Linux guests this can be done by running fstrim -a) after deleting unneeded stuff inside the VM. When you then store the backup with zstd compression it shouldn't be that big, as empty space should be highly compressible.
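Roughly, from the CLI (VM ID 101 and disk scsi0 are just examples - check qm config first and copy whatever options your disk line already has):

qm config 101 | grep -E 'scsi|sata|virtio'              # find the disk line and its current options
qm set 101 --scsi0 ZFS-Data:vm-101-disk-0,discard=on    # re-set the disk with discard=on added to those options
# inside the (Linux) guest, after deleting the unneeded files:
fstrim -av
# then create the backup with zstd compression:
vzdump 101 --compress zstd --storage local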
 
Thanks Dunuin, I really appreciate your help.

I ended up deleting the BI instance. With the other tweaks, my Plex instance is now showing the full 1.6TB, so hopefully no more failed recordings.
I'd like to be able to share the pool between the CCTV recordings, Plex recordings, and additional storage for my Enigma2 recorder.
I'd prefer to have one large storage area, rather than having to specify what each device is able to use.

So, for example: 500GB for containers & VMs, and all the rest as "variable" storage, which can shrink or grow for each VM/CT as required.
It seems that I have a mixture of LVM and ZFS storage, so can I combine them (or recreate) into one large storage?

[screenshot]
I'm confused by this output - the GUI shows:
[screenshot]
Also, it seems like I have an unused 4.5TB LVM:
[screenshot]
Would I be able to reclaim that space, and add it to the ZFS pool?
Finally, I have a couple of disks which look different from the rest - /dev/sdb and /dev/sdg (see my first post) which are both Type=Unknown and GPT=no. Do I need to do anything with these?
 
It seems that I have a mixture of LVM and ZFS storage, so can I combine them (or recreate) into one large storage?
Jup, you got three types of storage: LVM for the root filesystem ("local"), LVM-Thin for VMs/LXCs ("local-lvm"), then your big ZFS storage on pool ZFS-Data, and then another LVM or LVM-Thin storage on sdg. You can't combine ZFS / LVM / LVM-Thin.
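You can see all of them side by side with (a sketch - output layout from memory):

pvesm status               # each storage with its type (lvm, lvmthin, zfspool, dir, ...) and usage
cat /etc/pve/storage.cfg   # the underlying configuration of every storage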

Would I be able to reclaim that space, and add it to the ZFS pool?
Finally, I have a couple of disks which look different from the rest - /dev/sdb and /dev/sdg (see my first post) which are both Type=Unknown and GPT=no. Do I need to do anything with these?
You use a raidz1 pool, which can't be expanded yet. If you don't want those sdb+sdg LVM disks, you would need to destroy that ZFS pool and recreate it with sdb+sdg as a 5th and 6th disk.
For better IOPS performance, I would recommend a 6-disk striped mirror (three 2-disk mirrors striped together) or at least a 6-disk striped raidz1 (two 3-disk raidz1 striped together), as you also store virtual disks on it. Or, for better data integrity, a 6-disk raidz2.
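For illustration, the three layouts would be created roughly like this (pick one; disk names are placeholders - use /dev/disk/by-id/... paths in practice, and note that zpool create wipes the member disks):

zpool create -o ashift=12 ZFS-Data mirror sda sdc mirror sdd sde mirror sdf sdh   # striped mirrors: best IOPS, 50% usable
zpool create -o ashift=12 ZFS-Data raidz1 sda sdc sdd raidz1 sde sdf sdh          # two striped 3-disk raidz1: ~66% usable
zpool create -o ashift=12 ZFS-Data raidz2 sda sdc sdd sde sdf sdh                 # one raidz2: ~66% usable, any 2 disks may fail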

And RAID never replaces a real backup. You should still have a copy of everything on another NAS/disk. So "I don't have the space to create backups of everything before destroying the ZFS pool" isn't an excuse - then you really should buy more disks for backups.
 
So, looking at my lvm-thin storage, I think it might be broken. In any case, it looks like I'm not using it:

[screenshot]
But I have 2 VM disks (20GB total) on my local-lvm storage - I think that's on the pve volume group though.

[screenshot]

I also get the following warnings in the log:
WARNING: Not using device /dev/sdg for PV xUfQRF-07u2-AONq-UeBN-HcJ2-X9lT-1ADp0Z.
WARNING: PV xUfQRF-07u2-AONq-UeBN-HcJ2-X9lT-1ADp0Z prefers device /dev/sdb because device name matches previous.
WARNING: Device /dev/sdb has size of 1758109232 sectors which is smaller than corresponding PV size of 8790545968 sectors. Was device resized?
WARNING: One or more devices used as PVs in VG Data have changed sizes.


Any advice on how to fix this - or should I delete it and recreate it? I don't mind what type of storage it is, but it's 4TB of unused storage currently.
 
