Understanding storage, lack of true on-ramp for newbies

marq

Trying to stay calm (forgive me, I'm frustrated and cannot find an answer). I have been really enjoying Proxmox, and it has brought a lot of joy to my world for the last 6 months.

I was given a fantastic old Dell PowerEdge server with a lot of hardware (48 cores, 96 GB of RAM and 15 TB of storage). I watched a lot of YouTube to get my storage right. Long story short, after doing everything I had learned to set up my storage (making a template drive, ISO, container and VM disks, ZFS, and even connecting my Synology), I go to deploy a VM today and get an error, because for some reason every LXC or VM drops something on "local", and local is now FULL. Local is the drive that holds the Proxmox OS. On the Dell it is two flash memory cards in a hardware RAID.

When you create storage in Datacenter, you can say this one is for backups, this one is for snippets and templates, and this one is for ISOs, but NOWHERE have I found any information on what that actually means: when I deploy a VM or LXC, this goes here and that goes there. I am sitting on literally 27 TB of drives, and for some reason my setup has vast amounts of empty space while Proxmox tries to cram the containers onto local, a 14 GB card from Amazon. No Google search has given a clear answer to this frankly simple question. I never get an option to select where to put the container itself when I install a new LXC, only where to put the container's disks. Local is automatically where the container is placed.

Again, I'm sure everyone is rolling their eyes, but as someone trying to learn and doing all they can not to bother anyone: the STORAGE page on the Proxmox website goes deep on ZFS and so on, but I have yet to find anywhere that tells me that when I say a drive (in Datacenter) is for "THIS", it means "THAT" will be put there when I deploy something.

I am now removing most of the LXCs I have spooled up and trying to figure out how to get things off my local and onto the other drives I have set up for containers.

Can anyone tell me: if I deploy an LXC, what do I need to set on a drive (or pool) so the container is deployed there? It only ever asks me where I want the VM's/container's disk.

Thank you.
 
You have to let PVE know which storage it should use in what way. Just having some physical disks won't do that. See Datacenter > Storage.
If you format a physical disk via node > Disks > ZFS (just an example), it also lets you create a fitting storage for it.
For example, local should be used for files such as ISOs, backups, templates, etc., while local-lvm would be used for virtual disks.
Storages also have content types you can select, which limit what each storage can be used for. local isn't configured to store virtual disks by default, for example.
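The same content types can also be set from the shell with pvesm (just an illustration; "mypool" is a placeholder name, adjust to your setup):
Bash:
# allow a storage named "mypool" (placeholder) to hold VM disks and container root disks
pvesm set mypool --content images,rootdir
# limit "local" to ISO images and container templates
pvesm set local --content iso,vztmpl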
These storages are the default ones when using default installer values, but if you show me the output of the following I can tell you more and give recommendations:
Bash:
lsblk -o+FSTYPE,MODEL
cat /etc/pve/storage.cfg
Once you have another storage you can move these virtual disks there via the GUI in Hardware/Resources.
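If you prefer the CLI, recent PVE versions have rough equivalents (a sketch; the guest IDs and the target storage name are placeholders):
Bash:
# move the scsi0 disk of VM 100 to another storage and delete the old copy
qm disk move 100 scsi0 target-storage --delete 1
# move the root volume of container 101 to another storage and delete the old copy
pct move-volume 101 rootfs target-storage --delete 1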
 
Impact, thank you for the help. I tried to give you as much info as I could. In the end, I see that "local" says ISO, but I frankly want nothing on it (it has the OS), and it does not let me pick the other drive for ISOs.

When I got the server it had 14 1.4 TB drives, 5 blank slots and 6 250 GB SSDs. I filled the blank slots with some drives for backups. There is also the "boot" drive, the two flash memory cards in RAID in the back of the server; that is where I installed the Proxmox OS. A NAS is connected for media as well.

I took the 14 drives and made a single ZFS pool; these are the disks for my containers.

I put the other 6 drives into a ZFS pool; this is the one I split up and where I wanted "everything else" except backups to go.

In addition, I use the 2 TB SSD I had for backups, and I'm not really using the other drives I added.


Bash:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS     FSTYPE      MODEL
sda                  8:0    0   1.1T  0 disk                             AL14SEB120NY
├─sda1               8:1    0   1.1T  0 part                 zfs_member 
└─sda9               8:9    0     8M  0 part                             
sdb                  8:16   0   1.1T  0 disk                             AL14SEB120NY
├─sdb1               8:17   0   1.1T  0 part                 zfs_member 
└─sdb9               8:25   0     8M  0 part                             
sdc                  8:32   0   1.1T  0 disk                             AL14SEB120NY
├─sdc1               8:33   0   1.1T  0 part                 zfs_member 
└─sdc9               8:41   0     8M  0 part                             
sdd                  8:48   0   1.1T  0 disk                             AL14SEB120NY
├─sdd1               8:49   0   1.1T  0 part                 zfs_member 
└─sdd9               8:57   0     8M  0 part                             
sde                  8:64   0   1.1T  0 disk                             AL14SEB120NY
├─sde1               8:65   0   1.1T  0 part                 zfs_member 
└─sde9               8:73   0     8M  0 part                             
sdf                  8:80   0   1.1T  0 disk                             AL14SEB120NY
├─sdf1               8:81   0   1.1T  0 part                 zfs_member 
└─sdf9               8:89   0     8M  0 part                             
sdg                  8:96   0   1.1T  0 disk                             AL14SEB120NY
├─sdg1               8:97   0   1.1T  0 part                 zfs_member 
└─sdg9               8:105  0     8M  0 part                             
sdh                  8:112  0   1.1T  0 disk                             AL14SEB120NY
├─sdh1               8:113  0   1.1T  0 part                 zfs_member 
└─sdh9               8:121  0     8M  0 part                             
sdi                  8:128  0   1.1T  0 disk                             AL14SEB120NY
├─sdi1               8:129  0   1.1T  0 part                 zfs_member 
└─sdi9               8:137  0     8M  0 part                             
sdj                  8:144  0   1.1T  0 disk                             AL14SEB120NY
├─sdj1               8:145  0   1.1T  0 part                 zfs_member 
└─sdj9               8:153  0     8M  0 part                             
sdk                  8:160  0   1.1T  0 disk                             AL14SEB120NY
├─sdk1               8:161  0   1.1T  0 part                 zfs_member 
└─sdk9               8:169  0     8M  0 part                             
sdl                  8:176  0   1.1T  0 disk                             AL14SEB120NY
├─sdl1               8:177  0   1.1T  0 part                 zfs_member 
└─sdl9               8:185  0     8M  0 part                             
sdm                  8:192  0   1.1T  0 disk                             AL14SEB120NY
├─sdm1               8:193  0   1.1T  0 part                 zfs_member 
└─sdm9               8:201  0     8M  0 part                             
sdn                  8:208  0   1.1T  0 disk                             AL14SEB120NY
├─sdn1               8:209  0   1.1T  0 part                 zfs_member 
└─sdn9               8:217  0     8M  0 part                             
sdo                  8:224  0 298.1G  0 disk                             WDC WD3200BVVT-63A26Y0
├─sdo1               8:225  0 298.1G  0 part                 zfs_member 
└─sdo9               8:233  0     8M  0 part                             
sdp                  8:240  0 298.1G  0 disk                             WDC WD3200BVVT-63A26Y0
├─sdp1               8:241  0 298.1G  0 part                 zfs_member 
└─sdp9               8:249  0     8M  0 part                             
sdq                 65:0    0 465.8G  0 disk                             Samsung SSD 850 EVO 500GB
sdr                 65:16   0   1.8T  0 disk                             Samsung SSD 860 EVO 2TB
└─sdr1              65:17   0   1.8T  0 part /mnt/pve/backup ext4       
sds                 65:32   0 223.6G  0 disk                             SSDSC2KG240G7R
├─sds1              65:33   0 223.6G  0 part                 zfs_member 
└─sds9              65:41   0     8M  0 part                             
sdt                 65:48   0 223.6G  0 disk                             SSDSC2KG240G7R
├─sdt1              65:49   0 223.6G  0 part                 zfs_member 
└─sdt9              65:57   0     8M  0 part                             
sdu                 65:64   0 223.6G  0 disk                             SSDSC2KG240G7R
├─sdu1              65:65   0 223.6G  0 part                 zfs_member 
└─sdu9              65:73   0     8M  0 part                             
sdv                 65:80   0 223.6G  0 disk                             SSDSC2KG240G7R
├─sdv1              65:81   0 223.6G  0 part                 zfs_member 
└─sdv9              65:89   0     8M  0 part                             
sdw                 65:96   0 223.6G  0 disk                             SSDSC2KG240G7R
├─sdw1              65:97   0 223.6G  0 part                 zfs_member 
└─sdw9              65:105  0     8M  0 part                             
sdx                 65:112  0 223.6G  0 disk                             SSDSC2KG240G7R
├─sdx1              65:113  0 223.6G  0 part                 zfs_member 
└─sdx9              65:121  0     8M  0 part                             
sdy                 65:128  0  29.8G  0 disk                             IDSDM
├─sdy1              65:129  0  1007K  0 part                             
├─sdy2              65:130  0   512M  0 part /boot/efi       vfat       
└─sdy3              65:131  0  29.3G  0 part                 LVM2_member
  ├─pve-swap       252:0    0   3.6G  0 lvm  [SWAP]          swap       
  ├─pve-root       252:1    0  12.8G  0 lvm  /               ext4       
  ├─pve-data_tmeta 252:2    0     1G  0 lvm                             
  │ └─pve-data     252:4    0  10.8G  0 lvm                             
  └─pve-data_tdata 252:3    0  10.8G  0 lvm                             
    └─pve-data     252:4    0  10.8G  0 lvm


Here is the output of the other command.

Bash:
dir: local
        path /var/lib/vz
        content iso
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images

zfspool: tank
        pool tank
        content rootdir,images
        mountpoint /tank
        nodes pve

zfspool: vm-disks
        pool vm-disks
        content images,rootdir
        mountpoint /vm-disks
        nodes pve

dir: ISO
        path /tank/iso
        content iso
        prune-backups keep-all=1
        shared 0

dir: templates
        path /tank/templates
        content import,vztmpl,snippets
        prune-backups keep-all=1
        shared 0

zfspool: dump
        pool dump
        content images,rootdir
        mountpoint /dump
        nodes pve

cifs: nas-media
        path /mnt/pve/nas-media
        server 192.168.86.11
        share Media
        content images
        prune-backups keep-all=1
        username xxxxxxx

dir: backup
        path /mnt/pve/backup
        content import,backup,snippets,vztmpl
        is_mountpoint 1
        nodes pve
        shared 0
 
I took the 14 drives and made a single ZFS pool
Hopefully with more than one drive of redundancy. With so many drives it might be best to create a few mirrors, or at least have multiple vdevs, for performance. The output of zfs list and zpool status might be interesting here.
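Both are read-only and safe to run:
Bash:
# list datasets and how much space each one uses
zfs list
# show pool layout (vdevs/mirrors) and health
zpool status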

frankly want nothing on it (it has the OS), and it does not let me pick the other drive for ISOs
You can disable storages such as local if you don't use them. local-lvm appears unused as well. I see another storage with ISO content, which should be selectable in the appropriate places.
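If you prefer the shell, the same can be done with pvesm (disabling is reversible):
Bash:
# hide the "local" storage from selection without deleting it
pvesm set local --disable 1
# re-enable it later if needed
pvesm set local --disable 0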

I like to keep the number of storages fairly minimal, but it seems like you've got the hang of it. Let me know if there's anything still unclear.
 
Impact, thank you again for helping

For the 14 drives, when I made the pool, I set the RAID so each drive is mirrored. The ZFS pool is 8.36 TB. I did the same thing for the tank. The output is posted below.

When you say "You can disabled storage as local, do you mean Datacenter >> Storage >> Local and remove the Enable checkmark? It can still be used as the boot drive? Would this keep it from being used for anything other than the OS?

I just deployed an LXC via a helper script, and after I made some changes it let me select where to drop the template... but as you can see in the screenshot, it still went and grabbed debian-12 and dropped it on local without asking me. This has been my issue.

[screenshot attached]


Bash:
NAME                         USED  AVAIL  REFER  MOUNTPOINT
dump                        1.59M   289G    96K  /dump
tank                         128M   645G   104K  /tank
tank/iso                     104K   645G   104K  /tank/iso
tank/templates               124M   645G   124M  /tank/templates
vm-disks                    6.82G  7.47T   144K  /vm-disks
vm-disks/subvol-100-disk-0   596M  3.42G   596M  /vm-disks/subvol-100-disk-0
vm-disks/subvol-101-disk-0  1.64G  2.36G  1.64G  /vm-disks/subvol-101-disk-0
vm-disks/subvol-103-disk-0   590M  1.42G   590M  /vm-disks/subvol-103-disk-0
vm-disks/subvol-107-disk-0  2.00G  2.00G  2.00G  /vm-disks/subvol-107-disk-0
vm-disks/subvol-108-disk-0   517M  3.50G   517M  /vm-disks/subvol-108-disk-0
vm-disks/subvol-117-disk-0  1.20G  2.80G  1.20G  /vm-disks/subvol-117-disk-0

  pool: dump
 state: ONLINE
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Sun Aug 10 00:24:03 2025
config:

        NAME                                            STATE     READ WRITE CKSUM
        dump                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            ata-WDC_WD3200BVVT-63A26Y0_WD-WXF1A81L4253  ONLINE       0     0     0
            ata-WDC_WD3200BVVT-63A26Y0_WD-WX11E7141702  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 00:00:00 with 0 errors on Sun Aug 10 00:24:02 2025
config:

        NAME                                       STATE     READ WRITE CKSUM
        tank                                       ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            ata-SSDSC2KG240G7R_BTYM738404NC240AGN  ONLINE       0     0     0
            ata-SSDSC2KG240G7R_BTYM748103WP240AGN  ONLINE       0     0     0
          mirror-1                                 ONLINE       0     0     0
            ata-SSDSC2KG240G7R_BTYM73840395240AGN  ONLINE       0     0     0
            ata-SSDSC2KG240G7R_BTYM738405VF240AGN  ONLINE       0     0     0
          mirror-2                                 ONLINE       0     0     0
            ata-SSDSC2KG240G7R_BTYM748103BD240AGN  ONLINE       0     0     0
            ata-SSDSC2KG240G7R_BTYM738409DL240AGN  ONLINE       0     0     0

errors: No known data errors

  pool: vm-disks
 state: ONLINE
  scan: scrub repaired 0B in 00:00:59 with 0 errors on Sun Aug 10 00:25:02 2025
config:

        NAME                        STATE     READ WRITE CKSUM
        vm-disks                    ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            scsi-350000398382b4045  ONLINE       0     0     0
            scsi-350000398382b22dd  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            scsi-350000398382b292d  ONLINE       0     0     0
            scsi-350000398382b18a5  ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            scsi-35000039838428be1  ONLINE       0     0     0
            scsi-350000398382b2815  ONLINE       0     0     0
          mirror-3                  ONLINE       0     0     0
            scsi-350000398382b28fd  ONLINE       0     0     0
            scsi-350000398382b4099  ONLINE       0     0     0
          mirror-4                  ONLINE       0     0     0
            scsi-350000398382b41cd  ONLINE       0     0     0
            scsi-350000398382b42d9  ONLINE       0     0     0
          mirror-5                  ONLINE       0     0     0
            scsi-350000398382b4a41  ONLINE       0     0     0
            scsi-350000398382b4a69  ONLINE       0     0     0
          mirror-6                  ONLINE       0     0     0
            scsi-350000398382b4885  ONLINE       0     0     0
            scsi-350000398382b7289  ONLINE       0     0     0

errors: No known data errors
 
I think I figured out why local is used: that is where the ISOs already are. I need to go in there and find them all to free up the space.
 
Disabling the local storage in Datacenter > Storage does not hinder or break any OS functionality. It's simply a Directory storage pointing to /var/lib/vz, similar to how your ISO storage points to /tank/iso. local is special in that you can't (permanently) delete it, but disabling it is fine.
I don't use these scripts myself, but if you visit your templates storage in the sidebar you should see a CT Templates tab where you can download/upload templates. templates should also show up as a Storage option in the Template tab during CT creation.
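From the CLI the download target can be chosen explicitly as well (a sketch; the exact template file name comes from the pveam available output):
Bash:
# refresh the list of available container templates
pveam update
# list what can be downloaded
pveam available --section system
# download a template to the "templates" storage instead of local
pveam download templates <template-file-from-list>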

To find the iso/template you can do
Bash:
find /var/lib/vz -type f \( -name "*.iso" -or -name "*.zst" \)
or simply set the ISO/template content type on local again and delete via the GUI by visiting the storage on the sidebar.
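You can also list what a storage currently holds (read-only; assumes the relevant content types are enabled on it):
Bash:
# show everything PVE knows about on the "local" storage
pvesm list local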
 
Cool, I was able to find the ISO that crashed my system and led me to this post, sitting in a temp directory under vz. Removing that file and another copy of it took the drive from 99.4% to 56.7%.

Strangely, I ran both of the commands you shared, Impact, and both listed nothing... they are in a secret location, I guess. As long as future stuff is stored somewhere else, I "should" be okay.

Thank you for helping me out... we newbies are doing our best to get up to speed.
 
It might be somewhere else. You could alter the path to search the whole system but that might take quite a while.
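For example, something like this searches the whole system while skipping the mounts under /mnt/pve (a sketch; it can still take a while):
Bash:
# search / for ISO and zst files, pruning /mnt/pve (NAS/backup mounts) and hiding permission errors
find / -path /mnt/pve -prune -o -type f \( -name "*.iso" -o -name "*.zst" \) -print 2>/dev/null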
Another option is to install locate (apt install locate), run updatedb and then search like this
Bash:
locate --regextype egrep --regex "\.(iso|zst)$"
There are some default exclusions and it doesn't index network shares (alter via config in /etc/updatedb.findutils.cron.local) so you might not find everything.

I'm a big fan of gdu to investigate storage usage
Bash:
apt install gdu
gdu /
It's like WizTree for the CLI if you're familiar with that.
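If you'd rather not install anything extra, plain du gives a rough overview too (slower and less interactive):
Bash:
# show the 20 largest directories up to two levels deep, staying on the root filesystem (-x)
du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 20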
 
Will take a look. I just broke it again. I set local to not active and went to do the thing I originally started to do: deploy a VM. It went to get the ISO, and during the extract after downloading it crashed again, because it extracted the ISO to the local drive even though I have the storage set to not active. /var/tmp looks to be on the same flash memory card.

I will recover and go look for all these ISOs to make space.
 
Some things are temporarily written to /var/tmp/. This is hardcoded and not related to the local storage.
Backups can also use temporary storage, for example if there is no snapshot support, but that path can be overridden with --tmpdir.
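For example (a sketch; the guest ID and the temp directory path are placeholders):
Bash:
# create a temp directory on a pool with plenty of free space
mkdir -p /vm-disks/tmp
# back up guest 100 to the "backup" storage, using that temp dir instead of /var/tmp
vzdump 100 --storage backup --tmpdir /vm-disks/tmp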
If you don't use the local-lvm storage (data) you could remove it and then extend the root volume (where / is on) with something like this.
Bash:
lvremove pve/data
lvresize -r -l +99%FREE pve/root
Verify with lvs and df -hT.
Note that installing PVE on an SD card is a bad idea and not really supported.
 
I removed pve/data because I was not using it. I got the output below.

Bash:
root@pve:~# lvresize -r -l +99%FREE /pve/root

  "/pve/root": Invalid path for Logical Volume.

  Run `lvresize --help' for more information

I guess I could grab the backups of the containers I would REALLY like to save and wipe the whole server, go ahead and upgrade to 9, and restore from the backups. This time I will split the drives differently: use the hardware RAID controller via the Dell Lifecycle Controller to mirror two of the 14 1.4 TB drives for the OS disks.

ZFS the remaining 12 in RAID 1 (mirrors) and use that for container disks. ZFS the 6 SSDs in RAID 1 and put all the other things in that pool. Use the one 2 TB SSD to store the backups.

Would welcome your thoughts on my plan.
 
Sorry, the leading slash is bogus. I edited my post.
For disks connected via a non-IT mode HW RAID controller you don't want to use ZFS: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
LVM-Thin would be my next best recommendation.
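A rough sketch of how that could look on a HW RAID virtual disk (the device name, VG/pool names and sizes are placeholders):
Bash:
# initialise the RAID virtual disk for LVM (replace /dev/sdX with the real device)
pvcreate /dev/sdX
vgcreate vmdata /dev/sdX
# create a thin pool using most of the volume group
lvcreate -l 95%FREE --thinpool data vmdata
# register it in PVE as storage for VM disks and container root disks
pvesm add lvmthin vmdata-thin --vgname vmdata --thinpool data --content images,rootdir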

As for the storage setup I find it hard to give a recommendation as I don't know what your plans are. I generally try to keep it simple.
One SSD-based storage/pool for the guests' main/OS drives, and one for the data drives and/or backups, and so forth.
I like to use a separate boot drive for PVE but it's not a must. I use cheap used Intel DC SATA SSDs for that.
 