Proxmox Storage issues

strikestarcraft

New Member
Dec 7, 2022
Hello Forum! Happy to be here and a new proxmox user.

To get right into it, I messed up.

Current config is a 4-disk RAID 5 array managed by the server's RAID card, for a total of 6TB of usable storage (4 disks x 2TB = 8TB, minus 2TB lost to RAID 5 parity).
Essentially, I did not configure storage correctly on the server or dedicate a separate disk/partition to my Proxmox install, which I was told was fine. All of my servers are running smoothly, with one exception: the boot disk size is set to 100GB on all of them. I've also run out of local backup storage, so backups won't complete.

Attached a screenshot of my current disk config [Datacenter Disks.png]. I do have a spare 256GB SSD which I can back up my VMs/config to, but if there's a way to correct this without blowing my entire setup away, that would be great. I also happened to notice another storage (local-lvm) which appears to have the rest of the ~5.9TB; it would be great if I could allocate that space over. [local-lvm.png]
 

Attachments

  • Datacenter Disks.png
  • local-lvm.png
You should inspect what takes up so much space in /; I suspect the backups.
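For example, something like this should show which top-level directories on the root filesystem are actually using the space (the -x keeps du from descending into /proc, /sys or other mounted filesystems):

du -xh -d1 / | sort -h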

Your links will remove thin LVM entirely. This is normally not what you want.
 
Thanks for the response. I've checked the backups and they're not taking any more than 20-30GB of space (only 3 active VMs in total anyway).

Any other pointers would be appreciated.
 
Please show your /etc/pce/storage.cfg
This file contains the configuration of all your storages.
 
Looks like I may have found a potential solution
Re: https://gist.github.com/laineantti/4fc29acbbd25593619a76b413e42b78f

If someone could verify that this would work that would be great.
That will delete the "local-lvm" storage and extend the "local" storage, so you would end up with a 6TB "local" instead of a 100GB "local" + 5.9TB "local-lvm". But "local-lvm" is better for storing VMs/LXCs, as "local" only supports storing virtual disks as qcow2 files, which adds overhead because qcow2 is copy-on-write (so less performance).
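Roughly, that procedure boils down to something like this (sketch only, assuming the default "pve" volume group from your lsblk output; it destroys everything stored on local-lvm, so back everything up first):

pvesm remove local-lvm               # remove the storage definition from PVE
lvremove /dev/pve/data               # destroy the thin pool and all thin volumes in it
lvresize -l +100%FREE /dev/pve/root  # give the freed space to the root LV
resize2fs /dev/mapper/pve-root       # grow the ext4 filesystem behind "local"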

And as far as I know you can't shrink the size of an LVM-Thin pool (that's what your "local-lvm" storage is). You would need to back up all guests, then destroy that LVM-Thin pool, extend the size of the LV used as the root filesystem (the "local" storage), create a new LVM-Thin pool with the remaining space, and create a new "local-lvm" storage.
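That would continue from the pvesm remove / lvremove steps above, but instead of giving all of the space to root, something like this (sizes are only examples, back everything up first with e.g. vzdump --all):

lvresize -L +400G /dev/pve/root                      # grow the root LV by e.g. 400G
resize2fs /dev/mapper/pve-root                       # grow the "local" filesystem
lvcreate -l 90%FREE --type thin-pool -n data pve     # recreate a thin pool from the rest
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content rootdir,images

and then restore the guests from backup.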

By the way, it's not a good idea to store your backups on "local", because when you lose your RAID array, you lose both the VMs and all your backups at the same time. It would be better to have a dedicated backup disk (or, even better, a Proxmox Backup Server).
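With the spare 256GB SSD you mentioned, that could be as simple as something like this (assuming it shows up as /dev/sdb, check with lsblk first; the storage name is just a placeholder):

mkfs.ext4 /dev/sdb                    # wipe and format the spare SSD
mkdir -p /mnt/backup-ssd
mount /dev/sdb /mnt/backup-ssd        # add an /etc/fstab entry so it survives reboots
pvesm add dir backup-ssd --path /mnt/backup-ssd --content backup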
 
I've gained a much better understanding of where my data is:
Proxmox_Backup_Main - has my backups and shares the exact same storage as local (pve)
There was also a "Proxmox Backup" storage which I had deleted, which I suspect is still taking up space on the disk
local-lvm is where the currently running VMs live

1670428007103.png

@bbgeek17
root@pve:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 5.5T 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 5.5T 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 15.8G 0 lvm
│ └─pve-data-tpool 253:4 0 5.3T 0 lvm
│ ├─pve-data 253:5 0 5.3T 1 lvm
│ ├─pve-vm--101--disk--0 253:6 0 100G 0 lvm
│ ├─pve-vm--300--disk--0 253:7 0 100G 0 lvm
│ └─pve-vm--200--disk--0 253:9 0 100G 0 lvm
└─pve-data_tdata 253:3 0 5.3T 0 lvm
└─pve-data-tpool 253:4 0 5.3T 0 lvm
├─pve-data 253:5 0 5.3T 1 lvm
├─pve-vm--101--disk--0 253:6 0 100G 0 lvm
├─pve-vm--300--disk--0 253:7 0 100G 0 lvm
└─pve-vm--200--disk--0 253:9 0 100G 0 lvm
sr0 11:0 1 1024M 0 rom


-------------------------------------------------------------------------------------


root@pve:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
base-100-disk-0 pve Vri---tz-k 100.00g data
base-102-disk-0 pve Vri---tz-k 100.00g data
data pve twi-aotz-- <5.31t 0.97 0.33
root pve -wi-ao---- 96.00g
snap_vm-101-disk-0_Wazuh_installed_loopback pve Vri---tz-k 100.00g data vm-101-disk-0
snap_vm-300-disk-0_hybrid_running pve Vri---tz-k 100.00g data vm-300-disk-0
swap pve -wi-ao---- 8.00g
vm-101-disk-0 pve Vwi-aotz-- 100.00g data 12.37
vm-200-disk-0 pve Vwi-aotz-- 100.00g data 4.20
vm-300-disk-0 pve Vwi-aotz-- 100.00g data 11.46


-------------------------------------------------------------------------------------


root@pve:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=65944968k,nr_inodes=16486242,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=13195864k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=35777)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=13195860k,nr_inodes=3298965,mode=700,inode64)


-------------------------------------------------------------------------------------


root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 1.7M 13G 1% /run
/dev/mapper/pve-root 94G 89G 525M 100% /
tmpfs 63G 49M 63G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/fuse 128M 20K 128M 1% /etc/pve
tmpfs 13G 0 13G 0% /run/user/0


-------------------------------------------------------------------------------------


root@pve:~# du -h -d1 /
14G /Proxmox_Backup_Main
du: cannot access '/proc/27804/task/27804/fd/3': No such file or directory
du: cannot access '/proc/27804/task/27804/fdinfo/3': No such file or directory
du: cannot access '/proc/27804/fd/4': No such file or directory
du: cannot access '/proc/27804/fdinfo/4': No such file or directory
0 /proc
8.7G /var
4.0K /home
92M /boot
12K /backup
2.3G /usr
0 /sys
20K /lost+found
4.0K /media
46M /dev
4.0K /mnt
40K /tmp
1.7M /run
4.9M /etc
4.0K /opt
4.0K /srv
40K /root
65G /Proxmox_Backup
89G /


Looks like /Proxmox_Backup is the culprit


@milew I believe you meant pve (typo?), see below

root@pve:/etc# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso,backup

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images

dir: Promox_Backup_Main
path /Proxmox_Backup_Main
content images,backup
nodes pve
prune-backups keep-all=1
shared 0
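Since /Proxmox_Backup no longer shows up in storage.cfg above, my plan is to double-check that nothing is mounted there and that none of the old backups are still needed, then remove the leftover directory to reclaim the ~65G (please correct me if that's wrong):

mountpoint /Proxmox_Backup        # make sure it is just a directory, not a mounted disk
ls -lh /Proxmox_Backup/dump       # see which old backups are still sitting in there
rm -rf /Proxmox_Backup            # reclaim the space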
 

@bbgeek17 I was actually thinking about doing that earlier and would have loved to execute on it. Unfortunately, when I go to my node > Disks > LVM/LVM-Thin > Create Volume Group, the option is greyed out (I suppose due to having allocated all of the space in the first place).

Didn't plan this out when I first stood it up...
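I assume that's because the volume group has no free space left for a new VG/pool; checking with something like this should confirm it (look at the VFree column):

vgs pve
pvs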

1670428978799.png
 
@bbgeek17 I'm willing to give it a shot. My only reservation is that I'm not sure whether it would give the new storage a "Backup" tab in the Proxmox UI, or whether it would have the same limitation as the current local-lvm of not being able to have backups sent to it.

1670431783471.png 1670431798335.png

vs

1670431813098.png
 
local-lvm to have a "Backup" tab
It would be a completely separate, logically isolated storage that has nothing to do with local-lvm.
It all will still be backed by your RAID, and if you lose the RAID you will lose everything.
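One way to do that is to carve a thin volume out of the existing pool, format it, and add it as a directory storage with "backup" content, which is what gives you the Backup panel. A rough sketch (names and sizes are just placeholders):

lvcreate -V 500G -T pve/data -n backup      # thin volume from the existing pool
mkfs.ext4 /dev/pve/backup
mkdir -p /mnt/thin-backup
mount /dev/pve/backup /mnt/thin-backup      # plus an /etc/fstab entry
pvesm add dir thin-backup --path /mnt/thin-backup --content backup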


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
It all will still be backed by your RAID, and if you lose the RAID you will lose everything.
Just to say it again: everything means everything...your backups won't help you at all. And it is not that uncommon to lose a RAID array; I personally once lost my whole array while doing a rebuild after replacing a failed disk. Backups should ideally be offsite, failing that on another machine onsite, failing that on a dedicated disk in the same host, and worst case on the same disk or RAID array as the VMs.
So while it is possible to store your backups on that RAID array, I would highly recommend not doing it.
 
