Proxmox Storage

niekniek89

New Member
Mar 16, 2024
Hello,

I am new to this forum as of today, and I already have a question.
I switched from ESXi to Proxmox, and my first experiences are already great (I regret not switching sooner).

Now I see that 2 storages are created by default:

Local
Local-lvm

Now I also see that these can be combined into 1 using 3 commands.
Are there any risks involved, or can you do this without any problems, without getting into trouble with upgrades later?

It's about these 3 commands:

# remove the LVM-thin pool that backs the "local-lvm" storage
lvremove /dev/pve/data
# grow the root logical volume into the freed space
lvresize -l +100%FREE /dev/pve/root
# grow the ext4 filesystem to fill the enlarged root LV
resize2fs /dev/mapper/pve-root

Thanks.
 
You can do this without issues; upgrades do not depend on the storage structure.
You'd naturally want to delete the "local-lvm" storage pool from the PVE GUI or CLI first, before you take away the underlying storage structure.
And, of course, you'd want to confirm that you don't have any disk images (volumes) in that storage pool.
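For illustration, the CLI part could look like this (a minimal sketch assuming the default storage name "local-lvm"; check the output before removing anything):

# confirm the pool holds no volumes (the output should be empty)
pvesm list local-lvm
# remove the storage definition from PVE before touching the LVM layer
pvesm remove local-lvm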



 
PVE is block-device centric and not filesystem centric like VMware products. So yes, you could destroy that LVM-thin pool and allocate the space to your root LV. But I would only do that in case you have dedicated disks for your VM/LXC storage and you don't plan to store the virtual disks on "local".
There is a reason why the storage is split into a filesystem-based storage and a block-device-based storage. Storing virtual disks as thin volumes on that LVM-thin pool performs better: you don't need an additional filesystem adding overhead, there is native support for things like thin provisioning and snapshots, and you avoid the extra overhead of qcow2 image files (which are copy-on-write).
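As an aside, a thin pool's usage can be inspected with standard LVM tools (a sketch, assuming the default volume group "pve" and pool "data"):

# show data/metadata usage of the thin pool
lvs pve/data
# per-volume view: which thin volumes live in which pool, and how full they are
lvs -o lv_name,pool_lv,lv_size,data_percent pve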
 
Thanks for your answers.
I have executed the 3 commands, and now I have 1 full volume.
I want to run the following:

HassOS VM
Docker LXC

Better not to have done it, then?
I have 1x NVMe SSD, 256GB.
 
So, is it an issue to merge both locals into 1 local storage with my setup?

I have 1 virtual machine and 1 LXC container.
My local is now 70GB, which I cannot use.
 
"local-lvm" was the LVM-Thin pool that by default was meant to store your virtual disks of your LXCs/VMs. If you ran those commands you destroy that thin pool with all virtual disks on it. If you want to store new VMs/LXCs on that "local" storage you would need to set the content type of that storage to LXC/VM. This isn`t enabled by default because it is recommended to store those virtual disks as block devices on your destroyed "local-lvm" instead as images files on your root filesystem ("local").
 
Thanks for your answer.
So it is wiser to stay away from local and local-lvm?

I thought, if it doesn't hurt, I'd rather use all the storage for my VMs and containers.
Now I have 72GB which I can't use for anything.

What could happen in the worst case if I do it?
Can it hurt?
I now have a VM running on it (after the adjustment).
 
There is your root filesystem, including the "local" storage. There you store the whole host OS, ISOs, templates, backups, temporary files and whatever files you need to store.
Then there is the "local-lvm", just for the virtual disks of your VMs/LXCs.

Rule of thumb for me:
A.) If you have dedicated system disks and don't need/want to store VMs/LXCs on them: destroy the thin pool and extend your root LV and root filesystem, so you have more space to store files and don't waste space.
B.) If you don't have dedicated disks to store your VMs/LXCs: when installing PVE, use the "advanced LVM configuration options" to define how big "local" and "local-lvm" should be (see the sketch after this list). Backups shouldn't be stored on the same disk your VMs/LXCs reside on, so I would recommend the root filesystem to be 32GB (for the OS itself, so you have some free space for logs/temp files like ISO uploads) + X GB for "local" to store ISOs. How big "X GB" should be depends on how many ISOs/templates you want to store. All the remaining space I would assign to the "local-lvm" thin pool, so your VMs/LXCs have as much space available as possible.
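For illustration, those advanced LVM fields could be filled in like this for a 256GB disk (the field names match the PVE installer; the exact numbers are an example, not a recommendation from this thread):

# PVE installer -> Advanced LVM configuration options (example values, in GB)
hdsize:   256  # use the whole disk
swapsize: 8    # size of the swap LV
maxroot:  40   # root LV: 32GB for the OS + ~8GB of "local" headroom for ISOs
minfree:  16   # space left free in the VG (e.g. for LVM snapshots)
# leave maxvz unset: all remaining space becomes the "local-lvm" thin pool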
 
Thanks for your answer, @Dunuin.
I reinstalled it with the following:

maxroot: 20GB
minfree: 16GB
swapsize: 8GB

That's better than executing the 3 commands, I guess?

Local is now:

[screenshot: "local" storage summary]

and local-lvm is now:

[screenshot: "local-lvm" storage summary]
Is this okay? I think it's enough storage for the OS?
 
For me, 20GB is a little bit on the low side. Keep in mind that stuff like logs will grow over the years. And uploading an 8GB ISO means you need 8GB of free space plus an additional 8GB of temporary space while uploading, so 16GB. So you are pretty limited with those 21GB.
 
Thanks for your answer, @Dunuin.

But is this a better and safer way than the 3 commands?
Then I will adjust my installation one more time to 32 GB,

and then I'll know that everything is fine.
 
It depends on what you want. Again:
You can only store VMs/LXCs on "local-lvm", not anything else, like files.
So you have to decide how much space you want for VMs/LXCs and how much for everything else.
 
I want to have as much space as possible for VMs/LXCs on "local-lvm".
I can also get ISOs from my NAS using NFS.
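For reference, an NFS ISO store can be added from the CLI like this (the storage name "nas-iso", server address and export path are placeholders):

# mount an NFS export as an ISO storage
pvesm add nfs nas-iso --server 192.168.1.10 --export /export/isos --content iso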
 
Do remember that ISOs can easily be stored elsewhere - and are usually easily accessible on the web anyway. But as Dunuin correctly pointed out: to upload an ISO you need approximately 200% of its size in free space.
 
But do I have enough space now (apart from uploading/storing ISOs) for the OS and any upgrades?

20GB local
203GB local-lvm
 
Why not?
But remember you'll also want a backup solution/storage location. Once you have that figured out, store your ISOs there.
 
I've got some old nodes with a 20GB root filesystem and they have been working fine for years with some tuning, like stricter log retention settings. But for new nodes I always want more space (like 32GB) to be future-proof and to reduce the chance of running out of space (for example, in case a warning/error spams your logs and grows the log files by multiple GBs per day).
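As a sketch of what such log retention tuning could look like (the specific limits are assumptions, not values from this thread):

# /etc/systemd/journald.conf -- cap the persistent journal
[Journal]
SystemMaxUse=500M
MaxRetentionSec=1month
# apply the new limits with: systemctl restart systemd-journald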
 
I usually use a 512GB NVMe for the main PVE drive - of which 100GB goes to local, and the rest to local-lvm. I never store any ISOs there (I have a different SSD drive for that - including backups etc.).
All "main critical" VMs and LXCs get put on local-lvm, while all the others go to the other SSD.
 
Unfortunately I only have 1 NVMe SSD.
This is enough for me for now.

Thanks for your help, everyone!
I have now reinstalled with a 32GB root.
I am now restoring machines from my NFS share.
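For reference, restoring on the CLI might look like this (the archive paths, VMIDs and storage names below are placeholders):

# restore a VM backup from the NFS storage onto the thin pool
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-100-2024_03_18-00_00_00.vma.zst 100 --storage local-lvm
# restore an LXC backup
pct restore 101 /mnt/pve/nfs-backup/dump/vzdump-lxc-101-2024_03_18-00_00_00.tar.zst --storage local-lvm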
 
