Brand New User

athijssen (New Member), Jul 12, 2025
I have always been in tech, but mostly on the manufacturing side. I recently started a home lab, and as part of that I have a PowerEdge R730. After some research I landed on Proxmox as the virtualization platform, so I can run a DC, a NAS, and a few application servers on that host.
I downloaded and installed it, but where I am stuck is disk management, and I am looking for some guidance on what to do first and how.

So the server has 24 Xeon cores, 128 GB RAM, and two RAID 10 arrays (4 x 256 GB SSDs for a total of 1 TB, and 4 x 2 TB HDDs for a total of 4 TB). I sat down and planned my VMs to divide the resources, but here is where I ran aground:

1. I installed Proxmox VE and it asked which disk to use. I tried the SSD array both with a 32 GB size and with the full disk.
2. Within PVE I see my two physical disks, sda and sdb, both with some partitions on them.
3. Both sda3 and sdb3 show a VG assigned to LVs.
4. When I try to create a VM, it first wants storage to park the ISOs on. I have only been able to pick a USB drive for this, but it is big enough and should work (I don't need the ISOs kept on PVE permanently, as I will store them on the NAS for reference, and I will have Proxmox Backup Server to protect the VMs).
5. When creating a VM I have to pick a disk, and I cannot seem to make disk space available, even though I have free unused disks and free space on another LV (see the inspection commands just below).
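A quick way to see exactly what the installer laid down, and how much volume-group space is still unallocated, is from the shell. These are the standard LVM tools; the names they report should match what the PVE GUI shows.

Code:
# Partition layout, filesystems, and which devices hold what
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# LVM view: physical volumes, volume groups, logical volumes
pvs
vgs    # the VFree column is space not yet allocated to any LV
lvs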

So storage management is where I am stuck.

I would like to know the recommended process. I am a true newbie at this, so forgive my ignorance:

1. When I install Proxmox VE on the SSD, do I select the size as 1 TB or 32 GB?
2. Once installed, how do I make the rest of the space available to install VMs on?
3. Once I manage that, do I create storage in advance for each of the VMs I plan to create, or how do I manage the distribution of storage between VMs?


I have searched for answers and have not found much info on starting truly from scratch. From my recent learning I have a feeling it has to do with partitioning, but the how is still elusive to me, since the HDD array could be empty (during the install, PVE made three partitions on both drives: BIOS boot, EFI, and LVM). I tried to wipe it, since the internet indicated I may have to in order to make a partition, but it errors because sdb3 has a "holder".

I did see that when I allow PVE to take the whole disk during install, it makes a 1 TB LVM partition, but if I set it smaller it makes a smaller LVM partition, and the remaining space still seems unclaimable.
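The "holder" error usually means the partition is still an active LVM physical volume inside a volume group, so the kernel will not release it for wiping. If, and only if, that VG is separate from the one the Proxmox root filesystem runs from and nothing on it matters, releasing the disk could look roughly like this (the VG name is a placeholder; check the pvs output first):

Code:
# See which VG claims sdb3 and what LVs live on it
pvs
lvs

# DESTRUCTIVE: only if the VG is expendable and not the root VG
vgchange -an <vgname>    # <vgname> is a placeholder; deactivate its LVs
vgremove <vgname>        # destroys the VG and all LVs in it
pvremove /dev/sdb3
wipefs -a /dev/sdb       # clears remaining partition/LVM signatures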

I am sure for some of you this is easy and basic, but I am learning as I go, together with many other new systems.

Any help is appreciated.
 
I installed Proxmox on my latest server build using two ZFS mirrored VDEVs of Samsung enterprise SATA SSDs (SM863a), which is roughly analogous to RAID 10, but not completely. I used four 1 TB drives, yielding a 2 TB pool. For me that is far more space than I need for the half dozen VMs I run, plus about 20 Docker containers.

Are you trying to install to a ZFS pool, or are you using some other file system? Also, are you using hardware RAID? Hardware RAID is incompatible with ZFS; definite no-no. But you can use hardware RAID with ext4 or other file systems.

If you are using hardware RAID and can switch the controller to JBOD mode, I would put the four 256 GB SSDs in a mirrored VDEV and use that for both the boot partition and for storing VMs.

I take a minimalist approach to my data on Proxmox, preferring to keep my important data on a separate NAS device. As a result, my VMs are all very small: mostly 32 GB, some 64 GB, and one 200 GB. I rely on NFS shares to my VMs for data I need to persist, I use the NFS driver in Docker for my persistent Docker volumes, and the same for Kubernetes (CSI NFS driver). So for me, a 1 TB pool for the boot disk and all my VMs is far more than I would need. I would probably use the four 2 TB drives in the ZFS equivalent of a RAID 5 (RAIDZ1) for data storage. But if you like to keep your data in your VMs, then you could put the boot drive on the smaller SSDs and the VMs on the larger SSDs. Either way, you will need to configure the second ZFS pool after installation.
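For illustration, creating that second pool from the shell could look roughly like this once the controller presents raw disks (the device names are examples; /dev/disk/by-id paths are more robust than sdX names):

Code:
# RAIDZ1 (RAID 5 analogue) across the four 2 TB HDDs
zpool create -f hddpool raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Make the pool usable for VM disks in Proxmox
pvesm add zfspool hdd-data --pool hddpool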

1. When I install Proxmox VE on the SSD, do I select the size as 1 TB or 32 GB?

I need to know if you are using a hardware RAID controller before answering this.

2. Once installed, how do I make the rest of the space available to install VMs on?

Under "datacenter" go to "storage" and click add. If you are using a hardware raid controller, with already formatted disks, I would probably try to add it as a "directory", otherwise I would add it using "ZFS"


3. Once I manage that, do I create storage in advance for each of the VMs I plan to create, or how do I manage the distribution of storage between VMs?

No, you do not "usually" create storage in advance; it is basically done during the VM creation process. Although you can sort of create storage in advance, if you are attaching something like a QCOW2 image to a VM.
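For that QCOW2 case, the attach step can be sketched like this (the VM ID, file name, and storage names are examples):

Code:
# Import an existing image into a storage, then attach it to the VM
qm importdisk 100 debian12.qcow2 local-lvm
qm set 100 --scsi0 local-lvm:vm-100-disk-0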
 
I need to know if you are using a hardware raid controller before answering this:

I have hardware RAID, so I assume I need "Directory". My plan was to spread the VM images over the SSDs and split the HDDs in half: 2 TB for TrueNAS storage and 2 TB for Proxmox Backup Server, but I would like to run even the OS disks of those off the SSDs.

So should I reinstall Proxmox with only a 32 GB root disk, so I can reclaim the rest through the "Datacenter" storage approach by creating a "directory" out of the free space on both the SSDs and HDDs, and divide that up during VM creation?
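In other words, something roughly like this after the reinstall (assuming the installer's default pve volume group; the other names are my guesses):

Code:
# Carve the unallocated space in the pve VG into a new LV
lvcreate -n vmdata -l 100%FREE pve

# Format and mount it (add an /etc/fstab entry to make it persistent)
mkfs.ext4 /dev/pve/vmdata
mkdir -p /mnt/vmdata
mount /dev/pve/vmdata /mnt/vmdata

# Register the mount point with Proxmox as directory storage
pvesm add dir vmdata --path /mnt/vmdata --content images,iso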
Thanks
 