Custom disk layout possible?

speck
May 8, 2025
I'm starting to play with a 3-node cluster (installed from the Proxmox ISO) to evaluate it as a potential VMware replacement (like half of the world, it seems...)

Since the purpose of the cluster so far is just to test the installation process and get a feel for the UI, I created 3 VMs on our existing VMware stack. I gave each VM a 16GB virtual drive, since I plan to host the VMs and ISO storage on a separate iSCSI/NFS NAS shared by the cluster.

Proxmox installed fine and without complaint on the 16GB drive. The first hint of trouble came when I tried to upload a 5GB Windows Server 2025 ISO: the upload hung part-way through, silently. I didn't wait longer than 5 minutes to see whether it would time out gracefully.

That's when I discovered the documentation mentions that uploads are first staged locally on the host in /var/tmp before being moved to their final destination, so even though there's 300GB free in the NFS export, the 3.1GB free in /var/tmp is the limiting factor. Sure enough, the root filesystem was 100% full. Luckily I already had an SSH session open to the host, so I could clear it out.
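Since the local staging step is the real bottleneck here, a quick pre-flight check on the host can save a silently hung upload. A minimal sketch (the 5GB figure is just this thread's ISO size, and the check assumes /var/tmp is the staging path as described above):

```shell
# Compare the ISO size against free space in the upload staging area
# (/var/tmp) before starting an upload through the web UI.
iso_size_mb=5120   # e.g. a ~5GB Windows Server ISO; adjust to your file
free_mb=$(df -Pm /var/tmp | awk 'NR==2 {print $4}')
if [ "$free_mb" -lt "$iso_size_mb" ]; then
    echo "Not enough staging space: ${free_mb}MB free, ${iso_size_mb}MB needed"
else
    echo "OK: ${free_mb}MB free in /var/tmp"
fi
```

In practice, copying the ISO straight into the target storage (e.g. with scp to the NFS export) sidesteps the local staging step entirely.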

Ok, no problem; the point of this whole exercise is to learn things like this. I can just bump up the size of the virtual disks for the hosts and start over... but it got me thinking: on other systems I've worked on, there have been both official and sometimes less-official policies about creating different mountpoints on their own devices/logical volumes so that a filesystem filling up doesn't grind the whole system to a halt. The usual suspects that come to mind are: /home, /tmp, /var/log, (/var/log/audit on RHEL).

What are your thoughts on the best way to make the filesystem as resilient to these sorts of problems? I would love to have a way to customize the disk layout more than the few options presented by the installer; should I try to go the "install Debian or Ubuntu then install the packages necessary for Proxmox" route?
 
Yes, if you want more detailed control over the mountpoints, then installing on top of Debian is the way to go.
 
Thanks, fabian, for the helpful reply.

Is installing on Ubuntu a possibility?

As I'm evaluating Proxmox, one of the things I am trying to keep in mind is our eventual goal to purchase a support agreement for our production use, so I want to keep the system in a state such that a support agreement will be possible... but one of the other requirements for our production system will be that the OS is supported as well; I'm not sure how to accomplish that with Debian.

I've been approaching this considering Proxmox as an appliance, similar to how I view ESXi: best installed as provided by the manufacturer. It seems I need to rethink that approach and instead build up the OS the way that fits our needs, then install Proxmox on top of it like I would Samba, Apache, nginx, or any other software package...


-Cheers,

speck
 
Thanks janus57.

For anyone stumbling across this later, here's how I have things right now:

I've installed Proxmox from the official installer ISO (8.4.1).

I added a second virtual disk to test out mirroring the boot device. In the future I plan to use ZFS snapshots to save state for disaster recovery, possibly with a scheduled cron job...
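The scheduled-snapshot idea can be sketched as a tiny script driven by cron. This is an illustration only: the dataset name matches the rpool layout shown further down, but the schedule, script path, and retention policy are placeholders, and no cleanup of old snapshots is included.

```shell
#!/bin/sh
# Sketch: take a timestamped recursive snapshot of the root dataset.
# "rpool/ROOT/pve-1" matches the layout shown below; retention is omitted.
snapname="auto-$(date +%Y%m%d-%H%M)"
zfs snapshot -r "rpool/ROOT/pve-1@${snapname}"

# Example crontab entry (daily at 02:00). Note that '%' is special inside
# crontab, which is one reason to keep the date logic in this script:
# 0 2 * * * /usr/local/sbin/zfs-auto-snap.sh
```

Tools like zfs-auto-snapshot or sanoid already implement this pattern with retention handling, and may be worth evaluating before rolling your own.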

I've increased the virtual disks in VMware to 64GB. There's nothing special about that size; I just wanted enough breathing room while keeping it small enough to ferret out any problems I might run into later if the system were to run out of disk space. I don't want to cause problems by running out of space, but I am interested in learning how well the system handles it. For production use, the servers we've ordered will have 800GB SSDs to host the OS.

Launching the installer, I selected ZFS (RAID-1) for the installation target. After confirming the settings, I unchecked the "Automatically reboot after successful installation" box and watched the system install; it only took maybe 45-60 seconds.

Once at the "Installation Successful" screen, I pressed Control-Alt-F3 to open a root shell and confirmed that the ZFS rpool spans both virtual disks. I created two new filesystems under rpool/ROOT/pve-1, with refquotas and mountpoints, so that /var/tmp and /var/log each have a 16GB quota.
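For reference, creating those datasets looks roughly like this. Treat it as a sketch rather than a transcript: the dataset names match the df output below, but the exact invocation from the installer shell may need adjusting for where the target pool is mounted at that point.

```shell
# Create dedicated datasets for /var/tmp and /var/log, each capped at 16G.
# refquota limits the dataset itself without counting its snapshots;
# explicit mountpoints override the inherited /var-tmp and /var-log paths.
zfs create -o refquota=16G -o mountpoint=/var/tmp rpool/ROOT/pve-1/var-tmp
zfs create -o refquota=16G -o mountpoint=/var/log rpool/ROOT/pve-1/var-log

# Verify the quotas and mountpoints:
zfs list -o name,refquota,mountpoint -r rpool/ROOT/pve-1
```

Using refquota rather than quota is deliberate here: a plain quota also counts snapshot space, which could let old snapshots starve a live /var/log.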

I also configured an 8GB tmpfs filesystem to be mounted at /tmp.
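The tmpfs mount for /tmp can be pinned in /etc/fstab. A sketch of the entry (the 8G size matches the figure above; the mode=1777 option preserves the usual sticky-bit permissions on /tmp):

```
tmpfs  /tmp  tmpfs  defaults,size=8G,mode=1777  0  0
```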

At this point, I pressed Control-Alt-F4 to get back into the installer then clicked "Reboot". The system rebooted and started up just fine.

I do notice that no swap space is configured, while my previous install with ext4 had created a 1GB swap partition; I'm not sure how important that will be. The other interesting thing to note is that the ZFS bootloader identified the still-mounted installer ISO as "MacOS", which is of no concern.
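If swap does turn out to matter, one option is a swap zvol on the existing pool. This is very much a judgment call, not a recommendation: swap on a ZFS zvol has a history of deadlocks under heavy memory pressure, and the 4G size and "rpool/swap" name below are arbitrary. The options follow the commonly circulated ZFS-on-Linux swap recipe.

```shell
# Sketch: create a 4G zvol tuned for swap use and enable it.
# Swap on a zvol can deadlock under severe memory pressure; evaluate carefully.
zfs create -V 4G -b "$(getconf PAGESIZE)" \
    -o compression=zle -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none rpool/swap
mkswap /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```

A persistent entry would also need to go in /etc/fstab; alternatively, a conventional swap partition outside the pool avoids the zvol caveats entirely.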

The filesystem now looks like this:
Code:
root@proxmox-3:~# uname -a
Linux proxmox-3 6.8.12-9-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-9 (2025-03-16T19:18Z) x86_64 GNU/Linux

root@proxmox-3:~# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                       16G     0   16G   0% /dev
tmpfs                     3.2G  1.5M  3.2G   1% /run
rpool/ROOT/pve-1           62G  1.6G   60G   3% /
rpool/ROOT/pve-1/var-log   16G  1.2M   16G   1% /var/log
rpool/ROOT/pve-1/var-tmp   16G  256K   16G   1% /var/tmp
tmpfs                      16G   46M   16G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
efivarfs                  256K   28K  224K  11% /sys/firmware/efi/efivars
tmpfs                     8.0G     0  8.0G   0% /tmp
rpool                      60G  128K   60G   1% /rpool
rpool/var-lib-vz           60G  128K   60G   1% /var/lib/vz
rpool/ROOT                 60G  128K   60G   1% /rpool/ROOT
rpool/data                 60G  128K   60G   1% /rpool/data
/dev/fuse                 128M   16K  128M   1% /etc/pve
tmpfs                     3.2G     0  3.2G   0% /run/user/0

If anyone is interested in more details, I would be happy to share.

I also would be very interested if anyone with more experience or battle scars has any reason why this approach won't work...

-Cheers,

speck
 