Default Partitioning Question

Anotheruser

Member
Sep 21, 2022
When Proxmox is installed via the installer (UEFI with GRUB / Secure Boot), it creates three partitions by default:
sda1 1007K BIOS boot partition
sda2 512M EFI partition
sda3 Partition for the main data

Is there a particular reason that Proxmox only has a single joined EFI/boot partition instead of separate EFI and boot partitions?
(Ubuntu, for example, creates a 512 MB EFI partition that's mounted at /boot/efi and a dedicated 2 GB /boot partition.)
What is sda1 used for? Is it even used during EFI installs?
Where is sda2 mounted inside Proxmox? At /boot or /boot/efi?

Thanks for any input! :)
 
Is there a particular reason that Proxmox only has a single joined EFI/boot partition instead of separate EFI and boot partitions?
(Ubuntu, for example, creates a 512 MB EFI partition that's mounted at /boot/efi and a dedicated 2 GB /boot partition.)
It used to make sense to place the kernel and the initrd into a separate /boot partition near the beginning of the disk, since older systems had trouble accessing sectors far into large disks. Nowadays this does not have to be a separate partition.
The EFI partition, on the other hand, has to be separate in our case since it needs to be a FAT file system, while sda3 uses ZFS, XFS, LVM, ...

What is sda1 used for? Is it even used during EFI installs?
sda1, the BIOS boot partition, is where the GRUB bootloader stores its second-stage BIOS bootloader on GPT-partitioned disks.

Where is sda2 mounted inside Proxmox? At /boot or /boot/efi?
The EFI partition is mounted at /boot/efi.
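A quick way to verify this on a running system (a sketch; this assumes the default single-disk sda layout created by the installer):

Code:
# Show which device backs the EFI system partition
findmnt /boot/efi

# Show the partition layout, file systems and mount points of the boot disk
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda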
 
Does that mean that all the rest of /boot (everything except the /boot/efi subfolder) is stored on the main sda3 partition together with the rest of the system?
Correct.
 
sda1, the BIOS boot partition, is where the GRUB bootloader stores its second-stage BIOS bootloader on GPT-partitioned disks.
OK, does the sda1 partition include any sensitive / not publicly available data (like OS configs)?
Is it mounted somewhere / what file system is it?

Currently working on full disk encryption (installed via the official ISO; sda3 is replaced by a LUKS volume with ZFS on top). sda1 and sda2 are unencrypted and I am trying to prevent data leakage.

Are there any major differences in the GRUB configuration compared to a normal Debian install?
A default Debian install only creates two partitions, sda1 (/boot/efi) and sda2 (/), not counting swap. That means that, similar to Proxmox, the contents of /boot except for the /boot/efi subfolder are located inside the main partition, but Debian doesn't have the extra second-stage partition (sda1 on Proxmox).
Is there a particular reason that this second stage of GRUB has been put on a separate partition?
Is this second stage used before or after the initramfs is loaded?
I'm asking because: would it be possible to encrypt sda1?
I know sda2 / the EFI partition can't be encrypted since it contains the bootloader.
 
Currently working on full disk encryption (installed via the official ISO; sda3 is replaced by a LUKS volume with ZFS on top). sda1 and sda2 are unencrypted and I am trying to prevent data leakage.
I would be interested in your results if you get that working. I have heard about a lot of problems, like ZFS trying to import pools before LUKS is unlocked, and so on.
I have been using LUKS + MDADM RAID for swap + ZFS native encryption for the root filesystem and VM storage for years. But at least for the VM storage, ZFS on LUKS would be very interesting so that clustering makes sense without broken migration.
 
I would be interested in your results if you get that working. I have heard about a lot of problems, like ZFS trying to import pools before LUKS is unlocked, and so on.
I have been using LUKS + MDADM RAID for swap + ZFS native encryption for the root filesystem and VM storage for years. But at least for the VM storage, ZFS on LUKS would be very interesting so that clustering makes sense without broken migration.
I have already had that working perfectly fine for over a month when I base it off the official ISO. I am currently just trying to get a deeper understanding of Proxmox, the boot process & Linux in general, because my end goal is to do all of that from a Debian base to get even more control.
 
Also, I just discovered that what Proxmox calls the EFI partition (sda2) is more or less equal to the "boot partition" of Debian in terms of content, NOT to /boot/efi.

Proxmox: Screenshot_20240326_164527.png

Debian: Screenshot_20240326_164533.png

/boot/grub comparison: Screenshot_20240326_164625.png
 
Because my end goal is to do all of that from a Debian base to get even more control
It could be a bit harder to install it using an rpool, as Debian won't support ZFS out of the box.
Do you also use dropbear-initramfs or Clevis to remotely unlock your LUKS?
 
OK, does the sda1 partition include any sensitive / not publicly available data (like OS configs)?
No, it does not contain any sensitive data. It only contains the second-stage bootloader.

Is it mounted somewhere / what file system is it?
It's not mounted anywhere and it does not contain a file system.

Is there a particular reason that this second stage of GRUB has been put on a separate partition?
This partition is needed by GRUB when performing a legacy BIOS boot from a GPT-partitioned disk.

Is this second stage used before or after the initramfs is loaded?
Before. The second-stage bootloader runs before the kernel is even started.

I'm asking because: would it be possible to encrypt sda1?
This would break the legacy BIOS boot process, since the BIOS boot partition contains the second-stage bootloader that gets loaded by the first-stage bootloader living in the very first sector of the disk.
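If you want to inspect those two small partitions yourself, something like this works (a sketch; /dev/sda is assumed to be the boot disk, adjust to your device names):

Code:
# Partition types as created by the installer (BIOS boot, EFI System, data)
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME /dev/sda

# The BIOS boot partition holds GRUB's raw core image, no file system
file -s /dev/sda1

# The EFI system partition is a plain FAT file system
file -s /dev/sda2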
 
But at least for the VM storage, ZFS on LUKS would be very interesting so that clustering makes sense without broken migration.
ZFS on LUKS for non-root disks is easy.

Straight copied out of my wiki, so sorry for the formatting.


// Base Setup - Once Per Host

// Optional: Fill the device with random data (especially recommended for HDDs); for more details see the checklist for disk preparation

dd if=/dev/urandom of=<device> bs=1M status=progress


// Create a folder to store the decryption keys of the non-root disks (the directory keeps the execute bit; the key files themselves get chmod 400 below)

mkdir -p /root/crypt/keys/
chmod -R 700 /root/crypt/
chown -R root:root /root/crypt/


// Create a script that automatically unlocks the drives after boot
// Create a text file that stores the unlock information of the non-root disks

touch /root/crypt/datadisk-cryptindex
chmod 600 /root/crypt/datadisk-cryptindex
chown root:root /root/crypt/datadisk-cryptindex


// Create the actual script that unlocks all the encrypted data disks

nano /root/crypt/datadisk-unlock.sh


Code:
#!/bin/bash
################# Datadisk / Non-Root Unlock Script V4
# /root/crypt/datadisk-unlock.sh

# File path to the LUKS volume list
luksdevices_list="/root/crypt/datadisk-cryptindex"

# Check that the volume list file exists and stop if it doesn't
if [ ! -r "$luksdevices_list" ]; then
    echo "$luksdevices_list doesn't exist or is not readable."
    exit 1
fi

# Read the content of the cryptindex and store it in an associative array (UUID -> LUKS name)
declare -A volumes

while IFS= read -r line; do
    LUKSNAME=$(echo "$line" | awk '{print $1}')
    UUID=$(echo "$line" | awk '{print $2}')
    volumes[$UUID]=$LUKSNAME
done < "$luksdevices_list"

# Debug line / echo the content of the array
#echo "Volumes Array: ${volumes[@]}"

# Loop through all the disks and unlock each one with its keyfile
for UUID in "${!volumes[@]}"; do
    LUKSNAME=${volumes[$UUID]}
    echo "Importing device:"
    echo "LUKSNAME: $LUKSNAME"
    echo "UUID: $UUID"

    cryptsetup luksOpen --key-file "/root/crypt/keys/$LUKSNAME.key" --allow-discards "/dev/disk/by-uuid/$UUID" "$LUKSNAME"
done




// Ensure that the permissions of the script are set correctly

chmod 700 /root/crypt/datadisk-unlock.sh

chown root:root /root/crypt/datadisk-unlock.sh



// Let the script run automatically directly after boot

// Add the script as a systemd service

nano /etc/systemd/system/datadisk-unlock.service


Code:
[Unit]
Description=Automatically import LUKS devices
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/bash /root/crypt/datadisk-unlock.sh

[Install]
WantedBy=multi-user.target



// Enable the service

systemctl enable datadisk-unlock.service
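

// Optional: start the service once by hand and check its log output (a sketch)

systemctl start datadisk-unlock.service

journalctl -u datadisk-unlock.service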





// For every disk

// Create an encryption key file (each disk has its own)

dd if=/dev/urandom of=/root/crypt/keys/crypt-disk1.key bs=512 count=8



// Change permissions so only root can read the file

chmod 400 /root/crypt/keys/crypt-disk1.key

chown root:root /root/crypt/keys/crypt-disk1.key





// Format the drive and create a partition

// Clear all current partition tables of the disk & Create a new GPT partition table

sgdisk -og /dev/disk/by-id/diskyouarecurrentlyworkingon



// Create a new partition that will span across the entire disk

sgdisk -n1:0:0 -t1:8309 /dev/disk/by-id/diskyouarecurrentlyworkingon
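

// Optional: print the new partition table to double-check it before encrypting (a sketch; same placeholder device path as above)

sgdisk -p /dev/disk/by-id/diskyouarecurrentlyworkingon

// It should list a single partition of type 8309 (Linux LUKS)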





// Create the luks volume

cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 -i 6000 /dev/disk/by-id/diskyouarecurrentlyworkingon-part1 --key-file /root/crypt/keys/crypt-disk1.key



Parameters

--type luks2 Ensures that LUKS2 is used (should be the default with cryptsetup 2.1 or newer)

-c aes-xts-plain64 Sets the cipher - in this case aes-xts-plain64

-s 512 Sets the key size in bits

-h sha512 Sets the hash algorithm that is used (the command above uses sha512)

--iter-time 6000 or -i 6000 Sets how long unlocking takes on the system where the volume was created: the number of milliseconds to spend on PBKDF password processing (6000 ms = 6 seconds)



// You can find more parameters, settings and explanations regarding luks here https://wiki.archlinux.org/title/Dm-crypt/Device_encryption#Encryption_options_for_LUKS_mode



// Open / map the LUKS device

cryptsetup luksOpen /dev/disk/by-id/whateverdiskyouarecurrentlyworkingon-part1 --key-file /root/crypt/keys/crypt-5452452.key crypt-5452452



// Add a second decryption passphrase for recovery purposes

cryptsetup luksAddKey /dev/disk/by-id/whateverdiskyouarecurrentlyworkingon-part1 --key-file /root/crypt/keys/crypt-5452452.key
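

// Optional: verify that both the keyfile slot and the recovery passphrase ended up in the LUKS header (a sketch; same placeholder paths as above)

cryptsetup luksDump /dev/disk/by-id/whateverdiskyouarecurrentlyworkingon-part1

// Optional: confirm the keyfile still opens the volume without mapping it

cryptsetup open --test-passphrase --key-file /root/crypt/keys/crypt-5452452.key /dev/disk/by-id/whateverdiskyouarecurrentlyworkingon-part1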



// Add the drive id to the automatic unlock script index

nano /root/crypt/datadisk-cryptindex

// One disk per line, in the form "<luksname> <UUID>" (you can get the UUID with blkid)

// Example: crypt-datatest1 {{disk uuid}}
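

// For example, the entry for the disk created above can be appended like this (a sketch; the names are the placeholders used in this guide)

echo "crypt-disk1 $(blkid -s UUID -o value /dev/disk/by-id/diskyouarecurrentlyworkingon-part1)" >> /root/crypt/datadisk-cryptindex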



// Repeat for the other disks
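

// Optional: after a reboot (or after closing the mappings with cryptsetup luksClose) the unlock script can be tested by hand (a sketch)

bash /root/crypt/datadisk-unlock.sh

ls /dev/mapper/

// The unlocked crypt-* mappings should now show up under /dev/mapper/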





// Option 1 - Create ZFS Pool

Create ZFS Pool / Raid

// ALWAYS use the LUKS-mapped devices as pool devices, NEVER the disks directly, otherwise the data will be stored outside the LUKS container, unencrypted!!

// If a pool has multiple vdevs, data is distributed between them and they behave a bit like a conventional RAID 0 / stripe



// Different Parameters / arguments during pool creation

// General

-f force, might ignore potential warnings



// Basic pool properties (lower case -o); the -o must be repeated in front of each property if you set several

-o ashift=12 Sets the ashift to 12 (correct for drives with 4K sectors; ashift=13 would be for drives with 8K sectors)


// File system properties of the pool's root dataset (upper case -O); the -O must be repeated in front of each property if you set several

-o autotrim=on|off Enables or disables automatic TRIM for SSDs (note: this is a pool property, so it uses lower case -o)

-O dedup=on|off Enables or disables deduplication for the datasets

-O compression=lz4 Enables LZ4 compression for the datasets

-O encryption=on|off Enables or disables ZFS native encryption. (We are already using LUKS in this guide; native encryption is fully independent of that, so leave it disabled.)

-O logbias=throughput|latency Sets the write behaviour of the ZFS Intent Log (ZIL)

-O primarycache=all|none|metadata Sets what is cached in the primary cache (ARC)

-O secondarycache=all|none|metadata Sets what is cached in the secondary cache (L2ARC)

-O recordsize=<size> Sets the maximum block size for the datasets

-O redundant_metadata=most|all Specifies how redundantly the metadata should be stored

// The full list of properties is documented in the zpoolprops and zfsprops man pages



// Pool Name

<pool> Choose a name for the pool, following naming standard if applicable like dpool-01-nvme-a



// Devices

raidz1 <devices> the actual data vdev

cache <cache-device> Adds an L2ARC / SSD read cache to the pool

log <log-device> Adds a separate log device (SLOG)



// You can find more information under https://openzfs.org/wiki/System_Administration#Pool_creation and https://openzfs.github.io/openzfs-docs/Performance and Tuning/Workload Tuning.html



// Examples for most common configurations

Basic Raidz1 Pool

zpool create -o ashift=12 -O compression=lz4 <pool> raidz1 /dev/mapper/<device1> /dev/mapper/<device2> /dev/mapper/<device3>



Basic Raidz2 Pool

zpool create -o ashift=12 -O compression=lz4 <pool> raidz2 /dev/mapper/<device1> /dev/mapper/<device2> /dev/mapper/<device3> /dev/mapper/<device4>



Basic Raidz3 Pool

zpool create -o ashift=12 -O compression=lz4 <pool> raidz3 <devices>



Raidz3 Pool with deduplication, an L2ARC cache and a log device

zpool create -o ashift=12 -O compression=lz4 -O dedup=on dpool-01 raidz3 <device1> <device2> <device3> <device4> <device5> cache <cache-device> log <log-device>



Basic Equivalent to Raid 10 (two vdevs that each contain two disks as a mirror / raid1)

zpool create -o ashift=12 -O compression=lz4 <pool> mirror <device1> <device2> mirror <device3> <device4>



Basic Equivalent to Raid 60 (stripe of two raidz2 / raid6)

zpool create -o ashift=12 -O compression=lz4 <pool> raidz2 <device1> <device2> <device3> <device4> raidz2 <device5> <device6> <device7> <device8>
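

// Optional: after creating a pool, verify the vdev layout and health (a sketch)

zpool status <pool>

zpool list <pool>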



// Create sub-datasets to be used by Proxmox for VM disks: zfs create <pool>/<datasetname>

zfs create dpool-01/virtual-disks

zfs create -o dedup=on dpool-01/virtual-disks-dedup

zfs create dpool-01/data



// Optional: List all zfs datasets

zfs list

zfs get all or zfs get all <pool>/<datasetname>



// Add the vm-disk pool to proxmox via the webui

// Go to the Datacenter view, then Storage > Add > ZFS and add the sub-datasets we just created (NOT the pool root)
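

// Alternatively, this can be done from the CLI with pvesm; something along these lines should work (a sketch - the storage ID and content types are just examples, check man pvesm)

pvesm add zfspool encrypted-vm-disks -pool dpool-01/virtual-disks -content images,rootdir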



// Verify that automatic unlocking, ZFS import and VM autostart are working
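
// For example, after a reboot (a sketch):

systemctl status datadisk-unlock.service

ls /dev/mapper/

zpool status

qm list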
 
This would break the legacy BIOS boot process, since the BIOS boot partition contains the second-stage bootloader that gets loaded by the first-stage bootloader living in the very first sector of the disk.
Thanks a lot for the very detailed insight :)

Does that mean (since I am running UEFI only) I could technically delete it?
 
It could be a bit harder to install it using an rpool, as Debian won't support ZFS out of the box.
Do you also use dropbear-initramfs or Clevis to remotely unlock your LUKS?
It depends on the system. I currently have a setup script for local manual unlock (multiple disks can be unlocked with one password); I am then just unlocking via IPMI, with Dropbear as an alternative.
I am still working on Clevis / Tang. I managed to get it working inside my test VMs, but it hasn't been reliable, and multi-node hasn't worked at all (3 Tang servers with 2 needed for unlock).
 
If Clevis and Tang aren't working: I used SSH + an expect script to remotely auto-unlock the nodes at the initramfs stage.
So basically...
1.) try to connect to the dropbear-initramfs SSH server
2.) if that works because the initramfs is waiting for the passphrase, the script automatically types in the passphrase via "expect"
 
If Clevis and Tang aren't working: I used SSH + an expect script to remotely auto-unlock the nodes at the initramfs stage.
So basically...
1.) try to connect to the dropbear-initramfs SSH server
2.) if that works because the initramfs is waiting for the passphrase, the script automatically types in the passphrase via "expect"
I was thinking about the same, but I am currently sticking to mostly unlocking via IPMI since I barely reboot stuff anyway.
 
Does that mean (since I am running UEFI only) I could technically delete it?
Yes, you could technically remove the BIOS boot partition. EFI boot would keep working as long as an EFI bootloader is present.
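Before removing it, you can double-check that the running system really booted via UEFI (a minimal sketch):

Code:
# An EFI-booted kernel exposes this directory; on a legacy BIOS boot it is absent
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"

# List the EFI boot entries (only available when booted via UEFI)
efibootmgr -v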
 