Hey everyone,
I'd appreciate some advice on properly partitioning and configuring my Proxmox VM/CT & storage server. I want to make sure I'm following best practices before I start deploying VMs and containers.
Here's what I'm working with:
Hardware Setup:
- AMD 3950X with 64GB RAM
- ASUS X570-PRO motherboard
- 3U rackmount chassis
- APC UPS system
OS Tier: 2x PNY CS900 500GB SATA SSDs (mirrored)
Connected directly to motherboard SATA ports
Dedicated solely for Proxmox installation
Hot Tier: 2x Samsung 970 EVO Plus 1TB NVMe (mirrored)
Connected directly to motherboard
Intended for anything requiring high speed and/or caching
Warm Tier: 4x Seagate ST400FM0053 400GB Enterprise SSDs (RAIDZ1)
12G SAS Enterprise drives via LSI3008 controller
Planned for VM and container storage
Cold Tier: 4x Seagate ST4000NM0024 4TB HDDs (RAIDZ1)
Standard SATA III drives via LSI3008 controller
Intended for backups and archival storage
Current Setup Notes:
I've carved a 16GB swap partition out of each of the four enterprise SSDs (4x 16GB in total) as swap space for Proxmox, all activated at equal priority so the kernel stripes swap across them.
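For reference, here's roughly how I set the swap up (reconstructed from my shell history, so treat it as a sketch; the device names match the lsblk output further down):
Bash:
# One 16GB swap partition per enterprise SSD; equal pri= values make
# the kernel stripe swap pages across all four devices, RAID0-style
for dev in /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1; do
    mkswap "$dev"
    swapon -p 10 "$dev"
done
swapon --show   # verify all four are active at the same priority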
Before I start building out my VMs and containers, I'd love to hear your thoughts on optimizing this setup. Any suggestions for best-practice configurations with this hardware would be greatly appreciated! I would prefer to run ZFS on top of all disks, but if any other solutions are better, please let me know.
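For context, this is approximately how I created the three extra pools and registered them with Proxmox (from memory, so the exact flags may have differed; ashift=12 is an assumption on my part to match the 4K-sector drives):
Bash:
# hot: NVMe mirror for fast VM disks
zpool create -o ashift=12 hot mirror \
    /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNX0R150579T \
    /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNX0R133272W_1
# warm: RAIDZ1 over the second partition of each enterprise SAS SSD
# (the first partition on each is the 16GB swap described above)
zpool create -o ashift=12 warm raidz1 \
    /dev/disk/by-id/scsi-35000c5003013cd77-part2 \
    /dev/disk/by-id/scsi-35000c5003013ced3-part2 \
    /dev/disk/by-id/scsi-35000c5003013ce0f-part2 \
    /dev/disk/by-id/scsi-35000c5003013cd2b-part2
# ice: RAIDZ1 over the four 4TB HDDs for backups/archival
zpool create -o ashift=12 ice raidz1 \
    /dev/disk/by-id/ata-ST4000NM0024-1HT178_Z4F00JXK \
    /dev/disk/by-id/ata-ST4000NM0024-1HT178_Z4F00EMS \
    /dev/disk/by-id/ata-ST4000NM0024-1HT178_Z4F00JX7 \
    /dev/disk/by-id/ata-ST4000NM0024-1HT178_Z4F00EDS
# register each pool as Proxmox storage (matches the storage.cfg below)
pvesm add zfspool hot  --pool hot  --content images,rootdir --nodes monster
pvesm add zfspool warm --pool warm --content images,rootdir --nodes monster
pvesm add zfspool ice  --pool ice  --content images,rootdir --nodes monster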
Thanks in advance!
Bash:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: ice
        pool ice
        content rootdir,images
        mountpoint /ice
        nodes monster

zfspool: hot
        pool hot
        content images,rootdir
        mountpoint /hot
        nodes monster

zfspool: warm
        pool warm
        content images,rootdir
        mountpoint /warm
        nodes monster
# pvesm status
Name             Type     Status           Total            Used       Available        %
hot           zfspool     active       942931968        68176732       874755236    7.23%
ice           zfspool     active     11214389248     11138945739        75443508   99.33%
local             dir     active       466892416         9049728       457842688    1.94%
local-zfs     zfspool     active       457842880              96       457842784    0.00%
warm          zfspool     active      1051170816             680      1051170135    0.00%
# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
UUID=576ee968-1371-46f0-8b46-eec809fd90a3 none swap sw,pri=10 0 0
UUID=ab4a944a-22c6-47f6-8281-bdf55e6e3092 none swap sw,pri=10 0 0
UUID=179ab63c-dd17-493c-8da6-e2ff75b9c937 none swap sw,pri=10 0 0
UUID=57a0c83a-0df0-4cce-9caf-5da96cd05473 none swap sw,pri=10 0 0
# lsblk --ascii -M -o +HOTPLUG,ROTA,PHY-SEC,FSTYPE,MODEL,TRAN,WWN
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS HOTPLUG ROTA PHY-SEC FSTYPE     MODEL                        TRAN WWN
sda             8:0    0 465.8G  0 disk                   0    0     512            PNY CS900 500GB SSD          sata 0x5f8db4c2514039ea
|-sda1          8:1    0  1007K  0 part                   0    0     512                                              0x5f8db4c2514039ea
|-sda2          8:2    0     1G  0 part                   0    0     512 vfat                                         0x5f8db4c2514039ea
`-sda3          8:3    0 464.8G  0 part                   0    0     512 zfs_member                                   0x5f8db4c2514039ea
sdb             8:16   0 465.8G  0 disk                   0    0     512            PNY CS900 500GB SSD          sata 0x5f8db4c2514039eb
|-sdb1          8:17   0  1007K  0 part                   0    0     512                                              0x5f8db4c2514039eb
|-sdb2          8:18   0     1G  0 part                   0    0     512 vfat                                         0x5f8db4c2514039eb
`-sdb3          8:19   0 464.8G  0 part                   0    0     512 zfs_member                                   0x5f8db4c2514039eb
sdc             8:32   0   3.6T  0 disk                   0    1    4096            ST4000NM0024-1HT178          sas  0x5000c500668a951a
|-sdc1          8:33   0   3.6T  0 part                   0    1    4096 zfs_member                                   0x5000c500668a951a
`-sdc9          8:41   0     8M  0 part                   0    1    4096                                              0x5000c500668a951a
sdd             8:48   0   3.6T  0 disk                   0    1    4096            ST4000NM0024-1HT178          sas  0x5000c500668a9428
|-sdd1          8:49   0   3.6T  0 part                   0    1    4096 zfs_member                                   0x5000c500668a9428
`-sdd9          8:57   0     8M  0 part                   0    1    4096                                              0x5000c500668a9428
sde             8:64   0   3.6T  0 disk                   0    1    4096            ST4000NM0024-1HT178          sas  0x5000c500668a9ad1
|-sde1          8:65   0   3.6T  0 part                   0    1    4096 zfs_member                                   0x5000c500668a9ad1
`-sde9          8:73   0     8M  0 part                   0    1    4096                                              0x5000c500668a9ad1
sdf             8:80   0   3.6T  0 disk                   0    1    4096            ST4000NM0024-1HT178          sas  0x5000c500668a9bce
|-sdf1          8:81   0   3.6T  0 part                   0    1    4096 zfs_member                                   0x5000c500668a9bce
`-sdf9          8:89   0     8M  0 part                   0    1    4096                                              0x5000c500668a9bce
sdg             8:96   0 372.6G  0 disk                   0    0    4096            ST400FM0053                  sas  0x5000c5003013cd77
|-sdg1          8:97   0  14.9G  0 part [SWAP]            0    0    4096 swap                                         0x5000c5003013cd77
`-sdg2          8:98   0 357.7G  0 part                   0    0    4096 zfs_member                                   0x5000c5003013cd77
sdh             8:112  0 372.6G  0 disk                   0    0    4096            ST400FM0053                  sas  0x5000c5003013ced3
|-sdh1          8:113  0  14.9G  0 part [SWAP]            0    0    4096 swap                                         0x5000c5003013ced3
`-sdh2          8:114  0 357.7G  0 part                   0    0    4096 zfs_member                                   0x5000c5003013ced3
sdi             8:128  0 372.6G  0 disk                   0    0    4096            ST400FM0053                  sas  0x5000c5003013ce0f
|-sdi1          8:129  0  14.9G  0 part [SWAP]            0    0    4096 swap                                         0x5000c5003013ce0f
`-sdi2          8:130  0 357.7G  0 part                   0    0    4096 zfs_member                                   0x5000c5003013ce0f
sdj             8:144  0 372.6G  0 disk                   0    0    4096            ST400FM0053                  sas  0x5000c5003013cd2b
|-sdj1          8:145  0  14.9G  0 part [SWAP]            0    0    4096 swap                                         0x5000c5003013cd2b
`-sdj2          8:146  0 357.7G  0 part                   0    0    4096 zfs_member                                   0x5000c5003013cd2b
zd0           230:0    0    32G  0 disk                   0    0   16384
|-zd0p1       230:1    0    31G  0 part                   0    0   16384 ext4
|-zd0p2       230:2    0     1K  0 part                   0    0   16384
`-zd0p5       230:5    0   975M  0 part                   0    0   16384 swap
zd16          230:16   0    32G  0 disk                   0    0   16384
|-zd16p1      230:17   0   512K  0 part                   0    0   16384
|-zd16p2      230:18   0     2G  0 part                   0    0   16384
`-zd16p3      230:19   0    30G  0 part                   0    0   16384 zfs_member
zd32          230:32   0   9.4T  0 disk                   0    0   16384
`-zd32p1      230:33   0   9.4T  0 part                   0    0   16384 xfs
nvme0n1       259:0    0 931.5G  0 disk                   0    0     512            Samsung SSD 970 EVO Plus 1TB nvme eui.0025385111b08261
|-nvme0n1p1   259:4    0 931.5G  0 part                   0    0     512 zfs_member                              nvme eui.0025385111b08261
`-nvme0n1p9   259:5    0     8M  0 part                   0    0     512                                         nvme eui.0025385111b08261
nvme1n1       259:1    0 931.5G  0 disk                   0    0     512            Samsung SSD 970 EVO Plus 1TB nvme eui.0025385111b1ee65
|-nvme1n1p1   259:2    0 931.5G  0 part                   0    0     512 zfs_member                              nvme eui.0025385111b1ee65
`-nvme1n1p9   259:3    0     8M  0 part                   0    0     512                                         nvme eui.0025385111b1ee65
# zpool status
  pool: hot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:10 with 0 errors on Sun Sep 14 00:24:11 2025
config:

        NAME                                                     STATE     READ WRITE CKSUM
        hot                                                      ONLINE       0     0     0
          mirror-0                                               ONLINE       0     0     0
            nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNX0R150579T    ONLINE       0     0     0
            nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNX0R133272W_1  ONLINE       0     0     0

errors: No known data errors

  pool: ice
 state: ONLINE
  scan: scrub repaired 0B in 01:55:49 with 0 errors on Sun Sep 14 02:19:51 2025
config:

        NAME                                  STATE     READ WRITE CKSUM
        ice                                   ONLINE       0     0     0
          raidz1-0                            ONLINE       0     0     0
            ata-ST4000NM0024-1HT178_Z4F00JXK  ONLINE       0     0     0
            ata-ST4000NM0024-1HT178_Z4F00EMS  ONLINE       0     0     0
            ata-ST4000NM0024-1HT178_Z4F00JX7  ONLINE       0     0     0
            ata-ST4000NM0024-1HT178_Z4F00EDS  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:49 with 0 errors on Sun Sep 14 00:24:54 2025
config:

        NAME                                                    STATE     READ WRITE CKSUM
        rpool                                                   ONLINE       0     0     0
          mirror-0                                              ONLINE       0     0     0
            ata-PNY_CS900_500GB_SSD_PNY251425040201039EA-part3  ONLINE       0     0     0
            ata-PNY_CS900_500GB_SSD_PNY251425040201039EB-part3  ONLINE       0     0     0

errors: No known data errors

  pool: warm
 state: ONLINE
config:

        NAME                              STATE     READ WRITE CKSUM
        warm                              ONLINE       0     0     0
          raidz1-0                        ONLINE       0     0     0
            scsi-35000c5003013cd77-part2  ONLINE       0     0     0
            scsi-35000c5003013ced3-part2  ONLINE       0     0     0
            scsi-35000c5003013ce0f-part2  ONLINE       0     0     0
            scsi-35000c5003013cd2b-part2  ONLINE       0     0     0

errors: No known data errors
# zpool list -v
NAME                                                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hot                                                       928G  5.92G   922G        -         -     0%     0%  1.00x  ONLINE  -
  mirror-0                                                928G  5.92G   922G        -         -     0%  0.63%      -  ONLINE
    nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNX0R150579T     932G      -      -        -         -      -      -      -  ONLINE
    nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNX0R133272W_1   932G      -      -        -         -      -      -      -  ONLINE
ice                                                      14.5T  4.26T  10.3T        -         -     2%    29%  1.00x  ONLINE  -
  raidz1-0                                               14.5T  4.26T  10.3T        -         -     2%  29.3%      -  ONLINE
    ata-ST4000NM0024-1HT178_Z4F00JXK                      3.64T      -      -        -         -      -      -      -  ONLINE
    ata-ST4000NM0024-1HT178_Z4F00EMS                      3.64T      -      -        -         -      -      -      -  ONLINE
    ata-ST4000NM0024-1HT178_Z4F00JX7                      3.64T      -      -        -         -      -      -      -  ONLINE
    ata-ST4000NM0024-1HT178_Z4F00EDS                      3.64T      -      -        -         -      -      -      -  ONLINE
rpool                                                     464G  13.0G   451G        -         -     2%     2%  1.00x  ONLINE  -
  mirror-0                                                464G  13.0G   451G        -         -     2%  2.80%      -  ONLINE
    ata-PNY_CS900_500GB_SSD_PNY251425040201039EA-part3    465G      -      -        -         -      -      -      -  ONLINE
    ata-PNY_CS900_500GB_SSD_PNY251425040201039EB-part3    465G      -      -        -         -      -      -      -  ONLINE
warm                                                     1.39T   936K  1.39T        -         -     0%     0%  1.00x  ONLINE  -
  raidz1-0                                               1.39T   936K  1.39T        -         -     0%  0.00%      -  ONLINE
    scsi-35000c5003013cd77-part2                          358G      -      -        -         -      -      -      -  ONLINE
    scsi-35000c5003013ced3-part2                          358G      -      -        -         -      -      -      -  ONLINE
    scsi-35000c5003013ce0f-part2                          358G      -      -        -         -      -      -      -  ONLINE
    scsi-35000c5003013cd2b-part2                          358G      -      -        -         -      -      -      -  ONLINE
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
hot                 65.0G   834G    96K  /hot
hot/vm-100-disk-0   32.5G   864G  2.82G  -
hot/vm-101-disk-0   32.5G   864G  3.09G  -
ice                 10.4T  71.9G   140K  /ice
ice/vm-100-disk-0   10.4T  7.35T  3.10T  -
rpool               13.0G   437G    96K  /rpool
rpool/ROOT          4.35G   437G    96K  /rpool/ROOT
rpool/ROOT/pve-1    4.35G   437G  4.35G  /
rpool/data            96K   437G    96K  /rpool/data
rpool/var-lib-vz    8.63G   437G  8.63G  /var/lib/vz
warm                 680K  1002G   140K  /warm
# arcstat
    time  read  ddread  ddh%  dmread  dmh%  pread  ph%  size     c  avail
20:20:40     0       0     0       0     0      0    0  488M  2.0G  57.6G