storage config

caplam

Active Member
Nov 14, 2018
Hello,
I have not used Proxmox in a long time and I'm about to set up a new server for it.
I used to put VM vdisks on an SSD pool and pass through spinners to the VMs that needed them.
So I plan to install Proxmox on a single SSD, then make a pool of 2 SSDs for VMs (1 or 2 Docker VMs, 1 Windows, ...).
But I also have a hungry VM to set up which will need more storage.
For this VM I plan to add 2 spinners (low perf) in a SAS DAS (can add up to 6 more). I can also add SSDs in a DAS (connected with USB 3.2 Gen 2).
My plan would be to make a volume with cache enabled for this VM. The VM is Security Onion, which manages its storage with LVM internally.
What would be the best way to manage my storage?
 
I tried to extract your workload from your description:
2 Docker VMs
1 Windows
1 "hungry"
I can also add SSDs in a DAS (connected with USB 3.2 Gen 2).
Don't do that. Disk response times are unpredictable behind a USB bridge and have a tendency to time out, the consequence of which is to lock up your storage at best and your whole server at worst, requiring a hard reset to clear.
What would be the best way to manage my storage?
The simplest. Add up all your space needs and use a ZFS mirror to supply them. 4 VMs don't seem like a large request; 2x 4TB NVMe drives should do nicely.
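Something like this, for example (pool name and by-id paths are placeholders, substitute your own):

Code:
# create the mirror on the two NVMe drives (use stable /dev/disk/by-id paths)
zpool create -o ashift=12 vmpool mirror \
  /dev/disk/by-id/nvme-DRIVE_A /dev/disk/by-id/nvme-DRIVE_B
# register it with Proxmox as storage for VM disks and containers
pvesm add zfspool vmpool -pool vmpool -content images,rootdir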
 
Thank you for your answer.
Is the problem of response time the same with USB4/Thunderbolt 3?
I will start with 4 VMs but it might grow. For example, I plan to learn a bit about Kubernetes. I could also transfer some load from my first server, which is exhausting its memory.
Using a 4TB mirror of NVMe drives will not be sufficient for my Security Onion VM. I need more space.
When you install Security Onion, the allocated disk space is divided into / and /nsm. The /nsm partition holds all the data (InfluxDB and Elasticsearch storage, pcap files) and is managed by LVM. The more retention you want, the more storage you need.
For now I have 1TB of space, which allows me 2 days of full pcap retention. The rules for splitting storage between pcap files and Elasticsearch storage are a bit unclear to me, but I know that if I extend the /nsm partition I will get more retention.
Since /nsm is backed by LVM, I could extend it with another storage device, like the home partition in the example below:

Code:
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                          8:0    0   30G  0 disk
├─sda1                       8:1    0  976M  0 part /boot/efi
├─sda2                       8:2    0  977M  0 part /boot
└─sda3                       8:3    0 28.1G  0 part
  ├─debian--srv--vg-root   254:0    0  7.4G  0 lvm  /
  ├─debian--srv--vg-var    254:1    0  2.2G  0 lvm  /var
  ├─debian--srv--vg-swap_1 254:2    0  1.4G  0 lvm  [SWAP]
  ├─debian--srv--vg-tmp    254:3    0  660M  0 lvm  /tmp
  └─debian--srv--vg-home   254:4    0 86.4G  0 lvm  /home
sdb                          8:16   0   70G  0 disk
└─sdb1                       8:17   0   70G  0 part
  └─debian--srv--vg-home   254:4    0 86.4G  0 lvm  /home

sda would be a vdisk on the ZFS mirror and sdb would be a vdisk on the HDD mirror.
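The extension itself would be something along these lines, assuming the VG/LV names from the output above (lsblk doubles the dashes; the VG is really named debian-srv-vg):

Code:
# turn the new vdisk's partition into a PV and add it to the VG
pvcreate /dev/sdb1
vgextend debian-srv-vg /dev/sdb1
# grow the LV over the new space and resize the filesystem with it
lvextend -r -l +100%FREE debian-srv-vg/home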

Can I provide one vdisk for / and two other vdisks for /nsm: one on the ZFS mirror (on top of the NVMe) and one on a mirror of HDDs, then build the LVM with cache inside the VM?
I don't know if I'm being clear. The goal is to extend the storage space with HDDs without being too slow: if I store Elasticsearch data on HDDs only, queries will be too slow.
In fact I'm wondering which would be better:
building the LVM with cache inside the VM (sketched below), or presenting it one large cached volume?
Which would be the simplest to manage, extend if needed, and back up (I already have PBS in a Docker container on my first server)?
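A rough sketch of the in-VM variant (all device and volume names are hypothetical):

Code:
# /dev/sdb = vdisk backed by the HDD mirror, /dev/sdc = vdisk backed by the NVMe mirror
pvcreate /dev/sdb /dev/sdc
vgcreate nsm-vg /dev/sdb /dev/sdc
# data LV on the slow vdisk, cache LV on the fast one
lvcreate -n nsm -l 100%PVS nsm-vg /dev/sdb
lvcreate -n nsmcache -l 90%PVS nsm-vg /dev/sdc
# attach the fast LV as a cache volume in front of the data LV
lvconvert --type cache --cachevol nsmcache --cachemode writethrough nsm-vg/nsm
mkfs.ext4 /dev/nsm-vg/nsm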


edit: if it helps, I have plenty of memory to put in the server.
 
Is the problem of response time the same with USB4/Thunderbolt 3?
The issue is not the transport, it's the bridge. I imagine Thunderbolt is less susceptible (if for no other reason than the bridge chips are more expensive) but I have never used Thunderbolt for server use, so I can't say.

Using a 4TB mirror of NVMe drives will not be sufficient for my Security Onion VM. I need more space.
So use larger drives. My advice was arbitrary, as you never mentioned what your actual needs are.
For now I have 1TB of space, which allows me 2 days of full pcap retention. The rules for splitting storage between pcap files and Elasticsearch storage are a bit unclear to me, but I know that if I extend the /nsm partition I will get more retention.
Partitions are not disks, and vice versa. It's completely possible to do what you ask with a single vdisk, or multiple; use LVM inside the guest to suit.
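For example, with a single vdisk you can grow it on the host and let the guest's LVM absorb the space. A sketch, with made-up VM id, disk and volume names:

Code:
# on the Proxmox host: grow the vdisk
qm resize 101 scsi1 +500G
# inside the guest: grow the partition, the PV, then the LV and its filesystem
growpart /dev/sda 3          # from cloud-guest-utils
pvresize /dev/sda3
lvextend -r -l +100%FREE debian-srv-vg/home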
 
So given those answers I changed my mind on how to use the resources I have.
Here is the hardware I can use:
- Minisforum MS-02, i9 285HX (24 cores), with 10GbE, 2.5GbE and 2x 25GbE interfaces, and 4x 64GB RAM
- 4 NVMe slots (I have 2x 4TB Samsung 990 EVO Plus and one 1TB SN580)
- an LSI 9400-8e HBA in the PCIe slot, connected to an 8-bay SAS 12G DAS
I plan for this DAS to hold 6x 4TB SAS 12G HDDs and 2x 800GB or 960GB 12G SSDs (enterprise grade).
Later I plan to use a USB4v2 GPU dock to learn about AI.

But what's still on my mind for now is storage planning.
Planned usage is some LXC containers and a Docker VM or Kubernetes cluster: this can fit on a mirror of the two 990 EVO Plus SSDs.
I also plan to have a Security Onion VM (Elasticsearch-based, so probably lots of small writes, and latency sensitive when running queries) and a vulnerability scanner (high CPU usage).
I'll probably be able to install S.O. as a distributed setup to try to balance the load. The different nodes have different needs (CPU, RAM, storage).
For now it's a standalone setup on my "production server", but RAM and CPU usage are killing it and storage isn't sufficient.
For this (and to learn ZFS) I plan to use ZFS storage on the contents of the DAS.
I have read many things about ZFS but I still don't get it all.
For the HDDs, raidz2 or mirrors?
What would be the better use of the SAS SSDs: metadata special device, SLOG?
How should I present storage to the S.O. VM: dataset or zvol?
I can't say whether the VM requires sync or async writes. To make the question concrete, the layouts I'm weighing are sketched below.
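Something like this (pool name and device paths are placeholders, not commands I've run):

Code:
# option A: three striped mirrors - better IOPS, ~12TB usable
zpool create -o ashift=12 tank \
  mirror hdd1 hdd2 mirror hdd3 hdd4 mirror hdd5 hdd6
# option B: raidz2 - better capacity (~16TB usable), worse IOPS
zpool create -o ashift=12 tank raidz2 hdd1 hdd2 hdd3 hdd4 hdd5 hdd6
# possible roles for the two SAS SSDs (one or the other):
zpool add tank special mirror ssd1 ssd2   # metadata special device
zpool add tank log mirror ssd1 ssd2       # SLOG, only helps sync writes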

Even though it's not for prod (it's my homelab), I'm looking for the best approach to keep things simple from a maintenance perspective.
After setup is complete I'll move it to my basement, where it'll be powered by a UPS.
My network is 10G capable.
Sorry I can't be more specific about the workload.