Hello there, I'll be setting up a new Proxmox server in the near future.
Supermicro mainboard H11SSL-i
AMD EPYC 7282 (2.80 GHz, 16-core, 64 MB)
128 GB (4x 32 GB) ECC Reg DDR4 RAM 2 Rank
4x 960 GB SATA III Intel SSD 3D-NAND TLC 2.5" (D3 S4510) => RAIDZ1 vmdata
2x 6 TB SATA III Western Digital Ultrastar 3.5" 7.2k (512e) => ZFS mirror for vzdumps and data storage
2x 240 GB enterprise SSD mirror for PVE (to be chosen)
The server will host a Windows AD, an ELK stack for NetFlow, an ELK stack for log aggregation (Winlogbeat etc.), possibly a Cisco FMC KVM appliance (which also does "huge" database operations), plus lots of other small VMs that are normally not I/O heavy.
I've been looking into OpenZFS basics and how to create encrypted datasets.
I set up a PVE test system on a Ryzen 3700X (8 cores) with 32 GB RAM and a bunch of consumer SSDs for learning purposes.
There I created an encrypted dataset via:
zfs create -o encryption=on -o keyformat=passphrase -o reservation=none ssdpool/encrypt
and added it to storage.cfg
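For reference, the storage.cfg entry looks roughly like this (the storage ID "encrypted-ssd" is just a name I picked for the test box; only the pool/dataset path comes from the command above):

```
zfspool: encrypted-ssd
        pool ssdpool/encrypt
        content images,rootdir
        sparse 1
```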
The VM disks (zvols?) now appear as ssdpool/encrypt/****
I also followed the zfs-mount-generator man page to unlock it via passphrase at boot.
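In case it helps others, the relevant steps from the man page boil down to roughly this (pool name from my test box; the ZED script path is the one given in zfs-mount-generator(8) and may differ per distro):

```shell
# Create the dataset list cache file for the pool (the generator reads this):
touch /etc/zfs/zfs-list.cache/ssdpool

# Link the ZED script that keeps the cache up to date:
ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d

# Make sure ZED is running so the cache actually gets populated:
systemctl enable --now zfs-zed.service
```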
I've got some questions, since this will be a core machine, it will soon be needed in production, and it will be hard to change the storage layout once it's up and running.
The 4x 960 GB enterprise SSDs will host VM data; the 2x 6 TB HDDs are meant to act as backup targets for vzdumps and (possibly) storage space for Windows shares/home folders etc.
Are there big risks in running a ZFS mirror on HDDs that do not feature power-loss protection?
Am I correct in understanding that on this kind of copy-on-write file system, a power loss should not corrupt data?
There will be weekly backups to another location.
Performance:
When I read about ZFS lz4 compression (before noticing that compression is enabled by default) I ran some small performance tests:
I used "Move Disk" in the Proxmox GUI to move a VM disk from one (encrypted) ZFS dataset to another, and did an FTP transfer to the Ubuntu VM on that disk.
While the FTP transfer created some noticeable CPU load, moving the disk pinned the 8-core CPU to 90%+ load across all 16 threads for several seconds.
The 16c/32t processor I once deemed overkill now suddenly worries me a bit, imagining a VM maxing out 8 cores while also generating disk load at the same time (even if just for a minute).
Would you consider disabling compression when using ZFS encryption, or is ZFS "smart" enough not to starve my virtual machines of CPU/IO resources?
I'm afraid of killing/corrupting critical VMs because of a backup task or a disk move.
Encryption is critical; compression would just be a nice-to-have at the moment.
I've read that swap on ZFS can be problematic:
if the host system starts swapping and the swap is located on the ZFS pool, swapping itself needs RAM and computation ... crash.
Would you disable swap on the PVE host? Should I disable swapping in the VMs (where possible) and allocate more RAM instead?
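What I'm currently leaning towards (assuming swap lives on the separate PVE boot SSD mirror, not on a ZFS pool) is just making the host very reluctant to swap, e.g.:

```
# /etc/sysctl.d/99-swap.conf -- hypothetical snippet, not applied yet
vm.swappiness = 1
```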
How much RAM would you leave untouched for ZFS? The current plan is to use no more than ~80 GB for VMs and leave more than enough memory free.
The plan might change though; we needed more than 64 GB, so we went with 128 GB.
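Related to that: if I end up capping the ARC, my understanding is that it's the zfs_arc_max module parameter, given in bytes; e.g. a hypothetical 32 GiB cap (example value, not a recommendation) would be:

```shell
# 32 GiB expressed in bytes for zfs_arc_max
echo $((32 * 1024 * 1024 * 1024))
# -> 34359738368, which would go into /etc/modprobe.d/zfs.conf as:
#      options zfs zfs_arc_max=34359738368
# (followed by `update-initramfs -u` so it applies at boot)
```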
Without a UPS I should use Default (no cache) for virtual disks, even if they are located on the RAIDZ1 SSDs with power-loss protection, right? What about the SSD emulation option?
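If it matters, the way I'd set those per-disk options from the CLI is something like this (VMID, storage ID and disk name are hypothetical):

```shell
# cache=none is the "Default (No cache)" GUI setting; ssd=1 is "SSD emulation"
qm set 100 --scsi0 encrypted-ssd:vm-100-disk-0,cache=none,ssd=1
```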
I've seen some discussions about block size; should I worry? It won't be an I/O-heavy server outside of backup tasks.
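For context, what I found so far is that the zfspool storage type accepts a blocksize option that sets the volblocksize for newly created zvols, e.g. (storage ID and value hypothetical):

```
zfspool: vmdata
        pool ssdpool/encrypt
        content images
        blocksize 16k
```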
Am I correct that an encrypted ZFS mirror for the host root partition is currently not supported by the installer? (I did it back then with a manual Debian setup + LUKS on my other servers.)
Coming from an old HP DL380 G7 with HDDs, this server should have no problem handling our load, but I'm afraid of creating bottlenecks or I/O issues, as this server will be business-critical.
That's a lot of questions; apologies for the wall of text, and thanks for your input in advance.