I am in a similar situation, but for me security is #1 and convenience (easy to configure and manage) is #2. A reasonable performance difference (±10%) is not that important.
I am researching and considering three options for web/app servers (e.g. UrBackup, Vaultwarden, Plex, Nextcloud) in:
Now I understand, thanks!
The Proxmox (re)installation didn't/wouldn't securely erase them?
Also, does this vulnerability apply only to ZFS native encryption? I.e., would LUKS full-disk encryption protect my SSDs without exposing such a vulnerability?
Do you perhaps know if this process is SAFE for an existing/running system with a 2-disk ZFS mirror rpool (compression is enabled too)?
Also, what happens to rpool/data? Is it just left unencrypted, meaning I would have to encrypt it separately?
I changed the default 8k to 4k to better match most of my VMs.
Is this the only change I need to make? (I do not have any VMs yet.)
My expectation is that when a new VM is created, PVE will automatically create the accompanying zvol with a 4k volblocksize?
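For reference, this is roughly how I changed and verified it from the CLI (the storage ID `local-zfs` and the zvol name below are just examples from my setup; the block size setting only affects newly created zvols):

```shell
# Set the default block size for new zvols on a ZFS-backed storage
# (storage ID "local-zfs" is an example -- adjust to yours).
# Existing zvols keep whatever volblocksize they were created with.
pvesm set local-zfs --blocksize 4k

# After creating a VM, check that its zvol picked up the new default
# (dataset name is an example):
zfs get volblocksize rpool/data/vm-100-disk-0
```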
So if I understand correctly: if I have a DB running in a VM that uses a 64K block size, I will need to change the VM's OS/filesystem block size to 64K (since most default to 4K), in addition to adjusting the zvol volblocksize to 64K?
I am not sure I nailed it!
I intend to use only Linux VMs and containers; no data will be stored directly on the Proxmox host's ZFS filesystems. Some of these VMs will host database servers, others will run Docker containers with various services, and others will be media servers like Plex for my media files, a...
Is there some rule of thumb or formula for the ZFS block size (volblocksize, which defaults to 8k)?
I have a ZFS mirror pool of 2 HDDs (with ashift=12, i.e. 4k sectors) and I am not sure how to set it, or what the pros/cons are for different use cases.
As I am new to Proxmox, ZFS, and fio, and before sharing any results, I wanted to confirm that I am doing it right (i.e. that I am testing the right things in the right way)!
I created a ZFS mirror pool over 2 HDDs for VMs only (the Proxmox host is on another ZFS pool of SSDs). I...
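For context, this is the kind of fio run I was trying (the dataset path, file size, and mix are just examples from my attempts, not a recommendation; also note that `--direct=1` may not behave as expected on older ZFS versions, which did not support O_DIRECT):

```shell
# Mixed 4k random read/write test on a file inside the HDD pool's
# dataset (path is an example), run from the Proxmox host.
fio --name=randrw-test --filename=/hddpool/fio-testfile \
    --rw=randrw --rwmixread=70 --bs=4k --size=2G \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=60 --time_based --group_reporting
```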
Very elaborate answer, crystal clear to me now; thanks @mattlach and @Dunuin!
I have 2 ZFS pools/mirrors: one with 2 consumer-grade SSDs (Proxmox OS) and one with 2 SATA HDDs (VMs and data).
I considered assigning part of the SSD space to 2 dedicated SLOG/cache (L2ARC) partitions for the SATA...
This is an old but interesting thread.
In Proxmox's own "ZFS tips and tricks" (and here), it is indeed mentioned that if you have only one SSD, you should split it for caching and logs:
My specific question now is:
what is the recommendation if you have Proxmox sitting on a simple ZFS 2x SSD mirror...
I did a clean re-install of Proxmox. I shut it down, issued a magic packet using a command I had unsuccessfully tried in the past, and it just worked!! No other config, nothing at all. :D
FYI, the command to issue a magic packet from my windows machine was:
[...]wol.exe -i 192.168.1.255 4c:cc:6a:a0:ea:33
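On Linux, the equivalent (assuming the `wakeonlan` package is installed) would be something like:

```shell
# Send a magic packet for the same MAC via the LAN broadcast address
wakeonlan -i 192.168.1.255 4c:cc:6a:a0:ea:33
```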
Perhaps something goes wrong with the power-off state and the state of the NIC at that point in Proxmox. All the guides I've read mention bringing the system down with poweroff or shutdown for WOL to work.
In Windows 10, WOL doesn't work from the "Shut down" state, but rather from...
I confirmed that Wake on LAN works with Windows 10 Pro on a Lenovo ThinkStation P310 with onboard Intel I219-LM GB Ethernet controller. I had to make some changes in the adapter settings (e.g. disable "reduce speed on power down") and the UEFI BIOS to make it work. I was able to wake up...
I recently installed Proxmox 7 (5.11.22-4-pve) on a Lenovo ThinkStation P310 [PN:30ASS0ME1R] with onboard Intel I219-LM GB Ethernet controller. I couldn't get Wake on LAN (WOL) to work on this machine, no matter what I tried until now.
The BIOS (up to date) supports WoL and is...
Thanks for pointing me in the right direction! You are absolutely right; I did some digging, and the standard PassMark benchmark (part of it at least) uses uncached async sequential reads and writes with a block size of 16K and an IO depth of 20. I couldn't use PassMark on Linux because their suite...
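For anyone curious, this is roughly how that workload can be approximated with fio (the target file path is a placeholder; `libaio` with `--direct=1` gives the uncached, async behaviour described above):

```shell
# Async sequential read, 16K blocks, IO depth 20 -- an approximation
# of the PassMark disk test described above. Path is an example.
fio --name=seq-read-16k --filename=/root/fio-testfile \
    --rw=read --bs=16k --size=4G \
    --ioengine=libaio --iodepth=20 --direct=1 \
    --runtime=60 --time_based --group_reporting
```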
I installed Proxmox (using the ext4 filesystem) on a 512GB SATA SSD (a used consumer-grade SanDisk SD8SB8U512G1001) to evaluate it. I ran some benchmarks using fio to get an idea of how fast it is. Before installing Proxmox, I benchmarked the disk with PassMark's PerformanceTest on Win10...
Any idea where I can find which PCIe SAS HBA/IT controllers are supported by and will 100% work with Proxmox and ZFS?
I am scouting for an internal PCIe SAS HBA/IT controller. I'd like to use SAS HDDs, but there is no onboard support on my HP Z240 motherboard. Eventually, I want to...
I recently acquired an HP Z240 Workstation with a Xeon E3-1270 v6 (4C x 3.80GHz + HT), 16GB DDR4 RAM, 512GB SATA SSD, and NVS 315. I want to use it as an all-in-one, always-on home server for backups, NAS, Plex, Nextcloud, Bitwarden, etc. I will host Ubuntu VMs and Docker containers, on top...