I am in a similar situation, but for me security is #1 and convenience (easy to configure and manage) is #2. A reasonable performance difference (±10%) is not that important.
I am researching and considering 3 options for web/app servers (e.g. urbackup, vaultwarden, plex, nextcloud) in:
LXC on...
Now I understand, thanks!
The Proxmox (re)installation didn't/wouldn't securely erase them?
Also, does this vulnerability apply only to zfs native encryption? I.e. would a LUKS full disk encryption protect my SSDs without exposing such vulnerability?
Do you perhaps know if this process is SAFE for an existing/running system with a 2-disk ZFS mirror rpool (compression is enabled too)?
Also, what happens to rpool/data? Is it just left unencrypted? Would I then have to encrypt it separately?
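For context, a minimal sketch of what separately encrypting the data dataset with ZFS native encryption could look like. The dataset name rpool/data-enc and the passphrase key format are assumptions for illustration, not something confirmed in this thread:

```shell
# Create a new, separately encrypted dataset (ZFS native encryption).
# Existing data would have to be copied or zfs-sent into it afterwards.
zfs create \
    -o encryption=aes-256-gcm \
    -o keyformat=passphrase \
    rpool/data-enc

# Verify the encryption properties took effect.
zfs get encryption,keyformat,keystatus rpool/data-enc
```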
I changed the default 8K to 4K to better match most of my VMs.
Is this the only change I need to make? (I do not have any VMs yet.)
My expectation is that when a new VM is created, PVE will automatically create the accompanying zvol with a 4K volblocksize?
So if I understand correctly: if I have a DB running in a VM that uses a 64K block size, I will need to change the VM's OS/FS block size to 64K (since most default to 4K), in addition to setting the zvol volblocksize to 64K?
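As a sketch of what I mean: volblocksize is fixed at zvol creation time and cannot be changed on an existing zvol, so the adjustment has to happen either via the storage-level `blocksize` option in /etc/pve/storage.cfg or when creating the zvol by hand. Pool and zvol names below are placeholders:

```shell
# volblocksize is fixed at creation; check what an existing zvol got:
zfs get volblocksize rpool/data/vm-100-disk-0

# Create a new zvol with a 64K volblocksize by hand (names are placeholders):
zfs create -V 32G -o volblocksize=64K rpool/data/vm-101-disk-0
```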
I am not sure I nailed it!
I intend to only use Linux VMs and containers; no data will be stored directly on the Proxmox host's ZFS filesystems. Some of these VMs will host database servers, others Docker containers with various services, others a media server like Plex for my media files, a...
Is there some rule of thumb or formula for the ZFS pool block size (which defaults to 8K)?
I have a ZFS mirror pool of 2 HDDs (with ashift=12, i.e. 4K sectors) and I am not sure how to set it up, or what the pros/cons are for different use cases.
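For reference, a quick way to inspect the settings involved here; the pool name tank is a placeholder. Note that ashift governs the physical sector size per vdev, recordsize applies to datasets (files), and volblocksize applies to zvols:

```shell
# ashift is fixed per vdev at pool/vdev creation time:
zpool get ashift tank

# recordsize (datasets) defaults to 128K and is tunable later;
# volblocksize (zvols) is what PVE's 8K default refers to:
zfs get recordsize tank
```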
Hi All,
As I am new to Proxmox, ZFS, and fio, before sharing any results I wanted to confirm that I am doing it right (i.e. testing the right things, in the right way)!
I created a ZFS mirror pool over 2 HDDs for VMs only (the Proxmox host is on another ZFS pool of SSDs). I...
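To make the question concrete, this is the kind of fio run I have in mind; the test path, size, and queue depth are made-up values, and note that older ZFS versions silently ignore O_DIRECT, so --direct=1 may not behave as it would on ext4:

```shell
# 4K random-write test against a file on the HDD pool (path is a placeholder)
fio --name=randwrite \
    --filename=/tank-hdd/fio-testfile \
    --rw=randwrite --bs=4k --size=2G \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=60 --time_based --group_reporting
```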
Very elaborate answer, crystal clear to me now; thanks @mattlach and @Dunuin!
I have 2 ZFS pools/mirrors: one with 2 consumer-grade SSDs (Proxmox OS) and one with 2 SATA HDDs (VMs and data).
I considered assigning part of the SSD space to 2 dedicated SLOG/cache (L2ARC) partitions for the SATA...
This is an old but interesting thread.
In Proxmox's own "ZFS tips and tricks" (and here), it is indeed suggested that if you have only 1 SSD, you should split it for caching and logs:
My specific question now is:
what is the recommendation if you have Proxmox sitting on a simple ZFS 2xSSD mirror...
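For illustration, attaching SSD partitions as SLOG and L2ARC would look roughly like this; the device paths and pool name are placeholders, and a mirrored SLOG is commonly recommended because losing a lone log device can lose in-flight sync writes:

```shell
# Mirrored SLOG built from one spare partition on each SSD (placeholder paths):
zpool add tank log mirror \
    /dev/disk/by-id/ssd1-part4 /dev/disk/by-id/ssd2-part4

# L2ARC cache devices (striped; cache vdevs need no redundancy):
zpool add tank cache \
    /dev/disk/by-id/ssd1-part5 /dev/disk/by-id/ssd2-part5
```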
I did a clean re-install of Proxmox, shut down, issued a magic packet using a command I had unsuccessfully used in the past, and it just worked!! No other config, nothing at all. :D
FYI, the command to issue a magic packet from my windows machine was:
[...]wol.exe -i 192.168.1.255 4c:cc:6a:a0:ea:33
Perhaps something goes wrong with the power-off state and the state of the NIC at that point in Proxmox. All the guides I've read mention bringing the system down with poweroff or shutdown for WOL to work.
In Windows 10, WOL doesn't work from the "Shut down" state, but rather from...
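On the Proxmox side, checking and enabling WOL on the NIC can be sketched like this; the interface name eno1 is a placeholder, and the setting often has to be re-applied on boot (e.g. via a post-up line in /etc/network/interfaces), since drivers may reset it:

```shell
# Check whether the NIC supports and currently has WOL enabled
# ("g" in "Wake-on" means magic-packet wake-up is active)
ethtool eno1 | grep -i wake

# Enable magic-packet wake-up (persists only until the next reboot)
ethtool -s eno1 wol g
```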
Hi All,
I confirmed that Wake On Lan works with Windows 10 Pro on a Lenovo ThinkStation P310 with an onboard Intel I219-LM Gigabit Ethernet controller. I had to make some changes in the adapter settings (e.g. disable "reduce speed on power down") and the UEFI BIOS to make it work. I was able to wake up...