Looking for best practices for a NAS VM and encrypted storage.

allyx

Hello,

I have a bunch of questions about setting up storage in a recommended/safest way.

My end goal is a NAS with a few storage "buckets" that I can mount/share into other VMs or expose externally via Samba, NFS, etc.

The tricky part is that I want all (or some) of the storage to stay encrypted (locked) at startup, and I would unlock it manually after a restart.

What I've tried, and what didn't work reliably:
- The whole storage disk is LUKS-encrypted and contains a single partition inside
- The disk is passed through to a VM running OpenMediaVault
- I'm using the LUKS plugin for OMV to unlock the drive
- I'm sharing a bunch of directories via NFS/SMB to my other VMs and externally.

The issue is that OMV doesn't behave well when its shares are not available (encrypted disk not yet mounted), so when I unlock it, _sometimes_ the NFS shares don't work correctly. The whole thing feels very fragile.

Of course, I could build a dummy OMV share structure on the initial FS and then use a script to unlock the encrypted drive and mount it inside those directories, but I want to avoid heavily modifying the NAS's behavior.
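For reference, the kind of script I have in mind would be something like this (a rough, untested sketch; the device /dev/sdb1, the mapper name "cryptdata" and the mount point /srv/data are placeholders for my setup):

#!/bin/bash
# Untested sketch: unlock the LUKS volume and re-export the shares.
set -e
cryptsetup open /dev/sdb1 cryptdata      # prompts for the LUKS passphrase
mount /dev/mapper/cryptdata /srv/data    # mount the decrypted FS over the dummy dirs
exportfs -ra                             # re-publish the NFS exports
systemctl restart smbd nfs-server        # kick Samba/NFS so they pick up the real data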

So,
I'm thinking of the following strategy:
- Again, the whole disk is encrypted and contains a single partition
- The disk is added as a Proxmox storage that contains qcow2 files
- The qcow2s are attached to the NAS VM (or wherever needed) as storage partitions, so even if the disk is not unlocked at startup the VMs still boot, but are unable to mount the locked qcow2s
- A simple addition on the host to allow unlocking the storage disk via some sort of UI (see the rough sketch below)
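What I imagine running on the host after a reboot is roughly this (untested; the device, the mapper name and the storage ID "cryptstore" are placeholders, and I'm assuming the qcow2s live on a directory storage that is marked as disabled until unlocked):

#!/bin/bash
# Untested sketch: unlock the LUKS disk on the PVE host and bring the
# directory storage that holds the qcow2 files back online.
set -e
cryptsetup open /dev/sdb cryptstore            # prompts for the passphrase
mount /dev/mapper/cryptstore /mnt/cryptstore   # mount point backing the PVE dir storage
pvesm set cryptstore --disable 0               # re-enable the storage in Proxmox
pvesm status                                   # sanity check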

The downsides/questions I'm concerned about:

- One more layer of abstraction - instead of passing through a native disk and partition, I would now have a fully encrypted disk, a partition on top of it, and qcow2s inside that each contain a filesystem and partition of their own. I'm wondering if there is a more elegant way to organize this kind of storage.

- Is there any concern/danger in having big qcow2 files - let's say 1-2 TB - or does the file size itself not matter at all?

- Does Proxmox itself behave well if a virtual disk that is attached to a VM isn't available at startup? Let's say I have a NAS VM - the system disk is available, but the storage qcow2 is still encrypted when Proxmox and the VM start up.

- Can a virtual disk be dynamically "turned on" when it becomes available? In the example above - let's say the NAS is started without its storage virtual disk. When I unlock the encrypted disk and the qcow2 becomes available, can I attach it to the running VM so it can be mounted live, i.e. virtually hot-plugged? (See the sketch after this list.)

- Is there a "recommended" way to add additional functionality to the Proxmox host? A way of running custom scripts on the host and adding to/extending the Proxmox UI? As a generic way, I was thinking simply running a separate web UI having the tools to unlock the encrypted storage. I've seen people add Cockpit/Webmin to the host, but I don't want to bloat the Proxmox install.

Am I overengineering this? Maybe there's a simpler, more elegant way of achieving what I want. My very simple requirement is: I want my storage to be encrypted, so that even if somebody steals the hardware the data stays encrypted until I decrypt it manually - which would only happen after a long power outage where the UPS shuts the machine down. Any advice is welcome :)
 
Why don't you use a NAS OS that supports encryption? You could, for example, pass disks through into a TrueNAS VM (disk passthrough of individual disks, or better, passing through a whole HBA using PCI passthrough) and then create your encrypted RAID inside the TrueNAS VM.
 
It's a very lightweight server. TrueNAS is awesome, but to run smoothly it wants to eat a lot of memory.
 
The issue is that OMV doesn't behave well when its shares are not available (encrypted disk not yet mounted), so when I unlock it, _sometimes_ the NFS shares don't work correctly. The whole thing feels very fragile.
Seems to me you're posting in the wrong community ;)

I don't have experience with OMV so I can't answer specifically for it, but generally you want your services to have defined dependencies, e.g. nfsd depending on the mount, which in turn depends on LUKS. If your service dependencies are properly defined, that would be the correct behavior, and you'd just need to set a longer service timeout.
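Roughly, assuming systemd, the decrypted volume mounted at /srv/data, and Debian's nfs-server.service (all of that is just an example, untested):

# Untested sketch: make the NFS server wait for the decrypted mount
# instead of failing when the LUKS volume isn't open yet.
mkdir -p /etc/systemd/system/nfs-server.service.d
cat > /etc/systemd/system/nfs-server.service.d/wait-for-data.conf <<'EOF'
[Unit]
RequiresMountsFor=/srv/data
EOF
systemctl daemon-reload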
 
Seems to me you're posting in the wrong community ;)

I don't have experience with OMV so I can't answer specifically for it, but generally you want your services to have defined dependencies, e.g. nfsd depending on the mount, which in turn depends on LUKS. If your service dependencies are properly defined, that would be the correct behavior, and you'd just need to set a longer service timeout.
I guess my questions are meant in a more general sense - since I'm not very experienced with Proxmox, I want to learn what the good approaches to securing storage in Proxmox are in general. Forget about OMV - let's say you have some VM whose data you want to be encrypted and safe in case someone steals the hardware.

So my dilemma is whether good practice leans towards encrypting on the Proxmox storage side (an encrypted disk containing unencrypted qcow2s) or having each VM encrypt its own filesystem on the guest side...

The first setup sounds easier, as you only need to unlock one drive and then just run all the VMs, compared to the second approach where you need to unlock each VM's FS on boot.

On the other hand, I saw that Proxmox complains and won't start a VM if one of its volumes is missing, unless the volume is detached. Also, this approach means I have to configure a fallback for each VM to start without some of its volumes attached and then dynamically mount them later, which is a hassle on its own.

As the saying goes, there are a thousand ways to skin a cat, but since I'm not experienced I want to see how people do it :) I doubt I'm the only one who wants their data encrypted in case of hardware theft...
 
So my dilemma is whether good practice leans towards encrypting on the Proxmox storage side (an encrypted disk containing unencrypted qcow2s) or having each VM encrypt its own filesystem on the guest side...
ah.

We need to take a step backwards. What are you trying to accomplish? More to the point, who are you trying to guard the data from?

Best practices (not accounting for Proxmox) mean that data encrypted at rest cannot be accessed with physical access, which means the data should not be accessible until as far into the logical process as viably possible. In your case, if you simply must house your NAS on Proxmox, you really do want the encryption to be unlockable from inside the guest VM. The Red Hat way to deal with unlocking in an enterprise environment is using clevis, which works relatively well and allows (requires) the key hash to be off-server, meaning that if the server is removed from the network the encrypted volume will not unlock/mount. If you intend to use this method (or something similar), you want to make sure the clevis key server is NOT on Proxmox and NOT exposed to the internet - since, again, that would mean physical access to the device allows access.
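For reference, binding a LUKS volume to a tang key server with clevis looks roughly like this (untested sketch; /dev/sdb1 and the tang URL are placeholders):

# Untested sketch: network-bound disk encryption with clevis + tang.
# The tang server must be reachable at unlock time, otherwise the volume stays locked.
apt install clevis clevis-luks clevis-systemd
clevis luks bind -d /dev/sdb1 tang '{"url": "http://tang.internal.lan"}'
clevis luks unlock -d /dev/sdb1 -n testunlock   # test the non-interactive unlock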

Honestly, though - are you sure this serves any useful purpose? If the answer is "to learn", it's simpler to do it all virtually, and you don't cause harm when you mess up.
 
So, before I switched to a hypervisor I had a separate machine hosting my data - I did set it up with full disk encryption and entered the password on boot. Since the machine itself was on a UPS, it got booted maybe once a year, if not less often, so that's not an issue for me. I've also played around with hosting the key remotely, auto-unlocking, etc., but ultimately I don't mind entering the password once a year :)

"What are you trying to accomplish? more to the point, who are you trying to guard the data from?"
If someone steals the hardware and tries to power it on, they won't be able to retrieve any personal data - that's the ultimate goal.

So from what you're saying, you'd rather encrypt the FS inside the VM than encrypt the storage the VM sits on, on the Proxmox side.

There's an additional benefit to that approach: since the VMs are encrypted inside, I don't need to care about encrypting the backups of the VMs themselves.

The only downside I see here is that if I have several encrypted VMs, I have to unlock each of them separately on boot, or somehow daisy-chain them.
 
You can also set up full disk encryption with PVE. It's just a pain to set up, as PVE doesn't support any encryption out of the box.
 
You can also set up full disk encryption with PVE. It's just a pain to set up, as PVE doesn't support any encryption out of the box.
That's out of the question, because I have other VMs that need to start unattended on boot.
 
You can unlock a fully disk-encrypted PVE host over SSH before PVE actually starts booting, using dropbear-initramfs. If you just want protection against theft, you could set up a Raspberry Pi Zero, let it try to auto-unlock the PVE host every minute via crontab, and hide it somewhere. As long as both don't get stolen together, your data on the PVE host would still be safe.
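Roughly (untested sketch; package and path names are from Debian and may differ between releases):

# Untested sketch: SSH unlock of a fully encrypted PVE host via the initramfs.
apt install dropbear-initramfs
# authorize the key you will unlock from (e.g. the Pi's key);
# on older Debian releases the path is /etc/dropbear-initramfs/authorized_keys
cat pi_key.pub >> /etc/dropbear/initramfs/authorized_keys
update-initramfs -u
# At boot, SSH into the initramfs as root and run cryptroot-unlock, entering
# the passphrase; a cron job on the Pi could script that same step every minute.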
 
