Testing Nested Proxmox on Encrypted Debian Install (Proxmox 6.4 + Debian 10.9)

When initially installing Debian, is it necessary to install the ‘standard system utilities’, or is only SSH needed?
Will Proxmox install everything it needs to run the same way as it would if I were using the ISO?
 
I always install the standard system utilities. I unchecked that once in a VM, and administering it was really annoying because all the basic programs that you always need were missing and had to be installed later.
 
Possibly a silly question but…

If I have Proxmox installed over an encrypted Debian install, and then create an LXC container in the default volume, that is encrypted too, right? Same for VMs? But if I want to install VMs on a separate physical drive, then that drive cannot be LUKS encrypted? Would encrypting each one as part of its own installer offer the same effect from a security point of view, apart from having to unlock each of them individually at boot?

2nd question: If I want to install a few Docker containers and I use a basic Debian LXC as my Docker host, is there anything I would miss out on compared to installing Debian in a VM? As they will be managed via browser, the functionality should be the same, right?
 
Possibly a silly question but…

If I have Proxmox installed over an encrypted Debian install, and then create an LXC container in the default volume, that is encrypted too, right? Same for VMs?
yes
But if I want to install VMs on a separate physical drive, then that drive cannot be LUKS encrypted?
You can encrypt any drive with LUKS, but you need to unlock each drive individually. And if you use ZFS, you can use ZFS native encryption instead.
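A rough sketch of both options (device path, mapper name, and pool/dataset names below are just placeholders):

    # LUKS: encrypt the whole disk, then "open" (unlock) it before use
    cryptsetup luksFormat /dev/sdX
    cryptsetup open /dev/sdX vmstore        # prompts for the passphrase
    mkfs.ext4 /dev/mapper/vmstore
    mount /dev/mapper/vmstore /mnt/vmstore

    # ZFS native encryption: create an encrypted dataset on an existing pool
    zfs create -o encryption=on -o keyformat=passphrase tank/vmdata
    zfs load-key tank/vmdata                # unlock again after a reboot
    zfs mount tank/vmdata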
Would encrypting each one as part of its own installer offer the same effect from a security point of view, apart from having to unlock each of them individually at boot?
Encrypted drives won't give you any protection as long as they are unlocked. With that in mind, it's a good idea to have additional encrypted drives that you only unlock when you really need to access them. Take a dedicated drive used as backup storage: if you only do manual backups once a month or so, it is useful to unlock it only for as long as you need it, so that most of the time the backups stay encrypted and inaccessible.
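As a rough sketch, that unlock-only-while-needed workflow for a LUKS backup disk could look like this (device, mapper name, and paths are placeholders):

    cryptsetup open /dev/disk/by-uuid/<uuid> backup   # unlock only when needed
    mount /dev/mapper/backup /mnt/backup
    # ... run the manual backup ...
    umount /mnt/backup
    cryptsetup close backup                           # locked again afterwards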
2nd question: If I want to install a few Docker containers and I use a basic Debian LXC as my Docker host, is there anything I would miss out on compared to installing Debian in a VM? As they will be managed via browser, the functionality should be the same, right?
Docker will run inside an LXC, but as far as I know the staff still recommends installing Docker in a VM. And if you want to run services that are reachable from the internet, or you need to mount SMB/NFS shares, I would always choose VMs over LXCs. They are simply more secure because of the better isolation, and less annoying to manage. For example, you can't mount SMB/NFS shares inside an unprivileged LXC. And if you mount the share on the host and use a bind mount to bring it into the LXC, you can't use snapshots anymore.
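To illustrate that last point, mounting a share on the host and bind-mounting it into an unprivileged container could look roughly like this (server, paths, and the container ID 101 are made up):

    # on the Proxmox host (needs cifs-utils): mount the SMB share
    mount -t cifs //nas.example/share /mnt/nas -o credentials=/root/.smbcred

    # bind-mount the host path into container 101 as mount point mp0
    pct set 101 -mp0 /mnt/nas,mp=/mnt/nas

    # note: with such a bind mount the container can no longer be snapshotted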
 
I must have asked that LUKS question about half a dozen times, but it only just clicked why I cannot see it when setting up, because you used the word unlock lol. Thanks!

When you say to install Docker in a VM, I guess you mean on a full OS host such as Debian?

Any solution for easily unlocking encrypted drives (other than a plain-text password file)?
 
That really depends. I want all my disks unlocked during or right after boot so that everything is always accessible. I don't have data that needs to be super secure, so it doesn't need to be decrypted on the fly. The two benefits I want are:
1.) being able to RMA a failed drive to get a replacement, even if I can't shred the data on it because of the failure
2.) if someone breaks in and steals the server, the power gets unplugged and everything is encrypted again

So I use dropbear-initramfs to be able to unlock the root disk over SSH while booting. After Proxmox has finished booting, a systemd unit runs a script that unlocks all my ZFS pools. I store the keys on the encrypted root partition, so I don't have to type them in manually. Because VMs can't start while the VM storage is locked, I disabled "start at boot" and start all VMs with a script once my VM storage is unlocked.
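A rough sketch of that setup; the script path, keylocations, and VM IDs are just examples, not my exact files:

    # 1) unlock the root disk over SSH during boot
    apt install dropbear-initramfs
    # add your SSH public key to /etc/dropbear-initramfs/authorized_keys, then:
    update-initramfs -u

    # 2) /usr/local/bin/unlock-storage.sh, run once by a oneshot systemd unit
    #    after Proxmox has booted
    zfs load-key -a      # key files live on the already-unlocked encrypted root
    zfs mount -a
    qm start 100         # these VMs have "start at boot" disabled
    qm start 101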

But this way the encryption won't help at all against hackers or attackers who have physical access to the running server. If you have, for example, a VM for online banking, it would be better to manually unlock its storage, start the VM, make the transfer, shut down the VM, and lock the storage again.
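Assuming such a VM sits on its own encrypted ZFS dataset, that manual cycle could be as short as this (dataset name and VM ID made up):

    zfs load-key tank/banking && zfs mount tank/banking
    qm start 105
    # ... do the banking ...
    qm shutdown 105
    zfs unmount tank/banking && zfs unload-key tank/banking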

I personally like passphrases more than keyfiles, because in an emergency I can always use the keyboard/IPMI and type one in to unlock a storage manually.
 
What I want is pretty much what you described. If a drive dies, or if someone steals the drives (or the server), I don't want to worry about whether they have access to my data. I understand the risk of hackers or whatever getting in while the drives are unlocked; it's more about what happens if they are ever taken away.
 
Some quick questions:

1) If I have 2 NICs on my system, can I use one of them exclusively for a nested instance of Proxmox (used for testing)?

2) Is it better to use both together in a NIC-Teaming (bonding?) instead?

3) Can the nested Proxmox be in a Cluster with the Proxmox that is hosting it?

I was messing around with the networking and other settings and screwed it up to the point of needing a reinstall :) enjoying the testing but that’s why I prefer a separate nested instance lol
 
Some quick questions:

1) If I have 2 NICs on my system, can I use one of them exclusively for a nested instance of Proxmox (used for testing)?
Sure. Depending on your hardware, you could also try to PCI passthrough a NIC to the VM, so the VM gets direct physical access to the NIC without virtio virtualization.
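A rough sketch of NIC passthrough (the PCI address and VM ID are placeholders; IOMMU/VT-d must be enabled in the BIOS and on the kernel command line):

    # find the PCI address of the NIC
    lspci -nn | grep -i ethernet

    # pass that port through to VM 100
    qm set 100 -hostpci0 0000:03:00.0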
2) Is it better to use both together in a NIC-Teaming (bonding?) instead?
That depends. Does your router support LACP?
3) Can the nested Proxmox be in a Cluster with the Proxmox that is hosting it?
I don't know that.
I was messing around with the networking and other settings and screwed it up to the point of needing a reinstall :) enjoying the testing but that’s why I prefer a separate nested instance lol
If I mess around with the network interfaces, I try to change only one NIC at a time, so that I at least always have one working connection and can SSH into the system to manually fix "/etc/network/interfaces".
But if you have IPMI with KVM or direct physical access, that is not a problem at all. Then you can always access the console to revert things, even if the whole network is down.
 
3) Can the nested Proxmox be in a Cluster with the Proxmox that is hosting it?
In theory yes, just as one can shoot themselves in the foot, but I would not recommend it at all.
You get the classic bootstrapping issue with cluster quorum.

I did not read the whole thread closely, so I'm not sure what your exact use case is, but if it's just to get an additional vote for the cluster quorum without running an extra physical node, you could use a QDevice for that: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
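Following that chapter, the setup boils down to roughly this (the IP is a placeholder):

    # on the external machine that provides the extra vote
    apt install corosync-qnetd

    # on all cluster nodes
    apt install corosync-qdevice

    # then, from one cluster node
    pvecm qdevice setup 192.0.2.10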
 
That depends. Does your router support LACP?
It does not, so I guess that’s a no… I could still use the second NIC exclusively for a specific VM, right? I tried adding both to the virtual bridge but it wouldn’t work. If I try to create a second bridge, it says I can’t use the same subnet. Lots to learn I guess!

If I mess around with the network interfaces, I try to change only one NIC at a time, so that I at least always have one working connection and can SSH into the system to manually fix "/etc/network/interfaces".
But if you have IPMI with KVM or direct physical access, that is not a problem at all. Then you can always access the console to revert things, even if the whole network is down.
I think I created some kind of loop and the system wouldn’t boot at all. The router also went offline. That may have been a coincidence, but I couldn’t even SSH in afterwards, which was weird. It was easier to reinstall than to try to fix it, as it was a fairly new test setup.
 
It does not, so I guess that’s a no… I could still use the second NIC exclusively for a specific VM, right? I tried adding both to the virtual bridge but it wouldn’t work. If I try to create a second bridge, it says I can’t use the same subnet. Lots to learn I guess!
If you want to add both to a bridge, you need to create a bond first, so that both NICs team up to work as a single NIC. There are some types of bonds that work without LACP (active-backup/failover, for example), but without LACP you won't get load balancing and therefore no extra bandwidth.
If your mainboard/CPU allows PCI passthrough, you could pass one port of the NIC through to a specific VM. That way both NICs should be able to work in the same subnet.
In general it's not a good idea to give a host two IPs in the same subnet.
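For reference, an active-backup bond bridged into vmbr0 would look roughly like this in /etc/network/interfaces (NIC names and addresses are placeholders):

    iface eno1 inet manual
    iface eno2 inet manual

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0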
 
My PCI slots are already full, unfortunately.
 
