Questions before migrating to Proxmox

.Shad.
Dec 11, 2022
Hey everybody,

I've been running Debian Buster on a dedicated computer for two years now, and I'm very satisfied with it. I upgraded my disks a few weeks ago, and while Borgbackup did an amazing job at restoring my important config files (I went for a fresh install on my new SSDs), I still had to spend a few hours reinstalling and reconfiguring the usual utilities: smartctl, msmtp, sshd, and so on...

Then I thought it could be interesting to give Proxmox a shot. Right now I mainly use Docker containers for whatever I need to run, to keep my distro as "stock" as possible. I'm starting to build fairly strong knowledge of container management, so it feels quite natural to me. And if I could run those containers in a VM, I wouldn't have to bother backing up all my volume mounts.
I would just need to ensure proper backups are being made. I could even think about HA as a mid-term goal.

So, I've been tinkering for some weeks with Proxmox VE 7.3 on a spare PC.

I'm pretty excited about what I've seen so far. The main benefits for me:

- VM management, more user-friendly than virsh or even virt-manager (neither of them is bad, I just prefer the Proxmox management interface)
- Ability to join nodes
- FIREWALLS for each entity: really a great feature imo
- Disks and disk health management

Yet, there are some caveats that prevent me from migrating to Proxmox atm:

- GPU passthrough: I'm running an Emby server for my family, and I do need HW transcoding, since some clients are quite old and can't decode HEVC natively. I've read a lot of topics on this forum, on Reddit, and so on. At first glance, the required tweaks don't seem to persist across major version upgrades. It also seems easier to pass through a dedicated GPU than an Intel iGPU, yet I'd rather not use a dedicated GPU, because the efficiency-to-power ratio is much better with an iGPU. On top of that, for my use case I'd need a Quadro T400 or a GTX 1660, and I only have a GT770.

- Restore host configuration: My second point concerns the restorability of the whole system. Maybe I'm blind, but I don't see any way to save the datacenter configuration, like I'd do on Synology DSM. What about firewall rules? Storages? Users and permissions?
I came across this topic, which suggests there is no official way to restore Proxmox aside from doing what I already do on Debian: backing up the crucial files. Kind of lackluster, in my opinion.

At the moment, if I could find a reliable solution for the iGPU/GPU passthrough, I could definitely switch to Proxmox. If you have any useful advice, I'd gladly hear it.

Thanks in advance.
 
- Restore host configuration: My second point concerns the restorability of the whole system. Maybe I'm blind, but I don't see any way to save the datacenter configuration, like I'd do on Synology DSM. What about firewall rules? Storages? Users and permissions?
I came across this topic, which suggests there is no official way to restore Proxmox aside from doing what I already do on Debian: backing up the crucial files. Kind of lackluster, in my opinion.
Firewall rules are easily restorable. Just back up the /etc/pve/firewall folder. The firewall rules of guests are stored together with the backups of each guest, but you still need the security groups, IP sets, aliases and the datacenter/node rules (the cluster-wide ones live in /etc/pve/firewall/cluster.fw).
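For reference, a minimal sketch of what backing up that folder could look like (the archive path and target host below are made-up examples):

    # Archive the cluster firewall configuration
    tar czf /root/pve-firewall-$(date +%F).tar.gz /etc/pve/firewall/
    # Copy the archive off the host (hypothetical destination)
    scp /root/pve-firewall-*.tar.gz backupuser@backuphost:/backups/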
Storage configs are stored in /etc/pve/storage.cfg (but no passwords/keys).
But host backups are on the PBS roadmap. Meanwhile, you can back up the configs (the /etc folder) or use proxmox-backup-client, dd or Clonezilla to do a block-level backup of the whole system disk, for an easy restore of a complete, out-of-the-box working PVE installation.
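As a rough sketch of both approaches (the repository, datastore, mount point and disk names are placeholders, not a definitive recipe):

    # File-level backup of the host configs to a PBS datastore
    proxmox-backup-client backup etc.pxar:/etc --repository root@pam@pbs.example.lan:mydatastore

    # Or a block-level image of the whole system disk with dd
    # (ideally from a live system, so the disk isn't being written to)
    dd if=/dev/sda of=/mnt/usb/pve-system.img bs=4M status=progress conv=fsync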
- GPU passthrough: I'm running an Emby server for my family, and I do need HW transcoding, since some clients are quite old and can't decode HEVC natively. I've read a lot of topics on this forum, on Reddit, and so on. At first glance, the required tweaks don't seem to persist across major version upgrades. It also seems easier to pass through a dedicated GPU than an Intel iGPU, yet I'd rather not use a dedicated GPU, because the efficiency-to-power ratio is much better with an iGPU. On top of that, for my use case I'd need a Quadro T400 or a GTX 1660, and I only have a GT770.
You will have to do major upgrades manually anyway. For those you will have to read the changelog and the upgrade instructions, and they will point out any known issues with PCI passthrough and tell you how to fix them.
But yes, a dedicated GPU in a PCIe slot that is directly connected to your CPU (not the chipset) would be recommended.
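To give an idea, the "tweaks" usually mentioned for passthrough on a 7.x host look roughly like this (Intel example, values are illustrative; this is exactly the kind of manual config to re-check after a major upgrade):

    # /etc/default/grub -- enable the IOMMU
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules -- load the vfio modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    # apply and reboot
    update-grub
    update-initramfs -u -k all
    reboot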


And you don't even have to set up everything again. You can turn an existing Debian installation into a PVE host: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
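Condensed, the steps on that wiki page look roughly like this (Bullseye repository and key shown here; check the page itself for the current instructions):

    # Add the PVE no-subscription repository and its release key
    echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install-repo.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
    # Update and install Proxmox VE on top of the existing Debian
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi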
 
@Dunuin Thanks for your prompt answer. Good news about the upcoming PBS feature. I'll definitely look at the configuration files you mention.
Regarding the dedicated GPU, I'll probably go for a low-power NUC with a good Intel UHD or Iris chipset. Then I won't have to bother with GPU/iGPU passthrough, because that would probably be my sole use case for it.
 
But then you shouldn't set your VM to autostart. Keep in mind that you will need the video output from time to time, and your PVE host won't have any display output while the iGPU is passed through. Some examples:
1.) You change some hardware, say your M.2 SSD died: your NIC names will change, networking won't work anymore, the webUI and SSH are down, and you are forced to log in with keyboard and display to edit the network config file to fix it (see the sketch below).
2.) You need to do a PVE major version upgrade. Here you shouldn't use the webUI or SSH and will need the physical console again, with keyboard and display, to upgrade your PVE.
3.) Something goes wrong, the webUI and SSH aren't working and you need the physical console to see what's going on... or your PVE host is stuck at the initramfs step and can't boot.
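For example 1, the fix usually boils down to pointing the bridge at the new NIC name in /etc/network/interfaces (interface names and addresses below are made up):

    # find the new interface name
    ip link

    # /etc/network/interfaces -- bridge-ports changed from enp3s0 to enp4s0
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.5/24
        gateway 192.168.1.1
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0

    # apply the new config
    ifreload -a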
 
@Dunuin Sorry, I had not seen your answer. The NUC will run:
  • Debian on bare metal, with Docker and Emby
  • (Optional) Proxmox on top of Debian, for when I have time to tinker with HA (once I can use Proxmox efficiently, so not a short-term goal)
Besides Emby, I don't need to pass through the GPU.
Unless you tell me that mounting my GPU into Emby via Docker on Debian can lead to problems with Proxmox (I can't see why it would).
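For what it's worth, handing the iGPU to the Emby container on plain Debian is just a device mapping, something along these lines (image name as published on Docker Hub, paths are examples):

    # --device exposes the Intel iGPU (/dev/dri) for VAAPI/QSV transcoding
    docker run -d --name emby \
      --device /dev/dri:/dev/dri \
      -v /srv/emby/config:/config \
      -v /srv/media:/mnt/media \
      -p 8096:8096 \
      emby/embyserver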

Thanks for your time.