Missing features in Proxmox

Clericer

New Member
Aug 26, 2023
Hi everyone,

I’m wondering if there are specific reasons why the following features are not implemented in Proxmox. Maybe there are considerations or limitations that I haven't thought of, and I'd appreciate any insights from the community or developers.
  1. SSHFS as a storage type under Datacenter -> Storage
    I currently use SSHFS manually via console to mount storage directories, and it works well. But I'm curious why SSHFS is not officially available as a storage backend in the Proxmox GUI. Is there a technical or security reason for this?
  2. Upload/Download of Backup Files
    It would be very convenient to upload and download backup files directly through the Proxmox web interface. Is this feature missing for a reason, or is it planned for future releases?
  3. Encryption of Backup Files (especially for Templates)
    Having built-in encryption for backup files, especially for sensitive data or VM templates, would add an important layer of security. Are there particular reasons why Proxmox doesn’t yet offer encryption for backups natively?
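For context, the manual mount I mean is something like this (the user, host, and paths are just example values):

```shell
# Mount a remote directory over SSH (sshfs must be installed on the client).
# backup@storage-host and both paths are example values.
sshfs backup@storage-host:/srv/backups /mnt/sshfs-backups \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# ... use /mnt/sshfs-backups like a local directory ...

# Unmount when done.
fusermount -u /mnt/sshfs-backups
```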
 
SSHFS is not a good file system for VMs. It uses SFTP on the backend, so every time a thread reads or writes a file it takes a lock; there is no concurrent/multithreaded access, and performance is extremely poor as a result. If you use it like that, you may end up with a corrupt image more often than not (e.g. SSHFS ignores fsync, which is extremely bad: all your reads and writes are cached in RAM until one side runs out of memory, crashes, or has a power failure, at which point your file becomes corrupted).

FUSE in general is not a good idea for critical file storage, for the same reasons: it is poorly supported and poorly developed. It's a hack to let someone quickly write a custom file driver to rescue an esoteric disk format, not something intended for critical enterprise work.

If I remember correctly, the developer behind SSHFS stopped working on it about a year ago, and it had been on life support for well over a decade before that (I remember the dev already struggling to keep up when the Mac transitioned from PowerPC to Intel).

You can download individual backup files from the PBS backend, and backup encryption is also supported: when you set up the backup server, set an encryption key. You can do this server side, client side, or even at the storage layer (e.g. ZFS encryption), or all of the above for the paranoid.
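For example, client-side encryption with proxmox-backup-client looks roughly like this (the repository user, host, and datastore name are placeholder examples):

```shell
# Create a client-side encryption key (stored locally, protect it well!).
proxmox-backup-client key create ~/.config/proxmox-backup/encryption-key.json

# Run an encrypted backup of a local directory to a PBS datastore.
# 'backup@pbs', 'pbs-host', and 'store1' are example values.
proxmox-backup-client backup etc.pxar:/etc \
    --repository backup@pbs@pbs-host:store1 \
    --keyfile ~/.config/proxmox-backup/encryption-key.json
```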
 
Thank you for the thorough explanation.

My question was more about using it as a backup filesystem for transferring data, like Samba or NFS, rather than for running active VMs. I now understand why that wouldn't be a good idea. I was surprised because nearly every tip for transferring data between machines recommends SSH, yet the suggestion to use Samba instead of SSH-based storage didn't quite make sense to me.

For the backup system, it seems I would need to set up a Proxmox Backup Server (either as a VM or separately) to get this functionality; a simple button for downloading and uploading files would be much more convenient. VMs and containers are just tar archives, after all.
 
SCP and SFTP are well-defined protocols, but they are single-stream and single-threaded, and a command shouldn't return until all the data has been written. So you can use SFTP or other streams to transfer data over SSH; SSHFS is a different beast.
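As a sketch, a plain single-stream transfer over SSH looks like this (the file name and host are made-up examples; /var/lib/vz/dump is the default PVE dump directory):

```shell
# Copy a container backup archive to another machine over SSH (single stream).
scp /var/lib/vz/dump/vzdump-lxc-101.tar.zst admin@other-host:/srv/backups/

# Or do the same with sftp in batch mode.
sftp admin@other-host <<'EOF'
put /var/lib/vz/dump/vzdump-lxc-101.tar.zst /srv/backups/
EOF
```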

A file system mount, by definition, allows multiple programs to read and write simultaneously. Because of SFTP's limitations, SSHFS only fakes being a file system: SFTP doesn't support file system semantics, and if SSHFS enforced proper file system semantics on top of it, it would be horrendously slow and practically unusable.

As for your other comment: the PVE GUI doesn't directly let you download a disk image, because it doesn't know when, or whether, the backup program has finished writing the file. PVE has some simple backup features but is not intended as a full-featured backup program, because you shouldn't store backups on the same PVE file system; backups need to be on a separate system.

The other issue is that in most cases technicians work on Proxmox from outside the datacenter, over a VPN from home. Prepping and downloading a 4 TB disk image, even one you only clicked accidentally in the GUI, would not be nice; and not all storage backends represent disk images as files (e.g. Ceph, NVMe-oF, etc.). If Proxmox ever added this as a feature, most enterprises would want it permissioned and auditable, and would turn it off anyway (I don't want my techs or customers downloading disk images or backups to their computers without anyone noticing).

Don't put PBS on the PVE you are backing up, or you get a dependency loop: if your PBS is inside PVE, how would you access your PBS when you need to restore your PVE?

For that reason, PBS (or Veeam or …) should always be on a separate system in a separate physical location (you can run them in another PVE cluster if you really want to; I personally don't like unnecessary complexity in my backup strategy).

You can still copy the files to your local system: with an SFTP manager, you can continuously or ad hoc download any backup stored on the PVE file system.
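For example, an ad hoc fetch with command-line sftp might look like this (the host and backup file name are examples):

```shell
# Pull a VM backup from the PVE node down to the local machine.
sftp root@pve-host:/var/lib/vz/dump/vzdump-qemu-100.vma.zst /tmp/
```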
 
Don't put PBS on the PVE you are backing up, or you get a dependency loop: if your PBS is inside PVE, how would you access your PBS when you need to restore your PVE?
For small deployments (e.g. in a homelab), there is a scenario where running PBS in a container might make sense. But you have to carefully think through the recovery scenario and consider what trade-offs you are making.

If you only have a single node, you can install PBS in a container (or VM) and write backups to a dedicated storage device, separate from what PVE uses. That's often easier than adding an entire dedicated machine for running PBS. In case of recovery, you'd reinstall a PVE base system without touching the dedicated storage device, then install a new PBS container, attach it to that storage device, and finally restore your backups. This will take a couple of hours of work, which makes it very unattractive in a production setting where extended downtime is costly, but it can be an acceptable prioritization of resources if you are short on redundant hardware.
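A rough sketch of that recovery, assuming the datastore sits on its own disk bind-mounted into the container (the container ID, template name, paths, storage name, and placeholders are all hypothetical):

```shell
# After reinstalling PVE: recreate a PBS container from a template
# (template name is a hypothetical example).
pct create 200 local:vztmpl/proxmox-backup-server.tar.zst \
    --hostname pbs --memory 2048 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp

# Bind-mount the untouched backup disk into the container.
pct set 200 --mp0 /mnt/backup-disk,mp=/datastore

# On the PVE side, re-attach the existing datastore as a storage backend.
pvesm add pbs pbs-store --server <pbs-ip> --datastore store1 \
    --username root@pam --password <password> --fingerprint <fingerprint>
```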

If you can afford at least one extra node, you could install PBS on both machines (again, possibly virtualized under PVE) and mirror backups between the two devices. This makes recovery much easier as long as only one node fails: reinstall PVE, add it to the cluster, then restore backups. In fact, before even doing that, you can restore backups onto the remaining intact machine and get your (single-node) cluster fully running again. All of that can probably be done in less than an hour of downtime if you are familiar with the required steps.
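Mirroring between two PBS instances can be set up with a remote and a pull sync job, roughly like this (the remote name, host, user, datastore names, and placeholders are examples):

```shell
# On the secondary PBS: register the primary instance as a remote.
proxmox-backup-manager remote create pbs-primary \
    --host 192.0.2.10 --userid sync@pbs \
    --password '<password>' --fingerprint '<fingerprint>'

# Create a sync job that pulls the primary's datastore into the local one.
proxmox-backup-manager sync-job create mirror-job \
    --remote pbs-primary --remote-store store1 --store store1 \
    --schedule hourly
```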

On the other hand, if you made the mistake of storing the backups in a storage pool owned by PVE, as opposed to a dedicated device, it could be extremely difficult or even impossible to restore your backups after PVE is taken out by a catastrophic failure.
 
True, you CAN do it, but I would not consider a backup in the same failure domain to be a true backup. You can rent a very cheap cloud machine, install PBS, and pump your data to it. Restores typically happen when you least want to do them; will you remember at that point what you did, and not accidentally blow away your backup? I've seen setups like that 'in production' with Commvault, where backups and production were in the same tape robot "to save money". Sure, they were replicated across two datacenters, so theoretically you had four copies, but then one site goes down and the tech rotates out one wrong tape out of all the tapes he was supposed to replace ...

I personally run my homelab on things people threw away, so for very little money you can get a decent amount of storage and a dedicated PBS (e.g. one of those 1L mini PCs with a 2 TB disk is probably enough).
 