Backup error over SMB/CIFS from PVE

oldgoodname

Hello guys,

I have the same problem as described in another post: https://forum.proxmox.com/threads/cannot-backup-to-nas-eperm-operation-not-permitted.121224/

Their solution was to use NFS instead of CIFS, but that led to problems like a kind of storage hang. The backup itself worked, but the storage information could not be retrieved during the backup, or only partly, and the monitoring system also ran into timeouts. It also does not seem to work with the option to squash all users, which is a security problem, as an executable with the SUID bit set could lead to root rights on the storage. I also cannot squash to the backup user, as the lowest UID I can set on my storage is 1000, but the backup user has 34.
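Just for illustration: on a plain Linux NFS server, the kind of export I would need would look roughly like the following, with all_squash mapping every client user to the backup user's UID/GID 34 via anonuid/anongid (path and subnet are placeholders); on the QNAP, however, I cannot select a UID below 1000 in the GUI:

Code:
/share/backup  192.168.1.0/24(rw,sync,all_squash,anonuid=34,anongid=34)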

But let's start from the beginning. This is my environment:

Storage: QNAP on vlan 1
Storage-Pool 1 (SSD): PVE cluster accesses it over NFS to store VMs
Storage-Pool 2 (HDD): Data-Pool where the share for PBS backup should be located and accessed over SMB/CIFS or NFS

PVE-Cluster with 2 nodes on vlan 1

PBS as VM on PVE-Cluster on vlan 2

Keep in mind that the VLANs are routed over a firewall.

I will mention some scenarios:

  1. Backup from PVE directly to the SMB/CIFS share (without PBS) worked, but often led to errors saying the device is busy (I think the QNAP). So the backup job fails and a VM stays in "locked" mode, which I always have to unlock manually. That problem occurs even when only one backup is running at a time (not backups from both cluster nodes at the same time). The best compression for me is GZIP, as the resulting file is nearly as small as with ZSTD, but it was the only compression mode where accessing the Nextcloud share was as fast as when no backup is running. With all other compression modes I had a performance loss.

  2. Connecting the PVE cluster to PBS, with a share that is locally mounted on the PBS as an NFS share (and then added as a datastore), led to the behaviour I described at the beginning.

  3. Connecting the PVE cluster to PBS where a local CIFS share is mounted led to the problem described in the thread posted above. The CIFS share is mounted on the PBS via fstab with file and dir mode 0777. So when I look at the permissions, all users should have read and write access. In the other thread, fabian from Proxmox said it is a lack of permissions, but they never figured out what to configure.

To sum up: does anyone know how to correctly use an SMB/CIFS share from a QNAP NAS mounted on the PBS, so that PVE can connect to it through PBS and create backups?

PVE uses a PBS user with Datastore-Backup permission to connect to the PBS datastore, and PBS uses a QNAP user with read-write access to connect to the SMB/CIFS share (fstab). Creating a file on the SMB/CIFS share from the PBS CLI works.
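As a quick check, creating a test file as the backup user from the PBS shell works, for example with something like this (/media/storage_backup is my mount point, the file name is just an example):

Code:
sudo -u backup touch /media/storage_backup/testfile.txt
ls -lah /media/storage_backup/testfile.txt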

Here are the relevant log file entries:

Code:
ERROR: VM 110 qmp command 'backup' failed - backup register image failed: command error: inserting chunk on store 'Backup_Repository' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - fchmod "/media/storage_backup/.chunks/bb9f/bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8.tmp_KHvxzy" failed: EPERM: Operation not permitted
INFO: aborting backup job

ERROR: Backup of VM 110 failed - VM 110 qmp command 'backup' failed - backup register image failed: command error: inserting chunk on store 'Backup_Repository' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - fchmod "/media/storage_backup/.chunks/bb9f/bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8.tmp_KHvxzy" failed: EPERM: Operation not permitted

So does anyone know how to fix this? Currently I am limited to SMB/CIFS or NFS, but I would prefer SMB/CIFS if NFS only works without squashing. If possible in this configuration, I would prefer PBS backups over PVE backups, as the handling is better, but I am also OK with PVE backups only, although, as I mentioned before, I get a device busy failure in 80% of the backups.

Thanks a lot, and let me know if you need further information.

best regards
 
Hi,
what is the output of ls -lah /media/storage_backup/.chunks/? Are all the folders owned by the correct user and group (should be backup:backup)? Do they have the correct access flags (should be drwxr-x---)?
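For reference, on a working datastore the chunk sub-directories (0000 through ffff) would look roughly like this, owned by backup:backup with mode drwxr-x--- (link counts, sizes and dates are omitted here, this is just illustrative):

Code:
$ ls -lah /media/storage_backup/.chunks/
drwxr-x--- ... backup backup ... 0000
drwxr-x--- ... backup backup ... 0001
drwxr-x--- ... backup backup ... 0002
...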
 
Hey Chris,

thanks for your answer!

All subfolders in the .chunks folder are owned by root (root:root) and have the following access flags: drwxrwxrwx

I defined in /etc/fstab that the share is mounted with dir_mode=0777 and file_mode=0777, so I think this is why the access flags are set as they are. The owner of the subfolders is root because, I think, the share is mounted as root and I also triggered the Add Datastore job in the GUI as the root user.

What would be the correct way to add the datastore with the backup user, so that the owner of the subfolders will be the backup user (backup:backup)?


Thanks and best regards
 
I would not explicitly set the ownership and access modes via the mount parameters in the fstab, but rather set up the backup user and group to have the correct permissions on the datastore. Please check that you have the advanced folder permissions turned on on the QNAP side; this seems to be a required option.

The permission denied error probably stems from missing ACL permissions.
 
Hey Chris,
so I tried the following:

  1. I activated the QNAP advanced permissions, but configured nothing special
  2. I created a new CIFS share and mounted it via fstab with the mount option "defaults", so no explicit file or dir mode. After creating the datastore, it was not accessible via the Proxmox Backup GUI. The access flags were drwxr-xr-x and the owner was still root:root. I was wondering about this behaviour, because all users other than root should still be able to read the datastore, or must a special user (backup?) have write access for it to be accessible in the GUI?
  3. I re-mounted the CIFS share with file and dir mode 0777 and it became accessible via the GUI, because all users now have rwx rights. But a backup from PVE via the PBS datastore is still not possible, same error.

To recap: I am able to back up VMs over CIFS if it is directly mounted in PVE and the PVE backup is used, but I am not able to back up to the same share when it is mounted via CIFS in the PBS and connected to the PVE server. Due to this behaviour, I think no ACL is missing, but who knows. Keep in mind, I have activated the advanced permissions, but not the Windows ACL support.

One last question: must the backup user of PBS be the owner of the datastore? The last thing I could do is execute chown backup:backup /media/storage_backup, but that doesn't feel right.

Is there a best practice for how to mount a CIFS share to use for Proxmox Backup Server backups? Any other clue I can try?

thanks a lot and best regards
 
The access flags were drwxr-xr-x and the owner was still root:root. I was wondering about this behaviour, because all users other than root should still be able to read the datastore, or must a special user (backup?) have write access for it to be accessible in the GUI?
You seem to mix up PBS authenticated users with the user the PBS API server process runs as. The latter needs to own the datastore files and be able to write to the datastore to create locks, add/remove chunks etc.

To recap: I am able to back up VMs over CIFS if it is directly mounted in PVE and the PVE backup is used, but I am not able to back up to the same share when it is mounted via CIFS in the PBS and connected to the PVE server
The backups written by Proxmox VE are written as the root user (with unprivileged LXC containers being an exception), so I assume that is why the backups can be written without error here.

One last question: must the backup user of PBS be the owner of the datastore? The last thing I could do is execute chown backup:backup /media/storage_backup, but that doesn't feel right.
Again, the user authenticating with the backup server should not be confused with the user the API server runs as, which should also own all relevant files and folders of the backup server, including all files in a datastore. This is the PAM user backup on your server, see e.g. the output of id backup. This user must be able to access and write files on the CIFS share.
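On a standard Debian-based installation, the output would typically look like this:

Code:
$ id backup
uid=34(backup) gid=34(backup) groups=34(backup)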

Further, on the QNAP side there might be access control lists in place which limit access to a share and its subfolders. You will have to set these up accordingly, so that the user backup on the PBS side can access and write files there. A quick search gave me https://www.qnap.com/en/how-to/faq/article/how-to-configure-sub-folders-acl-for-nfs-clients, which might help with setting up the correct permissions, although it is written for NFS.
 
Hi Chris,

let's make it a little bit easier (I hope). Starting with the users involved:

  • root = root user on the Linux OS (PBS)
  • backup = backup user on the Linux OS (PBS). You called it the PBS API user that runs the server process, which should be /usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
  • service.pbs = my service user to connect PVE with PBS. It is not a local Linux user, it is only visible in the PBS GUI
  • service.qnap = my service user used by PBS to connect to the CIFS share, so it is created locally on the QNAP NAS

Now to recap the steps I have taken so far:

  1. Mounted the QNAP share on PBS using service.qnap, which has r/w access to the share. As I used fstab to mount it, it was done as the root user, which also became the owner (root:root). I also used file and dir mode 0777, which led to the access rights rwxrwxrwx. I created two files on the share as the backup user (in the share root and within .chunks/0000) from the PBS CLI (sudo -u backup echo "Test" > Test.txt), which worked perfectly as well. So on the PBS side everything works, I think. Then I connected the QNAP share via PBS to PVE using service.pbs, which has full admin rights on the PBS datastore for testing purposes. Mounting worked, but creating a backup leads to the error described in the initial post.
  2. As per your second reply, I mounted the share on PBS without setting an explicit file and dir mode, but also without setting an explicit owner (backup), which does not work, as the backup user doesn't have enough rights on the share.
  3. As per your suggestion, I enabled the advanced permissions on the QNAP and read the article you sent me. I am wondering why this should be required, because the only difference is that you can change the permissions on sub-folders if you want, which is not possible with advanced folder permissions disabled. As I want all folders to have the same permissions, this makes no difference to me. So backups are still not working, same as in number 1.
  4. I tried some other things like setting the explicit uid and gid of the backup user (34) in the fstab file, but I could not mount it this way.
  5. The last thing I tried was to set the backup user as the explicit owner of all files and folders on the share, but this doesn't work. After the command executed "successfully", the owner of the share, files and folders is still root:root. I used the following command: chown -R backup:backup /media/sharename


One thing I also noticed is that when a backup fails, it creates a folder vm and a sub-folder with the VM ID. In the VM ID folder, there is a file called "owner". The only thing written in there is the username service.pbs, but maybe that is intended? And the folders for the backups have an interesting format with characters that may lead to problems. One folder is called 2024-01-27T18:07:41Z, but the GUI of the QNAP NAS cannot display the ":", so it shows some other characters. The encoding used on the NAS is Western Europe/Latin1.

So now I don't know what else I can do. Do you have any last suggestion?

Have you, at Proxmox, ever tested this scenario: PVE -> PBS -> CIFS share (with QNAP or another vendor)?
Is this scenario officially supported by Proxmox?

thanks a lot for your help and best regards
 
In general, setting up a PBS datastore backed by a CIFS/NFS share is not recommended because of the additional failure modes (NAS and network have to be online for restores) and the bad performance, especially for verify and garbage collection tasks.

The QNAP not being able to show the path of the snapshot folder should not be an issue, as long as the underlying filesystem handles this correctly for the share; also, POSIX ACLs and atime must be available. Maybe that is not the case for your setup?
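One quick way to check the effective mount options of the share on the PBS host (for example whether atime updates are disabled via noatime) would be something like:

Code:
findmnt -no OPTIONS /media/storage_backup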

The owner file inside the snapshot group folder indeed contains the owner of the backup group and is created together with the first backup, so that is correct.

Nevertheless, I quickly tested a setup with a datastore on a CIFS share in my local test environment to double-check your issue (please adapt to fit your requirements if taking this as a template). I have no QNAP at my disposal, but the approach should be the same. Maybe this helps to find the permission configuration issue:

  • Created a samba share on a Linux box, adding the user share with the following share config
    Code:
    [scratchpad_test] 
        path = /scratchpad/test
        read only = no
        writeable = yes
        browseable = yes
        valid users = share
        create mask = 0644
        directory mask = 0755
        force user = share
  • Set the Samba password via smbpasswd for the user share and set the ownership of the backing folder /scratchpad/test to the user share.
  • Mounted the share on the PBS host via mount -t cifs -o user=share,uid=backup,gid=backup //<cifs-share-host>/scratchpad_test /mnt/test, so that the mounted CIFS share is mapped to user and group backup.
  • Created a folder for the datastore via mkdir /mnt/test/datastore on the PBS host.
  • Added the datastore via the WebUI to PBS, then the backup storage to PVE, and performed a test backup.
With this everything worked as expected and I encountered no permission issues.
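If you want to make such a mount persistent, an /etc/fstab entry roughly equivalent to the mount command above could look like this (host, share and mount point are from my test setup; the credentials file path is just an example and keeps the password out of the fstab):

Code:
//<cifs-share-host>/scratchpad_test /mnt/test cifs credentials=/root/.smbcredentials,uid=backup,gid=backup 0 0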

I hope this helps to tackle the issue.

 
Hey Chris,

While trying to set up the share on a Linux server with the config you provided, to check whether that works in my setup as well, I realized that I had made a dumb mistake. When I first tried to mount the share with the uid and gid parameters, which didn't work, I had forgotten to change the permissions of the credentials file I used. And this was the key. It is not enough for the backup user to have write permission on the share, it must also be the owner of it.

In my setup, root was the owner, and all users (including the backup user) had write access as well, but that doesn't work. Now I mounted the share with the backup user as owner and that worked. It is not necessary that the advanced permissions are enabled on the QNAP, just set the backup user as owner. My fstab entry looks as follows:

Code:
//qnap-fqdn/sharename /media/mountpoint cifs credentials=/home/credentialfile,user,uid=backup,gid=backup 0 0

With this entry, the backup user is the owner of the share and its sub-folders and files, even if the mountpoint folder was originally created by root.
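For completeness: the credentials file referenced above is just a small text file that mount.cifs reads the login data from; it should only be readable by root (chmod 600). Mine looks roughly like this (values are placeholders):

Code:
username=service.qnap
password=<password>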

So thanks for your help. Now it works, more or less. I still have the same problem as when I use the PVE built-in backup mechanism. Every 10 to 20 VMs backed up, the job fails saying the device is busy:


Code:
ERROR: Backup of VM 106 failed - unable to open file '/etc/pve/nodes/pve02/qemu-server/106.conf.tmp.2620261' - Device or resource busy
INFO: Failed at 2024-01-29 18:34:06

This leads to a locked VM and a failed backup job. It must be some kind of timeout problem. Do you know if there is a setting to increase that in PBS?

thanks and best regards
 
I still have the same problem as when I use the PVE built-in backup mechanism. Every 10 to 20 VMs backed up, the job fails saying the device is busy:


Code:
ERROR: Backup of VM 106 failed - unable to open file '/etc/pve/nodes/pve02/qemu-server/106.conf.tmp.2620261' - Device or resource busy
INFO: Failed at 2024-01-29 18:34:06
This has nothing to do with the initial permission error on the CIFS share, but seems rather to be related to the Proxmox cluster filesystem not being available for some time. Please check your systemd journal on the Proxmox VE host from around the time of the error via journalctl --since <DATETIME> --until <DATETIME>. Is this node part of a cluster, and does it have a dedicated network for corosync? It might be that the backup traffic is increasing the latency of the corosync traffic to a point above tolerance, leading to a loss of quorum.
 
Hey Chris,

I checked the log files and found two things. It seems the PVE host still tries to connect to a CIFS share that does not exist or is no longer mounted on the host. I think that is a remnant of the testing. I will try to reboot the PVE host tomorrow to see if it disappears. But I don't think that it causes any problems at the moment.

Code:
kernel: CIFS: VFS: \\storage\Backup BAD_NETWORK_NAME: \\storage\Backup

On the other hand, I found some of the corosync issues you mentioned. To be honest, I use two Zotac mini PCs which only have a 1 Gbit network interface and no separate interface just for corosync. When I say two, you will think about a missing third quorum vote, and you are right. I want to set up a quorum VM on the QNAP so I don't face problems when a node crashes, but at the moment it is not set up. This is only a part of the log during the backup problem. I can post the complete log if wanted.

Code:
[TOTEM ] A processor failed, forming new configuration: token timed out (3000ms), waiting 3600ms for consensus.
[QUORUM] Sync members[1]: 2
[QUORUM] Sync left[1]: 1
[TOTEM ] A new membership (2.11bc3) was formed. Members left: 1
[TOTEM ] Failed to receive the leave message. failed: 1
[QUORUM] This node is within the non-primary component and will NOT provide any services.
[QUORUM] Members[1]: 2
[MAIN  ] Completed service synchronization, ready to provide service.
[dcdb] notice: members: 2/1019
[status] notice: node lost quorum
[status] notice: members: 2/1019
[dcdb] crit: received write while not quorate - trigger resync
[dcdb] crit: leaving CPG group
unable to write lrm status file - unable to open file '/etc/pve/nodes/pve02/lrm_status.tmp.1145' - Permission denied
[dcdb] notice: start cluster connection
[dcdb] crit: cpg_join failed: 14
[dcdb] crit: can't initialize service

Do you think it can be solved by adding a quorum device so the minimum of three votes is present, or does this make no difference as the network will still be "overloaded"? Is it possible to limit the backup speed so it no longer causes problems?

thanks and best regards
 
Okay, so as expected the loss of quorum causes the issues you are seeing. Since you cannot put corosync on its own dedicated network, I would recommend at least setting up a bandwidth limit for the backup traffic to the Proxmox Backup Server. You can do this by adding a traffic control rule via Configuration > Traffic Control in the PBS WebUI. Note however that corosync requires low latency, not high bandwidth, so this might not be enough.
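As an alternative to the WebUI, such a rule can also be created on the PBS command line, for example (rule name, network and rates are placeholders you will need to adapt):

Code:
proxmox-backup-manager traffic-control create limit-pve \
    --network <pve-subnet>/24 --rate-in 50MB --rate-out 50MB \
    --comment "limit backup traffic from the PVE nodes"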

In any case it is recommended to add an external voter for 2-node clusters; that will help with all maintenance tasks etc. See https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
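Roughly, this boils down to installing corosync-qnetd on the external machine and corosync-qdevice on both cluster nodes, then registering the external voter from one node (the IP is a placeholder; see the linked documentation for details and prerequisites):

Code:
# on the external voter (e.g. a small VM on the QNAP)
apt install corosync-qnetd

# on both Proxmox VE nodes
apt install corosync-qdevice

# on one Proxmox VE node, register the external voter
pvecm qdevice setup <QDEVICE-IP>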
 
Hi Chris,

I set a traffic control rule to limit the bandwidth to 10 MiB/s, but even this was not enough. The backups of the machines that worked were not really slower, so I think the bandwidth was never high and, as you said, the latency is the problem. I will try to install an external voter and hope that this will work, but it may take until next week.

I will write a comment if it improves the situation for the CIFS backup.

Thanks and best regards
 
Hi Chris,
I was really busy, so it took longer to set up. I installed a qdevice on the QNAP's Virtualization Station, so the quorum device is not located on one of the PVE hosts. However, this does not solve the problem. I still get the error message that the device is busy from time to time.

So I reduced the backup speed from 10 MiB per second to 1, but I did not have enough time to really check whether it is better, because, you guessed it, the backup took ages. But what I noticed was that my monitoring system alerted that the Linux load was very high (whoever knows how it is calculated) at the time.

So, as a conclusion: even though the PVE host installation files, the VM files and the backup files are located on three different storage pools and two different devices, the IO impact seems too high.

At the moment, I think there are only two options left:
  1. Change the compression mode from zstd to gzip, as this had the least impact when backing up from PVE to the QNAP directly. By default, you cannot choose this option when creating a new backup job that backs up to a PBS-connected storage (greyed out at zstd). In the documentation, I found something about pigz. Does this work when installed on the PBS? If so, what steps are needed so I can choose it in the backup job?
  2. Limit the IO priority etc. on the backup job. Any thoughts on this?

Thanks a lot in advance and best regards
 
I was really busy, so it took longer to set up. I installed a qdevice on the QNAP's Virtualization Station, so the quorum device is not located on one of the PVE hosts. However, this does not solve the problem. I still get the error message that the device is busy from time to time.
Do you still lose quorum on the node running the backup? Can you exclude that this is caused by some other failure mode which triggers especially during high network traffic? E.g. I once had a bad NIC which tended to reset itself under load. Can you maybe share a larger portion of the systemd journal from around the time of the backup, e.g. by dumping the contents via journalctl --since <DATETIME> --until <DATETIME> > journal.txt and attaching the resulting file? Also, please share the full backup task log.

Change the compression mode from zstd to gzip, as this had the least impact when backing up from PVE to the QNAP directly. By default, you cannot choose this option when creating a new backup job that backs up to a PBS-connected storage (greyed out at zstd). In the documentation, I found something about pigz. Does this work when installed on the PBS? If so, what steps are needed so I can choose it in the backup job?
No, for PBS-based backups you cannot change the compression; this is not configurable.
Limit the IO priority etc. on the backup job. Any thoughts on this?
Yes, worth a try.
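For example, you could set node-wide defaults in /etc/vzdump.conf on the Proxmox VE hosts. Note that ionice only has an effect when the BFQ I/O scheduler is in use, and bwlimit is given in KiB/s; the values below are just examples:

Code:
# /etc/vzdump.conf
ionice: 8
bwlimit: 51200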
 
Hey Chris,

I attached the logs you wanted: the journal log from the host whose VMs were backed up, and the backup job log from PVE.

In the journal log, you will find some errors about non-reachable storages/connections. These connections must be leftovers from testing. They aren't configured in the GUI anymore.

The PVE hosts have two network interfaces which are connected to two different Ubiquiti enterprise switches. These switches use RSTP to shut down one interface, because Ubiquiti still does not provide a stacking feature. I don't know if this is a problem during backup, but I haven't noticed any problems during normal operation.

During the next week, I will test changing the IO priority for the backup process and will let you know if it makes any difference.

thanks and best regards
 


I want to set up a quorum VM on the QNAP so I don't face problems when a node crashes, but at the moment it is not set up. This is only a part of the log during the backup problem.
Well, it seems that there IS a qdevice already set up, at least we can see [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms) in the logs. Also, it does seem like the link loses connection during the backup, so you definitely want to make sure that the network is stable:
Code:
Feb 08 12:04:59 pve02 corosync[1081]:   [KNET  ] link: host: 1 link: 0 is down
Feb 08 12:04:59 pve02 corosync[1081]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Feb 08 12:04:59 pve02 corosync[1081]:   [KNET  ] host: host: 1 has no active links
Feb 08 12:05:00 pve02 corosync[1081]:   [TOTEM ] Token has not been received in 2250 ms

In general, I would also suggest removing the outdated NFS and CIFS configurations, so they don't unnecessarily spam your logs.
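To find and clean up such leftover mounts on the Proxmox VE host, something like the following could be used (the mount point is a placeholder); also remove any corresponding /etc/fstab line or storage definition that is still around:

Code:
# list active CIFS/NFS mounts on the host
findmnt -t cifs,nfs,nfs4

# lazily unmount a stale mount point
umount -l /path/to/stale/mountpoint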
 
Hi Chris,
as I wrote in reply #14, I have set up a qdevice. So that should be working fine now. I have now restarted node 2 and the messages about the leftover connections are gone.

I will check the network again to see if Ubiquiti has implemented some stacking technology since I last looked, or what the best practice is when having two network ports connected to two different switches for redundancy.

I think we have checked a lot and the installation seems good and should work. Now I have to search for problems elsewhere. I think we are done here; I want to say thank you again, and keep up the nice work at Proxmox

thanks!
 
as I wrote in reply #14, I have set up a qdevice
Ah okay, it seems I overlooked that... I just skimmed through the thread again before my reply and did not have all the details in mind.

I think we have checked a lot and the installation seems good and should work. Now I have to search for problems elsewhere. I think we are done here; I want to say thank you again, and keep up the nice work at Proxmox
Okay, please keep us posted if you manage to find a cause and/or solution; that might help others running into the same issue. Thank you!
 
