[SOLVED] PBS 3.3.1: Backup tasks hang after concluding uploads

Telencephalon
Hi! Thank you for PBS, it is a great piece of software. I've been testing it for a month on ~60TB-scale backup tasks, and it has been holding up really well. I'm running off the community repository -- apologies, I'll get a subscription as soon as I can carve out the budget. I'm currently on PBS 3.3.1 (the latest in the repo as of today; more version info below). Since one or two point releases ago, I've been experiencing an issue that is quite a blocker, so please allow me to report it:

My backup tasks start out just fine and upload all the chunks, but at the very end the tasks hang and never conclude or return. Subsequent tasks fail because they can't acquire a lock on the datastore. Even killing the associated proxmox-backup-client process doesn't make the task return. The only way I can get rid of those hanging tasks is by restarting proxmox-backup-proxy. Interestingly, the actual backup snapshot shows up just fine in the datastore. So the task hangs after having done all the work...!

My setup:
  • OS: Proxmox PVE. No VMs or containers running.
    Output of pveversion:
    pve-manager/8.3.1/fb48e850ef9dde27 (running kernel: 6.8.12-4-pve)
  • PBS on same host as PVE, installed in host OS (not a VM or CT).
    Output of proxmox-backup-manager versions:
    proxmox-backup-server 3.3.1-1 running version: 3.3.1
  • backup sources mounted as NFS on same host
  • proxmox-backup-client (3.3.1-1) runs on same host, as root cronjob
  • datastore is a ZFS dataset on same host
Here's debug-level log output from proxmox-backup-client (with timestamps injected using the unbuffer and ts commands). You'll see that it happily uploads some final chunks and then outputs the usual summary information. Then index.json gets uploaded, a few more frames are exchanged, and then it just stops and hangs forever. The task never returns. Note that previously, when a task concluded successfully, the log ended with lines like "Duration: XXX s. End Time: Mon Dec 8 XX:XX:XX 2024"; these are missing here.

Command: proxmox-backup-client backup vmd.pxar:/mnt/vmd --repository 'root@pam!root-XXX@localhost:pbs_v-storagex' --ns XXX_data --change-detection-mode=metadata

Code:
Dec 08 09:42:54 append chunks list len (512)
Dec 08 09:42:54 Connection: send frame=Headers { stream_id: StreamId(21351), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: send frame=Data { stream_id: StreamId(21351), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 Connection: received frame=Headers { stream_id: StreamId(21351), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: received frame=Data { stream_id: StreamId(21351), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 append chunks list len (512)
Dec 08 09:42:54 Connection: send frame=Headers { stream_id: StreamId(21353), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: send frame=Data { stream_id: StreamId(21353), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 Connection: received frame=Headers { stream_id: StreamId(21353), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: received frame=Data { stream_id: StreamId(21353), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 append chunks list len (293)
Dec 08 09:42:54 Connection: send frame=Headers { stream_id: StreamId(21355), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: send frame=Data { stream_id: StreamId(21355), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 Connection: received frame=Headers { stream_id: StreamId(21355), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: received frame=Data { stream_id: StreamId(21355), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 v-data2.ppxar.didx: reused 20.233 TiB from previous snapshot for unchanged files (5459800 chunks)
Dec 08 09:42:54 v-data2.ppxar.didx: had to backup 0 B of 20.233 TiB (compressed 0 B) in 786.20 s (average 0 B/s)
Dec 08 09:42:54 v-data2.ppxar.didx: backup was done incrementally, reused 20.233 TiB (100.0%)
Dec 08 09:42:54 v-data2.ppxar.didx: Reused 6 from 5459806 chunks.
Dec 08 09:42:54 v-data2.ppxar.didx: Average chunk size was 3.886 MiB.
Dec 08 09:42:54 v-data2.ppxar.didx: Average time per request: 143 microseconds.
Dec 08 09:42:54 Connection: send frame=Headers { stream_id: StreamId(21357), flags: (0x5: END_HEADERS | END_STREAM) } peer=Client
Dec 08 09:42:54 Connection: received frame=Headers { stream_id: StreamId(21357), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: received frame=Data { stream_id: StreamId(21357), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 Upload index.json to 'root@pam!root-v-storagex-backups@localhost:8007:pbs_v-storagex'
Dec 08 09:42:54 Connection: send frame=Headers { stream_id: StreamId(21359), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: send frame=Data { stream_id: StreamId(21359), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 Connection: received frame=Headers { stream_id: StreamId(21359), flags: (0x4: END_HEADERS) } peer=Client
Dec 08 09:42:54 Connection: received frame=Data { stream_id: StreamId(21359), flags: (0x1: END_STREAM) } peer=Client
Dec 08 09:42:54 Connection: send frame=Headers { stream_id: StreamId(21361), flags: (0x5: END_HEADERS | END_STREAM) } peer=Client

Examining the syslog on the host, I see that the last few chunks get added successfully, and then the log entries from this task just end. I see no other relevant error messages whatsoever. I'm not sure where the 27-second time offset between the client log and the syslog comes from, but this is the same task.
Code:
Dec 08 09:42:27 v-storage3 proxmox-backup-proxy[494488]: successfully added chunk 1260d83bbcf100b2f4f962f4d2bf938dd55490004be4450aa1a61169daa3ebc9 to dynamic index 2 (offset 15658790343380, size 2230064)
Dec 08 09:42:27 v-storage3 proxmox-backup-proxy[494488]: successfully added chunk 54544d336337a3efcfbf5a14c36602b7ef8b2da3afaa04e52c3102d752e059ae to dynamic index 2 (offset 15658792573444, size 3208618)
Dec 08 09:42:27 v-storage3 proxmox-backup-proxy[494488]: successfully added chunk 179ce43db5c3ff9365552f2309524a44c5985cda98e81d36af84bec085e13c62 to dynamic index 2 (offset 15658795782062, size 6690249)
Dec 08 09:42:27 v-storage3 proxmox-backup-proxy[494488]: successfully added chunk 8c9777e3418783d06a7633ceb6601e797ed97682691af806008fdc22a1c4eb5a to dynamic index 2 (offset 15658802472311, size 6600153)
Dec 08 09:42:27 v-storage3 proxmox-backup-proxy[494488]: successfully added chunk 27c651d7e1e78cd18da485c1643e95866b6a8c851938ecdfbb2e8e64ee220c5a to dynamic index 2 (offset 15658809072464, size 3233596)
Dec 08 09:42:27 v-storage3 proxmox-backup-proxy[494488]: successfully added chunk 86e39f5965096997b6f85f546921c2721bcc5df0d298d90339482c6f005c43bb to dynamic index 2 (offset 15658812306060, size 3237114)
Dec 08 09:50:53 v-storage3 smartd[7314]: Device: /dev/sda [SAT], CHECK POWER STATUS spins up disk (0x81 -> 0xff)
Dec 08 09:51:03 v-storage3 smartd[7314]: Device: /dev/sdc [SAT], CHECK POWER STATUS spins up disk (0x81 -> 0xff)
...
Dec 08 09:59:12 v-storage3 proxmox-backup-proxy[494488]: rrd journal successfully committed (25 files in 0.017 seconds)

The server-side task log contains the same entries as the syslog and ends in the same abrupt way. Note that the reader task that gets launched with every backup task in change-detection-mode=metadata returns just fine.

I'd be happy to provide further debug info or tests, please let me know what would be of interest. Any hints appreciated!
 
Hi,
datastore is a ZFS dataset on same host
out of interest, what storage layout are you using here?

Since one or two point releases ago, I've been experiencing an issue that is quite a blocker, so please allow me to report it:
Can you please check since which version you have been seeing the issue? `/var/log/apt/history.log` and the zipped rotated files in the same folder can help identify which package version got installed when.
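For example, something along these lines (assuming the default apt log locations) should show when which version of proxmox-backup-server got installed:
Code:
# current apt history
grep -B2 -A4 'proxmox-backup-server' /var/log/apt/history.log
# rotated, compressed history files
zgrep -B2 -A4 'proxmox-backup-server' /var/log/apt/history.log.*.gz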

My backup tasks start out just fine and upload all the chunks, but at the very end the tasks hang and never conclude or return
How long did you wait for the backup job to finish? Please note that there was this change introduced in proxmox-backup-server version 3.2.12-1: it checks that all known chunks for the backup snapshot are still present when finishing the backup, and since you have
reused 20.233 TiB from previous snapshot for unchanged files (5459800 chunks)
That might take some time...

Could you attach to the server-side process via strace -p <pid> and check whether you see a lot of stat calls, and also check the iowait and disk I/O metrics on the server while the backup job apparently hangs. This would help identify this as the underlying issue.
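Something like the following could be used to watch iowait and disk activity while the task appears to hang (the pool name is a placeholder):
Code:
# CPU utilization including %iowait plus per-device statistics, refreshed every 2 seconds
iostat -x 2
# per-vdev I/O statistics of the ZFS pool backing the datastore
zpool iostat -v <poolname> 2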
 
out of interest, what storage layout are you using here?
  • The ZFS pool consists of 45 x 24TB HDDs, arranged in five raidz1 vdevs (so, 9 disks per vdev) and combined for a total of 764 TB effective capacity.
  • I'm currently using ~100GB RAM for ZFS ARC cache
  • I don't have a ZFS special_device for fast metadata caching, but I am very seriously considering adding a flash disk for that :D
  • The machine is a 45drives Storinator S45 base, EPYC 8124P CPU, 128 GB RAM
  • For the record, here are the benchmark results:
    Code:
    # proxmox-backup-client benchmark --repository 'root@pam!root-v-storagex-backups@localhost:pbs_v-storagex'
    Uploaded 752 chunks in 5 seconds.
    Time per request: 6671 microseconds.
    TLS speed: 628.72 MB/s   
    SHA256 speed: 1474.07 MB/s   
    Compression speed: 441.45 MB/s   
    Decompress speed: 514.73 MB/s   
    AES256/GCM speed: 3257.70 MB/s   
    Verify speed: 395.41 MB/s   
    ┌───────────────────────────────────┬────────────────────┐
    │ Name                              │ Value              │
    ╞═══════════════════════════════════╪════════════════════╡
    │ TLS (maximal backup upload speed) │ 628.72 MB/s (51%)  │
    ├───────────────────────────────────┼────────────────────┤
    │ SHA256 checksum computation speed │ 1474.07 MB/s (73%) │
    ├───────────────────────────────────┼────────────────────┤
    │ ZStd level 1 compression speed    │ 441.45 MB/s (59%)  │
    ├───────────────────────────────────┼────────────────────┤
    │ ZStd level 1 decompression speed  │ 514.73 MB/s (43%)  │
    ├───────────────────────────────────┼────────────────────┤
    │ Chunk verification speed          │ 395.41 MB/s (52%)  │
    ├───────────────────────────────────┼────────────────────┤
    │ AES256 GCM encryption speed       │ 3257.70 MB/s (89%) │
    └───────────────────────────────────┴────────────────────┘
Can you please check since which version you have been seeing the issue? `/var/log/apt/history.log` and the zipped rotated files in the same folder can help identify which package version got installed when.
My task log goes back to 2024-12-08, and I definitely had this issue already on that day. I had updated from 3.3.0-2 to 3.3.1-1 the day before, on 2024-12-07. So I can't say with certainty whether the "hanging tasks" issue was already present in 3.3.0-2. As far as I remember -- but I'm not entirely sure -- it appeared with 3.3.1-1.

How long did you wait for the backup job to finish? Please note that there was this change introduced in proxmox-backup-server version 3.2.12-1: it checks that all known chunks for the backup snapshot are still present when finishing the backup, and since you have
That might take some time...
Oh, that's good to know. However, I currently have three tasks that have been running/hanging for 17h to 25h. The system is completely idle; I see practically 0% CPU usage on all the proxmox-backup-client and proxmox-backup-proxy processes and no disk I/O (see also below).

Could you attach to the server-side process via strace -p <pid> and check whether you see a lot of stat calls, and also check the iowait and disk I/O metrics on the server while the backup job apparently hangs. This would help identify this as the underlying issue.
I'm afraid strace only shows that the server is waiting for something:
Code:
# strace -p 1120443
strace: Process 1120443 attached
futex(0x77c185d74ff8, FUTEX_WAIT_PRIVATE, 1, NULL
... and no other output for at least 10 minutes.

With three hanging backup tasks, the system is almost completely idle: the PBS GUI shows CPU Usage 9, IO wait 9 (but all I/O is on the root disk; there is no I/O on the ZFS pool -- I think it is mostly the PBS GUI doing some logging, and the CPU is in a low-power state). Load average 3.1%. Absolutely no disk I/O on the ZFS pool.


I also just updated to 3.3.2-1 and restarted the proxmox-backup-proxy and proxmox-backup services. The issue has remained unchanged.


Some new observations:
Examining the task logs, I noticed that there is a second way in which the server-side task logs may end: Sometimes, the last logged API call is /finish (which however never concludes), whereas other tasks hang before that, right after the last chunk. Here's the tail of a task log that ended with a call to /finish:
Code:
# tail '/var/log/proxmox-backup/tasks/E6/UPID:v-storage3:00078B98:007484E6:00000002:6755ECB1:backup:pbs_v\x2dstoragex\x3ahost-v\x2dstorage3:root@pam!root-v-storagex-backups:'
2024-12-08T14:11:13-05:00: Checksum: 98634f623abf8b47ed2525636b35da5c391aa651ba7147de97ce4f06562e2073
2024-12-08T14:11:13-05:00: Size: 68947496254723
2024-12-08T14:11:13-05:00: Chunk count: 16735064
2024-12-08T14:11:13-05:00: Upload size: 10109684365 (0%)
2024-12-08T14:11:13-05:00: Duplicates: 16730748+1 (99%)
2024-12-08T14:11:13-05:00: Compression: 72%
2024-12-08T14:11:13-05:00: successfully closed dynamic index 2
2024-12-08T14:11:13-05:00: POST /blob
2024-12-08T14:11:13-05:00: add blob "/z1pool_45drives/proxmox_backups/ns/v-data3/host/v-storage3/2024-12-08T19:00:01Z/index.json.blob" (319 bytes, comp: 319)
2024-12-08T14:11:13-05:00: POST /finish

Note that, compared to a normal task conclusion, the four lines after POST /finish are missing; a successful task ends like this:
Code:
POST /finish
syncing filesystem
successfully finished backup
backup finished successfully
TASK OK

In the client log, the ending looks identical to before:
Code:
Dec 11 11:05:21 v-data2.ppxar.didx: reused 20.235 TiB from previous snapshot for unchanged files (5460568 chunks)
Dec 11 11:05:21 v-data2.ppxar.didx: had to backup 0 B of 20.236 TiB (compressed 0 B) in 736.28 s (average 0 B/s)
Dec 11 11:05:21 v-data2.ppxar.didx: backup was done incrementally, reused 20.236 TiB (100.0%)
Dec 11 11:05:21 v-data2.ppxar.didx: Reused 14 from 5460582 chunks.
Dec 11 11:05:21 v-data2.ppxar.didx: Average chunk size was 3.886 MiB.
Dec 11 11:05:21 v-data2.ppxar.didx: Average time per request: 134 microseconds.
Dec 11 11:05:21 Connection: send frame=Headers { stream_id: StreamId(21365), flags: (0x5: END_HEADERS | END_STREAM) } peer=Client
Dec 11 11:05:21 Connection: received frame=Headers { stream_id: StreamId(21365), flags: (0x4: END_HEADERS) } peer=Client
Dec 11 11:05:21 Connection: received frame=Data { stream_id: StreamId(21365), flags: (0x1: END_STREAM) } peer=Client
Dec 11 11:05:21 Upload index.json to 'root@pam!root-v-storagex-backups@localhost:8007:pbs_v-storagex'   
Dec 11 11:05:21 Connection: send frame=Headers { stream_id: StreamId(21367), flags: (0x4: END_HEADERS) } peer=Client
Dec 11 11:05:21 Connection: send frame=Data { stream_id: StreamId(21367), flags: (0x1: END_STREAM) } peer=Client
Dec 11 11:05:21 Connection: received frame=Headers { stream_id: StreamId(21367), flags: (0x4: END_HEADERS) } peer=Client
Dec 11 11:05:21 Connection: received frame=Data { stream_id: StreamId(21367), flags: (0x1: END_STREAM) } peer=Client
Dec 11 11:05:21 Connection: send frame=Headers { stream_id: StreamId(21369), flags: (0x5: END_HEADERS | END_STREAM) } peer=Client

So the client sends some frame, and then seems to wait forever for a reply, consistent with the strace.



Another observation: I previously reported that the reader tasks associated with the hanging backup tasks succeed, but actually they don't: They end with
Code:
TASK ERROR: connection error: not connected
Here's the tail of a reader task log:
Code:
2024-12-11T11:01:16-05:00: GET /chunk
2024-12-11T11:01:16-05:00: download chunk "/z1pool_45drives/proxmox_backups/.chunks/9368/93680f6d787931304600ae313019308d07a5aea86a3da6ce453bb4df52ccb3e4"
2024-12-11T11:01:25-05:00: GET /chunk
2024-12-11T11:01:25-05:00: download chunk "/z1pool_45drives/proxmox_backups/.chunks/6b46/6b461799bb52477fb17927eb4dea3f18ed7fc729ad65a22a80c3e0d176f10332"
2024-12-11T11:02:11-05:00: GET /chunk
2024-12-11T11:02:11-05:00: download chunk "/z1pool_45drives/proxmox_backups/.chunks/2c75/2c7559b930fc72e898128247bb6b67761f6e567f7c8dc8c420f338cd9960fc2b"
2024-12-11T11:02:26-05:00: GET /chunk
2024-12-11T11:02:26-05:00: download chunk "/z1pool_45drives/proxmox_backups/.chunks/18cf/18cf2b79d9577375d2d9bc0cf0047d67758d4cb62317bd98ab512226d578caa6"
2024-12-11T11:02:49-05:00: TASK ERROR: connection error: not connected

I have had this issue ever since I installed PBS about a month ago, and I didn't pay too much attention to it because in this thread, the conclusion was that this is just a cosmetic issue that's hard to fix, so I ignored it. The reader task ends with this error 2-3 minutes before the last output from the backup task, which might be an indication that these two issues are not related.

Well... Does this make any sense to you? Thanks.
 
I'm afraid strace only shows that the server is waiting for something:
Sorry, I forgot the -f flag; without it, strace will not trace threads and child processes. Please try to get a more useful trace via strace -fp $(pidof proxmox-backup-proxy).

Examining the task logs, I noticed that there is a second way in which the server-side task logs may end: Sometimes, the last logged API call is /finish (which however never concludes), whereas other tasks hang before that, right after the last chunk. Here's the tail of a task log that ended with a call to /finish:
The finish call is the one which leads to the stating of chunks, so that points towards the previously mentioned patch.

Another observation: I previously reported that the reader tasks associated with the hanging backup tasks succeed, but actually they don't: They end with
This is currently expected; patches to fix this are work in progress, see [0].

Well... Does this make any sense to you? Thanks.
Yes, so far everything points to the mentioned patch being at fault. It would be great if you could confirm this by generating the strace output; thanks for your efforts.

[0] https://lore.proxmox.com/pbs-devel/20241204083149.58754-1-c.ebner@proxmox.com/T/
 
Aaaaaah, yes, strace -f showed thousands of statx calls within a few seconds. I'm attaching a file with about 2 seconds' worth of output. So this is basically expected behavior. May I ask what you would recommend I do about this? Is there anything I can do to speed this up, or is there a way to disable this check? Would adding special_device flash disks for ZFS metadata storage help?
 

May I ask what you would recommend I do about this? Is there anything I can do to speed this up, or is there a way to disable this check? Would adding special_device flash disks for ZFS metadata storage help?
While adding a special device (you do want a mirror for redundancy) will help, also with respect to general operation of the datastore, there is no easy opt-out to disable the check. Given that this will most likely affect others with similar setups and scale, I sent a patch to revert the changes for now [0]. I will keep you posted on this issue. Thanks again for the report and your debugging efforts to pinpoint the issue.

[0] https://lore.proxmox.com/pbs-devel/20241212075204.36931-1-c.ebner@proxmox.com/T/
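For what it's worth, adding a mirrored special vdev to an existing pool looks roughly like the sketch below (device paths are placeholders). Keep in mind that only newly written metadata lands on the special vdev, and that a special vdev cannot be removed again from a pool containing raidz vdevs, so plan accordingly:
Code:
# sketch only -- replace the device paths with your actual SSDs/NVMe drives
zpool add z1pool_45drives special mirror \
    /dev/disk/by-id/nvme-EXAMPLE-1 /dev/disk/by-id/nvme-EXAMPLE-2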
 
Thank you very much for the kind support. I'll upgrade as soon as your patch is merged and released.

Let me add that I tweaked my ARC parameters, and it seems I was able to speed up the new chunk-presence check quite enormously. Primarily, I realized that by mistake I had set my max ARC size to 50GB rather than the intended 100GB, so I fixed that. I also changed the tunable zfs_arc_meta_balance from the default value of 100 to 5000; this sets the balance between using the ARC for data vs. metadata, and a higher value means more emphasis on metadata. I also increased the ARC min size to 80GB, to force it to really make use of the assigned RAM. Together, this increased the "Demand metadata hits" (as reported by arc_summary) from 94% to 99%. Shockingly, it now takes only ~1.5h to check ~5M chunks for presence, rather than the >17h I had reported previously, and the backup tasks return OK within that very reasonable time!
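For reference, these tunables can be set roughly like this (values in bytes, corresponding to the 100 GiB / 80 GiB mentioned above; treat them as an example for my setup, not a general recommendation):
Code:
# runtime values
echo 107374182400 > /sys/module/zfs/parameters/zfs_arc_max
echo 85899345920  > /sys/module/zfs/parameters/zfs_arc_min
echo 5000         > /sys/module/zfs/parameters/zfs_arc_meta_balance
# to persist across reboots, add the same values to /etc/modprobe.d/zfs.conf:
#   options zfs zfs_arc_max=107374182400 zfs_arc_min=85899345920 zfs_arc_meta_balance=5000
# and refresh the initramfs afterwards (update-initramfs -u)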

ZFS tuning is complicated, and I'm by no means an expert in this -- as I'm sure you are able to tell. So please take my numbers with a grain of salt; it would take some more testing to ensure that I didn't mess something up. But I guess the take-home message is very clear: fast metadata access is very, very important in ZFS, also for PBS workloads. Apologies for taking your time to come to this rather obvious realization. I'm not sure if this changes your conclusion that the new checking process is too costly, but I would guess that making it optional is still a good idea. Thanks!
 
Let me add that I tweaked my ARC parameters, and it seems I was able to speed up the new chunk-presence check quite enormously. Primarily, I realized that by mistake I had set my max ARC size to 50GB rather than the intended 100GB, so I fixed that. I also changed the tunable zfs_arc_meta_balance from the default value of 100 to 5000; this sets the balance between using the ARC for data vs. metadata, and a higher value means more emphasis on metadata. I also increased the ARC min size to 80GB, to force it to really make use of the assigned RAM. Together, this increased the "Demand metadata hits" (as reported by arc_summary) from 94% to 99%. Shockingly, it now takes only ~1.5h to check ~5M chunks for presence, rather than the >17h I had reported previously, and the backup tasks return OK within that very reasonable time!
This is actually a good workaround for the time being, and it should also give you speedups in other housekeeping tasks such as phase 2 of garbage collection. Skewing the ARC towards metadata caching will, however, lead to increased cache misses for data access, so whether this is a viable permanent setting pretty much depends on the storage access patterns.

ZFS tuning is complicated, and I'm by no means an expert in this -- as I'm sure you are able to tell. So please take my numbers with a grain of salt; it would take some more testing to ensure that I didn't mess something up. But I guess the take-home message is very clear: fast metadata access is very, very important in ZFS, also for PBS workloads
Yes; further, I guess your cache is hot now, so you will get a high cache hit rate. This would need some longer-term monitoring to give more conclusive results.
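If helpful, a quick way to keep an eye on this over time (assuming the standard zfsutils tools are installed):
Code:
# ARC hit/miss statistics, printed every 5 seconds
arcstat 5
# or check the cumulative counters
arc_summary | grep -A 3 'Demand metadata'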

Apologies for taking your time to come to this rather obvious realization. I'm not sure if this changes your conclusion that the new checking process is too costly, but I would guess that making it optional is still a good idea. Thanks!
No need to apologize, on the contrary: thank you for sharing your workaround! And no, this does not change the fact that stating the known chunks on backup finish does not scale well.
 
Hi @Chris ,

When do you think we can expect a patch to be available in the binary repository?

Might the problem only occur in some backups, depending on the chunks saved with the dirty bitmap?

--
Best regards,
Luca
 
When do you think we can expect a patch to be available in the binary repository?
Unfortunately I cannot give you an ETA for this, but I will keep you posted on progress in this thread.

Might the problem only occur in some backups, depending on the chunks saved with the dirty bitmap?
To clarify, this has nothing to do with the dirty bitmap per se. It will, however, be limited to incremental backups, for both VM and LXC backups, as only incremental backups index known chunks of the previous backup snapshots. As mentioned, the check is limited to known chunks. Further, this will only be problematic for large backup snapshots containing millions of files.
 
Thanks, I get it. Unfortunately my VMs are massive, with tens of terabytes of data and heavy I/O operations (SMTP, NAS, etc.). This leads to large indices and lengthy verification processes. I'm going to apply the patch.

Kind regards
--
Luca
 