PVE Logging

Gabgobie

Good evening everybody,

First of all, I would like to thank you for this awesome project.

Since I have just experienced a boot drive failure caused by the write load, I have been looking into ways to reduce the load on my drives. While doing so, I've seen that a lot of the write load is produced by the logs. Therefore I am looking into disabling logging to disk and using a dedicated log aggregation VM with its own drive instead. Maybe we can turn this thread into a tutorial in the long term; for now, my intention is to document the path I am taking.

So far, there are some unknowns for me here. By the way, I am on PVE 8.1.10.

- How do I disable all logging to disk in PVE?
Taking a look at the wiki, the logging service should be rsyslogd, but there is only one file in /etc/rsyslog.d/ (postfix.conf), and everything points me towards it actually being journald. After setting Storage=none in /etc/systemd/journald.conf, no logs show up in the GUI, which is to be expected, and when calling journalctl I get a message telling me that there are no logs available.
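For reference, this is the relevant excerpt of my /etc/systemd/journald.conf after the change (the option is documented in man journald.conf); restarting journald with systemctl restart systemd-journald applies it without a reboot.

Code:
[Journal]
Storage=none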

Bash:
root@pve1:~# journalctl
No journal files were found.
-- No entries --

So far so good.

But taking a look at the ls /var/log output, there are still files being generated. If I delete the folder, I lose access to the WebUI, as pveproxy fails when its access.log file doesn't exist, so it seems to circumvent journald. After restoring the /var/log/pveproxy directory and the access.log file (including permissions), everything works fine again.

Bash:
root@pve1:~# ls /var/log
alternatives.log  btmp  chrony  ifupdown2  lastlog  private  pve  pve-firewall.log  pveproxy  README  wtmp
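For anyone who runs into the same pveproxy failure, restoring the directory and file came down to something like this (www-data as owner matches the user pveproxy runs as on my system; verify on yours):

Bash:
mkdir -p /var/log/pveproxy
touch /var/log/pveproxy/access.log
chown -R www-data:www-data /var/log/pveproxy
systemctl restart pveproxy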

Am I right in suspecting that these services are circumventing journald, and how would I get them to stop logging to disk and use journald instead, so I can send those logs to the aggregation system?

- What options are there for me to use for log aggregation?
From the PVE GUI I can see that InfluxDB and Graphite are supported for metrics out of the box, but does that include logs? Will they still work when journald's Storage setting is set to none, or would I have to configure a server to forward logs to in journald.conf?

Best,
Gab
 
Further reading has brought me to this thread, which has an answer to the pveproxy log issue, although it hasn't gotten much attention.

It suggests symlinking /var/log/pveproxy/access.log to /dev/null, which does stop it from writing to disk, although I can't say I am a big fan of the solution, as it would also prevent me from getting these logs into my log aggregation system. I would much prefer pveproxy logging to journald so there is a unified way of getting logs.
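For completeness, the symlink suggestion boils down to this (pveproxy may need a restart afterwards):

Bash:
rm /var/log/pveproxy/access.log
ln -s /dev/null /var/log/pveproxy/access.log
systemctl restart pveproxy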

One could also move it to RAM with the following /etc/fstab entry: tmpfs /var/log/pveproxy/ tmpfs defaults,uid=33,gid=33,size=1024m 0 0; however, this would still defeat the purpose of a unified logging system.
I also expect the above-mentioned method to cause issues, as pveproxy's log can grow quite large and I don't know how it handles running out of space. Going by the thread I linked in the first line, I would expect the space to run out within the first 24h.

Are there any good reasons I am missing for not having pveproxy log to the standard logging service? The same goes for any other services that still appear in /var/log.
 
I'm back.


DISCLAIMERS:
This is my first time looking at Perl code, and everything I think I know about Perl comes from web searches I did to understand the behavior of the code I was inspecting.
I haven't had the time to test what happens when the log files run out of space. It could potentially brick parts of the system in unforeseen ways until the next reboot, which would clear the dedicated log storage.


For pveproxy, taking a look at the source, the log file is hard-coded in line 106 to be written to /var/log/pveproxy/access.log, and this file is automatically rotated. It is used for both pveproxy and spiceproxy.

It seems pveproxy already uses the syslog function from the PVE::SafeSyslog package for some of its log messages. I wasn't able to find the lines that actually write to the access.log file in a reasonable amount of time.

Thinking about why the access logging bypasses the syslog, the only good reason I can come up with is that there is no log facility intended for this use.

Moving on to the part where I try to solve my issue:

1. move the access log into a tmpfs and adjust the log rotation to avoid excessive memory usage
  • add tmpfs /var/log/pveproxy/ tmpfs defaults,uid=33,gid=33,size=1024m 0 0 to your /etc/fstab.
    • The suggested line would allow for 1G of memory to be allocated to your access.log. Adjust the size as needed.
    • uid=33 and gid=33 are www-data. Without them pveproxy won't be able to access the log file.
  • adjust /etc/logrotate.d/pve to your needs. In my case: no longer keep old versions of the access log, to conserve memory (a sketch of the resulting stanza follows this list)
    • rotate
      • rotate 7: the default setting
      • rotate 0: don't keep rotated versions (my choice, although I'd say it would be reasonable to use 1 instead)
      • rotate -1: keep all rotated versions
    • size <size> (mutually exclusive with time-based values -> replace daily or whichever other value is configured)
      • size 100: 100 bytes
      • size 100k: 100 kilobytes
      • size 100M: 100 megabytes
      • size 100G: 100 gigabytes
    • As an alternative to size, you could go for a combination of a frequency and maxsize <size> (my preference)
2. move the rest of the log files to memory
  • add tmpfs /var/log/ tmpfs defaults,uid=0,gid=0,size=1024m 0 0 to your /etc/fstab.
    • The suggested line would allow for 1G of memory to be allocated for all of your other logfiles.
  • look through /etc/logrotate.d/ and adjust the files to your needs
    • If I am not mistaken, /etc/logrotate.conf will set the default values
    • Take the possible settings from the manpage
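As announced above, here is a sketch of how my adjusted /etc/logrotate.d/pve stanza could look. Treat it as an illustration rather than the shipped file: check your system's copy first and keep anything else it contains (e.g. a postrotate script).

Code:
/var/log/pveproxy/access.log {
    daily
    rotate 0
    maxsize 100M
    missingok
    notifempty
}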
At this point, no more log files should touch your disk. So far so good. The next goal is to reduce the memory impact.

In the previous steps we already went through the log rotation settings. You can increase the frequency at which logrotate checks whether the size you specified has been reached by moving the file (mv /etc/cron.daily/logrotate /etc/cron.hourly/logrotate), or by removing it (rm /etc/cron.daily/logrotate) and adding your own crontab entry with a custom interval. Be aware, though, that the script you are giving up includes error handling and informs you of any abnormal behavior; it is up to you to determine whether you need that.
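A custom interval could look like the entry below; the 15-minute value is only an example, and note that this plain entry lacks the error handling just mentioned.

Code:
# /etc/crontab: run logrotate every 15 minutes instead of daily
*/15 * * * * root /usr/sbin/logrotate /etc/logrotate.conf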

Logging can be further optimized, but at this point we are getting into the details, so I will try to keep this short.
The following will provide us with a list of applications to take a look at with regard to logging.
Code:
root@pve:~# ls /var/log
alternatives.log  btmp  chrony  ifupdown2  lastlog  private  pve  pve-firewall.log  pveproxy  README  wtmp
  • alternatives.log: Not worth the effort to me
  • btmp: From my understanding, this logs failed login attempts
  • chrony/: For me this is currently an empty directory. Not worth going after
  • ifupdown2/: I'm pretty sure this is needed for Proxmox to work properly. Alternative versions of your interface settings are saved here. I'd recommend not touching this.
  • lastlog: I can only assume the contents from the name (presumably each user's most recent login)
  • private/: In my case an empty directory
  • pve/: There's only the tasks folder in here. It's needed for the UI to display properly.
  • pve-firewall.log: Logging behavior can be influenced with the appropriate values in
    • /etc/pve/firewall/cluster.fw
      • log_ratelimit
    • /etc/pve/nodes/<nodename>/host.fw
      • log_level_in
      • log_level_out
      • log_nf_conntrack
    • /etc/pve/firewall/<VMID>.fw
      • log_level_in
      • log_level_out
  • wtmp: Going by the name, presumably login/logout records
I have yet to configure CEPH, but from what I read, it logs to /var/log/ceph by default. It can, however, be configured to log to syslog instead.
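A minimal sketch of what that could look like in /etc/ceph/ceph.conf, going by the Ceph documentation (untested on my side):

Code:
[global]
log_to_syslog = true
err_to_syslog = true
mon_cluster_log_to_syslog = true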

This was quite a journey and I'd be happy to read any additions to this post.

I think at this point this would be ripe to add the Tutorial Prefix. Below I'll link other sources I found for reducing the write load on your main PVE disks.


Best,
Gab
 
Why not just use proper enterprise disks in a mirror instead of reinventing the logging wheel?
Why not just monitor the wearout of the drives and replace them when needed?

Most of the changes you are doing are hard to maintain. Any update may overwrite your /etc/logrotate.d changes, the logging system used by a PVE package might change, and some new package may again log to /var/log...

IMHO, the problem isn't that PVE logs too much but that you need proper hardware for PVE.
 
TL;DR:
Why not just use proper enterprise disks in a mirror instead of reinventing the logging wheel?
Expensive and not feasible for home use.

Why not just monitor the wearout of the drives and replace them when needed?
This should be done anyway, but why cause more wear than necessary?

Most of the changes you are doing are hard to maintain.
Agreed. This is an issue I didn't think about.

the problem isn't that PVE logs too much
Agreed. I seem to have been unclear about the issue I take with the current approach.

you need proper hardware for PVE
Disagreed. PVE can run on just about anything; log generation shouldn't be the reason you need to spend more on hardware.



The long version:
I have to disagree on some of that.

I will always think of unnecessary wear as an issue. Mirroring the drives (which I do) doubles the wear.

Enterprise hardware is expensive. Why pay the premium when it is very much not necessary? In this case, the only thing necessitating expensive hardware is durability against logging.

I agree that the changes can be hard to maintain. This is an aspect I didn't think about. Although the only time this would become an issue is if a new application were to generate so much log data that the logs no longer fit within the bounds I set for the tmpfs that replaced /var/log.

the problem isn't that PVE logs too much
I may have been unclear if you think my issue is the amount of logging from PVE, and I can see how it came across that way. The more I went over this, the more I realised that I won't be able to have everything log to the syslog instead of to files, which is what I actually take issue with. Over the course of writing this post, my attention shifted towards protecting my disks rather than trying to force everything into the syslog one way or another. I appreciate extensive logging. I also appreciate a unified approach. The optimal case for me would be for everything to log to syslog, which in turn pushes the logs to Grafana Loki or any other log aggregation server.

I appreciate you sharing your views on the matter! I believe nothing can be accomplished/improved/learned without discussion.

Best,
Gab
 
Did you ever find a solution to reduce the logs? I found that most of the writing to the disk comes from the Ceph monitors (ceph-mon) rather than journald. Now I am trying to find a way to send them to memory, disable them, or move them to RAM:
  • ceph-mon -f --cluster ceph --id N3 --setuser ceph --setgroup ceph [rocksdb:low]
  • ceph-mon -f --cluster ceph --id N3 --setuser ceph --setgroup ceph [ms_dispatch]
I see around 270-300KB/s written to the boot disk, mostly from ceph-mon. That's around 24GB/day and roughly 10TB/year just at idle; the additional VM/CT/OS workload comes on top when not idle. Any idea how to address the Ceph logging? Thank you
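In case someone wants to measure this on their own node, per-process writes can be watched with standard tools, for example:

Bash:
apt install iotop sysstat   # if not already present
iotop -ao                   # accumulated I/O, only processes that did I/O
pidstat -d 1                # per-process disk I/O, sampled every second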
 
Both yes and no.

I never deployed my "solution" to prod because I never tested its stability. The reason for me opening this thread was that my boot drive died after 96TB written in the development setup, so even that machine has mirrored drives now.

My solution boiled down to adjusting the journald settings and mounting a ramdisk. The following are my notes from back when I did this, but it's been quite a while and I haven't had the time to really look into it since, so be careful.

log_to_memory.txt
```
# Protect Boot SSDs by keeping the logs in memory

1. Open journald config
nano /etc/systemd/journald.conf

2. Set Storage=none
options are found at: man journald.conf

3. Optional: redirect logs to log aggregation server, for example grafana loki

4. Mount tmpfs for /var/log so the applications that force logging to file still go to memory
root@pve1:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0

# Log to memory. Disable to have persistent logs. Reduce size later.
tmpfs /var/log/ tmpfs defaults,noexec,uid=0,gid=0,size=1024m 0 0
tmpfs /var/log/pveproxy/ tmpfs defaults,noexec,uid=33,gid=33,size=1024m 0 0

5. Adjust settings for /etc/logrotate.conf and /etc/logrotate.d/*
https://forum.proxmox.com/threads/pve-logging.144215/
```

logrotate.conf
```
# see "man logrotate" for details
# /etc/logrotate.conf
# global options do not affect preceding include directives
# rotate log files daily
daily
# keep 1 previous version's worth of backlogs
rotate 1
# second rotation condition
maxsize 5M
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
#dateext
# prevent log files from being compressed
nocompress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may also be configured here.
# Don't try to mail old logs
nomail
# Don't throw an error if there is no logfile
missingok
# Skip if empty
notifempty
```

If you want to go this route, make sure to watch the memory usage so you can adjust the space reserved for the memdisk and/or the rotation settings. I'd love to hear back from you so I know whether this is safe to deploy. Maybe I'll pick the topic back up myself if I find the time some day.
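Checking the fill level is quick, for example:

Bash:
df -h /var/log /var/log/pveproxy   # tmpfs size and usage
du -shc /var/log/*                 # what is taking up the space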

Regarding your specific question about Ceph logging: that should be covered either by adjusting journald or by making the directory the logs go to a memdisk. But even though I plan on deploying Ceph, I haven't gotten to that part yet, so I don't know what logging mechanism it uses.

I would also still love to see Proxmox officially support sending the logs to a log aggregation system without writing anything to disk on its own.

Best,
Gab

P.S.:
The attached `logrotate.txt` file is actually `logrotate.conf`. `.conf` is not part of the allowed extensions so I had to rename it.

P.P.S.:
I think that on a default setup, the only thing logging directly to the filesystem instead of to journald is the access log for the WebUI/API, but don't take my word for it; verify for yourself. The access log part is covered by the second tmpfs mount, which provides the /var/log/pveproxy directory: if that directory doesn't exist, pveproxy will fail to start and the WebUI (and API?) will be unavailable. If that happens to you, SSH is your best friend, so make sure to set up SSH before making any other changes and verify that it works as intended.


Edit:
My apologies, I lied. I just had a look at my dev machine: it's been online for 102 days with memory logging enabled. One thing to mention is that I set the following in /etc/systemd/journald.conf:
Code:
[Journal]
Storage=volatile
That way I keep the ability to view the logs in the UI.
 

Thank you very much, I am going to implement Ceph logs in RAM. How do you determine the uid and gid values? I am planning to add this line to /etc/fstab; is it correct? I have a lot of RAM, so I doubled the amount. Thoughts?

tmpfs /var/log/ceph/ tmpfs defaults,noexec,uid=0,gid=0,size=2048m 0 0

Do I have to reboot afterwards for the changes to take effect?
 
UID and GID 0 are the root account.
33 is the www-data user (at least on my system)

To find them, I just read the passwd file: sudo cat /etc/passwd

Ceph isn't installed on the machine I just ran the command on, but there is a ceph user:
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
There may be other ceph users for the monitors on your system.
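Instead of reading the whole file, you can also query single accounts:

Bash:
id www-data          # e.g. uid=33(www-data) gid=33(www-data) ...
getent passwd ceph   # the passwd entry for the ceph user only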

The size you want to allocate depends on the log retention you want to have. You calculated 24GB/day, i.e. 1GB/hour.
You should also keep in mind that all logs are lost when the machine shuts down, so you may want to think of some mechanism to periodically flush logs to disk. That /should/ still cause less wear than logging directly to disk, because the write amplification is reduced.
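Such a flush could be as simple as a cron job rsyncing the tmpfs to disk; the target path below is just a placeholder and the whole thing is an untested sketch:

Code:
# /etc/crontab: hypothetical hourly flush of the in-memory logs to disk
0 * * * * root rsync -a /var/log/ /var/log-persistent/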

Another option is to have dedicated log drives. If your machine has PCIe lanes available, you could consider using an Optane drive for all logs. They have high write endurance and are affordable. To me, the most expensive part about them is the PCIe lanes they consume for very little storage space. (Talking about the small 32GB M.2 drives.)
 
Thanks, I ended up adding tmpfs /var/log/ceph/ tmpfs defaults 0 0 and it seems to work fine.
 
Hey,

I just revisited the topic and remembered that CEPH has configuration values that allow it to log to syslog. That way you won't need another tmpfs if you set Storage to volatile in journald.conf.

This is the relevant part of the Ceph documentation:
https://docs.ceph.com/en/latest/rados/troubleshooting/log-and-debug/#confval-log_file

Best,
Gab
 
Please could you share what and how you did it?

Did you modify /etc/ceph/ceph.conf with something like this:

[global]
log_to_syslog = true

I implemented the above changes on my system. Is there a way to confirm that the Ceph logs are going to syslog? I have already confirmed that the journal is going to RAM by running systemctl status systemd-journald:

Code:
Dec 05 17:20:27 N1 systemd-journald[386]: Journal started
Dec 05 17:20:27 N1 systemd-journald[386]: Runtime Journal (/run/log/journal/077b1ca4f22f451ea08cb39fea071499) is 8.0M, max 641.7M, 633.7M free.
Dec 05 17:20:27 N1 systemd-journald[386]: Runtime Journal (/run/log/journal/077b1ca4f22f451ea08cb39fea071499) is 8.0M, max 641.7M, 633.7M free.
Dec 05 17:20:27 N1 systemd-journald[386]: Received client request to flush runtime journal.
Dec 06 07:46:32 N1 systemd-journald[386]: Data hash table of /run/log/journal/077b1ca4f22f451ea08cb39fea071499/system.journal has a fill level at 75.0 (109526 of 146033 items, 419>
Dec 06 07:46:32 N1 systemd-journald[386]: /run/log/journal/077b1ca4f22f451ea08cb39fea071499/system.journal: Journal header limits reached or header out-of-date, rotating.
Notice: journal has been rotated since unit was started, output may be incomplete.

If I run journalctl -n 10 I get the following:

Code:
Dec 06 09:56:15 N1 ceph-mon[1064]: 2024-12-06T09:56:15.000-0500 7244ac0006c0  0 log_channel(audit) log [DBG] : from='client.? 10.10.10.6:0/522337331' entity='client.admin' cmd=[{">
Dec 06 09:56:15 N1 ceph-mon[1064]: 2024-12-06T09:56:15.689-0500 7244af2006c0  1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
Dec 06 09:56:20 N1 ceph-mon[1064]: 2024-12-06T09:56:20.690-0500 7244af2006c0  1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
Dec 06 09:56:24 N1 ceph-mon[1064]: 2024-12-06T09:56:24.156-0500 7244ac0006c0  0 mon.N1@0(leader) e3 handle_command mon_command({"format":"json","prefix":"df"} v 0)
Dec 06 09:56:24 N1 ceph-mon[1064]: 2024-12-06T09:56:24.156-0500 7244ac0006c0  0 log_channel(audit) log [DBG] : from='client.? 10.10.10.6:0/564218892' entity='client.admin' cmd=[{">
Dec 06 09:56:25 N1 ceph-mon[1064]: 2024-12-06T09:56:25.692-0500 7244af2006c0  1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
Dec 06 09:56:30 N1 ceph-mon[1064]: 2024-12-06T09:56:30.694-0500 7244af2006c0  1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
Dec 06 09:56:34 N1 ceph-mon[1064]: 2024-12-06T09:56:34.379-0500 7244ac0006c0  0 mon.N1@0(leader) e3 handle_command mon_command({"format":"json","prefix":"df"} v 0)
Dec 06 09:56:34 N1 ceph-mon[1064]: 2024-12-06T09:56:34.379-0500 7244ac0006c0  0 log_channel(audit) log [DBG] : from='client.? 10.10.10.6:0/3531591958' entity='client.admin' cmd=[{>
Dec 06 09:56:35 N1 ceph-mon[1064]: 2024-12-06T09:56:35.695-0500 7244af2006c0  1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>

I think it is safe to assume that the Ceph logs are being stored in the syslog, and therefore also in RAM.
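To double-check, you could confirm that the on-disk Ceph log files no longer grow while the messages keep arriving in the journal:

Bash:
ls -l /var/log/ceph/          # file sizes should stay constant
journalctl -t ceph-mon -n 5   # filter the journal by syslog identifier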
 