PVE Kernel Panic during nightly backup

Entropywrench

New Member
Sep 13, 2016
I have two Proxmox servers I am testing for clients that regularly crash while running their nightly backups. The boxes are nearly identical, both with 32 GB of RAM and only one or two VMs per machine. There is an SSD purely handling the swap partition, which I added as a troubleshooting step. Backups go to a USB 3.0 external drive.

Letting the backup run to local storage will always crash the system. Backing up to the USB drive might give me a week of uptime before it eventually crashes.

Often the VMs need to be restarted by hand after performing:

Code:
qm stop 100
qm unlock 100
qm start 100

Thoughts? If there are additional configs or output that would be helpful, please let me know.


Recorded from kern.log

Code:
Sep 12 21:51:26 ccikvm01 kernel: [777529.260996] INFO: task txg_sync:1399 blocked for more than 120 seconds.
Sep 12 21:51:26 ccikvm01 kernel: [777529.261007]       Tainted: P           O    4.4.15-1-pve #1
Sep 12 21:51:26 ccikvm01 kernel: [777529.261021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 12 21:51:26 ccikvm01 kernel: [777529.261033] txg_sync        D ffff88083f3c3aa8     0  1399      2 0x00000000
Sep 12 21:51:26 ccikvm01 kernel: [777529.261050]  ffff88083f3c3aa8 ffff88083f3c3a88 ffff88084b688ec0 ffff8808402d6740
Sep 12 21:51:26 ccikvm01 kernel: [777529.261051]  ffff88083f3c4000 ffff88086fc97180 7fffffffffffffff ffff8807beeb4c28
Sep 12 21:51:26 ccikvm01 kernel: [777529.261052]  0000000000000001 ffff88083f3c3ac0 ffffffff8184d945 0000000000000000
Sep 12 21:51:26 ccikvm01 kernel: [777529.261054] Call Trace:
Sep 12 21:51:26 ccikvm01 kernel: [777529.261058]  [<ffffffff8184d945>] schedule+0x35/0x80
Sep 12 21:51:26 ccikvm01 kernel: [777529.261059]  [<ffffffff81850b85>] schedule_timeout+0x235/0x2d0
Sep 12 21:51:26 ccikvm01 kernel: [777529.261062]  [<ffffffff810b4d61>] ? wakeup_preempt_entity.isra.58+0x41/0x50
Sep 12 21:51:26 ccikvm01 kernel: [777529.261064]  [<ffffffff8102d736>] ? __switch_to+0x256/0x5c0
Sep 12 21:51:26 ccikvm01 kernel: [777529.261066]  [<ffffffff8184ce3b>] io_schedule_timeout+0xbb/0x140
Sep 12 21:51:26 ccikvm01 kernel: [777529.261071]  [<ffffffffc00c7d7c>] cv_wait_common+0xbc/0x140 [spl]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261073]  [<ffffffff810c4000>] ? wait_woken+0x90/0x90
Sep 12 21:51:26 ccikvm01 kernel: [777529.261076]  [<ffffffffc00c7e58>] __cv_wait_io+0x18/0x20 [spl]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261114]  [<ffffffffc022cc50>] zio_wait+0x120/0x200 [zfs]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261130]  [<ffffffffc01b5eb8>] dsl_pool_sync+0xb8/0x440 [zfs]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261147]  [<ffffffffc01cecb9>] spa_sync+0x369/0xb30 [zfs]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261149]  [<ffffffff810ac9f2>] ? default_wake_function+0x12/0x20
Sep 12 21:51:26 ccikvm01 kernel: [777529.261168]  [<ffffffffc01e2a74>] txg_sync_thread+0x3e4/0x6a0 [zfs]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261169]  [<ffffffff810ac599>] ? try_to_wake_up+0x49/0x400
Sep 12 21:51:26 ccikvm01 kernel: [777529.261187]  [<ffffffffc01e2690>] ? txg_sync_stop+0xf0/0xf0 [zfs]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261190]  [<ffffffffc00c2e9a>] thread_generic_wrapper+0x7a/0x90 [spl]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261192]  [<ffffffffc00c2e20>] ? __thread_exit+0x20/0x20 [spl]
Sep 12 21:51:26 ccikvm01 kernel: [777529.261193]  [<ffffffff810a0eba>] kthread+0xea/0x100
Sep 12 21:51:26 ccikvm01 kernel: [777529.261194]  [<ffffffff810a0dd0>] ? kthread_park+0x60/0x60
Sep 12 21:51:26 ccikvm01 kernel: [777529.261196]  [<ffffffff81851e0f>] ret_from_fork+0x3f/0x70
Sep 12 21:51:26 ccikvm01 kernel: [777529.261197]  [<ffffffff810a0dd0>] ? kthread_park+0x60/0x60


system info:

Code:
root@ccikvm01:/var/log# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  2.72T  1.66T  1.05T         -    49%    61%  1.00x  ONLINE  -
root@ccikvm01:/var/log# zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
root@ccikvm01:/var/log# free
             total       used       free     shared    buffers     cached
Mem:      32904992   30333112    2571880      55980       1792     105504
-/+ buffers/cache:   30225816    2679176
Swap:     33554428          0   33554428
root@ccikvm01:/var/log#


pveversion

Code:
proxmox-ve: 4.2-60 (running kernel: 4.4.15-1-pve)
pve-manager: 4.2-17 (running version: 4.2-17/e1400248)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.15-1-pve: 4.4.15-60
lvm2: 2.02.116-pve2
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-43
qemu-server: 4.0-85
pve-firmware: 1.1-8
libpve-common-perl: 4.0-72
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-56
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6-1
pve-container: 1.0-72
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.3-4
lxcfs: 2.0.2-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5.7-pve10~bpo80
 
Hi

Can you send the output of:
zfs list -t all
swapon --show
cat /etc/pve/storage.cfg
cat /proc/sys/vm/swappiness
 
The output you quoted from kern.log is not a kernel panic; it's a hung-task notice. If your system really crashed, you're not going to find anything in any log file, at least not on the machine itself.

For proper crash analysis, you need to set up a crash dump (kdump). A simpler method could be to use netconsole, but not everything is visible there.
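
If kdump turns out to be awkward, a minimal netconsole sketch looks roughly like this (interface name, IP addresses and MAC below are placeholders, not values from this thread):

Code:
# On the crashing host: stream kernel messages over UDP to another box.
# Syntax: netconsole=<src-port>@<src-ip>/<iface>,<dst-port>@<dst-ip>/<dst-mac>
modprobe netconsole netconsole=6665@192.168.1.20/eth0,6666@192.168.1.10/aa:bb:cc:dd:ee:ff

# On the receiving box: listen for the UDP stream and keep a copy on disk.
# (netcat option syntax varies between flavours; this is the traditional one)
nc -l -u -p 6666 | tee netconsole.log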
 
Hi

Can you send the output of:
zfs list -t all
swapon --show
cat /etc/pve/storage.cfg
cat /proc/sys/vm/swappiness


I did change the ZFS pool yesterday and added a log and a cache device.
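
For reference, attaching those two devices is normally done with zpool add; something like the following (a hypothetical reconstruction, using the partition names visible in the status output below):

Code:
# Add a separate log (SLOG) device to the existing pool
zpool add rpool log /dev/disk/by-id/ata-CT240BX200SSD1_1620F01C069E-part3

# Add a cache (L2ARC) device
zpool add rpool cache /dev/disk/by-id/ata-CT240BX200SSD1_1620F01C069E-part4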

Code:
root@ccikvm01:~# zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        rpool                                    ONLINE       0     0     0
          mirror-0                               ONLINE       0     0     0
            sda2                                 ONLINE       0     0     0
            sdb2                                 ONLINE       0     0     0
        logs
          ata-CT240BX200SSD1_1620F01C069E-part3  ONLINE       0     0     0
        cache
          ata-CT240BX200SSD1_1620F01C069E-part4  ONLINE       0     0     0

errors: No known data errors
root@ccikvm01:~# zpool iostat -v rpool
                                            capacity     operations    bandwidth
pool                                     alloc   free   read  write   read  write
---------------------------------------  -----  -----  -----  -----  -----  -----
rpool                                    1.67T  1.05T    380     73  2.70M   556K
  mirror                                 1.67T  1.05T    380     71  2.70M   401K
    sda2                                     -      -     32     13  1.49M   413K
    sdb2                                     -      -     32     13  1.50M   413K
logs                                         -      -      -      -      -      -
  ata-CT240BX200SSD1_1620F01C069E-part3  1.91M  4.97G      0      3      3   248K
cache                                        -      -      -      -      -      -
  ata-CT240BX200SSD1_1620F01C069E-part4  26.6G  5.42G      4    122  31.8K  1.92M
---------------------------------------  -----  -----  -----  -----  -----  -----

Code:
root@ccikvm01:~# zfs list -t all
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     1.67T   983G    96K  /rpool
rpool/ROOT                1.42T   983G    96K  /rpool/ROOT
rpool/ROOT/pve-1          1.42T   983G  1.42T  /
rpool/data                 252G   983G    96K  /rpool/data
rpool/data/vm-100-disk-1  82.4G   983G  82.4G  -
rpool/data/vm-100-disk-2   135G   983G   135G  -
rpool/data/vm-101-disk-1  35.2G   983G  35.2G  -
rpool/swap                8.50G   990G  1.83G  -

Code:
root@ccikvm01:~# swapon --show
NAME      TYPE      SIZE   USED PRIO
/dev/sdc2 partition  32G 418.5M   -1

Code:
root@ccikvm01:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        maxfiles 6
        content vztmpl,iso,backup

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir

dir: ext_backup
        path /mnt/ext_backup/vzdump
        maxfiles 10
        content images,backup,iso,vztmpl

Code:
root@ccikvm01:~# cat /proc/sys/vm/swappiness
60
 
The output you quoted from kern.log is not a kernel panic; it's a hung-task notice. If your system really crashed, you're not going to find anything in any log file, at least not on the machine itself.

For proper crash analysis, you need to set up a crash dump (kdump). A simpler method could be to use netconsole, but not everything is visible there.

Yeah, what you see is the last thing in the log files before the system time resets to 0 and the reboot logging begins. I'll see what I can do about getting more info. I have witnessed one of the servers crash/reboot while on site: nothing gets logged to the physical console, the machine just restarts under moderate I/O.
 
Apart from setting up a crash dump, does anyone have any other thoughts? I lost one of the machines again to an abrupt backup-related reboot over the weekend.

Code:
Task viewer: Backup
Output
Status
Stop
INFO: starting new backup job: vzdump 101 --node ccikvm01 --compress gzip --quiet 1 --mailnotification always --storage ext_backup --mode snapshot --mailto mnielsen@theitmachine.com
INFO: Starting Backup of VM 101 (qemu)
INFO: status = running
INFO: update VM 101: -lock backup
INFO: VM Name: CCIFAX01
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/ext_backup/vzdump/dump/vzdump-qemu-101-2016_09_19-03_15_01.vma.gz'
INFO: started backup task 'a5834cf8-00d0-47ae-932b-fac494a9326f'
INFO: status: 0% (106430464/42949672960), sparse 0% (29757440), duration 3, 35/25 MB/s
INFO: status: 1% (437125120/42949672960), sparse 0% (33034240), duration 28, 13/13 MB/s
INFO: status: 2% (859045888/42949672960), sparse 0% (37203968), duration 61, 12/12 MB/s
INFO: status: 3% (1303773184/42949672960), sparse 0% (37318656), duration 89, 15/15 MB/s
INFO: status: 4% (1718091776/42949672960), sparse 0% (37834752), duration 125, 11/11 MB/s
INFO: status: 5% (2200829952/42949672960), sparse 0% (67923968), duration 155, 16/15 MB/s
INFO: status: 6% (2577137664/42949672960), sparse 0% (68898816), duration 179, 15/15 MB/s
 
You can try setting the swappiness to 1.
Have you restricted the ZFS ARC cache?
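
For the swappiness part, a rough sketch (the sysctl.d file name is just an example):

Code:
# Apply immediately
sysctl -w vm.swappiness=1

# Persist across reboots (the file name is arbitrary)
echo "vm.swappiness = 1" > /etc/sysctl.d/99-swappiness.conf

For limiting the ARC, see the sketch further down in the thread.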
 
You can try setting the swappiness to 1.
Have you restricted the ZFS ARC cache?
I'm afraid you'll have to help me set the ZFS ARC cache. How can I determine what it currently is (I assume it's the default), and what parameters are required to specify what it should be?

Code:
root@ccikvm01:~# cat /proc/spl/kstat/zfs/arcstats
6 1 0x01 91 4368 4545380590 125177953188783
name                            type data
hits                            4    107152542
misses                          4    53154115
demand_data_hits                4    45341002
demand_data_misses              4    9682985
demand_metadata_hits            4    54145698
demand_metadata_misses          4    559983
prefetch_data_hits              4    7654436
prefetch_data_misses            4    42907979
prefetch_metadata_hits          4    11406
prefetch_metadata_misses        4    3168
mru_hits                        4    75939640
mru_ghost_hits                  4    2131427
mfu_hits                        4    23692556
mfu_ghost_hits                  4    32564
deleted                         4    39245244
mutex_miss                      4    19391
evict_skip                      4    199477495
evict_not_enough                4    1078521
evict_l2_cached                 4    236186828800
evict_l2_eligible               4    111118711296
evict_l2_ineligible             4    8567961600
evict_l2_skip                   4    9127924
hash_elements                   4    6609408
hash_elements_max               4    8248980
hash_collisions                 4    57006216
hash_chains                     4    1968701
hash_chain_max                  4    12
p                               4    3401643261
c                               4    13751484736
c_min                           4    33554432
c_max                           4    16847355904
size                            4    13718108456
hdr_size                        4    1084504824
data_size                       4    11455257088
metadata_size                   4    374767104
other_size                      4    416950392
anon_size                       4    24576
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    1012290560
mru_evictable_data              4    915059712
mru_evictable_metadata          4    7813120
mru_ghost_size                  4    12278654976
mru_ghost_evictable_data        4    11451837952
mru_ghost_evictable_metadata    4    826817024
mfu_size                        4    10817709056
mfu_evictable_data              4    10540189184
mfu_evictable_metadata          4    31933952
mfu_ghost_size                  4    229777408
mfu_ghost_evictable_data        4    226762752
mfu_ghost_evictable_metadata    4    3014656
l2_hits                         4    108644
l2_misses                       4    53045426
l2_feeds                        4    139823
l2_rw_clash                     4    104
l2_read_bytes                   4    444423168
l2_write_bytes                  4    202627829760
l2_writes_sent                  4    120218
l2_writes_done                  4    120218
l2_writes_error                 4    0
l2_writes_lock_retry            4    47
l2_evict_lock_retry             4    17
l2_evict_reading                4    2
l2_evict_l1cached               4    594512
l2_free_on_write                4    602089
l2_cdata_free_on_write          4    430
l2_abort_lowmem                 4    2
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    39342077952
l2_asize                        4    27235079680
l2_hdr_size                     4    386629048
l2_compress_successes           4    10700968
l2_compress_zeros               4    0
l2_compress_failures            4    16068805
memory_throttle_count           4    0
duplicate_buffers               4    0
duplicate_buffers_size          4    0
duplicate_reads                 4    0
memory_direct_count             4    198
memory_indirect_count           4    457638
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    2262851368
arc_meta_limit                  4    12635516928
arc_meta_max                    4    3018967848
arc_meta_min                    4    16777216
arc_need_free                   4    0
arc_sys_free                    4    526479360

Setting the swappiness is no problem; I'll set it to 1.
 
Update.
About 20 days ago I changed the time of day the backups run so they shouldn't overlap or fall within the window of other system operations. This seemed to work well, but over the weekend it looks like the larger backup didn't finish before the ZFS scrub kicked off: memory error/lockup/crash.
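
If the scrub window is a suspect, it is worth checking when the automatic scrub is scheduled and whether one is running; a quick sketch (the cron file location can vary with the ZFS package version):

Code:
# Find the cron entry the ZFS packages install for the periodic scrub
grep -ri scrub /etc/cron.d/ 2>/dev/null

# Check whether a scrub is currently in progress
zpool status rpool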

Clearly I didn't get kdump or netconsole working correctly, as nothing was logged in /var/crash. Can anyone point me to documentation for getting it working with Proxmox? I can't imagine it's any different from Ubuntu or other distros, but I appear to be wrong.
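
Not an official Proxmox guide, but the stock Debian kdump-tools route is roughly this (the crashkernel size is an example value; I haven't verified every step on a PVE kernel):

Code:
# Install the tooling
apt-get install kdump-tools

# Enable it: in /etc/default/kdump-tools set USE_KDUMP=1

# Reserve memory for the crash kernel: add e.g. crashkernel=256M to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub
reboot

# After the reboot, check what kdump thinks of the setup
kdump-config show

Dumps should then land in /var/crash after a crash.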

Wolfgang, I've dug around trying to find what the ZFS ARC cache size should be. The link referenced previously doesn't lead me anywhere dealing with memory limiting and tuning.

If the problem server has 2.4 TB of ZFS storage and 32 GB of RAM with 16 GB provisioned for all the VMs, what should the cache size be?
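
For what it's worth, the arcstats posted earlier show c_max at roughly 16.8 GB (the default of half the RAM) and size around 13.7 GB actually in use, which leaves little headroom once the VMs claim their 16 GB. A common rule of thumb (not an official recommendation) is to cap the ARC at what is left after the VMs and a few GB for the host, e.g. 8 GB here:

Code:
# Runtime change: cap the ARC at 8 GiB (value in bytes)
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persist the limit across reboots (needed for root-on-ZFS, where the
# module is loaded from the initramfs)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u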
 
Further update. I had another nice run of uptime by spacing out the backups: the smaller one ran in the late afternoon, the larger from 6 pm till midnight, and never when ZFS was scheduled to do any sort of maintenance.

From the logs that did get saved, it looks like the same sort of problem as above. Thoughts?

I'm still having a nightmarish time getting crash dumps to work. I clearly must be doing something wrong; it should be the same as any other Debian-like system. Is there a definitive step-by-step guide for getting crash dumps to work with Proxmox?
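
One way to narrow down where the kdump setup falls over is to check each precondition separately (these are the stock Debian kdump-tools commands):

Code:
# Was memory actually reserved for the crash kernel at boot?
cat /proc/cmdline
dmesg | grep -i crashkernel

# Is the crash kernel loaded and the service ready?
kdump-config status
kdump-config show

# Optional: force a test crash once everything looks ready -- this WILL reboot the box
sysctl -w kernel.sysrq=1
echo c > /proc/sysrq-trigger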
 
Hi all, after the last crash I updated one of the servers to 4.3.9 and now have a different set of info in the logs just before the crash. I'll research this, but wanted to share in case it helps.

Code:
Nov 22 05:15:02 ccikvm01 vzdump[24801]: INFO: Starting Backup of VM 101 (qemu)
Nov 22 05:15:02 ccikvm01 qm[24804]: <root@pam> update VM 101: -lock backup
Nov 22 05:17:01 ccikvm01 CRON[24945]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Nov 22 05:22:02 ccikvm01 systemd-timesyncd[1949]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.061s/0.001s/-1ppm
Nov 22 05:25:01 ccikvm01 CRON[25485]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Nov 22 05:25:01 ccikvm01 CRON[25486]: (root) CMD (/usr/bin/pveupdate)
Nov 22 05:25:11 ccikvm01 rrdcached[2345]: flushing old values
Nov 22 05:25:11 ccikvm01 rrdcached[2345]: rotating journals
Nov 22 05:25:11 ccikvm01 rrdcached[2345]: started new journal /var/lib/rrdcached/journal/rrd.journal.1479813911.950699
Nov 22 05:25:11 ccikvm01 rrdcached[2345]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1479806711.950674
Nov 22 05:25:13 ccikvm01 smartd[2302]: Device: /dev/sde [SAT], open() failed: No such device
Nov 22 05:30:00 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:30:20 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:31:10 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:31:30 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:31:50 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:32:20 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:32:20 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:33:10 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:33:30 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:34:52 ccikvm01 pve-firewall[2559]: firewall update time (31.186 seconds)
Nov 22 05:34:52 ccikvm01 pvestatd[2558]: status update time (34.998 seconds)
Nov 22 05:34:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-node/ccikvm01: -1
Nov 22 05:34:52 ccikvm01 pmxcfs[2463]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-node/ccikvm01: /var/lib/rrdcached/db/pve2-node/ccikvm01: illegal attempt to update using
time 1479814492 when last update time is 1479814492 (minimum one second step)
Nov 22 05:34:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/101: -1
Nov 22 05:34:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/100: -1
Nov 22 05:34:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/ccikvm01/ext_backup: -1
Nov 22 05:34:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/ccikvm01/local: -1
Nov 22 05:34:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/ccikvm01/local-zfs: -1
Nov 22 05:35:02 ccikvm01 CRON[26165]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Nov 22 05:35:51 ccikvm01 pve-firewall[2559]: firewall update time (29.752 seconds)
Nov 22 05:35:52 ccikvm01 pvestatd[2558]: status update time (29.536 seconds)
Nov 22 05:36:22 ccikvm01 rrdcached[2345]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-vm/100) failed with status -1. (/var/lib/rrdcached/db/pve2-vm/100: illegal attempt to u
pdate using time 1479814287 when last update time is 1479814492 (minimum one second step))
Nov 22 05:36:22 ccikvm01 rrdcached[2345]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-vm/101) failed with status -1. (/var/lib/rrdcached/db/pve2-vm/101: illegal attempt to u
pdate using time 1479814287 when last update time is 1479814492 (minimum one second step))
Nov 22 05:37:42 ccikvm01 rrdcached[2345]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/ccikvm01/local) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/ccik
vm01/local: illegal attempt to update using time 1479814367 when last update time is 1479814492 (minimum one second step))
Nov 22 05:37:42 ccikvm01 rrdcached[2345]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/ccikvm01/local-zfs) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/
ccikvm01/local-zfs: illegal attempt to update using time 1479814367 when last update time is 1479814492 (minimum one second step))
Nov 22 05:38:52 ccikvm01 pve-firewall[2559]: firewall update time (21.354 seconds)
Nov 22 05:38:52 ccikvm01 pvestatd[2558]: status update time (19.960 seconds)
Nov 22 05:38:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/ccikvm01/ext_backup: -1
Nov 22 05:38:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/ccikvm01/local: -1
Nov 22 05:38:52 ccikvm01 pmxcfs[2463]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/ccikvm01/local-zfs: -1
Nov 22 05:39:12 ccikvm01 rrdcached[2345]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/ccikvm01/ext_backup) failed with status -1. (/var/lib/rrdcached/db/pve2-storage
/ccikvm01/ext_backup: illegal attempt to update using time 1479814492 when last update time is 1479814732 (minimum one second step))
Nov 22 05:39:45 ccikvm01 pvestatd[2558]: got timeout
Nov 22 05:43:09 ccikvm01 systemd-modules-load[1494]: Module 'fuse' is builtin
 
