Server sometimes slow

epistula1

New Member
Oct 15, 2009
Hi,

I have a problem with my PVE server. The server is sometimes so slow that it is not usable for us. After a few minutes it runs normally again and is fun to work with, and then a few minutes later it is slow again. That is the permanent pattern: runs normally, runs slowly, ...

I've tried many things to fix the problem, but with no luck. Is anybody here seeing the same problem, or does anyone have a solution for me?

My server: Dell PowerEdge R610; one Intel Xeon E5520 2.26 GHz, 8 MB cache; 12 GB memory; two 73 GB SAS 10k drives (RAID 1, PERC 6/i) for the system; 12 TB SATA II 7.2k drives (RAID 6, PERC 6/E) over SAS for the storage; pve-1.4b2.

My pveperf output:

CPU BOGOMIPS: 36178.66
REGEX/SECOND: 570453
HD SIZE: 16.49 GB (/dev/pve/root)
BUFFERED READS: 121.38 MB/sec
AVERAGE SEEK TIME: 4.48 ms
FSYNCS/SECOND: 2686.12
DNS EXT: 47.25 ms
DNS INT: 4.59 ms

Thanks for your help!
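(A side note for anyone benchmarking a similar setup: pveperf accepts a path argument, so the big RAID 6 storage can be measured separately from the system RAID 1. The mount point below is only an assumption; use wherever the PERC 6/E array is actually mounted.)

Code:
# default run benchmarks the root filesystem
pveperf
# point it at the large storage array instead (path is an assumption)
pveperf /var/lib/vz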
 
Find out what process produces the high load (with ps). Do you use KVM (with virtio) and OpenVZ guests, or just KVM, or just OpenVZ?
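A minimal sketch of that suggestion, assuming the standard procps ps on the host: sort the process list by CPU and memory use while the slowdown is happening.

Code:
# top CPU consumers
ps aux --sort=-%cpu | head -n 15
# top memory consumers
ps aux --sort=-%mem | head -n 15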
 
I use only KVM (two Windows VMs with virtio) and other VMs (Windows and Linux) without virtio. I had the same problems with pve-1.3.
 
Try to find out the bottleneck (CPU/RAM/HDD I/O) and locate the process.

For me it was Windows VMs, but I had to move them to physical servers quickly, so I cannot reproduce it right now.
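For the bottleneck hunt suggested above, a rough sketch of commands that separate CPU, memory, and disk pressure (vmstat comes with procps, iostat with the sysstat package; the intervals are just examples):

Code:
# 5-second samples: a high 'wa' column points at disk I/O,
# high 'si'/'so' columns point at memory pressure (swapping)
vmstat 5 5
# per-device utilisation and wait times
iostat -x 5 3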
 
We are seeing high loads as well - it's all disk I/O, with wait times of 50-60% as the norm since we upgraded to 1.3 on a bunch of systems.

Not fun -

I did not want to hijack the thread, but figured it helps to add our data here.

Code:
00:00:01  cpu %usr %nice   %sys %irq %softirq    %wait %idle             _cpu_
00:10:01  all   16     2      3    0        0       63    15
            0   15     3      3    0        0       65    14
            1   16     3      3    0        0       62    16
            2   17     2      3    0        0       63    15
            3   16     2      3    0        0       62    16
00:20:01  all   21     1      4    0        0       57    17
            0   21     0      4    0        0       57    17
            1   22     1      3    0        0       57    17
            2   21     0      4    0        0       58    17
            3   21     1      4    0        0       57    17
00:30:01  all   12     0      2    0        0       42    43
            0   11     0      2    0        0       43    43
            1   14     0      2    0        0       42    42
            2   12     0      3    0        0       42    43
            3   14     0      3    0        0       40    43
00:40:01  all   18     0      3    0        0       28    50
            0   17     0      3    0        0       29    51
            1   18     0      3    0        0       29    49
            2   19     0      3    0        0       28    49
            3   19     1      3    0        0       27    50

iotop does not show much, sadly.
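In case it helps, iotop can be limited to tasks that are actually doing I/O and run in batch mode to capture a slow period; a sketch (run as root):

Code:
# only show tasks currently doing I/O, accumulate totals
iotop -o -a
# batch mode, 10 iterations, redirected to a file for later analysis
iotop -o -b -n 10 > iotop.log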

Interestingly enough, when no OpenVZ servers are running we still see high disk I/O, which leads me to believe the issue is within Proxmox somewhere.

We are using an Adaptec RAID card with 512 MB onboard, RAID 10 across eight 1 TB 7200 RPM drives.

The disk I/O prior to 1.3.x was much lower.
 
Interestingly enough, when no OpenVZ servers are running we still see high disk I/O, which leads me to believe the issue is within Proxmox somewhere.

Can you please send me the output of 'ps auxww' (when no OpenVZ servers are running)?
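Alongside the full 'ps auxww' dump, it may also be worth filtering for processes in uninterruptible sleep (state 'D'), since those are the ones blocked on disk; a small sketch:

Code:
# print the header plus any process whose STAT column starts with 'D'
ps auxww | awk 'NR == 1 || $8 ~ /^D/'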
 
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 10316 748 ? Ss 00:45 0:02 init [2]
root 2 0.0 0.0 0 0 ? S< 00:45 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S< 00:45 0:00 [migration/0]
root 4 0.0 0.0 0 0 ? S< 00:45 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< 00:45 0:00 [watchdog/0]
root 6 0.0 0.0 0 0 ? S< 00:45 0:00 [migration/1]
root 7 0.0 0.0 0 0 ? S< 00:45 0:00 [ksoftirqd/1]
root 8 0.0 0.0 0 0 ? S< 00:45 0:00 [watchdog/1]
root 9 0.0 0.0 0 0 ? S< 00:45 0:00 [migration/2]
root 10 0.0 0.0 0 0 ? S< 00:45 0:01 [ksoftirqd/2]
root 11 0.0 0.0 0 0 ? S< 00:45 0:00 [watchdog/2]
root 12 0.0 0.0 0 0 ? S< 00:45 0:00 [migration/3]
root 13 0.0 0.0 0 0 ? S< 00:45 0:00 [ksoftirqd/3]
root 14 0.0 0.0 0 0 ? S< 00:45 0:00 [watchdog/3]
root 15 0.0 0.0 0 0 ? S< 00:45 0:00 [events/0]
root 16 0.0 0.0 0 0 ? S< 00:45 0:00 [events/1]
root 17 0.0 0.0 0 0 ? S< 00:45 0:01 [events/2]
root 18 0.0 0.0 0 0 ? S< 00:45 0:00 [events/3]
root 19 0.0 0.0 0 0 ? S< 00:45 0:00 [khelper]
root 52 0.0 0.0 0 0 ? S< 00:45 0:01 [kblockd/0]
root 53 0.0 0.0 0 0 ? S< 00:45 0:01 [kblockd/1]
root 54 0.0 0.0 0 0 ? S< 00:45 0:01 [kblockd/2]
root 55 0.0 0.0 0 0 ? S< 00:45 0:01 [kblockd/3]
root 58 0.0 0.0 0 0 ? S< 00:45 0:00 [kacpid]
root 59 0.0 0.0 0 0 ? S< 00:45 0:00 [kacpi_notify]
root 165 0.0 0.0 0 0 ? S< 00:45 0:00 [kseriod]
root 220 0.0 0.0 0 0 ? S 00:45 0:00 [ubstatd]
root 222 0.0 0.0 0 0 ? S 00:45 0:00 [pdflush]
root 223 0.0 0.0 0 0 ? S 00:45 0:10 [pdflush]
root 224 0.0 0.0 0 0 ? S< 00:45 0:04 [kswapd0]
root 293 0.0 0.0 0 0 ? S< 00:45 0:00 [aio/0]
root 294 0.0 0.0 0 0 ? S< 00:45 0:00 [aio/1]
root 295 0.0 0.0 0 0 ? S< 00:45 0:00 [aio/2]
root 296 0.0 0.0 0 0 ? S< 00:45 0:00 [aio/3]
root 1025 0.0 0.0 0 0 ? S< 00:45 0:00 [scsi_eh_0]
root 1035 0.0 0.0 0 0 ? S< 00:45 0:00 [ata/0]
root 1036 0.0 0.0 0 0 ? S< 00:45 0:00 [ata/1]
root 1037 0.0 0.0 0 0 ? S< 00:45 0:00 [ata/2]
root 1038 0.0 0.0 0 0 ? S< 00:45 0:00 [ata/3]
root 1039 0.0 0.0 0 0 ? S< 00:45 0:00 [ata_aux]
root 1064 0.0 0.0 0 0 ? S< 00:45 0:00 [scsi_eh_1]
root 1066 0.0 0.0 0 0 ? S< 00:45 0:00 [scsi_eh_2]
root 1090 0.0 0.0 0 0 ? S< 00:45 0:00 [ksuspend_usbd]
root 1095 0.0 0.0 0 0 ? S< 00:45 0:00 [khubd]
root 1263 0.0 0.0 0 0 ? S< 00:45 0:00 [kjournald]
root 1351 0.0 0.0 16740 948 ? S<s 00:45 0:00 udevd --daemon
root 1538 0.0 0.0 0 0 ? S< 00:45 0:00 [edac-poller]
root 1968 0.0 0.0 0 0 ? S< 00:45 0:00 [kpsmoused]
root 2623 0.0 0.0 0 0 ? S< 00:45 0:00 [ksnapd]
root 2656 0.3 0.0 0 0 ? S< 00:45 2:09 [kjournald]
root 2657 0.0 0.0 0 0 ? S< 00:45 0:00 [kjournald]
daemon 2790 0.0 0.0 8024 532 ? Ss 00:45 0:00 /sbin/portmap
statd 2801 0.0 0.0 10140 760 ? Ss 00:45 0:00 /sbin/rpc.statd
root 2935 0.0 0.0 187512 1900 ? Sl 00:45 0:06 /usr/sbin/rsyslogd -c3
root 2967 0.0 0.0 48868 1176 ? Ss 00:45 0:02 /usr/sbin/sshd
root 2993 0.0 0.0 10132 672 ? Ss 00:45 0:00 /usr/sbin/inetd
root 3058 0.0 0.0 36844 2296 ? Ss 00:45 0:00 /usr/lib/postfix/master
postfix 3067 0.0 0.0 38948 2384 ? S 00:45 0:00 qmgr -l -t fifo -u
root 3282 0.0 0.0 0 0 ? S 00:45 0:00 [vzmond]
root 3363 0.0 0.0 66072 3144 ? Ss 00:45 0:01 sshd: root
root 3365 0.0 0.0 65932 3100 ? Ss 00:45 0:00 sshd: root
root 4307 0.0 0.0 65932 3108 ? Ss 00:45 0:00 sshd: root
root 4309 0.0 0.0 65932 3104 ? Ss 00:45 0:00 sshd: root
root 4365 0.0 0.1 74688 20944 ? S 07:58 0:04 pvedaemon worker
root 5298 0.0 0.0 68332 17984 ? S 00:48 0:00 pvedaemon worker
root 6595 0.0 0.0 65932 3100 ? Ss 10:00 0:00 sshd: root@pts/0
root 6597 0.0 0.0 18840 1912 pts/0 Ss 10:00 0:00 -bash
root 6680 0.0 0.0 16016 1104 pts/0 R+ 10:02 0:00 ps auxww
root 6681 0.0 0.0 18840 876 pts/0 D+ 10:02 0:00 -bash
root 6769 0.0 0.0 28384 1012 ? S 06:25 0:00 /USR/SBIN/CRON
root 6771 0.0 0.0 8832 1140 ? Ss 06:25 0:00 /bin/sh -c test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
root 6809 0.0 0.0 8832 584 ? S 06:25 0:00 /bin/sh -c test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
root 6810 0.0 0.0 3784 720 ? S 06:25 0:00 run-parts --report /etc/cron.daily
root 7111 0.0 0.1 74684 20944 ? S 08:07 0:04 pvedaemon worker
www-data 7121 0.0 0.1 244996 27228 ? S 06:25 0:00 /usr/sbin/apache2 -k start
www-data 7122 0.0 0.1 244860 26488 ? S 06:25 0:00 /usr/sbin/apache2 -k start
root 9690 0.0 0.0 36832 2184 ? S 06:26 0:00 /usr/sbin/sendmail -i -FCronDaemon -oem root
root 9691 0.0 0.0 8840 1180 ? S 06:26 0:00 /bin/bash /etc/cron.daily/mlocate
root 9692 0.0 0.0 36820 2172 ? S 06:26 0:00 /usr/sbin/postdrop -r
root 9693 0.0 0.0 65680 17504 ? S 00:53 0:08 /usr/bin/perl -w /usr/bin/pvetunnel -p /var/run/pvetunnel.pid
root 9695 0.0 0.0 42532 2788 ? S 00:53 0:00 /usr/bin/ssh -N -o BatchMode=yes -L 50001:localhost:83 98.100.0.160
root 9697 0.1 0.0 5308 2140 ? R 06:26 0:20 /usr/bin/updatedb.mlocate
root 9709 0.1 0.1 81604 23804 ? S 00:53 0:33 /usr/bin/perl -w /usr/bin/pvemirror -p /var/run/pvemirror.pid
ntp 9720 0.0 0.0 22384 1428 ? Ss 00:53 0:00 /usr/sbin/ntpd -p /var/run/ntpd.pid -u 105:107 -g
daemon 9769 0.0 0.0 16360 444 ? Ss 00:53 0:00 /usr/sbin/atd
root 9809 0.0 0.0 19836 1052 ? Ss 00:53 0:00 /usr/sbin/cron
root 10335 0.0 0.1 244728 30092 ? Ss 00:54 0:01 /usr/sbin/apache2 -k start
root 10467 0.0 0.0 3800 576 tty1 Ss+ 00:54 0:00 /sbin/getty 38400 tty1
root 10469 0.0 0.0 3800 576 tty2 Ss+ 00:54 0:00 /sbin/getty 38400 tty2


I will place more in the next post due to forum post length limits.
 
root 10470 0.0 0.0 3800 580 tty3 Ss+ 00:54 0:00 /sbin/getty 38400 tty3
root 10471 0.0 0.0 3800 576 tty4 Ss+ 00:54 0:00 /sbin/getty 38400 tty4
root 10472 0.0 0.0 3800 580 tty5 Ss+ 00:54 0:00 /sbin/getty 38400 tty5
root 10473 0.0 0.0 3800 580 tty6 Ss+ 00:54 0:00 /sbin/getty 38400 tty6
postfix 22682 0.0 0.0 38900 2200 ? S 09:04 0:00 pickup -l -t fifo -u -c
 
iostat shows:

Code:
Linux 2.6.24-7-pve (obadiah)    10/19/2009    _x86_64_

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          17.87   0.34     3.73    29.53    0.00  48.53

Device:        tps   Blk_read/s   Blk_wrtn/s    Blk_read     Blk_wrtn
sda         205.81      1231.80      9427.91   180465906   1381240386
dm-0          0.01         0.00         0.04         360         5528
dm-1          4.43        16.40        30.33     2402066      4442784
dm-2       1233.71      1215.39      9397.55   178061354   1376792064


It looks like it might just be the mlocate update that is killing us - perhaps that's not needed?

I would rather hunt a bit more than kill the user experience here.
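If it does turn out to be the nightly mlocate run, two gentle options short of removing it - these assume a stock Debian layout, so check the paths on the host first:

Code:
# run the updatedb job by hand at idle I/O priority (one-off test)
ionice -c3 /etc/cron.daily/mlocate
# or disable the nightly job until the hunt is over
# (run-parts skips scripts that are not executable)
chmod -x /etc/cron.daily/mlocate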
 
I am so happy to see this post. Ever since we migrated from a Xeon 5470 on Proxmox 1.1 to a Xeon 5520 on Proxmox 1.4b, we have had customers complaining that at times their server gets slow.

We only run openvz...

Something is going on in this beta that needs to get fixed.

Thanks.
 
I tried iotop but it says command not found. I have been receiving complaints about our VPSes running slow at times for the past two weeks.
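Regarding "command not found": on a Debian-based Proxmox VE 1.x host, iotop is normally installable from the standard Debian repositories (assuming they are enabled and the running kernel has I/O accounting support):

Code:
apt-get update
apt-get install iotop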

From time to time I get load spikes of more than 25.

If the release of 1.4 this week does not fix this, I will have to roll back to Proxmox 1.1.

Does anyone know how to downgrade Proxmox from 1.4 to 1.1 or 1.2 without having to reinstall?

Here is an extract from the top command I just ran.

Code:
top - 21:12:19 up 13 days,  7:07,  1 user,  load average: 31.62, 18.92, 15.52
Tasks: 768 total,   2 running, 764 sleeping,   0 stopped,   2 zombie
Cpu(s): 29.6%us, 68.5%sy,  0.0%ni,  1.5%id,  0.0%wa,  0.1%hi,  0.1%si,  0.0%st
Mem:  24674488k total, 19795136k used,  4879352k free,  1864528k buffers
Swap: 24117240k total,        0k used, 24117240k free, 11453576k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                  
13247 libuuid   20   0  557m 259m 4880 S  201  1.1  13244:35 mysqld                                    
26680 510       20   0 31028  18m 6016 D  100  0.1   0:08.06 php                                       
26662 510       20   0 31032  18m 6016 D   99  0.1   0:08.32 php                                       
26676 510       20   0 31032  18m 6016 D   99  0.1   0:08.12 php                                       
26730 510       20   0 31032  18m 6016 D   99  0.1   0:08.26 php                                       
26732 510       20   0 31036  18m 6016 D   99  0.1   0:08.24 php                                       
26645 510       20   0 31028  18m 6016 D   99  0.1   0:08.60 php                                       
26659 510       20   0 31028  18m 6016 D   95  0.1   0:08.42 php                                       
26582 510       20   0 42460  30m 6252 S    3  0.1   0:06.14 php                                       
 7764 99        20   0  9292 3620 2136 S    1  0.0   0:00.06 httpd                                     
23032 root      20   0 19476 1852  940 R    1  0.0   0:02.54 top                                       
26958 99        20   0  6504 2468 1284 D    1  0.0   0:00.02 proftpd                                   
26962 root      20   0  7480 2744 1332 R    1  0.0   0:00.02 pureauth                                  
16103 root      20   0 10304 7736 1740 S    0  0.0   0:56.56 tailwatchd                                
    1 root      20   0 10316  752  620 S    0  0.0   0:12.56 init                                      
    2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd                                  
    3 root      RT  -5     0    0    0 S    0  0.0   0:02.48 migration/0                               
    4 root      15  -5     0    0    0 S    0  0.0   0:08.72 ksoftirqd/0                               
    5 root      RT  -5     0    0    0 S    0  0.0   0:00.06 watchdog/0                                
    6 root      RT  -5     0    0    0 S    0  0.0   0:01.78 migration/1                               
    7 root      15  -5     0    0    0 S    0  0.0   0:01.92 ksoftirqd/1
 
I do not have I/O issues, so I am not sure that iotop will help.

Here is an extract from my iotop output:

Code:
Total DISK READ: 0 B/s | Total DISK WRITE: 274.72 K/s
  PID USER      DISK READ  DISK WRITE   SWAPIN    IO>    COMMAND                                       
 6397 99             0 B/s       0 B/s  0.00 %  0.00 % ./lshttpd
13404 libuuid        0 B/s    3.66 K/s  0.00 %  0.00 % mysqld --basedir=/ --datadir=/var/lib/mysql --us
13406 libuuid        0 B/s  131.87 K/s  0.00 %  0.00 % mysqld --basedir=/ --datadir=/var/lib/mysql --us
13788 root           0 B/s    3.66 K/s  0.00 %  0.00 % [vlogger]   
13496 libuuid        0 B/s  131.87 K/s  0.00 %  0.00 % mysqld --basedir=/ --datadir=/var/lib/mysql --us
 8192 48             0 B/s       0 B/s  0.00 %  0.00 % httpd
    1 root           0 B/s       0 B/s  0.00 %  0.00 % init [2]
    2 root           0 B/s       0 B/s  0.00 %  0.00 % [kthreadd]
    3 root           0 B/s       0 B/s  0.00 %  0.00 % [migration/0]
    4 root           0 B/s       0 B/s  0.00 %  0.00 % [ksoftirqd/0]
    5 root           0 B/s       0 B/s  0.00 %  0.00 % [watchdog/0]
    6 root           0 B/s       0 B/s  0.00 %  0.00 % [migration/1]
    7 root           0 B/s       0 B/s  0.00 %  0.00 % [ksoftirqd/1]
    8 root           0 B/s       0 B/s  0.00 %  0.00 % [watchdog/1]
    9 root           0 B/s
 
Well, there is no MySQL running on the host... but all the OpenVZ VPSes on this server run MySQL...

I hope this is a good hint for the Proxmox gurus to find out what is going on... and I hope this will be fixed in the 1.4 release, otherwise we will have to roll back to 1.1 or 1.2.
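Since those mysqld processes live inside containers, the OpenVZ tools can map a host-visible PID back to its container; a sketch assuming the vzctl utilities shipped with PVE (PID 13247 is simply the mysqld from the top output above):

Code:
# show which container (VEID/CTID) a host PID belongs to
vzpid 13247
# then check load from inside that container
vzctl exec <CTID> uptime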
 
