I/O problem

OK, I have some more info:
I ran strace on vzdump while it was failing:

Code:
select(8, [6], NULL, NULL, {1, 0})      = 0 (Timeout)
open("/proc/497989/stat", O_RDONLY)     = 10
ioctl(10, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff16baec20) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(10, 0, SEEK_CUR) = 0
fstat(10, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
fcntl(10, F_SETFD, FD_CLOEXEC) = 0
read(10, "497989 (vzdump) S 497977 497989 "..., 4096) = 276
close(10) = 0

And it keeps repeating that constantly...

The server was running 2 OpenVZ VMs.
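For reference, a trace like this can be captured by attaching strace to the running vzdump process; this is just a sketch (the pgrep lookup is an assumption, verify the PID with ps first):
Code:
# Attach to the oldest process whose command line matches "vzdump",
# follow forked children (-f) and add timestamps (-tt).
strace -f -tt -p "$(pgrep -o -f vzdump)" -o /root/vzdump-strace.log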
 
And the /var/log/messages:
Code:
Jan 26 14:57:02 sint kernel: EXT3-fs: barriers disabled
Jan 26 14:57:02 sint kernel: kjournald starting.  Commit interval 5 seconds
Jan 26 14:57:02 sint kernel: EXT3-fs (dm-5): using internal journal
Jan 26 14:57:02 sint kernel: EXT3-fs (dm-5): 7 orphan inodes deleted
Jan 26 14:57:02 sint kernel: EXT3-fs (dm-5): recovery complete
Jan 26 14:57:02 sint kernel: EXT3-fs (dm-5): mounted filesystem with ordered data mode
Jan 26 15:04:03 sint kernel: syslogd       D ffff88036314e300     0  4834   4109  102 0x00020004
Jan 26 15:04:03 sint kernel: ffff880363febcc8 0000000000000086 ffff880363febc68 ffff88066b987198
Jan 26 15:04:03 sint kernel: 0000000000000001 ffff880363febd58 ffff880363febdd0 0000000000000282
Jan 26 15:04:03 sint kernel: ffff88036314e8c8 ffff880363febfd8 000000000000f788 ffff88036314e8c8
Jan 26 15:04:03 sint kernel: Call Trace:
Jan 26 15:04:03 sint kernel: [<ffffffff8109d3c9>] ? ktime_get_ts+0xa9/0xe0
Jan 26 15:04:03 sint kernel: [<ffffffff8111dcb0>] ? sync_page+0x0/0x50
Jan 26 15:04:03 sint kernel: [<ffffffff814e80d3>] io_schedule+0x73/0xc0
Jan 26 15:04:03 sint kernel: [<ffffffff8111dced>] sync_page+0x3d/0x50
Jan 26 15:04:03 sint kernel: [<ffffffff814e894f>] __wait_on_bit+0x5f/0x90
Jan 26 15:04:03 sint kernel: [<ffffffff8111dea3>] wait_on_page_bit+0x73/0x80
Jan 26 15:04:03 sint kernel: [<ffffffff81092a50>] ? wake_bit_function+0x0/0x40
Jan 26 15:04:03 sint kernel: [<ffffffff81135765>] ? pagevec_lookup_tag+0x25/0x40
Jan 26 15:04:03 sint kernel: [<ffffffff8111e48b>] wait_on_page_writeback_range+0xfb/0x190
Jan 26 15:04:03 sint kernel: [<ffffffff811347a4>] ? generic_writepages+0x24/0x30
Jan 26 15:04:03 sint kernel: [<ffffffff811347e5>] ? do_writepages+0x35/0x40
Jan 26 15:04:03 sint kernel: [<ffffffff8111e5db>] ? __filemap_fdatawrite_range+0x5b/0x60
Jan 26 15:04:03 sint kernel: [<ffffffff8111e658>] filemap_write_and_wait_range+0x78/0x90
Jan 26 15:04:03 sint kernel: [<ffffffff811b7f7a>] vfs_fsync_range+0xba/0x190
Jan 26 15:04:03 sint kernel: [<ffffffff811b80bd>] vfs_fsync+0x1d/0x20
Jan 26 15:04:03 sint kernel: [<ffffffff811b8120>] do_fsync+0x60/0xa0
Jan 26 15:04:03 sint kernel: [<ffffffff811b8190>] sys_fsync+0x10/0x20
Jan 26 15:04:03 sint kernel: [<ffffffff810478d0>] sysenter_dispatch+0x7/0x2e
 
And the second snippet:
Code:
Jan 26 15:04:03 sint kernel: lvremove D ffff88036b5ad420 0 499718 497989 0 0x00000000
Jan 26 15:04:03 sint kernel: ffff8802b228db88 0000000000000086 ffff8802b228db48 ffffffff813f4cac
Jan 26 15:04:03 sint kernel: 0000000000000008 0000000000001000 0000000000000000 000000000000000c
Jan 26 15:04:03 sint kernel: ffff88036b5ad9e8 ffff8802b228dfd8 000000000000f788 ffff88036b5ad9e8
Jan 26 15:04:03 sint kernel: Call Trace:
Jan 26 15:04:03 sint kernel: [<ffffffff813f4cac>] ? dm_table_unplug_all+0x5c/0xd0
Jan 26 15:04:03 sint kernel: [<ffffffff8109d3c9>] ? ktime_get_ts+0xa9/0xe0
Jan 26 15:04:03 sint kernel: [<ffffffff814e80d3>] io_schedule+0x73/0xc0
Jan 26 15:04:03 sint kernel: [<ffffffff811c3b2e>] __blockdev_direct_IO+0x6fe/0xc20
Jan 26 15:04:03 sint kernel: [<ffffffff8124332d>] ? get_disk+0x7d/0xf0
Jan 26 15:04:03 sint kernel: [<ffffffff811c1737>] blkdev_direct_IO+0x57/0x60
Jan 26 15:04:03 sint kernel: [<ffffffff811c0900>] ? blkdev_get_blocks+0x0/0xc0
Jan 26 15:04:03 sint kernel: [<ffffffff8111fbab>] generic_file_aio_read+0x70b/0x780
Jan 26 15:04:03 sint kernel: [<ffffffff811c2211>] ? blkdev_open+0x71/0xc0
Jan 26 15:04:03 sint kernel: [<ffffffff81184fe3>] ? __dentry_open+0x113/0x330
Jan 26 15:04:03 sint kernel: [<ffffffff8121f248>] ? devcgroup_inode_permission+0x48/0x50
Jan 26 15:04:03 sint kernel: [<ffffffff8118796a>] do_sync_read+0xfa/0x140
Jan 26 15:04:03 sint kernel: [<ffffffff81198ae2>] ? user_path_at+0x62/0xa0
Jan 26 15:04:03 sint kernel: [<ffffffff81092a10>] ? autoremove_wake_function+0x0/0x40
Jan 26 15:04:03 sint kernel: [<ffffffff811c0ccc>] ? block_ioctl+0x3c/0x40
Jan 26 15:04:03 sint kernel: [<ffffffff8119b0f2>] ? vfs_ioctl+0x22/0xa0
Jan 26 15:04:03 sint kernel: [<ffffffff8119b29a>] ? do_vfs_ioctl+0x8a/0x5d0
Jan 26 15:04:03 sint kernel: [<ffffffff81188375>] vfs_read+0xb5/0x1a0
Jan 26 15:04:03 sint kernel: [<ffffffff811884b1>] sys_read+0x51/0x90
Jan 26 15:04:03 sint kernel: [<ffffffff8100b242>] system_call_fastpath+0x16/0x1b
Jan 26 15:04:03 sint kernel: vgs D ffff88066ad26080 0 499725 6723 0 0x00000000
Jan 26 15:04:03 sint kernel: ffff8805cec43b88 0000000000000082 0000000000000000 ffffffff813f4cac
Jan 26 15:04:03 sint kernel: 0000000000000008 0000000000001000 0000000000000000 000000011a9e4336
Jan 26 15:04:03 sint kernel: ffff88066ad26648 ffff8805cec43fd8 000000000000f788 ffff88066ad26648
Jan 26 15:04:03 sint kernel: Call Trace:
Jan 26 15:04:03 sint kernel: [<ffffffff813f4cac>] ? dm_table_unplug_all+0x5c/0xd0
Jan 26 15:04:03 sint kernel: [<ffffffff814e80d3>] io_schedule+0x73/0xc0
Jan 26 15:04:03 sint kernel: [<ffffffff811c3b2e>] __blockdev_direct_IO+0x6fe/0xc20
Jan 26 15:04:03 sint kernel: [<ffffffff8124332d>] ? get_disk+0x7d/0xf0
Jan 26 15:04:03 sint kernel: [<ffffffff811c1737>] blkdev_direct_IO+0x57/0x60
Jan 26 15:04:03 sint kernel: [<ffffffff811c0900>] ? blkdev_get_blocks+0x0/0xc0
Jan 26 15:04:03 sint kernel: [<ffffffff8111fbab>] generic_file_aio_read+0x70b/0x780
Jan 26 15:04:03 sint kernel: [<ffffffff811c2211>] ? blkdev_open+0x71/0xc0
Jan 26 15:04:03 sint kernel: [<ffffffff81184fe3>] ? __dentry_open+0x113/0x330
Jan 26 15:04:03 sint kernel: [<ffffffff8121f248>] ? devcgroup_inode_permission+0x48/0x50
Jan 26 15:04:03 sint kernel: [<ffffffff8118796a>] do_sync_read+0xfa/0x140
Jan 26 15:04:03 sint kernel: [<ffffffff81198ae2>] ? user_path_at+0x62/0xa0
Jan 26 15:04:03 sint kernel: [<ffffffff81092a10>] ? autoremove_wake_function+0x0/0x40
Jan 26 15:04:03 sint kernel: [<ffffffff811c0ccc>] ? block_ioctl+0x3c/0x40
Jan 26 15:04:03 sint kernel: [<ffffffff8119b0f2>] ? vfs_ioctl+0x22/0xa0
Jan 26 15:04:03 sint kernel: [<ffffffff8119b29a>] ? do_vfs_ioctl+0x8a/0x5d0
Jan 26 15:04:03 sint kernel: [<ffffffff81188375>] vfs_read+0xb5/0x1a0
Jan 26 15:04:03 sint kernel: [<ffffffff811884b1>] sys_read+0x51/0x90
Jan 26 15:04:03 sint kernel: [<ffffffff8100b242>] system_call_fastpath+0x16/0x1b
Jan 26 15:06:03 sint kernel: flush-253:4 D ffff88036b538580 0 1088 2 0 0x00000000
Jan 26 15:06:03 sint kernel: ffff880363e9d950 0000000000000046 0000000000000000 0000000000000001
Jan 26 15:06:03 sint kernel: ffff88036b513cc0 0000000000000001 ffffea0018fe0800 000000011a9fa553
Jan 26 15:06:03 sint kernel: ffff88036b538b48 ffff880363e9dfd8 000000000000f788 ffff88036b538b48
Jan 26 15:06:03 sint kernel: Call Trace:
Jan 26 15:06:03 sint kernel: [<ffffffff811bbb20>] ? end_buffer_async_write+0x0/0x180
Jan 26 15:06:03 sint kernel: [<ffffffff8111dcb0>] ? sync_page+0x0/0x50
Jan 26 15:06:03 sint kernel: [<ffffffff814e80d3>] io_schedule+0x73/0xc0
Jan 26 15:06:03 sint kernel: [<ffffffff8111dced>] sync_page+0x3d/0x50
Jan 26 15:06:03 sint kernel: [<ffffffff814e87fa>] __wait_on_bit_lock+0x5a/0xc0
Jan 26 15:06:03 sint kernel: [<ffffffff8111dc87>] __lock_page+0x67/0x70
Jan 26 15:06:03 sint kernel: [<ffffffff81092a50>] ? wake_bit_function+0x0/0x40
Jan 26 15:06:03 sint kernel: [<ffffffff8113466a>] write_cache_pages+0x36a/0x480
Jan 26 15:06:03 sint kernel: [<ffffffff81132f10>] ? __writepage+0x0/0x40
Jan 26 15:06:03 sint kernel: [<ffffffff811347a4>] generic_writepages+0x24/0x30
Jan 26 15:06:03 sint kernel: [<ffffffff811347e5>] do_writepages+0x35/0x40
Jan 26 15:06:03 sint kernel: [<ffffffff811b280d>] __writeback_single_inode+0xdd/0x2c0
Jan 26 15:06:03 sint kernel: [<ffffffff811b2a73>] writeback_single_inode+0x83/0xc0
Jan 26 15:06:03 sint kernel: [<ffffffff811a26c0>] ? iput+0x30/0x70
Jan 26 15:06:03 sint kernel: [<ffffffff811b2cd6>] writeback_sb_inodes+0xe6/0x1a0
Jan 26 15:06:03 sint kernel: [<ffffffff811b2e3b>] writeback_inodes_wb+0xab/0x1b0
Jan 26 15:06:03 sint kernel: [<ffffffff811b31eb>] wb_writeback+0x2ab/0x400
Jan 26 15:06:03 sint kernel: [<ffffffff814e78a5>] ? thread_return+0x4e/0x809
Jan 26 15:06:03 sint kernel: [<ffffffff811b34e9>] wb_do_writeback+0x1a9/0x250
Jan 26 15:06:03 sint kernel: [<ffffffff8107b800>] ? process_timeout+0x0/0x10
Jan 26 15:06:03 sint kernel: [<ffffffff811b35f3>] bdi_writeback_task+0x63/0x1b0
Jan 26 15:06:03 sint kernel: [<ffffffff810928e7>] ? bit_waitqueue+0x17/0xc0
Jan 26 15:06:03 sint kernel: [<ffffffff811473e0>] ? bdi_start_fn+0x0/0x100
Jan 26 15:06:03 sint kernel: [<ffffffff81147466>] bdi_start_fn+0x86/0x100
Jan 26 15:06:03 sint kernel: [<ffffffff811473e0>] ? bdi_start_fn+0x0/0x100
Jan 26 15:06:03 sint kernel: [<ffffffff81092436>] kthread+0x96/0xa0
Jan 26 15:06:03 sint kernel: [<ffffffff8100c2ca>] child_rip+0xa/0x20
Jan 26 15:06:03 sint kernel: [<ffffffff810923a0>] ? kthread+0x0/0xa0
Jan 26 15:06:03 sint kernel: [<ffffffff8100c2c0>] ? child_rip+0x0/0x20
Jan 26 15:06:03 sint kernel: kjournald D ffff88036cc748c0 0 1139 2 0 0x00000000
Jan 26 15:06:03 sint kernel: ffff88036cc97c40 0000000000000046 0000000000000000 ffffffff813f4cac
Jan 26 15:06:03 sint kernel: ffff88036cc97bb0 ffffffff81012959 ffff88036cc97bf0 000000011a9e56d5
Jan 26 15:06:03 sint kernel: ffff88036cc74e88 ffff88036cc97fd8 000000000000f788 ffff88036cc74e88
Jan 26 15:06:03 sint kernel: Call Trace:
Jan 26 15:06:03 sint kernel: [<ffffffff813f4cac>] ? dm_table_unplug_all+0x5c/0xd0
Jan 26 15:06:03 sint kernel: [<ffffffff81012959>] ? read_tsc+0x9/0x20
Jan 26 15:06:03 sint kernel: [<ffffffff811bb720>] ? sync_buffer+0x0/0x50
Jan 26 15:06:03 sint kernel: [<ffffffff814e80d3>] io_schedule+0x73/0xc0
Jan 26 15:06:03 sint kernel: [<ffffffff811bb765>] sync_buffer+0x45/0x50
Jan 26 15:06:03 sint kernel: [<ffffffff814e894f>] __wait_on_bit+0x5f/0x90
Jan 26 15:06:03 sint kernel: [<ffffffff811bb720>] ? sync_buffer+0x0/0x50
Jan 26 15:06:03 sint kernel: [<ffffffff814e89f8>] out_of_line_wait_on_bit+0x78/0x90
Jan 26 15:06:03 sint kernel: [<ffffffff81092a50>] ? wake_bit_function+0x0/0x40
Jan 26 15:06:03 sint kernel: [<ffffffffa00892c1>] ? __journal_file_buffer+0xd1/0x230 [jbd]
Jan 26 15:06:03 sint kernel: [<ffffffff811bb716>] __wait_on_buffer+0x26/0x30
Jan 26 15:06:03 sint kernel: [<ffffffffa008beee>] journal_commit_transaction+0x9ce/0x1130 [jbd]
Jan 26 15:06:03 sint kernel: [<ffffffff8107b6ec>] ? lock_timer_base+0x3c/0x70
Jan 26 15:06:03 sint kernel: [<ffffffff8107c34b>] ? try_to_del_timer_sync+0x7b/0xe0
Jan 26 15:06:03 sint kernel: [<ffffffffa008efc8>] kjournald+0xe8/0x250 [jbd]
Jan 26 15:06:03 sint kernel: [<ffffffff81092a10>] ? autoremove_wake_function+0x0/0x40
Jan 26 15:06:03 sint kernel: [<ffffffffa008eee0>] ? kjournald+0x0/0x250 [jbd]
Jan 26 15:06:03 sint kernel: [<ffffffff81092436>] kthread+0x96/0xa0
Jan 26 15:06:03 sint kernel: [<ffffffff8100c2ca>] child_rip+0xa/0x20
Jan 26 15:06:03 sint kernel: [<ffffffff810923a0>] ? kthread+0x0/0xa0
Jan 26 15:06:03 sint kernel: [<ffffffff8100c2c0>] ? child_rip+0x0/0x20
Jan 26 15:06:03 sint kernel: syslogd D ffff88036314e300 0 4834 4109 102 0x00020004
Jan 26 15:06:03 sint kernel: ffff880363febcc8 0000000000000086 ffff880363febc68 ffff88066b987198
Jan 26 15:06:03 sint kernel: 0000000000000001 ffff880363febd58 ffff880363febdd0 0000000000000282
Jan 26 15:06:03 sint kernel: ffff88036314e8c8 ffff880363febfd8 000000000000f788 ffff88036314e8c8
Jan 26 15:06:03 sint kernel: Call Trace:
Jan 26 15:06:03 sint kernel: [<ffffffff8109d3c9>] ? ktime_get_ts+0xa9/0xe0
Jan 26 15:06:03 sint kernel: [<ffffffff8111dcb0>] ? sync_page+0x0/0x50
Jan 26 15:06:03 sint kernel: [<ffffffff814e80d3>] io_schedule+0x73/0xc0
Jan 26 15:06:03 sint kernel: [<ffffffff8111dced>] sync_page+0x3d/0x50
Jan 26 15:06:03 sint kernel: [<ffffffff814e894f>] __wait_on_bit+0x5f/0x90
Jan 26 15:06:03 sint kernel: [<ffffffff8111dea3>] wait_on_page_bit+0x73/0x80
Jan 26 15:06:03 sint kernel: [<ffffffff81092a50>] ? wake_bit_function+0x0/0x40
Jan 26 15:06:03 sint kernel: [<ffffffff81135765>] ? pagevec_lookup_tag+0x25/0x40
 
I've run into a very similar problem while stress-testing a W2k8 guest. If a backup is done while iozone is running in the guest, the backup task hangs when it tries to remove the snapshot volume with "lvremove". When idle, the same guest can be backed up without any problems. In my setup, the problem is reproducible.

I think this is related to the following Debian problem report: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=659762

The logical volume of the VM is left suspended, so all processes accessing it are blocked in "D" state. This can be checked with "dmsetup info -c". If an "s" appears in the Stat field, the volume is still suspended.

Manual recovery is possible with "dmsetup resume <logical volume name>". Both backup task and guest OS then continue their work.
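A quick sketch of what that check and recovery look like on the command line (the device name below is only a placeholder, use the exact name that "dmsetup info -c" prints):
Code:
# List all device-mapper devices; an "s" in the Stat column means suspended.
dmsetup info -c
# Resume the suspended volume. Placeholder name: device-mapper joins VG and
# LV with a dash and doubles any dashes inside the names.
dmsetup resume vmdisks-vzsnap--sint--0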
 
This can happen if the snapshot runs out of space.

When the snapshot is created it has a specific size; the default is 1 GB.
As data is written to the origin volume, a copy-on-write (CoW) of the original blocks happens from the volume into the snapshot.
If the snapshot is too small, you run out of space for the CoW data and the snapshot becomes unusable.

You can increase the size of the snapshot by editing /etc/vzdump.conf.
With this setting, the VM can write 15 GB before the snapshot stops working:
Code:
size: 15000

Also, make sure you have enough free space in your LVM volume group for whatever size you pick, or creating the snapshot will fail.
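A quick way to check that up front is to look at the free space in the volume group before picking a size; a sketch (the VG name "pve" is the usual default, adjust it if your storage uses a different VG):
Code:
# Show total and free space of the volume group in gigabytes.
vgs --units g pve
# Just the free space, without headers (handy for scripts).
vgs --noheadings --units g -o vg_free pve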
 
I had already increased my snapshot volume size to 12 GiB. Now I've dumped the LV stats every second until it hangs. The last output from lvdisplay is:

Code:
# open                 1
LV Size                40.00 GiB
Current LE             10240
COW-table size         12.00 GiB
COW-table LE           3072
Allocated to snapshot  28.76%
Snapshot chunk size    4.00 KiB

The volume group is 2 TB and has 1.8 TB free, so free space should not be an issue.
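For anyone who wants to collect the same data, a minimal sketch of such a per-second polling loop (the snapshot LV path is a placeholder, take the real one from "lvs" while the backup is running):
Code:
#!/bin/bash
# Log the snapshot fill level once per second until interrupted.
SNAP_LV="/dev/vmdisks/vzsnap"   # placeholder path, adjust to your setup
while true; do
    date
    lvdisplay "$SNAP_LV" | grep -E 'LV Size|COW-table size|Allocated to snapshot'
    sleep 1
done >> /root/snap-stats.log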
 
Today I again had a problem with the backup, and all LVM-related commands hang.
 
That does not really answer my question.
Sorry, I thought it would.
Yes, everything is updated; "aptitude update; aptitude full-upgrade" doesn't show any updates.

Code:
root@timo:~# pveversion -v
pve-manager: 2.0-38 (pve-manager/2.0/af81df02)
running kernel: 2.6.32-7-pve
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-6-pve: 2.6.32-55
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-23
qemu-server: 2.0-25
pve-firmware: 1.0-15
libpve-common-perl: 1.0-17
libpve-access-control: 1.0-17
libpve-storage-perl: 2.0-12
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-5
ksm-control-daemon: 1.1-1

Code:
root@timo:~# uname -a
Linux timo 2.6.32-7-pve #1 SMP Thu Feb 16 09:00:32 CET 2012 x86_64 GNU/Linux
root@timo:~# cat /proc/version
Linux version 2.6.32-7-pve (root@maui) (gcc version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Thu Feb 16 09:00:32 CET 2012
 
Was this system installed using a beta Proxmox CD? If so, then there may not be enough space on the LVM volume group.

What is the output of:
vgs

This is ours on an RC1 system:
Code:
vgs
  VG   #PV #LV #SN Attr   VSize VFree 
  pve    1   3   0 wz--n- 1.82t 16.00g
 
@bread-baker:
Yes, but it was one of the first releases:
Code:
root@timo:~# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  pve       1   3   0 wz--n- 299.50g   4.00g
  vmdisks   1   4   0 wz--n- 999.99g 517.98g

As you can see, it is only 4 GB.

If we reinstall the node with the new RC1 ISO, will the free size be larger than 4 GB? We see on our 1.9 systems that 4 GB is too small for busy containers.

@dietmar:
Code:
root@timo:~# lvs
  LV            VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  data          pve     -wi-ao 197.50g
  root          pve     -wi-ao  75.00g
  swap          pve     -wi-ao  23.00g
  vm-200-disk-1 vmdisks -wi--- 150.00g
  vm-201-disk-1 vmdisks -wi--- 150.01g
  vm-202-disk-1 vmdisks -wi-a- 150.00g
  vm-203-disk-1 vmdisks -wi---  32.00g
 
For us, reinstalling on Proxmox beta systems was the easiest way to increase free volume group space.

Currently, vzdump is using only 1GB snapshot space by default (size parameter, see 'man vzdump'). Please try to use 'lvs' during the backup to see if the snapshot runs out of space.
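A sketch of how that monitoring can look while the backup runs (watch the Snap% column of the vzdump snapshot volume; its name usually starts with "vzsnap", but verify with a plain "lvs" first):
Code:
# Refresh the snapshot usage every two seconds; watch the Snap% column.
watch -n 2 "lvs -o lv_name,vg_name,lv_size,origin,snap_percent"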
 
Currently, vzdump is using only 1GB snapshot space by default (size parameter, see 'man vzdump'). Please try to use 'lvs' during the backup to see if the snapshot runs out of space.
We currently have a size of 4000:

Code:
root@timo:~# tail /etc/vzdump.conf
#bwlimit: KBPS
#ionize: PRI
#lockwait: MINUTES
#stopwait: MINUTES
size: 4000
maxfiles: 2
#size: MB
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST



For us, reinstalling on Proxmox beta systems was the easiest way to increase free volume group space.
What is the current default free space?
 
I have had very similar issues on my boxes - and have not found a conclusive fix.

In my case I assume the issue is the LSI MegaRAID controllers.

The I/O timeout issues seem to have many bug reports at Red Hat relating to the 120-second timeout. Changing the disk scheduler to noop helps... but in short, it still sucks!
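For what it's worth, a sketch of how the scheduler change can be done (replace sda with your device; the GRUB part assumes a standard Debian GRUB 2 setup):
Code:
# Show the available schedulers; the active one is in brackets.
cat /sys/block/sda/queue/scheduler
# Switch to noop at runtime for this device.
echo noop > /sys/block/sda/queue/scheduler
# To make it persistent, add "elevator=noop" to GRUB_CMDLINE_LINUX in
# /etc/default/grub and run update-grub.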

Simply running a dd if=/dev/zero of=out.img... can cause it to get hung up on I/O waits.

My current RAID controllers have no RAM cache on board, so I am changing them all to LSI MegaRAIDs with 500 MB cache and BBU. Hopefully that will solve the problem.

In short, a fix would be wonderful!

Currently, just doing a vzdump backup hangs the system!

Rob
 
