iSCSI Storage is not online

Extcee

Hi all.

I'm wondering if you can help me?

Scenario:
I have a 3-node cluster (the 3rd node has RGManager stopped; it is just a quorum machine).
All 3 nodes are connected to a Dell MD3200i SAN using iscsiadm.
I have VM images stored on the SAN (formatted as Linux LVM).
The VMs are active and running, however over the last few days my backups have been failing..

When trying to browse to my LVM image storage or the iSCSI target (tgh-md3200-01), I get an Internal Server Error (500).

In my backups I get:

ERROR: Backup of VM 405 failed - storage 'tgh-md3200-01' is not online
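
From searching around, I gather this "not online" message just means Proxmox's periodic storage check cannot reach the iSCSI portal, so I have been sanity-checking basic connectivity with something like the following (the portal address is the one from my storage.cfg, posted further down):

ping -c 3 10.10.142.101      # basic reachability of the SAN portal
nc -zv 10.10.142.101 3260    # is the iSCSI port answering?
iscsiadm -m session          # list the active iSCSI sessions on this node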

PVE status:
root@proxnode3:~# pvecm status
Version: 6.2.0
Config Version: 29
Cluster Name: proxve
Cluster Id: 7113
Cluster Member: Yes
Cluster Generation: 1016
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxnode3
Node ID: 3
Multicast addresses: 239.192.27.228
Node addresses: 10.10.1.53

root@proxnode3:~# pvesm status
storage 'tgh-md3200-01' is not online
storage 'tgh-md3200-01' is not online
ProxSANImages lvm 0 0 0 0 100.00%
local dir 1 841194904 1052688 840142216 0.63%
tgh-md3200-01 iscsi 0 0 0 0 100.00%



Node3 (whose output is shown above) has no crucial data on it, so I have done a full system reboot; however, this has not helped.

Does anyone have any ideas?

I must note that the iSCSI target has been working fine over the last few months...

Thanks in advance... much appreciated.
 
Post your storage.cfg:

> cat /etc/pve/storage.cfg

Which version do you run (pveversion -v)? Did you upgrade recently? Do you use multipath?
 
Also, I forgot this earlier.

Output of multipath -ll:

mpathd (36782bcb00024b4c80000386f404ffc35) dm-3 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:62 sdd 8:48 active ready running
|- 9:0:0:62 sdj 8:144 active ready running
|- 5:0:0:62 sdk 8:160 active ready running
`- 8:0:0:62 sdm 8:192 active ready running
mpathc (36782bcb00024b4c800003b074059617f) dm-4 DELL,MD32xxi
size=3.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:51 sdb 8:16 active ready running
|- 9:0:0:51 sde 8:64 active ready running
|- 5:0:0:51 sdf 8:80 active ready running
`- 8:0:0:51 sdh 8:112 active ready running
mpathb (36782bcb00024b88100004b2751edefb6) dm-5 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 4:0:0:61 sdc 8:32 active ready running
|- 9:0:0:61 sdg 8:96 active ready running
|- 5:0:0:61 sdi 8:128 active ready running
`- 8:0:0:61 sdl 8:176 active ready running
 
Hi Tom.

Here's the output:

root@proxnode3:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        shared
        content images,iso,vztmpl,backup,rootdir
        maxfiles 4

dir: ProxCT01
        path /mnt/ProxCT01
        content iso,vztmpl,backup,rootdir
        maxfiles 0
        nodes proxnode1

dir: ProxCT02
        path /mnt/ProxCT02
        content iso,vztmpl,rootdir,backup
        maxfiles 0
        nodes proxnode2

iscsi: tgh-md3200-01
        target iqn.1984-05.com.dell:powervault.md3200i.6782bcb00024b8810000000051024745
        portal 10.10.142.101
        content none

lvm: ProxSANImages
        vgname ProxSANImages
        base tgh-md3200-01:0.0.51.scsi-mpathh
        shared
        content images
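
The ProxSANImages LVM storage sits on top of the iSCSI LUN via the base line above, so (if it is any use) the volume group itself can also be checked directly with the standard LVM tools:

pvs                  # should list the multipath device backing the VG
vgs ProxSANImages    # the volume group that holds the VM disks
lvs ProxSANImages    # the individual vm-XXX-disk-N volumes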
 
From node1 (where the 3 TB LUN shows up as mpathh):

root@proxnode1:~# cat /etc/pve/storage.cfg
(identical to the proxnode3 output above - /etc/pve/storage.cfg is shared cluster-wide)


root@proxnode1:~# multipath -ll
Sep 27 10:39:02 | multipath.conf +2, invalid keyword: {
Sep 27 10:39:02 | multipath.conf +23, invalid keyword: devices
Sep 27 10:39:02 | multipath.conf +24, invalid keyword: device
Sep 27 10:39:02 | multipath.conf +25, invalid keyword: vendor
Sep 27 10:39:02 | multipath.conf +26, invalid keyword: product
Sep 27 10:39:02 | multipath.conf +27, invalid keyword: path_grouping_policy
Sep 27 10:39:02 | multipath.conf +28, invalid keyword: prio
Sep 27 10:39:02 | multipath.conf +29, invalid keyword: polling_interval
Sep 27 10:39:02 | multipath.conf +30, invalid keyword: path_checker
Sep 27 10:39:02 | multipath.conf +31, invalid keyword: path_selector
Sep 27 10:39:02 | multipath.conf +32, invalid keyword: hardware_handler
Sep 27 10:39:02 | multipath.conf +33, invalid keyword: failback
Sep 27 10:39:02 | multipath.conf +34, invalid keyword: features
Sep 27 10:39:02 | multipath.conf +35, invalid keyword: no_path_retry
Sep 27 10:39:02 | multipath.conf +36, invalid keyword: rr_min_io
Sep 27 10:39:02 | multipath.conf +37, invalid keyword: prio_callout
Sep 27 10:39:02 | multipath.conf +39, invalid keyword: }
mpathd (36782bcb00024b4c80000386f404ffc35) dm-5 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 12:0:0:62 sdy 65:128 active ready running
| |- 11:0:0:62 sdx 65:112 active ready running
| |- 5:0:0:62 sdf 8:80 active ready running
| `- 8:0:0:62 sdm 8:192 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 6:0:0:62 sds 65:32 active ghost running
|- 7:0:0:62 sdq 65:0 active ghost running
|- 9:0:0:62 sdr 65:16 failed faulty running
`- 10:0:0:62 sdn 8:208 active ghost running
mpathb (36782bcb00024b88100004b2751edefb6) dm-3 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 9:0:0:61 sdp 8:240 failed faulty running
| |- 7:0:0:61 sdk 8:160 active ghost running
| |- 6:0:0:61 sdo 8:224 active ghost running
| `- 10:0:0:61 sdj 8:144 active ghost running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 12:0:0:61 sdw 65:96 active ready running
|- 11:0:0:61 sdv 65:80 active ready running
|- 5:0:0:61 sdc 8:32 active ready running
`- 8:0:0:61 sdl 8:176 active ready running
mpathh (36782bcb00024b4c800003b074059617f) dm-4 DELL,MD32xxi
size=3.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 12:0:0:51 sdu 65:64 active ready running
| |- 11:0:0:51 sdt 65:48 active ready running
| |- 5:0:0:51 sdb 8:16 active ready running
| `- 8:0:0:51 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 6:0:0:51 sdh 8:112 active ghost running
|- 7:0:0:51 sdg 8:96 active ghost running
|- 9:0:0:51 sdi 8:128 failed faulty running
`- 10:0:0:51 sde 8:64 active ghost running
root@proxnode1:~#
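
Side note: from what I have read, the "active ghost" paths on these RDAC arrays are just the paths through the passive controller and are normal, but the "failed faulty" entries look like genuinely dead paths. I believe a re-scan can be forced with something like:

iscsiadm -m session --rescan     # re-scan all logged-in iSCSI sessions
multipathd -k"show paths"        # query multipathd for the per-path state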
 
post the VM config of 405:

> qm config 405
 
Hi Tom.

root@proxnode1:~# qm config 405
balloon: 4096
bootdisk: virtio0
cores: 2
memory: 8192
name: tgh-zms-01
net0: virtio=3A:79:AB:38:01:86,bridge=vmbr0
ostype: l26
sockets: 2
virtio0: ProxSANImages:vm-405-disk-1,cache=writethrough,size=150G
root@proxnode1:~#
 
...
root@proxnode1:~# multipath -ll
Sep 27 10:39:02 | multipath.conf +2, invalid keyword: {
[... same invalid-keyword errors as in the output above ...]
Sep 27 10:39:02 | multipath.conf +39, invalid keyword: }
..

These errors do not look good - it seems there is an issue with your multipath config. Correct it or post it here.
(see also http://pve.proxmox.com/wiki/ISCSI_Multipath#Dell)

And you still have not answered these two questions:
  1. Which version do you run (pveversion -v)?
  2. Did you upgrade recently?
 
Ahh, thanks Tom.

I am aware of the multipath config errors, but they don't seem to be causing me an issue (as of yet).

root@proxnode1:~# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1


I upgraded to Proxmox V3.1 when it dropped a few weeks back, but this issue has only appeared in the last week or so?

Thanks a lot.
 
Updated my multipath.conf on my "spare" node and I still get a few errors:

root@proxnode3:~# multipath -ll
Sep 27 14:32:55 | multipath.conf +3, invalid keyword: selector
Sep 27 14:32:55 | multipath.conf +17, invalid keyword: polling_interval
36782bcb00024b88100004b2751edefb6 dm-5 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 4:0:0:61 sdc 8:32 active ready running
|- 5:0:0:61 sdf 8:80 active ready running
|- 6:0:0:61 sdj 8:144 active ready running
`- 7:0:0:61 sdk 8:160 active ready running
36782bcb00024b4c800003b074059617f dm-3 DELL,MD32xxi
size=3.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:51 sdb 8:16 active ready running
|- 5:0:0:51 sde 8:64 active ready running
|- 6:0:0:51 sdh 8:112 active ready running
`- 7:0:0:51 sdi 8:128 active ready running
36782bcb00024b4c80000386f404ffc35 dm-4 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:62 sdd 8:48 active ready running
|- 5:0:0:62 sdg 8:96 active ready running
|- 7:0:0:62 sdm 8:192 active ready running
`- 6:0:0:62 sdl 8:176 active ready running
root@proxnode3:~#

multipath.conf:

defaults {
        polling_interval 2
        selector "round-robin 0"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io 100
        failback immediate
        no_path_retry queue
}

devices {
        device {
                vendor "DELL"
                product "MD32xxi"
                path_grouping_policy group_by_prio
                prio rdac
                polling_interval 5
                path_checker rdac
                path_selector "round-robin 0"
                hardware_handler "1 rdac"
                failback immediate
                features "2 pg_init_retries 50"
                no_path_retry 30
                rr_min_io 100
        }
}

multipaths {
        multipath {
                wwid "6782bcb00024b4c80000380d404ff3d0"
                alias tgh-md3200-01
        }
        multipath {
                wwid "6782bcb00024b88100004b2751edefb6"
                alias tgh-md3200-01-proxct01
        }
        multipath {
                wwid "6782bcb00024b4c80000386f404ffc35"
                alias tgh-md3200-01-proxct02
        }
}
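
One thing I was unsure about: the wwid values in the multipaths section presumably have to match exactly what the array reports. I believe they can be read back with the same helper that the getuid_callout line uses, e.g.:

/lib/udev/scsi_id -g -u -d /dev/sdb    # print the WWID of one underlying path device
multipath -ll | grep DELL              # shows each WWID next to its multipath device

Note that multipath -ll above prints the ids with a leading 3 (36782bcb...), so I assume the wwid lines may need to match that full string.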
 
Hi Tom.

I think I've got it.. Was it the surrounding quotes on the wwid?

I have not included blacklist exceptions (yet)..

Rebooting the node now to test... Will report back imminently.
 
Ahh.. I see.. In the Dell-specific settings on the wiki it shows:

defaults {
        polling_interval 2
        selector "round-robin 0"
        path_grouping_policy multibus

whereas above it shows:

defaults {
        polling_interval 2
        patch_selector "round-robin 0"
        path_grouping_policy multibus

Updated on one of my nodes..

I did /etc/init.d/multipath-tools restart and it worked straight away.

Thanks a lot.. really appreciate it.

Can someone update the wiki when possible, to help others?
 
> Ahh.. I see.. In the Dell-specific settings on the wiki it shows: [...] Can someone update the wiki when possible, to help others?

Thanks for the feedback, glad to help.
The wiki is already up to date.
 
@extcee, in your post you wrote
"patch_selector "round-robin 0""

Just for other readers: obviously it was intended as it is in the wiki, i.e.:

Ahh.. I see.. In the Dell-specific settings on the wiki it shows:

defaults {
        polling_interval 2
        selector "round-robin 0"
        path_grouping_policy multibus

whereas above it shows:

defaults {
        polling_interval 2
        path_selector "round-robin 0"
        path_grouping_policy multibus

Updated on one of my nodes..

Marco
 
Just updated my nodes with the correct config and I am still getting errors..

EDIT - the old WWID was still in the conf.

Could you just look over the following, if possible, and see if it is correct?

root@proxnode3:~# cat /etc/multipath.conf
defaults {
        polling_interval 2
        path_selector "round-robin 0"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io 100
        failback immediate
        no_path_retry queue
}

devices {
        device {
                vendor "DELL"
                product "MD32xxi"
                path_grouping_policy group_by_prio
                prio rdac
                polling_interval 5
                path_checker rdac
                path_selector "round-robin 0"
                hardware_handler "1 rdac"
                failback immediate
                features "2 pg_init_retries 50"
                no_path_retry 30
                rr_min_io 100
        }
}

multipaths {
        multipath {
                wwid 6782bcb00024b4c80000380d404ff3d0
                alias tgh-md3200-01
        }
        multipath {
                wwid 6782bcb00024b88100004b2751edefb6
                alias tgh-md3200-01-proxct01
        }
        multipath {
                wwid 6782bcb00024b4c80000386f404ffc35
                alias tgh-md3200-01-proxct02
        }
}
root@proxnode3:~# multipath -ll
Sep 27 16:41:17 | multipath.conf +16, invalid keyword: polling_interval
36782bcb00024b88100004b2751edefb6 dm-5 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:0:61 sdh 8:112 active ghost running
|- 4:0:0:61 sdc 8:32 active ghost running
|- 6:0:0:61 sdi 8:128 active ghost running
`- 5:0:0:61 sdj 8:144 active ghost running
36782bcb00024b4c800003b074059617f dm-4 DELL,MD32xxi
size=3.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:51 sdb 8:16 active ready running
|- 7:0:0:51 sde 8:64 active ready running
|- 6:0:0:51 sdf 8:80 active ready running
`- 5:0:0:51 sdg 8:96 active ready running
36782bcb00024b4c80000386f404ffc35 dm-3 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:62 sdd 8:48 active ready running
|- 7:0:0:62 sdk 8:160 active ready running
|- 6:0:0:62 sdl 8:176 active ready running
`- 5:0:0:62 sdm 8:192 active ready running
root@proxnode3:~# tail /var/log/syslog
Sep 27 16:40:33 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:40:35 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:40:43 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:40:45 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:40:53 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:40:55 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:41:03 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:41:05 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:41:13 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 16:41:15 proxnode3 pvestatd[3313]: WARNING: storage 'tgh-md3200-01' is not online
root@proxnode3:~#



EDIT:

It turns out the WWID of tgh-md3200-01 was different from the one in the Dell MDM..

I updated the WWID in /etc/multipath.conf and ran /etc/init.d/multipath-tools restart, but I am still getting the error.. Do I need to reboot the nodes?
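
For what it's worth, from what I have read a full reboot should not be needed after editing multipath.conf - the maps can be flushed and rebuilt in place, something like:

/etc/init.d/multipath-tools restart    # restart the daemon so it re-reads multipath.conf
multipath -F                           # flush all unused multipath maps
multipath -v2                          # re-create the maps (prints what it builds)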
 
Still getting the Internal Server Error (500).

For my sanity, this is my multipath.conf (I have replicated it to all 3 nodes) and I have restarted the 3rd node, but I am still having issues.

root@proxnode3:~# cat /etc/multipath.conf
defaults {
        polling_interval 2
        path_selector "round-robin 0"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io 100
        failback immediate
        no_path_retry queue
}

devices {
        device {
                vendor "DELL"
                product "MD32xxi"
                path_grouping_policy group_by_prio
                prio rdac
                polling_interval 5
                path_checker rdac
                path_selector "round-robin 0"
                hardware_handler "1 rdac"
                failback immediate
                features "2 pg_init_retries 50"
                no_path_retry 30
                rr_min_io 100
        }
}

multipaths {
        multipath {
                wwid 6782bcb00024b4c800003b074059617f
                alias tgh-md3200-01
        }
        multipath {
                wwid 6782bcb00024b88100004b2751edefb6
                alias tgh-md3200-01-proxct01
        }
        multipath {
                wwid 6782bcb00024b4c80000386f404ffc35
                alias tgh-md3200-01-proxct02
        }
}

Output of multipath -ll:

root@proxnode3:~# multipath -ll
Sep 27 17:20:07 | multipath.conf +17, invalid keyword: polling_interval
36782bcb00024b88100004b2751edefb6 dm-5 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:0:61 sdk 8:160 active ghost running
|- 4:0:0:61 sdc 8:32 active ghost running
|- 5:0:0:61 sdf 8:80 active ghost running
`- 6:0:0:61 sdj 8:144 active ghost running
36782bcb00024b4c800003b074059617f dm-3 DELL,MD32xxi
size=3.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:51 sdb 8:16 active ready running
|- 5:0:0:51 sde 8:64 active ready running
|- 6:0:0:51 sdh 8:112 active ready running
`- 7:0:0:51 sdi 8:128 active ready running
36782bcb00024b4c80000386f404ffc35 dm-4 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 4:0:0:62 sdd 8:48 active ready running
|- 5:0:0:62 sdg 8:96 active ready running
|- 6:0:0:62 sdl 8:176 active ready running
`- 7:0:0:62 sdm 8:192 active ready running
root@proxnode3:~#


root@proxnode3:~# tail /var/log/syslog
Sep 27 17:19:44 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:19:46 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:19:54 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:19:56 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:20:04 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:20:06 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:20:14 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:20:16 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:20:24 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online
Sep 27 17:20:26 proxnode3 pvestatd[3312]: WARNING: storage 'tgh-md3200-01' is not online




What the hell am I missing, guys?

Thanks as always..
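
In case it helps diagnosis: since the multipath output now looks clean, I assume the remaining "not online" warning is pvestatd's own portal check failing, so I am re-testing reachability from each node and giving the status daemon a kick:

nc -zv 10.10.142.101 3260    # can this node reach the iSCSI portal port?
service pvestatd restart     # restart the PVE status daemon so it re-checks storage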
 
