Reinstalled node doesn't start OSDs after reboot

lifeboy

Renowned Member
After getting the node up and running again (https://forum.proxmox.com/threads/i...it-was-actually-noapic-that-was-needed.43720/), I now have a problem that has been reported a few times elsewhere, but not in the systemd-based version of Proxmox. After a reboot, none of the 9 OSDs on this server start.

If I mount the OSD manually like below, it starts:
# mount -o "rw,noatime,attr2,inode64,noquota" /dev/cciss/c0d7p1 /var/lib/ceph/osd/ceph-7
# ceph-disk trigger /dev/cciss/c0d7p2

However, I can't find out how to coax systemd into doing this at startup. Can someone help me please?
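
For now I repeat this by hand for every OSD. A rough loop like the one below (just my own sketch, assuming ceph-disk's activate subcommand and that the data partition is always p1 on these cciss devices) is what I have in mind as a stop-gap:
Code:
# sketch only, not taken from my actual setup: try to activate every first
# partition on the cciss controller; ceph-disk activate mounts a ceph data
# partition and starts the matching ceph-osd@ID.service, and simply errors
# out on partitions that are not ceph data
for part in /dev/cciss/c0d*p1; do
    ceph-disk activate "$part" || echo "skipped $part"
done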

Code:
# pveversion -v
proxmox-ve: 4.4-111 (running kernel: 4.4.128-1-pve)
pve-manager: 4.4-24 (running version: 4.4-24/08ba4d2d)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.117-2-pve: 4.4.117-110
pve-kernel-4.4.128-1-pve: 4.4.128-111
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+2
libqb0: 1.0.1-1
pve-cluster: 4.0-55
qemu-server: 4.0-115
pve-firmware: 1.1-12
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.1-9~pve4
pve-container: 1.0-106
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.8-2~pve4
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
ceph: 10.2.10-1~bpo80+1

Code:
# ceph-disk list
...
/dev/cciss/c0d7 :
 /dev/cciss/c0d7p2 ceph journal, for /dev/cciss/c0d7p1
 /dev/cciss/c0d7p1 ceph data, active, unknown cluster a6092407-216f-41ff-bccb-9bed78587ac3, osd.7, journal /dev/cciss/c0d7p2
...
I'm focusing on only one OSD here and will apply the solution to all the others.
 
You need to check what is going on with the ceph-osd.target; this is the unit responsible for activating the OSDs.

Also note that PVE 4.x will be EoL at the end of June!
 
Code:
# systemctl cat ceph-osd.target
# /lib/systemd/system/ceph-osd.target
[Unit]
Description=ceph target allowing to start/stop all ceph-osd@.service instances at once
PartOf=ceph.target
[Install]
WantedBy=multi-user.target ceph.target

# systemctl cat ceph.target
# /lib/systemd/system/ceph.target
[Unit]
Description=ceph target allowing to start/stop all ceph*@.service instances at once
[Install]
WantedBy=multi-user.target

However, I'm none the wiser, since I don't know what should be "going on" with these. Can you please give more detail?

This is a straightforward install; it should just work, shouldn't it?
 
No, there is a misunderstanding. I meant: check the status of the ceph-osd.target, and if that shows nothing, then check the syslog/journal.
 
I still don't understand what you mean by "check the status of the ceph-osd.target". If you mean osd.7, then it is mounted and running, because I mounted it manually and started it.

I've attached the log, since I don't really know what I'm looking for in there. Can you see something that gives an indication of why the OSD doesn't get mounted and started?
 

Attachments

  • ceph-osd.7.log.zip
    126.8 KB
Code:
systemctl status ceph-osd.target

cat /var/log/syslog
journalctl -xb
 
Code:
# systemctl status ceph-osd.target
● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
   Loaded: loaded (/lib/systemd/system/ceph-osd.target; enabled)
   Active: active since Sat 2018-06-09 16:14:56 SAST; 1 day 21h ago

Jun 09 16:14:56 hp1 systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.

# cat /var/log/syslog
Jun 11 06:25:22 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.002s/+6ppm
Jun 11 06:28:09 hp1 rsyslogd0: action 'action 17' resumed (module 'builtin:ompipe') [try http://www.rsyslog.com/e/0 ]
Jun 11 06:28:09 hp1 rsyslogd-2359: action 'action 17' resumed (module 'builtin:ompipe') [try http://www.rsyslog.com/e/2359 ]
Jun 11 06:28:09 hp1 postfix/pickup[3587]: 556981E0C2D: uid=0 from=<root>
Jun 11 06:28:09 hp1 postfix/cleanup[23401]: 556981E0C2D: message-id=<20180611042809.556981E0C2D@hp1.gtst.xyz>
Jun 11 06:28:09 hp1 postfix/qmgr[1265]: 556981E0C2D: from=<root@hp1.gtst.xyz>, size=4713, nrcpt=1 (queue active)
Jun 11 06:28:11 hp1 pvemailforward[23404]: forward mail to <me@greentree.systems>
Jun 11 06:28:11 hp1 postfix/pickup[3587]: 5EA8D1E0C69: uid=65534 from=<root>
Jun 11 06:28:11 hp1 postfix/cleanup[23401]: 5EA8D1E0C69: message-id=<20180611042809.556981E0C2D@hp1.gtst.xyz>
Jun 11 06:28:11 hp1 postfix/qmgr[1265]: 5EA8D1E0C69: from=<root@hp1.gtst.xyz>, size=4880, nrcpt=1 (queue active)
Jun 11 06:28:11 hp1 postfix/local[23403]: 556981E0C2D: to=<root@hp1.gtst.xyz>, orig_to=<root>, relay=local, delay=2.1, delays=0.1/0.05/0/2, dsn=2.0.0, status=sent (delivered to command: /usr/bin/pvemailforward)
Jun 11 06:28:11 hp1 postfix/qmgr[1265]: 556981E0C2D: removed
Jun 11 06:28:13 hp1 postfix/smtp[23407]: connect to aspmx.l.google.com[2a00:1450:400c:c06::1b]:25: Network is unreachable
Jun 11 06:28:15 hp1 postfix/smtp[23407]: 5EA8D1E0C69: to=<me@greentree.systems>, relay=aspmx.l.google.com[64.233.166.27]:25, delay=4, delays=0/0.07/2.6/1.2, dsn=2.0.0, status=sent (250 2.0.0 OK 1528691295 l14-v6si38799256wrb.217 - gsmtp)
Jun 11 06:28:15 hp1 postfix/qmgr[1265]: 5EA8D1E0C69: removed
Jun 11 06:59:31 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.001s/+6ppm
Jun 11 07:14:52 hp1 rrdcached[1109]: flushing old values
Jun 11 07:14:52 hp1 rrdcached[1109]: rotating journals
Jun 11 07:14:52 hp1 rrdcached[1109]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528694092.088603
Jun 11 07:14:52 hp1 rrdcached[1109]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528686892.088602
Jun 11 07:14:52 hp1 pmxcfs[1204]: [dcdb] notice: data verification successful
Jun 11 07:17:01 hp1 CRON[32181]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 11 07:33:39 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.001s/+6ppm
Jun 11 08:07:47 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.000s/+6ppm
Jun 11 08:14:52 hp1 rrdcached[1109]: flushing old values
Jun 11 08:14:52 hp1 rrdcached[1109]: rotating journals
Jun 11 08:14:52 hp1 rrdcached[1109]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528697692.088614
Jun 11 08:14:52 hp1 rrdcached[1109]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528690492.088616
Jun 11 08:14:52 hp1 pmxcfs[1204]: [dcdb] notice: data verification successful
Jun 11 08:17:01 hp1 CRON[11180]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 11 08:41:55 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.001s/0.002s/0.000s/+6ppm
Jun 11 09:14:52 hp1 rrdcached[1109]: flushing old values
Jun 11 09:14:52 hp1 rrdcached[1109]: rotating journals
Jun 11 09:14:52 hp1 rrdcached[1109]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528701292.088621
Jun 11 09:14:52 hp1 rrdcached[1109]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528694092.088603
Jun 11 09:14:52 hp1 pmxcfs[1204]: [dcdb] notice: data verification successful
Jun 11 09:16:04 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.000s/+6ppm
Jun 11 09:17:01 hp1 CRON[23972]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 11 09:50:12 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.000s/+6ppm
Jun 11 10:14:52 hp1 rrdcached[1109]: flushing old values
Jun 11 10:14:52 hp1 rrdcached[1109]: rotating journals
Jun 11 10:14:52 hp1 rrdcached[1109]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528704892.088615
Jun 11 10:14:52 hp1 rrdcached[1109]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528697692.088614
Jun 11 10:14:52 hp1 pmxcfs[1204]: [dcdb] notice: data verification successful
Jun 11 10:17:02 hp1 CRON[2643]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 11 10:24:20 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.000s/+6ppm
Jun 11 10:58:28 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.000s/+6ppm
Jun 11 11:05:14 hp1 systemd[1]: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 11807 (find)
Jun 11 11:05:14 hp1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jun 11 11:05:14 hp1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jun 11 11:14:52 hp1 rrdcached[1109]: flushing old values
Jun 11 11:14:52 hp1 rrdcached[1109]: rotating journals
Jun 11 11:14:52 hp1 rrdcached[1109]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528708492.088604
Jun 11 11:14:52 hp1 rrdcached[1109]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528701292.088621
Jun 11 11:14:52 hp1 pmxcfs[1204]: [dcdb] notice: data verification successful
Jun 11 11:17:01 hp1 CRON[14962]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 11 11:32:37 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.000s/+7ppm
Jun 11 12:06:45 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.002s/0.000s/+7ppm
Jun 11 12:14:52 hp1 rrdcached[1109]: flushing old values
Jun 11 12:14:52 hp1 rrdcached[1109]: rotating journals
Jun 11 12:14:52 hp1 rrdcached[1109]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528712092.088617
Jun 11 12:14:52 hp1 rrdcached[1109]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528704892.088615
Jun 11 12:14:52 hp1 pmxcfs[1204]: [dcdb] notice: data verification successful
Jun 11 12:17:01 hp1 CRON[26185]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 11 12:40:53 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.001s/+7ppm
Jun 11 13:14:52 hp1 rrdcached[1109]: flushing old values
Jun 11 13:14:52 hp1 rrdcached[1109]: rotating journals
Jun 11 13:14:52 hp1 rrdcached[1109]: started new journal /var/lib/rrdcached/journal/rrd.journal.1528715692.088621
Jun 11 13:14:52 hp1 rrdcached[1109]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1528708492.088604
Jun 11 13:14:52 hp1 pmxcfs[1204]: [dcdb] notice: data verification successful
Jun 11 13:15:01 hp1 systemd-timesyncd[722]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.002s/0.000s/+7ppm
Jun 11 13:17:01 hp1 CRON[4880]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)

# journalctl -xb
-- Logs begin at Sat 2018-06-09 16:14:40 SAST, end at Mon 2018-06-11 13:25:55 SAST. --
Jun 09 16:14:40 hp1 systemd-journal[259]: Runtime journal is using 8.0M (max allowed 99.1M, trying to leave 148.7M free of 983.0M available → current limit 99.1M).
Jun 09 16:14:40 hp1 systemd-journal[259]: Runtime journal is using 8.0M (max allowed 99.1M, trying to leave 148.7M free of 983.0M available → current limit 99.1M).
Jun 09 16:14:40 hp1 kernel: Initializing cgroup subsys cpuset
Jun 09 16:14:40 hp1 kernel: Initializing cgroup subsys cpu
Jun 09 16:14:40 hp1 kernel: Initializing cgroup subsys cpuacct
Jun 09 16:14:40 hp1 kernel: Linux version 4.4.128-1-pve (root@CT196942) (gcc version 4.9.2 (Debian 4.9.2-10+deb8u1) ) #1 SMP PVE 4.4.128-111 (Wed, 23 May 2018 14:00:0
Jun 09 16:14:40 hp1 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-4.4.128-1-pve root=/dev/mapper/pve-root ro quiet noapic
Jun 09 16:14:40 hp1 kernel: KERNEL supported cpus:
Jun 09 16:14:40 hp1 kernel:   Intel GenuineIntel
Jun 09 16:14:40 hp1 kernel:   AMD AuthenticAMD
Jun 09 16:14:40 hp1 kernel:   Centaur CentaurHauls
Jun 09 16:14:40 hp1 kernel: x86/fpu: Legacy x87 FPU detected.
Jun 09 16:14:40 hp1 kernel: x86/fpu: Using 'lazy' FPU context switches.
Jun 09 16:14:40 hp1 kernel: e820: BIOS-provided physical RAM map:
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f3ff] usable
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x000000000009f400-0x000000000009ffff] reserved
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000d7e67fff] usable
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x00000000d7e68000-0x00000000d7e6ffff] ACPI data
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x00000000d7e70000-0x00000000d7ffffff] reserved
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fecfffff] reserved
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee0ffff] reserved
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jun 09 16:14:40 hp1 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000167ffefff] usable
Jun 09 16:14:40 hp1 kernel: NX (Execute Disable) protection: active
Jun 09 16:14:40 hp1 kernel: SMBIOS 2.3 present.
Jun 09 16:14:40 hp1 kernel: DMI: HP ProLiant DL320s G1, BIOS W04 06/10/2008
Jun 09 16:14:40 hp1 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 09 16:14:40 hp1 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 09 16:14:40 hp1 kernel: e820: last_pfn = 0x167fff max_arch_pfn = 0x400000000
Jun 09 16:14:40 hp1 kernel: MTRR default type: write-back
Jun 09 16:14:40 hp1 kernel: MTRR fixed ranges enabled:
Jun 09 16:14:40 hp1 kernel:   00000-9FFFF write-back
Jun 09 16:14:40 hp1 kernel:   A0000-BFFFF uncachable
Jun 09 16:14:40 hp1 kernel:   C0000-FFFFF write-protect
Jun 09 16:14:40 hp1 kernel: MTRR variable ranges enabled:
Jun 09 16:14:40 hp1 kernel:   0 base 0D8000000 mask FF8000000 uncachable
Jun 09 16:14:40 hp1 kernel:   1 base 0E0000000 mask FE0000000 uncachable
Jun 09 16:14:40 hp1 kernel:   2 disabled
Jun 09 16:14:40 hp1 kernel:   3 disabled
Jun 09 16:14:40 hp1 kernel:   4 disabled
Jun 09 16:14:40 hp1 kernel:   5 disabled
Jun 09 16:14:40 hp1 kernel:   6 disabled
Jun 09 16:14:40 hp1 kernel:   7 disabled
Jun 09 16:14:40 hp1 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC  UC- WT  
Jun 09 16:14:40 hp1 kernel: e820: last_pfn = 0xd7e68 max_arch_pfn = 0x400000000
Jun 09 16:14:40 hp1 kernel: found SMP MP-table at [mem 0x000f4f80-0x000f4f8f] mapped at [ffff8800000f4f80]
Jun 09 16:14:40 hp1 kernel: Scanning 1 areas for low memory corruption
Jun 09 16:14:40 hp1 kernel: Base memory trampoline at [ffff880000098000] 98000 size 28672
 
As an unexpected bonus (grrr) we had a UPS failure (or actually an overload), so the cluster restarted again, and once again these disks don't start and I can't start them manually either. NOTE: Of the four machines in the cluster, 3 start fine and so do their OSDs. However, the 4th machine doesn't mount the drives and the OSDs don't start.

As reported higher up in this thread, mounting the drives is successful.

Code:
# mount
...
/dev/cciss/c0d1p1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d2p1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d3p1 on /var/lib/ceph/osd/ceph-2 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d4p1 on /var/lib/ceph/osd/ceph-3 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d5p1 on /var/lib/ceph/osd/ceph-4 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d6p1 on /var/lib/ceph/osd/ceph-6 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d7p1 on /var/lib/ceph/osd/ceph-7 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d8p1 on /var/lib/ceph/osd/ceph-9 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d9p1 on /var/lib/ceph/osd/ceph-17 type xfs (rw,noatime,attr2,inode64,noquota)
/dev/cciss/c0d10p1 on /var/lib/ceph/osd/ceph-18 type xfs (rw,noatime,attr2,inode64,noquota)

If I start the OSDs from the GUI, they start.

Code:
# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled)
  Drop-In: /lib/systemd/system/ceph-osd@.service.d
           └─ceph-after-pve-cluster.conf
   Active: active (running) since Tue 2018-06-12 11:36:02 SAST; 10h ago
  Process: 24526 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 24577 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─24577 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Jun 12 11:36:02 hp1 ceph-osd-prestart.sh[24526]: create-or-move updated item name 'osd.0' weight 1.8136 at location {host=hp1,root=default} to crush map
Jun 12 11:36:02 hp1 systemd[1]: Started Ceph object storage daemon.
Jun 12 11:36:02 hp1 ceph-osd[24577]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jun 12 11:36:10 hp1 ceph-osd[24577]: 2018-06-12 11:36:10.374955 7fdf395f2800 -1 osd.0 13879 log_to_monitors {default=true}
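
As far as I can tell (my assumption, not something I have verified in the PVE code), starting an OSD from the GUI just does the equivalent of starting the instance service by hand:
Code:
# presumed equivalent of the GUI "Start" button for osd.0
systemctl start ceph-osd@0.service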

So my question still remains: what do I do to change this situation so that these start automatically?

I had this machine installed as a node previously (using Proxmox 4.0 if I recall correctly), and then upgraded to 4.4 systematically over time. So is there a regression or other bug in 4.4 that appears when a fresh install is done?

I could upgrade this cluster to 5.2, but that involves too much risk and I can't deal with that now. I'm currently deploying a new 5.2 cluster into a different environment, and once that is done we can tackle this upgrade again. In the meantime I need a fix for this startup problem, and the truth is I have no idea where to look. The documentation is not helpful here, so I hope someone here will be able to provide some guidance.
 
This sounds more like a timing problem. The disks might need longer to be mapped. To test this, you can restart the ceph-osd.target after the machine is up, when it couldn't bring the OSDs up on its own.

Note: PVE 4.x will be EoL end of June!
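
As a quick test after the next boot where the OSDs stay down, something along these lines (plain standard commands, nothing specific to your node):
Code:
# if this brings the OSDs up, the disks were simply not mapped yet when
# the target was first reached during boot
systemctl restart ceph-osd.target
# check when the cciss block devices actually appeared in this boot
journalctl -b | grep -i cciss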
 
So this morning I just mounted the disks manually and left them. When I reloaded the GUI for OSDs about 30 minutes later, all the OSDs had started. The mounting of the disks is therefore the primary problem. Where can I check whether the disk mounting instruction is actually being given?
 
The ceph-osd.target starts the OSDs on boot; you then see a ceph-osd@ID.service for each OSD. With systemctl status and journalctl -xb you can find out what is going on with those services.
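
On Jewel with ceph-disk, the mount itself is triggered by udev calling ceph-disk at boot; roughly the following (the grep pattern is only a guess at the relevant messages) should show whether that ever happened on this node:
Code:
# did udev/ceph-disk try to activate anything during this boot?
journalctl -b | grep -iE 'ceph-disk|ceph-osd'
# status and boot log of a single OSD instance, e.g. osd.7
systemctl status ceph-osd@7.service
journalctl -b -u ceph-osd@7.service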
 
For old systems (FileStore) this may still be relevant: `ceph-volume` is not a dependency, only recommended, so it's possible that it doesn't get installed (or gets removed) during an upgrade. Without it, FileStore OSDs do not get mounted, so the OSDs will not be present and can't start.
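
A quick way to check and fix that (package names as in the Debian/Proxmox repositories; adjust if yours differ):
Code:
# is ceph-volume present at all?
dpkg -s ceph-volume
# if not, pulling it back in should let FileStore OSDs be mounted again
apt-get install ceph-volume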
 
