XFS file-restore

dcsapak
Proxmox Staff Member
Feb 1, 2016 · Vienna
mhmm ok one thing you can try though is the following:

start a file-restore, try to open a disk

then run:
Code:
ps ax | grep file-restore

and post the output here

this should show a single process (the restore vm) whose '-kernel' argument contains 'file-restore'
(it'll probably also show the 'grep' command itself, ignore that)
note the first column (the PID of the vm)

now go into the directory: '/proc/<PID>/fd'
where <PID> is the pid of the vm from before

and post the output of 'ls -lh'

thanks
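The two steps above (find the PID, then list its fds) can also be combined into a small shell sketch; the `pgrep` pattern below is an assumption based on the '-kernel' path, not something from the thread:

```shell
# Sketch: locate the restore VM and list its open fds in one step.
# pgrep -f matches against the full command line, so it finds
# 'file-restore' inside the -kernel path. The [/] in the pattern
# keeps pgrep from ever matching this script's own command line.
pid=$(pgrep -f 'file-restore[/]bzImage' | head -n1)
if [ -n "$pid" ]; then
    ls -lh "/proc/$pid/fd"
else
    echo "no restore VM running on this node"
fi
```

On a node without a running restore VM this prints the fallback line, which is itself a useful signal that you are on the wrong node.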
 

gosha
Active Member
Oct 20, 2014 · Russia
mhmm ok one thing you can try though is the following: [...] and post the output of 'ls -lh'
Hi!

I am trying to open the disk drive-scsi1 and get this error:
proxmox-file-restore failed: Error: mounting 'drive-scsi1.img.fidx/part/1' failed: all mounts failed or no supported file system (500)

I tried:
Code:
root@cn4:~# ps ax | grep file-restore
3126786 pts/1 S+ 0:00 grep file-restore
root@cn4:~#
It finds nothing... :(

But if I try to open the disk drive-scsi0 (on the same VM, but with ext4), everything works:

(screenshot attached: scsi0.png)


Mapping this drive-scsi1 (with XFS) on the CLI via 'proxmox-backup-client map' works too.

---
Best regards
Gosha
 

dcsapak
Proxmox Staff Member
root@cn4:~# ps ax | grep file-restore
3126786 pts/1 S+ 0:00 grep file-restore
root@cn4:~#
does not find... :(
then you are definitely on the wrong node... the file restore starts a vm which must show up with 'file-restore' in its arguments...
 

dcsapak
Proxmox Staff Member
does it work with a different vm?
 

dcsapak
Proxmox Staff Member
also please check the node your browser connects to, not the node you have selected...
 

gosha
Active Member
also please check the node where your browser connects, not the node you have selected...
I connected to node cn1:

Code:
# ps ax | grep file-restore
3043000 ? Sl 0:03 qemu-system-x86_64 -chardev file,id=log,path=/dev/null,logfile=/dev/fd/20,logappend=on -serial chardev:log -vnc none -enable-kvm -kernel /usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/bzImage -initrd /dev/fd/19 -append quiet panic=1 -daemonize -pidfile /dev/fd/18 -name pbs-restore-vm -m 128 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi6.img.fidx,read-only=on,if=none,id=drive0 -device pci-bridge,id=bridge2,chassis_nr=2 -device virtio-blk-pci,drive=drive0,serial=drive-scsi6,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi5.img.fidx,read-only=on,if=none,id=drive1 -device virtio-blk-pci,drive=drive1,serial=drive-scsi5,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi4.img.fidx,read-only=on,if=none,id=drive2 -device virtio-blk-pci,drive=drive2,serial=drive-scsi4,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi3.img.fidx,read-only=on,if=none,id=drive3 -device virtio-blk-pci,drive=drive3,serial=drive-scsi3,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi2.img.fidx,read-only=on,if=none,id=drive4 -device virtio-blk-pci,drive=drive4,serial=drive-scsi2,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi1.img.fidx,read-only=on,if=none,id=drive5 -device virtio-blk-pci,drive=drive5,serial=drive-scsi1,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi0.img.fidx,read-only=on,if=none,id=drive6 -device virtio-blk-pci,drive=drive6,serial=drive-scsi0,bus=bridge2 -device 
vhost-vsock-pci,guest-cid=10,disable-legacy=on


3043583 pts/1 S+ 0:00 grep file-restore

root@cn1:~# cd /proc/304300/fd
-bash: cd: /proc/304300/fd: No such file or directory
root@cn1:~#
 

dcsapak
Proxmox Staff Member
you missed a 0

3043000 != 304300
 

gosha
Active Member
you missed a 0

3043000 != 304300
Oops!

another try:

Code:
root@cn1:~# ps ax | grep file-restore


3053385 ? Sl 0:03 qemu-system-x86_64 -chardev file,id=log,path=/dev/null,logfile=/dev/fd/20,logappend=on -serial chardev:log -vnc none -enable-kvm -kernel /usr/lib/x86_64-linux-gnu/proxmox-backup/file-restore/bzImage -initrd /dev/fd/19 -append quiet panic=1 -daemonize -pidfile /dev/fd/18 -name pbs-restore-vm -m 128 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi6.img.fidx,read-only=on,if=none,id=drive0 -device pci-bridge,id=bridge2,chassis_nr=2 -device virtio-blk-pci,drive=drive0,serial=drive-scsi6,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi5.img.fidx,read-only=on,if=none,id=drive1 -device virtio-blk-pci,drive=drive1,serial=drive-scsi5,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi4.img.fidx,read-only=on,if=none,id=drive2 -device virtio-blk-pci,drive=drive2,serial=drive-scsi4,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi3.img.fidx,read-only=on,if=none,id=drive3 -device virtio-blk-pci,drive=drive3,serial=drive-scsi3,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi2.img.fidx,read-only=on,if=none,id=drive4 -device virtio-blk-pci,drive=drive4,serial=drive-scsi2,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi1.img.fidx,read-only=on,if=none,id=drive5 -device virtio-blk-pci,drive=drive5,serial=drive-scsi1,bus=bridge2 -drive file=pbs:repository=root@pam@192.168.110.49:8007:main,,snapshot=vm/200/2021-06-09T15:05:02Z,,archive=drive-scsi0.img.fidx,read-only=on,if=none,id=drive6 -device virtio-blk-pci,drive=drive6,serial=drive-scsi0,bus=bridge2 -device 
vhost-vsock-pci,guest-cid=10,disable-legacy=on


3053931 pts/1 S+ 0:00 grep file-restore
root@cn1:~#

root@cn1:/proc/3053385/fd# ls -lh
total 0
lrwx------ 1 root root 64 Jun 10 12:54 0 -> /dev/null
lrwx------ 1 root root 64 Jun 10 12:54 1 -> /dev/null
lrwx------ 1 root root 64 Jun 10 12:54 10 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Jun 10 12:54 11 -> 'socket:[26133222]'
lrwx------ 1 root root 64 Jun 10 12:54 12 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 13 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Jun 10 12:54 14 -> 'anon_inode:[eventfd]'
l-wx------ 1 root root 64 Jun 10 12:54 15 -> /var/log/proxmox-backup/file-restore/qemu.log
l-wx------ 1 root root 64 Jun 10 12:54 16 -> /dev/null
lrwx------ 1 root root 64 Jun 10 12:54 17 -> 'socket:[34381285]'
lrwx------ 1 root root 64 Jun 10 12:53 18 -> '/tmp/file-restore-qemu.pid.tmp_jqD7oT (deleted)'
lrwx------ 1 root root 64 Jun 10 12:53 19 -> '/tmp/file-restore-qemu.initramfs.tmp_Ug4261 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 2 -> 'socket:[34378552]'
l-wx------ 1 root root 64 Jun 10 12:53 20 -> /var/log/proxmox-backup/file-restore/qemu.log
lrwx------ 1 root root 64 Jun 10 12:54 21 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Jun 10 12:54 22 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 23 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Jun 10 12:54 24 -> 'socket:[34378549]'
lrwx------ 1 root root 64 Jun 10 12:54 25 -> 'socket:[34378550]'
lrwx------ 1 root root 64 Jun 10 12:54 26 -> 'socket:[34378549]'
lrwx------ 1 root root 64 Jun 10 12:54 27 -> 'socket:[34390167]'
lrwx------ 1 root root 64 Jun 10 12:54 28 -> '/tmp/#524614 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 29 -> 'socket:[34382384]'
lr-x------ 1 root root 64 Jun 10 12:54 3 -> anon_inode:inotify
lrwx------ 1 root root 64 Jun 10 12:54 30 -> '/tmp/#524615 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 31 -> 'socket:[34381299]'
lrwx------ 1 root root 64 Jun 10 12:54 32 -> '/tmp/#524616 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 33 -> 'socket:[34367351]'
lrwx------ 1 root root 64 Jun 10 12:54 34 -> '/tmp/#524617 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 35 -> 'socket:[34392131]'
lrwx------ 1 root root 64 Jun 10 12:54 36 -> '/tmp/#525015 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 37 -> 'socket:[34376396]'
lrwx------ 1 root root 64 Jun 10 12:54 38 -> '/tmp/#525217 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 39 -> 'socket:[34376397]'
lrwx------ 1 root root 64 Jun 10 12:54 4 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Jun 10 12:54 40 -> '/tmp/#525218 (deleted)'
lrwx------ 1 root root 64 Jun 10 12:54 41 -> /dev/kvm
lrwx------ 1 root root 64 Jun 10 12:54 42 -> anon_inode:kvm-vm
lrwx------ 1 root root 64 Jun 10 12:54 43 -> anon_inode:kvm-vcpu:0
lrwx------ 1 root root 64 Jun 10 12:54 44 -> /dev/vhost-vsock
lrwx------ 1 root root 64 Jun 10 12:54 45 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 46 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 47 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 48 -> 'socket:[34378552]'
lrwx------ 1 root root 64 Jun 10 12:54 49 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 5 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 50 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 51 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 52 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 53 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 54 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 55 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 56 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 57 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 58 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 59 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 6 -> 'anon_inode:[eventpoll]'
lrwx------ 1 root root 64 Jun 10 12:54 60 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 61 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 62 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 63 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 64 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jun 10 12:54 65 -> 'anon_inode:[eventfd]'
l-wx------ 1 root root 64 Jun 10 12:54 7 -> '/tmp/file-restore-qemu.pid.tmp_jqD7oT (deleted)'
l-wx------ 1 root root 64 Jun 10 12:54 8 -> 'pipe:[34381298]'
lrwx------ 1 root root 64 Jun 10 12:54 9 -> 'anon_inode:[signalfd]'

root@cn1:/proc/3053385/fd#
 

gosha
Active Member
and on cn1:

Code:
root@cn1:~# cat /var/log/proxmox-backup/file-restore/qemu.log
[2021-06-10T12:53:00+05:00] PBS file restore VM log
[init-shim] beginning user space setup
[init-shim] debug: agetty start failed: /sbin/agetty not found, probably not running debug mode and safe to ignore
[init-shim] reached daemon start after 0.95s
[2021-06-10T07:53:02Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vdf' ('drive-scsi1'): found partition '/dev/vdf1' (1, 536868814848B)
[2021-06-10T07:53:02Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vdd' ('drive-scsi3'): found partition '/dev/vdd1' (1, 53686042624B)
[2021-06-10T07:53:02Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vdb' ('drive-scsi5'): found partition '/dev/vdb1' (1, 10736369664B)
[2021-06-10T07:53:02Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vdg' ('drive-scsi0'): found partition '/dev/vdg1' (1, 10268704768B)
[2021-06-10T07:53:02Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vdg' ('drive-scsi0'): found partition '/dev/vdg2' (2, 467664896B)
[2021-06-10T07:53:02Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vde' ('drive-scsi2'): found partition '/dev/vde1' (1, 214746267648B)
[2021-06-10T07:53:02Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vdc' ('drive-scsi4'): found partition '/dev/vdc1' (1, 10736369664B)
[2021-06-10T07:53:03Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vda' ('drive-scsi6'): found partition '/dev/vda1' (1, 10736369664B)
[2021-06-10T07:53:03Z INFO  proxmox_restore_daemon::proxmox_restore_daemon::disk] Supported FS: reiserfs, ext3, ext4, ext2, vfat, msdos, iso9660, hfsplus, hfs, sysv, v7, ntfs, ufs, jfs, xfs, befs, f2fs, btrfs
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (reiserfs) - EINVAL: Invalid argument
EXT4-fs (vdf1): VFS: Can't find ext4 filesystem
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (ext3) - EINVAL: Invalid argument
EXT4-fs (vdf1): VFS: Can't find ext4 filesystem
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (ext4) - EINVAL: Invalid argument
EXT2-fs (vdf1): error: can't find an ext2 filesystem on dev vdf1.
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (ext2) - EINVAL: Invalid argument
FAT-fs (vdf1): bogus number of FAT structure
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (vfat) - EINVAL: Invalid argument
FAT-fs (vdf1): bogus number of FAT structure
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (msdos) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (iso9660) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (hfsplus) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (hfs) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (sysv) - EINVAL: Invalid argument
VFS: could not find a valid V7 on vdf1.
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (v7) - EINVAL: Invalid argument
ntfs: (device vdf1): read_ntfs_boot_sector(): Primary boot sector is invalid.
ntfs: (device vdf1): read_ntfs_boot_sector(): Mount option errors=recover not used. Aborting without trying to recover.
ntfs: (device vdf1): ntfs_fill_super(): Not an NTFS volume.
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (ntfs) - EINVAL: Invalid argument
ufs: ufs_fill_super(): bad magic number
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (ufs) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (jfs) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (xfs) - EINVAL: Invalid argument
befs: (vdf1): invalid magic header
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (befs) - EINVAL: Invalid argument
F2FS-fs (vdf1): Can't find valid F2FS filesystem in 1th superblock
F2FS-fs (vdf1): Can't find valid F2FS filesystem in 2th superblock
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (f2fs) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z WARN  proxmox_restore_daemon::proxmox_restore_daemon::disk] mount error on '/dev/vdf1' (btrfs) - EINVAL: Invalid argument
[2021-06-10T07:53:06Z ERROR proxmox_backup::server::rest] GET /api2/json/list?path=ZHJpdmUtc2NzaTEuaW1nLmZpZHgvcGFydC8x: 400 Bad Request: [client 0.0.0.0:807] mounting 'drive-scsi1.img.fidx/part/1' failed: all mounts failed or no supported file system
root@cn1:~#
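As a reading aid (not part of the original report), the repetitive WARN lines boil down to one probed filesystem per line, which a short pipeline can extract; two lines copied from the log stand in for the real qemu.log here:

```shell
# Sketch: summarize which filesystems the restore daemon probed, from
# lines of the form "... mount error on '/dev/vdf1' (<fs>) - EINVAL ...".
printf '%s\n' \
  "[2021-06-10T07:53:06Z WARN  ...disk] mount error on '/dev/vdf1' (ext4) - EINVAL: Invalid argument" \
  "[2021-06-10T07:53:06Z WARN  ...disk] mount error on '/dev/vdf1' (xfs) - EINVAL: Invalid argument" |
  sed -n "s/.*(\([a-z0-9]*\)) - EINVAL.*/\1/p"
```

Applied to the full log it yields every filesystem from the "Supported FS" list, each failing with EINVAL, which is exactly why the GUI reports 'all mounts failed or no supported file system'.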
 

dcsapak
Proxmox Staff Member
mhmm... ok, thanks for the log; sadly it does not really tell us what's wrong... can you tell me how you formatted the xfs drive? (os/kernel version/cmd)
 

gosha
Active Member
mhmm... ok, thanks for the log; sadly it does not really tell us what's wrong... can you tell me how you formatted the xfs drive? (os/kernel version/cmd)
on this VM:

Code:
# uname -a
Linux dstore 3.16.0-11-amd64 #1 SMP Debian 3.16.84-1 (2020-06-09) x86_64 GNU/Linux

all disks with xfs were formatted with the command mkfs.xfs /dev/sdb1 etc.

Code:
root@dstore:~# fdisk -l /dev/sdb
Disk /dev/sdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xba5fb393
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1048573951 1048571904 500G 83 Linux
root@dstore:~#
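One extra sanity check (a sketch, not something asked for in the thread) is to read the filesystem magic bytes directly: XFS stores the ASCII magic 'XFSB' at offset 0 of its superblock, and that stays readable even when the restore kernel refuses to mount. A throwaway file stands in for /dev/sdb1 here:

```shell
# Sketch: look for the XFS superblock magic ("XFSB") at the start of a
# partition. A 4-byte temp file stands in for /dev/sdb1 from the thread.
img=$(mktemp)
printf 'XFSB' > "$img"   # first 4 bytes of every XFS superblock
dd if="$img" bs=1 count=4 2>/dev/null; echo
rm -f "$img"
```

On the real guest the same check would be `dd if=/dev/sdb1 bs=1 count=4` (or `file -s /dev/sdb1`); seeing XFSB confirms the on-disk superblock is intact, pointing at the restore image's kernel rather than the data.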
 

dcsapak
Proxmox Staff Member
just fyi: the patch is applied and a package is uploaded to pvetest (proxmox-backup-restore-image 0.2.3)
 
