Allow migration and replication of disks on ZFS encrypted storage

May 9, 2022
Hi,

I came across the problem that migration and replication do not work if the disk(s) are on an encrypted ZFS storage - which, by the way, is not well documented.

The problem arises because the ZFS export function in pve-storage uses the -R option of zfs send, and this option does not work with encrypted datasets unless you use -w as well. Since that is not an option here - we cannot transfer the data encrypted, because the target pool has a different encryption key etc. - the question is: why use -R in the first place?

Some thoughts on the features of -R:
  1. Dataset properties are implicitly included when using -R. But which dataset properties of a ZVOL need to be synced? Assuming my cluster members are configured identically, I see no need to copy properties (see the example after this list).
  2. All descendant file systems are copied when using -R. Can the ZVOL of a VM disk have descendant file systems in Proxmox? I don't think so.
  3. Clones are also preserved when sending a dataset/snapshot with -R. Maybe this could be a pitfall.
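Regarding point 1, you can check what -R would actually carry for a given ZVOL by listing only the locally set (non-inherited, non-default) properties - the dataset name here is just an example:

Code:
zfs get -H -s local -o property,value all tank/vmdata_encrypted/vm-100-disk-0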

After all these considerations I just tried it out by patching /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm:


Code:
746,752c746
<     my $cmd = ['zfs', 'send'];
<     my $encrypted = $class->zfs_get_properties($scfg, 'encryption', "$scfg->{pool}/$dataset");
<     if ($encrypted !~ m/^off$/) {
<         push @$cmd, '-v';
<     } else {
<         push @$cmd, '-Rpv';
<     }
---
>     my $cmd = ['zfs', 'send', '-Rpv'];

This patch checks whether the dataset is encrypted and, if so, omits the -R and -p options during zfs send. This results in an unencrypted stream of data, which is fine in our case since the target pool is encrypted as well.
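For reference, the check that the patched code performs corresponds to this zfs command (the dataset name is an example); it prints "off" for unencrypted datasets and the cipher, e.g. aes-256-gcm, otherwise:

Code:
zfs get -H -o value encryption tank/vmdata_encrypted/vm-100-disk-0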


I tried this patch with hot and cold migration, replication, with snapshots etc., and everything works like a charm.


The question is: am I missing something, or could this patch be considered for inclusion?

I also opened an issue on GitHub (https://github.com/proxmox/pve-storage/issues/10), but since this forum has a larger audience I am reposting it here.


regards
stefan
 
Hi,

after some testing it's time to improve my former solution.

One of the problems when you replicate by sending encrypted snapshots unencrypted is a change of your encryption parameters. This topic has been discussed in the ZFSonLinux community, since incremental snapshots could brick your data if, for example, the IV of your encryption changes. Even though some of those issues have been fixed, there will still be problems if you send incremental snapshots from multiple sources.

The best way to solve all of this is to make every ZVOL its own encryption root without inheritance and to replicate with the -w option, i.e. to send a raw stream that includes all encryption parameters.
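For illustration, a manual raw replication of a single ZVOL would look like this (host and dataset names are examples); the raw stream carries the ciphertext together with all encryption parameters, so no key needs to be loaded on the receiving side during the transfer:

Code:
zfs snapshot tank/vmdata_encrypted/vm-100-disk-0@manual
zfs send -w tank/vmdata_encrypted/vm-100-disk-0@manual | ssh root@pve2 zfs receive tank/vmdata_encrypted/vm-100-disk-0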

If we send a replication stream with -w, the dataset of our ZVOL becomes its own encryption root on the target and does not have its key loaded. Therefore you need further adjustments to the import part of ZFSPoolPlugin.pm: change the keylocation to that of the parent and load the key.

IMPORTANT: This solution will only work if the datasets on both PVE nodes share the same encryption passphrase and it is stored in a file.
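If you are setting this up from scratch, the shared keyfile could be created on one node and distributed to the same path on the other node like this (paths are examples; keyformat=raw expects exactly 32 bytes, and existing datasets are pointed at the file with zfs change-key as described further down):

Code:
dd if=/dev/urandom of=/root/tank_key bs=32 count=1
chmod 600 /root/tank_key
scp -p /root/tank_key root@pve2:/root/tank_key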

To get this up and running you need to patch /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm as follows:

Code:
746,752c746
<     my $cmd = ['zfs', 'send'];
<     my $encrypted = $class->zfs_get_properties($scfg, 'encryption', "$scfg->{pool}/$dataset");
<     if ($encrypted !~ m/^off$/) {
<         push @$cmd, '-Rpvw';
<     } else {
<         push @$cmd, '-Rpv';
<     }
---
>     my $cmd = ['zfs', 'send', '-Rpv'];
808,819d801
<     ### set key location and load key
<     my $encrypted = $class->zfs_get_properties($scfg, 'encryption', $zfspath);
<     if ($encrypted !~ m/^off$/) {
<         my $keystatus = $class->zfs_get_properties($scfg, 'keystatus', $zfspath);
<         if ($keystatus eq "unavailable") {
<             my ($parent) = $zfspath =~ /(.*)\/.*$/;
<             my $keylocation = $class->zfs_get_properties($scfg, 'keylocation', $parent);
<             my $keyformat = $class->zfs_get_properties($scfg, 'keyformat', $parent);
<             eval { run_command(['zfs', 'set', "keylocation=$keylocation", $zfspath]) };
<             eval { run_command(['zfs', 'set', "keyformat=$keyformat", $zfspath]) };
<             eval { run_command(['zfs', 'load-key', $zfspath]) };
<         }
<     }


Furthermore, you need a service that loads all keys on startup, since PVE does not do this and your VMs would otherwise not start automatically, e.g. by running "/sbin/zfs load-key -r tank/vmdata_encrypted".
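A minimal sketch of such a service as a one-shot systemd unit (unit name, dataset and the exact ordering are assumptions - adjust them to your setup):

Code:
cat > /etc/systemd/system/zfs-load-key-vmdata.service <<'EOF'
[Unit]
Description=Load ZFS encryption keys for tank/vmdata_encrypted
# after the pools are imported, before PVE autostarts any guests
After=zfs-import.target
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/sbin/zfs load-key -r tank/vmdata_encrypted
RemainAfterExit=yes

[Install]
WantedBy=zfs.target
EOF
systemctl daemon-reload
systemctl enable zfs-load-key-vmdata.service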

This solution avoids changing the key with "zfs change-key" as suggested in the bug report https://bugzilla.proxmox.com/show_bug.cgi?id=2350, since changing the encryption key would break subsequent incremental sends etc. To avoid all this, every ZVOL is its own encryption root. This also allows you to send incremental backups from host A and from host B after a failover, which is not possible if you have different keys or replicate unencrypted.

Keep in mind that the encryption root is only changed on replication, not for all existing ZVOLs. If you want to do this in advance, e.g. to set up your backups, you need to:
zfs get name,keylocation,keyformat tank/vmdata_encrypted
zfs change-key -l -o keylocation=XXX -o keyformat=YYY tank/vmdata_encrypted/vm-XXX-disk-Y

From my point of view, encrypting each ZVOL separately would also be the best way to integrate this into the main product. It allows you to choose different encryption primitives for different VMs and gets rid of all the replication, backup and other sync problems that come with inherited encryption.

Does anybody see any drawbacks in making every ZVOL its own encryption root? Apart from the fact that the initial key load takes some time if you have many ZVOLs.

regards
stefan
 
A big THANK YOU for this comprehensive guide. I'm currently implementing it and it seems to be very clear to me that this approach will work somehow :cool:

Results: ONLINE MIGRATION of VMs¹ only works if you set up a replication job beforehand, while OFFLINE MIGRATION works out of the box for VMs and CTs.

¹As CTs are always "restarted" during migration, CT online migration is more like an offline migration, so the replication job is not strictly needed for CTs.

And one small detail might be good to know for others too:

1. Applying your patch file (the code you posted) using patch /root/ZFSPoolPlugin.pm.patch /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm resulted in an error because of a missing bracket:

Code:
2024-11-23 23:28:29 [pve] Missing right curly or square bracket at /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm line 870, at end of line
2024-11-23 23:28:29 [pve] syntax error at /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm line 870, at EOF
2024-11-23 23:28:29 [pve] Compilation failed in require at /usr/share/perl5/PVE/Storage.pm line 38, <DATA> line 960.
2024-11-23 23:28:29 [pve] BEGIN failed--compilation aborted at /usr/share/perl5/PVE/Storage.pm line 38, <DATA> line 960.
2024-11-23 23:28:29 [pve] Compilation failed in require at /usr/share/perl5/PVE/CLI/pvesm.pm line 19, <DATA> line 960.
2024-11-23 23:28:29 [pve] BEGIN failed--compilation aborted at /usr/share/perl5/PVE/CLI/pvesm.pm line 19, <DATA> line 960.
2024-11-23 23:28:29 [pve] Compilation failed in require at /usr/sbin/pvesm line 6, <DATA> line 960.
2024-11-23 23:28:29 [pve] BEGIN failed--compilation aborted at /usr/sbin/pvesm line 6, <DATA> line 960.

I could resolve this by adding the missing curly bracket before your comment ### set key location and load key. So my patch file looks like this:

ZFSPoolPlugin.pm.patch

Code:
755,761c755
<     my $cmd = ['zfs', 'send'];
<     my $encrypted = $class->zfs_get_properties($scfg, 'encryption', "$scfg->{pool}/$dataset");
<     if ($encrypted !~ m/^off$/) {
<         push @$cmd, '-Rpvw';
<     } else {
<         push @$cmd, '-Rpv';
<     }
---
>     my $cmd = ['zfs', 'send', '-Rpv'];
817,829d810
<     }
<     ### set key location and load key
<     my $encrypted = $class->zfs_get_properties($scfg, 'encryption', $zfspath);
<     if ($encrypted !~ m/^off$/) {
<         my $keystatus = $class->zfs_get_properties($scfg, 'keystatus', $zfspath);
<         if ($keystatus eq "unavailable") {
<             my ($parent) = $zfspath =~ /(.*)\/.*$/;
<             my $keylocation = $class->zfs_get_properties($scfg, 'keylocation', $parent);
<             my $keyformat = $class->zfs_get_properties($scfg, 'keyformat', $parent);
<             eval { run_command(['zfs', 'set', "keylocation=$keylocation", $zfspath]) };
<             eval { run_command(['zfs', 'set', "keyformat=$keyformat", $zfspath]) };
<             eval { run_command(['zfs', 'load-key', $zfspath]) };
<         }

What I did and my results

1. Made sure I'm using the same encryption key for the ZFS datasets on my nodes. As I have two nodes, I just took the key from node #1 and copied it over to node #2 to the exact same place. As the ZFS dataset tank/encrypted on node #2 had already been unlocked during startup using the old key, I issued the following command to change the encryption key to the new keyfile (the one from node #1): zfs change-key -l -o keylocation=file:///root/tank_key -o keyformat=raw tank/encrypted/vm-data-migrate

Optional, but in my opinion this makes sense so as not to mess with already created ZFS datasets.
2. Created a new ZFS dataset below the already encrypted root (e.g. tank/encrypted/vm-data-migrate) on both nodes: zfs create -o mountpoint=/storage/tank-encrypted/vm-data-migrate tank/encrypted/vm-data-migrate

3. Added the new ZFS dataset to my cluster using PROXMOX GUI.

4. Moved the storage of one (or more) of my VMs into this new ZFS dataset using the PROXMOX GUI (click the VM -> Hardware -> click the Hard Disk -> Disk Action -> Move Storage).

5. Now the storage of e.g. VM 999 was located at tank/encrypted/vm-data-migrate/vm-999-disk-0, but it was not its own encryption root yet:

zfs get name,keylocation,keyformat,encryption,keystatus,encryptionroot tank/encrypted/vm-data-migrate/vm-999-disk-0
Code:
NAME                                          PROPERTY        VALUE                                            SOURCE
tank/encrypted/vm-data-migrate/vm-999-disk-0  name            tank/encrypted/vm-data-migrate/vm-999-disk-0     -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keylocation     none                                             default
tank/encrypted/vm-data-migrate/vm-999-disk-0  keyformat       raw                                              -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryption      aes-256-gcm                                      -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keystatus       available                                        -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryptionroot  tank/encrypted                                   -

6. Made the ZFS dataset of VM 999 its own encryption root (without inheritance), using the same keyfile as for the already encrypted root: zfs change-key -l -o keylocation=file:///root/tank_key -o keyformat=raw tank/encrypted/vm-data-migrate/vm-999-disk-0

Note: During my tests it was not possible to keep the ZFS dataset of VM 999 within an unencrypted root and add encryption afterwards using the change-key command. This probably works somehow, but I didn't find out how. Another method would be to create the encrypted dataset beforehand and sync the unencrypted one into it using this command: zfs send tank/vm-data-migrate/vm-999-disk-0 | zfs receive -x encryption tank/encrypted/vm-data-migrate/vm-999-disk-0

7. Seems like it has worked out:

zfs get name,keylocation,keyformat tank/encrypted/vm-data-migrate/vm-999-disk-0
Code:
NAME                                          PROPERTY        VALUE                                            SOURCE
tank/encrypted/vm-data-migrate/vm-999-disk-0  name            tank/encrypted/vm-data-migrate/vm-999-disk-0     -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keylocation     file:///root/tank_key                            local
tank/encrypted/vm-data-migrate/vm-999-disk-0  keyformat       raw                                              -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryption      aes-256-gcm                                      -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keystatus       available                                        -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryptionroot  tank/encrypted/vm-data-migrate/vm-999-disk-0     -

8. Now I patched ZFSPoolPlugin.pm using the patch from above on both nodes and started my first migration of VM 999.

9. I started with an OFFLINE MIGRATION from node #1 to node #2. During the process the following error message appeared:

Migration log:
Code:
copying local disk images
full send of tank/encrypted/vm-data-migrate/vm-999-disk-0@__migration__ estimated size is 13.8G
total estimated size is 13.8G
TIME        SENT   SNAPSHOT tank/encrypted/vm-data-migrate/vm-999-disk-0@__migration__

[pve] cannot set property for 'tank/encrypted/vm-data-migrate/vm-999-disk-0': keylocation must not be 'none' for encrypted datasets
[pve] cannot set property for 'tank/encrypted/vm-data-migrate/vm-999-disk-0': 'keyformat' is readonly

[pve] successfully imported 'tank-encrypted-vm-data-migrate:vm-999-disk-0'
volume 'tank-encrypted-vm-data-migrate:vm-999-disk-0' is 'tank-encrypted-vm-data-migrate:vm-999-disk-0' on the target
migration finished successfully (duration 00:00:34)

But the migration seems to have worked out (command executed on node #2):

zfs get name,keylocation,keyformat tank/encrypted/vm-data-migrate/vm-999-disk-0
Code:
NAME                                          PROPERTY        VALUE                                            SOURCE
tank/encrypted/vm-data-migrate/vm-999-disk-0  name            tank/encrypted/vm-data-migrate/vm-999-disk-0     -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keylocation     file:///root/tank_key                            local
tank/encrypted/vm-data-migrate/vm-999-disk-0  keyformat       raw                                              -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryption      aes-256-gcm                                      -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keystatus       available                                        -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryptionroot  tank/encrypted/vm-data-migrate/vm-999-disk-0     -

Migrating back also worked flawlessly.

10. Now it was time to try the ONLINE MIGRATION too. And I can say – it initially did not work out for me! The migration itself did not throw any error:

Migration log:
Code:
starting VM 999 on remote node 'pve2'
volume 'tank-encrypted-vm-data-migrate:vm-999-disk-0' is 'tank-encrypted-vm-data-migrate:vm-999-disk-0' on the target
start remote tunnel
ssh tunnel ver 1
starting storage migration
scsi0: start migration to nbd:192.168.9.1:60001:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 0.0 B of 16.0 GiB (0.00%) in 0s

[...]

drive-scsi0: transferred 16.0 GiB of 16.0 GiB (100.00%) in 28s, ready
all 'mirror' jobs are ready
switching mirror jobs to actively synced mode
drive-scsi0: switching to actively synced mode
drive-scsi0: successfully switched to actively synced mode
starting online/live migration on tcp:192.168.9.1:60000
set migration capabilities
migration downtime limit: 100 ms
migration cachesize: 256.0 MiB
set migration parameters
spice client_migrate_info
start migrate command to tcp:192.168.9.1:60000
migration active, transferred 959.4 MiB of 2.2 GiB VM-state, 781.6 MiB/s
migration active, transferred 1.5 GiB of 2.2 GiB VM-state, 793.3 MiB/s
average migration speed: 746.9 MiB/s - downtime 21 ms
migration status: completed
all 'mirror' jobs are ready
drive-scsi0: Completing block job...
drive-scsi0: Completed successfully.
drive-scsi0: mirror-job finished
stopping NBD storage migration server on target.
Waiting for spice server migration
migration finished successfully (duration 00:00:45)

But when checking the ZFS dataset on node #2, it turned out that the encryption parameters had not been preserved:

zfs get name,keylocation,keyformat,encryption,keystatus,encryptionroot tank/encrypted/vm-data-migrate/vm-999-disk-0
Code:
NAME                                          PROPERTY        VALUE                                            SOURCE
tank/encrypted/vm-data-migrate/vm-999-disk-0  name            tank/encrypted/vm-data-migrate/vm-999-disk-0     -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keylocation     none                                             default
tank/encrypted/vm-data-migrate/vm-999-disk-0  keyformat       raw                                              -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryption      aes-256-gcm                                      -
tank/encrypted/vm-data-migrate/vm-999-disk-0  keystatus       available                                        -
tank/encrypted/vm-data-migrate/vm-999-disk-0  encryptionroot  tank/encrypted                                   -

After I set up a replication job from node #1 to node #2, the online migration worked and the encryption parameters were preserved and in sync.
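For reference, the replication job can also be created on the CLI instead of the GUI (job ID, target node and schedule are just examples):

Code:
pvesr create-local-job 999-0 pve2 --schedule '*/15'
pvesr status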

Migration log:
Code:
use dedicated network address for sending migration traffic (192.168.9.2)
starting migration of VM 999 to node 'pve2' (192.168.9.2)
found local, replicated disk 'tank-encrypted-vm-data-migrate:vm-999-disk-0' (attached)
scsi0: start tracking writes using block-dirty-bitmap 'repl_scsi0'
replicating disk images
start replication job
guest => VM 999, running => 3573163
volumes => tank-encrypted-vm-data-migrate:vm-999-disk-0
create snapshot '__replicate_999-0_1732410934__' on tank-encrypted-vm-data-migrate:vm-999-disk-0
using insecure transmission, rate limit: none
incremental sync 'tank-encrypted-vm-data-migrate:vm-999-disk-0' (__replicate_999-0_1732410900__ => __replicate_999-0_1732410934__)
send from @__replicate_999-0_1732410900__ to tank/encrypted/vm-data-migrate/vm-999-disk-0@__replicate_999-0_1732410934__ estimated size is 1.50M
total estimated size is 1.50M
TIME        SENT   SNAPSHOT tank/encrypted/vm-data-migrate/vm-999-disk-0@__replicate_999-0_1732410934__
[pve2] successfully imported 'tank-encrypted-vm-data-migrate:vm-999-disk-0'
delete previous replication snapshot '__replicate_999-0_1732410900__' on tank-encrypted-vm-data-migrate:vm-999-disk-0
(remote_finalize_local_job) delete stale replication snapshot '__replicate_999-0_1732410900__' on tank-encrypted-vm-data-migrate:vm-999-disk-0
end replication job
starting VM 999 on remote node 'pve2'
volume 'tank-encrypted-vm-data-migrate:vm-999-disk-0' is 'tank-encrypted-vm-data-migrate:vm-999-disk-0' on the target
start remote tunnel
ssh tunnel ver 1
starting storage migration
scsi0: start migration to nbd:192.168.9.2:60002:exportname=drive-scsi0
drive mirror re-using dirty bitmap 'repl_scsi0'
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 1.4 MiB of 1.4 MiB (100.00%) in 0s
drive-scsi0: transferred 1.4 MiB of 1.4 MiB (100.00%) in 1s, ready
all 'mirror' jobs are ready
switching mirror jobs to actively synced mode
drive-scsi0: switching to actively synced mode
drive-scsi0: successfully switched to actively synced mode
starting online/live migration on tcp:192.168.9.2:60001
set migration capabilities
migration downtime limit: 100 ms
migration cachesize: 256.0 MiB
set migration parameters
spice client_migrate_info
start migrate command to tcp:192.168.9.2:60001
migration active, transferred 1.0 GiB of 2.2 GiB VM-state, 1.2 GiB/s
average migration speed: 1.1 GiB/s - downtime 57 ms
migration status: completed
all 'mirror' jobs are ready
drive-scsi0: Completing block job...
drive-scsi0: Completed successfully.
drive-scsi0: mirror-job finished
# /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' -o 'UserKnownHostsFile=/etc/pve/nodes/pve2/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.9.2 pvesr set-state 999 \''{"local/pve1":{"last_iteration":1732410934,"last_node":"pve1","duration":3.846521,"last_try":1732410934,"last_sync":1732410934,"storeid_list":["tank-encrypted-vm-data-migrate"],"fail_count":0}}'\'
stopping NBD storage migration server on target.
Waiting for spice server migration
migration finished successfully (duration 00:00:20)
TASK OK

I used PVE 8.2.7 to try this.

Hints, thoughts and comments are welcome ;)
 
After some days of testing this in production I can say: It works just fine, but ... :cool:

1. You have to take GOOD care to monitor your encryption roots and get notified if someone (by accident) moved a VM/CT to an unencrypted storage or to another encrypted storage and forgot to set the encryption parameters accordingly.

I do this with my simple Monit script monit-zfs-get-properties.sh (ShellCheck: 100% :cool:), which just gets all encryption roots recursively and reports their count back as the exit code.

The Monit configuration looks like this:
Code:
# Check ZFS encryption roots
check program zfs-encryption-roots with path "/root/monit-zfs-get-properties.sh"
    if status != 3 then alert
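For reference, a minimal sketch of what such a script could look like (the dataset name is an example; the expected count - here 3, matching the Monit rule above - depends on your setup):

Code:
#!/bin/sh
# monit-zfs-get-properties.sh (sketch): count the distinct encryption roots
# below the replicated dataset and return that count as the exit code, so
# Monit alerts as soon as the count differs from the expected value.
DATASET="tank/encrypted/vm-data-migrate"

count=$(zfs get -r -H -o value encryptionroot "$DATASET" | sort -u | grep -vc '^-$')

echo "$count encryption root(s) found below $DATASET"
exit "$count"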

2. You also have to take GOOD care to monitor the patched ZFSPoolPlugin.pm file and re-apply the patch after an update. The recent update from PROXMOX 8.2 to PROXMOX 8.3, for example, replaced the file with the unpatched version. This process / the patch file will probably need adaptation from time to time; for now it just works.

This is the other simple Monit script, monit-zfs-patch-check.sh (ShellCheck: 100% :cool:), which I wrote to monitor the patch status of the file and to help with patching. Please create a backup of the "original state" beforehand and test the patching manually, as things might change over time. Absolutely no warranties here.

The Monit configuration looks like this:
Code:
# Check PROXMOX ZFS patch
check program proxmox-zfs-patch with path "/root/monit-zfs-patch-check.sh"
    if status != 0 then alert
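Again for reference, a minimal sketch of such a check (the marker is simply the comment line the patch adds; the real script additionally helps with re-applying the patch):

Code:
#!/bin/sh
# monit-zfs-patch-check.sh (sketch): exit 0 while the patched code is present
# in ZFSPoolPlugin.pm, non-zero once a package update has replaced the file.
PLUGIN="/usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm"

if grep -q '### set key location and load key' "$PLUGIN"; then
    echo "$PLUGIN is patched"
    exit 0
else
    echo "$PLUGIN is NOT patched - re-apply the ZFS encryption patch"
    exit 1
fi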

If you forgot to patch the file (mostly after a PROXMOX update) and the system runs its replication tasks, you might get errors like this one:

Code:
999-0: start replication job
999-0: guest => VM 999, running => 35538
999-0: volumes => tank-encrypted-vm-data-migrate:vm-999-disk-0
999-0: freeze guest filesystem
999-0: create snapshot '__replicate_999-0_1732759341__' on tank-encrypted-vm-data-migrate:vm-999-disk-0
999-0: thaw guest filesystem
999-0: using insecure transmission, rate limit: none
999-0: incremental sync 'tank-encrypted-vm-data-migrate:vm-999-disk-0' (__replicate_999-0_1732752141__ => __replicate_999-0_1732759341__)

999-0: cannot send tank/encrypted/vm-data-migrate/vm-999-disk-0@__replicate_999-0_1732759341__: encrypted dataset tank/encrypted/vm-data-migrate/vm-999-disk-0 may not be sent with properties without the raw flag
999-0: warning: cannot send 'tank/encrypted/vm-data-migrate/vm-999-disk-0@__replicate_999-0_1732759341__': backup failed
999-0: command 'zfs send -Rpv -I __replicate_999-0_1732752141__ -- tank/encrypted/vm-data-migrate/vm-999-disk-0@__replicate_999-0_1732759341__' failed: exit code 1
999-0: [pve02] cannot receive: failed to read from stream

999-0: [pve02] command 'zfs recv -F -- tank/encrypted/vm-data-migrate/vm-999-disk-0' failed: exit code 1
999-0: delete previous replication snapshot '__replicate_999-0_1732759341__' on tank-encrypted-vm-data-migrate:vm-999-disk-0
999-0: end replication job with error: failed to run insecure migration: command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve02' -o 'UserKnownHostsFile=/etc/pve/nodes/pve02/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.42.2 -- pvesm import tank-encrypted-vm-data-migrate:vm-999-disk-0 zfs tcp://192.168.42.1/24 -with-snapshots 1 -snapshot __replicate_999-0_1732759341__ -allow-rename 0 -base __replicate_999-0_1732752141__' failed: exit code 255

Just patch the file in this case and everything should be up and running again.

Now have fun with a working PROXMOX setup with fully encrypted ZFS datasets and migration ability ;)
 
