[SOLVED] Proxmox AES-NI Support

Jay L

Active Member
Jan 27, 2018
Hi,

I am running Proxmox on a fanless PC with a J3160, which includes AES-NI acceleration. A container on this machine happens to be running OpenVPN. Does Proxmox expose the AES-NI extensions to containers? I am not exactly sure how to test this, or whether there is something I need to do first, so I figured I would ask the experts here.

Thank you!
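For anyone wondering how to test: here is a quick sketch of a check you can run inside the container. Note that an LXC container shares the host's kernel, so /proc/cpuinfo there reflects the host's CPU flags (the echoed messages are just illustrative, not from any tool):

```shell
# Check whether the CPU flags visible here include AES-NI.
# In an LXC container this reads the host's CPU flags, since the
# kernel is shared with the host.
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI flag present"
else
    echo "AES-NI flag missing"
fi
```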
 

Update: I found this page: https://www.cyberciti.biz/faq/how-to-find-out-aes-ni-advanced-encryption-enabled-on-linux-system/

Those tools confirm that the module is installed. However, the test to see whether it is working using "openssl engine" results in the following:

Code:
(rdrand) Intel RDRAND engine
(dynamic) Dynamic engine loading support

Does this mean that AES-NI is not actually working?

Thank you!
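If I understand it correctly, that engine output is actually normal: on any reasonably recent OpenSSL, AES-NI is not exposed as an engine at all but is built into the default implementation, so "openssl engine" listing only rdrand and dynamic does not mean AES-NI is broken. A more telling check is to compare the non-EVP path (pure software on OpenSSL 1.1.x) against the EVP path, which uses AES-NI when the CPU flag is visible (on OpenSSL 3.x both may already route through the provider, so the gap can be smaller there):

```shell
# Software-only AES path (does not use AES-NI on OpenSSL 1.1.x):
openssl speed -seconds 1 aes-128-cbc

# EVP path, which uses AES-NI when available; expect a large
# speedup over the run above if acceleration is actually working:
openssl speed -seconds 1 -evp aes-128-cbc
```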
 
Almost 5 years later, you can also just set the flag without setting an explicit cpu model:

View attachment 47407

Thanks very much for this. I didn't realize AES wasn't enabled in a guest VM by default. I am running Untangle NGFW in a guest VM under Proxmox version 8 with all default settings. Then I found your flag to enable AES as shown above. After backing up my guest VM I set the flag, and the openssl test results improved significantly once the Untangle guest rebooted.
 
Almost 5 years later, you can also just set the flag without setting an explicit cpu model:
And another year later... I'm not sure this actually works?

At least with a TrueNAS Scale 24.10 guest on Proxmox 8.3, I tried every combination of the generic CPU types x86-64-v2, x86-64-v2-AES, and x86-64-v3, both with and without the AES flag manually enabled... and in all cases, encryption performance was unusable: 12 Broadwell cores 100% loaded for just 200MB/s throughput...

When I set a modern explicit CPU model, or host, it works flawlessly: roughly 100x less CPU load for similar throughput.

I also had so much trouble with excessive pfSense VPN CPU usage in the past (probably on Proxmox 7) that I moved to bare metal. I think I was trying to use generic CPU types with the AES flag back then, since I had a cluster with dissimilar hardware and the host CPU type broke live migration. I'll have to try again, because I think setting an explicit CPU type (old enough to be supported by all the nodes) would fix the AES support and still allow migration.
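For reference, the CLI equivalent of toggling the flag in the GUI would look something like this (the VMID 100 is a placeholder; this is just a sketch of the qm --cpu syntax, not run here):

```shell
# Generic CPU model plus the AES flag explicitly enabled
# (flags are ';'-separated and prefixed with + or -):
qm set 100 --cpu x86-64-v2,flags=+aes

# Or pass the host CPU through: best performance, but it breaks
# live migration between dissimilar nodes:
qm set 100 --cpu host
```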
 
And another year later... I'm not sure this actually works?
"Works for me"

Code:
root@vm-x86-64-v3 ~ > openssl speed -evp aes-256-cbc
Doing aes-256-cbc for 3s on 16 size blocks: 82707883 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 64 size blocks: 22518939 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 256 size blocks: 5780765 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 1024 size blocks: 1459706 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 8192 size blocks: 184347 aes-256-cbc's in 3.00s
Doing aes-256-cbc for 3s on 16384 size blocks: 91169 aes-256-cbc's in 3.00s
OpenSSL 1.1.1w  11 Sep 2023
built on: Sun Nov  3 04:59:56 2024 UTC
options:bn(64,64) rc4(8x,char) des(int) aes(partial) blowfish(ptr)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -ffile-prefix-map=/build/reproducible-path/openssl-1.1.1w=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-256-cbc     441108.71k   480404.03k   493291.95k   498246.31k   503390.21k   497904.30k

On the hypervisor it is a bit faster:

Code:
root@hypervisor ~ > openssl speed -evp aes-256-cbc
Doing AES-256-CBC for 3s on 16 size blocks: 88678175 AES-256-CBC's in 3.00s
Doing AES-256-CBC for 3s on 64 size blocks: 23516151 AES-256-CBC's in 3.00s
Doing AES-256-CBC for 3s on 256 size blocks: 5961784 AES-256-CBC's in 2.99s
Doing AES-256-CBC for 3s on 1024 size blocks: 1495080 AES-256-CBC's in 2.99s
Doing AES-256-CBC for 3s on 8192 size blocks: 187132 AES-256-CBC's in 2.99s
Doing AES-256-CBC for 3s on 16384 size blocks: 93664 AES-256-CBC's in 2.99s
version: 3.0.14
built on: Sun Sep  1 14:59:10 2024 UTC
options: bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -fzero-call-used-regs=used-gpr -DOPENSSL_TLS_SECURITY_LEVEL=2 -Wa,--noexecstack -g -O2 -ffile-prefix-map=/build/reproducible-path/openssl-3.0.14=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
CPUINFO: OPENSSL_ia32cap=0x7ffef3ffffebffff:0x21cbfbb
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
AES-256-CBC     472950.27k   501677.89k   510440.37k   512027.40k   512704.13k   513241.13k
 
After more testing, you seem to be right, and my issues are not with Proxmox. pfSense can clearly tell me in the webUI that various AES hardware crypto features are available when using the x86-64-v3 generic CPU type plus the AES flag. My OpenVPN performance is poor, but that may be more of an OpenVPN or configuration issue; it's not slow enough to indicate that AES is actually broken.

My WireGuard performance is about 50% slower for the same CPU load vs. bare metal, but this is also not an AES issue, as WireGuard doesn't use AES. I am using virtualized NICs, so there is hypervisor overhead there and a lack of hardware offloading, on top of the hypervisor overhead for the CPU itself.

My Tailscale performance is awful: about 4x worse than WireGuard for the same CPU load, and I don't see the same issue on bare metal. Still not due to broken AES, however.

I have no idea why Linux-based TrueNAS Scale can't make use of the AES flag and perform decently with a generic CPU type, but it seems safe to say it's not Proxmox's fault.
 
I'm having similar problems with disk encryption in TrueNAS Scale running on Proxmox. Even when setting the AES flag for x86-64-v2-AES, the performance is incredibly slow. I was expecting more people to have the same problem, but I guess people don't care enough about encryption. Link to my post on this forum