Preventing VM detection from RDTSC VM EXIT checks?

some_dingdong

New Member
Jul 19, 2020
I have been using a VM for a while now; however, I recently noticed that this VM isn't so undetectable after all. Running the pafish test shows that the rdtsc VM-exit checks are traced.
[Screenshot: pafish output flagging the "rdtsc VM exit" timing check]
These are CPUID timing checks, I believe.

Is it possible to hide these traces in Proxmox?

I have also found that someone managed to hide these traces by using the configuration below:
[Screenshot: libvirt XML configuration with CPU feature flags and timer settings]
But this is a libvirt XML configuration. How can I use these settings in Proxmox?
 
You cannot hide the fact that you are running within a VM. While it is potentially viable to at least counteract the one vector you mentioned (rdtsc emulation detection [0]), there are many many more. QEMU and KVM are not made to be invisible, and while they include some measures to fool the most rudimentary VM detection checks (such as hiding certain paravirtualization aspects when asked to), they do not include the facilities to completely hide the fact that a hypervisor is present (I don't even know of any hypervisor project on x86 that has managed this).

[0] though even that probably needs some special kernel patches, I'm not sure how the libvirt config you posted would affect things (the instruction at fault here is not actually rdtsc, but cpuid, which forcibly causes not only a VM-exit but also an exit to userspace/QEMU, and thus takes a long time)
 
This is the specific check that I want to hide. I'm very inexperienced with anything kernel-related; however, I would at least like to try these settings, since someone claims they worked for a related problem. So is it possible to apply the configuration above? If yes, how?
 
You can try to emulate it as closely as possible, sure. Check out our custom CPU models feature, for example: you can add the flags from your libvirt config there (the ones marked as "feature", e.g. "invtsc", "x2apic", etc...). The timer ones are more tricky, but the most relevant one (tsc) should be covered by adding "invtsc" to the flags.
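For reference, a custom CPU model carrying those flags could look roughly like this (the model name "myhidden" and the exact flag selection are just placeholders):

Code:
# /etc/pve/virtual-guest/cpu-models.conf
cpu-model: myhidden
    flags +invtsc;+x2apic

The VM then references it with "cpu: custom-myhidden" in its config (note the "custom-" prefix).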

Still, this is probably not going to fix the check you want to hide. The base technology (KVM and QEMU) is simply not made for this. If anything, it likes to deliberately expose itself as a VM for PV performance gains.
 
After some experiments and failures I have noticed that "invtsc" is an Intel-specific feature. Sadly, I could not find an AMD equivalent. Nor have I found anything related to the timer implementation besides using libvirt.

Someone managed to patch KVM by adding some extra code. However, I do not know how to implement custom patches in Proxmox. This documentation only explains how to suggest patches for future releases, but not how to implement them.
 
After some experiments and failures I have noticed that "invtsc" is an Intel-specific feature. Sadly, I could not find an AMD equivalent. Nor have I found anything related to the timer implementation besides using libvirt.
"invtsc" is a bit of a corner case, so while technically not supported on AMD bare-metal, AFAIU "invtsc" is more of a workaround around a buggy "constant_tsc" implementation on Intel, so as long as your host AMD chip has "constant_tsc" you should be good to set the "invtsc" flag for your VMs. Although x86 timers are always a corner case, so YMMV.

What also might help: qm showcmd <vmid> --pretty shows you the raw QEMU command line of your current VM configuration. There you can see details on how PVE configures timers (or doesn't, and trusts the QEMU default most of the time).
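For example (VMID 100 is just a placeholder), to look only at the CPU- and timer-related parts of that command line:

Code:
qm showcmd 100 --pretty | grep -E -e '-cpu|tsc|hpet|kvmclock'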

Someone managed to patch KVM by adding some extra code. However, I do not know how to implement custom patches in Proxmox. This documentation only explains how to suggest patches for future releases, but not how to implement them.
If by "implement them" you mean just testing them, then it's as easy as building the package (usually just a "make deb" in the repo cloned from https://git.proxmox.com) and then installing it via apt/dpkg. Then you can test on your local machine by applying the patches to the repo and building again.

If you want to patch the linux kernel code though, you might be better off contacting the KVM developers on their mailing list.
 
After many hours I have managed to create a patch for pve-kernel. I followed mixed instructions from the developer documentation and this post. However, after installation I did not notice any effect. The files have been modified as described in this repository.
Here is what I did:

Code:
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel
make submodule
cd ..
git clone pve-kernel/submodules/ubuntu-focal/.git test-kernel
cd test-kernel
git checkout -b current-pve
git am ../pve-kernel/patches/kernel/*
git checkout -b my-working-branch

Then I patched those two files (vmx.c and svm.c) and created a patch:

Code:
git add arch/x86/kvm/vmx/vmx.c arch/x86/kvm/svm.c
git commit -s
git format-patch -o my-patches/ --subject-prefix="PATCH kernel" current-pve..my-working-branch

Then I copied the patch files into pve-kernel/patches/kernel and ran make deb from the pve-kernel directory.
I installed the generated .deb packages with apt install *.deb and apt dist-upgrade.

At this point I expected them to work. However, the patches contain the line printk("[handle_rdtsc] fake rdtsc svm function is working\n"); but I cannot find it in the syslog. Either the patch does not work or I somehow failed to build it correctly. I would appreciate any kind of help, as this is my first time doing this.
 
When building your kernel, do you see these lines in the 'make deb' output:

Code:
applying patch '../../patches/kernel/0001-Make-mkcompile_h-accept-an-alternate-timestamp-strin.patch'
patching file scripts/mkcompile_h
applying patch '../../patches/kernel/0002-bridge-keep-MAC-of-first-assigned-port.patch'
patching file net/bridge/br_stp_if.c
applying patch '../../patches/kernel/0003-pci-Enable-overrides-for-missing-ACS-capabilities-4..patch'
patching file Documentation/admin-guide/kernel-parameters.txt
patching file drivers/pci/quirks.c
applying patch '../../patches/kernel/0004-kvm-disable-default-dynamic-halt-polling-growth.patch'
patching file virt/kvm/kvm_main.c
applying patch '../../patches/kernel/0005-Revert-KVM-VMX-enable-nested-virtualization-by-defau.patch'
patching file arch/x86/kvm/vmx/vmx.c
applying patch '../../patches/kernel/0006-Revert-scsi-lpfc-Fix-broken-Credit-Recovery-after-dr.patch'
patching file drivers/scsi/lpfc/lpfc.h
patching file drivers/scsi/lpfc/lpfc_hbadisc.c

There should be an "applying patch" line for your custom patch as well; if not, it's not being applied.

If it is being applied, you have installed the package, and you have rebooted, then it is pretty certain that your patched kernel is actually running - so it might be a logic error in your patch code (which you 1. haven't posted and 2. is a bit out of scope for the forum so that's left for you to fix anyway ;) ).
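To rule out the "wrong kernel running" case and to look for the printk output (which goes to the kernel ring buffer, not only syslog), something like this should do (the grep string matches the message you quoted):

Code:
uname -r                        # should match the version of the kernel you just built
dmesg | grep -i 'fake rdtsc'    # or: journalctl -k | grep -i 'fake rdtsc'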
 
might be a logic error in your patch code (which you 1. haven't posted
I have posted a link to the repository I followed, and those modifications are very small.
Is there a way to check the build log without recompiling the kernel again?
 
Is there a way to check the build log without recompiling the kernel again?
Not unless you saved the terminal output.
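For the next build you can capture the output while still seeing it on the terminal, e.g.:

Code:
make deb 2>&1 | tee build.log
grep 'applying patch' build.log   # should list your custom patch as well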

I have posted a link to the repository I followed, and those modifications are very small.
Sorry, missed that. Did you also follow the advice to disable rdtscp for QEMU? I.e. add a line to your VM config:
Code:
args: -cpu host,rdtscp=off,hv_time,kvm=off,hv_vendor_id=null,-hypervisor
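That line goes into the VM configuration file (/etc/pve/qemu-server/<vmid>.conf); it can also be set from the CLI, for example (VMID 100 is just a placeholder):

Code:
qm set 100 --args '-cpu host,rdtscp=off,hv_time,kvm=off,hv_vendor_id=null,-hypervisor'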
 
