Qemu for Proxmox (pve-qemu) with ALL Supported KVM and Emulated CPUs: Debug and Release DEB Builds Available

lillypad

Hello Proxmox Users,

The Issue:

Proxmox (pve-qemu) currently only enables the x86, x86_64, and ARM64 architectures, while QEMU supports many more.

History:

I've seen old posts about using ARM and MIPS with Proxmox, but not much has been done to really address these missing features other than adding ARM to the pve-qemu build script as a --target-list configure flag.
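For context, this is roughly what that flag looks like when building QEMU by hand; the target names below are only illustrative, and pve-qemu passes its own list from the Debian packaging:

# Illustrative: QEMU's configure script takes a --target-list flag naming
# which system emulators to build. Adding more softmmu targets here is the
# "easy" part of unlocking additional architectures.
./configure --target-list="x86_64-softmmu,aarch64-softmmu,arm-softmmu,mips-softmmu,mipsel-softmmu"
make -j$(nproc)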

Posts on the topic of non-x86 Proxmox (2016-2017):
- https://forum.proxmox.com/threads/emulating-non-x86-machine-types.35801/
- https://pve.proxmox.com/pipermail/pve-user/2016-October/167497.html
- https://www.nicksherlock.com/2017/08/emulating-mips-guests-in-proxmox/

The Question:

If Qemu supports all of these architectures, why not enable them all?

The Solution:

To make development and testing of these features easier, I created a build of pve-qemu from (git://git.proxmox.com/git/pve-qemu) with all CPUs supported by QEMU enabled.

The build comes with pve-qemu in both development and release DEB packages.

The modified build is called pve-qemu-unlocked because it enables all CPU architectures that QEMU actually supports, not just the x86-based ones.

To test this on Proxmox, do the following:
wget https://github.com/lillypad/pve-qem...1.1-3_amd64/pve-qemu-kvm_4.1.1-3_amd64.tar.gz
tar -xzvf pve-qemu-kvm_4.1.1-3_amd64.tar.gz
cd pve-qemu-kvm_4.1.1-3_amd64/
sudo dpkg -i pve-qemu-kvm_4.1.1-3_amd64.deb
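
After installing, a quick way to confirm the extra emulators are available (assuming the unlocked package installs them under /usr/bin like the stock pve-qemu-kvm does):

# List the installed system emulators and query one of the new ones.
ls /usr/bin/qemu-system-*
qemu-system-mips --version
qemu-system-mips -machine help    # show the MIPS machine models QEMU offers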

Source Code and Builds:
- https://github.com/lillypad/pve-qemu-unlocked

If you wish to build from source, simply run make and Docker will do everything else for you.
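
Roughly, the from-source build looks like this (the exact make targets may differ; check the repository's Makefile):

# Sketch of a from-source build; requires git, GNU make, and Docker.
git clone https://github.com/lillypad/pve-qemu-unlocked.git
cd pve-qemu-unlocked
make    # builds the .deb packages inside a Docker container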

Future Goals:
- Make new architectures easy to test and implement for developers with pre-compiled pve-qemu binaries
- Modify UI to enable features needed for all Qemu supported architectures
- Implement CI for regular build snapshots

Support:
If you decide to test this and encounter any issues, please let me know which errors you hit so I can incorporate fixes into this modified build.

Why?: I do malware analysis / reverse engineering and couldn't bear waiting for these features to be implemented on a decent virtualization server. Malware like Mirai needs to be analyzed, and this would make my life a little easier.
 
Nice, thanks!

The integration of non-default architectures in LXC is also a good thing, but it works too unreliably. I've been trying this for years with binfmt kernel support and the qemu-user binary.
 
Nice, thanks!

The integration of non-default architectures in LXC is also a good thing, but it works too unreliably. I've been trying this for years with binfmt kernel support and the qemu-user binary.

I can't comment on LXC and its implementation of other architectures; to be honest, I don't have enough experience with it to say whether this is the case, but it is interesting to hear from you that it's not that stable.

As you probably already know, using binfmt with qemu-user still means a lot of work to get the environments right for practical execution of foreign executable formats. Being able to run full ARM or MIPS Linux distributions makes dealing with environments a little easier.
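
For comparison, user-mode emulation of a single foreign binary looks roughly like this; the binary name and sysroot path are placeholders:

# qemu-user-static runs one foreign ELF binary at a time; binfmt_misc can
# make this transparent, but the environment (target libraries, /proc, /dev)
# still has to be prepared by hand. A dynamically linked binary also needs
# -L pointing at a matching sysroot.
apt-get install qemu-user-static binfmt-support
qemu-mips-static -L /usr/mips-linux-gnu ./suspicious_mips_binary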

Having a single Proxmox instance with a fully unlocked QEMU install is much more cost-effective and easier to manage than maintaining hardware for all of these CPU architectures for research projects.

I think that if the advanced features are there, why not enable them? What would be great is an "opt-in" setting; this way novice users know what they are getting into without leaving advanced users behind.

UPDATE: Just started the Travis CI builds and will provide more information as those builds become available; to start, I will be supplying weekly builds! :)

 
the hard part is not building qemu with support for more targets, but implementing and maintaining proper hardware/machine models long-term (especially w.r.t. live-migration). that's the main reason why there is no support in PVE yet - the use case is very niche, the maintenance overhead is rather high.
 
the hard part is not building qemu with support for more targets, but implementing and maintaining proper hardware/machine models long-term (especially w.r.t. live-migration). that's the main reason why there is no support in PVE yet - the use case is very niche, the maintenance overhead is rather high.


Hello fabian,

I would have to respectfully disagree with you on both points.

"...the use case is very niche..."

Response:
The use case for these types of features is only growing along with the cyber security industry, and it is becoming less niche very fast.
- https://blog.talosintelligence.com/
- https://www.proofpoint.com/us/blog
- https://asec.ahnlab.com/
- https://www.welivesecurity.com/
- https://blog.360totalsecurity.com/en/
- https://twitter.com/malwaretechblog
- https://twitter.com/malwrhunterteam
- https://www.malware-traffic-analysis.net/
- https://any.run/
- https://www.hybrid-analysis.com/
... I could keep going with more examples

"...but implementing and maintaining proper hardware/machine models long-term (especially w.r.t. live-migration). that's the main reason why there is no support in PVE yet..."

Response:
If I'm correct, live migration is the movement of a virtual machine from one host to another without loss of client connections. This of course sounds like a great feature for the stability of different services. However, that does not mean the feature couldn't simply be disabled for anything that is not x86 or x86_64. Wouldn't disabling live migration on non-x86 CPU architectures mean less overhead?

:)
 
Response:
If I'm correct, live migration is the movement of a virtual machine from one host to another without loss of client connections. This of course sounds like a great feature for the stability of different services. However, that does not mean the feature couldn't simply be disabled for anything that is not x86 or x86_64. Wouldn't disabling live migration on non-x86 CPU architectures mean less overhead?
Special casing is seldom less overhead. Also, the HW layout of other architectures differs, not only in CPU options but in lots of places.

If you really want to contribute an "unlocked QEMU experience" you can try to integrate it into qemu-server and work with the devs over at pve-devel. First we'd probably need to clean up the arm64/AARCH64 integration, then add others. Enabling the other QEMU targets isn't the real work.
 
I can't comment on LXC and its implementation of other architectures; to be honest, I don't have enough experience with it to say whether this is the case, but it is interesting to hear from you that it's not that stable.

As you probably already know, using binfmt with qemu-user still means a lot of work to get the environments right for practical execution of foreign executable formats. Being able to run full ARM or MIPS Linux distributions makes dealing with environments a little easier.

I have my raspi and orangepi base images on my ZFS-PVE system and update them from time to time by chrooting into them. The problems I had were easy to solve for a CLI lover, but not production ready. I often had hanging processes that needed to be killed. This didn't happen that often, but manually killing processes is not the best you can do. For example, I had a hang while updating the initramfs of my Pi while being in dpkg, so the package installation did not finish properly, etc. That was fixable, but still. It's not a PVE problem, PVE was not involved; it's a binfmt problem itself.
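
Roughly, that workflow looks like this (paths are placeholders; depending on the image, qemu-arm-static or qemu-aarch64-static is the right interpreter):

# binfmt + chroot maintenance of a foreign-architecture rootfs.
apt-get install qemu-user-static
cp /usr/bin/qemu-arm-static /path/to/raspi-rootfs/usr/bin/
chroot /path/to/raspi-rootfs /bin/bash
# inside the chroot: apt update, dpkg, update-initramfs, ...
# hangs like the one described above have to be killed from the host.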

Having a single Proxmox instance with a fully unlocked QEMU install is much more cost-effective and easier to manage than maintaining hardware for all of these CPU architectures for research projects.

Do you have experience running e.g. a Pi image in QEMU? I tried that with vanilla QEMU a few years ago and it was a lot of command-line arguments and an extremely slow virtualisation experience.
 
Do you have experience running e.g. a Pi image in QEMU? I tried that with vanilla QEMU a few years ago and it was a lot of command-line arguments and an extremely slow virtualisation experience.

I have, and yes, it is slower; that is to be expected, as it's emulating a CPU of a different architecture than the one on the host machine. For debugging and detonating malware, though, it works well enough for what we need. The main use case, again, would be research work.
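
For reference, an emulated ARM64 guest on QEMU's generic "virt" machine looks something like this; kernel, initrd, and disk paths are placeholders, and a real Raspberry Pi image would instead need its board-specific kernel and device tree:

qemu-system-aarch64 \
    -M virt -cpu cortex-a53 -m 1024 -smp 2 \
    -kernel vmlinuz-arm64 -initrd initrd.img-arm64 \
    -append "root=/dev/vda2 console=ttyAMA0" \
    -drive if=virtio,file=debian-arm64.qcow2,format=qcow2 \
    -netdev user,id=net0 -device virtio-net-device,netdev=net0 \
    -nographic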

Special casing is seldom less overhead. Also, the HW layout of other architectures differs, not only in CPU options but in lots of places.

If you really want to contribute an "unlocked QEMU experience" you can try to integrate it into qemu-server and work with the devs over at pve-devel. First we'd probably need to clean up the arm64/AARCH64 integration, then add others. Enabling the other QEMU targets isn't the real work.

Special casing could be avoided by allowing us to add custom QEMU parameters in the Add Hardware menu.

I will definitely consider visiting pve-devel to make a contribution. Based on what you've stated about the code base, I'd probably tackle the issue this way:

1. Enable all architectures for QEMU
2. Disable live-migration for anything that is non-x86
3. Add option when adding hardware to specify custom QEMU advanced parameters
4. Determine if an "opt-in" setting for advanced features would be needed

That would in theory be the least amount of work to get the features enabled. :)
 
Why?: I do malware analysis / reverse engineering and couldn't bear waiting for these features to be implemented on a decent virtualization server. Malware like Mirai needs to be analyzed, and this would make my life a little easier.

i'd like to ask out of curiosity, what does your setup look like in terms of networking?

malware analysis can be dangerous if there are misconfigurations.

what is your workflow for sandboxing and keeping samples in the VM?
 
i'd like to ask out of curiosity, what does your setup look like in terms of networking?

malware analysis can be dangerous if there are misconfigurations.

what is your workflow for sandboxing and keeping samples in the VM?

Hello oguz,

There are no 100% secure solutions when it comes to dealing with malware.

However, it is possible to mitigate most of the risk by taking a layered approach to security.

1. Have your malware research server on a completely segregated physical network (air-gap).
2. All VMs detonating malware should be on an isolated virtual network (see the sketch after this list).
3. All communication to the internet should be run through an anonymous VPN service.
4. All virtual machines should be stripped of any details pertaining to your location and other metadata.
5. All machines that connect to the malware server shall be on the same network as the malware server.
6. Remote access to the network is not permitted under any circumstance; only physical access via ethernet is possible.
7. All machines on the network shall be considered compromised at all times.
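
As an illustration of point 2, an isolated guest network on a Proxmox host is just a bridge with no physical port attached; the bridge name below is only an example:

# Append an isolated bridge to /etc/network/interfaces (no bridge-ports,
# so guests on vmbr666 can reach each other but not the physical LAN).
cat >> /etc/network/interfaces <<'EOF'

auto vmbr666
iface vmbr666 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
EOF
ifreload -a    # with ifupdown2; otherwise restart networking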

Even more security would follow plans similar to a Sensitive Compartmented Information Facility (SCIF), where any signals, including wireless, are prevented from entering or exiting the secure area where the analysis equipment lives.

These setups are typical of military cyber installations and threat operations centers, where budgets for security and threat analysis are high.

This is a VERY basic overview, but it should give you an idea.
 
Quick Update,

Had a look into the code base today; the qm utility already has the ability to pass arbitrary arguments to virtual machines via the API.
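
If I'm reading it right, this is the args property of the VM config, so something like the following should already work today (the VMID and the extra arguments are just examples):

# Pass extra QEMU arguments to VM 100 through the existing CLI/API;
# they get appended to the generated QEMU command line.
qm set 100 --args "-no-reboot -d guest_errors"
qm config 100 | grep ^args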

This means step 1 is complete, and step 3 is half complete, likely only needing an option that calls that part of the API.

1. (done) Enable all architectures for pve-qemu
2. Disable live-migration for anything that is non-x86
3. (50% done) Add option when adding hardware to specify custom QEMU advanced parameters
4. Determine if an "opt-in" setting for advanced features would be needed

This means the implementation is already starting to look easier.

:)
 
Last edited:
