I'd recommend OPNsense. It fits all these requirements and it can be installed as a VM on Proxmox VE (just download the VGA ISO installer): https://opnsense.org/
It's basically an open-source firewall. Among other functions, you can create...
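In case it helps, here is a minimal, hypothetical sketch of creating such a VM from the Proxmox CLI. The VMID, ISO file name, storage, and bridge names are placeholders; adjust them to your environment (OPNsense typically wants a WAN and a LAN interface, hence the two NICs):

```sh
# Placeholders throughout: VMID 101, ISO name, local-lvm storage, vmbr0/vmbr1
qm create 101 --name opnsense --memory 4096 --cores 2 \
    --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 \
    --cdrom local:iso/OPNsense-25.1-vga-amd64.iso \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:32 --ostype other
qm start 101
```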
The resolving commit for the mentioned vioscsi (and viostor) bug was merged into virtio master on 21 Jan 2026 (commit cade4cb, corresponding tag mm315).
So if the to-be-released version is tagged >= mm315, the patch will be included.
As for me...
That sounds like a reasonable fix. A minor delay in handoff between the nodes in the interest of security is very much acceptable. I look forward to seeing it pushed out once you get it applied and tested. Thanks!
Understandable. The only mitigation I can currently think of is utilizing a hook script, but that won't catch every case in the guest lifecycle.
I'll look into creating patches that resolve this problem by making the guest wait for a firewall...
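For what it's worth, a minimal hookscript sketch along those lines could look like the following. The script name, timeout, and matched status string are assumptions, and as noted above this only checks that the pve-firewall service itself is up, not that the per-guest ruleset has already been applied, so it won't catch every case:

```sh
#!/bin/sh
# Hypothetical hookscript: refuse to start the guest until pve-firewall
# reports it is running. Register it with, e.g.:
#   qm set <vmid> --hookscript local:snippets/wait-for-firewall.sh
vmid="$1"
phase="$2"

if [ "$phase" = "pre-start" ]; then
    # Poll the firewall status; the exact output string may differ by version.
    for _ in $(seq 1 30); do
        if pve-firewall status 2>/dev/null | grep -q 'enabled/running'; then
            exit 0
        fi
        sleep 1
    done
    echo "pve-firewall not running, refusing to start VM $vmid" >&2
    exit 1   # a non-zero exit in pre-start aborts the VM start
fi
exit 0
```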
Interesting, I’d have also expected firewall rules to be applied the whole time.
Can access be controlled by a firewall outside Proxmox? At least for external connections… That's our primary method, though we use the Proxmox firewall in a few cases...
Just an update. I have confirmed that I can exploit this to establish an SSH session with the guest by simply probing the SSH port every second. As soon as I migrate the VM to another node, the session connects and I can use that session as long...
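To illustrate the probe described above, a hypothetical reproduction from an external machine could be as simple as this (192.0.2.10 stands in for the guest's address):

```sh
# Knock on the SSH port once per second until the firewall window opens,
# then connect immediately. Address and user are placeholders.
until nc -z -w 1 192.0.2.10 22; do
    sleep 1
done
ssh user@192.0.2.10
```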
This is true in theory but not always possible. There is, for example, a well-known enterprise-grade backup product that allows backing up MS SQL clusters, but doesn't support backing up MySQL/MariaDB, Oracle, or PostgreSQL clusters.
For this...
I would never solve those RPO/RTO demands with a recovery solution; I would always go with internal replication/standby solutions within the database. There is a reason those techniques have existed in those products for decades. So those should be...
This is how I look at it.
You are offering to operate a bus for people, but instead of selling seats, you are putting buses inside your bus.
What's the use case?! If you're trying to offer a customer resources that they can distribute between...
No.
There is almost NEVER a use case for nested hypervisors except for development/lab use. Even if we assume there is no CPU/RAM performance degradation with modern VT extensions (hint: there is), the consequences of cascading...
You don't. I don't understand the use case enough to comment on the wisdom of the solution; please explain what you mean by VDS, and why you want Proxmox inside them.
If possible (e.g. enough storage available), you should always restore to a new VMID, or create a backup of the broken system before overwriting it.
You can never be sure that a backup is valid and restorable until you have successfully restored...
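As a concrete, hypothetical example of both options (the VMID, archive name, and storage are placeholders):

```sh
# Restore into a NEW, unused VMID (9100) instead of overwriting VM 100
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2025_01_01-00_00_00.vma.zst 9100 \
    --storage local-lvm

# Or back up the broken guest first, before restoring over it
vzdump 100 --storage local --mode stop
```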
Hi,
you may be interested in the following admin guide section for configuring pveproxy: pveproxy - Proxmox VE API Proxy Daemon
Of course, a proper firewall setup is still recommended.
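For example, the host-based access control described in that section lives in /etc/default/pveproxy; the networks below are placeholders, so adjust them to your environment:

```sh
# /etc/default/pveproxy -- allow the web UI/API only from trusted ranges
ALLOW_FROM="10.0.0.0/8,192.168.0.0/16"
DENY_FROM="all"
POLICY="allow"
```

After editing the file, restart the daemon with `systemctl restart pveproxy` for the change to take effect.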
If everything works as expected with a previous kernel, an option would be to install PVE 9, pin [0] the kernel that works (i.e. 6.14) and then upgrade to PVE 9.1.
If it works, you could pin the same kernel on the rest of the nodes before...
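For reference, pinning works roughly like this (the exact 6.14 version string below is only an example; pick one from the list output):

```sh
# Show installed kernels, then pin the known-good one
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.14.8-2-pve
```

The pinned kernel then stays the default across future kernel package updates until you unpin it.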
I did not dismiss anything; I am just trying to understand your odd accusations, given that nothing changed for your existing PVE subscriptions.
You still get exactly the same value from the lowest subscription tier you choose to pay for, nothing more...
Well, you could also see it as a bargain for enterprise environments: having a support subscription for all your PBS and PVE nodes is enough to get PDM without the nag, plus access to the enterprise repo, instead of having to pay for PDM...
Same here. Using VE 9 with an HP DL380 Gen10 and an HPE MR416i-p Gen10+ RAID controller.
Maybe that's related to this: https://bugzilla.kernel.org/show_bug.cgi?id=220693
Rolling back to 6.14 resolved it for now.