I've got Proxmox acting as a router (pfSense) for my home network without issues. I'd agree this isn't something you'd use in a business, but I've had no problems. I've tried 2.1 with the VirtIO drivers and 2.0 with the e1000 drivers without a problem; however, I'm currently using PCI passthrough, as my server has 4 NICs.
So the current configuration is 2 NICs passed directly to the pfSense VM for LAN and WAN. The other 2 NICs are bonded and used for the other VMs on the host. I have a switch that supports this.
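For anyone curious what a bonded pair like that looks like on the Proxmox side, here's a rough sketch of the relevant `/etc/network/interfaces` stanzas. The interface names, addresses, and the LACP bond mode are all assumptions on my part; adjust them for your hardware and switch:

```
# /etc/network/interfaces (sketch -- names/addresses are hypothetical)
# eno1 and eno2 are deliberately left unconfigured here:
# they are passed through via VT-d to the pfSense VM.

auto bond0
iface bond0 inet manual
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad        # LACP; requires matching config on the switch

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The VMs then attach to `vmbr0` as usual, while pfSense owns its two NICs outright.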
The advantage I see to this configuration is that while the router is running on the host, it has direct access to the NICs and can control them as if it were physical. It's also quite well isolated from the other VMs. Performance is great, even when other VMs are working the system hard.
In my case I wanted to be able to push 200 Mbit/s of OpenVPN traffic, which you cannot do without a reasonable CPU, but I didn't want the extra heat and electricity costs of running a powerful second box as a router. This way all my VMs and the router sit on one server (an ML110 G7 with a quad-core Xeon).
As NICs are cheap, if your hardware supports VT-d I'd recommend this setup for a home/lab environment.
FastLaneJB:
In your description you're having no issues, and it's a great little home setup, but again it's not using a 255.255.0.0 netmask, and (one assumes) no advanced network configs or VLANs.
In his original description and question he specified that netmask; compared to a /24, it's not even on the same planet of networking complexity to achieve a reliable network. If he wanted to use a /24 netmask like you do at home (or at least like almost all home users would, because who needs more than 254 devices at home?), he wouldn't need a managed switch or any complex networking setup, and it would be more or less plug and play.
LET ME ALSO REMIND YOU: He wants to use 1 physical NIC for all this traffic. I never would, but it is one of his requirements. (If it helps, I use the Intel 10GBase-T NICs and I love them; I've got a stack in the lab to go with the 10GBase-T switch. But even if I had 20 of them, it wouldn't change the fact that he has a single NIC.)
If I were doing any home automation, I would use 2 subnets, with BOOTP for the automation devices/management server on their own subnet if that were supported. It seems like a very clean way to handle it, but alas, I don't do much home automation; it's too expensive for most folks around here, and I mostly work business-to-business. Short of that, I would use a managed switch with 2 VLANs and IP helpers, then route between the VLANs on the switch so they can reach each other. That avoids the need for 255.255.0.0 while still supporting up to 508 total addresses.
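The address math behind those numbers is easy to sanity-check with Python's standard `ipaddress` module. The subnet values below are just hypothetical examples, not anything from this thread:

```python
import ipaddress

# Two example /24 subnets, one per VLAN (addresses are hypothetical)
vlan10 = ipaddress.ip_network("192.168.10.0/24")
vlan20 = ipaddress.ip_network("192.168.20.0/24")

def usable(net):
    # A /24 holds 256 addresses; subtract the network and broadcast addresses
    return net.num_addresses - 2

print(usable(vlan10))                    # 254 usable hosts per VLAN
print(usable(vlan10) + usable(vlan20))   # 508 usable hosts across both VLANs

# Versus the flat /16 the OP specified
big = ipaddress.ip_network("192.168.0.0/16")
print(big.num_addresses - 2)             # 65534 usable hosts in one broadcast domain
```

So two routed /24 VLANs cover 508 devices while keeping each broadcast domain small, which is the whole point of avoiding the /16.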
My assumption is that this can be done with the virtual-machine method with little or no difficulty; however, the hardware requirements for the switch again become an issue, because a solid fully managed switch isn't cheap (PoE, one assumes, for the automation devices). And to actually pass the address barrier of a /24, you'd need at least six 48-port switches, or a bunch of Wi-Fi access points if you don't care about reliability.
That being said, if someone has the money to invest in enough home automation that 254 devices isn't enough, I think it's a stretch to believe they can't afford proper hardware to run the network.
DeepB: If you could post a quick diagram of your layout, I'd be happy to respond. Please include the switch model (managed or web-managed), and I'll give you a few more detailed examples to point you in the right direction, along with how much can be done in the switch vs. in pfSense (it will be faster in the switch in this case).