I just want to add my opinion on this. I work for a large consultancy that is mostly made up of developers, whereas I'm from an ops background.
As I see it, and as others have said, a lot of development is starting to go down the Docker route, especially for certain tasks. Take a front-end/back-end web site as an example: the back end is likely still a traditional SQL database of some sort, but the front end could be throw-away containers that scale up and down as demand requires.
I don't really see that as a pro-Docker argument, or as evidence that PVE *needs* Docker, since you can do exactly the same with LXC.
I can generate my template, start 10 containers from it, add a few more, stop a few... and let them provide services matching current demand. What's really missing for your case is an orchestration tool that handles this automatically (or as automatically as possible).
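To make the "start a few, stop a few" idea concrete, here is a minimal sketch of the kind of decision such an orchestration tool would automate. The function name, thresholds, and scaling rule are purely hypothetical illustrations, not part of any Proxmox VE API:

```python
# Naive scaling rule an orchestrator could apply to throw-away CTs.
# All names and thresholds here are hypothetical, not a PVE API.

def desired_replicas(current: int, load_per_ct: float,
                     target_load: float = 0.6,
                     minimum: int = 1, maximum: int = 10) -> int:
    """Pick a container count so the average load approaches target_load."""
    if load_per_ct <= 0:
        return minimum
    # Total work (current * load_per_ct), re-divided so each CT
    # sits near the target utilization.
    wanted = round(current * load_per_ct / target_load)
    return max(minimum, min(maximum, wanted))

# 4 CTs running at 90% load each -> grow to 6 CTs;
# 4 CTs idling at 15% load each -> shrink back to 1 CT.
print(desired_replicas(4, 0.9))   # -> 6
print(desired_replicas(4, 0.15))  # -> 1
```

The orchestrator would then start or stop template-based CTs until the running count matches the returned value; the hard part, as said above, is doing that reliably and automatically, not starting the containers themselves.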
Just adding the ability to start and stop Docker containers won't bring such a tool to PVE. And if we were to add one, I would prefer to do it with LXC (and VMs), because a) those are, AFAIK, a superset of Docker, so the functionality is already there, and b) LXC is our CT technology; managing and updating two container technologies does not make sense and causes more work. IMO it's better to ensure that one works well and that its problems get fixed.
The lack of HA / live migration is a moot point, because those containers don't contain any data you need to preserve. Say you have 4 hosts running this and one of them fails: the other 3 just fire up 1/3 more containers each to cover the current load. Assuming you've specified the hardware to the usual N+1, you have no issues here at all.
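The N+1 arithmetic above can be sketched in a few lines; the host and container counts are hypothetical numbers chosen only to illustrate the claim:

```python
# N+1 failover arithmetic: if one of N hosts fails, the survivors
# must absorb its share of the total container count.

def per_host_after_failure(hosts: int, containers_per_host: int) -> int:
    """Containers each surviving host runs after one host fails."""
    total = hosts * containers_per_host
    survivors = hosts - 1
    # Round up: the full load must still be covered.
    return -(-total // survivors)

# 4 hosts with 30 throw-away containers each; one host dies,
# so each of the remaining 3 picks up 1/3 more (30 -> 40).
print(per_host_after_failure(4, 30))  # -> 40
```

The "fire up 1/3 more each" figure falls out directly: with 4 hosts, each survivor takes on 1/(4-1) extra load, which is exactly what N+1 capacity planning reserves headroom for.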
You're describing exactly HA failover here (or at least one way to do it; there are more). If a node fails, this is exactly what our HA manager does: restart the services distributed across all other nodes. So it seems we can already do this.
You (as in anyone who wants to provide a service that should be reliable enough to be taken seriously) want HA; it's not a moot point, IMO.
At the minute I don't see it as one-size-fits-all, but you cannot ignore Docker. I accept that adding this is likely not a small task for the Proxmox developers, and that they are focused on the solution they have, which is excellent. I still hope they consider adding it, even in a basic form for throw-away Docker containers. After all, VMware and Microsoft are adding this functionality to their platforms, and OpenStack is also branching out into containers. Sooner or later, not having this functionality in Proxmox might hurt.
Proxmox VE has been "branched out into containers" from the very beginning: it has included an ecosystem of container tools since the start of the project.
If you set aside the current hype around Docker itself and reduce it to its functionality, I don't think people using LXC for containerization really miss out on anything, as far as the technology itself goes.
DAB's great, by the way. Developers, though, are writing Dockerfiles, so that's kind of becoming an industry standard.
But a service provider will always have to generate or configure its own images, and thus use some tool anyway; and these tools have such a short learning curve that it shouldn't really matter which one.
DAB also produces technology-independent image files (i.e. a rootfs): you can run them with LXC, chroot into them, put them on a bare-metal machine (though you'd need to install a kernel for that), ...
So I, personally, would say the container technology LXC can already do this; adding another one brings no value by itself. We should rather think about doing something with the ecosystem, e.g. in the direction of orchestration, as that is where the "Docker universe" currently shines more. That could bring real value to PVE far more easily than trying to fit the whole Docker ecosystem into it, putting a lot of man hours into that only to end up unable to do anything you couldn't do before. That's my opinion on the topic.