That depends heavily on the required security isolation.
Docker is not known for good out-of-the-box security, and a lot of Docker images out there are not built with security in mind (e.g. running as non-root, being able to run with a read-only filesystem, etc.). Security-conscious docker-compose.yml files are not the norm either: most published "solutions" are "it-just-works" setups that pass through sockets or ports directly, or simply run on the host network (`network_mode: host`), with no real isolation between containers. There are storage and network plugins that greatly enhance Docker's default security, but they have to be installed, understood and maintained. Putting every service on its own encapsulated network (without internet access) is also a must, and again not the default. A better option would be to go straight to k8s, which has a much better default security model and isolation than Docker itself, but at the cost of a more complicated setup and higher hardware requirements (usually more than one host).
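To illustrate the hardening points mentioned above (non-root user, read-only filesystem, dropped capabilities, an isolated internal network instead of published ports), here is a minimal docker-compose sketch. The image, service and network names are placeholders, and whether a given image actually runs under these restrictions depends entirely on that image:

```yaml
# Hypothetical hardened service definition; adjust names/paths to the real application.
services:
  app:
    image: example/app:1.2.3        # placeholder image
    user: "1000:1000"               # run as a non-root UID/GID
    read_only: true                 # read-only root filesystem
    tmpfs:
      - /tmp                        # writable scratch space only where needed
    cap_drop:
      - ALL                         # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true      # block privilege escalation via setuid binaries
    networks:
      - backend                     # no published ports, internal network only

networks:
  backend:
    internal: true                  # containers on this network get no internet access
```

A reverse proxy (or another service) that must actually be reachable would then sit on a second, non-internal network and publish only the ports it really needs.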
That being said, and back to the original question with Docker:
I recommend splitting the "services" at least along security zones, e.g. internally-available services separated from externally-available ones, and further depending on your security requirements. Putting each zone in its own VM brings a lot of security benefits if you also use VM-level firewalling. Docker plus a Linux firewall inside the VM is a nightmare, because Docker inserts nat PREROUTING entries for published ports that bypass the usual INPUT rules; I just ran into this problem again yesterday and there seems to be no good solution besides firewalling from the outside. Depending on the hosted services, you can also build a virtual DMZ for each VM, e.g. via a firewall security group that only allows incoming traffic and only selected outgoing traffic to further cage the services (all managed from the PVE side); a rough sketch follows below. You can do all of that on one big VM, but it quickly gets confusing and the chance of a human error affecting all services at once is (much) higher.
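As an illustration of such a virtual DMZ, and assuming the PVE firewall is enabled at the datacenter level, a security group plus a restrictive VM policy could look roughly like this; the group name, VMID and addresses are placeholders:

```
# /etc/pve/firewall/cluster.fw
# Hypothetical security group "dmz-web": allow HTTPS in, DNS out to one resolver only
[group dmz-web]
IN ACCEPT -p tcp -dport 443
OUT ACCEPT -dest 192.168.10.53 -p udp -dport 53

# /etc/pve/firewall/105.fw
# The VM (placeholder VMID 105) pulls in the group and drops everything else by default
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: DROP

[RULES]
GROUP dmz-web
```

PVE also ships rule macros (e.g. HTTPS, DNS) that can be used instead of spelling out protocols and ports by hand; the point is simply that the caging happens on the hypervisor side, outside the reach of anything running in the guest.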