Still trying to learn this stuff, but I think the VM is not going to yield resources to other VMs if it's busy, whereas an LXC container will - is this correct?
It depends. If you have a lot of similar VMs you can leverage KSM (see:
https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM) ) to host a lot of these VMs on the same machine by deduplicating their memory contents. This feature isn't available for LXC containers, so in theory it might even be possible to run more VMs than containers on the same hardware.
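If you want to check whether KSM is actually doing something on your host, the kernel exposes counters under /sys, and Proxmox VE ships the ksmtuned service to manage it. A quick sketch (the numbers will of course differ on your system):

    # number of memory pages currently deduplicated by KSM
    cat /sys/kernel/mm/ksm/pages_sharing

    # ksmtuned only starts merging under memory pressure (around 80% usage by default)
    systemctl status ksmtuned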
The main issue with LXC containers is something else: they share the kernel with the Proxmox host and are less isolated than a VM. This makes setting them up for some tasks more involved than with a VM if you don't want to sacrifice security (see
https://pve.proxmox.com/wiki/Linux_Container#_bind_mount_points and
https://pve.proxmox.com/wiki/Unprivileged_LXC_containers ). A bind mount into an unprivileged container is a typical example; see the sketch below.
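Such a bind mount usually looks like this (container ID 100 and the paths are just examples; by default the container's root is mapped to UID 100000 on the host, so the host-side permissions have to match):

    # make a host directory visible as /shared inside container 100
    pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared

    # unprivileged container: UID 0 inside = UID 100000 on the host
    chown -R 100000:100000 /mnt/bindmounts/shared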
There are also cases where things break inside them that work just fine in a VM. For example, it's not recommended to run Docker or Podman containers inside an LXC, since they are containers too and thus use the same mechanisms of Linux. Normally they will work, but it has happened several times in the past that they broke after a host update. There was always a fix or workaround, but it still meant that something didn't work after an update. That might be fine in your home, but not in a corporate environment, and even at home it can be an issue if your family expects the service to work so they can enjoy their Internet or media files.
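If you decide to run Docker inside an LXC anyway, you usually have to enable nesting first, which is exactly the kind of extra fiddling I mean (again, 100 is just an example ID; keyctl is often needed for Docker in unprivileged containers):

    pct set 100 -features nesting=1,keyctl=1

It works, but every such switch also chips away a bit at the isolation you would otherwise get.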

The same Docker applications inside a VM didn't care about the host update; they just continued to work. And obviously, sharing the kernel means that you can only run Linux workloads inside an LXC container; for a BSD system (like OPNsense, NetBSD) or Windows you will need a VM, no way around it. For the same reason you also can't use just any version of a Linux distribution in a container, because too much has changed over the years.

For example, at my place of work we had a legacy application which needed to run on SUSE Linux Enterprise Server 11 (the current version is 16, and SLES 11 had been out of support for years before we finally got rid of that old application). We have VMware at work (which doesn't have LXC containers anyhow), but even with Proxmox VE we wouldn't have been able to run SLES 11 in a container; there are just too many differences between SLES 11 and a modern Linux system. It was also full of unpatched security issues (since it wasn't supported anymore), but running as a VM, the worst possible outcome would have been that somebody could take over said VM. In a container, by contrast, it would have been possible to break out to the host, wreaking havoc on anything else we run on that machine.
On the other hand, LXC containers sharing the host kernel also has some strengths. For example, it's quite easy to share a GPU between several LXC containers so they can use it for things like hardware transcoding of media files, AI workloads etc. Sharing a GPU between different VMs is only possible with quite expensive enterprise GPUs. You can still pass a GPU through to one VM, but then other VMs and the host can't use it anymore. So if you want to leverage the integrated GPU of your Proxmox host, you will lose its functionality for everything except said VM and would have to host everything you need a GPU for from this VM (e.g. all Docker containers which would like to use the GPU for hardware acceleration). Normally this isn't a big deal (because you will use the web interface or SSH for Proxmox VE administration), but if you ever need to do troubleshooting it's quite handy to still be able to use an external display together with the console.
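As a rough sketch of the difference (the container ID, VMID and PCI address are just examples, and the exact lines depend on your Proxmox/kernel version):

    # LXC: bind the host's /dev/dri into the container, the host keeps access;
    # add to /etc/pve/lxc/100.conf (226 is the major number of the DRI devices)
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

    # VM: PCI passthrough of the whole GPU, host and all other guests lose it
    qm set 101 -hostpci0 0000:00:02.0

The same two lxc lines can go into several container configs at once, which is exactly what hostpci passthrough can't give you.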
Another thing where LXC containers shine are internal services (not exposed to the Internet) which are more or less self-contained (no need for bind mounts) and don't need Docker (e.g. Pi-hole), because they can then be quite lightweight. They are also nice for testing things and throwing the test container away afterwards, before setting up your production setup in a VM.
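A throwaway test container is really just a couple of commands (VMID and template file name are examples; run pveam available or pveam list local to see what you actually have):

    # create and start a small Debian test container
    pct create 123 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        -hostname testbox -memory 512 -net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 123

    # done testing? Gone in two commands.
    pct stop 123
    pct destroy 123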