Problem with worker VM start initiated by Veeam on PVE

I really like Veeam B&R with ESXi hosts, but now that VMware is under Broadcom and the licensing politics have been ruined, we need to move on to PVE.

I have the same issue on one of three hosts, so I will try every solution, thanks.
 
I got info from Veeam support: we can actually leave the Veeam worker in an idle/powered-on state.

C:\Program Files\Veeam\Plugins\PVE\Service\
Edit appsettings.json.
Under "Workers", change KeepTurnedOn from false to true:
"KeepTurnedOn": true

Save the file, then reboot the server or restart the Veeam PVE Service.
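
For reference, a rough sketch of what the relevant part of appsettings.json looks like after the edit (I'm only showing the "Workers" section; the rest of the file and any other keys under "Workers" stay as they are and may differ by plugin version):

    "Workers": {
        "KeepTurnedOn": true
    }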

I am going to try this out for a while and see how it does. The worker barely uses any CPU while idle and does eat up some RAM, but that is less risky than having it not power up correctly and miss checkpoints on VMs.

I did test what happens if you reboot the Veeam server with the worker powered on: the worker stays powered on, and once a backup starts, Veeam resets the worker and you can see its uptime clock reset.

Seems good so far, will let it go for the rest of the week and see if it's more stable.


This config works for me, 45 days without issues.
 
In my situation the problem was a bad qcow2 file (btrfs filesystem on the host storage).

I was unable to migrate it, convert it, etc., and checking the image didn't make a difference.
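
For anyone who wants to try repairing the image first, this is the kind of thing I mean (run on the PVE host with the VM stopped; the path is just an example):

    # read-only consistency check of the qcow2 image
    qemu-img check /path/to/vm-100-disk-0.qcow2

    # attempt an in-place repair (take a copy of the file first)
    qemu-img check -r all /path/to/vm-100-disk-0.qcow2

    # rewrite the image into a fresh qcow2 file
    qemu-img convert -p -O qcow2 /path/to/vm-100-disk-0.qcow2 /path/to/vm-100-disk-0-new.qcow2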

I made another disk, cloned the old one to it with Clonezilla, and the problem was solved.
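
In case it helps anyone, the empty replacement disk can be created from the PVE command line; something like this (the VM ID, storage name and size are just examples), then boot the VM into Clonezilla and do a device-to-device clone from the old disk to the new one:

    # allocate a fresh 32 GB disk for VM 100 on another storage
    qm set 100 --scsi1 local-lvm:32

Afterwards detach the old disk and make sure the new one is first in the VM's boot order.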