I recently upgraded my 5-node cluster to 5.3-9 and have lost the ability to start/stop/migrate VMs. Not sure if I should shut down the entire cluster and reboot all the nodes (all at once vs. rolling).
When I try to start a VM I get this:
==> kern.log <==
Feb 8 22:55:08 r610-gw1lwr1 pvedaemon[2604]: <root@pam> starting task UPID:r610-gw1lwr1:00002827:00026F69:5C5E5D2C:hastart:104:root@pam:
Feb 8 22:55:09 r610-gw1lwr1 pvedaemon[2604]: <root@pam> end task UPID:r610-gw1lwr1:00002827:00026F69:5C5E5D2C:hastart:104:root@pam: OK
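The task is logged as hastart and ends OK, but the VM never actually starts, so I'm guessing the HA stack is what's refusing to act on it. These are the standard PVE commands I plan to run next to gather more detail (nothing specific to my setup):

pveversion -v                            # confirm package versions on each node
pvecm status                             # check cluster membership and quorum
ha-manager status                        # see what state the HA manager thinks VM 104 is in
systemctl status pve-ha-crm pve-ha-lrm   # check the HA cluster/local resource managers
journalctl -u pve-ha-lrm -n 100          # recent local resource manager log entries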