[SOLVED] Some VMs fail to start after upgrading Ceph

shenhuxi1221

New Member
Oct 30, 2022
After upgrading the system with apt dist-upgrade and restarting the server, some VMs cannot be started.

Running tail -f /var/log/syslog shows errors.

The console of the VM that cannot be started also displays an error.

See the attached screenshots below.

I don't know what the problem is. The Ceph status is normal.

The virtual machines' disks are stored on Ceph RBD.
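
For reference, these are roughly the commands used to capture the errors (a sketch; 146 is the ID of one affected VM, matching the disk named in the fix further down):

Code:
# Follow the syslog while trying to start the affected VM
tail -f /var/log/syslog

# Start the VM from the CLI to get the full error message
qm start 146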
 

Attachments

  • iShot_2022-10-30_15.43.37.png (195.9 KB)
  • iShot_2022-10-30_15.44.49.png (794.3 KB)
  • iShot_2022-10-30_16.00.13.png (186.2 KB)
  • iShot_2022-10-30_16.29.23.png (136.7 KB)
Hello, please switch your GUI to English before taking screenshots; it is hard to translate the Chinese ones without OCR.
 
Hm... we need more information:
What kind of storage is hdd_pool? Your log shows a missing RBD image (which would point to a Ceph cluster). The VM boot log says it has no boot device in BIOS mode. Did you switch your VM from UEFI to BIOS? If yes, set it back to OVMF (UEFI). Otherwise, we need to see your storage configuration on the host side.
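
Something along these lines would show the relevant host-side configuration (a rough sketch; VM ID 146 is assumed from the disk name mentioned later in the thread):

Code:
# Storage definitions on the Proxmox host (should contain the rbd entry for hdd_pool)
cat /etc/pve/storage.cfg

# Status of all configured storages as Proxmox sees them
pvesm status

# VM configuration: a "bios: ovmf" line means UEFI; without it, the SeaBIOS default applies
qm config 146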
 
The VM has always used the BIOS configuration, and that has not changed.

The storage is a Ceph cluster, with SSDs used for the DB and WAL devices.
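
The cluster state and the DB/WAL layout can be double-checked with standard Ceph commands, roughly like this (OSD 0 is just an example ID):

Code:
# Overall cluster health, monitor/OSD status and PG state
ceph -s

# Per-OSD usage and position in the CRUSH tree
ceph osd df tree

# Whether an OSD has a dedicated DB/WAL device (example: OSD 0)
ceph osd metadata 0 | grep bluefs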
 

Attachments

  • iShot_2022-10-30_17.12.33.png (163.2 KB)
The problem has been solved. Executing the following command fixed it:

rbd persistent-cache invalidate hdd_pool/vm-146-disk-0
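
For anyone else hitting this after the upgrade: the error apparently comes from the RBD persistent write-back cache, and the command has to be run once per affected image. A minimal sketch of the sequence, assuming the same pool and image name as above (rbd persistent-cache flush is the gentler alternative if the cached data should be written back instead of discarded):

Code:
# Check that no client still has the image open (lists watchers)
rbd status hdd_pool/vm-146-disk-0

# Discard the stale persistent write-back cache for the image
rbd persistent-cache invalidate hdd_pool/vm-146-disk-0

# Then start the VM again
qm start 146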
 