I hope there will be some improvements here, as it seems to be the same issue with the AMD N40L/N54L:
AMD Turion(tm) II Neo N40L Dual-Core Processor and AMD Turion(tm) II Neo N54L Dual-Core Processor
No new firmware so far.
Hi
I've recently upgraded from 7.4 to 8.0.
I have a cluster of 5 nodes, and 2 of them show the failure below. Note that these two are HP N54L.
All Ceph-related processes are failing like this:
0> 2023-06-28T12:19:08.096+0200 7f5efbaf3a00 -1 *** Caught signal (Illegal instruction) **...
Another option would be to delete them all and lose the data.
What would happen to an RBD image if some PGs are removed? Does it crash, or something else?
Thanks for sharing any idea / experience.
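For anyone hitting the same SIGILL on an N40L/N54L: one plausible cause (an assumption on my part, not confirmed in this thread) is that the Ceph binaries shipped with Proxmox 8 target a newer x86-64 baseline, while the Turion II Neo (K10) in those boxes lacks SSSE3 and SSE4.1/4.2, so the process dies on the first such instruction. A quick check of the running CPU against those flags:

```shell
# Flags the K10 is suspected to be missing (assumption, see note above):
required="ssse3 sse4_1 sse4_2"
# Flag list of the first CPU, as reported by the kernel:
flags=$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2-)
for f in $required; do
    case " $flags " in
        *" $f "*) echo "$f present" ;;
        *)        echo "$f MISSING" ;;
    esac
done
```

If any flag prints MISSING, binaries built to assume it will crash exactly like the log above, and no firmware update can add the instruction.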
Hi Everyone,
Thank you for taking the time to read this post.
I'm seeking help, as I have not been able to get around this situation.
It prevents the two main pools of my cluster from becoming active :(
I've seen this post https://forum.proxmox.com/threads/ceph-osd-crashed.114137/#post-493297, and...
Hi, I currently have the following problem.
I've experienced a disk failure in my ceph cluster with proxmox.
I've replaced the disk, but now with the rebalancing / backfilling, one OSD crashes (osd.1).
When I set the 'nobackfill' flag, the OSD does not crash, but it crashes again right after the flag is...
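For reference, the flag mentioned above is toggled cluster-wide like this (a sketch against a live cluster; nothing here is from the original post except the flag name):

```shell
# Pause backfill cluster-wide so the crashing OSD can stay up:
ceph osd set nobackfill
# While backfill is paused, inspect the PGs involved, e.g.:
ceph pg dump_stuck unclean
# Re-enable backfill when done (per the post, the OSD crashes again here):
ceph osd unset nobackfill
```

Because the flag is cluster-wide, it stops all backfill, not just the traffic hitting osd.1, so it is only a diagnostic pause, not a fix.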
On Proxmox 7.0-11, I've encountered this situation:
I renamed the disk of a vm : "rbd -p ssdpool mv vm-120-disk-1 vm-120-disk-0"
The change is not reflected in the GUI (understandably, since it does not know about the rename), and I cannot change it via the GUI:
Is there a workaround ?
Why I'm not able to select...
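A workaround that should apply here: the GUI only shows what is in /etc/pve/qemu-server/&lt;vmid&gt;.conf, so after renaming the image with rbd you have to point the VM config at the new name yourself (a sketch; the scsi0 slot is an assumption, check `qm config 120` for the real one):

```shell
# Rename the image (as in the post):
rbd -p ssdpool mv vm-120-disk-1 vm-120-disk-0
# Update the VM definition to match; 'scsi0' is an assumed slot:
qm set 120 --scsi0 ssdpool:vm-120-disk-0
# Alternatively, re-scan storages so the renamed volume shows up as an
# unused disk that can be re-attached from the GUI:
qm rescan --vmid 120
```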
there are some
rbd_data.5fb98cc6045b0e.000000000001d5c0
rbd_data.5fb98cc6045b0e.000000000000b509
rbd_data.5fb98cc6045b0e.0000000000004b7b
rbd_data.5fb98cc6045b0e.000000000001b1fd
rbd_data.5fb98cc6045b0e.0000000000015767
rbd_data.5fb98cc6045b0e.000000000000ff36...
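If it helps anyone reading: the middle part of an `rbd_data.&lt;id&gt;.&lt;offset&gt;` object name is the image's `block_name_prefix` id, so objects like the ones listed above can be mapped back to an image (a sketch; 'ssdpool' is carried over from the earlier post, substitute your pool):

```shell
# Find which image owns objects named rbd_data.5fb98cc6045b0e.*:
for img in $(rbd ls -p ssdpool); do
    prefix=$(rbd info "ssdpool/$img" | awk '/block_name_prefix/ {print $2}')
    [ "$prefix" = "rbd_data.5fb98cc6045b0e" ] && echo "match: $img"
done
```

If no image matches, the objects are likely orphans left over from a deleted or partially removed image.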