Hi, I'm currently facing the following problem.
I've experienced a disk failure in my Ceph cluster with Proxmox.
I've replaced the disk, but now, during the rebalancing / backfilling, one OSD crashes (osd.1).
When I set the 'nobackfill' flag, the OSD does not crash, but it crashes again right after the flag is...
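For reference, a minimal sketch of the flag handling and crash inspection with the stock Ceph CLI (nothing here is specific to my setup):
ceph osd set nobackfill      # pause backfilling cluster-wide
ceph osd unset nobackfill    # resume backfilling; the crash reappears shortly after this
ceph crash ls                # list the crash reports recorded by recent Ceph releases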
On Proxmox 7.0-11, I've encountered this situation:
I renamed the disk of a VM: "rbd -p ssdpool mv vm-120-disk-1 vm-120-disk-0"
This is not reflected in the GUI (obviously, since it does not know about the rename), and I cannot change it via the GUI:
Is there a workaround?
Why am I not able to select...
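One possible workaround (a sketch; the scsi0 slot and the lack of extra disk options are assumptions, so check the actual VM config first):
qm set 120 --scsi0 ssdpool:vm-120-disk-0    # point the config entry at the renamed image
qm rescan --vmid 120                        # re-sync sizes and pick up unreferenced images
Alternatively, the disk line can be edited directly in /etc/pve/qemu-server/120.conf.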
There are some:
rbd_data.5fb98cc6045b0e.000000000001d5c0
rbd_data.5fb98cc6045b0e.000000000000b509
rbd_data.5fb98cc6045b0e.0000000000004b7b
rbd_data.5fb98cc6045b0e.000000000001b1fd
rbd_data.5fb98cc6045b0e.0000000000015767
rbd_data.5fb98cc6045b0e.000000000000ff36...
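To check whether that prefix still belongs to an existing image, it can be matched against each image's block_name_prefix (a quick sketch; the pool name hddpool is an assumption):
for img in $(rbd ls hddpool); do echo -n "$img: "; rbd info hddpool/$img | grep block_name_prefix; done
A prefix with no matching image would point to leftover objects from a deleted or moved image.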
Hi @aaron, sorry for the late reply; the cluster was down due to electrical instability.
The remaining objects are things like:
rbd_directory
benchmark_data_proxmox1_3076_object142
benchmark_data_proxmox1_3076_object229
benchmark_data_proxmox1_1354523_object126...
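The benchmark_data_* objects are leftovers from rados bench runs and can be removed with its cleanup sub-command (a sketch, assuming the pool in question is hddpool):
rados -p hddpool cleanup --prefix benchmark_data
rbd_directory itself is normal RBD bookkeeping and should not account for any noticeable usage.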
Hi!
I moved all RBD images from an initial pool to new pools, but the initial pool (hddpool) still reports used space and objects.
Could someone tell me more?
root@proxmox5:~# ceph df
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
hddpool 3 101 277 GiB...
OK, this went fine by itself after some time...
Jun 24 11:37:13 proxmox5 ceph-osd[27428]: 2021-06-24T11:37:13.273+0200 7f6dac5cb700 -1 osd.1 0 failed to load OSD map for epoch 904831, got 0 bytes
Jun 24 11:58:24 proxmox5 ceph-osd[27428]: 2021-06-24T11:58:24.741+0200 7f6db2214700 -1 osd.1...
I found previous posts about it:
https://forum.proxmox.com/threads/replication-with-different-target-storage-name.35458/#post-384533
https://forum.proxmox.com/threads/zfs-volume-naming-for-replication.90263/
I added a ZFS storage on another node (pool3), and I had a previous ZFS pool on the first node (pool2).
I then wanted to configure replication of a CT disk, but I got this error:
2021-06-24 12:02:03 104-0: (remote_prepare_local_job) storage 'pool2' is not available on node 'proxmox3'
2021-06-24...
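From the linked threads, the requirement seems to be that the storage exists under the same ID, backed by a zpool of the same name, on the target node. A rough sketch of that (the device list and the source node name are placeholders):
zpool create pool2 <devices>                     # on proxmox3: a zpool named like the one on the source node
pvesm set pool2 --nodes proxmox3,<source-node>   # make the existing storage entry available on both nodes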
I had latency issues with a 4TB disk, which I replaced with a 2TB disk.
I used @alexskysilk's procedure: https://forum.proxmox.com/threads/ceph-osd-disk-replacement.54591/
However, the new OSD does not start: "osd.1 0 failed to load OSD map for epoch 904831, got 0 bytes"
I'm on Ceph 15.2.13 and...
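One thing worth trying in that situation (a sketch, not confirmed as the fix in this thread; /dev/sdX is a placeholder, double-check the device before zapping):
pveceph osd destroy 1 --cleanup           # remove the broken OSD and clean up its partitions/LVs
ceph-volume lvm zap /dev/sdX --destroy    # wipe any leftover LVM / BlueStore metadata
pveceph osd create /dev/sdX               # recreate the OSD from a clean device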
Hello,
While checking for a SMART error, I noticed this bug.
When clicking on Disks / <the disk with the SMART error>, the popup appears as shown below and cannot be scrolled down to see the end of the output:
Thanks!
I see, thanks
Sure, for example for the monitors:
https://forum.proxmox.com/threads/web-ui-cannot-create-ceph-monitor-when-multiple-pulbic-nets-are-defined.59059/#post-385762
I also had to create OSDs, which have the same issue, which is how I found this cephadm stuff.
It also seems to be easier to...
Yes, in order to add the cephadm tools on top of PVE.
I've faced some limitations on Proxmox, and ended up on the website given above.
I saw this new tool and wanted to ask before testing it.
I wanted to switch to cephadm, as described here: https://docs.ceph.com/en/latest/cephadm/adoption/#cephadm-adoption
Are there any limitations?
Thanks!
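For reference, the adoption in that doc starts roughly like this on each host (a sketch copied from the linked page; hostnames are placeholders, and I have not verified how this coexists with pveceph-managed services):
cephadm ls                                          # list the legacy daemons cephadm detects
cephadm adopt --style legacy --name mon.<hostname>
cephadm adopt --style legacy --name mgr.<hostname>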
Hello,
I also wanted to create a monitor with the command 'pveceph mon create -mon-address 10.0.15.12'.
I still have the issue in 6.3-6.
Error: any valid prefix is expected rather than "192.168.1.10/24, 10.0.10.0/24".
command '/sbin/ip address show to '192.168.1.10/24, 10.0.10.0/24' up' failed...
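For context, the value that /sbin/ip chokes on comes straight from the public_network line in ceph.conf, which lists two subnets (a sketch of the relevant snippet, reconstructed from the error message):
[global]
     public_network = 192.168.1.10/24, 10.0.10.0/24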
Not at all.
My last clue was that my setup was undersized.
SSD I/O was too heavy, generating more iowait and causing a snowball effect...
The main difference was that the HDD had less I/O due to the nature of the storage (it was meant to store only less frequently accessed data).
I shut down my...