Awesome, that fixed the GUI itself. I wonder if it's going to fix the spam I'm getting in journalctl and on the console about trying to mount the /mainceph CephFS mount?
I searched a lot for how to fix this.
I had Ceph set up and working, with two storages: mainceph (a CephFS) and RDBBlock (an RBD pool).
Ceph died in a fire and I had to rebuild everything.
But they are still hanging around:
https://ss.ecansol.com/uploads/2024/04/26/chrome_2024-04-26_21-02-22.png...
The first command produces a wall of text that continues with more of:
root@pveclua:~# rados ls -p RDBRedMail|more
rbd_data.b904aebd3b3049.000000000003ae00
rbd_data.b904aebd3b3049.0000000000000123
rbd_data.b904aebd3b3049.0000000000135e00
rbd_data.b904aebd3b3049.00000000000e5c00...
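For anyone else who lands here: the stale mount spam appears to come from the old storage definitions still living in /etc/pve/storage.cfg. A hedged sketch of cleaning them up with pvesm (storage names taken from my setup above; make sure nothing still references them first):

# list configured storages and their status
pvesm status
# remove the stale definitions left over from the old cluster
pvesm remove mainceph
pvesm remove RDBBlock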
Ohhh, I think this might be a generic Linux issue. For root file systems over a certain size, you have to use LVM instead of standard partitioning, yeah?
https://ss.ecansol.com/uploads/2023/11/09/chrome_2023-11-09_10-11-31.png
https://ss.ecansol.com/uploads/2023/11/09/chrome_2023-11-09_10-12-27.png
root@pveclua:~# pveversion
pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-15-pve)
Any ideas how to fix this?
Thanks,
Matt
You should:
1: shut down the VM gracefully
2: detach the storage volume
3: boot the VM and make sure everything works and no data is missing
4: then destroy the volume (see the CLI sketch below)
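For example, a hedged sketch of those steps with the qm CLI (VM ID 100 and disk slot scsi1 are placeholders; double-check the exact subcommands against your PVE version with qm help):

# 1) graceful shutdown
qm shutdown 100
# 2) detach the disk; the volume becomes an "unusedN" entry in the VM config
qm set 100 --delete scsi1
# 3) boot and verify everything works and no data is missing
qm start 100
# 4) once verified, destroy the now-unused volume (unused0 assumed)
qm disk unlink 100 --idlist unused0 --force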
Getting this error: https://ss.ecansol.com/uploads/2023/08/07/chrome_1691423600.png
when I try to remove this unused disk: https://ss.ecansol.com/uploads/2023/08/07/chrome_1691423614.png
I initiated a disk removal operation, which churned away for days, and it nerfed the related RBD pool in...
I'm thinking the fact that I have the rbd application enabled on some of these pools may also be part of the problem, but I don't know how to remove it:
pool 9 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on...
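For what it's worth, a hedged sketch of inspecting and clearing an application tag with the ceph CLI (pool name taken from the dump above; confirm nothing actually stores RBD images in the pool before disabling the tag):

# show which application tags are set on the pool
ceph osd pool application get default.rgw.log
# remove a mistakenly-set rbd tag (requires the confirmation flag)
ceph osd pool application disable default.rgw.log rbd --yes-i-really-mean-it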
I followed the guide here with appropriate changes for my environment:
https://base64.co.za/enable-amazon-s3-interface-for-ceph-inside-proxmox/
For example, I don't have node1, node2, node3; I have 4 nodes: pveclua, pveclub, pvecluc, and pveclud.
So I replaced all instances of node# with the matching pveclu* name.
However, when I went...
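For context, a rough sketch of what one per-node substitution might look like in the ceph.conf rgw section; the section name, keyring path, and port here are my assumptions about a typical manual RGW setup, not lines copied from that guide:

[client.rgw.pveclua]
        host = pveclua
        keyring = /etc/pve/priv/ceph.client.rgw.pveclua.keyring
        rgw_frontends = beast port=7480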
I opted in to 6.1 a few months ago. The forum posts and such say 'updates will be received', but it's not doing that.
https://ss.ecansol.com/uploads/2023/06/26/SecureCRT_1687816654.png
https://ss.ecansol.com/uploads/2023/06/26/SecureCRT_1687816666.png...
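In case it matters, assuming "6.1" refers to the opt-in 6.1 kernel series for PVE 7.x, my understanding is that newer builds only arrive if the opt-in meta-package itself is installed. A quick sanity-check sketch (package name per the opt-in kernel announcement):

# confirm the running kernel and whether the meta-package is present
uname -r
dpkg -l | grep pve-kernel-6.1
# the meta-package is what pulls in newer 6.1.x builds
apt update
apt install pve-kernel-6.1
apt full-upgrade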
That makes sense, Aaron, thank you very much. However, that introduces a new (and scary) problem: migrating the existing 3 nodes to that connection instead of the fabric mesh >.<
I'm not -super- familiar with bonding in Linux, but I'd most likely be using MikroTik switches, so I'll poke...
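For my own notes, a minimal sketch of an LACP bond in /etc/network/interfaces, Proxmox-style (NIC names and addresses are placeholders, and the MikroTik side would need a matching LACP bonding config):

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0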
I currently have 3 nodes: A, B, C.
There are dedicated links for A -> B -> C -> back to A.
Fabric Mesh is configured and working.
Can I add a 4th node: A -> B -> C -> D -> back to A?
Or an nth node?
I'm assuming so, because otherwise you'd need an absurd number of interfaces to go above 3 nodes, but...
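Assuming the mesh is the FRR/OpenFabric routed setup from the Proxmox full-mesh wiki, my understanding is that extending the ring just means the new interfaces join the same openfabric instance. A hedged sketch of a hypothetical 4th node's frr.conf (interface names, NET, and loopback address are placeholders; each node needs a unique NET):

interface lo
 ip address 10.15.15.4/32
 ip router openfabric 1
 openfabric passive
!
interface ens19
 ip router openfabric 1
!
interface ens20
 ip router openfabric 1
!
router openfabric 1
 net 49.0001.1111.1111.4444.00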
I set up a cluster using a 10 Gbps LAN network to a switch.
I then set up the fabric mesh to facilitate dedicated host-to-host traffic, as outlined in the Ceph documentation.
I'd like to add the fabric mesh network interfaces as a secondary link for the cluster, but I don't see any GUI options to...
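From what I can tell there's no GUI option for this after cluster creation, so I'm guessing it means editing /etc/pve/corosync.conf by hand. A hedged sketch of what I think the change looks like (addresses assumed; per the PVE docs, edit a copy, bump config_version, then move it into place):

nodelist {
  node {
    name: pveclua
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1     # existing 10G LAN (assumed)
    ring1_addr: 10.15.15.1   # fabric mesh IP (assumed)
  }
  # ...add the same ring1_addr line, with its own mesh IP, to every other node...
}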
Hmmm, well, I've been tied up for a few days, and yes, the summary page says 9.61 TB, which roughly lines up with the current usage.
I'll let the data build up for another week; there's a subset I can safely delete to see if the usage goes down proportionately.
Thanks,
Matt