@Alwin
in the Ceph usage stats on the GUI, if I assign 2 TB to a VM, it reports 2 TB in use, even though only 2 GB :) are actually used inside that VM. So this is a little confusing.
Hi, we still use the environment for some heavy testing;
however, I ran into a silly question:
using Ceph as the storage backend, the usage stats only show the assigned space, not the data actually in use.
So does it always reserve the complete assigned space?
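To show what I mean: `rbd du` reports both the provisioned and the actually used size of an image (the pool and image names below are just examples, not from my setup):

```shell
# Compare provisioned vs. actually used space for one RBD image.
# Pool and image names are placeholders; substitute your own.
rbd du ceph-pool/vm-100-disk-0
# The USED column counts only allocated objects,
# while PROVISIONED shows the full assigned size.
```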
Thanks
Hi, just asking a silly question: many people use Docker / Podman and so on. Will there be a future for pure LXC containers in Proxmox VE 7.0 and later? Are there any plans / a roadmap for this?
best and thanks
Ronny
Firefox 71.0 64bit, no addons, on Win10, 1909/18363.476,
pveversion: 6.1-3/37248ce6/5.3.10-1-pve
"Funny":
- on private session, no success
- cleaning up cache, no success
- cleaning up cache, waiting 5 min, success
A Firefox issue, I suppose. Maybe it is time to remove it from my desktop...
Hi, after upgrading everything from 6.0 -> 6.1, the Services tab of my Ceph dashboard is empty;
no mon/manager/MDS are shown any more, although they are running fine.
Any ideas?
Best
Ronny
I ran into the same situation: after deleting a Ceph monitor, it is gone from the Ceph config, but it still shows up in the list of monitors as "unknown", and it also still appears in the list of monitors when creating RBD storage. I did not find any config file in /etc/pve, so I suppose another...
Ok, as written there, they suggest splitting into 2 pieces, and some other document says 4 :) But I think starting with 2 OSDs per NVMe, meaning 4 OSDs across the 2 NVMes on one node, should be ok. A replica setting of 3 is best, I suppose, and Ceph knows not to place all replicas on the OSDs of one...
@Alwin Is there any hint on how to split one NVMe into 2 or 4 OSDs? I could not find anything on the web that helped me much; as far as I understood, working with partitions on the NVMe is not a good idea?
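What I found so far: `ceph-volume lvm batch` has an `--osds-per-device` option that creates multiple LVM-backed OSDs on one device without manual partitioning (the device path below is only an example for my node):

```shell
# Create 2 OSDs on a single NVMe via LVM, no manual partitions.
# /dev/nvme0n1 is an example device path; adjust for your hardware.
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
```

...but I am not sure whether this is the recommended way on Proxmox, so any hint is welcome.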
@Alwin Thanks for this, I already read it as far as I could understand it.
I plan to use 3 nodes, each with 2 x 6.4 TB NVMe ;-). Should I split each NVMe into two OSDs to reach the best performance?