Search results

  1. RAID-Z1: increase amount of disks

    Well ;-), we do not use plain consumer drives, only enterprise-class ones. That means a 7.68 TB NVMe across 3 or 5 systems matters: a set of 3 runs around ~3000 Euro. When scaling and deploying a new product, it is not always easy to calculate storage needs and upcoming requirements on a regulated...
  2. RAID-Z1: increase amount of disks

    Hi, for a bigger project (SIEM), we want to start with 3 NVMe drives using RAID-Z1. Later, I want the option to grow the pool from 3 to 4 disks of the same model/manufacturer. Is it possible to simply expand the pool without having to re-set up anything? If I understood it...
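
    A minimal sketch of what such an expansion could look like, assuming OpenZFS 2.3 or later with the raidz expansion feature; the pool name "tank" and the device path are placeholders:

        # attach a fourth disk to the existing 3-disk raidz1 vdev
        zpool attach tank raidz1-0 /dev/disk/by-id/nvme-NEWDISK
        # the expansion runs in the background; check progress with
        zpool status tank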
  3. CEPH: How to find the best start being prepared to select Hardware

    @Alwin F***; I forgot to enable discard *beng* and to enable trim. Sorry for bothering you :( Stay healthy everyone!
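
    For reference, a minimal sketch of enabling discard on a Proxmox VE guest disk and trimming inside the guest; the VMID 100 and the storage/volume names are placeholders:

        # host: let the guest pass discard/TRIM through to the storage
        qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
        # guest: trim once by hand, or enable the periodic timer
        fstrim -av
        systemctl enable --now fstrim.timer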
  4. CEPH: How to find the best start being prepared to select Hardware

    @Alwin In the CEPH usage stats on the GUI, it reports 2 TB full, for example, if I assign 2 TB to a VM, but on this VM only 2 GB :) are in use. So this is a little confusing.
  5. CEPH: How to find the best start being prepared to select Hardware

    Hi, we still use the env for some hard tests; however, I ran into a silly question. Using my CEPH as a storage backend, the usage stats show the assigned space, not the data actually in use. So does it always reserve the complete assigned space? Thanks
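
    RBD images on Ceph are thin-provisioned, so provisioned size and actual usage differ; a minimal sketch of checking both, assuming the pool is named "vm-pool" (a placeholder):

        # list provisioned vs. actually used space per image in the pool
        rbd du -p vm-pool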
  6. LXC container support from proxmox VE 7.0 - future

    Hi, just asking a silly question. Many people use docker / podman and so on. Will there be a future for pure LXC containers in proxmox VE 7.0 and later? Are there any plans/roadmap for this? Best and thanks, Ronny
  7. [SOLVED] Bug? Proxmox VE 6.0 -> 6.1 update: Ceph dashboard missing information

    Firefox 71.0 64-bit, no add-ons, on Win10 1909/18363.476, pveversion: 6.1-3/37248ce6/5.3.10-1-pve. "Funny": - private session, no success - cleaning the cache, no success - cleaning the cache and waiting 5 min, success. A Firefox issue, I suppose. Maybe it is time to remove it from my desktop...
  8. [SOLVED] Bug? Proxmox VE 6.0 -> 6.1 update: Ceph dashboard missing information

    You're right, Tom. This works on Chrome; on Firefox it does not. So it is Firefox ... I had already tried a private session, with no success either :(
  9. [SOLVED] Bug? Proxmox VE 6.0 -> 6.1 update: Ceph dashboard missing information

    Hi, after upgrading everything from 6.0 -> 6.1, my CEPH dashboard is basically empty in the Services tab; no mon/manager/mds are shown any more, although they are running fine. Any ideas? Best Ronny
  10. remove dead CEPH monitor after removing cluster node?

    I could fix it by removing all data in /var/lib/ceph/mon/mon-node~
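
    A minimal sketch of the usual cleanup for a dead monitor, assuming the removed node was called "deadnode" (a placeholder); the pveceph wrapper also updates the Proxmox-side config:

        # drop the monitor from the monmap (run on a surviving node)
        ceph mon remove deadnode
        # or, on Proxmox VE:
        pveceph mon destroy deadnode
        # then wipe any leftover monitor data directory
        rm -rf /var/lib/ceph/mon/ceph-deadnode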
  11. remove dead CEPH monitor after removing cluster node?

    I ran into the same situation: after deleting a ceph monitor, it is gone from the CEPH config but still shows up in the list of monitors as "unknown", and it also still appears in the list of monitors when creating rbd storage. I did not find any config file in /etc/pve, so I suppose another...
  12. CEPH: How to find the best start being prepared to select Hardware

    I will try both, one and two OSDs on one NVMe, and then I will do some benchmarks and publish them here for everyone.
  13. CEPH: How to find the best start being prepared to select Hardware

    Ok, as written there, they suggest splitting into 2 pieces, and some other document says 4 :) But I think starting with 2 OSDs on 1 NVMe, meaning 4 OSDs on 2 NVMe per node, should be ok. A replica setting of 3 is best, I suppose, and CEPH knows not to use all the OSDs on one...
  14. CEPH: How to find the best start being prepared to select Hardware

    @Alwin Is there any hint on how to split one NVMe into 2 / 4 OSDs? I could not find anything on the WWW that helped me much; as far as I understood, working with partitions on the NVMe is not a good idea?
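
    One common way to do this without manual partitioning is ceph-volume's LVM batch mode; a minimal sketch, assuming /dev/nvme0n1 is the device to split (run per node, per device):

        # create two LVM-backed OSDs on a single NVMe drive
        ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1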
  15. CEPH: How to find the best start being prepared to select Hardware

    @Alwin Reading the docs on the CEPH site, they suggest splitting ONE NVMe into 4 OSDs. So I will try it out.
  16. CEPH: How to find the best start being prepared to select Hardware

    @Alwin Thanks for this, I have already read it, as far as I could understand it. I plan to use 3 nodes, each with 2 x 6.4 TB NVMe ;-). Should I split one NVMe into two OSDs to get the best performance?
  17. new three node PVE+Ceph cluster

    @sherminator When your CEPH is ready, some benchmarks would be nice, not just the pure single-drive ones ;-) However, now I understand the need for the controller, because you have a 24-port case ;-) lol :) great stuff :)
  18. new three node PVE+Ceph cluster

    @sherminator: 1x Broadcom HBA 9300-8i - for what :)? You only have 2 system disks, so a -4i would have been enough, or do you use JBOD or so for the other CEPH disks? Some benchmarks (4k IOPS) would be nice to see :) Thanks. Btw...
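
    A 4k IOPS test like the one asked for here is typically run with fio; a minimal sketch, assuming a scratch file on the Ceph-backed storage (the path is a placeholder, and the file will be written to):

        fio --name=4k-randwrite --filename=/mnt/cephtest/fio.bin --size=4G \
            --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
            --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting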
  19. CEPH: How to find the best start being prepared to select Hardware

    Hello everybody, we are currently faced with deciding what a possible new storage concept might look like. Unfortunately, we can only "rely" on what we have found on the internet in howtos and information about CEPH. What do we want to achieve: - I'd like to have a growing cluster for both...
