Search results

  1. role / permission to prevent deletion of backup files

    Hi, we use Proxmox VE with Proxmox BS. On the Proxmox BS we have to set up some users, and I want those users to be able to perform backups but not to delete anything. Why? I do not want to rely on the Proxmox VE authentication database and settings; I want to do the...
  2. Proxmox 8.3: VM not starting anymore because of KVM wrong syntax

    @Falk R Great hint indeed. If I do an upgrade and everything worked before, I also assume that it should still work after the upgrade. So this is definitely a config change that was not taken into account for a migration.
  3. Proxmox 8.3: VM not starting anymore because of KVM wrong syntax

    Workaround: change smbios1: to smbios1: on. This lets the VM start again. Bug?
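A minimal sketch of that workaround, demonstrated on a copy of a VM config (on a real node the file would be /etc/pve/qemu-server/<vmid>.conf; the path and config contents here are placeholders):

```shell
# Work on a throwaway copy of the config (placeholder contents).
conf=$(mktemp)
printf 'name: testvm\nsmbios1:\nmemory: 2048\n' > "$conf"

# Give the bare "smbios1:" line an explicit boolean value so QEMU
# no longer sees a deprecated short-form option.
sed -i 's/^smbios1:[[:space:]]*$/smbios1: on/' "$conf"

grep '^smbios1:' "$conf"   # -> smbios1: on
```

On the actual host you would run the same sed against the real VM config and then start the VM again with `qm start <vmid>`.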
  4. Proxmox 8.3: VM not starting anymore because of KVM wrong syntax

    Upgrading to 8.3 brings fun:
    kvm: -smbios type=1, : warning: short-form boolean option ' ' deprecated Please use =on instead
    kvm: -smbios type=1, : Invalid parameter ' '
    TASK ERROR: start failed: QEMU exited with code 1
  5. RAID-Z1: increase amount of disks

    Well ;-), we do not use simple drives, only enterprise-class ones. That means a 7.68 TB NVMe across 3 or 5 systems matters: a bunch of 3 is then around ~3000 Euro. When scaling and deploying a new product, it is not always easy to calculate storage needs and upcoming requirements on a regulated...
  6. RAID-Z1: increase amount of disks

    Hi, for a bigger project (SIEM), we want to start with 3 NVMe drives using RAID-Z1. Later, I want to have the option to grow the pool from 3 to 4 disks of the same model/manufacturer. Is it possible to simply grow the pool without having to re-set up anything? If I understood it...
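For reference, growing an existing RAID-Z1 vdev by one disk only became possible with the RAIDZ expansion feature in OpenZFS 2.3; older releases can only add whole new vdevs. A hedged sketch, with the pool name, vdev name, and device path as placeholders:

```shell
# Requires OpenZFS >= 2.3 (raidz_expansion feature).
# "tank", "raidz1-0" and the device path are placeholders.
zpool attach tank raidz1-0 /dev/nvme3n1

# The expansion runs in the background; progress shows up here.
zpool status tank
```

Note that existing data keeps its old parity ratio until rewritten, so the reported free space after expansion can be smaller than expected.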
  7. CEPH: How to find the best start being prepared to select Hardware

    @Alwin F***, I forgot to enable discard *beng* and to enable trim. Sorry for bothering you :( Stay healthy everyone!
  8. CEPH: How to find the best start being prepared to select Hardware

    @Alwin In the CEPH usage stats on the GUI, it reports 2 TB full if, for example, I assign 2 TB to a VM, even though only 2 GB :) are actually in use on that VM. So this is a little confusing.
  9. CEPH: How to find the best start being prepared to select Hardware

    Hi, we still use the env for some hard tests; however, I ran into a silly question. Using my CEPH as a storage backend, the usage stats show the assigned space, not the data actually in use, so does it always reserve the complete assigned space? Thanks
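RBD images are thin-provisioned, so the GUI numbers are typically the provisioned sizes rather than reservations; actual consumption can be checked per image. A sketch, assuming the pool is named `ceph-vm` and the disk image `vm-100-disk-0` (both placeholders):

```shell
# Provisioned size vs. space actually used by one RBD image
# (pool and image names are placeholders).
rbd du ceph-vm/vm-100-disk-0

# Cluster-wide and per-pool view of raw vs. stored data.
ceph df
```

If the guest never issues discards (see the note about discard/trim elsewhere in this thread), deleted data inside the VM is not returned to the pool, which also inflates the reported usage.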
  10. LXC container support from proxmox VE 7.0 - future

    Hi, just asking a silly question. Many people use Docker / Podman and so on. Will there be a future for pure LXC containers in Proxmox VE 7.0 and later? Are there any plans/roadmap on this? Best and thanks, Ronny
  11. [SOLVED] Bug? Proxmox VE 6.0 -> 6.1 update: Ceph dashboard missing information

    Firefox 71.0 64bit, no addons, on Win10 1909/18363.476, pveversion: 6.1-3/37248ce6/5.3.10-1-pve. "Funny": in a private session, no success; cleaning up the cache, no success; cleaning up the cache and waiting 5 min, success. A Firefox issue, I suppose. Maybe it is time to remove it from my desktop...
  12. [SOLVED] Bug? Proxmox VE 6.0 -> 6.1 update: Ceph dashboard missing information

    You're right, Tom. This works in Chrome; in Firefox it does not. I am not with Firefox ... I already tried a private session, with no success either :(
  13. [SOLVED] Bug? Proxmox VE 6.0 -> 6.1 update: Ceph dashboard missing information

    Hi, after upgrading everything from 6.0 -> 6.1, my CEPH dashboard is empty in the Services tab; no mon/manager/mds are shown any more, although they are running fine. Any ideas? Best, Ronny
  14. remove dead CEPH monitor after removing cluster node?

    I could fix it by removing all data in /var/lib/ceph/mon/mon-node~
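A sketch of the usual cleanup for a dead monitor, with `oldnode` as a placeholder for the failed node's name; on Proxmox the `pveceph mon destroy` command wraps similar steps when the node is still part of the cluster:

```shell
# Drop the dead monitor from the monmap
# ("oldnode" is a placeholder for the removed node's name).
ceph mon remove oldnode

# Then clear out its leftover data directory, as described above.
rm -rf /var/lib/ceph/mon/ceph-oldnode
```

If the old monitor still shows up when creating RBD storage, it may also be listed in a `monhost` entry in /etc/pve/storage.cfg, which is worth checking separately.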
  15. remove dead CEPH monitor after removing cluster node?

    I ran into the same situation: when deleting a Ceph monitor, it is gone from the CEPH config, but it still shows up in the list of monitors as "unknown" and also still appears in the list of monitors when creating RBD storage. I did not find any config file in /etc/pve, so I suppose another...
  16. CEPH: How to find the best start being prepared to select Hardware

    I will try both one and two OSDs on one NVMe, and then I will do some benchmarks and publish them here for everyone.
  17. CEPH: How to find the best start being prepared to select Hardware

    Ok, as written there, they suggest splitting into 2 pieces, and in some other document they said 4 :) But I think starting with 2 OSDs on 1 NVMe, meaning 4 OSDs on 2 NVMe per node, should be ok. Setting the replica count to 3 is best, I suppose, and CEPH knows not to use all the OSDs on one...
  18. CEPH: How to find the best start being prepared to select Hardware

    @Alwin Is there any hint on how to split one NVMe into 2 or 4 OSDs? I could not find anything on the web that helped me much; as far as I understood, working with partitions on the NVMe is not a good idea?
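One way to do this without manual partitioning is ceph-volume's batch mode, which carves the device into LVM logical volumes itself; a sketch, with the device path as a placeholder:

```shell
# Create two OSDs on a single NVMe via LVM; no manual partitions needed
# (/dev/nvme0n1 is a placeholder for the actual device).
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
```

Adding `--report` before the device prints what would be created without touching the disk, which is useful for a dry run.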
  19. CEPH: How to find the best start being prepared to select Hardware

    @Alwin Reading the CEPH docs on their site, they suggest splitting ONE NVMe into 4 OSDs. So I will try it out.