Recent content by hybrid512

  1. Ceph 17.2 Quincy Available as Stable Release

    @aaron Thanks for your kind help, it works perfectly.
  2. Ceph 17.2 Quincy Available as Stable Release

    Hi @aaron, worked like a charm, thank you so much! One last question: in the Proxmox UI I have a warning about crashed mgr modules. It is related to the error I had, but everything is fine now; how can I clear this log? Best regards
  3. Ceph 17.2 Quincy Available as Stable Release

    Hi, I upgraded to Quincy and everything went fine, except that I have this issue with the devicehealth module: 2022-08-14T14:06:33.206+0200 7f16e2549700 0 [devicehealth INFO root] creating main.db for devicehealth 2022-08-14T14:06:33.206+0200 7f16e2549700 -1 log_channel(cluster) log [ERR] ...
  4. [SOLVED] Can't install ProxMox 6.x inside Virtualbox

    Wow, 512MB just for EFI? Isn't that a bit big? But OK, good to know. Thanks for the maxvz option! Maybe something more explicit would be welcome, such as a simple checkbox ("Create default LVM storage for VMs/CTs" or similar) to enable/disable this feature; that would be a nice addition. Thanks...
  5. [SOLVED] Can't install ProxMox 6.x inside Virtualbox

    That's a good point. This is a question I wanted to ask too ... Is it possible to disable that LVM group for VMs completely at install time? It is useless to me since I'm using Ceph and other shared storage solutions, so I'd like to remove it completely and reclaim that space for some other...
  6. [SOLVED] Can't install ProxMox 6.x inside Virtualbox

    Damn ... you're right! I was using an 8GB drive (this was the default value from VirtualBox and I thought it was enough for a base install). Using a 12GB hard drive fixed the issue. Thanks for your help.
  7. [SOLVED] Can't install ProxMox 6.x inside Virtualbox

    Hi, I just wanted to quickly set up a small lab to test some Proxmox setups within VirtualBox, but the installation keeps failing whatever settings I try. I don't want to do anything serious with this setup; it is just for test and demo purposes. Here is my setup: * Linux Ubuntu 19.10...
  8. Can't unmap images of cloned running VMs on KRBD

    Nope, but this bug happens either after a live clone of a VM with more than 100GB of data to clone (for some of them), or after a failed OSD (not always, though).
  9. Can't unmap images of cloned running VMs on KRBD

    I don't have logs; the clone itself works without any issue, the VM is cloned properly and you can start it, but if you want to restart it, you can't, and you get a sysfs write error. I found many reports on the Ceph mailing list when searching for these errors on Google ... it seems not so uncommon...
  10. Can't unmap images of cloned running VMs on KRBD

    Then maybe this is a bug in the QEMU live mirror with the KRBD backend (it works well with librados).
  11. Can't unmap images of cloned running VMs on KRBD

    The clone. I think the problem mainly occurs because we clone a "living" image with moving blocks, not a snapshot or a "cold" image. One solution would be to create an automatic temporary snapshot before the clone, do the clone based on that snapshot, then remove the temporary snapshot right...
  12. Can't unmap images of cloned running VMs on KRBD

    Hi, I think I found a corner case. We have VMs running on Ceph in KRBD mode. I have a user who cloned such a VM in a live state, neither from a stopped VM nor from a snapshot. I don't know why, but it works, except that when you want to remove the VM, when trying to remove the image, I get a sysfs...
  13. RBD device stucks in some circumstances

    +1 ... any idea how to fix or mitigate this issue? It also happens in case of OSD loss, even though the I/O pressure is much less intense with Luminous.
  14. ProxMox 4 + Ceph Hammer : LXC on Ceph leads to bad i/o performances

    Hi, I had many KVM guests running on Ceph with very good performance, but I decided to give LXC on Ceph a try in order to benefit from near-bare-metal performance and a lighter overhead on resources; I was soon disappointed on the disk I/O side, though. While in KVM (virtio-scsi + writeback cache mode)...
  15. Multi-datacenter ?

    Thanks, but I know this roadmap, and it is more about the upcoming release than a real roadmap of longer-term objectives ... and for now, there is nothing about multi-datacenter management in this document.
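
The snapshot-before-clone workaround sketched in posts 11 and 12 above can be expressed with the standard rbd CLI. This is a rough sketch, assuming a reachable Ceph cluster with an admin keyring; the pool and image names (rbd, vm-100-disk-0, vm-101-disk-0) are made up for illustration:

```shell
# Hypothetical names; adjust pool/image to your setup.
# 1. Take a point-in-time snapshot of the running VM's image,
#    so the clone source is frozen instead of a "living" image.
rbd snap create rbd/vm-100-disk-0@clone-tmp
# 2. Protect the snapshot so it can be cloned.
rbd snap protect rbd/vm-100-disk-0@clone-tmp
# 3. Clone from the snapshot (a copy-on-write child).
rbd clone rbd/vm-100-disk-0@clone-tmp rbd/vm-101-disk-0
# 4. Flatten the clone so it no longer depends on the parent snapshot.
rbd flatten rbd/vm-101-disk-0
# 5. Drop the temporary snapshot.
rbd snap unprotect rbd/vm-100-disk-0@clone-tmp
rbd snap rm rbd/vm-100-disk-0@clone-tmp
```

Without the flatten step, the clone remains a child of the snapshot and the snapshot cannot be removed; flattening copies the data so the temporary snapshot can be cleaned up right after the clone.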
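
As for the "crashed mgr modules" warning mentioned in posts 1-3: Ceph keeps recent crash reports in its crash module, and if the remaining health warning comes from that list (e.g. RECENT_CRASH), archiving the reports clears it once the underlying problem is fixed. A minimal sketch, assuming admin access to the cluster:

```shell
# List recent crash reports (requires the ceph admin keyring).
ceph crash ls
# Optionally inspect one report; <crash-id> is a placeholder
# taken from the listing above.
# ceph crash info <crash-id>
# Acknowledge all reports so the health warning goes away.
ceph crash archive-all
```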
