Search results

  1. Proxmox not update total storage size

    I had already seen this, but my problem seems to be different: I have 30TB available on Ceph for Proxmox, yet I don't see it inside Proxmox. If I decrease the size of the storage on Ceph, it updates in Proxmox, but if I increase it, it does not update. If I create a new pool...
  2. Proxmox not update total storage size

    Good morning, I'm using Proxmox version 6.3-2 in production, in a cluster with 16 hosts and 205 VMs. I use it with Ceph storage mounted via RBD in Proxmox without problems, but in the last few weeks I needed to increase the storage quota in Proxmox from 24TB to 26TB, and even after changing...
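    A pool quota like the one described above is usually adjusted on the Ceph side. As a rough ops sketch (the pool name vmstore is hypothetical, substitute your own), not the poster's actual commands:

    ```shell
    # Check the current quota on the RBD pool:
    ceph osd pool get-quota vmstore

    # Raise the quota to 26 TB, expressed in bytes:
    ceph osd pool set-quota vmstore max_bytes $((26 * 1024**4))
    ```

    Proxmox periodically polls the storage for usage figures, so a changed quota may take a short while to show up in the GUI.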
  3. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    We were able to find the performance problem: the SSDs had been reformatted so they could be put back into Ceph, and apparently the formatting was not done correctly. I did not find any documentation on how to format disks for use in Ceph; as they are WD disks, we use their...
  4. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    A problem I noticed across several tests is that when I disable KRBD in the storage settings in Proxmox, the disk behaves as expected, without sitting at 100% utilization, but it is much slower, of course. My question: what could stop KRBD from performing well, and how can I fix it?
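    The KRBD checkbox in the GUI maps to the krbd flag of the RBD entry in /etc/pve/storage.cfg. A sketch of such an entry, with hypothetical storage/pool names and monitor addresses (not the poster's configuration):

    ```
    rbd: vmstore
        content images
        krbd 1
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool vmstore
        username admin
    ```

    With krbd 1 the disk is mapped through the kernel RBD driver; with krbd 0 QEMU uses its built-in librbd client instead, which is what toggling the checkbox switches between.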
  5. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    It's the same :( . In my Ceph cluster I have many slow ops; on Windows VMs I see disk usage at 100%. When I use another storage, NFS for example, I have no problems. I repeated the test with KRBD on Proxmox 6.4.5 and got the same result; I can't see what could be causing this severe slowdown, but then...
  6. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    Thanks for the answer. Yes, we saw that unsafe is problematic, but I'm running tests with all of the cache modes; we always use writeback and discard, and unsafe was only for testing. My default configuration is this
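    The cache mode and discard setting mentioned above are per-disk options in the VM configuration. A hypothetical example line (VM ID, storage name, and disk size are made up, not taken from the thread):

    ```
    scsi0: vmstore:vm-101-disk-0,cache=writeback,discard=on,size=32G
    ```

    The same options can be set from the CLI with, e.g., qm set 101 --scsi0 vmstore:vm-101-disk-0,cache=writeback,discard=on.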
  7. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    In my tests I noticed something interesting: when I run a task on another VM, not the test VM, it influences the test VM on the same storage. Another thing we noticed in the tests: when we take new SSD disks out of the box, put them on only 3 hosts, and deploy Ceph on these disks...
  8. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    I did not make any updates, I just reinstalled Ceph (the whole cluster), but in Proxmox using KRBD I have this problem of the disk at 100% utilization. This same Ceph cluster ran without problems before, but from one day to the next the Windows machines started showing this symptom. When I use NFS it's...
  9. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    For more information, I ran an iperf test in my Ceph cluster and this is the result. And from Proxmox to Ceph...
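    The results themselves did not survive in this snippet. For reference, a network test like the one described is typically run along these lines (the hostname ceph1 is hypothetical, and the exact flags used in the thread are unknown):

    ```shell
    # On one Ceph node, start the iperf3 server:
    iperf3 -s

    # From a Proxmox node, measure throughput toward it
    # for 30 seconds with 4 parallel streams:
    iperf3 -c ceph1 -t 30 -P 4
    ```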
  10. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    Hey Dominic, these are my configurations (qm config 101):

    root@devpve02:~# cat /etc/pve/storage.cfg
    dir: local
        path /var/lib/vz
        content backup,iso,vztmpl
    lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
    nfs: pvebackup
        disable
        export...
  11. [SOLVED] KRBD and external CEPH slow - VM disk use 100%

    Hello, I have had a problem in my Proxmox cluster for some time. I have 16 nodes in my cluster, with their VMs on an external Ceph cluster with 6 nodes and 72 SSD OSDs; everything is connected via 25Gb networks to 2 core switches with 100Gb each. I use an RBD pool, and in Proxmox, when mounting the...
  12. [SOLVED] Problem add external rbd storage

    Mira, thank you very much, that solution was perfect. That's right, some rbd package was probably out of date, because Ceph had been updated to the Octopus version and so it didn't connect. Now all 16 hosts are OK after the update. Thanks again.
  13. [SOLVED] Problem add external rbd storage

    When I connect this storage, it returns this. And the storage. This is my configuration in the storage cluster...
  14. [SOLVED] Problem add external rbd storage

    Hey Mira, I found this entry in journalctl; the message appears when I enable the Ceph RBD storage:

    May 03 14:56:52 pve1 pvestatd[2212]: status update time (5.716 seconds)
    May 03 14:57:00 pve1 systemd[1]: Starting Proxmox VE replication runner...
    May 03 14:57:00 pve1 systemd[1]: pvesr.service...
  15. [SOLVED] Problem add external rbd storage

    Are all of those Ceph Monitors? Is the external pool called vmstore, and is the file in /etc/pve/priv/ceph also called vmstore.keyring? Yes, the name of the keyring is the same as the external pool. On the other test Proxmox, where I never had this storage, it connects without problems; on this one I...
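    For an external Ceph cluster, Proxmox expects the keyring file to be named after the storage ID defined in storage.cfg (which in this thread happens to match the pool name). A sketch of the usual provisioning step, with a hypothetical storage ID vmstore and monitor host cephmon:

    ```shell
    # Copy the Ceph client keyring from a monitor node into the
    # path Proxmox reads for the storage ID "vmstore":
    scp cephmon:/etc/ceph/ceph.client.admin.keyring \
        /etc/pve/priv/ceph/vmstore.keyring
    ```

    A mismatch between the storage ID and the keyring filename is a common cause of the connection errors discussed here.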
  16. [SOLVED] Problem add external rbd storage

    Thanks for the reply. I swapped the keyring, but the error continues. If I do the same configuration on a new Proxmox, the storage connects without problems; only in this cluster, where I already had the storage before, do I have the problem. Could it be that, by reusing the same IP, it was in some...
  17. [SOLVED] Problem add external rbd storage

    Good morning, I have a cluster with 16 Proxmox hosts and an external Ceph cluster configured to store the VMs from Proxmox. I was using it normally in Proxmox, and recently we had to do maintenance on the storage: we moved all the VMs to another storage, and I removed the RBD storage from...

