Search results

  1. [BUG] Ceph Recovery bar not showing percentage

    Oh, I forgot to attach the output file in my last post? Ah, I see... *.json is not allowed... I hadn't noticed that in my last post... sorry!
  2. [BUG] Ceph Recovery bar not showing percentage

    Sure! Now we have another recovery running, because one node failed yesterday. But it's the same problem.
  3. [BUG] Ceph Recovery bar not showing percentage

    Sorry, I can't get it into one picture as it gets cut off... I added the current state of the web interface, too. In the shell it says "remaining 3d" and "progress 0.05682..." but the web interface shows something quite different?
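
    A minimal sketch, assuming shell access to a node with the Ceph admin keyring, of how the recovery progress seen in the shell can be dumped for comparison with the web interface (the JSON output is presumably what an attachment like the one mentioned above would contain):

      # human-readable cluster status, including the recovery progress bar
      ceph -s
      # machine-readable status; recent Ceph releases report progress events here as well
      ceph status --format json-pretty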
  4. [BUG] Ceph Recovery bar not showing percentage

    Hi, is the recovery bar in the Ceph overview buggy? We have a recovery of an OSD running currently, but only 0.00% has been shown for ~2 hours. This is no big issue for us, but I just wanted to mention it in case it is not known yet. I also could not find anything in the search... proxmox-ve: 6.4-1...
  5. Errors while backup

    Okay, I shifted the starting time 2 hours earlier and that seems to have fixed the issue. Nevertheless, I could imagine that the NAS is pretty busy with other backups as well and that this caused the timeout. Is there a configuration option available to change that timeout?
  6. Errors while backup

    Hi, I have several VMs which always get an error while backing up. What could be the issue here? 143: 2021-05-09 22:27:05 INFO: Starting Backup of VM 143 (qemu) 143: 2021-05-09 22:27:05 INFO: status = running 143: 2021-05-09 22:27:05 INFO: VM Name: P-INT-VM-0620-134-sipsystem 143: 2021-05-09...
  7. Ceph missing Prometheus stats

    It's a bug in Ceph. I will wait for an update. :) https://tracker.ceph.com/issues/46142
  8. Ceph missing Prometheus stats

    Hi, I am not quite sure whether it belongs here or not, as I have had the issue since the last Proxmox update - otherwise I am sorry! I updated Proxmox last weekend to 6.4-5 (I think from 6.3-3 or so) and after that, some stats are missing in the Ceph Prometheus module. I am showing the exported...
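
    A minimal sketch, assuming the ceph-mgr prometheus module is enabled and listening on its default port 9283 (the host name is a placeholder), of how to list the currently exported metric names so they can be compared before and after an update:

      # make sure the exporter module is active on the manager
      ceph mgr module enable prometheus
      # dump the exported metric names (strip labels and values, keep unique names)
      curl -s http://<mgr-host>:9283/metrics | grep '^ceph_' | sed 's/[{ ].*//' | sort -u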
  9. [SOLVED] Proxmox VE Cluster reboots after switch shutdown

    Short feedback: yes, separating the networks worked! Thank you! /closed
  10. [SOLVED] Proxmox VE Cluster reboots after switch shutdown

    Exactly - all nodes and their NICs are in the same subnet (/16). Thanks, I will try your suggestion!
  11. [SOLVED] Proxmox VE Cluster reboots after switch shutdown

    Thanks for your feedback. I also saw that all NICs went down and wondered why. Yes, I also already had the idea to test it again without any VMs running, so as not to destroy anything by accident. :) root@pve01:~# ip route default via 192.168.213.91 dev vmbr0 proto kernel onlink...
  12. [SOLVED] Proxmox VE Cluster reboots after switch shutdown

    Hello all, a week ago on the weekend I did some maintenance work, for which I had to shut down the two switches for the Proxmox/Ceph cluster one after another. We are using the system as HCI for our VMs, which works pretty well, and we are happy with it. Every server has a connection to both...
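
    The fix reported further up in this thread was separating the networks. A minimal sketch, assuming Proxmox VE 6.x with corosync 3, of what a node entry in /etc/pve/corosync.conf can look like once the cluster traffic gets its own link (192.168.213.x is the management network seen in the post above, 10.10.10.x is an assumed dedicated cluster subnet):

      nodelist {
        node {
          # assumed dedicated corosync/cluster link on its own subnet
          ring0_addr: 10.10.10.1
          # existing management/VM network from the post, kept as a second link
          ring1_addr: 192.168.213.91
          name: pve01
          nodeid: 1
          quorum_votes: 1
        }
        # ... further nodes follow the same pattern ...
      }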
  13. Kernel 5.11

    Thank you, that worked. For anyone who is interested: 1) add "deb http://deb.debian.org/debian buster-backports main" to /etc/apt/sources.list 2) apt update 3) apt install -t buster-backports btrfs-progs
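
    The same three steps written out as root shell commands, assuming a Proxmox VE 6.x node (Debian Buster underneath):

      # 1) add the buster-backports repository
      echo "deb http://deb.debian.org/debian buster-backports main" >> /etc/apt/sources.list
      # 2) refresh the package index
      apt update
      # 3) install btrfs-progs from backports
      apt install -t buster-backports btrfs-progs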
  14. Kernel 5.11

    That would be great. It's the only reason why I wanted to test the new kernel, as I am waiting for the raid1c3 profile :)
  15. Kernel 5.11

    Hi, it's working so far in my environment (AMD Ryzen Threadripper 1900X). I wanted to test the btrfs raid1c3 feature, which was released in kernel version 5.5, but the profile is still missing? https://kernelnewbies.org/Linux_5.5#Btrfs_RAID1_with_3_and_4_copies_and_more_checksum_alternatives...
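
    A minimal sketch, assuming a btrfs-progs version that already knows the profile (>= 5.5, e.g. from backports as in the posts above) and three hypothetical devices, of how the raid1c3 profile is requested either at creation time or via a balance on an existing filesystem:

      # new filesystem with three copies of data and metadata (needs at least 3 devices)
      mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sdX /dev/sdY /dev/sdZ
      # or convert an existing, mounted filesystem (mount point is hypothetical)
      btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/pool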
  16. [SOLVED] Shrink disk on Ceph

    Ok, got it. I can access the Ceph RBD directly through qemu-img. Great! - Shrink the partition inside the VM (Windows, gparted, etc.) - Shut down the VM - Resize the image: qemu-img resize -f raw --shrink rbd:<pool>/<disk-img> <size> (example: qemu-img resize -f raw --shrink rbd:VMs/vm-111-disk-1 55G)...
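
    The shell side of that workflow as a short sketch, reusing the example VM 111 and pool "VMs" from the post; the qm rescan step is an assumption added here to refresh the size Proxmox shows in the VM configuration, it is not part of the post:

      # after shrinking the partition inside the guest and shutting the VM down:
      qemu-img resize -f raw --shrink rbd:VMs/vm-111-disk-1 55G
      # optionally verify the new size on the RBD level
      rbd info VMs/vm-111-disk-1
      # assumption: let Proxmox re-read the image size into the VM config
      qm rescan --vmid 111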
  17. [SOLVED] Shrink disk on Ceph

    Hi, how do I shrink a disk on a Ceph cluster (the workflow to reduce the partition beforehand is clear)? I tried root@server:~# qm resize 111 scsi1 40G, but it failed with "shrinking disks is not supported". Yes, the "qm" manual says that shrinking is not supported, but here it sounds as if it is...
  18. [SOLVED] VM correct disk size

    That worked with a test-VM, thanks!
  19. [SOLVED] VM correct disk size

    Hello all, some time ago I somehow resized the disk incorrectly. Now the Proxmox WebUI shows it in B instead of TB. I tried several resizes to get the TB view again but failed. Currently it shows roughly 14TB and I want to resize it to 15TB. What is the correct resize value to get the TB view...
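
    For reference, a sketch of the qm resize syntax with an absolute size and a T suffix, using a hypothetical VM ID 100 and disk scsi0; whether this exact call was the fix confirmed in the reply above is not visible in the snippet:

      # set the disk to an absolute size of 15T (a leading + would grow it by that amount instead)
      qm resize 100 scsi0 15T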