Search results

  1. HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Still running on the setup described in post #27. The bugs I found back then should be fixed by now - but I never tried again... If you want to try:
    - Use an LXC container so you do not need to run keepalived for HA - the restart of those containers is fast enough
    - Use NixOS - as it is easy to...
  2. Support for CephFS via NFS (NFS-Ganesha)

    Hi, you could have a look at my journey for NFS - CephFS on Proxmox... https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/ As far as I understand, the bugs I encountered should be solved by now in the stable NFS Ganesha version 4 series...
  3. Enable HA on all VMs

    I wrote a script for that to run regularly on each host node...

        root@proxmox07:~# cat /usr/local/bin/pve-ha-enable.sh
        #!/bin/bash
        # Running VMs
        VMIDS_R=$(qm list | grep running | awk '{print $1}' | tail -n +1)
        # Stopped VMs
        VMIDS_S=$(qm list | grep stopped | awk '{print $1}' | tail -n +1)
        ...
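
    The snippet ends before the part that actually registers the VMs with the HA stack. A minimal sketch of how such a loop could continue, assuming ha-manager is used to add the resources (the loop bodies and state choices are my assumption, not the original script):

        # Hypothetical continuation - not the original script
        for VMID in $VMIDS_R; do
            ha-manager add "vm:$VMID" --state started 2>/dev/null || true
        done
        for VMID in $VMIDS_S; do
            ha-manager add "vm:$VMID" --state stopped 2>/dev/null || true
        done
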
  4. Proxmox Ceph: RBD running out of space, usage does not match the VMs' disk sizes

    We had RBD image snapshots which we deleted, but RADOS objects relating to those snapshots were left behind. We moved all VM disk images to local disk storage and had to delete the RBD pool, which caused further problems. Do not delete stuff on Ceph - just add disks... ;-)
  5. Proxmox VE 7.2 released!

    We still miss a "Maintenance Mode" like the one we know from VMware with shared storage. There is nothing like that visible on the Roadmap. Is something like that planned? Example:
    - Proxmox node needs hardware maintenance
    - Instead of Shutdown, click "Maintenance Mode"
    - VMs are migrated away to other...
  6. Proxmox Ceph: RBD running out of space, usage does not match the VMs' disk sizes

    Trying now to get rid of that broken RBD pool:

        root@proxmox07:~# rados -p ceph-proxmox-VMs ls | head -10
        rbd_data.7763f760a5c7b1.00000000000013df
        rbd_data.76d8c7e94f5a3a.00000000000102c0
        rbd_data.7763f760a5c7b1.0000000000000105
        rbd_data.f02a4916183ba2.0000000000013e45
        ...
  7. Proxmox Ceph: RBD running out of space, usage does not match the VMs' disk sizes

    Looking at https://forum.proxmox.com/threads/ceph-storage-usage-confusion.94673/ So here is the result of the four counts:

        root@proxmox07:~# rbd ls ceph-proxmox-VMs | wc -l
        0
        root@proxmox07:~# rados ls -p ceph-proxmox-VMs | grep rbd_data | sort | awk -F. '{ print $2 }' | uniq -c | sort -n | wc -l
        ...
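
    For reference, a sketch of the kind of comparison the post is doing - images the pool knows about versus distinct image prefixes that still own rbd_data objects (pool name taken from the thread; the exact pipeline may differ from the original):

        # Images registered in the pool
        rbd ls ceph-proxmox-VMs | wc -l

        # Distinct image prefixes that still have data objects in the pool;
        # the middle field of rbd_data.<prefix>.<object-number> identifies the image
        rados ls -p ceph-proxmox-VMs \
          | grep '^rbd_data\.' \
          | awk -F. '{ print $2 }' \
          | sort -u \
          | wc -l
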
  8. Proxmox Ceph: RBD running out of space, usage does not match the VMs' disk sizes

    I moved all VM disks from the Ceph RBD to a host-local ZFS mirror and after migrating it looks like this:

        root@proxmox07:~# rbd disk-usage --pool ceph-proxmox-VMs
        root@proxmox07:~# ceph df detail
        --- RAW STORAGE ---
        CLASS  SIZE    AVAIL   USED  RAW USED  %RAW USED
        ssd    87 TiB  19 TiB  68...
  9. Proxmox Ceph: RBD running out of space, usage does not match the VMs' disk sizes

    Hi, it seems that I am running out of space on the OSDs of a five-node hyperconverged Proxmox Ceph cluster:

        root@proxmox07:~# rbd du --pool ceph-proxmox-VMs
        NAME           PROVISIONED  USED
        vm-100-disk-0  1 GiB        1 GiB
        vm-100-disk-1  ...
  10. NFS share from LXC

    Use NFS Ganesha within a container - NFS Ganesha is a userspace NFS service, so it should not hang the Proxmox host's kernel.
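
    A rough sketch of the moving parts, assuming a Debian-based LXC container and the CephFS FSAL; the package names, hostname and pseudo path below are assumptions, not taken from the thread:

        # Inside the LXC container (Debian/Ubuntu template assumed)
        apt install -y nfs-ganesha nfs-ganesha-ceph

        # /etc/ganesha/ganesha.conf then needs an EXPORT block whose FSAL is "CEPH",
        # so Ganesha serves CephFS through libcephfs entirely in userspace - no
        # kernel NFS server on the Proxmox host is involved.
        systemctl enable --now nfs-ganesha

        # From an NFS client (the pseudo path depends on the EXPORT definition)
        mount -t nfs -o vers=4.2 ganesha-ct.example.com:/cephfs /mnt
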
  11. Proxmox Backup Server 2.1 released!

    Could not find a way to send a pull request to the PVE Doc git. https://pbs.proxmox.com/docs/managing-remotes.html#bandwidth-limit Typo: " congetsion "
  12. Disk overview: Wearout percentage shows 0%, IPMI shows 17% ...

    Hi, we are running an older Proxmox Ceph cluster here and I am currently looking through the disks. The OS disks have a Wearout of two percent, but the Ceph OSDs still show 0%?! So I looked into the Lenovo XClarity Controller: for the OS disks it looks the same, but the Ceph...
  13. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    I only found out about msecli from this ZFS benchmark thread and back then had not considered it for my benchmarks. So yes, I was wrong - it should be a 4KB NVMe block size. And the default Ceph block size is 4MB - no idea whether Proxmox changes anything on the RBDs here.
  14. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6.4TB Micron 9300 MAX NVMe

    The data for these graphs is collected by Zabbix agents into a Zabbix DB. From there I used the Zabbix plugin in Grafana. Our decision to use Zabbix is 10 years old and we moved away from Nagios. As long as we are still able to monitor everything (really everything!) in Zabbix we do not even...
  15. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    I performance-tested from 1 to 4 OSDs per NVMe. It really depends on the system configuration - to drive more OSDs you need more CPU threads. See this thread and the posts around there. With my experience so far, I would now just create one OSD per device. As Ceph uses a 4M "block size" I would...
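
    For reference, the two variants could be created roughly like this (a sketch; pveceph is the Proxmox wrapper around ceph-volume, and the device name is a placeholder):

        # One OSD per NVMe device (the recommendation above)
        pveceph osd create /dev/nvme0n1

        # Several OSDs on one device, e.g. 2, via ceph-volume directly
        ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
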
  16. VM IO freeze for 15 seconds when a Ceph node reboots gracefully

    So what magical and secret settings did you set then?
  17. Running snapshot backup stalls VM - Backup throughput is low

    So here is the status... the upgrade is being planned:

        root@proxmox07:~# qm status 167 --verbose
        blockstat:
            scsi0:
                account_failed: 1
                account_invalid: 1
                failed_flush_operations: 0
                failed_rd_operations: 0
                ...
  18. [SOLVED] PVE ZFS mirror installation without 512MByte Partition - how to convert to UEFI boot?

    Ok, took some time to find out... proxmox-boot-tool does not prepare the systemd-boot configuration if /sys/firmware/efi does not exist - so to prepare the sda2/sdb2 filesystem for systemd-boot before booting using UEFI I had to remove those checks from /usr/sbin/proxmox-boot-tool.
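
    For context, the normal flow on a host that is already booted via UEFI looks roughly like this (a sketch of the standard commands; the point of the post is that the init step checks /sys/firmware/efi and refuses to run while the node is still booted in legacy BIOS mode):

        # Only present when the current boot is UEFI
        ls /sys/firmware/efi

        # Format the 512M ESP and register it for kernel syncing / systemd-boot
        proxmox-boot-tool format /dev/sdb2
        proxmox-boot-tool init /dev/sdb2

        # Show configured ESPs and the detected boot mode
        proxmox-boot-tool status
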
  19. [SOLVED] PVE ZFS mirror installation without 512MByte Partition - how to convert to UEFI boot?

    So I was able to change the disk layout online by doing this:

        zpool status       # !!! Be careful with device names and partition numbers !!!
        zpool detach rpool sdb2
        cfdisk /dev/sdb    # Keep only partition 1 (BIOS), create partition 2 with EFI and partition 3 with ZFS
        fdisk -l /dev/sdb  # Should look...
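
    The snippet is cut off, but the remaining steps of such an online conversion would plausibly be re-attaching the new ZFS partition and preparing the new ESP (a sketch; the surviving mirror member sda2 is an assumption, not taken from the post):

        # Re-attach the repartitioned disk to the pool and let it resilver
        zpool attach rpool sda2 /dev/sdb3
        zpool status                        # wait until resilvering has finished

        # Prepare the new 512M ESP for UEFI boot
        proxmox-boot-tool format /dev/sdb2
        proxmox-boot-tool init /dev/sdb2
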
  20. [SOLVED] PVE ZFS mirror installation without 512MByte Partition - how to convert to UEFI boot?

    The VMs all reside on Ceph RBDs shared between three nodes. I need to change all three nodes. So from my point of view, splitting the ZFS mirror, repartitioning, zfs sending the contents and rebooting should be the easiest approach.
