Recent content by Rainerle

  1. [How to] Run Flatcar Container Linux on Proxmox

    Would it not be better to first try and run Flatcar as an LXC container instead of a VM? Is anybody working on that?
  2. Hardware replacement... slow OSD backfill...

    Disabling noout and taking the old OSDs on one node down sped up the process. We did set the weight to 0 for the old OSDs, but that only took us so far... (a drain-command sketch follows after this list)
  3. Hardware replacement... slow OSD backfill...

    noout on or off does not seem to make a difference... Current state:
  4. Hardware replacement... slow OSD backfill...

    Hi, we are currently busy replacing 8 servers with 8 new machines now that their lease is over. We had some problems with the Broadcom 2x 100GBit BCM57508 (P2100G / BCM957508-P2100G) cards and replaced them with the old but sturdy Mellanox cards, so we are a little behind schedule with the OSD migration...
  5. High network volume VMs with ping loss

    Decided to go with a bare-metal installation for these VMs.
  6. Update Proxmox 6.2 zu 8.4

    Life is too short to waste time on old, slow computers...
  7. High network volume VMs with ping loss

    Hi, we operate three Flatcar-based VMs on top of three Proxmox VE hosts for a memory-based database. The hardware is still as described here, currently running Proxmox VE 8.3.5. The disks in use are two NVMe drives in a ZFS mirror handing out ZFS volumes to the VMs, with ZFS pools on top of those volumes... (a storage-layout sketch follows after this list)
  8. CephFS: df versus du versus getfattr

    So it looks like this graph is coming from df. This is what df shows (a getfattr comparison is sketched after this list):
      root@proxmox01:~# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      udev            252G     0  252G   0% /dev
      tmpfs            51G  2.2M   51G   1% /run
      ...
  9. HA NFS service for KVM VMs on a Proxmox Cluster with Ceph

    Still running on the setup described in post #27. The bugs I found back then should be fixed by now, but I never tried again... If you want to try (a rough container sketch follows after this list):
    - Use an LXC container so you do not need to run keepalive for HA; the restart of those containers is fast enough
    - Use NixOS, as it is easy to...
  10. Support for CephFS via NFS (NFS-Ganesha)

    Hi, you could have a look at my journey with NFS on CephFS on Proxmox... https://forum.proxmox.com/threads/ha-nfs-service-for-kvm-vms-on-a-proxmox-cluster-with-ceph.80967/ As far as I understand, the bugs I encountered should be solved by now in the stable NFS-Ganesha version 4 series...
  11. Enable HA on all VM's

    I wrote a script for that to run regularly on each host node (the truncated remainder is sketched after this list)...
      root@proxmox07:~# cat /usr/local/bin/pve-ha-enable.sh
      #!/bin/bash
      # Running VMs
      VMIDS_R=$(qm list | grep running | awk '{print $1}' | tail -n +1)
      # Stopped VMs
      VMIDS_S=$(qm list | grep stopped | awk '{print $1}' | tail -n +1)
      ...
  12. Proxmox Ceph: RBD running out ouf space, usage does not fit to VMs disk sizes

    We had RBD image snapshots which we deleted, but rados objects relating to those snapshots were left behind. We moved all VM disk images to a local disk storage and had to delete the RBD pool, which caused further problems. Do not delete stuff on Ceph - just add disks... ;-)
  13. Proxmox VE 7.2 released!

    We still miss the "Maintenance Mode" we know from VMware with shared storage. There is nothing like that visible on the roadmap. Is something like that planned? Example (a manual workaround is sketched after this list):
    - a Proxmox node needs hardware maintenance
    - instead of Shutdown, click "Maintenance Mode"
    - VMs are migrated away to other...
  14. Proxmox Ceph: RBD running out ouf space, usage does not fit to VMs disk sizes

    Trying now to get rid of that broken RBD pool:
      root@proxmox07:~# rados -p ceph-proxmox-VMs ls | head -10
      rbd_data.7763f760a5c7b1.00000000000013df
      rbd_data.76d8c7e94f5a3a.00000000000102c0
      rbd_data.7763f760a5c7b1.0000000000000105
      rbd_data.f02a4916183ba2.0000000000013e45
      ...
  15. Proxmox Ceph: RBD running out ouf space, usage does not fit to VMs disk sizes

    Looking at https://forum.proxmox.com/threads/ceph-storage-usage-confusion.94673/ So here is the result of the four counts:
      root@proxmox07:~# rbd ls ceph-proxmox-VMs | wc -l
      0
      root@proxmox07:~# rados ls -p ceph-proxmox-VMs | grep rbd_data | sort | awk -F. '{ print $2 }' | uniq -c | sort -n | wc -l
      ...
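
For item 2 above (slow OSD backfill), a minimal sketch of the kind of drain sequence being described, assuming an example OSD id of 12; the throttle values are my assumptions and would need tuning per cluster, and this is not the exact procedure from the thread.

    # Hypothetical example: drain osd.12 on the node that is being replaced.
    ceph osd crush reweight osd.12 0          # weight 0 so PGs backfill onto the new OSDs
    ceph config set osd osd_max_backfills 4   # assumed throttle bump; watch client latency
    ceph config set osd osd_recovery_max_active 8
    ceph osd out 12                           # mark out once it is (nearly) empty
    systemctl stop ceph-osd@12                # then stop the daemon on that node
    ceph -s && ceph osd df tree               # watch backfill progress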
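
For item 7 above, a rough sketch of the storage layout described there (an NVMe ZFS mirror handing zvols to the VMs); the device paths, pool name and storage id are placeholders, not the actual configuration.

    # Placeholder device paths, pool name and storage id.
    zpool create -o ashift=12 nvmepool mirror /dev/nvme0n1 /dev/nvme1n1
    # Register the pool as VM image storage in Proxmox VE
    pvesm add zfspool nvme-zfs --pool nvmepool --content images
    # Proxmox VE then allocates zvols such as nvmepool/vm-101-disk-0 per VM disk;
    # the guest builds its own ZFS pool on top of that volume.
    zfs list -t volume -r nvmepool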
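
For item 8 above, the three views being compared, shown against a hypothetical CephFS mountpoint; the paths are placeholders.

    # Placeholder paths; adjust to the actual CephFS mount.
    df -h /mnt/pve/cephfs                                  # filesystem-level numbers (what the graph shows)
    du -sh /mnt/pve/cephfs/backups                         # walks every file, slow on big trees
    getfattr -n ceph.dir.rbytes /mnt/pve/cephfs/backups    # recursive size as tracked by the CephFS MDS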
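
For item 9 above, a very rough sketch of the container-based variant suggested there; the VMID, template path, addresses and storage are all placeholders, and Proxmox HA restarts the container instead of keepalive.

    # All IDs, names, addresses and the template path are placeholders.
    pct create 200 local:vztmpl/nixos-custom_amd64.tar.xz \
        --hostname nfs01 --memory 2048 --onboot 1 \
        --net0 name=eth0,bridge=vmbr0,ip=10.0.0.50/24,gw=10.0.0.1 \
        --rootfs local-zfs:8
    # Let the Proxmox HA stack restart the container on another node
    ha-manager add ct:200 --state started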
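
For item 11 above, the preview cuts the script off; here is a minimal sketch of how the remainder could register the VMs with the HA manager. The chosen states and the duplicate check are my assumptions, not the original script.

    #!/bin/bash
    # Sketch only - not the original /usr/local/bin/pve-ha-enable.sh.
    VMIDS_R=$(qm list | grep running | awk '{print $1}')   # running VMs
    VMIDS_S=$(qm list | grep stopped | awk '{print $1}')   # stopped VMs
    for vmid in $VMIDS_R; do
        ha-manager config | grep -q "vm:${vmid}" || ha-manager add "vm:${vmid}" --state started
    done
    for vmid in $VMIDS_S; do
        ha-manager config | grep -q "vm:${vmid}" || ha-manager add "vm:${vmid}" --state stopped
    done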
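
For item 13 above, a sketch of the manual steps that the requested "Maintenance Mode" would automate; the target node is a placeholder, and a real script would pick targets based on free resources.

    #!/bin/bash
    # Evacuate all running VMs from this node before hardware maintenance.
    TARGET=proxmox02   # placeholder target node
    for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
        qm migrate "$vmid" "$TARGET" --online
    done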