Recent content by mohnewald

  1.

    Old chunk files not removed after prune + GC on Proxmox Backup Server

    Hello everyone, I’m seeing some unexpected old chunk files on my Proxmox Backup Server, even though prune and garbage collection jobs are running regularly. My prune configuration is: Daily: 7, Weekly: 5, Monthly: 2. Today is November 13th, 2025, and the last garbage collection ran... (see the prune/GC sketch after this list)
  2.

    Fleecing Storage (rbd)

    Hello Fabian, ok, I tested the backup directly from the VM's Backup button. It seems that fleecing only applies when it is run by the cluster backup job. Now I get the info I expected: INFO: creating Proxmox Backup Server archive 'vm/100/2025-03-04T15:54:55Z' Logical volume...
  3.

    Fleecing Storage (rbd)

    Hello, I have a PVE cluster with Ceph storage. I discovered the fleecing option in the advanced backup task settings, where I can select an RBD storage. How does it work in detail? Does it create an RBD snapshot, or where can I see the saved "fleecing" data on my RBD storage that it will merge once the... (see the fleecing sketch after this list)
  4.

    Backup Performance - would multithread improve it?

    ...Easy answer to poor software architecture :p if it does not perform, throw hardware at it :D Back to my point: the performance "could" be there, and maybe there is a way to parallelize it at the architecture level.
  5.

    Backup Performance - would multithread improve it?

    Hello, we run our PBS with RBD block devices mounted from a Ceph cluster. The Ceph cluster consists of 12, 14, and 16 enterprise HDDs (not SSDs), making it relatively cheap and large, but somewhat slow in terms of I/O performance. Benchmarks with parallel threads perform very well. However... (see the benchmark sketch after this list)
  6.

    corosync - KNET - MTU Warnings (MTU: 1500 vs 1397)

    Hello spirit. Thanks for the feedback. :-)
  7.

    corosync - KNET - MTU Warnings (MTU: 1500 vs 1397)

    Hello, I get a lot of log messages like this. Is it something to worry about? Dec 20 08:23:48 node01 corosync[2647]: [KNET ] udp: detected kernel MTU: 1500 Dec 20 08:23:48 node01 corosync[2647]: [KNET ] pmtud: PMTUD completed for host: 3 link: 0 current link mtu: 1397 Dec 20 08:23:48 node01... (see the MTU check sketch after this list)
  8.

    Is there a Cross-Cluster Live Migration Feature?

    Hello Cheiss, thanks for the fast feedback. I had a look at it on 08.03.2024 and my internal test notes were: You have to allow VM disks on the remote's local storage (even though the migration still goes to Ceph). The old VM remains in the "Migrate" state. Each VM bridge has to be manually...
  9.

    Is there a Cross-Cluster Live Migration Feature?

    Hello, Is there a cross-cluster live migration feature available? We operate multiple 3-node clusters and maintain one "empty 3-node cluster" that serves as our update/spare cluster. This allows us to rotate VMs from one cluster to another during major updates or release changes. We use Ceph... (see the remote-migrate sketch after this list)
  10.

    Proxmox Cluster Migration (ceph only)

    Hello, I am thrilled to see pct remote_migrate and qm remote_migrate in production. I read: https://forum.proxmox.com/threads/how-to-migrate-vm-from-one-pve-cluster-to-another.68762/page-2 and thought I could share the script we use to migrate VMs between clusters (over 400 VMs so far)...
  11.

    osd performance problem after upgrading ceph 14.2 => 15.2.15

    We never found out the root cause of the problem. Our new update policy is to migrate the VMs to an empty/healthy cluster and never to run big release updates on a live system. We have 7 clusters (3 nodes each), and one cluster is always empty/spare. We use an rbd export/import/diff script to move... (see the rbd diff sketch after this list)
  12.

    [SOLVED] Ceph performance after upgrade to pacific extremely slow

    I discovered that the CPU load for the OSDs was at 100%. I would guess due to very many small IOPS, but iostat does not show many IOPS. Anyway... so I ended up deleting ALL snapshots after I did not get a reply here...
  13.

    [SOLVED] Ceph performance after upgrade to pacific extremely slow

    does "ceph osd set nosnaptrim" bring your IO back up again? in my case: root@cluster5-node01:~# rados bench -p scbench 10 write --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects Object prefix...
  14.

    [SOLVED] Ceph performance after upgrade to pacific extremely slow

    my related post: https://forum.proxmox.com/threads/osd-performance-problem-after-upgrading-ceph-14-2-15-2-15.110212/
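
Prune/GC sketch (item 1): a minimal CLI sequence to cross-check retention and garbage collection. The datastore "store1", the repository string and the backup group "vm/100" are placeholders; the keep values are taken from the post.

    # dry-run the retention settings from the post against one backup group
    proxmox-backup-client prune vm/100 --repository root@pam@pbs:store1 \
        --keep-daily 7 --keep-weekly 5 --keep-monthly 2 --dry-run

    # on the PBS host: start and inspect garbage collection for the datastore
    proxmox-backup-manager garbage-collection start store1
    proxmox-backup-manager garbage-collection status store1
    # note: GC only removes chunks whose access time is older than roughly
    # 24 hours, so chunks from freshly pruned snapshots survive until a later run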
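
Fleecing sketch (item 3): assuming PVE 8.2 or newer, where vzdump accepts a fleecing property string, and hypothetical storage IDs "pbs-store" and "ceph-rbd" backed by a pool "ceph-vm". The temporary fleecing image only exists while the backup is running and is cleaned up afterwards.

    # manual backup of VM 100 with fleecing data written to an RBD-backed storage
    vzdump 100 --storage pbs-store --fleecing enabled=1,storage=ceph-rbd

    # while the backup runs, the temporary fleecing image should show up
    # in the underlying pool (pool name is an assumption)
    rbd ls ceph-vm | grep -i fleece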
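
Benchmark sketch (item 5): one way to compare single-writer vs. parallel throughput on the Ceph side, and the per-client throughput on the PBS side. Pool and repository names are placeholders.

    # Ceph side: 60s write benchmark with 1 vs. 16 concurrent operations
    rados bench -p rbd 60 write -t 1  --no-cleanup
    rados bench -p rbd 60 write -t 16 --no-cleanup
    rados -p rbd cleanup

    # PBS side: chunking/compression/TLS throughput of a single client
    proxmox-backup-client benchmark --repository root@pam@pbs:store1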
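
MTU check sketch (item 7): a quick path-MTU test between two nodes, using a placeholder node address. Kronosnet subtracts its own protocol and crypto overhead, which is why the negotiated link MTU in the log (1397) is lower than the kernel MTU of 1500.

    # 1472 payload bytes + 28 bytes IPv4/ICMP headers = 1500; -M do forbids
    # fragmentation, so a reply confirms the path really carries 1500-byte frames
    ping -M do -s 1472 -c 3 10.0.0.2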
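
Remote-migrate sketch (items 9 and 10): the rough shape of a qm remote-migrate call. Host, API token, fingerprint, bridge and storage names are all placeholders, and the exact option syntax should be checked against the qm man page of the installed PVE version.

    # migrate VM 100 online to VM ID 100 on a node of the target cluster
    qm remote-migrate 100 100 \
        'host=192.0.2.10,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<target cert fingerprint>' \
        --target-bridge vmbr0 --target-storage ceph-rbd --online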
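
rbd diff sketch (item 11): the usual full-copy-plus-incremental pattern with rbd export-diff/import-diff, as one way such a migration script can work. Pool, image, snapshot and host names as well as the image size are placeholders.

    # create an empty image of the same size on the target cluster, then send
    # everything up to @base (import-diff recreates the @base snapshot there)
    ssh root@target-node rbd create --size 32G ceph-vm/vm-100-disk-0
    rbd snap create ceph-vm/vm-100-disk-0@base
    rbd export-diff ceph-vm/vm-100-disk-0@base - \
        | ssh root@target-node rbd import-diff - ceph-vm/vm-100-disk-0

    # later: send only the delta accumulated since @base
    rbd snap create ceph-vm/vm-100-disk-0@cut
    rbd export-diff --from-snap base ceph-vm/vm-100-disk-0@cut - \
        | ssh root@target-node rbd import-diff - ceph-vm/vm-100-disk-0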
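
nosnaptrim sketch (item 13): toggling the flag mentioned in the post around the same benchmark, plus an optional trim throttle; the sleep value of 2 seconds is an arbitrary example.

    # pause snapshot trimming cluster-wide and re-run the benchmark from the post
    ceph osd set nosnaptrim
    rados bench -p scbench 10 write --no-cleanup

    # re-enable trimming, optionally throttled between trim operations
    ceph osd unset nosnaptrim
    ceph config set osd osd_snap_trim_sleep 2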