Search results

  1. Old chunk files not removed after prune + GC on Proxmox Backup Server

    Hello everyone, I’m seeing some unexpected old chunk files on my Proxmox Backup Server, even though prune and garbage collection jobs are running regularly. My prune configuration is: Daily: 7, Weekly: 5, Monthly: 2. Today is November 13th, 2025, and the last garbage collection ran...
  2. Fleecing Storage (rbd)

    Hello, I have a PVE cluster with Ceph storage. I discovered the fleecing option in the advanced backup task, where I can select an RBD storage. How does it work in detail? Does it create an RBD snapshot, and where can I see the saved "fleecing" data on my RBD storage, which it will merge once the...
  3. Backup Performance - would multithread improve it?

    Hello, We run our PBS and have mounted RBD block devices from a Ceph cluster. The Ceph cluster consists of 12, 14, and 16 enterprise HDDs (not SSDs), making it relatively cheap, large, but somewhat slow in terms of I/O performance. Benchmarks with parallel threads perform very well. However...
  4. corosync - KNET - MTU Warnings (MTU: 1500 vs 1397)

    Hello, I get a lot of logs like this. Is it something to worry about? Dec 20 08:23:48 node01 corosync[2647]: [KNET ] udp: detected kernel MTU: 1500 Dec 20 08:23:48 node01 corosync[2647]: [KNET ] pmtud: PMTUD completed for host: 3 link: 0 current link mtu: 1397 Dec 20 08:23:48 node01...
  5. Is there a Cross-Cluster Live Migration Feature?

    Hello, Is there a cross-cluster live migration feature available? We operate multiple 3-node clusters and maintain one "empty 3-node cluster" that serves as our update/spare cluster. This allows us to rotate VMs from one cluster to another during major updates or release changes. We use Ceph...
  6. Proxmox Cluster Migration (ceph only)

    Hello, I am thrilled to see pct remote_migrate and qm remote_migrate in production. I read https://forum.proxmox.com/threads/how-to-migrate-vm-from-one-pve-cluster-to-another.68762/page-2 and thought I could share the script we use to migrate VMs between clusters (over 400 VMs so far)...
  7. osd performance problem after upgrading ceph 14.2 => 15.2.15

    Hello list, we upgraded from Ceph 14.2 to 15.2.15, still on Debian 10. The upgrade went quite smoothly and ended with HEALTH_OK. Two hours later some I/O problems started and VMs got unresponsive, right around the snapshot cron jobs... It looks like the OSDs (SSD) are really busy now...
  8. 7.1.2 Installer Problem on RS500A-E10-RS12 with NVME

    Hello, I am getting this looping error when I boot the 7.1.2 installer image. If I remove the NVMe drives, the error does not happen. How can this be fixed? Cheers, Mario
  9. How to backup MS Exchange properly

    Hello, how do you back up MS Exchange properly? This might be out of scope here, but I don't know where else to ask. The VM-based backups are nice to have, but they are not consistent with regard to the Exchange DB transaction log. Also, features such as mailbox/mail restore are (of course) not...
  10. [SOLVED] How to rename pbs Backup Target Name

    Hello, how can I rename my backup target? It started as a test but is now live :) Content of /etc/pve/storage.cfg: pbs: proxmox-backup-test datastore Backup-Cluster5a server 10.10.50.200 content backup .... When I change it to "pbs: proxmox-backup" in...
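    The storage.cfg excerpt in the preview is flattened; assuming the usual /etc/pve/storage.cfg layout, the rename amounts to changing the storage ID on the stanza's header line. A sketch only (comment lines added for illustration); any backup jobs or VM configs that reference the old ID would need updating as well:

    ```
    # Before (as quoted in the post):
    pbs: proxmox-backup-test
            datastore Backup-Cluster5a
            server 10.10.50.200
            content backup

    # After the rename, only the ID on the header line changes:
    pbs: proxmox-backup
            datastore Backup-Cluster5a
            server 10.10.50.200
            content backup
    ```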
  11. Problems with NAT/MASQ

    Hello, I have a node with one physical NIC; my config is below. I have problems with the firewall: if I enable the Proxmox firewall, the VM packets seem to never reach my NAT rule. On my external interface I just see the non-NATted packet: tcpdump -i eno1 -n icmp tcpdump: verbose output...
  12. Restore as clone

    Hello, can I restore a backup as a clone somehow? Sometimes I want to test or look something up, and it would be great if I could restore the live system as a copy/clone under a new ID, or something alike, without touching the live system. Cheers, Michael
  13. VM IO hiccup when restarting ceph osds

    Hello, I run Proxmox with Ceph. When doing maintenance (like updates, changes on the network, etc.) I sometimes need to restart the OSDs. There are no VMs on the node where I do the maintenance. I think I read somewhere that when you have replica/size 3, one file/block(?) is always the primary...
  14. Rollback needs all IO resources (ceph storage)

    Hello, my cluster struggles with I/O if I roll back a VM. Is there a way to I/O-limit a rollback (Ceph storage)? Cheers, Mario
  15. need hardware recommendation for 3 Node Cluster

    Hello, does someone have a nice hardware recommendation for a 3-node cluster? Right now we have such nodes: 1U rackmount / Supermicro SuperServer 1019P-WTR 1x CPU Intel Xeon Silver 4114 10C - 2.2 GHz (Intel Xeon Silver 4114, 2.20GHz, 10C/20T, LGA 3647, tray) 6x 16GB Samsung DDR4-2666 CL19...
  16. LDAP User Auth with proxmox 6.1.7

    Hello, I am trying to use LDAP user auth (realm). The TCP connection itself works: root@cluster6-node02:~# telnet 192.168.150.7 7389 Trying 192.168.150.7... Connected to 192.168.150.7. Escape character is '^]'. If I try to auth at the login, I don't see the host/node asking my LDAP server...
  17. Ceph stuck at: objects misplaced (0.064%)

    Hello, I am running 5.4-15 with Ceph, and I have been stuck here with 0.064% misplaced for days and don't know why. root@node01:~ # ceph -s cluster: id: 251c937e-0b55-48c1-8f34-96e84e4023d4 health: HEALTH_WARN 1803/2799972 objects misplaced (0.064%) mon node02 is...
  18. How to add a 2nd Corosync Link

    Hello, I would like to add a 2nd corosync ring/network for redundancy. How can I add "Link 1" after I have taken the cluster into production? Thanks, Michael
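    For reference, a second link is registered by editing /etc/pve/corosync.conf. A minimal sketch, assuming hypothetical node names and addresses; the config_version must be incremented so the change propagates to all nodes:

    ```
    # Hypothetical excerpt of /etc/pve/corosync.conf
    nodelist {
      node {
        name: node01
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.1.1   # existing Link 0 address
        ring1_addr: 10.10.2.1   # new: Link 1 address (add one for every node)
      }
    }

    totem {
      version: 2
      config_version: 5         # must be bumped on every edit
      interface {
        linknumber: 0
      }
      interface {
        linknumber: 1           # new: enables the second kronosnet link
      }
    }
    ```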
  19. Change 3 Node Ceph Cluster to 1 Node

    Hello, I have an old Proxmox 4.4 cluster and would like to "reinstall" it. My goal is to change that 3-node cluster with Ceph storage to a single node, to get two nodes free for a fresh install. I guess I just change the quorum to 1 and then migrate all VMs to that one node, then turn off the two...
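    The quorum part of the plan above maps roughly onto pvecm. A hedged command sketch with hypothetical node names, not a tested procedure; note that Ceph itself would separately need its pool size/min_size and monitor count reduced before running on one node:

    ```shell
    # On the node that will remain (hypothetical name node01), after all
    # VMs have been migrated to it:

    # Let the cluster reach quorum with a single vote
    pvecm expected 1

    # Remove the two departed nodes from the cluster configuration
    # (run only once they are powered off for good)
    pvecm delnode node02
    pvecm delnode node03
    ```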
  20. [SOLVED] Corosync/HA logic

    Hello, 3-node test cluster. Test case: cut off node10's (the master's) cluster network connection. ha-manager status on node08: quorum OK master node10 (old timestamp - dead?, Tue Jan 8 17:12:52 2019) lrm node08 (active, Tue Jan 8 17:13:31 2019) lrm node09 (idle, Tue Jan 8 17:13:34 2019)...