Search results

  1. Proxmox Cluster Migration (ceph only)

    Hello, I am thrilled to see pct remote_migrate and qm remote_migrate in production. I read https://forum.proxmox.com/threads/how-to-migrate-vm-from-one-pve-cluster-to-another.68762/page-2 and thought I could share the script we use to migrate VMs between clusters (over 400 VMs so far)...
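
    A minimal sketch of how such a cross-cluster migration can be invoked with the built-in command; the target VMID, API token, host, fingerprint, storage, and bridge below are placeholders, not values from the thread:

        # Migrate VM 100 to the remote cluster as VM 2100 (placeholders throughout).
        # Requires an API token on the target cluster with sufficient privileges.
        qm remote-migrate 100 2100 \
          'apitoken=PVEAPIToken=root@pam!migrate=<SECRET>,host=203.0.113.10,fingerprint=<FP>' \
          --target-storage ceph-vm \
          --target-bridge vmbr0 \
          --online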

  2. osd performance problem after upgrading ceph 14.2 => 15.2.15

    We never found out the root of the problem. Our new update policy is now to migrate the VMs to an empty, healthy cluster and never run big release updates on a live system. We have 7 clusters (3 nodes each), and one cluster is always kept empty as a spare. We use an rbd export/import/diff script to move...
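
    The rbd-based move they mention can be approximated with export/import plus an incremental diff; a rough sketch, assuming a pool named rbd, an image vm-100-disk-0, and SSH access to a node of the target cluster:

        # 1) Full copy: snapshot the source image and ship it over.
        rbd snap create rbd/vm-100-disk-0@base
        rbd export rbd/vm-100-disk-0@base - | ssh target-node rbd import - rbd/vm-100-disk-0
        ssh target-node rbd snap create rbd/vm-100-disk-0@base
        # 2) Catch-up: after stopping the VM, ship only the delta since @base.
        rbd snap create rbd/vm-100-disk-0@final
        rbd export-diff --from-snap base rbd/vm-100-disk-0@final - \
            | ssh target-node rbd import-diff - rbd/vm-100-disk-0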

  3. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    I discovered that the CPU load for the OSDs was at 100%. I would guess due to very many small IOPS, but iostat does not show many IOPS. Anyway... so I ended up deleting ALL snapshots after I did not get a reply here...
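
    If one really does want to drop every RBD snapshot in a pool, a sketch of the destructive route, assuming a pool named rbd (note that snapshots taken through Proxmox are also referenced in the VM configs, which this does not clean up):

        # DESTRUCTIVE: purges all snapshots of every image in pool "rbd".
        for img in $(rbd -p rbd ls); do
            rbd snap purge "rbd/${img}"
        done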

  4. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    Does "ceph osd set nosnaptrim" bring your IO back up again? In my case:
    root@cluster5-node01:~# rados bench -p scbench 10 write --no-cleanup
    hints = 1
    Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
    Object prefix...
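
    To test that hypothesis, the flag can be toggled around a benchmark run; a sketch, assuming a scratch pool named scbench as in the quoted output:

        # Pause snapshot trimming cluster-wide, benchmark, then re-enable it.
        ceph osd set nosnaptrim
        rados bench -p scbench 10 write --no-cleanup
        ceph osd unset nosnaptrim
        rados -p scbench cleanup    # remove the leftover benchmark objects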

  5. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    My related post: https://forum.proxmox.com/threads/osd-performance-problem-after-upgrading-ceph-14-2-15-2-15.110212/

  6. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    We had the same problem this weekend. "ceph osd perf" was out of control. The source of the problem seemed to be the snaptrims. We disabled snaptrim with "ceph osd set nosnaptrim" to let the IO settle, and then:
    ceph tell 'osd.*' injectargs '--osd-max-trimming-pgs 1'
    ceph tell 'osd.*' injectargs...
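
    The second injectargs call is cut off in the snippet; a sketch of how the usual snaptrim throttles of that era fit together (the sleep value here is illustrative, not the poster's):

        # Rate-limit snapshot trimming so client IO can breathe, then resume it.
        ceph tell 'osd.*' injectargs '--osd-max-trimming-pgs 1'
        ceph tell 'osd.*' injectargs '--osd-snap-trim-sleep 3'
        ceph osd unset nosnaptrim    # trimming continues, now throttled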

  7. osd performance problem after upgrading ceph 14.2 => 15.2.15

    Hello list, we upgraded from Ceph 14.2 to 15.2.15, still on Debian 10. The upgrade went quite smoothly and ended with HEALTH_OK. Two hours later some IO problems started and VMs became unresponsive, right around the time of the snapshot cron jobs... It looks like the OSDs (SSD) are really busy now...

  8. 7.1.2 Installer Problem on RS500A-E10-RS12 with NVME

    Should I keep "noapic acpi=off" as a kernel boot parameter?
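
    For reference, such parameters are usually made permanent after installation via GRUB; a sketch, assuming a standard Debian-style GRUB boot (ZFS-on-root installs using systemd-boot keep the command line in /etc/kernel/cmdline instead):

        # /etc/default/grub -- append the parameters to the default command line:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet noapic acpi=off"

        # then regenerate the boot configuration:
        update-grub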

  9. 7.1.2 Installer Problem on RS500A-E10-RS12 with NVME

    Hello, I am getting this looping error when I boot the 7.1.2 installer image. If I remove the NVMe drives, the error does not happen. How can this be fixed? Cheers, Mario

  10. How to backup MS Exchange properly

    Hello, how do you back up MS Exchange properly? This might be out of scope here, but I don't know where else to ask. The VM-based backups are nice to have, but they are not consistent with regard to the Exchange DB transaction log. Also, features such as mailbox/mail restore are (of course) not...

  11. [SOLVED] How to rename pbs Backup Target Name

    Hello, how can I rename my backup target? It started as a test but is now live :) Content of /etc/pve/storage.cfg:
    pbs: proxmox-backup-test
        datastore Backup-Cluster5a
        server 10.10.50.200
        content backup
    ....
    When I change it to "pbs: proxmox-backup" in...
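
    A sketch of the rename, assuming the goal is simply to change the storage ID: only the stanza header changes, but anything referencing the old ID must be updated to match:

        # /etc/pve/storage.cfg -- change only the ID after "pbs:":
        pbs: proxmox-backup
            datastore Backup-Cluster5a
            server 10.10.50.200
            content backup

        # Afterwards, replace "proxmox-backup-test" wherever it is referenced,
        # e.g. in backup jobs (/etc/pve/jobs.cfg or /etc/pve/vzdump.cron).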

  12. Problems with NAT/MASQ

    The solution seems to be:
    iptables -t raw -I PREROUTING 1 -i fwbr+ -j CT --zone 1
    https://pve-user.pve.proxmox.narkive.com/1xw9toDr/nat-problems-with-pve-firewall
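
    To survive reboots, the rule can be tied to the bridge in /etc/network/interfaces; a sketch, assuming the bridge is vmbr0 (the rest of the stanza stays as configured):

        auto vmbr0
        iface vmbr0 inet static
            # ... existing address/bridge options unchanged ...
            post-up iptables -t raw -I PREROUTING 1 -i fwbr+ -j CT --zone 1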

  13. Problems with NAT/MASQ

    Hello, I have a node with one physical NIC; my config is below. I have problems with the firewall: if I enable the Proxmox firewall, the VM packets seem to never reach my NAT rule. On my external interface I just see the non-NATed packet: tcpdump -i eno1 -n icmp tcpdump: verbose output...
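
    For comparison, the masquerading setup from the Proxmox VE admin guide looks roughly like this (subnet, bridge, and NIC names are assumptions); with the firewall enabled, the conntrack-zone rule from the previous result may also be needed:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.10.1/24
            bridge-ports none
            bridge-stp off
            bridge-fd 0
            post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE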

  14. Restore as clone

    Hello, can I restore a backup as a clone somehow? Sometimes I want to test or look something up, and it would be great if I could restore the live system as a copy/clone under a new ID, or something similar, without touching the live system. Cheers, Michael
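
    Restoring into a fresh, unused VMID leaves the live guest untouched; a sketch, where the backup volume ID, VMID, and storage name are placeholders:

        # Restore a backup as a brand-new VM with ID 9100 on storage "local-lvm".
        # --unique 1 randomizes MAC addresses so the clone can run alongside the original.
        qmrestore 'proxmox-backup:backup/vm/100/2024-01-01T00:00:00Z' 9100 \
            --storage local-lvm --unique 1

        # The container equivalent is pct restore <new-vmid> <archive> ...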

  15. VM IO hickup when restarting ceph osds

    What do you think about these steps to perform a smooth upgrade/maintenance: 1.) set noout first 2.) set the primary-affinity to 0 on the affected OSDs ( https://ceph.io/geen-categorie/ceph-primary-affinity/ ) 3.) do the maintenance (network changes, upgrades, or whatever) 4.) set...
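
    In shell form, the proposed sequence would look something like this (the OSD IDs are placeholders):

        # 1) Keep CRUSH from rebalancing while OSDs go down.
        ceph osd set noout
        # 2) Move primary duty away from the OSDs about to be restarted.
        ceph osd primary-affinity osd.3 0
        ceph osd primary-affinity osd.4 0
        # 3) ...perform the maintenance / restart the OSDs...
        # 4) Restore the defaults.
        ceph osd primary-affinity osd.3 1
        ceph osd primary-affinity osd.4 1
        ceph osd unset noout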

  16. VM IO hickup when restarting ceph osds

    Hello, I run Proxmox with Ceph. When doing maintenance (like updates, changes to the network, etc.), I sometimes need to restart the OSDs. There are no VMs on the node where I do the maintenance. I think I read somewhere that when you have replica/size 3, one file/block(?) is always the primary...
