Recent content by dpl

  1. [SOLVED] MS Exchange with PBS (VSS Support)

    We manually use https://microsoft.github.io/CSS-Exchange/Databases/VSSTester/ to clean up / purge the Exchange transaction logs after Proxmox VM snapshot backups.
  2. [SOLVED] Network down on reboot because of "Failed to start Wait for udev To Complete Device Initialization"

    Thank you very much. We also wasted many hours debugging, even though the solution (unmounting the defective IPMI ISO) is so simple.
  3. Proxmox Cluster Upgrade from 6.4 to 7.3 (using Remote Ceph Storage)

    Hello Richard, we have successfully completed the infrastructure upgrade. GPU passthrough also works so far on the 10 machines under kernel 5.15. We only had problems starting UEFI VMs with Ceph storage for a short time. The error "rbd_cache_policy=writeback: invalid conf option...
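
    When an RBD option such as rbd_cache_policy is rejected as invalid, a reasonable first check (a hedged diagnostic sketch, not the fix from the original thread) is which Ceph client version the PVE node actually uses, since the options librbd accepts depend on the installed version:

        # Show the Ceph client version and the installed librbd / ceph-common packages
        ceph --version
        dpkg -l | grep -E 'ceph-common|librbd'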
  4. Proxmox Cluster Upgrade from 6.4 to 7.3 (using Remote Ceph Storage)

    Running the migrated VMs from all 12 nodes on the thirteenth node is not the intention. The thirteenth node is only used to take over the "VM (KVM) profile settings" and "VM (block) data" in the meantime, so that a data transfer (to Ceph) from the local storage (ZFS) of the nodes takes place...
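
    Once a VM's configuration sits on a node that can reach both the local ZFS storage and the Ceph RBD storage, the block data can be moved online; a minimal sketch, assuming VMID 100, a disk named scsi0 and a Ceph storage called "ceph-rbd" (all placeholders):

        # Move the VM disk from local ZFS to the Ceph RBD storage and delete the old copy
        qm move_disk 100 scsi0 ceph-rbd --delete 1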
  5. Proxmox Cluster Upgrade from 6.4 to 7.3 (using Remote Ceph Storage)

    Hello all, we are currently running a Proxmox cluster consisting of 12 nodes with PVE 6.4 (latest patches) and local ZFS storage. In the future, the PVE cluster will be fully connected to Ceph storage. To migrate the infrastructure to the new PVE version 7.3, we wanted to use our external...
  6. Ceph: sudden slow ops, freezes, and slow-downs

    @YAGA - add SSDs/NVMes to the nodes, create a "replicated_rule" based on "device-class", and move the "cephfs_metadata" pool to the SSDs/NVMes. Maybe this will speed up your CephFS "a bit".
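
    A minimal sketch of what that could look like on the Ceph side, assuming the fast OSDs already report device class "ssd" and the pool is called "cephfs_metadata" (adapt the names):

        # Create a replication rule that only selects OSDs of device class "ssd"
        ceph osd crush rule create-replicated replicated_ssd default host ssd
        # Move the CephFS metadata pool onto that rule
        ceph osd pool set cephfs_metadata crush_rule replicated_ssd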
  7. Ceph: sudden slow ops, freezes, and slow-downs

    Base changelog: https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15
  8. Ceph: sudden slow ops, freezes, and slow-downs

    @hthpr I now read quite often in the forum that there are problems with the 5.15 kernel (there are said to be problems with PCI/GPU passthrough as well). The changelog of 5.15.49 is long, https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15.49; if necessary, it is worth searching it for "scheduler" fixes...
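
    Scanning that changelog for scheduler-related fixes can be done directly from the shell; a small sketch (the grep pattern is only an example):

        # Fetch the 5.15.49 changelog and show lines mentioning the scheduler
        curl -s https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15.49 | grep -i 'sched'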
  9. Ceph: sudden slow ops, freezes, and slow-downs

    You don't need to replicate the "payload" data of the VM itself; you have the shared Ceph RBD storage for that. You only need to run the VM in the Proxmox cluster in HA mode. (All other data-cluster solutions such as DRBD or GlusterFS are unnecessary.)
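
    Putting a VM under HA in PVE is a one-liner; a minimal sketch, assuming VMID 100 and an existing HA group "ha-group1" (both placeholders):

        # Let the cluster restart / relocate VM 100 automatically if its node fails
        ha-manager add vm:100 --state started --group ha-group1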
  10. Extremely SLOW Ceph Storage from over 60% usage ???

    https://forum.proxmox.com/threads/ceph-sudden-slow-ops-freezes-and-slow-downs.111144/#post-479654
  11. Ceph: sudden slow ops, freezes, and slow-downs

    2. CephFS is very resource-hungry due to the metadata handling of the MDS; the setup should be well tuned, and this requires good planning, with a division into HDD, SSD and NVMe device classes and offloading of the WAL+DB. Yes, CephFS has the advantage that you can mount it multiple times directly...
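
    Offloading the WAL+DB is easiest at OSD creation time; a hedged sketch, assuming /dev/sdb is the HDD data disk and /dev/nvme0n1 carries the DB/WAL (device names are placeholders):

        # Create a BlueStore OSD on the HDD with its RocksDB/WAL placed on the NVMe
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1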
  12. Ceph: sudden slow ops, freezes, and slow-downs

    We have had a similar problem and could not solve it: https://forum.proxmox.com/threads/extremely-slow-ceph-storage-from-over-60-usage.101051/ Therefore, a few comments from my side: 1. With 4 nodes you should not have a monitor or a standby MDS active on each node; if one node fails, the...
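
    Trimming the surplus monitor / standby MDS daemons is done per node; a minimal sketch, assuming the extra services run on a node called "node4" (a placeholder, with the MDS created under the default name):

        # Remove the monitor and the metadata server running on node4
        pveceph mon destroy node4
        pveceph mds destroy node4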
  13. Extremely SLOW Ceph Storage from over 60% usage ???

    It is probably due to the activated swap:

        root@ceph1-minirack:~# for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done | grep kB | egrep -v "0 kB"
        pvedaemon worke8536 kB
        pmxcfs 1192 kB
        ceph-osd 3188 kB
        ceph-osd 3184 kB
        pve-firewall 5544 kB
        pvestatd...
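
    If the Ceph daemons are being pushed into swap, it can help to tell the kernel to avoid swapping, or to disable swap on the Ceph nodes entirely; a hedged sketch (only sensible if the node has enough free RAM):

        # Strongly discourage swapping; persist the setting via /etc/sysctl.d/ if it helps
        sysctl vm.swappiness=1
        # Or disable swap on this node altogether
        swapoff -a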
  14. Extremely SLOW Ceph Storage from over 60% usage ???

    It is probably due to the activated swap:

        root@ceph1-minirack:~# for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done | grep kB | egrep -v "0 kB"
        pvedaemon worke8536 kB
        pmxcfs 1192 kB
        ceph-osd 3188 kB
        ceph-osd 3184 kB
        pve-firewall 5544 kB
        pvestatd 796 kB...
  15. Extremely SLOW Ceph Storage from over 60% usage ???

    And again the RAM is so full that the node is swapping:

        root@ceph1-minirack:~# cat /proc/meminfo
        MemTotal:       24656724 kB
        MemFree:          208708 kB
        MemAvailable:   14458664 kB
        Buffers:        14197188 kB
        Cached:           229064 kB
        SwapCached:         4968 kB
        Active:          4332072 kB
        Inactive...
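
    One knob worth checking in such a situation is how much RAM each OSD is allowed to use for its caches; a hedged sketch, assuming the default osd_memory_target of 4 GiB is too generous for a node with ~24 GB RAM and several OSDs (the 3 GiB value is only an example):

        # Show the current per-OSD memory target
        ceph config get osd osd_memory_target
        # Lower it cluster-wide to ~3 GiB so the OSDs leave more headroom before the node swaps
        ceph config set osd osd_memory_target 3221225472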
