Recent content by dpl

  1. Proxmox Wiki about Status? Live Migration? VGPU Support? Overview

     @LnxBil @tom Okay, we are now using https://pve-stack.com/
  2. Proxmox Wiki about Status? Live Migration? VGPU Support? Overview

    Well ... In order to make the whole topic a little more concrete, I have set up an API-capable wiki under the new domain: https://proxmox-stack.com. We could make 2 nodes from our lab environment permanently available for this project. Now only the “Boot Environment” part is missing to be...
  3. Proxmox Wiki about Status? Live Migration? VGPU Support? Overview

     @floh8 Why should it? The kernel releases come from Canonical, the userland from the Debian team, and the Proxmox-specific patches from the Proxmox team. For quality control, a handful of VMs run on 2 nodes with shared storage, and various scenarios can be tested in advance via the API...
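
     A minimal sketch of such an API-driven test, using the pvesh CLI front-end to the Proxmox API; the node names pve01/pve02 and VMID 100 are placeholders, not names from the original post:

         # trigger an online live migration of VM 100 from pve01 to pve02
         pvesh create /nodes/pve01/qemu/100/migrate --target pve02 --online 1
         # afterwards, confirm the VM is running on the target node
         pvesh get /nodes/pve02/qemu/100/status/current
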
  4. Proxmox Wiki about Status? Live Migration? VGPU Support? Overview

     @floh8 No further migration; at that time we had to migrate the VMs from nodes with kernel 6.8 to nodes with kernel 6.5. @Kingneutron This is not about opening a support ticket, but about increasing transparency in the wiki and generally having an overview of which features are...
  5. Proxmox Wiki about Status? Live Migration? VGPU Support? Overview

     Hello Proxmox Team, We operate a Proxmox (8.2.2) VM cluster with 20 nodes and an Enterprise Subscription. Our (external) Proxmox Ceph storage runs on 5 additional nodes, as we do not want to mix the “hyperconverged storage” with the VM nodes. The oldest CPU type is: -> 24x Intel(R) Xeon(R)...
  6. [SOLVED] MS Exchange with PBS(VSS Support)

     We manually use https://microsoft.github.io/CSS-Exchange/Databases/VSSTester/ to clean up / purge the Exchange transaction logs after Proxmox VM snapshot backups.
  7. [SOLVED] Network down on reboot because of Failed to start Wait for udev To Complete Device Initialization

     Thank you very much, we also wasted many hours debugging, although the solution (unmounting the defective IPMI ISO) is so simple.
  8. Proxmox Cluster Upgrade from 6.4 to 7.3 (using Remote Ceph Storage)

     Hello Richard, We have successfully completed the infrastructure upgrade. GPU passthrough also works so far on the 10 machines under kernel 5.15. We only briefly had problems starting UEFI VMs on Ceph storage. The error "rbd_cache_policy=writeback: invalid conf option...
  9. Proxmox Cluster Upgrade from 6.4 to 7.3 (using Remote Ceph Storage)

     Running all the migrated VMs from the 12 nodes on the thirteenth node is not the intention. The thirteenth node is only used to hold the "VM (KVM) profile settings" and "VM (block) data" in the meantime, so that a data transfer from the nodes' local ZFS storage to Ceph takes place...
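
     A rough sketch of one such hop for a single VM; VMID 100, the node name node13, and the storage ID ceph-rbd are placeholders, not the cluster's actual names:

         # migrate the VM, including its local ZFS disks, to the interim node
         qm migrate 100 node13 --online --with-local-disks
         # then move its disk from local storage onto the Ceph RBD pool
         qm move_disk 100 scsi0 ceph-rbd --delete 1
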
  10. Proxmox Cluster Upgrade from 6.4 to 7.3 (using Remote Ceph Storage)

     Hello all, We are currently running a Proxmox cluster consisting of 12 nodes with PVE 6.4 (latest patches) and local ZFS storage. In the future, the PVE cluster will be fully backed by Ceph storage. To migrate the infrastructure to the new PVE version 7.3, we wanted to use our external...
  11. Ceph: sudden slow ops, freezes, and slow-downs

     @YAGA
     - add SSDs / NVMes to the nodes
     - create a "replicated_rule" based on "device-class" and move the "cephfs_metadata" pool to the SSDs / NVMes
     Maybe this will speed up your CephFS "a bit".
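
     Roughly, those two steps could look like this on one of the Ceph nodes, assuming the default CRUSH root "default" and failure domain "host" (the rule name replicated_ssd is arbitrary):

         # create a replicated CRUSH rule restricted to SSD-class OSDs
         ceph osd crush rule create-replicated replicated_ssd default host ssd
         # move the CephFS metadata pool onto that rule
         ceph osd pool set cephfs_metadata crush_rule replicated_ssd
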
  12. Ceph: sudden slow ops, freezes, and slow-downs

     Base changelog: https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15
  13. Ceph: sudden slow ops, freezes, and slow-downs

     @hthpr I now often read in the forum that there are problems with the 5.15 kernel (reportedly also problems with PCI / GPU passthrough). The changelog of 5.15.49 is long: https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15.49 If necessary, it is worth searching it for "scheduler" fixes...
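
     For example, the changelog can be searched for scheduler-related entries straight from the shell:

         # fetch the 5.15.49 changelog and list scheduler-related lines
         curl -s https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.15.49 | grep -i 'sched'
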
  14. Ceph: sudden slow ops, freezes, and slow-downs

     You don't need to replicate the "payload" data of the VM itself; you have the shared Ceph RBD storage for that. You only need to run the VM in the Proxmox cluster in HA mode. (All other data cluster solutions like DRBD or GlusterFS are unnecessary.)
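
     Enabling HA for a VM whose disks already live on the shared Ceph RBD storage is a one-liner (VMID 100 as a placeholder):

         # register the VM with the Proxmox HA manager and keep it started
         ha-manager add vm:100 --state started
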
  15. Extremely SLOW Ceph Storage from over 60% usage ???

    https://forum.proxmox.com/threads/ceph-sudden-slow-ops-freezes-and-slow-downs.111144/#post-479654