Recent content by zeuxprox

  1. [SOLVED] Super slow, timeout, and VM stuck while backing up, after updating to PVE 9.1.1 and PBS 4.0.20

    Hello, as mentioned in yesterday's post, after installing the 6.17.4-2-pve kernel (having enabled the no-subscription repository on PBS), all backups completed correctly last night. Question: when will the proxmox-kernel-6.17.4-2-pve kernel be available in the enterprise repository? Thank you
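    A quick way to check whether that kernel has reached a given repository is to query apt directly; a minimal sketch (output format varies):

        # refresh the package lists, then show which configured repositories offer the package
        apt update
        apt policy proxmox-kernel-6.17.4-2-pve

        # confirm which kernel the node is currently running
        uname -r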
  2. [SOLVED] Super slow, timeout, and VM stuck while backing up, after updating to PVE 9.1.1 and PBS 4.0.20

    Hi, as suggested by @Heracleos, I enabled the no-subscription repository and installed kernel 6.17.4-2-pve with the command apt install proxmox-kernel-6.17.4-2-pve. I'll update you tomorrow morning on whether the problem is solved with kernel 6.17.4-2-pve. I installed only the kernel, PBS version...
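    For reference, a minimal sketch of the steps described above, assuming a Debian 13 (Trixie) based PBS 4 host; the repository line is an assumption, and the repository can also be enabled from the web UI instead:

        # enable the Proxmox no-subscription repository (exact line assumed for a Trixie-based install)
        echo "deb http://download.proxmox.com/debian/pbs trixie pbs-no-subscription" \
            > /etc/apt/sources.list.d/pbs-no-subscription.list

        # install the newer kernel and reboot into it
        apt update
        apt install proxmox-kernel-6.17.4-2-pve
        reboot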
  3. [SOLVED] Super slow, timeout, and VM stuck while backing up, after updating to PVE 9.1.1 and PBS 4.0.20

    Hello, a few days ago we upgraded our 5-node cluster (with Ceph 19.2.3) from PVE 8.4 to PVE 9.1.1, and PBS from 3 to 4.1.0. After these upgrades, we started experiencing the issues described in this thread. Now, after carefully reading it, I understand that installing the 6.17.4-2-pve...
  4. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, it works! The problem was a configuration issue on the Dorado. Now I have a question about multipath... From the GUI, Datacenter --> Storage --> Add --> iSCSI, I added the first controller (A) of the Dorado, setting 192.168.51.60 as the portal IP, and the output of the command iscsiadm -m session is: tcp...
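    For the multipath question, a rough sketch of how the second controller's portal could be added and verified from the shell; the controller B portal IP is a placeholder, not taken from the post:

        # discover and log in to the second controller's portal
        iscsiadm -m discovery -t sendtargets -p <portal-B-ip>
        iscsiadm -m node -p <portal-B-ip> --login

        # both sessions should now be listed, and multipath should show two paths per LUN
        iscsiadm -m session
        multipath -ll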
  5. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, what do you think about this discussion related to kernel 6.8 and iSCSI? https://serverfault.com/questions/1168100/what-could-prevent-iscsi-disk-to-mount-on-ubuntu I also tried kernel 6.11.0-2-pve, but the problem remains. Thank you
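    When testing different kernels like this, a specific version can be selected for subsequent boots; a minimal sketch, assuming proxmox-boot-tool manages the boot entries on this host:

        # list the kernels known to the bootloader, then pin one until further notice
        proxmox-boot-tool kernel list
        proxmox-boot-tool kernel pin 6.11.0-2-pve
        reboot

        # remove the pin once testing is done
        proxmox-boot-tool kernel unpin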
  6. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, I removed the pools (Datacenter --> Storage --> Add --> iSCSI), rebooted PVE and then, again from the GUI, re-added the first controller (192.168.51.60), but without luck. Now I have only one iSCSI entry in Storage: The output of iscsiadm -m session is: tcp: [1]...
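    To check whether that single session actually exposes the LUN to the host, the session details and the resulting block devices can be inspected; a small sketch:

        # show attached SCSI devices and negotiated parameters for each iSCSI session
        iscsiadm -m session -P 3

        # check whether the LUN shows up as a block device
        lsblk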
  7. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, I have a strange issue setting up an iSCSI connection between PVE 8.3.2 and a Huawei OceanStor Dorado 3000 V6 and then configuring multipath. Because this is a test environment, I created only one LUN. PVE has 4x10Gb NICs (Intel) and I set up two networks with 2 different VLANs...
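    As a point of comparison, a minimal sketch of what one of the two storage VLAN interfaces might look like in /etc/network/interfaces; the NIC name, VLAN ID and host address are assumptions, not taken from the post:

        # hypothetical dedicated iSCSI interface on VLAN 51
        auto ens1f0.51
        iface ens1f0.51 inet static
            address 192.168.51.10/24
            mtu 9000   # only if the switch ports and the storage ports use jumbo frames too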
  8. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hi, my NVMe disks are Micron 9300 MAX. Thanks
  9. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hello, I have a cluster of 6 nodes with 4 x 3.2 TB NVMe disks per node. Now I want to add a node, but it has 4 x 6.4 TB NVMe disks. I would like to keep the cluster balanced, and therefore I would like to use only 3.2 TB of each disk on the new node. The question is: how should I partition the 6.4...
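    One common way to use only part of a larger disk for an OSD is to create a fixed-size LVM logical volume and give that to Ceph instead of the whole device; a rough sketch, assuming the new 6.4 TB disk appears as /dev/nvme0n1 (device and volume names are illustrative):

        # carve a 3.2 TB logical volume out of the 6.4 TB NVMe disk
        pvcreate /dev/nvme0n1
        vgcreate ceph-nvme0 /dev/nvme0n1
        lvcreate -L 3.2T -n osd-block-nvme0 ceph-nvme0

        # create the OSD on the sized LV rather than on the raw disk
        ceph-volume lvm create --data ceph-nvme0/osd-block-nvme0

    Alternatively, the new OSDs could use the full disks with their CRUSH weights reduced to match the 3.2 TB disks, at the cost of a less uniform layout.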
  10. Ceph 17.2 Quincy Available as Stable Release

    Hi All, I have a cluster of 5 nodes with Proxmox 7.1-12 and Ceph 16.2.7. This weekend I would like to upgrade Proxmox to 7.2 and Ceph to 17.2.1. My Ceph cluster is made up of 3 pools: device_health_metrics with 1 placement group, Ceph-1-NVMe-Pool with 1024 placement groups, Ceph-1-SSD-Pool with...
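    The pool layout itself should not change the procedure; the usual pattern is to stop Ceph from rebalancing, upgrade and restart node by node, then verify the running versions. A heavily abbreviated sketch; the full Pacific-to-Quincy upgrade guide should be followed:

        # keep Ceph from rebalancing while daemons restart
        ceph osd set noout

        # ...switch repositories, upgrade and restart each node in turn per the official guide...

        # afterwards, confirm every daemon reports the new release, then re-enable rebalancing
        ceph versions
        ceph osd unset noout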
  11. PVE 7.1-12 Ceph n. pg not deep-scrubbed in time

    Hi, on the morning of April 17 I upgraded my 5-node Proxmox cluster (with Ceph 16.2.7) from 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and...
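    When Ceph later reports pgs not deep-scrubbed in time after such a window with scrubbing disabled, the affected pgs can be listed and scrubbed manually; a small sketch (the pg id is a placeholder):

        # list the pgs that are behind on (deep) scrubbing
        ceph health detail

        # trigger a deep scrub on one of the reported pgs, e.g. 2.1a
        ceph pg deep-scrub 2.1a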
  12. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, I have used the command ceph pg, but this command is incomplete. The output is: no valid command found; 10 closest matches: pg stat, pg getmap, pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...], pg dump_json [all|summary|sum|pools|osds|pgs...], pg dump_pools_json, pg ls-by-pool <poolstr>...
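    ceph pg does indeed need a subcommand; a few complete examples built from that list (the pg id is a placeholder):

        # summary of pg states across the cluster
        ceph pg stat

        # one line per pg, including its current state (e.g. active+clean+scrubbing+deep)
        ceph pg dump pgs_brief

        # detailed state of a single pg
        ceph pg 2.1a query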
  13. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    The problem has not been resolved yet, and /var/log/ceph/ceph.log is still full of the messages mentioned in my previous post... Could someone help me, please? Thank you