Recent content by zeuxprox

  1. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, it works! The problem was a configuration issue on the Dorado. Now I have a question about multipath... From the GUI, Datacenter --> Storage --> Add --> iSCSI, I added the first controller (A) of the Dorado, setting 192.168.51.60 as the portal IP, and the output of the command iscsiadm -m session is: tcp...
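
    For the multipath side, a minimal sketch of bringing up the path to the second controller by hand and checking that both sessions and paths appear; the portal address 192.168.52.60 is only a placeholder for controller B (in PVE you would normally add it as a second iSCSI storage entry the same way as the first):

      # discover and log in to the second controller's portal (address is hypothetical)
      iscsiadm -m discovery -t sendtargets -p 192.168.52.60
      iscsiadm -m node -p 192.168.52.60 --login

      # both controllers should now show up as separate sessions
      iscsiadm -m session

      # with multipath-tools installed, the LUN should list one path per session
      multipath -ll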
  2. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, what do you think about this discussion related to kernel 6.8 and iSCSI? https://serverfault.com/questions/1168100/what-could-prevent-iscsi-disk-to-mount-on-ubuntu I also tried kernel 6.11.0-2-pve, but the problem remains. Thank you
  3. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, I removed the pools (Datacenter --> Storage --> Add --> iSCSI), rebooted PVE and then, again from the GUI, re-added the first controller (192.168.51.60), but without luck. Now I have only one iSCSI entry in Storage. The output of iscsiadm -m session is: tcp: [1]...
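
    If a re-added portal still does not produce a session, it can help to check for stale node records left over from the earlier attempts and clear them before adding the storage again; a sketch only, with the portal address as an example:

      # list the node records iscsid has stored on this host
      iscsiadm -m node

      # log out of and delete a stale record for a given portal (example address)
      iscsiadm -m node -p 192.168.51.60 --logout
      iscsiadm -m node -p 192.168.51.60 -o delete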
  4. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, I have a strange issue setting up an iSCSI connection between PVE 8.3.2 and a Huawei Oceanstore Dorado 3000 v6 and then configuring multipath. Because this is a test environment, I created only one LUN. PVE has 4x 10Gb NICs (Intel) and I set up two networks with 2 different VLANs...
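
    Once multipath presents the LUN as a single device, the usual way to get the LVM part of the setup is to put a volume group on top of the multipath device and add it as shared LVM storage; the mapper name and VG name below are placeholders, a sketch only:

      # use the WWID-based device shown by multipath -ll, not a /dev/sdX path device
      pvcreate /dev/mapper/36001405abcdef0000000000000000000
      vgcreate vg_dorado /dev/mapper/36001405abcdef0000000000000000000
      # then add the VG via Datacenter --> Storage --> Add --> LVM (marked shared for a cluster)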
  5. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hi, my NVMe disks are Micron 9300 MAX. Thanks
  6. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hello, I have a cluster of 6 nodes with 4x 3.2 TB NVMe disks in each node. Now I want to add a node, but it has 4x 6.4 TB NVMe disks. I would like to keep the cluster balanced and therefore use only 3.2 TB on the disks of the new node. The question is: how should I partition the 6.4...
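
    One possible approach, assuming the goal is an OSD that occupies only a 3.2 TB slice of each 6.4 TB disk; the device name and size are placeholders, so match the partition size to what lsblk -b reports on one of the existing 3.2 TB disks:

      # carve out a ~3.2 TB partition and leave the rest of the disk unused
      sgdisk --new=1:0:+2980G /dev/nvme0n1
      # create the OSD on the partition only
      ceph-volume lvm create --data /dev/nvme0n1p1

    An alternative that avoids partitioning is to create the OSD on the whole disk and lower its CRUSH weight, but then the unused capacity still belongs to Ceph.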
  7. Ceph 17.2 Quincy Available as Stable Release

    Hi All, I have a cluster of 5 nodes with Proxmox 7.1-12 and Ceph 16.2.7. This weekend I would like to upgrade Proxmox to 7.2 and Ceph to 17.2.1. My Ceph cluster consists of 3 pools: device_health_metrics with 1 placement group, Ceph-1-NVMe-Pool with 1024 placement groups, Ceph-1-SSD-Pool with...
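
    Before an upgrade like this it is worth confirming the daemon versions and the pool/PG layout; a small sketch using standard commands, nothing Quincy-specific assumed:

      ceph versions                   # every daemon should already report 16.2.7
      ceph osd pool ls detail         # pools with their pg_num and replication settings
      ceph osd pool autoscale-status  # what the PG autoscaler thinks of the 1024-PG pools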
  8. PVE 7.1-12 Ceph n. pg not deep-scrubbed in time

    Hi, on the morning of April 17 I upgraded my 5-node Proxmox cluster (with Ceph 16.2.7) from 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and...
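
    To see which PGs are behind after the scrub flags were lifted, and to nudge them along manually, something like this can be used (the PG id is an example):

      # list the PGs that triggered the "not deep-scrubbed in time" warning
      ceph health detail | grep 'not deep-scrubbed'
      # kick off a deep scrub on one of them by hand (example PG id)
      ceph pg deep-scrub 2.1a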
  9. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, I have used the command ceph pg, but this command is incomplete; the output is: no valid command found; 10 closest matches: pg stat pg getmap pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...] pg dump_json [all|summary|sum|pools|osds|pgs...] pg dump_pools_json pg ls-by-pool <poolstr>...
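
    For reference, the subcommands from that help output that are usually wanted when chasing a scrubbing PG look like this (the pool name is only an example):

      ceph pg stat                          # one-line summary of PG states
      ceph pg dump pgs_brief                # one line per PG with its current state
      ceph pg ls-by-pool Ceph-1-NVMe-Pool   # PGs belonging to a single pool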
  10. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    The problem has not been resolved yet, and /var/log/ceph/ceph.log is still full of the messages described in my previous post... Could someone help me, please? Thank you
  11. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, yesterday morning I updated my 5-node cluster from Proxmox 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and nodeep-scrub. I have 2 pools, one...
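
    For reference, steps 1 and 3 correspond to the standard flag commands, with ceph -s to confirm the flags are really gone afterwards:

      # before starting the update
      ceph osd set noout
      ceph osd set noscrub
      ceph osd set nodeep-scrub
      # after all nodes are updated
      ceph osd unset noout
      ceph osd unset noscrub
      ceph osd unset nodeep-scrub
      ceph -s   # the flags should no longer appear in the status output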
  12. Restore single Virtual Disk from PBS

    As Fabian said, you can't restore a single virtual disk from the GUI; you can only restore individual files or directories.
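
    From the CLI it is possible to pull a single disk archive out of a backup and re-import it; the repository, snapshot, archive and storage names below are placeholders, so this is only a sketch, and the exact archive name should be taken from the snapshot's file listing first:

      # restore one disk image from a PBS snapshot into a local raw file (names are examples)
      proxmox-backup-client restore "vm/100/2024-01-01T00:00:00Z" drive-scsi0.img disk.raw \
          --repository root@pam@pbs-host:datastore
      # attach the image to the VM again (storage name is an example)
      qm importdisk 100 disk.raw local-lvm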
  13. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    Hi, I have the same problem with Proxmox 7.1.7 and Windows Server 2016/2019 Datacenter.