Search results

  1. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, it works! The problem was a configuration on the Dorado. Now I have a question about multipath... From the GUI (Datacenter --> Storage --> Add --> iSCSI) I added the first controller (A) of the Dorado, setting the IP 192.168.51.60 as the portal, and the output of the command iscsiadm -m session is: tcp...
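
    For reference, a minimal sketch of how the second controller's portal could be added and verified with open-iscsi; the second portal IP and the target IQN below are placeholders, not values from the thread:

      # discover the targets exposed by the second controller's portal (IP is an assumption)
      iscsiadm -m discovery -t sendtargets -p <portal-ip-controller-B>
      # log in to the discovered target on that portal (IQN is a placeholder)
      iscsiadm -m node -T <target-iqn> -p <portal-ip-controller-B> --login
      # verify that sessions to both controllers are now established
      iscsiadm -m session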
  2. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, what do you think about this discussion related to kernel 6.8 and iSCSI? https://serverfault.com/questions/1168100/what-could-prevent-iscsi-disk-to-mount-on-ubuntu I also tried kernel 6.11.0-2-pve, but the problem remains. Thank you
  3. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, I removed the pools (Datacenter --> Storage --> Add --> iSCSI), rebooted PVE and then, again from the GUI, added the first controller (192.168.51.60) back, but without luck. Now I have only one iSCSI entry in Storage. The output of iscsiadm -m session is: tcp: [1]...
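
    If stale records from the removed storages are suspected, a small sketch of how they could be inspected and cleaned up with open-iscsi (the IQN and portal are placeholders):

      # list the node records open-iscsi still has cached from the removed storage entries
      iscsiadm -m node
      # delete a stale record so only the portal re-added from the GUI remains
      iscsiadm -m node -T <target-iqn> -p <old-portal-ip> -o delete
      # check the active sessions again
      iscsiadm -m session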
  4. iSCSI Huawei Oceanstore Dorado 3000 LVM and Multipath

    Hi, I have a strange issue setting up an iSCSI connection between PVE 8.3.2 and a Huawei Oceanstore Dorado 3000 v6 and then configuring multipath. Because this is a test environment, I created only one LUN. PVE has 4x10Gb NICs (Intel) and I set up two networks with 2 different VLANs...
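
    For the multipath part, a minimal /etc/multipath.conf sketch of the kind commonly used on PVE for a single LUN; the WWID and alias are placeholders, and any Dorado-specific device settings recommended by Huawei are intentionally left out:

      defaults {
          user_friendly_names yes
      }
      blacklist {
          wwid ".*"
      }
      blacklist_exceptions {
          wwid "<lun-wwid>"
      }
      multipaths {
          multipath {
              wwid "<lun-wwid>"
              alias dorado-lun0
          }
      }

    After reloading multipathd, multipath -ll should show one map with paths through both controllers.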
  5. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hi, my NVMe disks are Micron 9300 MAX. Thanks
  6. [SOLVED] Ceph 18.2.2 - How to partition disks

    Hello, I have a cluster of 6 nodes with 4 x 3.2 TB NVMe disks in each node. Now I want to add a node, but it has 4 x 6.4 TB NVMe disks. I would like to keep the cluster balanced and therefore use only 3.2 TB on the disks of the new node. The question is: how should I partition the 6.4...
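
    One possible way to do this (not necessarily the solution adopted in the thread) is to create a partition matching the existing OSD size and build the OSD on it; the device name is an assumption and the size should be matched to the existing 3.2 TB OSDs:

      # create a partition roughly the size of the existing 3.2 TB OSDs on the new 6.4 TB disk
      sgdisk -n 1:0:+2980G /dev/nvme0n1
      # create the OSD on the partition instead of the whole disk
      ceph-volume lvm create --data /dev/nvme0n1p1

    Another common option is to use the whole disk and lower the OSD's CRUSH weight so it receives the same share of data as the 3.2 TB OSDs.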
  7. Ceph 17.2 Quincy Available as Stable Release

    Hi All, I have a cluster of 5 nodes with Proxmox 7.1-12 and Ceph 16.2.7. This weekend I would like to upgrade Proxmox to 7.2 and Ceph to 17.2.1. My Ceph cluster is made of 3 pools: device_health_metrics with 1 placement group, Ceph-1-NVMe-Pool with 1024 placement groups, Ceph-1-SSD-Pool with...
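
    Before an upgrade like this, a few sanity checks that are commonly run (nothing here is specific to this cluster):

      # confirm the cluster is healthy and all daemons run the same Pacific version
      ceph -s
      ceph versions
      # check which release the OSDs currently require before moving to Quincy
      ceph osd dump | grep require_osd_release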
  8. PVE 7.1-12 Ceph n. pg not deep-scrubbed in time

    Hi, on the morning of April 17 I upgraded my 5-node Proxmox cluster (with Ceph 16.2.7) from 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and...
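
    To see which PGs are behind on scrubbing and nudge them along, a small sketch (the PG id is a placeholder):

      # list the PGs reported as not (deep-)scrubbed in time
      ceph health detail
      # manually trigger a deep scrub on one of the reported PGs
      ceph pg deep-scrub <pgid>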
  9. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, I have used the command ceph pg, but this command is incomplete; the output is: no valid command found; 10 closest matches: pg stat pg getmap pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...] pg dump_json [all|summary|sum|pools|osds|pgs...] pg dump_pools_json pg ls-by-pool <poolstr>...
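
    ceph pg needs a subcommand; a few complete invocations built from the matches listed in that error (pool name and PG id are placeholders):

      # overall placement group statistics
      ceph pg stat
      # brief listing of all PGs with their state
      ceph pg dump pgs_brief
      # list the PGs belonging to one pool
      ceph pg ls-by-pool <pool-name>
      # detailed query of a single PG
      ceph pg <pgid> query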
  10. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    The problem has not been resolved yet, and /var/log/ceph/ceph.log is still full of the messages quoted in my previous post... Could someone help me, please? Thank you
  11. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    Hi, yesterday morning I updated my 5-node cluster from Proxmox 7.1-7 to 7.1-12 following these steps: 1. Set noout, noscrub and nodeep-scrub before starting the update process; 2. Updated all 5 nodes without problems; 3. Unset the flags noout, noscrub and nodeep-scrub. I have 2 pools, one...
  12. Restore single Virtual Disk from PBS

    As Fabian said, you can't restore a single virtual disk from the GUI. You can only restore files or directories.
  13. Windows VMs stuck on boot after Proxmox Upgrade to 7.0

    Hi, I have the same problem with Proxmox 7.1.7 and Windows Server Datacenter 2016/2019.
  14. Proxmox Ceph Pool specify disks

    In the last step in the GUI, on each node under Ceph --> OSD, when you add OSDs to Ceph...
  15. Restore single Virtual Disk from PBS

    Hi, are you planning, in an upcoming release, to allow restoring a single virtual disk in addition to the entire VM and single files? Another really useful feature would be the ability to create VLANs in the GUI... Thank you
  16. Proxmox Ceph Pool specify disks

    Hi, in my cluster I have 2 pools, one for NVMe disks and one for SSD disks. These are the steps that I followed to achieve my goal: Create 2 rules, one for NVMe and one for SSD: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> So for NVMe disks the above...
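
    Filled in with illustrative values (rule and pool names are placeholders, failure domain assumed to be host), the two steps might look like this:

      # one replicated rule per device class
      ceph osd crush rule create-replicated nvme-rule default host nvme
      ceph osd crush rule create-replicated ssd-rule default host ssd
      # bind each pool to its rule
      ceph osd pool set <nvme-pool> crush_rule nvme-rule
      ceph osd pool set <ssd-pool> crush_rule ssd-rule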
  17. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Hi, so if I understand correctly, you suggest setting the following Ceph flags: ceph osd set noscrub ceph osd set nodeep-scrub ceph osd set noout before starting the update of node 1, and removing them with: ceph osd unset noscrub ceph osd unset nodeep-scrub ceph osd unset noout only when...
  18. Correct/Official procedure to update a PVE7 Cluster with Ceph 16.2

    Now the question is: before updating node 2, do you advise unsetting the OSD flags with ceph osd unset noscrub ceph osd unset nodeep-scrub ceph osd unset noout, waiting until Ceph is OK and then repeating the procedure as done for node 1 (set the OSD flags again, upgrade node 2 and then unset the OSD...
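
    Spelled out, the per-node variant being asked about would look like the sketch below; whether it is preferable to simply keeping the flags set for the whole upgrade is exactly the open question of the thread:

      # before upgrading node 2
      ceph osd set noscrub
      ceph osd set nodeep-scrub
      ceph osd set noout
      # ... upgrade and reboot node 2, wait for it to rejoin ...
      ceph osd unset noscrub
      ceph osd unset nodeep-scrub
      ceph osd unset noout
      # wait until the cluster reports HEALTH_OK before moving on to node 3
      ceph -s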