Search results

  1. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hello, I did the same, but with "ceph config set ..." I get I/O errors on Samsung and Intel SSDs; I see no I/O errors on Crucial SSDs. Best regards. Francis
  2. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hello, Finally I am getting the "slow" warning again. Francis
  3. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hello, I installed PVE 8.4.1 and restarted all the "slow" OSDs; for the moment, no "slow" warning. Best regards. Francis
  4. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Thank you Yaga, Already done: when I restart the OSD processes the slow warning disappears, but after a short time the message comes back. Best regards. Francis
  5. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Yes, I did this. It did not work, and there were new I/O DISCARD errors. It did not work. Yes, for the moment no more I/O DISCARD errors; a reboot does not change the OSD "slow" problem. The OSD "slow" warning appeared after the Ceph upgrade from 19.2.0 to 19.2.1. Best regards. Francis
  6. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hello, Finally I removed "bdev_async_discard_threads" and "bdev_enable_discard" because I think they create I/O errors on some SSD disks. Best regards. Francis
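    For reference, the two options named above are ordinary Ceph config keys, so the change would be applied and rolled back with `ceph config`. A minimal sketch; the `class:ssd` mask and the thread count are assumptions taken from the linked rook discussion, not from these posts:

    ```shell
    # Enable async discard on SSD-backed OSDs (the "solution" being tried);
    # the osd/class:ssd mask is an assumption -- adjust to your own setup.
    ceph config set osd/class:ssd bdev_enable_discard true
    ceph config set osd/class:ssd bdev_async_discard_threads 1

    # Roll the change back (what "removed" refers to above), then restart the OSDs:
    ceph config rm osd/class:ssd bdev_enable_discard
    ceph config rm osd/class:ssd bdev_async_discard_threads
    ```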
  7. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hello, On our 8.4.0 clusters, since the Ceph upgrade from 19.2.0 to 19.2.1 (pve2/pve3), I have warning messages. I applied the "solution" at https://github.com/rook/rook/discussions/15403 but this did not resolve the "problem". Best regards. Francis
  8. Configure Qdevice with two HA links.

    Hello, I have some clusters with 2 HA links and 4 nodes: 2 nodes in one rack and 2 nodes in another rack. I want to handle a rack crash (so two nodes down) plus an HA link crash. Do I have to add one QDevice on the two HA links, or on only one HA link, or do I need two QDevices, one for each HA link...
  9. Proxmox Cluster with local Gluster servers.

    Zubin, >> Gluster does not like hardware RAID? Even if the filesystem Gluster sits on is XFS? Hardware RAID is not necessary; Gluster manages the redundancy, in your case 3 replicas (the same if you use ZFS, Btrfs, or LVM RAID). You need 2 switches to avoid a SPOF; if the single switch crashes ("same" problem...
  10. Proxmox Cluster with local Gluster servers.

    Zubin, For the network you have either 2 switches (10G/1G), or 2 switches (10G) plus 2 switches (1G): storage, VM memory migration, VM production, HA (bond 2x10G); HV and VM management, HA (bond 4x1G). Or: storage, VM memory migration, HA (bond 2x10G); HV and VM management, HA (bond 2x1G); VM production...
  11. Proxmox Cluster with local Gluster servers.

    Hi Zubin, Yes, it is working; the problem is the shutdown of a node: Gluster does not stop correctly on Debian (I have not tested whether the problem is solved now). >> Each 1TB is formatted with XFS, because it doesn't do the raw disk, correct? Yes, and do not use hardware RAID for GlusterFS. >> Seems...
  12. One or more devices could not be used because the label is missing or invalid

    Hello, I had a "similar" problem: after a reboot two disks were swapped and marked FAULTY (with invalid label), and I got the same error messages as you with the "replace". [0:0:4:0] disk xxxx xxxxxxxxxxx xxxx /dev/sde [0:0:5:0] disk xxxx xxxxxxxxxxx xxxx /dev/sdd...
  13. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Hello Glowsome, Yes, sorry, the correct path for the file is "/etc/systemd/system/lvmlockd.service.d/". I do not understand. Best regards. Francis
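    That path is the standard systemd drop-in directory for overriding `lvmlockd.service` without editing the packaged unit file. A generic sketch; the override's contents here are purely illustrative, since the snippet does not say what the tutorial's file contains:

    ```shell
    # Drop-ins in /etc/systemd/system/<unit>.service.d/ are merged over the
    # packaged unit. The [Unit] setting below is only an example.
    mkdir -p /etc/systemd/system/lvmlockd.service.d
    cat > /etc/systemd/system/lvmlockd.service.d/override.conf <<'EOF'
    [Unit]
    # illustrative only: make lvmlockd start after the cluster stack
    After=corosync.service
    EOF
    systemctl daemon-reload
    ```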
  14. Proxmox Cluster with local Gluster servers.

    Hi Imran, Of course, but not by much, and you get the LVM capabilities: change disk size, increase LV size, move data online, etc... If you have time to compare with and without LVM, you are welcome to. Best regards. Francis
  15. Proxmox Cluster with local Gluster servers.

    Hi Imran, Gluster cannot have direct access to the disk; Gluster sits on top of another filesystem. You have to: - configure the 10x4TB disks in JBOD, - put the 10 disks in a Volume Group, - create a Logical Volume on each of the 10 disks (1 LV -> 1 disk), - create a filesystem (ext4/xfs) on each of the 10...
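    The brick-preparation steps above can be sketched roughly as follows. Device names (`/dev/sdb` ... `/dev/sdk`) and the VG/LV/mount names are hypothetical; the one-LV-per-disk mapping uses LVM's `%PVS` allocation to pin each LV to a single physical disk:

    ```shell
    # Hypothetical: the ten 4TB JBOD disks, one volume group over all of them.
    DISKS="sdb sdc sdd sde sdf sdg sdh sdi sdj sdk"

    pvcreate $(for d in $DISKS; do echo /dev/$d; done)
    vgcreate vg_bricks $(for d in $DISKS; do echo /dev/$d; done)

    i=1
    for d in $DISKS; do
        # one LV per physical disk (1 LV -> 1 disk), pinned to that PV
        lvcreate -n brick$i -l 100%PVS vg_bricks /dev/$d
        mkfs.xfs /dev/vg_bricks/brick$i
        mkdir -p /data/brick$i
        mount /dev/vg_bricks/brick$i /data/brick$i
        i=$((i+1))
    done
    ```

    Keeping LVM underneath the bricks is what enables the earlier point about online resizing and data moves (`lvextend`, `pvmove`) later on.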
  16. Proxmox Cluster with local Gluster servers.

    Hi, I suppose you use the two Storage Servers (SS) as the Gluster servers and the Proxmox Server (PS) as a Gluster client, and you have a 2x10Gb bond back to back between the SSs and another 2x10Gb bond to connect the PS via a switch. For Gluster do not use hardware RAID; Gluster has its own...
  17. Proxmox Cluster with local Gluster servers.

    Hi imran.tee, The performance of GlusterFS depends on your disks, network bandwidth, number of nodes, etc.; generally you get good performance. iSCSI on top of GlusterFS: no, but why? For GlusterFS you do not need iSCSI; for iSCSI you do not need GlusterFS. With iSCSI you can be a Target...
  18. Configure fence device /etc/pve/ha/fence.cfg ?

    Hi mgabriel, Thanks, I know that, but we need small "isolated" two-node clusters; we do not want to / cannot manage Raspberry Pis, and we cannot have VMs on other clusters (isolated cluster). For us, two nodes plus fencing is the best solution; why was fencing removed??? On some clusters we want to...
  19. Configure fence device /etc/pve/ha/fence.cfg ?

    Hi mgabriel, Thank you, but for a QDevice I need another node. Best regards. Francis