Recent content by FrancisS

  1. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Hi Yaga, Are you sure? In the 19.2.2 changelog there is only one (critical) change, and it is not related to the slow warning: squid: rgw: keep the tails when copying object to itself (pr#62711, cbodley)
  2. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Hello, I did the same, but for me, with the "ceph config set ..." I get I/O errors on Samsung and Intel SSDs; I do not see I/O errors on Crucial SSDs. Best regards. Francis
  3. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Hello, In the end I got the "slow" warning again. Francis
  4. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Hello, I installed PVE 8.4.1 and restarted all the "slow" OSDs; for the moment there is no "slow" warning. Best regards. Francis
  5. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Thank you Yaga, Already done: when I restart the OSD processes the slow warning disappears, but after a short time the message comes back. Best regards. Francis
  6. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Yes, I did this. It did not work and produced new I/O DISCARD errors. It did not work. Yes, for the moment there are no more I/O DISCARD errors, but a reboot does not change the OSD "slow" problem. The OSD "slow" warning appeared after the Ceph upgrade from 19.2.0 to 19.2.1. Best regards. Francis
  7. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Hello, In the end I removed "bdev_async_discard_threads" and "bdev_enable_discard" because I think they cause I/O errors on some SSD disks. Best regards. Francis
  8. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

     Hello, On our 8.4.0 clusters, since the upgrade of Ceph from 19.2.0 to 19.2.1 (pve2/pve3), I get warning messages. I applied the "solution" from https://github.com/rook/rook/discussions/15403 but it did not resolve the "problem". Best regards. Francis
  9. Configure Qdevice with two HA links.

     Hello, I have some clusters with 2 HA links and 4 nodes: 2 nodes in one rack and 2 nodes in another rack. I want to survive a rack crash (so two nodes down) plus an HA link crash. Should I add one QDevice on both HA links, or only on one HA link, or do I need two QDevices, one for each HA link...
  10. Proxmox Cluster with local Gluster servers.

     Zubin, >> Gluster does not like hardware RAID? Even if the filesystem Gluster sits on is XFS? Hardware RAID is not necessary; Gluster manages the redundancy, in your case 3 replicas (the same applies if you use ZFS, Btrfs, or LVM RAID). You need 2 switches to avoid a SPOF; if the single switch crashes ("same" problem...
  11. Proxmox Cluster with local Gluster servers.

     Zubin, For the network you have either 2 switches (10G/1G), or 2 switches (10G) plus 2 switches (1G). Either: Storage, VM memory migration, VM production, HA (bond 2x10G); HV and VM management, HA (bond 4x1G). Or: Storage, VM memory migration, HA (bond 2x10G); HV and VM management, HA (bond 2x1G); VM production...
  12. Proxmox Cluster with local Gluster servers.

     Hi Zubin, Yes, it is working; the problem is the shutdown of a node, because Gluster does not stop correctly on Debian (I have not tested whether that is fixed now). >> Each 1TB is formatted with XFS, because it doesn't do the raw disk, correct? Yes, and do not use hardware RAID for GlusterFS. >> Seems...
  13. One or more devices could not be used because the label is missing or invalid

     Hello, I had a "similar" problem: after a reboot two disks were swapped and marked FAULTY (with an invalid label), and I got the same error messages as you with the "replace". [0:0:4:0] disk xxxx xxxxxxxxxxx xxxx /dev/sde [0:0:5:0] disk xxxx xxxxxxxxxxx xxxx /dev/sdd...
  14. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

     Hello Glowsome, Yes, sorry, the correct path for the file is "/etc/systemd/system/lvmlockd.service.d/". I do not understand. Best regards. Francis
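
The BlueStore discard options mentioned in items 2 and 7 above are normally toggled through the Ceph config database. The lines below are only a sketch of that mechanism, assuming the options were applied at the global osd scope with a single async discard thread; the exact scope and values used are not shown in the truncated excerpts.

     # Assumed scope/values: enable discard with one async thread on all OSDs
     ceph config set osd bdev_enable_discard true
     ceph config set osd bdev_async_discard_threads 1

     # Removing the options again, as described in item 7
     ceph config rm osd bdev_async_discard_threads
     ceph config rm osd bdev_enable_discard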
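
Items 4 and 5 above refer to restarting the "slow" OSDs. On a Proxmox VE node the OSDs run as plain systemd units, so a restart typically looks like the sketch below; the OSD id is a placeholder.

     # Restart a single OSD daemon on the node that hosts it (id 12 is a placeholder)
     systemctl restart ceph-osd@12.service

     # Check whether the BlueStore slow-ops warning has cleared
     ceph health detail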
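
For the QDevice question in item 9: corosync allows only one QDevice per cluster, and on Proxmox VE it is registered with pvecm. A minimal sketch, with a placeholder address:

     # Register an external QDevice host (IP is a placeholder)
     pvecm qdevice setup 192.0.2.10

     # Check quorum and vote information afterwards
     pvecm status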
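
Items 10 and 12 above argue that hardware RAID is unnecessary because the redundancy comes from Gluster's replica count. A minimal sketch of the kind of replica-3 volume being described, with hypothetical hostnames and brick paths on XFS:

     # Hypothetical hosts/paths: one XFS brick per node, no hardware RAID underneath
     gluster volume create gv0 replica 3 \
         node1:/data/brick1/gv0 node2:/data/brick1/gv0 node3:/data/brick1/gv0
     gluster volume start gv0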
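
The message quoted in the title of item 13 is ZFS pool output, and the "replace" the post mentions is presumably the zpool replace step. A sketch only, with placeholder pool and device names:

     # Inspect the pool, then replace the FAULTY device (names are placeholders)
     zpool status tank
     zpool replace tank /dev/sdd /dev/sde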
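
Item 14 corrects the drop-in directory for the lvmlockd service unit. The usual systemd mechanics for such a drop-in are sketched below; the override file name and its contents are not shown in the excerpt and are left out here.

     # Create the drop-in directory named in item 14, then reload systemd
     mkdir -p /etc/systemd/system/lvmlockd.service.d/
     # (copy the override file from the tutorial into this directory)
     systemctl daemon-reload
     systemctl restart lvmlockd.service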