ertanerbek's latest activity

  • ertanerbek reacted to bbgeek17's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Thank you for providing additional information. We will review and digest. We do not use either LVM or QCOW in our integration with PVE, so we have limited exposure to these technologies in some of our legacy customer environments. Our most...
  • The exact cause of this problem is the discard operation. Scenario 1: The guest resides on any source disk structure. When I try to clone this guest into an LVM setup in QCOW format, whether the SSD and DISCARD features on the disk are enabled or...
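    For reproduction, a minimal sketch of the clone path described above, assuming placeholder VM IDs 100/101 and a storage entry named san-lvm (the qm flags are standard PVE CLI; all names and values here are illustrative):

        # Full clone onto the LVM-backed storage, requesting qcow2 volumes
        qm clone 100 101 --full --storage san-lvm --format qcow2
        # Toggle the SSD emulation and discard flags on the source disk between runs
        qm set 100 --scsi0 san-lvm:vm-100-disk-0,discard=on,ssd=1
        qm set 100 --scsi0 san-lvm:vm-100-disk-0,discard=ignore,ssd=0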
  • ertanerbek reacted to bbgeek17's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Hi @ertanerbek, we’re mostly on the same page: Fibre Channel is far from dead in large enterprise environments. That said, investing in legacy entry SANs (for example an HPE MSA or older Dell ME models), or even trying to repurpose them, purely...
  • Yes, you are right. For this reason, perhaps OCFS2 or GFS2 (the latter integrates with Corosync) could be a better option than LVM and may be supported in the future.
  • ertanerbek reacted to spirit's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    The lock is managed by Proxmox code directly (through pmxcfs/corosync). You can't delete two LVM volumes at the same time.
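    A quick way to observe that serialization, as a sketch (volume IDs and the storage name are placeholders; pvesm free is the standard PVE volume-deletion command):

        # Issue two deletions in parallel; the cluster-wide storage lock taken
        # through pmxcfs serializes them, so the second waits for the first
        pvesm free san-lvm:vm-101-disk-0 &
        pvesm free san-lvm:vm-102-disk-0 &
        wait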
  • ertanerbek reacted to tiboo86's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    To be honest, I’m not sure whether your issue is specifically caused by the fact that you're using FC. But what I can say is that, on my side, with iSCSI, I don’t have any locking problems at all.
  • Hello @tiboo86, thank you very much for your feedback and for this extensive sharing. I hope both I and many others will benefit from it. However, this is an IP SAN system, and the issue I am experiencing is on the FC SAN side. Let’s...
  • ertanerbek reacted to tiboo86's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Hi @ertanerbek, No problem — here is our full setup in detail. We are running a three-node Proxmox cluster, and each node has two dedicated network interfaces for iSCSI. These two NICs are configured as separate iSCSI interfaces using...
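    As a sketch of that per-NIC iSCSI binding (interface names, NIC names, and the portal IP are placeholders; the iscsiadm invocations are standard open-iscsi usage):

        # Create two iSCSI interfaces and bind each to a dedicated NIC
        iscsiadm -m iface -I iface0 --op new
        iscsiadm -m iface -I iface0 --op update -n iface.net_ifacename -v ens1f0
        iscsiadm -m iface -I iface1 --op new
        iscsiadm -m iface -I iface1 --op update -n iface.net_ifacename -v ens1f1
        # Discover targets through both interfaces and log in, so multipath
        # sees one path per interface
        iscsiadm -m discovery -t sendtargets -p 192.168.10.10 -I iface0 -I iface1
        iscsiadm -m node --login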
  • I am sharing my multipath configuration and multipath output, along with the storage file (screenshots below). I’m also using OCFS2, and it’s almost perfect. In fact, OCFS2 itself is excellent, but Proxmox forces me to use its own lock mechanism. At the operating system...
    • Attachments: four screenshots (multipath configuration, multipath output, and the storage file)
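    Since the screenshots don't reproduce here, a generic multipath.conf skeleton for a Huawei Dorado-class array might look like the following; the vendor/product strings and tuning values are assumptions to verify against Huawei's host connectivity guide, not a copy of the configuration above:

        defaults {
            user_friendly_names yes
            find_multipaths yes
        }
        devices {
            device {
                vendor "HUAWEI"
                product "XSG1"          # typical Dorado product ID; verify on your array
                path_grouping_policy "multibus"
                path_checker "tur"
                no_path_retry 15
            }
        }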
  • Hi Tibo, if possible, could you share everything? If you have a successful implementation, it could also help others who face issues in the future. By the way, why did you have to tweak the queue depth and kernel parameters? Do we really need...
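    For anyone wanting to inspect those knobs before changing them, a sketch (the device name is an example; the sysfs path and multipath -ll are standard Linux tooling):

        # Current per-LUN queue depth for one SCSI device
        cat /sys/block/sdb/device/queue_depth
        # Path topology, checker state, and queueing policy per multipath map
        multipath -ll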
  • ertanerbek reacted to tiboo86's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Hi @ertanerbek, I’m running a three-node Proxmox VE 9.1.1 cluster connected to a Huawei Dorado 5000, but using iSCSI + Linux Multipath + LVM (shared). In my setup, I haven’t encountered any problems during simultaneous “Move Storage”...
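    For context, the shared-LVM half of such a setup is only a few lines in /etc/pve/storage.cfg; a sketch with placeholder names (the storage ID and volume group are assumptions):

        # /etc/pve/storage.cfg
        lvm: san-lvm
                vgname vg_san
                shared 1
                content images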
  • RDM, TCP offload, RoCE, NVMe-oF, NVMe-oF + RDMA, and SR-IOV (for Ethernet) are all excellent technologies with many benefits. They reduce CPU load and lower access times. However, no matter what they achieve, the real issue is not the connection...
  • ertanerbek reacted to PmUserZFS's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Well, different approaches are being worked on, like NVMe-oF, which tries to reduce latency
  • ertanerbek reacted to waltar's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Each disk is block storage; what I mean is the direct use of block storage from the application side. We will see in 10 years.
  • ertanerbek reacted to alma21's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Yes, true. I assume ~80% of VMware customers are riding this dead horse too, with VMFS/VMDK :) I mean, a still-valid approach would be for Proxmox, as a company, to hire/pay some core/veteran developers of OCFS2, or a third party, to integrate it...
  • Most of my 25-year professional career has been spent working with storage devices. A large portion of that involved projects at the government level. I can confidently say that the SAN storage architecture cannot simply disappear. Even today...
  • Years ago, when I wrote about the potential issues of vSAN, many people on the VMware side told me I was talking nonsense. However, developments have shown that SDS architectures are not very suitable for virtualization environments. In a serious...
  • I tried this as well: since I thought the issue might exist across all cache modes, I also tested with directsync, but the problem remained the same. The issue lies in the lock mechanism applied at the Proxmox layer. It’s not only in this...
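    As a sketch of those cache-mode permutations (the VM ID and volume are placeholders; cache= is the standard PVE disk option):

        # Re-test the clone/discard behavior under different cache modes
        qm set 100 --scsi0 san-lvm:vm-100-disk-0,cache=none
        qm set 100 --scsi0 san-lvm:vm-100-disk-0,cache=directsync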
  • ertanerbek reacted to alma21's post in the thread Proxmox 9.1.1 FC Storage via LVM with Like.
    Hi, have you also tried cache=none with raw images?
  • General system info: clone speed limit 300 MB/s; "Wipe Removed Volumes" was not selected. At this point I cloned two machines while DISCARD was enabled on their disks. As shown, the storage wrote very little data, which is normal because...
    • Attachments: two screenshots