Thank you for providing additional information. We will review and digest it. We do not use either LVM or QCOW in our integration with PVE, so our exposure to these technologies is limited to some of our legacy customer environments. Our most...
The exact cause of this problem is the discard operation.
Scenario 1: The guest resides on any source disk structure. When I try to clone this guest onto LVM storage in qcow format, whether the SSD and DISCARD features on the disk are enabled or...
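For reference, SSD emulation and discard are per-disk flags in the VM config; enabling them looks roughly like the sketch below (the VMID, storage name and volume are placeholders, not my actual ones):

    # re-attach the disk with SSD emulation and discard enabled (placeholder IDs)
    qm set 101 --scsi0 mystorage:vm-101-disk-0,ssd=1,discard=on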
Hi @ertanerbek, we’re mostly on the same page: Fibre Channel is far from dead in large enterprise environments. That said, investing in legacy entry-level SANs (for example an HPE MSA or older Dell ME models), or even trying to repurpose them, purely...
Yes, you are right. For this reason, perhaps OCFS2 or GFS2 (which integrates with Corosync) could be better options than LVM and more likely to be supported in the future.
To be honest, I’m not sure whether your issue is specifically caused by the fact that you're using FC.
But what I can say is that, on my side, with iSCSI, I don’t have any locking problems at all.
Hello @tiboo86
Thank you very much for your feedback and for this extensive sharing. I hope that I, and many others, will benefit from it. However, that is an IP SAN setup, while the issue I am experiencing is on the FC SAN side.
Let’s...
Hi @ertanerbek,
No problem — here is our full setup in detail.
We are running a three-node Proxmox cluster, and each node has two dedicated network interfaces for iSCSI.
These two NICs are configured as separate iSCSI interfaces using...
I am also sharing my multipath configuration, the multipath output, and the storage file.
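In broad strokes, it follows the standard pattern sketched below; treat this only as an illustrative sketch with placeholder names and commonly recommended Huawei values, not a verbatim copy of my files:

    # /etc/multipath.conf (sketch; device values should follow Huawei's host connectivity guide)
    defaults {
        user_friendly_names yes
        find_multipaths     yes
    }
    devices {
        device {
            vendor               "HUAWEI"
            product              "XSG1"
            path_grouping_policy "group_by_prio"
            prio                 "alua"
            path_checker         "tur"
            failback             "immediate"
            no_path_retry        15
        }
    }

    # /etc/pve/storage.cfg (sketch): shared LVM on top of the multipath device
    lvm: san-lvm
        vgname pve-san-vg
        content images
        shared 1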
I’m also using OCFS2, and it’s almost perfect. In fact, OCFS2 itself is excellent, but Proxmox forces its own lock mechanism on top of it. At the operating system...
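For context, OCFS2 typically ends up exposed to PVE as a plain shared directory storage over the cluster-mounted filesystem, something like the sketch below (mount point and storage name are placeholders); PVE then applies its own locking on top of that, independently of OCFS2, which is the mechanism I am referring to:

    # /etc/pve/storage.cfg (sketch): OCFS2 mount exposed as a shared directory storage
    dir: ocfs2-vmstore
        path /mnt/ocfs2
        content images
        shared 1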
Hi Tibo,
If possible, could you share everything? If you have a successful implementation, it could also help others who face issues in the future.
By the way, why did you have to tweak the queue-depth and kernel parameters? Do we really need...
Hi @ertanerbek,
I’m running a three-node Proxmox VE 9.1.1 cluster connected to a Huawei Dorado 5000, but using iSCSI + Linux Multipath + LVM (shared).
In my setup, I haven’t encountered any problems during simultaneous “Move Storage”...
RDM, TCP offload, RoCE, NVMe-oF, NVMe-oF + RDMA, and SR-IOV (for Ethernet) are all excellent technologies with many benefits. They reduce CPU load and lower access times. However, no matter what they achieve, the real issue is not the connection...
Yes, true - I assume ~80% of VMware customers are also riding this dead horse with VMFS/VMDK :)
I mean, a valid approach would still be for Proxmox, as a company, to hire or pay some core/veteran OCFS2 developers, or a third party, to integrate it...
Most of my 25-year professional career has been spent working with storage devices. A large portion of that involved projects at the government level. I can confidently say that the SAN storage architecture cannot simply disappear. Even today...
Years ago, when I wrote about the potential issues of VSAN, many people on the VMware side told me I was talking nonsense. However, developments have shown that SDS architectures are not very suitable for virtualization environments. In a serious...
I tried this as well. Since I thought the issue might exist across all cache modes, I also tested with directsync, but the problem remained the same. The issue lies in the lock mechanism applied at the Proxmox layer. It’s not only in this...
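For completeness, the directsync test was just a per-disk cache-mode change along these lines (VMID, storage name and volume are placeholders):

    # re-attach the disk with cache=directsync for the test (placeholder IDs)
    qm set 101 --scsi0 san-lvm:vm-101-disk-0,cache=directsync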
General system info:
Clone speed limit: 300 MB/s
"Wipe Removed Volumes" was not selected
Time A:
At this point I cloned two machines while DISCARD was enabled on their disks. As shown, the storage wrote very little data, which is normal because...
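For anyone wanting to reproduce the test, it boils down to two full clones onto the shared LVM storage, roughly like this (VM IDs and target storage are placeholders; the bwlimit value simply mirrors the 300 MB/s clone limit noted above):

    # full clones of the two guests onto the shared LVM storage (placeholder IDs)
    qm clone 101 201 --full --storage san-lvm --bwlimit 307200   # ~300 MB/s
    qm clone 102 202 --full --storage san-lvm --bwlimit 307200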