Hi all,
I'd like some advice on how to solve a problem in my home lab:
I have a VM A in HA, using a shared storage disk. On this machine I run software that needs the fastest possible random read/write speed on small files, and so far its own disk has been a huge bottleneck. It is mainly used to build software projects: roughly 80% reads of source files, 20% writes of compiled objects.
Taking into account that periodic backups of the accessed files are fine (I don't need them in real time), I was thinking about a separate VM B on the same node as A, not in HA, using local storage to provide an NFS share that A uses for these operations. I know that if the node fails the share will stop working, but I accept this and can deal with that failure when it happens.
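For reference, here is roughly what I have in mind (paths, subnet, hostnames, and options are just placeholders; the async export trades write durability for speed, which I think is acceptable since periodic backups are enough for me):

On VM B (the NFS server on local storage):
    # /etc/exports - export the build directory to the lab subnet
    /srv/build 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)
    # reload the export table
    exportfs -ra

On VM A (the build machine):
    # noatime avoids extra metadata writes on every source file read
    mount -t nfs -o rw,noatime,vers=4.2 vm-b:/srv/build /mnt/build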
But are there better ways to handle this (apart from buying hardware that costs 100x more)?
TIA!