[SOLVED] "fast" disk access for a VM inside HA and shared storage

bly

Member
Mar 15, 2024
Hi all,
I'd like some advice on how to solve a problem in my home lab:
I have a VM A in HA, using a shared-storage disk. On this machine I run software that needs the fastest possible random read/write speed for small files, and so far using its own disk has been a huge bottleneck. It is mainly used to build software projects: roughly 80% reading source files, 20% writing compiled objects.

Taking into account that periodic backups of the accessed files are fine (I don't need them in real time), I was thinking about a separate VM B on the same node as A, not in HA, using local storage to provide an NFS share that A uses for these operations. I know a node failure would leave the share unavailable, but I accept this; dealing with failure of the development VM takes precedence. A rough sketch of what I mean is below.
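For illustration only, something like this (the paths, the 10.0.0.0/24 range, and the 10.0.0.20 address are made-up placeholders, not my actual lab):

# On VM B (local NVMe-backed storage), export a build directory over NFS
# /etc/exports -- async trades write safety for speed, which is acceptable
# here since periodic backups are enough
/srv/build  10.0.0.0/24(rw,async,no_subtree_check)

# re-read the exports table
exportfs -ra

# On VM A, mount the share; noatime avoids extra metadata writes
mount -t nfs -o noatime,vers=4.2 10.0.0.20:/srv/build /mnt/build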

But are there better ways to handle this (apart from buying hardware that costs 100x more :cool:)?
TIA!
 
I don't really deal in HA, so I may say something wrong here, but in general I would point out a couple of observations. First, maximum read speed will come from an NVMe drive; you don't say what kind of drive(s) you have in your system. Second, if we are talking about a drive array, striped drives (or striped vdevs for ZFS) will give the fastest array setup. Third, anything hosted on the same Proxmox instance, on the same VLAN or perhaps even on the same vmbr, will be faster than anything that has to traverse the NIC, the switch or your router.
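As a sketch, a striped (RAID0-like) ZFS pool over two NVMe devices could look like this (device paths and the pool name are assumptions; striping has no redundancy, so use it only for data you can rebuild):

# listing devices with no 'mirror'/'raidz' keyword stripes them
# ashift=12 matches 4K sectors; atime=off skips access-time updates
zpool create -o ashift=12 -O atime=off -O compression=lz4 fastpool \
    /dev/nvme0n1 /dev/nvme1n1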

Tell us more about your physical host setup, please. And what is the need/use case for HA?
 
> I have a VM A in HA, using a shared-storage disk. On this machine I run software that needs the fastest possible random read/write speed for small files, and so far using its own disk has been a huge bottleneck. It is mainly used to build software projects: roughly 80% reading source files, 20% writing compiled objects.

Generally, if you need the "fastest speed" you don't run it in a VM, and you also don't run it on ZFS.

How big is the code corpus? Run ' du -sh ' on the source dir to check.

If you have sufficient RAM, you could look into compiling in a RAMdisk and just rclone/rsync the results to disk/SSD after a build run (see the sketch below). If not, look into a fast Optane drive, or something NVMe with a high TBW rating (use an enterprise-level SSD if this is prod!), and pass it through to the VM. This will impact HA, but how often do you expect it to fall over?
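A minimal sketch of the RAMdisk approach (the 8G size, the paths, and the make invocation are placeholders to adapt):

# mount a tmpfs build directory (lives in RAM, lost on reboot)
mkdir -p /mnt/rambuild
mount -t tmpfs -o size=8G tmpfs /mnt/rambuild

# copy sources in, build, then sync artifacts back to persistent storage
rsync -a ~/project/ /mnt/rambuild/project/
make -C /mnt/rambuild/project -j"$(nproc)"
rsync -a /mnt/rambuild/project/ ~/project/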

You could also look into distcc and spread the compile across several nodes, roughly as sketched below.
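Something along these lines (host names, the allowed subnet, and the job count are assumptions):

# on each helper node, run the distcc daemon and allow the LAN
distccd --daemon --allow 10.0.0.0/24

# on the build VM, list the helpers and route the compiler through distcc
export DISTCC_HOSTS="localhost node1 node2"
make -j12 CC="distcc gcc" CXX="distcc g++"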

https://search.brave.com/search?q=i...summary=1&conversation=7a24530be23c5c3453cd00
 
First, maximum read speed will come from an NVMe drive. You don't say what kind of drive(s) you have in your system. Second, if we are talking about a drive array, striped drives (or striped vdevs for ZFS) will give the fastest array setup [...] anything hosted on the same Proxmox instance, on the same VLAN or perhaps even on the same vmbr, will be faster than anything that has to traverse the NIC, the switch or your router.

Tell us more about your physical host setup, please. And what is the need/use case for HA?
Yes, that was the reason I was thinking about a share served from a VM on the same node as A, with that VM on fast local NVMe storage.
The use case for HA is that this is my development machine, and I don't want to lose more than the minimal time needed if it fails. Compilations can wait until I recover from a failure, but development must not stop.

[...] compiling in a RAMdisk [...] NVMe with a high TBW rating (use an enterprise-level SSD if this is prod!), and pass it through to the VM. This will impact HA
That would be the best option. How is HA affected if I pass through a disk? I know I cannot migrate a VM that has the node's physical devices attached. Does it still move to the failover node anyway?
 
Thank you all for the hints and suggestions!
I will tweak here and there and experiment with some setups to see which one best meets my needs.
 