Hello!
I'll tell you a little about my situation.
I've recently been migrating all my processes to pipelines, and since everything runs locally I prefer to have my own runners. Right now I have 3 instances (plain Ubuntu 20 VMs) with 10 runners each.
The problem is that I'm noticing performance issues, especially with disk IOPS. My setup is ultra low cost and looks like this:
- 32 GB SATA SSD --> for the Proxmox install
- 500 GB Seagate Barracuda ST500DM009 --> in a ZFS pool "HDD-pool" for images and VM disks; it currently holds all the runners and images
- 3 × 120 GB Kingston A400 SSDs --> recently bought, not configured yet
With all the runners working (the usual case), the current ZFS pool "HDD-pool" shows up to 50 % IO delay, which sometimes makes the pipeline jobs fail.
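For reference, a standard way to watch the pool's load and per-disk latency while the runners are busy (the `-l` latency columns assume a reasonably recent OpenZFS, which current Proxmox ships):

```shell
# Per-vdev throughput and latency for HDD-pool, refreshed every 5 seconds
zpool iostat -v -l HDD-pool 5
```

High numbers in the total_wait/disk_wait columns while jobs run would line up with the IO delay the Proxmox dashboard reports.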
I tried running:
zfs set sync=disabled HDD-pool
It seems to reduce the delay a bit, but I haven't found the command to check whether it was configured correctly.
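As far as I can tell from the zfs(8) man page, the property can be read back with `zfs get`, so a check would be:

```shell
# Read back the sync property to confirm the setting took effect
zfs get sync HDD-pool
# A SOURCE of "local" means it was set explicitly on this pool, e.g.:
# NAME      PROPERTY  VALUE     SOURCE
# HDD-pool  sync      disabled  local
```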
It doesn't matter to me if a job fails and everything on the runner is lost, so I don't need the durability those extra sync writes buy. I just bring up a new VM and run Ansible.
So I have two questions here.
- Can I do anything to optimize the ZFS HDD-pool?
- And what is the best configuration so that the SSDs' TBW doesn't get burned through by excessive ZFS writes? Maybe use another filesystem instead of ZFS? They are normal consumer SSDs, not enterprise ones, and I'm worried about having to replace them after two days under this workload.
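For context, these are the pool-level tweaks I've been reading about for disposable build data (standard zfsprops(7) properties; I haven't applied them yet, so take them as a starting point, not a recommendation):

```shell
# Cut unnecessary writes on a pool holding throwaway CI data
zfs set atime=off HDD-pool        # skip access-time updates on every read
zfs set compression=lz4 HDD-pool  # cheap on CPU, fewer blocks actually written
```

One caveat I ran into: VM disks on Proxmox are usually zvols, and those take a volblocksize at creation time rather than the recordsize property, so block-size tuning has to happen when the disk is created.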
Thanks and all advice is welcome!