Today I was manually moving some VMs between servers via backup & restore, and I noticed that copying the dump file between servers works fine, but once I start the restore process it becomes really sluggish and my I/O wait time increases.
This is more of an intro to my actual question, but I think it's still worth knowing:
- I have one storage group with two 2TB Exo HDDs in software RAID1 and LVM on top of that <- this is where my OS and backups reside (layout check below)
- Copying backups to the above storage runs at full gigabit speed (~100 MiB/s)
- I have an SSD configured for my VM storage <- this is where I'm restoring my backups
- Proxmox 8
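In case the disk layout matters for the answer, this is roughly how I'd double-check the stacking on the backup storage (device names like md0 are just examples from my setup):

Code:
# Block devices with RAID/LVM stacking and mountpoints
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
# Software RAID1 state
cat /proc/mdstat
# LVM physical/logical volumes sitting on top of the md device
pvs
lvs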
My investigation led me to this process:
Code:
/bin/bash -c set -o pipefail && cstream -t 83886080 -- /mnt/dumpy/dump/vzdump-qemu-124-2023_07_10-20_09_57.vma | vma extract -v -r /var/tmp/vzdumptmp32211.fifo - /var/tmp/vzdumptmp32211
As you can see, vma extract uses /var/tmp, which is NOT tmpfs and sits on persistent storage, in this case my spinning HDDs. That introduces a mix of read and write operations on the same disks the backup is being read from, which would explain the slowness of the whole process.
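For reference, this is how I checked which filesystem actually backs each temp directory (findmnt is part of util-linux; your mount layout may differ):

Code:
# Show the filesystem and mount source behind each directory
findmnt -T /var/tmp
findmnt -T /tmp
# Alternative view with filesystem types
df -T /var/tmp /tmp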
Could this be switched to just /tmp?
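One way I was thinking of testing the theory, purely as an experiment and assuming /var/tmp only holds small temporary files during the restore (the 2G size is a guess on my part):

Code:
# Experiment only: back /var/tmp with RAM for a single restore run
mount -t tmpfs -o size=2G tmpfs /var/tmp
# ... run the restore and compare I/O wait ...
umount /var/tmp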