Acronis bootable media PAINFULLY slow

Dec 16, 2024
Hi,
During a near disaster, we opted to restore some VMs that had been backed up from VMware vSphere by Acronis to local SAN storage.
The restore was performed from a bootable media, supplied by Acronis.
However, this process was unable to finish.
Restoring a small VM with about 50 GB of data took more than 7 hours!
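For scale, a quick back-of-envelope check: 50 GB in 7 hours works out to roughly 2 MB/s on average, far below what the storage below should deliver.

```shell
# Average throughput implied by the observed restore: 50 GB over 7 hours.
awk 'BEGIN { printf "%.1f MB/s\n", 50e9 / (7 * 3600) / 1e6 }'   # → 2.0 MB/s
```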

The picture was the same even when performing the restore from the cloud, so neither local storage nor the internet connection could be the culprit.
Performing the restore with the same bootable media from within a new VMware ESXi VM took less than 9 minutes.
We did not see any spikes in network, CPU etc. during the restore process in Prox.

We were wondering if there were any performance tweaks or something that we could do in the restore VM/bootable media, in order to gain more performance?
Do you perhaps have input on how this restore VM should be configured?
 
How did you configure the VM and your storage? Without any details, it will be hard to tell you what might be the culprit..
 
Right, sorry.
Here are the Prox VM hardware settings, and one of the nodes.
Prox Acronis VM.jpg
Prox node.jpg
Storage is done via Ceph; both Proxmox and Ceph run on a 3-node cluster.


Let me know all info you need to investigate further, it would be very nice to be able to speed this up!

Thanks!
 
Did you test how the storage performs in general (could you maybe provide more details there)? Maybe the issue is not the restore per se, but the target storage?
 
What kind of NVMes are your OSDs on? If you try a restore, what does the monitoring look like?
 
The NVMes are these: 6x 1.92TB Data Center NVMe Read Intensive AG Drive U2 Gen4 with carrier
No HBA, no RAID controller, directly connected to the bus.

Not entirely sure about the monitoring during restore - how can I check this?
 
Well, a screenshot of that Ceph OSD list while the restore is crawling along might already give a clue ;) /proc/pressure/io might also be interesting.
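For reference, PSI can be read straight off the node while the restore runs; a minimal sketch (the numbers below are a sample line, not real output - on the node you read /proc/pressure/io itself):

```shell
# On the node during the slow restore:  cat /proc/pressure/io
# (Linux PSI, kernel >= 4.20; "some" = share of time at least one task
# was stalled waiting on IO.)
# Parsing the 10-second average out of a sample PSI line:
sample='some avg10=35.22 avg60=30.10 avg300=28.50 total=123456789'
avg10=$(echo "$sample" | awk '{ sub("avg10=", "", $2); print $2 }')
echo "avg10=${avg10}%"   # → avg10=35.22%  (sustained high values = storage pressure)
```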
 
Just for testing, I've started 9 W11 VMs, 3 on each node, with CrystalDiskMark.
That leaves the OSDs like this:
OSD Diskmark.jpg

While looking at Ceph performance, it looks like this:
IOPS diskmark.jpg
Read IOPS max out at 72K+, reads at 4.5 GiB/s
 
Then it must be something peculiar about Acronis..
 
So, that's a multi-threaded test.
Try 1 VM + CDM in "Real World Performance" mode.

EDIT: Don't forget to use virtio-scsi drivers version at least 266; previous versions can hang.
Well, this was just for testing, to verify that the Prox/Ceph environment was configured according to best practice.

As we really don't see any issues here, I suspect this is something between the Acronis Bootable Media and Prox.
I have been unable to find any information on which VM config is best for the Bootable Media. What puzzles me is that the media is itself based on some Linux distribution, AFAIK.
 
VMWare vSphere by Acronis
Without more info, I think we can assume this problem is caused by the Acronis restore procedure being designed/integrated for VMware - as in here?
AFAICT there is, as of now, no Acronis implementation for Proxmox - see the numerous requests here.

I don't use Acronis, but I'm going to assume that it is doing a block-level restore.
As a test: on the above (test) VM, add another similar-sized disk (128G) on the same storage, and from within the VM dd-copy from the original source disk to the newly added one. See how long it takes. I'm not sure this exactly replicates the Acronis restore procedure (compression etc.), but if this procedure also takes an exaggeratedly long time, you will know it is not an Acronis issue but probably storage/NW related.
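A sketch of that dd test, using temp files so it can be dry-run safely; inside the test VM you would point if=/of= at the real block devices (e.g. /dev/sdX - names are placeholders, check yours) and add oflag=direct so the page cache doesn't mask the storage speed:

```shell
# Dry-run with files; for the real test use the raw devices, e.g.:
#   dd if=/dev/sda of=/dev/sdb bs=4M oflag=direct status=progress
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=64 status=none   # create a 64 MiB source
dd if="$src" of="$dst" bs=4M                           # the copy under test; dd
                                                       # prints elapsed time + MB/s
cmp -s "$src" "$dst" && echo "copy verified"
rm -f "$src" "$dst"
```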
 
Correct - Acronis does not, as yet, have any native support for Proxmox.
But this only applies to agent-less backups, which are possible on some other hypervisors, including vSphere/ESXi.

However, the bootable media is some kind of Linux distribution, so one could reasonably assume that it would play nicely with Prox, which is Debian-based.

I opened a case with Acronis yesterday, and they somewhat agreed that the culprit seems to be the bootable media; however, they were unable to explain which settings the Prox VM config should have in order to be as compatible as possible.
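One thing worth checking (my assumption, not something Acronis confirmed): whether the media's kernel actually ships virtio drivers. If it doesn't, the VM's disk and NIC fall back to emulated hardware and throughput collapses. From a shell inside the booted media, if it offers one:

```shell
# Inside the booted Acronis media: check for loaded virtio modules. If none
# are found, try switching the Prox VM's disk bus to SATA/IDE and the NIC to
# E1000 in the VM hardware settings - emulated devices any Linux kernel can drive.
if lsmod 2>/dev/null | grep -q virtio; then
  echo "virtio drivers loaded"
else
  echo "no virtio drivers found"
fi
```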

We did some additional testing in Prox, just to make sure that we don't see any issues here. This is what we wrote to Acronis support:
I have tested both local storage -> restore and cloud storage -> restore - both (within the bootable media) give no better speed than 200-400 KB/s!!
Copying data back and forth between a vSphere VM and a Prox VM, within Windows VMs, yields more than 4 GB/s
Copying data from local storage to a Windows VM, on either hypervisor, yields more than 500 MB/s
Booting the media in vSphere and restoring there from the cloud runs at around 1.3 GB/s

I'll let you know the outcome of this; perhaps someone might benefit from our findings in the future.

Thanks for support until now guys, appreciated!
 