file restore test

Hi,

We just moved our biggest Windows fileserver from VMware to Proxmox.
Before moving 20+ TB we found out not to use ReFS, as that would cause problems restoring files from it.

So we built an NTFS volume for it with a cluster size of 16 KB, to get past the standard 16 TB volume limit that applies with the default 4 KB cluster size.
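
For reference, the larger allocation unit can be set when formatting from an elevated command prompt; a minimal sketch, assuming the data disk got drive letter F: (the letter is just a placeholder, and we actually did this through Disk Management):

Code:
REM quick-format as NTFS with a 16 KB allocation unit
format F: /FS:NTFS /A:16K /Q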

Now we are getting an error 400 during the file restore procedure (see attachment).
Sometimes we get a connection timeout (596) instead.

We are already running a chkdsk at the moment; no problems found so far.
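
(For completeness, nothing exotic is meant here, just something along these lines, with F: again standing in for the real drive letter:)

Code:
REM read-only check first, then a repair pass if anything turns up
chkdsk F:
chkdsk F: /f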

What else could I do to get around this?


Using PVE 8.0.3 and PBS 3.0.1
 

Attachments

  • Screenshot from 2023-12-01 14-53-43.png
After replacing our backup host (it had a corrupt controller) I finally managed to upgrade PVE to 8.1.3.

Unfortunately the error message remains (see attachment).

What could it be? Maybe the 16 KB cluster size?
 
you need to check the log on the PVE host where you attempted the file restore..
 
Found it, sorry ... I was looking at a different host than the one my browser was pointed at.
It is the qemu.log, attached below:


It complains about a corrupt $MFT, but chkdsk cannot find anything.
 

thanks! it's possible that such large cluster sizes are not supported by the Linux NTFS driver. it might be possible to improve support by switching to the newer ntfs3 driver.

edit: quick test, 16k cluster size is not the issue, maybe the disk size is, will report back..

edit: also works with a 20TB disk with 32k cluster size - could you maybe try updating your PVE system to the current version? if it still doesn't work then, more details regarding the disk initialization and contents would maybe help to reproduce..
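
a quick way to check locally whether the newer driver copes with such a volume is to attach a copy/snapshot of the disk to a Linux machine and mount it read-only with ntfs3; a rough sketch, assuming the NTFS partition shows up as /dev/sdb1 (device name is just an example, and ntfs3 needs kernel 5.15 or newer):

Code:
# read-only mount with the newer in-kernel ntfs3 driver
mount -t ntfs3 -o ro /dev/sdb1 /mnt
# any $MFT or other complaints from the driver end up in the kernel log
dmesg | tail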
 
I definitely want to evaluate whether switching to ntfs3 (the newer in-kernel driver) makes sense. For that it would help to narrow down which parameters currently don't work, so that we can test them one by one.

e.g.,
- large filesystem
- deduplication
- compression
- redundant/split volumes
- ... ?

I am not an NTFS expert (yet ;)), so I am probably missing some details!

there will always be some setups that cannot work out of the box with the generic file-restore VM, e.g. anything using encryption or similar things is out of scope by design.
 
could you provide the exact parameters used to create and fill the volume?

- size
- steps to create the volume (e.g., which tool is used to format, which options are selected, ..)
- does it fail to be browseable when backed up while empty?
- if not, what kind of data is on it when it becomes "unbrowseable"?

thanks!
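
most of those values can also be read straight off the volume from inside the Windows guest, e.g. (drive letter is just a placeholder):

Code:
REM dumps bytes per cluster/sector, cluster counts, MFT details, etc.
fsutil fsinfo ntfsinfo F: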
 
Damn... this one works:

Created the VM from the same original template:

- added a virtio device (TST, 200 GB), fully provisioned, no discard (later changed to discard)
- in Windows: Disk Management (old-school, NT4 style), initialized the disk as GPT
- formatted as NTFS, allocation unit changed from the default to 16 KB (quick format)
- made a test directory with a test file
- ran a backup, and the file restore worked.


I will change the size to 2000 GB, 10000 GB, 20000 GB and 30000 GB, running the backup again after each step.
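
(Growing the test disk between runs happens on the PVE side; roughly like this, where the VMID 105 and the disk name virtio1 are just placeholders for my setup:)

Code:
# grow the test disk to the next size, then extend the volume inside Windows and back it up again
qm resize 105 virtio1 2000G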
 