Tape restore Problem - TASK ERROR: unable to find media

stormtronix

I want to restore a snapshot from a continuous tape pool with about 15 tapes.
I have a changer for 8 tapes, and the problem is that the restore always terminates with TASK ERROR: unable to find media,
because I cannot put all of the needed tapes into the changer at once. I also can't find a way to figure out which tapes are necessary for the restore.

Is there really no other option than restoring the whole pool?
Why is the job terminated instead of asking me to insert another tape?
 
hi,

can you post the full task log (or at least the last 20 lines where the error occurs)?

i'll try to recreate and test that soon
 
2023-05-04T06:53:13+02:00: Mediaset '5344d606-bbb8-4c44-8dc5-f7c01b6d94d6'
2023-05-04T06:53:13+02:00: Pool: Pool3
2023-05-04T06:53:57+02:00: found snapshot vm/395/2023-03-25T23:00:05Z on 000038L2: file 17
2023-05-04T06:53:57+02:00: Phase 1: temporarily restore snapshots to temp dir
2023-05-04T06:53:57+02:00: loading media '000038L2' into drive 'TandbergDrive'
2023-05-04T06:55:11+02:00: found media label 000038L2 (bb7cdd9e-97dd-49fc-ba48-66a2d0d8a4bb)
2023-05-04T06:55:11+02:00: Encryption key fingerprint:
2023-05-04T06:55:11+02:00: was at file 2, moving to 17
2023-05-04T06:56:40+02:00: now at file 17
2023-05-04T06:56:40+02:00: File 17: snapshot archive store1:vm/395/2023-03-25T23:00:05Z
2023-05-04T06:57:11+02:00: Phase 2: restore chunks to datastores
2023-05-04T06:57:11+02:00: loading media '000012L2' into drive 'TandbergDrive'
2023-05-04T06:59:07+02:00: found media label 000012L2 (4e1f6458-6fab-40e5-ab67-3ee1c950dbdf)
2023-05-04T06:59:07+02:00: was at file 2, moving to 643
2023-05-04T07:00:02+02:00: now at file 643
2023-05-04T07:00:02+02:00: File 643: chunk archive for datastore 'store1'
2023-05-04T07:00:21+02:00: restored 8.271 MB (435.75 KB/s)
2023-05-04T07:00:21+02:00: restored 5 chunks
2023-05-04T07:00:21+02:00: was at file 643, moving to 644
2023-05-04T07:00:27+02:00: now at file 644
2023-05-04T07:00:27+02:00: File 644: chunk archive for datastore 'store1'
2023-05-04T07:00:57+02:00: restored 51.124 MB (1.74 MB/s)
2023-05-04T07:00:57+02:00: restored 22 chunks
2023-05-04T07:00:57+02:00: was at file 644, moving to 645
2023-05-04T07:00:59+02:00: now at file 645
2023-05-04T07:00:59+02:00: File 645: chunk archive for datastore 'store1'
2023-05-04T07:01:24+02:00: restored 38.231 MB (1.54 MB/s)
2023-05-04T07:01:24+02:00: restored 16 chunks
2023-05-04T07:01:24+02:00: was at file 645, moving to 646
2023-05-04T07:01:25+02:00: now at file 646
2023-05-04T07:01:25+02:00: File 646: chunk archive for datastore 'store1'
2023-05-04T07:01:46+02:00: restored 15.392 MB (707.19 KB/s)
2023-05-04T07:01:46+02:00: restored 6 chunks
2023-05-04T07:01:46+02:00: was at file 646, moving to 647
2023-05-04T07:01:50+02:00: now at file 647
2023-05-04T07:01:50+02:00: File 647: chunk archive for datastore 'store1'
2023-05-04T07:02:00+02:00: restored 26.428 MB (2.6 MB/s)
2023-05-04T07:02:00+02:00: restored 11 chunks
2023-05-04T07:02:00+02:00: was at file 647, moving to 648
2023-05-04T07:02:19+02:00: now at file 648
2023-05-04T07:02:19+02:00: File 648: chunk archive for datastore 'store1'
2023-05-04T07:02:43+02:00: restored 5.106 MB (219.11 KB/s)
2023-05-04T07:02:43+02:00: restored 3 chunks
2023-05-04T07:02:43+02:00: was at file 648, moving to 649
2023-05-04T07:02:45+02:00: now at file 649
2023-05-04T07:02:45+02:00: File 649: chunk archive for datastore 'store1'
2023-05-04T07:03:08+02:00: restored 17.889 MB (756.52 KB/s)
2023-05-04T07:03:08+02:00: restored 8 chunks
2023-05-04T07:03:08+02:00: was at file 649, moving to 650
2023-05-04T07:03:14+02:00: now at file 650
2023-05-04T07:03:14+02:00: File 650: chunk archive for datastore 'store1'
2023-05-04T07:03:26+02:00: restored 9.224 MB (783.44 KB/s)
2023-05-04T07:03:26+02:00: restored 4 chunks
2023-05-04T07:03:26+02:00: was at file 650, moving to 651
2023-05-04T07:03:44+02:00: now at file 651
2023-05-04T07:03:44+02:00: File 651: chunk archive for datastore 'store1'
2023-05-04T07:04:09+02:00: restored 43.707 MB (1.77 MB/s)
2023-05-04T07:04:09+02:00: restored 17 chunks
2023-05-04T07:04:09+02:00: was at file 651, moving to 652
2023-05-04T07:04:10+02:00: now at file 652
2023-05-04T07:04:10+02:00: File 652: chunk archive for datastore 'store1'
2023-05-04T07:04:33+02:00: restored 28.213 MB (1.22 MB/s)
2023-05-04T07:04:33+02:00: restored 19 chunks
2023-05-04T07:04:33+02:00: was at file 652, moving to 653
2023-05-04T07:04:35+02:00: now at file 653
2023-05-04T07:04:35+02:00: File 653: chunk archive for datastore 'store1'
2023-05-04T07:04:56+02:00: restored 7.655 MB (364.21 KB/s)
2023-05-04T07:04:56+02:00: restored 12 chunks
2023-05-04T07:04:56+02:00: was at file 653, moving to 654
2023-05-04T07:05:13+02:00: now at file 654
2023-05-04T07:05:14+02:00: File 654: chunk archive for datastore 'store1'
2023-05-04T07:05:31+02:00: restored 27.231 MB (1.52 MB/s)
2023-05-04T07:05:31+02:00: restored 11 chunks
2023-05-04T07:05:31+02:00: was at file 654, moving to 655
2023-05-04T07:05:45+02:00: now at file 655
2023-05-04T07:05:45+02:00: File 655: chunk archive for datastore 'store1'
2023-05-04T07:05:49+02:00: restored 3.608 MB (843.13 KB/s)
2023-05-04T07:05:49+02:00: restored 3 chunks
2023-05-04T07:05:49+02:00: was at file 655, moving to 656
2023-05-04T07:06:13+02:00: now at file 656
2023-05-04T07:06:13+02:00: File 656: chunk archive for datastore 'store1'
2023-05-04T07:06:36+02:00: restored 12.17 MB (527.7 KB/s)
2023-05-04T07:06:36+02:00: restored 6 chunks
2023-05-04T07:06:36+02:00: was at file 656, moving to 657
2023-05-04T07:06:39+02:00: now at file 657
2023-05-04T07:06:39+02:00: File 657: chunk archive for datastore 'store1'
2023-05-04T07:06:59+02:00: restored 15.236 MB (727.34 KB/s)
2023-05-04T07:06:59+02:00: restored 7 chunks
2023-05-04T07:06:59+02:00: was at file 657, moving to 659
2023-05-04T07:07:13+02:00: now at file 659
2023-05-04T07:07:13+02:00: File 659: chunk archive for datastore 'store1'
2023-05-04T07:07:37+02:00: restored 31.321 MB (1.34 MB/s)
2023-05-04T07:07:37+02:00: restored 14 chunks
2023-05-04T07:07:37+02:00: was at file 659, moving to 660
2023-05-04T07:07:38+02:00: now at file 660
2023-05-04T07:07:38+02:00: File 660: chunk archive for datastore 'store1'
2023-05-04T07:07:42+02:00: restored 386.353 KB (104.96 KB/s)
2023-05-04T07:07:42+02:00: restored 1 chunks
2023-05-04T07:07:42+02:00: was at file 660, moving to 661
2023-05-04T07:08:07+02:00: now at file 661
2023-05-04T07:08:07+02:00: File 661: chunk archive for datastore 'store1'
2023-05-04T07:08:14+02:00: restored 1.647 MB (251.13 KB/s)
2023-05-04T07:08:14+02:00: restored 1 chunks
2023-05-04T07:08:14+02:00: loading media '000026L2' into drive 'TandbergDrive'
2023-05-04T07:09:37+02:00: WARN: Error during restore, partially restored snapshots will NOT be cleaned up
2023-05-04T07:09:37+02:00: TASK ERROR: unable to find media '000026L2' (offline?)
 
yeah, the current code seems to assume that all tapes of the media-set are in the changer. would you mind opening a bug report here: https://bugzilla.proxmox.com

as a workaround you can do one of the following:

* iteratively compile a list of the tapes you need (e.g. in the above example it says you need 000038L2, 000012L2, 000026L2, and possibly more), and retry until it goes through with the correct tapes
* remove the drive from the changer config and use it as a 'standalone' drive. this way, instead of trying to automatically load the tapes, it will prompt you to load each tape manually. the loading can then be done e.g. via the webui of the changer or with the 'pmtx' tool on the cli (see the sketch below)
* try to change tapes during a restore, i.e. replace tapes it has already read from with ones it hasn't read yet (this depends on whether the changer allows it and seems impractical)
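a rough sketch of the manual loading with 'pmtx' (the slot number is just an example, and depending on your setup you may need to point it at the changer device - see 'man pmtx'):

Code:
# show which slots are occupied and what is currently in the drive
pmtx status
# move the cartridge from storage slot 3 into the drive (slot number is an example)
pmtx load 3
# move the cartridge from the drive back into a storage slot
pmtx unload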
 
Hello Dominik,
I have already been trying option 1 for a few weeks - the delay is quite big, because the tapes are stored offsite and I always have to request one tape after the other... which brings me to the next point/feature request:
The job should tell me which tapes I need before it starts, or maybe offer some kind of simulation of the job, so you can see which tapes will be required...
 
well, for that you could simply request all media of the media-set?

aside from that, i'm not sure it's wise to let the media-set get so big. in a disaster recovery situation, a restore would take an immense amount of time... better to keep more, smaller media-sets
The job should tell me which tapes I need before it starts, or maybe offer some kind of simulation of the job, so you can see which tapes will be required...
if you restore a single snapshot that's not really possible until the snapshot is temporarily restored, since we don't know beforehand which snapshot contains which chunks

but yeah, you can open a feature request. we could print the list as soon as we know it
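to get the full tape list of a media-set up front, something like this should work (a rough sketch, assuming the media-set UUID shows up in the list output - check 'man proxmox-tape'):

Code:
# list all tape media known to PBS and filter for the media-set UUID from the task log;
# the matching lines are the tapes you would have to request from offsite storage
proxmox-tape media list | grep 5344d606-bbb8-4c44-8dc5-f7c01b6d94d6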
 
I found out that this was working correctly in the past - I already did big restores for testing purposes and the job waited until I put the tape into the changer - can you please fix this asap? I do not want to change 15 tapes manually...
 
Hello Dominik,
thanks for the fix, it works as intended now - I am asked to put the tape into the changer and the restore continues as soon as the tape is inserted.

But now I have another problem: when I restore a complete pool of 4 tapes, it reads everything but nothing is written. The snapshots are visible from the PVE side, but when I want to access them for file-restore the following message appears:

Starting VM failed. See output above for more information. (500)

I did the restore to a completely new store for testing purposes and thought there might be some parent snapshots missing that are expected to be on the store, so I restored the pool again to the original location - but same issue.
 
Attached is the log file - why is every chunk skipped?
 

Attachments

  • task-doris-tape-restore-2023-06-14T11 03 47Z.log (563.5 KB)
Attached is the log file - why is every chunk skipped?
this normally happens if the chunk is already on the datastore (so there is no need to restore it from tape again) - is that not the case?

Starting VM failed. See output above for more information. (500)
which versions do you use?

also check the restore vm log under:

/var/log/proxmox-backup/file-restore/qemu.log

(on the pve machine you connect your webinterface to)
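e.g. (on that PVE node):

Code:
# show the last entries of the file-restore VM log
tail -n 50 /var/log/proxmox-backup/file-restore/qemu.log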
 
this normally happens if the chunk is already on the datastore (so there is no need to restore it from tape again) - is that not the case?
As I said, the first try went to a completely empty new store,
and the second test (since I thought that might be the issue) went to the old store where all the other snapshots of the affected machine are located - both with the same result.
which versions do you use?
Always updated; currently PVE with subscription and PBS without subscription (yet - I will surely buy one if we can resolve my problems)
also check the restore vm log under:

/var/log/proxmox-backup/file-restore/qemu.log

(on the pve machine you connect your webinterface to)
the log is empty / the last entry is from April
 
As I said, the first try went to a completely empty new store,
and the second test (since I thought that might be the issue) went to the old store where all the other snapshots of the affected machine are located - both with the same result.
that sounds weird, can you try again from the cli:

Code:
proxmox-tape restore <options>
check the man page/docs for the available options ('man proxmox-tape')
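roughly like this (a sketch only - <media-set-uuid> and <target-store> are placeholders, double-check the exact syntax against the man page):

Code:
# restore the whole media-set into the target datastore;
# take the media-set UUID from the task log or the media list
proxmox-tape restore <media-set-uuid> <target-store>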

Always updated; currently PVE with subscription and PBS without subscription (yet - I will surely buy one if we can resolve my problems)
could you please post the versions?

Code:
pveversion -v
proxmox-backup-manager versions --verbose

EDIT: (pressed enter too fast)
the log is empty / the last entry is from April
do you have more than one node? if yes, check the other ones for that log file
 
that sounds weird, can you try again from the cli:

Code:
proxmox-tape restore <options>
check the man page/docs for the available options ('man proxmox-tape')
I cannot test that right now because another test is currently running
could you please post the versions?

Code:
pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph: 16.2.11-pve1
ceph-fuse: 16.2.11-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
[QUOTE="dcsapak, post: 564727, member: 36072"]

proxmox-backup-manager versions --verbose
proxmox-backup 2.4-1 running kernel: 5.15.107-2-pve
proxmox-backup-server 2.4.2-2 running version: 2.4.2
pve-kernel-5.15 7.4-3
pve-kernel-5.13 7.1-9
pve-kernel-5.15.107-2-pve 5.15.107-2
pve-kernel-5.15.107-1-pve 5.15.107-1
pve-kernel-5.15.104-1-pve 5.15.104-2
pve-kernel-5.15.102-1-pve 5.15.102-1
pve-kernel-5.15.83-1-pve 5.15.83-1
pve-kernel-5.15.64-1-pve 5.15.64-1
pve-kernel-5.15.53-1-pve 5.15.53-1
pve-kernel-5.15.39-1-pve 5.15.39-1
pve-kernel-5.15.35-2-pve 5.15.35-5
pve-kernel-5.15.35-1-pve 5.15.35-3
pve-kernel-5.13.19-6-pve 5.13.19-15
pve-kernel-5.13.19-1-pve 5.13.19-3
ifupdown2 3.1.0-1+pmx4
libjs-extjs 7.0.0-1
proxmox-backup-docs 2.4.2-1
proxmox-backup-client 2.4.2-1
proxmox-mail-forward 0.1.1-1
proxmox-mini-journalreader 1.2-1
proxmox-offline-mirror-helper 0.5.1-1
proxmox-widget-toolkit 3.7.3
pve-xtermjs 4.16.0-2
smartmontools 7.2-pve3
zfsutils-linux 2.1.11-pve1

EDIT: (pressed enter too fast)

do you have more than one node? if yes, check the other ones for that log file
8 other nodes checked - no entry
 
Another thing I do not understand: why does the pool have two sets of tapes with different dates?
The empty result is from the first 4 tapes; right now I'm trying to restore the other 15, but that result won't be available before next week.
Maybe there is something broken?
[Screenshot: media list of the pool showing two media-sets with different dates]
 
Another thing I do not understand: why does the pool have two sets of tapes with different dates?
that depends on how much data is written to the tapes, and on when the tapes are overwritten/reused

do you still have the task log of that media-set from Jan 24? there should be a log line for each new medium that was used (see the sketch below)
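a quick way to pull that out of a saved task log (the file path is just an example):

Code:
# every tape change shows up as a 'loading media' line in the task log,
# so this lists all tapes that were used during that run
grep "loading media" /path/to/tape-task.log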
 
Hello Dominik,
my restore on another PBS, with the same changer and the same versions, worked as intended, so it seems not to be a problem with the media-set - the question remains why everything is skipped on my main PBS...

best regards
Ralf
 
my restore on another PBS, with the same changer and the same versions, worked as intended, so it seems not to be a problem with the media-set - the question remains why everything is skipped on my main PBS...
would you mind posting the task log from such a restore again? also the datastore config?
and the journal while the restore is running - maybe we can see more there...
 
Hello Dominik,
I made a further test with the separate PBS I use for testing only, the one on which the first restore of all snapshots from one VM worked and the files are accessible:
I tried to restore the whole media-set and guess what? It skips everything.
So I tried to restore only the snapshots of one VM, and this worked, but the result is the same as on my main PBS: the snapshots are shown, but when I select them for file-restore on my PVE it says "Starting VM failed. See output above for more information. (500)"

2023-07-05T13:20:51+02:00: starting new backup reader datastore 'store2': "/store1"
2023-07-05T13:20:51+02:00: protocol upgrade done
2023-07-05T13:20:51+02:00: GET /download
2023-07-05T13:20:51+02:00: download "/store1/vm/3291/2021-11-26T19:21:03Z/index.json.blob"
2023-07-05T13:20:51+02:00: GET /download
2023-07-05T13:20:51+02:00: download "/store1/vm/3291/2021-11-26T19:21:03Z/drive-virtio1.img.fidx"
2023-07-05T13:20:51+02:00: register chunks in 'drive-virtio1.img.fidx' as downloadable.
2023-07-05T13:20:51+02:00: GET /chunk
2023-07-05T13:20:51+02:00: download chunk "/store1/.chunks/3aa8/3aa8ce655cf6638a0b7543c94dd22f394755439c5870781de4d13f9e2ea57b7b"
2023-07-05T13:20:51+02:00: GET /chunk: 400 Bad Request: reading file "/store1/.chunks/3aa8/3aa8ce655cf6638a0b7543c94dd22f394755439c5870781de4d13f9e2ea57b7b" failed: No such file or directory (os error 2)
2023-07-05T13:20:51+02:00: reader finished successfully
2023-07-05T13:20:51+02:00: TASK OK
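(For reference, a rough way to double-check this on the PBS side, using the chunk digest and datastore path from the log above; 'store2' is the datastore name reported by the reader task:)

Code:
# check whether the chunk file exists on disk; 'No such file or directory'
# confirms the chunk was never written during the tape restore
ls -l /store1/.chunks/3aa8/3aa8ce655cf6638a0b7543c94dd22f394755439c5870781de4d13f9e2ea57b7b
# a verify job on the datastore should also flag every snapshot that references missing chunks
proxmox-backup-manager verify store2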

And here is the log from the restore:

2023-07-05T12:56:03+02:00: Mediaset '97fa9779-a36d-4778-a044-2e1497b4f86f'
2023-07-05T12:56:03+02:00: Pool: Pool3
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-05-28T18:11:08Z on 000031L2: file 276
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-06-25T18:29:33Z on 000031L2: file 279
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-07-30T18:31:40Z on 000031L2: file 282
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-08-27T18:20:55Z on 000031L2: file 284
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-09-24T18:35:20Z on 000031L2: file 287
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-10-29T18:17:58Z on 000031L2: file 291
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-11-26T19:21:03Z on 000031L2: file 294
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-12-10T19:10:27Z on 000031L2: file 297
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-12-17T19:45:37Z on 000031L2: file 299
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-12-24T19:43:49Z on 000031L2: file 301
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2021-12-31T19:44:08Z on 000031L2: file 303
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-07T19:25:10Z on 000031L2: file 305
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-11T19:32:22Z on 000031L2: file 307
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-12T19:30:17Z on 000031L2: file 309
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-13T19:31:14Z on 000031L2: file 311
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-14T19:31:07Z on 000031L2: file 313
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-17T19:29:47Z on 000031L2: file 315
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-18T19:29:29Z on 000031L2: file 317
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-19T19:32:43Z on 000031L2: file 319
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-20T19:27:24Z on 000031L2: file 321
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-21T19:28:16Z on 000031L2: file 323
2023-07-05T12:56:03+02:00: found snapshot vm/3291/2022-01-24T19:47:32Z on 000031L2: file 325
2023-07-05T12:56:03+02:00: Phase 1: temporarily restore snapshots to temp dir
2023-07-05T12:56:03+02:00: Required media list: 000031L2
2023-07-05T12:56:03+02:00: trying to load media '000031L2' into drive 'Drive'
2023-07-05T12:57:22+02:00: found media label 000031L2 (36f69e0e-04bd-455f-ac16-4a77c4d7b3c2)
2023-07-05T12:57:22+02:00: Encryption key fingerprint: 54:97:8d:38:e1:60:98:f7
2023-07-05T12:57:22+02:00: was at file 2, moving to 276
2023-07-05T12:59:37+02:00: now at file 276
2023-07-05T12:59:37+02:00: File 276: snapshot archive store1:vm/3291/2021-05-28T18:11:08Z
2023-07-05T12:59:37+02:00: was at file 277, moving to 279
2023-07-05T13:00:07+02:00: now at file 279
2023-07-05T13:00:07+02:00: File 279: snapshot archive store1:vm/3291/2021-06-25T18:29:33Z
2023-07-05T13:00:07+02:00: was at file 280, moving to 282
2023-07-05T13:00:45+02:00: now at file 282
2023-07-05T13:00:45+02:00: File 282: snapshot archive store1:vm/3291/2021-07-30T18:31:40Z
2023-07-05T13:00:45+02:00: was at file 283, moving to 284
2023-07-05T13:00:57+02:00: now at file 284
2023-07-05T13:00:57+02:00: File 284: snapshot archive store1:vm/3291/2021-08-27T18:20:55Z
2023-07-05T13:00:57+02:00: was at file 285, moving to 287
2023-07-05T13:01:33+02:00: now at file 287
2023-07-05T13:01:33+02:00: File 287: snapshot archive store1:vm/3291/2021-09-24T18:35:20Z
2023-07-05T13:01:33+02:00: was at file 288, moving to 291
2023-07-05T13:02:28+02:00: now at file 291
2023-07-05T13:02:28+02:00: File 291: snapshot archive store1:vm/3291/2021-10-29T18:17:58Z
2023-07-05T13:02:28+02:00: was at file 292, moving to 294
2023-07-05T13:03:00+02:00: now at file 294
2023-07-05T13:03:00+02:00: File 294: snapshot archive store1:vm/3291/2021-11-26T19:21:03Z
2023-07-05T13:03:00+02:00: was at file 295, moving to 297
2023-07-05T13:03:34+02:00: now at file 297
2023-07-05T13:03:34+02:00: File 297: snapshot archive store1:vm/3291/2021-12-10T19:10:27Z
2023-07-05T13:03:34+02:00: was at file 298, moving to 299
2023-07-05T13:03:43+02:00: now at file 299
2023-07-05T13:03:43+02:00: File 299: snapshot archive store1:vm/3291/2021-12-17T19:45:37Z
2023-07-05T13:03:43+02:00: was at file 300, moving to 301
2023-07-05T13:03:49+02:00: now at file 301
2023-07-05T13:03:49+02:00: File 301: snapshot archive store1:vm/3291/2021-12-24T19:43:49Z
2023-07-05T13:03:49+02:00: was at file 302, moving to 303
2023-07-05T13:03:53+02:00: now at file 303
2023-07-05T13:03:53+02:00: File 303: snapshot archive store1:vm/3291/2021-12-31T19:44:08Z
2023-07-05T13:03:53+02:00: was at file 304, moving to 305
2023-07-05T13:03:57+02:00: now at file 305
2023-07-05T13:03:57+02:00: File 305: snapshot archive store1:vm/3291/2022-01-07T19:25:10Z
2023-07-05T13:03:57+02:00: was at file 306, moving to 307
2023-07-05T13:04:06+02:00: now at file 307
2023-07-05T13:04:06+02:00: File 307: snapshot archive store1:vm/3291/2022-01-11T19:32:22Z
2023-07-05T13:04:06+02:00: was at file 308, moving to 309
2023-07-05T13:04:11+02:00: now at file 309
2023-07-05T13:04:11+02:00: File 309: snapshot archive store1:vm/3291/2022-01-12T19:30:17Z
2023-07-05T13:04:11+02:00: was at file 310, moving to 311
2023-07-05T13:04:14+02:00: now at file 311
2023-07-05T13:04:14+02:00: File 311: snapshot archive store1:vm/3291/2022-01-13T19:31:14Z
2023-07-05T13:04:14+02:00: was at file 312, moving to 313
2023-07-05T13:04:16+02:00: now at file 313
2023-07-05T13:04:16+02:00: File 313: snapshot archive store1:vm/3291/2022-01-14T19:31:07Z
2023-07-05T13:04:16+02:00: was at file 314, moving to 315
2023-07-05T13:04:20+02:00: now at file 315
2023-07-05T13:04:20+02:00: File 315: snapshot archive store1:vm/3291/2022-01-17T19:29:47Z
2023-07-05T13:04:20+02:00: was at file 316, moving to 317
2023-07-05T13:04:22+02:00: now at file 317
2023-07-05T13:04:22+02:00: File 317: snapshot archive store1:vm/3291/2022-01-18T19:29:29Z
2023-07-05T13:04:22+02:00: was at file 318, moving to 319
2023-07-05T13:04:32+02:00: now at file 319
2023-07-05T13:04:32+02:00: File 319: snapshot archive store1:vm/3291/2022-01-19T19:32:43Z
2023-07-05T13:04:32+02:00: was at file 320, moving to 321
2023-07-05T13:04:49+02:00: now at file 321
2023-07-05T13:04:49+02:00: File 321: snapshot archive store1:vm/3291/2022-01-20T19:27:24Z
2023-07-05T13:04:49+02:00: was at file 322, moving to 323
2023-07-05T13:04:51+02:00: now at file 323
2023-07-05T13:04:51+02:00: File 323: snapshot archive store1:vm/3291/2022-01-21T19:28:16Z
2023-07-05T13:04:51+02:00: was at file 324, moving to 325
2023-07-05T13:04:55+02:00: now at file 325
2023-07-05T13:04:55+02:00: File 325: snapshot archive store1:vm/3291/2022-01-24T19:47:32Z
2023-07-05T13:09:45+02:00: All chunks are already present, skip phase 2...
2023-07-05T13:09:45+02:00: Phase 3: copy snapshots from temp dir to datastores
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-05-28T18:11:08Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-06-25T18:29:33Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-07-30T18:31:40Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-08-27T18:20:55Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-09-24T18:35:20Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-10-29T18:17:58Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-11-26T19:21:03Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-12-10T19:10:27Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-12-17T19:45:37Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-12-24T19:43:49Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2021-12-31T19:44:08Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-07T19:25:10Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-11T19:32:22Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-12T19:30:17Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-13T19:31:14Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-14T19:31:07Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-17T19:29:47Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-18T19:29:29Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-19T19:32:43Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-20T19:27:24Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-21T19:28:16Z' done
2023-07-05T13:09:45+02:00: Restore snapshot 'vm/3291/2022-01-24T19:47:32Z' done
2023-07-05T13:09:45+02:00: Restore mediaset '97fa9779-a36d-4778-a044-2e1497b4f86f' done
2023-07-05T13:09:45+02:00: TASK OK
 
