Search results

  1.

    SOLVED: ZVOL with LVM (for targetcli) gets locked after reboot

    I'm not sure if we're on the same page. Let me rephrase. Following is the targetcli output from my (still functioning) production environment: /> ls o- / ......................................................................................................................... [...] o- backstores...
  2.

    SOLVED: ZVOL with LVM (for targetcli) gets locked after reboot

    Oh, dmesg shows this during boot: [ 11.886889] Loading iSCSI transport class v2.0-870. [ 12.107469] Rounding down aligned max_sectors from 4294967295 to 4294967288 [ 12.107551] db_root: cannot open: /etc/target [ 15.733787] rx_data returned 0, expecting 48. [ 15.733799] iSCSI Login...
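
    The boot message "db_root: cannot open: /etc/target" in the snippet above usually means the LIO target core could not find targetcli-fb's configuration directory, so no saved iSCSI configuration is restored at boot. A minimal recovery sketch, assuming a Debian-based host with targetcli-fb installed (the boot-restore service name varies by distribution) — this is an illustration, not necessarily the thread's confirmed fix:

    ```shell
    # Recreate the config directory the kernel target core expects (db_root).
    mkdir -p /etc/target

    # Re-save the currently running target configuration;
    # targetcli-fb writes it to /etc/target/saveconfig.json.
    targetcli saveconfig

    # Ensure the saved config is restored on boot (Debian service name;
    # may differ on other distributions).
    systemctl enable rtslib-fb-targetctl
    ```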
  3.

    SOLVED: ZVOL with LVM (for targetcli) gets locked after reboot

    Targetcli is empty now, because of the issue /backstores/block> cd / /> ls o- / ......................................................................................................................... [...] o- backstores...
  4.

    SOLVED: ZVOL with LVM (for targetcli) gets locked after reboot

    Hi, I'm abusing my PBS installation for some archival requirement. I installed targetcli-fb and assigned a zvol through it to PVE. Using this target directly from Windows instead of through PVE was working flawlessly. But now the zvol is in use after a reboot... It seems the (PVE) LVM is...
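
    The symptom above (the zvol "in use" after a reboot) is consistent with the host's LVM scanning the zvol, finding the PVE volume group stored inside it, and auto-activating it before targetcli can claim the device. A hedged sketch of one common mitigation, assuming the zvol appears as /dev/zd* on the PBS host (the filter pattern is an assumption; adjust it if the zvol is exposed under a different device path):

    ```shell
    # /etc/lvm/lvm.conf (fragment): exclude zvol device nodes from LVM
    # scanning so the host never activates the VG stored inside the zvol.
    #
    #   devices {
    #       global_filter = [ "r|/dev/zd.*|" ]
    #   }
    #
    # After editing, rebuild the initramfs so early boot honors the filter:
    update-initramfs -u
    ```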
  5.

    ceph object storage

    Did you get past your end-boss? Did the ceph command alter the centrally used ceph.conf?
  6.

    file restore test

    Yes, this one works! I tried it on one host in my staging environment... the second partition of the drive finally unfolds! Is there some testing that needs to be done before it can be taken into production? Martijn Too bad, M$ did not release any recent ReFS info for a Linux driver ...
  7.

    file restore test

    The test machine stays working. Files can be restored. It feels like it has to do with the disk's integrity. I'll investigate and report back here.
  8.

    file restore test

    Arrrgg... I cannot re-create the problem... on a fresh machine... I'm starting to realize something else must be wrong on a Windows/NTFS level.
  9.

    file restore test

    Tried 3 different sizes... all worked well... though I used only some txt files during restore testing. Going to start over with dedup enabled while also throwing a few GBs of data on it...
  10.

    file restore test

    Damn... this one works: Created the VM from the same original template: -added a virtio device (TST, 200 GB), full (no discard) (later changed to discard) -in Windows: Disk Manager (old school NT4), selected GPT device -format: NTFS, changed from the default to a 16 KB allocation unit (quick format) -made a...
  11.

    file restore test

    Good morning, I'll create a new machine (a smaller one) with the same steps (they were manual, but I think I can manage). This will speed up the backup attempt.
  12.

    file restore test

    Unfortunately... after removing the Windows roles, duplicating the data, and making a new backup... the file restore returns the error. If I can help, I would be glad to do some extra tests. Regards, Martijn
  13.

    file restore test

    It took almost 4 days to dedup the data on the clone VM. https://learn.microsoft.com/en-us/answers/questions/1352079/turn-off-dedup-and-delete-chunkstore Made a new backup from it... no success... even after a reboot. So now making a fresh backup... back in a jiffy (20 hours or so on those...
  14.

    file restore test

    Hmmz... I have to upgrade again then, strange... Did not see it coming; of course the machine is in use... too many applications have a permanent mount to it :-( By any chance, could Windows dedup be the cause?
  15.

    file restore test

    Found it, sorry... I was looking at a different host than the one my browser was focused on. It is the qemu.log, as attached: There is some complaining about a corrupt $MFT, but chkdsk cannot find anything.
  16.

    file restore test

    Are you certain? I can only find PVE and system stuff in /var/log on my PVE host...
  17.

    file restore test

    Thx, I can see the directory /var/log/proxmox-backup. It only contains 2 sub-dirs, called API and TASKS.
  18.

    file restore test

    After replacing our backup host (it got a corrupt controller), I finally managed to do the upgrade to 8.1.3 (PVE). Unfortunately, the message remains (see attachment). What could it be? Maybe the cluster size of 16 KB?
  19.

    file restore test

    Had to replace the backup host (hardware failure). The initial backup has been made, and the problem still exists with PBS 3.1.2. Still going to research this after upgrading PVE to 8.1.3.