Corrupt Filesystem after snapshot

Discussion in 'Proxmox VE: Installation and configuration' started by cryptolukas, Jan 23, 2017.

  1. taenzerme

    taenzerme Member
    Proxmox Subscriber

    Joined:
    Sep 18, 2013
    Messages:
    35
    Likes Received:
    0
    I can confirm it does happen with SCSI Virtio devices, too, and neither do I think it's related to QNAP/Synology. NFS share on FreeNAS showed the same symptoms, unfortunately.

    The strange thing is: some VMs (all Debian) handle snapshots just fine, and some (also Debian) end up with a corrupted filesystem.

    I have not found a way to reproduce this reliably though.
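
    In case it helps anyone trying to reproduce it, here is a rough snapshot stress loop (just a sketch; VM ID 9000 and the disk path are placeholders, and the final image check should only be run while the VM is stopped):

    VMID=9000
    for i in $(seq 1 20); do
        qm snapshot $VMID "stress$i"        # take a snapshot while the guest is running
        sleep 30                            # give the guest time to write some data
        qm delsnapshot $VMID "stress$i"     # remove it again
    done
    qm shutdown $VMID && qm wait $VMID      # stop the VM before checking the image
    qemu-img check /var/lib/vz/images/$VMID/vm-$VMID-disk-1.qcow2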
     
  2. tafkaz

    tafkaz Member

    Joined:
    Dec 1, 2010
    Messages:
    79
    Likes Received:
    0
    Damn... same here again.
    Using virtio-scsi with SCSI virtual disks doesn't fix the problem.
    This is really dangerous!
    Maybe it's better to switch from qcow2 to raw? (A rough sketch of how to do that is below.)
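
    A conversion sketch, assuming VM 100, disk scsi0 and a storage called "local-raw" (all placeholders); qm move_disk can convert the attached disk, or qemu-img can convert the image file by hand while the VM is stopped:

    # move the disk to another storage and convert it to raw in one step
    qm move_disk 100 scsi0 local-raw --format raw

    # or, with the VM stopped, convert the image file manually
    qemu-img convert -p -O raw \
        /var/lib/vz/images/100/vm-100-disk-1.qcow2 \
        /var/lib/vz/images/100/vm-100-disk-1.raw

    Keep in mind that raw images on file-based storage lose snapshot support entirely.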
     
  3. Bart Brouwer

    Bart Brouwer Member
    Proxmox Subscriber

    Joined:
    Jan 18, 2017
    Messages:
    97
    Likes Received:
    1
  4. Bart Brouwer

    Bart Brouwer Member
    Proxmox Subscriber

    Joined:
    Jan 18, 2017
    Messages:
    97
    Likes Received:
    1
    Hmm, after installing new things my VM got stuck again :( Still no solution...
     
  5. Bart Brouwer

    Bart Brouwer Member
    Proxmox Subscriber

    Joined:
    Jan 18, 2017
    Messages:
    97
    Likes Received:
    1
    Has anyone checked this without the RAM option?
    I will check this next week.
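
    For reference, both variants can be triggered from the CLI (VM ID 100 and the snapshot names are placeholders); --vmstate controls whether the guest's RAM is included, which is what the "Include RAM" checkbox in the GUI does:

    # snapshot without saved RAM state
    qm snapshot 100 pre-change --vmstate 0
    # snapshot including the RAM state (the variant suspected above)
    qm snapshot 100 pre-change-ram --vmstate 1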
     


  6. dkuo

    dkuo New Member

    Joined:
    Mar 13, 2018
    Messages:
    2
    Likes Received:
    0
    This just happened to me in my environment as well. I had a VM on a Synology and created many snapshots without problems until I migrated it to a different host. At that point when I tried to create another snapshot it failed. Luckily I could create a clone from a snapshot I had made earlier in the week.
     
  7. frank lupo

    frank lupo New Member

    Joined:
    Dec 27, 2016
    Messages:
    16
    Likes Received:
    4
  8. dmulk

    dmulk Member

    Joined:
    Jan 24, 2017
    Messages:
    59
    Likes Received:
    3
    This is still an issue! Any news? Any way to work around this issue or recover from it?

    <D>
     
  9. GadgetPig

    GadgetPig Member

    Joined:
    Apr 26, 2016
    Messages:
    138
    Likes Received:
    19
    Those of you with the qcow2/NFS/VirtIO combo who are getting corruption: could you also test whether the corruption happens with a CIFS/SMB share on the NAS? I'm just curious whether the type of share has any effect. A quick way to set up such a test storage is sketched below.
    Thanks
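
    A rough setup sketch, assuming your Proxmox version already ships the CIFS storage plugin; the storage name, server address, share name and credentials are all placeholders:

    # register the SMB share as a storage called "nas-smb" for disk images
    pvesm add cifs nas-smb --server 192.168.1.50 --share proxmox \
        --username pvetest --password 'secret' --content images
    pvesm status                      # verify the new storage is online

    Then move a test VM's disk onto it (e.g. with qm move_disk) and repeat the snapshot test there.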
     
  10. David Wilson

    David Wilson New Member
    Proxmox Subscriber

    Joined:
    Dec 26, 2017
    Messages:
    8
    Likes Received:
    0
    I've been avoiding snapshots because the corruption issue may still exist. I'll try to test again on a non-critical VM when I get a chance.
    Good point, @GadgetPig - I don't have the environment to test CIFS/SMB at this time, but I'm interested to hear how it goes if others get around to testing what you've suggested.
     
  11. Petr Konderla

    Petr Konderla New Member
    Proxmox Subscriber

    Joined:
    Jul 18, 2018
    Messages:
    1
    Likes Received:
    0
    We've been using the snapshot functionality for years (since 3.x) and never had such problems.

    Now we are seeing the same issues (occasionally corrupted qcow2 images after snapshotting). Judging by the other posts, could it somehow be related to version 4.4.x?

    We have used the same configuration the whole time: the same HP servers, images on a local disk with a hardware RAID array, ext4, qcow2 images.

    The "funny" part is that there are no errors on the host side - just a corrupted image.

    Has anyone found out what is going wrong?

    Thanks
    Petr

    PS: yes, we could use LVM, but not everyone likes it, and some have their own requirements.
     
  12. Bart Brouwer

    Bart Brouwer Member
    Proxmox Subscriber

    Joined:
    Jan 18, 2017
    Messages:
    97
    Likes Received:
    1
    I still have this problem. Proxmox doesn't respond in this topic :(
     
  13. trupletz

    trupletz New Member
    Proxmox Subscriber

    Joined:
    Aug 19, 2018
    Messages:
    2
    Likes Received:
    0
    Hi Members,
    I ran into the same problem. After a small analysis I believe I have found the cause and also a solution:

    It is a hard disk cache problem with VirtIO in combination with caching...
    If I use the "Write back" cache mode for my VMs, I have the problem :-(
    If I use the "Default (No cache)" option, I don't have any problems :)

    I hope this can help. (A CLI sketch for checking and changing the cache mode is below.)
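
    For reference, a way to check and change the cache mode per disk from the CLI (VM ID 100 and the volume name are placeholders; the volume spec passed to qm set has to match what qm config shows for that disk):

    qm config 100 | grep scsi0        # show the current disk line including the cache option
    # switch the disk back to the default "No cache" mode
    qm set 100 --scsi0 local:100/vm-100-disk-0.qcow2,cache=none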
     
  14. Bart Brouwer

    Bart Brouwer Member
    Proxmox Subscriber

    Joined:
    Jan 18, 2017
    Messages:
    97
    Likes Received:
    1
    Hi, I tried this as well... Two times it went fine, the third time the same failure again. :(
     
  15. dkuo

    dkuo New Member

    Joined:
    Mar 13, 2018
    Messages:
    2
    Likes Received:
    0
    The disks were already using the default "No cache" setting when I had the problem.
     
  16. trupletz

    trupletz New Member
    Proxmox Subscriber

    Joined:
    Aug 19, 2018
    Messages:
    2
    Likes Received:
    0
    Hi Bart, hi dkuo,
    I have no further ideas, sorry.
    I have now been testing my Proxmox server (5.2) for another week with Windows Server 2016 and Linux (Lubuntu) VMs. I took many snapshots on two test VMs, including nested chains of more than 3 snapshots per VM.
    I did not have any problems with these VMs after creating and deleting the snapshots.
    If I run into snapshot problems in the near future, I will post them here in this thread.
    Thanks and best regards,
    Matthias
     
  17. cvhideki

    cvhideki New Member

    Joined:
    Jan 19, 2016
    Messages:
    3
    Likes Received:
    0
    Hi guys,
    I have the same problem after rolling back my VM (Proxmox 5.2-5).
    The VM runs for 5-10 minutes and then gives the error "File system read only".
    The VM was using "Write back" cache; I tried changing it to the default "No cache", but without success - the filesystem is still corrupt (see screenshot 2).
    I ran a repair check on the qcow2 image:
    qemu-img check -r all /var/lib/vz/images/100/vm-100-disk-1.qcow2

    Repairing OFLAG_COPIED data cluster: l2_entry=8000000220230000 refcount=2
    The following inconsistencies were found and repaired:

    955254 leaked clusters
    42 corruptions

    Double checking the fixed image now...
    No errors were found on the image.
    1064960/1064960 = 100.00% allocated, 3.68% fragmented, 0.00% compressed clusters
    Image end offset: 79676964864

    but the VM and its filesystem are still broken.

    Does anyone have an idea how to fix this?
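
    One more thing that could be tried (a sketch only; run it with the VM stopped, and adjust the image path and partition number): map the qcow2 image on the host with qemu-nbd and run fsck against the guest partition directly. With 42 corruptions repaired in the image, some guest data may be gone for good, so a restore from backup might still be needed.

    modprobe nbd max_part=8
    qemu-nbd --connect=/dev/nbd0 /var/lib/vz/images/100/vm-100-disk-1.qcow2
    lsblk /dev/nbd0                     # list the guest partitions
    fsck -fy /dev/nbd0p1                # repair the guest filesystem (adjust the partition)
    qemu-nbd --disconnect /dev/nbd0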
     

    Attached Files:

    • 1.jpg (19.3 KB)
    • 2.jpg (39.7 KB)
    • 3.jpg (55.4 KB)
  18. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,464
    Likes Received:
    311
    Please update to the latest version and test again.
     
  19. David Wilson

    David Wilson New Member
    Proxmox Subscriber

    Joined:
    Dec 26, 2017
    Messages:
    8
    Likes Received:
    0
    Thanks Dietmar. Have there been any changes in the code that might address this problem?
     
  20. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,567
    Likes Received:
    412