Recent content by davidindra

  1. Cannot start LXC container after its shutdown

Hello, I'm experiencing a problem with LXC containers. When I shut down one of them and attempt to start it again, it gets stuck and the node statistics freeze (question marks in the GUI, hanging lxc-info processes in htop). The debug start log file looks like this: The PVE versions are these: And the container config...
  2. Debian containers freeze LXC when restarted (kernel 4.15.18-10-pve)

Nothing was upgraded by dist-upgrade: root@prox1:~# apt dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
  3. Debian containers freeze LXC when restarted (kernel 4.15.18-10-pve)

Hello, I'm experiencing a problem with the newest kernel available in the pve-no-subscription repository, which is 4.15.18-10-pve. I've upgraded my system with apt update and apt upgrade; the current versions are: root@prox1:~# pveversion -v proxmox-ve: 5.3-1 (running kernel: 4.15.18-10-pve) pve-manager...
  4. ZFS zfs_send_corrupt_data parameter not working

Hello, please, do you have any other ideas? The Proxmox instance we were talking about behaves very unstably (random freezes, etc.); I would be grateful for any ideas that might solve the problem by successfully moving the broken data away. Thanks a lot, David
  5. ZFS zfs_send_corrupt_data parameter not working

I now have ECC RAM (non-ECC RAM was the cause of the initial corruption). Yes, it is. I've run a scrub multiple times.
  6. ZFS zfs_send_corrupt_data parameter not working

I have: disabled swap, set arc_min to 4 GB, set arc_max to 8 GB, and set zfs_compressed_arc_enabled to 0. The system then froze completely (90% I/O delay). This was logged in dmesg: [ 7629.092954] Buffer I/O error on dev zd64, logical block 28727886, async page read [ 7629.095030] Buffer I/O error...
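
    Changes like these can be made persistent via a modprobe.d fragment. A minimal sketch, assuming the stock OpenZFS module parameters; the byte values are illustrative conversions of 4 GB and 8 GB, not taken from the thread:

    ```
    # /etc/modprobe.d/zfs.conf -- hypothetical example, values illustrative
    options zfs zfs_arc_min=4294967296    # 4 GiB
    options zfs zfs_arc_max=8589934592    # 8 GiB
    options zfs zfs_compressed_arc_enabled=0
    ```

    On a running system the same parameters can also be written under /sys/module/zfs/parameters/; on a root-on-ZFS Proxmox install, the modprobe.d change typically needs update-initramfs -u to take effect at boot.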
  7. ZFS zfs_send_corrupt_data parameter not working

Won't disabling ARC cache compression help?
  8. ZFS zfs_send_corrupt_data parameter not working

That seems possible. So how can I work around that problem, please?
  9. ZFS zfs_send_corrupt_data parameter not working

I've had some weird ARC cache problems for a longer time. The system swaps a bit, and arcstat says the ARC usually uses at most 1 GB of RAM out of 32 GB (shortly after boot a lot is used, but it drops within a second to around 1 GB). My /sys/module/zfs/parameters/zfs_arc_min and zfs_arc_max both equal zero. Swap is on ZFS.
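
    A zero in zfs_arc_min/zfs_arc_max means "use the built-in defaults", not a 0-byte limit. The effective limits and current ARC size can be read from the kernel's arcstats; a sketch, assuming the usual kstat location on ZFS on Linux:

    ```
    # Effective ARC size and limits, in bytes
    # (field names per /proc/spl/kstat/zfs/arcstats)
    grep -E '^(size|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats
    ```

    Comparing c_min/c_max against what arcstat reports is a quick way to tell whether the ARC is being capped by a parameter or shrunk by memory pressure.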
  10. ZFS zfs_send_corrupt_data parameter not working

    I ran this: dd if=/dev/zvol/rpool/data/vm-101-disk-1 bs=4096 | pv | dd bs=4096 of=/dev/null And got this: 109GiB 1:24:06 [21.7MiB/s] [...
  11. ZFS zfs_send_corrupt_data parameter not working

I tried that too - it fails, sometimes even with a kernel panic (mentioning VERIFY3()).
  12. ZFS zfs_send_corrupt_data parameter not working

    I've tried it without parameters and it failed again: root@prox2:~# zfs send rpool/data/vm-101-disk-1@actual | pv | zfs recv rpool/data/offload2-vm-101-disk-1 113GiB 1:07:52 [28.5MiB/s] [ <=> ] internal error: Invalid...
  13. ZFS zfs_send_corrupt_data parameter not working

Exactly the same result as in the previous attempt.
  14. ZFS zfs_send_corrupt_data parameter not working

I found this: "The 'Invalid exchange' error you're seeing is EBADE which was what ZFS uses internally to report a checksum error." (here) - which doesn't make sense, because cat /sys/module/zfs/parameters/zfs_send_corrupt_data still returns 1. It again looks to me as if the code seen here doesn't...
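
    For reference, zfs_send_corrupt_data is a ZFS module parameter that, when set to 1, is meant to make zfs send substitute a fixed pattern for unreadable blocks instead of aborting the stream. A hedged sketch of checking and setting it (whether a given send path honors it depends on the ZFS version):

    ```
    # Check the current value (1 = replace unreadable blocks instead of failing)
    cat /sys/module/zfs/parameters/zfs_send_corrupt_data

    # Enable at runtime
    echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data

    # Or persistently, in /etc/modprobe.d/zfs.conf:
    #   options zfs zfs_send_corrupt_data=1
    ```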
