Recent content by vanes

  1. Proxmox V6 Servers freeze, Zvol blocked for more than 120s

    This is my before/after from adding "options zfs zfs_vdev_scheduler=none" to "/etc/modprobe.d/zfs.conf":
    Before:
    root@C236:~# cat /sys/module/zfs/parameters/zfs_vdev_scheduler
    noop
    After:
    root@C236:~# cat /sys/module/zfs/parameters/zfs_vdev_scheduler
    none
    Before/after (no difference):
    root@C236:~#...
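
    For reference, a minimal sketch of how this parameter is typically set and verified; the update-initramfs and pve-efiboot-tool steps are my assumption for a UEFI/systemd-boot install like the one described in these posts:

        # /etc/modprobe.d/zfs.conf
        options zfs zfs_vdev_scheduler=none

        # rebuild the initramfs and, on a UEFI/systemd-boot install, sync it to the ESP
        update-initramfs -u
        pve-efiboot-tool refresh

        # verify after reboot
        cat /sys/module/zfs/parameters/zfs_vdev_scheduler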
  2. Proxmox V6 Servers freeze, Zvol blocked for more than 120s

    I think I'm in the same boat. Here is my post and diagnostics: https://forum.proxmox.com/threads/proxmox-ve-6-0-released.56001/post-258777 I tried this https://forum.proxmox.com/threads/proxmox-ve-6-0-released.56001/post-259157 , but it didn't help; the server still crashes during a scrub after some uptime...
  3. Docker in LXC problem after PVE kernel update.

    @Stefan_R thanks, the workaround helped. I created a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=2328
  4. Docker in LXC problem after PVE kernel update.

    Yesterday I updated PVE 6.0 to the latest kernel, and Docker in an LXC container stopped working. Need some help. When I run "docker run hello-world" I get this:
    root@Docker-LXC:~# docker run hello-world
    docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting...
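
    This is not necessarily the workaround referenced later in this thread, but for context, a common prerequisite for Docker inside an LXC container on PVE is enabling the nesting and keyctl features; a sketch, with container ID 100 as a placeholder:

        # stop the container, enable nesting/keyctl, start it again
        pct stop 100
        pct set 100 --features nesting=1,keyctl=1
        pct start 100

        # re-test inside the container
        pct exec 100 -- docker run hello-world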
  5. Proxmox VE 6.0 released!

    root@j4205:~# cat /proc/cmdline
    initrd=\EFI\proxmox\5.0.15-1-pve\initrd.img-5.0.15-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs elevator=none
    This did the trick. Should I remove "elevator=none" from "/etc/kernel/cmdline" and then run pve-efiboot-tool refresh?
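
    A sketch of how that change would be applied on a systemd-boot (UEFI) install, using the steps named in the post:

        # remove "elevator=none" from the single-line kernel command line
        nano /etc/kernel/cmdline

        # copy the updated configuration to all registered EFI system partitions
        pve-efiboot-tool refresh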
  6. Proxmox VE 6.0 released!

    I did this, but after reboot the scheduler is still mq-deadline:
    root@j4205:~# for blk in /sys/block/s*; do echo -n "$blk: "; cat "$blk/queue/scheduler"; done
    /sys/block/sda: [mq-deadline] none
    /sys/block/sdb: [mq-deadline] none
    /sys/block/sdc: [mq-deadline] none
    /sys/block/sdd: [mq-deadline] none
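
    For comparison, the scheduler can also be switched at runtime per disk; this takes effect immediately but does not survive a reboot. A sketch:

        # set the "none" scheduler on one disk and confirm
        echo none > /sys/block/sda/queue/scheduler
        cat /sys/block/sda/queue/scheduler

        # or for all sd* disks at once
        for blk in /sys/block/sd*; do echo none > "$blk/queue/scheduler"; done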
  7. Proxmox VE 6.0 released!

    My disk config on both servers is a 4-HDD RAID10 ZFS rpool with UEFI boot and 2 SSDs attached, but the SSDs are not in use now. I disconnected the SSDs from one server and am going to test a scrub without them after some uptime. Does that make sense?
  8. Proxmox VE 6.0 released!

    root@c236:~# for blk in /sys/block/s*; do echo -n "$blk: "; cat "$blk/queue/scheduler"; done
    /sys/block/sda: [mq-deadline] none
    /sys/block/sdb: [mq-deadline] none
    /sys/block/sdc: [mq-deadline] none
    /sys/block/sdd: [mq-deadline] none
    /sys/block/sde: [mq-deadline] none
    /sys/block/sdf...
  9. Proxmox VE 6.0 released!

    It's a bare-metal install, root on ZFS RAID10, UEFI boot, ASRock E3C236D2I, Intel Pentium G4560T, 16 GB ECC RAM. The second test/backup/home server is on a consumer J4205 board. I started one PuTTY session with "dmesg -wT" and another with "journalctl -f" and reproduced the problem (zpool scrub rpool in the web shell); the server hangs...
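
    A sketch of the reproduction setup described above, run from two SSH sessions plus the web shell:

        # session 1: kernel messages with human-readable timestamps
        dmesg -wT

        # session 2: follow the system journal
        journalctl -f

        # web shell: trigger the scrub that provokes the hang
        zpool scrub rpool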
  10. Proxmox VE 6.0 released!

    "zpool scrub rpool" causes server hang (need reset) when done from web-shell in pve6 (clean root on zfs raid10 install, uefi boot) tested on two servers. seems like bug. When i do it (zpool scrub rpool) from putty everything fine.
  11. Proxmox VE 6.0 released!

    Just installed a clean 6.0 root on ZFS with UEFI boot and am trying to limit the ARC size. The way I did it before (add "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then "update-initramfs -u", then reboot) does not work anymore with UEFI boot. The command "echo 2147483648 >...
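
    A sketch of the ARC-limit recipe adapted for a UEFI/systemd-boot install; the extra pve-efiboot-tool step is my assumption for why the old recipe stopped working (without it the rebuilt initramfs never reaches the EFI system partition):

        # /etc/modprobe.d/zfs.conf — limit the ARC to 2 GiB
        options zfs zfs_arc_max=2147483648

        update-initramfs -u
        pve-efiboot-tool refresh   # sync the new initramfs to the ESP
        reboot

        # verify after reboot
        cat /sys/module/zfs/parameters/zfs_arc_max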
  12. Proxmox VE 6.0 beta released!

    I am trying to limit ZFS memory usage on the PVE 6 beta using this manual: https://pve.proxmox.com/wiki/ZFS_on_Linux. I added "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then ran "update-initramfs -u", then rebooted. After reboot I ran "arcstat" and see: time read miss miss%...
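
    To check whether the limit was actually applied, reading the module parameter is more direct than interpreting arcstat output; a sketch:

        # 0 means "use the default"; otherwise this is the configured byte limit
        cat /sys/module/zfs/parameters/zfs_arc_max

        # watch the ARC size column ("arcsz" or "size", depending on version) for a few samples
        arcstat 1 5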
  13. Proxmox VE 6.0 beta released!

    I meant the boot partitions when using ZFS root via UEFI (Proxmox 6 uses systemd-boot instead of GRUB when booting UEFI on ZFS). How do I make a new disk bootable? If possible, please write a short manual. I figured it out. (zpool set autotrim=on rpool)
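
    For the boot-partition question, PVE 6's systemd-boot tooling can initialize the ESP on a replacement disk; a sketch, assuming the new disk's second partition is the 512 MB EFI system partition (/dev/sdX2 is a placeholder):

        # format and register the new EFI system partition
        pve-efiboot-tool format /dev/sdX2
        pve-efiboot-tool init /dev/sdX2

        # copy kernels and initramfs images to all registered ESPs
        pve-efiboot-tool refresh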