This is my before/after from adding "options zfs zfs_vdev_scheduler=none" to "/etc/modprobe.d/zfs.conf":
Before
root@C236:~# cat /sys/module/zfs/parameters/zfs_vdev_scheduler
noop
After
root@C236:~# cat /sys/module/zfs/parameters/zfs_vdev_scheduler
none
Before/after (no difference)
root@C236:~#...
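For anyone trying the same change on a root-on-ZFS UEFI install, the sequence that should apply it looks roughly like this — a sketch, assuming the zfs module is loaded from the initramfs, so the option has to be baked into it and the rebuilt initramfs copied to the ESP:

# persist the module option
echo "options zfs zfs_vdev_scheduler=none" > /etc/modprobe.d/zfs.conf
# rebuild the initramfs so the option is present at module load time
update-initramfs -u
# on UEFI/systemd-boot installs, sync the new initramfs to the EFI partition(s)
pve-efiboot-tool refresh
reboot
# verify after the reboot
cat /sys/module/zfs/parameters/zfs_vdev_scheduler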
I think I'm in the same boat.
Here are my post and diagnostics: https://forum.proxmox.com/threads/proxmox-ve-6-0-released.56001/post-258777
I tried this https://forum.proxmox.com/threads/proxmox-ve-6-0-released.56001/post-259157 , but it didn't help; the server still crashes during a scrub after some uptime...
Yesterday I updated PVE 6.0 to the latest kernel, and then Docker in an LXC container stopped working. I need some help.
When I run docker run hello-world I get this:
root@Docker-LXC:~# docker run hello-world
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting...
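Not sure this is the cause here, but a common fix for Docker inside a Proxmox LXC container is enabling the nesting and keyctl features on the container — a sketch, with 100 as a placeholder container ID:

# on the PVE host (container stopped; 100 is a placeholder ID)
pct set 100 --features nesting=1,keyctl=1
pct start 100
# then retest inside the container
docker run hello-world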
root@j4205:~# cat /proc/cmdline
initrd=\EFI\proxmox\5.0.15-1-pve\initrd.img-5.0.15-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs elevator=none
This did the trick. Should I remove "elevator=none" from "/etc/kernel/cmdline" and then run pve-efiboot-tool refresh?
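For reference, the change itself would be something like this — a sketch, assuming /etc/kernel/cmdline holds the single-line kernel command line used by systemd-boot:

# edit /etc/kernel/cmdline and drop the elevator=none token, leaving e.g.:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs
nano /etc/kernel/cmdline
# write the updated command line out to the ESP boot entries
pve-efiboot-tool refresh
reboot
# verify
cat /proc/cmdline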
My disk config on both servers is a 4-HDD RAID10 ZFS rpool with UEFI boot and 2 SSDs attached, but the SSDs are not in use now. I disconnected the SSDs from one server and am going to test a scrub without them after some uptime. Does that make sense?
It's a bare-metal install, root on ZFS RAID10, UEFI boot, ASRock E3C236D2I, Intel Pentium G4560T, 16 GB ECC RAM. The second test/backup/home server is on a consumer J4205 board.
I started one PuTTY session with "dmesg -wT" and another with "journalctl -f", then reproduced the problem (zpool scrub rpool in the web shell); the server hung...
"zpool scrub rpool" causes server hang (need reset) when done from web-shell in pve6 (clean root on zfs raid10 install, uefi boot)
tested on two servers.
seems like bug. When i do it (zpool scrub rpool) from putty everything fine.
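For anyone who wants to reproduce or capture this, the setup was essentially:

# SSH/PuTTY session 1: kernel log with human-readable timestamps
dmesg -wT
# SSH/PuTTY session 2: follow the journal
journalctl -f
# then trigger the hang from the PVE web shell:
zpool scrub rpool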
Just installed a clean 6.0 root on ZFS with UEFI boot and am trying to limit the ARC size. The way I did it before (add "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then "update-initramfs -u", then reboot) no longer works with UEFI boot.
Command "echo 2147483648 >...
I am trying to limit ZFS memory usage on PVE 6 beta using this manual: https://pve.proxmox.com/wiki/ZFS_on_Linux
I added "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then ran "update-initramfs -u", then rebooted.
After the reboot I ran "arcstat" and see:
time read miss miss%...
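My guess is that on UEFI/systemd-boot installs the rebuilt initramfs also has to be copied to the EFI system partition, since the module options are baked into the initramfs. A sketch of the full sequence under that assumption:

echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u
# sync the new initramfs to the ESP(s) on systemd-boot installs
pve-efiboot-tool refresh
reboot
# afterwards the "c" (ARC target size) column in arcstat should stay near 2.0G
arcstat
cat /sys/module/zfs/parameters/zfs_arc_max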
I meant the boot partitions when using ZFS root via UEFI (Proxmox 6 uses systemd-boot instead of GRUB when booting ZFS via UEFI). How do I make a new disk bootable? If possible, please write a short how-to.
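For what it's worth, my understanding of the PVE 6 replacement procedure is roughly the following — a sketch with placeholder device names (/dev/sdY is a healthy mirror member, /dev/sdX the new disk), not an official manual:

# copy the partition table from the healthy disk, then randomize the GUIDs
sgdisk /dev/sdY -R /dev/sdX
sgdisk -G /dev/sdX
# replace the failed device in the pool (partition 3 carries ZFS on a default install)
zpool replace rpool <old-device-or-guid> /dev/sdX3
# make the new disk bootable: format its ESP and register it with systemd-boot
pve-efiboot-tool format /dev/sdX2
pve-efiboot-tool init /dev/sdX2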
I figured it out. (zpool set autotrim=on rpool)
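To verify it, and to kick off a one-time trim by hand (stock ZFS 0.8 commands):

zpool get autotrim rpool   # should now report autotrim=on
zpool trim rpool           # manual trim of the whole pool
zpool status -t rpool      # -t shows per-vdev trim status/progress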
Hi guys! I need some help. I installed Proxmox 6 beta root on ZFS RAID1 on 2 SATA SSDs (UEFI boot). Everything seems fine, but what is the disk replace procedure now? Is it the same as in the previous version without UEFI boot? The second question is about TRIM. Does TRIM work automatically or does it need to be set up, for...
Hi, I successfully use the HD 610 GPU of my Pentium G4560T on an ASRock E3C236D2I for Plex/Emby transcoding in an LXC container. Some motherboards with IPMI can do that, some cannot. I think there is a good chance that your board can use the iGPU for Plex/Emby.
I know that the Supermicro X11SSH-F can do what you...
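In case it helps, the kind of container config that makes this work looks roughly like this — a sketch for a privileged PVE 6 container, assuming the host exposes the iGPU under /dev/dri; device IDs and paths may differ on your system:

# /etc/pve/lxc/<id>.conf (placeholder id) — pass the host's DRI devices through
lxc.cgroup.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir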