Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

It seems Proxmox VE 5.0-34 with kernel 4.13 and ZFS 0.7.2 does not respect zfs_arc_max.

Both arc_summary.py and "cat /proc/spl/kstat/zfs/arcstats | grep -C1 c_max" still show c_max at 50% of RAM, even though "cat /sys/module/zfs/parameters/zfs_arc_max" returns the same value that is set in "/etc/modprobe.d/zfs.conf".

How do I properly limit zfs_arc_max in Proxmox VE 5.0-34? :)
 
Just a general improvement recommendation, but shouldn't the installer create the pool by disk ID instead of /dev/sdX, as per the ZoL best-practice recommendation? As it stands, we have to select ZFS RAID 1 during install, change the pool to use the IDs, reboot, add the new disks, partition them, and then add them to the pool to convert it to RAID 10 - all of which could be done by the installer.
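Today the manual conversion looks roughly like this (just a sketch; the pool name "rpool" and the disk IDs are placeholders, and for the root pool the export/import step would have to be done from a rescue or live environment):
Code:
# re-import the pool so it references /dev/disk/by-id instead of /dev/sdX
zpool export rpool
zpool import -d /dev/disk/by-id rpool

# after adding and partitioning the new disks, add a second mirror vdev
# to turn the single mirror into a striped-mirror (RAID 10) layout
zpool add rpool mirror \
    /dev/disk/by-id/ata-DISK3_SERIAL-part2 /dev/disk/by-id/ata-DISK4_SERIAL-part2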

see other threads here in the forum - because of a limitation of the installer environment this is currently (unfortunately) not possible.
 
How can I upgrade the pool when I get this error?
Code:
# zpool upgrade milliways
This system supports ZFS pool feature flags.

cannot set property for 'milliways': invalid argument for this pool operation
Code:
Oct 23 19:01:38 milliways systemd[1]: Stopping ZFS Event Daemon (zed)...
Oct 23 19:01:38 milliways systemd[1]: Stopped ZFS Event Daemon (zed).
Oct 23 19:01:38 milliways systemd[1]: Started ZFS Event Daemon (zed).
Oct 23 19:01:38 milliways zed[12245]: ZFS Event Daemon 0.7.2-pve1~bpo90 (PID 12245)
Oct 23 19:01:38 milliways zed[12245]: Processing events since eid=0

(not allowed to post any links, remove xx)
hxxps://image.ibb.co/mniLY6/Ska_rmavbild_2017_10_23_kl_19_02_05.png

zpool events

Code:
TIME                           CLASS
internal error: Bad file descriptor
Aborted

The whole issue looks like this one:
hxxps://github.com/zfsonlinux/zfs/issues/4720

did you reboot after upgrading to get the new kernel (and the 0.7.2 ZFS/SPL kernel modules)?
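You can check whether the running kernel and the loaded ZFS/SPL modules actually match the new userland with something like this (a generic sketch, not specific to your setup):
Code:
# kernel currently running
uname -r
# version of the loaded ZFS/SPL kernel modules
cat /sys/module/zfs/version
cat /sys/module/spl/version
# installed userland and kernel packages
dpkg -l | grep -E 'zfsutils|spl|pve-kernel'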
 
It seems Proxmox VE 5.0-34 with kernel 4.13 and ZFS 0.7.2 does not respect zfs_arc_max.

Both arc_summary.py and "cat /proc/spl/kstat/zfs/arcstats | grep -C1 c_max" still show c_max at 50% of RAM, even though "cat /sys/module/zfs/parameters/zfs_arc_max" returns the same value that is set in "/etc/modprobe.d/zfs.conf".

How do I properly limit zfs_arc_max in Proxmox VE 5.0-34? :)

limiting the ARC via zfs_arc_max should work as always; if it does not, please open a new thread and include "pveversion -v" and all the ZFS output!
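For reference, the usual procedure looks roughly like this (a sketch only; the 8 GiB value is just an example, and the update-initramfs step matters in particular when the root filesystem is on ZFS):
Code:
# /etc/modprobe.d/zfs.conf - limit the ARC to 8 GiB (value in bytes)
options zfs zfs_arc_max=8589934592

# rebuild the initramfs so the limit is applied at boot, then reboot
update-initramfs -u -k all
reboot

# after the reboot, verify the limit
cat /sys/module/zfs/parameters/zfs_arc_max
grep c_max /proc/spl/kstat/zfs/arcstats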
 
did you reboot after upgrading to get the new kernel (and the 0.7.2 ZFS/SPL kernel modules)?

Hi,
I have rebooted in between; however, it seems that ZFS has been updated to 0.7.2 but I'm still on kernel 4.4.6-1 PVE.
 
Hi,
I have rebooted in between; however, it seems that ZFS has been updated to 0.7.2 but I'm still on kernel 4.4.6-1 PVE.

okay, let's move this to a new thread and continue there ;) Please include "pveversion -v" and the content of "/boot/grub/grub.cfg".
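In the meantime, one way to check which kernel GRUB will actually boot (just a sketch):
Code:
pveversion -v
# kernels currently listed in grub.cfg
grep -E '^(menuentry|submenu)' /boot/grub/grub.cfg
# default boot entry
grep GRUB_DEFAULT /etc/default/grub
# regenerate grub.cfg if a newly installed kernel is missing from it
update-grub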
 
