No need to wait for the night. Rebooted again.
I'm running kernel 6.2.16-18-pve now with ZFS 2.1.13-pve1 - is it possible to update this kernel to ZFS 2.1.14?
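For reference, the versions currently in use can be checked like this (package name assumes the standard Proxmox repositories):
uname -r                          # running kernel
zfs version                       # userland and kernel module versions of ZFS
apt-cache policy zfsutils-linux   # what the configured repository currently offers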
My story of LUKS and ZFS
For one server I needed to encrypt as much as possible, with automatic password entry at boot.
I set it up using Proxmox 5.0 at the time, and the system is still running to this day.
Partitions:
For the system RAID disks, I split them into unencrypted boot partitions...
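A rough sketch of the general idea, not the exact layout (device names, partition numbers and the keyfile path are only placeholders; for the automatic unlock the keyfile has to be reachable early at boot, e.g. from the initramfs):
# unencrypted /boot on partition 1, LUKS on partition 2 of each RAID disk
cryptsetup luksFormat /dev/sda2
cryptsetup luksAddKey /dev/sda2 /etc/keys/luks.key       # keyfile for automatic unlock
cryptsetup open --key-file /etc/keys/luks.key /dev/sda2 crypt_sda2

# /etc/crypttab entry so the mapping is opened automatically at boot:
#   crypt_sda2  /dev/sda2  /etc/keys/luks.key  luks

# ZFS mirror on top of the opened LUKS mappings
zpool create rpool mirror /dev/mapper/crypt_sda2 /dev/mapper/crypt_sdb2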
Hello fiona,
Last night it happened again. BIOS and CPU microcode are up to date. External log monitoring didn't give a clue.
The system worked very well for a long time before that. On 2024-01-07 I did an update; perhaps I need to go back to proxmox-kernel-6.2.16-18-pve
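If it comes to that, one way to boot the older kernel by default (on systems managed by proxmox-boot-tool; the version string is the one mentioned above) is:
proxmox-boot-tool kernel list                  # show installed kernels
proxmox-boot-tool kernel pin 6.2.16-18-pve     # make this kernel the default
proxmox-boot-tool kernel unpin                 # revert to the newest kernel later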
The server has rebooted unattended for the last 3 nights.
The first night I thought maybe it had something to do with sending backups to another server. In this process, the backup server is started via IPMI and shut down again after the backup completes. But in the morning the backup...
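A minimal sketch of how such a start/stop could look with ipmitool (host names and credentials are placeholders; the actual script in use may differ):
# wake the backup server over IPMI before the backup job
ipmitool -I lanplus -H backup-bmc.example.com -U admin -P secret chassis power on
# after the backup finishes, shut it down cleanly over SSH
ssh root@backup.example.com poweroff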
All ZFS pools on the same host share the same ZFS memory. I don't know whether you are affected by the read cache (ARC) or the write cache (dirty data). I can suggest lowering the dirty data limit or changing zfs_txg_timeout. Whether it will help in your case, or help at all, I don't know.
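A minimal sketch of how those knobs can be adjusted (the values are only examples, not a recommendation):
# current values
cat /sys/module/zfs/parameters/zfs_dirty_data_max
cat /sys/module/zfs/parameters/zfs_txg_timeout

# change them at runtime
echo 2147483648 > /sys/module/zfs/parameters/zfs_dirty_data_max   # 2 GiB dirty data limit
echo 5 > /sys/module/zfs/parameters/zfs_txg_timeout               # txg sync interval in seconds

# make the change persistent across reboots
echo "options zfs zfs_dirty_data_max=2147483648 zfs_txg_timeout=5" >> /etc/modprobe.d/zfs.conf
update-initramfs -u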
Hello,
I'm preparing a server with a new installation of Proxmox 8. I have an HP Ethernet 10Gb 2-port 557SFP+ Adapter, and this device triggers the following kernel message at startup:
Kernel: 6.2.16-10-pve
[ 25.252287] ------------[ cut here ]------------
[ 25.252837] Voluntary context switch within RCU...
This is an old thread, but I ran into this problem today too.
Choosing recovery mode from GRUB, I saw a controller problem. Switching from the default (LSI 53C895A) to VirtIO SCSI solved the problem.
It was the last VM running with the LSI 53C895A controller, and the kernel upgrade somehow coincided with the...
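For anyone finding this later: the controller type can also be switched from the CLI; a sketch with VM ID 100 as a placeholder (the guest needs VirtIO drivers before it will boot with the new controller):
qm config 100 | grep scsihw          # check the current controller type
qm set 100 --scsihw virtio-scsi-pci  # switch the VM to VirtIO SCSI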
As I said, in 0.7 changing the ARC size took effect almost immediately. After that I set the ARC size back to 12G with echo, but the ARC is stuck at 5-6G. 4 hours have passed. I think some new settings must be involved.
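For reference, the echo in question looks like this (12G expressed in bytes; the ARC target and current size can be watched in arcstats):
echo 12884901888 > /sys/module/zfs/parameters/zfs_arc_max   # 12 GiB
grep -E '^(c |c_max|size)' /proc/spl/kstat/zfs/arcstats     # target (c), maximum (c_max), current size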
Yes, more disks = more speed/IO. But when creating a ZFS pool you must think about how much time you have until you can replace a dead/damaged disk. RAIDZ3, for example, gives you more time compared to the other options. I had a strange situation where one disk went offline for an unknown reason and ZFS stopped writing to it. Pool...
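A minimal sketch of creating such a pool (pool name and disk paths are placeholders; with raidz3 the pool stays writable with up to three failed disks):
zpool create tank raidz3 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
zpool status tank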