Search results

  1. N

    ghostly reboot at midnight

This is a standalone server. If kernel 6.2.16-18-pve works well, I'll update ZFS.
  2. N

    ghostly reboot at midnight

No need to wait for the night. It rebooted again. I'm running kernel 6.2.16-18-pve now with ZFS 2.1.13-pve1 - is it possible to update this kernel to ZFS 2.1.14?
  3. N

    Proxmox Full Disk Encryption with ZFS Raid1 on LUKS | A couple last questions

My story of LUKS and ZFS: for one server I needed to encrypt as much as possible, with automatic password entry at boot. I set it up using Proxmox 5.0 at the time, and the system has been running ever since. Partitions: I split the system RAID disks into unencrypted boot partitions...
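A minimal sketch of the layout the post describes (ZFS RAID1 on top of LUKS), assuming two hypothetical data partitions `/dev/sda3` and `/dev/sdb3` alongside the unencrypted boot partitions; adjust names to your hardware:

```shell
# Encrypt each data partition with LUKS, then open the mappings (run as root)
cryptsetup luksFormat /dev/sda3
cryptsetup luksFormat /dev/sdb3
cryptsetup open /dev/sda3 crypt1
cryptsetup open /dev/sdb3 crypt2

# Build the ZFS mirror (RAID1) on top of the opened LUKS devices
zpool create -o ashift=12 rpool mirror /dev/mapper/crypt1 /dev/mapper/crypt2
```

For the automatic unlock at boot mentioned in the post, the usual approach is a keyfile referenced from `/etc/crypttab`, but the details depend on where the key is stored.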
  4. N

    ghostly reboot at midnight

Hello fiona, Last night it happened again. BIOS and CPU microcode are up to date. External log monitoring didn't give a clue. The system worked very well for a long time before this. On 2024-01-07 I did an update; perhaps I need to go back to proxmox-kernel-6.2.16-18-pve
  5. N

    Proxmox randomly reboots while doing backup job

Same problem (post), but I use my own script for backup sync, and I'm not sure whether it really happens because of the backups
  6. N

    ghostly reboot at midnight

The server has rebooted unattended for the last 3 nights. The first night I thought it might have something to do with sending backups to another server: in this process, the backup server is started via IPMI and shut down after completion. But in the morning the backup...
  7. N

    ZFS poor performance when high IO load on one pool

All ZFS pools on the same host share the same ZFS memory. I don't know whether you are affected on the read (ARC) or write (dirty cache) side. I can suggest lowering the dirty cache or changing zfs_txg_timeout. Whether it will help you, or help at all, I don't know.
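A sketch of the two tunables the post mentions, assuming a Linux host with the OpenZFS module loaded; the values shown are illustrative, not recommendations:

```shell
# Inspect the current values at runtime via the module parameters
cat /sys/module/zfs/parameters/zfs_txg_timeout      # seconds between txg syncs
cat /sys/module/zfs/parameters/zfs_dirty_data_max   # dirty (write) cache ceiling, bytes

# Change them at runtime (as root); takes effect immediately
echo 5 > /sys/module/zfs/parameters/zfs_txg_timeout
echo $((2 * 1024**3)) > /sys/module/zfs/parameters/zfs_dirty_data_max  # 2 GiB

# Persist across reboots via modprobe options
cat > /etc/modprobe.d/zfs-tuning.conf <<'EOF'
options zfs zfs_txg_timeout=5
options zfs zfs_dirty_data_max=2147483648
EOF
update-initramfs -u   # Debian/Proxmox: rebuild initramfs so the options apply at boot
```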
  8. N

    Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

ZFS 2.2.0 has a lot of bugs. I prefer to wait for a 'more' stable release. 2.2.1 is out: https://github.com/openzfs/zfs/releases/tag/zfs-2.2.1
  9. N

    RCU context switch and network device problem

    As spectre-meltdown-checker is saying: > SUMMARY: CVE-2017-5753:OK CVE-2017-5715:OK CVE-2017-5754:OK CVE-2018-3640:OK CVE-2018-3639:OK CVE-2018-3615:OK CVE-2018-3620:OK CVE-2018-3646:OK CVE-2018-12126:OK CVE-2018-12130:OK CVE-2018-12127:OK CVE-2019-11091:OK CVE-2019-11135:OK CVE-2018-12207:OK...
  10. N

    RCU context switch and network device problem

Hello, I'm preparing a server with a new installation of Proxmox 8. I have an HP Ethernet 10Gb 2-port 557SFP+ Adapter, and this device triggers this kernel message at startup: Kernel: 6.2.16-10-pve [ 25.252287] ------------[ cut here ]------------ [ 25.252837] Voluntary context switch within RCU...
  11. N

    rcu_sched self-detected stall on CPU

This is an old thread, but I ran into this problem today too. Choosing recovery mode from GRUB, I saw a controller problem. Switching from the default (LSI 53C895A) to VirtIO SCSI solved the problem. It was the last VM running with the LSI 53C895A controller, and a kernel upgrade somehow coincided with the...
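The controller switch described above can be done from the Proxmox CLI; a sketch assuming a hypothetical VM ID 100:

```shell
# Check the current SCSI controller of the VM
qm config 100 | grep scsihw

# Switch from the old default (LSI 53C895A) to VirtIO SCSI
qm set 100 --scsihw virtio-scsi-pci
```

Note that the guest needs VirtIO drivers: Linux guests ship them in-kernel, while Windows guests need the drivers installed before the switch, or the disk will not be found at boot.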
  12. N

    Proxmox VE 6.0 released!

I just noticed that ksmtuned was missing after upgrading to Proxmox 6
  13. N

    [PVE 6.0] Cannot set ZFS arc_min and arc_max

As I said, in 0.7 changing the ARC size took effect almost immediately. After this I set the ARC size back to 12G with echo, but the ARC is stuck at 5/6. Four hours have passed. I think some new settings must be involved.
  14. N

    [PVE 6.0] Cannot set ZFS arc_min and arc_max

    Current ARC:
    # arc_summary
    ------------------------------------------------------------------------
    ZFS Subsystem Report                    Thu Aug 08 10:27:10 2019
    Linux 5.0.18-1-pve                      0.8.1-pve1
    Machine: nmz-lt (x86_64)...
  15. N

    [PVE 6.0] Cannot set ZFS arc_min and arc_max

After the upgrade, ZFS 0.8 works differently from 0.7. I set the ARC size by echoing the parameter, and in 0.8 the change does not take effect immediately.
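A sketch of the echo-based ARC tuning this thread is about, assuming root on a host with OpenZFS 0.8+; the 12 GiB value mirrors the post, and the drop_caches step is a common workaround when the ARC does not shrink on its own:

```shell
# Set ARC limits at runtime (values in bytes)
echo $((12 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max
# zfs_arc_max is ignored if it is not above zfs_arc_min, so lower arc_min first if needed
echo $((4 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_min

# Shrinking the ARC does not evict cached data immediately; dropping caches forces it
echo 3 > /proc/sys/vm/drop_caches

# Persist across reboots
cat > /etc/modprobe.d/zfs-arc.conf <<'EOF'
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=12884901888
EOF
update-initramfs -u
```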
  16. N

    Recommended ZFS pool setup?

Yes, more disks = more speed/IO. But when creating a ZFS pool you must think about how much time you have to replace a dead/damaged disk; RAIDZ3, for example, gives you more time than the other options. I had a strange situation where one disk went offline for an unknown reason and ZFS stopped writing to it. Pool...
  17. N

    Recommended ZFS pool setup?

This is a single pool and it's very unbalanced in every way. You must choose what you want. If your budget allows 5 big SSDs, then do RAIDZ2.
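The 5-disk RAIDZ2 suggestion looks like this, with hypothetical device paths standing in for your actual SSDs:

```shell
# RAIDZ2 across 5 disks: any 2 can fail, usable capacity is roughly 3 disks' worth
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 /dev/disk/by-id/ssd3 \
    /dev/disk/by-id/ssd4 /dev/disk/by-id/ssd5
```

Using `/dev/disk/by-id/` paths rather than `/dev/sdX` keeps the pool stable when disks are re-enumerated after a reboot.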
  18. N

    Official Documentation on Remove Old Kernels

I'm an old-fashioned user. I do it manually with aptitude and I see no problem with it. The new changes will help you :-)
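The manual approach mentioned above is roughly this, using the kernel package naming that appears elsewhere in these results; the version shown is only an example:

```shell
# Note the running kernel: never remove this one
uname -r

# List installed Proxmox kernel packages
dpkg --list | grep -E 'proxmox-kernel|pve-kernel'

# Remove an old kernel interactively with aptitude
aptitude purge proxmox-kernel-6.2.16-18-pve
```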
  19. N

    XFS corruption on zvol virtual disk

The usage level of the ZFS pool impacts performance, but it does not cause the errors.
