Search results

  1.

    VPS Hosting providers - why no zfs ?

    ARC min and max should not be equal: set max to min+1 (or min to max-1). Otherwise the limit will not be applied.
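    A minimal sketch of that min/max pairing, assuming a hypothetical 8 GiB ARC cap (the byte values are placeholders, not a recommendation):

    ```shell
    # /etc/modprobe.d/zfs.conf -- persistent setting, values in bytes;
    # keep zfs_arc_min strictly below zfs_arc_max so the cap is enforced
    options zfs zfs_arc_min=8589934591 zfs_arc_max=8589934592

    # apply at runtime without a reboot:
    echo 8589934591 > /sys/module/zfs/parameters/zfs_arc_min
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    ```

    On Proxmox the modprobe.d change usually also needs `update-initramfs -u` to survive a reboot.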
  2.

    RaidZ1 performance ZFS on host vs VM

    If you want to speed up writes you can set sync=disabled. But keep in mind that you can lose some VM data in a power outage; look here - https://kb.blockbridge.com/technote/proxmox-optimizing-windows-server/part-2.html
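    As a sketch, assuming the VM disk lives on a dataset named rpool/data/vm-100-disk-0 (a made-up example name):

    ```shell
    # trade durability for write speed: sync writes are acknowledged
    # before reaching stable storage, so a power loss can lose them
    zfs set sync=disabled rpool/data/vm-100-disk-0

    # check the current value
    zfs get sync rpool/data/vm-100-disk-0

    # revert to the default behaviour
    zfs set sync=standard rpool/data/vm-100-disk-0
    ```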
  3.

    ghostly reboot at midnight

    The server has been running for 19 days without a random reboot. I changed the network card to an Intel 10G model and sped up the CPU cooler. One suspect in my mind was a CPU temperature spike.
  4.

    USB3 passthrough to vm with 10G

    I'm curious, have you tried a speed test?
  5.

    HBA card borked or am I an idiot?

    Try adding mpt3sas.max_queue_depth=10000 to your kernel boot line in /etc/default/grub or /etc/kernel/cmdline.
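    Roughly, for a GRUB-booted system (the existing cmdline contents shown are placeholders):

    ```shell
    # /etc/default/grub -- append the parameter to the existing line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet mpt3sas.max_queue_depth=10000"

    # then regenerate the boot config and reboot
    update-grub

    # on systemd-boot / ZFS-root installs, edit /etc/kernel/cmdline
    # instead and run:
    proxmox-boot-tool refresh
    ```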
  6.

    How to mount ZFS Install on Ubuntu Live to access Proxmox install files?

    You can download the deb directly - http://ftp.no.debian.org/debian/pool/main/d/debsums/debsums_3.0.2.1_all.deb Using a live distro you can mount rpool, bind-mount /dev, /sys and /proc, chroot, and use it as a normal OS. In that environment you can install debsums, inspect files and make changes.
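    The rescue procedure could look roughly like this from the live session (the pool and dataset names assume a default Proxmox install):

    ```shell
    # import the root pool under /mnt instead of the live CD's /
    zpool import -f -R /mnt rpool
    zfs mount rpool/ROOT/pve-1        # default Proxmox root dataset name

    # bind the virtual filesystems the chroot will need
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done

    chroot /mnt /bin/bash
    # inside the chroot: install the downloaded package
    apt install ./debsums_3.0.2.1_all.deb
    ```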
  7.

    How to mount ZFS Install on Ubuntu Live to access Proxmox install files?

    Installing it should be no problem, but if you are struggling you can download it directly - https://packages.debian.org/bookworm/all/debsums/download
  8.

    Yet Another "Poor ZFS performance issue"

    Mixing disks with different performance levels will drag the performance of the whole set down to that of the slowest disk.
  9.

    Lessen Scrub Impact on ZFS Storage

    https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#scrub
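    For example, the scrub I/O concurrency knobs documented on that page can be lowered at runtime (the values here are illustrative, not recommendations):

    ```shell
    # limit concurrent scrub I/Os per vdev so regular traffic wins
    echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
    echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

    # read back the current setting
    cat /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
    ```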
  10.

    How to mount ZFS Install on Ubuntu Live to access Proxmox install files?

    Out of curiosity, I would give debsums a shot.
  11.

    Storage - how to do it right?

    If you want self-healing on a single disk, you have to set copies to more than 1.
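    A sketch, with made-up pool/dataset names; note that copies only affects data written after the change:

    ```shell
    # store two copies of every block on the single-disk pool
    zfs set copies=2 tank/important

    # verify the property
    zfs get copies tank/important
    # existing data keeps one copy until it is rewritten
    ```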
  12.

    [SOLVED] Unable to mount zfs VM disk

    Looking at your storage config, your ZFS nvme_cluster pool must have its mountpoint at /nvme_cluster. The rootfs should be: nvme_cluster:subvol-122-disk-0,size=32G
  13.

    Yet another "ZFS on HW-RAID" Thread (with benchmarks)

    It is good to play with a half-broken HDD to understand how a filesystem reacts to problems.
  14.

    [SOLVED] VM with multiple monitors

    If you are connecting to a Windows VM you can use this manual - https://www.itechtics.com/use-multiple-monitors-rdc/ I think the same could be achieved with Linux too.
  15.

    problem with hotplug and 64GB ram

    Old thread, but I have the same problem. If I set RAM as hotplug with size 122880: TASK ERROR: memory size (122880) must be aligned to 4096 for hotplugging. If I change it to 123904: Kernel panic - not syncing: System is deadlocked on memory. If I remove RAM from hotplug and set the size to 122880...
  16.

    [SOLVED] Unable to mount zfs VM disk

    BTW, is it a standalone server or a cluster?
  17.

    [SOLVED] Unable to mount zfs VM disk

    Let's begin fixing it from the ZFS side. Is your nvme_cluster ZFS pool dedicated to VM data? Then let's set its mountpoint to /nvme_cluster, or /media/nvme_cluster as I do: #zfs set mountpoint=/nvme_cluster nvme_cluster
  18.

    Yet another "ZFS on HW-RAID" Thread (with benchmarks)

    First of all - why do you want to use ZFS? All the other stacks can do most of it too: HW-RAID, mdadm, LVM, plain filesystems, etc. But none of them do data integrity checking, except other filesystems like btrfs. If you don't need data integrity validation before the data reaches your program, then use the regular tools and be happy...
  19.

    [SOLVED] Unable to mount zfs VM disk

    I see the nvme_cluster mountpoint is /. Doesn't it overlap with rpool? Or is your OS in nvme_cluster? I think this is how it should be. I see you are making small mistakes at every step.
  20.

    How to mount ZFS Install on Ubuntu Live to access Proxmox install files?

    When mounting from a live CD you can use -R: zpool import pool -R /another/mount/point. That way you can do what you need without mixing with the live CD's / mountpoint. Or use -N: import the pool without mounting any file systems. P.S. It is OK for /rpool/PVE and /rpool/DATA to be empty.