Search results for query: ZFS

  1. U

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    ...AMD EPYC 9474F 48C (Zen4), DDR5 - AMD EPYC 7402p 24C (Zen2), DDR4 - Intel XL710 10G NICs VMs: mostly OpenBSD Linux Windows 10 Configuration: HA ZFS Pool Backup via Proxmox Backup Systems PBS: - AMD EPYC 7401p 24C (Zen1), DDR4 - AMD Ryzen 3 3200, 4C (Zen+), DDR4 - Intel XL710 10G NICs...
  2. U

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    ...storage devices are connected) [PCIE 4 x4] Intel X710-DA2 [M.2 Gen5 x4] WDS200T4X0E-EC (Pass-through to a virtual machine) Storage [Boot Volume] ZFS RAID0 KPM5XMUG400G x1 [Local VM Volume] ZFS RAID1 HUSMM3280ASS201 x2 [Disk] HUSMR3232ASS200 x2 (Pass-through to a virtual machine) [Disk]...
  3. M

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    ...Kernel Version - Linux 7.0.0-1-rc6-pve (2026-03-30T09:17Z) Boot Mode - EFI (Secure Boot) Manager Version - pve-manager/9.1.7/16b139a017452f16 No ZFS here, all LVM (thin volumes for containers and VMs). Everybody's hardware is different, so we expect to see different results. In my environment...
  4. B

    [Virtual Machine Config] Windows 11 Pro Memory Integrity: Does it require nested virtualization?

    ...What was achieved so far: - compile openvmm - create an Alpine VM using openvmm instead of qemu - on a sparse ZFS image - with network access via ssh - console access via TMUX - finally you get a Linux VM on Proxmox which uses openvmm on top of KVM instead of qemu, providing...
  5. SteveITS

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    ...we installed PVE after breaking the hardware RAID and setting the SAS disks to JBOD. The OS sees the drive model/serial directly and the ZFS boot mirror is operating fine. It's been running PVE about a year. Typically in my experience a hardware RAID will show the RAID controller as the...
  6. K

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    Yes, I understand this is the idea, and I have configured one like this already (one raid5 virtual disk and a single physical disk) in a different scenario. What I don't really know is if this is fine or has issues with ZFS.
  7. K

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    Thanks, I definitely don't want to lose ZFS features by using it over a raid setup. If the controller can be completely bypassed by setting it properly and not flashing it, then it would be fine. Even better if I can actually have a hardware raid 1 for the OS and then a ZFS raidz-1 for data...
  8. K

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    In our scenario VMware would be using local storage for boot and data, like PVE would. The difference is that VMware needs a RAID controller (an approved one, too), while PVE works with mostly anything that is non-RAID. The idea here is to buy hardware that we can run with both software...
  9. J

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    ...times that there is no need to risk breaking your PERC by flashing. So IMHO you should be good with a "non-raid" setting. You can also use ZFS on HW RAID if you know what you are doing, and it comes with some caveats, like losing some ZFS features. I don't have any experience with such a...
  10. G

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    ...OS storage: use a BOSS card with 2 M.2 drives. For VM storage, buy your PERC card if you are planning to use local storage for VMware. Buy HBA adapters separately and swap out for ZFS testing. VMware does not require a RAID controller; I think the majority of VMware customers are using BOSS/equivalent + SAN
  11. K

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    ...Dell server that can work both with PVE and VMware. VMware requires a compatible RAID controller; PVE requires NO RAID controller (I want to use ZFS so I can make a ZFS replica to another identical server). I know that modern PERC controllers can be set to expose disks in "non-raid" mode to...
  12. R

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    All fine, ZFS as boot mirror. CPU(s): 24 x Intel(R) Core(TM) Ultra 9 285K (1 Socket) Kernel Version: Linux 7.0.0-1-rc6-pve (2026-03-30T09:17Z) Boot Mode: EFI (Secure Boot) Manager Version: pve-manager/9.1.7/16b139a017452f16 Well done, thanks!
  13. M

    Proxmox 8.4.1 corrupted, unable to recover backup, to restore vm in another proxmox

    ...from those would be the best option. If that is not possible, you can try to mount the pool in read-only mode: zpool import -o readonly=on zfsdata, and if that works, try to recover any data - if not, I'm afraid you are out of luck. There is a user with a similar situation as you [0], who used...
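    The snippet above references the standard OpenZFS read-only import. A minimal recovery sketch, assuming the pool name zfsdata from the post and a hypothetical rescue target mounted at /mnt/rescue:

    ```shell
    # Import the damaged pool read-only so nothing further is written to it.
    zpool import -o readonly=on zfsdata

    # If the import succeeds, list the datasets and copy data off the pool.
    zfs list -r zfsdata
    rsync -a /zfsdata/ /mnt/rescue/   # /mnt/rescue is a hypothetical destination
    ```

    A read-only import avoids making the corruption worse; if it fails, the log-rewind options described in the zpool-import man page are the usual next step, at higher risk.
    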
  14. H

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    runs fine on AMD EPYC 9015 ASRockRack TURIND8UD-2T/X550 zfs mirror intel E810 nic ------------------------- AMD EPYC 3151 4-Core Processor GIGABYTE MJ11-EC1-OT zfs mirror ------------------------- Intel(R) Pentium(R) CPU D1508 @ 2.20GHz Supermicro X10SDV-2C-TLN2F zfs mirror Intel Corporation...
  15. T

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    Three machines upgraded CPUs: Ryzen 3600, Epyc Rome, Ryzen 5825U Storage: ZFS across all three for root and data. All smooth, no issues, everything operating as expected.
  16. spirit

    VM CPU issues: watchdog: BUG: soft lockup - CPU#7 stuck for 22s!

    This could be related to memory fragmentation. Do you use ZFS?
  17. A

    IOMMU/DMAR Regression (PTE Write access) on Alder Lake-P (Ugreen DXP6800 Pro) – 6.17.13-2-pve vs 6.17.9-1-pve

    ...blocked for more than 1228 seconds. The trace shows wait_for_completion > flush_work > lru_add_drain_all, suggesting a possible link with ZFS or disks (in my setup). Memtest passed, SMART looks clean. I'm back to 6.17.9-1-pve to see if things go back to normal; I had very recently updated to...
  18. T

    VM CPU issues: watchdog: BUG: soft lockup - CPU#7 stuck for 22s!

    ...pmt_class rc_core intel_hid intel_pmc_ssram_telemetry input_leds joydev i2c_algo_bit sparse_keymap intel_vsec acpi_pad acpi_tad mac_hid zfs(PO) nvme_fabrics nvme_core spl(O) nvme_keyring vhost_net nvme_auth vhost efi_pstore vhost_iotlb Apr 02 08:32:13 pve2 kernel: tap nfnetlink dmi_sysfs...
  19. W

    Need help with confusion setting up multiple disk Promox to manage mass storage

    ...storage needs: Proxmox itself and its immediate VM disks and VMs (which are basically VM disks). I believe (?) that a VM disk (living in a ZFS mirrored pool) can be used effectively for downloading media content for caching, then offloaded to mass storage elsewhere (on the USB interface)...
  20. A

    Volume level caching

    ...to set up volume-level caching so that "hot" data is stored on SSDs and "cold" data on mechanical disks. From my research I do NOT want L2ARC with ZFS, as that is not true read/write caching, and typically writes are cached in RAM before the transaction log is flushed to disk. (I am aware of the...