Search results for query: ZFS

  1. G

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    ...OS storage: use a BOSS card with 2 M.2 drives. For VM storage, buy your PERC card if you are planning to use local storage for VMware. Buy HBA adapters separately and swap them in for ZFS testing. VMware does not require a RAID controller; I think the majority of VMware customers are using BOSS/equivalent + SAN
  2. K

    Dell PERC and ZFS: can I avoid flashing it in HBA mode?

    ...Dell server that can work both with PVE and VMware. VMware requires a compatible RAID controller; PVE requires NO RAID controller (I want to use ZFS so I can make a ZFS replica to another identical server). I know that modern PERC controllers can be set to expose disks in "non-raid" mode to...
  3. R

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

    All fine, ZFS as boot mirror. CPU(s): 24 x Intel(R) Core(TM) Ultra 9 285K (1 Socket) Kernel Version Linux 7.0.0-1-rc6-pve (2026-03-30T09:17Z) Boot Mode EFI (Secure Boot) Manager Version pve-manager/9.1.7/16b139a017452f16 Well done, thanks!
  4. M

    Proxmox 8.4.1 corrupted, unable to recover backup, to restore vm in another proxmox

    ...from those would be the best option. If that is not possible you can try to mount the pool in read-only mode: zpool import -o readonly=on zfsdata and if that works try to recover any data - if not I'm afraid you are out of luck. There is a user with a similar situation as you [0], who used...
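    The read-only import suggested in this post can be sketched as the following command sequence. This is a hedged outline, not an exact recovery procedure: "zfsdata" is the pool name quoted in the post, and the rescue destination path is a placeholder.

    ```shell
    # Sketch of the read-only recovery attempt described above.
    # "zfsdata" is the pool name from the post; substitute your own pool name.
    zpool import -o readonly=on zfsdata   # import without writing to the damaged pool
    zfs list -r zfsdata                   # check that the datasets are visible
    # If the import succeeds, copy anything recoverable off the pool, e.g.:
    #   rsync -a /zfsdata/ /mnt/rescue/   # /mnt/rescue is a placeholder destination
    zpool export zfsdata                  # cleanly detach the pool when finished
    ```

    A read-only import avoids replaying the intent log or writing any metadata, which is why it can sometimes succeed where a normal import fails on a corrupted pool.
    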
  5. H

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

    runs fine on AMD EPYC 9015 ASRockRack TURIND8UD-2T/X550 zfs mirror intel E810 nic ------------------------- AMD EPYC 3151 4-Core Processor GIGABYTE MJ11-EC1-OT zfs mirror ------------------------- Intel(R) Pentium(R) CPU D1508 @ 2.20GHz Supermicro X10SDV-2C-TLN2F zfs mirror Intel Corporation...
  6. T

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

    Three machines upgraded CPUs: Ryzen 3600, Epyc Rome, Ryzen 5825U Storage: ZFS across all three for root and data. All smooth, no issues, everything operating as expected.
  7. spirit

    VM CPU issues: watchdog: BUG: soft lockup - CPU#7 stuck for 22s!

    This could be related to memory fragmentation. Do you use ZFS?
  8. A

    IOMMU/DMAR Regression (PTE Write access) on Alder Lake-P (Ugreen DXP6800 Pro) – 6.17.13-2-pve vs 6.17.9-1-pve

    ...blocked for more than 1228 seconds. Trace shows wait_for_completion > flush_work > lru_add_drain_all suggesting a possible link with zfs or discs (in my setup). Memtest passed, smart looks clean, I'm back to 6.17.9-1-pve to see if things go back to normal, I had very recently updated to...
  9. T

    VM CPU issues: watchdog: BUG: soft lockup - CPU#7 stuck for 22s!

    ...pmt_class rc_core intel_hid intel_pmc_ssram_telemetry input_leds joydev i2c_algo_bit sparse_keymap intel_vsec acpi_pad acpi_tad mac_hid zfs(PO) nvme_fabrics nvme_core spl(O) nvme_keyring vhost_net nvme_auth vhost efi_pstore vhost_iotlb Apr 02 08:32:13 pve2 kernel: tap nfnetlink dmi_sysfs...
  10. W

    Need help with confusion setting up multiple disk Promox to manage mass storage

    ...storage needs: Proxmox itself and its immediate VM disks and VMs (which are basically VM disks). I believe (?) that a VM disk (living in a ZFS mirrored pool) can be used effectively for downloading media content for caching and then offloaded to mass storage elsewhere (on the USB interface)...
  11. A

    Volume level caching

    ...to set up volume-level caching so that "hot" data is stored on SSDs and "cold" data on mechanical disks. From my research I do NOT want L2ARC with ZFS, as that is not true read/write caching and typically writes are cached in RAM before the transaction log is flushed to disk. (I am aware of the...
  12. L

    [SOLVED] Stuck at "loading initial ramdisk"

    ...Linux 6.8.12-18-pve System: Lenovo M710 SFF, Intel Core i5-7400 3.0 GHz, 8 GB DDR4 memory, OS drive: M.2 drive, 1 TB, TeamGroup. Storage pool is in ZFS, including 3 Samsung SSDs, 2 TB each. Currently running monitor via one of the DisplayPort outputs. Troubleshooting steps taken: 1. Prolonged...
  13. H

    Storage types and replication , NFS and local ZFS in a cluster

    ...horse. My recommendation is just to consider all possibilities and have all information available before being blindsided later down the road. ZFS replication is the easiest, most set and forget configuration for a 2 node cluster. It definitely becomes wasteful and less attractive for a 3...
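    The "set and forget" ZFS replication mentioned here is configured in Proxmox VE with the `pvesr` tool. A minimal sketch, assuming a guest with VM ID 100 and a second node named "pve2" (both placeholders, not from the post):

    ```shell
    # Sketch of a 2-node ZFS replication job with Proxmox's pvesr tool.
    # "100-0" is <guest-id>-<job-number>; "pve2" is a placeholder target node.
    pvesr create-local-job 100-0 pve2 --schedule "*/15"   # replicate VM 100 every 15 minutes
    pvesr status                                          # inspect replication job state
    ```

    Replication jobs like this send incremental ZFS snapshots to the target node, which is what makes failover between the two nodes quick without shared storage.
    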
  14. RolandK

    Proxmox Virtual Environment 9.1 available!

    ...[445562.837128] Call Trace: [445562.837131] <TASK> [445562.837137] __schedule+0x468/0x1310 [445562.837150] ? dbuf_find+0x254/0x260 [zfs] [445562.837591] schedule+0x27/0xf0 [445562.837598] schedule_preempt_disabled+0x15/0x30 [445562.837603] __mutex_lock.constprop.0+0x508/0xa20...
  15. D

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

    ...so far! (Uptime is less than 15 minutes so... will report back later...) System details: System: Dell PowerEdge R740XD CPU: Intel Xeon Gold 6154 ZFS as root-filesystem on SATA PM883's ZFS as VM-storage filesystem running NVMe Kioxia CD8-R's. The server does seem to be running a little bit...
  16. R

    ZFS error in journal log

    Sorry for my late response, and thank you for your feedback! As you mentioned, it is only logged once; the difficulty is that those messages are reported at the end of the logs, so it is a timing issue. For now I'll leave it, as it is not a recurring error, only one per ZFS pool after boot. Have a great day.
  17. t.lamprecht

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

    ...Feedback about how the new kernel performs in any of your setups is welcome! Please provide basic details like CPU model, storage types used, ZFS as root file system, and the like, for both positive feedback or if you ran into issues where using the opt-in 7.0 kernel seems to be the likely...
  18. K

    Migration to different storage

    Hello, is it possible to migrate a VM from one node to another (same cluster or not) using Datacenter Manager while selecting a different storage (e.g. source: local-zfs -> destination: local-zfs2)?
  19. LnxBil

    Storage types and replication , NFS and local ZFS in a cluster

    ...and yet much more stable than any other solution without proper built-in HA, like ZFS replication or even NFS or any other SPOF-based setup like OP's. Sure, a 5-node cluster is better than 3, but if you're coming from a two-node VMware cluster, you're not going to have a 5-node Ceph cluster...
  20. H

    Storage types and replication , NFS and local ZFS in a cluster

    ...if you are not adequately prepared. To play devil’s advocate, do you truly need the 3 node environment? Could you not get by with a traditional ZFS replication setup (like you are used to) between two of the servers? The third server could then be a qdevice or a glorified quorum voting PVE...