Search results for query: ZFS

  1. T

    VM CPU issues: watchdog: BUG: soft lockup - CPU#7 stuck for 22s!

    ...pmt_class rc_core intel_hid intel_pmc_ssram_telemetry input_leds joydev i2c_algo_bit sparse_keymap intel_vsec acpi_pad acpi_tad mac_hid zfs(PO) nvme_fabrics nvme_core spl(O) nvme_keyring vhost_net nvme_auth vhost efi_pstore vhost_iotlb Apr 02 08:32:13 pve2 kernel: tap nfnetlink dmi_sysfs...
  2. W

    Need help with confusion setting up a multiple-disk Proxmox to manage mass storage

    ...storage needs Proxmox itself and its immediate VM disks and VMs (which are basically VM disks). I believe (?) that a VM disk (living in a ZFS mirrored pool) can be used effectively for downloading media content for caching and then offloaded to mass storage elsewhere (on the USB interface)...
  3. A

    Volume level caching

    ...to set up volume-level caching so that "hot" data is stored on SSDs and "cold" data on mechanical drives. From my research, I do NOT want L2ARC with ZFS, as that is not true read/write caching; typically writes are cached in RAM before the transaction log is flushed to disk. (I am aware of the...
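
    A minimal sketch of how the cache devices discussed above are attached to a pool, assuming a hypothetical pool named tank and illustrative device paths (not from the thread):

        # Attach an SSD partition as an L2ARC read cache.
        zpool add tank cache /dev/disk/by-id/nvme-example-part1

        # Attach a mirrored SLOG to absorb synchronous writes (the ZIL);
        # asynchronous writes are still buffered in RAM, as the post notes.
        zpool add tank log mirror /dev/disk/by-id/ssd-a-part1 /dev/disk/by-id/ssd-b-part1

        # Verify the new vdev layout.
        zpool status tank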
  4. L

    [SOLVED] Stuck at "loading initial ramdisk"

    ...Linux 6.8.12-18-pve. System: Lenovo M710 SFF, Intel Core i5-7400 3.0 GHz, 8 GB DDR4 memory. OS drive: 1 TB TeamGroup M.2 drive. Storage pool is in ZFS, comprising 3 Samsung SSDs, 2 TB each. Currently running the monitor via one of the DisplayPort outputs. Troubleshooting steps taken: 1. Prolonged...
  5. H

    Storage types and replication, NFS and local ZFS in a cluster

    ...horse. My recommendation is just to consider all possibilities and have all the information available before being blindsided later down the road. ZFS replication is the easiest, most set-and-forget configuration for a two-node cluster. It definitely becomes wasteful and less attractive for a 3...
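
    As a concrete illustration of the set-and-forget replication mentioned above, a minimal sketch using PVE's built-in pvesr tool, with a hypothetical VM ID, job ID, and node name:

        # Replicate VM 100 to node pve2 every 15 minutes (job ID 100-0).
        pvesr create-local-job 100-0 pve2 --schedule "*/15"

        # Check job state and the time of the last successful sync.
        pvesr status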
  6. RolandK

    Proxmox Virtual Environment 9.1 available!

    ...[445562.837128] Call Trace: [445562.837131] <TASK> [445562.837137] __schedule+0x468/0x1310 [445562.837150] ? dbuf_find+0x254/0x260 [zfs] [445562.837591] schedule+0x27/0xf0 [445562.837598] schedule_preempt_disabled+0x15/0x30 [445562.837603] __mutex_lock.constprop.0+0x508/0xa20...
  7. D

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    ...so far! (Uptime is less than 15 minutes, so... will report back later...) System details: Dell PowerEdge R740XD; CPU: Intel Xeon Gold 6154; ZFS as root filesystem on SATA PM883s; ZFS as VM-storage filesystem on NVMe Kioxia CD8-Rs. The server does seem to be running a little bit...
  8. R

    ZFS error in journal log

    Sorry for my late response, and thank you for your feedback! As you mentioned, it is only logged once; the difficulty is that those messages are reported at the end of the logs, so it is a timing issue. For now I will leave it, as it is not a recurring error, only one per ZFS pool after boot. Have a great day.
  9. t.lamprecht

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test and no-subscription

    ...Feedback about how the new kernel performs in any of your setups is welcome! Please provide basic details like CPU model, storage types used, whether ZFS is the root file system, and the like, both for positive feedback and for cases where using the opt-in 7.0 kernel seems to be the likely...
  10. K

    Migration to different storage

    Hello, is it possible to migrate a VM from one node to another (same cluster or not) using Datacenter Manager while selecting a different storage (e.g. source: local-zfs -> destination: local-zfs2)?
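
    For the within-cluster case, the command-line equivalent of picking a different target storage is a sketch like the following, with hypothetical VM ID, node, and storage names (Datacenter Manager's cross-cluster behavior may differ):

        # Live-migrate VM 100 to node pve2, moving its local disks
        # onto a different storage on the target.
        qm migrate 100 pve2 --online --with-local-disks --targetstorage local-zfs2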
  11. LnxBil

    Storage types and replication, NFS and local ZFS in a cluster

    ...and yet much more stable than any other solution without proper built-in HA, like ZFS replication or even NFS, or any other SPOF-based setup like the OP's. Sure, a 5-node cluster is better than 3, but if you're coming from a two-node VMware cluster, you're not going to have a 5-node Ceph cluster...
  12. H

    Storage types and replication, NFS and local ZFS in a cluster

    ...if you are not adequately prepared. To play devil's advocate, do you truly need the three-node environment? Could you not get by with a traditional ZFS replication setup (like you are used to) between two of the servers? The third server could then be a QDevice or a glorified quorum-voting PVE...
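
    For reference, a minimal sketch of attaching such an external quorum device, assuming a hypothetical third host at 192.0.2.10 running corosync-qnetd (addresses are illustrative; Debian-based packages assumed):

        # On the quorum host: install the qnetd daemon.
        apt install corosync-qnetd

        # On every cluster node: install the client side.
        apt install corosync-qdevice

        # From one cluster node: register the QDevice.
        pvecm qdevice setup 192.0.2.10

        # Confirm the cluster now sees the extra vote.
        pvecm status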
  13. H

    Need help with confusion setting up a multiple-disk Proxmox to manage mass storage

    ...going to start off with an apology of my own ;) Sorry, but let's dumb this down for my own sake. Essentially you have a high-end mini PC with a ZFS mirror for the OS, and a few DAS devices connected over high-speed USB. As far as I understand, you basically have a bunch of media and IoT...
  14. W

    Need help with confusion setting up a multiple-disk Proxmox to manage mass storage

    ...Turnkey FS was a disaster that I suspect is obsolete anyway. Here's the Proxmox host (on a higher-end mini PC) configuration: 1. 1 TB ZFS on a 1 TB NVMe SSD paired with a 2 TB NVMe SSD - Proxmox can have the full 1 TB for Proxmox overhead and magic. 2. 1 TB XFS on the second partition of the 2 TB NVMe SSD...
  15. F

    Storage types and replication, NFS and local ZFS in a cluster

    Thank you for the clarification, LnxBil. I wrongly thought that because the underlying file system of my NFS shares was ZFS, it might have been possible. I now understand that I have nothing to replicate to pve3. I do come from the road you described: ZFS replication inside the cluster with...
  16. LnxBil

    Storage types and replication, NFS and local ZFS in a cluster

    Replicated to what? You need a ZFS source and destination pool. Your NFS cannot be it, so you have only one pool on pve3. I don't see a way with the hardware you have described. I would use a proper HA NFS solution or a SAN (dedicated, as in a box with two controllers, or distributed like Ceph...
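
    The requirement above follows from how ZFS replication works: it streams snapshots from one ZFS pool into another, so an NFS export cannot be the receiving end. A minimal hand-rolled sketch with hypothetical pool, dataset, and host names:

        # Initial full send of a snapshot into a pool on the target host.
        zfs snapshot tank/vmdata@repl1
        zfs send tank/vmdata@repl1 | ssh pve3 zfs receive backup/vmdata

        # Later runs send only the delta between two snapshots.
        zfs snapshot tank/vmdata@repl2
        zfs send -i tank/vmdata@repl1 tank/vmdata@repl2 | ssh pve3 zfs receive backup/vmdata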
  17. J

    [SOLVED] HP problems with Proxmox

    Running VMs on HDDs is generally not best practice ;) But as a data dump it is quite defensible, and that is exactly the use case ZFS was originally developed for; it first saw use in the Solaris file servers Sun was selling. As I understand it, the OP does not want the HDDs...
  18. F

    Storage types and replication, NFS and local ZFS in a cluster

    ...on my third node: pve1 and pve2 have access to the NFS shares named SAS & SATA. pve3 does not use the NFS shares; it has local disks and two ZFS pools named SAS & SATA (both created before joining the cluster) that will be used as a ZFS replication target. The GUI (Datacenter => Storage =>...
  19. R

    Proxmox Upgrade

    ...kernels and updates GRUB to point to them correctly. This solves the hang-at-boot issue in the majority of post-upgrade cases. If you are using ZFS as the root filesystem, the process is slightly different. PVE 9 updated the ZFS ABI, and if the pool was not imported cleanly, boot will stall...
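
    For the ZFS-root case above, a minimal sketch of the usual recovery steps from a rescue environment; rpool is the PVE default root pool name, and which boot tool applies depends on how the system was installed:

        # From a rescue shell: force-import the root pool if it did not import cleanly.
        zpool import -f rpool

        # On systems managed by proxmox-boot-tool (typical for ZFS root),
        # resync kernels and boot entries.
        proxmox-boot-tool refresh

        # On legacy GRUB installs, regenerate the GRUB configuration instead.
        update-grub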
  20. G

    [SOLVED] HP problems with Proxmox

    Hi Johannes, I'm well aware of that. But the thread starter isn't that far along yet. I see him more in the "ironing out errors" phase. And ZFS on HDDs only isn't really best practice either.