Search results for query: ZFS

  1. U

    High VM-EXIT and Host CPU usage on idle with Windows Server 2025

    ...tap coretemp nct6683 vfio_pci vfio_pci_core irqbypass vfio_iommu_type1 vfio iommufd efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 zfs(PO) spl(O) uas btrfs libblake2b xor raid6_pq usbmouse usbkbd hid_generic usbhid hid usb_storage xhci_pci_renesas i40e nvme mpt3sas i2c_i801...
  2. I

    Proxmox Autoinstall selecting installation usb as target disk at installation

    ...usb to use it again. Do I maybe have an error in my answerfile? How can I correctly specify the disks that are supposed to be used for the Proxmox installation? My answerfile.toml disk setup:
    ```
    [disk-setup]
    filesystem = "zfs"
    zfs.raid = "raid0"
    disk-list = ["sda", "sdb"]
    ```
    Thanks in advance!
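    A possible way to keep the installer off the installation USB is to select target disks by a stable udev property instead of by `sda`/`sdb` names, which can change between boots. This is only a sketch: the `filter.*` and `filter_match` keys and the serial value below are assumptions that should be verified against the Proxmox automated-installation documentation before use.

    ```toml
    # Hypothetical answer.toml fragment: match target disks by udev
    # properties rather than kernel device names. "WD-EXAMPLE*" is a
    # placeholder and must be replaced with the real disk serial prefix.
    [disk-setup]
    filesystem = "zfs"
    zfs.raid = "raid0"
    filter_match = "any"
    filter.ID_SERIAL = "WD-EXAMPLE*"
    ```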
  3. D

    Feature request - zfs delegation with lxc

    It seems I am among a limited few that really see the need for this? - Fileserver, SAMBA running in an LXC... - ZFS acl support in an LXC.... - LXC to manage ZFS snapshots with a nice GUI as this is currently very limited in Proxmox WebGUI... - Docker running in an LXC using more stable ZFS...
  4. S

    cloud-init template for provisioning in WHMCS

    For Windows Server 2025, presumably nothing at all will have changed in the setup, right @Bu66as? Don't you configure anything at all in the Cloudbase config on the Windows Server? - In this thread...
  5. P

    Volume level caching

    ...Proxmox is essentially Debian with a custom kernel, a specialized API, and a Web GUI. The GUI is designed to manage "Standard" storage types: ZFS, LVM, LVM-Thin, NFS, SMB/CIFS, and Ceph. While the underlying Debian OS supports dm-cache, bcache, and OpenCAS, Proxmox does not have a management...
  6. P

    Comments on nodes disks?

    Hi, I think the only way would be to use the ZFS/LVM naming scheme to implement some kind of "tagging" :-) Other than that, disks have no custom fields that could be used
  7. P

    Trouble with LXC and ipv6

    ...name=eth0,bridge=vmbr0,firewall=1,gw=172.27.26.1,hwaddr=BC:XX:XX:XX:67:8A,ip=172.27.26.12/23,ip6=auto,type=veth ostype: alpine rootfs: local-zfs:subvol-200-disk-0,size=3G swap: 512 unprivileged: 1 NPMplus:~# ip -6 route 2a02:560:XXXX:XXXX::/64 dev eth0 metric 256 expires 0sec...
  8. A

    Dell PowerEdge R640 compatibility

    ...VMs migrated from VMware, with various apps running on them. The servers all have H740p RAID controllers, so XFS, not ZFS, is used on all of them. We also still have a number of old R630/R730xd systems which have run for years without issues. None of these systems run any...
  9. Falk R.

    Proxmox + Windows VM

    That can't be right. The RAID controllers are even optimized for SSDs; at most, wrong settings slow things down. If you use the default settings, where SmartPath is active, it gets extremely slow with RAID 5 and 6. SmartPath is intended for RAID 0 and RAID 1 to reduce latency, but...
  10. P

    Proxmox + Windows VM

    We were able to identify the problem: the RAID controllers that were installed were simply too old and too slow for the SSDs. We switched to an HBA and now use a ZFS RAID. It runs about 10% slower than with HW RAID, but that's a matter of tuning :)
  11. A

    PVE backups causing VM stalls

    I can also confirm this bug. It occurs only on PVE 9 with ZFS. To reproduce this error, several factors need to coincide: run a backup of a big VM (with at least 8 GB RAM and a lot of disk space). I didn't try only big RAM or only a big disk; they usually go together. PVE 9.1.х. I've been struggling with this...
  12. W

    Advice Needed - Proxmox PBS "Special Device"

    ZFS is not about preventing failures. It is more about building things in a way that they can be easily repaired as soon as something breaks. If your concern is losing one disk, use a mirror of two. If your scenario is losing two disks, then use a mirror of three of them or anything else that...
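    The mirror layouts mentioned above can be sketched with standard `zpool` commands. This is illustrative only: the pool name `tank` and the `ata-DISK_*` device IDs are placeholders.

    ```
    # Two-way mirror: survives the loss of one disk.
    zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

    # Three-way mirror: survives the loss of two disks.
    zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B /dev/disk/by-id/ata-DISK_C

    # Check layout and health afterwards.
    zpool status tank
    ```

    Using `/dev/disk/by-id` paths rather than `sdX` names keeps the pool importable when device enumeration changes.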
  13. C

    Using bind mount point with NextCloud LXC on ZFS (or alternate options?)

    ...sure your permissions are the same across all of them so each LXC can r/w the folder correctly. I would imagine this process is the same for ZFS: you create the ZFS pool and then mount it to a location somewhere on Proxmox where you can browse the folder structure. I hope this helps somewhat.
  14. I

    Using bind mount point with NextCloud LXC on ZFS (or alternate options?)

    ...my desktop). I can practice with a dummy LXC, if applicable. One thing I've run into is that I hear making bind mount points is different with ZFS (as opposed to the main drive Proxmox is installed on), and uses its own set of commands in the terminal. Can anyone tell me if I've got that...
  15. V

    Advice Needed - Proxmox PBS "Special Device"

    ...of thing). Referencing "Proxmox Backup Documentation PDF - 19 March 2026": Section 2.1.2, Page 7 - Backup Storage; Section 13.1.2, Page 117 - ZFS Special Device. The current bill of materials: 1x Supermicro X10SRL-F 1x Xeon E5 2650v4 128GB ECC DDR4 1x Dual X520 NIC...
  16. Z

    zfs write stalls fixed but cant remember how, any ideas?

    thank you for the suggestion. All the dataset parameters are the same; I set it up to match the other drive. I am using sync=standard and logbias=latency for those settings. What I changed completely fixed the issue across all datasets, even those with high compression and dedup. I should have...
  17. cwt

    zfs write stalls fixed but cant remember how, any ideas?

    Maybe one of these?
    ```
    zfs set sync=disabled POOL/DATASET
    zfs set logbias=throughput POOL/DATASET
    ```
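    Before flipping either property, it can help to see what the dataset is currently using and where the value comes from. A minimal sketch, with `POOL/DATASET` as a placeholder:

    ```
    # Show current values and their source (local, inherited, or default).
    zfs get sync,logbias POOL/DATASET

    # Note: sync=disabled treats all writes as async, which is fast but
    # risks losing recent writes on power failure; logbias=throughput
    # trades per-write latency for larger sequential writes.
    ```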
  18. Z

    zfs write stalls fixed but cant remember how, any ideas?

    okay, so not sure if anyone can help with this, but i have a single-drive ZFS pool that was experiencing issues where it would begin to write, fill the cache, then stall with writes dropping to 0 for 30-60 sec or so, then repeat until the writes were finished. it was pretty...
  19. B

    Help after SAS controller swap - ZFS pool is gone

    ...Partitions will be aligned on 2048-sector boundaries. Total free space is 2925 sectors (1.4 MiB)
    ```
    Number  Start (sector)  End (sector)  Size     Code  Name
       1    2048            5860515839    2.7 TiB  BF01  zfs-118eed0bb2f543af
       9    5860515840      5860532223    8.0 MiB  BF07
    ```
  20. B

    Help after SAS controller swap - ZFS pool is gone

    fdisk gives the following:
    ```
    fdisk -l /dev/sdb
    Disk /dev/sdb: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk model: HUS72403CLAR3000
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    ```
    So, "empty"...
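    After a controller swap, device names often change while the ZFS partitions themselves remain intact, so the pool is usually recoverable by letting ZFS rescan by stable ID. A sketch of the usual recovery steps; the pool name is a placeholder:

    ```
    # List importable pools found by scanning stable by-id device paths.
    zpool import -d /dev/disk/by-id

    # Import the pool under its name once it shows up in the scan.
    zpool import -d /dev/disk/by-id POOLNAME
    ```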