Recent content by GeraldH

  1. [SOLVED] LIO target LUNs not restored on boot - not a TYPE_DISK block device

    yes - this loads the pool always!
      ln -s /lib/systemd/system/zfs-import@.service /etc/systemd/system/zfs-import.target.wants/zfs-import@poolname.service
    I'll check again with newer ZFS versions whether the problem still exists. Some months ago - when I last rebooted, I had no problems with...
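
    For reference, the same wants-symlink can also be created via systemctl - a rough sketch, assuming the zfs-import@.service template shipped by the ZFS packages installs into zfs-import.target:

      # enable the per-pool import unit instead of creating the symlink by hand
      systemctl enable zfs-import@poolname.service
      # check that the unit is wanted by zfs-import.target and ran at boot
      systemctl status zfs-import@poolname.service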

  2. [SOLVED] LIO target LUNs not restored on boot - not a TYPE_DISK block device

    thanks - yes, the zpool cache is involved:
      rm -f /etc/zfs/zpool.cache
    - reboot - works
    another reboot with existing /etc/zfs/zpool.cache - fails again
    setting another cache file for this pool doesn't help - fails
    so only removing /etc/zfs/zpool.cache imports the ZFS pool before the...
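
    For anyone trying the cachefile variants mentioned above, a sketch of the relevant commands ("poolname" is a placeholder):

      # drop the cache so the import units scan devices at boot instead
      rm -f /etc/zfs/zpool.cache
      # or point the pool at a different cachefile, or disable it entirely
      zpool set cachefile=/etc/zfs/mypool.cache poolname
      zpool set cachefile=none poolname
      # regenerate the default cachefile later if wanted
      zpool set cachefile=/etc/zfs/zpool.cache poolname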

  3. [SOLVED] LIO target LUNs not restored on boot - not a TYPE_DISK block device

    journalctl -b is attached. Checked once again - manually starting rtslib-fb-targetctl.service works. rtslib-fb-targetctl.service is started too early on ZFS systems. Who can suggest some systemd magic to make rtslib-fb-targetctl.service start after the ZFS zvols are ready?
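
    One candidate for that systemd magic - only a sketch, assuming the installed ZFS version ships zfs-volumes.target / zfs-volume-wait.service (OpenZFS 0.8.x and later), so the target name may need adapting:

      # systemctl edit rtslib-fb-targetctl.service  -> creates an override drop-in
      [Unit]
      Requires=zfs-volumes.target
      After=zfs-volumes.target zfs.target
      # afterwards: systemctl daemon-reload && systemctl restart rtslib-fb-targetctl.service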

  4. [SOLVED] LIO target LUNs not restored on boot - not a TYPE_DISK block device

    I've tried an ExecStartPre sleep (up to 30s) in rtslib-fb-targetctl.service - doesn't help - rtslib-fb-targetctl.service still fails.
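
    For completeness, that sleep workaround would look roughly like this as an override drop-in (a sketch - and as noted, a fixed delay did not fix the race here):

      # systemctl edit rtslib-fb-targetctl.service
      [Service]
      ExecStartPre=/bin/sleep 30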

  5. [SOLVED] LIO target LUNs not restored on boot - not a TYPE_DISK block device

    Hello, I've activated the LIO iSCSI target on some Proxmox hosts for sharing local NVMes over iSCSI (besides using those NVMes for replicated storage). But now my LIO target LUNs are not restored on boot anymore: systemd[1]: Starting rtslib-fb-targetctl.service - Restore LIO kernel target...
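
    For anyone debugging the same issue: the unit essentially replays the saved LIO configuration, so a manual run after boot is a quick check (a sketch, paths as in the stock Debian rtslib-fb packaging):

      # roughly what rtslib-fb-targetctl.service does at boot
      targetctl restore /etc/rtslib-fb-target/saveconfig.json
      # or simply restart the unit once the zvols exist
      systemctl restart rtslib-fb-targetctl.service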

  6. WARN: no efidisk configured! Using temporary efivars disk.

    Hello Fiona, my two VM disks are remote iSCSI disks (over shared LVM) - no local disks are attached. To survive a failure/reboot of an iSCSI storage, the two VM disks are attached to different iSCSI storages and are mirrored with ZFS inside the VM. If I create an EFI disk on one iSCSI storage...
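
    For reference, creating a persistent EFI disk on one of the storages (instead of the temporary efivars disk) is a one-liner - a sketch, with VM ID 100 and storage name iscsi-lvm1 as placeholders:

      # allocate a persistent OVMF efivars disk for the VM
      qm set 100 --efidisk0 iscsi-lvm1:1,efitype=4m,pre-enrolled-keys=1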

  7. WARN: no efidisk configured! Using temporary efivars disk.

    Hello Fiona, I would like to get rid of this warning also. My storage environment - commercial iSCSI storage connected over multipath and shared LVM - works stably and quite fast; the big problem is the missing snapshot functionality for VMs. I'm migrating my Linux VMs to root-on-ZFS with ZFS...

  8. VM with libiscsi hangs on missing disk - no timeout

    Hello, after some (much) fiddling, I managed to boot a VM with two (LVM) mirrored root disks over iSCSI with both libiscsi (User_Mode_iSCSI) and kernel iSCSI. I'm using other Proxmox nodes in the cluster as LIO targets. BTW: please fix iscsi-login-negotiation-failed - I skipped the tcp_ping in...
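
    For reference, the two attachment paths boil down to roughly the following - a sketch only, with placeholder portal address and IQN:

      # user-mode iSCSI (libiscsi): QEMU opens the LUN itself via an iscsi:// URL
      qemu-system-x86_64 ... -drive file=iscsi://192.168.1.10/iqn.2003-01.org.linux-iscsi.node1:sn.example/0,format=raw,if=virtio
      # kernel iSCSI: log in with open-iscsi and hand the resulting block device to the VM
      iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.node1:sn.example -p 192.168.1.10 --login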

  9. PVE 8 pve-firewall status no such alias

    Hello, I installed a new (virtual) PVE 8 instance and created a firewall config with the WebGUI:
    alias a_intern -- 10.10.68.0/24
    ip_set s_intern -- a_intern
    security group g_intern -- ACCESS tcp +s_intern 22,8006
    and assigned this group to vmbr0 in a firewall rule: cat...
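
    For context, the matching entries in /etc/pve/firewall/cluster.fw would look roughly like this (a sketch reconstructed from the description above; the rule action is assumed to be ACCEPT):

      [ALIASES]
      a_intern 10.10.68.0/24

      [IPSET s_intern]
      a_intern

      [group g_intern]
      IN ACCEPT -source +s_intern -p tcp -dport 22,8006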

  10. Error "online storage migration not possible if snapshot exists" on zfspool (bug in QemuMigrate.pm?)

    Hello, could you please look at this enhancement again - we would really, really need this functionality. We just dumped our central Ceph storage (not Proxmox Ceph) because of performance and stability issues and switched back to DAS (direct attached storage) - NVMe SSDs and ZFS. With NVMe SSDs...

  11. ZFS zvol on HDD locks up VM

    Yes - very good hint - 128k volblocksize makes a big difference! I additionally ran some fio tests inside the VM - the 128k volblocksize fixes the problems on sequential IO (like the tar test above) but does not slow down random read/write tests. The VM stays responsive during the tests. The...
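
    For anyone wanting to reproduce this: volblocksize can only be set at zvol creation time - a sketch with placeholder pool/volume names (in Proxmox the zfspool storage has a matching "blocksize" option):

      # create a test zvol with 128k volblocksize (cannot be changed afterwards)
      zfs create -V 32G -o volblocksize=128k tank/vm-100-disk-1
      # or set the default for new disks on a zfspool storage
      pvesm set local-zfs --blocksize 128k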

  12. ZFS zvol on HDD locks up VM

    I've already created a zvol with 4K volblocksize (8K is the default) - same results. Additionally, I ran the test on the Proxmox host directly: created a 4K zvol, ran mkfs.xfs, and mounted it. Running the test generated 500-700 IOPs on one HDD and a load of >40 on the Proxmox host - like inside the VM. The...
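
    The host-side setup described above, spelled out as a sketch (placeholder pool, volume and mountpoint names):

      # 4K zvol directly on the HDD pool, formatted and mounted on the host
      zfs create -V 32G -o volblocksize=4k tank/testvol
      mkfs.xfs /dev/zvol/tank/testvol
      mkdir -p /mnt/testvol
      mount /dev/zvol/tank/testvol /mnt/testvol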

  13. ZFS zvol on HDD locks up VM

    Software: pve-manager/6.3-3/eee5f901, Linux 5.4.78-2-pve #1 SMP PVE 5.4.78-2
    Hardware: DL380p Gen8, 2x E5-2650 v2, 256GB RAM, LSI 2308 SAS HBA (IT mode), 4x 4TB SAS HDD (SEAGATE ST4000NM0023, SMEG4000S5 HGST HUS726040AL5210), 2x SATA SSD SAMSUNG 240GB (rpool), 4x SATA SSD Seagate/Intel (Nytro/4610) 480GB...

  14. ZFS zvol on HDD locks up VM

    Repeated the test some more times - the results did not change:
                                     IOPs to the host raw disk   VM behavior               Proxmox host load
      ZFS zvol - HDD pool            500-800                     completely unresponsive   > 40
      ZFS zvol - SSD pool            1000-3500                   responsive                10-16
      ZFS raw image file - HDD pool  60-180 (with pauses)        responsive ...

  15. ZFS zvol on HDD locks up VM

    What puzzles me is not HW RAID vs ZFS zvol - it's ZFS raw image (a ZFS dataset hosting a raw image file) vs ZFS zvol. The difference between ZFS raw image and ZFS zvol is way too big - for the same test a zvol generates 20x to 30x the number of IOPs - this quickly fills up even a SATA SSD...
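
    A fio run along these lines would show the gap between the two backends - only an assumption, since the exact tar/fio parameters used in the tests are not given here:

      # sequential write inside the VM, against the filesystem on the tested disk
      fio --name=seqwrite --rw=write --bs=1M --size=4G --ioengine=libaio --direct=1 --filename=/mnt/test/fio.tmp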