Search results

  1. cpzengel

    Windows ISO starting forever and does not recognize second CD/DVD

    Hi guys, if you experience this problem with Windows 1909 / 20H2, just don't use 2x IDE CD/DVD. Use one as SATA and the other as IDE and it will work well then. Chriz
  2. cpzengel

    grub-install: error: cannot find EFI directory.

    Same here, perhaps because of not rebooting. Did you try again after a reboot?
  3. cpzengel

    [SOLVED] ZFS storage "Detail" produces "Result verification failed (400)" error

    The PVE GUI just checks zpool status for a "scan:" line, and new systems have never been scanned, so for example zpool scrub rpool && zpool scrub -s rpool fixes the GUI.
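    A minimal sketch of that workaround, assuming the pool is named rpool as above: starting a scrub and cancelling it right away is enough to make zpool status print a "scan:" line.

      # start a scrub so the pool gets a "scan:" entry, then stop it immediately
      zpool scrub rpool
      zpool scrub -s rpool
      # confirm the line the GUI looks for is now present
      zpool status rpool | grep "scan:"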
  4. cpzengel

    [SOLVED] PVE 6.3-4 and ZFS 2.0 ignores zfs_arc_max

    So finally /etc/modprobe.d/zfs.conf has to look like this: options zfs zfs_arc_min=6442450944 options zfs zfs_arc_max=10737418240. Important is to set the min lower than the max; only setting the max did not work if the default min (1/32 of RAM) is higher than the max value, followed by a...
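    A hedged sketch of that file and how to apply it, using the byte values from the post (6 GiB min, 10 GiB max); the update-initramfs step is an assumption that applies to ZFS-on-root installs, where the module options are read from the initramfs.

      # write the ARC limits from the post into /etc/modprobe.d/zfs.conf
      # (min must stay below max, otherwise the max value is ignored)
      echo "options zfs zfs_arc_min=6442450944"  >  /etc/modprobe.d/zfs.conf
      echo "options zfs zfs_arc_max=10737418240" >> /etc/modprobe.d/zfs.conf

      # assumption: ZFS-on-root install, so rebuild the initramfs and reboot to pick the values up
      update-initramfs -u -k all

      # optionally apply the same limits at runtime without a reboot
      echo 6442450944  > /sys/module/zfs/parameters/zfs_arc_min
      echo 10737418240 > /sys/module/zfs/parameters/zfs_arc_max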
  5. cpzengel

    [SOLVED] PVE 6.3-4 and ZFS 2.0 ignores zfs_arc_max

    It's ignoring the max value! arcstat always shows the upper limit as: RAM - 2G!
  6. cpzengel

    [SOLVED] PVE 6.3-4 and ZFS 2.0 ignores zfs_arc_max

    zfs_arc_max seems to be ignored in: root@pve1:~# pveversion pve-manager/6.3-6/2184247e (running kernel: 5.4.103-1-pve) root@pve1:~# zfs -V zfs-2.0.3-pve2 zfs-kmod-2.0.3-pve2 As far as I can see it manages itself and always keeps 2 GB left at the end. root@pve1:~# arcstat && free -h time...
  7. cpzengel

    Problems with Samba under PVE

    https://www.youtube.com/watch?v=0_WgIgOC5KE here, it probably doesn't get any better than this
  8. cpzengel

    Fileserver question

    https://www.youtube.com/watch?v=0_WgIgOC5KE here, it probably doesn't get any better than this
  9. cpzengel

    Problem sending mail from hardware node

    pvemailforward works when run manually and the correct email is configured; cron runs as user root@hostname :( Any clue?
  10. cpzengel

    Cronjob to Mail no longer working in 5.1?

    Today another machine where pvemailforward is working but /root/.forward is ignored. Any idea?
  11. cpzengel

    Proxmox 6.2-15 Problem with HotPlug Hard Drives

    I heard 6.2.16 helped so far; can be closed as solved.
  12. cpzengel

    Proxmox 6.2-15 Problem with HotPlug Hard Drives

    400 Parameter verification failed. virtio8: hotplug problem - Can't use string ("x86_64") as a HASH ref while "strict refs" in use at /usr/share/perl5/PVE/QemuServer/PCI.pm line 251. at /usr/share/perl5/PVE/API2/Qemu.pm line 1372. (500) Same problem here with FreeNAS 11.3 and PVE 6.2.14
  13. cpzengel

    PVE 6.0 & HPE MSA 2050 SAS

    Have you managed to get multipath working in the meantime? Otherwise I would set up 2 RAIDs with 2 volumes for two hosts and format them with ZFS.
  14. cpzengel

    Proxmox Backup Server (beta)

    So this is an additional backup on top of ZFS replication, but a better way than the current backup, correct?
  15. cpzengel

    Proxmox Backup Server (beta)

    So is it basically based on ZFS snapshots or on QEMU?
  16. cpzengel

    RAM or Swap Full? How to Handle?

    Hi, how do I get rid of those RAM overprovisioning messages? Is that situation acceptable or do I have to tune something?
  17. cpzengel

    cgmanager.service does not start

    I have a system running since v4 without any LXCs. Since the last updates the cgmanager service is not running any more. Any idea what the consequences are and how to fix it? Jun 20 11:50:47 pve32 systemd[1]: cgmanager.service: Scheduled restart job, restart counter is at 1. Jun 20 11:50:47 pve32...
  18. cpzengel

    Cannot start a container.

    Same here, /rpool2/vms/subvol-998-disk-0 was not mounted. Had to run zfs mount -a. After it mounted with no complaints I was able to start the LXC.
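    A short sketch of the check and the fix, assuming the container ID 998 and the rpool2 dataset from the post:

      # check whether the container's subvolume is actually mounted
      zfs get mounted rpool2/vms/subvol-998-disk-0

      # mount every dataset that should be mounted, then start the container
      zfs mount -a
      pct start 998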
  19. cpzengel

    Change ZFS rpool HDDs to grow

    Perhaps someone can write an EFI version like mine?
  20. cpzengel

    ZFS over iSCSI on Synology

    Basically a few steps: On the old NAS, create an iSCSI share for the space you want to use with ZFS. Create a datastore of type iSCSI in Proxmox VE pointing to the one just created and disable it for usage as a datastore. On the terminal, find the created disk under /dev/disk/by-id. Create the zpool: "zpool create MyNASZFS...
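    A hedged sketch of the terminal part of those steps; the pool name MyNASZFS is taken from the post, while the disk ID and the pvesm registration below are placeholders/assumptions, not values from the post.

      # after the iSCSI storage has been added (and disabled as a datastore),
      # the exported LUN shows up as a block device; find its stable by-id name
      ls -l /dev/disk/by-id/

      # create the pool on that device (the disk ID here is a placeholder)
      zpool create MyNASZFS /dev/disk/by-id/scsi-PLACEHOLDER

      # then register the pool as a ZFS storage in Proxmox VE (assumption: via pvesm)
      pvesm add zfspool MyNASZFS -pool MyNASZFS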