Search results

  1. New Kernel PVE-5.15.30-2 breaks iscsi connections

    a) Client OS (iSCSI initiator): Debian 11.3, b) Server OS (iSCSI target): QNAP 5.0.0.1986, c) when I boot with kernel 5.15.30-2 the iSCSI is broken and the PVE cannot access any of the LUNs / storages, d) switch to the kernel and reboot, e) Storage.cfg: cat /etc/pve/storage.cfg iscsi: HomeNAS...
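
    The storage definition above is cut off in the preview; for reference, a minimal sketch of what an iSCSI entry in /etc/pve/storage.cfg typically looks like (the portal address and target IQN below are placeholders, not values from the thread):

        # hypothetical values -- substitute your own portal and target IQN
        iscsi: HomeNAS
                portal 192.168.1.50
                target iqn.2004-04.com.qnap:ts-453:iscsi.homenas.example
                content images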
  2. New Kernel PVE-5.15.30-2 breaks iscsi connections

    What kind of information do you need? I’ve got 1x Intel NUC 7th Gen NUC7i5DNHE and 2x Intel NUC 10th Gen NUC10i7FNHN. All three have the same Intel I219-V Ethernet port.
  3. New Kernel PVE-5.15.30-2 breaks iscsi connections

    Hey, since no one was answering in the other thread and this fact was basically being ignored, I have to open a new one. So: with the new kernel PVE-5.15.30 my iSCSI connections break and can't be revived. Only with the "old" 5.13.19-6 kernel are my iSCSI connections working. I have currently...
  4. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    Maybe some people ignored that, but after the upgrade today my iSCSI is broken again. Even a reinstall can't fix the issue. Can someone help me? Kind regards. --- Update: Old kernel and it's working again: May 05 15:59:17 NucMox01 iscsiadm[634]: Logging in to [iface: default, target...
  5. LXC Container old Versions available

    Hello everyone, I was testing out some options to move my services from VMs to LXC containers; unfortunately the available templates aren't up to date. The Alpine one is from last year, while the one available on https://uk.lxd.images.canonical.com/ is from 03.05.2022 and is the new 3.15.4 instead...
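
    Not mentioned in the snippet, but the usual way to refresh the container template index on a PVE node is pveam; a minimal sketch, where the 'local' storage name and the exact template file name are assumptions for illustration:

        # refresh the template index and list what is currently offered
        pveam update
        pveam available --section system
        # download a template to the (assumed) 'local' storage; the file name is illustrative
        pveam download local alpine-3.15-default_20220328_amd64.tar.xz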
  6. Problems with my new Intel Nucs

    I could fix it! GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt intel_idle.max_cstate=0 nmi_watchdog=1 nvme_core.default_ps_max_latency_us=0 acpiphp.disable=1 pcie_aspm=force pcie_aspm.policy=performance" You have to use this line in your /etc/default/grub and apply these changes with...
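
    The post is cut off at the "apply these changes with..." step; a sketch of the usual sequence on a GRUB-booted node (assuming GRUB rather than systemd-boot, and that a reboot is acceptable):

        # after editing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
        update-grub                  # regenerate the GRUB config
        proxmox-boot-tool refresh    # only needed on nodes booted via proxmox-boot-tool
        reboot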
  7. Problems with my new Intel Nucs

    Hey there, I've been using Proxmox for quite a while now and I'm excited and very satisfied with the product. I'm using it at home for my HomeLab (currently 8-9 VMs, no LXC). I was so satisfied that I bought 2 new Intel NUC10i7FNK2 with 32 GB RAM and a Patriot P300 (128GB). But on the new NUCs the NUCs...
  8. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    I have a strange problem: when I upgrade to 5.15, my iSCSI target can't be reached anymore. I do not know why, but it cannot be reached. Is this a problem with the hardware configuration I use? 2x Intel NUC NUC10i7FNK2 (32 GB RAM), 1x Intel NUC NUC7i5DNHE (16 GB RAM), iSCSI target: QNAP NAS with...
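
    Not from the thread, but a quick initiator-side check of whether the target is reachable at all; the portal IP is a placeholder:

        ping -c 3 192.168.1.50                                   # basic reachability of the NAS
        iscsiadm -m discovery -t sendtargets -p 192.168.1.50     # does the target answer discovery?
        iscsiadm -m session                                      # any sessions currently logged in?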
  9. After Update to Kernel PVE 5.15.27-1 no login

    I switched to the old kernel for now; now it's fixed and working again. I guess it's a bug in the new kernel then?
  10. After Update to Kernel PVE 5.15.27-1 no login

    I restarted the NAS to ensure everything is OK on that side. Syslog just shows: Mar 18 15:16:51 Nuc01 pmxcfs[1819]: [status] notice: received log Mar 18 15:17:01 Nuc01 CRON[7027]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Mar 18 15:17:39 Nuc01 pveproxy[2793]: proxy detected...
  11. After Update to Kernel PVE 5.15.27-1 no login

    1. No, it's just loading until I get a communication failure. 2. Yes, it looks totally normal and fine to me: cat /etc/hosts 127.0.0.1 localhost.localdomain localhost 172.16.24.100 Nuc01.fritz.box Nuc01 # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback...
  12. After Update to Kernel PVE 5.15.27-1 no login

    I see now when the problem appears: if I reboot the node everything is fine UNTIL I click on one of the storages marked with a "?"; then everything starts loading and I get a communication error. Where can I see more information about it? It's a NAS mounted via iSCSI, no problems so far with it on...
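
    To the "where can I see more information" question, a hedged suggestion not taken from the thread: the status queries behind the GUI are handled by pvestatd, pvedaemon and pveproxy, so their journals are a reasonable place to look:

        journalctl -u pvestatd -u pvedaemon -u pveproxy --since "30 min ago"
        # or follow live while clicking the storage marked with "?"
        journalctl -f -u pvestatd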
  13. After Update to Kernel PVE 5.15.27-1 no login

    -- Boot 0cf9554a369546f8a623067acb77f4ff -- Mar 18 12:15:01 Nuc01 systemd[1]: Starting The Proxmox VE cluster filesystem... Mar 18 12:15:01 Nuc01 pmxcfs[1046]: [quorum] crit: quorum_initialize failed: 2 Mar 18 12:15:01 Nuc01 pmxcfs[1046]: [quorum] crit: can't initialize service Mar 18 12:15:01...
  14. After Update to Kernel PVE 5.15.27-1 no login

    Yes. pvecm status: Cluster information ------------------- Name: HomeLab Config Version: 2 Transport: knet Secure auth: on Quorum information ------------------ Date: Fri Mar 18 12:50:14 2022 Quorum provider: corosync_votequorum Nodes: 2 Node ID...
  15. After Update to Kernel PVE 5.15.27-1 no login

    After a pvecm updatecerts the error disappeared, but I'm back to the old error: the node is up and green but I can't check the status of the VMs/storage: communication failure (0). I also cannot access the Syslog page, Updates, or any other page related to the host node. It's always "communication failure".
  16. After Update to Kernel PVE 5.15.27-1 no login

    I can't restart pve-manager, but the node is up and running. Somehow the GUI now shows this error: Error: Connection error 596: tls_process_server_certificate: certificate verify failed. systemctl restart pve-manager: Failed to start pve-manager.service: Operation...
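
    A hedged side note, not from the thread: the web GUI and API are served by pveproxy and pvedaemon rather than by pve-manager, so after a certificate error those are the services one would normally regenerate certificates for and restart:

        # regenerate the node certificates, then restart the API daemon and proxy
        pvecm updatecerts --force
        systemctl restart pvedaemon pveproxy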
  17. After Update to Kernel PVE 5.15.27-1 no login

    Hi everyone, I updated today from kernel 5.15.19-2 to kernel PVE 5.15.27-1. After the update I restarted as usual, and now I can't log in anymore and the other node shows this node with a "?" only. The syslog says "proxy detected vanished client connection". Is there any fix for that? proxmox-ve...
  18. Cant migrate AARCH64 VM

    Sure, here you go: root@Node01:~# pveversion -v proxmox-ve: 7.1-1 (running kernel: 5.10.83-1-pve) pve-manager: 7.1-8 (running version: 7.1-8/7cd00e36) pve-kernel-libc-dev: 5.10.95-1 pve-kernel-5.10.83-1-pve: 5.10.83-4 ceph-fuse: 16.2.7 corosync: 3.1.5-pve2 criu: 3.15-1 glusterfs-client: 9.2-1...
  19. Cant migrate AARCH64 VM

    I'm using a 2-node Proxmox cluster. The VM is an AARCH64 VM which I tried to migrate to the other node for a restart after a system update. I got this error: Can't use an undefined value as an ARRAY reference at /usr/share/perl5/PVE/QemuServer.pm line 3268. If I cut the "arch...
  20. Proxmox 7.1-10 slow ISO upload

    Found the problem and a solution for me: it was a faulty Windows 10 NIC driver that made the internal upload speed slow. I had to reinstall Windows and now everything works fine.