Search results

  1. Mail Gateway LXC from debian12 to debian13

    Hi, I just went through the DCM update to beta, and saw I needed to push the debian LXC from 12 to 13. I already had PVE and PBS up to 13, so I got curious about PMG. If it's still on 12, no problem, I'll wait for the formal upgrade. Thank you very much!
  2. Mail Gateway LXC from debian12 to debian13

    PS: I also see bookworm is still the default debian base in the documentation, so is there any need to wait for this transition at all?
  3. Mail Gateway LXC from debian12 to debian13

    Hi, I'm a happy user of the Mail Gateway LXC, installed when it was based on debian12. Now, after another LXC update from debian12 to debian13 for the Data Center 0.9, I'm thinking about also transitioning the MG LXC to debian13. Being a dedicated LXC (I used the base debian12 for DCM...
  4. Is WOL usable with Intel X710 10GbE SFP+ ports in bond?

    Hi, I have an MS-01 machine, and would like to enable WOL on the two bonded Intel X710 10G NICs. The ethtool -s command does not work (it returns an error), and the WOL commands do not boot the machine either. # lspci ... 03:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710...
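    A quick way to see what the NIC firmware actually exposes is ethtool's Wake-on report. A minimal sketch, assuming the X710 ports appear as enp3s0f0/enp3s0f1 (the interface name is a guess) and that WOL must be set on the physical bond member, not on the bond itself:

```shell
# Hypothetical bond-member interface name; adjust to your setup.
IFACE=enp3s0f0
# "Supports Wake-on: g" means magic-packet wake is exposed by the firmware;
# "Supports Wake-on: d" means this port offers no WOL at all.
ethtool "$IFACE" | grep -i 'wake-on'
# Try to enable magic-packet wake; on many X710 SKUs this fails with
# "Operation not supported", since WOL is often limited to the first
# port of the adapter or absent entirely on SFP+ variants.
ethtool -s "$IFACE" wol g
```

    If the report already shows "Supports Wake-on: d", the limitation is in the adapter firmware and no ethtool invocation will change it.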
  5. Updates refresh only works manually

    Hi, I don't know why, but the updates refresh only works if I run it manually; the automatic refresh always fails. Manual refresh: Automatic refresh: Here is the syslog at the time of the update refresh: Feb 07 05:50:58 dcm1 systemd[1]: Starting...
  6. PVE node stuck

    Ok, so it got stable after the powersave cpu governor was selected, as CPU temps got lower. I had already re-pasted the CPU some time ago; let's see how it goes. Also installed the new microcode and updated the BIOS on all three nodes (MS-01 and not), to get the latest stability and security...
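    The governor change mentioned above can be applied at runtime through sysfs. A minimal sketch, assuming the cpufreq sysfs interface is present on the node:

```shell
# Show the current governor of the first core.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Switch every core to powersave (the glob expands to all cores).
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# Note: this does not survive a reboot; persist it with a systemd unit
# or a tool such as cpupower if needed.
```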
  7. PVE node stuck

    Hi @l.leahu-vladucu Thank you very much. For the moment I've set the CPU governor to powersave, and temps are more under control. The problem could be related to the hotter days lately. It seems more stable at the moment. In the meantime, I'll take the node down and run through all of your points to...
  8. PVE node stuck

    Finally had to manually power off the machine and restart it; it restarted without any faulty dmesg message. I'm also upgrading from 6.8.12-7-pve to 6.8.12-8-pve, and let's see if and how that happens again (could it be due to heat on the CPUs?)
  9. PVE node stuck

    Hi, here I am with journalctl and the first faulty situation (there's a lot going on, I can't get anything fruitful from here): https://pastebin.com/LYawfWmz Also, it couldn't reboot, so I had to force a reboot and am waiting for it to come back. root@pve1:~# reboot Failed to set wall message, ignoring: Connection...
  10. PVE node stuck

    @l.leahu-vladucu yes, I'm going to do just that as soon as I can get a reasonable connection and/or physically reboot the machine, thank you. I'll be back with more debugging/info on the problem. Thank you very much for the moment.
  11. PVE node stuck

    Hi @l.leahu-vladucu I'm trying to manage the problem remotely, so I cannot do everything. That said: all of the drivers are blacklisted (I can only show them on the other nodes at the moment, pve1 is really stuck) root@pve2:~# cat /etc/modprobe.d/pve-blacklist.conf # This file contains a list of...
  12. PVE node stuck

    SSH also seems to be stuck: I can connect, but it freezes after I enter the password.
  13. PVE node stuck

    Hi, I have a PVE node on an MS-01 machine, and sometimes it gets stuck: it only responds to some commands, and all VMs but one seem to run (I cannot stop/restart the stuck VM). The stuck VM uses a passed-through Nvidia GPU. syslog: Jan 30 10:33:09 pve1 kernel: watchdog: BUG: soft lockup -...
  14. Nvme problems on HP elitedesk 800 G2

    Hi, I have a node on an HP EliteDesk 800 G2 SFF, with a Lexar NQ710 NVMe M.2 disk, ZFS. When I stress the disk (a local backup, for example, but also some operations inside a 150MB VM), IO delay and disk temperature spike until it temporarily loses the connection with the NVMe, then it comes back to...
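    To correlate the IO-delay spikes with controller temperature, smartmontools can poll the drive during the stress. A sketch, assuming the Lexar shows up as /dev/nvme0 (the device name is a guess; check it first):

```shell
# Confirm the device name before anything else.
ls /dev/nvme*
# Print temperature and error/warning counters from the NVMe SMART log.
smartctl -a /dev/nvme0 | grep -iE 'temperature|warning|media.*errors'
# A controller that overheats and drops off the bus usually shows a
# "Warning  Comp. Temperature Time" greater than 0 here.
```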
  15. After last update: Cannot open iommu_group: No such file or directory

    Hi @Moayad Can confirm that 6.8.12-3-pve still manages to make the IOMMU work for the GPU on the "older" node, while 6.8.12-4-pve does not recognize the GPU as compatible. pvesh get /nodes/pve2/hardware/pci --pci-class-blacklist "" [...] 0x030000 │ 0x1912 │ 0000:00:02.0 │ 0 │ 0x8086...
  16. After last update: Cannot open iommu_group: No such file or directory

    /etc/default/grub:
    GRUB_DEFAULT=0
    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet"
    GRUB_CMDLINE_LINUX=""
    and /etc/kernel/cmdline:
    root=ZFS=rpool/ROOT/pve-1 boot=zfs
    are the very same on all three nodes (it should be enabled by...
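    Since the nodes boot via systemd-boot on ZFS (hence /etc/kernel/cmdline), forcing the IOMMU on explicitly would go through that file rather than through GRUB. A sketch of the usual Proxmox procedure, offered as an assumption for this setup:

```shell
# /etc/kernel/cmdline is a single line; append intel_iommu=on
# (and optionally iommu=pt), e.g.:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
# Then write the updated entries to the ESPs and reboot:
proxmox-boot-tool refresh
```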
  17. After last update: Cannot open iommu_group: No such file or directory

    So this kernel update probably lost compatibility with that graphics chipset. Is there a way to easily make it work, or should I just change the hardware on the node?
  18. After last update: Cannot open iommu_group: No such file or directory

    I always keep the kernel updated; I just upgraded to the latest 6.8.12-4-pve. I do find these lines with dmesg | grep IOMMU on the failing node:
    [ 0.259159] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
    [ 0.429491] pci 0000:00:02.0: DMAR: Disabling IOMMU for graphics on this...
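    After boot, the kernel's view of the groups can be checked directly in sysfs; a small sketch using only the standard sysfs layout, no Proxmox-specific tooling:

```shell
# List every PCI device together with its IOMMU group; an empty
# /sys/kernel/iommu_groups directory means the IOMMU is effectively off
# (which matches the "Cannot open iommu_group" error above).
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#/sys/kernel/iommu_groups/}   # strip the fixed prefix
  g=${g%%/*}                         # keep only the group number
  printf 'group %s: %s\n' "$g" "${d##*/}"
done
```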
  19. After last update: Cannot open iommu_group: No such file or directory

    Hi, After the last update, just one of my nodes throws the Cannot open iommu_group: No such file or directory error. The nodes are different Intel nodes with integrated GPUs; the other two are working fine:
    Working:
    Intel(R) Core(TM) i9-13900H
    Intel(R) Core(TM) i7-8700T
    Not working anymore...
  20. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Also had the problem, with the system also going stale (backups to NFS storage blocked for an LXC, while other VMs kept running and connecting). Trying the workaround to see if it keeps the system more stable.