pve 8.0

  1. PVE MGT reachable, VM reachable, ping internet working, trace working, TCP/other traffic reset from peer

    I recently installed PVE 8.0.3 for a friend. It seems to work, since I can reach the MGT IP over both the web UI and SSH, and I uploaded the needed VM disks and imported them; all of them are up and running. However, it seems that: 1. the MGT port can't connect to any update source, I tried curl with insecure...
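
    A minimal connectivity check for this kind of symptom, assuming a stock PVE 8 install without a subscription (the repository URLs below are just the standard public Proxmox ones, not taken from the thread):

        # Does the management interface have a default route and working DNS?
        ip route show default
        cat /etc/resolv.conf

        # Can the node reach the public repository at all?
        curl -sSf http://download.proxmox.com/debian/pve/dists/bookworm/Release | head

        # Without a subscription, the enterprise repo must be disabled and the
        # no-subscription repo added, otherwise apt update keeps failing:
        #   /etc/apt/sources.list.d/pve-enterprise.list  -> comment out its deb line
        #   /etc/apt/sources.list                        -> add:
        #     deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
        apt update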

  2. Server goes on strike after reinstallation after starting an lxc or vm [pveproxy[896 897]: detected empty handle]

    Hello everyone, I have two Futro S740s that I previously ran in a Proxmox cluster with ZFS. But since there were always problems with rebooting, I wanted to run the two as separate nodes. So now to my problem: previously they ran without any issues, but not anymore after reinstalling PVE 7.4...

  3. [SOLVED] Packet loss issue at VLAN SDN VNet.

    Hello, I have a problem: I'm using VLAN SDN zones, and almost every VNet looks fine, but one VNet has a packet loss problem (ping between VMs in the same VLAN on the same / different nodes). I have a stack with 4 nodes, connected via 2x 25Gbps LACP (DACs to Nexus 9300 series). This is...
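
    A rough first-pass check for this kind of VLAN/LACP packet loss, using placeholder names (bond0, vmbr0, a peer VM IP) rather than anything from the thread:

        # LACP state and transmit hash policy of the uplink bond
        cat /proc/net/bonding/bond0

        # Effective MTU on the bond and on the bridge carrying the VNet
        ip -d link show bond0
        ip -d link show vmbr0

        # From inside an affected VM: test the path MTU with "don't fragment" set
        # (1472 = 1500 bytes minus 28 bytes of IP/ICMP headers)
        ping -M do -s 1472 <ip-of-peer-vm>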

  4. Proxmox VE 8 Cluster - New nodes unhealthy

    Been using Proxmox for nearly a year, stable on a single machine. Decided to add two more machines to create a cluster and eventually HA. Steps taken on the original machine (metal-01): Datacenter -> Cluster -> Create Cluster. Steps taken on metal-02 and metal-03: Installed PVE 8.0.3 -> RAID-Z1 on...
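
    A short sketch of the usual first checks for an unhealthy cluster node, quorum and corosync state, run on each member:

        # Quorum, expected votes and membership as this node sees them
        pvecm status
        pvecm nodes

        # Health of the cluster filesystem and corosync
        systemctl status pve-cluster corosync
        journalctl -u corosync -b --no-pager | tail -n 50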

  5. [TUTORIAL] How to fix a ZFS mess on PVE 8.0

    Hello everybody, I'm testing some features of PVE 8.0 in my homelab and I made a simple but not impossible mistake: I was limiting the RAM for ZFS to 4GB in /etc/modprobe.d/zfs.conf with the argument options zfs zfs_arc_max=4294967296, then I ran the command update-initramfs -u and...
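
    For reference, the ARC cap from the post and a hedged sketch of how it is normally applied and verified (the proxmox-boot-tool step only applies to installs that boot through it):

        # /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 4 GiB (4 * 1024^3 bytes)
        options zfs zfs_arc_max=4294967296

        # Rebuild the initramfs so the module option is applied at boot
        update-initramfs -u
        # On installs booted via proxmox-boot-tool, also refresh the boot entries
        proxmox-boot-tool refresh
        # After a reboot, confirm the limit took effect
        cat /sys/module/zfs/parameters/zfs_arc_max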

  6. Intel IOMMU - Dell Poweredge 1950iii

    I've been looking through the Proxmox forums for the past two days regarding enabling IOMMU on my server. I am trying to enable IOMMU in order to do a PCI passthrough of my Intel 3650 NIC. I have another Dell PowerEdge R510 with the same NIC and version of Proxmox with no issues. Here...
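
    A hedged sketch of the usual way to enable the IOMMU on a GRUB-booted Intel host (whether the 1950 III's chipset actually supports VT-d is a separate question):

        # /etc/default/grub -- add intel_iommu=on (and optionally iommu=pt) to the kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

        # Apply, reboot, then verify that DMAR/IOMMU was initialised
        update-grub
        dmesg | grep -e DMAR -e IOMMU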

  7. [SOLVED] Network speed stuck at 100Mb/s - Proxmox VE 8.0.2

    Hey, I've just installed Proxmox VE 8.0.2 on two different types of PC and I'm having the same issue. They are both stuck at 100Mb/s network speed, and I'm not sure where to go from here. I did fresh installs just last week and made sure they are both up to date. root@TestLabServer:~# ethtool eno1...
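
    A quick way to narrow this down with ethtool (eno1 is the interface name from the post); a gigabit NIC that links at 100Mb/s while still advertising 1000baseT/Full usually points at the cable or the switch port rather than the driver:

        # Current link speed and the modes both ends advertise
        ethtool eno1

        # Advertise only 1000baseT/Full to force a gigabit renegotiation
        ethtool -s eno1 advertise 0x020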

  8. [SOLVED] Cannot roll back VMs

    Dear Proxmox Users and Maintainers / Developers, I cannot seem to roll back (maybe only some) VMs. The full rollback log is: Logical volume "vm-196-disk-0" successfully removed. WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set...
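
    The WARNING in that log is LVM's standard hint about thin-pool autoextension; the relevant knobs and a quick usage check are sketched below (the values are illustrative, not taken from the thread):

        # /etc/lvm/lvm.conf, activation section -- let LVM grow the thin pool
        # by 20% whenever usage passes 80% (a threshold of 100 disables autoextension)
        thin_pool_autoextend_threshold = 80
        thin_pool_autoextend_percent = 20

        # How full the thin pool and its metadata actually are
        lvs -a -o name,lv_size,data_percent,metadata_percent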

  9. [SOLVED] PVE not booting after Ceph installation

    I have a three-node cluster with PVE 8 and Ceph installed. The nodes are named pfsense-1, pfsense-2 and r730. I have been running PVE for about a year and recently installed Ceph on these nodes. It worked well, but when I rebooted the r730 node, it wouldn't boot (I waited for 15 hours). I reinstalled the...

  10. Light theme as default on PVE 8

    Hi, I'm having bad luck setting the light theme as the default on Proxmox VE 8. Nothing I've found so far works for me. Why is the dark theme even set as the default? I would be very thankful for working advice on how to achieve this. Thanks in advance!

  11. Proxmox random crashes. Please help!

    This is a new build. I tried both PVE 7.2 and 8.0 with the same results. The system crashes randomly, even when not under heavy load. Specs: 24 x AMD Ryzen 9 5900X 12-Core Processor (1 Socket), Kernel Version Linux 6.2.16-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-7 (2023-08-01T11:23Z), PVE Manager Version...

  12. Proxmox Cluster no HA only Config

    Hello everyone, maybe this question has been asked somewhere, but I couldn't find it: is it possible to remove the HA functionality and use the cluster only as a config cluster? So that information about storage, users etc. is shared, but with no HA, so I can turn hosts on/off if needed? Kind regards,

  13. pve 8 and pre Quincy hyperconverged ceph versions possible

    At the moment I am running two PVE clusters, both on PVE 7.4. One of the two is using storage from an external Ceph cluster running Ceph Nautilus (14.2.22). This is working for me without any problems. Now, in the online docs "Upgrade from 7 to 8", under Prerequisites, I read that for...