Threads tagged "bug"

  1. Terminal Installer bypasses forced FQDN entry (bug?) in GUI installer

    GOAL: successfully install Proxmox 8.3. ISSUE: unexpected behavior from the FQDN entry. When the default entry is kept, ip_assigned:8006 doesn't work. When you try to change it, you get error messages like "host name must match ip4" (or something like that). When I attempt to leave it blank, I get...
  2. Prune without defined retention: Concerning logs, but are backups safe?

    Hello, I clicked "Run now" under Prune Jobs without having a retention policy defined. It correctly identified that I had "no prune selection", but then proceeded to tell me it was deleting every snapshot on the server. It ran faster than I think it would have if it were actually pruning anything...
  3. Deleting a Ceph pool / storage is possible despite a running VM with disks on it!

    Hello lovely people, we're currently examining whether PVE is a good alternative to our old ESXi environment, and I've set up some test machines (virtualized in ESXi) in order to test some things. I've created some Ceph pools, run some tests (hard shutdown of the PVE nodes simulating failures), etc....
  4. Migration Error/Bug - command - 403 Permission check failed (changing feature flags (except nesting) is only allowed for root@pam)

    Migrating an offline LXC container from a single node to a cluster. Migration error: 2024-12-29 12:44:09 remote: started tunnel worker 'UPID:pve-nuc-0:00108181:00814094:67719879:vzmtunnel:903:root@pam!pdm-admin:' tunnel: -> sending command "version" to remote tunnel: <- got reply 2024-12-29 12:44:09...
  5. Bug? No swap in restored LXC Debian

    I restored a Debian LXC with xorg from a Qotom firewall appliance to a Beelink EQR6 mini-PC; the original has swap while the restored LXC does not. I have tried resizing swap in the PVE GUI and with 'pct set 101 --swap 384' and the like, and the container still has 0 swap. Beelink PVE 8.3.1...
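    A minimal troubleshooting sketch for a case like this, assuming container ID 101 as in the post. One thing worth checking: an LXC container can only report swap if the PVE host itself has swap configured, so a host installed without a swap volume (common with ZFS-root installs) will show 0 swap inside every container no matter what the container config says.

    ```shell
    # Hypothetical check, run on the PVE host (container ID 101 taken from the post)
    pct set 101 --swap 384        # write the swap limit into the container config
    pct config 101 | grep swap    # confirm the limit was persisted
    swapon --show                 # no output here means the HOST has no swap,
                                  # so containers will report 0 swap regardless
    ```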
  6. Reboot of whole cluster because of CPU throttling, after new update

    We have a cluster of 25 nodes; all machines are backed up on PBS. The last update to the latest version, on the Proxmox nodes on which we have VMs and LXCs, was carried out on 24.11. However, on 3.12 in the morning a problem appeared with one node: CPU load on that node increased to 100%, it was...
  7. Updating Proxmox led to NVMe bug

    Hi all, I have a server running Proxmox which uses four NVMe drives in a ZFS RAID-Z2. Since I recently updated Proxmox, the NVMe drives periodically go down and the VMs running on that datastore crash. Typically the issue occurs during a...
  8. Bug report - Web GUI missing status and metrics

    I am opening this thread to report a bug I have found since PXM version 8.2.7. As you can see in the attachment, some hours after the server starts up, the web GUI breaks, with virtual machine status and metrics history missing. This behavior is not present in PXM version 8.2.4. The same...
  9. (pvecm) qdevice setup fails, can't get fixed pve-cluster package

    Recently, I've added multiple qdevices to several clusters running older versions of PVE without any issues, using the command 'pvecm qdevice setup 192.168.1.xxx -f' and corosync-qdevice. The issue concerns certificate authentication. However, my latest 2-node cluster is running VE 8.2.2, where the...
  10. Kernel 6.8.x NFS server bug

    Hi, I'm having trouble with NFS after upgrading. It's literally crashing every time; I'm still going to try 6.8.8, but I might need to switch to 6.5 if there is still no fix for this. 2024-09-11T19:22:31.190378+02:00 server2 kernel: [72134.823446] INFO: task nfsd:2056 blocked for more than 1228...
  11. [8.2.4] [bug] service ceph-mon is not working properly

    tl;dr: changing %i to the corresponding name makes the mon service work. One of my mons kept dying, restarting, and failing to start again, so I investigated. It cannot start due to a misconfiguration in the /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve2.service file at the "%i" variable, which points to...
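    For reference, the stock ceph-mon@.service template starts the daemon with `--id %i`, where systemd substitutes the instance name after the '@'. A hedged sketch of the workaround the poster describes (spelling the id out instead of relying on %i), assuming the mon is named pve2 and using a drop-in override rather than editing the wanted-by symlink in place:

    ```shell
    # Override the template instance rather than editing the unit file directly
    systemctl edit ceph-mon@pve2
    # In the drop-in, clear and re-set ExecStart with the id written out:
    #   [Service]
    #   ExecStart=
    #   ExecStart=/usr/bin/ceph-mon -f --cluster ceph --id pve2 --setuser ceph --setgroup ceph
    systemctl daemon-reload
    systemctl restart ceph-mon@pve2
    ```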
  12. [SOLVED] Strange network behaviour with LXC container and SDN on PVE

    Hi, I'm experiencing strange behaviour on my PVE cluster with an LXC container. Context: I have a PVE cluster running on bare metal, version 8.1.3, with SDN networking in place. I created an LXC container (Ubuntu 22.04) on one host and I'm trying to reach the cluster API using Proxmoxer...
  13. "Job Detail" shows suspend-mode backup as "Snapshot"

    When I create a backup job in the Datacenter view and choose the mode "Suspend", the "Job Details" incorrectly show the job as configured in "Snapshot" mode. Is this a bug in the web UI? This line in BackupJobDetail.js seems to be relevant :)
  14. There are bugs in pve8.2

    For Windows 7 32-bit systems, if you configure 2GB of memory, there is no problem. If you configure 4GB of memory, the mouse and keyboard in VNC will fail. For Windows XP 32-bit systems, if you configure 2GB of memory, there is no problem. If you configure 4GB of memory, a blue screen will be...
  15. Proxmox thermal throttling at 50 °C

    What happened: yesterday I rebooted Proxmox (which had an uptime of months) and found that when I started up my HandBrake LXC, my CPU wouldn't go above a 15 W draw. I have updated the host system maybe twice in the last few months but never had any issue; it's possible I never rebooted after those...
  16. Migrated Linux VM from VMware ESXi to Proxmox 8.2.2; after migration, Linux VMs boot into rescue mode

    Hello team, I am testing Proxmox KVM 8.2.2. Migrations from ESXi to Proxmox for Windows 11 and Windows 2022 went well without any issue. But when I migrate Linux VMs like CentOS 7 from ESXi to Proxmox and try to boot them, they go into rescue mode. How do I fix this? I am so...
  17. SMB failure leads to abnormal PVE reads and writes, quickly consuming NVMe lifespan

    Preliminary cause analysis of the incident: Proxmox VE experienced an SMB/CIFS mount going down during a backup-to-SMB/CIFS task, triggering endless read and write operations on the local disk. At the same time, it also triggered gvt-related errors. Node where the incident occurred: ` journalctl...
  18. kernel:[ 9203.691567] watchdog: BUG: soft lockup - CPU#15 stuck for 6802s! [systemd-timesyn:639]

    I've just upgraded my PVE server from 8.1.3 to the latest 8.2.2 and got this: sudo apt autoremove [sudo] password for dengolius: Reading package lists... Done Building dependency tree... Done Reading state information... Done The following packages will be REMOVED: pve-kernel-5.15...
  19. QM machine type via CLI

    Good morning, I wanted to create a virtual machine via the CLI using the 'qm' command. If I try to enter 'q35' it works, but if I enter 'i440fx' it gives me an error. root@pve:~# qm create 100 --machine q35 root@pve:~# qm create 101 --machine i440fx 400 Parameter verification failed. machine...
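    The likely explanation (an assumption; the truncated post does not confirm it) is that 'i440fx' is simply not a valid value for the machine parameter: the i440fx chipset is the default and goes by 'pc' in qm's machine-type syntax, with versioned forms such as pc-i440fx-8.1 also accepted. A sketch of the commands, reusing the VM IDs from the post:

    ```shell
    qm create 100 --machine q35           # works: 'q35' is a valid machine type
    qm create 101 --machine pc            # the i440fx default is spelled 'pc'
    qm set 101 --machine pc-i440fx-8.1    # or pin a specific machine version
    ```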
  20. [SOLVED] Kernel bug: unable to handle page fault

    Hello everyone, I have been experiencing crashes on my server since I rebuilt it. I tried changing the RAM speed from 3200 MT/s to 3000 MT/s after the first occurrence, but unfortunately the issue has persisted. Once the kernel bug occurs, the server becomes inaccessible via both the...