Hi, @pulipulichen
You posted only a screenshot, and to make matters worse, it doesn't show the full content of the logs (the wrapped parts).
If we could see the logs in CODE blocks (the </> icon above), we could probably help more.
Anyway, what I...
A full read of the VM's source disk is required after the VM has been shut down.
Subsequent backups can then skip the full read thanks to the dirty bitmap, because the bitmap is managed by the VM's QEMU process and is lost when that process stops.
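That lifecycle can be sketched as a toy shell function (an illustration only, not Proxmox/vzdump code): the bitmap exists only inside the running QEMU process, so the first backup after any restart has no bitmap and must read everything.

```shell
# Toy illustration: the dirty bitmap lives in the VM's QEMU process,
# so a shutdown/restart discards it and forces a full read again.
BITMAP=""   # empty = no bitmap, i.e. the VM was (re)started since the last backup

backup() {
  if [ -z "$BITMAP" ]; then
    echo "full read (no dirty bitmap yet)"
    BITMAP="present"   # QEMU creates the bitmap during this backup
  else
    echo "incremental (bitmap tracked writes since last backup)"
  fi
}

backup      # first backup after VM start -> full read
backup      # next backup, VM kept running -> incremental
BITMAP=""   # VM shut down / restarted: bitmap is gone
backup      # -> full read again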
Welcome, @Zexan
Hard to guess, because you haven't given any details, e.g. what exactly happens at which stage of the installation.
Anyway, check whether the same problem occurs with installers of other systems, like Debian or Ubuntu, or when...
Resurrecting this old thread.... I've come back to the topic and been running SR-IOV for a few weeks now without any issues but with noticeable improvements in latency. This time I kept things clean and simple:
One 25Gb PF for host (including...
I tried several other things as well to identify the issue, but nothing has helped. At one point I thought of upgrading things, but that is not possible either. The problem is I can't even find logs for the services. systemctl start pvestatd or systemctl...
@nitrosont, your top output shows the problem quite clearly: it's not a CPU problem but a memory problem:
MiB Mem: 256.0 total, 1.2 free, 253.3 used (RAM practically full)
MiB Swap: 256.0 total, 0.1 free, 255.9 used (swap full as well)...
The Allocated Pages confirm the picture: 1,673,851 pages × 4 MiB (default page size on the MSA 2060) = ~6.39 TiB, which matches the ~7.0 TB at pool level.
OCFS2 reports 3.8T in use, so roughly 2.5 TiB are stale allocations that the filesystem...
That the MSA utilization stays unchanged at 94% after the scrub finished, even though ~500 GB were reported via UNMAP, is striking. The disk-group scrub is primarily an integrity check (parity verify); the UNMAP processing runs...
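The page arithmetic above can be checked directly (integer shell math, 1 TiB = 1048576 MiB):

```shell
# Verify the allocated-pages math: 1,673,851 pages x 4 MiB per page.
pages=1673851
mib=$((pages * 4))                   # allocated MiB
tib_x100=$((mib * 100 / 1048576))    # TiB with two implied decimals
echo "allocated: $((tib_x100 / 100)).$((tib_x100 % 100)) TiB"
# prints: allocated: 6.38 TiB  (~6.39 after rounding)
```

The difference to the 3.8 TiB that OCFS2 reports is the ~2.5 TiB of stale allocations mentioned above.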
@Joe77, the winsat results with v3 vs v4 are practically identical (random 16K read: 179 vs 192 MB/s), which rules out CPU type and VBS as the cause. That a single SATA SSD at @boisbleu's delivers 831 MB/s random read while your three...
Mine is doing this too. I've tried the above, but no dice: it still stops at the initramfs prompt and makes me type zpool import rpool, after which an exit makes it boot perfectly. I try to fix this every now and again, but it has been doing it for a while...
For reference, for other people stumbling upon this thread, there is also a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=5315
which mentions:
The Migrate to Proxmox VE wiki article mentions something similar (for encryption):
@iwik
Thanks for your reply. We had a few issues... but this week, with the upgrades, we lost quorum, which ended in a crash.
We are also testing LVM over iSCSI, which was introduced in PVE 9.
For now, an NFS-based share would be the best...
Hello @nimblefox,
the OpenResty problem is on the repository side. Debian Trixie uses sqv for signature verification, and since 2026-02-01 SHA1-based binding signatures are rejected. The OpenResty signing key still uses SHA1; that has to be fixed upstream...
Hotplug is supported in Proxmox VE. In your VM's settings you can configure which hardware types have hotplug enabled (for disks it's enabled by default). For IDE disks hotplug seems not to work, and for SCSI disks it only seems to work with...
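For reference, a minimal sketch of what the setting looks like in the VM config file (the VM id 101 is an example; the hotplug option takes a comma-separated list of device classes):

```
# /etc/pve/qemu-server/101.conf  (101 is an example VM id)
# enable hotplug for disks, NICs and USB; memory/cpu hotplug have extra prerequisites
hotplug: disk,network,usb
```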
Hello,
the thread is admittedly a bit old, but I'd like to continue it. I'm an absolute newcomer to Proxmox and a switcher from VMware. I imported a machine from our ESX, and it runs in Proxmox. In the...