Are you using a disk with ZFS?
If so, you cannot roll back to a snapshot other than the most recent one.
To roll back to a snapshot from before that, I believe you would need to delete all the intermediate snapshots.
That’s from my distant...
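For reference, a minimal sketch of what that looks like on the CLI (pool, dataset and snapshot names are placeholders): a plain zfs rollback refuses to go past the newest snapshot, while -r destroys the intervening snapshots first.
zfs list -t snapshot rpool/data/vm-100-disk-0        # list the zvol's snapshots
zfs rollback rpool/data/vm-100-disk-0@older-snap     # fails while newer snapshots exist
zfs rollback -r rpool/data/vm-100-disk-0@older-snap  # destroys the newer snapshots, then rolls back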
Hi,
I have a VM with a couple of snapshots, but it only lets me roll back to the most recent one. To reach an older snapshot I have to roll back once, delete that snapshot, and then roll back again to the one I actually want.
Is this normal in Proxmox, or am I...
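For what it's worth, the workaround described above roughly corresponds to this sequence (VMID and snapshot names below are made up):
qm listsnapshot 100             # show the snapshot tree
qm rollback 100 snap-newest     # only the most recent snapshot rolls back directly
qm delsnapshot 100 snap-newest  # remove it
qm rollback 100 snap-older      # now the older snapshot is the newest and can be rolled back to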
Sorry, no offense intended, but that setup is just asking for trouble.
Software RAID under Debian with PVE installed on top of it, well. Add to that I/O-heavy VMs like Nextcloud, Immich and Mailcow… with the VM disks in qcow2… it all grinds itself to death. Just the...
Based on findings by @fossaaen in this post, I believe I have found a solution.
Apparently, when a system reaches 80 % memory utilization, a feature called Kernel Samepage Merging (KSM) kicks in. It scans all the pages in memory and merges them...
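If I read the ksmtuned defaults correctly (an assumption, check your own /etc/ksmtuned.conf), that 80 % threshold comes from KSM_THRES_COEF, and KSM can be switched off entirely like this:
grep KSM_THRES_COEF /etc/ksmtuned.conf   # KSM_THRES_COEF=20 -> start merging below 20 % free memory
systemctl disable --now ksmtuned ksm     # stop and disable KSM on the PVE host
echo 2 > /sys/kernel/mm/ksm/run          # unmerge pages that are already shared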
Same error, but with RTX5090 (GB202).
And yes, this is on a Gigabyte Z890 GAMING X WIFI7 mainboard. Original firmware F7 (not working); upgraded to F21a, still not working.
Initially (with the F7 firmware) the card and its audio device were in IOMMU group 13...
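For comparison with other boards, this is the generic way to dump the IOMMU grouping on the host (nothing board-specific assumed):
for d in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$d")")")   # the group number
    printf 'IOMMU group %s: ' "$group"
    lspci -nns "$(basename "$d")"                      # the PCI device in that group
done | sort -V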
Meanwhile I have created about nine ZFS pools. The easiest way to change the volblocksize is to back up the VMs, destroy them and restore them from the backups after setting the new volblocksize for that pool (Datacenter -> Storage -> YourZFSStorage ->...
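Sketched on the CLI, with storage name, VMID and backup path as placeholders (the blocksize option on a zfspool storage is, as far as I know, what that GUI path edits):
vzdump 100 --storage backup-nfs --mode stop          # back up the VM
qm destroy 100                                       # remove the VM and its zvols
pvesm set local-zfs --blocksize 16k                  # volblocksize for newly created zvols
qmrestore /mnt/backup/vzdump-qemu-100-XXXX.vma.zst 100 --storage local-zfs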
Hello,
I am in the middle of reinstalling my server (first Debian with ext4 and software RAID1, then Proxmox VE) and I am wondering how best to use the SSD. I found this entry: https://pve.proxmox.com/wiki/Performance_Tweaks#Disk_Cache...
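Not an authoritative recommendation, just a sketch of how the cache mode from that wiki page is applied per disk (VMID, storage and disk name are placeholders):
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none,discard=on,ssd=1
# cache=none is the usual choice for raw/ZFS-backed disks; discard and ssd are optional extras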
I have two nodes, node-1 and node-2. I installed one VM on node-1 and set up replication from node-1 to node-2, but now node-1 has failed. So I first disabled the replication job, then created a new VM on node-2, but I didn't attach the disk...
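A hedged sketch of one way to reuse the replicated disk on node-2 (dataset path, storage name and VMIDs are assumptions; check what zfs list actually shows on your pool):
zfs list -r rpool/data                                          # find the replicated zvol
zfs rename rpool/data/vm-100-disk-0 rpool/data/vm-101-disk-0    # match the new VMID
qm rescan --vmid 101                                            # it then appears as an unused disk
qm set 101 --scsi0 local-zfs:vm-101-disk-0                      # attach it to the new VM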
Hello,
I attempted to replace the default TLS certificate used by Proxmox Datacenter Manager (PDM) with a certificate issued by an internal CA. Even after replacing proxy.pem and proxy.key in /etc/proxmox-datacenter-manager/, updating...
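One thing worth checking is which certificate the interface actually serves after the restart; from any machine with OpenSSL, something like this works (the hostname and port 8443 are assumptions about the PDM setup):
openssl s_client -connect pdm.example.internal:8443 -servername pdm.example.internal </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates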
Just to add, this was all working fine when it was ESXi and vSAN.
I understand the theory of saturated 10GbE links in a bond, but this is three nodes doing nothing.
This argument about physical NICs versus VLANs has been going on for years.
I...
No switch problems were detected. There is a single static trunk across two network interfaces carrying four other VLANs, including storage traffic, and none of them had any issues; it was specific to the single VLAN carrying corosync...
That's bad. When one of "the other" networks does saturate this single physical wire... corosync will die.
QoS settings may help to prioritize the corosync VLAN, but the recommendation is to use a separate wire. The VLAN approach will be fine as a...
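If a separate wire is available, corosync can also use it as an additional link; a rough sketch of the relevant corosync.conf pieces, with addresses, node names and priorities as placeholders (and remember to bump config_version when editing):
# /etc/pve/corosync.conf (excerpt)
nodelist {
  node {
    name: node-1
    nodeid: 1
    ring0_addr: 10.10.10.1     # dedicated corosync NIC
    ring1_addr: 192.168.1.1    # existing VLAN as fallback
  }
}
totem {
  interface {
    linknumber: 0
    knet_link_priority: 10     # preferred link with knet
  }
  interface {
    linknumber: 1
    knet_link_priority: 5
  }
}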
I have now changed the thermal paste on the PC, using some Arctic MX-6 paste, and it worked wonders! It went from 56°C at 3200 RPM to 50°C at 2200 RPM. So thermal paste solved my problem, and I now have a quiet mini PC running.
Nevertheless, the...
Maybe, but that is not what Proxmox does. It gets the memory usage as reported by the operating system inside the VM, using the balloon device/driver.
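To see what the balloon driver reports for a given VM, the QEMU human monitor can be queried on the host (VMID 100 is a placeholder):
qm monitor 100
# then, at the qm> prompt:
info balloon    # shows the balloon size and, with a working driver, guest memory stats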
Hi guys!
As some of you might know, we are organizing the Dutch Proxmox Day again.
Last year we decided to reach beyond our borders and have all presentations in English, which was a great success since almost half of the audience consisted of...
We're proud to present the next iteration of our Proxmox Virtual Environment platform. This new version 9.1 is the first point release since our major update and is dedicated to refinement.
This release is based on Debian 13.2 "Trixie" but we're...
I have PVE 9.1.11 and encountered the problem with one of my Windows 11 25H2 test VMs after I installed the recent PVE updates.
What helped to fix the problem was reinstalling the previous pve-qemu-kvm version.
apt reinstall pve-qemu-kvm=10.2.1-2...
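In case someone needs the exact syntax: the versions still offered by the configured repositories can be listed and pinned like this (the version string is a placeholder for whatever the list shows):
apt list -a pve-qemu-kvm             # show every version the repos still offer
apt install pve-qemu-kvm=<version>   # downgrade to a specific one
apt-mark hold pve-qemu-kvm           # optionally keep it from being upgraded again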
Interesting. I would have thought that running the QEMU guest agent would be enough for the host to fetch the output of the free command, but maybe that's a bad assumption.
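The agent can run commands inside the guest, so the host can fetch the free output that way, but it is a separate mechanism from the balloon statistics mentioned above (the VMID is a placeholder and the qemu-guest-agent must be installed and enabled in the VM):
qm guest exec 100 -- free -m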