Hey,
Yes, that'll work; you just need access to the same backup storage. Backups do include the config, so you may have to update things like NIC names if they differ on the new node after restoring.
Personally, I just minimize how much I customize PVE and back up /etc. I recently replaced my Intel server with a much newer AMD one, and this approach had me back up and running pretty quickly. No worries about requiring different firmware packages or...
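In case it helps, a minimal sketch of that approach (the backup path and filename are just examples, not from the post):
# back up the host's /etc, including the mounted /etc/pve, to a dated tarball
tar czf /root/etc-$(hostname)-$(date +%F).tar.gz /etc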
I think the correct command here would be
pvesh delete /cluster/acme/account/pvecluster
i.e., the account name goes in the path.
Yes, on deletion we try to unregister that account.
If you delete the file from /etc/pve, this should not break our stack, but...
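If you want to double-check the account name first, a hedged example (the output will differ per cluster):
# list the ACME accounts currently registered on the cluster
pvesh get /cluster/acme/account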
Just a tip: you only need to add tso off.
There's no need to turn off all the others.
I've been using this in /etc/network/interfaces for over 2 years with no issues:
post-up ethtool -K eno1 tso off
post-up ethtool -K vmbr0 tso off
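For context, a sketch of where those lines typically sit (address, gateway and bridge settings are placeholders; the interface names match the example above):
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # disable TCP segmentation offload on both the NIC and the bridge
    post-up ethtool -K eno1 tso off
    post-up ethtool -K vmbr0 tso off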
From the apt history, it looks like there was an in-between upgrade that pulled in many packages from Proxmox VE 9:
Start-Date: 2025-10-05 16:21:44
Commandline: apt upgrade
Install: [...] proxmox-kernel-6.14:amd64 (6.14.11-3, automatic)...
Hi!
I haven't tried to reproduce it yet. When was the update performed? Had the watchdog already been inactive before (are there any recent entries in journalctl -u watchdog-mux)?
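For reference, a hedged way to check (the date is taken from the apt history quoted above):
# show watchdog-mux journal entries since the day of the upgrade
journalctl -u watchdog-mux --since "2025-10-05" --no-pager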
Your code is great. I would just add a reference to the documentation that (almost) covers this case,
and to this script from GitHub, which recommends using a `sleep 30` before restarting anything; I found that very useful.
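For anyone skimming, a minimal sketch of that pattern (the service name is a placeholder, not taken from the linked script):
# give things time to settle before restarting anything
sleep 30
systemctl restart example.service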
Good day
Help me figure out and implement the correct virtual machine configuration for a dual-socket motherboard (PVE 8.4, 6.8.12 kernel)
Given:
root@pve-node-04840:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes...
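For illustration, a hedged sketch of what a NUMA-aware config for a dual-socket box like this might look like (VMID, core count and memory are placeholders):
# sketch only: two virtual sockets, NUMA enabled, host CPU type passed through
qm set 100 --sockets 2 --cores 8 --numa 1 --cpu host
qm set 100 --memory 65536 --balloon 0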
Hello!
We have a problem where Veeam very often fails to start the worker.
Now, the problem could be with Veeam or with Proxmox. I then always have to manually abort the worker start in Proxmox.
When I manually repeat the backup job, it usually...
> One of pools is running on simple consumer (dramless) nvme's (3 disk raidz1, one disk missing)
Seriously, you're running a consumer-level 3-disk raidz DEGRADED with 1 disk MISSING, and posting about it here?? Fix your pool first.
If you want...
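If it helps, a hedged sketch of the first steps (pool and device names are placeholders):
# check which member is missing, then replace it with a new device
zpool status -v tank
zpool replace tank <missing-or-failed-device> /dev/disk/by-id/<new-disk>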
I mentioned an NVMe config only because even a mix of 8 NVMe and up to 16 SATA/SAS ports makes no sense to me anymore. If you want performance, use a RAID controller; if you want ZFS for its features, don't use one.
So the "Standard VGA" solution seems to do the trick. I would prefer using virtviewer, but at least this is a pretty good workaround for text consoles.
I also tried to use consoleblank=0 as VM kernel cmdline parameter and on the server with Xeon...
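For completeness, a hedged example of setting that display type from the CLI (the VMID is a placeholder):
# switch the VM's display device to Standard VGA
qm set <vmid> --vga std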
Bug status: CONFIRMED, reproduced
TURNING WRITEMOSTLY OFF FOR MDRAID IS ADVISED ASAP (a sketch of how follows the experiment notes below)
HIGH DATA LOSS RISK
Experiment type: system clone to VM
Experiment start point: a running system in good condition after disabling writemostly
Experiment...
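As a hedged sketch of the mitigation mentioned above (array and member names are placeholders; the per-device state file is the standard md sysfs interface):
# show the current flags for that member, then clear write-mostly on it
cat /sys/block/md0/md/dev-sda1/state
echo -writemostly > /sys/block/md0/md/dev-sda1/state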
Oh, I see. Yes, that is because you have the saferemove flag enabled on the storage (https://pve.proxmox.com/pve-docs/chapter-pvesm.html#pvesm_lvm_config). The VM should already be functioning fine and not be blocked by that removal, just other VM...
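If you want to turn that behaviour off, a hedged example (the storage ID is a placeholder):
# disable zeroing of data before LV removal on that LVM storage
pvesm set <storage-id> --saferemove 0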