Check the connection via the cluster interface (you need to separate cluster traffic via a VLAN or a dedicated NIC).
If you don't do this, you can run into trouble with the cluster.
What I have already considered:
Before it shredded the first backup server, I had already managed to migrate 2 VMs to Proxmox with Backup & Restore (and then stupidly let the Proxmox job run :-( ). In retrospect, Veeam was...
It looks like you ran into the bug reported here: https://bugzilla.proxmox.com/show_bug.cgi?id=7271
Feel free to chime in there!
To temporarily downgrade pve-container you can run apt install pve-container=6.0.18
I just created a custom BIOS for WTR 5700U/5825U. All PSP components are updated to AGESA 1.0.1.1C and GOP is updated to 2.24.
It does fix the throttling issue for me. Actually, there is only one PCI device to pass through (the second one), and the 2...
Hello everyone,
I have a cluster of two nodes and a qdevice for the quorum. I am using Proxmox VE 9.1, which I recently installed with the official ISO.
I have added my VMs to HA from the data center, and checking the replication of the...
I appreciate your thorough replies! It sounds like my best course of action will be to:
1. Once the sync completes, recreate the zpool on the larger drives
2. Create a new temporary namespace for new backups to go to while old backups are re-synced to...
Okay, then you're doing it right. I was able to observe that error as well, but for me it worked again afterwards. It's probably a problem in the Veeam plugin.
The requests library doesn't send the payload as JSON when using the data kwarg (it form-encodes it instead). Use json:
api_response = requests.post(api_node, headers=headers, json=data, verify=False)
See https://docs.python-requests.org/en/latest/api/#requests.Request
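The difference can be seen without talking to a server by preparing both request variants and inspecting what requests would actually send. This is a minimal sketch; the host and token below are hypothetical placeholders, not values from the thread:

```python
import json
import requests

# Hypothetical endpoint and token, for illustration only.
url = "https://pve.example:8006/api2/json/nodes/pve1/qemu/100/snapshot"
headers = {"Authorization": "PVEAPIToken=root@pam!mytoken=SECRET"}
data = {"snapname": "before-upgrade", "description": "pre-upgrade state"}

# data= : the dict is form-encoded (application/x-www-form-urlencoded).
form_req = requests.Request("POST", url, headers=headers, data=data).prepare()

# json= : the dict is serialized to JSON and Content-Type is set to application/json.
json_req = requests.Request("POST", url, headers=headers, json=data).prepare()

print(form_req.headers["Content-Type"])  # application/x-www-form-urlencoded
print(json_req.headers["Content-Type"])  # application/json
print(json.loads(json_req.body))         # round-trips back to the original dict
```

An API expecting a JSON body will typically reject or misparse the form-encoded variant, which is why switching to `json=` fixes the POST.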
You will be able to take backups immediately; there will be no real blocking there. But deduplication against the old data may temporarily not take effect, so the source PVE might have to back up more data in the first backup run, which then the sync job doesn't...
No, sync jobs only allow syncing backup snapshots newer than the snapshot already present in the target backup group. So if you are backing up to a backup group which has not been synced back yet, the sync will not fetch the old backups. This...
I have a similar issue but I don't think the patch will fix it, it's related to propagating UID and GID on a mountpoint. Had issues on 6.1.0, downgrading to 6.0.18 fixed it. Logs:
run_buffer: 571 Script exited with status 1
lxc_init: 845 Failed...
I spoke with the vendor advising against mesh networking again. They now say that 2x bonded 10Gb is better performing, even though total bandwidth is lower, due to multipathing to OSDs. I have a hard time believing an OSD could saturate a 25Gb...
There is, indeed, no rhel-home as seen in your lsblk output. Is it possible there was another disk that you did not transfer?
Also, please use "CODE" </> tags to surround your configuration/command output
Blockbridge : Ultra low latency...
The JBOD thing was only meant as a synonym, i.e. adding disks at random to existing RAIDs.
We have now played around with it a bit. I still had three 2TB disks, created a datastore, then another 2TB disk...
Hi
Yes, I have checked with lsblk and can see that both swap and root are on sda2 as well. See the lsblk output and /etc/fstab below:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 32G 0 disk
├─sda1 8:1 0...
Thanks for sharing @executable. I have quite a similar problem. I tried reinstalling Ventoy, using dd, and changing the Proxmox ISO, still the same. Tried Balena, and it worked.
Hi,
I've come up with a working Python script to get all snapshots for specific VMs.
This is the snippet to GET all snapshots for a certain VM (working!):
api_node = proxmox_api + "/json/nodes/" + proxmox_node + "/qemu/" + node_id + "/snapshot"...
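The snippet above can be wrapped into a small helper that loops over several VMs. This is a sketch under assumptions: `proxmox_api` is assumed to already end in `/api2` (matching the `/json/nodes/...` path built above), and the host, node name, and VM IDs shown are hypothetical placeholders:

```python
import requests

def build_snapshot_url(proxmox_api: str, node: str, vmid: str) -> str:
    # Mirrors the URL built in the snippet above; assumes proxmox_api ends in /api2.
    return f"{proxmox_api}/json/nodes/{node}/qemu/{vmid}/snapshot"

def get_snapshots(session: requests.Session, proxmox_api: str, node: str, vmid: str) -> list:
    # Proxmox wraps API results in a top-level "data" key; each snapshot entry
    # includes at least a "name" field.
    resp = session.get(build_snapshot_url(proxmox_api, node, vmid), verify=False)
    resp.raise_for_status()
    return resp.json()["data"]

# Usage sketch (hypothetical host, token, and VM IDs):
# session = requests.Session()
# session.headers["Authorization"] = "PVEAPIToken=root@pam!mytoken=SECRET"
# for vmid in ["100", "101"]:
#     for snap in get_snapshots(session, "https://pve.example:8006/api2", "pve1", vmid):
#         print(vmid, snap["name"])
```

Using a `requests.Session` keeps the auth header in one place and reuses the TLS connection across the per-VM requests; `verify=False` matches the snippet above but should be avoided outside a lab.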