Hmm, they should definitely show up there then[1]. Does the remote exist? If you temporarily give the token Admin on /, do you then get the jobs back from the API?
[1]...
Hey,
the token may have different permissions than the user. Could you check whether the permissions[1] of the token itself are correct?
[1] https://pbs.proxmox.com/docs/api-viewer/index.html#/config/sync
Hey,
when you install the PVE host you have to configure its IP address. What happened here is that the IP you have configured is not in the correct subnet, the configured IP is 192.168.100.2 which is in 192.168.100.0/24. But your LAN is...
Hey,
you can have a look at [1] for that. For your example it would look like this
*-*-10 03:00:00
which would run on the 10th of every month at 03:00.
[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_calendar_events
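For reference, a few more patterns in the same calendar-event format (a rough sketch based on the linked chapter; verify the exact spellings against your PVE version):

```
daily           -> shorthand for 00:00 every day
sun 02:00       -> every Sunday at 02:00
mon..fri 22:30  -> every weekday at 22:30
*-*-01 04:00    -> on the 1st of every month at 04:00
*/2:00          -> every two hours
```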
Hey,
how do you have the NICs configured in PVE? What's the output of ip a, ip r and cat /etc/network/interfaces? Does it have its IP on the correct subnet, and what subnet is that?
Hello everyone, I want to share my experience dealing with a backup issue on my Proxmox server.
I had a scheduled backup job, but the NFS storage I used had a network issue. This caused my backup to run for around 20 hours just for a 100GB...
Hey,
this is an English forum, so please try sticking to that. That way more people are able to help you, and the thread can be useful to others. Also, please provide some context; this says nothing about the problem you are having. Include things like
- what...
You can't delete it, but you can remove the schedule; then it won't run automatically.
The message doesn't come from the GC itself, but from the check whether it should be started now according to its schedule.
@JensF yep, it really makes no sense that this always ends up in the log. Should be fixed with [1].
[1] https://lore.proxmox.com/pbs-devel/20251014075545.20528-1-h.laimer@proxmox.com/T/#u
Hey,
yes, that'll work, you just need access to the same backup storage. Backups do include the config, so you may have to update things like NIC names if they are different on the new node after restoring.
The unmounting process is handled via a dedicated maintenance mode, which prevents anything new from starting. The unmount task log should show how many operations are still being waited on.
In general, an unmount task waits for all already-running operations and blocks new ones from starting, so unmounting can be started even shortly after the sync has started.
It's not possible via PMG's postfix-based DNSBL implementation.
But it should be possible via custom SpamAssassin rules, see
https://forum.proxmox.com/threads/spam-filtering-and-dnsbl.111156/post-479320...
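A custom SpamAssassin rule for a DNSBL lookup could look roughly like this (a sketch: the rule name, the `dnsbl.example.org` zone, and the score are placeholders for your own blocklist; in PMG, custom rules go into /etc/mail/spamassassin/custom.cf):

```
# Check the last external relay against a DNSBL zone (placeholder zone)
header   RCVD_IN_EXAMPLE_DNSBL  eval:check_rbl('example-lastexternal', 'dnsbl.example.org.')
describe RCVD_IN_EXAMPLE_DNSBL  Relay listed in dnsbl.example.org
score    RCVD_IN_EXAMPLE_DNSBL  3.0
```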
Hey,
ceph has its own file under /etc/apt/sources.list.d/, you can change that to use bookworm temporarily. Then apt update shouldn't pull in any new minor versions, as bookworm is a little behind trixie. But ceph really does not mind having small...
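For illustration, such an entry could look roughly like this (a sketch: it assumes the squid release and the no-subscription repository, adjust both to your setup):

```
# /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-squid bookworm no-subscription
```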
Hey,
you can create an active-backup bond[1]. There are different types of bonds, but for this, active-backup is what you want: it'll basically fail over to the other NIC if one stops working. The config is pretty straightforward and looks something like this...
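An /etc/network/interfaces snippet for this could look roughly like the following (a sketch: the interface names eno1/eno2 and the address/gateway are placeholders for your setup):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

After editing, `ifreload -a` applies the new config without a reboot.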
Hey,
yes, changing the bridge-ports of vmbr0 and running ifreload -a is enough. But "only one VM reachable" is not something that happens when a NIC is fried. Could you post the output of cat /etc/network/interfaces and ip a? Also, does ifreload -avvv...