That should actually work since it's a check on the username. Can you provide a bit more detail? Which API call is it exactly, what are the parameters?
I'd suggest NOT running those in a container; prefer running them inside a VM instead.
There's https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_custom_cloud_init_configuration for using cloud-init with snippets.
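As a quick sketch of what that looks like (VMID, storage and snippet name are hypothetical; the storage needs the `snippets` content type enabled):

```
# Put your user-data file on a storage that allows snippets, e.g.
# /var/lib/vz/snippets/user-data.yaml on the "local" storage, then:
qm set 100 --cicustom "user=local:snippets/user-data.yaml"
```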
Is there anything you're missing/would like to see added?
Check if there are still leftover files in /etc/pve/nodes/<deleted-node>.
As long as there are files still in there, the node will show up in the UI.
Please make a backup of the files before you delete them, just to be safe!
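For example, assuming the removed node was called `oldnode` (adjust to your node name):

```
ls -la /etc/pve/nodes/oldnode
# back the directory up somewhere outside /etc/pve first
cp -r /etc/pve/nodes/oldnode /root/oldnode-backup
rm -r /etc/pve/nodes/oldnode
```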
Can you provide the complete VM config (qm config <VMID>) as well as the storage config (cat /etc/pve/storage.cfg)?
Feel free to mask any IPs, domains or comments you don't want public.
It would probably be good to create a new service file and a separate timer.
You can use the current service file as a base and edit the command of the new one to include --timespan today.
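A minimal sketch of such a unit pair (names and the 18:00 schedule are hypothetical, and the exact `ExecStart` should be copied from your existing service):

```
# /etc/systemd/system/pmgreport-today.service
[Unit]
Description=Send spam report for today

[Service]
Type=oneshot
# Copy the ExecStart= line from the existing report service and
# append --timespan today to it.

# /etc/systemd/system/pmgreport-today.timer
[Unit]
Description=Daily spam report (today)

[Timer]
OnCalendar=*-*-* 18:00:00

[Install]
WantedBy=timers.target
```

Then `systemctl daemon-reload` and `systemctl enable --now pmgreport-today.timer`.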
The User Whitelist is for mails that were marked as spam and put into quarantine.
If the sender is on the whitelist for a recipient, the mail is delivered instead of going into quarantine. But this happens on a per-user basis: if a sender sends to multiple recipients and only one of them has the sender whitelisted, only that recipient's copy is delivered; the others still go into quarantine.
Remove the guest gateway if you don't want those guests to get to the outside.
They should still be able to communicate with any other guests on the same bridge. If they're outside your configured subnet, you'll have to add a route on both sides.
If you set your host as gateway and enable IP forwarding (plus NAT/masquerading if the guests use a private subnet), they can reach the outside through the host.
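A minimal sketch of that setup, following the usual masquerading pattern (subnet `10.10.10.0/24`, bridge `vmbr1` and uplink `eth0` are assumptions, adjust to your setup):

```
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
```

The guests then use `10.10.10.1` as their gateway.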
Do you mean the `User Whitelist` under Administration?
If you select that tab, there's a drop-down box listing the email addresses that have at least one whitelist entry.
Did you add yours to one of those mail addresses?
Did you add any up/post-up commands to add routes?
Do you NAT any outgoing packets on that bridge?
Do the guests have a gateway configured that forwards traffic to the outside?
Running tcpdump on that bridge could help narrow down where this comes from, and where outgoing packets are sent.
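For example (interface names assumed, replace `icmp` with whatever traffic you're testing with):

```
# watch the bridge itself, including MAC addresses
tcpdump -eni vmbr0 icmp
# and check whether the packets actually leave via the uplink
tcpdump -ni eth0 icmp
```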
Did you change the command to use the --timespan today parameter, rather than yesterday? [0]
[0] https://pmg.proxmox.com/pmg-docs/pmg-admin-guide.html#chapter_pmgqm
Splitting replication traffic over different links is not supported.
You can set a migration network and it will be used for replication, but all replication traffic will go over that network then.
Is it just I/O failing in the guest, or is the whole VM process stuck?
Do you see any processes in `D` state when you check with ps auxwf?
Do you see any I/O errors or hung tasks in the journal?
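To filter for those D-state processes quickly, a small sketch (the STAT column is field 8 in `ps auxwf` output):

```shell
# List processes stuck in uninterruptible sleep (D state); persistent
# entries here usually point at hung I/O on the host side
ps auxwf | awk '$8 ~ /^D/ {print $1, $2, $8, $NF}'
```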
Please provide the complete task log of a failing backup and the journal starting before the backup and ending afterwards (~5-10 minutes before and after).
If it's below what you can send per minute over the 1Gbit network, it should be fine.
I'd suggest running it over the 1Gbit network and monitoring it in case the network becomes the bottleneck.
Please provide the output of pveversion -v and the journal from the time of the backup: journalctl --since '2024-01-15 20:50:00' --until '2024-01-15 21:00:00' > journal.txt
I'm not sure I understand correctly: the backup breaks the disk even though the guest continues to run?
And you have to...
It depends.
How much data changes each minute on those 4 guests' disks? If it's less than what the network, and in turn the replication, can handle, it should be fine.
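As a back-of-the-envelope check with assumed numbers (the per-guest change rates below are purely hypothetical, measure your own):

```shell
# A 1 Gbit/s link moves at most ~125 MB/s, i.e. ~7500 MB per minute
# (before protocol overhead)
link_mb_per_min=$((1000 / 8 * 60))
# hypothetical per-minute change rates for the 4 guests, in MB
changed_mb_per_min=$((300 + 150 + 50 + 20))
echo "changed: ${changed_mb_per_min} MB/min, link: ${link_mb_per_min} MB/min"
```

As long as the first number stays well below the second, the link can keep up.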
In principle, the node should be told the current state as soon as it rejoins, so there's no need to intervene manually here.
In principle, it is possible to mount the `pmxcfs` (the cluster filesystem behind /etc/pve [0]) locally. With that you can then also...