Hi @Stoiko Ivanov, thank you for your reply. You nailed it. Of course I was too dense to enable TLS and now that it's on, the mails go through in both directions... I'm facepalming real hard right now. I have a handful of other servers in my logs...
Hi, @t.a.s
https://www.postfix.org/DEBUG_README.html#debug_peer
Verbose logging for specific SMTP connections
In /etc/postfix/main.cf, list the remote site name or address in the debug_peer_list parameter. For example, in order to make the...
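Following the linked README, a minimal fragment might look like this (host name and debug level are illustrative, adapt to your setup):

```
# /etc/postfix/main.cf - verbose logging for one remote peer only
debug_peer_list = mail.example.com
debug_peer_level = 3
```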
Did you restore a backup from your old gateway, or set it up from scratch?
One guess based on the debug output:
- your PMG does not seem to have TLS enabled - maybe the sending servers are configured not to send mail over the internet without TLS...
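For anyone checking the same thing: a quick way to see whether a gateway advertises STARTTLS on port 25 (the host name below is illustrative, and the openssl output varies):

```shell
# Probe port 25 and attempt the STARTTLS handshake (host name is illustrative)
command -v openssl >/dev/null &&
  printf 'QUIT\r\n' |
  timeout 5 openssl s_client -starttls smtp -connect pmg.example.com:25 2>/dev/null |
  head -n 5 || true
```

If TLS is off on the gateway, the handshake fails instead of printing certificate details.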
There's an SPF record tester at https://www.kitterman.com/spf/validate.html. There are rules, notably the 10-DNS-lookup limit, that catch people out over time, especially when using includes.
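As an illustration (domain and hosts are hypothetical): each include:, a, mx, exists: or redirect= mechanism costs one of the 10 allowed DNS lookups, while ip4:/ip6: entries are free:

```
; hypothetical zone data - the two include: mechanisms use 2 of the 10 lookups
example.com.  3600  IN  TXT  "v=spf1 ip4:203.0.113.10 include:_spf.google.com include:mailgun.org -all"
```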
Pruning or syncing? There is a "transfer last ___" option in the Advanced settings of a sync job, to transfer only the last n backups of each VM/CT. Otherwise the sync will transfer all backups, and pruning and garbage collection only take effect afterwards.
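For reference, the same setting is also exposed on the CLI; the job ID below is made up:

```shell
# Limit a sync job to the newest 3 snapshots per group (job ID is hypothetical)
command -v proxmox-backup-manager >/dev/null &&
  proxmox-backup-manager sync-job update s-offsite-1 --transfer-last 3 || true
```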
Note that there may be additional systemd timers, which are not visible in the classic crontab context. Run systemctl list-timers --all instead.
You also did not mention user-specific crontabs, editable by every user via crontab -e - including one...
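To enumerate both in one go (a sketch; run as root, since crontab -l -u needs privileges):

```shell
# Timers that never show up in /etc/crontab or /etc/cron.d
command -v systemctl >/dev/null && systemctl list-timers --all || true

# Per-user crontabs, prefixed with the owning account
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | sed "s|^|[$u] |"
done
true
```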
Hello all.
We have a PVE cluster with about 35 TB used (Ceph), going well.
This is backed up to a datastore on a PBS (12 HDDs in RAIDZ2 plus special devices) that reports 35.64 TB used (65 groups, 3906 snapshots, dedup factor 34.69).
Smoothly too.
This...
The peaks you see in the graphs at 1 AM and 5 AM are the VM backup and then the backup inside HA to Google Drive.
As far as I remember, I moved HA to another PVE and shut it down on the original host. The consumption was the same (minus roughly 0.5 W for HA...
Hi, last Friday in the early morning one node rebooted on its own, but I don't know what triggered it. I checked the logs and saw the following lines:
Memory failure: 0xca06f60: Sending SIGBUS to CPU 1/KVM:4190709 due to hardware memory corruption...
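That message usually points at a failing DIMM rather than anything Proxmox-specific. A hedged starting point for digging further (tool availability varies by install):

```shell
# Machine-check / memory-failure traces from the last week of kernel logs
command -v journalctl >/dev/null &&
  journalctl -k --since "-7d" 2>/dev/null | grep -Ei "mce|memory failure|sigbus" || true

# Per-DIMM error counters, if rasdaemon is installed
command -v ras-mc-ctl >/dev/null && ras-mc-ctl --error-count || true
```

A memtest run on the host is the usual next step if the counters keep climbing.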
Although this is the official Linux Kernel 7.0.0 release, it is still classified as beta since Ubuntu 26.04 has not yet been released. Proxmox leverages the Ubuntu kernel, enhanced with custom compile flags, built-in ZFS support, and patches...
From the analysis so far, the IO pressure seems to be a cosmetic issue, or rather an accounting issue, in the kernel. QEMU switched to using io_uring for event loops with QEMU 10.2. The issue appears in combination with IO threads, where a blocking...
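For anyone who wants to look at the raw numbers being discussed: the IO pressure figures come from the kernel's PSI interface, e.g.:

```shell
# "some": time at least one task was stalled on IO
# "full": time all non-idle tasks were stalled at once
cat /proc/pressure/io 2>/dev/null || true
```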
It's like using Debian sid and then complaining about issues with it.
According to his logic, Debian sid should be QC-tested since the repository is available to thousands.
That's not how it works.
test = bleeding-edge stuff with a decent chance...
Those expectations arise precisely because you’ve actually used it and experienced its shortcomings firsthand.
I understand your concerns, but ultimately, the use of the test repository is at your own risk.
I repeat: don't use the test repo for production.
The developers state it's only for testing purposes and not for production use, as evidenced by my screenshot from their own wiki.
Ignoring this is purely on you and no one else.
You could have run...
@djsami that's why you don't use the test repository in production.
As the name implies, it is a repository for testing things - it's bleeding-edge stuff.
If you need things to just work, use the enterprise repository, where only well-tested releases...
I would test both variants (Ubuntu and Debian live CD), since if need be you can also install Proxmox VE by first installing Debian and then Proxmox VE...
Yes, thanks, I was aware of the naming issue but was waiting for the final solution from @t.lamprecht, so pinning at least did the job for now. I expect a name change soon to resolve that problem.
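In case it helps others: if the pinning mentioned here is done via apt, it is only a few lines in /etc/apt/preferences.d/ (package name and version below are placeholders, not the actual ones from this thread):

```
# /etc/apt/preferences.d/hold-example - placeholders, adapt to the affected package
Package: some-renamed-package
Pin: version 1.2.3-1
Pin-Priority: 1001
```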