Does anyone have an idea?
I could probably reconfigure Postfix to do this, but isn't that going to cause problems with pmg-log-tracker or any other part of the system?
Can't we just configure Proxmox Mail Gateway to relay all email to another MTA that does DKIM signing?
That way you wouldn't need to manually adjust the Postfix config files, and it would probably make upgrades easier?
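For reference, this is roughly what I had in mind with "reconfigure Postfix myself" (completely untested sketch; the relay hostname is made up, and I'm assuming the template override under /etc/pmg/templates is still the supported way to keep local Postfix changes across upgrades):

    # copy the stock template so pmgconfig picks up the local copy
    cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates/main.cf.in
    # then add something like this to the copied template:
    #   relayhost = [dkim-signer.example.com]:587
    # and regenerate the Postfix config from the templates (see 'man pmgconfig')
    pmgconfig sync --restart 1

But that is exactly the kind of manual fiddling I'd like to avoid, hence the question.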
Strangely, when I do "cat mail.msg | spamassassin -D 2>&1 | grep URIBL" it does tell me "2.5 URIBL_DBL_SPAM Contains a spam URL listed in the Spamhaus DBL".
So SpamAssassin is able to detect it, but Proxmox Mail Gateway didn't?
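The next thing I plan to check is whether DNS-based DBL lookups work from the gateway itself, since Spamhaus apparently refuses queries that arrive through big public resolvers like 8.8.8.8. Rough check (dbltest.com should be the test entry Spamhaus documents, if I remember it right; a 127.0.1.x answer means the lookups aren't being blocked):

    # run on the PMG host
    host dbltest.com.dbl.spamhaus.org
    # re-run SpamAssassin with network tests enabled against the same message
    spamassassin -D -t < mail.msg 2>&1 | grep -i uribl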
I received an email containing a blacklisted URL (listed in dbl.spamhaus.org).
I thought SpamAssassin always checks dbl.spamhaus.org by default, but there is not even a report in the headers telling me the mail contains a blacklisted URI.
How do I make Proxmox Mail Gateway check for blacklisted URIs and...
I have added an IP subnet in "Configuration->Mail Proxy->Whitelist->IP Network (Sender)" but noticed that one of the emails, sent from an IP in this subnet, was quarantined.
How can I prevent this from happening?
It looks like a bug to me.
The email was handed to pmg-smtp-filter, if I'm not mistaken, and it did nothing with it. It didn't even log a record of what happened.
Can I enable debug logging somehow?
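What I've been doing in the meantime is following that one message through the logs by hand, roughly like this (the sender address is a placeholder, and the log locations are simply where the entries end up on my box):

    # find the message in the Postfix log and note its queue ID
    grep -i 'sender@example.com' /var/log/mail.log
    # then follow that queue ID through Postfix and pmg-smtp-filter
    grep '<QUEUE_ID>' /var/log/mail.log /var/log/syslog

But that only shows what was logged, and in this case pmg-smtp-filter logged nothing at all, hence the question about debug logging.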
I'm new here... And I tried this ( http://www.aleph-tec.com/eicar/index.php ) site to send an EICAR test virus to my mailbox, but the mail is nowhere to be found. In the mail.log I can see it has been relayed to "relay=127.0.0.1[127.0.0.1]:10024", but it's not visible in my virus quarantine...
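To rule out that external site, the next thing I want to try is generating the EICAR test mail locally and handing it to the gateway myself, roughly like this (swaks as the sending tool; the addresses and server name are placeholders):

    # write the standard EICAR test string to a file
    printf '%s\n' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.txt
    # send it through the gateway as an attachment
    swaks --to me@example.com --from test@example.com \
          --server pmg.example.com --attach @eicar.txt

That would at least take the third-party site out of the equation.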
@spirit, I think the problem is network-related. A saturated gigabit port to the backup server.
I read somewhere else that you have InfiniBand gear in use? So I was wondering if you could help me out with that? I'm looking on eBay to find some (not too expensive) components to upgrade to...
I have 10 proxmox VM hosts and a shared storage.
Each proxmox host has 2x gigabit ports.
The shared storage has 6x gigabit ports.
I currently have each node set to do LACP with the switch using 2 ports. The same goes for the storage, with 6 ports.
Everything works fine but, of course, my VMs...
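For context, the kind of bonding config each node uses, roughly (interface names and addresses are example values, not the real ones):

    # /etc/network/interfaces (example values)
    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

The storage box uses the same idea, just with six slave ports instead of two.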
Okay, then I want to add more links to the backup server and use LACP. That makes sense...
The problem: the backup server is located in rack A, where a Cisco WS-C3560G-48TS-S switch is installed. The backup server is also used there.
The Proxmox servers are located in rack B, where an HP 2810-48G...
So if I have 10 nodes, I need to set bwlimit to, say, 90 Mbit/s each? So 900 Mbit/s in total...
That way, if they all start their backup process, the 1 gigabit link won't get saturated?
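Concretely I was thinking of something like this in /etc/vzdump.conf on every node (if I have the unit right, vzdump's bwlimit is in KB per second, so 90 Mbit/s is roughly 90,000,000 / 8 = 11,250 KB/s; worth double-checking whether that's KB or KiB):

    # /etc/vzdump.conf - applies to all backup jobs started on this node
    # roughly 90 Mbit/s per node, so 10 nodes together stay under one 1 Gbit link
    bwlimit: 11250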
Say you have 10 Proxmox nodes, shared storage, and a backup server.
If I configure the Proxmox cluster to back up every night at 2am, it'll start backing up from all nodes to this single backup server.
Everything is connected over a 1G network, so it makes no sense to have all 10 Proxmox nodes...
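The other option I keep coming back to is staggering the start times per node instead of (or on top of) a bandwidth limit, roughly like this in /etc/pve/vzdump.cron (node names and the storage name are placeholders, and I'm writing the cron lines from memory, so the exact syntax needs checking):

    # /etc/pve/vzdump.cron - cluster-wide backup schedule
    # stagger the jobs so the nodes don't all hit the backup server at 2am
    0 2 * * *  root vzdump --quiet 1 --node pve01 --all --storage backup-nfs
    0 3 * * *  root vzdump --quiet 1 --node pve02 --all --storage backup-nfs
    # and so on for the remaining nodes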
Okay, that makes sense.
So, if the backup is having trouble (in my case with the NFS server, it seems), it's normal for the VM to have issues too, because the VM's writes are delayed...
So I'm pretty sure my problem is just the backups / NFS. But I'm not sure what the cause could be...
Now that I'm thinking about it... is it the KVM process that's actually doing the backups, or is it a separate process? I guess it's the KVM process (as it needs a snapshot for backups).
If it's the same process, then I can understand that if that process is having trouble accessing the NFS...
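To see which processes are actually involved while a backup runs, something like this on the host should show it (the mount point is an example path):

    # while a backup is running: vzdump, the compressor (lzop) and the kvm process
    ps aux | grep -E 'vzdump|lzop|kvm' | grep -v grep
    # which of them currently hold files open on the NFS mount
    lsof +D /mnt/pve/backup-nfs 2>/dev/null

The hung-task trace further down in this thread is for an lzop task, so at least the process writing to the NFS mount doesn't seem to be the kvm process itself.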
I'm planning on giving the 3.10 kernel a try, unless it's something NFS-related like you said...
But I don't understand how that could interrupt the VM...
If there are pending I/Os on the NFS client, then why is it affecting my VM?
The VM is running from local storage (software RAID 10 on 4 SSDs). The VM isn't using NFS, neither inside the VM nor for its VM image.
The only NFS mount point there is points to the backup server, which is running FreeNAS...
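For what it's worth, this is the kind of thing I'm checking on the host to see whether the NFS client is the bottleneck (the mount point is my example path; nfsiostat needs the nfs-common tools installed):

    # per-mount NFS statistics, refreshed every 5 seconds
    nfsiostat 5 /mnt/pve/backup-nfs
    # RPC retransmissions and other client-side counters
    nfsstat -c
    # raw per-mount counters if the tools above aren't available
    cat /proc/self/mountstats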
Looks like Proxmox 3.4 is still running a 2.6 kernel?
Isn't there a 3.X or 4.1 kernel available?
I remember having problems with a server a couple years ago. That server also had unexplained "blocked for more than 120 seconds" errors/warnings.
It went away by upgrading to a more recent kernel...
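Before deciding, this is roughly how I plan to check what kernel is running and what's available in the repos (the package name pattern is from memory, so worth verifying):

    # currently running kernel and the PVE package versions
    uname -r
    pveversion -v
    # look for the optional newer kernel packages
    apt-cache search pve-kernel-3.10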
And the error I got on the host:
    INFO: task lzop:632696 blocked for more than 120 seconds. Tainted: P --------------- 2.6.32-39-pve #1
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    lzop D ffff8810095121c0 0 632696 632686 0 0x00000000...