That's unfortunate. It looks like the only problem is with the database connection. Maybe in a future release installation and initialization could be separated; then one could use a Postgres container and connect it to the mail gateway container.
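To illustrate the idea, a hypothetical sketch of the separated setup with a user-defined Docker network (the mail gateway image name is a placeholder, since no official PMG image exists, and PMG would also need a way to point at the external database):

```
# Hypothetical sketch: run PostgreSQL in its own container and attach the
# mail gateway container to it over a shared network.
docker network create pmg-net
docker run -d --name pmg-db --network pmg-net \
    -e POSTGRES_PASSWORD=secret postgres:11
# "my-pmg-image" is a placeholder for a self-built mail gateway image.
docker run -d --name pmg --network pmg-net my-pmg-image
```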
Hi,
I was trying to run the mail gateway in a Docker container. My Dockerfile looks like this (not complete):
FROM debian:buster
ENV DEBIAN_FRONTEND noninteractive
RUN set -ex; \
\
apt-get update; \
apt-get install -y --no-install-recommends \
wget \
gnupg1 \...
That helped a lot. Background: all mails with compressed attachments are moved into the quarantine, so the IT department has to be informed whenever a new mail lands there. For monitoring we use Zabbix, so my new item looks like this:
UserParameter=pmg.quarantine.spamcount,/usr/bin/sudo...
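For reference, a minimal sketch of what such an item's command could do, assuming `pmgsh get /quarantine/spamstatus` returns JSON with a top-level `count` field (the sample JSON below is a stand-in for the real command output and should be verified against your installation):

```shell
#!/bin/sh
# Stand-in for: /usr/bin/sudo /usr/bin/pmgsh get /quarantine/spamstatus
json='{ "count" : 10, "mbytes" : 0.5, "avgbytes" : 52428, "avgspam" : 5.1 }'

# Extract the number following the "count" key so Zabbix receives a bare
# integer instead of the full JSON document.
count=$(printf '%s' "$json" | \
    sed -n 's/.*"count"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p')
echo "$count"
```

Returning only the number keeps the Zabbix item type a simple numeric (unsigned) instead of requiring JSON preprocessing.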
That was my first thought, so I selected a whole year and there is nothing. The ten mails are in /var/spool/pmg/cluster/1/spam, but they were already deleted/sent via the GUI.
Hello,
when I use /usr/bin/pmgsh get /quarantine/spamstatus, the count variable is wrong. It shows 10 messages in spam, but no mails are shown in the web frontend. pmgversion is "pmg-api/5.2-7/9943bd5d".
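One way to cross-check the API count against what is actually on disk is to count the files in the quarantine spool directory. A self-contained sketch using a temporary directory as a stand-in for the real path:

```shell
#!/bin/sh
# Stand-in for the real spool path, /var/spool/pmg/cluster/1/spam,
# so the counting logic can be demonstrated self-contained.
spool=$(mktemp -d)
touch "$spool/mail1" "$spool/mail2" "$spool/mail3"

# Count regular files in the spool; on a real node this number should
# match the "count" field reported by the spamstatus API call.
files=$(find "$spool" -type f | wc -l)
echo "$files"
```

A mismatch between this file count and the API's `count` would point at stale index entries rather than missing mail files.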
@Whatever I will try setting ARC size min=max, but shouldn't ZFS size the ARC dynamically between min and max? Swap is on an extra SSD together with the Proxmox OS. What's strange is that Proxmox uses swap although there is plenty of free RAM (vm.swappiness = 10).
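Pinning the ARC means setting the `zfs_arc_min` and `zfs_arc_max` module parameters to the same value. A sketch of computing the byte value, assuming an 8 GiB target (the size itself is an example, not a recommendation):

```shell
#!/bin/sh
# Compute 8 GiB in bytes for the ZFS ARC module options.
arc_bytes=$((8 * 1024 * 1024 * 1024))
echo "$arc_bytes"

# On the real system this value would go into /etc/modprobe.d/zfs.conf:
#   options zfs zfs_arc_min=8589934592 zfs_arc_max=8589934592
# followed by regenerating the initramfs (update-initramfs -u) and a reboot.
```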
@guletz The cow on cow problem with btrfs was...
Thank you for your help. Limiting IOPS is an option, but then how can you achieve good read/write speeds? All SSD? Is there an option to limit the effect of high I/O to only the guest that generates it? I only activated L2ARC because there is plenty of space left on the SSD.
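Proxmox can throttle I/O per virtual disk via the drive options, which confines the limit to the noisy guest instead of the whole pool. A sketch (the VM ID, disk slot, and volume name are placeholders for your setup):

```
# Hypothetical example: cap one VM's SCSI disk at 500 read and 500 write
# IOPS; other guests on the same pool remain unthrottled.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iops_rd=500,iops_wr=500
```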
The result of...
Hi,
we have a few problems with Proxmox and high I/O. We use ZFS in the following configuration:
:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 33h43m with 0 errors on Mon Apr 15 10:07:13 2019
config:

        NAME        STATE     READ WRITE CKSUM...
Thanks for the fast answer. I updated my version (this is a test cluster without a subscription, so I had forgotten to add deb http://download.proxmox.com/debian/pmg stretch pmg-no-subscription to the apt repository list), deleted the node with pmgcm delete 3 (pmgdb delete throws an error), added the node...
Hello,
I set up a mail cluster and I have a little problem with one of the nodes. The master (mx1) shows no sync errors in the log, but the node mx2 seems to have problems:
starting cluster syncronization
Mar 4 10:00:33 mx2 pmgmirror[1333]: detected rule database changes - starting sync from...
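When chasing sync problems like this, a reasonable first step is to compare the cluster state on both nodes and read the full mirror log, e.g. (run as root on each node):

```
# Show cluster membership and per-node sync state.
pmgcm status

# Inspect the detailed synchronization messages from the mirror daemon.
journalctl -u pmgmirror
```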