Hi,
I put this domain on the blacklist (Mail Filter -> Who Object -> Blacklist):
"news.progressiverailroading.com"
But I still receive emails from this domain:
Date: Fri, 5 Nov 2021 12:32:28 -0500 (CDT)
Message-ID...
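For reference, this is how I search the mail log for those messages (assuming the default syslog location on PMG):

# look up how the messages were handled; the log path is assumed to be the default
grep 'progressiverailroading' /var/log/mail.log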
Hi Folks,
I want to build a Proxmox Mail Gateway cluster. Can someone tell me which ports/protocols are used when one node joins another?
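For context, this is roughly what I would open on the firewall if it is just SSH and the API, which is only my guess (iptables sketch, <other-node-ip> is a placeholder):

# allow SSH and the REST API from the peer node; the ports are my assumption
iptables -A INPUT -s <other-node-ip> -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -s <other-node-ip> -p tcp --dport 8006 -j ACCEPT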
Thanks,
Sanni
Hi!
I have a question concerning whitelisting. I put the domain xxx.com on the whitelist (Menu: Configuration > Mail Proxy > Whitelist). But mails from this domain still get blocked by a DNSBL:
Apr 21 16:15:49 pmg1 postfix/postscreen[2399]: NOQUEUE: reject: RCPT from [40.107.6.63]:12207: 550...
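From what I read in the Postfix docs, postscreen acts at connection time, before the sender domain is even known, so maybe the domain whitelist cannot apply at that stage? In plain Postfix an IP-based exception would look roughly like this (just a sketch; I don't know whether PMG generates something equivalent from its whitelist):

# main.cf: let entries in a CIDR table bypass postscreen's checks
postscreen_access_list = permit_mynetworks,
        cidr:/etc/postfix/postscreen_access.cidr

# /etc/postfix/postscreen_access.cidr: permit the sending IP from the log above
40.107.6.63     permit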
OK, thanks. Unfortunately, there is no entry for this mount in my fstab:
root@pve1:/etc/pve# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
Can I just umount...
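What I have in mind is something like this on each node (mount point taken from my first post; assuming nothing is accessing it):

findmnt /mnt/pve/NFSTest    # confirm the share is still mounted
umount /mnt/pve/NFSTest     # detach it manually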
I removed the storage in the Storage section of the UI by clicking "Remove".
The NFS storage was removed from the file /etc/pve/storage.cfg immediately.
Hi!
I have set up an NFS storage in my Proxmox cluster. I have now removed it, but the NFS share is still mounted on all the nodes:
192.168.1.235:/srv/nfs on /mnt/pve/NFSTest type nfs4...
I will try the active-passive bonding method. I think it is a good solution for small, separated segments like a storage network.
Maybe bridging is the better approach when connecting servers to a wider network segment where STP is in place? Don't know...
Maybe someone else here can add his...
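If I understand the bridge approach correctly, it would look something like this in /etc/network/interfaces (NIC names and address are made up, not tested):

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports ens1f0 ens1f1
        bridge-stp on
        bridge-fd 15
# each port goes to a different switch; STP blocks one path until the other fails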
Hey,
thanks for your feedback! Sounds promising :)
Have you also been using Linux bridges for redundancy? I was reading about it, and some people say that bridges connected to multiple switches (using Spanning Tree) are a better solution than active-backup bonding.
Cheers,
Sandra
Hi,
I would like to add more redundancy to my PVE/Ceph cluster. The storage runs in a separate 10 Gbit network. I was thinking of adding a second 10 Gbit switch and connecting the servers' NICs to both of them using bonding in active-backup mode.
Is it advisable to do that? Is active-passive...
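For reference, the bond I have in mind would look like this in /etc/network/interfaces (NIC names and address are made up):

auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves ens1f0 ens1f1
        bond-mode active-backup
        bond-primary ens1f0
        bond-miimon 100
# each slave NIC connects to one of the two switches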
Thanks guys,
I don't have the connection yet.
The Ceph docs say:
"If your data centers have dedicated bandwidth and low latency, you can distribute your cluster across data centers easily. If you use a WAN over the Internet, you may need to configure Ceph to ensure effective peering...
Hi everybody,
we are running a 7-node Proxmox cluster. 3 nodes are used for Ceph only.
We are considering renting a second rack at another data center and running the same setup there, in case of a disaster in our primary data center.
Now it came to my mind that Proxmox provides High Availability...
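By High Availability I mean the built-in HA stack, e.g. (vm:100 being an example ID):

ha-manager add vm:100 --state started    # put a VM under HA control
ha-manager status                        # show the HA manager's view of the cluster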
Thanks, no, I was not aware of the "noout" option. This is very helpful, thanks a lot!!
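For my own notes, as I understand it the usage is:

ceph osd set noout      # before the planned shutdown: don't mark OSDs out, no rebalancing
ceph osd unset noout    # after the node is back up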
The Ceph nodes are connected with 1 Gbit. Would moving to 10 Gbit help shorten the recovery time?
Best,
Sandra
Hi all,
we are running a small Ceph cluster with 5 nodes as shared storage within our Proxmox cluster. Currently we are running about 40 VMs and containers. Everything works nicely. But I noticed that several VMs stopped working after one node was shut down, and some VMs started getting...
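If it helps, I can post the output of the standard status commands:

ceph -s         # overall health, degraded/recovering PGs
ceph osd tree   # which OSDs were marked down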