Search results

  1. Blacklisting

    Hi, I put this domain in the blacklist (Mail Filter -> Who Objects -> Blacklist): "news.progressiverailroading.com". But I still receive emails from this domain: Date: Fri, 5 Nov 2021 12:32:28 -0500 (CDT) Message-ID...
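
    A minimal check for the usual cause, assuming the newsletter's SMTP envelope sender differs from the visible From: domain (the log path is the PMG default; the grep pattern is taken from the post):

        # Show the envelope sender (from=<...>) these mails actually used;
        # the Who-object entry may need to match that domain instead.
        grep 'progressiverailroading' /var/log/mail.log | grep 'from=<'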
  2. firewall rules when joining nodes

    Thanks Stoiko, yes, this helps! Sanni
  3. firewall rules when joining nodes

    Hi folks, I want to build a Proxmox Mail Gateway cluster. Can someone tell me which ports/protocols are used when one node joins another? Thanks, Sanni
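
    A sketch of a matching firewall rule, assuming the join and subsequent sync run over SSH (TCP 22) between the nodes; the peer address below is a placeholder:

        # Allow the joining node (placeholder address) to reach this node
        # on TCP 22, which the cluster sync is assumed to use here.
        iptables -A INPUT -p tcp -s 192.0.2.11 --dport 22 -j ACCEPT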
  4. Domain in whitelist but still gets blocked by DNSBL

    Thank you Stoiko, I understand! I can also add domain names and e-mail addresses there. What effect will it have? Best regards, Sandra
  5. Domain in whitelist but still gets blocked by DNSBL

    Hi! I have a question concerning whitelisting. I put the domain xxx.com in the whitelist (Menu: Configuration > Mail Proxy > Whitelist). But mails from this domain still get blocked by a DNSBL: Apr 21 16:15:49 pmg1 postfix/postscreen[2399]: NOQUEUE: reject: RCPT from [40.107.6.63]:12207: 550...
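
    A quick manual check of whether the sending IP from the reject line is listed, with the octets reversed and queried against a DNSBL zone (zen.spamhaus.org is only an example; the truncated log line does not show which list fired):

        # An A record in the answer means 40.107.6.63 is listed on that DNSBL.
        host 63.6.107.40.zen.spamhaus.org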
  6. Remove NFS Storage

    Ok, thanks. Unfortunately there is no entry in my fstab for any mount:

        root@pve1:/etc/pve# cat /etc/fstab
        # <file system> <mount point> <type> <options> <dump> <pass>
        /dev/pve/root / ext4 errors=remount-ro 0 1
        /dev/pve/swap none swap sw 0 0
        proc /proc proc defaults 0 0

    Can I just umount...
  7. Remove NFS Storage

    I removed the storage in the Storage section of the UI by clicking "Remove". The NFS storage was removed from the file /etc/pve/storage.cfg immediately.
  8. Remove NFS Storage

    Hi! I have set up an NFS storage in my Proxmox cluster. I have now removed it, but the NFS share is still mounted on all the nodes: 192.168.1.235:/srv/nfs on /mnt/pve/NFSTest type nfs4...
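
    Removing the storage definition does not unmount a share that is already mounted; a sketch of the manual cleanup, using the mount point from the post:

        # Unmount the stale NFS share on each node that still shows it.
        umount /mnt/pve/NFSTest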
  9. redundant separate 10GBit network

    I will try the active-passive bonding method. I think in small separated segments like a storage network, it is a good solution. Maybe bridging is the better approach when connecting servers to a wider network segment where STP is in place? Don't know... Maybe someone else here can add his...
  10. redundant separate 10GBit network

    Hey, thanks for your feedback! Sounds promising :) Have you also been using Linux bridges for redundancy? I was reading about it, and some people say that using bridges connected to multiple switches (with spanning tree) is a better solution than active-backup bonding. Cheers, Sandra
  11. redundant separate 10GBit network

    Hi, I would like to add more redundancy to my PVE/Ceph cluster. The storage runs in a separate 10 Gbit network. I was thinking of adding a second 10 Gbit switch and connecting the servers' NICs to each of them using bonding in active-backup mode. Is it advisable to do that? Is active-passive...
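
    For illustration, a sketch of such an active-backup bond in /etc/network/interfaces (ifupdown syntax as used on Proxmox); the NIC names and the address are placeholders:

        auto bond0
        iface bond0 inet static
            address 10.10.10.1/24
            bond-slaves ens1f0 ens1f1
            bond-miimon 100
            # active-backup: one NIC carries traffic, the other takes over on
            # link failure, so each slave can go to a different switch.
            bond-mode active-backup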
  12. HA across different locations

    Thanks guys, I don't have the connection yet. The Ceph docs say: "If your data centers have dedicated bandwidth and low latency, you can distribute your cluster across data centers easily. If you use a WAN over the Internet, you may need to configure Ceph to ensure effective peering...
  13. HA across different locations

    Hi everybody, we are running a 7-node Proxmox cluster. 3 nodes are used for Ceph only. We are considering renting a second rack at another data center and running the same setup in case of a disaster in our primary data center. Now it came to my mind that Proxmox provides High Availability...
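
    One early sanity check, since Proxmox clustering (corosync) is designed for LAN-like, low-millisecond latency: measure the sustained round trip between the two sites (placeholder address):

        # Sustained round-trip time between the data centers.
        ping -c 100 203.0.113.10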
  14. Questions about Mail Gateway 5.0-67

    I have been asking myself exactly the same question. Have you found anything out about it yet? Sandra
  15. Ceph - Basic Question

    Thanks Udo, I will check osd_max_backfills + osd_recovery_max_active. What SSD do you recommend?
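
    A sketch of throttling those two settings at runtime (the values are illustrative, not a recommendation):

        # Lower backfill/recovery concurrency on all OSDs so recovery
        # competes less with client I/O; revert after recovery finishes.
        ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'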
  16. Ceph - Basic Question

    The OS and the Journal are running on a Samsung 850 Pro. Then I have 2 OSDs running on small 500 GB disks.
  17. Ceph - Basic Question

    Thanks, no, I was not aware of the "noout" option. This is very helpful, thanks a lot!! The Ceph nodes are connected with 1 Gbit. Would going to 10 Gbit help shorten the recovery time? Best, Sandra
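
    For reference, a sketch of the maintenance pattern mentioned above:

        # Keep Ceph from marking OSDs "out" (and rebalancing) while a node
        # is down for maintenance, then restore normal behaviour afterwards.
        ceph osd set noout
        # ... shut the node down, do the maintenance, bring it back ...
        ceph osd unset noout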
  18. Ceph - Basic Question

    Hi all, we are running a small Ceph cluster with 5 nodes as shared storage within our Proxmox cluster. Currently we are running about 40 VMs and containers. Everything works nicely. But I noticed that several VMs stopped working after one node was shut down and some VMs started getting...
