Recent content by troycarpenter

  1. PMG quarantine login error

    For a number of years, I have had my quarantine set up as described here: https://pmg.proxmox.com/wiki/index.php/Quarantine_Web_Interface_Via_Nginx_Proxy Recently I switched from LDAP to Keycloak for user authentication. However, for the quarantine, I still have the system sending emails with...
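
    For context, the linked wiki setup boils down to an Nginx reverse proxy in front of the quarantine interface on port 8006; a minimal sketch, with placeholder hostname and certificate paths:

        server {
            listen 443 ssl;
            server_name quarantine.example.com;            # placeholder hostname
            ssl_certificate     /etc/nginx/ssl/pmg.crt;    # placeholder paths
            ssl_certificate_key /etc/nginx/ssl/pmg.key;
            location / {
                # pass everything through to the PMG quarantine UI on this host
                proxy_pass https://127.0.0.1:8006;
                proxy_set_header Host $host;
            }
        }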
  2. Mailcow + PMG make sense?

    I had a very old email system for over 10 years. About 5 years ago I put PMG in front of it to handle all incoming mail screening (blacklists, spam, viruses, and quarantine handling for my users). Fast-forward to today and I've replaced that antiquated email system with mailcow. Using PMG...
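
    For context, PMG sits in front as the MX and forwards accepted mail downstream via a relay-domain entry plus a transport (Mail Proxy > Relay Domains / Transports in the GUI). The effect is the same as a Postfix-style transport map line; the domain and IP below are placeholders:

        example.com    smtp:[192.0.2.10]:25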
  3. SSD Wear

    Thanks all. I will look to replace the Samsungs with something better.
  4. SSD Wear

    I have 6 nodes in my Proxmox cluster which are exclusively Ceph storage nodes (no VMs). Each node has a pair of Samsung 860 Pro 256G SATA SSDs with the OS installed on them as a mirrored ZFS pool. These have been in operation for about 5 years. I have noticed the SSD wearout indicator for...
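
    To read the wear figures directly rather than through the GUI, smartmontools works; the device name is a placeholder, and on Samsung SATA drives the attributes of interest are Wear_Leveling_Count and Total_LBAs_Written:

        # vendor SMART attributes for one of the mirrored OS disks
        smartctl -A /dev/sda | grep -Ei 'wear|written'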
  5. Ceph Questions and Thoughts

    Recently I combined two separate Proxmox clusters into one. Prior to the merge, each had its own three-node Ceph cluster with 10 OSDs per node. Earlier this week I finally added the three nodes and OSDs to my converged cluster. All nodes are running Proxmox 8.1.11 (I see 8.2 is now available)...
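
    After an expansion like this, the standard Ceph tooling is the quickest way to confirm the new hosts landed where expected and to watch the rebalance:

        ceph -s          # overall health plus recovery/backfill progress
        ceph osd tree    # new hosts and OSDs in the CRUSH map
        ceph osd df      # per-OSD utilization as data rebalances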
  6. Ceph SSD recommendations

    I'm using a 100Gbps network for Ceph storage and am using an SSD cache drive for the OSDs in each chassis.
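
    On Proxmox the fast device is attached at OSD creation time by putting the RocksDB (and implicitly the WAL) on it; a sketch with placeholder device names:

        # data on the spinner, DB/WAL on the SSD
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1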
  7. Ceph SSD recommendations

    Revisiting this. I've looked at the PM893, but at SATA3 it tops out at 6Gbps. Is it worth trying to find SAS drives, which should top out at 12Gbps?
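
    For rough context: SATA3 and 12Gbps SAS both use 8b/10b encoding, so 6Gbit/s works out to about 600MB/s of payload per SATA link and 12Gbit/s to about 1200MB/s per SAS lane; the jump only matters if a single drive can actually saturate the 600MB/s link.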
  8. Ceph SSD recommendations

    I have been dragging my feet on this one, but I am looking for SSD recommendations for my Ceph servers. Currently each server has ten 5TB spinner drives with SSD cache drives. The performance has been decent, but there are many times when guests throw IO errors due to occasional high wait...
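
    One way to confirm it's wait-time related before spending money: watch per-device latency on the OSD hosts while an incident is happening (iostat from the sysstat package):

        # extended stats refreshed every second; sustained high await/%util
        # on the spinners during an incident points at seek-bound OSDs
        iostat -x 1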
  9. Combining two separate clusters

    Hi all. In our lab, we maintain two separate but identical Proxmox clusters with Ceph. Each cluster has 5 compute nodes and 3 storage nodes, so 8 total members per cluster. The storage nodes are cluster members but do not host any VMs. Each storage node has ten 5TB drives (spinners...I'll...
  10. Merge two hyper-converged clusters into one

    Greetings, For many years we've been running two separate hyper-converged clusters with identical hardware. Each cluster has 5 compute nodes (running VMs only) and 3 Ceph storage nodes (not running any VMs). What I want to do is merge these two clusters into one. Does anyone have any best...
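
    For reference, there is no in-place merge; nodes from one cluster have to be made standalone and then joined to the other one at a time. Roughly the node-separation procedure from the PVE admin guide, run after a node's guests have been migrated off (the join IP is a placeholder):

        # make the node standalone again
        systemctl stop pve-cluster corosync
        pmxcfs -l                      # cluster filesystem in local mode
        rm /etc/pve/corosync.conf
        rm -r /etc/corosync/*
        killall pmxcfs
        systemctl start pve-cluster

        # then join the surviving cluster
        pvecm add 192.0.2.21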
  11. [SOLVED] Block emails that pass through a specific upstream server

    That worked. I made the change to the regex and switched the header back to Received. The first email to come across since then matched just fine.
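
    For anyone finding this later, the working rule ended up being a Match Field What object along these lines; the regex below is illustrative, not the exact one used:

        Field: Received
        Value: .*\bby\b.*\bupstream-relay\.example\.com\b.*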
  12. [SOLVED] Block emails that pass through a specific upstream server

    I have attached the .eml, with the end user's email address changed. All other info (relays, IP addresses, etc.) is unaltered. As for the filter, here's the What object: Before, I was just using the Received header, but that didn't work. It only worked when I started using Received-SPF as...
  13. [SOLVED] Block emails that pass through a specific upstream server

    Something you said earlier made me try a different header to inspect. Instead of using the "Received" header, of which there are many in the message, I switched to "Received-SPF", of which there is only one, and the rule triggered. So even with the latest code, there still is...
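
    For reference, a typical Received-SPF header looks like the line below (values are placeholders); because only the receiving edge system adds it, there is exactly one instance to match, unlike the per-hop Received headers:

        Received-SPF: pass (example.org: domain of sender@example.org designates 192.0.2.25 as permitted sender) client-ip=192.0.2.25;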
  14. [SOLVED] Block emails that pass through a specific upstream server

    I just watched another email sail through the system and get delivered to the end user, even with the latest versions installed.