Search results

  1. SSD Wear

    Thanks all. I will look to replace the Samsungs with something better.
  2. SSD Wear

    I have 6 nodes in my Proxmox cluster which are exclusively Ceph storage nodes (no VMs). Each node has a pair of Samsung 860 Pro 256G SATA SSDs with the OS installed on them as a mirrored ZFS pool. These have been in operation for about 5 years. I have noticed the SSD wearout indicator for...
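    The wearout figure mentioned in this thread comes from the drive's SMART attributes. As a rough sketch (not from the thread itself), the normalized value of the `Wear_Leveling_Count` attribute that Samsung SATA SSDs report via `smartctl -A` can be read programmatically; the sample output below is illustrative, not taken from the poster's drives:

    ```python
    # Hypothetical helper: parse `smartctl -A` style output and return the
    # normalized value of Wear_Leveling_Count (100 = new, lower = more worn).
    def parse_wearout(smartctl_output: str):
        for line in smartctl_output.splitlines():
            fields = line.split()
            if len(fields) >= 4 and fields[1] == "Wear_Leveling_Count":
                return int(fields[3])  # normalized current VALUE column
        return None

    # Sample in the column layout smartctl prints for ATA devices (invented numbers):
    sample = """\
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
    177 Wear_Leveling_Count     0x0013   091   091   000    Pre-fail  Always       -       312
    """
    print(parse_wearout(sample))  # 91
    ```

    A value trending steadily downward over 5 years would line up with the wearout indicator the poster describes.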
  3. Ceph Questions and Thoughts

    Recently I combined two separate Proxmox clusters into one. Both clusters prior had separate Ceph clusters of three nodes each with 10 OSDs. Earlier this week I finally added the three nodes and OSDs to my converged cluster. All nodes are running Proxmox 8.1.11 (I see 8.2 is now available)...
  4. Ceph SSD recommendations

    I'm using a 100Gbps network for Ceph storage and am using an SSD cache drive for the OSDs in each chassis.
  5. Ceph SSD recommendations

    Revisiting this. I've looked at the PM893, but at SATA3 it tops out at 6Gbps. Is it worth trying to find SAS drives, which should top out at 12Gbps?
  6. Ceph SSD recommendations

    I have been dragging my feet on this one, but I am looking for SSD recommendations for my Ceph servers. Currently each server has ten 5TB spinner drives with SSD cache drives. The performance has been decent, but there are many times when guests give I/O errors due to occasional high wait...
  7. Combining two separate clusters

    Hi all. In our lab, we maintain two separate but identical Proxmox clusters with Ceph. Each cluster has 5 compute nodes and 3 storage nodes, so 8 total members per cluster. The storage nodes are cluster members but do not host any VMs. Each storage node has 10 5TB drives (spinners...I'll...
  8. Merge two hyper-converged clusters into one

    Greetings, For many years we've been running two separate hyper-converged clusters with identical hardware. Each cluster has 5 compute nodes (running VMs only) and 3 Ceph storage nodes (not running any VMs). What I want to do is merge these two clusters into one. Does anyone have any best...
  9. [SOLVED] Block emails that pass through a specific upstream server

    That worked. I made the change to the regex and switched the header back to Received. The first email to come across since then matched just fine.
  10. [SOLVED] Block emails that pass through a specific upstream server

    I have attached the .eml, with the end user's email address changed. All other info (relays, IP addresses, etc.) is unaltered. As for the filter, here's the What object: before, I was just using the Received header, but that didn't work. It only worked when I started using Received-SPF as...
  11. [SOLVED] Block emails that pass through a specific upstream server

    Something you said earlier made me try a different header to inspect. Instead of using the "Received" header, of which there are many in the message, I switched to "Received-SPF" (of which there is only one) and the rule triggered. So even with the latest code, there still is...
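    The workaround in this thread boils down to matching a regex against the single Received-SPF header rather than the many Received headers. A minimal sketch of that idea in plain Python (the relay name `mail.example-relay.com` and the sample message are invented placeholders, not the poster's actual data):

    ```python
    import email
    import re

    # Hypothetical relay to block; the real thread's relay hostname is not shown here.
    RELAY_RE = re.compile(r"mail\.example-relay\.com", re.IGNORECASE)

    def passed_through_relay(raw_message: str) -> bool:
        """True if the (single) Received-SPF header mentions the target relay."""
        msg = email.message_from_string(raw_message)
        spf = msg.get("Received-SPF", "")
        return bool(RELAY_RE.search(spf))

    raw = """\
    Received: from a.internal (a.internal [10.0.0.1]) by pmg.local
    Received: from b.internal (b.internal [10.0.0.2]) by a.internal
    Received-SPF: pass (mail.example-relay.com: domain of sender designates 203.0.113.5 as permitted sender)
    From: someone@example.com
    Subject: test

    body
    """
    print(passed_through_relay(raw))  # True
    ```

    Keying on a header that appears exactly once avoids the ambiguity of which of several Received headers a rule engine inspects, which matches the behavior the poster observed.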
  12. [SOLVED] Block emails that pass through a specific upstream server

    I just watched another email sail through the system and delivered to the end user even with the latest versions installed.
  13. [SOLVED] Block emails that pass through a specific upstream server

    root@pmg:~# pmgversion -v proxmox-mailgateway-container: 7.1-1 (API: 7.1-4/523ac520, running kernel: 5.13.19-6-pve) pmg-api: 7.1-4 pmg-gui: 3.1-3 clamav-daemon: 0.103.6+dfsg-0+deb11u1 ifupdown: residual config ifupdown2: 3.1.0-1+pmx3 libarchive-perl: 3.4.0-1 libjs-extjs: 7.0.0-1...
  14. [SOLVED] Block emails that pass through a specific upstream server

    I have been searching and trying different solutions, but I can't seem to find the magic incantation that makes this work. I have a user getting blasted with various loosely related emails all from various email addresses and domains. However, they all are being used by the same email relay...
  15. Guidance on Shared Storage

    I was just considering that last suggestion. I will see if I can do that, and see if it improves the performance.
  16. Guidance on Shared Storage

    I have been fighting I/O performance issues on our Ceph server for some time. Sometimes the VMs' I/O performance is so bad that I have to move the VM image to a local drive in order to get performance back. I'm now exploring other shared storage methods. Running Proxmox 7.1-11. When using the...
  17. [SOLVED] PMG Tracking Blank ... again

    I just did an upgrade and the fix is there now. pmg-log-tracker_2.3.1-1
  18. Odd directory that cannot be 'stat'-ed by root

    Thanks for the reply. Here's the getfacl for that directory. I'm afraid it doesn't give much insight. troy@neon-desktop:~$ getfacl /home/troy/SeaDrive getfacl: Removing leading '/' from absolute path names # file: home/troy/SeaDrive # owner: troy # group: troy user::rwx group::r-x other::r-x...
  19. Odd directory that cannot be 'stat'-ed by root

    I am trying to schedule a backup job for users' directories on a Linux desktop workstation. We utilize a network cloud storage solution called Seafile, and the Linux desktop utility that gives users access to their files creates and mounts to a directory, typically named "SeaDrive". The...
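A likely explanation for the last thread, offered here as an assumption rather than the thread's confirmed answer: SeaDrive mounts as a FUSE filesystem, and FUSE mounts created without `allow_other` are inaccessible even to root, which would make the directory un-stat-able by a root backup job. One way a backup script can skip such directories is to check the mount table (the format of `/proc/self/mounts`); the sketch below parses sample mount-table text, with the `fuse.seadrive` filesystem type and paths being illustrative guesses:

```python
# Hypothetical sketch: find FUSE mount points in /proc/self/mounts-format text
# so a root-run backup job can exclude them instead of failing on stat().
def fuse_mounts(mounts_text: str):
    """Return mount points whose filesystem type starts with 'fuse'."""
    points = []
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2].startswith("fuse"):
            # /proc/self/mounts octal-escapes spaces in paths as \040.
            points.append(parts[1].replace("\\040", " "))
    return points

# Invented sample in the mount-table format: device, mountpoint, fstype, options...
sample = """\
/dev/sda2 / ext4 rw,relatime 0 0
SeaDrive /home/troy/SeaDrive fuse.seadrive rw,nosuid,nodev,user_id=1000 0 0
tmpfs /run tmpfs rw,nosuid 0 0
"""
print(fuse_mounts(sample))  # ['/home/troy/SeaDrive']
```

In practice the script would read `/proc/self/mounts` at runtime and feed its contents to this function before walking users' home directories.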
