Mailserver on Proxmox VE

Rajesh M
Jan 29, 2017
Hi,

We are planning to install our mail servers on Proxmox VE.

Currently we have around 12,000 email users with around 6 TB of data spread across 4 different Dell servers.

Per-server configuration is as follows:
6-core processor with hyperthreading, 16 GB RAM
1 x 600 GB 15k RPM drive for OS and qmail queue
1 x 2000 GB drive for mail data
2 x 2000 GB drives for backup

CentOS 6, running qmail, SpamAssassin, MySQL database server, and Dovecot.

Email traffic per server is around 80,000 emails per day.

We wish to migrate to Proxmox VE with high availability and pro support.

QUESTION 1
Could you please guide me on the system hardware requirements for the above (CPU, RAM, data storage, etc.)? I am looking at around 10 TB of mail data storage for future scalability.

QUESTION 2
I also have several Windows servers and VPSes. Can Proxmox seamlessly handle these too with HA?

Thanks for your time,
Rajesh
 
Hi,

We wish to migrate to Proxmox VE with high availability and pro support.
Look here for pro support: https://www.proxmox.com/en/proxmox-ve/pricing
Could you please guide me on the system hardware requirements for the above (CPU, RAM, data storage, etc.)? I am looking at around 10 TB of mail data storage for future scalability.
As you need a shared storage setup to ensure high availability, I'd suggest Ceph here.
As I have no mail server set up myself, the rest can perhaps be better answered by someone else.
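For orientation, a minimal sketch of what a hyperconverged Ceph setup on Proxmox VE could look like. The network range, pool name, and storage ID are placeholders, and the exact pveceph subcommand names vary a bit between Proxmox versions:

Code:
# on every node: install the Ceph packages
pveceph install
# on the first node: initialize Ceph with a dedicated cluster network
pveceph init --network 10.10.10.0/24
# on each node: create a monitor and turn the data disks into OSDs
pveceph createmon
pveceph createosd /dev/sdb
# create a pool for VM disks and register it as Proxmox storage
pveceph createpool mail-vms
pvesm add rbd mail-vms --pool mail-vms --content images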

I also have several Windows servers and VPSes. Can Proxmox seamlessly handle these too with HA?

Yes, both virtual machines and containers can be HA-managed.
You can always set up a small test cluster (even virtually) and try it out yourself; this would be the fastest way to get a feel for it.
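To give an idea of what HA management looks like in practice, here is a small sketch using the ha-manager CLI. The VM ID, group name, and node names are placeholders:

Code:
# define a group of nodes the resource may run on (with priorities)
ha-manager groupadd mail-ha --nodes "node1:2,node2:1,node3:1"
# put VM 100 under HA management and request it to be running
ha-manager add vm:100 --state started --group mail-ha
# check what the HA stack is doing
ha-manager status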
 
As you need a shared storage setup to ensure high availability, I'd suggest Ceph here.
As I have no mail server set up myself, the rest can perhaps be better answered by someone else.

Ceph can definitely be the tool for this task.

We host a customer's cluster of 9 virtual mail servers (Zimbra) on tiered Ceph storage spread across 3 separate data centers, serving maybe 2k users (guesstimating).

What this means is: we have multiple Ceph clusters, with servers in each of the data centers (a command-level sketch of building such tiers follows the list).
  • T1 is an erasure-coded Ceph pool on plain old HDD OSDs for ultra-cold storage.
  • T2 is a plain old Ceph pool powered by HDD OSDs, backed by an SSD caching layer, for cold storage.
  • T3 is a plain old Ceph pool powered by SSD OSDs for warm data.
  • T4 is a Ceph pool on NVMe-based PCIe cards for hot data.
(Please be advised we host this on our existing infrastructure; it is not purpose-built for the client. What I am saying is that we used a moon-sized hammer when a chisel and a regular-sized hammer would have done.)
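For the curious, a hedged sketch of how tiers like these can be built with stock Ceph tooling. Pool names, PG counts, and the k/m values are made up for illustration, and the device-class based CRUSH rules need Ceph Luminous or later:

Code:
# T1: erasure-coded pool on HDDs for ultra-cold data
ceph osd erasure-code-profile set ec-cold k=4 m=2 crush-failure-domain=host
ceph osd pool create t1-ultracold 128 128 erasure ec-cold
# T2: replicated HDD pool with an SSD cache tier in front
ceph osd pool create t2-cold 128 128
ceph osd pool create t2-cache 128 128
ceph osd tier add t2-cold t2-cache
ceph osd tier cache-mode t2-cache writeback
ceph osd tier set-overlay t2-cold t2-cache
# T3/T4: replicated pools pinned to SSD / NVMe OSDs via CRUSH device classes
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create t3-warm 128 128 replicated ssd-rule
ceph osd crush rule create-replicated nvme-rule default host nvme
ceph osd pool create t4-hot 128 128 replicated nvme-rule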

Side note:
We also have a 10-node Proxmox cluster, used as failover, that is backed by a dual-redundant FreeNAS setup (SSD-based ZFS with 512 GB RAM per node). It has been able to handle the day-to-day use of 2k users without the customer even noticing. Again, set up on existing infrastructure, not purpose-built. You can most definitely get away with a much smaller setup.
 
We installed a 5-node cluster for a customer and it works well. Keep in mind that a Ceph cluster should never, ever be full! So if 2 nodes check out (it has already happened), you will lose 40% of the space. Proxmox can do snapshots inside the Ceph storage, so you will need a lot of space.

We used, per node:
128 GB RAM
12 cores
6 x 4 TB HDD
1 x 200 GB Intel DC SSD for the Ceph journal
1 x 128 GB SSD for the Proxmox system
10 Gbit Ceph net
1 Gbit Ceph backup net
1 Gbit Proxmox net
10 Gbit customer net
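To put the "lose 40% space" point in numbers, here is a back-of-the-envelope capacity check for this spec. It assumes the Ceph default of 3 replicas, which is my assumption, not stated above:

Code:
# 5 nodes x 6 OSDs x 4 TB = 120 TB raw (assuming 3x replication)
raw_tb=$((5 * 6 * 4))
# with 3 replicas: 120 / 3 = 40 TB usable
usable_tb=$((raw_tb / 3))
# 2 nodes down: 3 x 6 x 4 = 72 TB raw -> 24 TB usable
degraded_tb=$((3 * 6 * 4 / 3))
echo "${usable_tb} TB usable, ${degraded_tb} TB after losing 2 nodes"
# 24 of 40 TB left, i.e. the ~40% loss mentioned above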