Search results

  1. M

    PMG Suitability and recommendations for customer / prospect

    @velocity08 Here is an image from one of our HA clusters. It has about 30 small domains on it.
  2. M

    VM disk performance

Ah, PVE 5.3... Once you get to PVE 6 there are some changes that might help. Getting rid of those IDE drives may help as well.
  3. M

    VM disk performance

Ok, just making sure it wasn't Windows, as that is a different ballgame, and I've found that different versions of the virtio drivers for Windows can show different speeds.
  4. M

    VM disk performance

If the IDE disk is being used for anything other than a CD-ROM you will have bad performance. I would make sure you're using VirtIO SCSI, not the old VirtIO Block that you have. Enable writeback cache. Enable IO thread and see how your performance goes. You didn't say what OS the VM is.
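    The disk tuning suggested above can be applied from the PVE host's CLI with `qm set`; this is a sketch where the VM ID (100), storage/disk name, and disk slot `scsi0` are placeholder assumptions, not values from the thread:

    ```
    # Switch the VM to the VirtIO SCSI single controller (one IO thread per disk)
    qm set 100 --scsihw virtio-scsi-single

    # Attach the disk as scsi0 with writeback cache and a dedicated IO thread
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback,iothread=1
    ```

    Note that `iothread=1` only takes effect with a controller that supports it (such as `virtio-scsi-single`), and inside a Windows guest the virtio drivers must be installed before the disk will appear on the VirtIO SCSI bus.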
  5. M

    Increasing Backup Speed?

How do I increase the TLS speed? Is this directly related to the processor, or are there other tweaks?
  6. M

    Increasing Backup Speed?

Just to post a benchmark before the CPUs are upgraded, I ran this on the PBS server against itself. Uploaded 107 chunks in 5 seconds. Time per request: 47380 microseconds. TLS speed: 88.52 MB/s. SHA256 speed: 202.00 MB/s. Compression speed: 337.06 MB/s. Decompress speed: 561.61 MB/s. AES256/GCM...
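    For reference, figures like those above come from PBS's built-in benchmark, which can be run on the server against itself; the repository string here (user, host, and datastore name) is a placeholder:

    ```
    # Uploads dummy chunks to the datastore, then reports TLS, SHA256,
    # compression/decompression, and AES256/GCM throughput
    proxmox-backup-client benchmark --repository root@pam@localhost:datastore1
    ```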
  7. M

    Selective Restore Feature

    That is correct @Matthi. Inside of VE is where I would like selective restore.
  8. M

    Selective Restore Feature

Can we ask for a selective restore for VMs with multiple disks? Use case: 1 VM with 2 disks. Disk 1: OS and typical files, 200GB. Disk 2: data storage for the application, 1TB. If the corruption for the VM is only in the OS disk, I would like the ability to selectively restore just the OS disk...
  9. M

    Backup Server configuration recommendation

    Holy Toledo Batman... 15TB SSD cost 3K.
  10. M

    Slow Backup reading source

I was hitting this 50MB mark myself and I was told that it was an AES problem with the processor on my storage server. In my case the PBS server had old L5520 Xeons that didn't support AES, so I eBayed some X5670's that will be here next week. I don't know if this is your problem or not...
  11. M

    PMG Suitability and recommendations for customer / prospect

PMG doesn't have this option currently; it has been brought up multiple times before but isn't on the road map as far as I know. If Proxmox is listening: 1. Each transport map for a domain should have the ability to use a different sending IP. 2. The ability to use a range of IPs on the transport...
  12. M

    Mail gateway cluster set up

    You will need two public IPs, one routed to each PMG. You will need an A record set up so you can reach the server by each name. Then you will set up the MX records to point to the same A (hostname) that you set up earlier. Both of the MX records must use the same priority. This is what will...
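    The DNS layout described above could look like the following in BIND zone-file form; the hostnames and addresses are made-up examples, not values from the thread:

    ```
    ; One A record per PMG node, each pointing at that node's public IP
    pmg1.example.com.   IN  A   203.0.113.10
    pmg2.example.com.   IN  A   203.0.113.11

    ; Both MX records use the same priority (10), so sending servers
    ; distribute mail across the two PMG nodes
    example.com.        IN  MX  10  pmg1.example.com.
    example.com.        IN  MX  10  pmg2.example.com.
    ```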
  13. M

    PMG Suitability and recommendations for customer / prospect

As an email admin, I would never let any automated system send emails from my main domain. When you get blacklisted for sending mass mail, your main domain will become useless and you will have a 48-to-72-hour headache trying to send legit non-mass mail to your customers. So the best use case is to...
  14. M

    Backups testing

Linux servers that are not database servers typically are fine; I test restoring mine every couple of months. Database servers should also have another means of backup: you can restore the VM with the backup, but you may need to drop the database and import external backups. Can't speak for Windows.
  15. M

    PMG Suitability and recommendations for customer / prospect

PMG would only be a restriction if you're filtering outgoing email. If you're sending your target email on a subdomain, don't route that for spam filtering. Only route your main domain for external spam filtering.
  16. M

    Increasing Backup Speed?

I've tested a handful of the hosts and they all return AES. "AES256 GCM encryption speed │ 799.33 MB/s", and root@XXXX:~# grep -m1 -o aes /proc/cpuinfo returns "aes". But the PBS server doesn't show AES...
  17. M

    lacp - bond - vmbr(multiple vlans) - opnsense

    Ok great, but you didn't need to use OVS to get this to work. Just uncheck the box for VLAN aware and you would have been golden.
  18. M

    Increasing Backup Speed?

We've just recently started testing PBS and I really like the simplicity of the software. But I feel that something must be wrong, because backups are much slower than vzdump to an NFS share. Network: bonded 10G on all servers to the ToR using LACP with L3+4 hashing; ToR switches are Arista in MLAG fashion...
  19. M

    lacp - bond - vmbr(multiple vlans) - opnsense

Turn off bridge VLAN aware if you're tagging on the NIC inside Proxmox. Having a VLAN-aware bridge is for when you tag internally on the interface of your VM inside the OS. This bit me when I moved from Open vSwitch to Linux bridges.
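    A plain Linux-bridge setup matching the advice above (tagging on the host side, no VLAN-aware flag on the bridge) could look like this in /etc/network/interfaces; the NIC names, bond options, and VLAN ID 20 are illustrative assumptions:

    ```
    # LACP bond across two NICs with layer3+4 hashing
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    # Tag VLAN 20 on the bond itself and bridge the tagged sub-interface;
    # note there is no bridge-vlan-aware line, per the advice above
    auto vmbr20
    iface vmbr20 inet manual
        bridge-ports bond0.20
        bridge-stp off
        bridge-fd 0
    ```

    VMs attached to vmbr20 then see untagged traffic, so nothing needs to be tagged inside the guest OS.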
  20. M

    Poor write performance on ceph backed virtual disks.

When you upgraded to Octopus, did you destroy each OSD? I think I remember seeing performance problems in another post if you didn't. The upgrade docs don't say anything about it. https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus
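    Destroying and recreating an OSD, as suggested above, is roughly the following sequence on a PVE node, done one OSD at a time with a rebalance wait in between; the OSD ID (0) and device path (/dev/sdb) are placeholders:

    ```
    # Take the OSD out of the cluster and wait for data to migrate off it
    ceph osd out 0
    systemctl stop ceph-osd@0

    # Remove the OSD from the cluster map, then wipe the underlying disk
    ceph osd purge 0 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdb --destroy

    # Recreate the OSD on the same disk with the current on-disk format
    pveceph osd create /dev/sdb
    ```

    Only proceed to the next OSD once the cluster reports HEALTH_OK again, so redundancy is never reduced on more than one OSD at a time.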