Search results

  1. PBS Off-site Sync (Cryptolock)

    Hello, We are really happy with PBS so far, great product! Question: at the moment we are running a PBS in our datacenter which backs up all the VMs in the cluster. We want to protect ourselves against, for example, cryptolockers/ransomware that makes not only the data in the VM unusable but in the worst...
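One common pattern for this is a pull-based sync job: the off-site PBS pulls from the primary, so a compromised primary has no credentials to touch the off-site copy. A minimal sketch using the PBS CLI; the host, auth-id, datastore names, and schedule below are placeholder assumptions:

```shell
# On the OFF-SITE PBS (sketch, assumed names throughout):
# 1) register the primary PBS as a remote
proxmox-backup-manager remote create primary-pbs \
    --host 192.0.2.10 --auth-id sync@pbs \
    --fingerprint <fingerprint-of-primary> --password <secret>

# 2) create a sync job that pulls its datastore into a local one
proxmox-backup-manager sync-job create offsite-pull \
    --remote primary-pbs --remote-store datastore1 \
    --store offsite-copy --schedule daily
```

Because the job runs on the off-site box in pull mode, ransomware on the primary side never holds credentials for the off-site datastore.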
  2. Verification takes long time

    Ah okay, thanks for the information! I will change it to once a week.
  3. Verification takes long time

    Thank you! I understand that it is not needed to run daily. But if verification of, for example, 100 GB takes 15 minutes (an example; for real times I would have to look) and I only run verification once a month, I have 30 x this backup job/data, so it still needs 30 x 15 minutes, right?
  4. Verification takes long time

    Hello, We have about 150 VMs in our Proxmox cluster, and the backup with PBS is now super fast! We start the verification process daily at 09:00 AM, but it needs almost 12 hours to finish, while the CPU is only running at about 30%. Can we improve something to speed up this process? Or run multiple...
  5. DirectAdmin Plugin

    Hello, You can find the plugin here: https://documentation.solarwindsmsp.com/spamexperts/documentation/Content/Integration/directadmin-addon.htm The plugin adds/removes the domains from DirectAdmin to the SpamExperts spam filter and sets the delivery route to the DirectAdmin server. I am not...
  6. DirectAdmin Plugin

    Hello, Currently we use SpamExperts with a DirectAdmin plugin which automatically creates the domain in the SpamExperts spam filter. Is there a similar plugin/add-in for the Proxmox Mail Gateway? Kind regards, Sander
  7. Proxmox Ceph Converged (HCI) or external ceph

    Hello, Thanks for your response. Yes, I have read them already, but they tell me (almost) nothing about the pros or cons of going converged versus continuing with a separate Ceph cluster and a separate Proxmox cluster. I am looking for any advice, recommendation or experience on which option is going to...
  8. Proxmox Ceph Converged (HCI) or external ceph

    Hello, At the moment we have: 6 x Proxmox nodes (2 x 10 cores; 2 nodes have 2 x 14 cores; 512 GB RAM; 4 x 10 GbE: 2 x 10 GbE LACP for network and Corosync, 2 x 10 GbE LACP for storage); 3 x Ceph monitors (dual core, 4 GB RAM, 2 x 10 GbE LACP); 4 x Ceph OSD nodes (2 x 6 cores @ 2.6 GHz, 96 GB RAM, 4 x 10 GbE: 2 x...
  9. Proxmox external Ceph Disk Cache recommendation

    Hello, I searched this forum and Google but I cannot find the final answer. We have a Proxmox cluster with a remote Ceph Luminous cluster. I see I get much faster writes with cache=writeback in the disk options in Proxmox (random 4k up to 16x faster and sequential 10x faster) than with cache=none...
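For reference, the cache mode in question can be set per disk from the Proxmox CLI as well as the GUI. A sketch; the VM ID, bus slot, and storage/volume names are placeholders for your own setup:

```shell
# Switch the first SCSI disk of VM 100 to writeback caching
# (assumed VM ID and volume name; adjust to your environment).
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
```

The trade-off behind the speedup: with writeback, writes are acknowledged once they reach the cache rather than stable storage, so a host crash can lose data the guest has not yet flushed; guests must honor flush/FUA requests for this to be reasonably safe.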
  10. Since 6.0 backup hang vms

    Hello, I have exactly the same problem, with VMs I back up from local storage and from Ceph storage. Both backups go to NFS; limiting the backup speed with bwlimit in vzdump.conf prevents this from happening, but now my backups are too slow and cannot finish during the night. In 6.0 I had no...
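The bwlimit workaround mentioned above is a global default in /etc/vzdump.conf; the value is in KiB/s, and the number below is only illustrative:

```
# /etc/vzdump.conf -- global vzdump defaults
# bwlimit is in KiB/s; 100000 is roughly 100 MB/s (illustrative value)
bwlimit: 100000
```

The same limit can also be passed per job with `vzdump ... --bwlimit <KiB/s>`, which avoids throttling every backup globally.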
  11. Ceph low performance (especially 4k)

    Hi, No, not really; performance increased a little bit after a new switch but is still almost the same...
  12. Ceph low performance (especially 4k)

    This helps 4k a little bit; I now get 5.2 MB/s 4k read and 2.6 MB/s 4k write! But I think with this hardware this could be more, right?
  13. Ceph low performance (especially 4k)

    I am running on CentOS 7: "3.10.0-862.11.6.el7.x86_64". Can I add this option safely without problems?
  14. Ceph low performance (especially 4k)

    I understand it is for lower latency; the traffic is almost nothing. I get the following information with ping -f: From Proxmox to OSD node: 678456 packets transmitted, 678456 received, 0% packet loss, time 60305ms rtt min/avg/max/mdev = 0.027/0.070/1.756/0.017 ms, ipg/ewma 0.088/0.067 ms From...
  15. Ceph low performance (especially 4k)

    Yes, it is a production cluster... Okay, and is it otherwise useful to add 10 GbE to the monitoring nodes?
  16. Ceph low performance (especially 4k)

    I set osd enable op tracker = false and found on the internet that you can also disable: throttler perf counter = false. Sequential read really improved from 642 MB/s to 1016.5 MB/s, but write and 4k read/write are almost the same, unfortunately. Are there any more configurations/options I can change to...
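For clarity, the two options mentioned are ceph.conf settings; a sketch of where they usually go (section placement is an assumption, and both disable diagnostics, so they should be re-enabled when you need to debug slow requests):

```
# ceph.conf fragment (sketch): trades per-op observability for
# a small CPU saving on each I/O.
[osd]
osd enable op tracker = false

[global]
throttler perf counter = false
```

Changes take effect after restarting the affected daemons, which is why doing this on a running cluster should be staged one OSD node at a time.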
  17. Ceph low performance (especially 4k)

    Can I do this on a running cluster without problems?
  18. Ceph low performance (especially 4k)

    Okay, thanks. Can you tell me if I can run the fio benchmark on a running OSD disk, without losing data/connection etc.?
  19. Ceph low performance (especially 4k)

    And is it useful to disable cephx? If I am right, we need to reboot all the VMs to do this, right?
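Disabling cephx is a cluster-wide auth change in ceph.conf that every daemon and client (including the VMs' librbd sessions) must pick up after a restart, which is where the VM restarts come from. A sketch of the relevant lines, applied identically on every node:

```
# ceph.conf (sketch): turn off cephx authentication entirely.
# All daemons AND all clients must be restarted to apply this,
# and it removes authentication between cluster members.
[global]
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
```

The usual caveat: the performance gain is small and the cluster then trusts anything that can reach its network, so this is normally only considered on a fully isolated storage network.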
  20. Ceph low performance (especially 4k)

    Thank you for your responses! Can I do the fio test on an existing OSD disk without losing data? This cluster is already in production, yes. I can add 10 GbE cards to the monitor nodes; do you think this will help?
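On the fio question: running fio against a raw device (--filename=/dev/sdX) writes over it and will destroy a live OSD, so that form is only for empty disks. A safer sketch for a production cluster is to benchmark a throwaway RBD image via fio's rbd ioengine; the pool and image names here are placeholders:

```shell
# DESTRUCTIVE raw-device form -- only on a disk holding no data:
# fio --filename=/dev/sdX --direct=1 --rw=randwrite --bs=4k \
#     --iodepth=32 --runtime=60 --name=4k-write --group_reporting

# Non-destructive alternative: benchmark a scratch RBD image
# (assumed pool/image names) instead of a live OSD disk.
rbd create test-pool/fio-scratch --size 10G
fio --ioengine=rbd --pool=test-pool --rbdname=fio-scratch \
    --direct=1 --rw=randwrite --bs=4k --iodepth=32 \
    --runtime=60 --name=rbd-4k-write
rbd rm test-pool/fio-scratch
```

This measures whole-cluster 4k performance as a client sees it, which matches what the VMs experience, rather than the raw speed of one OSD disk.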