Search results

  1. Migrating storage backend ends in new copy of all data

    Hi, does it make sense that a storage migration backs up the whole disk again? Not only is the first backup many times slower, I also guess the old disk will stay on the backup server, taking up space. Example: migrating virtio0 from local disk to Ceph means that the backup...
  2. Server sizing and speed

    I know this page, but what does it really say? I ran the test on a PBS VM; it was in the middle of the field. But I did not notice much difference between attached SSD storage and SATA storage, and that's exactly what I want to know: TLS speed is almost the same for HDD and SSD, both between...
  3. Server sizing and speed

    Does nobody have examples of production servers? And sizing?
  4. Server sizing and speed

    This is the speed on a test VM: TLS speed: 560.44 MB/s, SHA256 speed: 348.40 MB/s, Compression speed: 456.07 MB/s, Decompress speed: 771.66 MB/s, AES256/GCM speed: 1534.76 MB/s, Verify speed: 243.81 MB/s. On a new production system I guess it will be a lot faster. Does anybody have figures for how fast...
  5. Server sizing and speed

    I was playing around with PBS as a virtual machine with Ceph storage as the backend (SSD pool). Backup and restore speed was quite slow (only 30-40 MB/s) and I don't know why; the network is 10G. (Delta speed on the second backup is impressive.) The PBS had low CPU usage and only 60 IOPS. I don't know why it...
  6. Server sizing and speed

    At the moment we are quite happy with just backing up to NFS. It works really reliably, but backup is very slow for big data because everything needs to be sent to the storage via LAN. So I guess with the new backup server it will be way faster (after the first initial backup). Do you have any numbers...
  7. Server sizing and speed

    So restoring is more I/O bound than backup? Because restoring was never a problem before. Have you got any numbers on how many IOPS it takes? Using enterprise SSDs for backup is a little bit expensive. As I wrote, generally a big RAID of 16 spinners with 7.2k disks will have about 100 IOPS. If the...
  8. Server sizing and speed

    Hi, we now have a 90 TB backup server used just as simple NFS storage, with a lot of 8 TB disks in one big hardware RAID. Now we want PBS. The docs say it is better with SSDs (of course), but we also want about 90 TB, and with SSDs that is really expensive. With about 12-16 spinners in a...
  9. Best Practice Network

    No, this started before Ceph was recovering. We had a really strange network issue: one switch of the two in the stack was going mad... but it was impossible afterwards to reconstruct what really happened. We did not have the time to play around and run more tests or analysis during the problem. Just...
  10. Best Practice Network

    At the moment we have 3 network cards with 2 links each, so we use LACP: 1 for VM traffic / Corosync (not so good), 1 for storage, backups and MONs, and 1 for OSD traffic, all connected with LACP to 2 Juniper 10G switches. Actually this worked quite well for 7 years. Our problem was that 1 of our...
  11. Best Practice Network

    I don't trust only 2 switches anymore. Ours are really strong, but when they start to flip/route VLANs around, everything still goes bad... maybe we just had terribly bad luck... I asked some network specialists and they said they had never seen anything similar...
  12. Best Practice Network

    OK, in this chart it seems they share the storage with the OSD network, which is also not best practice, I think. Generally the question is: with two corosync networks configured, is it possible to designate a first network and a failover network? In the last few days we had bad luck. We have 4 big Juniper switches and 3...
  13. Best Practice Network

    Hi, we have had a Proxmox / Ceph cluster up and running for 8 and 5 years respectively. Now some Ceph servers in particular are end of life and we want to replace them. We now want to change the network for best stability, as we occasionally had some problems. Currently we have 2 Juniper switches (all 10G LACP connected)...
  14. Ceph Octopus

    Hi, when will Octopus be in the main (not test) repository?
  15. LDAP verification only for some domains

    Hi, we changed the Exchange config and now recipient verification works, but it still sucks that the Tracking Center is then spammed with double-bounce messages. It should be possible to filter these messages away. I also got the LDAP way working. The rules are a little bit complicated (as now we need...
  16. LDAP verification only for some domains

    No, the others don't have LDAP, that's the problem...
  17. LDAP verification only for some domains

    Yes, for us it's also only "incoming", but we have different backend servers: one is our Exchange cluster and the others are on-site mail servers... (configured in the Transports tab). INTERNET <-> PMG <----> EXCHANGE SERVER, OTHER MAILSERVER1, OTHER MAILSERVERX, where only the Exchange can do LDAP! So...
  18. LDAP verification only for some domains

    I was thinking about that. Tomorrow my Exchange colleague will check it.
  19. LDAP verification only for some domains

    Thanks, I did that already, but this also blocks all emails forwarded to other servers which do not have LDAP auth (we have not only 1 Exchange server behind it). I am also experimenting with complicated rulesets to avoid this: first make all spam checks go to these servers, then mark them as...
  20. LDAP verification only for some domains

    Sep 21 22:45:15 mailgw02 postfix/cleanup[9439]: E7281A143E: message-id=<20200921204514.E7281A143E@mailgw02.exchange.at>
    Sep 21 22:45:15 mailgw02 postfix/qmgr[9344]: E7281A143E: from=<double-bounce@mailgw02.exchange.at>, size=251, nrcpt=1 (queue active)
    Sep 21 22:45:20 mailgw02...