Search results

  1. New firewall for VM not working

    You are right, I forgot about that :-) So many options to enable/disable the firewall. ;-)
  2. New firewall for VM not working

    Hi, I updated my Ceph test servers and the firewall is enabled. On the host it is working fine, but the firewall for the VMs is not. Everything is enabled, in the options and also the rules themselves, but with iptables-save I can't see the new rules. Machines stopped, started, etc....
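
    A quick way to narrow this down, as a rough sketch (the VMID 100, the NIC entry, and the MAC are examples, not taken from the post; pve-firewall is the service the Proxmox firewall packages ship with):

      # is the firewall service running, and what ruleset would it generate?
      pve-firewall status
      pve-firewall compile

      # the per-VM firewall must be enabled in /etc/pve/firewall/100.fw:
      #   [OPTIONS]
      #   enable: 1
      # and the NIC itself needs the firewall flag set on the VM:
      qm set 100 --net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,firewall=1
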
  3. CEPH poor performance (4 nodes) -> config error?

    Hi, first of all, don't expect very high speeds for smaller clusters with one single-threaded benchmark. Then you should keep in mind that with a replica count of 3 you have only 1/3 of the speed of all disks. For the journal writes on the same OSD you lose again 1/2 of the speed. So you already have only 1/6 of...
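
    The fractions in that snippet simply multiply; a sketch with made-up numbers (9 OSDs at 150 MB/s raw each are assumptions for illustration):

      # replica 3 -> 1/3 of aggregate bandwidth, journal on the same OSD -> another 1/2
      echo $(( 9 * 150 / 3 / 2 ))   # ~225 MB/s write ceiling, i.e. 1/6 of the raw 1350
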
  4. New packages in pvetest! Firewall, HTML5 Console, Two-factor authentication

    The HTML5 console is nice :-) After updating I am missing the /etc/pve/firewall/.... In the GUI it's available. Source & destination is an empty list...
  5. Ceph performance and latency

    root@ceph2:/var/lib/ceph/osd/ceph-16# dd bs=1M count=2560 if=/dev/zero of=test conv=fdatasync,notrunc gives me around 170 MB/s per disk, which is OK. So the theoretical speed should be around 1900 MB/s for 45 disks with replica 2 and journal on disks. If Ceph would "eat" only 30% of the speed it would max...
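
    The ~1900 MB/s figure follows directly from that per-disk result; the same arithmetic as a sketch:

      # 45 disks * 170 MB/s, halved for replica 2, halved again for on-disk journals
      echo $(( 45 * 170 / 2 / 2 ))   # 1912 -> the quoted ~1900 MB/s theoretical ceiling
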
  6. Ceph performance and latency

    OK, but what more or less was the performance gain? Do you also think that 500 MB/s write is bad for 45 disks (journals on disks), replica 2?
  7. Ceph performance and latency

    Do you have a performance comparison between the old and new controllers? It should be more than 2 times the speed (and what I would expect from my setup). Only 4k writes should be a little bit slower.
  8. Ceph performance and latency

    http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/ So the RAID controllers make a big difference! I use the following card: MegaRAID SAS 9271-8i, which also has the LSISAS2208 Dual-Core RAID on Chip (ROC) on board. On 4k writes it performs best in the benchmarks-...
  9. Proxmox VE Ceph Server released (beta)

    I tried with fio on a Linux guest. Same speed. This is also the max speed I get in rados bench for write. With reads I can do a lot more. For one VM I am really happy about the 500 MB/s write and 500 MB/s read, but for the whole cluster it's not enough. When I run 2 VMs with write tests the...
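
    For comparing guest speed against raw cluster speed, a minimal benchmark pair (the pool name "test" and the fio parameters are assumptions, not from the post):

      # raw cluster write throughput, bypassing the VM layer entirely
      rados bench -p test 60 write -t 16

      # inside the guest: sequential 1M writes with the page cache bypassed
      fio --name=seqwrite --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio
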
  10. Proxmox VE Ceph Server released (beta)

    OK, I got it: virtio makes a BIG speed difference now. IDE: 110 MB/s read; virtio: 500 MB/s read. What is funny is that both give me 500 MB/s write, virtio and also IDE. The cache modes also make no difference: writeback or none are both 500 MB/s. I am using 4 GB tests in CrystalDiskMark so I don't hit the cache...
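
    Moving a disk from IDE to virtio is a VM config change; a sketch (VMID 100 and the volume name are examples, and a Windows guest needs the virtio drivers installed first):

      # attach the existing volume on the virtio bus; cache=none, since
      # the post saw no difference from writeback anyway
      qm set 100 --virtio0 local:vm-100-disk-1,cache=none
      qm set 100 --delete ide0
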
  11. Ceph performance and latency

    Did someone get nice performance out of their Ceph cluster? Reading around, it seems that Ceph performs really poorly. @patrick - did you get better speed meanwhile? I installed a new Ceph cluster now (I had a small test cluster running for testing only): 3 nodes with 2x 2.6 GHz Xeons, 128 GB RAM...
  12. More than one network on one network card?

    Hi, I have just 2 network cards in all the old servers. So I have one network for internal traffic (backups, ISOs, etc.) and one for the VM traffic. Now I also want to use the Ceph from the new servers. Internal traffic should be separated from the Ceph monitors, I think? (Ceph OSDs have their own...
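
    One common way to carry several networks over one card is 802.1q VLAN tagging; a sketch for /etc/network/interfaces (interface names and addresses are made up, and the switch ports must carry the tags):

      # needs the vlan package (8021q module) on Debian/Proxmox
      auto eth1.10
      iface eth1.10 inet static
          address 10.0.10.2
          netmask 255.255.255.0

      auto eth1.20
      iface eth1.20 inet static
          address 10.0.20.2
          netmask 255.255.255.0
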
  13. How to set up a spare node for PVE Cluster HA

    Does rgmanager check the free RAM, or does it overcommit?
  14. Proxmox VE Ceph Server released (beta)

    Hi, I purchased 3 servers for Ceph, with 2 10-gig NICs per server, and two 10G switches. Unfortunately the switches are not stackable (too expensive). What is the best way to get fault tolerance? 10G network bonded for the monitors, 10G network bonded for the OSDs. Without stackable switches I cannot make any...
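
    Without stackable switches, an active-backup bond with one leg on each switch is the usual fallback, since that mode needs no switch-side support; a sketch for /etc/network/interfaces (interface names and addresses assumed):

      auto bond0
      iface bond0 inet static
          address 10.10.10.2
          netmask 255.255.255.0
          slaves eth2 eth3
          bond_miimon 100
          bond_mode active-backup
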
  15. Proxmox VE Ceph Server released (beta)

    >> If you use the latest Ceph version, Firefly, then the journal doesn't matter. Yeah, but this feature will still not be production-ready for some time :-(
  16. Best I/O scheduler for the Host and guest

    That's right, there is no best scheduler. Especially with backups you can experience problems when you have the wrong scheduler. For small setups (RAID1 with SATA) we had the best results with CFQ, so that the backup did not block all the VMs (it is the only one to support the ionice feature of...
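
    Both knobs mentioned there in one place, as a sketch (the device path, VMID, and storage name are examples):

      # inspect and switch the elevator for one disk at runtime
      cat /sys/block/sda/queue/scheduler
      echo cfq > /sys/block/sda/queue/scheduler

      # CFQ is the scheduler that honors ionice, so the backup can run in the
      # idle class instead of starving the VMs
      ionice -c3 vzdump 100 --storage backup
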
  17. Proxmox VE Ceph Server released (beta)

    What was your benchmark with mixed SSD & SATA? The question is whether a server with a lot more CPU, RAM & NIC power could handle that without performance loss... Our machines have 2x hexa-core CPUs and 128 GB RAM.
  18. Proxmox VE Ceph Server released (beta)

    >> The big reason I left it behind is a performance issue. Both my SSD and HDD pools were on the same node and both pools took a performance penalty. OK, and how strong is your server? RAM, CPU? You have only 1-gig Ethernet. We will use 2x 10-gig cards, so we have 20 gig for OSD traffic and 20 for the mons >>...
  19. Option read/write limit removed in newer versions?

    On my newer test Proxmox server I can't find the read/write limit options for the hard disks anymore, no matter what storage.... Thanks
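
    If only the GUI fields moved, the limits may still be settable as disk options from the CLI; a sketch (VMID, volume name, and values are examples; mbps_rd/mbps_wr are the drive options to try):

      # cap the disk at 50 MB/s read and 30 MB/s write
      qm set 100 --virtio0 local:vm-100-disk-1,mbps_rd=50,mbps_wr=30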