Search results

  1. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    @martin thanks for notifying! Ceph still does not run with RDMA enabled.
  2. Ceph Performance

    Just test first with rados bench. Example on a 56 Gbit/s cluster: rados bench -p rbd 300 write --no-cleanup -t 256 ... Total time run: 300.048707, Total writes made: 266600, Write size: 4194304, Object size: 4194304, Bandwidth (MB/sec): 3554.09, Stddev Bandwidth...
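The reported bandwidth figure can be sanity-checked from the other totals in the excerpt above: total data written divided by total run time. A minimal sketch, using only the numbers quoted from the benchmark output:

```python
# Reproduce the "Bandwidth (MB/sec)" figure from the rados bench totals.
total_writes = 266600          # "Total writes made"
object_size_bytes = 4194304    # "Object size" (4 MiB per object)
total_time_s = 300.048707      # "Total time run"

total_mb = total_writes * object_size_bytes / (1024 * 1024)  # data written in MB
bandwidth = total_mb / total_time_s
print(f"Bandwidth (MB/sec): {bandwidth:.2f}")  # matches the reported 3554.09
```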
  3. cluster replication between two locations

    Typical Windows Server workload ... tons of files in the Windows filesystem, a dedicated SQL Server, and an Exchange Server ... so transparent block-device replication would be best. The existing 10 Mbit/s leased line is currently only used for RDP traffic from 15 workstations ... QoS is just not really possible...
  4. cluster replication between two locations

    @jeffwadsworth Thanks for these directions, but I see no real-life solution yet, or have I missed something? Also, the bandwidth and latency requirements for rbd-mirror are unknown to me ... has anyone implemented this scenario?
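The bandwidth question can at least be bounded with back-of-the-envelope arithmetic. A minimal sketch: the 10 Mbit/s line is from the thread, while the daily write churn figure is an assumed value purely to illustrate the calculation, not a measured workload:

```python
# Rough feasibility check for asynchronous replication (e.g. rbd-mirror)
# over a thin pipe: how long does one day of write churn take to ship?
link_mbit_s = 10          # leased line speed, from the thread
daily_churn_gib = 20      # ASSUMED daily VM write churn, for illustration only

link_bytes_s = link_mbit_s * 1_000_000 / 8        # 1.25 MB/s raw, ignoring overhead
churn_bytes = daily_churn_gib * 1024**3
transfer_hours = churn_bytes / link_bytes_s / 3600
print(f"{transfer_hours:.1f} h to ship one day of churn")
```

If the computed time approaches 24 h, the link cannot keep up and the mirror falls ever further behind; protocol overhead and the RDP traffic sharing the line make the real margin smaller still.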
  5. cluster replication between two locations

    Yep, I saw this, but we want to build Ceph storage and replicate it over this thin pipe ... that article refers to ZFS.
  6. cluster replication between two locations

    Hi folks, I'm planning a Proxmox cluster for a customer. He has a dedicated line between two locations, but only 10 Mbit/s synchronous. The VMs will be Windows AD, Terminal Server, Exchange, and MS SQL Server; clients connect via RDP to the Terminal Server. We want to set up a cluster on both sites, so in case...
  7. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    @fabian Mellanox requested to see the Infiniband.cc of your version ... can you please send me this? "Gerhard, can you send me the Infiniband.cc which is compiled into your version?" From: Vladimir Koushnir, Sent: Friday, October 06, 2017 8:23 PM, To: Gerhard W. Recher <gerhard.recher@net4sec.com>...
  8. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    Hi fabian ... this fix does not win the RDMA match ... the cluster will still not come up with RDMA enabled :( I dropped Mellanox a note and a link to our log files.
  9. Container no Disk stats

    Just ran a sysbench within a container (Debian 9); the GUI does not graph Disk I/O! It works fine for KVM (Linux/Windows) guests. How do I get these stats back?
  10. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    @fabian any plans to commit this patch to your 12.2.1 build?
  11. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    @fabian any chance to get this incorporated? RDMA will *not* run without this patch :( https://github.com/ceph/ceph/pull/18091
  12. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    @fabian I'm just in a conference with Mellanox; we found a bug in 12.2.1 (also present in previous versions). They will push a fix to master today. The bug occurs because gid_idx is not initialized in a Port object. Offending messages: 2017-10-02 10:56:03.378774 7fb5f27fc700 20...
  13. ceph performance 4node all NVMe 56GBit Ethernet

    Fabian, I installed the new kernel, but RDMA for Ceph still does not work. I opened a discussion on the ceph-users list and a case at Mellanox. Shall I start a new thread "rdma for ceph" or keep updating this one?
  14. ceph performance 4node all NVMe 56GBit Ethernet

    fabian, yes, I meant this ... so both repos are identical?
    deb https://download.ceph.com/debian-luminous stretch main
    deb http://download.proxmox.com/debian/ceph-luminous stretch main
    OK, so I'll wait for kernel 4.13.x ... how do I get a notification when the kernel is pushed?
  15. ceph performance 4node all NVMe 56GBit Ethernet

    fabian, may I switch to the Ceph install from ceph.com before trying the new kernel from Proxmox? Perhaps the Proxmox Ceph build instructions are not complete for utilizing RDMA on Mellanox ConnectX-3 Pro? Regarding support for RDMA ... I think many customers out there are interested in this ... gaining...
  16. ceph performance 4node all NVMe 56GBit Ethernet

    fabian, it's not IB but 56 Gbit/s Ethernet ... may I donate a pair of NICs for your lab? The Ceph mailing list is also not really helpful.
  17. ceph performance 4node all NVMe 56GBit Ethernet

    @fabian I managed to change the systemd files as mentioned earlier in this thread, but Ceph won't start ... so I reverted my ceph.conf to not use RDMA :( I'm totally stuck.
    -- Reboot --
    Sep 26 18:56:10 pve02 systemd[1]: Started Ceph cluster manager daemon. Sep 26 18:56:10 pve02 systemd[1]...
  18. IFconfig command not available on 5.0?

    Simply install it: apt-get install net-tools
