Search results

  1. nvmedisk move to other slot

    Hi followers, we made a mistake when physically placing the NVMe disks in our machines: currently 7 NVMe per node, but all are tied to CPU #1. I'm looking for a best practice to move half of them to disk slots tied to CPU #2. My first approach, from theory :), on each storage node: move.sh (param: 1)...
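
    The move.sh in the post is truncated, so this is only a guess at its shape: a minimal per-OSD sketch, assuming the OSD ID is passed as the first parameter and the OSDs run as the standard ceph-osd@<id> systemd units.

      #!/bin/bash
      # move.sh -- hypothetical sketch: take one OSD down, move its disk, bring it back.
      # Usage: ./move.sh <osd-id>
      ID=$1

      ceph osd set noout                   # avoid rebalancing while the OSD is down
      systemctl stop ceph-osd@$ID          # stop the OSD before pulling the disk

      read -p "Move the disk of osd.$ID to a slot wired to CPU #2, then press Enter"

      systemctl start ceph-osd@$ID         # bring the OSD back up
      ceph osd unset noout

      # wait until the cluster has recovered before moving the next disk
      while ! ceph health | grep -q HEALTH_OK ; do sleep 10 ; done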
  2. Adding and activating network interface

    Nope, you have to configure it inside your VM!
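
    For example, on a Debian/Ubuntu guest a minimal static configuration might look like this (the interface name ens19 and the addresses are assumptions; check yours with "ip link"):

      # /etc/network/interfaces -- inside the guest, not on the PVE host
      auto ens19
      iface ens19 inet static
          address 192.168.1.50/24
          gateway 192.168.1.1

    Then bring it up with "ifup ens19".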
  3. Datacenter Summary Permission

    Simply add the user and give permission with the role PVEAuditor ... not perfect but sufficient :)
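
    A minimal sketch with pveum, assuming a PVE-realm user named joe@pve (the user name and ACL path are placeholders):

      pveum useradd joe@pve
      pveum aclmod / -user joe@pve -role PVEAuditor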
  4. Ceph-iscsi howto ?

    Just installed tgt:

      dpkg --list tgt*
      ||/ Name                           Version              Architecture         Description
      +++-==============================-====================-====================-==================================================================
      ii  tgt...
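
    For reference, tgt built with RBD support can export an image directly; a sketch with tgtadm, assuming an existing image rbd/myimage and a made-up IQN:

      # create a target, attach the RBD image as LUN 1, allow all initiators
      tgtadm --lld iscsi --mode target --op new --tid 1 \
             --targetname iqn.2017-10.local.example:rbd
      tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
             --bstype rbd --backing-store rbd/myimage
      tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL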
  5. Ceph-iscsi howto ?

    Has anyone installed userland tgt to provide an rbd-iscsi multipath solution?
  6. Ceph-iscsi howto ?

    We need this for some existing VMware servers; as long as they are alive we'd like to have an rbd-iscsi gateway so we can shut down an old SAN ....
  7. Ceph-iscsi howto ?

    Thank you Fabian. Any plans to merge this? Or is there any other way with Proxmox to accomplish the task?
  8. Ceph-iscsi howto ?

    Hi, I can't find ceph-iscsi for Luminous in the Proxmox repos .... according to: http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
  9. CEPH performance

    Is your InfiniBand network on 10GigE, 40, or 56? Which network cards, and which firmware? Which switches, and which firmware? We have Mellanox, and our first step was to bump the firmware to the latest release, also on our Mellanox switch :) Please use code tags for cut & paste content ...
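
    Assuming the standard InfiniBand diagnostics tools are installed, a quick way to check firmware and link rate might be:

      ibv_devinfo | grep fw_ver       # HCA firmware version
      ibstat | grep -E 'Rate|State'   # link rate (10/40/56) and port state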
  10. CEPH performance

    Hmmm, you have the standard SSD, not the high-endurance version (3 DWPD)! The Toshiba HK4 is a 2.5" enterprise SATA SSD that comes in two models: a read-intensive model (HK4R) and a high-endurance model (HK4E). The drives have capacities ranging from 120GB to 1.92TB (depending on model type) and...
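
    For scale, a rough endurance estimate for the high-endurance model (assuming the usual 5-year warranty period, which is not stated in the post):

      TBW ≈ capacity × DWPD × warranty days
          = 1.92 TB × 3 × (5 × 365)
          ≈ 10,512 TB written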
  11. [SOLVED] very low benchmarks on 2008r2@PVE 5.1

    This only reflects CPU and RAM but no disk I/O ..... I get these results: bare metal vs. Proxmox VM.
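
    To actually measure disk I/O, fio is the usual tool; a minimal sketch (file path and size are placeholders; run it both on bare metal and inside the guest for comparison):

      # 4k random writes, direct I/O, queue depth 32 (Linux; on a Windows
      # guest use --ioengine=windowsaio instead of libaio)
      fio --name=randwrite --filename=/tmp/fio.test --size=4G \
          --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1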
  12. [SOLVED] CEPH: FileStore to BlueStore

    I have done this: for each OSD on each cluster member ...

      ID=$1
      echo "wait for cluster ok"
      while ! ceph health | grep HEALTH_OK ; do echo -n "."; sleep 10 ; done
      echo "ceph osd out $ID"
      ceph osd out $ID
      sleep 10
      while ! ceph health | grep HEALTH_OK ; do sleep 10 ; done
      echo "systemctl stop...
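
    The script is cut off above; one way the tail might continue, using the Luminous-era ceph-volume tooling (the device name, flags, and ordering are assumptions; check against your Ceph version before use):

      systemctl stop ceph-osd@$ID
      ceph osd destroy $ID --yes-i-really-mean-it      # keep the OSD id for reuse
      ceph-volume lvm zap /dev/sdX --destroy           # /dev/sdX: the OSD's device
      ceph-volume lvm create --bluestore --data /dev/sdX --osd-id $ID
      while ! ceph health | grep HEALTH_OK ; do sleep 10 ; done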
  13. ceph performance 4node all NVMe 56GBit Ethernet

    @fabian, which steps to map an image via KRBD? Do I have to modify the VM configuration? I can't follow your suggestion ... I have no clue how to accomplish this :( Just got a result from Mellanox: update 15:56, not really an amusing topic #2 :(
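
    For context, the two usual routes in Proxmox (the storage, pool, and image names below are placeholders): flip the whole storage to the kernel client, or map one image by hand.

      # Route 1: in /etc/pve/storage.cfg, add "krbd 1" to the rbd storage entry,
      # then stop/start the VM so its disks are re-attached via KRBD.

      # Route 2: map a single image manually with the kernel client
      rbd map rbd/vm-100-disk-1 --id admin
      rbd showmapped

    Note the kernel client may not support every Luminous image feature, so mapping can fail until unsupported features are disabled on the image.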
  14. ceph performance 4node all NVMe 56GBit Ethernet

    Good news and bad news. Ceph will start and is obviously OK ... read is pretty fast:

      rados bench -p rbd 60 seq --no-cleanup -t 56
      hints = 1
        sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
          0       0         0         0         0         0           -...
  15. ceph performance 4node all NVMe 56GBit Ethernet

    RDMA is up and running; I uninstalled Ceph and re-installed it from the Proxmox test repository, but performance is disappointing ...

      rados bench -p rbd 300 write --no-cleanup -t 56

    Without RDMA:

      Total time run:       300.065672
      Total writes made:    277607
      Write size:           4194304...
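
    For reference, enabling the RDMA messenger in that era meant setting the async+rdma options in ceph.conf; a sketch (the device name is an assumption, and RDMA messaging was still experimental in Luminous):

      # /etc/ceph/ceph.conf
      [global]
      ms_type = async+rdma
      ms_async_rdma_device_name = mlx4_0   # adjust to your Mellanox HCA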
  16. ceph performance 4node all NVMe 56GBit Ethernet

    I dropped them a note, will come back if they answer ... complicated stuff ......
  17. ceph performance 4node all NVMe 56GBit Ethernet

    Fabian, no, I retried on Friday the 13th :)

      2017-10-13 15:19:45.542352 7ffbf73e4f80  0 ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267) luminous (stable), process (unknown), pid 16829
      2017-10-13 15:19:45.542384 7ffbf73e4f80  0 pidfile_write: ignore empty --pid-file
      2017-10-13...
  18. ceph performance 4node all NVMe 56GBit Ethernet

    @fabian, Mellanox said I'm not using Ceph with the RDMA patch, but I'm convinced the versions are OK.

      ceph -v
      ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267) luminous (stable)

      ceph versions
      {
          "mon": {
              "ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267)...
  19. How to remove offline iSCSI storage?

    To my knowledge: remove the iSCSI entry from /etc/pve/storage.cfg and from LIO or whatever you used to define it.
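
    A sketch of the Proxmox side, assuming the storage is named my-iscsi (the name, portal, and target below are placeholders):

      # removes the storage definition cluster-wide
      pvesm remove my-iscsi

      # equivalently, delete its stanza from /etc/pve/storage.cfg:
      # iscsi: my-iscsi
      #         portal 192.168.1.200
      #         target iqn.2017-10.local.example:target1
      #         content images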
