Search results

  1. ceph rbd slow down read/write

    The node has 2 TB RAM, the container has 50 GB RAM. This behavior is present on different pools and not only on Ceph; it also shows up on an image mapped from a local SSD storage device on the node. When I get the chance, I will definitely do it.
  2. ceph rbd slow down read/write

    This is not a Ceph problem. The problem hides in cached memory. If I drop the caches, performance recovers until the cache fills up again (a reproduction sketch follows this list): # dstat -clrd --disk-util -D rbd3 -i 10 ----total-cpu-usage---- ---load-avg--- --io/rbd3-- --dsk/rbd3- rbd3 ----interrupts--- usr sys idl wai hiq siq| 1m 5m...
  3. [SOLVED] Create backup fail with error: Cannot open: Permission de

    This variant does not work if the backup is bigger than the free space in /tmp.
  4. [SOLVED] Create backup fail with error: Cannot open: Permission de

    The chmod permissions need to be set on the mountpoint too (a sketch follows this list): # lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- ls -la /mnt/pve/backup-1 ls: cannot open directory '/mnt/pve/backup-1': Permission denied # lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- ls -la /mnt/pve/ ... drwxrwxrwx 4 root...
  5. ceph rbd slow down read/write

    Thank you. If I come to think the problem is in the hardware, I will make a post with this information. At the moment I do not think so.
  6. ceph rbd slow down read/write

    This trouble is not in the hardware layer, I'm sure of it. I found the following: when I increase the RAM size for the container, the performance drop moves to the new RAM size, and after dropping the caches performance grows back to the normal level (a sketch of the experiment follows this list). find /mnt/data/maps/cache/coords -exec dd if={} of=/dev/null bs=1M \; ... 2108839 bytes...
  7. ceph rbd slow down read/write

    The SSD models are listed in post https://forum.proxmox.com/threads/ceph-rbd-slow-down-write.55055/#post-253876. If performance were dropping on the SSDs, their utilization would have to grow; instead, SSD and rbd utilization drops to a minimum. Too much info to post in a code tag: $ tar tfz conf.tar.gz conf/...
  8. ceph rbd slow down read/write

    Same result after echo 3 > /proc/sys/vm/drop_caches and a read with bs=4M (dd if=/mnt/data/maps/planet-190513.osm.pbf of=/dev/null status=progress bs=4M): the speed in the LXC container drops after 42 GiB read. Utilization drops too: ... Device: rrqm/s wrqm/s r/s w/s rkB/s...
  9. ceph rbd slow down read/write

    I ran two reads in parallel, one on the host from the rbd and another of a file in the container from the same rbd. In the container the read speed dropped after 42 GiB: # dd if=/mnt/data/maps/planet-190513.osm.pbf of=/dev/null status=progress 48297345536 bytes (48 GB, 45 GiB) copied, 595,004 s, 81,2 MB/s...
  10. ceph rbd slow down read/write

    Even just reading the file to /dev/null degrades after a few minutes (in the LXC container): # dd if=/mnt/data/maps/planet-190513.osm.pbf of=/dev/null status=progress 48314671616 bytes (48 GB, 45 GiB) copied, 655,003 s, 73,8 MB/s 94384404+1 records in 94384404+1 records out 48324815168 bytes (48...
  11. ceph rbd slow down read/write

    And fio on the rbd (outside the LXC container; a guess at the job file follows this list): # fio ceph-rbd-read.fio read-seq-4K: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64 read-seq-4M: (g=1): rw=read, bs=4M-4M/4M-4M/4M-4M, ioengine=libaio, iodepth=16 fio-2.16 Starting 2 processes Jobs: 1 (f=1): [_(1),R(1)] [72.8% done]...
  12. ceph rbd slow down read/write

    host-8: sda 1:0:0:0 disk ATA XA3840ME10063 00ZU sata sda 0 4096 0 4096 512 0 deadline 128 128 0B sda 3.5T root disk brw-rw---- sdb 2:0:0:0 disk ATA XA3840ME10063 00ZU sata sdb 0 4096 0 4096 512 0 deadline...
  13. ceph rbd slow down read/write

    The trouble is not at the OSD or pool level. I think the problem is at the rbd and mapping level in the LXC container.
  14. ceph rbd slow down read/write

    # rados -p bench2 bench 60 write -b 4M -t 16 --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects Object prefix: benchmark_data_lpr8_632662 sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg...
  15. ceph rbd slow down read/write

    I have attached the file conf.tar.gz.
  16. ceph rbd slow down read/write

    Summary: pve-manager/5.3-5/97ae681d (running kernel: 4.15.18-9-pve); ceph version 12.2.8 (6f01265ca03a6b9d7f3b7f759d8894bb9dbb6840) luminous (stable); 4 nodes (per node: 4 NVMe SSD & 2 SAS SSD, bluestore) + 1 node with 4 SATA SSD; interconnect 2x 10Gbps. Created a pool (512 PGs, replicated 3/2; a sketch of the equivalent commands follows this list) on...
  17. ceph can't disable tiering cache

    And does this operation require restarting all the nodes?
  18. ceph can't disable tiering cache

    I found out that I cannot give up tiering with an erasure-coded pool for storing images, because: "RBD can store image data in EC pools, but the image header and metadata still needs to go in a replicated pool. Assuming you have the usual pool named “rbd” for this purpose" For example (a generic sketch with hypothetical names follows this list): rbd create...
  19. zfs loss storage when add new block device by FC-switch

    I started a discussion at linux-kernel@vger.kernel.org - https://marc.info/?l=linux-kernel&m=155743464501768. Please support!
  20. Proxmox VE Ceph Benchmark 2018/02

    https://software.intel.com/en-us/articles/open-vswitch-with-dpdk-overview
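
Sketches referenced in the results above

Result 2 blames cached memory: dropping the caches restores read speed until the page cache fills again. A minimal sketch for reproducing that observation, assuming the image is mapped as /dev/rbd3 and the file path used elsewhere in the thread:

# drop the page cache first (run as root)
sync
echo 3 > /proc/sys/vm/drop_caches

# shell 1: monitor the mapped rbd device in 10 s intervals, as in result 2
dstat -clrd --disk-util -D rbd3 -i 10

# shell 2: sequential read of a large file from the same rbd
dd if=/mnt/data/maps/planet-190513.osm.pbf of=/dev/null status=progress bs=4M

# per the thread, throughput collapses roughly when buff/cache reaches the
# container's memory limit; check with:
free -h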
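
Result 4 deals with an unprivileged container whose root is mapped to host uid/gid 100000 (the -m u:0:100000:65536 mapping). A sketch of the kind of fix implied there, assuming /mnt/pve/backup-1 is the backup mountpoint; the exact ownership/mode chosen in the thread is not shown in the excerpt:

# hand the mountpoint to the container's mapped root ...
chown 100000:100000 /mnt/pve/backup-1
# ... or make it world-writable, which is what the drwxrwxrwx listing in the excerpt suggests
chmod 777 /mnt/pve/backup-1

# verify from inside the user namespace, as done in the thread
lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- ls -la /mnt/pve/backup-1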
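
Result 6 argues that the drop point simply follows the container's RAM size. A sketch of that experiment on Proxmox, with a hypothetical container VMID of 101 and an illustrative memory value; the find/dd line is the one from the excerpt:

# raise the container's memory limit (value in MiB), then repeat the read test
pct set 101 --memory 102400

# re-read the cached tile data exactly as in result 6
find /mnt/data/maps/cache/coords -exec dd if={} of=/dev/null bs=1M \;

# drop the page cache between runs to compare cold and warm behaviour
sync
echo 3 > /proc/sys/vm/drop_caches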
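
The job file ceph-rbd-read.fio from result 11 is not included in the excerpt. Below is only a guess at its contents, reconstructed from the job lines fio printed (job names, rw, bs, ioengine, iodepth); the target device, direct flag and runtime are assumptions, with /dev/rbd3 taken from the dstat call in result 2:

cat > ceph-rbd-read.fio <<'EOF'
[global]
ioengine=libaio
direct=1
filename=/dev/rbd3
runtime=60

[read-seq-4K]
rw=read
bs=4K
iodepth=64

[read-seq-4M]
stonewall
rw=read
bs=4M
iodepth=16
EOF

fio ceph-rbd-read.fio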
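
Result 16 mentions creating a pool with 512 PGs and replication 3/2 on luminous. A sketch of the equivalent commands, assuming the pool name bench2 used in the rados bench of result 14; the actual name in the truncated summary may differ:

ceph osd pool create bench2 512 512 replicated
ceph osd pool set bench2 size 3       # three replicas
ceph osd pool set bench2 min_size 2   # keep serving I/O with two replicas available
ceph osd pool application enable bench2 rbd   # luminous expects an application tag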
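
Result 18 quotes the constraint that RBD image data can live in an erasure-coded pool while the image header and metadata must stay in a replicated pool. The rbd create line in the excerpt is truncated; this is a generic sketch with hypothetical names (rbd as the replicated pool, ecdata as the EC pool, myimage as the image):

# erasure-coded data pool; RBD on EC pools needs overwrites enabled (bluestore OSDs)
ceph osd pool create ecdata 128 128 erasure
ceph osd pool set ecdata allow_ec_overwrites true
ceph osd pool application enable ecdata rbd

# header and metadata go to the replicated pool, data objects to the EC pool
rbd create rbd/myimage --size 10G --data-pool ecdata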
