Search results

  1. Unable to change ceph networks

    Hi guys, I am trying to change the Ceph networks. Until now I have had one subnet (172.16.254.0/24) for both the public and the cluster network. The goal is to have this configuration: cluster_network = 10.10.112.0/24, public_network = 10.10.111.0/24. I have followed this tutorial...
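
    For context, that target split would live in the [global] section of /etc/pve/ceph.conf; a minimal sketch using the subnets from the post (all other options omitted):

        # /etc/pve/ceph.conf -- sketch, values taken from the post above
        [global]
            cluster_network = 10.10.112.0/24
            public_network  = 10.10.111.0/24
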
  2. Avoid Reuse of VMID

    A similar problem occurs when you create a permission for a user on a particular VMID. The permission remains valid after you delete the server, so that user gets access to a completely different server when the VMID is reused :( Not to mention that we are trying to pair VMIDs with billing in another system, and it creates...
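
    A sketch of the stale-ACL problem being described (the user joe@pve and VMID 100 are hypothetical; on older PVE releases the subcommands are spelled pveum aclmod / pveum acldel):

        # grant joe@pve access to VMID 100
        pveum acl modify /vms/100 --users joe@pve --roles PVEVMUser
        # destroying VM 100 does not remove this ACL; unless it is deleted
        # explicitly, a future VM that reuses VMID 100 inherits it
        pveum acl delete /vms/100 --users joe@pve --roles PVEVMUser
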
  3. Live migration with ceph sometimes fails

    Hi,
    # pveceph status
      cluster:
        id:     ecc963a4-009f-4236-87fe-e672a7cb5d49
        health: HEALTH_OK
      services:
        mon: 5 daemons, quorum node97,node98,node99,node2,node1 (age 4d)
        mgr: node2(active, since 4d), standbys: node99, node97, node98, node4
        mds: XXX:1 YYY:1...
  4. Live migration with ceph sometimes fails

    # ceph --version
    ceph version 14.2.16 (5d5ae817209e503a412040d46b3374855b7efe04) nautilus (stable)
    # pveceph pool ls
    ┌────────────────────┬──────┬──────────┬────────┬───────────────────┬─────────────────┬──────────────────────┬────────────────┐
    │ Name               │ Size │ Min Size │ PG Num │...
  5. Live migration with ceph sometimes fails

    Hi everyone, I'm using the latest Proxmox (6.3) and am experiencing a strange issue during live migration of KVM machines running on Ceph block storage (the Ceph cluster was created through Proxmox). The cluster has been running fine for several years (it was previously on Proxmox 5). This issue started only recently, I...
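
    For reference, the kind of migration being described is started like this (a sketch; VMID 100 and the target node name are hypothetical):

        # live-migrate VM 100 to node97 while it keeps running
        qm migrate 100 node97 --online
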
  6. Missing node in GUI

    Hi Wolfgang, thanks for the reply. I even tried incognito mode, with the same result. Unfortunately, the only thing that "resolves" the problem is a reboot of all nodes (so I'm doing that one by one).
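
    A lighter-weight step than a full reboot that is sometimes suggested for stale GUI state (an assumption on my part, not something confirmed in this thread) is restarting the web and status daemons on the affected nodes:

        # assumption: only the GUI state is stale, not the cluster itself
        systemctl restart pveproxy pvestatd
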
  7. Missing node in GUI

    Hi, I have added a new node to an existing 11-node cluster. The new node shows up correctly in "pvecm status" on all nodes. I can also see it with "pvesh get cluster/config/nodes" (on all nodes). But I cannot see it in the web GUI on the left side - when I connect to the GUI on any of the existing nodes, I...
  8. Ceph random high latency

    I understand. But none of the CPUs (threads) is utilized at 100%. Even during the fio tests, every thread on every CPU shows a load like this:
    %Cpu0 : 27.0 us, 2.8 sy, 0.0 ni, 69.9 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
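
    One way to capture that per-CPU breakdown over time (a sketch; mpstat is part of the sysstat package and may need to be installed first):

        # utilization of every CPU, one-second intervals, five samples
        mpstat -P ALL 1 5
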
  9. Ceph random high latency

    I tried to set the noout flag and stopped all OSDs on the slowest node; the fio results were the same.
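
    The sequence being described would look roughly like this (a sketch; the OSD id 12 is hypothetical):

        # keep Ceph from rebalancing while OSDs are down
        ceph osd set noout
        # stop an OSD on the suspect node (repeat for each of its OSD ids)
        systemctl stop ceph-osd@12
        # ... rerun the fio test ...
        # bring the OSD back and clear the flag
        systemctl start ceph-osd@12
        ceph osd unset noout
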
  10. Ceph random high latency

    Thank you for this tip. How did you identify the problematic VM? None of my VMs is using 100% CPU or doing excessive disk reads/writes (according to the graphs available in Proxmox).
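
    One generic way to spot a noisy VM from the host side (my suggestion, not something given in this thread) is to watch per-process I/O; each KVM guest shows up as its own kvm process:

        # show only processes currently doing I/O, with accumulated totals
        # (iotop usually has to be installed separately)
        iotop -o -a
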
  11. Ceph random high latency

    First of all, thank you for your time.
    fio --name=randwrite --ioengine=libaio --iodepth=64 --rw=randwrite --bs=4k --direct=1 --size=512M --runtime=60
    Way better:
    Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=27.6MiB/s][r=0,w=7073 IOPS][eta 00m:00s]
    Jobs: 1 (f=1)...
  12. Ceph random high latency

    So you think that I should raise the number of PGs for the main pool from 512 to 1024?
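
    If that is the route taken, the change itself is a pool setting (a sketch; the pool name 'vm-pool' is hypothetical, and on Nautilus the cluster adjusts pgp_num to follow automatically):

        # raise the placement-group count on the pool
        ceph osd pool set vm-pool pg_num 1024
        # on pre-Nautilus releases, pgp_num has to be raised to match
        ceph osd pool set vm-pool pgp_num 1024
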
  13. Ceph random high latency

    fio is IMHO very bad with small block sizes. It gets better with increasing BS:
    fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=8 --runtime=60 --group_reporting
    I have experienced one bigger hang for a couple of seconds on the VM...
  14. Ceph random high latency

    Plain Supermicro boards, no external HBA (most of the nodes are running on INTEL C622 https://www.supermicro.com/products/motherboard/xeon/C620/X11DDW-NT.cfm).
  15. Ceph random high latency

    Hi, I'm running a Proxmox cluster with 5 nodes and pure SSD Ceph storage (currently about 20 OSDs, all enterprise-grade INTEL S3710/S4500, bluestore). The nodes are connected through a 10Gbit network. The storage is about 50% full. Everything (system, Proxmox, Ceph) is updated to the latest versions. On top of...
  16. Can't update centos7 (or install httpd) in unprivileged LXC container.

    Same here :-( (kernel 4.15.18-1-pve). I cannot update CentOS 7 because of the filesystem package.
    Running transaction
      Updating : filesystem-3.2-25.el7.x86_64...
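
    A common stopgap here (my suggestion, not something given in this thread) is to let the rest of the update through while skipping the package that fails:

        # update everything except the filesystem package
        yum update --exclude=filesystem
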
  17. Disk slows down after resize (ceph)

    Hmm, interesting. It seems that there is no difference.
    fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=4 --size=2G --runtime=600 --group_reporting
    gives me about 50-60 MB/s on both VMs.
    fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k...
  18. Disk slows down after resize (ceph)

    When I increase the block size to 128k, I get speeds like this on the 10GB VM:
    READ:  bw=592MiB/s (621MB/s), 592MiB/s-592MiB/s (621MB/s-621MB/s), io=16.9GiB (18.2GB), run=29251-29251msec
    WRITE: bw=253MiB/s (265MB/s), 253MiB/s-253MiB/s (265MB/s-265MB/s), io=7393MiB (7752MB)...
  19. Disk slows down after resize (ceph)

    fio --filename=/dev/sda --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=8k7030test
    But I started to investigate this issue after I found out that regular work with the disk...
  20. Disk slows down after resize (ceph)

    Thanks for the reply. Yes, the problem persists after a stop/start. I tried to run it multiple times and the results were similar. I run only a 3-node cluster with 2 OSDs per host. The drives are Intel S4500, but I don't know if this is relevant to my problem.
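
    For context, the resize that precedes the slowdown in this thread is normally done with qm resize (a sketch; the VMID and disk name are hypothetical; the filesystem inside the guest still has to be grown separately):

        # grow VM 100's first SCSI disk by 10 GiB
        qm resize 100 scsi0 +10G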
