Search results

  1. Missing node in GUI

    Hi Wolfgang, thanks for the reply. I even tried incognito mode, same result. Unfortunately, the only thing that "resolves" the problem is a reboot of all nodes (so I'm doing that, one by one).
  2. Missing node in GUI

    Hi, I have added a new node to an existing 11-node cluster. The new node shows up correctly in "pvecm status" on all nodes. I can also see it in "pvesh get cluster/config/nodes" (on all nodes). But I cannot see it in the web GUI on the left side - when I connect to the GUI on all existing nodes, I... (see the command sketch after this list)
  3. Ceph random high latency

    I understand. But none of the CPUs (threads) is utilized at 100%. Even during the fio tests, every thread on every CPU shows a load like this:

        %Cpu0 : 27.0 us, 2.8 sy, 0.0 ni, 69.9 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
  4. Ceph random high latency

    I tried setting the noout flag and stopping all OSDs on the slowest node; the fio results were the same. (The exact commands are sketched after this list.)
  5. Ceph random high latency

    Thank you for this tip. How did you identify the problematic VM? None of my VMs is using 100% CPU or doing excessive disk reads/writes (according to the graphs available in Proxmox).
  6. Ceph random high latency

    First of all, thank you for your time.

        fio --name=randwrite --ioengine=libaio --iodepth=64 --rw=randwrite --bs=4k --direct=1 --size=512M --runtime=60

    Way better:

        Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=27.6MiB/s][r=0,w=7073 IOPS][eta 00m:00s] Jobs: 1 (f=1)...
  7. Ceph random high latency

    So you think I should raise the number of PGs for the main pool from 512 to 1024? (The usual rule of thumb is sketched after this list.)
  8. Ceph random high latency

    fio is IMHO very bad with small block sizes. It gets better with an increasing BS:

        fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=8 --runtime=60 --group_reporting

    I have experienced one bigger hang, for a couple of seconds, on the VM...
  9. Ceph random high latency

    Plain Supermicro boards, no external HBA (most of the nodes are running on the Intel C622, https://www.supermicro.com/products/motherboard/xeon/C620/X11DDW-NT.cfm).
  10. Ceph random high latency

    Hi, I'm running a Proxmox cluster with 5 nodes and pure-SSD Ceph storage (currently about 20 OSDs, all enterprise-grade Intel S3710/S4500, BlueStore). The nodes are connected through a 10Gbit network. The storage is about 50% full. Everything (system, Proxmox, Ceph) is updated to the latest versions. On top of... (a rados bench sketch follows after this list)
  11. Can't update centos7 (or install httpd) in unprivileged LXC container.

    Same here :-( 4.15.18-1-pve. I cannot update CentOS 7 because of the filesystem package:

        Running transaction
          Updating : filesystem-3.2-25.el7.x86_64...
  12. Disk slows down after resize (ceph)

    Hmm, interesting. It seems that there is no difference.

        fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=4 --size=2G --runtime=600 --group_reporting

    gives me about 50-60 MB/s on both VMs.

        fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k...
  13. Disk slows down after resize (ceph)

    When I tried increasing the block size to 128k, I got speeds like this on the 10GB VM:

        READ: bw=592MiB/s (621MB/s), 592MiB/s-592MiB/s (621MB/s-621MB/s), io=16.9GiB (18.2GB), run=29251-29251msec
        WRITE: bw=253MiB/s (265MB/s), 253MiB/s-253MiB/s (265MB/s-265MB/s), io=7393MiB (7752MB)...
  14. Disk slows down after resize (ceph)

        fio --filename=/dev/sda --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=8k7030test

    But I started to investigate this issue after I found out that regular work with the disk...
  15. Disk slows down after resize (ceph)

    Thanks for the reply. Yes, the problem persists after a stop/start. I ran it multiple times and the results were similar. I run only a 3-node cluster with 2 OSDs per host. The drives are Intel S4500, but I don't know if that is relevant to my problem.
  16. Disk slows down after resize (ceph)

    Hi all, I'm facing a strange problem. I'm using the latest Proxmox with a Ceph storage backend (SSD only), a 10Gbit network, KVM virtualization, and CentOS in the guests. When I create a fresh VM with 10 GB of attached Ceph storage (cache disabled, virtio drivers), I get roughly these speeds in fio...
  17. After enabling built-in firewall connection drops randomly

    Hi, when I try to enable the built-in firewall, everything seems to be working, but then some connections to the Proxmox node are randomly dropped. (The VMs seem unaffected, but they do not have the "centralized" firewall enabled.) It does not depend on the number of firewall rules or the mode (default DROP or... (a config sketch follows after this list)
  18. [SOLVED] Network fails on VM/CTs after migration to another node

    Thank you for your replies; they helped me resolve the problem. It was the switch. Apparently there is a function that ties a MAC address to a physical port and does not allow the MAC to move to a different port. It is called port security and must be switched off on all Proxmox HN ports. (A switch-side example follows after this list.)
  19. [SOLVED] Network fails on VM/CTs after migration to another node

    Yes, they are able to ping each other through vmbr0 (public IP addresses). Unfortunately, no. When I do a clean VM shutdown on HN2, migrate the VM to HN1, and power it on, the network interfaces bridged to vmbr0 do not work. The connection through vmbr1 still works, though. I have removed all custom...
  20. [SOLVED] Network fails on VM/CTs after migration to another node

    No, I do not use any kind of hardware or software firewall at the moment. The whole Proxmox installation is clean, with no manual edits (except for the network interface renaming, which was useless :)).
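
A note on result 2: a minimal sketch of how one might cross-check what corosync, the cluster configuration, and the GUI's data source each report. Restarting pvestatd/pveproxy is a common first try on such threads, not a fix confirmed by this one.

    pvecm status                          # corosync's view of the cluster
    pvesh get cluster/config/nodes        # nodes in the cluster configuration
    pvesh get nodes                       # the node list the GUI tree is built from
    systemctl restart pvestatd pveproxy   # refresh the daemons feeding the GUI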
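
A note on result 4: the noout test described there, spelled out as a sketch; the OSD id is a placeholder.

    ceph osd set noout             # keep Ceph from rebalancing while OSDs are down
    systemctl stop ceph-osd@<id>   # repeat for every OSD on the suspect node
    # ...run the fio test...
    systemctl start ceph-osd@<id>
    ceph osd unset noout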
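
A note on result 7: the rule of thumb behind such a suggestion is roughly 100 PGs per OSD, divided by the pool's replica count and rounded to a power of two. Assuming the 20 OSDs from result 10 and the default replica size of 3 (neither is confirmed for this pool), the arithmetic looks like this; the pool name is a placeholder.

    # (20 OSDs * 100) / size 3 = ~667 -> 512 (round down) or 1024 (room to grow)
    ceph osd pool set <pool> pg_num 1024
    ceph osd pool set <pool> pgp_num 1024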
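
A note on result 10: one way to take KVM out of the picture and benchmark the Ceph layer directly is rados bench; the pool name is a placeholder.

    rados bench -p <pool> 60 write --no-cleanup   # 60 s write benchmark, keep the objects
    rados bench -p <pool> 60 rand                 # random reads against those objects
    rados -p <pool> cleanup                       # remove the benchmark objects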
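
A note on result 17: a minimal sketch of a cluster-wide firewall config that explicitly allows management traffic before the firewall is enabled. The source subnet is hypothetical, and the thread does not confirm missing ACCEPT rules as the cause of the drops.

    # /etc/pve/firewall/cluster.fw (sketch; 192.168.1.0/24 is a made-up management subnet)
    [OPTIONS]
    enable: 1

    [RULES]
    IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 8006   # web GUI
    IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 22     # SSH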
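
A note on result 18: for reference, disabling port security looks roughly like this in Cisco IOS; the vendor and interface name are assumptions, as the thread does not name the switch.

    ! hypothetical Cisco IOS example
    interface GigabitEthernet0/1
     no switchport port-security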
