I have added a new node to an existing 11-node cluster. The new node correctly shows up in "pvecm status" on all nodes. I can also see it in "pvesh get cluster/config/nodes" (on all nodes).
But I cannot see it in the web GUI on the left side - when I connect to the GUI on any of the existing nodes, I...
I understand. But none of the CPUs (threads) is utilized at 100 %. Even during the fio tests, all threads on all CPUs show load like this:
%Cpu0 : 27.0 us, 2.8 sy, 0.0 ni, 69.9 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
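To make the point concrete, a `top` per-CPU line like the one above can be parsed to show how busy the thread really is; a minimal Python sketch, assuming top's default field order (us, sy, ni, id, wa, hi, si, st):

```python
# Parse a `top` per-CPU line and report how busy the thread really is.
# Assumed field order is top's default: us, sy, ni, id, wa, hi, si, st.
def parse_cpu_line(line):
    fields = line.split(":", 1)[1].split(",")
    # Each field looks like " 27.0 us" -> {"us": 27.0, ...}
    values = {f.split()[1]: float(f.split()[0]) for f in fields}
    busy = sum(v for k, v in values.items() if k != "id")
    return values, busy

values, busy = parse_cpu_line(
    "%Cpu0 : 27.0 us, 2.8 sy, 0.0 ni, 69.9 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st"
)
print(f"idle: {values['id']}%, busy: {busy:.1f}%")  # ~30% busy, nowhere near saturated
```

So the cores sit around 30% busy even under the fio load, i.e. CPU is not the bottleneck here.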
First of all, thank you for your time.
fio --name=randwrite --ioengine=libaio --iodepth=64 --rw=randwrite --bs=4k --direct=1 --size=512M --runtime=60
Way better: Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=27.6MiB/s][r=0,w=7073 IOPS][eta 00m:00s]
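Those two reported numbers are internally consistent, by the way: at a fixed block size, IOPS is just bandwidth divided by block size. A quick back-of-the-envelope check:

```python
# Sanity-check fio's write numbers: at a fixed block size,
# IOPS is bandwidth divided by block size.
bw_mib_s = 27.6   # reported write bandwidth (w=27.6MiB/s)
bs_kib = 4        # block size from --bs=4k, in KiB
iops = bw_mib_s * 1024 / bs_kib
print(f"expected IOPS: {iops:.0f}")  # ~7066, close to the reported 7073
```

The small gap comes from fio rounding the bandwidth figure in its status line.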
Jobs: 1 (f=1)...
fio is IMHO very bad with small block sizes. It gets better with increasing BS,
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=8 --runtime=60 --group_reporting
I have experienced one bigger hang for a couple of seconds on the VM...
I'm running a Proxmox cluster with 5 nodes and pure-SSD Ceph storage (currently about 20 OSDs, all enterprise-grade Intel S3710/S4500, BlueStore). Nodes are connected through a 10Gbit network. Storage is about 50% full. Everything (system, Proxmox, Ceph) is updated to the latest versions. On top of...
Hmm, interesting. It seems that there is no difference
fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=4 --size=2G --runtime=600 --group_reporting
Gives me about 50-60 MB/s on both VMs.
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k...
When I tried to increase the block size to 128k, I got speeds like this on the 10 GB VM:
READ: bw=592MiB/s (621MB/s), 592MiB/s-592MiB/s (621MB/s-621MB/s), io=16.9GiB (18.2GB), run=29251-29251msec
WRITE: bw=253MiB/s (265MB/s), 253MiB/s-253MiB/s (265MB/s-265MB/s), io=7393MiB (7752MB)...
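The READ summary line can also be cross-checked against itself, since bandwidth is just total io divided by runtime:

```python
# Cross-check the READ summary line: bandwidth = total io / runtime.
io_gib = 16.9          # io=16.9GiB
run_s = 29251 / 1000   # run=29251msec
bw_mib_s = io_gib * 1024 / run_s
print(f"derived read bandwidth: {bw_mib_s:.0f} MiB/s")  # ~592 MiB/s, as reported
```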
fio --filename=/dev/sda --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=8k7030test
But I started to investigate this issue after I found out that regular work with disk...
Thanks for the reply. Yes, the problem persists after stop/start. I tried to run it multiple times and the results were similar.
I run only a 3-node cluster with 2 OSDs per host. Drives are Intel S4500. But I don't know if this is relevant to my problem.
I'm facing a strange problem. I'm using the latest Proxmox with a Ceph storage backend (SSD only), a 10Gbit network, KVM virtualization, and CentOS in the guest.
When I create a fresh VM with 10 GB of attached Ceph storage (cache disabled, virtio drivers), I get roughly these speeds in fio...
When I try to enable the built-in firewall, everything seems to be working, but then some connections to the Proxmox node are randomly dropped. (The VMs seem unaffected, but they do not have the "centralized" firewall enabled.)
It does not depend on the number of firewall rules or the mode (default DROP or...
Thank you for your replies, they helped me resolve the problem.
It was the switch. Apparently there is a feature that binds a MAC address to a physical port and does not allow the MAC to move to a different port. It is called port security and must be switched off on all Proxmox HN ports.
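For anyone hitting the same thing: on Cisco IOS-style switches, for example, disabling it looks roughly like this on each Proxmox-facing interface (the interface name here is just an illustration, not from my setup):

```
interface GigabitEthernet1/0/10
 no switchport port-security
```

Other vendors call the feature something similar; the key is that ports carrying bridged VM traffic must be allowed to learn moving MAC addresses.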
Yes, they are able to ping each other through vmbr0 (public IP addresses).
Unfortunately, no. When I do a clean VM shutdown on HN2, migrate the VM to HN1 and power it on, network interfaces bridged to vmbr0 do not work. The connection through vmbr1 still works, though.
I have removed all custom...