Really interesting long report.
Ceph is healthy.
# pveceph lspools
Name size min_size pg_num %-used used
VMs 2 2 512 0.17 2879961897728
# echo VMs
rbd ls VMs
What is interesting is that all VMs are...
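For reference, a rough way to look at per-image usage in that pool, assuming the pool really is named VMs as above (output omitted, only illustrative):
# list images and their provisioned vs. actual usage in the VMs pool
rbd ls -l VMs
rbd du -p VMs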
Hello!
I have an 8-node PVE 6.0.4 cluster with Ceph Nautilus.
On each node, there is 1 OSD.
Ceph cluster usage is about 48%.
The Ceph config is:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.200.201.0/22
fsid =...
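For reference, a rough way to cross-check that ~48% figure on a standard PVE 6.0 / Nautilus setup (commands only, the exact output will differ per cluster):
# overall cluster and per-pool usage
ceph df detail
# per-OSD fill level across the 8 nodes
ceph osd df tree
# PVE wrapper around ceph -s
pveceph status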
Hello!
Please help me get Ceph back up and running after the upgrade.
I did the upgrade following the manual without any issues, except that the ceph-volume utility is missing,
and now I have:
pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5...
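As far as I know, on Nautilus the ceph-volume utility ships with the ceph-osd package, so a rough sketch of getting it back would be (assuming the Nautilus repository from the upgrade manual is already configured):
# reinstall the OSD tooling that provides /usr/sbin/ceph-volume
apt update
apt install ceph-osd
# verify the tool is available again and can see the existing OSDs
which ceph-volume
ceph-volume lvm list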
We are using the 'host' CPU type in the VMs and the same iperf test (same parameters) as between the hosts.
Why in VM
ens19: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.202.200.160 netmask 255.255.255.0 broadcast 10.202.200.255
inet6 fe80::7840:3bff:fe74:28a7 prefixlen 64 scopeid 0x20<link>...
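For comparison, a minimal sketch of the VM-to-VM test we run (iperf3 is used here only as an illustration; the address is the ens19 one from above, and the stream count and duration are arbitrary):
# on the receiving VM
iperf3 -s
# on the sending VM: 4 parallel streams for 30 seconds
iperf3 -c 10.202.200.160 -P 4 -t 30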
CPU is not an issue; we have 40 cores per node. RAM is not an issue either; we have 512 GB to 1 TB per node. We are testing with 1 VM per node. Inside the VM (16 vCPU, 32 GB RAM) there is just iperf.
Of course we are using the kernel with your patches:
4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64
I did check the ring buffers:
ethtool -g ens8
Ring parameters for ens8:
Pre-set maximums:
RX: 8192
RX Mini: 0
RX Jumbo: 0
TX: 8192
Current hardware settings:
RX...
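If raising the rings helps, a rough sketch would be to push them to the pre-set maximums shown above and make that persistent (interface name ens8 taken from the output above, assuming ifupdown):
# set RX/TX rings to the hardware maximum
ethtool -G ens8 rx 8192 tx 8192
# make it persistent via /etc/network/interfaces
iface ens8 inet manual
    pre-up ethtool -G ens8 rx 8192 tx 8192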
Hello!
Maybe this is nothing new and there is already something available, but I can't find it. Please point me in the right direction.
What we have:
VMs: Debian 9.7 with virtIO network interfaces, 16 vCPU (host), 32 GB RAM.
VM to VM on the same node, on the same vmbr: about 12 Gbit/s.
Host to host through...
I tried MTU 9000 on the nodes, on the bridges (vmbr) and on the VM interfaces.
It is better now: from VM to host 14.7 Gbit/s, from VM to VM on the same host 11.2 Gbit/s, and between a host and a VM on another host 6.0 Gbit/s.
But between a VM on one host and a VM on another host, it is just 5.6 Gbit/s.
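For reference, a minimal sketch of the MTU 9000 setup, assuming ifupdown on the node (eno1, vmbr1 and the address are placeholders, not the real names from this cluster):
# /etc/network/interfaces on the node
auto eno1
iface eno1 inet manual
    mtu 9000

auto vmbr1
iface vmbr1 inet static
    address <node-ip>/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 9000

# inside the guest, raise the virtio NIC as well
ip link set dev ens19 mtu 9000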
Hello!
Has anyone succeeded in getting 10 Gbit/s inside a VM?
I'm using HP DL 580 G7 servers with 10 Gbit/s NetXen interfaces, Proxmox 5.2.
These interfaces are in a bridge, which is passed to the VM through a virtio interface.
From host to host there is 10 Gbit/s, and from one VM to another on the same node there is 10 Gbit/s...
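One thing worth trying, as a rough sketch only: enable virtio multiqueue so the VM can spread the network load over more of its vCPUs, and match the channel count inside the guest (VMID 100, bridge vmbr0, 8 queues and the guest interface name ens18 are all just examples):
# on the node: give the virtio NIC 8 queues
qm set 100 --net0 virtio,bridge=vmbr0,queues=8
# inside the guest: use all 8 channels
ethtool -L ens18 combined 8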