I have an 8-node PVE 6.0-4 cluster with Ceph Nautilus.
There is 1 OSD on each node.
Ceph cluster usage is about 48%.
The Ceph config is:
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.200.201.0/22
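For context, the layout and the usage figure can be checked with the stock Ceph tools; a minimal sketch (these are standard commands, not output from my cluster):

# overall health, monitors and OSD status
ceph -s
# raw and per-pool usage (roughly where the ~48% figure comes from)
ceph df
# OSD layout, one OSD per node in this setup
ceph osd tree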
Please help me get Ceph back up after the upgrade.
I did the upgrade following the manual without any issues, except that there is no ceph-volume utility,
and now I have
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
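From what I can tell, on Nautilus ceph-volume should ship with the ceph-osd package, so something along these lines might bring it back (package names are my assumption, please check against your repository):

# see whether any installed package claims the tool
dpkg -S /usr/sbin/ceph-volume
# reinstall the package that normally provides ceph-volume on Nautilus
apt update && apt install --reinstall ceph-osd
# verify the tool and the OSDs afterwards
ceph-volume lvm list
ceph osd tree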
We are using the host CPU type in the VMs and the same test (iperf parameters) from the hosts.
Why in the VM:
ens19: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.202.200.160 netmask 255.255.255.0 broadcast 10.202.200.255
inet6 fe80::7840:3bff:fe74:28a7 prefixlen 64 scopeid 0x20<link>...
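For reference, the test between the endpoints is a plain iperf run; the exact options below are an assumption since I did not paste the original command (the address is the VM's ens19 from the output above):

# on the receiving side
iperf -s
# on the sending side, 30 second run with 4 parallel streams (parameters assumed)
iperf -c 10.202.200.160 -t 30 -P 4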
Of course we are using the kernel with your patches:
4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64
I did check the ring buffers:
ethtool -g ens8
Ring parameters for ens8:
RX Mini: 0
RX Jumbo: 0
Current hardware settings:
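If the current ring sizes are below the pre-set maximums, they can usually be raised with ethtool; 4096 below is only an example, the real maximum depends on the NIC:

# show pre-set maximums and current settings
ethtool -g ens8
# raise the RX/TX rings (use the maximum reported above instead of 4096 if it differs)
ethtool -G ens8 rx 4096 tx 4096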
Maybe this is nothing new and there is already something available, but I can't find it. Please point me in the right direction.
What we have:
VMs: Debian 9.7 with VirtIO network interfaces, 16 vCPUs (host type), 32 GB RAM.
VM to VM on the same node, on the same vmbr: about 12 Gbit/s.
Host to host through...