Search results

  1. VM's offline after OSD change

    But why does one OSD stop the whole cluster from working?
  2. VM's offline after OSD change

    A really interesting, long report. Ceph is healthy. # pveceph lspools Name size min_size pg_num %-used used VMs 2 2 512 0.17 2879961897728 # echo VMs rbd ls VMs The interesting thing is that all the VMs are...
  3. VM's offline after OSD change

    Hello! I have an 8-node cluster, PVE 6.0.4 with Nautilus Ceph. There is 1 OSD on each node. Ceph cluster usage is about 48%. The Ceph config is: [global] auth client required = cephx auth cluster required = cephx auth service required = cephx cluster network = 10.200.201.0/22 fsid =...
  4. Proxmox 6 upgrade issues

    Fixed through fixing the ceph-volume utility path and manually scanning and activating (see the sketch after these results): ceph-volume simple scan /dev/sdb1 ceph-volume simple activate 0 e29c3972-58a5-4934-940f-5419b95ec36e
  5. Proxmox 6 upgrade issues

    Hello! Please help me get Ceph back after the upgrade. I did the upgrade following the manual without any issues, except that there is no ceph-volume utility. Now I have: pveversion -v proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve) pve-manager: 6.0-4 (running version: 6.0-4/2a719255) pve-kernel-5.0: 6.0-5...
  6. VM cross nodes 10Giga

    We are using host CPU in the VMs and the same test (same iperf parameters) from the hosts. Why, in the VM: ens19: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000 inet 10.202.200.160 netmask 255.255.255.0 broadcast 10.202.200.255 inet6 fe80::7840:3bff:fe74:28a7 prefixlen 64 scopeid 0x20<link>...
  7. VM cross nodes 10Giga

    CPU is not an issue; we have 40 cores per node. Neither is RAM; we have 512G-1T per node. We are testing with 1 VM per node. Inside the VM, with 16 vCPUs and 32G RAM, there is just iperf.
  8. VM cross nodes 10Giga

    What additional information do you need? Please ask.
  9. VM cross nodes 10Giga

    No firewall. The hosts' CPUs are 4x Xeon E7 48xxx, 4 sockets in total. How can it be more than 12 if the driver is 10? iperf -c ...
  10. VM cross nodes 10Giga

    Sure, we are using the kernel with your patches: 4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64. I did check the buffers (see the ethtool sketch after these results): ethtool -g ens8 Ring parameters for ens8: Pre-set maximums: RX: 8192 RX Mini: 0 RX Jumbo: 0 TX: 8192 Current hardware settings: RX...
  11. VM cross nodes 10Giga

    Yes, it can be, if it is not IPv6. It is the Proxmox kernel. What should I check? What should I do?
  12. VM cross nodes 10Giga

    qm config 127 bootdisk: scsi0 cores: 16 cpu: host memory: 32768 name: cbDataNode1 net0: virtio=AE:94:23:09:34:53,bridge=vmbr0 net1: virtio=7A:40:3B:74:28:A7,bridge=vmbr3 numa: 0 ostype: l26 scsi0: VMs_vm:vm-127-disk-0,size=64G scsihw: virtio-scsi-pci smbios1...
  13. VM cross nodes 10Giga

    ethtool ens8f1 Settings for ens8f1: Supported ports: [ FIBRE ] Supported link modes: 1000baseT/Full 10000baseT/Full Supported pause frame use: Symmetric Receive-only Supports auto-negotiation: No Advertised link modes: 10000baseT/Full...
  14. VM cross nodes 10Giga

    iperf without additional flags, host to host, gives the same 12G. pveversion -v proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve) pve-manager: 5.3-9 (running version: 5.3-9/ba817b29) pve-kernel-4.15: 5.3-2 pve-kernel-4.15.18-11-pve: 4.15.18-33 pve-kernel-4.15.18-10-pve: 4.15.18-32...
  15. VM cross nodes 10Giga

    Hello! Maybe this is nothing new and there is already something available, but I can't find it. Please point me in the right direction. What we have: VMs, Debian 9.7 with virtIO network interfaces, 16 vCPU (host), 32Giga RAM. VM to VM on the same node, from the same vmbr: about 12Giga. Host to host through...
  16. 10Gbit/s in VM

    I did try MTU 9000 on the nodes, on the bridges (vmbr), and on the VM interface (a configuration sketch follows these results). It is better now: from VM to host - 14.7 Gbit/sec, from VM to VM on the same host - 11.2 Gbit/sec, between a host and a VM on another host - 6.0 Gbit/sec. But between a VM on one host and a VM on another, it is just 5.6 Gbit/sec.
  17. 10Gbit/s in VM

    @janos thank you!
  18. 10Gbit/s in VM

    @janos, can you please point me to one of them? I can't find anything workable.
  19. 10Gbit/s in VM

    Hello! Has anyone succeeded in getting 10Gbit/s inside a VM? I'm using an HP DL 580 G7 with 10Gbit/s NetXen interfaces. Proxmox 5.2. These interfaces are in a bridge, which is passed to the VM through a virtio interface. From host to host there is 10Gbit/s, from one VM to another on the same node there is 10Gbit/s...
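
A note on result 4 above: the fix there re-adopts a ceph-disk-created OSD with ceph-volume's "simple" mode after the Proxmox VE 6 / Nautilus upgrade. A minimal sketch of that sequence, reusing the device, OSD id, and fsid quoted in the post (substitute your own values):

    # scan the old ceph-disk data partition and write its metadata under /etc/ceph/osd/
    ceph-volume simple scan /dev/sdb1
    # activate the OSD by id and fsid so it starts again
    ceph-volume simple activate 0 e29c3972-58a5-4934-940f-5419b95ec36e
    # or, after scanning everything, activate all scanned OSDs at once
    ceph-volume simple activate --all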
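
On the ring-buffer check in result 10: a short sketch of reading and raising the NIC ring sizes with ethtool, using the ens8 interface name and the 8192 maximums quoted there (assumed here to be the maximums your NIC reports as well):

    # show pre-set maximums and current RX/TX ring sizes
    ethtool -g ens8
    # raise the rings to the reported hardware maximum (not persistent across reboots)
    ethtool -G ens8 rx 8192 tx 8192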
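
And for the MTU 9000 test in result 16: one way to apply jumbo frames end to end, assuming the ifupdown-style /etc/network/interfaces used by Proxmox VE and reusing the interface names (ens8, vmbr3, ens19) and address from the posts above; adjust to your own setup:

    # /etc/network/interfaces on the node: physical NIC and bridge both at MTU 9000
    #   iface ens8 inet manual
    #           mtu 9000
    #   auto vmbr3
    #   iface vmbr3 inet static
    #           bridge-ports ens8
    #           mtu 9000
    # inside the Debian guest, raise the virtio NIC as well
    ip link set dev ens19 mtu 9000
    # then re-measure between two VMs on different nodes
    iperf -s                       # on the receiving VM
    iperf -c 10.202.200.160 -P 4   # on the sending VM, four parallel streams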
