Recent content by pashadee

  1. VLAN Bridge issue from Server 2019 Guest

    here's the capture from VM host:
    00:28:11.377345 IP 10.100.0.15 > 10.213.0.1: ICMP echo request, id 64151, seq 1, length 64
    00:28:11.377490 IP 10.213.0.1 > 10.100.0.15: ICMP echo reply, id 64151, seq 1, length 64
    00:28:12.400844 IP 10.100.0.15 > 10.213.0.1: ICMP echo request, id 64151, seq 2...
  2. VLAN Bridge issue from Server 2019 Guest

    So a little more info... working on my network diagnostics a little more :) When I ping from the VM guest and capture using tcpdump -i vmbr213 I get:
    00:17:47.811607 ARP, Request who-has 10.213.0.1 tell 10.213.0.211, length 46
    00:17:48.835228 ARP, Request who-has 10.213.0.1 tell 10.213.0.211...
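    For context, a capture like that can be taken both on the bridge and on the tagged interface underneath it, to see how far the ARP requests actually get. A minimal sketch, assuming the guest sits on vmbr213 and that bond0.213 is the name of the tagged member (both names are assumptions based on the snippets above and may differ in this setup):
        # watch ARP on the bridge the guest is plugged into
        tcpdump -ni vmbr213 arp
        # watch ARP on the tagged interface feeding that bridge
        tcpdump -ni bond0.213 arp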
  3. VLAN Bridge issue from Server 2019 Guest

    I'll have to look into that, as I've just been using ifdown bond0 ... ifup bond0 ... but I haven't really had to do that much, since I'm not sure what the problem is, and the fact that I can ping the gateway from the host with no problem indicates to me that the bond, the vlan and the bridge are...
  4. VLAN Bridge issue from Server 2019 Guest

    Thanks aliistif, if all else fails I could try that approach, just setting the vlan on the VM itself instead of on the host and seeing if it makes any difference that way. For security I was hoping to be able to set it on the host instead, though. Like I mentioned, the strange thing in my setup is that I...
  5. VLAN Bridge issue from Server 2019 Guest

    I also noticed that in /proc/net/vlan/config I have this:
    VLAN Dev name | VLAN ID
    Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
    bond0.100 | 100 | bond0
    bond0.201 | 201 | bond0
    bond0.210 | 210 | bond0
    bond0.211 | 211 | bond0
    bond0.212 | 212 | bond0
    bond0.214...
  6. VLAN Bridge issue from Server 2019 Guest

    Hi guys, I'm a little puzzled about what else to try here, so I thought I'd reach out to this awesome community for some ideas. I configured a bond interface to have 5 different vlans.
    auto bond0
    iface bond0 inet manual
        bond-slaves enp3s0
        bond-miimon 100
        bond-mode...
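    As a point of reference, a Debian/Proxmox /etc/network/interfaces layout that carries one tagged VLAN from a bond into a bridge usually looks something like the sketch below; the bond-mode value and the VLAN 213 / vmbr213 names are assumptions for illustration, not the poster's actual config:
        auto bond0
        iface bond0 inet manual
            bond-slaves enp3s0
            bond-miimon 100
            bond-mode active-backup

        # tagged sub-interface for one VLAN (ID assumed for illustration)
        auto bond0.213
        iface bond0.213 inet manual

        # bridge the guest's virtio NIC attaches to
        auto vmbr213
        iface vmbr213 inet manual
            bridge-ports bond0.213
            bridge-stp off
            bridge-fd 0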
  7. Corosync memory leak

    I'm using a separate network for ceph. Should corosync run on the same network, or does it matter if it runs on the front side (vm side)? What's recommended?
  8. Corosync memory leak

    Thanks spirit. If I make the edit on host 1 and increment the config version, does corosync automatically replicate it to the other 2 hosts, or do I need to make the same edit on all hosts? Thanks
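    For reference, the version bump being discussed is the config_version field in the totem block of corosync.conf; a minimal sketch, with illustrative values not taken from this cluster:
        totem {
          version: 2
          cluster_name: mycluster   # illustrative name
          config_version: 12        # bump this on every edit
        }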
  9. Corosync memory leak

    Thanks robhost, I have 200-300 retransmits every second... so this seems excessive, no? If that is pointing to the problem, any pointers on where I can get some guidance on fixing it? Sample line:
    Jul 27 14:05:48 px1-g5 corosync[11545]: [TOTEM ] Retransmit List: 775 776 777
  10. Corosync memory leak

    Thanks for the pointer spirit! ... not sure how to check the corosync logs? /var/log/corosync has nothing in it. Is there a log redirection on proxmox nodes? Thanks!
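    On Proxmox nodes corosync normally writes to syslog/the systemd journal rather than to files under /var/log/corosync, so something along these lines is a reasonable place to look (a sketch, not taken from the thread):
        # follow corosync messages from the systemd journal
        journalctl -u corosync -f
        # or search the syslog for TOTEM retransmit entries
        grep -i 'retransmit' /var/log/syslog | tail -n 50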
  11. Corosync memory leak

    Hi Guy, thanks for the response. It happens on every single node. Setup is as follows: 3 vm hosts and 8 storage nodes. The vm hosts are ceph monitors, while the storage nodes are ceph osds. All the VMs use the ceph RBD pools. The nodes themselves run on raid 1 ssds with zfs. Gigabit Network...
  12. Corosync memory leak

    I should add this perhaps:
    Cluster 1 (11 nodes)
    proxmox-ve: 4.3-70 (running kernel: 4.4.21-1-pve)
    pve-manager: 4.3-7 (running version: 4.3-7/db02a4de)
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    pve-kernel-4.4.21-1-pve: 4.4.21-70
    lvm2: 2.02.116-pve3
    corosync-pve: 2.4.0-1
    libqb0: 1.0-1
    pve-cluster: 4.0-46...
  13. Corosync memory leak

    Hi guys, I'm experiencing the same issue on 2 separate clusters of different sizes, in two different locations, with completely different hardware. The issue is that after a week or two of regular cluster operation, memory usage by corosync grows to crazy levels; for instance, currently on a node it's at...
  14. Ceph and KVM terrible disk IO

    Thanks for the pointer mir, I used fio and the results are not any better, that's for sure.
    test: (groupid=0, jobs=1): err= 0: pid=19544: Thu Nov 3 18:14:11 2016
      read : io=3071.7MB, bw=10929KB/s, iops=2732, runt=287815msec
      write: io=1024.4MB, bw=3644.5KB/s, iops=911, runt=287815msec
      cpu...
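    For comparison, a mixed random read/write fio run that produces output in that shape might look roughly like this (the file path and all parameters are assumptions, not the exact ones used here):
        fio --name=test --filename=/mnt/test/fio.bin --size=4G \
            --rw=randrw --rwmixread=75 --bs=4k --ioengine=libaio \
            --iodepth=32 --direct=1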
  15. Ceph and KVM terrible disk IO

    Thanks for your response Udo! I was under the impression that everything Ceph was on the private network and the public network was being used for actual interfacing to the clients of the VMs. So for instance VM --> VirtIO --> librbd --> Mon (private) --> Stor (private) .. and reverse on the way back. That...
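    As background to that impression, the split maps onto the public network / cluster network options in ceph.conf: client I/O (librbd in the VMs) and the monitors use the public network, while OSD replication and heartbeat traffic uses the cluster network. A minimal sketch with made-up subnets:
        [global]
            # librbd clients and monitors talk here
            public network = 192.168.10.0/24
            # OSD-to-OSD replication / heartbeats go here
            cluster network = 192.168.20.0/24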
