Search results

  1. VLAN Bridge issue from Server 2019 Guest

    here's the capture from VM host: 00:28:11.377345 IP 10.100.0.15 > 10.213.0.1: ICMP echo request, id 64151, seq 1, length 64 00:28:11.377490 IP 10.213.0.1 > 10.100.0.15: ICMP echo reply, id 64151, seq 1, length 64 00:28:12.400844 IP 10.100.0.15 > 10.213.0.1: ICMP echo request, id 64151, seq 2...
  2. VLAN Bridge issue from Server 2019 Guest

    So a little more info... working on my network diagnostics a little more :) When I ping from the VM guest and capture using tcpdump -i vmbr213 I get 00:17:47.811607 ARP, Request who-has 10.213.0.1 tell 10.213.0.211, length 46 00:17:48.835228 ARP, Request who-has 10.213.0.1 tell 10.213.0.211...
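    For context, captures like that typically come from running tcpdump on the host while pinging from the guest; a minimal sketch (interface names vmbr213 and bond0.213 are taken from this thread, the filter is my own choice) to see on which side the ARP replies disappear:

      # on the Proxmox host, in two terminals
      tcpdump -ni vmbr213 'arp or icmp'      # bridge side: guest traffic should show up here
      tcpdump -ni bond0.213 'arp or icmp'    # VLAN side: replies missing here point at the trunk/switch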
  3. VLAN Bridge issue from Server 2019 Guest

    I'll have to look into that as I've just been using ifdown bond0 ... ifup bond0 ... but haven't really had to do that too much as I'm not sure what the problem is, and the fact that I can ping the gateway from the host with no problem indicates to me that the bond, the vlan and the bridge are...
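    For what it's worth, a sketch of re-applying the network config without a reboot; ifreload is only available if ifupdown2 is installed, which is an assumption here:

      ifdown bond0 && ifup bond0     # classic cycle, as used in the thread
      ifreload -a                    # ifupdown2 alternative: re-applies /etc/network/interfaces in place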
  4. VLAN Bridge issue from Server 2019 Guest

    Thanks aliistif. If all else fails I could try that approach, just setting the vlan on the VM itself instead of on the host to see if it makes any difference that way. For security I was hoping to be able to set it on the host instead though. Like I mentioned, the strange thing in my setup is that I...
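    For reference, tagging the VLAN on the guest NIC rather than on the bridge is a one-liner with qm; the VM ID (100) and bridge name (vmbr0) below are placeholders, not values from the thread:

      qm set 100 --net0 virtio,bridge=vmbr0,tag=213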
  5. VLAN Bridge issue from Server 2019 Guest

    I also noticed that in /proc/net/vlan/config I have this: VLAN Dev name | VLAN ID Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD bond0.100 | 100 | bond0 bond0.201 | 201 | bond0 bond0.210 | 210 | bond0 bond0.211 | 211 | bond0 bond0.212 | 212 | bond0 bond0.214...
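    Two quick checks that show the same mapping plus the kernel's view of a given VLAN device (standard 8021q/iproute2 commands, nothing thread-specific):

      cat /proc/net/vlan/config          # lists vlan-device | VLAN ID | parent
      ip -d link show bond0.213          # should report "vlan protocol 802.1Q id 213" and the bond0 parent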
  6. VLAN Bridge issue from Server 2019 Guest

    Hi guys, A little puzzled by what else to try here so I thought I'd reach out to this awesome community for some ideas. I configured a bond interface to have 5 different vlans. auto bond0 iface bond0 inet manual bond-slaves enp3s0 bond-miimon 100 bond-mode...
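    A minimal sketch of that layout for a single VLAN, reconstructed from the fragments in the thread; the bond mode (802.3ad) and the use of VLAN 213 are assumptions, and the other VLANs would follow the same pattern:

      auto bond0
      iface bond0 inet manual
          bond-slaves enp3s0
          bond-miimon 100
          bond-mode 802.3ad

      auto bond0.213
      iface bond0.213 inet manual

      auto vmbr213
      iface vmbr213 inet manual
          bridge-ports bond0.213
          bridge-stp off
          bridge-fd 0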
  7. Corosync memory leak

    I'm using a separate network for ceph; should corosync run on the same network, or does it matter if it runs on the front side (vm side)? What's recommended?
  8. Corosync memory leak

    Thanks spirit. If I make the edit on host 1 and increment the config version, does corosync automatically replicate it to the other 2 hosts, or do I need to make the same edit on all hosts? Thanks
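    On Proxmox the authoritative copy lives on the clustered filesystem, so an edit made on one node propagates on its own once the file is moved into place; a sketch of the usual sequence (based on the current Proxmox docs, so worth double-checking against the 4.x docs relevant to this thread):

      cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
      nano /etc/pve/corosync.conf.new                          # make the change and bump config_version by 1
      mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf     # pmxcfs replicates it to the other nodes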
  9. Corosync memory leak

    Thanks robhost, I have 200-300 retransmits every second... so this seems excessive, no? If that is pointing to the problem, any pointers on where I can get some guidance on fixing it? Sample line: Jul 27 14:05:48 px1-g5 corosync[11545]: [TOTEM ] Retransmit List: 775 776 777
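    A quick way to put numbers on it, assuming corosync is logging to syslog as in the sample line:

      grep -c "Retransmit List" /var/log/syslog                                            # total occurrences
      grep "Retransmit List" /var/log/syslog | awk '{print $1, $2, $3}' | uniq -c | tail   # rough per-second rate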
  10. Corosync memory leak

    Thanks for the pointer spirit! ... not sure how to check the corosync logs? /var/log/corosync has nothing in it. Is there a log redirection on proxmox nodes? Thanks!
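    On Proxmox corosync doesn't write to /var/log/corosync by default; it logs through syslog/journald, so these are the usual places to look (standard commands, nothing thread-specific):

      journalctl -u corosync --since "1 hour ago"
      grep corosync /var/log/syslog | less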
  11. Corosync memory leak

    Hi Guy, thanks for the response. It happens on every single node. Setup is as follows: 3 vm hosts and 8 storage nodes. The vm hosts are ceph monitors, while the storage nodes are ceph osds. All the VMs use the ceph RBD pools. The nodes themselves run on raid 1 ssds with zfs. Gigabit Network...
  12. Corosync memory leak

    I should add this perhaps: Cluster 1 (11 nodes) proxmox-ve: 4.3-70 (running kernel: 4.4.21-1-pve) pve-manager: 4.3-7 (running version: 4.3-7/db02a4de) pve-kernel-4.4.6-1-pve: 4.4.6-48 pve-kernel-4.4.21-1-pve: 4.4.21-70 lvm2: 2.02.116-pve3 corosync-pve: 2.4.0-1 libqb0: 1.0-1 pve-cluster: 4.0-46...
  13. Corosync memory leak

    Hi guys, Experiencing the same issue on 2 separate clusters of different sizes in two different locations with completely different hardware. The issue is that after a week or two of regular cluster operation, memory usage by corosync grows to crazy levels; for instance, currently on a node it's at...
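    To quantify the growth, the resident set size of the corosync process can be compared between a freshly restarted node and one that has been up for weeks; a minimal sketch:

      ps -o pid,rss,vsz,etime,cmd -C corosync      # RSS in kB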
  14. Ceph and KVM terrible disk IO

    Thanks for the pointer mir, I used fio and the results are not any better, that's for sure. test: (groupid=0, jobs=1): err= 0: pid=19544: Thu Nov 3 18:14:11 2016 read : io=3071.7MB, bw=10929KB/s, iops=2732, runt=287815msec write: io=1024.4MB, bw=3644.5KB/s, iops=911, runt=287815msec cpu...
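    The poster's exact fio job isn't shown, but output in that shape (roughly a 75/25 read/write mix, ~4 GB total, ~288 s runtime) typically comes from a mixed random job along these lines; every parameter here is an assumption for illustration only:

      # all values assumed, not taken from the thread
      fio --name=test --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
          --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 --size=4G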
  15. Ceph and KVM terrible disk IO

    Thanks for your response Udo! I was under the impression that everything Ceph was on the private network and public was being used for actual interfacing to the clients of the VMs. So for instance VM --> VirtIO --> librbd --> Mon (private) --> Stor (private) .. and reverse on the way back. That...
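    For the record, librbd clients (including the VM hosts) reach both monitors and OSDs over the public network; the cluster network only carries OSD-to-OSD replication and recovery traffic. In ceph.conf that split looks roughly like this (the subnets are placeholders):

      [global]
          public network  = 10.10.0.0/24     # clients, mons, OSD front side
          cluster network = 10.20.0.0/24     # OSD replication / recovery only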
  16. Ceph and KVM terrible disk IO

    Here's another test to add, from the same host the ubuntu vm is running on: root@vmh1:/# rbd create test02 --pool backup --size 20000 root@vmh1:/# rbd map test02 --pool backup /dev/rbd0 root@vmh1:/# mkfs.ext4 /dev/rbd0 mke2fs 1.42.12 (29-Aug-2014) Discarding device blocks: done...
  17. Ceph and KVM terrible disk IO

    Hi guys, Need some bright minds in the Proxmox community :) I setup a storage cluster using Ceph (as the title suggests). It's a fairly large cluster consisting of over 200 osds. When I bench the cluster using rados bench, I get exactly the kind of performance I was expecting to get... doing...
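    A typical rados bench run for comparing raw cluster performance against the in-VM numbers, assuming the same pool named backup that appears later in the thread:

      rados bench -p backup 60 write --no-cleanup
      rados bench -p backup 60 seq
      rados -p backup cleanup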
  18. Ceph OSD failures on good drives

    Hey Serge, I can't say that I know what your issue is but I would suspect a possible sata controller driver issue. Check your /var/log/messages and see if there are a bunch of commands the kernel is logging in relation to the scsi driver used. I have seen certain chipsets where the driver was...
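    Something like this pulls the relevant kernel messages out in one pass (the pattern is just a starting point, not exhaustive):

      grep -iE "(ata[0-9]|scsi|sd[a-z]).*(error|timeout|reset|abort)" /var/log/messages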
  19. Odd installation issue on ThinkServer TS430

    Hi guys, Not really sure if anyone has run into this but after the 4th day of banging my head against the same box, I thought I would reach out to you guys :) I have 2 ThinkServer TS430 boxes, more or less the same with the exception of the BIOS, one runs the 3.X version and one runs the 2.X...
  20. Unable to create volume group at /usr/bin/proxinstall line 630

    The 2 reasons I got this error were: 1. Accidentally specified a partition larger than the drive had space for, so creation was failing. 2. There is another drive in the system that already has a physical volume (part of an lvm setup) named pve. This will affect you even if it's not on the drive...
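    For the second case, the conflicting volume group is easy to spot before reinstalling; the removal commands below are destructive, and /dev/sdb is only an example device:

      pvs -o pv_name,vg_name,pv_size     # look for an existing VG named "pve" on another disk
      vgremove pve                       # destroys that VG (only if its data is disposable)
      wipefs -a /dev/sdb                 # clear old LVM/filesystem signatures from the other drive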
