Search results

  1.

    Network Card negotiates down from 1 Gig to 100 Mb

    I will do that. Meanwhile, a completely different driver (the Intel e1000 chipset) exhibits the same issue. It boots at 1000 Mbit, and after 45 seconds the port link goes down and comes back up at 100 Mbit. ethtool also says the card is advertising 100 Mb full duplex while being capable of 1 Gbit full duplex.
  2.

    Network Card negotiates down from 1 Gig to 100 Mb

    So far, after multiple hours of iperf3, CentOS 7 with the tg3 driver and its native kernel stays at 1000 without dropping. Is there anything I can do other than using a different card vendor? This was a preproduction test for us, and it is not going very well for Proxmox.
  3.

    Network Card negotiates down from 1 Gig to 100 Mb

    The version in CentOS is 3.137, and so was the version in Proxmox 6: [Thu Oct 17 04:10:26 2019] tg3.c:v3.137 (May 11, 2014)
  4.

    Network Card negotiates down from 1 Gig to 100 Mb

    Hmm, no. The IPMI interface (iDRAC) is using its dedicated port. So far the CentOS 7 live CD keeps the interface up at 1000 Mbit. I am not doing anything with the box yet, just watching.
  5.

    Network Card negotiates down from 1 Gig to 100 Mb

    I just booted a live CentOS 7 CD, so I will wait and see if the card changes its ways. Right now it is at 1G and up.
  6.

    Network Card negotiates down from 1 Gig to 100 Mb

    [Thu Oct 17 04:10:33 2019] bpfilter: Loaded bpfilter_umh pid 1337
    [Thu Oct 17 04:10:37 2019] tg3 0000:18:00.0 eno1: Link is up at 1000 Mbps, full duplex
    [Thu Oct 17 04:10:37 2019] tg3 0000:18:00.0 eno1: Flow control is on for TX and on for RX
    [Thu Oct 17 04:10:37 2019] tg3 0000:18:00.0 eno1: EEE...
  7.

    Network Card negotiates down from 1 Gig to 100 Mb

    I have a Dell R720xd with a BCM 5720 NIC. I upgraded the firmware, replaced boards, changed the switch ports, the switch, and the cable. I still get the card going down and changing speed from 1 Gbit to 100. I can get it back up to 1000 with ethtool, but it comes back down to 100 and interrupts the connection multiple times...
  8.

    Roadmap

    +1 here. It was mentioned that it would be updated one week after the release.
  9.

    Revisiting CPU limit for LXC 2.0 / LXD

    It definitely works on LXD, and top shows that. It doesn't work on Proxmox LXC.
  10.

    How to prevent network config in lxc from being overwritten

    Dear all, I am trying to add multiple IP addresses inside an LXC container. I know there are multiple methods, but the one I am looking for is to add an extra IP address from within the container using the standard network settings of the guest OS. pct seems to overwrite the ifcfg-eth0 static IP...
  11.

    Roadmap

    Tom, what bugs in Ceph Jewel are you referring to?
  12.

    Proxmox VE 4.2 released!

    Thank you, I will wait for the HA people to reply :-)
  13.

    Proxmox VE 4.2 released!

    I have a few questions. 1. What HA improvements were made in this release? 2. LXC 2.0 and LXD 2.0 support: it is not available in this release; do you have plans for when it is going to be available?
  14.

    Proxmox 4 HA resource type IP address

    Sure, it can be done via different methods, but none of them are part of Proxmox 4.x. You could possibly do something on Proxmox 3.x, since its HA was based on the Red Hat cluster software; still, it was not documented anywhere. The ability to do it within Proxmox would definitely be a great feature. Sadly Debian...
  15.

    Intermitent cluster node failure

    Hello everyone, on my production pve-3.4-11 cluster (qdisk + 2 nodes) I am having a node evicted in the middle of the night once in a while. The only clues in the other node's logs are: corosync.log Dec 19 01:02:28 corosync [TOTEM ] A processor failed, forming new configuration. Dec 19...
  16.

    PVE 4 HA and redundant ring protocol (RRP)

    Is there a way to vote for migration over RRP? Also, do you have a plan for monitoring resources, i.e. VMs in general? It would be a great feature to migrate or reboot a VM if it is not reachable by the monitor. It could be optional, but nevertheless.
  17.

    PVE 4 KVM live migration problem

    [SOLVED] Spirit, thank you for all your help. You were the only one who took my nagging seriously. So far I couldn't reproduce the problem I complained about.
  18.

    PVE 4 HA and redundant ring protocol (RRP)

    I agree with your observation. However, any interface may and will go down; e.g. a bond can go down, effectively cutting off the VMs. If there is a supported RRP feature, wouldn't it make sense to have a migration option? As for the complexity, good documentation would help people learn, as long as...
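
Several of the link-negotiation posts above mention pinning the speed with ethtool, which on its own does not survive a reboot or link flap. A minimal sketch of making it persistent on a Debian/Proxmox ifupdown setup, assuming the interface is named eno1 (adjust the name and path to your system):

```
auto eno1
iface eno1 inet manual
    # Force 1000/full on every bring-up so the port cannot renegotiate down.
    pre-up /sbin/ethtool -s eno1 speed 1000 duplex full autoneg off
```

Note that disabling autonegotiation must be mirrored on the switch port, otherwise a duplex mismatch can result; this is a workaround sketch, not a fix for the underlying tg3/BCM 5720 behavior discussed in the thread.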
