I will do that. Meanwhile, a completely different driver, the Intel e1000 chipset, exhibits the same issue. It boots at 1000 Mbit, and after 45 seconds the port link goes down and comes back up at 100 Mbit. ethtool also says the card is advertising 100 Mbit full duplex while being capable of 1 Gbit full duplex.
So far, after multiple hours of iperf3 (sketch below), CentOS 7 with the tg3 driver and its native kernel stays at 1000 Mbit without going down. Is there anything I can do other than using a different card vendor? This was a pre-production test for us and it is not going very well for Proxmox.
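For reference, the kind of iperf3 run I mean is roughly the following; the address is just a placeholder for the box under test.

# on the box under test
iperf3 -s
# from another machine, one hour per run (placeholder address)
iperf3 -c 192.0.2.50 -t 3600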
Hmm,
No, the IPMI interface (iDRAC) is using its dedicated port. So far the CentOS 7 live CD keeps the interface up at 1000 Mbit. I am not doing anything with the box yet, just watching.
[Thu Oct 17 04:10:33 2019] bpfilter: Loaded bpfilter_umh pid 1337
[Thu Oct 17 04:10:37 2019] tg3 0000:18:00.0 eno1: Link is up at 1000 Mbps, full duplex
[Thu Oct 17 04:10:37 2019] tg3 0000:18:00.0 eno1: Flow control is on for TX and on for RX
[Thu Oct 17 04:10:37 2019] tg3 0000:18:00.0 eno1: EEE...
I have a Dell R720xd with a BCM 5720 NIC. I upgraded the firmware, replaced boards, and changed the switch ports, the switch, and the cable. I still get the card going down and changing speed from 1 Gbit to 100 Mbit. I can force it back up to 1000 with ethtool (roughly as shown below), but it drops back to 100 and interrupts the connection multiple times...
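For reference, this is roughly what I run to check the link and force gigabit; the interface name eno1 is taken from the dmesg output above, and forcing the speed is only a workaround, not a fix.

# show supported/advertised link modes and current speed
ethtool eno1
# renegotiate at gigabit full duplex, autoneg left on
ethtool -s eno1 speed 1000 duplex full autoneg on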
Dear all,
I am trying to add multiple IP addresses inside an LXC container. I know that there are multiple methods, but the one I am looking for is to add an extra IP address from within the container, using the standard network settings of the guest OS.
pct seems to overwrite the ifcfg-eth0 static IP...
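For illustration, this is the sort of alias file I am trying to keep inside a CentOS-style guest; the addresses are made up and the main ifcfg-eth0 stays managed by Proxmox.

# /etc/sysconfig/network-scripts/ifcfg-eth0:0 (example addresses only)
DEVICE=eth0:0
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
ONBOOT=yes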
I have a few questions.
1. What HA improvements are made in this release?
2. LXC 2.0 and LXD 2.0 support: it is not available in this release; do you have plans for when it is going to be available?
Sure, it can be done via different methods, but none of them is part of Proxmox 4.x. You could possibly do something on Proxmox 3.x, as HA was based on Red Hat cluster software, though it was not documented anywhere. The ability to do it within Proxmox would definitely be a great feature. Sadly Debian...
Hello everyone,
On my production pve-3.4-11 cluster (qdisk + 2 nodes) I have a node being evicted in the middle of the night once in a while.
The only clues in the other node's logs are:
corosync.log
Dec 19 01:02:28 corosync [TOTEM ] A processor failed, forming new configuration.
Dec 19...
Is there a way to vote for migration over RRP? Also, do you have plans for monitoring resources, i.e. VMs in general? It would be a great feature to migrate or reboot a VM if it is not accessible by the monitor. It could be optional, but it would still be worth having.
PVE 4 KVM live migration problem [SOLVED]
Spirit,
Thank you for all your help. You were the only one who took my nagging seriously. So far I have not been able to reproduce the problem I complained about.
I agree with your observation. However, any interface may and will go down; e.g. a bond can go down, effectively cutting off VMs. If there is a supported RRP feature, wouldn't it make sense to have a migration option?
As for complexity, good documentation would help with learning, as long as...