Search results

  1. Cluster node semi-offline?

    Both were correct: root@pve1:~# systemctl status pvestatd ● pvestatd.service - PVE Status Daemon Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2021-08-25 16:33:32 EDT; 1 weeks 4 days ago Process: 2426...
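
    If pvestatd shows active (running) while the node still appears offline in the GUI, a common low-risk first step (an assumption here, not something stated in this excerpt) is to restart the status and proxy daemons on the affected node:

      # restart the PVE status daemon and the web/API proxy
      systemctl restart pvestatd
      systemctl restart pveproxy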
  2. Cluster node semi-offline?

    Since rebooting one node of my three-node cluster this morning, I'm seeing some strange behavior, in that one node (often, but not always, the one I rebooted) appears offline, like this: The node is up and running, I can ssh to it, and in this case, I'm even logged into that node's web GUI...
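
    To separate a display problem from a real quorum problem, a quick check (a general sketch, not taken from the thread) is cluster membership and the cluster services on the node that looks offline:

      # quorum and membership as corosync sees it
      pvecm status
      # the cluster filesystem and corosync services themselves
      systemctl status pve-cluster corosync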
  3. Network won't start on boot

    Great, thanks. I'm a little reluctant to consider this "solved", as there was a period of months in which it worked fine, but this looks like an identified problem and solution.
  4. Network won't start on boot

    That appears to have resolved the issue. Following https://linuxconfig.org/how-to-blacklist-a-module-on-ubuntu-debian-linux/, I created /etc/modprobe.d/blacklist.conf and added "blacklist ipmi_si" there. Then ran update-initramfs -u and rebooted. The system booted more quickly than it has...
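
    For reference, the fix described above boils down to the following (module name taken from the post; any file name ending in .conf under /etc/modprobe.d/ works):

      # stop the ipmi_si module from loading at boot
      echo "blacklist ipmi_si" > /etc/modprobe.d/blacklist.conf
      # rebuild the initramfs so the blacklist also applies early in boot
      update-initramfs -u
      reboot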
  5. Network won't start on boot

    Thanks. Here's the output of pveversion -v: proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve) pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3) pve-kernel-5.11: 7.0-6 pve-kernel-helper: 7.0-6 pve-kernel-5.4: 6.4-4 pve-kernel-5.11.22-3-pve: 5.11.22-6 pve-kernel-5.11.22-2-pve: 5.11.22-4...
  6. Network won't start on boot

    Once I figure out how to get it off the machine with no network access, sure--I should be able to use SneakerNet with a USB stick. For the "full journal from boot", would that just be the output of journalctl? I expect it will be quite large.
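
    The "full journal from boot" can indeed be captured with journalctl; a minimal sketch for the USB-stick route, with the device name and mount point as assumptions:

      # mount the stick (replace /dev/sdb1 with the actual device)
      mount /dev/sdb1 /mnt
      # dump the complete journal for the current boot
      journalctl -b > /mnt/boot-journal.txt
      umount /mnt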
  7. Network won't start on boot

    Well, after having mostly gone away for a few months, this problem is back. I've upgraded my cluster to 7.0, restarted each node more than once, and the network has come up successfully. Until today. But unlike what I posted above, systemctl restart systemd-udevd followed by systemctl restart...
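
    When an interface fails to come up at boot, the pending configuration in /etc/network/interfaces can usually be re-applied without a reboot; a sketch, assuming ifupdown2 is installed (the default on new PVE 7 installations):

      # re-apply the interfaces configuration in place
      ifreload -a
      # confirm which interfaces and addresses actually came up
      ip -br addr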
  8. Network won't start on boot

    I have a three-node PVE 6.3 cluster running on three nearly-identical (identical except for RAM--two nodes have 48 GB; the third has 96 GB) blades of a Dell PowerEdge C6100, each with 2x Xeon X5650 CPUs, and each with a Chelsio T420-CR 2x 10Gbit NIC. Each pretty consistently fails to bring up...
  9. vmbr0 doesn't come up on boot

    Yes, of course--should have mentioned that. Some stuff in dmesg looks like it might be relevant: root@pve1:~# dmesg | grep cxgb [ 2.894565] cxgb4 0000:03:00.4: Direct firmware load for cxgb4/t4fw.bin failed with error -2 [ 2.894570] cxgb4 0000:03:00.4: unable to load firmware image...
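
    Error -2 is ENOENT: the driver could not find cxgb4/t4fw.bin under /lib/firmware when it loaded. A sketch of how to confirm and address that, assuming the file is simply missing and that the pve-firmware package (which bundles the upstream linux-firmware files) is meant to provide it:

      # check whether the Chelsio T4 firmware files are present
      ls /lib/firmware/cxgb4/
      # reinstall the firmware bundle and rebuild the initramfs so the
      # firmware is available when the driver loads early in boot
      apt install --reinstall pve-firmware
      update-initramfs -u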
  10. vmbr0 doesn't come up on boot

    I've been dealing with this problem since I installed PVE, but since I didn't reboot the servers very often I didn't bother dealing with it. Now I'm needing to reboot the servers more in the course of troubleshooting another issue, and this is becoming more of a hassle than it had been. I have...
  11. Poor network performance on guest

    I'll check those out and see what they do. Thanks for all your help.
  12. Poor network performance on guest

    Well, yes, there are other devices plugged into the switch. I don't think I can avoid that--the only way I have to connect with the VM that I'm concerned with is via SSH, and I don't have a trusted public key on the PVE host. But trying again, with all the other devices (i.e., except for the...
  13. Poor network performance on guest

The PVE node has 12 physical cores, or 24 threads with HT. The VM is assigned 4 cores, two other running VMs are assigned 2 cores each, and one other running VM is assigned a single core. The FreeNAS box is running on a Xeon E3-1230v2, which has 4 physical cores and HT. The other VMs...
  14. Poor network performance on guest

    The VM is CentOS 6.7, the switch is a Dell PowerConnect 5524. The vmbr2 port on the PVE node, and port 1 on the FreeNAS box, are connected to the SFP+ ports on the switch using 2M twinax patch cables. The vmbr3 port on the PVE node, and port 2 on the FreeNAS box, are connected to each other...
  15. Poor network performance on guest

    The obvious conclusion would appear to be that there's something wrong with the switch and/or the cables from the NICs to the switch. But there's still a huge disparity between the PVE node performance and the VM performance. I've tried a little cable-swapping without effect, but haven't tried...
  16. Poor network performance on guest

    OK, here are the results. IPs are: VM via DAC: 192.168.2.3; VM via switch: 192.168.1.1; PVE node via DAC: 192.168.2.1; PVE node via switch: 192.168.1.33; FreeNAS via DAC: 192.168.2.2; FreeNAS via switch: 192.168.1.10. vmbrs are: vmbr2 is via switch; vmbr3 is via DAC. iperf results are...
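
    For context, the measurements behind those numbers would be run roughly like this (iperf 2 syntax assumed; addresses taken from the list above):

      # on the FreeNAS box: listen for test connections
      iperf -s
      # from the VM, across the switch (vmbr2 path)
      iperf -c 192.168.1.10 -t 30
      # from the VM, across the direct DAC link (vmbr3 path)
      iperf -c 192.168.2.2 -t 30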
  17. Poor network performance on guest

    Let me make sure I understand your suggested course of action, since I try to minimize reboots of my host: unconfigure eth6 (second port of the T420, direct-connected to the T420 in the FreeNAS box); create vmbr3, bridged to eth6; configure vmbr3 with an IP of 192.168.2.x (reboot host); add a...
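
    A sketch of what that plan would look like in /etc/network/interfaces (interface names from the post; the exact address within 192.168.2.x is an assumption, chosen to match the "PVE node via DAC" entry listed elsewhere in these results):

      # second T420 port, carried by a bridge instead of holding an address itself
      iface eth6 inet manual

      auto vmbr3
      iface vmbr3 inet static
          address 192.168.2.1
          netmask 255.255.255.0
          bridge-ports eth6
          bridge-stp off
          bridge-fd 0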
  18. Poor network performance on guest

    Correct as modified--the second port of the T420 (eth6) isn't on a vmbr; I simply configured eth6 as 192.168.2.1. I actually have three vmbr interfaces configured, but two of them aren't involved here--one (vmbr0) is the management interface for the PVE host, and the other (vmbr1) is the WAN...
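
    In /etc/network/interfaces terms, that earlier setup is just a static address on the physical port, with no bridge involved (a sketch reconstructed from the description):

      auto eth6
      iface eth6 inet static
          address 192.168.2.1
          netmask 255.255.255.0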
  19. Poor network performance on guest

    I just installed the second T420 in the FreeNAS box. The first port on it is configured as before, at 192.168.1.10. The second port is configured as 192.168.2.2. On the PVE node, I configured eth6 (the second port of the T420 installed there) as 192.168.2.1 and rebooted. The guest VM is at...