Search results

  1. weehooey-bh

    iperf3 slower one direction

    Ah. We often see that with older Dell and HP hardware. They released that hardware back before PVE was cool. :-) In that case, you need to rely on the drivers shipped with the kernel.
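    If you want to confirm which in-kernel driver and firmware a NIC is actually using, something like the following works (the interface name eno1 and the module name bnx2x are only placeholder examples):
    # show driver name, driver version and firmware version for one NIC
    ethtool -i eno1
    # show details of the kernel module that driver comes from
    modinfo bnx2x | head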
  2. weehooey-bh

    Offload copies for long term storage

    Ah! Thanks for that overview, @markf1301. Your architecture makes sense. Yes, having dedicated hardware for PBS is the ideal situation, but there are many use cases where that is not possible for technical reasons or resource constraints (e.g. money, time). Based on what you have mentioned and...
  3. weehooey-bh

    iperf3 slower one direction

    I would try updating the drivers first.
  4. weehooey-bh

    iperf3 slower one direction

    Yes, it looks good. You would need to dig deeper. Might be hardware related. It does seem high. Might want to look into that.
  5. weehooey-bh

    Offload copies for long term storage

    I may not have understood your topology correctly. My understanding was you had a PBS local to your PVE (site #1). Then, you had a remote site (site #2) with a PBS and a Synology together. From this last post, it sounds like you have a PBS local to your PVE (site #1), a remote site with...
  6. weehooey-bh

    iperf3 slower one direction

    Thanks for sharing this information. Please run the following variations on the iperf3 command:
    # Node1
    iperf3 -B 10.xxx.xxx.61 -s
    # Node2 with extra option "-P 8"
    iperf3 -B 10.xxx.xxx.63 -c 10.xxx.xxx.61 -P 8
    iperf3 -B 10.xxx.xxx.63 -c 10.xxx.xxx.61 -P 8 -R
    # Node2 with extra options "-P 8...
  7. weehooey-bh

    iperf3 slower one direction

    Please post the iperf (iperf3) commands you are running on the server and clients. Please share your /etc/network/interfaces file. What model of NICs are you using on each host? What model are the switches you are connecting your PVE hosts to?
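    To collect most of that in one pass, something along these lines should work on each host (the interface name is a placeholder):
    cat /etc/network/interfaces
    # list the physical NICs and their models
    lspci -nn | grep -i ethernet
    # show driver and firmware for a given interface
    ethtool -i eno1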
  8. weehooey-bh

    Offload copies for long term storage

    Mounting the NFS share from your Synology and using a Sync Job requires a couple of extra steps to set up, but once it is set up, it will be automatic and will not require you to SCP the files over manually. It would be slow, but the speed might be acceptable, given your use case.
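    A rough sketch of the mount step, assuming the Synology exports /volume1/pbs and you mount it at /mnt/synology on the PBS host (the IP, export path and mountpoint are placeholders):
    apt install nfs-common        # if not already installed
    mkdir -p /mnt/synology
    # /etc/fstab entry so the share is mounted again after reboots
    10.0.0.50:/volume1/pbs  /mnt/synology  nfs  defaults,_netdev  0  0
    mount /mnt/synology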
  9. weehooey-bh

    [SOLVED] Node reboot while disk operation in Ceph

    @daubner Thank you for sharing this information. Your information makes it reasonable to conclude that a prolonged broadcast storm on the host network caused the reboot. This is based on the assumption that the cluster was set up much like you have it right now. I'll explain...
  10. weehooey-bh

    SDN an Network adapter

    These are two standalone hosts, correct? The SDN is more easily configured in the web GUI. Are you adding a physical NIC to the PVE host and trying to make it available to the VMs? For a physical NIC on the host, you would add it to the bridge used by the SDN. You can do that in the...
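    For reference, a bridge with a physical NIC attached typically looks something like this in /etc/network/interfaces (the NIC and bridge names are placeholders; the SDN zone would then reference vmbr1):
    auto enp3s0
    iface enp3s0 inet manual

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0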
  11. weehooey-bh

    Offload copies for long term storage

    Because of the way PBS handles the data, you will need to use another Datastore ("repository" is a misnomer) and a Sync Job to move the data. Yes, you can use Removable Datastores.
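    As a rough sketch only (the datastore names, path and schedule are examples): once the target storage is mounted, you create the second datastore and then a Sync Job that pulls from the existing one. Depending on your PBS version, the Sync Job can also be created under Datastore -> Sync Jobs in the GUI.
    # create a second datastore on the mounted storage
    proxmox-backup-manager datastore create longterm /mnt/synology
    # sync job that pulls into it; exact flags depend on your PBS version
    # (local sync jobs need a recent PBS), see:
    #   proxmox-backup-manager sync-job create --help
    proxmox-backup-manager sync-job create offload-longterm \
        --store longterm --remote-store main --schedule daily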
  12. weehooey-bh

    [SOLVED] Node reboot while disk operation in Ceph

    It sounds like you have an issue with your Corosync links. Please provide the following information:
    - Contents of /etc/network/interfaces
    - The output from corosync-cfgtool -s
    - The output from ha-manager status
    - The contents of /etc/ceph/ceph.conf
  13. weehooey-bh

    Certificate lost during cluster join

    Yes, when a node joins a cluster, many parts of its configuration are overwritten -- including the certs. If you add your custom cert after the node has joined the cluster, life will be fine.
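    For the web GUI certificate specifically, putting the custom cert back after the join is a matter of restoring the files and restarting the proxy (a sketch; my-custom.crt and my-custom.key are placeholders for your own cert and key):
    # on the node that joined the cluster
    cp my-custom.crt /etc/pve/local/pveproxy-ssl.pem
    cp my-custom.key /etc/pve/local/pveproxy-ssl.key
    systemctl restart pveproxy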
  14. weehooey-bh

    [SOLVED] Node reboot while disk operation in Ceph

    When you say "crashed," what exactly happened? Did the node stop responding? Did it spontaneously reboot? Or... ?
  15. weehooey-bh

    Safely rebuild PVE node?

    Hey @Darkk It would be totally fine to wipe it and reload PVE before using pvecm delnode to remove it from the cluster. This would be no different from having a node hard fail and then replacing it.
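    For reference, removing the old node from the cluster is done from one of the remaining nodes, along these lines (the node name is a placeholder):
    # run on a surviving cluster member, after the old node is offline
    pvecm delnode oldnode1
    # confirm the cluster membership afterwards
    pvecm status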
  16. weehooey-bh

    SDN with multiple VLAN trunks

    @intelliimpulse You are not the first. :)
    https://forum.proxmox.com/threads/sdn-vnet-trunking-tagging.146199/
    https://bugzilla.proxmox.com/show_bug.cgi?id=5443
    https://bugzilla.proxmox.com/show_bug.cgi?id=6272
    I recommend you comment on the bug (aka feature request) that matches...
  17. weehooey-bh

    SDN with multiple VLAN trunks

    @wolfspyre We always used Open vSwitch (OVS) and encouraged our clients to do the same. It was easier to do some things. However, since PVE SDN was released, we have discouraged using OVS and strongly recommend SDN. Configuring OVS is done on a per-host basis, whereas SDN is cluster-wide, which...
  18. weehooey-bh

    Suggestions

    You are welcome. :-) Use the Add button with the same plugin.
  19. weehooey-bh

    SDN with multiple VLAN trunks

    You do not need OVS for PVE SDN. You are describing SDN Simple Zones. Each VNet is an isolated network that only connects VMs. You would use either SDN VLAN Zones or SDN EVPN/VXLAN Zones. If you use the VLAN Zone, you must ensure the switches are configured for the VLANs. If you use...
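    As a rough illustration of what a VLAN zone plus a VNet end up looking like on disk (normally you create these under Datacenter -> SDN in the GUI; the zone name, VNet name, bridge and tag are placeholders):
    # /etc/pve/sdn/zones.cfg
    vlan: zvlan1
        bridge vmbr0
        ipam pve

    # /etc/pve/sdn/vnets.cfg
    vnet: vnet10
        zone zvlan1
        tag 10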
  20. weehooey-bh

    SDN with multiple VLAN trunks

    You could replace this code:
    # dummy nic for node-scoped intra-node-comms for VMs
    auto foonic
    iface foonic inet manual
        ovs_type OVSIntPort
        ovs_bridge vmbr2
        ovs_mtu 9198
        pre-up ip link set foonic txqueuelen 13888

    auto vmbr2
    iface vmbr2 inet manual
        ovs_type OVSBridge...