Search results

  1. weehooey-bh

    VM on VLAN tag 9 can’t reach gateway but host can

    Please provide the contents of your /etc/network/interfaces file. Have you enabled the PVE firewall?
  2. weehooey-bh

    VM on VLAN tag 9 can’t reach gateway but host can

    If the traffic leaves the Proxmox VE node tagged correctly, then the issue is with the network gear after the PVE host. Have you checked the switch config?
  3. weehooey-bh

    Initial sync via external HDD

    You can use removable datastores: https://pbs.proxmox.com/docs/storage.html#removable-datastores 1. Connect the external hard drive to PBS1. 2. Create a datastore on the external drive. 3. Copy the datastore over with a sync job to...
  4. weehooey-bh

    Internode Networking

    Yes. That is the correct way to set up the VXLAN. VXLANs work, while VLANs do not, because VXLANs encapsulate the traffic they carry (think VPN, but without the encryption). VLANs tag each packet (actually, they are frames, but you get the idea), and all the network gear between the Proxmox...
  5. weehooey-bh

    How to delete a drive that failed

    Check Datacenter > Storage. Is it still defined there? If so, remove it. In Proxmox VE, storage and the underlying physical storage are separate things.
  6. weehooey-bh

    iperf3 slower one direction

    Ah. We often see that with older Dell and HP hardware. They released the hardware back before PVE was cool. :-) In that case, you need to rely on the drivers in the kernel.
  7. weehooey-bh

    Offload copies for long term storage

    Ah! Thanks for that overview @markf1301 Your architecture makes sense. Yes, having dedicated hardware for PBS is the ideal situation, but there are many use cases where that is not possible for technical reasons or resource constraints (e.g. money, time). Based on what you have mentioned and...
  8. weehooey-bh

    iperf3 slower one direction

    I would try updating the drivers first.
  9. weehooey-bh

    iperf3 slower one direction

    Yes, it looks good. You would need to dig deeper. Might be hardware related. It does seem high. Might want to look into that.
  10. weehooey-bh

    Offload copies for long term storage

    I may not have understood your topology correctly. My understanding was you had a PBS local to your PVE (site #1). Then, you had a remote site (site #2) with a PBS and a Synology together. From this last post, it sounds like you have a PBS local to your PVE (site #1), a remote site with...
  11. weehooey-bh

    iperf3 slower one direction

    Thanks for sharing this information. Please run the following variations on the iperf3 command:
    # Node1
    iperf3 -B 10.xxx.xxx.61 -s
    # Node2 with extra option "-P 8"
    iperf3 -B 10.xxx.xxx.63 -c 10.xxx.xxx.61 -P 8
    iperf3 -B 10.xxx.xxx.63 -c 10.xxx.xxx.61 -P 8 -R
    # Node2 with extra options "-P 8...
  12. weehooey-bh

    iperf3 slower one direction

    Please post the iperf (iperf3) commands you are running on the server and clients. Please share your /etc/network/interfaces file. What model of NICs are you using on each host? What model are the switches you are connecting your PVE hosts to?
  13. weehooey-bh

    Offload copies for long term storage

    Mounting the NFS share from your Synology and using a Sync Job requires a couple of extra steps to set up, but once it is set up, it will be automatic and will not require you to SCP the files over manually. It would be slow, but the speed might be acceptable, given your use case.
  14. weehooey-bh

    [SOLVED] Node reboot while disk operation in Ceph

    @daubner Thank you for sharing this information. Your information makes it reasonable to conclude that a prolonged broadcast storm on the host network caused the reboot. This is based on the assumption that the cluster was set up much like you have it right now. I'll explain...
  15. weehooey-bh

    SDN an Network adapter

    These are two standalone hosts, correct? The SDN is easier to configure in the web GUI. Are you adding a physical NIC to the PVE host and trying to make it available to the VMs? For a physical NIC on the host, you would add it to the bridge used by the SDN. You can do that in the...
  16. weehooey-bh

    Offload copies for long term storage

    Because of the way PBS handles the data, you will need to use another Datastore ("repository" is a misnomer) and a Sync Job to move the data. Yes, you can use Removable Datastores.
  17. weehooey-bh

    [SOLVED] Node reboot while disk operation in Ceph

    It sounds like you have an issue with your Corosync links. Please provide the following information:
    - Contents of /etc/network/interfaces
    - The output from corosync-cfgtool -s
    - The output from ha-manager status
    - The contents of /etc/ceph/ceph.conf
  18. weehooey-bh

    Certificate lost during cluster join

    Yes, when a node joins a cluster, many parts of its configuration are overwritten -- including the certs. If you add your custom cert after the node has joined the cluster, life will be fine.
  19. weehooey-bh

    [SOLVED] Node reboot while disk operation in Ceph

    When you say "crashed," what exactly happened? Did the node stop responding? Did it spontaneously reboot? Or... ?
  20. weehooey-bh

    Safely rebuild PVE node?

    Hey @Darkk It would be totally fine to wipe it and reload PVE before using pvecm to remove it from the cluster. This would be no different from having a node hard fail and then replacing it.
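
Several of the results above (items 6, 8, 9, 11, and 12) concern one-directional iperf3 slowness. The test pattern suggested in result 11 can be sketched as follows; the 10.xxx.xxx.61/.63 addresses are the masked placeholders from the original post, so substitute the real addresses of your own nodes:

```shell
# On Node1: start an iperf3 server bound to the cluster-network address
iperf3 -B 10.xxx.xxx.61 -s

# On Node2: run the client with 8 parallel streams (-P 8),
# then repeat with -R so the server sends and the client receives.
# Comparing the two runs exposes asymmetric throughput between the hosts.
iperf3 -B 10.xxx.xxx.63 -c 10.xxx.xxx.61 -P 8
iperf3 -B 10.xxx.xxx.63 -c 10.xxx.xxx.61 -P 8 -R
```

Binding with -B ensures the test uses the intended interface on a multi-homed host; -R reverses the data direction without having to swap the server and client roles.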