Search results

  1. Proxmox VM and host machine cannot make outgoing requests

    That's weird! Looking at their docs: https://docs.hetzner.com/robot/dedicated-server/network/net-config-debian-ubuntu/#etcnetworkinterfaces-eni it seems pointopoint must be added too, along with a /32 netmask. iface eno1 inet manual address <PUBLIC_IP> netmask 255.255.255.255 gateway <PUBLIC_GW>...
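The flattened stanza in that excerpt can be sketched as a complete interfaces entry. This is only a sketch: <PUBLIC_IP> and <PUBLIC_GW> are placeholders carried over from the excerpt, and `inet static` is an assumption (the excerpt shows `inet manual`, but an interface carrying a fixed address is normally declared `inet static`):

```
# /etc/network/interfaces — sketch for a Hetzner single-IP host (placeholder values)
auto eno1
iface eno1 inet static
    address <PUBLIC_IP>
    netmask 255.255.255.255    # /32, as the Hetzner ENI docs describe
    pointopoint <PUBLIC_GW>    # point-to-point link to the gateway
    gateway <PUBLIC_GW>
```

With a /32 netmask the gateway is not on the local subnet, which is why the pointopoint line is required to make it reachable.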
  2. Proxmox VM and host machine cannot make outgoing requests

    Perfect; if you know it, set it statically in the config file. We never know how cloud providers manage their firewall rules :/ Maybe you can still access your PVE host over SSH because of a static rule (e.g. a non-blocking rule) on their side, or their conntrack entry is still alive, I don't know...
  3. Proxmox VM and host machine cannot make outgoing requests

    Hmm, I think it's because of the strict IP/MAC association policy that Hetzner has in place. My guess: the MAC address of your VM took ownership of your public IP. Maybe you could change it through the Hetzner customer portal? In that circumstance, I would ensure the PVE NIC is isolated from...
  4. Proxmox VM and host machine cannot make outgoing requests

    If you search this forum, a lot of people (mostly Germans) have issues with that cloud provider. Check out the official network setup approaches for your cloud configuration: https://community.hetzner.com/tutorials/install-and-configure-proxmox_ve From your attempts, it seems you want to...
  5. Proxmox VM and host machine cannot make outgoing requests

    Is your PVE server hosted with a cloud provider?
  6. [SOLVED] Change MAC of host machine (or pass NIC through to VM)

    You will have to play with "hwaddress ether 00:11:22:33:44:55" within /etc/network/interfaces to replace your PVE NIC's MAC, and change the MAC address within your VM or LXC container settings.
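A hedged sketch of what that looks like in practice — the bridge name, addresses, and MAC below are example values, not taken from the thread:

```
# /etc/network/interfaces — pin a specific MAC on the PVE bridge (example values)
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    hwaddress ether 00:11:22:33:44:55   # MAC from the quoted advice; substitute your own
```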
  7. Proxmox VM and host machine cannot make outgoing requests

    Hi, your `/etc/network/interfaces` definition is not valid. Pay attention to the indentation. Test it with the following command: ifup --no-act -a You should see errors and warnings. I strongly encourage you to follow the Proxmox documentation. Proxmox Network Configuration - Masquerading (NAT)...
  8. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    According to your screenshot, the additional public network to add to `ceph.conf` is 10.0.0.0/24 [global] ... public_network = 10.0.0.81/24, 10.1.60.1/24, 10.1.80.1/24, 10.0.0.0/24 ... Regarding your question on how to create a dummy iface on each PVE node, here is a sample for one...
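The author's sample itself is cut off in the excerpt; a generic way to create a dummy interface on a Debian-based PVE node looks roughly like this (interface name and address are illustrative, not the author's values):

```
# /etc/network/interfaces — illustrative dummy interface carrying a Ceph public IP
auto dummy0
iface dummy0 inet static
    address 10.0.0.81/32
    pre-up ip link add dummy0 type dummy 2>/dev/null || true
    post-down ip link del dummy0 || true
```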
  9. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    Hi @mrkhachaturov, thanks for your feedback; the advice given to @telvenes was not totally clear on point 1. By answering your questions, I hope it becomes clearer what "The Ceph Public Network is defined within a vNet associated to the Zone (evpn) defined for FRR" implies. The easy way is...
  10. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    @telvenes, make sure: 1. The Ceph Public Network is defined within a vNet associated with the Zone (evpn) defined for FRR 2. All your K3s nodes have a NIC connected to a vNet associated with the same or another Zone (evpn) defined for FRR 3. If the K3s nodes' NIC subnet differs from the Ceph Public...
  11. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    To be even more precise: https://datatracker.ietf.org/doc/html/draft-white-openfabric-06#section-2.2
  12. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    @gpoudrel nice catch. I made the change in the local conf "/etc/frr/frr.conf.local" of each node, applied Proxmox SDN again, and then it worked fine :) Unfortunately, I can't update the original post with your discovery. According to the OpenFabric specs, it means nodes with tier 0 are at the edge...
  13. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    Well, in terms of topology, if you look at the first diagram, that's the case, but with 2 links. In your case, if I understand correctly, you have a single hardware link on each node connected to a switch; if so, your setup would be even simpler by assigning the IP to your NIC instead of the loop...
  14. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    It was mainly to try OpenFabric as the IGP protocol. Of course I could use IS-IS or even OSPF, with their drawbacks too. In my case, OpenFabric is the most flexible protocol for future topology changes and, more importantly, it avoids flooding the network.
  15. Multiple VMs on the same bridge no DNS

    You faced the same kind of behavior I had in the past with pfSense/OPNsense and FreeBSD in general within KVM. I was never able to make it work properly, so I replaced it with VyOS. Just in case, also have a look at the offloading settings of your NIC on the PVE host with ethtool. ethtool -k eno8...
  16. Proxmox in Hetzner + NAT

    I suggest you follow this guide. Even though it's old, the networking settings should still be correct. https://community.hetzner.com/tutorials/install-and-configure-proxmox_ve/de/#Netzwerkkonfiguration_KVM
  17. [TUTORIAL] [Full mesh (routed setup) + EVPN] it is feasible even by using SDN!

    Dear Community, I'd like to share with you my recent discoveries. For a while, I had a few hardware components lying around to provide 10Gb connectivity between my cluster of 3 Proxmox servers. Obviously, it was time to upgrade from 2.5Gb to 10Gb. But unfortunately, I'm still...
  18. Windows 2019 is very slow and unstable after upgrading to Proxmox V8

    Wow, that increased the random 4K write IOPS by 200% :eek: Thanks a lot. before: KDiskMark (3.1.4): https://github.com/JonMagon/KDiskMark Flexible I/O Tester (fio-3.33): https://github.com/axboe/fio...
  19. Windows 2019 is very slow and unstable after upgrading to Proxmox V8

    How do you do that? BIOS? Edit: thanks, it was obvious to modify the GRUB config. The additional argument is `mitigations=off`. As soon as I can restart, I'll check the differences.
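For reference, disabling CPU mitigations via GRUB is typically a one-line change to the kernel command line (a sketch; the existing `quiet` flag is assumed, and the config must be regenerated afterwards):

```
# /etc/default/grub — append mitigations=off to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
# then: update-grub && reboot
```

Note this trades Spectre/Meltdown-class hardening for performance, so it is only advisable on trusted, isolated workloads.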
  20. Windows 2019 is very slow and unstable after upgrading to Proxmox V8

    1 year in RAIDZ1 since the dataset was created with new QVO SSDs. I have 2 machines with the same setup (2x3 QVO SSDs); they are all at 4% wearout with a calculated TBW of ~25TB. All have approximately the same SMART values, as follows. Model Family: Samsung based SSDs Device Model...