Search results

  1. Proxmox 7.x not network

    I think there is something amiss in the upgrade/installation w.r.t. ifupdown2 not being installed, but being "needed" somehow :shrug:
  2. linux bridge vs ovs Bridge

    That said, the OVS world has SDN (Software Defined Networking) features where you can (using a "controller" of sorts) create rules on how to switch traffic based on various criteria, so the notion of port isolation could well be implemented, but not with the same ease (and here ease includes...
  3. linux bridge vs ovs Bridge

    My "port isolation" is to put each client/stack in its own separate VLAN, and then have a single firewall managing the traffic accordingly. Then the OpenVSwitch shares/connects to an interface that is 802.1q trunking to the other OpenVSwitches on the other cluster members, and that way I can...
  4. Proxmox 7.x not network

    My first guess: ifupdown2 is needed
  5. v6 to v7 - OVH SYS

    Have done it on the OVH (proper) Infra2 servers; the only issue was needing to "force" the ifupdown2 installation when the OpenVSwitch bridge connected to the vRack would not come up.
  6. l2arc and zfs volumes

    DO go and read up on the OpenZFS wikis and performance tuning information. You'll need to remember that L2ARC is the 2nd level of the ARC (in RAM), and you need RAM (i.e. reduce ARC) to hold the pointers to L2ARC, and that L2ARC is ephemeral, i.e. reboot, and it's gone and will be re-primed as you...
  7. Downgrade 7.0 to 6.4

    Why btrfs and not ZFS?
  8. Proxmox VE 7.0 released!

    That was only the first "lab" system (and yes, it was `ldd` I used to track down that problem ;) ); the 2nd/3rd cluster installations/upgrades had no netdata
  9. Proxmox VE 7.0 released!

    3x 6.4 -> 7 upgrades, all failed with the Open-vSwitch bridge somehow not liking or adding the physical interface they are "bound" to. My colleague found the "solution" appears to be a simple installation of `ifupdown2` and a reboot :shrug: Also, the one I was playing/testing...
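For reference, the "solution" described in this snippet amounts to an install plus a reboot. A minimal sketch, assuming a standard Debian/PVE apt setup (the `ifupdown2` package name is real; everything else about the environment is assumed):

```shell
# Install ifupdown2 (it replaces the legacy ifupdown scripts) and reboot
# so the OVS bridges come back up with their physical ports attached.
apt update
apt install -y ifupdown2
systemctl reboot
```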
  10. VPN not working inside virtual machine (KVM)

    Okay, Scale - that's new; guess they had to set up new/different networks and, to save a few extra IPs, have the "internal network" with that 100.64.0.1. I notice an IP on bond0 - I would've expected bond0 to be attached to vmbr0 without an IP, as the IP should be on vmbr0, shouldn't...
  11. Open Vswitch - VLAN tagging

    OpenVSwitch and its utils must be installed separately, else you'll be using the Linux bridge utils (nothing wrong per se)
  12. VPN not working inside virtual machine (KVM)

    Don't be skimpy and stingy; just fork out the money (once-off) for a /27 or /26 - my consulting invoice would be more than worth the 3x IPs (broadcast, network and gateway). 100.64.0.1 - where is that? You should have the OVH gateway in there AFAIK, unless you are doing MAC-bouncing, and then...
  13. Open Vswitch - VLAN tagging

    Ask your network team whether that VLAN 61 is native or 802.1q tagged. If it's native, then just attach the VM to vmbr0 untagged, and things should just work(TM). If it's tagged, then OpenVSwitch should already see it tagged, and only then will you want to add tag 61 to the VM's config.
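For context, a minimal sketch of what the tagged case looks like on the Proxmox side. The `ovs_type`/`ovs_ports` keys and the `tag=` option are standard PVE conventions; the physical port name `eno1` and VMID `100` are made-up placeholders:

```
# /etc/network/interfaces - OVS bridge with eno1 as its (trunking) port
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1
```

If VLAN 61 arrives tagged, the tag goes on the VM's NIC, e.g. `qm set 100 --net0 virtio,bridge=vmbr0,tag=61`; if it's native, omit `tag=` entirely.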
  14. VPN not working inside virtual machine (KVM)

    I miss the `ip r` output from inside the VM (and the node). Hosting a Proxmox cluster in OVH myself, I'd advise putting the /27 on the vRack and then giving the VPN an IP on that side; it solves a few other issues. I noticed this in the logs: Fri May 21 12:49:44 2021 /sbin/ip route add VPNIP/32 via 100.64.0.1...
  15. Proxmox on OVH

    I've been using OVH since ~2014, and the vRack is the reason why (Hetzner's vswitches came quite late, after I'd already invested time in understanding and getting Proxmox working on OVH's network). Yes, OVH's vRack has limits, like no IPv6 routed into it. Yes, their public/outside interface...
  16. Issue backing up Windows VM's to Backup Server

    Looks like a guest agent communication problem?
  17. Proxmox API: Adding disk only if not yet existing?

    Today I did stumble on it in the pct(1) documentation though ;) (but not in the QEMU/qm(1) one yet)
  18. [SOLVED] Unlocking VM via API still not possible?

    Need to ask, as I'm hitting something similar (via stop): it MUST be a root@pam "login" (i.e. going through the api_user + password to get a "ticket" and then doing the API calls with that ticket) and NOT an API token (i.e. root@pam!tokenName), correct?
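For anyone landing here, the ticket flow being asked about is the standard PVE cookie authentication. A hedged sketch: the `/access/ticket` endpoint, `PVEAuthCookie` cookie and `CSRFPreventionToken` header are the documented PVE API conventions, while the host, password, node name and VMID below are made up:

```shell
PVE=https://pve.example.com:8006   # hypothetical host

# 1. Trade username + password for a ticket and CSRF token.
#    The JSON reply carries data.ticket and data.CSRFPreventionToken.
curl -sk --data-urlencode 'username=root@pam' \
         --data-urlencode 'password=secret' \
         "$PVE/api2/json/access/ticket"

# 2. Send the ticket as the PVEAuthCookie cookie; write methods
#    (POST/PUT/DELETE) also need the CSRFPreventionToken header.
#    $TICKET and $CSRF are the values extracted from step 1's reply.
curl -sk -b "PVEAuthCookie=$TICKET" \
         -H "CSRFPreventionToken: $CSRF" \
         -X POST "$PVE/api2/json/nodes/node1/qemu/100/status/stop"
```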
  19. Proxmox API: Adding disk only if not yet existing?

    Ah! Missed that, thank you!! https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_options https://pve.proxmox.com/pve-docs/qm.1.html https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/config I see the references to [file=]<volume> in ide/virtio/scsi, but nowhere...