Search results

  1. (General SPAM issue): Bugzilla.proxmox being used for SPAM/SEO

    Just got this: https://bugzilla.proxmox.com/show_bug.cgi?id=1007#c15, which is peppered with spam URLs. Where is the best place to log/report them?
  2. pct restore "simple" not using the stored rootfs information

    I'm 100% with you @fabian, and I did read the documentation. I'll refer you to the definition and use of brackets in writing: the sentence should stand on its OWN... the brackets are OPTIONAL details, and that's why I found the simple explanation and the results I experienced "surprising". Now I know...
  3. Downgrade 7.0 to 6.4

    ZFS is one of the great features to use with PVE, IMO. Containers aren't great 'cause of Linux's NIHS - yes, I'm using them, have been for a while, but you want to use KVM/QEMU VMs to compare to ESXi. Sounds like you need no reason, so just go back to ESXi ;(
  4. pct restore "simple" not using the stored rootfs information

    You missed the points: 1. It's counter-intuitive 2. The documentation is NOT clearly EXPLICIT about it... only in parentheses, which in the English language means "optional" https://www.google.com/search?q=parentheses+meaning&oq=paranthesis
  5. pct restore "simple" not using the stored rootfs information

    But the problem here is that the "expected" behaviour is that it'll parse the rootfs/mpX/etc. and use the exact storage in that config. I'd rather say that the manuals then need to be specific: "Simple mode will ALWAYS ONLY use local storage, not parsing the config, only the layout/sizes of the storages...
  6. pct restore "simple" not using the stored rootfs information

    Good day. Trying pct restore on the CLI today for the first time, on PVE 7.0: it seems to only use "local", not the rootfs config inside the backup, and I seem to need to provide the storage parameter to use the rootfs on the correct storage...
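A minimal CLI sketch of the workaround this post describes; the VMID, archive path, and storage name are placeholder values, not taken from the thread:

```shell
# Restore a container backup, explicitly naming the target storage.
# In "simple" restore mode PVE does not re-use the rootfs/mpX storages
# recorded in the backed-up config, so without --storage everything
# lands on "local". VMID 123, the archive path, and "local-zfs" are
# example values only.
pct restore 123 /var/lib/vz/dump/vzdump-lxc-123.tar.zst --storage local-zfs
```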
  7. zfs trim

    Right, let's start at the "beginning" :) The way I understand and use it: Host: they give the KVM/QEMU VM guests a block device. If you allocate it on ZFS as a ZVOL, it's a block device for all practical purposes, and the guest can write there within the boundaries. BUT the fun here, once...
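As a hedged sketch of the host-side piece implied here (VMID, disk, and storage names are assumptions for illustration): for the guest's TRIM requests to ever reach the ZVOL, the virtual disk has to advertise discard support.

```shell
# Enable discard pass-through on an existing virtual disk
# (VMID 123, "scsi0", and "local-zfs" are example values).
qm set 123 --scsi0 local-zfs:vm-123-disk-0,discard=on

# Inside the guest, freed blocks are then returned to the thin ZVOL by
# a periodic trim, e.g.:
#   fstrim -av
```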
  8. l2arc and zfs volumes

    Cheap, fast, reliable, you can only ever have 2
  9. Proxmox 7.x not network

    I think there is something amiss in the upgrade/installation w.r.t. ifupdown2 not being installed, but being "needed" somehow :shrug:
  10. linux bridge vs ovs Bridge

    That said, the OVS world has SDN (Software-Defined Networking) features where you can (using a "controller" of sorts) create rules on how to switch traffic based on various criteria; thus the notion of port isolation could well be implemented, but not with the same ease (and here ease includes...
  11. linux bridge vs ovs Bridge

    My "port isolation" is to put each client/stack in its own separate VLAN, and then have a single firewall managing the traffic accordingly. The OpenVSwitch then shares/connects to an interface that is 802.1Q-trunking to the other OpenVSwitches on the other cluster members, and that way I can...
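The layout described in this post can be sketched with ovs-vsctl; the bridge name, interfaces, and VLAN numbers below are assumptions for illustration, not values from the thread:

```shell
# An OVS bridge whose physical port trunks 802.1Q-tagged traffic to the
# other cluster members, with each client/stack in its own tagged VLAN.
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eno2              # trunk: carries all tagged VLANs
ovs-vsctl add-port vmbr1 tap100i0 tag=10   # client A, isolated in VLAN 10
ovs-vsctl add-port vmbr1 tap101i0 tag=20   # client B, isolated in VLAN 20
```

(On a PVE node the tap ports are normally attached by QEMU via the VM config rather than by hand; this is only an ad-hoc sketch of the topology.)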
  12. Proxmox 7.x not network

    My first guess: ifupdown2 is needed
  13. v6 to v7 - OVH SYS

    Have done it on the OVH (proper) Infra2 servers, with the only issue being the OpenVSwitch bridge connected to the vRack not coming up, fixed by "forcing" the ifupdown2 installation.
  14. l2arc and zfs volumes

    DO go and read up on the OpenZFS wikis and performance-tuning information. You'll need to remember that L2ARC is the 2nd level of the ARC (which lives in RAM), that you need RAM (i.e. it reduces the ARC) to hold the pointers into L2ARC, and that L2ARC is ephemeral, i.e. reboot and it's gone and will be re-primed as you...
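A minimal sketch of what attaching (and inspecting) an L2ARC device looks like; the pool and device names are placeholders:

```shell
# Attach a cache (L2ARC) device to an existing pool (names are examples).
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-DEVICE

# The cache row here shows how far the (ephemeral) L2ARC has re-primed
# since the last reboot.
zpool iostat -v tank
```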
  15. Downgrade 7.0 to 6.4

    Why btrfs and not ZFS?
  16. Proxmox VE 7.0 released!

    That was only the first "lab" system (And yes, it was `ldd` I used to track that problem ;) ), the 2nd/3rd cluster installation/upgrade had no netdata
  17. Proxmox VE 7.0 released!

    3x 6.4 -> 7 upgrades, all failed with the Open vSwitch bridge somehow not liking or adding the physical interface they are "bound" to. My colleague found the "solution" appears to be a simple installation of `ifupdown2` and a reboot to fix :shrug: Also, the one I was playing/testing...
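The "solution" quoted here amounts to something like the following on the upgraded node (a sketch of the described fix, not an official procedure):

```shell
# Install ifupdown2 (replaces the legacy ifupdown scripts) and reboot so
# the Open vSwitch bridge picks up its physical interface again.
apt update
apt install -y ifupdown2
systemctl reboot
```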
  18. VPN not working inside virtual machine (KVM)

    Okay, Scale - that's new; I guess they had to set up new/different networks, and to save a few extra IPs have the "internal network" with that 100.64.0.1. I notice an IP on bond0 - I would've expected bond0 to be attached to vmbr0 without an IP, as the IP should be on vmbr0, shouldn't...
  19. Open Vswitch - VLAN tagging

    OpenVSwitch and its utils must be separately installed, else you'll be using the Linux bridge utils (nothing wrong per se).
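On a Debian/PVE node that separate installation amounts to something like:

```shell
# Without this package, bridge definitions fall back to Linux bridges.
apt install -y openvswitch-switch
```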
  20. VPN not working inside virtual machine (KVM)

    Don't be skimpy and stingy; just fork out the money (once-off) for a /27 or /26 - my consulting invoice would cost more than the 3x extra IPs (broadcast, network and gateway). 100.64.0.1 - where is that? You should have the OVH gateway in there AFAIK, unless you are doing MAC-bouncing, and then...