Search results

  1. spirit

    Network Bridge Permissions for Non Admin Users

    A VNet in SDN is a bridge, configured at the datacenter level and then deployed locally on each host. That's why it's the same permission name.
  2. spirit

    Host losing network when starting Windows 2025 VM

    Maybe you have your Proxmox host IP on a specific VLAN (eth.X), and the VM on a non-VLAN-aware bridge with the same VLAN?
  3. spirit

    Dutch Proxmox Day 2025 - Free Proxmox Community Event

    @tuxis Are you looking for speakers? Maybe I could give a talk on my current work on SAN snapshot support. (I have already given it at a conference in French; the slides are almost ready, I just need to translate them to English.)
  4. spirit

    Server restarting? unknown reason at midnight

    Service restarts occur on logrotate, but they have no impact on the VMs. (The "server" in the log is the service, not the host.) /etc/logrotate.d/pve /var/log/pveproxy/access.log { rotate 7 daily missingok compress delaycompress notifempty...
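
    For readability, the directives quoted above laid out the way a logrotate stanza is normally written (the rest of the file is truncated in the snippet, so only the visible part is shown):

      /var/log/pveproxy/access.log {
          rotate 7
          daily
          missingok
          compress
          delaycompress
          notifempty
          ...
      }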
  5. spirit

    Network Bridge Permissions for Non Admin Users

    Go to the network zone in the left tree, then add a permission on the whole zone or on a specific bridge/VLAN.
  6. spirit

    Network Bridge Permissions for Non Admin Users

    You need to add the PVESDNUser role to your bridge.
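
    A rough CLI equivalent of that GUI step, assuming a zone named "myzone", a VNet named "vnet1" and a user "alice@pve" (all placeholders; the exact ACL path can differ between PVE versions):

      # grant the SDN user role on one VNet of the zone
      pveum acl modify /sdn/zones/myzone/vnet1 --users alice@pve --roles PVESDNUser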
  7. spirit

    HA Cluster on a cheap

    You need HA if you want to automatically restart the VMs of a dead node on another node. You can install a corosync QDevice on your TrueNAS as an external third vote. https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
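
    A minimal sketch of the QDevice setup described on the linked wiki page, assuming the qnetd daemon can run on the TrueNAS side (for example in a Debian VM or jail there); the IP is a placeholder:

      # on the external vote host (TrueNAS side)
      apt install corosync-qnetd

      # on every Proxmox cluster node
      apt install corosync-qdevice

      # on one Proxmox node, register the external vote
      pvecm qdevice setup 192.0.2.10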
  8. spirit

    Proxmox Hardware Requirements Windows server 2025 & Linux Red Hat 9/10

    No, QEMU can't emulate missing CPU flags. (The VM will not start if you choose a virtual CPU model newer than your physical CPU model.)
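
    For illustration, setting the virtual CPU model from the CLI (VMID 100 is a placeholder); "host" exposes the physical CPU's own flags instead of a named model:

      qm set 100 --cpu host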
  9. spirit

    SDN VNet with 802.1q tags (Q-in-VNI) Support

    You need to enable "VLAN aware" on your VNet (in the advanced options).
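
    A possible CLI equivalent of that checkbox, assuming a VNet named "vnet1" (placeholder; the option name may differ between PVE versions, and the SDN configuration still has to be applied afterwards):

      pvesh set /cluster/sdn/vnets/vnet1 --vlanaware 1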
  10. spirit

    PVE 7to8 online upgrade

    This is the way.
  11. spirit

    [SOLVED] 1 Gbps Limit VM to/from Host (RT8125)

    Is the host IP in the same subnet and same VLAN as the VM?
  12. spirit

    Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    Please test with PVE 8 and a recent QEMU; I remember that some iothread crashes have been fixed since. (And PVE 7 is EOL anyway.)
  13. spirit

    ifupdown2 defunct after updating to 8.2

    Unless you have InfiniBand switches and applications using InfiniBand, no, you don't need it ^_^
  14. spirit

    Nvidia VGPU - No virtual devices

    Mdev devices are different from PCI functions, so you can't see them with lspci, but you can see them with "mdevctl list". You simply need to pass the mdev (nvidia-xxx) in the PCI passthrough GUI.
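
    The same steps sketched from the CLI; the VMID, PCI address and the nvidia-xxx type are placeholders:

      # list mediated devices known to the host
      mdevctl list

      # attach a vGPU profile to a VM via the hostpci mdev option
      qm set 100 --hostpci0 0000:81:00.0,mdev=nvidia-xxx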
  15. spirit

    Dutch Proxmox Day 2025 - Free Proxmox Community Event

    Ah, in English, great. I'll try to come this year :)
  16. spirit

    vGPU + physical display?

    You can't use the physical output. vGPU is mostly used for GPU compute (AI, machine learning, ...). Otherwise you need a fast network display protocol, like for cloud gaming.
  17. spirit

    Proxmox 8 update stuck on MACsec when rebooting

    the problem with "upgrade" is that it's only upgrade installed packages. if an update of a proxmox package have a depend on a new lib not yet installed, it could break it. (This is different than debian, where the package list is fixed over the whole cycle. Proxmox can add new features and...
  18. spirit

    [TUTORIAL] FABC: Why is ProxmoxVE using all my RAM and why can't I see the real RAM usage of the vms in the dashboard?

    Memory is not reserved at VM start (unless you define static memory/hugepages in the VM conf directly), so it can be dynamically allocated to a different VM. Then, once a VM has reserved a memory page, it stays reserved. Note that Windows allocates all memory pages with zeroes at boot. (So it's...
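
    An example of such a static memory setting in the VM configuration, assuming hugepages are what is meant here (VMID and sizes are placeholders):

      # /etc/pve/qemu-server/100.conf (excerpt)
      memory: 8192
      hugepages: 1024    # back guest RAM with 1 GiB hugepages, allocated at VM start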
  19. spirit

    Handling Division-Based VLANs Across Sites in Proxmox SDN

    Well, I don't know how far apart these sites are, but you can't have too much latency (3~5 ms) for one cluster (and you must keep quorum, i.e. a majority of nodes up, if one site goes down). The vCenter-like approach is PDM (separate clusters per site), and that should normally be the design for your setup.