Search results

  1. spirit

    Need Advice. Replacing consumer NVMe used for Ceph DB/WAL in a 3-Node Cluster

    don't put your WAL/DB on a separate drive; simply use your enterprise SSD as an OSD, with the WAL/DB on the same disk
  2. spirit

    Default route advertisement to SDN EVPN

    ah ok ! ^_^ for the restart, I really don't know (maybe it's an option on the FortiGate side), but you can use "/usr/lib/frr/frr-reload.py /etc/frr/frr.conf --reload" to reload the config without restarting the frr process
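    The reload command quoted above can be sketched like this (paths are the Debian/Proxmox defaults; frr-reload.py also has a --test mode to preview changes first):

    ```shell
    # Apply /etc/frr/frr.conf to the running FRR daemons without restarting
    # them: frr-reload.py computes a diff against the running configuration
    # and pushes only the changed lines.
    /usr/lib/frr/frr-reload.py /etc/frr/frr.conf --reload

    # Preview what would change, without applying anything:
    /usr/lib/frr/frr-reload.py /etc/frr/frr.conf --test
    ```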
  3. spirit

    Default route advertisement to SDN EVPN

    (just to be sure, is your frr.conf on the FortiGate? does FortiGate use FRR?)
  4. spirit

    Default route advertisement to SDN EVPN

    you need to announce a default type-5 route from your FortiGate, and it should already be done with your "address-family l2vpn evpn / default-originate ipv4 / default-originate ipv6 / exit-address-family". "neighbor VTEP activate" needs to be in "address-family l2vpn evpn", like in your config...
  5. spirit

    Upgrade FRR 10.2.2-1 failed on node

    this message looks strange. what is the content of /etc/frr/daemons ?
  6. spirit

    Performance Issues with CEPH osds

    what is your NVMe model? this seems unrelated: ICMP is network, and slow storage can't impact it. but a slow network between the OSDs could impact IO. (have you tried some ICMP tests between your Proxmox nodes directly?) it should be investigated
  7. spirit

    [SOLVED] Migrating from Hyper-V to Proxmox disk and skill issues

    you can use " qm disk import <vmid> yourexportedfile.qcow2 <targetstorage> --format raw|qcow2", it'll import the disk, do the format converstion and add the disk to vm configuration .
  8. spirit

    Ceph keeps crashing, but only on a single node

    mmm, both ceph-mon && ceph-mgr crashing is really strange. I'm not aware of a Ceph bug in 18.2.7. maybe ask on the Ceph mailing list, but to me it looks like a hardware problem (RAM or maybe CPU)
  9. spirit

    Network Bridge Permissions for Non Admin Users

    A vnet in sdn is a bridge, configured at datacenter level then deployed locally on each host. That's why it's the same permission name.
  10. spirit

    Host losing network when starting Windows 2025 VM

    maybe you have your Proxmox host IP on a specific VLAN (eth.X), and the VM on a non-VLAN-aware bridge with the same VLAN?
  11. spirit

    Dutch Proxmox Day 2025 - Free Proxmox Community Event

    @tuxis Are you looking for speakers? Maybe I could give a talk on my current work on SAN snapshot support. (I have already given it at a conference in French; the slides are almost ready, I just need to translate them to English)
  12. spirit

    Server restarting? unknown reason at midnight

    service restarts occur on logrotate, but they have no impact on the VMs. (the "server" in the log is the service, not the host) /etc/logrotate.d/pve /var/log/pveproxy/access.log { rotate 7 daily missingok compress delaycompress notifempty...
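    If in doubt, logrotate can show what the quoted config would do without actually rotating anything:

    ```shell
    # Debug/dry-run mode: print the actions logrotate would take for the
    # pve config, without touching any log file.
    logrotate -d /etc/logrotate.d/pve
    ```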
  13. spirit

    Network Bridge Permissions for Non Admin Users

    go to the network zone in the left tree, then add a permission on the whole zone or on a specific bridge/VLAN
  14. spirit

    Network Bridge Permissions for Non Admin Users

    you need to add the PVESDNUser role to your bridge
  15. spirit

    HA Cluster on a cheap

    you need HA if you want the VMs of a dead node to be automatically restarted on another node. you can install a corosync qdevice on your TrueNAS as a 3rd vote. https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
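    Following the linked wiki page, the qdevice setup is roughly the following (assuming the TrueNAS box can run the corosync-qnetd package; the IP is a placeholder):

    ```shell
    # On the always-on third machine (here: the TrueNAS box), install the
    # daemon that provides the external vote:
    apt install corosync-qnetd

    # On every Proxmox cluster node, install the qdevice client:
    apt install corosync-qdevice

    # On one Proxmox node, register the external vote provider:
    pvecm qdevice setup 192.168.1.50
    ```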
  16. spirit

    Proxmox Harware Requirements Windows server 2025 & Linux Red Hat 9/10

    no, qemu can't emulate missing cpu flags. (vm will not start if you choose a virtual cpu model newer than your physical cpu model)
  17. spirit

    SDN VNet with 802.1q tags (Q-in-VNI) Support

    you need to enable vlan aware on your vnet (in advanced options)
  18. spirit

    PVE 7to8 online upgrade

    This is the way.
  19. spirit

    [SOLVED] 1 Gbps Limit VM to/from Host (RT8125)

    is the host IP in the same subnet and the same VLAN as the VM?
  20. spirit

    Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    please test with PVE 8 && a recent QEMU; I remember that some iothread crashes have been fixed since. (and PVE 7 is EOL anyway)