Search results

  1. spirit

    Which bridge for VLAN SDN Zone?

    1. You can use any vmbrX plugged on enoX without any VLAN enoX.Y, then create the vnets where you'll define the VLAN tag number. It's better to use vlan-aware on vmbrX, but it's not mandatory. 2. Yes, simply define the vnets, move the VM interfaces to the vnets, delete the old VLAN setup. 3...
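
    A rough sketch of what that could look like on disk (zone and vnet names here are placeholders, not from the original post):

        # /etc/pve/sdn/zones.cfg -- VLAN zone bound to an existing bridge
        vlan: myvlanzone
            bridge vmbr0
            ipam pve

        # /etc/pve/sdn/vnets.cfg -- one vnet per VLAN tag
        vnet: vnet10
            zone myvlanzone
            tag 10
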
  2. spirit

    SDN VLAN SNAT now working

    It's currently not supported, as you need the vnet to be the gateway of the VM (so you would have the same IP on each host in the same VLAN, and it won't work). Currently, it only works with layer 3 zones (simple && evpn zones).
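
    For comparison, in a layer 3 simple zone SNAT is enabled per subnet; a rough sketch from memory (zone, vnet and addresses are placeholders):

        # /etc/pve/sdn/subnets.cfg -- subnet of a simple-zone vnet with SNAT enabled
        subnet: simplezone-10.0.10.0-24
            vnet vnet1
            gateway 10.0.10.1
            snat 1
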
  3. spirit

    Integration with OVS/OVN

    I know of one user working on it (I have tried to help him), but no news since last year. I can share code if needed (but I don't have time to work on it).
  4. spirit

    Cluster malfunction

    You can edit /etc/corosync/corosync.conf on each node (don't forget to increase config_version) and restart corosync on each node, then copy /etc/corosync/corosync.conf to /etc/pve/corosync.conf once the cluster is OK.
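
    Roughly, that procedure could look like this (sketch only; run the first two steps on every node and only copy back once quorum is restored):

        # on each node: edit the local copy and bump config_version
        nano /etc/corosync/corosync.conf

        # restart corosync on each node
        systemctl restart corosync

        # once the cluster is quorate again, push the file back into pmxcfs
        cp /etc/corosync/corosync.conf /etc/pve/corosync.conf
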
  5. spirit

    [SOLVED] [SDN/QinQ Issue] Bridge Mismatch Between Proxmox SDN Zone and /etc/network/interfaces.d/sdn

    If vmbr0 is not vlan-aware, the service VLAN is set on the physical interface enslaved in the defined bridge. If vmbr0 is vlan-aware, the service VLAN is set on a VLAN interface on top of vmbr0. Both should work normally.
  6. spirit

    random crash on PVE no sub

    Seems to be memory related. Maybe hardware related.
  7. spirit

    Does Proxmox plan to support SCSI over FC?

    Hi, Proxmox doesn't use storage snapshots for backup. I'm currently working on adding snapshots for shared LVM (no official target date, I'm hoping for PVE 9). It'll work with LVM over SCSI|FC.
  8. spirit

    SDN suddenly stopped working on one node

    please provide your sdn configs /etc/pve/sdn/*.cfg
  9. spirit

    Cluster aware FS for shared datastores?

    And the write performance with ocfs2 or gfs2 is not great either, from my tests last year (mostly on new block allocation and when you take snapshots).
  10. spirit

    PVE 8.4 Keeps Crashing!

    Maybe your EVO is dead? Don't use these shitty EVO drives on Proxmox, you are going to burn them with PVE sync writes.
  11. spirit

    VxLAN and 1500 MTU

    Proxmox firewall? Or a physical firewall/router somewhere on your network? (In that case, the MTU of the firewall's interfaces needs to be increased too.)
  12. spirit

    VxLAN and 1500 MTU

    Indeed, you need to increase the MTU on your physical switch ports to 1550, for example, to handle the 50 bytes of VXLAN overhead if you want to use 1500 in the VMs.
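
    A minimal sketch of what that might look like in /etc/network/interfaces (interface names and address are assumptions), with the underlay carrying 1550 so guests can keep 1500:

        auto eno1
        iface eno1 inet manual
            mtu 1550

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            mtu 1550
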
  13. spirit

    Cluster aware FS for shared datastores?

    I'm currently working on adding snapshots on shared LVM, no target date yet ("when it's done").
  14. spirit

    [SOLVED] Taking 40min to create a VM...

    I don't know how much data or how many files you need to write for the whole setup, but what do you expect with 1 HDD? Maybe try to format your disk with LVM-thin instead of using qcow2 files for your VM.
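
    If you go the LVM-thin route, a rough sketch (assuming an existing volume group named "pve" with free space; pool and storage names are placeholders):

        # create a thin pool in the existing volume group
        lvcreate -L 200G -T pve/vmdata

        # register it as a Proxmox storage for VM disks
        pvesm add lvmthin vmdata --vgname pve --thinpool vmdata --content images,rootdir
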
  15. spirit

    SDN with multiple VLAN trunks

    Currently trunks are only available on vmbrX directly, and filtering of allowed VLANs is done in the VM configuration directly (net: ....,trunks=4-10;56;100). SDN usage is currently really 1 VLAN = 1 vnet (because there are also IPAM, DHCP, subnets, ... which can't work with multiple networks...
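
    As an illustration of the trunk filtering mentioned above (MAC address and bridge are placeholders), the VM config line could look like:

        # /etc/pve/qemu-server/<vmid>.conf -- allow only VLANs 4-10, 56 and 100 on this trunk port
        net0: virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,trunks=4-10;56;100
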
  16. spirit

    Concern with Ceph IOPS despite having enterprise NVMe drives

    300k IOPS at 4k is around 10 Gbit/s (300,000 × 4 KiB ≈ 1.2 GB/s ≈ 10 Gbit/s). I don't know if the full-mesh network is able to balance traffic correctly across both NICs, but anyway, 10 Gbit/s is pretty low (1 NVMe can reach 10 Gbit/s), so you need ~50 Gbit/s minimum for full speed. Also note that reads use less CPU than writes, so...
  17. spirit

    Concern with Ceph IOPS despite having enterprise NVMe drives

    You can increase the mon_max_pg_per_osd value; the Ceph devs are going to increase it for the next version anyway. ceph config set mon mon_max_pg_per_osd 500, for example. With a low number of OSDs, it's really recommended to increase it, because of pg_lock contention (or you can also create...
  18. spirit

    Concern with Ceph IOPS despite having enterprise NVMe drives

    Have you tried with multiple rados bench instances in parallel, or increasing the -t value? (Maybe you are CPU-limited on the client?) But 40,000 IOPS seems quite low. (You could increase pg_num to 1024; the new recommendation for NVMe is 200 PGs per OSD.) Maybe you can try to run rados bench...
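
    For example (pool name, duration and thread counts are just placeholders), running several rados bench writers in parallel could look like:

        # 4 writers in parallel against the same pool, 4 KiB objects, 64 threads each
        for i in 1 2 3 4; do
            rados bench -p testpool 60 write -b 4096 -t 64 --run-name bench$i --no-cleanup &
        done
        wait

        # remove the benchmark objects afterwards
        rados -p testpool cleanup
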