Search results

  1. OVSBridge help

    Thanks, I set up a VXLAN and see the same issues between the VMs. Just for clarity, my two VMs are a NAS1 and a NAS2 instance; I replicate between the two, so I wanted a full 9-10 Gbit/s between them. Note: everything is MTU=8950 across this. * - I see pretty high numbers on the retry...
  2. OVSBridge help

    all interfaces and bridges using MTU = 9000, but not getting anything close to 10Gbps over that OVSBridge. I think I just need to get a 10Gbps managed switch and save myself all these headaches.
  3. OVSBridge help

    Diagram: I found the issue: the default setting when adding an interface is to include the Proxmox firewall. I've never had to disable this in the past over the native network, but it seems this was the issue on the OVS bridge. When I disabled the firewall on each VM interface: My VM to...
  4. OVSBridge help

    I have 3 nodes in my cluster and am using an OVSBridge setup. This is a variation of: https://pve.proxmox.com/wiki/Open_vSwitch#Example_4:_Rapid_Spanning_Tree_.28RSTP.29_-_1Gbps_uplink.2C_10Gbps_interconnect I was able to get all nodes to sync and work, they can ping and I get very good...
  5. Help with SDN

    Thanks, I do not know either; going to leave it alone for now...
  6. Help with SDN

    So maybe I do not understand something... I have two 10Gbps NIC cards (they each have two ports): NIC 1: Port1 and Port2; NIC 2: Port1 and Port2. I do not have the money to buy a 10Gbps RJ-45 switch for these two nodes, so I thought I could use a standard CAT 6e cable between NIC1 Port1 and NIC2...
  7. Help with SDN

    Well this is very odd... I removed the SDN config and rebuilt it. I did not change the underlying settings; they were all the same (vmbrs etc.). But now jumbo frames are working. BTW, the MTU setting was 9000 across the board last time as well; I've only been reverted to get the link working from...
  8. Help with SDN

    Hi, yes... there is no switch; this is direct NIC to NIC. There does not seem to be any response from the ping: root@pvenode01:~# ping 192.168.0.2 PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data. 64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.152 ms 64 bytes from 192.168.0.2: icmp_seq=2...
  9. Help with SDN

    This is from Node 02: 6: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master vmbr2 state UP group default qlen 1000 link/ether 3c:ec:ef:1b:c6:dc brd ff:ff:ff:ff:ff:ff 12: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000...
  10. Help with SDN

    I do not know why it worked either... but I was able to add it back and it's working; maybe something was not quite loaded, as you said. BTW, does this only work with MTU 1500? I noticed when I go to 9000 on the interfaces and bridge (and 8950 for the VXLAN zone and VMs) all hell breaks loose...
  11. Help with SDN

    By removing the gateway from vmbr2 on both nodes it seemed to work... I can now ping VM to VM. I want a dedicated link between nodes (basically NIC to NIC, no switch between), and as long as both use the same subnet there is no need for any gateway. But should it have worked with it set?
  12. Help with SDN

    Node01: /etc/pve/sdn/vnets.cfg vnet: myvet1 zone vxlan01 tag 100000 /etc/pve/sdn/zones.cfg vxlan: vxlan01 peers 192.168.0.1,192.168.0.2 ipam pve mtu 1450 /etc/network/interfaces auto lo iface lo inet loopback auto eno1 iface eno1 inet manual auto eno2...
  13. Help with SDN

    I am trying to set up VXLAN using the SDN feature as outlined here: https://pve.proxmox.com/pve-docs/chapter-pvesdn.html#pvesdn_zone_plugin_simple My requirement is basically to use a 10G interface between the two nodes for better VM-to-VM transfer rates (NAS replica). Maybe there are better...
  14. Why is not nic not seen?

    Well, that is odd; the second 10Gb NIC is that eno1 (I used ethtool to figure out it is a 10G NIC)... why is that? Where do these names come from?
  15. Why is not nic not seen?

    See the following [03:00.0 and 03:00.1]: root@pvenode02:~# lspci -k | sed -n '/Ethernet/,/driver in use/p' 03:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01) Subsystem: Super Micro Computer Inc Ethernet Controller 10-Gigabit X540-AT2...
  16. Noob need help: My Zpool size is half what I expect.

    Thanks for the help... yes, maybe it would help to clarify. I will be passing "dpool01/Media" as a large disk to the TrueNAS VM (VirtIO), and "dpool01/Security" to my ZoneMinder VM, again as a large disk. The others probably as iSCSI or, more likely, NFS. Only Media is going to TrueNAS. I like...
  17. Noob need help: My Zpool size is half what I expect.

    OK, here is what I'm planning for this... I will be giving the Media datasets to the TrueNAS VM; the rest can go to other VMs for other purposes. Any problems/suggestions? Please note: I have another zpool for the VMs' OS disks (NVMe, 1 x 1TB mirror), so this pool is just for large data sets...
  18. Noob need help: My Zpool size is half what I expect.

    RAIDZ2 + RAIDZ2, not as a mirror but striped. I was experimenting with that to work out, when I want to add more drives, how best to add to an existing RAIDZ2 and grow the pool, basically. As I understand it, you cannot just add disks to a RAIDZ2. So just practicing to see, if I do a RAIDZ2 with 8 drives, how best to...
  19. Noob need help: My Zpool size is half what I expect.

    Thanks, I thought about a passthrough, but my L2 write cache is not attached to the HBA. Can I pass through a partition from my NVMe devices?
  20. Noob need help: My Zpool size is half what I expect.

    Thanks for the response! This helps... wow, 33%... I have a chassis with space for 14 HDDs and 2 SSDs for read cache. I only have 8 HDDs now but want to expand later. I do not know if ZFS can do that yet (expand RAIDZ2); it seemed like it was planned 2 years ago, but is it real yet? If it can...
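
Several of the "Help with SDN" results above describe the same underlying setup: a direct 10G NIC-to-NIC link (no switch), both ends in one subnet, and no gateway on the dedicated bridge. A hypothetical /etc/network/interfaces fragment for one node is sketched below; it assumes eno1 is the 10G port and reuses the 192.168.0.x addressing seen in the threads, so adjust names and addresses to your hardware.

```
# Hypothetical sketch for Node01 (Node02 mirrors this with address 192.168.0.2/24).
# Direct NIC-to-NIC link: same subnet on both ends, no gateway on this bridge.
auto eno1
iface eno1 inet manual
    mtu 9000

auto vmbr2
iface vmbr2 inet static
    address 192.168.0.1/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 9000
```

As noted in result 11, leaving a gateway off this bridge is deliberate: the host's default route stays on the management bridge, and the point-to-point subnet needs no routing.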
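
The MTU pairs recurring in these threads (1450 for a zone over a 1500 underlay, 8950 over a 9000 jumbo-frame underlay) come from VXLAN's fixed encapsulation overhead. A small sketch of the arithmetic:

```python
# VXLAN over IPv4 adds a fixed per-packet overhead:
# outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN header (8) = 50 bytes
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

def vxlan_inner_mtu(underlay_mtu: int) -> int:
    """Largest MTU a VXLAN vnet can carry over an underlay with the given MTU."""
    return underlay_mtu - VXLAN_OVERHEAD

print(vxlan_inner_mtu(1500))  # 1450, matching the zone MTU in the posted zones.cfg
print(vxlan_inner_mtu(9000))  # 8950, the jumbo-frame value used between the NAS VMs
```

This is why setting the zone and VM interfaces to the full 9000 "breaks loose": the encapsulated frames exceed the underlay MTU and get dropped instead of fragmented.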
