Search results

  1. Openvswitch issues

    You need a minimum of 2 ports to set up a bond. ovs-vsctl add-bond vmbr0 bond0 ens18 will tell you that it needs a minimum of 4 arguments (that is, 2 physical ports).
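    The minimum-argument error above reflects the bond needing at least two physical ports. A sketch of a working invocation (the interface names ens18/ens19 are placeholders for your own NICs):

    ```shell
    # Create the OVS bridge, then bond two physical ports into it.
    # ovs-vsctl rejects add-bond with fewer than two interfaces.
    ovs-vsctl add-br vmbr0
    ovs-vsctl add-bond vmbr0 bond0 ens18 ens19
    ```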
  2. Proxmox VE 5.0 and Open vSwitch

    The problem starts from the fact that the initial Proxmox install lacks access to the Proxmox repositories by default. Thus `apt install openvswitch-switch` installs the Debian Stretch version, which is severely broken. I had to dig through the forum and stumble upon a mention of the...
  3. Openvswitch issues

    I'm running into the same issues with openvswitch (with the pve-no-subscription repository installed). A simple bridge like: auto vmbr0 allow-ovs vmbr0 iface vmbr0 inet manual ovs_type OVSBridge Would not work with openvswitch 2.7.0-2 from the no-subscription repository. This is a fresh...
  4. Proxmox VE 5.0 and Open vSwitch

    I'm also having issues with Proxmox 5.1 and openvswitch. I started with a simple configuration: auto vmbr0 allow-ovs vmbr0 iface vmbr0 inet manual ovs_type OVSBridge This should simply create a bridge with no ports, and it should technically work perfectly. Yet, after I reboot, there...
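    The flattened configuration quoted in the snippets above is an /etc/network/interfaces stanza; written out line by line it reads:

    ```
    auto vmbr0
    allow-ovs vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
    ```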
  5. Fibre Channel, shared storage, how?

    Just a single SuperMicro server with 12 Bays + 2 internal for storage (I don't have the exact model on hand right now). Unfortunately the client didn't have enough budget for more so he chose to cheapen out on the storage part, against my advice... I explained the issue(s), so that's gonna be on...
  6. Fibre Channel, shared storage, how?

    You mean to add more machines? No budget yet :(
  7. Fibre Channel, shared storage, how?

    I was under the (probably wrong) impression that FCoE processing is offloaded to the card itself, so the CPU/kernel isn't doing much extra work to transfer the data back and forth. I'll keep that in mind. I would LOVE to do Ceph, but the nodes we have ordered only have 3 drive bays...
  8. Fibre Channel, shared storage, how?

    Isn't FC(oE) faster than iSCSI over 10GbE? It would also have the advantage of not requiring an "extra" software layer. This is all new to me and I'm just theorizing right now; I'm waiting for the hardware to be delivered.
  9. Fibre Channel, shared storage, how?

    You can find X540-T2 cards on eBay for $135/ea from Chinese sellers. I haven't tried it yet. This will be my first time doing this, so I hope I'm not going to run into any issues. Yes, I am planning to have HA, though for the moment the single point of failure will indeed be the storage, depending on...
  10. Fibre Channel, shared storage, how?

    Meanwhile my plan has been revised a bit. I will have to do FCoE instead of FC (not a big difference), due to not being able to score some cheap FC cards. Yes, FCoE was cheaper, for some reason. Anyway, I'm planning on creating two different ZVOLs on the zpool, one for SSD storage and one for HDD...
  11. Fibre Channel, shared storage, how?

    Could you be a bit more specific on Multipath? I'm still planning on doing it on top of ZFS (because I want the daily snapshots), but it's not clear to me how multipath works and the documentation from Red Hat is sort of lacking.
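    On the multipath question above: with Linux dm-multipath (package multipath-tools), a LUN visible over several FC paths is collapsed into a single /dev/mapper device that survives the loss of any one path. A minimal sketch of /etc/multipath.conf (these option values are common defaults, not Proxmox-specific guidance):

    ```
    # /etc/multipath.conf -- minimal sketch
    defaults {
        user_friendly_names yes   # name maps mpathN instead of by WWID
        find_multipaths     yes   # only assemble devices seen on >1 path
    }
    ```

    After starting multipathd, `multipath -ll` lists the assembled maps and their paths.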
  12. Fibre Channel, shared storage, how?

    This is probably a stupid idea, but: Could I use ZFS on the target, and then add LVM on top of a zvol, and export the zvol as a LUN for the initiator(s)?
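    The idea in the snippet above can be sketched with LIO's targetcli: create a zvol on the target host and register it as a block backstore; the initiators then see an ordinary SCSI LUN and can layer LVM (or anything else) on top of it. The pool name and size below are placeholders:

    ```shell
    # On the target host: carve a zvol out of the pool and register it
    # with LIO as a block-backed backstore. "tank" and 500G are examples.
    zfs create -V 500G tank/lun0
    targetcli /backstores/block create name=lun0 dev=/dev/zvol/tank/lun0
    ```

    The remaining step, binding the backstore to an FC or FCoE target port and mapping the LUN to the initiators, depends on the HBA driver and is omitted here.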
  13. Fibre Channel, shared storage, how?

    HA is a must, unfortunately.
  14. Fibre Channel, shared storage, how?

    Thank you very much. This will be at least my fall-back plan if I don't figure out how to get ZFS working in a similar way.
  15. Fibre Channel, shared storage, how?

    Thank you. So there is no need to worry about simultaneous access to the same PVs/VGs when using LVM? I am leaning more towards ZFS (because we're deep into it, more than LVM), but the docs/wiki only mention ZFS over iSCSI (not plain SCSI, which is basically what FC is). Could this be...
  16. Fibre Channel, shared storage, how?

    Unfortunately I don't have the budget for a specialized SAN, so I will have to use LIO or FreeBSD's tools to do that. I have read the wiki in regards to storage, but it's still not clear. It seems like the best candidate would be ZFS over iSCSI, but Fibre Channel is SCSI, not iSCSI, so I'm...
  17. Fibre Channel, shared storage, how?

    Hello, I'm looking into a new setup using Proxmox. I will have 4 x Proxmox Nodes, each with a FC HBA. These will be connected to another Server with FC HBA in target mode, running FreeBSD or Linux (not sure yet) with 2 zpools (we will have different storage for SSDs and HDDs). What is not...

