You need a minimum of 2 ports to set up a bond.
ovs-vsctl add-bond vmbr0 bond0 ens18 will tell you that it needs a minimum of 4 arguments (that is, 2 physical ports).
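With two NICs it goes through, for example (just a sketch; ens18/ens19 are placeholder interface names, adjust to your hardware):

# create the OVS bridge, then attach a bond with two physical ports
ovs-vsctl add-br vmbr0
ovs-vsctl add-bond vmbr0 bond0 ens18 ens19
# optional: enable LACP if the switch side is set up for it
ovs-vsctl set port bond0 lacp=active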
The problem starts with the fact that the initial Proxmox install has no access to the Proxmox repositories by default. Thus `apt install openvswitch-switch` installs the Debian Stretch version, which is severely broken. I had to dig through the forum and stumble upon a mention of the...
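For reference, pointing apt at the PVE repository first, so it installs the Proxmox build of openvswitch-switch rather than the Stretch one, looks roughly like this (a sketch for PVE 5.x on Stretch; use the pve-enterprise repo instead if you have a subscription):

# add the pve-no-subscription repository and install OVS from it
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
apt install openvswitch-switch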
I'm running into the same issues with openvswitch (with the pve-no-subscription repository installed).
A simple bridge like:
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
ovs_type OVSBridge
Would not work with openvswitch 2.7.0-2 from the no-subscription repository.
This is a fresh...
I'm also having issues with Proxmox 5.1 and openvswitch.
I started with a simple configuration:
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
ovs_type OVSBridge
This should just create a bridge with no ports, and it should technically work perfectly. Yet, after I reboot, there...
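For what it's worth, these are the checks I run after a reboot (a rough sketch; vmbr0 as in the config above):

# is the OVS daemon actually running?
systemctl status openvswitch-switch
# does the OVS database still know about the bridge?
ovs-vsctl show
# bring up allow-ovs interfaces by hand if ifupdown skipped them at boot
ifup --allow=ovs vmbr0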
Just a single SuperMicro server with 12 bays + 2 internal for storage (I don't have the exact model on hand right now). Unfortunately the client didn't have enough budget for more, so he chose to cheap out on the storage part, against my advice... I explained the issue(s), so that's gonna be on...
I was under the (probably wrong) impression that FCoE processing is offloaded to the actual card, so the CPU / kernel (?) doesn't do much extra work to transfer the data back and forth.
I'll keep that in mind.
I would LOVE to do Ceph, but the nodes we have ordered only have 3 drive bays...
Isn't FC(oE) faster than iSCSI over 10GbE? It would also have the advantage of not requiring an "extra" software layer. This is all new to me and I'm just theorizing right now. I'm waiting for the hardware to be delivered.
You can find the X540-T2 on eBay for $135/ea from Chinese sellers.
I haven't tried it, yet.
This will be my first time doing this, so I hope I'm not gonna run into any issues.
Yes, I am planning to have HA, though for the moment the single point of failure will indeed be the storage, depending on...
Meanwhile my plan has been revised a bit. I will have to do FCoE instead of FC (not a big difference), since I wasn't able to score any cheap FC cards. Yes, FCoE was cheaper, for some reason.
Anyway,
I'm planning on creating two different ZVOLs on the zpool, one for SSD storage and one for HDD...
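Roughly what I have in mind on the storage box (just a sketch; pool names and sizes are placeholders):

# one zvol per tier, to be exported as a LUN later
zfs create -V 2T ssdpool/pve-ssd
zfs create -V 8T hddpool/pve-hdd
# daily snapshots can then be taken per zvol, e.g.
zfs snapshot ssdpool/pve-ssd@daily-2018-01-15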
Could you be a bit more specific on Multipath?
I'm still planning on doing it on top of ZFS (because I want the daily snapshots), but it's not clear to me how multipath works and the documentation from Red Hat is sort of lacking.
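From what I've gathered so far, the initiator (Proxmox) side would be roughly this (a sketch, untested, so corrections welcome):

# on each Proxmox node
apt install multipath-tools

# /etc/multipath.conf (minimal)
defaults {
        user_friendly_names yes
        find_multipaths yes
}

# each LUN should then show up as a single dm device, with one path per FC link
multipath -ll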
This is probably a stupid idea, but:
Could I use ZFS on the target, and then add LVM on top of a zvol, and export the zvol as a LUN for the initiator(s)?
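Something along these lines on the target side is what I'm picturing, using LIO's targetcli (a sketch; zvol names and WWPNs are made up):

# back the LUN with the zvol's block device
targetcli /backstores/block create name=pve-ssd dev=/dev/zvol/ssdpool/pve-ssd
# create an FC target on the QLogic HBA (WWPN of the target port)
targetcli /qla2xxx create 21:00:00:24:ff:00:00:01
# export the backstore as a LUN and allow an initiator's WWPN
targetcli /qla2xxx/21:00:00:24:ff:00:00:01/luns create /backstores/block/pve-ssd
targetcli /qla2xxx/21:00:00:24:ff:00:00:01/acls create 21:00:00:24:ff:00:00:02
targetcli saveconfig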
Thank you.
So there is no need to worry about simultaneous access on the same PVs/VGs when using LVM?
I am leaning more towards ZFS (because we're deeper into it than LVM), but the docs/wiki only mention ZFS over iSCSI (not plain SCSI, which is basically what FC is). Could this be...
Unfortunately I don't have the budget for a specialized SAN, so I will have to use LIO or FreeBSD's tools to do that.
I have read the Wiki in regards to storage, but it's still not clear.
Seems like the best candidate would be ZFS over iSCSI, but Fibre Channel is SCSI, not iSCSI, so I'm...
Hello,
I'm looking into a new setup using Proxmox.
I will have 4 x Proxmox Nodes, each with a FC HBA.
These will be connected to another Server with FC HBA in target mode, running FreeBSD or Linux (not sure yet) with 2 zpools (we will have different storage for SSDs and HDDs).
What is not...