Search results

  1. Open vSwitch across physical servers

    Thanks very much for your input - I'll go with switching rather than mesh nodes, for redundancy and the ability to scale. - 'Public' networking (public, private & management) separated by VLAN on 10Gbps LACP - Ceph on 10Gbps LACP - Corosync with a redundant ring, one 1Gbps network on one switch & one...
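
    A minimal sketch of that redundant Corosync ring on Proxmox VE 6+ (corosync 3), assuming a hypothetical cluster name "demo" and made-up addresses on the two separate 1Gbps networks:

      # First node: create the cluster with two corosync links (rings)
      pvecm create demo --link0 10.10.10.1 --link1 10.10.20.1
      # Each further node joins, giving its own address on both links
      pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.10.20.2
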
  2. Open vSwitch across physical servers

    Do you think it's valid to recommend the "Meshed network" approach as documented here? https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server I've been thinking that, using this technique, I could set up as below. It'd be greatly appreciated if you wouldn't mind casting your eye over it and...
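
    The wiki page linked above offers several variants; as a minimal sketch, the broadcast one puts each node's two direct-link ports into a broadcast-mode bond (the interface names ens19/ens20 and the 10.15.15.0/24 subnet are assumptions here):

      # /etc/network/interfaces excerpt, node 1 of the mesh
      auto bond0
      iface bond0 inet static
          address 10.15.15.1/24
          bond-slaves ens19 ens20
          bond-mode broadcast
          bond-miimon 100
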
  3. Open vSwitch across physical servers

    Thanks Spirit - I believe we did have LACP, but quite why the failover didn't work I'm not sure. I'm pretty sure there's something we're not being told by our infrastructure provider...
  4. Open vSwitch across physical servers

    Hi Spirit, According to our provider, the active went into a semi-down state and the standby failed to kick in. It caused enough issues with our Xen cluster, so I know it'd definitely cause problems with fencing once we've moved over to Proxmox. Personally I don't see how it could have happened so...
  5. How would Ceph/Proxmox handle a switching failure

    Hi Lorenz.S, I wonder if you could take a look at my new topic - https://forum.proxmox.com/threads/open-vswitch-across-physical-servers.95989/ Is it possible to use Open vSwitch for cross-communication between servers, and run the heartbeat through it, eliminating the need for the physical...
  6. Open vSwitch across physical servers

    Hi there! We have three physical servers, each with 2 x 2P 10G NICs and 1 x 2P 1G NIC. 1 x 10G NIC will be bonded, active-active, for Ceph; 1 x 10G NIC will be bonded, active-active, for public/private networking; 1 x 1G NIC will be bonded, active-active, for management & Corosync. Each NIC will...
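
    Since the thread is about Open vSwitch, one of those active-active bonds might look like the following /etc/network/interfaces sketch (the port names enp1s0f0/enp1s0f1 and the bridge name are assumptions; balance-tcp with active LACP is just one reasonable mode):

      auto bond0
      iface bond0 inet manual
          ovs_type OVSBond
          ovs_bridge vmbr0
          ovs_bonds enp1s0f0 enp1s0f1
          # Active-active across both 10G ports, negotiated via LACP
          ovs_options bond_mode=balance-tcp lacp=active

      auto vmbr0
      iface vmbr0 inet manual
          ovs_type OVSBridge
          ovs_ports bond0
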
  7. How would Ceph/Proxmox handle a switching failure

    Thanks very much for your response. I'll take that on board! The last thing I'd want is for everything to be knocked offline by a temperamental switch. Chris.
  8. How would Ceph/Proxmox handle a switching failure

    We've been looking at a new setup that has three nodes. They will be running Proxmox and Ceph in a hyperconverged config. Each node has theoretical switch redundancy using LACP. However, not for the first time, our provider's switching has had some kind of failure with a switch and LACP standby...
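
    When diagnosing a failure like that, the LACP partner state can be read from the node itself; a sketch assuming a kernel bond named bond0 (OVS equivalents shown for an Open vSwitch bond):

      # Kernel bond: per-slave 802.3ad state and partner details
      cat /proc/net/bonding/bond0
      # Open vSwitch bond: member status and LACP negotiation
      ovs-appctl bond/show bond0
      ovs-appctl lacp/show bond0
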
  9. Migrating from Xen to Proxmox/ceph incrementally

    So initially, with the single node, I created my Ceph pool using an OSD-based CRUSH rule. replicated_rule_osd was created by logging into the PVE host & running ceph osd crush rule create-replicated replicated_rule_osd default osd (thanks to zamnuts on Stack Overflow -...
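
    Completing that recipe as a sketch, with a hypothetical pool name "testpool": the first command is the one quoted above (replicate across OSDs instead of hosts, so a single node can satisfy the replica count), the second points an existing pool at the new rule:

      # CRUSH rule with failure domain "osd" under the "default" root
      ceph osd crush rule create-replicated replicated_rule_osd default osd
      # Switch a pool over to that rule ("testpool" is an assumed name)
      ceph osd pool set testpool crush_rule replicated_rule_osd
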
  10. Migrating from Xen to Proxmox/ceph incrementally

    I probably should have Googled a little more, as I would have undoubtedly come across that, so apologies! I'm actually testing something now off the back of a Stack Overflow answer (https://stackoverflow.com/a/66362327/504487). I've been able to create a pool that uses OSD replication rather than...
  11. Migrating from Xen to Proxmox/ceph incrementally

    Hi, Firstly I'd like to say hello - I'm new to Proxmox and, so far, very impressed! I've got a single-node test setup that I've been having a play with. We're looking to migrate our three-node Xen cluster to Proxmox using Ceph. What we're...