Search results

  1. B

    backup on remote facility?

    Thinking out loud: I am considering using PBS to back up everything from servers in the local DC to a server at our remote facility (we have a transit link with them via a tunnel). Do you think it is OK to back up everything to a server in the remote facility over a 1G link? Maybe asynchronously using a...
  2. B

    backup server with remote storage or USB? (edited)

    While waiting for a proper backup machine due to limited supply, I would like to start using the Proxmox Backup Server. The backups shouldn't be large; at most probably 1TB on disk. I have an unused spare machine with 2x10GbE + 4x1GbE, 2x240GB SSD and 16GB RAM. The storage can't be expanded...
  3. B

    iscsi multipath priority

    If I force the order of the sessions it mostly works, though indeed I could simply load balance.
  4. B

    iscsi multipath priority

    Well, I wanted to do some failover since the 2 links are not on the same switch, and also to use the 10G link in priority; the other one is a 5G link.
  5. B

    iscsi multipath priority

    It has 2 NICs at least, but I don't see where I can set the priority.
  6. B

    iscsi multipath priority

    I am unsure why, but when setting multipath to failover mode it always prefers the second interface, which is supposed to be the backup: root@pve1:~# multipath -ll mpatha (36e843b6ad25d4d5ddf7dd4af2daf13dd) dm-2 QNAP,iSCSI Storage size=3.8T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw |-+-...
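    One way to pin a preferred path in failover mode is the `weightedpath` prioritizer in `/etc/multipath.conf`. This is only a sketch: the device names (`sdb` for the 10G session, `sdc` for the backup) and the weights are assumptions, not taken from the thread.

    ```
    # /etc/multipath.conf -- sketch; device names and weights are assumptions
    devices {
        device {
            vendor  "QNAP"
            product "iSCSI Storage"
            # one path per priority group, so I/O fails over instead of balancing
            path_grouping_policy group_by_prio
            # rank the paths: sdb = 10G session (preferred), sdc = backup session
            prio       "weightedpath"
            prio_args  "devname sdb 50 sdc 10"
            # switch back to the preferred path as soon as it recovers
            failback   immediate
        }
    }
    ```

    After editing, `multipath -ll` should show the higher-weight path in the active group.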
  7. B

    mixing vlans and bonding?

    I was thinking of having it alongside the bond (i.e. setting the VLAN on the base interfaces), but yeah, in the end I did it the normal way with a VLAN over the bond :) For iSCSI mpath I just made sure to pass the VLAN corresponding to the backup switch ..
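    The setup described above, a VLAN over the bond plus per-NIC VLANs for the iSCSI paths, could look roughly like this in Debian/Proxmox `/etc/network/interfaces`. The VLAN IDs, addresses, and which NIC maps to which switch are assumptions for illustration.

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

    # VLAN on top of the bond (assumed ID 3000), e.g. for Ceph
    auto bond0.3000
    iface bond0.3000 inet static
        address 10.10.30.11/24

    # iSCSI multipath stays on a base NIC, tagged for its switch (assumed ID 3001)
    auto eno1.3001
    iface eno1.3001 inet static
        address 10.10.31.11/24
    ```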
  8. B

    mixing vlans and bonding?

    Does anyone know if it's possible to mix bonding and VLANs? Say I would like to set up a bond over 2 interfaces, eno1 and eno2, plus the interfaces eno1.3000 and eno1.3001. The main purpose is to use iSCSI multipath while using the bond for Ceph.
  9. B

    looking for storage guidance with Hyper-Converged Ceph Cluster

    I am finally testing Ceph storage on one platform. I have 2x256GB M.2 NVMe disks and 2x980GB SSD drives. I am wondering if it's better to put the system on one NVMe disk and the log on the other one; Ceph would use the 2 SSD disks as storage. Or set up the 2 M.2 disks as a ZFS mirror. I...
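    If the OSDs go on the SSDs with their RocksDB/WAL carved out of the second NVMe, the Proxmox CLI supports that directly. A sketch; the device names and DB size (in GiB) are assumptions:

    ```
    # one OSD per SSD, DB/WAL placed on the second NVMe
    pveceph osd create /dev/sda --db_dev /dev/nvme1n1 --db_size 60
    pveceph osd create /dev/sdb --db_dev /dev/nvme1n1 --db_size 60
    ```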
  10. B

    add a storage to a vm located on the same machine with ceph

    I am wondering if the following scenario is possible: 1) a VM is created; 2) the software in the VM can instruct Ceph to create a new storage volume located on the same machine. I guess it is always possible if I use an N-copy strategy where N = the number of machines. But is there any...
  11. B

    setup 2 local zpools?

    I want to set up a new cluster and have the following constraints again. I will have 3 machines with chassis that can accept either 2x2.5" disks + PCIe or 4x2.5" disks; all machines will have 2x10GbE + 4x1GbE connections. The boxes are chosen mainly to minimise noise and space in the office. The...
  12. B

    metrics bandwidth?

    What is the expected bandwidth for metrics? The cluster is on a 10G network. I am wondering if 1G would be enough, or if it would be better to use 2x10G.
  13. B

    ceph with one-copy block device

    which is fine for a follower, IMO. In that case you have the following design: a master (or any big data node) with a 3-copy (or 2-copy) pool. Data is stored there first; reads happen on followers, with data replicated from the master. You can also filter part of it. This allows you to scale dynamically...
  14. B

    proxmox cluster and ceph network redundancy with 3 nodes

    So I will go for an active-backup bond strategy + VLAN, as it sounds easier. The only thing I am asking myself is whether to add an Ethernet port using a USB adapter to carry the main corosync ring. I wonder if that would hold up over time compared to "just" using a VLAN...
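    Whichever NIC ends up carrying it, corosync 3 can keep the VLAN as a second knet link so losing the dedicated port doesn't take down the ring. A sketch of the relevant parts of `corosync.conf`; node name, addresses, and which network backs each ring are assumptions:

    ```
    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.0.0.1   # dedicated corosync network (e.g. the extra NIC)
        ring1_addr: 10.0.1.1   # fallback ring over the VLAN
      }
    }
    totem {
      cluster_name: demo-cluster
      config_version: 2
      ip_version: ipv4
      link_mode: passive       # use ring 0, fail over to ring 1
      version: 2
    }
    ```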
  15. B

    ceph with one-copy block device

    Ok, perfect. I was worried there was something special about it :) Indeed, IMO the "beauty" of it is abstracting the storage logic: you don't build a RAID per machine like you would with local storage, and you configure the placement of the data as you want, i.e. consistent across 3 machines or just on...
  16. B

    ceph with one-copy block device

    I wonder how Proxmox reacts if a VM stored on a one-copy block device is running when the OSD or the machine crashes. Will it be reported as failing correctly? I'm using a 1-copy strategy to set up database followers that can be started/stopped at any time, as described in this paper...
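    For reference, a size-1 RBD pool for such disposable followers can be created like this (the pool name and PG count are assumptions; anything on this pool is lost if its OSD dies):

    ```
    ceph osd pool create followers 32
    ceph osd pool set followers size 1
    ceph osd pool set followers min_size 1
    ceph osd pool application enable followers rbd
    ```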
  17. B

    proxmox cluster and ceph network redundancy with 3 nodes

    True... but too late :( Well, I guess with some QoS it should not be that bad...
  18. B

    proxmox cluster and ceph network redundancy with 3 nodes

    True... I misread the doc. So I guess my only choices are to use a dedicated card or play with VLAN + QoS.
  19. B

    proxmox cluster and ceph network redundancy with 3 nodes

    Rather than buying another card, I am thinking I can reuse the IPMI port from the Supermicro board (sideband). Has anyone tried such a thing?
  20. B

    proxmox cluster and ceph network redundancy with 3 nodes

    I hadn't thought of this. I need to look at how I can do it with this board and the case I have. Thanks for the hint!