Search results

  1. Ceph on single 10GB

    Ok. For the moment I think I will use Gluster only for fast manual migration, without HA. Or maybe ZFS replication. I will check for the smartest solution with these limited resources... Thank you!!!
  2. A

    Ceph on single 10GB

    Yes, I have 2 NICs to handle on each of the three servers: 1x1GB and 1x10GB. In case I can't add more, how would you advise using these links?
  3. A

    Ceph on single 10GB

    If I use Gluster, is it possible? Is it less heavy on the NIC and CPU? Can corosync share the same NIC with Gluster?
  4. A

    Ceph on single 10GB

    Hi, I have 3 nodes with 2x10GB NICs each. I will put Proxmox cluster + VM traffic on one NIC and Ceph on the other. Will it be enough? Tnx in advance...
  5. A

    Vlan Tagging

    Finally: it seems that eno1 is not usable in our contract... we need another NIC...
  6. A

    Vlan Tagging

    Already did this...
  7. A

    Vlan Tagging

    Support responded: "It sounds like you are trying to configure a second uplink. Please note that each server has only one uplink, and only this uplink can use the vSwitch." Well... no... I just want to use the second network card for private traffic. Is that an uplink? I don't think so... Am I wrong?
  8. A

    Vlan Tagging

    Ok, I tried moving the tagged interface to the other NIC, enp1s0, and it works... So it seems to be a problem with eno1. WEIRD...!!! I'll try to talk with support. Just in case: do you think I'm missing something...? PS. This is the conf that works: auto enp1s0.4000 iface enp1s0.4000 inet manual #VLAN...
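    The truncated snippet above can be filled out as a complete /etc/network/interfaces stanza. This is only a sketch: the bridge name vmbr4000 and the 192.168.100.0/24 addressing are illustrative assumptions, not taken from the post; the MTU limit of 1400 is a documented requirement for Hetzner vSwitch traffic.

    ```
    # Sketch of a Hetzner vSwitch VLAN setup on the second NIC.
    # Bridge name and addresses are assumptions for illustration.
    auto enp1s0.4000
    iface enp1s0.4000 inet manual
        mtu 1400              # Hetzner vSwitches require MTU <= 1400

    auto vmbr4000
    iface vmbr4000 inet static
        address 192.168.100.2/24
        bridge-ports enp1s0.4000
        bridge-stp off
        bridge-fd 0
        mtu 1400
    ```

    With a bridge like this on each node (same VLAN tag, addresses in the same subnet), the hosts attached to the vSwitch can reach each other over the private network.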
  9. A

    Vlan Tagging

    I'll try that, but the 2 hosts still don't ping each other... To be sure, I also stopped the firewall on both Proxmox nodes. I don't have any other firewall or IP filter: my plan on Hetzner doesn't include a firewall. Of course, the 2 Proxmox nodes are in the same vSwitch, tagged with 4000. I'm pretty sure the configuration is...
  10. A

    Vlan Tagging

    ... can someone help me? Tnx...
  11. A

    Vlan Tagging

    Hi, I have 3 nodes on Hetzner with 2 NICs each. I need to configure the first NIC for internet access on the 3 hosts, for VM traffic, and for the Proxmox cluster; the second NIC only for the storage network. So I connect the 3 hosts to 3 vSwitches: - Public - Cluster - Storage. Then I'll try to make a...
  12. A

    Pfsense + Haproxy inside Proxmox at Hetzner

    Were you able to solve your problems? Just to understand if it's doable (I'm in a similar situation right now)
  13. A

    3 node hosts on Hetzner: storage, HA etc.

    I just have some doubts to clarify: by design we need to NAT all our VMs, so I think we will use a VM acting as gateway for all the other VMs on the same cluster. The question is: what happens if we move the gateway VM to another physical node? Is that possible, or will we have a network service disruption? If...
  14. A

    3 node hosts on Hetzner: storage, HA etc.

    Hi, do you have any other option at a good price? PS: This project must be a replica of what we have in house
  15. A

    3 node hosts on Hetzner: storage, HA etc.

    Hi, I'm about to start a project on the Hetzner provider with 3 Proxmox nodes. I wish to have network storage on dedicated 10GB NICs (Ceph or Gluster will be fine) and maybe a dedicated NIC for the Proxmox cluster as well. Does anyone have experience on this topic? Do you think it's doable? Tnx in advance
  16. A

    Qemu for Proxmox (pve-qemu) with ALL Supported KVM and Emulated CPUs Debug and Release DEP Builds Available

    Hi lillypad, I tried this solution on my single-PVE testing host. The install is pretty straightforward, but I still haven't seen any new QEMU CPU platforms... Is there anything I'm missing here??? Tnx
  17. A

    Network configuration with Proxmox on Hetzner dedicated Server

    After some struggles I managed to use the new IP block on my Hetzner dedicated server... This is my configuration (my subnet is a /29): - Debian/Proxmox host: source /etc/network/interfaces.d/* auto lo iface lo inet loopback iface lo inet6 loopback auto enp35s0 iface enp35s0 inet static address...
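    The truncated configuration above can be sketched as a typical routed setup for an additional subnet on a Hetzner dedicated server. This is an assumption-heavy sketch, not the poster's actual file: 203.0.113.0/29 (a documentation range) stands in for the real /29, and the placeholder main IP and gateway must not be filled in from here.

    ```
    # Sketch of a routed /29 on a Hetzner dedicated server (Debian/Proxmox).
    # <main-ip> and <gateway> are placeholders; 203.0.113.0/29 is illustrative.
    source /etc/network/interfaces.d/*

    auto lo
    iface lo inet loopback
    iface lo inet6 loopback

    auto enp35s0
    iface enp35s0 inet static
        address <main-ip>/32
        gateway <gateway>
        pointopoint <gateway>

    # Host-internal bridge for the additional /29; the host answers for the
    # subnet, and VMs use 203.0.113.1 as their gateway.
    auto vmbr0
    iface vmbr0 inet static
        address 203.0.113.1/29
        bridge-ports none
        bridge-stp off
        bridge-fd 0
    ```

    In this routed model the extra subnet is not bridged onto the uplink; traffic for the /29 is routed via the host's main IP, which is why IP forwarding must also be enabled on the host.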
  18. A

    Gluster on top of RAID5

    Never heard of it before...! I will look into it.
  19. A

    Gluster on top of RAID5

    Yes, you are right. Obviously the 3 nodes will be redundant in every other aspect to avoid or minimize failures. But if a service stop is acceptable, the data is safe in RAID5, so you can recover from a node failure simply by restarting or reinstalling it. This is good only for a small home project or...
  20. A

    Gluster on top of RAID5

    To save disk space and avoid the waste of Gluster's 3x replication, I was thinking of creating a Gluster distributed volume (with no redundancy) on top of hot-swappable hardware RAID5 on the 3 nodes. Is this a viable solution or a very bad scenario? Tnx in advance...!!!