Search results

  1.

    Single SAS Port Passthrough (Dual Port HBA)

    Hello guys. Is it possible to pass through the ports of a dual-port SAS HBA to two different VMs? root@prox11:~# lspci -s 19:00.0 -v 19:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02) Subsystem: Broadcom / LSI SAS9300-8e Flags: bus...
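
    For reference, a minimal sketch of whole-controller passthrough with qm; the VM ID 100 is a placeholder and the PCI address is taken from the lspci output above. With vfio-pci the passthrough granularity is the PCI function, so both SAS ports of a single-function HBA like this one would end up in the same guest:

      # Sketch only: pass the whole HBA (function 19:00.0) to one VM.
      # pcie=1 assumes a q35 machine type; VM ID 100 is hypothetical.
      qm set 100 -hostpci0 0000:19:00.0,pcie=1
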
  2.

    Auto add new VM to HA resource

    Any update on this? Maybe an option in the creation wizard of a new VM (and when restoring one)?
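
    Until something like that exists in the wizard, a possible CLI workaround, sketched with placeholder values (VM ID 100, HA group "prod"):

      # Add the freshly created VM 100 as an HA resource and request it started;
      # "prod" is a hypothetical HA group that was defined beforehand.
      ha-manager add vm:100 --state started --group prod
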
  3.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    I solved the problem by changing the order of the lines. Not OK: source /etc/network/interfaces.d/* before post-up /usr/bin/systemctl restart frr.service. OK: post-up /usr/bin/systemctl restart frr.service before source /etc/network/interfaces.d/*. P.S. I didn't add the line "source ..."...
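
    A sketch of the working order described above, i.e. the tail of /etc/network/interfaces with the frr restart line placed before the source line (the rest of the file is omitted):

      # /etc/network/interfaces (tail) -- working order
      post-up /usr/bin/systemctl restart frr.service

      source /etc/network/interfaces.d/*
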
  4.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    When I fire up "ifreload -a" in the shell I get the same error as mentioned above (nothing more). But when I execute "/usr/bin/systemctl restart frr.service", everything seems to be OK. Didn't you add the line to your config?
  5.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    I reverted the "lo1 thing"; that could not be the problem. As mentioned in the manual, you have to add the line "post-up /usr/bin/systemctl restart frr.service" to /etc/network/interfaces so the service is reloaded after config changes in the GUI. And this throws an error ("ifreload -a" is...
  6.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    By the way: can someone tell me which traffic goes through which connection in a cluster? Through which network does the traffic of the (built-in) backup, corosync, cluster (same as corosync?) and migration go out of the box? Is there a useful network diagram of a Proxmox cluster with Ceph?
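
    As a partial answer that can at least be pinned down in configuration: the live-migration network can be set explicitly in /etc/pve/datacenter.cfg, while corosync uses the links defined in /etc/pve/corosync.conf. A sketch with an assumed subnet:

      # /etc/pve/datacenter.cfg -- route live migration over a dedicated network
      # (10.10.11.0/24 is an assumed mesh subnet, not taken from the thread)
      migration: secure,network=10.10.11.0/24
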
  7.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    @admartinator Did you read my question above?
  8.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    @alexskysilk I have 8 interfaces per node (2x 25G / 2x 10G / 2x 1G / 2x 1G) and I want to avoid using a switch for Ceph and cluster/corosync, as that reduces the points of failure (and there is no need for an external connection). So I want two separate FRR routers for Ceph (25G) and...
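
    A sketch of how the mesh ports might be declared in /etc/network/interfaces before handing routing to FRR; the interface names, the MTU and the restart line are assumptions, not the poster's actual config (see the frr.conf sketch further down for the routing side):

      # Bring the mesh ports up without addresses; addresses and routing
      # are handled by FRR (openfabric). Interface names are assumptions.

      # 25G pair -> Ceph mesh
      auto ens25f0
      iface ens25f0 inet manual
              mtu 9000

      auto ens25f1
      iface ens25f1 inet manual
              mtu 9000

      # 10G pair -> corosync / cluster mesh
      auto ens10f0
      iface ens10f0 inet manual

      auto ens10f1
      iface ens10f1 inet manual

      post-up /usr/bin/systemctl restart frr.service

      source /etc/network/interfaces.d/*
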
  9.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    I've tested every possible variation but I can't get it to work...
  10.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    Maybe we can find a solution together :) I've added a second (openfabric) configuration to the nodes. Now it looks like this (node1): root@prox01:~# cat /etc/frr/frr.conf # default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in # /var/log/frr/frr.log # # Note: # FRR's...
  11.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    I have no clue how to modify the config file I posted above to create a second (separate) fabric, e.g. for IP 10.10.12.101/32...
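
    One way a second fabric could look in /etc/frr/frr.conf, assuming fabricd accepts two openfabric instances side by side and that each interface can only join one instance (which is why the second passive /32 sits on a dummy interface instead of lo). Interface names, NETs and addresses are placeholders, not the actual config from the thread:

      frr defaults traditional
      hostname prox01
      log syslog informational
      !
      ! Fabric 1: Ceph mesh (25G), passive router address on lo
      interface lo
       ip address 10.10.11.101/32
       ip router openfabric 1
       openfabric passive
      !
      interface ens25f0
       ip router openfabric 1
      !
      interface ens25f1
       ip router openfabric 1
      !
      ! Fabric 2: corosync mesh (10G), passive address on a dummy interface
      ! (the dummy interface has to exist, e.g. created in /etc/network/interfaces)
      interface dummy0
       ip address 10.10.12.101/32
       ip router openfabric 2
       openfabric passive
      !
      interface ens10f0
       ip router openfabric 2
      !
      interface ens10f1
       ip router openfabric 2
      !
      router openfabric 1
       net 49.0001.1111.1111.1111.00
      !
      router openfabric 2
       net 49.0002.1111.1111.1111.00
      !
      line vty

    Whether fabricd actually runs two instances cleanly is something to verify, e.g. with "show openfabric topology" in vtysh, before relying on it.
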
  12.

    Two separate full meshes for cluster / corosync in a 3-node cluster

    Hello guys! I'm setting up our new cluster at the moment. The cluster network is a 25 Gbit full-mesh configuration between the nodes (up and running! ;-) ). To follow the KISS principle and reduce the point(s) of failure, I thought about a second mesh for corosync (with fallback over the public...
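
    For the corosync side, redundancy is usually configured as multiple links rather than routed failover. A sketch of the relevant parts of /etc/pve/corosync.conf with the mesh as link 0 and the public network as fallback link 1; all names and addresses are assumptions, and config_version has to be bumped when editing the file:

      nodelist {
        node {
          name: prox01
          nodeid: 1
          quorum_votes: 1
          # link 0: corosync mesh, link 1: public network fallback (addresses assumed)
          ring0_addr: 10.10.12.101
          ring1_addr: 192.168.1.101
        }
        # prox02 / prox03 analogous with .102 / .103
      }

      totem {
        # existing totem options (cluster_name, config_version, ...) stay as they are;
        # a higher knet_link_priority marks the preferred link
        interface {
          linknumber: 0
          knet_link_priority: 10
        }
        interface {
          linknumber: 1
          knet_link_priority: 5
        }
      }
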
  13.

    BackupExec (Windows VM) - Best practice Backup2Disk Storage

    Hello guys. I plan to change the hard disks of the B2D storage in our BackupExec VM. Currently this is a ZFS mirror configured on the PVE host, which is connected to the VM via a VirtIO block device because of problems with the VirtIO SCSI driver at installation time. (see...
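
    One possible way to stage the replacement disks, sketched with placeholder device paths, pool/storage names, VM ID and size; not a statement about the concrete setup above:

      # Create a new ZFS mirror on the replacement disks (paths are placeholders)
      zpool create -o ashift=12 b2dpool mirror \
          /dev/disk/by-id/ata-NEWDISK-1 /dev/disk/by-id/ata-NEWDISK-2

      # Register the pool as a PVE storage
      pvesm add zfspool b2d-zfs --pool b2dpool --content images

      # Attach a fresh 2000 GiB disk from that storage to the BackupExec VM (ID 105);
      # -virtio1 keeps the VirtIO block approach mentioned above, -scsi1 would use
      # VirtIO SCSI instead.
      qm set 105 -virtio1 b2d-zfs:2000
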
  14.

    VM: Same name of disks on different storages

    Hello. I have a running VM on Proxmox VE 8 with 3 disks on 3 different storages. They all have the same (file) name, which makes it a bit confusing when you check the content. Second problem: there is no "notes" field or similar that shows the name of the corresponding VM. This could be a...
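
    Until the GUI shows a better label, the owning VM can at least be looked up per volume from the CLI; pvesm list prints a VMID column (the storage name below is a placeholder):

      # List volumes on a storage together with the owning VMID
      pvesm list local-zfs

      # Restrict the listing to a single VM
      pvesm list local-zfs --vmid 101
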
  15.

    KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    It seems to work... till now... ;-) Thank you for your help!
  16.

    KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    I followed the guide and added the following line (24 GB): cat /etc/modprobe.d/zfs.conf options zfs zfs_arc_max=25769803776 The result is an entry (after a reboot and "update-initramfs -u -k all") in: cat /sys/module/zfs/parameters/zfs_arc_max 25769803776 The UI shows: Actually the RAM usage...
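
    A worked sketch of the steps quoted above, with the byte arithmetic spelled out (24 GiB = 24 x 1024^3 bytes = 25769803776) and an optional runtime change that avoids waiting for the next reboot:

      # Persistent 24 GiB ARC limit
      echo "options zfs zfs_arc_max=25769803776" > /etc/modprobe.d/zfs.conf
      update-initramfs -u -k all        # takes effect after a reboot

      # Apply immediately without rebooting (value in bytes)
      echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max

      # Verify
      cat /sys/module/zfs/parameters/zfs_arc_max
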
  17.

    KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    How do I limit the amount of memory used by ZFS, e.g. to 24 GB?