Hello guys.
Is it possible to pass through the ports of a dual SAS HBA to two different VMs?
root@prox11:~# lspci -s 19:00.0 -v
19:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
Subsystem: Broadcom / LSI SAS9300-8e
Flags: bus...
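As far as I understand, passthrough granularity is per PCI function / IOMMU group, not per port, so the question is really whether the two external ports show up as separate functions in separate groups. A quick check (the 19:00 address is from the output above; the sysfs layout is standard):

lspci -s 19:00

for d in /sys/kernel/iommu_groups/*/devices/0000:19:00.*; do
    group=$(basename "$(dirname "$(dirname "$d")")")
    echo "IOMMU group $group: $(basename "$d")"
done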
I solved the problem by changing the order of the lines:

Not OK:
source /etc/network/interfaces.d/*
post-up /usr/bin/systemctl restart frr.service

OK:
post-up /usr/bin/systemctl restart frr.service
source /etc/network/interfaces.d/*
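In context, the end of /etc/network/interfaces now looks roughly like this (interface stanzas omitted, just the ordering that works for me):

# ... physical and bridge stanzas ...

# restart FRR after config changes via GUI/ifreload; must come
# before the "source" line, otherwise ifreload throws the error above
post-up /usr/bin/systemctl restart frr.service

source /etc/network/interfaces.d/*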
P.S. I didn't add the line "source ..."...
When I run "ifreload -a" in the shell I get the same error as mentioned above (nothing more),
but when I execute "/usr/bin/systemctl restart frr.service" everything seems to be OK.
Didn't you add the line to your config?
I reverted the "lo1" change, so that cannot be the problem.
As mentioned in the manual, you have to add the line
"post-up /usr/bin/systemctl restart frr.service"
to /etc/network/interfaces so the service is restarted after config changes in the GUI.
And this throws an error ("ifreload -a" is...
By the way:
Can someone tell me which traffic goes through which connection on a cluster?
Through which network does the traffic of the (built-in) backup / corosync / cluster (same as corosync?) / migration go out of the box?
Is there a useful network diagram of a Proxmox cluster with Ceph?
@alexskysilk
I have 8 interfaces per node (2x 25G / 2x 10G / 2x 1G / 2x 1G) and I want to avoid using a switch for Ceph and cluster/corosync, as that reduces the points of failure (and there is no need for an external connection).
So I want two separate FRR routers for Ceph (25G) and...
Maybe we can find a solution together :)
I've added a second configuration (OpenFabric) to the nodes. Now it looks like this (node1):
root@prox01:~# cat /etc/frr/frr.conf
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's...
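The OpenFabric part boils down to something like the following; the NET and the interface names here are placeholders, not my real values, and fabricd has to be enabled in /etc/frr/daemons:

router openfabric 1
 net 49.0001.1111.1111.1111.00
exit
!
interface lo
 ip router openfabric 1
 openfabric passive
exit
!
interface ens19
 ip router openfabric 1
exit
!
interface ens20
 ip router openfabric 1
exit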
Hello guys!
I'm setting up our new cluster at the moment.
The cluster network is a 25 GBit full-mesh configuration between the nodes (up and running! ;-) )
To follow the KISS principle and reduce the point(s) of failure, I thought about a second mesh for corosync (with fallback over public...
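The idea would be two knet links in /etc/pve/corosync.conf, roughly like this (addresses are placeholders; with link_mode passive, knet uses the highest-priority link that is up):

nodelist {
  node {
    name: prox01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.15.15.1   # dedicated corosync mesh, preferred
    ring1_addr: 192.0.2.11   # public network as fallback
  }
  # ... other nodes ...
}

totem {
  interface {
    linknumber: 0
    knet_link_priority: 10
  }
  interface {
    linknumber: 1
    knet_link_priority: 5
  }
}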
Hello guys.
I plan to change the hard disks of the B2D storage in our BackupExec VM.
Currently this is a ZFS mirror configured on the PVE host, which is connected to the VM via a VirtIO block device because of problems with the VirtIO SCSI driver at installation time.
(see...
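The plan would be something like swapping the mirror members one at a time (pool and device names below are placeholders):

zpool status b2d                      # check the mirror is healthy
zpool replace b2d /dev/disk/by-id/old-disk-1 /dev/disk/by-id/new-disk-1
zpool status b2d                      # wait until resilvering has finished
zpool replace b2d /dev/disk/by-id/old-disk-2 /dev/disk/by-id/new-disk-2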
Hello.
I have a running VM on Proxmox VE 8 with 3 disks on 3 different storages. They all have the same (file) name, which makes it a bit confusing when you check the storage content:
Second problem: There is no "notes" field or similar that shows the name of the corresponding VM. This could be a...
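For now the mapping is only visible on the CLI (storage name and VMID are examples):

pvesm list local-zfs                 # lists each volume with its owning VMID
qm config 101 | grep -E '^(virtio|scsi|sata|ide)'   # disks attached to VM 101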
I followed the guide and added the following line (24 GiB):
cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=25769803776
The result is an entry (after reboot and "update-initramfs -u -k all") in:
cat /sys/module/zfs/parameters/zfs_arc_max
25769803776
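That value is simply 24 GiB in bytes:
echo $((24 * 1024**3))    # 25769803776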
The UI shows:
Actually the RAM usage...