Thank you,
Udo - what about the pre-up command: is it necessary to first set the MTU on the physical interfaces ethX and then on bondX, or do you only need it because you start from the logical interfaces/subinterfaces eth2.3 and eth3.3?
I have dedicated interfaces only and was wondering if I need that...
Just realized that this will only change the MTU temporarily. So what about changing it via /etc/network/interfaces instead: adding the MTU size on bond0 and then issuing ifdown bond0 && ifup bond0. Would that do the job, or do I need to reboot?
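For reference, this is roughly the stanza I have in mind in /etc/network/interfaces (the address, slave names and the 9000 MTU are just placeholders from my setup, not a recommendation):

auto bond0
iface bond0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        slaves eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        mtu 9000
        # raise the MTU on the slave NICs before the bond comes up
        pre-up ip link set eth2 mtu 9000 && ip link set eth3 mtu 9000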
Thx
I need to adjust the MTU on an interface, not the Proxmox management interface but the one facing Ceph, to make traffic more efficient over the 10Gbps network.
It is a bonded interface, let's say bond0, bonding eth2 and eth3, and it is added to the vmbr1 bridge.
I was planning to just do this after hours by issuing ifconfig bond0...
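Concretely, I mean something along these lines for the one-off change (9000 is just an assumed jumbo-frame value):

ifconfig bond0 mtu 9000          # legacy tool
ip link set dev bond0 mtu 9000   # same thing with iproute2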
I have a pve cluster connecting to a 3-node pve+ceph cluster over 10Gb. The Ceph nodes are pretty powerful: 2 x Xeon CPUs with 24 cores total and plenty of RAM.
I see some IO wait when looking at the CPU usage in the Proxmox GUI. The IO wait is a little more than half of the total CPU usage...
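To see which disks the wait actually comes from, I was going to watch the Ceph nodes with plain sysstat tooling, e.g.:

apt-get install sysstat
iostat -x 2     # per-device %util and await, refreshed every 2 seconds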
I am occasionally getting clock skew warnings, 2-3 times a day, with values of 0.052-0.059, and was wondering if I should be concerned.
I have a local NTP server on the same subnet, and I can see in the trace that all the nodes get their time from this local NTP server.
Is it best practice to put the NTP server on one of the nodes and...
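For reference, those values sit just above what I understand to be Ceph's default 0.05s warning threshold, so if the skew itself is harmless I could presumably loosen it in ceph.conf (the 0.1 below is only an example, not a recommendation):

[mon]
        mon clock drift allowed = 0.1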
Well, I have 4 x 10Gbps interfaces per server (2 NICs with 2 ports each), so I can separate the Ceph public and Ceph private networks completely, and by bonding two ports on two different cards I get redundancy at the switch level as well as protection if one of the NICs fails.
...well, I have 18 10K 6Gbps SAS drives across 3 dedicated storage servers, each with multiple 10Gbps interfaces. Hence I was wondering whether it is better to bond or to use dedicated interfaces (if possible), and whether bonding gains me any speed or only redundancy. The drives are JBOD but the RAIDs...
I guess I did not ask the question correctly: is it possible to use two interfaces on one node without bonding them, on either the private or the public network? And if it is, is there a difference in performance?
With the Ceph network, is there a difference in performance if I use 2 network interfaces directly, for example eth1 and eth2, vs. bonding the same interfaces into a vmbr?
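To be explicit about the non-bonded option: I mean giving each interface its own subnet and splitting the Ceph networks across them in ceph.conf, roughly like this (subnets made up for illustration):

[global]
        public network  = 10.10.10.0/24   # e.g. eth1, client and monitor traffic
        cluster network = 10.10.20.0/24   # e.g. eth2, OSD replication traffic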
I tried that with no luck. Thank you for the suggestion though... I went with a physical server for this machine; the amount of time spent troubleshooting this was not worth it. One day, when the feature becomes easier to implement, I will be glad to switch back to a VM.
I know that it does the job, but how reliable is it? Should I expect any problems if I just change the /etc/network/interfaces file and do ifdown/ifup?
Should I first bring the interface down with ifdown, make the change, and bring it back up with ifup, or does it not matter?
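In other words, a sequence like this, assuming bond0 is the interface being changed:

ifdown bond0
# edit /etc/network/interfaces, e.g. add "mtu 9000" to the bond0 stanza
ifup bond0
ip link show bond0    # verify the new MTU is in place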
Thank you
I am trying to pass through a NIC to one of my VMs. I have one NIC with 4 ports and one NIC with 2 ports, and it is the 2-port NIC that I want to pass through to a VM.
DMAR: IOMMU enabled
DMAR-IR: Enabled IRQ remapping in x2apic mode
DMAR: Intel(R) Virtualization Technology for Directed I/O
I verified my hardware...
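These are the checks I used, more or less (standard passthrough sanity checks, nothing exotic):

dmesg | grep -e DMAR -e IOMMU            # confirms the messages quoted above
find /sys/kernel/iommu_groups/ -type l   # the NIC should be in its own IOMMU group
lspci -nn | grep -i ethernet             # note the PCI address of the 2-port NIC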
I am not sure I understand. There should be no VLAN tags: this is an access port, so the VLAN tag is stripped before the traffic leaves it. As far as I know, and I can confirm that today, there should be no VLAN tag on the forwarded traffic, i.e. the traffic going to the destination port to...
How do you pass the NIC to the VM?
The NIC is on vmbr1, which includes the eth3 interface directly connected to port 2 (the destination port).
I also made sure that while testing I am selecting the proper interface in Wireshark.
And what does your VM config look like?
Windows VM
bootdisk: virtio0...
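For what it is worth, the NIC is attached through the bridge rather than passed through, so the relevant part of the config is just the usual net line (values below are placeholders, not my actual config):

net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1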
I have a Cisco switch with SPAN enabled. It is a very basic configuration, with port 1 as the source port (mirrored in both directions) and port 2 as the destination port.
Port 1 is in access mode belonging to VLAN 10, with native VLAN 1. Port 2 is in access mode belonging to VLAN 1, with native VLAN 1...
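The SPAN part itself is essentially the textbook two-liner (interface names below are placeholders for ports 1 and 2):

monitor session 1 source interface Gi1/0/1 both
monitor session 1 destination interface Gi1/0/2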
Assuming you are dealing with FreeNAS 9.10: check the DNS and host table on the FreeNAS. I had a similar issue, except permanent, where the NFS storage could not be used at all (storage shown as offline), and it turned out to be a DNS problem (even though DNS 2 and DNS 3 were configured properly).
Check the last 2 or 3 posts...
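A quick way to check whether the export is even visible from the Proxmox side is something like this (replace the IP with your FreeNAS address):

showmount -e 192.168.1.50    # lists the NFS exports the server offers
pvesm status                 # shows whether Proxmox considers the storage active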
I was just looking at the roadmap and release history. I see upcoming Ceph Jewel and DRBD9 improvements; are we talking weeks, as in a 4.2.x update, or months, as in a bigger 4.3.x update?
Thank you