Search results

  1. Reserving HW resources for Hypervisor (physical machine) itself

    Hello. When changing the /etc/nova/nova.conf file on all Compute nodes: reserved_host_memory_mb=8192 # This is 8GB, reserved_host_disk_mb=10000 # This is ~10GB, reserved_host_cpus=2 ... and restarting the nova service. So when we look at OpenStack, after the above reservation is done on all...
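    For context, the options quoted in the snippet live in the [DEFAULT] section of nova.conf; a minimal sketch (the values are examples only, matching the post):

    ```ini
    [DEFAULT]
    # Reserve 8 GB of RAM for the host itself (value is in MB)
    reserved_host_memory_mb = 8192
    # Reserve ~10 GB of disk for the host (value is in MB)
    reserved_host_disk_mb = 10000
    # Keep 2 host CPUs out of the pool handed to guests
    reserved_host_cpus = 2
    ```

    After editing the file on each Compute node, the nova-compute service must be restarted for the new reservations to take effect.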
  2. Reserving HW resources for Hypervisor (physical machine) itself

    Hello. So, as it is possible on OpenStack and VMware (ESXi), primarily for the system, or even for specific Hosts/Hypervisors/physical machines on which you do not want to overprovision VMs/LXCs. In practice, when you need to host guests (VMs or LXC containers) that cannot be overprovisioned...
  3. Reserving HW resources for Hypervisor (physical machine) itself

    Thank you for the idea of how to do it manually, but my question was: is it possible from the GUI/config file?
  4. Reserving HW resources for Hypervisor (physical machine) itself

    For example, OpenStack on every Compute node (the one with the Hypervisor) has a config file for Hypervisor/Host-specific configuration/tuning. Specifically, this file is named /etc/nova/nova.conf, and it has (among other things) three sections like this one (with which you can reserve...
  5. Reserving HW resources for Hypervisor (physical machine) itself

    Hello. I have a question related to the Proxmox VE nodes (hardware/bare metal). Is there any possibility to reserve some hardware resources for the Hypervisor itself, like: a reserved amount of RAM, reserved CPU cores, reserved disk space (if needed)? Similar to what is possible on OpenStack...
  6. MACVTAP as future replacement of classic NIC+BRIDGE+TAP interfaces

    The advantage may be removing one layer (the bridge interface), which would speed things up and give a "lighter" configuration :) ... as I understand from the IBM presentation, there is a speed-up in network communication between VMs on the same node. They (IBM) comment on the improvement using MACVTAP: HERE (There...
  7. MACVTAP as future replacement of classic NIC+BRIDGE+TAP interfaces

    MACVTAP supports several modes (VEPA, bridge, private, passthru), and one of them behaves like a plain bridge. ... as explained HERE (I am proposing this as an idea :) )
  8. MACVTAP as future replacement of classic NIC+BRIDGE+TAP interfaces

    Hello. Are you considering a future replacement of the classic Bridge+TAP interface (to the VM network) with MACVTAP? MACVTAP is a relatively new replacement for the TAP interface, but it also uses /dev/tapXY like the old one (classic TAP), so it can easily be used with QEMU. Since MACVTAP can be connected...
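    As a sketch of what the snippet describes (interface names, the tap index, and the QEMU invocation are assumptions for illustration; requires root):

    ```shell
    # Create a macvtap device in bridge mode on top of the physical NIC
    # (eth0 is an assumed interface name)
    ip link add link eth0 name macvtap0 type macvtap mode bridge
    ip link set macvtap0 up

    # The kernel exposes it as /dev/tapN, where N is the interface index
    ip link show macvtap0    # e.g. "12: macvtap0@eth0: ..." -> /dev/tap12

    # Hand that tap device to QEMU as an already-open file descriptor;
    # the guest NIC must use the macvtap device's MAC address
    qemu-system-x86_64 \
      -netdev tap,fd=3,id=net0 3<>/dev/tap12 \
      -device virtio-net-pci,netdev=net0,mac="$(cat /sys/class/net/macvtap0/address)"
    ```

    This is why MACVTAP can slot in where a classic TAP was used: QEMU still just receives a tap file descriptor.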
  9. eBPF (+XDP) firewall

    There is also one interesting project using XDP: "stapbpf – SystemTap's new BPF backend". Initial info (from 2017) can be found here: https://developers.redhat.com/blog/2017/12/13/introducing-stapbpf-systemtaps-new-bpf-backend/ And in 2019 we can see more and more maturity (1)...
  10. eBPF (+XDP) firewall

    Regarding OpenvSwitch and XDP: in version v2.12.0 (03 Sep 2019) we can see: * Add Linux AF_XDP support through a new experimental netdev type "afxdp". And per their manual on how to compile it, we can see that it is "experimental" for now (Sep 2019) but can be used. Since AF_XDP is in production...
  11. eBPF (+XDP) firewall

    Great config examples can be found here: https://www.netronome.com/documents/305/eBPF-Getting_Started_Guide.pdf (not only for offloading using NFP NICs). Read also: https://cilium.io/blog/2018/11/20/fb-bpf-firewall/ There are also projects providing an iptables-like command...
  12. eBPF (+XDP) firewall

    Hello. Starting with Linux kernel 4.18 we have production-ready XDP + eBPF capabilities, now shipped in production distributions, starting with Red Hat Enterprise Linux 8.1 and of course CentOS Linux 8.1. Others will follow soon. For those not familiar, in simple words: XDP is...
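    In simple terms, an XDP program is a small eBPF function attached to a NIC that decides the fate of each packet before the normal network stack sees it. A minimal sketch (file, function, and interface names are illustrative, not from the post) of a program that lets everything through, where a firewall would instead return XDP_DROP for unwanted traffic:

    ```c
    /* minimal_xdp.c - minimal XDP program: pass every packet.
     * Build: clang -O2 -target bpf -c minimal_xdp.c -o minimal_xdp.o
     * Load:  ip link set dev eth0 xdp obj minimal_xdp.o sec xdp
     */
    #include <linux/bpf.h>

    #ifndef __section
    #define __section(NAME) __attribute__((section(NAME), used))
    #endif

    __section("xdp")
    int xdp_pass_all(struct xdp_md *ctx)
    {
        /* XDP_PASS hands the packet on to the normal network stack;
         * returning XDP_DROP here would silently discard it at the driver. */
        return XDP_PASS;
    }
    ```

    The verdict is returned at the earliest possible point (the driver), which is what makes XDP attractive for firewalling.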
  13. PVE Crashing Starting LXC Container (5.1)

    We have the same problem and cannot start any of the LXC containers anymore: pveversion -v: proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve), pve-manager: 5.1-41 (running version: 5.1-41/0b958203), pve-kernel-4.13.13-2-pve: 4.13.13-32...
  14. VLAN in GUI

    Most of my use cases are: ethXY cards ==> bond ==> bridge ==> VLANs. For a few years I have mostly used: OpenvSwitch bond ==> OVS bridge ==> OVS IntPort for every VLAN that I need (one OVS IntPort per VLAN). For example: OVS IntPort VLAN100 - some with IP/Netmask/DG, OVS IntPort VLAN200 - some...
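    The one-IntPort-per-VLAN pattern described above is written by hand in /etc/network/interfaces; a sketch of a single such stanza (names, tag, and addresses are examples):

    ```
    # An OVS internal port carrying VLAN 100 on bridge vmbr0,
    # with an IP for the host itself on that VLAN
    allow-vmbr0 vlan100
    iface vlan100 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=100
        address 192.168.100.2
        netmask 255.255.255.0
    ```

    One such stanza is repeated per VLAN, which is exactly the repetition a GUI could automate.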
  15. VLAN in GUI

    Dear all. It would be a good choice to add native support for VLANs in the GUI for Linux native interfaces (LAN card, Bond, Bridge). For now it is possible to use them by manually modifying /etc/network/interfaces, but it would be much better in the GUI (with a syntax check). In OpenvSwitch it is...
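    The manual edit in question is a VLAN stanza like this sketch (interface name, tag, and addresses are examples; the 802.1Q module must be available):

    ```
    # VLAN 100 on top of Linux bridge vmbr0, addressed on the host
    auto vmbr0.100
    iface vmbr0.100 inet static
        vlan-raw-device vmbr0
        address 192.168.100.2
        netmask 255.255.255.0
    ```

    A GUI with a syntax check would mainly be guarding against typos in exactly this kind of stanza.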
  16. netmap (+VALE) + QEMU

    Update: VALE/DPDK/OVS performance tests: http://cnp.neclab.eu/vale According to this article: http://docs.openvswitch.org/en/latest/intro/install/dpdk/ OpenvSwitch can be built with the DPDK libraries for fast packet processing, so this can be another approach. One more interesting presentation...
  17. netmap (+VALE) + QEMU

    According to this article: https://www.linux-kvm.org/images/c/c5/Kvm-forum-2013-High-Performance-IO-for-VMs.pdf using netmap with the VALE switch ( http://info.iet.unipi.it/~luigi/vale/ ) achieves much higher throughput, even 5x that of virtio (from the presentation: for 64-byte packets, up to 2.5 Mpps in...
  18. OpenvSwitch 2.5 (with connection tracking) as replacement for iptables firewall model in the future

    As I understand it, the current network model of Proxmox VE is (as in the picture - logical view): And the new one (with OpenvSwitch 2.5.x, using nf_conntrack) would remove the need for: using OpenvSwitch flows as a possible firewall solution, since OVS alone is not built as a firewall (it supports only stateless...
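    The connection tracking referenced here is used through the ct() action in OpenFlow rules; a hedged sketch of a stateful "allow established, commit new" policy (bridge name, table numbers, and priorities are assumptions):

    ```shell
    # Untracked IP traffic: send it through conntrack, then re-evaluate in table 1
    ovs-ofctl add-flow br0 "table=0,priority=50,ip,ct_state=-trk,actions=ct(table=1)"
    # New connections: commit them to the conntrack table, then forward normally
    ovs-ofctl add-flow br0 "table=1,priority=50,ip,ct_state=+trk+new,actions=ct(commit),normal"
    # Established connections: forward without further checks
    ovs-ofctl add-flow br0 "table=1,priority=50,ip,ct_state=+trk+est,actions=normal"
    ```

    This is what makes OVS >= 2.5 a candidate for stateful firewalling, which pure (stateless) flow matching could not do.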
  19. OpenvSwitch 2.5 (with connection tracking) as replacement for iptables firewall model in the future

    Hello. Since OpenvSwitch v2.5 is out now and has support for connection tracking via the Linux kernel module (nf_conntrack), can we expect that Proxmox VE will in the future replace the "old" network model and introduce a new one? The situation is similar to the OpenStack network model --> Video...
  20. Problem with VLAN in Proxmox 4.1

    Proxmox VE 4.2 + OpenvSwitch on 2 nodes in a cluster (+ iSCSI storage for Quorum). Simple scenario: eth0 connected to a switch trunk port (802.1Q). OpenvSwitch bridge (vmbr0): bridge ports (eth0). LXC config (LXC1, LXC2): Network: Bridge: vmbr0, VLAN tag: 11. VM config (VM1, VM2)...