Search results

  1. Firewall is disabled but the port is closed

    Hi. I have disabled the firewall everywhere in the Proxmox interface for performance reasons, but now I see that one of the TCP ports I need open on the node is closed, and I don't know how to open it. Thank you, everyone. (See the port-check sketch after these results.)
  2. Fast connection between containers

    And is this speed adequate for hardware RAID 10 with SAS drives? If so, I was mistaken, sorry. And thank you for being so patient. One last question: can I remove the two bridges I created on my cluster with working containers and assign IP addresses (two different ones from what I have now for vmbr0 and 1...
  3. Fast connection between containers

    Disabled the firewall on the host and on every container, rebooted. vmbr2 is still 1 Gbit/s slower than vmbr0. Thinking about moving from vmbr0/vmbr1 to one bond and using that. Can I create something like e1000 or vmxnet for LXC containers? (See the veth note after these results.)
  4. Fast connection between containers

    Does vmbr2 have hardware limitations? It is not connected to any NIC, and its speed is 15.2 Gbits/sec, while vmbr0's speed is 16.4. And vmbr0 is limited by the NIC's speed. I was hoping that vmbr2 would be much faster, as it's completely virtual.
  5. Fast connection between containers

    On different containers. Is there any way to overcome this limitation? 16.4 Gbits/sec through vmbr0, which is connected to eth0 (Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)), and eth1 has the same configuration. And 15.2 Gbits/sec through vmbr2, which is virtual. (See the iperf3 sketch after these results.)
  6. Fast connection between containers

    Sorry, I've just checked the firewall settings, and it has no rules at all, but it's running. Should I disable it? This server is located in a protected network.
  7. Fast connection between containers

    It's a bit slower than what you posted in #2 :) Both are running on the same host and have similar configurations, using the subvolume format. As far as I know, I don't. I mean, it's disabled in the Proxmox Web UI. I don't know if it's enabled somewhere else on the host, or on the clients, or on the router...
  8. Fast connection between containers

    Thank you! Here is my /etc/network/interfaces on the host (vmbr0 and vmbr1 are bridges to physical networks; I'm planning to remove them and make a bond with eth0 and eth1; vmbr2 is completely virtual, for interaction between containers): auto lo iface lo inet loopback iface eth0 inet... (See the bond/bridge sketch after these results.)
  9. Fast connection between containers

    Hi. I'm new to networking, so I'm looking for the right way to configure my LXC containers (Proxmox VE 4.4, one host, several Ubuntu 16.04 LXC containers) for better performance when transferring data between them. Thanks for any help.
  10. Change mount point's options

    Update: it's all working fine, thank you again.
  11. Network questions

    Thank you for the quick answer! Here are my new settings. Host: auto lo iface lo inet loopback iface eth0 inet manual auto vmbr0 iface vmbr0 inet static address 192.168.10.100 netmask 255.255.255.0 gateway 192.168.10.1 network 192.168.10.0...
  12. Change mount point's options

    Oh! And will I need to do something in the container, with its kernel, after that module is installed on the host system? (See the shared-kernel note after these results.)
  13. Change mount point's options

    With LXC I have better performance. Am I right? So can I install this module on the host, mount the share on it, and create a mount point to it in a container? (See the bind-mount sketch after these results.)
  14. Change mount point's options

    Sorry. I need to mount a CIFS folder, and I need to use a specific option, "wine": //192.168.10.2/change /mnt/change cifs credentials=/etc/credentials1,rw,iocharset=utf8,noperm,wine,nounix,nostrictsync 0 0 I have installed the etercifs package, which is needed for Wine@Etersoft, a custom Wine...
  15. Change mount point's options

    Hello. Is there a way to change a mount point's options at LXC container startup? I need it because of a kernel module, installed for one container only, that the commercial version of Wine needs in order to mount CIFS resources.
  16. Different drives, best performance

    Hello. Here are some stupid noob questions. I'm trying to configure LXC containers to work on an old Dell server. I have a PERC 6/i RAID controller there and 8 drives. The plan is to use 2 mirrored drives for the system and the LXC containers, and some drives in RAID 10 for data (files and...
  17. Network questions

    Sorry for the silly questions, I'm new to networking. Here is my /etc/network/interfaces from the Proxmox host: auto lo iface lo inet loopback iface eth0 inet manual iface eth1 inet manual auto vmbr0 iface vmbr0 inet static address 192.168.10.100 netmask 255.255.255.0...
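
Sketches for the results above

Result 1 does not say which port is involved, so the following is only a minimal check sequence on the node; the port number 12345 is a made-up example. A port usually shows as closed because nothing is listening on it, not because it is filtered:

    # run on the Proxmox node as root; 12345 is a hypothetical example port
    ss -tlnp | grep ':12345'     # is any process actually listening on that port?
    pve-firewall status          # confirm the Proxmox firewall is really stopped
    iptables -L -n -v            # look for leftover filter rules outside the PVE firewall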
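On result 3: e1000 and vmxnet are emulated NIC models for QEMU/KVM virtual machines; LXC containers are attached to a bridge through a veth pair instead, so there is no NIC model to pick. A container's network line looks roughly like the sketch below; the VMID 101, the MAC address, and the 10.10.10.0/24 subnet are assumptions, not values from the thread:

    # /etc/pve/lxc/101.conf (excerpt) -- VMID, address and MAC are made-up examples
    net0: name=eth0,bridge=vmbr2,hwaddr=DE:AD:BE:EF:00:01,ip=10.10.10.101/24,type=veth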
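The 16.4 and 15.2 Gbits/sec figures in results 4 and 5 read like iperf output. A minimal way to reproduce such a measurement between two containers, assuming iperf3 is installed in both and that they reach each other over the internal bridge at made-up 10.10.10.x addresses:

    # inside the "server" container
    iperf3 -s
    # inside the "client" container; address, duration and stream count are examples
    iperf3 -c 10.10.10.101 -t 30 -P 4

One caveat: when both containers sit on the same host, traffic bridged through vmbr0 never actually touches the 1 Gbit eth0 either, so both bridges end up measuring CPU-bound memory copies, which would explain why vmbr0 and vmbr2 land in the same 15-16 Gbit/s range.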
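Result 8 describes the target layout (a bond over eth0/eth1 plus a purely virtual vmbr2 for container-to-container traffic), but the excerpt is cut off. Below is a sketch of such an /etc/network/interfaces, reusing the 192.168.10.100/24 addressing from results 11 and 17; the bond mode, the miimon value, and the 10.10.10.0/24 subnet for vmbr2 are assumptions:

    # /etc/network/interfaces -- a sketch only; bond mode, miimon and the
    # 10.10.10.0/24 subnet for vmbr2 are assumptions, not values from the thread
    auto lo
    iface lo inet loopback

    iface eth0 inet manual
    iface eth1 inet manual

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode active-backup    # works without switch support; 802.3ad needs LACP on the switch

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.100
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

    auto vmbr2
    iface vmbr2 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none          # purely internal bridge for container-to-container traffic
        bridge_stp off
        bridge_fd 0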
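On results 12 and 15: LXC containers share the host's kernel, so a module is only ever loaded on the host and nothing has to be done to a container's "kernel". Assuming the etercifs package ships a module of the same name (an assumption; check the package's documentation), a quick sanity check would be:

    # on the host; the module name "etercifs" is assumed from the package name
    modprobe etercifs
    # containers share the host kernel, so the same module list shows up inside them too
    lsmod | grep -i cifs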
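Result 13 asks whether the share can be mounted on the host and then handed to a container; that is exactly what a bind mount point does. A sketch using the share, path, and mount options quoted in result 14; the container ID 101 is a made-up example:

    # on the host, with the etercifs module already loaded there
    mount -t cifs //192.168.10.2/change /mnt/change \
          -o credentials=/etc/credentials1,rw,iocharset=utf8,noperm,wine,nounix,nostrictsync
    # expose the mounted directory to container 101 as a bind mount point
    pct set 101 -mp0 /mnt/change,mp=/mnt/change

The "wine" option presumably only exists with the etercifs-patched CIFS code, so the mount itself has to happen on the host where that module is installed.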
