Search results

  1. [SOLVED] Linux Bridge on a public address

    What does the command "iptables -t nat -nL -v" show on the Proxmox node?
  2. [SOLVED] Linux Bridge on a public address

    Can you ping 10.0.0.1 from inside the guest VM? Did you set the guest's network config to the correct bridge, vmbr2?
  3. [SOLVED] Linux Bridge on a public address

    The setup we did was for POSTROUTING NAT; for PREROUTING NAT, just add the necessary rules below the POSTROUTING section you already have in your network setup. You will probably need to PREROUTE to the correct inbound guest tap interface, as each guest has a specific tap interface...
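    A minimal sketch of what those extra rules could look like, appended right below the existing POSTROUTING/MASQUERADE lines of the NAT bridge in /etc/network/interfaces; the port 8080, the guest address 10.0.0.10 and the outward bridge vmbr0 are placeholder values, not details from this thread:

        # forward TCP 8080 arriving on the public bridge to one guest (example values)
        post-up   iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.0.10:8080
        post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.0.10:8080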
  4. [SOLVED] Linux Bridge on a public address

    Doesn't your server have a KVM port, like iLO or iDRAC?
  5. [SOLVED] Linux Bridge on a public address

    Did you reboot the host to apply the changes? Maybe you had a static IP config on eth0 that is not reflecting the changes to vmbr0.
  6. [SOLVED] Linux Bridge on a public address

    It is difficult to investigate the problem with just this screenshot... you may need to look at other logs. From what I can see, vmbr2 is up; maybe the problem is with eth0/vmbr0.
  7. [SOLVED] Linux Bridge on a public address

    Below is the complete setup of /etc/network/interfaces: auto lo iface lo inet loopback auto eth0 iface eth0 inet manual # interface iface - public address interface bridge auto vmbr0 iface vmbr0 inet static address 94.76.xxx.xxx netmask 255.255.255.192 gateway 94.76.xxx.xxx bridge_ports eth0...
  8. [SOLVED] Linux Bridge on a public address

    Leave eth0 alone... you will use a bridge for this setup. Your public IP address will be assigned to vmbr0, and the NAT bridge will be vmbr2; configure your guest to use 10.0.0.1 as the default gateway (or whatever non-routable IP scheme you use on your network)...
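    A minimal sketch of such a NAT bridge in /etc/network/interfaces, assuming a 10.0.0.0/24 internal subnet (the subnet, bridge names and rules below are illustrative); each guest then gets a static 10.0.0.x address with 10.0.0.1 as its gateway:

        # internal NAT bridge for the guests
        auto vmbr2
        iface vmbr2 inet static
                address 10.0.0.1
                netmask 255.255.255.0
                bridge_ports none
                bridge_stp off
                bridge_fd 0
                post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
                post-up   iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE
                post-down iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE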
  9. [SOLVED] Linux Bridge on a public address

    Have you tried it? # interface iface - public address interface bridge auto vmbr0 iface vmbr0 inet static address 94.76.xxx.xxx netmask 255.255.255.192 gateway 94.76.xxx.xxx bridge_ports eth0 bridge_stp off bridge_fd 0 # internal iface - used to bridge VMs with...
  10. Host instable after update 5.0.21-5

    Hi, I'm using Proxmox 6.0-12. After updating the kernel to 5.0.21-5, /var/log/syslog started to output the errors below: VM 106 qmp command failed - VM 106 qmp command 'balloon' failed - Invalid parameter type for 'value', expected: integer VM 103 qmp command failed - VM 103 qmp command...
  11. [SOLVED] Solutions for Consistently High CPU Usage

    Interesting question... How many vCPU slots can we power on simultaneously for VM guests, in terms of overcommit?
  12. Host memory usage

    I confirmed that the ZFS ARC cache is consuming this memory... Running the command below shows the ZFS ARC cache using almost 35GB... awk '/^size/ { print $1 " " $3 / 1048576 }' < /proc/spl/kstat/zfs/arcstats So, I have three questions: 1- Is there any way to limit this ARC cache on ZFS-local...
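    One way to cap the ARC is the zfs_arc_max module parameter; the 8 GiB value below is only an example, pick whatever leaves enough room for the VMs:

        # /etc/modprobe.d/zfs.conf - limit the ARC to 8 GiB (value in bytes)
        options zfs zfs_arc_max=8589934592

        # apply immediately without a reboot (same example value)
        echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
        # rebuild the initramfs so the limit also holds after a reboot
        update-initramfs -u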
  13. Host memory usage

    Hi, it appears that some local filesystem is doing a lot of caching (maybe ZFS)? I executed: echo 3 > /proc/sys/vm/drop_caches and host memory usage dropped to 20GB...
  14. Host memory usage

    Hi, my Proxmox 6.0 host is reporting 56GB of RAM usage, but the sum of my running VMs (5) is only 15GB (considering the total memory allocation, not the memory used inside the guests). This host does not run Ceph or any other services, just the KVM processes. What is consuming so much memory if the sum of...
  15. A subdomain per virtual machine

    This will not be possible with only one routable IP address... you can't redirect the same source IP/port to different "internal" servers at the same time; NAT does not support it. You can do some tricks with an Apache proxy, but only for HTTP/HTTPS.
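    A minimal sketch of that Apache trick, assuming mod_proxy and mod_proxy_http are enabled; the subdomain and internal address are made-up examples, and one such name-based virtual host is needed per VM:

        <VirtualHost *:80>
            # requests for this subdomain are forwarded to one VM (hostname and IP are examples)
            ServerName vm1.example.com
            ProxyPreserveHost On
            ProxyPass        / http://10.0.0.11/
            ProxyPassReverse / http://10.0.0.11/
        </VirtualHost>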
  16. ADD IP

    Hi, you will need to do a PREROUTING NAT on your Internet router (where the Internet-routable IP resides) and redirect it to your internal server (physical or VM) using a full NAT or a TCP/UDP port-based NAT. If you don't have management access to this router, you will need to request it from your...
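    On an iptables-based router, the two variants could look roughly like this; the public and internal addresses are documentation placeholders:

        # full NAT: every port on the public IP goes to one internal host
        iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 192.168.1.10
        # port-based NAT: only TCP 443 is redirected
        iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443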
  17. Shared Storage for a PVE Cluster

    I think it is not possible, since the snapshot file is a "point in time" of the original VM; the snapshot should reside on the same storage where the original VM disk is. Also, you must be careful with this; maybe I'm wrong, but at least in a VMware environment, when you create a snapshot, all the I/O writes...
  18. Shared Storage for a PVE Cluster

    Maybe this can help you: https://pve.proxmox.com/wiki/ISCSI_Multipath After you set up the iSCSI connectivity to your storage, just create LVM on Proxmox on top of your iSCSI LUN disk. For ZFS over iSCSI you will need a ZFS-enabled storage; zfs-over-iscsi is...
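    A rough sketch of that LVM step once multipath is up; the device name, volume group and storage ID are assumptions (check the real device with "multipath -ll"):

        # create a volume group on the multipath iSCSI LUN (device name is an example)
        pvcreate /dev/mapper/mpatha
        vgcreate vg_san /dev/mapper/mpatha
        # register it in Proxmox as shared LVM storage usable by all cluster nodes
        pvesm add lvm san-lvm --vgname vg_san --content images --shared 1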
  19. Shared Storage for a PVE Cluster

    open-iscsi and multipathd can do that job on Debian regarding HA for the iSCSI SAN networking. Proxmox has a built-in locking method for writes, and LVM-thick on top of iSCSI will use this method to take care of it, so don't worry; the only requirement is that all nodes must live on the same HA...
  20. Shared Storage for a PVE Cluster

    Your storage (Dell) is probably already redundant (two power supplies, two iSCSI controllers, etc.), so why be worried about using iSCSI for shared storage? My recommendation: use thick LVM on top of iSCSI for shared storage and you will be good. GlusterFS, from my personal experience, is...