Search results

  1.

    Random reboot due to multicast hash table full

    Thanks for your reply. I just pasted it here: http://pastebin.com/gpx4ewhm
  2.

    Random reboot due to multicast hash table full

    Hi, I am experiencing random reboots on nodes in a cluster. After much screening of the logs I found these messages in kernel.log: Feb 22 16:31:06 node6 kernel: [2486811.543673] fwbr104i0: Multicast hash table maximum of 512 reached, disabling snooping: fwln104o0 Feb 22 16:31:06 node6 kernel...
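    A mitigation often suggested for this log message (the values below are illustrative, not from this post) is to raise the bridge's multicast hash table limit, or to disable multicast snooping on the affected firewall bridge entirely:

    ```shell
    # Sketch, assuming the bridge name fwbr104i0 from the log above.
    # hash_max must be a power of two; 4096 is an illustrative value.
    echo 4096 > /sys/class/net/fwbr104i0/bridge/hash_max

    # Alternatively, turn off multicast snooping on that bridge:
    echo 0 > /sys/class/net/fwbr104i0/bridge/multicast_snooping
    ```

    Note that these sysfs writes require root and do not persist across reboots.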
  3.

    ceph dashboard refresh time

    Thanks for the great work from the Proxmox developer team, such a nice and convenient Ceph dashboard. One parameter I would like to know about is how to reduce the refresh time for the performance output. Right now it refreshes roughly every 6 to 7 sec; would it be possible to reduce this to 2 - 3 sec so...
  4.

    Preview, Feedback wanted - Cluster Dashboard

    Looks nice! It would be great if you could incorporate info like ceph-dash (Ceph Cluster Placement Group Status) too. ;) Just a small column or value would be sufficient. Even though this can be monitored from the Proxmox GUI, we relied a lot on ceph-dash to see the PG status, especially when...
  5.

    100% Wearout on Kingston SSD

    I am running a few nodes, each with a mix of SSD brands in use. Strangely, all the Kingston SSDs show 100% wearout on all nodes, but the SSDs are still active and in use. What could be wrong here? https://postimg.org/image/3ptnnfb7p/ S.M.A.R.T. output https://postimg.org/image/7fymh2b71/...
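    One way to cross-check the GUI's wearout figure (device path below is an example, not from the post) is to read the raw SMART attributes directly, since SSD vendors report wear with different attribute IDs and polarity, and a reversed scale can make a healthy drive read as 100% worn:

    ```shell
    # Illustrative sketch; replace /dev/sda with the actual Kingston device.
    smartctl -a /dev/sda
    # Compare the drive's life/wear attribute against what the GUI reports.
    ```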
  6.

    ceph base kvm template delete failed

    Hi, after upgrading to the latest release, PVE 4.2, I can't delete a KVM template created under Ceph. TASK ERROR: rbd error: rbd: error setting snapshot context: (2) No such file or directory. Has anyone experienced a similar issue?
  7.

    reverse proxy - VNC Strange behaviour

    Here is the nginx config. I observed that if I mark the other 2 nodes as down in the nginx upstream, it works perfectly, with no frozen screen in noVNC. Could it be that round-robin load balancing sends requests to all nodes at the same time, causing a conflict or confusing Proxmox noVNC...
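    If round-robin distribution is indeed the culprit, one hedged sketch of a fix (node addresses are assumed, not from the post) is to pin each client to a single backend with nginx's ip_hash directive, so the noVNC websocket always lands on the node that issued the VNC ticket:

    ```
    # Hypothetical upstream block; names and addresses are illustrative.
    upstream proxmox {
        ip_hash;                  # sticky: one client IP -> one backend node
        server 10.0.0.1:8006;
        server 10.0.0.2:8006;
        server 10.0.0.3:8006;
    }
    ```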
  8.

    reverse proxy - VNC Strange behaviour

    Hi, I am doing a lab test of an nginx reverse proxy with load balancing, running the latest Proxmox VE 4.2. I am able to access the Proxmox web GUI via normal https on port 443 on all nodes, and it will automatically select an available node as per the nginx/conf.d configuration...
  9.

    ip spoof - ipset

    Em... I need to confirm again, but as long as I set rp_filter=1, cluster communication breaks.
  10.

    ip spoof - ipset

    Hi, thanks for the reply. net.ipv4.conf.all.rp_filter=1 breaks the cluster communication; with net.ipv4.conf.all.rp_filter=0 all cluster nodes reconnect again. Now I realise the IN-to-OUT traffic is blocked if I change to an IP that does not belong to the container (ipfilter-net0 active), but it can...
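    A possible middle ground between 0 (off) and 1 (strict, which reportedly breaks the cluster here) is the kernel's loose reverse-path mode, rp_filter=2, which only drops packets whose source address is unreachable via any route. Whether it suits this particular cluster is an assumption to verify:

    ```shell
    # Sketch: loose-mode reverse-path filtering (kernel-documented value 2).
    sysctl -w net.ipv4.conf.all.rp_filter=2
    sysctl -w net.ipv4.conf.default.rp_filter=2
    # To persist across reboots (conventional drop-in location):
    # echo 'net.ipv4.conf.all.rp_filter = 2' > /etc/sysctl.d/90-rpfilter.conf
    ```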
  11.

    ip spoof - ipset

    Hi, I thought it would block all connections once the ipset is set up, and only allow traffic to the assigned IP. Yes, the other firewall rules work fine. This is what I have: ipset list Name: PVEFW-0-management-v6 Type: hash:net Revision: 6 Header: family inet6 hashsize 64 maxelem 64 Size in...
  12.

    ip spoof - ipset

    Hi, what do you mean by level-3 TCP/UDP? On the switch, or within the Proxmox interface?
  13.

    ip spoof - ipset

    Thank you for your response. But no luck: even after I change to ipfilter-net0, I can still add an additional IP inside the container, and it responds to any connection after adding the container firewall rules. Here is the host firewall: Chain veth201i0-IN (1 references) target prot opt source...
  14.

    ip spoof - ipset

    Hi, I am learning firewall settings on Proxmox and trying to understand IP-spoof prevention. My firewall scenario is: LXC IP: 10.60.60.8 (eth0) Datacenter level: on Host level: on Container level: on IPfilter for container: on ipfilter-eth0 added for the above IP at container level via the Proxmox...
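    For reference, a hedged sketch of what the container's firewall file might contain for this scenario (the CTID path and exact layout are assumptions): note that on PVE the per-NIC ipset is named after the Proxmox NIC, i.e. ipfilter-net0 for net0, not the in-container eth0.

    ```
    # Hypothetical fragment of /etc/pve/firewall/<CTID>.fw
    [OPTIONS]
    enable: 1
    ipfilter: 1

    [IPSET ipfilter-net0]
    10.60.60.8
    ```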
  15.

    openvswitch cannot communicate

    Look at the example given at https://pve.proxmox.com/wiki/Open_vSwitch
  16.

    openvswitch cannot communicate

    You need to create an OVS internal port on top of the OVS bridge before you can use it.
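    In the spirit of the wiki example referenced earlier in the thread, a minimal /etc/network/interfaces sketch (interface names and addresses are illustrative, not from the post) might look like:

    ```
    # Hypothetical fragment of /etc/network/interfaces
    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports mgmt

    # OVS internal port created on top of the bridge, carrying the host IP
    auto mgmt
    iface mgmt inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        address 192.168.1.10
        netmask 255.255.255.0
    ```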
  17.

    Proxmox 4.1 can not start LXC Container

    Try creating a new CT with the quota option disabled; it should be good to go.
  18.

    Infiniband & PVE 4.1

    I am bringing this thread up again to look for further information or successful use cases of InfiniBand on Proxmox 4.1. https://pve.proxmox.com/wiki/Infiniband only covers versions up to 3.4.
  19.

    Infiniband & PVE 4.1

    Hi, I am in the midst of sourcing used 10G devices for old hardware (run as Ceph storage clusters). From another forum I learned of another type of switching, InfiniBand. May I know whether the following are compatible with Proxmox VE 4.1? Dell JR3P1 C6100 4 nodes, 2 ports QDR (40Gb/s) Mellanox 4036 Volitaire...
  20.

    Backup failed PVE 4.1

    I had tried modifying 100.conf under /etc/pve/qemu-server by removing iothread=1. Backup started working, but it errored halfway, saying that the VM is not running (the VM crashed and shut down). == Update == After changing the conf file and rebooting the server, backup works fine.
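    The edit described above can be sketched as follows; the disk line shown is a made-up example (the real config lives at /etc/pve/qemu-server/100.conf), and a backup copy is kept in case the change needs reverting:

    ```shell
    # Work on a copy first; substitute the real config path when ready.
    conf=/tmp/100.conf
    printf 'scsi0: rbd:vm-100-disk-1,iothread=1,size=32G\n' > "$conf"  # sample disk line
    cp "$conf" "$conf.bak"           # keep the original for rollback
    sed -i 's/,iothread=1//' "$conf" # drop the iothread flag
    ```

    Per the post's update, a server reboot after editing the real config was needed before backups succeeded.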