Search results

  1.

    Cluster problem. Node is red, but online

    I have the same problem. pvesm status shows the local dir at less than 1% on all nodes. The dir is /var/lib/vz, but it is 16G and empty
  2.

    Lost a node in cluster under "server view"

    It came again; it's node002 this time :confused:
  3.

    Lost a node in cluster under "server view"

    I hit the problem again. Now I have stopped corosync and cannot start it (timeout). There are a lot of errors in /var/log/daemon.log: Oct 9 09:25:32 node006 pve-ha-crm[3037]: ipcc_send_rec failed: Connection refused Oct 9 09:25:32 node006 pve-ha-lrm[3040]: ipcc_send_rec failed: Connection refused Oct...
  4.

    Proxmox with OpenvSwitch

    my example, bonding -> vmbr -> vlan:

        allow-vmbr1 bond1
        iface bond1 inet manual
            ovs_bonds eth2 eth3
            ovs_type OVSBond
            ovs_bridge vmbr1
            ovs_options bond_mode=active-backup
            pre-up ( ifconfig eth2 mtu 9000 && ifconfig eth3 mtu 9000 )
            mtu 9000

        auto vmbr1
        iface vmbr1 inet manual
            ovs_type...
  5.

    [SOLVED] How to verify the IP address of vm

    there is a simple script:

        #!/usr/bin/python
        # coding=utf-8
        import re
        # use 'arp-scan' and 'egrep' to create a file mapping MAC/IP pairs to VM configs
        # arp-scan --interface=vlan100 10.205.1.0/24 > vms.addr
        # arp-scan --interface=vlan50 192.168.10.0/24 >> vms.addr
        # egrep...
  6.

    [SOLVED] How to verify the IP address of vm

    Thanks. I use the IP and application as the VM name, but some VMs have two or more NICs and IPs, and I want statistics. I found the command 'arp-scan'; it shows all MAC addresses and IPs. Then 'egrep -i --color '(mac address)' /etc/pve/nodes/*/qemu-server/*.conf' shows the matching vmid.conf and the net entry in it. arp-scan...
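    The workflow described here (run arp-scan to list MAC/IP pairs, then match each MAC against the qemu-server config files) can be sketched in Python as follows. This is a minimal sketch: the sample arp-scan output and config contents are made-up stand-ins, not data from the thread, and a real version would read /etc/pve/nodes/*/qemu-server/*.conf from disk.

    ```python
    # Hypothetical sample of 'arp-scan' output lines: IP, MAC, vendor (tab-separated).
    ARP_SCAN_OUTPUT = """\
    10.205.1.11\t52:54:00:12:34:56\tQEMU
    10.205.1.12\t52:54:00:ab:cd:ef\tQEMU
    """

    # Hypothetical contents of /etc/pve/nodes/*/qemu-server/<vmid>.conf files.
    VM_CONFS = {
        "100": "net0: virtio=52:54:00:12:34:56,bridge=vmbr1,tag=100",
        "101": "net0: virtio=52:54:00:AB:CD:EF,bridge=vmbr1,tag=50",
    }

    def map_ip_to_vmid(arp_output, vm_confs):
        """Return {ip: vmid} by matching arp-scan MACs against VM configs."""
        mapping = {}
        for line in arp_output.splitlines():
            parts = line.split("\t")
            if len(parts) < 2:
                continue
            ip, mac = parts[0], parts[1].lower()
            for vmid, conf in vm_confs.items():
                # Case-insensitive match, like 'egrep -i' in the post.
                if mac in conf.lower():
                    mapping[ip] = vmid
        return mapping

    print(map_ip_to_vmid(ARP_SCAN_OUTPUT, VM_CONFS))
    ```

    Matching case-insensitively matters because arp-scan prints lowercase MACs while Proxmox writes uppercase MACs into the VM config.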
  7.

    [SOLVED] How to verify the IP address of vm

    I want to associate a VM with its IP address, but there are some problems. I use openvswitch to create the bond, bridge, and vlan. When I run "ip addr", I can see only the interface name, such as "tap100i0", but not the IP address: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast...
  8.

    Lost a node in cluster under "server view"

    Yes, time is in sync. I tried to restart corosync on node6; now the corosync service has failed and cannot be started :confused:

        # systemctl status corosync.service
        ● corosync.service - Corosync Cluster Engine
            Loaded: loaded (/lib/systemd/system/corosync.service; enabled)
            Active: failed...
  9.

    Lost a node in cluster under "server view"

    Thanks for your reply. 'pvestatd' is active on all nodes:

        * pvestatd.service - PVE Status Daemon
            Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled)
            Active: active (running) since Tue 2016-06-28 23:05:19 CST; 2 days ago
            Process: 2752 ExecStart=/usr/bin/pvestatd start...
  10.

    Lost a node in cluster under "server view"

    Sorry for my poor English. I have a cluster with 6 nodes; it had been running OK for over six months. Today I found the icon of node6 shows a red cross, and all VMs on the node are grey, showing only the vmid (without the name). Then I ran "pvecm nodes" in the console; the result is normal, 6 nodes are online. I have restarted pveproxy...