Search results

  1. New upgrade issue

    0.2 to 0.5 ms
  2. New upgrade issue

    Is it possible to use the eth1 NIC for the Corosync connection? I'm using local storage for the VMs. If yes, how can I do that? (A sketch follows below.)
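
    A minimal sketch of that change, assuming eth1 carries a hypothetical 10.10.10.0/24 cluster network (subnet and addresses are illustrative, not from the thread): in /etc/pve/corosync.conf, point ring 0's bindnetaddr at the eth1 subnet, raise config_version in the same file, then restart corosync on every node.

      # fragment of /etc/pve/corosync.conf -- bind ring0 to eth1's subnet
      totem {
        interface {
          ringnumber: 0
          bindnetaddr: 10.10.10.0   # network address of eth1 (hypothetical)
        }
      }
      # after raising config_version, on each node:
      systemctl restart corosync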
  3. New upgrade issue

    I got something like this (the omping error is addressed below):

      root@lxx:~# omping -c 10000 -i 0.001 -F -q 1xxx
      omping: Can't find local address in arguments
      root@lxxx:~#

    Now on some nodes the noVNC console is stuck loading.
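
    For reference, omping expects the addresses of all nodes, the local one included; "Can't find local address in arguments" typically means the node's own name or IP is missing from the argument list. A hedged example with hypothetical node names, run identically on every node:

      # list every cluster node, including the one you are on
      omping -c 10000 -i 0.001 -F -q node1 node2 node3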
  4. New upgrade issue

    The issue still remains after matching all the kernels. Sometimes the noVNC console for VMs on some nodes just gets stuck loading and never opens, and the connection sometimes drops again; this time it shows a connection loss, but I can still move around the cluster.
  5. New upgrade issue

    Thank you, I did that; I hope this will resolve the issue.
  6. New upgrade issue

    How can I match them all? I do run updates, and they are all already up to date.
  7. New upgrade issue

    How is this possible, exactly, when all nodes are up to date? (A cross-node check is sketched below.)

      root@xx:~# apt-get dist-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Calculating upgrade... Done
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      root@xxx:~#

    What I need...
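
    One way to verify that the nodes really match, sketched under the assumption of root SSH access between nodes (node names are hypothetical): compare a checksum of the full pveversion -v output; any node whose checksum differs is running a different package set.

      # differing checksums => differing package versions
      for n in node1 node2 node3; do
          ssh root@$n 'hostname; pveversion -v | md5sum'
      done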
  8. New upgrade issue

    Here is 1 -

      root@xx:~# pveversion -v
      proxmox-ve: 5.0-25 (running kernel: 4.13.4-1-pve)
      pve-manager: 5.0-34 (running version: 5.0-34/b325d69e)
      pve-kernel-4.13.4-1-pve: 4.13.4-25
      pve-kernel-4.10.17-4-pve: 4.10.17-24
      libpve-http-server-perl: 2.0-6
      lvm2: 2.02.168-pve6
      corosync: 2.4.2-pve3
      libqb0...
  9. New upgrade issue

      root@xxx:~# pvecm status
      Quorum information
      ------------------
      Date:             Mon Oct 23 08:08:52 2017
      Quorum provider:  corosync_votequorum
      Nodes:            11
      Node ID:          0x00000001
      Ring ID:          8/129720
      Quorate:          Yes

      Votequorum information
      ----------------------...
  10. New upgrade issue

    When I run top -c I see pmxcfs and corosync taking 100% of the CPU, so I restarted corosync (sequence sketched below); the load is lower now, but the GUI is still down and I'm unable to connect to the cluster.

      Oct 23 07:40:31 xx pmxcfs[1259]: [dcdb] notice: received sync request (epoch 1/11341/0000043D)
      Oct 23 07:40:31 xx...
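
    A sketch of the restart sequence, assuming the stock Proxmox VE 5 service names; restarting corosync alone can leave pmxcfs wedged, so both services are usually cycled together:

      systemctl restart corosync               # cluster membership/communication
      systemctl restart pve-cluster            # pmxcfs, which backs /etc/pve
      systemctl status corosync pve-cluster    # verify both came back up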
  11. New upgrade issue

    They are all the same version, with the same setup as well. Now the nodes show up, but I get Connection refused (595), and on most of them the GUI is no longer working! (A service-restart sketch follows below.)
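
    If the 595 errors come from the API services on the affected nodes being down (an assumption, not a confirmed diagnosis), restarting them is a common first step; the service names below are the standard PVE 5 ones:

      systemctl restart pvedaemon   # local API daemon
      systemctl restart pveproxy    # web/API proxy on port 8006
      systemctl restart pvestatd    # status daemon that feeds the GUI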
  12. New upgrade issue

    Hello, I upgraded the cluster recently, but this issue keeps happening every few hours (screenshot attached): the connection drops, although I can still see the summary. To fix it I have to restart corosync.

      proxmox-ve: 5.0-25 (running kernel: 4.10.17-4-pve)
      pve-manager: 5.0-34 (running version...
  13. Ceph

    Hello, I'm looking to use Proxmox Ceph. My servers' configuration is the following:

      2U chassis, 8 x bays
      1 x 240 GB SSD
      7 x 4 TB enterprise drives
      1 x PCIe SSD as journal

    The main issue is that the motherboard has 8x SATA2 and 2x SATA3 ports, so I can't put all the drives on SATA3 ports. The tech guy is telling me that...
  14. Proxmox 5 reboot bug?

    Well, will this be fixed in the coming version, as per your statement? Because it's a really annoying issue.
  15. Proxmox 5 reboot bug?

    Yes, all clones, no matter what, have the same issue with rebooting. I just tested some over the last 2 days.
  16. Proxmox 5 reboot bug?

    It seems I figured out what the issue is. After some testing, it appears to be the following: if I do an ISO install into a KVM guest manually, it works fine, but if I clone it, the issue happens.
  17. Proxmox 5 reboot bug?

    Hello,

      1 - The root runs as ext4.
      2 - The /vz/ VPS storage is LVM.
      3 - Localisation: I use local storage for the VPS data.

    I haven't changed anything, since the storage is automatically added as /var/lib/vz; NFS sharing is used for backups and ISOs only.
  18. Proxmox 5 reboot bug?

    I didn't do anything special; my drive layout is as follows:

      /            (root files)
      /swap
      /var/lib/vz

    I installed Debian 9, then used the Proxmox packages. Add the Proxmox VE repository:

      echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

    Add...
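
    The quoted steps break off there; for context, a hedged sketch of how the standard install-on-Debian-9 procedure continues (key URL and package names as documented for PVE 5 on stretch; verify against the wiki before use):

      # fetch the repository signing key (PVE 5 / stretch)
      wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
          -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
      # refresh package lists and install Proxmox VE
      apt update && apt dist-upgrade
      apt install proxmox-ve postfix open-iscsi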
  19. Proxmox 5 reboot bug?

    Yes, I followed the wiki. Unfortunately, I can't move away from the Debian-based install, since that would require redoing at least 10 servers.