Search results

  1. 403 Permission check failed (permission denied - invalid PVEVNC ticket)

    Solution (at least for me): https://forum.proxmox.com/threads/novnc-from-remote.34634/#post-169844
  2. [SOLVED] NOVNC from remote

    After a long time struggling, I finally found a solution to this. But first of all many thanks to all forum posters also trying to get this working and sharing their results – that helped a lot. The following is a brief description of my setup: Starting point is my lab cluster with three nodes...
  3. 403 Permission check failed (permission denied - invalid PVEVNC ticket)

    This is exactly where I am stuck too. It looks like the server only waits 10 seconds for a connection. If I start a telnet session within 10 seconds I get a connection:
    # telnet a.b.c.d 5900
    Trying a.b.c.d...
    Connected to a.b.c.d.
    Escape character is '^]'.
    RFB 003.008
    If I click connect in...
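
The short connection window matches the lifetime of the ticket returned by the vncproxy API call: it has to be used within a few seconds. A minimal sketch of how a client might assemble the vncwebsocket URL from a vncproxy response (host, node, VMID, and the response values here are hypothetical placeholders, and the URL must be opened before the ticket expires):

```python
# Sketch: composing the noVNC websocket URL from a vncproxy response.
# The response dict mirrors fields returned by
# POST /api2/json/nodes/{node}/qemu/{vmid}/vncproxy.
from urllib.parse import quote

def websocket_url(host, node, vmid, vncproxy_response):
    """Build the vncwebsocket URL; it must be opened within the
    ticket's short validity window (about 10 seconds, per the thread)."""
    port = vncproxy_response["port"]
    ticket = quote(vncproxy_response["ticket"], safe="")
    return (f"wss://{host}:8006/api2/json/nodes/{node}/qemu/{vmid}"
            f"/vncwebsocket?port={port}&vncticket={ticket}")

# Mocked response values for illustration only:
resp = {"port": "5900", "ticket": "PVEVNC:AAAA=="}
print(websocket_url("pve.example.com", "node1", 100, resp))
```

The ticket is percent-encoded because PVEVNC tickets contain characters (`:`, `=`) that are not safe in a query string.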
  4. [SOLVED] NOVNC from remote

    Hi, I am also trying to set up an external NOVNC website using pve2_api.class.php from here: https://github.com/CpuID/pve2-api-php-client/blob/master/pve2_api.class.php My test lab consists of one proxmox node and one linux VM located in the proxmox subnet. The VM has apache and PHP installed...
  5. fencing option in datacenter.cfg

    Hi everybody, The help page for datacenter.cfg lists this option: I cannot find any information on how to configure /etc/pve/ha/fence.cfg. I know it's experimental but we would like to look into it and see if it can bring back IPMI fencing (it worked better for us). Can someone please...
  6. [SOLVED] new cluster node only partially working

    Seems I'm still missing something for multicast in my FW ruleset. Temporarily disabled the node firewall rules and all is fine now. Thanks for your help.
  7. [SOLVED] new cluster node only partially working

    Oh my ... thanks. Hopefully a last question: is this output ok or should I investigate further regarding multicast packet loss?
    node1 : unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.035/0.098/0.367/0.036
    node1 : multicast, xmt/rcv/%loss = 10000/0/100%, min/avg/max/std-dev =...
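
Those omping numbers (0% unicast loss but 100% multicast loss) are the classic signature of blocked multicast traffic rather than a flaky network. If you want to check such output programmatically, a small sketch written against the exact line format quoted above:

```python
# Sketch: extract the loss percentage from an omping summary line.
import re

def loss_percent(line):
    """Return (mode, loss%) parsed from an omping result line."""
    m = re.search(r"(unicast|multicast), xmt/rcv/%loss = \d+/\d+/(\d+)%", line)
    return m.group(1), int(m.group(2))

# 100% multicast loss alongside 0% unicast loss points at filtered
# multicast (e.g. a firewall rule or IGMP snooping on the switch).
print(loss_percent("node1 : multicast, xmt/rcv/%loss = 10000/0/100%, ..."))
```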
  8. [SOLVED] new cluster node only partially working

    After fixing a wrong firewall ruleset (I had failed to allow the corosync ports on the multicast IP ranges) and restarting corosync, all nodes now show all other nodes as up in their web interfaces. Output of the corosync tools is ok on all nodes as well:
    # corosync-cfgtool -s
    Printing ring status.
    Local...
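
For reference, a sketch of what such rules might look like in /etc/pve/firewall/cluster.fw. The source network is a placeholder, and the port range (UDP 5404-5405 for corosync totem) should be verified against your corosync.conf and your PVE version's firewall documentation:

```
[RULES]
# corosync totem traffic between cluster nodes (UDP 5404-5405);
# 10.0.0.0/24 is a placeholder for the cluster network
IN ACCEPT -p udp -source 10.0.0.0/24 -dport 5404:5405
# IGMP, so switches with IGMP snooping keep forwarding the multicast group
IN ACCEPT -p igmp
```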
  9. [SOLVED] new cluster node only partially working

    /etc/pve/corosync.conf (IP replaced):
    # cat /etc/pve/corosync.conf
    logging {
      debug: off
      to_syslog: yes
    }
    nodelist {
      node {
        name: node5
        nodeid: 4
        quorum_votes: 1
        ring0_addr: node5
      }
      node {
        name: node3
        nodeid: 3
        quorum_votes: 1
        ring0_addr: node3
      }
      node { name...
  10. [SOLVED] new cluster node only partially working

    Just did that and now I'm even more confused: only the newly added node seems to have a multicast IP:
    # corosync-cmapctl -g totem.interface.0.mcastaddr
    totem.interface.0.mcastaddr (str) = 239.192.9.133
    On all other nodes it's:
    # corosync-cmapctl -g totem.interface.0.mcastaddr
    Can't get key...
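
One plausible explanation (an assumption, not confirmed in the thread) is that the cmap key only exists when mcastaddr is configured explicitly, and only the newly added node got it written into its totem configuration. Pinning the multicast group explicitly for all nodes in the totem section of /etc/pve/corosync.conf (remembering to bump config_version so the change propagates) would look roughly like this; cluster name and network are placeholders:

```
totem {
  cluster_name: mycluster      # placeholder
  config_version: 5            # must be increased on every edit
  version: 2
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0      # placeholder cluster network
    # explicit multicast group; when omitted, corosync derives one
    # from the cluster name
    mcastaddr: 239.192.9.133
    mcastport: 5405
  }
}
```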
  11. [SOLVED] new cluster node only partially working

    After adding a new cluster node I recognized that some nodes cannot migrate to the new node. Everything is on the same no-subscription version:
    proxmox-ve: 4.4-77 (running kernel: 4.4.35-1-pve)
    pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
    pve-kernel-4.4.13-1-pve: 4.4.13-56...
  12. migrate all and limiting parallel jobs not working with HA-VMs?

    Hi Dietmar, as suggested: https://bugzilla.proxmox.com/show_bug.cgi?id=1236 Regards, Dirk
  13. migrate all and limiting parallel jobs not working with HA-VMs?

    Hi all, not sure if this is a bug: When starting a migrate all I can limit parallel jobs to e.g. 1 job at a time. Because HA migration tasks only start background tasks and return almost immediately, the job limit does not work. The next migration is started while the background task from the...
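
Until the job limit is honored for HA-managed VMs, one client-side workaround is to serialize the migrations yourself: start one, poll until the VM actually reports the target node, then start the next. A sketch, where `migrate` and `get_vm_node` are hypothetical stand-ins for the corresponding Proxmox API calls (e.g. via an API client library):

```python
# Sketch: serialize HA migrations client-side, since the HA task
# returning immediately says nothing about the migration being done.
import time

def migrate_serially(vmids, target, migrate, get_vm_node, poll=2.0, timeout=600):
    """Start one migration at a time and wait until the VM reports
    the target node before starting the next."""
    for vmid in vmids:
        migrate(vmid, target)
        waited = 0.0
        while get_vm_node(vmid) != target:
            time.sleep(poll)
            waited += poll
            if waited >= timeout:
                raise TimeoutError(f"VM {vmid} did not reach {target}")

# Demo with in-memory stand-ins (no cluster needed):
location = {100: "node1", 101: "node1"}
migrate_serially([100, 101], "node2",
                 migrate=lambda v, t: location.__setitem__(v, t),
                 get_vm_node=lambda v: location[v], poll=0)
print(location)
```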
  14. Code signing on vncterm (VncViewer.jar) expired

    Hi Tom, You are right; I ended up with a complicated approach, only because I did not find a working solution for direct noVNC access without logging in to the GUI first and starting noVNC from there. If there is a working solution for that, I would really appreciate a link or example. Thank...
  15. Code signing on vncterm (VncViewer.jar) expired

    Hi Tom, Thanks for your reply. I would love to use noVNC and in fact, that was my first approach. However, because I cannot grant access to the PVE GUI to the VM owner (company policy and because cluster node states are always visible) I see no way to set this up. I have read some posts...
  16. Code signing on vncterm (VncViewer.jar) expired

    Hello, while trying the code mentioned here (the inline applet): https://forum.proxmox.com/threads/proxmox-api-vncproxy-vncwebsocket-novnc.22538/ I was prompted with a security warning. Looking into the details showed an expiration date "Sat Jul 23 01:59:59 CEST 2016". Is there a chance to...
  17. migration bandwidth

    Do we have an option to set this (migrate_speed) globally on a cluster?
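
At the time of this thread, migrate_speed was a per-VM option rather than a cluster-wide one. A hedged sketch of both the per-VM setting and the datacenter-wide bandwidth limit that later PVE releases added; the option names and units should be verified against your version's qm and datacenter.cfg man pages:

```
# Per-VM (MB/s), in /etc/pve/qemu-server/<vmid>.conf, or via:
#   qm set <vmid> --migrate_speed 50
migrate_speed: 50

# Later PVE releases add a datacenter-wide default (KiB/s) in
# /etc/pve/datacenter.cfg -- an assumption to check for your version:
bwlimit: migration=51200
```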
  18. All Cluster Nodes rebooted while migrating VM

    Hi manu, thanks for your reply. After taking a look at the logs it seems that there was a short period of network problems between the cluster nodes. I see corosync totem retransmit messages on all nodes just before the watchdog restarted the nodes. All our nodes have two 1G NICs bonded...
  19. All Cluster Nodes rebooted while migrating VM

    Hi all, wow, that was a busy hour or so ... On our 8-node cluster (with Ceph) I have started to update the nodes one by one. I've done this by migrating all VMs (no LXC in use) to other nodes and then triggering the update and a restart from the web interface. This procedure did work many times...
  20. GUI suggestion

    Hi LnxBil, Thanks for your reply. Maybe I was not exact enough. I know that I already can deny access to some hardware information in the main pane using roles and permissions. What I was asking for is the additional hiding of the (cluster) nodes in the navigation tree and in the search result...