Search results

  1. VM do not get IP from udhcpd in LXC

    Hi, the VM does not receive an IP from the DHCP server running in LXC. No firewall, no iptables, no ebtables, all policies on ACCEPT (both in the LXC and on the Proxmox VE host). The DHCP server gets the requests, but the VM does not seem to receive the replies. The only peculiarity: they are both in the same tagged VLAN 66. Vlan...
  2. [SOLVED] Kubernetes : sharing of /dev/kmsg with the container

    Hi, you must load all required modules on the host (the Proxmox server), and all the loaded modules will then be available in the containers. My /etc/modules : # /etc/modules: kernel modules to load at boot time. # # This file contains the names of kernel modules that should be loaded # at boot...
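    For context, the step described above can be sketched as follows. This is a minimal, safe-to-run illustration: the module names overlay and br_netfilter are assumptions (pick whatever your containers actually need), and it writes to a stand-in file under /tmp rather than the real /etc/modules.

    ```shell
    # Stand-in for /etc/modules so this sketch is harmless to run;
    # on a real Proxmox host you would edit /etc/modules itself.
    MODULES_FILE=/tmp/modules.demo

    # Illustrative module names; not taken from the original post.
    printf 'overlay\nbr_netfilter\n' >> "$MODULES_FILE"

    # On the real host, load them immediately without a reboot:
    #   modprobe overlay && modprobe br_netfilter
    # Modules loaded on the host then show up as loaded inside the LXC containers.
    cat "$MODULES_FILE"
    ```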
  3. Bridge fdb (Forward DB) fills with 4095 mac for each VM (more than 32000 mac per host).

    And is it normal to have so many thousands of records when checking the bridge fdb with: bridge fdb show ?
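    A per-port count of learned MACs makes this kind of fdb explosion easy to spot. A minimal sketch on sample input (the MAC addresses and interface names below are made up; on a real host you would pipe `bridge fdb show` itself into the awk filter):

    ```shell
    # Sample lines in the shape `bridge fdb show` prints; on a real host,
    # replace the printf with the actual command.
    printf '%s\n' \
      'aa:bb:cc:00:00:01 dev tap100i0 master vmbr0' \
      'aa:bb:cc:00:00:02 dev tap100i0 master vmbr0' \
      'aa:bb:cc:00:00:03 dev eno1 master vmbr0' |
      awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}' |
      sort | uniq -c | sort -rn
    # Prints a count per port, highest first (here: 2 for tap100i0, 1 for eno1),
    # so a port with thousands of entries stands out immediately.
    ```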
  4. Bridge fdb (Forward DB) fills with 4095 mac for each VM (more than 32000 mac per host).

    Hi, I ran into very strange behaviour that basically completely broke my bridging in Proxmox. (I ended up with two isolated sets of hosts that could exchange ARP info within a set, but not between the two sets.) My config: 3 Proxmox hosts in a cluster, with on each host: 1 Ethernet NIC - Default...
  5. [SOLVED] Kubernetes : sharing of /dev/kmsg with the container

    Thanks for the summary. Kubernetes seems to work properly in LXC (but it was not so easy)... I used "Kubernetes the Hard Way" on GitHub and a video explaining how to port that to LXC: https://www.youtube.com/watch?v=NvQY5tuxALY&t=327s And added your tip for kmsg... Thanks !
  6. 2.0 RC1 : authentication key already exists

    Hi, I feel like I'm exhuming the living dead... But yes, there was an option to reset the cluster configuration without a complete reinstallation. I'm not sure which site I took the procedure from, but the one here seems quite similar...
  7. Shutdown timeout produces TASK ERROR: VM quit/powerdown failed - got timeout

    On a Windows 2003 SBS VM that is quite slow to shut down (about 200s), the VM shuts down correctly but I get this error after 30s: TASK ERROR: VM quit/powerdown failed - got timeout. The problem is not in the VM, as it shuts down correctly; it is only a reporting timeout problem. I tried to...
  8. How to reset cluster on 2.0 RC.

    Thanks leancode, this also did it for me. Just running rm -rf /etc/pve/nodes/* alone did not work: root@proxmox1:~# rm -rf /etc/pve/nodes/* rm: cannot remove `/etc/pve/nodes/*': Transport endpoint is not connected I first had to start pve-cluster, then delete the content of nodes, then restart the...
  9. How to reset cluster on 2.0 RC.

    So, here are the results: I cleared everything as described, then rebooted both nodes. I recreated the cluster on both nodes, then I tried to add a node with pvecm add 192.168.232.41 on the node with IP 192.168.232.42. I got the message "authentication key already exists". I tried to add node2 to cluster...
  10. How to reset cluster on 2.0 RC.

    Thanks leancode, I'll give it a try. I didn't reply to dietmar immediately because I first wanted to recheck. There is in fact no trace in the logs: root@proxmox2:~# /etc/init.d/cman restart Stopping cluster: Stopping dlm_controld... [ OK ] Stopping fenced... [ OK ] Stopping...
  11. How to reset cluster on 2.0 RC.

    Quite similar on both nodes: root@proxmox1:~# pvecm status cman_tool: Cannot open connection to cman, is it running ? And a tail on syslog shows: Mar 24 22:27:13 proxmox1 pmxcfs[1360]: [status] crit: cpg_send_message failed: 9 Mar 24 22:27:13 proxmox1 pmxcfs[1360]: [status] crit...
  12. How to reset cluster on 2.0 RC.

    My cluster configuration is not working. I would like to reset it. How can I do that? P.S.: Reinstalling is not an option as I have a long setup on DRBD, and I have already reinstalled twice. Just for info, my problem is: "cman_tool: corosync daemon didn't start Check cluster logs for details" when...
  13. 2.0 RC1 : authentication key already exists

    Bad news... It took me some 3 hours to install & configure the cluster with synchronous DRBD replication on SSD... So, I'll reinstall everything in 3 weeks. Thanks,
  14. 2.0 RC1 : authentication key already exists

    Thanks e100, I think you're totally right. But as I made many different attempts, I might not have a clean configuration, and I get the same error message on proxmox2. How can I fully reset the cluster configuration? Thanks,
  15. 2.0 RC1 : authentication key already exists

    Hi, I'm building a 2-node cluster, but I can't get rid of this error when I try to add my second node: authentication key already exists. It happened even the first time I added it. What could be wrong? In fact, I get this same message regardless of the hostname or IP I provide, even incomplete/wrong...
  16. Proxmox vs Oracle VM

    Hi, we were looking for a virtualization solution for our new IBM blades and we evaluated Oracle VM and Proxmox. In order to have comparable results we had to use the very latest Oracle VM software, and the beta version of Proxmox in order to have the shared storage capabilities. Here is a short (and...
  17. 1.4 Beta cluster : ERROR: Ticket authentication failed

    [Solved] Re: 1.4 Beta cluster : ERROR: Ticket authentication failed Problem fixed: the dates were too different on the two servers. Now that I run ntpdate on both of them, the error is gone. Thanks !
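    Since the root cause here was clock drift between the nodes, a quick skew check is worth sketching. The epoch values below are made-up samples; on real nodes you would compare `date +%s` locally against the other node's clock (e.g. via ssh), and keep both synced with NTP as the post does with ntpdate. The 300s threshold is an arbitrary illustration, not a documented limit.

    ```shell
    # Made-up epoch timestamps standing in for the two nodes' clocks;
    # on real hosts: t_master=$(date +%s); t_slave=$(ssh othernode date +%s)
    t_master=1695000000
    t_slave=1695000123

    # Absolute skew in seconds.
    skew=$(( t_master > t_slave ? t_master - t_slave : t_slave - t_master ))
    echo "clock skew: ${skew}s"

    # Ticket authentication tolerates only small drift; warn past an
    # arbitrary 300s threshold (illustrative, not an official value).
    if [ "$skew" -ge 300 ]; then
      echo "skew too large: ticket authentication may fail"
    fi
    ```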
  18. 1.4 Beta cluster : ERROR: Ticket authentication failed

    What might trigger this error? How can I check? Is there a log file with more details? Can I reproduce the error on the command line without using the pveca -l command? Regards,
  19. 1.4 Beta cluster : ERROR: Ticket authentication failed

    Yes, first by upgrading, then by reinstalling both nodes from the ISO. Regards,
  20. 1.4 Beta cluster : ERROR: Ticket authentication failed

    [Solved] 1.4 Beta cluster : ERROR: Ticket authentication failed Hi, I'm unable to join the slave node to the master, either after an upgrade or after a fresh installation. On my master node, I have: kvmcap0:~# pveca -l CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---DISK 1 ...