Search results

  1. How to attach existing HD image to a new VM?

    A vmdk image file is often in fact a raw file (if it is not a dynamic file). You should try the raw format. Alternatively, you can convert your vmdk file to qcow2 (but it takes time): qemu-img convert -f vmdk win2003-pve.vmdk -O qcow2 win2003-pve.qcow2 This example is taken from ...
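    The convert command above can be sketched as a small shell sequence. The filenames are the ones from the post; the qemu-img run is guarded so the sketch only attempts the conversion on a host where the tool and the image actually exist:

    ```shell
    # Derive the qcow2 target name from the vmdk source (names from the post).
    src="win2003-pve.vmdk"
    dst="${src%.vmdk}.qcow2"

    # The conversion can take a long time on large images; only run it where
    # qemu-img and the source image are present.
    if command -v qemu-img >/dev/null 2>&1 && [ -f "$src" ]; then
        qemu-img convert -f vmdk "$src" -O qcow2 "$dst"
    fi

    echo "$dst"
    ```

    qemu-img info on the source file first will tell you whether the vmdk is a flat (raw) or dynamic image.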
  2. French proxmox users meetup , anybody interested ?

    Hello, I would be interested too. It would be easier for me in Paris, but Lille is also possible.
  3. Activate license with 'invalid' domain

    Are you behind a proxy? If so, you have to configure the proxy under Datacenter, Options. I had the same error until I configured the proxy (it is something like http://proxy:8080).
  4. PVE 3.2 : no split button for console on one node

    Hi Tom, I just tried, and indeed it works. I did not think it was due to the browser cache, as I was using the same browser (Firefox) for all nodes, but after clearing the cache, the 'console' option in Datacenter did appear, along with the split button, and Spice is now working. Thanks a lot.
  5. PVE 3.2 : no split button for console on one node

    Hi, I have a four-node cluster. It was recently updated to PVE 3.2. On node 1, I cannot connect to VMs with Spice. On the three other nodes, it is working fine. I can even connect with Spice to VMs on this node from the other nodes. On this node, I have no 'Console viewer' option in the web...
  6. Update cluster to 3.2 and kernel 3.10 : quorum lost

    We have around 400 hosts on the local network, but we are on a private 10.x.y.z network, divided into several VLANs. I already increased the ARP table size in /etc/sysctl.conf : # Force gc to clean-up quickly net.ipv4.neigh.default.gc_interval = 3600 # Set ARP cache entry timeout...
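    Laid out as a file, the /etc/sysctl.conf fragment quoted above would look like this. Only the gc_interval line is given in the post; the timeout and threshold values below are illustrative placeholders for a ~400-host network, not the poster's actual settings:

    ```
    # Force gc to clean-up quickly
    net.ipv4.neigh.default.gc_interval = 3600
    # Set ARP cache entry timeout (illustrative value)
    net.ipv4.neigh.default.base_reachable_time_ms = 30000
    # Raise the ARP cache size thresholds above the host count (illustrative values)
    net.ipv4.neigh.default.gc_thresh1 = 1024
    net.ipv4.neigh.default.gc_thresh2 = 2048
    net.ipv4.neigh.default.gc_thresh3 = 4096
    ```

    The settings are applied with sysctl -p, and automatically at each boot.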
  7. Update cluster to 3.2 and kernel 3.10 : quorum lost

    Hi Dietmar, spirit, I tried on the last node I installed, which still has no VM. The first: # echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier did not seem to change the situation. The second: echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping seems to have reduced the...
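    Since an echo into /sys is lost on reboot, a common way to make the snooping change persistent is a post-up line on the bridge stanza in /etc/network/interfaces. Only vmbr0 and the sysfs path come from the post; the addresses and ports below are placeholders:

    ```
    auto vmbr0
    iface vmbr0 inet static
        address 10.1.2.3
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # disable IGMP snooping on the bridge so corosync multicast is flooded
        post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
    ```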
  8. Update cluster to 3.2 and kernel 3.10 : quorum lost

    Just a quick notice. The release notes say that the 3.10 kernel was for "testing only". I thought that, even if it had no OpenVZ patches, it was good enough to be in the enterprise repository. I can say from my experience that this is not the case. It seems that at least one node with a 3.6.32...
  9. Update cluster to 3.2 and kernel 3.10 : quorum lost

    Hi all, I recently got a new server, a Dell PE R620, and decided to install it using PVE 3.2. As I only use KVM, I decided to also install the new 3.10 kernel from the Enterprise repository. I had a little problem with this kernel, as my server did not reboot properly the first time. In fact it appeared...
  10. Proxmox Node running for 327 Days!

    Let me say that I don't consider this a great achievement. Not having rebooted means you did not upgrade any kernel during this time, and perhaps did no upgrades at all? I don't consider this wise. You have to plan some maintenance time to do these upgrades, to upgrade kernels and other PVE...
  11. Rootdelay kernel parameter needed with Proxmox VE 3.2 and Adaptec 6405 with ZMCP

    Hello, which kernel do you use? 3.6.32 or 3.10.0-1, both available in PVE 3.2? You should look at this thread on kernel 3.10: http://forum.proxmox.com/threads/17306-New-3-10-0-Kernel Look at the end; the other option you can use, cleaner in my opinion, is 'scsi_mod.scan=sync'. You have to add it...
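    Kernel parameters like these go into /etc/default/grub and are activated with update-grub. A sketch, with the rootdelay value purely illustrative:

    ```
    # /etc/default/grub
    # Either wait a fixed number of seconds for the controller to appear...
    GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
    # ...or (the cleaner option from the post) make the SCSI bus scan synchronous,
    # so the root device exists before LVM activation:
    # GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.scan=sync"
    ```

    After editing, run update-grub and reboot.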
  12. New 3.10.0 Kernel

    Hi, same error here with a Dell PE R620 server, with an H710p RAID controller and eight 500 GB NearLine SAS drives in RAID 10. At boot, it says 'no controller found', then 'LVM volume not found' and 'volume pve-root not found'. In the rescue shell (busybox), 'lvm vgchange -a -y' then Ctrl-D...
  13. Dell Openmanage with Proxmox 3.1

    Hi, I also installed OMSA 7.3 on 3.1 this afternoon; it is working, but I am not completely convinced. If you install srvadmin-all, it depends on a lot of packages : # apt-get install srvadmin-all ... The following NEW packages will be installed: cim-schema hicolor-icon-theme libargtable2-0...
  14. Advice after node hardware failure - how to re-add server in cluster after reinstall

    Yes, I used other VMIDs; it was just impossible to restore a VM to its old VMID (ghost VM). Now that the srv-virt2 node is in the cluster again, I was able to delete these ghost VMs and re-use them to...
  15. Advice after node hardware failure - how to re-add server in cluster after reinstall

    I did a 'pvecm add IP-cluster -force', and indeed it seems to work. No error : # pvecm nodes Node Sts Inc Joined Name 1 M 14632 2013-12-09 10:33:50 srv-virt1 2 M 14632...
  16. Advice after node hardware failure - how to re-add server in cluster after reinstall

    Just to add some context, we don't have fencing configured, and the VMs were stored locally. So, when we add the node to the cluster again, it will not find the VMs locally. Most VMs have been restored on the...
  17. Advice after node hardware failure - how to re-add server in cluster after reinstall

    Hi Udo and Dietmar, thanks for the answers. It is reassuring to know that -force should work. I think I first have to delete the previous node ('pvecm delnode srv-virt2') before re-adding the reinstalled node?
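    The sequence discussed across this thread can be outlined as below. This is not one script: the delnode step runs on a surviving node and the add step on the reinstalled one. IP-of-cluster is a placeholder, and the pvecm calls are guarded since they only make sense on a live Proxmox cluster:

    ```shell
    node="srv-virt2"            # name of the failed/reinstalled node (from the thread)
    cluster_ip="IP-of-cluster"  # placeholder: address of any surviving cluster node

    if command -v pvecm >/dev/null 2>&1; then
        pvecm delnode "$node"          # on a surviving node: drop the stale entry
        pvecm add "$cluster_ip" -force # on the reinstalled node: force the rejoin
        pvecm nodes                    # on any node: verify membership
    fi

    echo "$node"
    ```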
  18. Advice after node hardware failure - how to re-add server in cluster after reinstall

    Hi all, we had two disks on a server fail in a row, and we lost the RAID (RAID 10). I replaced the disks and reinstalled Proxmox on the server, with the same IP and hostname (srv-virt2) as before; this is perhaps not the best option... There are three nodes in the cluster (srv-virt1...
  19. Updates for Proxmox VE 3.0 - including storage migration

    OK, solved. I added the required lines to my endpoint init script : ### BEGIN INIT INFO # Provides: endpoint # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Example initscript # Description: This file...
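    The flattened header above is the standard LSB block at the top of a Debian init script; laid out properly it reads as below (the Description line is truncated in the snippet and left as-is here):

    ```
    ### BEGIN INIT INFO
    # Provides:          endpoint
    # Required-Start:    $remote_fs $syslog
    # Required-Stop:     $remote_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Example initscript
    # Description:       This file...
    ### END INIT INFO
    ```

    Wheezy's dependency-based boot (insserv) rejects init scripts without this block, which is why the script had to gain it after the upgrade.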
  20. Updates for Proxmox VE 3.0 - including storage migration

    Yes, but I think I understand what happened. I installed an init script, endpoint (a tool from NetIQ, or Ixia, to check network bandwidth), and I think this init script is no longer compliant with the new Wheezy init scripts. Now I am trying to find a way to remove it correctly... Alain