Search results

  1. KVM templates and default root password

    Yes, it would be a great idea, but the problem is how we "fetch" these newly generated keys to show the user/customer in WHMCS; there is no way. dietmar: With this, it is nice if the user, when the VM is created, opens the VNC console and tries to log in. In many cases, between when the VM is created and when the customer logs in (to...
  2. Linux KVM VMs don't halt completely

    Hi wolfgang, thank you for your suggestions! I've installed acpid on the VMs and get the same result with the "halt" command. But if I do a "shutdown -h now" the VMs power off correctly (whether acpid is installed or not). The other option you refer to (qemu-guest-agent) is the option "KVM Hardware...
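    A minimal sketch of the qemu-guest-agent route mentioned above, assuming a Debian-based guest and a placeholder VMID of 100:

        # Inside the guest: install and start the agent (package name as in Debian/Ubuntu)
        apt-get install qemu-guest-agent
        service qemu-guest-agent start

        # On the Proxmox host: enable the agent option for that VM
        qm set 100 --agent 1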
  3. KVM templates and default root password

    Hello, with the "Convert to template" feature in Proxmox it really is very easy to create a predefined KVM template, but of course the new VMs created from this template have the same root password as the predefined template, which is not ideal in some cases. I use WHMCS (and the ModulesGarden Proxmox...
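    On Proxmox releases that ship cloud-init support, one hedged way to avoid a shared template password is to set fresh credentials per clone; 9000 (a cloud-init-enabled template), 123, the VM name, and the password are all placeholders:

        # Full-clone the template, then inject per-customer credentials via cloud-init
        qm clone 9000 123 --name customer-vm --full
        qm set 123 --ciuser root --cipassword 'per-customer-secret'
        qm start 123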
  4. Linux KVM VMs don't halt completely

    Hello, in Linux KVM virtual machines (and this happens with all Linux distros), when a shutdown is performed, the VM appears as running in Proxmox and the VNC console shows the message "Halting". So the virtual machines are ready for shutdown, but I have to force a "hard" stop to definitively shut down the...
  5. RAM high load on node with no VM running

    Hi LnxBil, thank you for your response and your useful details. I've made some of the suggested changes and run tests, and I can confirm: 1. Limiting the ARC is a must (the docs recommend half of the physical memory). If the ARC is limited and almost full, I can't create new VMs/containers: so it's okay, I receive a...
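    A minimal sketch of the ARC limit from point 1, assuming ZFS on Linux and the 24 GB node discussed in this thread (the value is in bytes; adjust it to half of your own RAM):

        # /etc/modprobe.d/zfs.conf -- cap the ARC at 12 GiB (half of 24 GiB)
        options zfs zfs_arc_max=12884901888

        # Rebuild the initramfs so the limit is applied at boot
        update-initramfs -u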
  6. RAM high load on node with no VM running

    Hi LnxBil, thank you so much for your comments. Since I'm new to the Proxmox & ZFS world, I am following the excellent Proxmox install & config guide and of course taking a look at forum posts; really, many of the main issues can already be found in the forum. I am not a native English speaker, so maybe I am...
  7. RAM high load on node with no VM running

    Yes, you are absolutely right. Reading the post, I can test @Nemesiz's suggestion:
    zfs set primarycache=metadata rpool/swap
    zfs set secondarycache=metadata rpool/swap
    zfs set compression=off rpool/swap
    zfs set sync=disabled rpool/swap
    zfs set checksum=on rpool/swap
    And a (as suggested by...
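    A quick way to confirm the swap zvol actually picked up those settings, using the standard zfs get command:

        # List the five tuned properties and their values for rpool/swap
        zfs get primarycache,secondarycache,compression,sync,checksum rpool/swap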
  8. RAM high load on node with no VM running

    Searching about this issue, I found this; I'll give it a try. Any considerations regarding performance / ZFS parameters or the limits I currently have are very welcome!
  9. automatic failover VM

    Yes, that is just what I was referring to. If one node fails (an unexpected outage/crash, or maintenance), all VMs can keep running consistently. Yes, it's very nice in the case of maintenance; it is not the concept of e.g. "vzmigrate". You have shed a lot of light on this for me; please correct me if I am wrong or did not understand the...
  10. RAM high load on node with no VM running

    Hi, after I created 10 test VMs (all of them running) I had a long freeze (no access to the VMs, Proxmox, or SSH). The system responded after about 10 minutes; here are the logs (I did not reboot). dmesg log:
    INFO: task ntpd:3237 blocked for more than 120 seconds. Tainted: P -- ------------...
  11. Adding SSD for cache - ZIL / L2ARC

    Great Nemesiz, your case sheds more light on this for me :) The next thing I'll test will be creating multiple VMs to find the server's limits, to check how performance holds up and get an idea of the approximate server density that is stable and ready for production. Thank you so much again for all the comments.
  12. RAM high load on node with no VM running

    Really, I want to test all the benefits and of course obtain an approximation of what server density I can get from the hardware I have. Of course this is not yet a production system, and now the next step will be to create more and more VMs (OpenVZ and KVM, with different OSes) to check server degradation...
  13. RAM high load on node with no VM running

    Hi LnxBil, thank you so much for your reply. I am really testing the system: as I said previously, I come from OpenVZ on ext4 partitions and I notice a great difference in this sense, but it seems I am headed in the right direction ;) I don't know whether, if a VM is deleted, this memory is freed automatically, or...
  14. RAM high load on node with no VM running

    root@hn2:~# free -m
                 total       used       free     shared    buffers     cached
    Mem:         24098      17661       6436          0          4         99
    -/+ buffers/cache:      17557       6541
    Swap:        23551          0      23551
    root@hn2:~# cat /proc/sys/vm/drop_caches
    0
    root@hn2:~# echo 3 > /proc/sys/vm/drop_caches
    root@hn2:~# cat /proc/sys/vm/drop_caches
    3
    ...
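    For reference, the cache-drop step shown above is commonly preceded by a sync so dirty pages are flushed first; a hedged variant of the same commands:

        sync                                # write dirty pages out to disk first
        echo 3 > /proc/sys/vm/drop_caches   # then drop page cache, dentries and inodes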
  15. VM bash error: inappropriate ioctl for device

    After this failure I did an apt-get update (and got the new kernel from the enterprise repo: 2.6.32-43-pve), and now it seems it's working as expected, with no issues or errors in the VNC console now... really, I don't know if this was a bug that was corrected in this last update. By the way, I am still testing and...
  16. Adding SSD for cache - ZIL / L2ARC

    Thank you so much, Nemesiz, for the tools; I'll give them a try soon. So, you consider that the hardware node's memory usage can be considered normal (when all virtual machines are halted)? I just wrote a new post with the free -m output in different scenarios. Greetings!
  17. RAM high load on node with no VM running

    I am doing some performance checks on a dedicated server with the latest Proxmox 4.1 (including today's kernel update from the enterprise repo). Once rebooted, I started the checks. Node just restarted, no VMs running:
    root@hn2:~# free -m
                 total       used       free     shared    buffers     cached
    Mem:         24098        870      23227...
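    When reading free -m on a ZFS node, part of "used" is normally the ARC. Assuming the standard ZFS-on-Linux kstat path, this shows the current ARC size against its ceiling:

        # Current ARC size and configured maximum, in bytes
        grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats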
  18. VM bash error: inappropriate ioctl for device

    I am getting the following issue in a container (OpenVZ) on the latest Proxmox 4.1 with all updates (from the enterprise repo). I attached a screenshot of the issue, taken over the VNC console.
    -bash: cannot set terminal process group (-1): Inappropriate ioctl for device
    -bash: no job control in this shell...
  19. automatic failover VM

    "If you need to reboot a node, e.g. because of a kernel update, you need to migrate all VM/CT to another node or disable them." Please permit me another doubt regarding this afirmation: i suppose if we have a cluster with 3+ nodes for HA with proxmox, COULD tolerate 1 server node down, so about...
  20. automatic failover VM

    I completely agree with this question. In the case of an unexpected fatal node crash, what should we expect, and how should we act?