Search results

  1. M

    cloning

    That's too bad. It would be really useful. Do you know of any plans to implement it?
  2. M

    cloning

    Just so I understand the command properly, the "200" at the end is the new VM ID, right?
    qmrestore --storage local --repeat 10 --unique vzdump-qemu-150-2011_12_04-15_53_02.tar 200
    So, if I repeat the task 10 times, will it increment from 200 up to 209? Or can I specify the VM IDs I want... (See the qmrestore sketch after these results.)
  3. M

    SaltStack with Proxmox

    I've been testing Salt itself on Ubuntu systems off and on for the last few months. It works really well. In an extended scenario, Proxmox could even use Salt to monitor all the VMs (including Windows at some point) and report on detailed state of guests. Pre-configured templates could be...
  4. M

    Proxmox Android client - UPDATE: Version BETA released

    Great Android app. Really nice for a BETA!
  5. M

    SaltStack with Proxmox

    SaltCloud was just announced: http://salt-cloud.readthedocs.org/en/latest/. It seems to be a natural fit with Proxmox, and since Salt itself is relatively easily installed on Debian, integration shouldn't be too hard. Personally, I think SaltCloud and Proxmox integration could be extremely...
  6. M

    Backup

    Is there a way to override this behavior? In 1.x it was set at the VM level. That it's now set globally is fine, however, in some cases I would prefer to be able to set more, or fewer, backups to keep depending on the VM and what its function is. For some VMs I only need to run one backup per...
  7. M

    Clustered storage status (ceph & sheepdog)

    That's great news. Now I need to find another server to install Proxmox on in the same data center as my storage machines. I have three older, non-VT test machines ready for Ceph deployment. Will report back with any progress and/or questions. Thanks for the update.
  8. M

    [SOLVED] Execute/run command on all cluster nodes

    Yeah, I've read good things about Chef. I found Chef to have a somewhat higher learning curve than Salt, but that might just be me. For others interested, and to cover "all" the devops tools, Puppet will do most, if not all, of what Chef and Salt do too.
  9. M

    [SOLVED] Execute/run command on all cluster nodes

    Found it! Here you go: http://packages.debian.org/search?searchon=names&keywords=Salt
  10. M

    [SOLVED] Execute/run command on all cluster nodes

    Yeah, I think you need to install ZeroMQ first and maybe some of the Python modules. Take a look here: http://salt.readthedocs.org/en/v0.9.8/topics/installation/debian.html Also, it's not obvious from the page above, but Salt was just accepted into Debian, so there should be an up-to-date... (See the install sketch after these results.)
  11. M

    [SOLVED] Execute/run command on all cluster nodes

    Looks like a great solution. Personally I've been using Salt (saltstack.org) for similar tasks. That said, because Salt is a full remote execution environment (based on ZeroMQ) it can also do a lot of other stuff. Salt was just accepted into Debian too.
  12. M

    Transfer OpenVz e KVM from 1.8 to 2.1 without backup...

    Yes, all you need to do is rsync or scp the files over from the 1.8 host to the new 2.1 host. I don't recall the exact location of the .conf files, but you can easily find them by running: find / -name "*.conf" ... from the console of either host. (See the transfer sketch after these results.)
  13. M

    NFS Share (example please)

    Great. Glad I could save you further frustration. Enjoy Proxmox and FreeNAS. Both are excellent products.
  14. M

    Move VM from 1.9 to 2.1

    I agree with Quix. Just run the backup on the 1.9 host, SCP the backup over to the 2.1 server's backup directory, and make sure you change the .gz file extension to .tar.gz. Then simply restore the VM on the new machine. It's definitely the easiest and quickest way. (See the sketch after these results.)
  15. M

    NFS Share (example please)

    Hi princeo, I think your problem is on the FreeNAS box. Are you using ZFS on the FreeNAS? If so, you probably need to create a ZFS dataset on top of your ZFS pool. For example, if you want two ZFS shares, let's say 'backup' and 'iso', shared over NFS, you do NOT want to manually create those... (See the zfs create sketch after these results.)
  16. M

    Clustered storage status (ceph & sheepdog)

    Very exciting! Looking forward to the first beta release. :)
  17. M

    qemu with RBD support

    Thanks hverbeek for the info. That's pretty similar to what I had in mind. Unfortunately I'm not in Germany. :S I'll have to test it on my test cluster. My current setup is built directly on Proxmox ISOs, though.
  18. M

    qemu with RBD support

    Are you considering Ceph at all instead? It seems Ceph is fairly stable and also part of qemu. Or is there some show stopper when it comes to Ceph as an alternative?
  19. M

    qemu with RBD support

    Looks like RBD was only included in the Linux kernel in 2.6.37 and later. My Proxmox host shows kernel 2.6.32-11-pve. Having RBD for testing going forward would be great. Alternatively, having access to Sheepdog would be great too. What's the status of Sheepdog in Proxmox? If I'm not wrong it...
  20. M

    qemu with RBD support

    I'm hoping to set up and run Proxmox 2.0 with Ceph in my lab for testing. hverbeek: Can you describe the setup of your environment in greater detail?
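
Rough command sketches for a few of the results above follow.

For result 2 (cloning with qmrestore): a minimal sketch of restoring the same vzdump archive to a range of new VM IDs. The --storage and --unique options come from the quoted command; the explicit loop over IDs 200-209 is an assumption standing in for the --repeat behaviour being asked about.

    # Restore the same archive to VM IDs 200 through 209, one at a time.
    # --unique regenerates MAC addresses so the restored VMs do not clash.
    for vmid in $(seq 200 209); do
        qmrestore vzdump-qemu-150-2011_12_04-15_53_02.tar "$vmid" --storage local --unique
    done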
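
For result 10 (installing Salt on a Debian-based node): a rough sketch assuming the salt-master and salt-minion package names in the Debian archive; on older releases you may still need to install ZeroMQ and its Python bindings by hand, as the linked guide describes.

    # Pull Salt in from the Debian archive; dependencies such as ZeroMQ come with it.
    apt-get update
    apt-get install salt-master   # on the machine that will control the cluster
    apt-get install salt-minion   # on each Proxmox node to be managed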
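
For result 12 (moving guests from 1.8 to 2.1 without a backup): a sketch of copying one KVM config file. The paths are the usual defaults (/etc/qemu-server on 1.x, /etc/pve/qemu-server on 2.x) and the host name and VM ID are placeholders, so verify them with the find command from the post; the disk images need to be copied to matching storage as well.

    # Locate the config files on either host, as suggested in the post.
    find / -name "*.conf"

    # Copy one KVM config from the old 1.8 host to the new 2.1 host.
    scp /etc/qemu-server/101.conf root@pve21:/etc/pve/qemu-server/101.conf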
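
For result 14 (moving a VM from 1.9 to 2.1 via a backup): a sketch using the default dump directory /var/lib/vz/dump; the archive name, host name and VM ID are illustrative.

    # Copy the 1.9 backup over to the 2.1 server's backup directory.
    scp /var/lib/vz/dump/vzdump-qemu-101.gz root@pve21:/var/lib/vz/dump/

    # On the 2.1 host: change the extension to .tar.gz, then restore the VM.
    cd /var/lib/vz/dump
    mv vzdump-qemu-101.gz vzdump-qemu-101.tar.gz
    qmrestore vzdump-qemu-101.tar.gz 101 --storage local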
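
For result 15 (NFS shares from FreeNAS): a sketch of creating one ZFS dataset per share on top of an existing pool, which is what the post is getting at. The pool name 'tank' is a placeholder, and on FreeNAS the datasets and the NFS exports would normally be created through the web GUI rather than at the shell.

    # Create one dataset per share on the existing pool, then export each over NFS.
    zfs create tank/backup
    zfs create tank/iso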