Switching to NFS

matthew

Renowned Member
Jul 28, 2011
Currently I am running Proxmox on a single drive. Later I want to add an NFS share to this Proxmox box and move all the running instances to the NFS, since it will have more storage and RAID. How would that be done? I am not actually migrating to a different node, but rather to a different storage medium.
 
I did the same just yesterday. It is really simple:

- shut down the VM
- create a new hard disk on the NFS store
- install ddrescue (apt-get install ddrescue)
- clone the hard disks (dd_rescue /dev/local/vm-101-disk-1 /dev/nfs/vm-101-disk-1)
- adjust the boot disk via the config file (nano /etc/qemu-server/101.conf)
- have fun with your shared storage ;)

best regards
macday
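The config-file edit in the last step can be scripted instead of done by hand in nano. A minimal sketch, using a locally recreated stand-in for /etc/qemu-server/101.conf (the storage names `local` and `nfs` and the disk entry are assumptions for illustration):

```shell
# Hypothetical minimal 101.conf, recreated locally to show the edit
cat > 101.conf <<'EOF'
bootdisk: virtio0
virtio0: local:101/vm-101-disk-1.raw
EOF

# Point the disk entry at the NFS storage instead of the local one
sed -i 's/local:/nfs:/' 101.conf
cat 101.conf
```

On the real box you would run the sed against /etc/qemu-server/101.conf (with a backup first) rather than a scratch file.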
 
You can also do a simple backup/restore with vzdump, using the --storage option pointing to the new NFS storage. What kind of disk images do you have now? raw?
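A hedged sketch of that backup/restore route (the VMID 101 and the storage name `nfs` are assumptions; the dump filename varies by version and compression, and for OpenVZ containers you would use vzrestore instead of qmrestore):

```shell
vzdump 101 --storage nfs            # dump the guest onto the NFS storage
qmrestore <dumpfile> 101 --storage nfs   # restore it with its disks on nfs
```

Replace `<dumpfile>` with the path vzdump printed when it finished.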
 
I guess maybe I am looking at this wrong. I installed Proxmox on a single drive. I want all the OpenVZ and KVM containers to be stored on the NFS since it will be RAID1 or something for safety. Do I have to move the entire Proxmox install to NFS to do this?
 
There is no problem running KVM guests on an NFS store. This works quite nicely. OpenVZ containers, however, usually cannot be on an NFS. For guests hosted off the NFS, you also need to take snapshots into consideration. There are some limitations when trying to make snapshots of running guests. That simply doesn't work.

If you want a RAID1 mirror for redundancy, then place the entire Proxmox installation on the server with the RAID1 array. If your single-drive system goes down, even a redundant RAID type on the NFS won't help: everything will be offline. You would still have your data on the redundant NFS drives, which is better than losing your data on a single-drive system, but nothing will be running.
 
When looking at KVM and OpenVZ there seem to be pros and cons to both. The major thing I do not like about KVM is that if you give a VM 100G of drive space, it actually takes 100G of drive space. Also, if you give it only 10G of drive space and later need 200G, there seems to be no easy way to do that. At least none that I know of. If it were not for that limitation, I think I would prefer KVM.
 
With KVM the disk space usage depends on the disk type: qcow files grow as needed, while raw and virtio images use the full disk space assigned, in your example 10 or 100G.
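The difference between the assigned (logical) size and the blocks actually allocated is what makes grow-as-needed formats possible. A quick illustration with a plain sparse file (GNU coreutils assumed; this is a generic demo, not a Proxmox command):

```shell
# Create a file with 100M logical size but no data written
dd if=/dev/zero of=sparse.raw bs=1 count=0 seek=100M
ls -l sparse.raw    # reports the full 100M logical size
du -k sparse.raw    # reports ~0 blocks actually allocated
```

A raw image always presents its full assigned size to the guest, whereas qcow only allocates blocks as the guest writes to them.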

It's pretty easy to expand a raw disk to 200G when needed with dd. There are plenty of forum posts on this, but you may find the following helpful:

create your 200G disk:
qemu-img create -f raw newdisk.raw 200G

then copy your data into it (conv=notrunc keeps the new, larger file size):
dd if=olddisk.raw of=newdisk.raw conv=notrunc

There are other ways to do this as well. And don't forget: if you are using qcow or another disk type, you need to convert it first. Make backups first!

Here is some other info I found:

http://www.linux-kvm.com/content/how-resize-your-kvm-virtual-disk
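The grow-with-dd recipe above can be dry-run safely with tiny files standing in for the real disks (sizes here are arbitrary stand-ins, GNU coreutils assumed):

```shell
# Dry run of the grow-with-dd recipe using tiny files instead of real disks
dd if=/dev/urandom of=olddisk.raw bs=1K count=64    # stand-in for the old, smaller disk
dd if=/dev/zero    of=newdisk.raw bs=1K count=256   # stand-in for the new, larger disk
dd if=olddisk.raw  of=newdisk.raw conv=notrunc      # copy data; notrunc keeps the larger size
ls -l olddisk.raw newdisk.raw
```

After the copy, newdisk.raw keeps its full size and its first blocks match olddisk.raw exactly; without conv=notrunc, dd would truncate the new disk back down to the old size.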
 
So can an OpenVZ container be converted to a KVM container or hardware? Can a KVM container be converted to hardware?
 
So can an OpenVZ container be converted to a KVM container or hardware?
Yes, but as you do not have a kernel in a container, you have to create a more or less empty KVM guest first and then copy the data over. Only for experts, but possible.
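A hedged sketch of that expert-only copy (the container path, CTID 100, and the guest hostname are all assumptions; the exact steps depend on the distribution inside the container):

```shell
# Boot a minimal KVM guest of the same distribution, then copy the container's files in
rsync -aHAX /var/lib/vz/private/100/ root@kvm-guest:/
# Afterwards install a kernel and a bootloader inside the KVM guest before rebooting it
```

Expect to fix up /etc/fstab, the network config, and the bootloader by hand afterwards.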

Can a KVM container be converted to hardware?

A KVM VM can be moved to hardware without problems, e.g. using Clonezilla live CDs.