[SOLVED] proxmox 4.2 disks migration - read only file system

spartakus333

New Member
Apr 26, 2016
I migrated to 4.2,
but I can't get my old VMs working in the new environment!
Here's an example of what I have:
vm1-disk.vmdk is a vmdk disk from an old Proxmox 3 server that I want to get working in Proxmox 4.2, in a VM with ID 100.

I tried :

1- Copied vm1-disk.vmdk to [local-storage]newvm-disk.vmdk under /var/lib/vz/images/100/.
I tried both cache modes (write through and write back).
=> Result: the VM freezes, and when I enter debug mode I get "read only file system" every time I create a file.
I also tried converting vm1-disk.vmdk to the qcow2 and raw formats; same result!

2- Created a new disk vm-100-disk1 on [ZFS-STORAGE] and cloned the old vm1-disk.vmdk to the new
disk with Clonezilla (device to device).
=> Result: exactly the same.

3- Created a vzdump backup on the old server and restored it on the new one.
=> The VM booted correctly... but after a simple reboot it froze again and I got the same
"read only file system" error!

The new interface in 4.2 is very cool, but let's get the main functions working... :)

Thanks a lot !
 
Hi,

it is generally not a good idea to use vmdk; it is only there for compatibility.
So try a backup and restore to a raw image.
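For example, something like this (just a sketch; the VMID, storage name, and paths are placeholders, adjust them to your setup):

Code:
# on the old Proxmox 3 node: back up the VM with ID 100
vzdump 100 --mode stop --compress lzo --dumpdir /var/lib/vz/dump

# copy the resulting archive to the new node, then restore it there
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<date>.vma.lzo 100 --storage local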

What is the OS of this machine?
 
Hi, thanks a lot for your reply.
I did the following:

1- Converted my original vmdk file to the raw format with the following command:

Code:
qemu-img convert -f vmdk vm-100-disk-1.vmdk -O qcow2 vm-100-disk-1.raw

2- Did a backup of my VM on [proxmox3-node1].

3- Restored the backup on [proxmox4.2-node1].

4- Started the restored VM.
I got the following error:
Code:
kvm: -drive file=/var/lib/vz/images/105/vm-105-disk-1.raw,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on: file system may not support O_DIRECT

5- I fixed that error by changing the disk cache mode from "no cache" to "write through" (see the CLI sketch after this list).

6- Restarted the restored VM.
Result:
- the VM started
- but the filesystem is still mounted read-only
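For step 5, I believe the CLI equivalent of that cache change is roughly the following (a sketch; the VMID and volume name are taken from the error message above, adjust as needed):

Code:
# redefine the virtio0 drive with cache=writethrough instead of cache=none
qm set 105 --virtio0 local:105/vm-105-disk-1.raw,cache=writethrough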

Note: the OS of the hosted VM is Ubuntu 12.04.

Thanks a lot for your help
 
Could it be that your rootfs is on ZFS?

If so, please use the ZFS pool plugin.
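You can check quickly from the shell, for example (just a sketch):

Code:
# prints the filesystem type of / ("zfs" on a ZFS root install)
findmnt -n -o FSTYPE /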
 
1- Converted my original vmdk file to the raw format with the following command:

Code:
qemu-img convert -f vmdk vm-100-disk-1.vmdk -O qcow2 vm-100-disk-1.raw

The qcow2 format you wrote there is not raw. Rename your image file to .qcow2 and tell KVM that it is qcow2 format, not raw format.
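In other words, either of these would be consistent (just a sketch based on the command above):

Code:
# keep qcow2, but name and declare it as qcow2
qemu-img convert -f vmdk -O qcow2 vm-100-disk-1.vmdk vm-100-disk-1.qcow2

# or actually produce a raw image
qemu-img convert -f vmdk -O raw vm-100-disk-1.vmdk vm-100-disk-1.raw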
 
wolfgang, you're the ONE!

As you said, I used the ZFS plugin and did the following:
1- Created a new file system using the command:
Code:
 zfs create rpool/zfsdisks
2- Added a new storage (Datacenter -> Storage -> Add -> ZFS), named it "zfsvols", and as "ZFS Pool" chose "rpool/zfsdisks".
3- When restoring my old VM, I chose "zfsvols" as the storage in which the new disk will be created.
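For reference, I think the CLI equivalent of adding the storage in step 2 is roughly this (a sketch; storage and dataset names as above):

Code:
# register rpool/zfsdisks as a ZFS storage called "zfsvols"
pvesm add zfspool zfsvols --pool rpool/zfsdisks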

And magic! When I started my imported VM, everything went back to normal,
and no more of this horrible "Read only file system" message! :)

Just a few last things:

1- Did I make a mistake installing Proxmox that way (I chose ZFS + RAID0)?
Here's what I get from the command "zfs list":

Code:
root@node2:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                      414G   485G    96K  /rpool
rpool/ROOT                10.1G   485G    96K  /rpool/ROOT
rpool/ROOT/pve-1          10.1G   485G  10.1G  /
rpool/data                 297G   485G    96K  /rpool/data
rpool/data/vm-100-disk-1  66.0G   550G  1.09G  -

2- Does this have an influence on performance, as I noticed that the FSYNCS decreased a lot?
Code:
root@node2:~# pveperf /rpool/zfsdisks
CPU BOGOMIPS:  24741.80
REGEX/SECOND:  2705815
HD SIZE:  485.33 GB (rpool/zfsdisks)
FSYNCS/SECOND:  218.32


Again, thanks very much for your help, wolfgang,
and to all the Proxmox dev team and community.
 