[SOLVED = does not matter] PM keeps adding format=raw

mailinglists

Hi,

I use a PM 5.2 ZFS-based cluster with zvols and pve-zsync.

Sometimes VMs which never had "format=raw" in their config get it automatically added.
I guess they get it after being migrated from one node to another.
I will keep a lookout for this after future migrations, to tell you exactly when it gets added.

I've noticed this in the past and have manually edited config files to remove this setting from all the VMs, but it keeps coming back. I think this might be a bug.
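
For what it's worth, a quick way to see which configs on a node currently carry the option (a rough sketch, assuming the standard PVE config path):
Code:
# list VM configs on this node that contain format=raw
grep -l 'format=raw' /etc/pve/qemu-server/*.conf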

Anyone else noticed this?
 
And does this cause any actual problems for you?
 
I just found another VM with format=raw.
I know it did not have it and all I did was live migrate it --online --with-local-disks.
Should I open a bug report, or will you?
 
This is actually not a problem.

If you use the zfspool plugin, then all VM disks are of format 'raw'. It does not make any difference whether it is in the config or not.

What Wolfgang probably meant is to use '.raw' files on top of a ZFS filesystem.
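
To illustrate (a minimal sketch, reusing the disk name that appears later in this thread): with the zfspool plugin the volume is a zvol either way, so these two config lines mean the same thing:
Code:
# equivalent for a zvol on a zfspool storage; format=raw is implied
scsi0: local-zfs:vm-100-disk-1,size=15G
scsi0: local-zfs:vm-100-disk-1,format=raw,size=15G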
 
Hi,

I have tested it and it works without a problem.
I guess you may have switched the storage during the live migration.

The raw tag will be added during live migration and is no problem for pve-zsync.
 
Wolfgang, did you test with a VM that had no pve-zsync job before, or did you test it on a VM that already had sync enabled and working?
In my attempts it works if the live migration is done after pve-zsync is set up, but fails as shown at the link if I try it for the first time and the VM has that RAW setting.
 
If you set up pve-zsync and then do a live migration, how should this work?
pve-zsync works locally and not cluster-wide. That means if you migrate, the target is gone.
Also, if you do a live migration, all snapshots are lost, and without snapshots pve-zsync does not work.
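
You can check this yourself after a migration (a sketch; the zvol name is just an example):
Code:
# the rep_* snapshots created by pve-zsync will be gone on the migrated disk
zfs list -r -t snapshot rpool/data/vm-100-disk-1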
 
I am aware of the downsides of live migration and the loss of snapshots. I have scripts in place to deal with that, so the backup PM node always has backups going as far back as configured (the script renames the ZVOL, simply starts the sync job from scratch, and after a set time removes the old ZVOL).
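
Roughly what that workaround does (a simplified sketch; the dataset paths and job name are just examples taken from this thread):
Code:
# on the backup node: keep the old copy, then start a fresh full sync
zfs rename rpool/backups/daily/vm-100-disk-1 rpool/backups/daily/vm-100-disk-1-old
pve-zsync create --source XXX.24:100 --dest rpool/backups/daily --name vm100daily --maxsnap 30
# the renamed old zvol is destroyed later, after the configured retention time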

However, this does not relate directly to my initial issue of pve-zsync failing when format=raw is wrongly set for ZVOLs after live migration.
Allow me to point out a few cases where this leads to the extra work of having to remove that format=raw.

Here is an example.
I create a new VM. I live migrate it. At some point later, backups for this VM are required. I set them up via pve-zsync. They fail. I manually fix the config and remove that RAW option. Backups start working. I would like not to have to manually remove the format=raw each time. I think this is a bug.
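
The manual fix itself is a one-liner, which is exactly why having to repeat it is annoying (a sketch of the edit; it assumes format=raw follows a comma in the drive line, as in the config further down):
Code:
# remove the option from one VM config
sed -i 's/,format=raw//' /etc/pve/qemu-server/100.conf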

Here is another one.
I live migrate a VM with pve-zsync backup enabled, because, let's say, I updated the hypervisor and the VM must not go down twice to migrate away and back. I lose the snapshots and format=raw is added. Once the VM is back on the original host, the PM backup node tries to pull it with pve-zsync. It fails due to the lack of snapshots. I work around that with my scripts by renaming the old ZVOL, as stated at the beginning. But then, before I can create another job, I again have to go to the PM node where the VM resides and remove format=raw. I still think this is a bug.

While I can understand that implementing a live migration which keeps ZVOL snapshots might be a big problem, I do not understand why it would be hard to stop adding format=raw to VMs on ZVOLs during live migration. Or do it the other way around: fix pve-zsync to accept ZVOLs which have format=raw set in the config but are actual ZVOLs.

Hopefully you now understand why I think this is a bug and why it should be fixed for these two use cases.
 
Well, pve-zsync is supposed to ignore the `format=` option in the config. Could you post the error messages you see along with the disks found in the VM's config at the time of a failure?
 
But there the disk had `backup=0` when the error was posted.
 
wbumiller, it seems to work now. Sorry for wasting your time.
Here is proof:

p24
Code:
root@p24:~# cat /etc/pve/qemu-server/100.conf
agent: 1
boot: cdn
bootdisk: scsi0
cores: 4
cpu: Westmere
memory: 4024
name: CentOS7.3
net0: virtio=D2:42:B5:57:35:3F,bridge=vmbr1,firewall=1,rate=8,tag=XXX
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-100-disk-1,discard=on,format=raw,iops_rd=333,iops_wr=333,mbps_rd=100,mbps_wr=100,size=15G
scsihw: virtio-scsi-pci
smbios1: uuid=6ef0eb13-aa65-4615-a625-f0ea2abdaf5a
sockets: 1

p27 - backup destination
Code:
root@p27:~# pve-zsync sync --source XXX.24:100 --dest rpool/backups/daily --maxsnap 30 --name vm100daily --limit 100000 --verbose
full send of rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12 estimated size is 4.04G
total estimated size is 4.04G
TIME        SENT   SNAPSHOT
12:50:15    113M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:16    211M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:17    309M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:18    406M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:19    504M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:20    602M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:21    699M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:22    797M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:23    895M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:24    993M   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:25   1.06G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:26   1.16G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:27   1.26G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:28   1.35G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:29   1.45G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:30   1.54G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:31   1.64G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:32   1.73G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:33   1.83G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:34   1.92G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:35   2.02G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:36   2.11G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:37   2.21G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:38   2.30G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:39   2.40G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:40   2.50G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:41   2.59G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:42   2.69G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:43   2.78G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:44   2.88G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:45   2.97G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:46   3.07G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:47   3.16G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:48   3.26G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:49   3.35G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:50   3.45G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:51   3.54G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:52   3.64G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:53   3.74G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:54   3.83G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:55   3.93G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
12:50:56   4.02G   rpool/data/vm-100-disk-1@rep_vm100daily_2018-08-03_12:50:12
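
For completeness, one can check that the replicated snapshot arrived on the backup node (the dest path is the one from the command above):
Code:
# on p27: list the replicated snapshots under the dest path
zfs list -r -t snapshot rpool/backups/daily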
 
