[SOLVED] The current guest configuration does not support taking new snapshots

The image is not qcow2. For VMs, you can convert it with Move Disk in the Hardware view of the VM. Just select the same storage and qcow2 as the format.
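If you prefer the command line, roughly the same thing can be done with qm move_disk (the VM ID, disk and target storage below are just examples, adjust them to your setup):

Code:
# move/convert the disk to qcow2 on a directory storage; --delete drops the old copy once the move succeeds
qm move_disk 102 scsi0 backup_drive --format qcow2 --delete 1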
Hello again,
Quick question.

I have a server with 3 VMs.
I installed the first 2 VMs, and then added another drive for backups a month later.

Code:
root@proxmox:~# pvesm status
Name                Type     Status           Total            Used       Available        %
backup_drive         dir     active       960302804       136496700       774951644   14.21%
local                dir     active        98559220        18923912        74585760   19.20%
local-lvm        lvmthin     active       832868352       130593757       702274594   15.68%

Code:
root@proxmox:~# qm config 102
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
ide2: local:iso/ubuntu-20.04.2-live-server-amd64.iso,media=cdrom
memory: 12228
meta: creation-qemu=6.1.1,ctime=1644894666
name: ubuntu-cP
numa: 0
onboot: 1
ostype: l26
parent: Ubuntu_cPanel_03_12_22
scsi0: local-lvm:vm-102-disk-0,size=96G
scsihw: virtio-scsi-pci
sockets: 1
unused0: backup_drive:102/vm-102-disk-1.qcow2

Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-11 (running version: 7.1-11/8d529482)
pve-kernel-helper: 7.1-13
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-11
pve-kernel-5.13.19-6-pve: 5.13.19-14
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-6
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1
When I installed the 3rd VM, I did not select qcow2 at first.

I noticed afterwards that this 3rd VM's disk was probably created on backup_drive, because local-lvm did not offer qcow2 as an option, so I did not use it for Move Disk.
backup_drive was the only storage where Move Disk let me select qcow2, not local-lvm, which has no format dropdown to select anything. Why is that?

Now I have read a little more, and I am thinking local-lvm is qcow2 by default?

Because I moved the 3rd VM's disk to local-lvm, the disk entry does not say .raw or .qcow2, but the Snapshot button is still active.

So I am assuming that when a disk is created on local-lvm it is automatically qcow2, yes?

@fabian On file-based storages, you need qcow2 for snapshots. Otherwise, the storage needs to support them, see here for a list.

Also, what do you mean by "on file-based storages"?
So when I added my backup_drive, is it now considered a file-based storage? Because I use it as a backup drive?


Is there a better option to set up the backup_drive from that list ("see here for a list")?
Or, for the general purpose of backing up VMs, is it good enough the way it is?

If someone could explain it so I can understand better, I would appreciate it. Sorry for the dumb questions, I'm still learning.

Thank you kindly for your time and answers.

Spiro
 
I noticed afterwards that this 3rd VM's disk was probably created on backup_drive, because local-lvm did not offer qcow2 as an option, so I did not use it for Move Disk.
backup_drive was the only storage where Move Disk let me select qcow2, not local-lvm, which has no format dropdown to select anything. Why is that?

Now I have read a little more, and I am thinking local-lvm is qcow2 by default?

Because I moved the 3rd VM's disk to local-lvm, the disk entry does not say .raw or .qcow2, but the Snapshot button is still active.

So I am assuming that when a disk is created on local-lvm it is automatically qcow2, yes?
No, it's raw, but it supports snapshots (using a different mechanism).
Also, what do you mean by "on file-based storages"?
So when I added my backup_drive, is it now considered a file-based storage? Because I use it as a backup drive?
File-based storages are those that use files to store images. Thin-LVM uses logical volumes (raw) and supports snapshots on those directly.

Is there a better option to set up the backup_drive from that list ("see here for a list")?
Or, for the general purpose of backing up VMs, is it good enough the way it is?
Depends on your needs. The table lists which storage types are file-based (Level column) and which support snapshots, etc.
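To illustrate, on LVM-Thin you can snapshot a VM directly. A rough example (the VM ID and snapshot name are just examples, and "pve" is the volume group name of a default installation):

Code:
qm snapshot 102 before_upgrade --description "state before upgrade"
qm listsnapshot 102
# the snapshot shows up as a thin logical volume rather than a qcow2 file
lvs pve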
 
Thank you so much @fabian, I appreciate all your answers and assistance.

Kind Regards,
Spiro
 
@fabian Sorry, one last quick question.

If I wanted to change my backup_drive to LVM-Thin, is it possible now?
If so, is there a CLI command to do this?
Would it be a good idea, in case I need to use it for another couple of VMs or containers in the future?

Thanks
 
No, there is no single command to do that. It depends on where the directory actually is. If it's on a separate drive, you can move the data away, re-format it and create LVM-Thin on it. Otherwise, it'll be more involved.
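Very roughly, the manual route would look something like the following. The device node, VG, thin pool and storage names are placeholders, so double-check with lsblk before wiping anything:

Code:
# PLACEHOLDERS: /dev/sdX, backup_vg, backup_thinpool, backup_thin -- adjust to your setup
# the drive must already be unmounted and removed from /etc/fstab
wipefs --all /dev/sdX                                  # destroys everything on that drive!
pvcreate /dev/sdX
vgcreate backup_vg /dev/sdX
lvcreate -l 95%FREE -T backup_vg/backup_thinpool       # leave a little room for pool metadata
pvesm add lvmthin backup_thin --vgname backup_vg --thinpool backup_thinpool --content images,rootdir

The GUI route described below does essentially the same thing.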
 
If it's on a separate drive
Yes, it is a separate drive, 1 TB. All I have on it is a few backups from 2 of my VMs.

Is there an article on how to re-format it and create LVM-Thin on it? I don't really need the backups I have, because after reformatting the drive and creating LVM-Thin I can do manual backups right away.

Thanks for any assistance on how to reformat the drive and create LVM-Thin on it.
 
Once you have unmounted the drive (and removed the fstab entry or systemd mount unit), it should be possible via the GUI: [Your Node] > Disks > Wipe Disk (be sure to select the correct disk!) and then [Your Node] > Disks > LVM Thin > Create: Thinpool.
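The unmounting part from the shell would look roughly like this (the mount point and the fstab search string are just examples; find the real mount point with findmnt or pvesm status):

Code:
umount /mnt/backup_drive
# then delete or comment out the matching line in /etc/fstab
grep backup_drive /etc/fstab          # shows the line you need to remove
systemctl daemon-reload               # let systemd forget the generated mount unit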
 
@Fabian_E I went ahead and unmounted the drive, removed the line in fstab,
and then followed your process above.

The only thing I noticed is that it seems to have created two Backup_Drive entries under storage in PVE:

Code:
NAME                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                     8:0    0 931.5G  0 disk
└─sda1                                                  8:1    0 931.5G  0 part
  ├─Backup_Drive-Backup_Drive_tmeta                   253:11   0   9.3G  0 lvm 
  │ └─Backup_Drive-Backup_Drive                       253:13   0 912.8G  0 lvm 
  └─Backup_Drive-Backup_Drive_tdata                   253:12   0 912.8G  0 lvm 
    └─Backup_Drive-Backup_Drive                       253:13   0 912.8G  0 lvm




I followed your directions; is this normal?

Thanks again in advance for your assistance
Kind Regards,
Spiro
 
Yes, that's just how lsblk lists the logical volumes. The thin pool has a metadata and a data logical volume associated with it. To see things from LVM's perspective, you can use commands like pvs, vgs and lvs.
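For example, with the names from the lsblk output above:

Code:
vgs Backup_Drive          # the volume group on the backup disk
lvs -a Backup_Drive       # -a also shows the hidden _tmeta/_tdata volumes of the thin pool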
 
Hi,
Is it safe to move storage when the VM is dead?
What do you mean by "dead"? If the VM is shut down, moving the volume will do just that and update the reference to the volume in the VM configuration, but not touch anything else about the VM.
 
I see... Thanks for the reply.
 
By the way, if I do "Move Storage" while the VM is active, will there be corrupted data?
 
No, there won't. But it can slow down guest IO, because the copy runs in the background while the VM is running. The way drive-mirror in QEMU (which is used when the VM is running) is implemented ensures that the target image will be consistent. When the switch to the new disk happens, it is ensured that all writes have finished, and guest IO is blocked for a short time to avoid interference.
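For what it's worth, if the slowdown is a concern, the copy bandwidth can also be capped when starting the move from the CLI. Something like this should work (the VM ID, disk, storage and limit are just examples; bwlimit is in KiB/s):

Code:
qm move_disk 102 scsi0 local-lvm --bwlimit 51200    # limit the background copy to ~50 MiB/s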
 
Well, there could be a bug, but I haven't heard of any reports. The drive-mirror operation has been there for a long time and is well-tested.
 
As mentioned before, this is caused by having external mount points.
Try commenting out that mount point in the container's .conf file, e.g.:
/etc/pve/nodes/proxmox/lxc/100.conf

#mp0: .........

Take the snapshot, then re-enable (remount) it again.

It works for me.
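In other words, something like this (the CTID matches the example config path above; the snapshot name is just an example):

Code:
# with the mp0 line commented out in /etc/pve/nodes/proxmox/lxc/100.conf:
pct snapshot 100 before_update
pct listsnapshot 100
# afterwards, uncomment the mp0 line again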
 
On directory storages, you'd need qcow2 to be able to create snapshots, but for containers, using qcow2 is not possible. Have a look at this list to see on which storages PVE supports snapshots.
I had a file-based storage (directory) with qcow2 VMs. However, I couldn't create any snapshots.

Now I have switched my file-based storage (directory) to LVM-Thin, and I can create snapshots.

This information can be found in the storage types documentation and appears to be incorrect: "2: On file based storages, snapshots are possible with the qcow2 format."

At least it didn't work for me.
 
Works for me.
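For what it's worth, one way to double-check that a disk on a directory storage really is qcow2 (the VM ID and path are just examples):

Code:
qm config 100 | grep -E '^(scsi|virtio|sata|ide)'     # shows the storage and volume of each disk
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
# if this reports "file format: qcow2", snapshots should be possible on that storage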
 
