[SOLVED] The current guest configuration does not support taking new snapshots

manjotsc

Jul 2, 2020
I am getting the error "The current guest configuration does not support taking new snapshots" under Snapshots. I was able to use this feature before, and now I am suddenly getting this error.

Thanks,
 

Attachments

  • Annotation 2020-08-04 010711.png
  • Annotation 2020-08-04 010724.png
  • Annotation 2020-08-04 010742.png
Hi,
please share the configuration of an affected guest (with pct config <ID> or qm config <ID>, depending on whether it is a container or a VM). Did you add/convert any virtual drives recently?
 
All of them are affected. I added a new hard drive to my system.

Code:
arch: amd64
cores: 4
hostname: Plex
memory: 4000
mp0: /mnt/4TB/Plex,mp=/mnt/Plex
mp1: /mnt/S1TB/,mp=/mnt/S1TB
nameserver: 192.168.40.4 192.168.40.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=4A:E5:19:82:58:B1,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: VMDrive:100/vm-100-disk-0.raw,size=20G
searchdomain: manjot.net
startup: order=1
swap: 4000
unprivileged: 1

root@vms:~# pct config 101
arch: amd64
cores: 1
hostname: ntopng
memory: 1000
nameserver: 192.168.40.4 192.168.40.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=92:EF:41:83:5A:B0,ip=dhcp,type=veth
ostype: ubuntu
rootfs: VMDrive:101/vm-101-disk-0.raw,size=20G
searchdomain: manjot.net
swap: 512
unprivileged: 1

arch: amd64
cores: 2
hostname: WebServer
memory: 1000
mp0: /mnt/4TB/Plex,mp=/mnt/Plex
mp1: /mnt/S1TB/,mp=/mnt/S1TB
nameserver: 192.168.40.4 192.168.40.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=2A:E7:4E:5E:19:F6,ip=dhcp,type=veth
net1: name=eth1,bridge=vmbr0,firewall=1,hwaddr=ba:b1:cd:99:7a:88,ip=dhcp,type=veth
net2: name=eth2,bridge=vmbr0,firewall=1,hwaddr=22:ff:42:cc:3b:e7,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: Seagate1TB:102/vm-102-disk-1.raw,size=10G
searchdomain: manjot.net
startup: order=3
swap: 1000
unprivileged: 1

arch: amd64
cores: 2
hostname: Grafana
memory: 1000
nameserver: 192.168.40.4 192.168.40.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=7A:3E:CF:87:93:B0,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: VMDrive:104/vm-104-disk-0.raw,size=30G
searchdomain: manjot.net
startup: order=4
swap: 3000
unprivileged: 1

arch: amd64
cores: 2
hostname: phpmyadmin
memory: 1000
nameserver: 192.168.40.4 192.168.40.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=c6:e4:f9:d6:75:fb,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: VMDrive:105/vm-105-disk-1.raw,size=15G
searchdomain: manjot.net
swap: 2000
unprivileged: 1

arch: amd64
cores: 2
hostname: MySQL
memory: 1000
nameserver: 192.168.40.4 192.168.40.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=c2:2b:1c:0c:91:55,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: VMDrive:106/vm-106-disk-1.raw,size=30G
searchdomain: manjot.net
swap: 2000
unprivileged: 1

boot: dcn
bootdisk: ide0
cores: 2
ide0: VMDrive:103/vm-103-disk-0.raw,size=11020590K
ide2: Seagate1TB:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
memory: 512
name: GMDiag
net0: rtl8139=3E:40:82:10:F2:EB,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: wxp
scsihw: virtio-scsi-pci
smbios1: uuid=ae5f5c12-b5da-4f97-8ade-2c424744782b
sockets: 1
startup: order=5
vmgenid: f7f50645-fdd0-4480-a51d-960f6f9157d4
 
On directory storages, you'd need qcow2 to be able to create snapshots, but for containers, using qcow2 is not possible. Have a look at this list to see on which storages PVE supports snapshots.
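As a quick self-check, you can verify both points from the shell; a minimal sketch, assuming a directory storage and image paths like those in the configs above (adjust the path to your setup):

Code:
# A "dir" entry in the storage configuration is a directory storage
cat /etc/pve/storage.cfg

# Inspect the actual on-disk image format (example path, adjust to yours)
qemu-img info /path/to/images/103/vm-103-disk-0.raw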
 
This one is a VM, and I am getting the same error. I am confused.

Code:
boot: dcn
bootdisk: ide0
cores: 2
ide0: VMDrive:103/vm-103-disk-0.raw,size=11020590K
ide2: Seagate1TB:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
memory: 512
name: GMDiag
net0: rtl8139=3E:40:82:10:F2:EB,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: wxp
scsihw: virtio-scsi-pci
smbios1: uuid=ae5f5c12-b5da-4f97-8ade-2c424744782b
sockets: 1
startup: order=5
vmgenid: f7f50645-fdd0-4480-a51d-960f6f9157d4
 
ide0: VMDrive:103/vm-103-disk-0.raw,size=11020590K

The image is not qcow2. For VMs, you can convert it with Move Disk in the Hardware view of the VM. Just select the same storage and qcow2 as the format.
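If you prefer the shell, here is a minimal sketch of the equivalent CLI call, assuming the VM ID and storage name from the config above:

Code:
# Copy the IDE disk to the same storage in qcow2 format
# (the CLI equivalent of Hardware > Move Disk)
qm move_disk 103 ide0 VMDrive --format qcow2

The original raw image is kept as an unused disk unless you pass --delete 1, so you can remove it once the VM boots fine from the new image.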
 
Hi, everybody!

I'm a Proxmox newbie and currently experiencing the same error message.

My configuration is quite unusual: I have an OpenMediaVault 5 qcow2 VM running fine with one external USB HDD (for local backup) and two physical HDDs attached to it (for SnapRAID's data and parity). For that reason, I'm unable to take snapshots of it.

Here's the output of the above commands:

Code:
root@newyork:~# lvs
  LV   VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- 59,66g             0,00   1,59                           
  root pve -wi-ao---- 27,75g                                                   
  swap pve -wi-ao----  8,00g


root@newyork:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree 
  pve   1   3   0 wz--n- <111,29g 13,87g


root@newyork:~# pvs
  PV         VG  Fmt  Attr PSize    PFree 
  /dev/sda3  pve lvm2 a--  <111,29g 13,87g


root@newyork:~# pct config 101
Configuration file 'nodes/newyork/lxc/101.conf' does not exist


root@newyork:~# qm config 101
agent: 1
boot: order=scsi0;net0
cores: 1
memory: 4096
name: nas
net0: virtio=76:DE:E8:4C:83:26,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: storage:101/vm-101-disk-0.qcow2,size=16G
scsi1: /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA19H8DD,backup=0,replicate=0,size=7814026584K
scsi2: /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1A8S19,backup=0,replicate=0,size=7814026584K
scsihw: virtio-scsi-pci
smbios1: uuid=4a33c56e-f431-4bd9-89cd-8c5993557f47
sockets: 1
usb0: host=2-1,usb3=1
vmgenid: 446a3205-545d-4830-b3a2-c63047215a35

Please, could anyone help me figure out what's happening?

Thanks in advance!
 
Hi,
yes, you cannot take snapshots, because you pass through physical disks to that VM. In PVE we currently only support taking snapshots if all attached drives support snapshots. There is an open feature request to make this more flexible.
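To see which drives are the culprits, a rough (unofficial) check is to look for passed-through /dev paths in the VM configuration:

Code:
# Drives referencing /dev/... are physical passthrough and have no snapshot support
qm config 101 | grep -E '^(ide|sata|scsi|virtio)[0-9]+:' | grep '/dev/'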
 
Any plans on making this possible in the future?
Not that I'm aware of. qcow2 is not really designed to be used outside of QEMU, as you need to be aware of the internal structure of the file when reading/writing to it.
 
@Fabian_E is this the same when storing a CT or VM on an additional drive? I recently attached a directory to my Proxmox and store some of my VMs and CTs there. I notice that I can't make snapshots when a VM/CT is located there.
 
Hi,
@Fabian_E is this the same when storing a CT or VM on an additional drive? I recently attached a directory to my Proxmox and store some of my VMs and CTs there. I notice that I can't make snapshots when a VM/CT is located there.

On directory storages, you'd need qcow2 to be able to create snapshots, but for containers, using qcow2 is not possible. Have a look at this list to see on which storages PVE supports snapshots.

Please share your VM configuration and the output of pveversion -v if you are using qcow2.
 
Hi,

Please share your VM configuration and the output of pveversion -v if you are using qcow2.

I am able to create snapshots with the rest of my VMs/CTs. This particular CT was also able to create snapshots before I moved its disk to the new SSD that I added.

Code:
arch: amd64
cores: 8
hostname: vm2160
memory: 16384
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr1,firewall=1,gw=192.168.2.2,hwaddr=xxxxxxxxxx,ip=192.168.2.160/24,type=veth
onboot: 1
ostype: centos
rootfs: PX1VPS:217/vm-217-disk-0.raw,size=250G
searchdomain: google.com
swap: 16384
unprivileged: 1


Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
For containers, qcow2 cannot be used, so the underlying storage needs to support snapshots.
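If you want snapshots for that container, one option is to move its root disk to a snapshot-capable storage. A minimal sketch, assuming PVE 6.4's pct move_volume and an LVM-thin storage named local-lvm (an assumption, use your own storage name); stop the container first:

Code:
# Move the container's root disk to a snapshot-capable (e.g. LVM-thin) storage
pct move_volume 217 rootfs local-lvm

# Verify the new location
pct config 217 | grep rootfs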
 
For containers, qcow2 cannot be used, so the underlying storage needs to support snapshots.
Hi,

I recently reinstalled PVE onto an 800 GB Intel server-grade SSD.
The drive was automatically split into two storage sections:
local: 100 GB
local-lvm: 657 GB
I also installed an internal server-grade SAS drive as a directory storage.
I can only create CT snapshots on the local-lvm (657 GB) portion of the SSD (if the CT is installed on local-lvm),
regardless of where the CT template is uploaded.
What is puzzling is why, on the same SSD, it only allows CT snapshots on the larger local-lvm and not the local (100 GB) portion.

Which brings me to another question:
Can I resize those two sections (local/local-lvm) to meet particular needs?
At first this seemed potentially ideal for my use case:
the smaller local (100 GB) section for CTs and the larger local-lvm (657 GB) section for VMs,
and all CT/VM templates/ISOs, backups, snapshots, etc. on the additional storage drive.

Now, not so ideal.
Options carefully explained during the install process (especially for non-techies like me) could be a game changer...
Or is this configuration less than ideal for speed etc.?
Should the CTs and templates / VMs and ISOs be on the same drive and portion of that drive?
 
Hi,
Hi,

I recently reinstalled PVE onto an 800 GB Intel server-grade SSD.
The drive was automatically split into two storage sections:
local: 100 GB
local-lvm: 657 GB
I also installed an internal server-grade SAS drive as a directory storage.

I can only create CT snapshots on the local-lvm (657 GB) portion of the SSD (if the CT is installed on local-lvm),
regardless of where the CT template is uploaded.
What is puzzling is why, on the same SSD, it only allows CT snapshots on the larger local-lvm and not the local (100 GB) portion.
This can be configured by editing the local storage (in the UI Datacenter > Storage > Edit) and selecting the Container content type.
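For reference, the resulting entry in /etc/pve/storage.cfg would look roughly like this; rootdir is the container content type, and the exact content list depends on your setup:

Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup,rootdir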

Which brings me to another question:
Can I resize those two sections (local/local-lvm) to meet particular needs?
LVM currently does not support making thin pools (like local-lvm is) smaller. So you'd need to destroy the thin pool (first, backup/move all the images on it if it's already in use), extend the pve/root logical volume by the desired amount of space, and re-create the thin-pool with the rest of the free space in the volume group (and restore the backups).
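A rough sketch of those steps on the default layout, assuming the volume group is pve and the thin pool is pve/data, and that everything on the pool is backed up first (the first command destroys all images on it):

Code:
# 1. Destroy the thin pool (all images on it are lost; restore from backup later)
lvremove pve/data

# 2. Grow the root LV and its filesystem by the desired amount, e.g. 100 GiB
lvextend -r -L +100G pve/root

# 3. Re-create the thin pool from most of the remaining free space
#    (leaving a few extents free avoids metadata allocation issues)
lvcreate -l 95%FREE --thinpool data pve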

At first this seemed potentially ideal for my use case:
the smaller local (100 GB) section for CTs and the larger local-lvm (657 GB) section for VMs,
and all CT/VM templates/ISOs, backups, snapshots, etc. on the additional storage drive.
In Proxmox VE, snapshots are stored on the same storage as the VM/Container drives themselves. Also note that you won't be able to snapshot containers if you use a file-system based backing storage like local. What speaks against using local-lvm for both container and VM drives?

Now, not so ideal.
Options carefully explained during the install process (especially for non-techies like me) could be a game changer...
Or is this configuration less than ideal for speed etc.?
Should the CTs and templates / VMs and ISOs be on the same drive and portion of that drive?
 
The image is not qcow2. For VMs, you can convert it with Move Disk in the Hardware view of the VM. Just select the same storage and qcow2 as the format.
This was greatly appreciated, thank you for the help.
Does any other format work for snapshots, or just qcow2? (Just wondering, since there are three options: raw, qcow2, and vmdk.)

Also, which of the three works best for stability and speed?

Thank you
 
This was greatly appreciated, thank you for the help.
Does any other format work for snapshots, or just qcow2? (Just wondering, since there are three options: raw, qcow2, and vmdk.)

Also, which of the three works best for stability and speed?

Thank you
On file-based storages, you need qcow2 for snapshots. Otherwise, the storage needs to support them, see here for a list. I don't think there's anything fundamentally wrong with vmdk support in QEMU, but it's certainly not a native format. Speed-wise, raw should be a little bit better than qcow2, stability-wise I don't think there's an issue with either.
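If you ever need to convert between these formats by hand, qemu-img can do it offline; a small sketch with hypothetical file names (the VM must be stopped, and the VM config has to be updated to point at the new file afterwards, which is why Move Disk is usually easier):

Code:
# Convert a raw image to qcow2 (file names are examples)
qemu-img convert -f raw -O qcow2 vm-100-disk-0.raw vm-100-disk-0.qcow2

# Verify the result
qemu-img info vm-100-disk-0.qcow2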
 
