Hello again, The image is not qcow2. For VMs, you can convert it with Move Disk in the Hardware view of the VM. Just select the same storage and qcow2 as the format.
Quick question.
I have a server with 3 VMs.
I installed the first 2 VMs, then added another drive for backups a month later.
Code:
root@proxmox:~# pvesm status
Name               Type      Status         Total          Used     Available        %
backup_drive       dir       active     960302804     136496700     774951644   14.21%
local              dir       active      98559220      18923912      74585760   19.20%
local-lvm          lvmthin   active     832868352     130593757     702274594   15.68%
Code:
root@proxmox:~# qm config 102
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
ide2: local:iso/ubuntu-20.04.2-live-server-amd64.iso,media=cdrom
memory: 12228
meta: creation-qemu=6.1.1,ctime=1644894666
name: ubuntu-cP
numa: 0
onboot: 1
ostype: l26
parent: Ubuntu_cPanel_03_12_22
scsi0: local-lvm:vm-102-disk-0,size=96G
scsihw: virtio-scsi-pci
sockets: 1
unused0: backup_drive:102/vm-102-disk-1.qcow2
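(Side note: to double-check what that leftover unused0 image actually is, I assume I could inspect it with something like the command below, where the inner pvesm call just resolves the volume to a file path; please correct me if there is a better way.)
Code:
qemu-img info "$(pvesm path backup_drive:102/vm-102-disk-1.qcow2)"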
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-11 (running version: 7.1-11/8d529482)
pve-kernel-helper: 7.1-13
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-11
pve-kernel-5.13.19-6-pve: 5.13.19-14
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-6
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1
I noticed afterwards that this 3rd VM's disk probably ended up on backup_drive, because local-lvm did not offer qcow2 as a format, so I did not use it for Move Disk.
backup_drive was the only storage where Move Disk let me select qcow2; local-lvm has no dropdown to select any format at all.
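(For reference, I believe the CLI equivalent of that Move Disk step would be roughly the command below, with 102 being the VM ID and the disk/storage names just examples from my setup; I actually used the GUI, so please correct me if the command is off.)
Code:
qm move_disk 102 scsi0 backup_drive --format qcow2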
Now that I have read a little more, I am thinking local-lvm uses qcow2 by default?
Because after I moved the 3rd VM's disk to local-lvm, the disk entry does not say .raw or .qcow2, yet the Snapshot button is still active.
So I am assuming that a disk created on local-lvm is automatically qcow2, yes?
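To check this myself, I assume I could list what local-lvm actually holds and the format it reports, with something like:
Code:
# list volumes on local-lvm with their reported format
pvesm list local-lvm
# show the underlying LVM thin volumes (and any snapshots of them)
lvs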
@fabian On file-based storages, you need qcow2 for snapshots. Otherwise, the storage needs to support them, see here for a list.
Also, what do you mean by "on file-based storages"?
So now that I have added my backup_drive, is it considered a file-based storage because I use it as a backup drive?
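I have not pasted my real /etc/pve/storage.cfg here, but I assume the relevant entries look roughly like this (the path and content lines are my guess, the storage types match what pvesm status shows):
Code:
dir: backup_drive
        path /mnt/backup_drive
        content images,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images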
Is there a better option for setting up the backup_drive, from that list you linked ("see here for a list")?
Or, for the general purpose of backing up VMs, is it good enough the way it is?
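For context, what I have in mind for the actual backups is just running vzdump to that drive, something along the lines of the command below (the options are only an example, not necessarily what I should be using):
Code:
vzdump 102 --storage backup_drive --mode snapshot --compress zstd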
If someone can explain it so I can understand better, I would appreciate it. Sorry for the dumb questions, I am still learning.
Thank you kindly for your time and your answer,
Spiro