Managing backup with iSCSI storage

coolnodje

New Member
Nov 25, 2024
When using iSCSI storage, the disk backups and snapshots are all taken care of by the SAN.

Is there a way to back up only the VM configuration, without its disk content?
 
Is there a way to back up only the VM configuration, without its disk content?
Hi @coolnodje, welcome to the forum.

It could be as simple as copying the file located here: /etc/pve/qemu-server/
However, I would instead recommend using the API, or saving these outputs:

Code:
root@pve-1:~# qm config 10000
boot:
meta: creation-qemu=9.0.2,ctime=1732554477
smbios1: uuid=48e9380d-d3bb-4f7a-8681-70392d9e3bd6
vmgenid: a997b5d1-95ab-41ec-8492-5cb991a534df


root@pve-1:~# qm showcmd 10000 --pretty 1
/usr/bin/kvm \
  -id 10000 \
  -name 'vm10000,debug-threads=on' \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/10000.qmp,server=on,wait=off' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/10000.pid \
  -daemonize \
  -smbios 'type=1,uuid=48e9380d-d3bb-4f7a-8681-70392d9e3bd6' \
  -smp '1,sockets=1,cores=1,maxcpus=1' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc 'unix:/var/run/qemu-server/10000.vnc,password=on' \
  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
  -m 512 \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'vmgenid,guid=a997b5d1-95ab-41ec-8492-5cb991a534df' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:fa18d7246ac' \
  -machine 'type=pc+pve0'
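
If you prefer the API route, the same data can be pulled with pvesh (a minimal sketch; the node name pve-1 and VMID 10000 are just the ones from the example above):

Code:
root@pve-1:~# pvesh get /nodes/pve-1/qemu/10000/config --output-format json

The equivalent HTTP call is GET /api2/json/nodes/{node}/qemu/{vmid}/config.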



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
When using iSCSI storage, the disk backups and snapshots are all taken care of by the SAN.
Depending on what type of guest you have, you may need to create your own orchestration to quiesce the guest file system before initiating a back-end snapshot/backup, or you may end up with dirty data, possibly causing corruption and data loss. I would advocate using a normal backup procedure regardless.
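
If you do roll your own orchestration, a minimal sketch (assuming the QEMU guest agent is running in the VM; the SAN snapshot step is a placeholder for whatever your storage vendor provides) would be along these lines:

Code:
#!/bin/bash
# freeze the guest file system via the QEMU guest agent
qm guest cmd 10000 fsfreeze-freeze
# <trigger the snapshot on the SAN here - vendor-specific placeholder>
# thaw the guest again; keep this window short, the guest I/O is blocked while frozen
qm guest cmd 10000 fsfreeze-thaw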
 
Thanks for your answers.

I just stumbled upon the option to uncheck "Backup" under a VM's Hardware -> Hard Disk settings.
I think it basically does what I was looking for when using PBS as the backup engine.
Except it doesn't seem possible to manage that globally across a selection of VMs.
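
From the CLI it looks like the same flag could be scripted across VMs (a rough sketch, assuming each VM's disk is scsi0, the VMIDs are listed by hand, and backup= isn't already set on the disk):

Code:
for vmid in 101 102 103; do
  # read the current scsi0 spec and re-apply it with backup=0 appended
  spec=$(qm config "$vmid" | sed -n 's/^scsi0: //p')
  qm set "$vmid" --scsi0 "${spec},backup=0"
done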

It would be tricky to keep the VM config backups in sync with the storage snapshots, but I don't think it matters, as the VM config is not likely to change often.

So I guess that follows your advice, @alexskysilk, to use a standard procedure.
Or is the standard backup not going to quiesce the guest FS in this case?
Actually, what could go wrong on the guest FS that would make a ZFS snapshot on the storage side go wrong? I just can't figure it out.

I guess I need to try a few restores to check how it behaves.

ZFS over iSCSI is probably better suited for this config. I want to test that, but I'm using TrueNAS as the storage backend and it doesn't seem to be supported by the plugin.

@bbgeek17 how do you restore from such an API output? Will Proxmox detect the restored VM if the restore isn't done via the CLI?
What about restoring on a new PVE instance, would that work as well?
 
how do you restore from such an API output? Will Proxmox detect the restored VM if the restore isn't done via the CLI?
What about restoring on a new PVE instance, would that work as well?
Frankly, the easiest solution for you is to copy the file via the shell (you can use Ansible or any other automation) and recover by simply placing the file in the appropriate folder, either on an existing or a new PVE.
Of course, you need to make sure you don't have VMID collisions.
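
For example (a minimal sketch; the backup path and the pve-new hostname are made up, the config path is the standard one):

Code:
# save the config file - the disks live on the SAN and are not included
scp root@pve-1:/etc/pve/qemu-server/10000.conf /backups/pve/10000.conf

# restore: make sure the VMID is free on the target node, then drop the file back in place
ssh root@pve-new 'test ! -e /etc/pve/qemu-server/10000.conf' \
  && scp /backups/pve/10000.conf root@pve-new:/etc/pve/qemu-server/10000.conf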

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Or is the standard backup not going to quiesce the guest FS in this case?
What I meant by standard backup is to use a supported backup solution, e.g. PBS. A supported solution will issue a freeze to the guest before commencing the backup, meaning the file system is in a fully committed state.
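
For instance (a minimal sketch; pbs-store is a placeholder for your configured backup storage):

Code:
# snapshot-mode backup; with the guest agent enabled, vzdump wraps it in fs-freeze/fs-thaw
vzdump 10000 --storage pbs-store --mode snapshot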

Actually, what could go wrong on the guest FS that would make a ZFS snapshot on the storage side go wrong?
Exactly what I mentioned. When the guest file system and the underlying disk file systems don't communicate, there is no guarantee that what's in the guest buffers is going to be written to the backup, and all those open files will remain unclosed. Sometimes that's OK, sometimes it's not. In either case it isn't a wise idea.
ZFS over iSCSI is probably better suited for this config,
Not relevant to the above point. ZFS over iSCSI allows the host to control snapshots, which means the normal tooling will work correctly; none of that matters if you don't use the tooling provided.
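
For reference, a ZFS over iSCSI storage is declared in /etc/pve/storage.cfg roughly like this sketch (portal, target and pool are made-up values; iscsiprovider only accepts the built-in providers - iet, comstar, istgt, LIO - which is why TrueNAS isn't covered out of the box):

Code:
zfs: san-zfs
        portal 192.0.2.10
        target iqn.2003-01.org.example.san:pve
        pool tank/pve
        iscsiprovider iet
        blocksize 4k
        sparse 1
        content images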