How to back up selected partitions used by a W11 VM.

blah · New Member · Feb 8, 2026
Hi,
I have installed Proxmox VE 9.15 on a system where W11 was already installed on a 1TB NVMe drive.
I have set up Proxmox with a 1TB SATA drive holding the Proxmox OS and a backup directory, while the NVMe drive keeps the existing W11 OS partitions plus a partition for Linux VMs.

In practice the W11 VM uses the existing W11 partitions nvme0n1p1..nvme0n1p4, while the nvme0n1p5 partition is used as disk space for Linux VMs.

With this setup, a backup of the W11 VM saves the entire 1TB NVMe disk (rather expected, since the whole disk is defined in the W11 VM).
But as a result the backup also includes the nvme0n1p5 partition, which is not relevant to the W11 OS, and my backup directory runs out of space.
I searched in Proxmox but could not find a way to take a backup of only the selected partitions used by that W11 VM.

Is this possible using the web admin page or via CLI commands?
Or better, can a VM be created so that it uses only selected existing partitions, assuming they all live on the same drive?

Below are the details of the setup:

Code:
lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0 953.9G  0 disk
├─sda1                 8:1    0  1007K  0 part
├─sda2                 8:2    0     1G  0 part /boot/efi
└─sda3                 8:3    0   952G  0 part
  ├─pve-swap         252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root         252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta   252:2    0   8.3G  0 lvm
  │ └─pve-data-tpool 252:4    0 815.4G  0 lvm
  │   └─pve-data     252:5    0 815.4G  1 lvm
  └─pve-data_tdata   252:3    0 815.4G  0 lvm
    └─pve-data-tpool 252:4    0 815.4G  0 lvm
      └─pve-data     252:5    0 815.4G  1 lvm
nvme0n1              259:0    0 931.5G  0 disk
├─nvme0n1p1          259:1    0   300M  0 part <- vfat SYSTEM
├─nvme0n1p2          259:2    0    16M  0 part <- MS Reserved
├─nvme0n1p3          259:3    0 249.5G  0 part <- ntfs Windows
├─nvme0n1p4          259:4    0     1G  0 part <- ntfs Recovery
└─nvme0n1p5          259:5    0 680.7G  0 part /mnt/pve/nvme_vms

W11 VM configuration data:
Code:
agent: 1
audio0: device=ich9-intel-hda,driver=spice
bios: ovmf
boot: order=sata1;net0
cores: 8
cpu: x86-64-v3
machine: pc-q35-10.1
memory: 8192
meta: creation-qemu=10.1.2,ctime=1768864563
name: VMW11
net0: virtio=BC:24:11:32:C3:AA,bridge=vmbr0
numa: 0
ostype: win11
sata1: /dev/disk/by-id/nvme-WD_Blue_SN5000_1TB_25031T802483,cache=writeback,discard=on,size=976762584K
scsihw: virtio-scsi-single
smbios1: uuid=fee96b98-d53d-441b-b895-80666ae65ab6
sockets: 1
tpmstate0: nvme_vms:300/vm-300-disk-0.qcow2,size=4M,version=v2.0
usb0: spice
vga: qxl
vmgenid: 51f64bf6-0c58-415f-b0d1-9724f81facb6

Thanks for your help.
 
Hi,
Thanks for the suggestion. I gave it a try, but the disk was simply excluded from the backup job.
That sounds rather normal when the backup checkbox is unchecked.
It looks like Proxmox does not support backing up selected partitions of disks that are not defined as storage on a node, even if they are attached to VMs. Perhaps that is possible using Proxmox CLI commands.
For now I am afraid I'll have to save my data using native Linux commands outside of the Proxmox environment.

Code:
INFO: starting new backup job: vzdump 300 --storage backups --remove 0 --compress zstd --notes-template '{{guestname}}' --mode stop --node IT12 --notification-mode notification-system
INFO: Starting Backup of VM 300 (qemu)
INFO: Backup started at 2026-02-27 21:07:19
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: VMW11
INFO: exclude disk 'sata1' '/dev/disk/by-id/nvme-WD_Blue_SN5000_1TB_25031T802483' (backup=no)
INFO: include disk 'tpmstate0' 'nvme_vms:300/vm-300-disk-0.qcow2' 4M
INFO: creating vzdump archive '/backups/dump/vzdump-qemu-300-2026_02_27-21_07_19.vma.zst'
INFO: starting kvm to execute backup task
WARN: no efidisk configured! Using temporary efivars disk.
swtpm_setup: Not overwriting existing state file.
INFO: attaching TPM drive to QEMU for backup
INFO: started backup task '4d89ebcb-f557-4858-b18b-e39392ed5038'
INFO: 100% (4.0 MiB of 4.0 MiB) in 0s, read: 4.0 MiB/s, write: 12.0 KiB/s
INFO: backup is sparse: 3.99 MiB (99%) total zero data
INFO: transferred 4.00 MiB in <1 seconds
INFO: stopping kvm after backup task
INFO: archive file size: 7KB
INFO: adding notes to backup
trying to acquire lock...
 OK
INFO: Finished Backup of VM 300 (00:00:02)
INFO: Backup finished at 2026-02-27 21:07:21
INFO: Backup job finished successfully
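For what it's worth, the "native Linux commands" route mentioned above could look something like the sketch below. It assumes the partition layout from the lsblk output earlier in the thread and a /backups directory with enough free space (both assumptions). To stay safe, the helper only PRINTS one dd-piped-into-zstd command per Windows partition, so the lines can be reviewed and then run by hand with the VM powered off:

```shell
#!/bin/sh
# Sketch only: build (do not run) an image-backup command per partition.
# Partition names match the lsblk output above; /backups is an assumption.
backup_cmd() {
    part=$1; dest=$2
    # dd reads the raw partition, zstd compresses it to one archive per partition
    printf 'dd if=/dev/%s bs=4M status=progress | zstd -T0 -o %s/%s.img.zst' \
        "$part" "$dest" "$part"
}

for p in nvme0n1p1 nvme0n1p2 nvme0n1p3 nvme0n1p4; do
    backup_cmd "$p" /backups
    echo
done
```

Restoring a single partition would then be the reverse pipe (zstdcat archive | dd of=/dev/...), again with the VM powered off in both directions.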
 
Well, in my setup the data of the Linux VMs lives in partition nvme0n1p5 (after the partitions used by the W11 VM), and the backup files for those Linux VMs are copied to the 'backup' directory on sda3.

My problem is that the backup file for the W11 VM also includes the disk space on nvme0n1p5 with the Linux VM data.
As a result, the backup directory receiving the 1TB disk backup file runs out of space: it was allocated 100GB by default, and I need to find a way to increase its size, which could be some sort of workaround for my problem.
But even then, a restore from this backup file would of course replace the data for the W11 VM AND the data for the Linux VMs. Not so good!
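For the out-of-space part specifically: if the 100GB backup directory is backed by an LVM logical volume (an assumption; the pve/backup name below is a hypothetical placeholder, check the real name with lvs first), growing it would look roughly like this. The snippet only prints the command so the names can be adapted before anything is resized:

```shell
#!/bin/sh
# Sketch: print (do not run) the LVM grow step.
# /dev/pve/backup is a HYPOTHETICAL volume name -- verify with 'lvs' first.
lv=/dev/pve/backup
grow=+200G   # placeholder amount
# -r grows the filesystem together with the logical volume
printf 'lvextend -r -L %s %s\n' "$grow" "$lv"
```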
 
You have a really weird setup, and arguably you're implementing it wrong.

> Practically the W11 VM uses the existing W11 partitions nvme0n1p1..nvme0n1p4, and the nvme0n1p5 partition is used for Linux VMs disk space
> sata1: /dev/disk/by-id/nvme-WD_Blue_SN5000_1TB_25031T802483,cache=writeback,discard=on,size=976762584K

You're passing through the entire disk to the VM, and somehow the host still has access to p5 for Linux VMs? You may need professional help.

> As a result the backup directory receiving the 1TB disk backup file runs out of space .. it got allocated by default at 100GB and I need to find a way to increase the size of the backup directory

Fix your backup disk first. IDK if anyone besides "AI" gave you (extremely bad) advice on how to set this up but you appear to have knocked the whole thing into a cocked hat, which is not a supported configuration.

If you're using LVM on the backup disk, forget about it: just wipe the whole drive, repartition it with gdisk to a single partition, and run mkfs.xfs on it. Then you have free space for the entire drive size. If you need graphical help with this, boot systemrescuecd and redo the backup disk with gparted.
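As a non-graphical sketch of that wipe-and-reformat (using sgdisk, the scriptable sibling of gdisk, plus mkfs.xfs), the steps could look like this. /dev/sdX is deliberately a placeholder so nothing real gets erased, and the block only prints the destructive commands instead of executing them:

```shell
#!/bin/sh
# DESTRUCTIVE steps, printed only -- replace /dev/sdX with the real backup
# disk (and triple-check it with lsblk) before running any line by hand.
disk=/dev/sdX
cat <<EOF
wipefs --all $disk
sgdisk --zap-all $disk
sgdisk --new=1:0:0 --typecode=1:8300 $disk
mkfs.xfs ${disk}1
mount ${disk}1 /backups
EOF
```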

Then, power off the Windows VM and back up all other VMs. Once you have backups of everything, do a P2V conversion of the Win VM with e.g. the free Veeam agent: back up to a Samba share, then restore the backup over Samba into a Proxmox VM with a single virtual disk.

You can install webmin (runs on port 10000) to help set up a share, or edit the smb.conf file and add a simple stanza for the directory on the backup drive, then systemctl restart smbd and smbpasswd -a useridhere to give that userid access.

https://github.com/kneutron/ansitest/blob/master/ZFS/zfs-fix-smb-conf.sh

See code ^^ for example stanza.
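For a rough idea, a minimal stanza of the kind meant above might look like this (share name, path, and user are hypothetical placeholders; the linked script is authoritative for the details):

```ini
; Hypothetical example stanza for smb.conf -- adjust name/path/user to taste
[backupshare]
   path = /backups/share
   valid users = useridhere
   read only = no
   browseable = yes
```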
 
I question that setup to begin with.
You are passing through the whole disk to a guest while at the same time mounting/using a partition of it on the host.
I wish you luck that you do not experience some kind of corruption sooner or later. It is definitely not a setup that I personally would trust/rely on, at all...

Just my 2 cents...