I have Proxmox VE running as a VM on a physical Proxmox VE host via nested virtualization.
It has two disks:
- scsi0: Operating system, Debian + Proxmox VE
- scsi1: Storage for Proxmox VE with VM images and ISO images, XFS filesystem
By default, only one VM is started inside this nested Proxmox VE: it has two 20 GiB disks – one for the operating system and one for log data (XFS filesystem) – and runs Elasticsearch, Fluentd, and Kibana.
I just made a backup of that VM, then reran the backup a few minutes later without stopping or restarting it. Fast incremental mode (dirty-bitmap), however, recognized only about 10 GiB as unchanged and still reported 100.1 GiB as dirty:
INFO: VM Name: […]
INFO: include disk 'scsi0' 'pve:1011/vm-1011-disk-0.qcow2' 10G
INFO: include disk 'scsi1' 'pve:1011/vm-1011-disk-1.qcow2' 100G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/1011/2020-07-14T09:44:57Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '94ce6a5e-4a79-469e-a791-15c4e26c54ba'
INFO: resuming VM again
INFO: using fast incremental mode (dirty-bitmap), 100.1 GiB dirty of 110.0 GiB total
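As a quick sanity check on the figures in that last line (a sketch of my own, not part of the log; the parsing assumes the exact wording of the vzdump log line):

```python
import re

line = "INFO: using fast incremental mode (dirty-bitmap), 100.1 GiB dirty of 110.0 GiB total"

# Pull the dirty and total figures out of the vzdump log line.
m = re.search(r"([\d.]+) GiB dirty of ([\d.]+) GiB total", line)
dirty, total = float(m.group(1)), float(m.group(2))
unchanged = total - dirty

print(f"unchanged: {unchanged:.1f} GiB ({unchanged / total:.0%})")
# With scsi0 = 10 GiB and scsi1 = 100 GiB, ~9.9 GiB unchanged means
# essentially all of scsi1 was marked dirty.
```

So the unchanged portion is almost exactly the size of scsi0, which is what makes it look like scsi1 is treated as fully dirty.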
It appears to treat the second disk as completely changed. While I expect some changes there from starting and running the log VM, I would not expect all 100 GiB to change. Is the second disk somehow omitted from fast incremental mode with dirty bitmaps?
If so, is there anything I can do about that?
Here is the configuration of that VM:
root@tuxmaster:/etc/pve/qemu-server# cat 1011.conf
agent: 1
boot: dcn
bootdisk: scsi0
cores: 2
cpu: host
memory: 12288
name: […]
net0: virtio=[…],bridge=vmbr1,tag=10
numa: 0
ostype: l26
sata0: none,media=cdrom
scsi0: pve:1011/vm-1011-disk-0.qcow2,discard=on,size=10G
scsi1: pve:1011/vm-1011-disk-1.qcow2,discard=on,size=100G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=[…]
sockets: 1
vga: qxl
vmgenid: […]
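One way to check whether the bitmap on scsi1 actually survives between runs is to look at QEMU's own view of the drives. The sketch below parses a `query-block` QMP response; the field layout follows QEMU's QMP schema, but the concrete values, the bitmap name, and the socket path mentioned in the comment are assumptions for illustration, not taken from this VM:

```python
import json

# Illustrative excerpt of a QMP `query-block` reply. On a Proxmox host the
# real reply can be obtained over the VM's QMP socket
# (/var/run/qemu-server/<vmid>.qmp, after QMP capability negotiation).
# The values below, including the bitmap name, are made up for this sketch.
sample = json.loads("""
{
  "return": [
    {"device": "drive-scsi0",
     "dirty-bitmaps": [{"name": "pbs-backup", "count": 1073741824,
                        "granularity": 65536, "recording": true}]},
    {"device": "drive-scsi1",
     "dirty-bitmaps": []}
  ]
}
""")

# A drive with no dirty bitmap cannot use fast incremental mode: the next
# backup has to read and re-hash that whole disk, which would show up as
# the disk being entirely "dirty".
for blk in sample["return"]:
    bitmaps = blk.get("dirty-bitmaps", [])
    state = ", ".join(b["name"] for b in bitmaps) or "no bitmap (full read)"
    print(f'{blk["device"]}: {state}')
```

If the real output shows scsi1 without a bitmap (or with `recording` off) right after a backup, that would explain why the whole 100 GiB is counted as dirty on the next run.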
UPDATE: This does not appear to happen on all VMs with two disks.