Full-clone of template crashes pfSense VM

bufu

New Member
Nov 27, 2022
Hi,

so I recently got into Packer and created some VM templates with it, which all worked fine. During testing I only used linked clones, so there were no issues. However, now that I'm done creating the templates, creating a working VM from a full clone of a template crashes my pfSense VM.
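
For reference, this is roughly how I clone (the VM IDs are just from my setup; as far as I know a linked clone is the default when cloning a template, and --full forces a full copy):

Code:
# linked clone of the template -- this is what I used during testing, works fine
qm clone 902 101 --name kali-test

# full clone -- while this runs, the pfSense VM (100) crashes
qm clone 902 101 --name kali-test --full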

I set up qemu-guest-agent on pfSense, fully updated Proxmox and tried the solution from here, but with no success.
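
In case it matters, this is roughly how I set up the agent on pfSense (FreeBSD package; the rc.conf.local lines are from memory, so treat them as an assumption):

Code:
# from the pfSense shell (console option 8)
pkg install -y qemu-guest-agent

# /etc/rc.conf.local (assumed, from memory)
qemu_guest_agent_enable="YES"
qemu_guest_agent_flags="-d -v -l /var/log/qemu-ga.log"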

Here is the error from syslog:

Code:
Dec 14 11:37:40 proxmox pvestatd[1837]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - got timeout
Dec 14 11:37:41 proxmox pvestatd[1837]: status update time (8.312 seconds)
Dec 14 11:37:50 proxmox pvestatd[1837]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 51 retries
Dec 14 11:37:50 proxmox pvestatd[1837]: status update time (8.337 seconds)
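
The timeouts show up while qemu-img is copying the template disk, so the host I/O load and the pvestatd messages are easy to watch side by side (iostat here is from the sysstat package):

Code:
# per-device I/O on the host while the clone runs
iostat -dx 2

# pvestatd messages in parallel
journalctl -f -u pvestatd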

The console of the pfSense VM shows the following message:

[screenshot pfsense_WRITE_DMA.png: the pfSense console reporting WRITE_DMA errors]

Networking stops working some time after that, and I have to force stop and restart the VM.

Here is the config of the pfSense VM:

Code:
agent: 1
boot: order=ide0;ide2
cores: 2
cpu: host
hostpci0: 0000:85:00
hostpci1: 0000:87:00
hostpci2: 0000:82:00
ide0: local-lvm:vm-100-disk-0,discard=on,size=32G
ide2: local:iso/pfSense-CE-2.6.0-RELEASE-amd64.iso,media=cdrom,size=749476K
memory: 4096
meta: creation-qemu=7.0.0,ctime=1669568663
name: pfsense
net0: virtio=BE:11:48:DE:2B:E5,bridge=vmbr1
numa: 0
onboot: 1
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=f596268e-f078-46ae-9444-8a68f04b36f0
sockets: 2
startup: order=0
vga: qxl
vmgenid: 8bb8c2c4-a775-411c-9db0-028a06f90f8d
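
One thing I notice myself: the pfSense disk sits on IDE (ide0), while the template disk is on virtio-scsi. If moving it off IDE is worth testing, I assume it would go something like this (not verified on my box, so please correct me):

Code:
# detach the disk (it shows up as unused0), then reattach it on SCSI
qm set 100 --delete ide0
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
qm set 100 --boot 'order=scsi0;ide2'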

The config of the template I am trying to clone (it's a Kali Linux VM, but the same thing happened with a Windows Server template):

Code:
root@proxmox:~# qm config 902
agent: 1
boot: c
cores: 2
cpu: kvm64
description: Kali 2022.4
ide0: local-lvm:vm-902-cloudinit,media=cdrom
ide2: none,media=cdrom
kvm: 1
memory: 8192
meta: creation-qemu=7.1.0,ctime=1670852399
name: kali-2022.4
net0: virtio=6E:1D:09:4D:8A:60,bridge=vmbr1,firewall=0,tag=20
numa: 0
onboot: 0
ostype: l26
scsi0: local-lvm:base-902-disk-0,cache=writeback,iothread=0,size=64G
scsihw: virtio-scsi-pci
smbios1: uuid=56d57d62-3def-4875-a9d8-ee6b48aeaa2f
sockets: 2
tablet: 0
template: 1
vga: type=std,memory=256
vmgenid: a6804438-50a5-4e09-8137-d31532d4f1de

Output of pveversion -v:

Code:
root@proxmox:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

If you need to look at the Packer templates, you can find them here.

I hope you can help me fix this. I know I could just use linked clones, but that seems like a bad workaround to me, and I don't want to set up my VMs again every time I change the template.
 
UPDATE:
I just tried removing all network adapters, including the PCI passthrough ones, and changed the display back to the default instead of SPICE; it did not help (the exact commands I used are below the trace). There was a new error message in syslog, however, although that might have occurred while I was trying to force stop the VM:

Code:
Dec 14 12:23:38 proxmox kernel: INFO: task kworker/u98:0:360151 blocked for more than 120 seconds.
Dec 14 12:23:38 proxmox kernel:       Tainted: P           O      5.15.74-1-pve #1
Dec 14 12:23:38 proxmox kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 14 12:23:38 proxmox kernel: task:kworker/u98:0   state:D stack:    0 pid:360151 ppid:     2 flags:0x00004000
Dec 14 12:23:38 proxmox kernel: Workqueue: writeback wb_workfn (flush-253:20)
Dec 14 12:23:38 proxmox kernel: Call Trace:
Dec 14 12:23:38 proxmox kernel:  <TASK>
Dec 14 12:23:38 proxmox kernel:  __schedule+0x34e/0x1740
Dec 14 12:23:38 proxmox kernel:  ? submit_bio_noacct+0xa8/0x2b0
Dec 14 12:23:38 proxmox kernel:  ? submit_bio_noacct+0x290/0x2b0
Dec 14 12:23:38 proxmox kernel:  schedule+0x69/0x110
Dec 14 12:23:38 proxmox kernel:  io_schedule+0x46/0x80
Dec 14 12:23:38 proxmox kernel:  wait_on_page_bit_common+0x114/0x3e0
Dec 14 12:23:38 proxmox kernel:  ? filemap_invalidate_unlock_two+0x50/0x50
Dec 14 12:23:38 proxmox kernel:  __lock_page+0x4c/0x60
Dec 14 12:23:39 proxmox kernel:  write_cache_pages+0x214/0x460
Dec 14 12:23:39 proxmox kernel:  ? __set_page_dirty_no_writeback+0x50/0x50
Dec 14 12:23:39 proxmox kernel:  generic_writepages+0x54/0x90
Dec 14 12:23:39 proxmox kernel:  blkdev_writepages+0xe/0x20
Dec 14 12:23:39 proxmox kernel:  do_writepages+0xd5/0x210
Dec 14 12:23:39 proxmox kernel:  ? fprop_fraction_percpu+0x34/0x80
Dec 14 12:23:39 proxmox kernel:  ? __wb_calc_thresh+0x3e/0x130
Dec 14 12:23:39 proxmox kernel:  __writeback_single_inode+0x44/0x290
Dec 14 12:23:39 proxmox kernel:  writeback_sb_inodes+0x22a/0x4e0
Dec 14 12:23:39 proxmox kernel:  __writeback_inodes_wb+0x56/0xf0
Dec 14 12:23:39 proxmox kernel:  wb_writeback+0x1c4/0x280
Dec 14 12:23:39 proxmox kernel:  wb_workfn+0x300/0x4f0
Dec 14 12:23:39 proxmox kernel:  ? __schedule+0x356/0x1740
Dec 14 12:23:39 proxmox kernel:  ? wb_update_bandwidth+0x4f/0x70
Dec 14 12:23:39 proxmox kernel:  process_one_work+0x22b/0x3d0
Dec 14 12:23:39 proxmox kernel:  worker_thread+0x53/0x420
Dec 14 12:23:39 proxmox kernel:  ? process_one_work+0x3d0/0x3d0
Dec 14 12:23:39 proxmox kernel:  kthread+0x12a/0x150
Dec 14 12:23:39 proxmox kernel:  ? set_kthread_struct+0x50/0x50
Dec 14 12:23:39 proxmox kernel:  ret_from_fork+0x22/0x30
Dec 14 12:23:39 proxmox kernel:  </TASK>
Dec 14 12:23:39 proxmox kernel: INFO: task qemu-img:360508 blocked for more than 120 seconds.
Dec 14 12:23:39 proxmox kernel:       Tainted: P           O      5.15.74-1-pve #1
Dec 14 12:23:39 proxmox kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 14 12:23:39 proxmox kernel: task:qemu-img        state:D stack:    0 pid:360508 ppid:360466 flags:0x00000000
Dec 14 12:23:39 proxmox kernel: Call Trace:
Dec 14 12:23:39 proxmox kernel:  <TASK>
Dec 14 12:23:39 proxmox kernel:  __schedule+0x34e/0x1740
Dec 14 12:23:39 proxmox kernel:  ? try_to_wake_up+0x218/0x5c0
Dec 14 12:23:39 proxmox kernel:  schedule+0x69/0x110
Dec 14 12:23:39 proxmox kernel:  io_schedule+0x46/0x80
Dec 14 12:23:39 proxmox kernel:  wait_on_page_bit_common+0x114/0x3e0
Dec 14 12:23:39 proxmox kernel:  ? filemap_invalidate_unlock_two+0x50/0x50
Dec 14 12:23:39 proxmox kernel:  wait_on_page_bit+0x3f/0x50
Dec 14 12:23:39 proxmox kernel:  wait_on_page_writeback+0x26/0x80
Dec 14 12:23:39 proxmox kernel:  write_cache_pages+0x13b/0x460
Dec 14 12:23:39 proxmox kernel:  ? __set_page_dirty_no_writeback+0x50/0x50
Dec 14 12:23:39 proxmox kernel:  generic_writepages+0x54/0x90
Dec 14 12:23:39 proxmox kernel:  blkdev_writepages+0xe/0x20
Dec 14 12:23:39 proxmox kernel:  do_writepages+0xd5/0x210
Dec 14 12:23:39 proxmox kernel:  ? inotify_handle_inode_event+0x10c/0x210
Dec 14 12:23:39 proxmox kernel:  ? fsnotify_handle_inode_event.isra.0+0x7d/0xa0
Dec 14 12:23:39 proxmox kernel:  filemap_fdatawrite_wbc+0x89/0xe0
Dec 14 12:23:39 proxmox kernel:  filemap_write_and_wait_range+0x72/0xe0
Dec 14 12:23:39 proxmox kernel:  blkdev_put+0x1d4/0x210
Dec 14 12:23:39 proxmox kernel:  blkdev_close+0x27/0x40
Dec 14 12:23:39 proxmox kernel:  __fput+0x9f/0x260
Dec 14 12:23:39 proxmox kernel:  ____fput+0xe/0x20
Dec 14 12:23:39 proxmox kernel:  task_work_run+0x6d/0xb0
Dec 14 12:23:39 proxmox kernel:  exit_to_user_mode_prepare+0x1a8/0x1b0
Dec 14 12:23:39 proxmox kernel:  syscall_exit_to_user_mode+0x27/0x50
Dec 14 12:23:39 proxmox kernel:  ? __x64_sys_close+0x12/0x50
Dec 14 12:23:39 proxmox kernel:  do_syscall_64+0x69/0xc0
Dec 14 12:23:39 proxmox kernel:  ? irqentry_exit_to_user_mode+0x9/0x20
Dec 14 12:23:39 proxmox kernel:  ? irqentry_exit+0x1d/0x30
Dec 14 12:23:39 proxmox kernel:  ? exc_page_fault+0x89/0x170
Dec 14 12:23:39 proxmox kernel:  entry_SYSCALL_64_after_hwframe+0x61/0xcb
Dec 14 12:23:39 proxmox kernel: RIP: 0033:0x7f8461b0f11b
Dec 14 12:23:39 proxmox kernel: RSP: 002b:00007ffd9b892d20 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
Dec 14 12:23:39 proxmox kernel: RAX: 0000000000000000 RBX: 00005574337508d0 RCX: 00007f8461b0f11b
Dec 14 12:23:39 proxmox kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000008
Dec 14 12:23:39 proxmox kernel: RBP: 000055743374a570 R08: 0000000000000000 R09: 0000557433753770
Dec 14 12:23:39 proxmox kernel: R10: 0000000000000008 R11: 0000000000000293 R12: 0000557433725690
Dec 14 12:23:39 proxmox kernel: R13: 0000000000000000 R14: 00005574328f8ff0 R15: 0000000000000000
Dec 14 12:23:39 proxmox kernel:  </TASK>
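
For completeness, the commands I used to strip the devices were roughly these (reconstructed from memory, so the exact flags are an assumption):

Code:
# drop the passthrough NICs and the virtio NIC
qm set 100 --delete hostpci0,hostpci1,hostpci2,net0

# back to the default display instead of SPICE/qxl
qm set 100 --delete vga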
 
