[SOLVED] Best way to create a Windows VM on an NFS share with snapshots

MisterDeeds

Active Member
Nov 11, 2021
Hi all

I am looking for the best way to create a Windows VM on an NFS shared storage which also supports snapshots.

According to the storage overview (https://pve.proxmox.com/wiki/Storage), snapshots on NFS storage are only possible if the VM disk uses the qcow2 format. But when I create a VM this way (with the discard option enabled), it hangs at the following step for over an hour:

Code:
Formatting '/mnt/pve/NAS01-Vm/images/999/vm-999-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=53687091200 lazy_refcounts=off refcount_bits=16
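One way to narrow down whether the hang is the NFS link or the qcow2 metadata preallocation is to test both by hand. A rough sketch (the mount point is the one from this thread; `test.qcow2` and `ddtest.bin` are throwaway names, not files PVE creates):

```shell
NFS=/mnt/pve/NAS01-Vm   # mount point from storage.cfg; adjust to yours
if [ -d "$NFS" ]; then
    # Raw sequential write test: 1 GiB of zeros, bypassing the page cache
    dd if=/dev/zero of="$NFS/ddtest.bin" bs=1M count=1024 oflag=direct
    rm "$NFS/ddtest.bin"
    # Time a manual qcow2 creation; preallocation=off skips the metadata
    # writes that can stall for a long time over a slow or broken link
    time qemu-img create -f qcow2 -o preallocation=off "$NFS/test.qcow2" 50G
    rm "$NFS/test.qcow2"
fi
```

If the `dd` already crawls, the problem is the network or the NAS, not qemu-img. Note that the `size=53687091200` in the log above is exactly the 50 GiB disk (50 × 1024³ bytes).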

Also the Proxmox service seems to be blocked or interrupted:
[screenshot attached: Unbenannt.PNG]

Why is that?
And what is the best way to create a Windows VM on NFS storage using the snapshot function?

Thank you and best regards

Lars
 

Attachments

  • syslog.txt
    42.5 KB
I guess the staff would like to know your versions (pveversion -v), your VM config (qm config 999), and maybe cat /etc/pve/storage.cfg to better understand that "INFO: task qemu-img:26061 blocked for more than 604 seconds.".
 
Hi, yeah sure:

Code:
root@PVE01:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-7-pve)
pve-manager: 7.0-14+1 (running version: 7.0-14+1/08975a4c)
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-12
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-13
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.13-1
proxmox-backup-file-restore: 2.0.13-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20210831-1
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.1.0-1
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-18
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

root@PVE01:~# qm config 999
lock: create

root@PVE01:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

nfs: NAS01-Vm
        export /volume1/Vm
        path /mnt/pve/NAS01-Vm
        server 172.16.1.81
        content images,backup,rootdir
        options vers=4.1
        prune-backups keep-all=1

nfs: NAS01-Backup
        export /volume1/Backup
        path /mnt/pve/NAS01-Backup
        server 172.16.1.81
        content backup
        options vers=4.1
        prune-backups keep-all=1

root@PVE01:~#

Unfortunately, the VM is still stuck in the "lock: create" state.
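If the create job itself is dead but the lock remains, it can be released by hand. A sketch using Proxmox's `qm` CLI (VMID 999 is the one from this thread):

```shell
VMID=999
# Release the stale "lock: create" left behind by the aborted
# disk-creation job (qm is Proxmox's VM management CLI on the host)
if command -v qm >/dev/null 2>&1; then
    qm unlock "$VMID"
    # If the half-created VM should be discarded entirely instead:
    # qm destroy "$VMID"
fi
```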
 
Hi all

:rolleyes: It turned out to be a network configuration error...

I have a Synology with a bonded interface at MTU 9000, as well as two stacked switches from FS.com where I had also set both member ports to MTU 9000. But since those ports form a link aggregation, the aggregate counts as its own interface, so its MTU must be set to 9000 as well.

Code:
SW#show interface AggregatePort 3
Index(dec):71 (hex):47
AggregatePort 3 is UP  , line protocol is UP
  Hardware is AggregateLink AggregatePort, address is 649d.99d0.a5fd (bia 649d.99d0.a5fd)
  Interface address is: no ip address
  Interface IPv6 address is:
    No IPv6 address
  MTU 9000 bytes, BW 50000000 Kbit
  Encapsulation protocol is Ethernet-II, loopback not set
  Keepalive interval is 10 sec , set
  Carrier delay is 2 sec
  Ethernet attributes:
    Last link state change time: 2021-11-13 13:13:36
    Time duration since last link state change: 0 days,  1 hours, 47 minutes, 57 seconds
    Priority is 0
    Medium-type is Fiber
    Admin duplex mode is AUTO, oper duplex is Full
    Admin speed is 25G, oper speed is 25G
    Flow control admin status is OFF, flow control oper status is OFF
    Admin negotiation mode is OFF, oper negotiation state is OFF
    Storm Control: Broadcast is OFF, Multicast is OFF, Unicast is OFF
  Bridge attributes:
    Port-type: trunk
    Native vlan: 1
    Allowed vlan lists: 1-4094
    Active vlan lists: 1-50
  Aggregate Port Informations:
        Aggregate Number: 3
        Name: "AggregatePort 3"
        Members: (count=2)
        Lower Limit: 1
        TFGigabitEthernet 1/0/21                 Link Status: Up
        TFGigabitEthernet 2/0/21                 Link Status: Up
    Load Balance by: Source MAC and Destination MAC
  Rxload is 1/255, Txload is 1/255
  Input peak rate: 8119693 bits/sec, at 2021-11-13 15:58:34
  Output peak rate: 1586261450 bits/sec, at 2021-11-13 15:58:34
   10 seconds input rate 19895 bits/sec, 11 packets/sec
   10 seconds output rate 1303974 bits/sec, 76 packets/sec
    515506 packets input, 71792239 bytes, 0 no buffer, 583 dropped
    Received 23 broadcasts, 0 runts, 38 giants
    38 input errors, 0 CRC, 0 frame, 0 overrun, 0 abort
    1749169 packets output, 12088798440 bytes, 0 underruns, 0 no buffer, 108 dropped
    0 output errors, 0 collisions, 0 interface resets
SW#
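A quick way to confirm that jumbo frames really pass end-to-end is a non-fragmenting ping from the PVE host to the NAS (the address is the one from the storage.cfg earlier in the thread; adjust for your network):

```shell
# 8972 = 9000-byte MTU minus 20-byte IP header and 8-byte ICMP header
PAYLOAD=$((9000 - 28))
# -M do forbids fragmentation, so the ping only succeeds if every hop
# (host NIC, switch ports, the LAG, the NAS bond) passes full 9000-byte frames
ping -M do -s "$PAYLOAD" -c 1 -W 2 172.16.1.81 || echo "jumbo frames not passing end-to-end"
```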

Maybe this will help someone.

Best regards Lars
 
Unfortunately, it is not completely fixed. Despite the disk being in qcow2 format, no snapshots can be created:

[screenshots attached: Unbenannt.PNG]

Code:
root@PVE01:~# qm agent 999 ping
root@PVE01:~# qm config 999
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
efidisk0: NAS01-Vm:999/vm-999-disk-1.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: none,media=cdrom
machine: pc-q35-6.1
memory: 16384
meta: creation-qemu=6.1.0,ctime=1636815126
name: VWS-NNNN
net0: virtio=02:42:08:19:B7:A0,bridge=vmbr1,firewall=1
numa: 0
ostype: win11
scsi0: NAS01-Vm:999/vm-999-disk-0.qcow2,cache=writeback,discard=on,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=02cfdbf8-4ae0-4df8-a617-23ff8ead22bb
sockets: 2
tpmstate0: NAS01-Vm:999/vm-999-disk-0.raw,size=4M,version=v2.0
vmgenid: 0485a5c3-48f1-421c-bab8-a5861b24dc3f
 
Your TPM state is still "raw", not "qcow2", and PVE in general won't allow snapshots unless really everything can be snapshotted. I haven't worked with TPM yet, but I guess that might be the problem.
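One way to spot the offending volume is to filter the VM config for its disk lines, as a sketch (VMID 999 from this thread; requires the `qm` CLI on the PVE host):

```shell
VMID=999
# List every disk attached to the VM; on file-based storage like NFS,
# any volume that is not .qcow2 (here tpmstate0, which is .raw) blocks snapshots
if command -v qm >/dev/null 2>&1; then
    qm config "$VMID" | grep -E '^(scsi|ide|sata|virtio|efidisk|tpmstate)[0-9]*:'
fi
```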
 
You are right! That was the problem. I use Server 2022, so I don't really need a TPM. Removed it, and snapshots are available!

Thank you very much!
 
