Hey guys,
Since the latest kernel update, I seem to be having an issue with one VM in particular; no others seem affected.
For some reason, the cloud-init drive has disappeared. If I try to re-add it, I get the following message: unable to parse zfs volume name 'cloudinit'.
The same thing happens with any freshly cloned VM when I remove and try to re-add its cloud-init drive.
Bash:
qm set 1254 --scsi1 proxmox:cloudinit
update VM 1254: -scsi1 proxmox:cloudinit
unable to parse zfs volume name 'cloudinit'
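For reference, these are the checks I'd use to see whether a stale cloud-init volume is still hanging around on the pool (pool name and VMID are just mine, output omitted):
Bash:
# what the storage layer still knows about for this VM
pvesm list proxmox --vmid 1254
# any leftover cloud-init zvol directly on the ZFS pool
zfs list -t volume -r proxmox | grep 1254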
Bash:
❯ pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.13-5-pve)
pve-manager: 8.1.10 (running version: 8.1.10/4b06efb5db453f29)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
pve-kernel-5.15: 7.4-4
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.13-5-pve: 6.5.13-5
proxmox-kernel-6.5.13-3-pve: 6.5.13-3
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
frr-pythontools: 8.5.2-1+pve1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.3
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.5
libpve-cluster-perl: 8.0.5
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.6
libpve-network-perl: 0.9.6
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.4
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve1
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.5-1
proxmox-backup-file-restore: 3.1.5-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.5
pve-cluster: 8.0.5
pve-container: 5.0.9
pve-docs: 8.1.5
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.10-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.1.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve1
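In case this is a partial-upgrade mismatch (e.g. qemu-server newer than libpve-storage-perl), I'd also double-check that nothing on the node is half-upgraded; plain apt, nothing Proxmox-specific assumed:
Bash:
apt update
apt list --upgradable
# bring everything current if anything shows up
apt full-upgrade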
Bash:
❯ cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content vztmpl,rootdir
        shared 0

lvmthin: local-lvm
        disable
        thinpool data
        vgname pve
        content images,rootdir
        nodes frodo

zfspool: proxmox
        pool proxmox
        blocksize 64k
        content images,rootdir
        mountpoint /proxmox
        sparse 1

cifs: snippets
        path /mnt/pve/snippets
        server 192.168.50.111
        share snippets
        content snippets
        domain HOME
        prune-backups keep-all=1
        username alex

cifs: unraid
        path /mnt/pve/unraid
        server 192.168.50.111
        share proxmox
        content backup
        domain home
        prune-backups keep-last=3
        username alex

cifs: ISOs
        path /mnt/pve/ISOs
        server 192.168.50.111
        share ISOs
        content vztmpl,iso
        domain HOME
        prune-backups keep-all=1
        username alex

pbs: PBS
        datastore pbs
        server 192.168.0.98
        content backup
        fingerprint ---
        prune-backups keep-all=1
        username root@pam
Bash:
❯ qm config 1254
agent: 1,fstrim_cloned_disks=1
balloon: 0
bios: ovmf
boot: c
bootdisk: scsi0
cipassword: **********
ciupgrade: 0
ciuser: alex
cores: 20
cpu: host
description: - To detatch from Ubuntu pro, run sudo pro detach%0A%0A- Secureboot breaks Nvidia card%0A%0A- Memory balooning does not work if you have passed through a PCI device.
efidisk0: proxmox:vm-1254-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:65:00,pcie=1
hotplug: network,usb
ipconfig0: ip=192.168.50.254/24,gw=192.168.50.1
machine: q35
memory: 40960
meta: creation-qemu=7.1.0,ctime=1676917435
name: HDA
net0: virtio=BC:24:11:56:6A:D5,bridge=vmbr1,queues=20,tag=50
numa: 0
onboot: 1
ostype: l26
rng0: source=/dev/urandom
scsi0: proxmox:vm-1254-disk-1,cache=writethrough,discard=on,iothread=1,size=250G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=e5d5d79c-ad04-4640-b7f9-d8ef3a35584b
sockets: 1
sshkeys: ssh-ed25519%20AAAAC3NzaC1lZDI1NTE5AAAAIOFLnUCnFyoONBwVMs1Gj4EqERx%2BPc81dyhF6IuF26WM%20proxvms%0A
startup: order=2
tablet: 0
tags: hda;vm
vmgenid: 5a67dc3e-5b54-4347-90c1-fa40f071a263
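For completeness, the documented way to attach the cloud-init drive is on the ide2 slot (scsi1 above is simply where mine was before); the storage:cloudinit syntax is the same either way:
Bash:
qm set 1254 --ide2 proxmox:cloudinit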
Bash:
❯ pvesm status
Name            Type     Status            Total            Used       Available        %
ISOs            cifs     active      27340591000     12381341336     14959249664   45.29%
PBS              pbs     active        942828160       734726912       208101248   77.93%
local            dir   disabled                0               0               0      N/A
local-lvm    lvmthin   disabled                0               0               0      N/A
proxmox      zfspool     active        942931968       159629784       783302184   16.93%
snippets        cifs     active      27340591000     12381341336     14959249664   45.29%
unraid          cifs     active      27340591000     12381341336     14959249664   45.29%
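Since this started right after a kernel update, ruling out the ZFS side itself also seems worthwhile; pool health and the module version loaded under the running kernel can be checked with (pool name is mine):
Bash:
zpool status proxmox
# confirm which zfs module version the new kernel actually loaded
modinfo zfs | grep ^version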