Proxmox 7.3 spice issue

lessfoobar

New Member
Feb 13, 2022
Code:
kvm: warning: spice: reds.c:2620:reds_init_socket: getaddrinfo(127.0.0.1,61000): Address family for hostname not supported
kvm: warning: spice: reds.c:3516:do_spice_init: Failed to open SPICE socket
kvm: failed to initialize spice server
start failed: qemu exited with code 1
I'm getting this error when I set the main display to SPICE on my VMs. So far I've tested with Fedora Server 37 and pfSense. It all started because of the recommendations in the pfSense documentation. I can start the VMs from the web UI, but they do not start at boot, which is a bummer for pfSense because I then have no network. Because of that I have to log into the server and start the VM from the terminal, which produces the error above. As soon as I switch the display back to default, the VM boots just fine.
 
Hi,
could you please provide the output of pveversion -v, qm config <vmid> and cat /etc/hosts?
(Replacing <vmid> with the actual numeric ID of your pfSense VM.)

Anyway, this seems to be a somewhat-known problem, according to multiple upstream bug reports.
A workaround for this seems to be using ::1 as the SPICE listen address.

You can try that out by running
Code:
qm set <vmid> -args '-spice tls-port=61000,addr=::1,tls-ciphers=HIGH,seamless-migration=on'
before starting the VM again with a SPICE display.
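If you want to get rid of the extra arguments again later, they can be removed with:
Code:
qm set <vmid> --delete args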
 
No, args can only be set using the qm tool (or editing the configuration file directly, but preferably the tool).
This is by design, as the option should be considered expert-only - these arguments are passed straight to QEMU as-is and thus should be used with care.
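For reference, when set via qm the option just ends up as a plain line in the VM's configuration file at /etc/pve/qemu-server/<vmid>.conf:
Code:
args: -spice tls-port=61000,addr=::1,tls-ciphers=HIGH,seamless-migration=on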
 
So the funny thing is, at host boot I can't start the VMs, but as soon as I log into the web UI or the CLI on the server, I can start them manually.

Code:
Task viewer: VM 101 - Start

swtpm_setup: Not overwriting existing state file.


kvm: warning: Spice: reds.c:2620:reds_init_socket: getaddrinfo(127.0.0.1,61000): Address family for hostname not supported
kvm: warning: Spice: reds.c:3516:do_spice_init: Failed to open SPICE sockets
kvm: failed to initialize spice server
stopping swtpm instance (pid 20105) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1

pveversion:

Code:
pveversion -v
proxmox-ve: 7.3-1 (running kernel: 6.2.11-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-6.2: 7.4-3
pve-kernel-5.15: 7.4-3
pve-kernel-helper: 7.3-8
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-6.2.11-1-pve: 6.2.11-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

qm config 101

Code:
qm config 101
agent: enabled=1,freeze-fs-on-backup=1,fstrim_cloned_disks=1,type=virtio
autostart: 1
bios: ovmf
boot: order=virtio0;net0
cores: 1
cpu: IvyBridge,flags=+aes
description: nbde001p vm
efidisk0: VM-OS-Drives:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
machine: q35
memory: 1024
meta: creation-qemu=7.2.0,ctime=1679434630
name: nbde001p
net0: virtio=52:54:00:00:00:11,bridge=vmbr4,firewall=1,link_down=0,mtu=1
numa: 0
onboot: 1
ostype: l26
protection: 0
rng0: source=/dev/urandom,max_bytes=1024
sata0: ISOs:iso/rhel-9.2-x86_64-dvd.iso,media=cdrom,size=9371072K
sata1: ISOs:iso/rhel_9_nbde_ks.iso,media=cdrom,size=352K
scsihw: virtio-scsi-single
smbios1: uuid=de0b1d05-5a3e-46d3-ba03-89410b8fcd69
sockets: 1
startup: order=2
tags: tang
tpmstate0: VM-OS-Drives:vm-101-disk-2,size=4M,version=v2.0
vga: type=qxl,memory=128
virtio0: VM-OS-Drives:vm-101-disk-3,discard=on,iothread=1,size=10G
vmgenid: 715a99e9-8e9b-4d2d-a2ba-d02afaa093dc
vmstatestorage: VM-OS-Drives

and the /etc/hosts

Code:
cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.88.100 proxmox.DOMAIN.com proxmox

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
 
To circumvent this, I have a local systemd timer that starts the VMs 10 minutes after boot :D. I'm sure that's not the way it is supposed to work, but yeah.
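Just to illustrate what I mean (the unit name here is made up, not my exact setup), such a delayed start looks roughly like this:
Code:
# /etc/systemd/system/delayed-vm-start.service
[Unit]
Description=Delayed start for VMs that fail to autostart

[Service]
Type=oneshot
ExecStart=/usr/sbin/qm start 101

# /etc/systemd/system/delayed-vm-start.timer
[Unit]
Description=Run the delayed VM start 10 minutes after boot

[Timer]
OnBootSec=10min

[Install]
WantedBy=timers.target
Enabled with systemctl enable --now delayed-vm-start.timer.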
 
So the funny thing is, at host boot I can't start the VMs, but as soon as I log into the web UI or the CLI on the server, I can start them manually.
I don't see any SPICE args in the VM config above, did you apply the args I mentioned and posted the correct VM config?
This is still the same issue as in the first post.

To circumvent this, I have a local systemd timer that starts the VMs 10 minutes after boot :D. I'm sure that's not the way it is supposed to work, but yeah.
That sounds like a strange issue that is not supposed to happen.

proxmox-ve: 7.3-1 (running kernel: 6.2.11-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
First: This is strange too, did you fully upgrade your system using apt update && apt full-upgrade?
Notice the full-upgrade, which is important.
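That is, roughly:
Code:
apt update && apt full-upgrade
pveversion -v    # proxmox-ve and pve-manager should then report matching 7.4 versions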
 
I don't see any SPICE args in the VM config above, did you apply the args I mentioned and posted the correct VM config?
This is still the same issue as in the first post.


That sounds like a strange issue that is not supposed to happen.


First: This is strange too, did you fully upgrade your system using apt update && apt full-upgrade?
Notice the full-upgrade, which is important.
Hey Christoph,
thanks for the continued replies and for sticking with me. You are correct, I hadn't done the args part. As soon as I saw it working with my workaround, I didn't want to put extra args into the VM that I'd forget about in the future. I've now done the full-upgrade as you recommended.
Code:
pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.11-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-6.2: 7.4-3
pve-kernel-5.15: 7.4-3
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-6.2.11-1-pve: 6.2.11-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
I've added the args as well. As you can see below, I get the same output.
Code:
qm config 101
agent: enabled=1,freeze-fs-on-backup=1,fstrim_cloned_disks=1,type=virtio
args: -spice tls-port=61000,addr=::1,tls-ciphers=HIGH,seamless-migration=on
autostart: 1
bios: ovmf
boot: order=virtio0;net0;sata0;sata1
cores: 1
cpu: IvyBridge,flags=+aes
description: nbde001p vm
efidisk0: VM-OS-Drives:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
machine: q35
memory: 1024
meta: creation-qemu=7.2.0,ctime=1679434630
name: nbde001p
net0: virtio=52:54:00:00:00:11,bridge=vmbr4,firewall=1,link_down=0,mtu=1
numa: 0
onboot: 1
ostype: l26
protection: 0
rng0: source=/dev/urandom,max_bytes=1024
sata0: ISOs:iso/rhel-baseos-9.1-x86_64-dvd.iso,media=cdrom,size=8641M
sata1: ISOs:iso/rhel_9_nbde_ks.iso,media=cdrom,size=352K
scsihw: virtio-scsi-single
smbios1: uuid=de0b1d05-5a3e-46d3-ba03-89410b8fcd69
sockets: 1
startup: order=2
tags: tang
tpmstate0: VM-OS-Drives:vm-101-disk-2,size=4M,version=v2.0
vga: type=qxl,memory=128
virtio0: VM-OS-Drives:vm-101-disk-3,discard=on,iothread=1,size=10G
vmgenid: 715a99e9-8e9b-4d2d-a2ba-d02afaa093dc
vmstatestorage: VM-OS-Drives
And the same error:
Code:
swtpm_setup: Not overwriting existing state file.
kvm: warning: Spice: reds.c:2655:reds_init_socket: binding socket to ::1:61000 failed
kvm: warning: Spice: reds.c:3516:do_spice_init: Failed to open SPICE sockets
kvm: failed to initialize spice server
stopping swtpm instance (pid 8220) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1
However, with the args I cannot start the VMs at all; I'm getting the same error. If I remove the args, I can start them 5 minutes after the host is up.
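Since the bind to ::1 fails too, my next guess (purely an assumption on my part) is that IPv6 might be disabled on the host, so I'll check the loopback interface with something like:
Code:
ip -6 addr show dev lo
sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.lo.disable_ipv6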
 