I am attempting to mount an NFS share and I get the following error whether I use the GUI or the command below:
pvesm add nfs offsitebackup --server <myip> --export "/server-backup"
create storage failed: error during cfs-locked 'file-storage_cfg' operation: storage 'offsitebackup' is not...
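If it helps with troubleshooting: I believe the export can be sanity-checked from the node itself with something like the following (assuming nfs-common is installed; <myip> as above):

showmount -e <myip>
pvesm scan nfs <myip>    # or 'pvesm nfsscan <myip>' on older versions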
I'm not re-IP'ing the box. I'm actually just trying to tell it to use one of its other IP addresses for cluster communication. That also doesn't cover any changes I need to make to the Ceph nodes.
Point is moot now since my datacenter recovered the private network.
My datacenter recently suffered a catastrophic failure, and our private network is no longer fully operational.
I have a 4-node cluster where each node has both a public IP and an address on a private network. I also have Ceph running on all nodes. My cluster network and cluster/public network for...
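To clarify what I was hoping to do: as far as I understand it, the address corosync uses for each node is the ring0_addr in /etc/pve/corosync.conf, so the change would be roughly an edit like this (node name, id and address below are placeholders, and config_version has to be bumped on every edit):

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.0.2.11   # point this at whichever of the node's addresses corosync should use
  }
  # ...one entry per node...
}
totem {
  config_version: 4          # has to be incremented for the change to be picked up
  # ...
}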
Is it possible to still have video visible in the console UI when running a VM with no video adapter and Intel GVT-g configured? Right now I'm running like this and am heavily dependent on RDP working.
I see these instructions for libvirt on an Arch system with a GVT-g VM, but I'm not sure how this...
I have Windows VMs with just an Intel GVT-g GPU and Display set to None (so there's no other video card present in the VM). How can I still obtain access to the console via noVNC, SPICE, or anything else?
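For reference, the relevant part of the VM config currently looks roughly like this (the PCI address and mdev type here are placeholders, not necessarily my exact values):

# /etc/pve/qemu-server/<vmid>.conf
hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4
vga: none          # "Display: None" in the GUI, so noVNC/SPICE have no emulated adapter left to show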
So a load average slowly climbing past 300 is normal, even though this is a 4-node cluster where all nodes run similar VMs and workloads, and the nodes with more than one VM are showing a lower load average? To me this looks like a resource allocation leak.
Screenshot of top as well as from admin interface attached.
There's only one VM running, and it isn't taking all the available memory, so this makes little sense.
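One thing I can still check is whether any processes are stuck in uninterruptible sleep (state D), since those count toward the load average without using any CPU; a generic way to list them would be something like:

ps -eo state,pid,cmd | awk '$1 == "D"'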
I have OPNsense running on two different VMs in an HA CARP configuration.
The VMs are connected to custnet (which is connected to the OPNsense VMs as the LAN interface) with the below configuration:
root@virt-slc-90:/etc/pve/sdn# cat vnets.cfg
vnet: custnet
tag 200000
zone intvxlan...
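For completeness, the matching VXLAN zone lives in zones.cfg and follows this shape (the zone name and peer addresses below are placeholders, not my actual values):

vxlan: examplezone
        peers 192.0.2.1,192.0.2.2,192.0.2.3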
I had to do this in order to fix things:
apt install libpve-common-perl=6.1-3 qemu-server=6.2-3 pve-manager=6.2-6 pve-container=3.1-8 pve-qemu-kvm=5.0.0-4
and then reboot every node in the cluster.
I picked these versions from my recent apt logs so I could downgrade to a known working set.
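To keep apt from pulling these packages forward again before the regression is fixed, I believe they can be pinned with apt-mark (and released later with apt-mark unhold):

apt-mark hold libpve-common-perl qemu-server pve-manager pve-container pve-qemu-kvm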
I think I figured out the issue:
/usr/share/perl5/PVE/QemuServer/PCI.pm line 425 references a variable $vmid, but it's not declared or defined anywhere else in the file.
So at first glance this looks like it wants to use the correct mdev GUID. Then when it actually tries to run QEMU it uses the wrong one. I tried looking at the conf files in /etc/pve/qemu-server but the GUID doesn't seem to be defined there and seems to be generated by Proxmox. So there's...
Well, the problem there is that you can't just keep creating virtual GPUs: if you have too many, it won't let you create more. Likewise, if you try to reuse a GUID that's already in use, it won't let you. You'd have to delete the vGPU and re-add it (or just reuse the existing one).
The problem is that Proxmox generates the GUID to...
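For anyone poking at this by hand: the vGPUs live in the mdev sysfs tree, so listing the types, checking how many instances are left, and creating/removing one manually goes roughly like this (PCI address, type name and GUID are just examples):

ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/available_instances
echo 11111111-2222-3333-4444-555555555555 > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
echo 1 > /sys/bus/mdev/devices/11111111-2222-3333-4444-555555555555/remove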
I'm getting the same error, and the device does seem to be created, but I'm still hitting the same issue.
It looks like the device is being created with the right value:
[ 1277.732498] vfio_mdev 00000000-0000-0000-0000-000000000401: Adding to iommu group 18
[ 1277.732499] vfio_mdev...
It seems to happen on a host crash, not on a normal reboot of the host. Perhaps one way to test this is to set up a 3-node cluster and then cause one of the nodes to kernel panic.
@spirit
I'm currently using VXLAN in a mesh configuration with 3 hosts. It seems that whenever a host crashes, the network connections between the remaining hosts reset and blip momentarily (even though that traffic should be flowing directly between them and shouldn't be affected at all). Is there a...
xcp-ng (basically the open-source equivalent of Citrix XenServer) supports GVT-g via the xengt kernel modules (similar to how Proxmox uses the kvmgt modules). This would imply that Citrix XenServer supports it as well. Maybe not for newer processor architectures, but definitely for...
I'm not blaming Proxmox, but a more helpful response would have been to clarify whether support would include offering a 5.7 kernel to a customer, or whether that is out of scope.