Sadly, I must tear down my server and re-purpose the hardware as a desktop machine. I'll be using KVM/libvirt for virtual machines on the desktop.
How do I export Proxmox VMs for use with that technology? Is there anything in particular that is required? Would it be easier to just transfer...
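For reference, the route I'm looking at is roughly this; a sketch only, where VM ID 100, the storage paths, and the resources are placeholders for my actual setup:

# On the Proxmox host: stop the VM and convert its disk to qcow2.
# /dev/pve/vm-100-disk-0 assumes default local-lvm storage; the volume
# may need "lvchange -ay pve/vm-100-disk-0" first if it is inactive.
qm shutdown 100
qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0 /tmp/vm100.qcow2

# On the desktop: copy the image over and register a libvirt domain.
virt-install --name vm100 --memory 4096 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/vm100.qcow2,format=qcow2 \
  --import --os-variant archlinux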
Oh god, does it need to be turned on... It definitely isn't on either VM. That doesn't seem to be mentioned anywhere relevant *sigh*... I need to find another career.
Enabled and started the service in my Linux VMs; they now shut down properly.
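For anyone else who hits this, what fixed it was roughly the following (VM ID 100 is just an example):

# Inside each Linux guest: enable and start the agent service.
systemctl enable --now qemu-guest-agent

# On the Proxmox host: make sure the agent option is set for the VM.
qm set 100 --agent enabled=1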
Just per-VM system logs. For example, I am having issues with PCI passthrough, but mixed in amongst my logs are a bunch of other entries about random workers and backups. I'm assuming you don't have a feature like that because it doesn't really exist?
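My stopgap is filtering the journal from the host shell instead; a rough sketch, with the VM ID and PCI address standing in for mine:

# Show only journal entries mentioning this VM or the passed-through device.
journalctl -b | grep -E '(VM 100|0000:83:00)'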
Indeed, I need to operate a little better if I'm going to do this properly. Right now I'm just flying by the seat of my pants so to speak and it appears I'm flying a little close to the sun!
Will it just be a matter of re-installing the PVE instance? i.e. will any block storage allocated to...
That might be a good idea, I was hesitant to do that if the project is focused on more important things that may or may not appear immediately beneficial to me. I'll give it a shot and see what they say.
Is it possible to downgrade from PVE7?
As an aside, I'm starting to get a little frustrated with this situation. Given I am new to the community, I am obviously missing the historical and contextual information that may help to understand more about this particular problem.
Why is this not a...
Proxmox 7.1-7
AMD Epyc 7402
Gigabyte MZ72-HB0
Various Win10 Pro and Arch Linux VMs are not responding to shutdown from the GUI. The Arch Linux machines have qemu-guest-agent installed.
Syslog:
Jan 08 16:21:28 central pvedaemon[1516470]: <root@pam> end task...
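What I've tried from the host CLI so far (VM 100 stands in for each affected VM):

# Check whether the guest agent is actually reachable.
qm agent 100 ping

# Attempt a clean shutdown outside the GUI, with a 60s timeout.
qm shutdown 100 --timeout 60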
Managed to get some logs after the VM randomly crashed and I tried to boot it:
Jan 06 11:38:22 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
Jan 06 11:38:22 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
Jan 06 11:38:22 central...
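For completeness, this is how I'm checking the device's isolation (same PCI address as in the logs above):

# Confirm the card sits in its own IOMMU group and which driver owns it.
find /sys/kernel/iommu_groups/ -type l | grep 83:00
lspci -nnks 83:00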
Negative, running kernel 5.13. I will update and apply the workaround.
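For the record, this is what I ran to move to the opt-in 5.15 kernel (package name as it was on PVE 7 at the time):

apt update
apt install pve-kernel-5.15
reboot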
I'm also getting random shutdowns of the VM, which is very annoying.
Really looking forward to stable PCIe passthrough; these issues are starting to become a barrier to continued use.
Re-opening this as it has started happening again seemingly without reason. Unless some kind of update has been pushed from the backend, I haven't touched a single thing. In fact, I've not used this VM in about a week.
This is the only relevant output:
Jan 04 18:26:49 central...
After (not) much deliberation, I am looking to pursue the following configuration:
2x compute servers, each with a small U.2 SSD array for the current working set. Recent projects awaiting revisions can spill over to an additional local SATA SSD array. Everything replicated using GlusterFS on...
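The replication layer would look roughly like this; node names and brick paths are placeholders, and replica 2 would likely want an arbiter to avoid split-brain:

# From compute1: join the peers and create a replicated volume.
gluster peer probe compute2
gluster volume create workset replica 2 \
  compute1:/bricks/u2/workset compute2:/bricks/u2/workset
gluster volume start workset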
After spending some time learning that SAN is the traditional storage route and that Ceph + "hyperconverged" is actually encouraged now that we have the technology, I am a little lost on how to proceed with my home lab. I don't know if using Ceph would be like trying to shoehorn a new...
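As I understand it, the hyperconverged route on Proxmox would be roughly this per node; the network and disk are placeholders for whatever the hardware actually has:

pveceph install
pveceph init --network 10.0.0.0/24
pveceph mon create
pveceph osd create /dev/nvme0n1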
Super low-priority feature of course, but it would be really great to see NFS share management come to the GUI.
Right now I'm using TrueNAS in a VM for lazy configuration of my NFS shares, and I'd like to stop doing that in future without sacrificing a GUI.
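Until then, the non-GUI equivalent I'd be replacing TrueNAS with is simple enough; the export path and subnet here are examples:

# Plain kernel NFS server on the host or a small VM.
apt install nfs-kernel-server
echo '/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra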