I'm seeing lots of these entries as well:
[Tue Dec 1 07:03:03 2020] scsi host10: BC_298 : MBX Cmd Completion timed out
[Tue Dec 1 07:03:03 2020] scsi host10: BG_1108 : MBX CMD get_boot_target Failed
[Tue Dec 1 07:03:31 2020] INFO: task systemd-udevd:504 blocked for more than 120 seconds.
[Tue Dec...
Updated some of my test nodes this morning.
On one of them, an HP DL380 Gen9, it boots up without any networking. This is specific to 5.4.73-1; if I go back to 5.4.65-1 all is well. I don't think it's a driver issue, as the host is using both ixgbe and bnx2x.
Ethtool just reports the...
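For what it's worth, these are the sort of checks I've been running to narrow it down (the interface name here is just an example, adjust for your hardware):

ethtool -i eno1              # which driver/firmware the NIC is bound to
ip -br link                  # did the links come up at all?
dmesg | grep -iE 'ixgbe|bnx2x'   # any driver errors logged at boot?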
Hey guys, we are getting ready to roll out a new production Ceph cluster using Proxmox.
I wanted to get some input on monitors. Does everyone always use external monitors, or do most use their OSD nodes?
I want to have 3 monitors, but was thinking of maybe 1-2 external monitors and 1 monitor...
Finally pinned this issue down. At some point I installed the numad package while the cluster was on Proxmox 5. Once we moved to Proxmox 6, numad started causing the VM to hang because it kept shifting the VM between different NUMA nodes, basically freezing it for periods of time.
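In case anyone else hits the same thing, a rough sketch of how to check for it and get rid of it (exact package state may differ on your nodes):

dpkg -l numad            # is it installed at all?
systemctl status numad   # is it actively running?
apt purge numad          # remove it so the VM stops getting shuffled between NUMA nodes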
The only other issue I can find is the following:
2020-07-30 05:14:32 starting migration of VM 100 to node 'fpracprox1' (10.211.45.1)
can't deactivate LV '/dev/Data/vm-100-disk-1': Logical volume Data/vm-100-disk-1 is used by another device.
2020-07-30 05:14:33 ERROR: volume...
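For anyone debugging the same LV error, these are the kinds of commands I'd use to find what is still holding the volume open (a sketch; the LV path is taken from the error above):

lsblk /dev/Data/vm-100-disk-1       # what is stacked on top of the LV?
dmsetup ls --tree                   # device-mapper view of the same stacking
ls /sys/block/dm-*/holders/         # which dm devices still have holders
lvchange -an /dev/Data/vm-100-disk-1   # manual deactivation once nothing is using it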
4-node cluster of 2x HP DL560s, 1x HP DL380, 1x Supermicro, with Nimble storage. This cluster was on 5.x for a couple of years now with no issues.
The issue only exists on one of the four nodes, and it seems to affect only the VM.
At random times the VM loses network...
I can confirm that these steps have resolved my issues, but I still have to use the args option in the VM config to actually get a display on a physical monitor.
1) Moved /etc/modprobe.d/vfio.conf to /etc/modprobe.d/vfio.conf.old
2) Edited /etc/default/grub by adding...
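For anyone following along, this is the sort of thing the GRUB edit involves for Intel IGD passthrough (illustrative only, not necessarily the exact line I added):

# illustrative example in /etc/default/grub; your existing options may differ
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# then regenerate the config and reboot
update-grub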
Yikes, that doesn't sound very promising. What are the chances of getting hostpci to work properly? I would love to just add a hostpci line, but the display never works.
Interesting; I had 7 machines that I was able to correct with the 5.3.18-3-pve kernel.
With 5.4.34-1-pve, if I add the video adapter as a PCI device in the GUI and start the VM once, /dev/vfio/1 gets created, and then I can boot the VM with the original args line.
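For reference, roughly how the two approaches end up looking in /etc/pve/qemu-server/<vmid>.conf (illustrative, not copied verbatim from my config):

# what the GUI-added PCI device looks like as a hostpci entry
hostpci0: 00:02.0,x-vga=1
# versus passing the device straight to qemu via args
args: -device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on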
Hoping the devs can figure...
OK, so when I went back a kernel, I initially went back to the latest Proxmox 5 kernel (4.15.18-28-pve). However, when I go back to 5.3.18-3-pve instead, the issue is resolved and the VM starts and displays as expected.
So this does look like some type of kernel issue with 5.4.34-1-pve.
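Until that's sorted, one way to keep a node on the known-good kernel is to pin it as the GRUB default (a sketch; the exact menu entry text needs to be read from your own grub.cfg):

grep -E 'menuentry |submenu ' /boot/grub/grub.cfg   # see which entries exist
# then in /etc/default/grub, for example:
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.3.18-3-pve"
update-grub   # regenerate the config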
I had 6 setups update to 6.2 last night that are having some issues booting a VM with IGD passthrough.
Here is what I get with the first boot of the VM.
kvm: -device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on: vfio 0000:00:02.0: failed to open /dev/vfio/1: No such file or...
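A few things worth checking when /dev/vfio/1 never shows up (a sketch, assuming the IGD is still at 0000:00:02.0 as in the error):

lspci -nnk -s 00:02.0            # which driver is the IGD actually bound to?
ls -l /dev/vfio/                 # which group devices exist at all
find /sys/kernel/iommu_groups/ -type l | grep 0000:00:02.0
dmesg | grep -iE 'vfio|iommu'    # vfio / iommu errors during boot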
This started off as a Luminous cluster on 5.4 (which was rock solid; we never had any issues). About a month ago I moved it to 6.1.8/Nautilus, and my monitors have started giving me a ton of problems. Hoping for a bit of help, as I am not sure what I can do.
Sometimes on node reboots I...
I was able to get things back in order by following this thread, but it's still very concerning.
https://forum.proxmox.com/threads/directory-var-lib-ceph-osd-ceph-id-is-empty.57344/
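For anyone landing here with the same symptom (an empty /var/lib/ceph/osd/ceph-<id> after a reboot), the usual recovery is re-activating the ceph-volume OSDs so their tmpfs directories get rebuilt; a rough sketch, your OSD ids will differ:

ceph-volume lvm activate --all   # re-scan and activate all LVM-backed OSDs on the node
ceph osd tree                    # confirm they come back up
systemctl status ceph-osd@<id>   # check an individual OSD service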