Search results

  1. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    See lots of these entries as well. [Tue Dec 1 07:03:03 2020] scsi host10: BC_298 : MBX Cmd Completion timed out [Tue Dec 1 07:03:03 2020] scsi host10: BG_1108 : MBX CMD get_boot_target Failed [Tue Dec 1 07:03:31 2020] INFO: task systemd-udevd:504 blocked for more than 120 seconds. [Tue Dec...
  2. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    Same issue on 5.4.78.1 from the testing repo as well.
  3. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    Attached a screenshot of the kernel message I am seeing in the logs, but that is about it.
  4. HP DL 380 Gen 9 issues on 5.4.73 & 5.4.78 kernel

    Updated some of my test nodes this morning. On one of them, which is an HP DL 380 Gen9, it boots up without any networking. This is specific to 5.4.73-1; if I go back to 5.4.65-1 all is well. I don't think it's a driver issue, as the host is using ixgbe and bnx2x. Ethtool just reports the...
  5. External Ceph Monitor

    That is pretty much along the lines I was thinking but I wanted to confirm. Appreciate the input.
  6. External Ceph Monitor

    Hey guys, we are getting ready to roll out a new production Ceph cluster using Proxmox. I wanted to get some input on monitors. Does everyone always use external monitors, or do most use their OSD nodes? I want to have 3 monitors, but was thinking of maybe 1-2 external monitors and 1 monitor...
  7. Odd network issue after upgrade to Proxmox6.2

    Finally pinned this issue down. At some point I installed the numad package when the cluster was on Proxmox 5. Once moved to Proxmox 6, the numad package was causing the VM to hang for periods of time because it kept moving the VM around to different NUMA nodes. (A package-removal sketch follows these results.)
  8. Ceph Octopus

    Anyone have any input on Octopus testing? Going pretty well? We are very interested in the new replication features!
  9. Ceph Issues

    I managed to totally mess up my Ceph cluster. Is there any possible way to start Ceph from scratch but re-use the OSDs and preserve the data?
  10. Odd network issue after upgrade to Proxmox6.2

    The only other issue I can seem to find is the following. 2020-07-30 05:14:32 starting migration of VM 100 to node 'fpracprox1' (10.211.45.1) can't deactivate LV '/dev/Data/vm-100-disk-1': Logical volume Data/vm-100-disk-1 is used by another device. 2020-07-30 05:14:33 ERROR: volume...
  11. Odd network issue after upgrade to Proxmox6.2

    Yep, the storage network is solid. It's a private dedicated 10G network.
  12. Odd network issue after upgrade to Proxmox6.2

    4-node cluster of 2x HP DL 560s, 1x HP DL 380, 1x Supermicro, and Nimble storage. This cluster had been on 5.x for a couple of years with no issues. The issue only exists on one of the four nodes, and it seems to affect only the VM. At random times the VM loses network...
  13. IGD Passthrough setup broke with Proxmox 6.2

    I want to confirm that these steps have resolved my issues, but I still have to use the args option in the VM config to actually get a display on a physical monitor. 1) Moved /etc/modprobe.d/vfio.conf to /etc/modprobe.d/vfio.conf.old 2) Edited /etc/default/grub by adding...
  14. IGD Passthrough setup broke with Proxmox 6.2

    Yikes, that doesn't sound very promising. What are the chances of getting hostpci to work properly? I would love to just add a hostpci line, but the display never works.
  15. IGD Passthrough setup broke with Proxmox 6.2

    Interesting, I had 7 machines I was able to correct with the 5.3.18-3-pve kernel. With 5.4.34-1-pve, if I add the video adapter as a PCI device in the GUI and start the VM one time, /dev/vfio/1 gets created and then I can boot the VM with the original args line (sketched after these results). Hoping the devs can figure...
  16. IGD Passthrough setup broke with Proxmox 6.2

    Ok, so when I went back a kernel, I went back to the latest Proxmox 5 kernel (4.15.18-28-pve). However, when I go back to 5.3.18-3-pve the issue is resolved and the VM starts and displays as expected. So this does look like some type of kernel issue with 5.4.34-1-pve.
  17. IGD Passthrough setup broke with Proxmox 6.2

    I attempted to roll back pve-qemu-kvm and qemu-server without any luck as well.
  18. IGD Passthrough setup broke with Proxmox 6.2

    I had 6 setups update to 6.2 last night that are having some issues booting a VM with IGD passthrough. Here is what I get with the first boot of the VM: kvm: -device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on: vfio 0000:00:02.0: failed to open /dev/vfio/1: No such file or...
  19. Ceph Nautilus Issues

    This started off as a Luminous cluster on 5.4 (which was rock solid and we never had any issues). About a month ago I moved it to 6.1.8/Nautilus and my monitors have started giving me a ton of problems. Hoping for a bit of help, as I am not sure what I can do. Sometimes on node reboots I...
  20. [SOLVED] Ceph Luminous to Nautilus Issues

    I was able to get things back in order following this thread, but still very concerning. https://forum.proxmox.com/threads/directory-var-lib-ceph-osd-ceph-id-is-empty.57344/
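
A note on the numad fix in result 7: the cure amounts to removing numad so it stops migrating the VM between NUMA nodes. Below is a minimal sketch of checking for and removing the package, assuming stock Debian/Proxmox packaging; these exact commands are an assumption, not the poster's own steps.

    # Confirm numad is installed and its daemon is running
    # (assumes the Debian numad package and its numad.service unit)
    dpkg -l numad
    systemctl status numad

    # Stop the daemon and remove the package so it no longer
    # rebalances the VM across NUMA nodes
    systemctl stop numad
    apt purge numad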
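For the IGD passthrough results (13-18), the kvm error quoted in result 18 already contains the full vfio-pci device string, so the "original args line" the posters mention presumably looks something like the sketch below. The VM ID is a placeholder, and whether this exact line matches the posters' configs is an assumption; only the device string itself is taken verbatim from the error message.

    # /etc/pve/qemu-server/<vmid>.conf  (placeholder VM ID)
    # Device string copied from the error shown in result 18
    args: -device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on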
